Taking a ‘big data’ view of regulatory information management

13th Nov 2018

For pharmaceutical and life sciences firms, a case-by-case approach to information and content management has been prevalent for too long

For pharmaceutical and life sciences firms, a case-by-case approach to information and content management has been prevalent for too long. It is inefficient and results in different parts of a business collating overlapping data, making it all but impossible to consolidate information. This is especially true of product and regulatory information in life sciences.

This case-by-case approach to information and content management poses a challenge to progress – especially when it comes to companies’ ambitions for innovation and process automation. In life sciences, as in most other industries, it is now a commonly stated strategic aim for organisations to become more ‘data-driven’: able to react at speed and to predict, plan and pre-empt future scenarios using sophisticated intelligence gleaned from everyday data. That could be signals about potential issues with new products, alerts to emerging gaps in the market, or insight into what constitutes a successful regulatory submission – and the ability to skip straight to a more robust initial application, auto-filled with high-quality, pre-approved content.

The pursuit of this data-driven approach is driving life sciences and pharma firms to rethink the way they organise and manage routine information, and combine this with broader intelligence to create something much more useful and powerful than the sum of its parts.

Information capture and storage

Getting to this new, more dynamic, data-driven state begins with new thinking about the way information is captured and stored. If information is locked inside static documents, or proprietary, single-use database entries specific to a particular function, its value will be limited.

Yet this is a common restriction. Re-using information in other parts of the organisation may involve manual data re-entry into other systems, or complex and expensive systems integration. Unless data-sharing capabilities were envisioned from the outset, organisations risk added complexity and cost, and compromised data integrity, as they try to fashion something empowering and inclusive from systems which, by and large, were designed to stand alone.

Attempts to achieve more holistic regulatory information management (RIM) have highlighted the constraints and challenges caused by the traditional piecemeal approach to managing data.

Big data and RIM

Historically, the different elements of product information and regulatory intelligence have existed in pockets across the business, making it very hard for responsible teams to get a clear and accurate view of the current, correct status of anything at any given time. This is in contrast to the big data analytics world, where information is combined to create meaningful insights at speed, however large and diverse the original sources.

It’s here that the key to more dynamic RIM lies: the ability to slice and dice contributing data and content sources quickly, easily and reliably to arrive at something insightful, meaningful and of new value. To achieve this, companies need to move away from traditional ways of collating information and building content from it (eg forms, reports, regulatory submissions or labels), because these processes are too manual, repetitive and risk-laden – the very opposite of what organisations want and need.

The relevance of ‘big data’ thinking comes from the concept of a ‘data lake’ – which promotes a definitive central store for all related data in all its forms – ranging from raw source data to information and content which has been collated and prepared for a range of different tasks.

The higher ideal is a go-to place for vetted information and ready-to-use content fragments (groups of approved data assets/parts of documents/images such as photos or logos) which can be mixed and matched at speed, and with confidence, to meet each new need. So, instead of having different document stores and databases in each business function or department, which must all be updated individually, the starting point for everything is a single master resource from which everything else flows. Each onward manifestation of that information will be correct because every document, every use case will be drawing its content from the same, correct original.

Treating data as a series of ‘objects’

In technical terms, this is about treating approved data and combinations of data as a series of ‘objects’, held in a graph database where they can be called up and brought into play as needed. Organisations that get to this point are at less risk of using the wrong information, filing unsatisfactory submissions, and creating excess work and cost each time they access and do something with product or regulatory information. Even authorised translations of content can be stored in the central repository as ready-to-use assets linked to the original document, to support international submissions, labelling and other purposes.
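As a loose illustration only (not any specific product or database), the idea might be sketched in Python as a small graph of approved content objects, where a translation is simply another node linked back to the same master fragment:

```python
from dataclasses import dataclass, field

@dataclass
class ContentObject:
    """An approved, reusable asset: a text fragment, image reference or data value."""
    object_id: str
    body: str
    language: str = "en"
    approved: bool = True

@dataclass
class ContentGraph:
    """A minimal stand-in for a graph database of approved content objects."""
    nodes: dict = field(default_factory=dict)   # object_id -> ContentObject
    edges: list = field(default_factory=list)   # (from_id, relation, to_id)

    def add(self, obj: ContentObject):
        self.nodes[obj.object_id] = obj

    def link(self, from_id: str, relation: str, to_id: str):
        self.edges.append((from_id, relation, to_id))

    def related(self, object_id: str, relation: str):
        """Return every object linked from object_id by the given relation."""
        return [self.nodes[t] for f, rel, t in self.edges
                if f == object_id and rel == relation]

# Hypothetical example: a master indication statement and its approved German translation
graph = ContentGraph()
graph.add(ContentObject("indication-001", "Indicated for the treatment of ...", "en"))
graph.add(ContentObject("indication-001-de", "Angezeigt zur Behandlung von ...", "de"))
graph.link("indication-001-de", "translation_of", "indication-001")

for original in graph.related("indication-001-de", "translation_of"):
    print(original.body)   # always resolves to the single, correct master original
```

The point of the sketch is the linkage: because the translation is an edge back to the master object rather than a copy held in another department's store, every onward use resolves to the same approved source.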

Turning definitive master data into reusable content building blocks could contribute to a scenario where companies are able to take advantage of increased automation opportunities – such as structured authoring. The picture we are moving towards is one not of ‘integrated’ RIM, but of ‘integral RIM’, where systems have been architected from the outset to support the confident re-use of master data for multiple different purposes.
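Continuing the sketch above (the graph, fragment IDs and template are all invented for illustration), structured authoring might then amount to little more than assembling a document from approved fragment IDs held in the single master store:

```python
# Hypothetical label template: an ordered list of approved fragment IDs.
label_template = [
    "indication-001",      # approved indication statement
    # "dosage-002",        # further fragments would be added in practice
]

def assemble_document(graph, fragment_ids, language="en"):
    """Build a document body from approved fragments in the requested language."""
    sections = []
    for fragment_id in fragment_ids:
        key = fragment_id if language == "en" else f"{fragment_id}-{language}"
        obj = graph.nodes.get(key)
        if obj is None or not obj.approved:
            raise ValueError(f"No approved content for {fragment_id} ({language})")
        sections.append(obj.body)
    return "\n\n".join(sections)

# The English label and its German counterpart are generated from the same master data.
print(assemble_document(graph, label_template))
print(assemble_document(graph, label_template, language="de"))
```

Because each rendered document is generated from the same objects rather than maintained separately, an update to a master fragment propagates to every document that references it.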

Finally, the more entities this approach can take in (documents, data, processes, organisations, sites, etc), the greater the scope for value-added benefits (eg via automation of everything from content preparation to smart data analytics) as well as reduced complexity. The technology is there to make all of this possible today, and RIM can deliver so much more if managed via a ‘big data’ approach.

Romuald Braun is VP of Strategy for Life Sciences at AMPLEXOR
