Every year, the corpus of published scientific literature grows by 2.5 million papers [1]. To contend with this staggering number, data mining and knowledge management must become key priorities for any company that wants to stay competitive in the rapidly advancing biopharma market. Yet while acquiring knowledge drives the advancement of the medical industry, the way that knowledge is acquired has seen remarkably little progress.

Limiting knowledge management to humans can ultimately cost a company millions of dollars per year [2]. A human who is already well versed in digesting papers takes about 5 minutes to sift through an abstract, and 58.7 minutes per article [3]. This accounts only for reading; data generation, knowledge management, and accurately compiling spreadsheets eat up even more time (and brain power). When mapping out advancements and gaps in therapeutic areas (whether through comparative effectiveness analyses, evidence-gap maps, or other visualization techniques), these numbers only grow.

In fact, in our own internal study, we found that it took a trained scientist an average of 84 minutes to compile a search, sift through dozens to hundreds of available articles, choose the most relevant papers, extract the data, and analyze it, and this is for abstracts alone (see methodology below). In contrast, generating this same report takes 2 minutes using our AI. To put that into perspective, our AI can scope out a therapeutic landscape and provide statistical data 42 times faster than a human performing the same tasks manually.

Scientists are paid to discover and share novel insights, not to spend hours robotically sorting through and regurgitating scientific literature. Given that the average scientist spends 10-20 hours per week reviewing publications [4], automation can recapture a fifth of a scientist’s time, which can then be redirected toward more innovative and thoughtful tasks.

Knowledge is expensive, until it doesn’t have to be. 

Thanks to the advanced science of AI companies such as Evid Science, data gathering and knowledge management can be delegated to machines that do the heavy lifting of data collection and digestion, so that humans can elevate that information into insights, strategies, and progress. In short, machines generate the information and humans turn it into wisdom. Ideally, that wisdom extends to choosing the best tools to make the most of the human-machine symbiosis. Evid Science makes that choice very easy. Contact us to learn more.

Evid Science AI D.E.B. versus Manual Spreadsheet

Methodology:

Ten papers were chosen from the Evid Science database by searching for “atrial fibrillation” (7 papers) and “bulimia” (3 papers).

Extraction of results from the abstracts was timed. The time count began with the opening of the abstract and finished with the completion of a results table that was either a) automatically generated within the AI and then manually approved and edited, or b) generated manually within an Excel spreadsheet. The AI results table and the Excel spreadsheets shared identical columns: “Intervention,” “Outcome,” “Comparison,” “Numerator,” “Denominator,” and “Percentage.”
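
To make the shared schema concrete, here is a minimal sketch in Python of how such a results table might be laid out; the example row and the results.csv filename are illustrative placeholders, not data or artifacts from the study:

```python
import csv

# Shared column set used by both the AI table and the Excel sheet.
COLUMNS = ["Intervention", "Outcome", "Comparison",
           "Numerator", "Denominator", "Percentage"]

# Hypothetical example row -- illustrative only, not study data.
rows = [{
    "Intervention": "adalimumab",
    "Outcome": "improvement in PASI score",
    "Comparison": "placebo",
    "Numerator": 45,
    "Denominator": 60,
    "Percentage": "75%",
}]

# Write the table in the same column order used in the study.
with open("results.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)
```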

For each paper, the initial method of results extraction alternated between the AI-generated results table and the Excel sheets: half of the articles were first processed with the AI, and the other half were first processed with Excel. This alternation allowed each method’s efficacy to be observed without prior exposure to the article’s contents.

Times were recorded and compiled into averages, as was each article’s abstract length, measured in words. Each method’s extraction rate, in words per second, was calculated by dividing the average abstract word count by that method’s average extraction time. These rates were then applied to a hypothetical set of 100 papers to reflect the productivity difference over a typical work day.
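
As a minimal sketch of that arithmetic (the word counts and times below are hypothetical placeholders, not the study’s actual measurements):

```python
# Hypothetical measurements -- placeholders, not the study's data.
abstract_word_counts = [250, 310, 280]   # words per abstract
ai_times_sec = [95, 120, 110]            # AI-assisted extraction times (s)
manual_times_sec = [540, 600, 480]       # manual extraction times (s)

avg_words = sum(abstract_word_counts) / len(abstract_word_counts)

def rate(times_sec):
    """Extraction rate in words per second: avg abstract length / avg time."""
    return avg_words / (sum(times_sec) / len(times_sec))

for label, times in [("AI", ai_times_sec), ("Manual", manual_times_sec)]:
    words_per_sec = rate(times)
    # Extrapolate the average per-abstract time to a 100-paper workload.
    hours_per_100 = (avg_words / words_per_sec) * 100 / 3600
    print(f"{label}: {words_per_sec:.2f} words/s, "
          f"~{hours_per_100:.1f} h per 100 papers")
```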

Results

The table below shows the results of our reading and extraction experiment.

Comparative Effectiveness Analysis: Evid Science AI versus Manual Processing

Methodology:

The time count began at the start of the search and ended with the completion of a results table and a chart visualizing the collected data. Within the manual process, the duration of each individual step was also recorded; this was unnecessary within the Evid Science app, since the entire analysis is performed in a single step. In both cases we chose the first 3 relevant papers, to keep the sample size reasonable and the methodology comparable across searches.

For psoriasis, adalimumab and a placebo were chosen as the comparison interventions, with improvement in PASI score as the desired outcome. For the other study, intramedullary nailing was chosen as the intervention for fractures, and the targeted outcome was infection.

Automated Analysis

In the Evid Science app, the query “psoriasis and adalimumab” was used to generate Outcomes Evidence. All “PICO” facets pertaining to psoriasis, placebo, and improvement in PASI score were selected, and the analysis was filtered for randomized controlled trials. After facet selection, the comparative effectiveness analysis was generated. The total time for this process was 2 minutes.

This was repeated for the search term “femoral fracture OR tibial fracture” with PICO facets for fracture, intramedullary nailing, and infection; this process also took 2 minutes.

Manual Analysis

On the PubMed website, advanced search was used to find papers reporting on “psoriasis” AND “adalimumab” AND “placebo” AND “improvement in PASI score,” with a filter for clinical trials. Of the 76 resulting papers, 10 were chosen for abstract reads based on their titles. Each abstract was read, and 3 papers were chosen based on relevance. Results from those papers were extracted into a spreadsheet, and a chart was generated to visualize the performance of the various interventions.
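
For readers who prefer to script this step, here is a minimal sketch of an equivalent query, assuming Biopython’s Entrez module in place of the PubMed website; the email address is a placeholder that NCBI requires:

```python
from Bio import Entrez

Entrez.email = "you@example.com"  # placeholder; NCBI asks for a contact address

# The advanced-search query used above, with the clinical-trial filter
# expressed as a publication-type tag.
query = ('"psoriasis" AND "adalimumab" AND "placebo" '
         'AND "improvement in PASI score" AND "clinical trial"[pt]')

handle = Entrez.esearch(db="pubmed", term=query, retmax=100)
record = Entrez.read(handle)
handle.close()

print(record["Count"], "matching papers")
print(record["IdList"][:10])  # candidate PMIDs for title screening
```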

For the second search, advanced search was used to find papers reporting on “intramedullary nailing” AND “fracture” AND “infection,” with a filter for clinical trials. Of the 80 resulting papers, 15 were chosen for abstract reads based on their titles. Each abstract was read, and 3 papers were chosen based on relevance. Results from those papers were extracted into a spreadsheet, and a chart was generated to visualize the performance of the various interventions. Times and process breakdowns are shown below.

Results for Manual Processes

                                               Time (min)
                                               Psoriasis, Adalimumab,    Tibial and Femoral Fracture,
Step                                           Placebo, and PASI         Intramedullary Nail, and Infections
Search Query                                   1                         6
Initial Selection                              6                         25
Abstract Read                                  50                        45
Final Paper Selection and Results Extraction   20                        10
Report Generation                              2                         2
Total Time                                     79                        88

The resulting average process time is 83.5 minutes.


Author Information

Wynona Dayao is a Scientific Customer Liaison at Evid Science. In addition to helping customers maximize their value with the Evid Science platform, she uses her microbiology background to maximize the lactic acids and gluten strands in her sourdough bread.