Announced on Oct 26, 2017
By Michael Chang, ORT Times Writer and UHN Trainee
Drug discovery programs are long and expensive. They involve 1) identifying potential drug targets, often with the help of antibodies, 2) finding candidate molecules with a therapeutic effect on the disease state, and 3) confirming effectiveness through preclinical and clinical trials. The entire process can take up to 15 years and cost $1B USD per drug (Hughes et al. 2011). However, the process is worth it: over the past quarter-century, child mortality in developing countries has dropped by 50%, largely due to new vaccines (Kassebaum 2017). Current drug discovery programs are making promising progress toward antibody-based treatments for some of the world's deadliest diseases, such as cancer, HIV and Ebola. Furthermore, medical advancements are being made at a record pace with the advent of computer technology, which has given humankind the highest level of productivity ever witnessed in history (www.worldbank.org). Yet much work remains to be done: there are several thousand known diseases and only about 500 treatments, according to the National Institutes of Health.
One key bottleneck in productivity, even with computers to automate data analysis, is the massive amount of output that still needs to be interpreted. Computers can store, index and filter enormous datasets, but a human is still required to understand the results and gain insights, and it remains difficult for a person to read through large amounts of information and make sense of it. This problem is especially apparent during the early stages of drug discovery, when medical researchers must spend hours combing through the scientific literature to find appropriate antibodies for validating pharmacological targets. Moreover, if inappropriate antibodies are used, false negatives and failed experiments translate into a significant waste of time and money. In fact, the inappropriate use of antibodies is one of the leading causes of the reproducibility crisis (Baker 2016).
BenchSci, a start-up in Toronto founded by former University of Toronto graduate students, aims to address this productivity bottleneck by using A.I. to interpret and organize these massive amounts of data. The company has created an A.I.-driven platform that helps researchers find antibodies suited to any set of unique experimental conditions. The platform also provides references in the form of figures from relevant scientific articles so researchers can verify its recommendations. BenchSci accomplishes this by scanning massive amounts of scientific literature and indexing antibody usage details with a program based on supervised machine learning. BenchSci has made its platform (www.BenchSci.com) freely available to researchers at academic institutions, such as UHN, to promote good science and accelerate new discoveries. The platform could help standardize experimental protocols, increase researchers' productivity, and prevent the use of inappropriate antibodies that lead to erroneous results. This A.I.-driven platform may be a game changer, one that could greatly improve the way medical research is done.
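To give a flavour of the supervised machine learning approach described above (BenchSci's actual system is proprietary; this toy sketch, including its labels and hand-written training sentences, is purely hypothetical), a classifier can be trained on labeled example sentences and then used to flag new sentences from papers that likely describe antibody usage:

```python
# Toy illustration only, NOT BenchSci's actual system: a tiny naive Bayes
# text classifier trained on hand-labeled sentences, then used to flag
# sentences that likely describe antibody usage.
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

class NaiveBayes:
    def __init__(self):
        self.word_counts = {}        # label -> Counter of word frequencies
        self.label_counts = Counter()  # label -> number of training sentences
        self.vocab = set()

    def train(self, labeled_sentences):
        for text, label in labeled_sentences:
            self.label_counts[label] += 1
            counts = self.word_counts.setdefault(label, Counter())
            for w in tokenize(text):
                counts[w] += 1
                self.vocab.add(w)

    def predict(self, text):
        total = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.label_counts:
            # log prior + log likelihood with Laplace (add-one) smoothing
            score = math.log(self.label_counts[label] / total)
            counts = self.word_counts[label]
            denom = sum(counts.values()) + len(self.vocab)
            for w in tokenize(text):
                score += math.log((counts[w] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Hypothetical hand-labeled training data.
train_data = [
    ("cells were stained with anti-CD3 antibody", "antibody_usage"),
    ("western blot was probed with rabbit anti-actin antibody", "antibody_usage"),
    ("mice were housed under standard conditions", "other"),
    ("statistical analysis used a t-test", "other"),
]
clf = NaiveBayes()
clf.train(train_data)
print(clf.predict("lysates were probed with anti-tubulin antibody"))  # → antibody_usage
```

A production system would use far larger labeled corpora and richer models, but the principle is the same: humans label examples once, and the trained model then indexes millions of sentences automatically.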
- Baker M. 2016. Is there a reproducibility crisis? A Nature survey lifts the lid on how researchers view the 'crisis' rocking science and what they think will help. Nature. 533:452-455.
- Hughes JP, Rees S, Kalindjian SB, Philpott KL. 2011. Principles of early drug discovery. Br J Pharmacol. 162:1239-1249.
- Kassebaum NJ. 2017. Child and adolescent health from 1990 to 2015: Findings from the global burden of diseases, injuries, and risk factors 2015 study. JAMA Pediatrics. 171:573-592.
Photo: The lounge/waiting area at BenchSci.
Photo: The BenchSci workstation.