This is a three-part blog series reflecting on the ACoP (American Conference on Pharmacometrics) 13 meeting held October 30th - November 2nd, 2022.
Several weeks ago I finally had the opportunity to attend my first ACoP in person after the pandemic, and I was especially excited for the talks and discussions around artificial intelligence/machine learning (AI/ML) in pharmacometrics. Reflecting on the conference, I noticed some themes that tie in nicely with internal MetrumRG discussions, projects, and viewpoints on AI/ML.
Moving past the hype: most of the talks and posters had realistic goals, reasonable scope, and perspective on the effort required.
For years, machine learning algorithms have made widely recognized impacts and become ubiquitous across a wide variety of fields (e.g., computer vision, chess, language translation), and there were expectations that ML could similarly transform pharmacometrics. That transformation has not materialized; instead, ML for pharmacometrics is now focused on specific applications and improvements to existing problems. Indeed, no poster or presentation promised that a single ML algorithm would have a wide impact. This stands in contrast to the "ML solves all problems and domain knowledge isn't required" perspective, embodied by Frederick Jelinek's (1985) remark about natural language processing: "Every time I fire a linguist, the performance of the speech recognizer goes up."
For example, I had a nice discussion with Timothy Rumbell about his poster on conditional GANs (generative adversarial networks) (Rumbell, et al. Constructing virtual cohorts that recreate data distributions using generative adversarial networks. Poster M-052.). A GAN pairs two neural networks, a generator and a discriminator, and is often used to generate data, such as the distribution of covariates in a disease population, or an image (this Deep Generative Modeling presentation by Ava Soleimany provides a nice explanation). In particular, GANs are useful for simulations because they can generate parameter and/or covariate sets that are plausible but did not occur in the observed data. However, setting up an appropriate architecture for hierarchical data from multiple studies was a non-trivial problem, even with a collaboration between statisticians and ML engineers. Most importantly, ACoP attendees shared their progress and examples without promising their method would solve everything in pharmacometrics.
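The core adversarial idea can be sketched in a few lines: a generator maps noise to candidate samples, and a discriminator learns to tell real data from generated data, each improving against the other. The toy one-dimensional example below, with a linear generator, a logistic discriminator, and a made-up covariate distribution, is purely my own illustrative assumption and is nothing like the hierarchical conditional GAN from the poster.

```python
import numpy as np

# Toy 1-D GAN sketch (illustrative only; NOT the poster's conditional GAN).
# The generator G(z) = a*z + c maps standard-normal noise to candidate
# "covariate" values; the discriminator D(x) = sigmoid(w*x + b) tries to
# separate real from generated samples. Both are updated by simultaneous
# gradient ascent on the standard (non-saturating) GAN objectives.

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

mu, sd = 2.0, 0.5   # made-up "real" covariate distribution (e.g., log weight)
a, c = 1.0, 0.0     # generator parameters
w, b = 0.0, 0.0     # discriminator parameters
lr = 0.01

for _ in range(5000):
    x_real = rng.normal(mu, sd, size=64)
    z = rng.normal(size=64)
    x_fake = a * z + c

    # Discriminator ascent on log D(real) + log(1 - D(fake))
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    w += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    b += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator ascent on log D(fake): pull generated samples toward
    # regions the discriminator currently labels "real"
    z = rng.normal(size=64)
    d_fake = sigmoid(w * (a * z + c) + b)
    a += lr * np.mean((1 - d_fake) * w * z)
    c += lr * np.mean((1 - d_fake) * w)

# Plausible-but-new draws: generated values that never occurred in the data
samples = a * rng.normal(size=1000) + c
```

This is exactly the appeal for simulation work: `samples` are new values drawn from a learned distribution, not resampled rows of the original dataset. Extending this picture to conditional, hierarchical, multi-study covariate sets is where the non-trivial architecture work in the poster comes in.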
Several talks dove into neural network architectures and key implementation details, and I appreciated that these important considerations were given more than a cursory mention. For example, James Lu and Nicholas Ellinwood presented "Explainable Machine Learning for Disease Progression Modeling & Digital Twins," and Stefan Groha presented "Neural ODEs for Multi-State Modeling and Cause-Specific Time-to-Event Analysis." These presentations did not shy away from the work and iterations required for these models. In addition, special thanks to Stefan Groha and everyone else who published their code and models in well-organized and commented Python notebooks on GitHub (https://github.com/stefangroha/SurvNODE), contributing to open science. Other speakers presented targeted uses of ML to solve a specific problem (Venkatesh Reddy on DDIs: "Comparing the Applications of Machine Learning and PBPK/Pop-PK Models in Pharmacokinetic Drug-Drug Interaction Modeling"), provided specific scientific insight (Frank Kloprogge on knowledge extraction: "PKPDAI: A Pharmacometric Knowledge Repository Structured and Curated with Natural Language Processing"), and increased confidence in their models, which brings me to the second theme: ML methodologies to improve modeling.
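For readers new to the neural-ODE idea behind talks like Groha's: instead of writing the differential equations by hand, the state's time derivative is parameterized by a neural network, and trajectories are recovered by numerical integration. The tiny untrained network and fixed-step Euler solver below are my own illustrative assumptions, not code from the talk or the SurvNODE repository.

```python
import numpy as np

# Minimal neural-ODE sketch (illustrative assumption, not SurvNODE code):
# a small MLP f(x) plays the role of dx/dt, and the trajectory is
# obtained by numerically integrating it forward in time.

rng = np.random.default_rng(1)

# Random (untrained) weights for a 2-state system with one hidden layer
W1 = rng.normal(scale=0.5, size=(8, 2))
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(2, 8))
b2 = np.zeros(2)

def f(x):
    """Neural network giving the time derivative dx/dt."""
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2

def odeint_euler(f, x0, t0, t1, n_steps=100):
    """Fixed-step forward-Euler integration of dx/dt = f(x)."""
    x = np.array(x0, dtype=float)
    dt = (t1 - t0) / n_steps
    for _ in range(n_steps):
        x = x + dt * f(x)
    return x

x0 = np.array([1.0, 0.0])          # initial state (hypothetical)
xT = odeint_euler(f, x0, 0.0, 1.0) # state at t = 1
```

In a multi-state survival setting, the integrated state would then drive quantities such as transition intensities, and the network weights would be fit to data; only the integration step is shown here.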
Stay tuned for the next post in this series for more on ML methodologies to improve modeling!