Dragutin Petkovic of the Computer Science Department at San Francisco State University presented “Toward Explainable Machine Learning – RFEX: Improving Random Forest Explainability” at last week’s Mobilize Center seminar. The abstract for the talk is below, and the presentation slides are available here.
Abstract: Machine Learning (ML) methods now influence major decisions about patient care, new medical methods, and drug development, and their use and importance are rapidly increasing in all areas. However, these ML methods are inherently complex and often difficult to understand and explain, resulting in barriers to their adoption and validation. We define explainability in ML as easy-to-use information explaining why and how the ML approach made its decisions. We believe much greater effort is needed to address ML explainability, given the ever-increasing use of and dependence on ML in many applications and the need for increased adoption by non-ML experts.
In our talk, we will 1) summarize a workshop discussion on ML explainability organized jointly with Profs. L. Kobzik and C. Re at the 2018 Pacific Symposium on Biocomputing (PSB), and 2) describe our work on Random Forest Explainability (RFEX), joint work with Prof. R. Altman, M. Wong, and A. Vigil. RFEX produces easy-to-interpret explainability summary reports from trained RF classifiers, improving explainability for users who are often not ML experts. We tested RFEX with the FEATURE program, which predicts functional sites in 3-D molecules based on their electrochemical signatures (features). Through formal usability testing with expert and non-expert users, we found that the RFEX explainability report significantly increased explainability and user confidence in RF classification.
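The RFEX report format itself is not detailed in this announcement, but the underlying idea of summarizing a trained Random Forest for non-experts can be illustrated. The sketch below (my own minimal example using scikit-learn, not the RFEX method) trains an RF classifier on synthetic stand-in data and prints a ranked feature-importance summary, the kind of raw signal an explainability report might build on.

```python
# Minimal sketch of summarizing a trained Random Forest by ranked
# feature importance. This is an illustration only, not RFEX itself;
# the dataset is a synthetic stand-in for feature vectors.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic binary classification data: 6 features, 3 informative
X, y = make_classification(n_samples=200, n_features=6,
                           n_informative=3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

# Rank features by mean decrease in impurity (Gini importance)
ranking = sorted(enumerate(clf.feature_importances_),
                 key=lambda pair: pair[1], reverse=True)

for idx, importance in ranking:
    print(f"feature {idx}: importance {importance:.3f}")
```

A fuller explainability summary would typically add per-feature value ranges and the classifier performance achieved with only the top-ranked features, so a non-expert can see how few features carry most of the signal.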