Friends Don’t Let Friends Deploy Black-Box Models: The Importance of Intelligibility in Machine Learning

November 9, 2023 @ 12:00 pm - 1:30 pm

Fall 2023 Sheps IT Lunch & Learn

 

Thursday November 9, 2023
12:00 – 1:30 PM EST

Zoom:

Register in advance for this meeting:
https://zoom.us/meeting/register/tJwldeuoqzgqHNKphBkXa8S20E2FZecY56-R

After registering, you will receive a confirmation email containing information about joining the meeting.

 


In domains such as healthcare and finance, it is important that the AI models we deploy be accurate, unbiased, and safe. In this talk we’ll present a half-dozen case studies where glass-box machine learning uncovers surprising patterns in data that would make deploying black-box models trained on that data very risky. By using glass-box AI, we are able not only to detect these problems but, in many cases, to repair them by editing the models prior to deployment. Model interpretability will help us develop models that are safer for use in critical domains such as healthcare, as well as less biased and more privacy-preserving.
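To make the "editing glass-box models" idea concrete, here is a minimal, self-contained sketch of an additive (GAM-style) model in plain Python. Every name here (`fit_term`, `term_value`, `predict`, the toy data) is illustrative and not the speaker's actual code or the InterpretML library: each feature's effect is just a per-bin lookup table, so a surprising or unsafe pattern can be inspected directly and flattened out before deployment.

```python
def fit_term(xs, ys, n_bins=4):
    """Learn one feature's shape function: mean target value per bin."""
    lo, hi = min(xs), max(xs)
    width = (hi - lo) / n_bins or 1.0
    overall = sum(ys) / len(ys)
    bins = [[] for _ in range(n_bins)]
    for x, y in zip(xs, ys):
        i = min(int((x - lo) / width), n_bins - 1)
        bins[i].append(y)
    # Empty bins fall back to the overall mean (no learned effect).
    means = [sum(b) / len(b) if b else overall for b in bins]
    return {"lo": lo, "width": width, "means": means}

def term_value(term, x):
    """Look up a feature's contribution for value x."""
    i = min(int((x - term["lo"]) / term["width"]), len(term["means"]) - 1)
    return term["means"][max(i, 0)]

# Toy data: feature 1 carries the signal; feature 0 is incidental.
X = [[0.1, 1.0], [0.2, 2.0], [0.9, 3.0], [0.8, 4.0]]
y = [1.0, 2.0, 3.0, 4.0]
base = sum(y) / len(y)

terms = [fit_term([row[j] for row in X], y) for j in range(2)]

def predict(terms, row):
    """Prediction = base rate + sum of per-feature contributions."""
    return base + sum(term_value(t, x) - base for t, x in zip(terms, row))

# Because each term is a readable lookup table, a problematic pattern can
# be "repaired" by flattening that term to the base rate before deploying:
terms[0]["means"] = [base] * len(terms[0]["means"])
```

After the edit, feature 0 contributes nothing, so predictions depend only on feature 1; in a real glass-box system (e.g., the Explainable Boosting Machine the speaker's group developed) the same inspect-then-edit workflow applies to learned shape functions.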

 


Speaker: Rich Caruana, PhD

Rich Caruana is a senior principal researcher at Microsoft Research. Before joining Microsoft, Rich was on the CS faculty at Cornell, at UCLA’s Med School, and at CMU’s Center for Learning and Discovery. Rich’s Ph.D. is from CMU, where he worked with Tom Mitchell and Herb Simon. His thesis on multi-task learning helped create interest in a new subfield of machine learning called transfer learning. Rich received an NSF CAREER Award in 2004 for meta clustering, best-paper awards in 2005 (with Alex Niculescu-Mizil), 2007 (with Daria Sorokina), and 2014 (with Todd Kulesza, Saleema Amershi, Danyel Fisher, and Denis Charles), and co-chaired KDD in 2007. His current research focus is on learning for medical decision making, interpretable AI, and large language models.