#70 RE-RUN: Making Black Box Models Explainable with Christoph Molnar – Interpretable Machine Learning Researcher



Christoph Molnar is a data scientist and Ph.D. candidate in interpretable machine learning. He is interested in making algorithmic decisions more understandable to humans, and he is passionate about applying statistics and machine learning to data to make both humans and machines smarter.

In this episode, Christoph explains how he decided to study statistics at university, which eventually led him to his passion for machine learning and data. Starting out under a senior researcher gave Christoph exposure to many different projects through a statistical consulting program — an excellent arrangement from which both students and companies benefit greatly. Christoph learned more about applied statistics than he could have otherwise, while clients received nine hours of consulting for free, which was very valuable for their businesses. One of his early consulting projects was a patient analysis assessing whether a medication was affecting the spine, which he found fascinating because it differed so much from his previous work.

When labeling data, Christoph advises labeling and comparing continuously. For instance, after a student labeled a photo, Christoph would later show the same student the same photo to check whether it received the same label. People sometimes label the same image differently, so this is one way to verify that labeling is going smoothly. If you have multiple labelers, you also need to compare how each labeler marks the same photo. Do not be blind to the quality of your data; it is easy to be misled by the numbers.
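The consistency checks described above can be sketched in a few lines. This is a minimal illustration, not Christoph's actual tooling: it assumes label events are recorded as `(labeler, photo_id, label)` tuples and flags both self-disagreement (the same labeler relabeling the same photo differently) and cross-labeler disagreement.

```python
from collections import defaultdict

def consistency_report(label_events):
    """Flag photos whose repeated labels disagree.

    label_events: iterable of (labeler, photo_id, label) tuples.
    Returns (self_conflicts, cross_conflicts):
      self_conflicts  maps (labeler, photo_id) -> set of differing labels
                      that one labeler gave the same photo;
      cross_conflicts maps photo_id -> set of differing labels given
                      across all labelers.
    """
    seen = defaultdict(set)  # (labeler, photo_id) -> labels given
    for labeler, photo, label in label_events:
        seen[(labeler, photo)].add(label)

    # Same labeler, same photo, different labels on repeat viewings.
    self_conflicts = {k: v for k, v in seen.items() if len(v) > 1}

    # Pool labels per photo to find disagreement between labelers.
    by_photo = defaultdict(set)
    for (_, photo), labels in seen.items():
        by_photo[photo] |= labels
    cross_conflicts = {p: ls for p, ls in by_photo.items() if len(ls) > 1}

    return self_conflicts, cross_conflicts

events = [
    ("alice", "img1", "cat"),
    ("alice", "img1", "dog"),  # alice relabels img1 inconsistently
    ("alice", "img2", "cat"),
    ("bob",   "img2", "cat"),  # alice and bob agree on img2
]
self_c, cross_c = consistency_report(events)
```

In this hypothetical run, `self_c` flags `("alice", "img1")` and `cross_c` flags `img1` but not `img2`. In practice one would track agreement rates over time rather than single conflicts, but the idea is the same: re-show labeled items and compare.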

Then, Christoph speaks about pursuing his Ph.D. in interpretable machine learning. He publishes his book, Interpretable Machine Learning, on his website chapter by chapter, gathering reader feedback and using it as he writes future chapters. Interpretable machine learning is not yet widely taught at universities, although some schools and professors are starting to integrate it into their curricula. Stay tuned to hear Christoph discuss accumulated local effects, deep learning, and his book, Interpretable Machine Learning.

Enjoy the show!

--- Send in a voice message: https://anchor.fm/datafuturology/message