From RH Snacks:
Zuckerberg announces new AI hires for “Meta Labs”
| Name | History | School | Degree | Focus | Network |
|---|---|---|---|---|---|
| Trapit Bansal | Microsoft, OpenAI | UMass Amherst | PhD | Few-shot NLP | Ilya Sutskever, Tsendsuren Munkhdalai, Da-Cheng Juan, Sujith Ravi, A. McCallum |
| Lucas Beyer | Google, OpenAI | RWTH Aachen | PhD | Deep visual sensing, robotics | James Bergstra, Bastian Leibe |
| Shuchao Bi | YouTube, OpenAI | UC Berkeley | PhD | Math | Olga Holtz |
| Nat Friedman | Xamarin, MSFT, GitHub | MIT | BS | CS, Math | Linus Torvalds |
| Joel Pobar | Facebook, Anthropic | Queensland Tech | BS | CS | |
| Shengjia Zhao | OpenAI | Stanford | PhD | Unsupervised learning | Stefano Ermon, Syrine Belakaria, Rui Shu, Volodymyr Kuleshov, Tri Dao, Jiaming Song, Burak Uzkent, Aditya Grover |
WSJ's Meghan Bobrowsky: It's all about more ad revenue and trying to fix the disaster of Llama Behemoth, the LLM that never was.
From the thesis acknowledgments: "I would like to thank everyone in Ermon group who I had the fortune to collaborate or conduct research with, including Aditya Grover"
The thesis itself (Uncertainty and Information for ML-Powered Decision Making)
Contribution 1
Proposing a new approach to conveying confidence to downstream decision makers who will use the predictions for (high-stakes) decisions, via accurate uncertainty quantification. Accurate uncertainty quantification can be achieved by predicting the true probability of the outcome of interest (such as the true probability of a patient's illness given the symptoms). While outputting these probabilities exactly is impossible in most cases, I show that it is surprisingly possible to learn probabilities that are indistinguishable from the true probabilities for large classes of decision-making tasks. Indistinguishability ensures reliability: decision makers cannot tell the difference between the predicted probability and the true probability within their decision tasks. As an application, I develop prediction models in domains such as medical diagnosis, flight delay prediction, and poverty prediction, and show that with these methods decision makers can confidently make decisions that lead to good outcomes.
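The indistinguishability claim can be illustrated with a toy simulation (a hedged sketch of the general idea, not the thesis's actual construction): when a predictor's probabilities are calibrated, a threshold decision maker's anticipated success rate on the actions it takes matches the realized one, so for that decision task the predictions behave just like the true probabilities.

```python
import numpy as np

# Toy sketch (not the thesis's construction): a decision maker acting on
# calibrated probabilities anticipates the same outcome frequencies it
# actually realizes.
rng = np.random.default_rng(0)

n = 100_000
p_true = rng.uniform(0, 1, n)        # true P(Y=1 | X) for each case
y = rng.random(n) < p_true           # sampled binary outcomes

# Idealized perfectly calibrated predictor: its output equals the
# conditional positive rate among the cases receiving that output.
p_pred = p_true

# Threshold decision maker: act only when predicted probability > c.
c = 0.7
act = p_pred > c
anticipated = p_pred[act].mean()     # success rate the decision maker expects
realized = y[act].mean()             # success rate actually observed
print(round(anticipated, 3), round(realized, 3))  # nearly identical
```

The thesis's notion of decision calibration is weaker and therefore learnable: the predicted probabilities need only be indistinguishable from the truth with respect to the decision maker's class of tasks, not pointwise equal as in this idealized sketch.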
Contribution 2
Developing a new theory of information to rigorously reason about and optimize the "usefulness" of ML predictions across a wide range of decision tasks. Shannon information theory has wide applications in machine learning, but it suffers from several limitations when applied to complex learning and decision tasks. For example, consider a dataset of securely encrypted messages intercepted from an opponent. According to information theory, these encrypted messages have high mutual information with the opponent's plans, yet a computationally bounded decision maker cannot utilize this information. To address these limitations, I put forward a new framework called "utilitarian information theory" that generalizes Shannon's entropy, information, and divergence to account for how information will be used by a decision maker.
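The encryption example can be made concrete with a small computation (my own toy setup, not the thesis's formalism): applying a secret bijection to a signal leaves its Shannon mutual information with the outcome exactly unchanged, yet a restricted decision maker — here threshold policies stand in for computational bounds — can no longer extract any decision value from it.

```python
import numpy as np

# Toy illustration: signal X in {0,...,7} leaks a binary outcome Y.
xs = np.arange(8)
p_x = np.full(8, 1 / 8)
p_y1_x = xs / 7.0                      # P(Y=1 | X=x): larger X, likelier Y=1

def binary_entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def mutual_info(p_y1_given, p_x):
    """I(signal; Y) in bits for binary Y."""
    p_y1 = (p_x * p_y1_given).sum()
    return binary_entropy(p_y1) - (p_x * binary_entropy(p_y1_given)).sum()

# "Encryption": a fixed secret bijection of the signal alphabet.
key = np.array([3, 6, 1, 4, 7, 0, 5, 2])
p_y1_enc = np.empty(8)
p_y1_enc[key] = p_y1_x                 # enc(x) = key[x]

def best_threshold_value(p_y1_given, p_x):
    """Best expected utility over policies 'act iff signal >= t'
    (utility +1 for acting when Y=1, -1 when Y=0, 0 for not acting)."""
    return max(
        (p_x[xs >= t] * (2 * p_y1_given[xs >= t] - 1)).sum()
        for t in range(9)
    )

mi_x, mi_enc = mutual_info(p_y1_x, p_x), mutual_info(p_y1_enc, p_x)
v_x, v_enc = best_threshold_value(p_y1_x, p_x), best_threshold_value(p_y1_enc, p_x)
print(mi_x, mi_enc)   # identical: Shannon information survives encryption
print(v_x, v_enc)     # decision value collapses without the key
```

Mutual information is invariant under relabeling the signal, which is exactly why it cannot distinguish a useful signal from an encrypted one; a utility-aware information measure, evaluated against what the bounded decision maker can actually do, does make that distinction.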