Daniela Amodei
Daniela Amodei has been tweeting about various topics in AI research lately. One trend she has highlighted is the improved performance of Anthropic's latest model, Claude 2, which excels at coding, math, and reasoning; it can also produce longer responses and is available through a new public-facing beta website. Another trend she discusses is the challenge of interpretability in neural networks, in particular the phenomenon known as 'polysemanticity', where unrelated concepts are packed into a single neuron. Daniela shares Anthropic's latest work on building toy models to fully understand the origins of polysemanticity.
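To make the toy-model idea concrete, here is a minimal sketch in the spirit of that work, assuming the common setup where sparse features are squeezed through a bottleneck narrower than the number of features; the class name and dimensions are illustrative, not taken from the tweets.

```python
import torch
import torch.nn as nn

class ToyModel(nn.Module):
    """Sketch of a superposition-style toy model: n_features sparse
    features pass through a d_hidden < n_features bottleneck, so the
    model is forced to pack several features into each direction."""

    def __init__(self, n_features: int = 20, d_hidden: int = 5):
        super().__init__()
        self.W = nn.Parameter(torch.randn(d_hidden, n_features) * 0.1)
        self.b = nn.Parameter(torch.zeros(n_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = x @ self.W.T                         # project into the bottleneck
        return torch.relu(h @ self.W + self.b)   # reconstruct the features
```

Because d_hidden is smaller than n_features, the model cannot give each feature its own direction; training tends to superpose several features onto each hidden dimension, which is one proposed origin of polysemantic neurons.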
Language models and their abilities have also caught Daniela's attention recently. She references a paper titled "Language Models (Mostly) Know What They Know," which shows that language models can evaluate whether their own statements are true and can predict in advance whether they will be able to answer a question correctly. She also discusses work on making Transformer MLP neurons easier to understand by swapping in different activation functions, such as the Softmax Linear Unit (SoLU).
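The core of SoLU is simple enough to state directly: the activation multiplies its input elementwise by the softmax of that input, SoLU(x) = x * softmax(x). A minimal PyTorch sketch follows; the function name and dim argument are my own, and as I recall the published variant also applies a LayerNorm after the activation.

```python
import torch

def solu(x: torch.Tensor, dim: int = -1) -> torch.Tensor:
    """Softmax Linear Unit: SoLU(x) = x * softmax(x).

    The softmax pushes the activation toward having one dominant
    component, which is what is argued to make the resulting
    neurons easier to interpret."""
    return x * torch.softmax(x, dim=dim)
```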
In terms of AI trends, Daniela mentions a double descent phenomenon that appears when a small fraction of a language model's training data is repeated many times (sketched below). She also highlights her involvement with Anthropic, the AI safety and research company she co-founded, which works on areas such as interpretability, reinforcement learning, and societal impacts.
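As a rough illustration of the repeated-data setup behind that double descent observation, the hypothetical helper below duplicates a small slice of a corpus many times and mixes it back in with the unique remainder; the function name and parameters are invented for this sketch.

```python
import random

def build_repeated_mixture(corpus, repeated_frac=0.01, n_repeats=100, seed=0):
    """Hypothetical helper: duplicate a small fraction of the corpus
    n_repeats times and mix it back into the remaining unique data."""
    rng = random.Random(seed)
    docs = list(corpus)
    rng.shuffle(docs)
    n_rep = max(1, int(len(docs) * repeated_frac))
    repeated, unique = docs[:n_rep], docs[n_rep:]
    mixture = unique + repeated * n_repeats
    rng.shuffle(mixture)
    return mixture
```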
Overall sentiment analysis suggests that Daniela Amodei holds a positive view of the direction AI is heading. Her tweets tend to focus on advances in model performance and capabilities while also discussing interpretability challenges and potential ways to address them.