
Goodfire, a San Francisco, CA-based developer of interpretability tools for AI models, has raised $150 million in a Series B funding round led by B Capital.
The round also saw participation from Juniper Ventures, Menlo Ventures, Lightspeed Venture Partners, South Park Commons, and Wing Venture Capital, as well as new investors DFJ Growth, Salesforce Ventures, Eric Schmidt, and others.
The company is valued at $1.25 billion.
The new funding will help Goodfire advance its work on a “model design environment”—a platform for understanding, debugging, and intentionally designing AI models at scale. Using advanced interpretability techniques, the platform will let users explore models, pinpoint the parts that drive specific behaviours, and directly train or adjust those components.
The company plans to use the funds to support advanced research, develop the next version of its core product, and expand partnerships in AI agents and life sciences.
Goodfire is part of a new wave of research-focused “neolabs”—AI companies exploring breakthroughs in model training that have been overlooked by larger “scaling labs” like OpenAI and Google DeepMind.
“We’re building some of the most important technology of our time without fully understanding how to design models to behave as we want,” said Yan-David “Yanda” Erlich, former COO and CRO at Weights & Biases and General Partner at B Capital. “At Weights & Biases, I saw thousands of ML teams struggle with the same issue: they could track experiments and monitor models, but they couldn’t truly understand why the models acted the way they did. Goodfire is bridging that gap—giving teams the ability to guide what models learn, make them safer and more useful, and unlock the vast knowledge they hold.”
Interpretability is the study of how neural networks work inside and how changing their internal processes can alter their behaviour—for example, adjusting a reasoning model’s concepts to change how it thinks and responds. It also allows AI to share knowledge with humans, helping extract new insights from powerful models. Using these techniques, Goodfire recently discovered a new class of Alzheimer’s biomarkers with an epigenetic model built by Prima Mente—the first major natural science breakthrough achieved by reverse-engineering a foundation model.
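To make the idea concrete, the sketch below shows one common form of such an intervention, activation steering, in which a model's hidden activations are nudged along a learned "concept" direction at inference time. Everything here is illustrative: the toy model, the concept_vector, and the hook are stand-ins for the far larger models and learned feature directions used in practice, not Goodfire's actual tooling.

```python
# A minimal, hypothetical sketch of activation steering.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for one block of a much larger model.
model = nn.Sequential(
    nn.Linear(16, 64),
    nn.ReLU(),
    nn.Linear(64, 16),
)

# A hypothetical direction in activation space tied to a concept
# (in practice such directions are found with interpretability
# techniques, e.g. sparse autoencoders over real model activations).
concept_vector = torch.randn(64)

def steer(module, inputs, output):
    # Nudge the hidden activations along the concept direction,
    # changing downstream behaviour without retraining any weights.
    return output + 2.0 * concept_vector

# Attach the intervention to the hidden layer's output.
handle = model[1].register_forward_hook(steer)

x = torch.randn(1, 16)
steered = model(x)
handle.remove()  # detach the hook to restore default behaviour
baseline = model(x)

print("activation shift:", (steered - baseline).norm().item())
```

The key point is that the model's behaviour changes without any weight updates; the intervention is applied, and removed, at runtime.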
Most AI companies today treat their models as black boxes. Goodfire believes this leaves society “flying blind” and that understanding how models work inside is essential for creating safe and powerful AI. The company focuses on research that makes AI understandable, debuggable, and deliberately designed—just like traditional software.
“Interpretability is our toolset for a new kind of science: a way to form hypotheses, run experiments, and intentionally design intelligence instead of discovering it by chance,” said Goodfire CEO Eric Ho. “Every engineering field has relied on fundamental science—like steam engines before thermodynamics—and AI is at that same turning point today.”
On the scientific discovery side, Goodfire works with partners such as Mayo Clinic, Arc Institute, and Prima Mente to study foundation models; that work includes the discovery of a new class of biomarkers for Alzheimer's detection. Since AI models already outperform humans in areas like materials science and protein folding, understanding how they work can reveal new insights and push the boundaries of knowledge. The company plans to expand its scientific discovery efforts by adding more collaborators.
On the model design side, Goodfire focuses on teaching AI models by working directly with their internal mechanisms. The company has developed methods to retrain a model’s behaviour by targeting specific parts of its inner workings. Using this approach, they cut hallucinations in a large language model by half. Goodfire believes this method could change how AI is built, making models more reliable and enabling people to control their behaviour precisely and efficiently, without unintended effects.
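As a rough sketch of the general idea (the source does not describe Goodfire's method at this level of detail), the snippet below freezes every parameter except a component that an analysis has hypothetically implicated, then fine-tunes only that part on corrective data. The model, the choice of target layer, and the training data are all placeholders.

```python
# A hedged sketch of targeted retraining: update only the component
# an interpretability analysis flags as driving an unwanted behaviour.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 64),
    nn.ReLU(),
    nn.Linear(64, 16),
)

# Suppose the analysis implicated only the final projection layer.
target = model[2]
for p in model.parameters():
    p.requires_grad = False       # freeze everything...
for p in target.parameters():
    p.requires_grad = True        # ...except the implicated component

opt = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)

# Dummy corrective data standing in for examples of desired behaviour.
x, y = torch.randn(32, 16), torch.randn(32, 16)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
```

Restricting gradient updates to the implicated component is what keeps the rest of the model's behaviour intact, which is the "without unintended effects" property the approach aims for.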
Goodfire’s team brings together top AI researchers from DeepMind and OpenAI, leading academics from Harvard, Stanford, and other institutions, and expert ML engineers from OpenAI and Google. Key members include Nick Cammarata, a core contributor to OpenAI’s interpretability team; co-founder Tom McGrath, who started the interpretability team at Google DeepMind; and Leon Bergen, a professor at UC San Diego (on leave).
The company also plans to keep exploring fundamental model understanding and developing new interpretability techniques.
About Goodfire
Founded in 2024 and led by CEO and co-founder Eric Ho, Goodfire is a San Francisco-based research company and public benefit corporation focused on using interpretability to understand, learn from, and design AI systems. The company's mission is to build the next generation of safe, powerful AI, not just by scaling but by truly understanding the intelligence being created. Goodfire aims to make AI that can be understood, debugged, and shaped like software. Its team includes leaders in neural network interpretability from OpenAI, DeepMind, Stanford, and Harvard. The company is backed by over $200M from B Capital, Menlo Ventures, Lightspeed, Eric Schmidt, and others.


