Swyx · 11/03/2026
Summary
Yann LeCun’s AMI Labs has launched with $1.03B in seed funding to develop AI models that understand the physical world, marking a significant milestone in AI development.
Key Insights
“AMI aims to build AI models that understand the physical world.” — Describing the mission of AMI Labs as articulated by Yann LeCun.
“This is LeCun finally getting the capital and team to prove his long-argued alternative to LLM-centric AI.” — Supportive view on the significance of AMI Labs’ funding and vision.
“Intelligent agents need hierarchical representations to understand the world.” — LeCun’s critique of pure autoregressive LLMs and the need for grounded understanding.
Full Article
AI News for 3/9/2026-3/10/2026. We checked 12 subreddits, 544 Twitters, no more Discord (see below). Estimated reading time saved (at 200wpm): 2649 minutes. AINews’ website lets you search all past issues. As a reminder, AINews is now a section of Latent Space. You can opt in/out of email frequencies!

Most days, the AINews op-ed is human-written, while the sections below are human-curated selections from multiple LLM generations. However, some days there’s one clear big story, and in those cases we’ve been developing a new methodology that reports the big story in more detail. Today is one of those days: GPT 5.4 won today’s battle in describing AI Twitter coverage of AMI. So, read on; feedback welcome. Ed.

AI Twitter Recap

Top Story: Yann LeCun’s AMI Labs launches with a $1.03B seed round (also cited as €890M) at a reported $3.5B pre-money valuation, described as one of the largest seed rounds ever and likely the largest for a European company. The announcement came directly from LeCun, who said the company had completed “one of the largest seeds ever” and was hiring @ylecun, and from CEO Alex Lebrun, who framed the mission as a long-term scientific endeavor to build systems that truly understand the real world @lxbrun. Multiple press reports converged on the same core facts: AMI aims to build AI models that understand the physical world and reflects LeCun’s long-running view that human-level AI will come from world modeling rather than scaling language prediction alone @TechCrunch @WIRED @business @Reuters @ZeffMax.

The founding and senior team includes LeCun; Alex Lebrun as CEO @lxbrun; Saining Xie as cofounder/CSO @sainingxie; Laurent Solly as COO @laurentsolly; Pascale Fung as Co-Founder and Chief Research & Innovation Officer @pascalefung; plus a wave of prominent founding researchers joining to work specifically on world models, representation learning, pretraining, scaling, and video @sanghyunwoo1219 @jihanyang13 @duchao0726 @zhouxy2017 @jingli9111.

Facts vs. opinions

Facts reported across tweets and coverage:
- Funding size: $1.03B seed / €890M @ylecun @lxbrun @laurentsolly.
- Valuation: reported $3.5B pre-money @iScienceLuvr.
- Press coverage: $1.03B raise and world-model framing.
- @BFMTV: French-language mainstream framing of the raise as historic.
- @WIRED: contextualizes LeCun’s long-running thesis that physical-world mastery, not language alone, is the route to human-level AI.
- @business: Bloomberg confirmation of the funding magnitude.
- @iScienceLuvr: adds the $3.5B pre-money valuation figure.
- @sainingxie: AMI is not a conventional lab, and Xie joins as cofounder/CSO.
- @lxbrun: CEO announcement; mission is a long-term scientific effort toward real-world understanding.
- @ZeffMax: concise summary that AMI is LeCun betting big on world models after years of advocacy.
- @teortaxesTex: LeCun “gets a chance to prove his vision.”
- @Brian_Bo_Li: “real intelligence into the real world” slogan.
- @sanghyunwoo1219: joined from day one specifically to work on world models.
- @laurentsolly: COO announcement; repeats funding and “next AI frontier” models.
- @mavenlin: enthusiasm from another team member, signaling depth of the founding bench.
- @crystalsssup: notes Saining Xie’s presence as a signal of AMI’s seriousness.
- @ylecun: official unveiling; one of the largest seeds ever, likely the largest for a European company.
- @jihanyang13: founding-team join announcement.
- @giffmana: asks whether AMI becomes a PyTorch or JAX shop.
- @France24_fr: French media framing as a paradigm shift.
- @TheRundownAI: short summary of going “beyond language models to build world models.”
- @pascalefung: Fung joins as CRIO; emphasizes human-centered AI that perceives, learns, reasons, acts.
- @EmmanuelMacron: political endorsement and national strategic framing.
- @franceinter: media amplification around LeCun’s broader claims about jobs and AI transformation.
- @mervenoyann: bullish on world models as a leap forward for embodied research and likes the open stance.
- @kimmonismus: adds a healthcare/Nabla commercialization angle and hallucination-risk framing.
- @pascalefung: hiring for the Paris team.
- @zhouxy2017: founding member working on world models.
- @Reuters: calls AMI an “alternative AI approach.”
- @NVIDIAAI and related Thinking Machines/NVIDIA posts are not about AMI; omitted from focus.
- @chris_j_paxton: notes the absence of the Bay Area in listed locations; suggests geographic differentiation.
- @giffmana: clarifies Zürich is one of the locations.
- @lilianweng: “building technologies for better human-AI collaboration on next gen hardware at scale.” Indirect but clearly tied to joining/working with the AMI orbit.
- @Yuchenj_UW: juxtaposes LeCun’s world-model startup and Meta’s Moltbook acquisition, highlighting the contrast between long-horizon foundational bets and near-term agent/social-product bets.
- @LiorOnAI: the most explicit technical gloss on JEPA and why latent-space predictive modeling may matter.
- @sainingxie: appreciation reply; minor but confirms continued engagement.
- @NandoDF @DrJimFan @denisyarats: peer congratulations; low-information but signal broad respect.

Bottom line

AMI Labs is the strongest institutional challenge yet to the idea that scaling autoregressive language models is the sole or dominant route to AGI. The hard facts are unusually concrete ($1.03B seed, $3.5B pre-money, an elite vision/world-model-heavy team, France/Europe strategic backing), while the technical promise remains largely thesis-level for now: JEPA-style latent predictive world models that learn from real-world sensor data and support planning/action without reconstructing every bit of noise.
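The core JEPA mechanic is simple to state: encode context and target into latents, train a predictor to map the context latent onto the target latent, and score error in latent space rather than reconstructing raw pixels or sensor readings. A minimal toy sketch in plain Python (all dimensions, the random linear encoders, and names like `jepa_loss` are invented for illustration; real JEPA models use deep networks, masking, and EMA target encoders):

```python
import random

random.seed(0)

# Toy sizes; everything here is illustrative, not AMI's actual architecture.
OBS_DIM, LATENT_DIM = 6, 3

def rand_matrix(rows, cols):
    return [[random.gauss(0, 0.1) for _ in range(cols)] for _ in range(rows)]

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

W_ctx = rand_matrix(LATENT_DIM, OBS_DIM)      # context encoder
W_tgt = rand_matrix(LATENT_DIM, OBS_DIM)      # target encoder (frozen/EMA in practice)
W_pred = rand_matrix(LATENT_DIM, LATENT_DIM)  # latent-space predictor

def jepa_loss(ctx_obs, tgt_obs):
    """Predict the target's latent from the context's latent.

    The loss lives entirely in latent space: there is no pixel- or
    sensor-level reconstruction, so unpredictable noise in the raw
    observation never has to be modeled."""
    z_ctx = matvec(W_ctx, ctx_obs)   # encode context (e.g., current frames)
    z_tgt = matvec(W_tgt, tgt_obs)   # encode target (e.g., future frames)
    z_hat = matvec(W_pred, z_ctx)    # predict the target latent
    return sum((a - b) ** 2 for a, b in zip(z_hat, z_tgt)) / LATENT_DIM

ctx = [random.gauss(0, 1) for _ in range(OBS_DIM)]  # stand-in observation
tgt = [random.gauss(0, 1) for _ in range(OBS_DIM)]  # stand-in future observation
print(f"latent prediction loss: {jepa_loss(ctx, tgt):.4f}")
```

The design point is the loss target: an autoregressive or reconstruction model would be penalized for every unpredictable detail of `tgt`, whereas here only the encoder's abstraction of it must be predicted.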
Supporters view it as the overdue next paradigm; neutrals see a high-stakes test of whether LeCun’s critique of LLMs can finally cash out in products and benchmarks; skeptics, even when not stated bluntly, will judge it on whether world models can outcompete rapidly improving LLM agents before the market closes around the current stack.

Other Topics

Agents, coding workflows, and the builder vs. reviewer shift

A broad theme across the timeline is that coding agents are changing software org structure: implementation is no longer the bottleneck; review, architecture, and product judgment are @renilzac @clairevo @dexhorthy. Multiple reactions converged on the framing that engineers increasingly become either builders with product taste or reviewers with systems thinking @radek__w @ZhitaoLi224653.

Agent harnesses emerged as a major practical concept: Agent = Model + Harness, with filesystems, memory, browsers, routing, orchestration, and sandboxes all part of the real product surface @Vtrivedy10 @techczech @AstasiaMyers @omarsar0.

Tooling updates reflected that trend:
- VS Code Agent Hooks for policy enforcement and workflow guidance @code
- GitHub/Figma MCP closes design-to-code loops @github
- LangGraph deploy and LangGraph 1.1 simplify productionization @LangChain @sydneyrunkle
- Together MCP server and Together GPU Clusters add infra for agent-driven app building and scale @togethercompute @togethercompute
- Ollama scheduled prompts in Claude Code adds simple automation loops @ollama

Product reactions were split between enthusiasm and caution:
- Perplexity Computer replacing routine knowledge work and marketing tasks was cited as a strong founder use case @GabbbarSingh @AravSrinivas @AravSrinivas
- But several posts warned against optimizing for “% AI-written code” or abandoning code comprehension entirely @karrisaarinen @dexhorthy.

UX matters as much as raw capability: Claude Code/Hermes/OpenClaw users repeatedly noted trust, feedback loops, memory, and interface presentation as key to perceived competence @StudioYorktown @sudoingX @cz_binance.

Benchmarks, evals, and reliability research

Cameron Wolfe posted a practical stats thread on making LLM evals more reliable: treat model scores as sample means, estimate standard error as std / sqrt(n), and report 95% confidence intervals as mean ± 1.96·SE instead of raw mean-only metrics @cwolferesearch @cwolferesearch.

New benchmark work focused on grounding and human validity:
- Opposite-Narrator Contradictions for sycophancy @LechMazur
- OfficeQA Pro: enterprise grounded reasoning remains hard, with frontier agents still <50% @kristahopsalong @DbrxMosaicAI
- SWE-bench Verified appears overstated relative to maintainer reality: maintainers would merge only about half of agent PRs that pass the grader @whitfill_parker @joel_bkr
- AuditBench introduces 56 LLMs with implanted hidden behaviors for alignment-auditing evaluation @abhayesian
- CodeClash probes long-horizon coding/planning; top models still fare poorly in sustained agentic adversarial settings @OfirPress @OfirPress

Interpretability of reasoning traces continues to be contested: one paper summary claimed 97%+ of thinking steps are decorative and CoT monitoring is unreliable @shi_weiyan.

Models, infrastructure, and training systems

Megatron Core MoE drew strong attention as an open framework for large-scale MoE training, with a claim of 1233 TFLOPS/GPU for DeepSeek-V3-685B @EthanHe_42 @eliebakouch.
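The interval recipe from Wolfe's thread is a standard normal-approximation confidence interval; it can be sketched in a few lines (the pass/fail scores and the helper name `eval_confidence_interval` are hypothetical, not from the thread):

```python
import math

def eval_confidence_interval(scores, z=1.96):
    """Mean, standard error, and confidence interval for per-example eval scores.

    Treats the benchmark score as a sample mean: SE = std / sqrt(n),
    CI = mean +/- z * SE, with z = 1.96 for ~95% coverage."""
    n = len(scores)
    mean = sum(scores) / n
    # Sample variance (n-1 denominator), since the true variance is unknown.
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)
    se = math.sqrt(var) / math.sqrt(n)
    return mean, se, (mean - z * se, mean + z * se)

# Hypothetical 0/1 pass-fail outcomes for 10 eval examples.
scores = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]
mean, se, (lo, hi) = eval_confidence_interval(scores)
print(f"score={mean:.2f} +/- {1.96 * se:.2f}  (95% CI {lo:.2f}-{hi:.2f})")
```

With only 10 examples the interval spans roughly 0.40 to 1.00, which is exactly the thread's point: a raw mean of 0.70 badly overstates how much two models' scores can be distinguished at small n.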
Commentary suggested DeepSeek-style MoE training efficiency is becoming commoditized @teortaxesTex.

Gemini Embedding 2 launched as Google’s first fully multimodal embedding model:
- single embedding space for text, images, video, audio, docs
- 8,192-token text inputs
- 100+ languages
- output dims 3072 / 1536 / 768 via MRL
- up to 6 images, 120s video, 6-page PDFs per request @OfficialLoganK @_philschmid @googleaidevs

Hugging Face Storage Buckets launched as S3-like mutable storage built on Xet deduplication, starting at $8/TB/month, positioned for checkpoints, logs, traces, eval outputs, and agent artifacts @victormustar @huggingface @Wauplin.

Other notable model/system releases:
- RWKV-7 G1e in 13B/7B/3B/1B sizes @BlinkDL_AI
- Hume TADA open-source TTS model: zero content hallucinations across 1,000+ test samples, 5x faster than comparable LLM-TTS, and 2,048 tokens ≈ 700s of audio @hume_ai
- Phi-4-reasoning-vision-15B highlighted as a compact open multimodal model @dl_weekly
- Baseten/Harvard prefix-caching collaboration for inference efficiency @chutes_ai

Autonomous research, AlphaGo lineage, and recursive improvement

The strongest meta-theme outside AMI was automated ML research:
- Karpathy’s “autoresearch” concept (overnight experiment loops with code edits, short training runs, and metric-based keep/discard logic) was widely discussed @NerdyRodent @_philschmid
- Yuchen Jin ran a Claude-driven “chief scientist” loop for 11+ hours, 568 experiments, on 8 GPUs, observing a progression from broad exploration to focused refinement to heavy validation @Yuchenj_UW
- Karpathy hinted at AgentHub, “GitHub for agents,” as the next layer for multi-agent research collaboration @karpathy @Yuchenj_UW

AlphaGo’s 10-year anniversary triggered many reflections:
- Demis Hassabis argued AlphaGo’s search-and-planning ideas remain central to AGI and science @demishassabis
- Google/DeepMind linked AlphaGo to AlphaEvolve and broader compute/science optimization @Google @GoogleDeepMind
- Noam Brown-style framing that current reasoning models follow the AlphaGo recipe: imitation, inference-time search, then RL @polynoamial

Recursive self-improvement discourse remained active:
- Schmidhuber resurfaced his long-running meta-learning/RSI work @SchmidhuberAI
- Commentary on unsupervised RLVR suggested naive recursive improvement currently hits ceilings @teortaxesTex

Capability milestones, applications, and deployment

One of the most striking capability claims: a possible AI-assisted resolution of a FrontierMath open problem, first from users claiming GPT-5.4 Pro solved it and later from observers noting this could be the first FrontierMath open problem solved by AI, if validated @spicey_lemonade @kevinweil @GregHBurnham @AcerFur.

Google reported a prospective clinical study of AMIE in urgent care workflows: blinded evaluation found similar differential-diagnosis and management-plan quality overall versus PCPs, but PCPs outperformed on practicality and cost effectiveness (p=0.003, p=0.004) @iScienceLuvr.

Google Sheets with Gemini reached 70.48% on SpreadsheetBench, described as near human-expert ability @GoogleAI.

Google Workspace/Gemini rollout expanded across Docs, Sheets, Slides, and Drive, with claims of Sheets tasks 9x faster, AI-generated slide layouts, and Drive-level cross-document answers @Google @sundarpichai.

Microsoft reported health as the #1 topic for Copilot mobile users in 2025, based on analysis of 500k+ conversations @mustafasuleyman.

Sharon Zhou claimed superhuman performance on AI kernel optimization in production settings, suggesting automatic GPU-porting/optimization may soon be practical @realSharonZhou.

AI Reddit Recap

/r/LocalLlama + /r/localLLM Recap (Read more)

Related Articles
[AINews] OpenAI closes $110B raise from Amazon, NVIDIA, SoftBank in largest startup fundraise in history @ $840B post-money
Swyx · reference · 75% similar
[AINews] AI Engineer will be the LAST job
Swyx · explanation · 71% similar
[AINews] WTF Happened in December 2025?
Swyx · explanation · 70% similar
Originally published at https://www.latent.space/p/ainews-yann-lecuns-ami-labs-launches.