Yann LeCun

Chief AI Scientist

Meta

Tags: research · meta · pioneer · world-models

About Yann LeCun

Yann LeCun is the Chief AI Scientist at Meta and a Turing Award winner (2018, with Geoffrey Hinton and Yoshua Bengio). He pioneered convolutional neural networks in the 1980s-90s and is now one of AI's most vocal skeptics of LLM-only approaches to AGI.

Career Highlights

  • Meta (2013-present): Chief AI Scientist, VP of AI Research
  • AMI (2025): Co-founder of Advanced Machine Intelligence startup
  • Turing Award (2018): With Hinton and Bengio for deep learning
  • NYU (2003-present): Silver Professor of Computer Science
  • Bell Labs / AT&T Labs (1988-2002): Developed convolutional neural networks (LeNet) for handwritten character recognition

Notable Positions

On LLMs Being Insufficient

LeCun's contrarian thesis:

"You cannot get to human-level AI through text alone. Training an LLM requires 30 trillion tokens - effectively all internet text. That same 10^14 bytes represents just 15,000 hours of video - 30 minutes of YouTube uploads."

On World Models (JEPA)

His alternative approach:

"JEPA predicts in abstract representation space, not pixel space - eliminates unpredictable details while preserving structure for planning."

On Open Research

"You cannot call it research unless you publish - internal hype creates delusion. Scientists need external validation."

Key Quotes

  • "LLMs cannot get us to human-level AI."
  • "World models trained on video, not text."
  • "You cannot call it research unless you publish."

Related Topics

  • World Models - LeCun's path to AGI
  • JEPA - Joint Embedding Predictive Architecture

Video Appearances

Data efficiency of vision vs text

You cannot get to human-level AI through text alone. Training an LLM requires 30 trillion tokens - all internet text. That same data is just 15,000 hours of video - 30 minutes of YouTube uploads.

at 00:10:00

Open research philosophy

AMI will publish openly because you cannot call it research unless you publish - internal hype creates delusion.

at 00:20:00

Joint Embedding Predictive Architecture

JEPA predicts in abstract representation space, not pixel space - eliminates unpredictable details while preserving structure.

at 00:15:00

LLM limitations and world models

Dr. Yann LeCun argues LLMs are 'an off-ramp on the highway of AI studies' - impressive but ultimately limited as token-to-token generators. He left Meta to create AMI, a startup focused on world models.

at 01:55:00

Related People