Researchers use statistical physics and "toy models" to explain how neural networks avoid overfitting and stabilize learning in high-dimensional spaces.
Tech Xplore on MSN
A simple physics-inspired model sheds light on how AI learns
Artificial intelligence systems based on neural networks—such as ChatGPT, Claude, DeepSeek or Gemini—are extraordinarily powerful, yet their internal workings remain largely a "black box." To better understand these systems ...
Physicists at Harvard University have developed a simplified, physics-inspired mathematical model to better understand how neural networks learn, potentially explaining why large AI systems stabilize learning as they scale.
Physics meets AI: Harvard scientists applied renormalization theory to a simplified model, revealing how large neural networks stabilize learning in high-dimensional spaces. Whether this resolves the scaling mystery remains an open question.
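The snippets above do not include the Harvard model itself, so the following is only a loose, hypothetical sketch of one statistical-physics idea the coverage alludes to: in high dimensions, learning dynamics tend to "self-average," meaning training curves from different random data draws concentrate around a single deterministic trajectory. The toy setup below (gradient descent on a random linear teacher–student regression; all function names, sizes, and hyperparameters are illustrative choices, not from the reported work) measures how the spread of the training loss across random seeds shrinks as the dimension grows.

```python
import numpy as np

def gd_loss(d, n_ratio=2, steps=50, lr=0.2, seed=0):
    """Train a linear student on random teacher data with plain
    gradient descent; return the final training MSE."""
    rng = np.random.default_rng(seed)
    n = n_ratio * d                       # samples scale with dimension
    X = rng.standard_normal((n, d))       # random Gaussian inputs
    w_star = rng.standard_normal(d)       # hidden "teacher" weights
    y = X @ w_star / np.sqrt(d)           # targets normalized to O(1)
    w = np.zeros(d)                       # student starts at zero
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / n      # gradient of 0.5*MSE
        w -= lr * grad
    return np.mean((X @ w - y) ** 2)

def rel_spread(d, n_seeds=10):
    """Relative spread (std/mean) of the final loss across seeds —
    a crude proxy for how strongly the dynamics self-average."""
    losses = np.array([gd_loss(d, seed=s) for s in range(n_seeds)])
    return losses.std() / losses.mean()

# In this toy model, the seed-to-seed spread shrinks as d grows:
print(f"d=20:  relative spread {rel_spread(20):.3f}")
print(f"d=400: relative spread {rel_spread(400):.3f}")
```

In this sketch the concentration comes from random-matrix effects (the empirical data covariance spectrum becomes deterministic as the dimension grows), which is one simple instance of the high-dimensional averaging that physics-style analyses of neural networks exploit; it is not a reconstruction of the renormalization-theory approach described in the article.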