GPT-5.2 Codex drops, Stanford pushes AGI coordination thesis, Mistral ships OCR 3, OpenAI eyes $840B, DeepMind teams with OpenAI, plus new taxonomy on LLM harms.
The LLM harms taxonomy section captures something underappreciated: risk surfaces at every lifecycle stage, not just at deployment. Grouping harms by temporal phase (pre-release, output generation, misuse, societal, embedded systems) makes the problem tractable in a way that monolithic "AI safety" discussions don't. The observation that errors in healthcare or finance tools can "quietly shape real decisions" is particularly sharp because it highlights the invisibility problem: users often can't tell when an LLM-assisted conclusion has drifted from grounded reasoning. I've noticed similar issues in code review tools, where subtle hallucinations get merged because they look plausible enough.