🧑🏫 ChatGPT launches study mode to encourage academic use
ChatGPT debuts study mode, Google beefs up NotebookLM video tools, Tencent unveils Hunyuan3D, Alibaba ships Wan2.2, GPT-5 rumor, Claude prompt leak.
Read time: 10 min
📚 Browse past editions here.
(I publish this newsletter daily. Noise-free, actionable, applied-AI developments only.)
⚡In today’s Edition (29-July-2025):
🧑🏫 ChatGPT launches study mode to encourage academic use
🏆 Google Enhances NotebookLM with AI Video Summaries and Studio Upgrades
📡 Tencent Hunyuan just released Hunyuan3D World Model 1.0!
🈵 China’s open-source AI engine is still running hot: Alibaba’s Wan2.2 video model just launched.
🗞️ Byte-Size Briefs:
According to a widely circulating rumor, OpenAI is dropping GPT-5 next week, with a 1M-token input window and 100K output tokens.
Somebody shared the full Claude Code system prompt on GitHub.
🧑🎓 OPINION: Six Strange Aliases, One Giant Model: The GPT‑5 Scavenger Hunt
🧑🏫 ChatGPT launches study mode to encourage academic use
OpenAI announced Study Mode for ChatGPT, a new feature that fundamentally changes how students interact with artificial intelligence by withholding direct answers in favor of Socratic questioning and step-by-step guidance.
Available to logged-in Free, Plus, Pro, and Team users, with ChatGPT Edu availability rolling out in the coming weeks.
It will guide students with questions instead of answers, aiming for deeper learning. OpenAI hopes the tutor-style flow captures a slice of the $80.5B ed‑tech market. The system even resists students’ attempts to obtain quick answers: when prompted with “just give me the answer,” Study Mode responds that “the point of this is to learn, not just to give you the answer.”
Instead of spitting out a solution, the bot fires Socratic prompts that probe what the learner already knows, offers hints, then asks for the next step. Quick toggling lets users shift between normal chat and this coaching mode, so it fits homework, exam prep, or new topics.
Behind the curtain, OpenAI injects custom system instructions written with teachers across 40 institutions. The rules control cognitive load, scaffold concepts, check retention with mini quizzes, and adjust depth from novice to advanced. OpenAI will later fuse this behavior into future models after real-world feedback.
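OpenAI hasn’t published those instructions, but the mechanism is essentially a strong system prompt. A minimal sketch of the idea with the standard OpenAI Python SDK, using an illustrative prompt of my own (not OpenAI’s actual Study Mode rules) and an arbitrary model name:

```python
# Minimal sketch of tutor-style behavior via a custom system prompt.
# The prompt text below is illustrative, NOT OpenAI's actual Study Mode instructions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TUTOR_PROMPT = (
    "You are a patient tutor. Never give the final answer outright. "
    "First ask what the student already knows, then offer a hint, "
    "then ask them to attempt the next step. After each concept, "
    "pose a short quiz question to check retention."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model works for this sketch
    messages=[
        {"role": "system", "content": TUTOR_PROMPT},
        {"role": "user", "content": "Just give me the answer to problem 3."},
    ],
)
print(response.choices[0].message.content)
```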
Early testers report big confidence boosts and finally cracking tough ideas like positional encodings. Rivals Anthropic and Google chase similar aims, showing that guided questioning is the next battleground as 1 in 3 U.S. college students lean on chatbots.
🏆 Google Enhances NotebookLM with AI Video Summaries and Studio Upgrades
📹 Video Overviews allow users to turn dense multimedia, such as raw notes, PDFs, and images, into digestible visual presentations.
The video overviews feature enables NotebookLM to produce narrated video summaries, complete with visuals and voiceovers, drawing directly from user-uploaded sources like PDFs, web pages, or notes. This isn’t just a simple text-to-video conversion; it’s an AI-orchestrated presentation that highlights key insights, timelines, and connections, making it ideal for quick briefings or educational content.
It also lets each notebook hold multiple custom outputs. NotebookLM already offered Audio Overviews, tiny podcasts that read course packs or legal briefs so users could learn hands‑free. Handy in traffic, but charts, formulas, and process diagrams still begged for visuals.
Video Overviews plugs that hole. An AI host builds short narrated slides, pulling images, diagrams, quotes, and raw numbers straight from the upload, then plays the set like a brisk explainer. Prompts let users set topics, goals, and audience depth.
The updated Studio panel adds 4 tiles and stores many outputs of each type, making multilingual audio, role‑specific video, or chapter‑based mind maps possible in one notebook. It even runs an Audio Overview while you explore other views, trimming tool‑switching friction.
📡 Tencent Hunyuan just released Hunyuan3D World Model 1.0!
Tencent just released Hunyuan3D World Model 1.0, which they’re calling the first open-source AI system that can build full 3D worlds from just a text prompt or an image. Basically, it’s like a 3D version of Sora—but one you can walk around in, edit, and even drop into a game engine.
What used to take weeks to build can now be generated in just a few minutes. It’s up on GitHub and Hugging Face, and there’s a live demo you can play with right now. But there’s a caveat: the license isn't fully open-source. It’s more of a ‘source-available’ setup, with some heavy restrictions, especially around where and how it can be used.
This is set to transform game development, VR, digital content creation and so on. Get started now👇🏻
360° immersive experiences via panoramic world proxies
Mesh export capabilities for seamless compatibility with existing computer graphics pipelines
Disentangled object representations for augmented interactivity
The core of the framework is a semantically layered 3D mesh representation that leverages panoramic images as 360° world proxies for semantic-aware world decomposition and reconstruction, enabling the generation of diverse 3D worlds.
It achieves SOTA performance in generating coherent, explorable, and interactive 3D worlds while enabling versatile applications in virtual reality, physical simulation, game development, and interactive content creation.
Tencent HunyuanWorld-1.0's generation architecture integrates panoramic proxy generation, semantic layering, and hierarchical 3D reconstruction to achieve high-quality scene-scale 360° 3D world generation, supporting both text and image inputs.
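For anyone who wants to poke at the release, a minimal download sketch with huggingface_hub; the repo id below is an assumption, so check Tencent’s official model page for the exact name:

```python
# Minimal sketch for grabbing the released weights from Hugging Face.
# The repo id is an assumption; confirm the exact name on Tencent's official page.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="tencent/HunyuanWorld-1",   # hypothetical id, verify on huggingface.co
    local_dir="./hunyuanworld-1.0",
)
print("weights downloaded to", local_dir)
```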
🈵 China’s open-source AI engine is still running hot: Alibaba’s Wan2.2 video model just launched.
Alibaba’s Tongyi Lab just dropped Wan2.2, an open-source video model built for sharp motion and film-like quality in both text-to-video and image-to-video, licensed under Apache 2.0.
You get user-controlled lighting on a single 4090-class GPU, with results visually on par with Seedance, Kling, Hailuo, and Sora in looks and motion.
Wan2.2 spreads its load across two expert networks: one sketches each frame, the other polishes fine detail.
This mixture-of-experts keeps compute steady yet lifts capacity, so scenes stay coherent even through rapid camera swings.
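A toy sketch of that routing idea in PyTorch (module names and the timestep cutoff are illustrative, not Wan2.2’s actual architecture): a high-noise “layout” expert handles early denoising steps and a low-noise “detail” expert handles the rest, so only one expert’s weights run per step.

```python
# Toy sketch of Wan2.2-style two-expert denoising (module names hypothetical).
# High-noise steps go to a layout expert, low-noise steps to a detail expert,
# so per-step compute stays roughly constant while total capacity doubles.
import torch
import torch.nn as nn

class TwoExpertDenoiser(nn.Module):
    def __init__(self, dim: int = 64, switch_t: float = 0.5):
        super().__init__()
        self.layout_expert = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
        self.detail_expert = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
        self.switch_t = switch_t  # normalized timestep where the handoff happens

    def forward(self, x: torch.Tensor, t: float) -> torch.Tensor:
        # t in [0, 1]: near 1 = mostly noise (sketch the frame), near 0 = polish detail
        expert = self.layout_expert if t > self.switch_t else self.detail_expert
        return expert(x)

denoiser = TwoExpertDenoiser()
latent = torch.randn(1, 64)
for t in (0.9, 0.6, 0.3, 0.1):          # a few denoising steps, coarse to fine
    latent = latent - 0.1 * denoiser(latent, t)
```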
The lab fed the model 66% more images and 83% more videos than Wan2.1. That extra variety helps it read text inside frames, follow tricky shots, and keep characters moving naturally.
A new Wan2.2-VAE squeezes footage by 16x16x4, so a 5B-parameter system streams 720p at 24fps on a single RTX 4090. Users can nudge exposure, color, and shot length without retraining, giving near-director-level control.
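Rough latent arithmetic makes the savings concrete, assuming the 16x16x4 ratio means 16× in each spatial dimension and 4× in time (the exact axis mapping is an assumption):

```python
# Back-of-the-envelope latent size for a 16x16x4 compression ratio
# (assumed to mean 16x height, 16x width, 4x time).
frames, height, width = 5 * 24, 720, 1280   # 5 seconds of 720p at 24 fps
lat_f, lat_h, lat_w = frames // 4, height // 16, width // 16
pixels = frames * height * width
latents = lat_f * lat_h * lat_w
print(lat_f, lat_h, lat_w)                  # 30 45 80
print(f"~{pixels / latents:.0f}x fewer positions for the diffusion model")  # ~1024x
```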
China’s open models now span speech, language, and video. Wan2.2 shows how more data plus smart routing can close the gap with closed labs while staying fully shareable.
🗞️ Byte-Size Briefs
According to a widely circulating rumor, OpenAI is dropping GPT-5 next week!
—1M token input window, 100k output tokens
—MCP support, parallel tool calls
—Dynamic short + long reasoning
—Uses Code Interpreter and other tools
Codenames are:
o3-alpha > nectarine (GPT-5) > lobster (mini) > starfish (nano)
Somebody shared the full Claude Code system prompt on GitHub. It’s super long: 144K characters / 16.9K+ words. (Note: this is Claude Code, not the Claude web UI prompt, which is published openly anyway.)
🧑🎓 OPINION: Six Strange Aliases, One Giant Model: The GPT‑5 Scavenger Hunt
🏁 Snapshot
OpenAI has been spotted field‑testing 6 anonymous models on the model-evaluation site LM Arena. Community sleuths link those codenames, Zenith, Summit, Lobster, Starfish, Nectarine, and o3‑alpha, to different slices of the forthcoming GPT‑5. Each slice shows a standout skill, from giant code dumps to slick SVG animation.
The pattern fits a Mixture of Experts blueprint, where specialized “experts” are tuned in public, then stitched together for the final release. The Verge and Windows Central both caught Microsoft wiring Copilot for GPT‑5’s arrival in August 2025, and leaked Copilot code already references a “Smart” mode that flips between quick and deep reasoning.
🚨 The mystery line‑up
LM Arena’s leaderboard suddenly features Zenith, Summit, Lobster, Starfish, Nectarine, and o3‑alpha, and they sit above nearly every public model on tough reasoning and coding tasks. A Reddit thread catalogued the names the moment they appeared, noting how Zenith consistently tops the chart while o3‑alpha wins coding head‑to‑head challenges.
Somebody pushed Summit with a playful prompt and got a fully interactive starship control panel (more than 2,300 lines of JavaScript) that ran without tweaks on the first try. He posted the raw code and a live demo, which rocketed through social feeds as proof of a new coding ceiling.
🧩 Why the pieces look like MoE specialists
It’s a simple theory: each Arena model is an “expert” trained for one domain, such as long‑form writing (Zenith) or dense code refactoring (o3‑alpha). That lines up with mixture‑of‑experts research, where a router sends each token to the few specialists that matter, slashing compute while boosting capacity. Hugging Face’s explainer and IBM’s primer both show how MoE layers keep only a couple of experts hot per request, letting a giant network act small at inference time. OpenAI can therefore expose individual experts, harvest real‑world feedback, then glue them together for the full GPT‑5 launch.
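A toy top-k router in PyTorch makes the idea concrete; the expert count, dimensions, and top-2 routing here are illustrative, not GPT‑5’s actual configuration:

```python
# Toy mixture-of-experts layer: a learned router picks the top-2 experts per token,
# so only a fraction of the network's weights run for any given input.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    def __init__(self, dim: int = 128, n_experts: int = 6, top_k: int = 2):
        super().__init__()
        self.router = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:         # x: (tokens, dim)
        weights, idx = self.router(x).topk(self.top_k, dim=-1)  # top-k experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                            # tokens routed to expert e in slot k
                if mask.any():
                    out[mask] += weights[mask, k : k + 1] * expert(x[mask])
        return out

tokens = torch.randn(10, 128)
print(TinyMoE()(tokens).shape)   # torch.Size([10, 128])
```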
👩💻 Coding power is the headline feature
Many reports say GPT‑5 is built “to crush coding tasks,” from refactoring monolithic codebases to automating browser work. Public Arena runs back that claim: Summit writes production‑grade React when asked, Zenith drafts clean TypeScript scaffolds, and o3‑alpha debugs obscure stack traces without breaking a sweat. The Information piece adds that GPT‑5 lets callers set how long the model “thinks” before answering, much like a dimmer control for reasoning depth.
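GPT‑5 is not out, so any call signature is speculation; the sketch below just shows the pattern with the reasoning_effort parameter that already exists for o-series models in the OpenAI API:

```python
# Sketch of a "reasoning dimmer": today's API already exposes reasoning_effort on
# o-series models; the rumor is that GPT-5 offers a similar per-call control.
from openai import OpenAI

client = OpenAI()

for effort in ("low", "medium", "high"):
    resp = client.chat.completions.create(
        model="o3-mini",              # existing reasoning model; GPT-5 is not released
        reasoning_effort=effort,      # how long the model "thinks" before answering
        messages=[{"role": "user", "content": "Refactor this loop into a list comprehension: ..."}],
    )
    print(effort, resp.choices[0].message.content[:80])
```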
On Microsoft’s side, insiders found Copilot’s UI now sporting a hidden “Smart” toggle that explicitly calls GPT‑5 to “think quickly or deeply,” showing Redmond is prepping its flagship assistant for the hand‑off.
🔍 Why this public test matters
Running the experts in the open hands OpenAI a mountain of messy, real‑world edge cases it cannot simulate in‑house. Every strange prompt, every unexpected coding bug, and every refusal edge case lands as fresh training data for the final checkpoints. For users, the experiment hints at a new release style: instead of a surprise monolith drop, we may see modules pop up, prove themselves, and then fuse into one generalist mega‑model. If GPT‑5 emerges with a modular backbone, future upgrades could arrive expert‑by‑expert, cutting retraining bills and letting OpenAI bolt on new skills on the fly.
🎓 Takeaway
The leaderboard stunt shows OpenAI taking a page from game studios that ship public betas: let the crowd break each part early, fix the cracks, then ship the polished whole. For builders, the message is clear.
Expect bigger context windows, richer code support, and a model that can throttle thinking depth per call. For competitors, the clock is ticking, the GPT‑5 assembly line is already humming.
That’s a wrap for today, see you all tomorrow.





