🗞️ China's Seedance 2.0 Is So Impressive That It’s Scaring Hollywood
Seedance 2.0’s AI video model shaking Hollywood and Disney’s cease-and-desist, a cheaper agentic evolution method, Yang Liping x Qwen, OpenAI vs DeepSeek, and GPT-5.2’s physics result.
Read time: 6 min
📚 Browse past editions here.
(I publish this newsletter daily. Noise-free, actionable, applied-AI developments only.)
⚡In today’s Edition (15-Feb-2026):
🗞️ China’s Seedance 2.0 Is So Impressive That It’s Scaring Hollywood
🗞️ New paper gives a solid way to make agentic evolution cheaper without changing the search loop
🗞️ A stunning new Yang Liping x Qwen app collaboration is raising the bar for what AI can contribute to art.
🗞️ OpenAI claims DeepSeek is stealing AI capabilities ahead of its next model launch
🗞️ GPT-5.2 landed a new theoretical physics result.
🗞️ China’s Seedance 2.0 Is So Impressive That It’s Scaring Hollywood: Disney sent ByteDance a cease-and-desist letter
Hollywood isn’t happy about the new Seedance 2.0 video generator
After ByteDance released Seedance 2.0, a viral 15-second clip of AI versions of Tom Cruise and Brad Pitt fighting, made from a 2-line prompt, pushed Hollywood’s fear from “maybe someday” to “this is here now.”
A single user can now generate short, photoreal-looking action footage fast enough to spread at internet speed.
Older video generators often gave away the trick with jittery motion, inconsistent faces, and broken editing logic, so the output rarely looked like a real film shot.
Seedance 2.0 claims better motion stability plus joint audio and video generation, and it also lets users guide shots using reference images, audio, or video, which reduces randomness and makes repeats match.
That combination makes it easier to produce a coherent “fake moment” with celebrity likeness with a tiny prompt.
The Motion Picture Association said ByteDance is using copyrighted works without permission at massive scale. Disney also sent a cease-and-desist letter.
For studios, the hard problem is proving what data trained the model and getting enforceable licensing terms.
Disney’s scathing letter, sent this week, accuses the TikTok parent of running a “virtual smash-and-grab” on the company’s intellectual property with its new generative AI video maker, Seedance.
🗞️ A stunning new Yang Liping x Qwen app collaboration is raising the bar for what AI can contribute to art.
Usually, AI in art feels a bit gimmicky, but here it’s used more for inspiration.
Yang Liping (who is a legend for a reason) used the tech to pull fresh inspiration into traditional Chinese dance.
It doesn’t feel like the AI is replacing the dancer; it feels like it’s helping her evolve the aesthetic into something modern but still deeply rooted in heritage.
It’s one of those rare moments where tech actually makes the art feel more human, not less.
🗞️ New paper gives a solid way to make agentic evolution cheaper without changing the search loop
Evolutionary AI agents are like a code tinkerer that keeps trying small edits until the code passes tests. AdaptEvolve makes that tinkerer cheaper by not using the biggest AI model for every single try.
It keeps about the same success rate while cutting total model compute by 37.9%. The waste in older setups is simple: even easy edits get sent to a huge 32B model, which costs a lot per call.
AdaptEvolve instead starts each step with a smaller 4B model, and only upgrades to 32B when the 4B output looks unreliable. To decide that, it watches how “unsure” the 4B model is while it is generating, using 4 quick uncertainty scores from its token probabilities.
Those scores include overall certainty, the worst local dip in certainty, how stable the ending is, and how much of the output is stuck in very low certainty. A small decision tree learns rules from just 50 warm-up examples, like “if certainty drops this way, use 32B,” and it keeps updating those rules as the tasks shift.
On LiveCodeBench, it gets 73.6% accuracy versus 75.2% for always using 32B, while using 34.4% less compute. On MBPP, it finds that about 85% of problems can stay on 4B, cutting cost by 41.5% while keeping 91.3% accuracy versus 94.0% for always 32B.
The big deal is that it turns “which model to use” into a step-by-step choice, so the system pays for 32B only when it really needs it.
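The routing idea above can be sketched in a few lines. This is an illustrative reconstruction, not the paper’s code: the function names (`uncertainty_features`, `should_escalate`), the window size, and the fixed thresholds standing in for the learned decision tree are all my assumptions.

```python
def uncertainty_features(token_probs, window=5, low_thresh=0.3):
    """Four cheap uncertainty scores from the small model's per-token probabilities.

    Names and exact formulas are illustrative; the paper uses 4 analogous signals.
    """
    n = len(token_probs)
    overall = sum(token_probs) / n  # overall certainty
    # worst local dip: lowest average certainty over any short window
    worst_dip = min(
        sum(token_probs[i:i + window]) / len(token_probs[i:i + window])
        for i in range(n)
    )
    tail = token_probs[-window:]
    ending = sum(tail) / len(tail)  # how stable the ending is
    low_frac = sum(p < low_thresh for p in token_probs) / n  # stuck in low certainty
    return overall, worst_dip, ending, low_frac

def should_escalate(feats, thresholds=(0.55, 0.35, 0.5, 0.25)):
    """Threshold rules standing in for the small decision tree learned
    from ~50 warm-up examples. Returns True -> re-run the step on the 32B model."""
    overall, worst_dip, ending, low_frac = feats
    t_overall, t_dip, t_end, t_low = thresholds
    return (overall < t_overall or worst_dip < t_dip
            or ending < t_end or low_frac > t_low)
```

A confident 4B generation (uniformly high token probabilities) stays on the small model; a generation with a long low-certainty stretch trips one of the rules and gets escalated, so the 32B cost is paid only on the hard steps.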
🗞️ OpenAI claims DeepSeek is stealing AI capabilities ahead of its next model launch
OpenAI told the U.S. House Select Committee on China that it believes China’s DeepSeek has been training its own models by collecting outputs from U.S. frontier models and using them as teacher data.
The memo argues this is “free-riding”, because the expensive part is getting a top model to reliably produce high-quality answers in the first place.
The method is called “distillation”, where a stronger model’s answers are treated like labels, and a smaller or newer model is trained to imitate them across many prompts.
That can work even without the teacher’s original training data, because the student learns patterns from the teacher’s outputs, including style, reasoning shortcuts, and task-specific behavior.
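Mechanically, the training signal in distillation is just the teacher’s output treated as the target. A minimal sketch of the soft-label version below is illustrative (the function names and temperature value are my choices; real pipelines use a full LM training stack): the student is penalized for diverging from the teacher’s probability distribution over next tokens.

```python
import math

def softmax(logits, temp=1.0):
    """Convert raw logits to a probability distribution, softened by temperature."""
    exps = [math.exp(l / temp) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distill_loss(student_logits, teacher_logits, temp=2.0):
    """Cross-entropy between teacher and student distributions.

    Minimized when the student reproduces the teacher's distribution,
    so gradient descent on this loss pulls the student toward the teacher.
    """
    p_teacher = softmax(teacher_logits, temp)
    p_student = softmax(student_logits, temp)
    return -sum(pt * math.log(ps) for pt, ps in zip(p_teacher, p_student))
```

Because the loss only needs the teacher’s *outputs*, anyone who can query a frontier model at scale can build this training signal, which is exactly the access pattern the memo describes.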
OpenAI also claims DeepSeek-linked accounts tried to bypass access controls by routing requests through masked infrastructure, then pulling responses in automated batches for training.
🗞️ GPT-5.2 landed a new theoretical physics result.
GPT-5.2 derived a new result in theoretical physics, and OpenAI published a paper on it.
This story is about a tiny “math rule” in particle physics that people treated as always giving 0, and it turns out it can give a real answer in a special setup.
GPT-5.2 took a pile of messy hand-worked answers, spotted the repeating pattern inside them, and turned it into 1 short rule the researchers could actually prove.
That’s a wrap for today, see you all tomorrow.





