GPT-5.2 Rewrites Knowledge Work. Disney Licenses Characters for Sora. Google Opens Deep Research. FACTS Reveals Model Weaknesses.

Show notes

The AI news for December 12th, 2025

--- This episode is sponsored by ---

Find out more about today's sponsor Pickert at pickert.de.

---

Would you like to create your own AI-generated and 100% automated podcast on your chosen topic? --> Reach out to us, and we’ll make it happen.

Here are the details of the day's selected top stories:

ChatGPT-5.2 is here: Why this update changes everything.
Source: https://www.all-ai.de/news/top-news24/chatgpt-5-2-release
Why did we choose this article?
Major model release with clear practical implications: significant gains in accuracy, agentic tool use, and long-context handling, plus fewer hallucinations, make this immediately relevant for teams automating knowledge work and for developers integrating smarter agents.

OpenAI partnership: Create your own Disney films with Sora starting in 2026.
Source: https://www.all-ai.de/news/top-news24/openai-disney
Why did we choose this article?
High-impact industry partnership that signals how major IP owners may choose licensing and product integration over litigation; practical for creators, platform engineers, and policy watchers planning for branded generative media.

Google introduces new deep-research agents and a new AI API.
Source: https://the-decoder.de/google-stellt-neuen-deep-research-agenten-und-neue-ki-api-vor/
Why did we choose this article?
Developer-facing update: Google's upgrade of Deep Research with Gemini 3 Pro, together with an API that better supports agentic workflows, matters for teams building autonomous tool chains and research assistants — and signals where platform-level agent support is heading.

FACTS benchmark: Even top AI models struggle with the truth.
Source: https://the-decoder.de/facts-benchmark-auch-top-ki-modelle-kaempfen-mit-der-wahrheit/
Why did we choose this article?
Critical reliability research: DeepMind's FACTS benchmark highlights persistent truthfulness gaps even in leading models — essential context for teams relying on LLM outputs and for designing verification layers and human-in-the-loop checks.

Do you have any questions, comments, or suggestions for improvement? We welcome your feedback at podcast@pickert.de.
