Germany Declares AI Ubiquitous. NYU Professor Uses AI Exams. Amodei on Exponential Progress. Grok Sparks International Probes.
Show notes
The AI news for January 5th, 2026
--- This episode is sponsored by ---
Pickert GmbH
Find out more about today's sponsor Pickert at pickert.de.
---
Would you like to create your own AI-generated and 100% automated podcast on your chosen topic? --> Reach out to us, and we’ll make it happen.
Here are the details of the day's selected top stories:
Missing Link: Invisible Revolution — how the federal government floods the administration with AI.
Source: https://www.heise.de/hintergrund/Missing-Link-Unsichtbare-Revolution-wie-der-Bund-die-Verwaltung-mit-KI-flutet-11127439.html?wt_mc=rss.red.ho.themen.k%C3%BCnstliche+intelligenz.beitrag.beitrag
Why did we choose this article?
A high-level look at a national-scale shift: Germany moving from pilots to broad AI rollout in public administration. Useful for listeners tracking how governments operationalize AI — implications for procurement, interoperability, transparency and citizen services.
Against AI cheating: NYU professor replaces written tests with oral AI examinations.
Source: https://the-decoder.de/gegen-ki-schummelei-nyu-professor-ersetzt-schriftliche-tests-durch-muendliche-ki-pruefungen/
Why did we choose this article?
A practical, low-cost experiment showing how educators can redesign assessment to account for AI. It’s directly applicable: oral AI-assisted exams as a tool to detect misuse, reveal learning gaps, and improve teaching — a useful model for schools and training programs.
Anthropic co-founder on AI progress: The exponential curve holds until it doesn't.
Source: https://the-decoder.de/anthropic-mitgruenderin-ueber-ki-fortschritt-die-exponentialkurve-haelt-an-bis-sie-es-nicht-mehr-tut/
Why did we choose this article?
A strategic take from an Anthropic co-founder on the pace and limits of AI progress. Frames technical gains against economic and human constraints — valuable for listeners judging long-term investment, risk, and where human skills remain decisive.
French and Malaysian authorities are investigating Grok for generating sexualized deepfakes.
Source: https://techcrunch.com/2026/01/04/french-and-malaysian-authorities-are-investigating-grok-for-generating-sexualized-deepfakes/
Why did we choose this article?
A concrete example of cross-border regulatory action and safety risks: large models producing harmful sexualized deepfakes. Important for listeners following content moderation, legal liability, and why governance and model guardrails matter now.
Do you have any questions, comments, or suggestions for improvement? We welcome your feedback at podcast@pickert.de.