Police Face Recognition Doubles. EU Bans AI Images. Suno Copyright Loophole. Gemini Steers Users to Help.
Show notes
The AI news for April 8th, 2026

--- This episode is sponsored by ---
Rocket Routine GmbH
Find out more about today's sponsor Rocket Routine at
Dragnet 2.0: Sharp increase in police facial recognition
Source: https://www.heise.de/news/Rasterfahndung-2-0-Starke-Zunahme-der-polizeilichen-Gesichtserkennung-11247865.html?wt_mc=rss.red.ho.themen.k%C3%BCnstliche+intelligenz.beitrag.beitrag
Why did we choose this article?
Significantly more police facial recognition means higher privacy and misidentification risk for everyday people; it affects how citizens, activists, and organizations are surveilled and may require policy or compliance responses.
Authenticity offensive: EU bodies ban AI images from their communications.
Source: https://www.heise.de/news/Authentizitaetsoffensive-EU-Gremien-verbannen-KI-Bilder-aus-ihrer-Kommunikation-11247907.html?wt_mc=rss.red.ho.themen.k%C3%BCnstliche+intelligenz.beitrag.beitrag
Why did we choose this article?
EU institutions banning AI-generated images in official communications changes how public bodies signal trustworthiness and sets a policy precedent that could affect political messaging, media use, and public expectations.
Suno: How easily copyright restrictions can be bypassed in AI music.
Source: https://www.heise.de/news/Suno-So-leicht-lassen-sich-Copyright-Sperren-bei-KI-Musik-umgehen-11246792.html?wt_mc=rss.red.ho.themen.k%C3%BCnstliche+intelligenz.beitrag.beitrag
Why did we choose this article?
If AI-music tools can be made to reproduce copyrighted songs despite protections, that directly affects musicians, rights holders, and consumers (who may encounter infringing content) and could lead to legal and licensing consequences.
Gemini is making it faster for distressed users to reach mental health resources.
Source: https://www.theverge.com/ai-artificial-intelligence/907842/google-gemini-mental-health-interface-update
Why did we choose this article?
A concrete change to how an AI assistant routes users in crisis affects user safety, emergency response behavior, and the liability and trust placed in AI products — relevant for anyone who uses or oversees AI chat tools.
Do you have any questions, comments, or suggestions for improvement? We welcome your feedback at podcast@pickert.de.