Here's your daily roundup of the most relevant AI and ML news for April 13, 2026. Today's digest includes 1 security-focused story and 7 research developments. Click through to read the full articles from our curated sources.
Security & Safety
1. OpenAI GPT worst AI GPT/model vs. Claude/MinMax
Article URL: https://AlitaGPT.com | Comments URL: https://news.ycombinator.com/item?id=47751162 | Points: 2 | Comments: 5
Source: Hacker News - ML Security | 1 hour ago
Research & Papers
2. XFED: Non-Collusive Model Poisoning Attack Against Byzantine-Robust Federated Classifiers
arXiv:2604.09489v1 Announce Type: cross Abstract: Model poisoning attacks pose a significant security threat to Federated Learning (FL). Most existing model poisoning attacks rely on collusion, requiring adversarial clients to coordinate by exchanging local benign models and synchronizing...
Source: arXiv - Machine Learning | 10 hours ago
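To make the threat model concrete, here is a minimal sketch of non-collusive model poisoning against a coordinate-wise-median (Byzantine-robust) aggregator. This is a generic illustration under our own assumptions, not XFED's attack; every function, name, and constant below is invented for the example.

```python
import numpy as np

def local_update(global_model, grad, lr=0.1):
    # One simplified local training step (the gradient is supplied
    # directly to keep the sketch short).
    return global_model - lr * grad

def poisoned_update(global_model, benign_update, scale=-5.0):
    # A single non-colluding attacker flips and amplifies a benign
    # update direction it can estimate locally; no coordination with
    # other clients is needed.
    return global_model + scale * (benign_update - global_model)

def coordinate_median(updates):
    # Coordinate-wise median, a common Byzantine-robust aggregator.
    return np.median(np.stack(updates), axis=0)

rng = np.random.default_rng(0)
w = np.zeros(4)
benign = [local_update(w, rng.normal(size=4)) for _ in range(9)]
attacker = poisoned_update(w, benign[0])
print(coordinate_median(benign + [attacker]))
```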
3. Mosaic: Multimodal Jailbreak against Closed-Source VLMs via Multi-View Ensemble Optimization
arXiv:2604.09253v1 Announce Type: cross Abstract: Vision-Language Models (VLMs) are powerful but remain vulnerable to multimodal jailbreak attacks. Existing attacks mainly rely on either explicit visual prompt attacks or gradient-based adversarial optimization. While the former is easier to detect...
Source: arXiv - AI | 10 hours ago
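For background, ensemble optimization in this setting typically builds on the standard transfer-attack loop: average gradients from several surrogate models and project back onto an epsilon-ball around the clean image. The sketch below shows only that skeleton; the surrogate losses and parameters are toys of our own making, not Mosaic's multi-view objective.

```python
import numpy as np

def ensemble_adv_attack(x0, grad_fns, steps=10, eps=8/255, alpha=1/255):
    # Average gradients from several surrogate models, take a signed
    # step, and project back into an eps-ball around the clean input.
    x = x0.copy()
    for _ in range(steps):
        g = np.mean([f(x) for f in grad_fns], axis=0)
        x = np.clip(x + alpha * np.sign(g), x0 - eps, x0 + eps)
        x = np.clip(x, 0.0, 1.0)  # keep pixels in a valid range
    return x

# Toy surrogates: gradients of simple quadratic "losses".
targets = [np.full(8, t) for t in (0.2, 0.5, 0.8)]
grad_fns = [lambda x, t=t: 2 * (x - t) for t in targets]
x_adv = ensemble_adv_attack(np.full(8, 0.4), grad_fns)
print(np.max(np.abs(x_adv - 0.4)))  # perturbation stays within eps
```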
4. Kill-Chain Canaries: Stage-Level Tracking of Prompt Injection Across Attack Surfaces and Model Safety Tiers
arXiv:2603.28013v3 Announce Type: replace-cross Abstract: Multi-agent LLM systems are entering production -- processing documents, managing workflows, acting on behalf of users -- yet their resilience to prompt injection is still evaluated with a single binary: did the attack succeed? This leaves...
Source: arXiv - Machine Learning | 10 hours ago
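The canary idea generalizes easily: plant a unique token at each pipeline stage, then check which tokens survive into the model's final output. A minimal sketch, assuming a three-stage pipeline; the stage names and token format are ours, not the paper's instrumentation.

```python
import secrets

STAGES = ["retrieval", "tool_output", "summarization"]

def plant_canaries():
    # One unique token per pipeline stage: if a token surfaces in the
    # final output, injected instructions propagated at least that far.
    return {s: f"CANARY-{s}-{secrets.token_hex(4)}" for s in STAGES}

def stages_compromised(canaries, model_output):
    # Report every stage whose canary leaked into the output.
    return [s for s, tok in canaries.items() if tok in model_output]

c = plant_canaries()
fake_output = f"Summary of the document... {c['retrieval']}"
print(stages_compromised(c, fake_output))  # ['retrieval']
```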
5. Adversarial Evasion Attacks on Computer Vision using SHAP Values
arXiv:2601.10587v3 Announce Type: replace-cross Abstract: The paper introduces a white-box attack on computer vision models using SHAP values. It demonstrates how adversarial evasion attacks can compromise the performance of deep learning models by reducing output confidence or inducing misclassification...
Source: arXiv - AI | 10 hours ago
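The general recipe behind attribution-guided evasion is easy to state: rank features by absolute SHAP value and perturb only the most influential ones against the sign of their contribution. A hedged sketch follows; the `attributions` array stands in for real SHAP values, and the paper's white-box procedure may differ in detail.

```python
import numpy as np

def attribution_guided_evasion(x, attributions, k=10, eps=0.05):
    # Perturb only the k features with the largest absolute
    # attribution, pushing each against the sign of its contribution
    # to the predicted class.
    x_adv = x.copy()
    idx = np.argsort(-np.abs(attributions))[:k]
    x_adv[idx] -= eps * np.sign(attributions[idx])
    return np.clip(x_adv, 0.0, 1.0)

rng = np.random.default_rng(1)
x = rng.uniform(size=32)
attributions = rng.normal(size=32)  # stand-in for real SHAP values
print(np.count_nonzero(attribution_guided_evasion(x, attributions) != x))
```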
6. Weak Adversarial Neural Pushforward Method for the Wigner Transport Equation
arXiv:2604.08763v1 Announce Type: cross Abstract: We extend the Weak Adversarial Neural Pushforward Method to the Wigner transport equation governing the phase-space dynamics of quantum systems. The central contribution is a structural observation: integrating the nonlocal pseudo-differential...
Source: arXiv - Machine Learning | 10 hours ago
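For readers unfamiliar with weak adversarial methods: a solution network minimizes, and an adversarial test-function network maximizes, a quadrature estimate of |⟨R[u], φ⟩|² / ‖φ‖². The sketch below only evaluates that objective on precomputed values; it is a generic WAN-style illustration and makes no attempt at the Wigner-specific pseudo-differential term.

```python
import numpy as np

def weak_adversarial_loss(residual_vals, test_vals, quad_weights):
    # Quadrature estimate of |<R[u], phi>|^2 / ||phi||^2, the
    # minimax objective of weak adversarial PDE solvers.
    inner = np.sum(quad_weights * residual_vals * test_vals)
    norm_sq = np.sum(quad_weights * test_vals**2)
    return inner**2 / (norm_sq + 1e-12)

# Toy check: a residual orthogonal to the test function gives ~0 loss.
xs = np.linspace(0, 2 * np.pi, 200)
w = np.full_like(xs, xs[1] - xs[0])
print(weak_adversarial_loss(np.sin(xs), np.cos(xs), w))
```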
7. GRM: Utility-Aware Jailbreak Attacks on Audio LLMs via Gradient-Ratio Masking
arXiv:2604.09222v1 Announce Type: cross Abstract: Audio large language models (ALLMs) enable rich speech-text interaction, but they also introduce jailbreak vulnerabilities in the audio modality. Existing audio jailbreak methods mainly optimize jailbreak success while overlooking utility preservation...
Source: arXiv - AI | 10 hours ago
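Read literally, "gradient-ratio masking" suggests keeping only the coordinates where the jailbreak gradient outweighs the utility gradient. The sketch below implements that loose reading; the threshold `tau` and the criterion itself are our assumptions, not GRM's published rule.

```python
import numpy as np

def gradient_ratio_mask(g_attack, g_utility, tau=1.0):
    # Zero out perturbation coordinates where the utility (e.g.
    # transcription-quality) gradient dominates, so the adversarial
    # audio degrades usability less.
    ratio = np.abs(g_attack) / (np.abs(g_utility) + 1e-12)
    return np.where(ratio > tau, g_attack, 0.0)

g_atk = np.array([0.9, 0.1, -0.7, 0.02])
g_util = np.array([0.1, 0.5, 0.1, 0.5])
print(gradient_ratio_mask(g_atk, g_util))  # [ 0.9  0.  -0.7  0. ]
```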
8. Thermally Activated Dual-Modal Adversarial Clothing against AI Surveillance Systems
arXiv:2511.09829v3 Announce Type: replace Abstract: Adversarial patches have emerged as a popular privacy-preserving approach for resisting AI-driven surveillance systems. However, their conspicuous appearance makes them difficult to deploy in real-world scenarios. In this paper, we propose...
Source: arXiv - AI | 10 hours ago
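Physical adversarial patches and clothing are usually optimized with Expectation-over-Transformation (EOT): average the detection-score gradient over random physical transforms so the pattern survives real-world capture. Below is a generic EOT step, not the paper's dual-modal method; the `grad_wrt_patch` callback is hypothetical and assumed to handle the chain rule through each transform.

```python
import numpy as np

def eot_patch_step(patch, transforms, grad_wrt_patch, alpha=0.01):
    # grad_wrt_patch(patch, t) is assumed to return the gradient of
    # the detector's score w.r.t. the patch pixels with transform t
    # applied. Averaging over transforms and descending suppresses
    # detection across viewing conditions.
    g = np.mean([grad_wrt_patch(patch, t) for t in transforms], axis=0)
    return np.clip(patch - alpha * np.sign(g), 0.0, 1.0)

# Toy usage with placeholder "transforms" and a quadratic score.
ts = [None, None]
g_fn = lambda p, t: 2 * (p - 0.5)
print(eot_patch_step(np.full(4, 0.8), ts, g_fn))
```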
About This Digest
This digest is automatically curated from leading AI and tech news sources. Stories are scored and ranked by relevance to model security, supply-chain safety, and the broader AI landscape.
Want to see how your favorite models score on security? Check our model dashboard for trust scores on the top 500 HuggingFace models.