Here's your daily roundup of the most relevant AI and ML news for May 13, 2026. We're also covering 8 research developments. Click through to read the full articles from our curated sources.
Research & Papers
1. Controlled Steering-Based State Preparation for Adversarial-Robust Quantum Machine Learning
arXiv:2605.10954v1 Announce Type: cross Abstract: Quantum machine learning (QML) provides a promising framework for leveraging quantum-mechanical effects in learning tasks. However, its vulnerability to adversarial perturbations remains a major challenge for practical deployment. In QML systems,...
Source: arXiv - AI | 10 hours ago
2. Adversarial SQL Injection Generation with LLM-Based Architectures
arXiv:2605.11188v1 Announce Type: cross Abstract: SQL injection (SQLi) attacks are still one of the serious attacks ranked in the Open Worldwide Application Security Project (OWASP) Top 10 threats. Today, with advances in Artificial Intelligence (AI), especially in Large Language Models (LLMs), ...
Source: arXiv - AI | 10 hours ago
3. Robustness Certificates for Neural Networks against Adversarial Attacks
arXiv:2512.20865v2 Announce Type: replace Abstract: The increasing use of machine learning in safety-critical domains amplifies the risk of adversarial threats, especially data poisoning attacks that corrupt training data to degrade performance or induce unsafe behavior. Most existing defenses l...
Source: arXiv - Machine Learning | 10 hours ago
4. IPI-proxy: An Intercepting Proxy for Red-Teaming Web-Browsing AI Agents Against Indirect Prompt Injection
arXiv:2605.11868v1 Announce Type: cross Abstract: Web-browsing AI agents are increasingly deployed in enterprise settings under strict whitelists of approved domains, yet adversaries can still influence them by embedding hidden instructions in the HTML pages those domains serve. Existing red-tea...
Source: arXiv - AI | 10 hours ago
5. AESOP: Adversarial Execution-path Selection to Overload Deep Learning Pipelines
arXiv:2605.10987v1 Announce Type: new Abstract: Modern machine learning deployments increasingly compose specialized models into dynamic inference pipelines, where upstream components produce intermediate predictions that determine the workload and inputs of downstream components. The cost of pr...
Source: arXiv - Machine Learning | 10 hours ago
6. Seir\^enes: Adversarial Self-Play with Evolving Distractions for LLM Reasoning
arXiv:2605.11636v1 Announce Type: new Abstract: We present Seir\^enes, a self-play RL framework that transforms contextual interference from a failure mode of LLM reasoning into an internal training signal for co-evolving more resilient reasoners. While RL with verifiable rewards has significant...
Source: arXiv - AI | 10 hours ago
7. Persona-Conditioned Adversarial Prompting: Multi-Identity Red-Teaming for Adversarial Discovery and Mitigation
arXiv:2605.11730v1 Announce Type: new Abstract: Automated red-teaming for LLMs often discovers narrow attack slices, missing diverse real-world threats, and yielding insufficient data for safety fine-tuning. We introduce Persona-Conditioned Adversarial Prompting (PCAP), which conditions adversar...
Source: arXiv - Machine Learning | 10 hours ago
8. Internalizing Curriculum Judgment for LLM Reinforcement Fine-Tuning
arXiv:2605.11235v1 Announce Type: new Abstract: In LLM Reinforcement Fine-Tuning (RFT), curriculum learning drives both efficiency and performance. Yet, current methods externalize curriculum judgment via handcrafted heuristics or auxiliary models, risking misalignment with the policy's training...
Source: arXiv - Machine Learning | 10 hours ago
About This Digest
This digest is automatically curated from leading AI and tech news sources, filtered for relevance to AI security and the ML ecosystem. Stories are scored and ranked based on their relevance to model security, supply chain safety, and the broader AI landscape.
Want to see how your favorite models score on security? Check our model dashboard for trust scores on the top 500 HuggingFace models.