Here's your daily roundup of the most relevant AI and ML news for April 14, 2026. We're also covering 8 research developments. Click through to read the full articles from our curated sources.
Research & Papers
1. ClawGuard: A Runtime Security Framework for Tool-Augmented LLM Agents Against Indirect Prompt Injection
arXiv:2604.11790v1 Announce Type: cross Abstract: Tool-augmented Large Language Model (LLM) agents have demonstrated impressive capabilities in automating complex, multi-step real-world tasks, yet remain vulnerable to indirect prompt injection. Adversaries exploit this weakness by embedding mali...
Source: arXiv - AI | 10 hours ago
2. QShield: Securing Neural Networks Against Adversarial Attacks using Quantum Circuits
arXiv:2604.10933v1 Announce Type: cross Abstract: Deep neural networks remain highly vulnerable to adversarial perturbations, limiting their reliability in security- and safety-critical applications. To address this challenge, we introduce QShield, a modular hybrid quantum-classical neural netwo...
Source: arXiv - Machine Learning | 10 hours ago
3. CapyMOA: Efficient Machine Learning for Data Streams and Online Continual Learning in Python
arXiv:2502.07432v2 Announce Type: replace Abstract: CapyMOA is an open-source Python library for efficient machine learning on data streams and online continual learning. It provides a structured framework for real-time learning, supporting adaptive models that evolve over time. CapyMOA's archit...
Source: arXiv - Machine Learning | 10 hours ago
4. Robust Adversarial Policy Optimization Under Dynamics Uncertainty
arXiv:2604.10974v1 Announce Type: new Abstract: Reinforcement learning (RL) policies often fail under dynamics that differ from training, a gap not fully addressed by domain randomization or existing adversarial RL methods. Distributionally robust RL provides a formal remedy but still relies on ...
Source: arXiv - Machine Learning | 10 hours ago
5. Continuous Adversarial Flow Models
arXiv:2604.11521v1 Announce Type: new Abstract: We propose continuous adversarial flow models, a type of continuous-time flow model trained with an adversarial objective. Unlike flow matching, which uses a fixed mean-squared-error criterion, our approach introduces a learned discriminator to gui...
Source: arXiv - Machine Learning | 10 hours ago
6. Adversarial Robustness of Graph Transformers
arXiv:2407.11764v2 Announce Type: replace Abstract: Existing studies have shown that Message-Passing Graph Neural Networks (MPNNs) are highly susceptible to adversarial attacks. In contrast, despite the increasing importance of Graph Transformers (GTs), their robustness properties are unexplored...
Source: arXiv - Machine Learning | 10 hours ago
7. Property-Preserving Hashing for $\ell_1$-Distance Predicates: Applications to Countering Adversarial Input Attacks
arXiv:2504.16355v2 Announce Type: replace-cross Abstract: Perceptual hashing is used to detect whether an input image is similar to a reference image with a variety of security applications. Recently, they have been shown to succumb to adversarial input attacks which make small imperceptible cha...
Source: arXiv - Machine Learning | 10 hours ago
8. AdvDINO: Domain-Adversarial Self-Supervised Representation Learning for Spatial Proteomics
arXiv:2508.04955v2 Announce Type: replace-cross Abstract: Self-supervised learning (SSL) has emerged as a powerful approach for learning visual representations without manual annotations. However, the robustness of standard SSL methods to domain shift -- systematic differences across data source...
Source: arXiv - AI | 10 hours ago
About This Digest
This digest is automatically curated from leading AI and tech news sources, filtered for relevance to AI security and the ML ecosystem. Stories are scored and ranked based on their relevance to model security, supply chain safety, and the broader AI landscape.
Want to see how your favorite models score on security? Check our model dashboard for trust scores on the top 500 HuggingFace models.