Here's your daily roundup of the most relevant AI and ML news for March 20, 2026. Today's digest includes 1 security-focused story and 7 research developments. Click through to read the full articles from our curated sources.
Security & Safety
1. Mistral CEO: AI companies should pay a content levy in Europe
Article URL: https://www.ft.com/content/d63d6291-687f-4e05-8b23-4d545d78c64a
Comments URL: https://news.ycombinator.com/item?id=47453959
Points: 3 | Comments: 1
Source: Hacker News - ML Security | just now
Research & Papers
2. Epistemic Generative Adversarial Networks
arXiv:2603.18348v1 Announce Type: new Abstract: Generative models, particularly Generative Adversarial Networks (GANs), often suffer from a lack of output diversity, frequently generating similar samples rather than a wide range of variations. This paper introduces a novel generalization of the ...
Source: arXiv - Machine Learning | 10 hours ago
3. Best-of-Both-Worlds Multi-Dueling Bandits: Unified Algorithms for Stochastic and Adversarial Preferences under Condorcet and Borda Objectives
arXiv:2603.18972v1 Announce Type: new Abstract: Multi-dueling bandits, where a learner selects $m \geq 2$ arms per round and observes only the winner, arise naturally in many applications including ranking and recommendation systems, yet a fundamental question has remained open: can a single alg...
Source: arXiv - Machine Learning | 10 hours ago
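The interaction protocol in the abstract above (select m ≥ 2 arms per round, observe only the winner) can be sketched as a toy simulation. Everything below is an illustration, not the paper's algorithm: the latent preference scores, the Gaussian noise model, and the names `multi_dueling_round` and `pref` are all assumptions made for the sketch.

```python
import random

def multi_dueling_round(arms, m, pref):
    """One round of a multi-dueling bandit: the learner selects m >= 2 arms
    and observes only the winner. Here the winner is drawn from a toy
    preference model (latent score plus noise), standing in for real
    preference feedback."""
    chosen = random.sample(arms, m)
    winner = max(chosen, key=lambda a: pref[a] + random.gauss(0, 0.1))
    return chosen, winner

random.seed(0)
arms = list(range(5))
pref = {a: a / 4 for a in arms}  # arm 4 is best under this toy model
wins = {a: 0 for a in arms}
for _ in range(2000):
    _, w = multi_dueling_round(arms, m=3, pref=pref)
    wins[w] += 1
best = max(wins, key=wins.get)
```

Over many rounds the empirical win counts concentrate on the highest-scoring arm, which is the kind of winner-only feedback signal a Condorcet- or Borda-objective algorithm would learn from.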
4. Adversarial Latent-State Training for Robust Policies in Partially Observable Domains
arXiv:2603.07313v3 Announce Type: replace Abstract: Robustness under latent distribution shift remains challenging in partially observable reinforcement learning. We formalize a focused setting where an adversary selects a hidden initial latent distribution before the episode, termed an adversar...
Source: arXiv - Machine Learning | 10 hours ago
5. Systematic Scaling Analysis of Jailbreak Attacks in Large Language Models
arXiv:2603.11149v2 Announce Type: replace Abstract: Large language models remain vulnerable to jailbreak attacks, yet we still lack a systematic understanding of how jailbreak success scales with attacker effort across methods, model families, and harm types. We initiate a scaling-law framework ...
Source: arXiv - Machine Learning | 10 hours ago
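The question this abstract raises (how jailbreak success scales with attacker effort) can be illustrated with the simplest possible scaling model: independent attempts with a fixed per-attempt success rate. This is a toy baseline for intuition only, not the paper's scaling-law framework; the 2% rate and the function name `success_at_k` are assumptions of the sketch.

```python
import math

def success_at_k(p_single, k):
    """Toy model: probability that at least one of k independent jailbreak
    attempts succeeds, given a per-attempt success rate p_single."""
    return 1.0 - (1.0 - p_single) ** k

# Example: under this model, a 2% per-attempt attack crosses 50%
# cumulative success after about 35 attempts.
k50 = math.ceil(math.log(0.5) / math.log(1 - 0.02))
```

Real attacks are rarely independent across attempts, which is precisely why an empirical scaling-law study across methods, model families, and harm types is needed.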
6. Rethinking Gradient-based Adversarial Attacks on Point Cloud Classification
arXiv:2505.21854v2 Announce Type: replace-cross Abstract: Gradient-based adversarial attacks are widely used to evaluate the robustness of 3D point cloud classifiers, yet they often rely on uniform update rules that neglect point-wise heterogeneity, leading to perceptible perturbations. We propo...
Source: arXiv - AI | 10 hours ago
7. myMNIST: Benchmark of PETNN, KAN, and Classical Deep Learning Models for Burmese Handwritten Digit Recognition
arXiv:2603.18597v1 Announce Type: cross Abstract: We present the first systematic benchmark on myMNIST (formerly BHDD), a publicly available Burmese handwritten digit dataset important for Myanmar NLP/AI research. We evaluate eleven architectures spanning classical deep learning models (Multi-La...
Source: arXiv - AI | 10 hours ago
8. Box Maze: A Process-Control Architecture for Reliable LLM Reasoning
arXiv:2603.19182v1 Announce Type: new Abstract: Large language models (LLMs) demonstrate strong generative capabilities but remain vulnerable to hallucination and unreliable reasoning under adversarial prompting. Existing safety approaches -- such as reinforcement learning from human feedback (R...
Source: arXiv - AI | 10 hours ago
About This Digest
This digest is automatically curated from leading AI and tech news sources, filtered for relevance to AI security and the ML ecosystem. Stories are scored and ranked based on their relevance to model security, supply chain safety, and the broader AI landscape.
Want to see how your favorite models score on security? Check our model dashboard for trust scores on the top 500 HuggingFace models.