Here's your daily roundup of the most relevant AI and ML news for March 27, 2026. Today's digest includes one security story, six research developments, and one industry item. Click through to read the full articles from our curated sources.
Security & Safety
1. Claude Extension Flaw Enabled Zero-Click XSS Prompt Injection via Any Website
Cybersecurity researchers have disclosed a vulnerability in Anthropic's Claude Google Chrome Extension that could have been exploited to trigger malicious prompts simply by visiting a web page. The flaw "allowed any website to silently inject prompts into that assistant as if the user wrote them,...
Source: The Hacker News (Security) | 1 day ago
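The truncated quote describes the classic zero-click injection pattern: page-controlled text reaches the assistant "as if the user wrote them." A minimal sketch of that vulnerable pattern and one common mitigation (delimiting untrusted content) — all function names here are hypothetical illustrations, not Anthropic's actual extension code:

```typescript
// Hypothetical sketch of the vulnerable pattern: text the page controls
// (including visually hidden elements) is concatenated into the prompt
// with no trust boundary, so it reads like the user's own words.
function buildPromptUnsafe(userMessage: string, pageText: string): string {
  return `${pageText}\n${userMessage}`;
}

// A basic (not complete) mitigation: strip any delimiter look-alikes from
// the page text, then fence it and label its origin so the model can be
// instructed to treat it as data rather than instructions.
function buildPromptSafer(userMessage: string, pageText: string): string {
  const fenced = pageText.replace(/<\/?untrusted_page_content>/g, "");
  return [
    "<untrusted_page_content>",
    fenced,
    "</untrusted_page_content>",
    `User: ${userMessage}`,
  ].join("\n");
}
```

Delimiting alone does not fully stop prompt injection; it only gives the model a trust boundary to reason about, which is why disclosures like this one matter.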
Research & Papers
2. DeepFAN, a transformer-based deep learning model for human-artificial intelligence collaborative assessment of incidental pulmonary nodules in CT scans: a multi-reader, multi-case trial
arXiv:2603.25607v1 Announce Type: cross Abstract: The widespread adoption of CT has notably increased the number of detected lung nodules. However, current deep learning methods for classifying benign and malignant nodules often fail to comprehensively integrate global and local features, and mo...
Source: arXiv - AI | 10 hours ago
3. Neural Uncertainty Principle: A Unified View of Adversarial Fragility and LLM Hallucination
arXiv:2603.19562v2 Announce Type: replace Abstract: Adversarial vulnerability in vision and hallucination in large language models are conventionally viewed as separate problems, each addressed with modality-specific patches. This study first reveals that they share a common geometric origin: th...
Source: arXiv - Machine Learning | 10 hours ago
4. Divided We Fall: Defending Against Adversarial Attacks via Soft-Gated Fractional Mixture-of-Experts with Randomized Adversarial Training
arXiv:2512.20821v3 Announce Type: replace Abstract: Machine learning is a powerful tool enabling full automation of a huge number of tasks without explicit programming. Despite recent progress of machine learning in different domains, these models have shown vulnerabilities when they are exposed...
Source: arXiv - Machine Learning | 10 hours ago
5. Neural Network Conversion of Machine Learning Pipelines
arXiv:2603.25699v1 Announce Type: new Abstract: Transfer learning and knowledge distillation have recently gained a lot of attention in the deep learning community. One transfer approach, student-teacher learning, has been shown to successfully create ``small'' student neural networks that mi...
Source: arXiv - Machine Learning | 10 hours ago
6. Generative Adversarial Perturbations with Cross-paradigm Transferability on Localized Crowd Counting
arXiv:2603.24821v1 Announce Type: cross Abstract: State-of-the-art crowd counting and localization are primarily modeled using two paradigms: density maps and point regression. Given the field's security ramifications, there is active interest in model robustness against adversarial attacks. Rec...
Source: arXiv - AI | 10 hours ago
7. Knowledge-Guided Adversarial Training for Infrared Object Detection via Thermal Radiation Modeling
arXiv:2603.25170v1 Announce Type: cross Abstract: In complex environments, infrared object detection exhibits broad applicability and stability across diverse scenarios. However, infrared object detection is vulnerable to both common corruptions and adversarial examples, leading to potential sec...
Source: arXiv - AI | 10 hours ago
Industry News
8. Mistral releases a new open source model for speech generation
The model, which lets enterprises build voice agents for sales and customer engagement, puts Mistral in direct competition with the likes of ElevenLabs, Deepgram, and OpenAI.
Source: TechCrunch - AI | 1 day ago
About This Digest
This digest is automatically curated from leading AI and tech news sources, filtered for relevance to AI security and the ML ecosystem. Stories are scored and ranked based on their relevance to model security, supply chain safety, and the broader AI landscape.
Want to see how your favorite models score on security? Check our model dashboard for trust scores on the top 500 HuggingFace models.