Here's your daily roundup of the most relevant AI and ML news for March 31, 2026. Today's digest covers 2 security-focused stories and 6 research developments; click through to read the full articles from our curated sources.
Security & Safety
1. Axios Supply Chain Attack Pushes Cross-Platform RAT via Compromised npm Account
Axios, the popular HTTP client, has suffered a supply chain attack: two newly published versions of the npm package introduced a malicious dependency that delivers a trojan targeting Windows, macOS, and Linux systems. Versions 1.14.1 and 0.30.4 of Axios have been found to i...
Source: The Hacker News (Security) | 7 hours ago
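If you want a quick way to check whether a project pins one of the releases named in the story, a minimal sketch follows. Only the two version numbers come from the article above; the function itself is an illustrative check, not an official remediation tool (in practice you would feed it the version reported by `npm ls axios`).

```python
# Illustrative sketch: flag the Axios releases reported as compromised.
# The version list comes from the article; everything else is an assumption.

COMPROMISED_AXIOS_VERSIONS = {"1.14.1", "0.30.4"}

def is_compromised(installed_version: str) -> bool:
    """Return True if the given axios version matches a reported-bad release."""
    return installed_version.strip() in COMPROMISED_AXIOS_VERSIONS

print(is_compromised("1.14.1"))  # True
print(is_compromised("1.14.0"))  # False
```

A real audit would also walk transitive dependencies, since the attack was delivered via a malicious dependency rather than axios's own code.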
2. Vertex AI Vulnerability Exposes Google Cloud Data and Private Artifacts
Cybersecurity researchers have disclosed a security "blind spot" in Google Cloud's Vertex AI platform that could allow artificial intelligence (AI) agents to be weaponized by an attacker to gain unauthorized access to sensitive data and compromise an organization's cloud environment. According to...
Source: The Hacker News (Security) | just now
Research & Papers
3. Kill-Chain Canaries: Stage-Level Tracking of Prompt Injection Across Attack Surfaces and Model Safety Tiers
arXiv:2603.28013v1 Announce Type: cross Abstract: We present a stage-decomposed analysis of prompt injection attacks against five frontier LLM agents. Prior work measures task-level attack success rate (ASR); we localize the pipeline stage at which each model's defense activates. We instrument e...
Source: arXiv - Machine Learning | 10 hours ago
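The abstract contrasts a single task-level attack success rate (ASR) with localizing the pipeline stage where a defense fires. A minimal sketch of that kind of bookkeeping might look like the following; the stage names and trial-record format are hypothetical, not taken from the paper.

```python
from collections import Counter

# Hypothetical prompt-injection trial records: each attack either succeeds
# or is blocked at some named pipeline stage.
trials = [
    {"outcome": "blocked", "stage": "input_filter"},
    {"outcome": "blocked", "stage": "tool_call_guard"},
    {"outcome": "success", "stage": None},
    {"outcome": "blocked", "stage": "input_filter"},
]

# Task-level metric: fraction of attacks that succeeded end to end.
asr = sum(t["outcome"] == "success" for t in trials) / len(trials)

# Stage-level view: where did the defenses actually activate?
blocked_by_stage = Counter(t["stage"] for t in trials if t["outcome"] == "blocked")

print(f"task-level ASR: {asr:.2f}")  # 0.25
print(dict(blocked_by_stage))        # {'input_filter': 2, 'tool_call_guard': 1}
```

The point of the stage-level counter is that two models with identical ASR can block the same attacks at very different depths of the pipeline.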
4. FlowPure: Continuous Normalizing Flows for Adversarial Purification
arXiv:2505.13280v2 Announce Type: replace Abstract: Despite significant advances in the area, adversarial robustness remains a critical challenge in systems employing machine learning models. The removal of adversarial perturbations at inference time, known as adversarial purification, has emerg...
Source: arXiv - Machine Learning | 10 hours ago
5. Evasion Adversarial Attacks Remain Impractical Against ML-based Network Intrusion Detection Systems, Especially Dynamic Ones
arXiv:2306.05494v5 Announce Type: replace-cross Abstract: Machine Learning (ML) has become pervasive, and its deployment in Network Intrusion Detection Systems (NIDS) is inevitable due to its automated nature and high accuracy compared to traditional models in processing and classifying large vo...
Source: arXiv - Machine Learning | 10 hours ago
6. Low-Rank Adaptation Reduces Catastrophic Forgetting in Sequential Transformer Encoder Fine-Tuning: Controlled Empirical Evidence and Frozen-Backbone Representation Probes
arXiv:2603.27707v1 Announce Type: new Abstract: Sequential fine-tuning of pretrained language encoders often overwrites previously acquired capabilities, but the forgetting behavior of parameter-efficient updates remains under-characterized. We present a controlled empirical study of Low-Rank Ad...
Source: arXiv - Machine Learning | 10 hours ago
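For readers unfamiliar with the parameter-efficient update being studied, here is a minimal numpy sketch of the standard LoRA formulation: the pretrained weight W stays frozen and is augmented by a trainable low-rank product (alpha / r) * B @ A. Shapes and values are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 16, 2, 4

W = rng.normal(size=(d_out, d_in))    # pretrained weight, kept frozen
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # zero-init so training starts at W

def lora_forward(x: np.ndarray) -> np.ndarray:
    """Apply the adapted weight; W itself is never modified in place."""
    return x @ (W + (alpha / r) * B @ A).T

x = rng.normal(size=(3, d_in))
# With B = 0 the adapter is a no-op: output matches the frozen base model.
assert np.allclose(lora_forward(x), x @ W.T)
```

Because only A and B receive gradient updates, the frozen backbone's representations are preserved by construction, which is the intuition behind the reduced forgetting the paper measures.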
7. A Multi-agent AI System for Deep Learning Model Migration from TensorFlow to JAX
arXiv:2603.27296v1 Announce Type: cross Abstract: The rapid development of AI-based products and their underlying models has led to constant innovation in deep learning frameworks. Google has been pioneering machine learning usage across dozens of products. Maintaining the multitude of model sou...
Source: arXiv - AI | 10 hours ago
8. Does Tone Change the Answer? Evaluating Prompt Politeness Effects on Modern LLMs: GPT, Gemini, and LLaMA
arXiv:2512.12812v2 Announce Type: replace-cross Abstract: Prompt engineering has emerged as a critical factor influencing large language model (LLM) performance, yet the impact of pragmatic elements such as linguistic tone and politeness remains underexplored, particularly across different model...
Source: arXiv - AI | 10 hours ago
About This Digest
This digest is automatically curated from leading AI and tech news sources, filtered for relevance to AI security and the ML ecosystem. Stories are scored and ranked based on their relevance to model security, supply chain safety, and the broader AI landscape.
Want to see how your favorite models score on security? Check our model dashboard for trust scores on the top 500 HuggingFace models.