
AI News Digest: March 18, 2026

Daily roundup of AI and ML news - 8 curated stories on security, research, and industry developments.

Here's your daily roundup of the most relevant AI and ML news for March 18, 2026. Today's digest includes one security-focused story and seven research developments. Click through to read the full articles from our curated sources.

Security & Safety

1. Claude Code Security and Magecart: Getting the Threat Model Right

When a Magecart payload hides inside the EXIF data of a dynamically loaded third-party favicon, no repository scanner will catch it – because the malicious code never actually touches your repo. As teams adopt Claude Code Security for static analysis, this is the exact technical boundary where AI...

Source: The Hacker News (Security) | 2 hours ago

Research & Papers

2. NanoFlux: Adversarial Dual-LLM Evaluation and Distillation For Multi-Domain Reasoning

arXiv:2509.23252v3 Announce Type: replace Abstract: We present NanoFlux, a novel adversarial framework for generating targeted training data to improve LLM reasoning, where adversarially-generated datasets containing fewer than 200 examples outperform conventional fine-tuning approaches. The fra...

Source: arXiv - Machine Learning | 10 hours ago

3. Improving Generative Adversarial Network Generalization for Facial Expression Synthesis

arXiv:2603.15648v1 Announce Type: cross Abstract: Facial expression synthesis aims to generate realistic facial expressions while preserving identity. Existing conditional generative adversarial networks (GANs) achieve excellent image-to-image translation results, but their performance often deg...

Source: arXiv - Machine Learning | 10 hours ago

4. An Efficient Heterogeneous Co-Design for Fine-Tuning on a Single GPU

arXiv:2603.16428v1 Announce Type: cross Abstract: Fine-tuning Large Language Models (LLMs) has become essential for domain adaptation, but its memory-intensive property exceeds the capabilities of most GPUs. To address this challenge and democratize LLM fine-tuning, we present SlideFormer, a nov...

Source: arXiv - AI | 10 hours ago

5. Pre-training LLM without Learning Rate Decay Enhances Supervised Fine-Tuning

arXiv:2603.16127v1 Announce Type: cross Abstract: We investigate the role of learning rate scheduling in the large-scale pre-training of large language models, focusing on its influence on downstream performance after supervised fine-tuning (SFT). Decay-based learning rate schedulers are widely ...

Source: arXiv - Machine Learning | 10 hours ago

6. Efficient LLM Safety Evaluation through Multi-Agent Debate

arXiv:2511.06396v2 Announce Type: replace Abstract: Safety evaluation of large language models (LLMs) increasingly relies on LLM-as-a-Judge frameworks, but the high cost of frontier models limits scalability. We propose a cost-efficient multi-agent judging framework that employs Small Language M...

Source: arXiv - AI | 10 hours ago

7. Amnesia: Adversarial Semantic Layer Specific Activation Steering in Large Language Models

arXiv:2603.10080v2 Announce Type: replace-cross Abstract: Warning: This article includes red-teaming experiments, which contain examples of compromised LLM responses that may be offensive or upsetting. Large Language Models (LLMs) have the potential to create harmful content, such as generatin...

Source: arXiv - Machine Learning | 10 hours ago

8. How Vulnerable Are AI Agents to Indirect Prompt Injections? Insights from a Large-Scale Public Competition

arXiv:2603.15714v1 Announce Type: cross Abstract: LLM based agents are increasingly deployed in high stakes settings where they process external data sources such as emails, documents, and code repositories. This creates exposure to indirect prompt injection attacks, where adversarial instructio...

Source: arXiv - AI | 10 hours ago


About This Digest

This digest is automatically curated from leading AI and tech news sources, filtered for relevance to AI security and the ML ecosystem. Stories are scored and ranked based on their relevance to model security, supply chain safety, and the broader AI landscape.
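As a rough illustration of the scoring-and-ranking step described above, here is a minimal Python sketch. The keyword list, weights, and sample stories are illustrative assumptions, not the digest's actual scoring model:

```python
# Hypothetical relevance scoring for a news digest pipeline.
# Keywords, weights, and story data below are assumptions for illustration.

KEYWORD_WEIGHTS = {
    "security": 3.0,
    "supply chain": 3.0,
    "prompt injection": 2.5,
    "fine-tuning": 1.5,
    "llm": 1.0,
}

def score_story(title: str, summary: str) -> float:
    """Sum the weights of every keyword found in the title or summary."""
    text = f"{title} {summary}".lower()
    return sum(w for kw, w in KEYWORD_WEIGHTS.items() if kw in text)

def rank_stories(stories: list[dict]) -> list[dict]:
    """Return stories sorted by descending relevance score."""
    return sorted(
        stories,
        key=lambda s: score_story(s["title"], s["summary"]),
        reverse=True,
    )

stories = [
    {"title": "Magecart and static analysis",
     "summary": "A supply chain security threat..."},
    {"title": "Fine-tuning on a single GPU",
     "summary": "An LLM co-design approach..."},
]
ranked = rank_stories(stories)
print(ranked[0]["title"])  # security story ranks first under these weights
```

A production pipeline would likely use learned relevance models rather than fixed keyword weights, but the sort-by-score shape is the same.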

Want to see how your favorite models score on security? Check our model dashboard for trust scores on the top 500 HuggingFace models.