
AI News Digest: April 30, 2026

Daily roundup of AI and ML news - 8 curated stories on security, research, and industry developments.

Here's your daily roundup of the most relevant AI and ML news for April 30, 2026. Today's digest covers 3 security-focused stories and 5 research developments. Click through to read the full articles from our curated sources.

Security & Safety

1. Supply Chain Attack Targets SAP-Related npm Packages with Credential-Stealing Malware

Cybersecurity researchers are sounding the alarm about a new supply chain attack campaign targeting SAP-related npm packages with credential-stealing malware. According to reports from Aikido Security, Onapsis, OX Security, SafeDep, Socket, StepSecurity, and Google-owned Wiz, the campaign – ...

Source: The Hacker News (Security) | 21 hours ago

2. Show HN: Trent – Contextual architectural security reviews inside Claude Code

Article URL: https://trent.ai/solutions/claude-code-security/

Comments URL: https://news.ycombinator.com/item?id=47962091

Points: 4

Comments: 1

Source: Hacker News - ML Security | just now

3. CHERI memory safety mitigates LLM-discovered vulnerability in FreeBSD

Article URL: https://cheri-alliance.org/cheri-memory-safety-mitigates-llm-discovered-vulnerability-in-freebsd/

Comments URL: https://news.ycombinator.com/item?id=47961974

Points: 1

Comments: 0

Source: Hacker News - ML Security | just now

Research & Papers

4. Adversarial Robustness of NTK Neural Networks

arXiv:2604.25965v1 Abstract: Deep learning models are widely deployed in safety-critical domains, but remain vulnerable to adversarial attacks. In this paper, we study the adversarial robustness of NTK neural networks in the context of nonparametric regression. We establish ...

Source: arXiv - Machine Learning | 10 hours ago

5. Robust Federated Learning under Adversarial Attacks via Loss-Based Client Clustering

arXiv:2508.12672v4 Abstract: Federated Learning (FL) enables collaborative model training across multiple clients without sharing private data. We consider FL scenarios wherein FL clients are subject to adversarial (Byzantine) attacks, while the FL server is trusted (hones...

Source: arXiv - Machine Learning | 10 hours ago
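To give a feel for the general idea behind loss-based client clustering (this is a hedged sketch of the concept, not the paper's exact algorithm, and the helper names are invented): clients whose updates stem from poisoned objectives tend to report anomalously high loss, so the server can split clients into two clusters by loss and aggregate only the lower-loss group.

```python
# Sketch: filter suspected Byzantine clients by clustering their losses.
# All names here are illustrative, not from the paper.

def split_by_loss(losses):
    """Two-cluster split of scalar losses at the largest gap in sorted order."""
    order = sorted(range(len(losses)), key=lambda i: losses[i])
    gaps = [losses[order[i + 1]] - losses[order[i]] for i in range(len(order) - 1)]
    cut = gaps.index(max(gaps)) + 1            # boundary just after the biggest jump
    return order[:cut], order[cut:]            # (low-loss cluster, high-loss cluster)

def robust_average(updates, losses):
    """Average only the model updates from the low-loss cluster."""
    keep, _ = split_by_loss(losses)
    dim = len(updates[0])
    return [sum(updates[i][d] for i in keep) / len(keep) for d in range(dim)]

# Three honest clients and one attacker reporting a wildly different loss:
avg = robust_average([[1.0], [1.0], [1.0], [100.0]], [0.10, 0.12, 0.11, 5.0])
```

Real schemes are more careful (k-means or agreement-based clustering, per-round validation losses, and robustness guarantees), but the filter-then-aggregate structure is the same.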

6. SD2AIL: Adversarial Imitation Learning from Synthetic Demonstrations via Diffusion Models

arXiv:2512.18583v2 Abstract: Adversarial Imitation Learning (AIL) is a dominant framework in imitation learning that infers rewards from expert demonstrations to guide policy optimization. Although providing more expert demonstrations typically leads to improved performanc...

Source: arXiv - Machine Learning | 10 hours ago

7. Frontier Coding Agents Can Now Implement an AlphaZero Self-Play Machine Learning Pipeline For Connect Four That Performs Comparably to an External Solver

arXiv:2604.25067v2 Abstract: Forecasting when AI systems will become capable of meaningfully accelerating AI research is a central challenge for AI safety. Existing benchmarks measure broad capability growth, but may not provide ample early warning signals for recurs...

Source: arXiv - Machine Learning | 10 hours ago

8. Adaptive and Fine-grained Module-wise Expert Pruning for Efficient LoRA-MoE Fine-Tuning

arXiv:2604.26340v1 Abstract: LoRA-MoE has emerged as an effective paradigm for parameter-efficient fine-tuning, combining the low training cost of LoRA with the increased adaptation capacity of Mixture-of-Experts (MoE). However, existing LoRA-MoE frameworks typically adopt a f...

Source: arXiv - Machine Learning | 10 hours ago


About This Digest

This digest is automatically curated from leading AI and tech news sources, filtered for relevance to AI security and the ML ecosystem. Stories are scored and ranked based on their relevance to model security, supply chain safety, and the broader AI landscape.
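As a rough illustration of what "scored and ranked based on relevance" can mean in practice (the keyword weights and story data below are invented for this sketch, not our actual pipeline), a minimal curator counts weighted keyword hits in each story's title and summary and sorts by the total:

```python
# Minimal keyword-weighted relevance scoring; weights are illustrative only.
KEYWORD_WEIGHTS = {
    "supply chain": 3, "malware": 3, "vulnerability": 2,
    "memory safety": 2, "adversarial": 2, "fine-tuning": 1,
}

def relevance_score(title: str, summary: str) -> int:
    """Sum the weights of every keyword phrase found in the story text."""
    text = f"{title} {summary}".lower()
    return sum(w for phrase, w in KEYWORD_WEIGHTS.items() if phrase in text)

stories = [
    ("CHERI memory safety mitigates LLM-discovered vulnerability in FreeBSD", ""),
    ("Adaptive Module-wise Expert Pruning", "efficient LoRA-MoE fine-tuning"),
]
ranked = sorted(stories, key=lambda s: relevance_score(*s), reverse=True)
```

A production ranker would add recency, source weighting, and deduplication on top, but the shape is the same: score each story, sort, take the top N.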

Want to see how your favorite models score on security? Check our model dashboard for trust scores on the top 500 HuggingFace models.