
AI News Digest: March 24, 2026

Daily roundup of AI and ML news - 8 curated research stories on AI security and adversarial machine learning.

Here's your daily roundup of the most relevant AI and ML news for March 24, 2026. This edition features 8 research developments. Click through to read the full articles from our curated sources.

Research & Papers

1. Detection of adversarial intent in Human-AI teams using LLMs

arXiv:2603.20976v1 Abstract: Large language models (LLMs) are increasingly deployed in human-AI teams as support agents for complex tasks such as information retrieval, programming, and decision-making assistance. While these agents' autonomy and contextual knowledge enable t...

Source: arXiv - Machine Learning | 10 hours ago

2. Open-weight genome language model safeguards: Assessing robustness via adversarial fine-tuning

arXiv:2511.19299v2 Abstract: Novel deep learning architectures are increasingly being applied to biological data, including genetic sequences. These models, referred to as genomic language models (gLMs), have demonstrated impressive predictive and generative capabilities, ...

Source: arXiv - Machine Learning | 10 hours ago

3. Adversarial Attacks on Locally Private Graph Neural Networks

arXiv:2603.20746v1 Abstract: Graph neural networks (GNNs) are powerful tools for analyzing graph-structured data. However, their vulnerability to adversarial attacks raises serious concerns, especially when dealing with sensitive information. Local Differential Privacy (LDP) off...

Source: arXiv - Machine Learning | 10 hours ago

4. TTP: Test-Time Padding for Adversarial Detection and Robust Adaptation on Vision-Language Models

arXiv:2512.16523v2 Abstract: Vision-Language Models (VLMs), such as CLIP, have achieved impressive zero-shot recognition performance but remain highly susceptible to adversarial perturbations, posing significant risks in safety-critical scenarios. Previous training-t...

Source: arXiv - AI | 10 hours ago

5. CAMA: Exploring Collusive Adversarial Attacks in c-MARL

arXiv:2603.20390v1 Abstract: Cooperative multi-agent reinforcement learning (c-MARL) has been widely deployed in real-world applications such as social robots, embodied intelligence, and UAV swarms. Nevertheless, many adversarial attacks still threaten various c-MAR...

Source: arXiv - Machine Learning | 10 hours ago

6. OmniPatch: A Universal Adversarial Patch for ViT-CNN Cross-Architecture Transfer in Semantic Segmentation

arXiv:2603.20777v1 Abstract: Robust semantic segmentation is crucial for safe autonomous driving, yet deployed models remain vulnerable to black-box adversarial attacks when target weights are unknown. Most existing approaches either craft image-wide perturbations or optimize ...

Source: arXiv - Machine Learning | 10 hours ago

7. Can LLMs Fool Graph Learning? Exploring Universal Adversarial Attacks on Text-Attributed Graphs

arXiv:2603.21155v1 Abstract: Text-attributed graphs (TAGs) enhance graph learning by integrating rich textual semantics and topological context for each node. While boosting expressiveness, they also expose new vulnerabilities in graph learning through text-based adversarial s...

Source: arXiv - AI | 10 hours ago

8. Adversarial Camouflage

arXiv:2603.21867v1 Abstract: While the rapid development of facial recognition algorithms has enabled numerous beneficial applications, their widespread deployment has raised significant concerns about the risks of mass surveillance and threats to individual privacy. In this...

Source: arXiv - AI | 10 hours ago


About This Digest

This digest is automatically curated from leading AI and tech news sources, filtered for relevance to AI security and the ML ecosystem. Stories are scored and ranked based on their relevance to model security, supply chain safety, and the broader AI landscape.
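To illustrate the kind of relevance scoring described above, here is a minimal keyword-weighting sketch. The keyword list, weights, and function names are hypothetical examples, not the digest's actual pipeline:

```python
# Hypothetical relevance scorer: sums weighted security-related keywords
# found in a headline. Illustrative only, not the digest's real scoring logic.

KEYWORD_WEIGHTS = {
    "adversarial": 3,
    "security": 3,
    "attack": 2,
    "robustness": 2,
    "privacy": 2,
    "llm": 1,
}

def relevance_score(headline: str) -> int:
    """Sum the weights of keywords appearing in the headline (case-insensitive)."""
    text = headline.lower()
    return sum(w for kw, w in KEYWORD_WEIGHTS.items() if kw in text)

def rank_stories(headlines: list[str]) -> list[str]:
    """Return headlines ordered from most to least relevant."""
    return sorted(headlines, key=relevance_score, reverse=True)
```

A real pipeline would likely combine many more signals (source reputation, recency, topic models), but the rank-by-score shape stays the same.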

Want to see how your favorite models score on security? Check our model dashboard for trust scores on the top 500 HuggingFace models.