
AI News Digest: February 02, 2026

Daily roundup of AI and ML news - 8 curated stories on security and research developments.

Here's your daily roundup of the most relevant AI and ML news for February 02, 2026. Today's digest includes 1 security-focused story and 7 research developments. Click through to read the full articles from our curated sources.

Security & Safety

1. Open VSX Supply Chain Attack Used Compromised Dev Account to Spread GlassWorm

Cybersecurity researchers have disclosed details of a supply chain attack targeting the Open VSX Registry in which unidentified threat actors compromised a legitimate developer's resources to push malicious updates to downstream users. "On January 30, 2026, four established Open VSX extensions ...

Source: The Hacker News (Security) | 18 hours ago
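
If you install extensions from a registry like Open VSX, one low-effort mitigation against a hijacked publisher account is to pin and verify the artifacts you install rather than trusting whatever the account ships next. A minimal sketch, assuming you keep a local allowlist of reviewed digests; the extension ID, digest value, and file layout are illustrative placeholders, not Open VSX specifics:

```python
# Minimal sketch: verify a downloaded extension artifact against a pinned hash
# before installing it. The IDs and digest below are hypothetical placeholders;
# the registry's actual distribution and signing mechanisms are not modeled.
import hashlib
from pathlib import Path

PINNED_DIGESTS = {
    # extension-id -> SHA-256 of the last reviewed release (hypothetical value)
    "example.publisher.extension-1.2.3": "aabbcc...",
}

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large archives are handled cheaply."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_trusted(extension_id: str, artifact: Path) -> bool:
    """Return True only if the artifact matches the digest pinned at review time."""
    expected = PINNED_DIGESTS.get(extension_id)
    return expected is not None and sha256_of(artifact) == expected
```

Pinning does not stop a compromised release from being published, but it does stop it from being silently installed downstream.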

Research & Papers

2. A Systematic Literature Review on LLM Defenses Against Prompt Injection and Jailbreaking: Expanding NIST Taxonomy

arXiv:2601.22240v1 Announce Type: cross Abstract: The rapid advancement and widespread adoption of generative artificial intelligence (GenAI) and large language models (LLMs) has been accompanied by the emergence of new security vulnerabilities and challenges, such as jailbreaking and other ...

Source: arXiv - AI | 18 hours ago

3. Are Modern Speech Enhancement Systems Vulnerable to Adversarial Attacks?

arXiv:2509.21087v3 Announce Type: replace-cross Abstract: Machine learning approaches for speech enhancement are becoming increasingly expressive, enabling ever more powerful modifications of input signals. In this paper, we demonstrate that this expressiveness introduces a vulnerability: ...

Source: arXiv - Machine Learning | 18 hours ago
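
For readers unfamiliar with this attack surface: the generic recipe is a gradient-based perturbation of the input waveform that steers the model's output while staying small. Below is an FGSM-style sketch against a stand-in enhancement model; it illustrates the general idea only and does not reproduce the paper's systems or attack procedure:

```python
# Illustrative FGSM-style perturbation against a toy "speech enhancement" model.
# The single conv layer is a placeholder for a real enhancement network.
import torch
import torch.nn as nn

enhancer = nn.Conv1d(1, 1, kernel_size=9, padding=4)      # placeholder model
waveform = torch.randn(1, 1, 16000, requires_grad=True)   # ~1 s at 16 kHz, toy input
target = torch.zeros(1, 1, 16000)                          # attacker-chosen target output

loss = nn.functional.mse_loss(enhancer(waveform), target)
loss.backward()

epsilon = 1e-3                                             # perturbation budget
adversarial = (waveform + epsilon * waveform.grad.sign()).detach()
print(float((adversarial - waveform).abs().max()))         # stays within epsilon
```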

4. Qualitative Evaluation of LLM-Designed GUI

arXiv:2601.22759v1 Announce Type: cross Abstract: As generative artificial intelligence advances, Large Language Models (LLMs) are being explored for automated graphical user interface (GUI) design. This study investigates the usability and adaptability of LLM-generated interfaces by analysing ...

Source: arXiv - AI | 18 hours ago

5. Statistical Estimation of Adversarial Risk in Large Language Models under Best-of-N Sampling

arXiv:2601.22636v1 Announce Type: new Abstract: Large Language Models (LLMs) are typically evaluated for safety under single-shot or low-budget adversarial prompting, which underestimates real-world risk. In practice, attackers can exploit large-scale parallel sampling to repeatedly probe ...

Source: arXiv - AI | 18 hours ago
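
The abstract's core claim, that single-shot safety evaluation understates real-world risk, already follows from a simple independence assumption: if one adversarial attempt succeeds with probability p, then N parallel attempts succeed at least once with probability 1 - (1 - p)^N. A toy calculation under that assumption (the numbers are illustrative, and independence is itself a simplification the paper presumably refines):

```python
# If a single adversarial prompt succeeds with probability p, best-of-N sampling
# succeeds at least once with probability 1 - (1 - p)^N (assuming independence).
def best_of_n_risk(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

for n in (1, 10, 100, 1000):
    print(n, round(best_of_n_risk(0.01, n), 3))
# 1 0.01, 10 0.096, 100 0.634, 1000 ~1.0
```

Even a 1% per-attempt success rate becomes a near-certain jailbreak once an attacker can afford a thousand samples.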

6. Make Anything Match Your Target: Universal Adversarial Perturbations against Closed-Source MLLMs via Multi-Crop Routed Meta Optimization

arXiv:2601.23179v1 Announce Type: new Abstract: Targeted adversarial attacks on closed-source multimodal large language models (MLLMs) have been increasingly explored under black-box transfer, yet prior methods are predominantly sample-specific and offer limited reusability across inputs. ...

Source: arXiv - AI | 18 hours ago
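
For context, a universal targeted perturbation is a single delta optimized across many inputs so that the model maps any perturbed input toward one attacker-chosen output. The sketch below shows that generic loop on a toy surrogate; the paper's multi-crop routing and meta-optimization against closed-source MLLMs are not modeled here:

```python
# Generic sketch of a universal, targeted perturbation: one delta shared across a
# batch of inputs, optimized so model(x + delta) moves toward a single target class.
# The surrogate model, data, and budget are toy placeholders.
import torch
import torch.nn as nn

surrogate = nn.Linear(64, 10)                  # stand-in for a transferable surrogate
inputs = torch.randn(256, 64)                  # batch of unrelated inputs
target = torch.zeros(256, dtype=torch.long)    # same target class for every input

delta = torch.zeros(1, 64, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.05)

for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(surrogate(inputs + delta), target)
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-0.1, 0.1)                # keep the shared perturbation small
```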

7. Defending Large Language Models Against Jailbreak Attacks via In-Decoding Safety-Awareness Probing

arXiv:2601.10543v2 Announce Type: replace Abstract: Large language models (LLMs) have achieved impressive performance across natural language tasks and are increasingly deployed in real-world applications. Despite extensive safety alignment efforts, recent studies show that such alignment ...

Source: arXiv - AI | 18 hours ago
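
The title suggests monitoring the model's own internal state while it decodes. As a purely hypothetical illustration of that general idea (the probe weights, threshold, and hidden-state access below are placeholders, not the paper's method):

```python
# Hypothetical sketch: after each generated token, score a hidden-state summary with
# a small linear probe and stop generation if it looks unsafe.
import numpy as np

rng = np.random.default_rng(0)
probe_w = rng.normal(size=768)    # pretend safety-probe weights
probe_b = 0.0
THRESHOLD = 0.9

def unsafe(hidden_state: np.ndarray) -> bool:
    """Sigmoid of a linear probe over the last hidden state, compared to a threshold."""
    score = 1.0 / (1.0 + np.exp(-(probe_w @ hidden_state + probe_b)))
    return score > THRESHOLD

def generate_with_probe(step_fn, max_tokens: int = 64):
    """step_fn() -> (token, hidden_state); stop early if the probe flags the state."""
    tokens = []
    for _ in range(max_tokens):
        token, hidden = step_fn()
        if unsafe(hidden):
            tokens.append("[refused]")
            break
        tokens.append(token)
    return tokens

toy_step = lambda: ("tok", rng.normal(size=768))   # stand-in for a real decoding step
print(generate_with_probe(toy_step, max_tokens=5))
```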

8. Impact of Phonetics on Speaker Identity in Adversarial Voice Attack

arXiv:2509.15437v2 Announce Type: replace-cross Abstract: Adversarial perturbations in speech pose a serious threat to automatic speech recognition (ASR) and speaker verification by introducing subtle waveform modifications that remain imperceptible to humans but can significantly alter system ...

Source: arXiv - AI | 18 hours ago


About This Digest

This digest is automatically curated from leading AI and tech news sources, filtered for relevance to AI security and the ML ecosystem. Stories are scored and ranked based on their relevance to model security, supply chain safety, and the broader AI landscape.
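
For the curious, "scored and ranked" can be as simple as a keyword-weighted relevance function. A rough sketch with invented keywords and weights, not the digest's actual criteria:

```python
# Toy relevance scoring: weight security-related keywords and sort stories by score.
# The keyword list and weights are made up for this example.
WEIGHTS = {
    "supply chain": 3.0,
    "jailbreak": 2.5,
    "prompt injection": 2.5,
    "adversarial": 2.0,
    "llm": 1.0,
}

def relevance(text: str) -> float:
    text = text.lower()
    return sum(w for kw, w in WEIGHTS.items() if kw in text)

stories = [
    "Open VSX supply chain attack spreads GlassWorm",
    "Qualitative evaluation of LLM-designed GUI",
]
print(sorted(stories, key=relevance, reverse=True))
```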

Want to see how your favorite models score on security? Check our model dashboard for trust scores on the top 500 HuggingFace models.