
AI News Digest: January 23, 2026

Daily roundup of AI and ML news - 8 curated research papers on AI security, adversarial robustness, and machine learning.

Here's your daily roundup of the most relevant AI and ML news for January 23, 2026. This edition covers 8 research developments. Click through to read the full papers from our curated sources.

Research & Papers

1. Crafting Adversarial Inputs for Large Vision-Language Models Using Black-Box Optimization

arXiv:2601.01747v4 Announce Type: replace-cross Abstract: Recent advancements in Large Vision-Language Models (LVLMs) have shown groundbreaking capabilities across diverse multimodal tasks. However, these models remain vulnerable to adversarial jailbreak attacks, where adversaries craft subtle p...

Source: arXiv - Machine Learning | 18 hours ago
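
The excerpt above doesn't spell out the paper's method, but the general shape of a query-only black-box attack is easy to picture: propose a small perturbation, query the target for a scalar score, and keep the change only if the score improves. The sketch below is a minimal random-search version of that loop; `query_model_score` is a hypothetical stand-in for whatever feedback the attacker can observe from the LVLM.

```python
# Minimal sketch of a query-only black-box attack loop (not this paper's exact
# method): random-search perturbations are kept only when they raise an attack
# score returned by the target model.
import numpy as np

def query_model_score(image: np.ndarray) -> float:
    # Placeholder: in practice this would call the target LVLM and return,
    # e.g., the likelihood of the attacker's desired output.
    return float(-np.abs(image.mean() - 0.5))

def random_search_attack(image, eps=8 / 255, steps=500, seed=0):
    rng = np.random.default_rng(seed)
    delta = np.zeros_like(image)
    best = query_model_score(np.clip(image + delta, 0, 1))
    for _ in range(steps):
        candidate = delta + rng.choice([-eps / 10, eps / 10], size=image.shape)
        candidate = np.clip(candidate, -eps, eps)   # stay inside the L_inf ball
        score = query_model_score(np.clip(image + candidate, 0, 1))
        if score > best:                            # keep only improving moves
            best, delta = score, candidate
    return np.clip(image + delta, 0, 1)

adv = random_search_attack(np.random.default_rng(1).random((3, 32, 32)))
```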

2. RECAP: A Resource-Efficient Method for Adversarial Prompting in Large Language Models

arXiv:2601.15331v1 Announce Type: cross Abstract: The deployment of large language models (LLMs) has raised security concerns due to their susceptibility to producing harmful or policy-violating outputs when exposed to adversarial prompts. While alignment and guardrails mitigate common misuse, t...

Source: arXiv - Machine Learning | 18 hours ago

3. Boundary-Aware Adversarial Filtering for Reliable Diagnosis under Extreme Class Imbalance

arXiv:2511.17629v2 Announce Type: replace Abstract: We study classification under extreme class imbalance where recall and calibration are both critical, for example in medical diagnosis scenarios. We propose AF-SMOTE, a mathematically motivated augmentation framework that first synthesizes mino...

Source: arXiv - Machine Learning | 18 hours ago
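
The abstract describes a two-step idea: synthesize minority samples, then filter them with boundary awareness. As a rough sketch of that pattern (not the paper's AF-SMOTE), the snippet below does SMOTE-style interpolation between minority points and then drops synthetic candidates whose neighborhood is dominated by the majority class.

```python
# Rough "oversample then filter" sketch for extreme class imbalance
# (illustrative only; AF-SMOTE's actual construction is not shown here).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote_then_filter(X, y, minority=1, n_new=200, k=5, max_majority_frac=0.4, seed=0):
    rng = np.random.default_rng(seed)
    X_min = X[y == minority]
    nn_min = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    _, idx = nn_min.kneighbors(X_min)

    # SMOTE-style interpolation between a minority point and one of its neighbors.
    anchors = rng.integers(0, len(X_min), size=n_new)
    neighbors = idx[anchors, rng.integers(1, k + 1, size=n_new)]
    lam = rng.random((n_new, 1))
    synthetic = X_min[anchors] + lam * (X_min[neighbors] - X_min[anchors])

    # Boundary-aware filter: reject synthetic points surrounded by majority samples.
    nn_all = NearestNeighbors(n_neighbors=k).fit(X)
    _, nbr = nn_all.kneighbors(synthetic)
    majority_frac = (y[nbr] != minority).mean(axis=1)
    keep = majority_frac <= max_majority_frac
    return (np.vstack([X, synthetic[keep]]),
            np.concatenate([y, np.full(keep.sum(), minority)]))

X = np.random.default_rng(2).normal(size=(500, 4))
y = (np.random.default_rng(3).random(500) > 0.95).astype(int)   # ~5% minority class
X_aug, y_aug = smote_then_filter(X, y)
```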

4. DDSA: Dual-Domain Strategic Attack for Spatial-Temporal Efficiency in Adversarial Robustness Testing

arXiv:2601.14302v1 Announce Type: cross Abstract: Image transmission and processing systems in resource-critical applications face significant challenges from adversarial perturbations that compromise mission-specific object classification. Current robustness testing methods require excessive co...

Source: arXiv - AI | 1 day ago
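
The excerpt doesn't say how DDSA builds its perturbation, but one common reading of "dual-domain" is a perturbation composed partly in pixel space and partly in the frequency domain. The sketch below is illustrative only: it adds spatial noise plus a low-frequency FFT perturbation and projects back to a small L_inf budget.

```python
# Illustrative dual-domain perturbation (not the paper's DDSA): spatial noise
# plus a low-frequency FFT component, projected onto an L_inf ball.
import numpy as np

def dual_domain_perturb(image, eps=8 / 255, low_freq=4, seed=0):
    rng = np.random.default_rng(seed)
    # Spatial component: uniform noise in the pixel domain.
    spatial = rng.uniform(-eps, eps, size=image.shape)

    # Frequency component: perturb only the lowest-frequency FFT coefficients.
    spectrum = np.fft.fft2(image)
    mask = np.zeros_like(spectrum)
    mask[:low_freq, :low_freq] = 1.0
    freq = np.real(np.fft.ifft2(spectrum + rng.normal(size=spectrum.shape) * mask)) - image

    delta = np.clip(spatial + freq, -eps, eps)   # project onto the L_inf ball
    return np.clip(image + delta, 0, 1)

perturbed = dual_domain_perturb(np.random.default_rng(4).random((32, 32)))
```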

5. How Worst-Case Are Adversarial Attacks? Linking Adversarial and Statistical Robustness

arXiv:2601.14519v1 Announce Type: cross Abstract: Adversarial attacks are widely used to evaluate model robustness, yet their validity as proxies for robustness to random perturbations remains debated. We ask whether an adversarial perturbation provides a representative estimate of robustness un...

Source: arXiv - AI | 1 day ago
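
The question the paper asks, whether worst-case perturbations say anything about robustness to random ones, is easy to probe on a toy model. For a linear classifier the worst-case L_inf perturbation has the closed form delta = -eps * y * sign(w), so the gap between adversarial and random-noise accuracy can be measured directly. The example below is not from the paper; it is just a small worked comparison.

```python
# Worked comparison of adversarial vs. random perturbation robustness for a
# linear classifier, where the worst-case L_inf perturbation is analytic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
y = (X @ rng.normal(size=20) > 0).astype(int)
clf = LogisticRegression(max_iter=1000).fit(X, y)

eps = 0.1
w = clf.coef_[0]
signed_y = 2 * y - 1                                   # map {0, 1} -> {-1, +1}
X_adv = X - eps * signed_y[:, None] * np.sign(w)       # worst case in the L_inf ball
X_rand = X + rng.uniform(-eps, eps, size=X.shape)      # a single random perturbation

print("clean      :", clf.score(X, y))
print("adversarial:", clf.score(X_adv, y))
print("random     :", clf.score(X_rand, y))
```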

6. The Good, the Bad and the Ugly: Meta-Analysis of Watermarks, Transferable Attacks and Adversarial Defenses

arXiv:2410.08864v2 Announce Type: replace-cross Abstract: We formalize and analyze the trade-off between backdoor-based watermarks and adversarial defenses, framing it as an interactive protocol between a verifier and a prover. While previous works have primarily focused on this trade-off, our a...

Source: arXiv - AI | 1 day ago
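
The verifier/prover framing mentioned in the abstract maps onto how backdoor-based watermark checks are usually pictured: the verifier holds a secret trigger set and accepts ownership if the suspected model reproduces the planted labels often enough. The toy below illustrates that interaction only; it is not the paper's formal protocol.

```python
# Toy verifier/prover picture of backdoor-based watermark verification
# (illustrative; not the paper's formalization).
import numpy as np

rng = np.random.default_rng(0)
triggers = rng.normal(size=(20, 8))       # secret trigger inputs held by the verifier
planted = rng.integers(0, 10, size=20)    # labels planted during watermarking

def suspected_model(x_batch):
    # Hypothetical prover-side model: this toy simply reproduces the planted
    # labels, standing in for a genuinely watermarked model.
    return planted

def verify_watermark(model, threshold=0.9):
    match_rate = float((model(triggers) == planted).mean())
    return match_rate >= threshold, match_rate

accepted, rate = verify_watermark(suspected_model)
```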

7. Call2Instruct: Automated Pipeline for Generating Q&A Datasets from Call Center Recordings for LLM Fine-Tuning

arXiv:2601.14263v1 Announce Type: cross Abstract: The adaptation of Large-Scale Language Models (LLMs) to specific domains depends on high-quality fine-tuning datasets, particularly in instructional format (e.g., Question-Answer - Q&A). However, generating these datasets, particularly from u...

Source: arXiv - AI | 1 day ago
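
The paper's pipeline isn't described in this excerpt, but the basic move, turning conversational turns into instruction-style records, can be shown with a toy heuristic: pair each customer turn that looks like a question with the agent turn that follows it and emit JSONL suitable for fine-tuning.

```python
# Toy stand-in for transcript-to-Q&A conversion (not the Call2Instruct pipeline):
# pair question-like customer turns with the following agent turn.
import json

transcript = [
    ("customer", "How do I reset my online banking password?"),
    ("agent", "Use the 'Forgot password' link on the login page and follow the email instructions."),
    ("customer", "Thanks, that worked."),
]

records = []
for i, (speaker, text) in enumerate(transcript[:-1]):
    next_speaker, next_text = transcript[i + 1]
    if speaker == "customer" and text.endswith("?") and next_speaker == "agent":
        records.append({"instruction": text, "response": next_text})

with open("qa_dataset.jsonl", "w", encoding="utf-8") as f:
    for record in records:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```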

8. Federated Transformer-GNN for Privacy-Preserving Brain Tumor Localization with Modality-Level Explainability

arXiv:2601.15042v1 Announce Type: cross Abstract: Deep learning models for brain tumor analysis require large and diverse datasets that are often siloed across healthcare institutions due to privacy regulations. We present a federated learning framework for brain tumor localization that enables ...

Source: arXiv - AI | 1 day ago
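
The privacy argument in federated setups like this one rests on a simple mechanic: each institution trains locally and only model weights leave the site, where they are averaged into a global model. The bare-bones FedAvg loop below shows that mechanic with a plain logistic-regression stand-in; the paper's Transformer-GNN and explainability components are omitted.

```python
# Bare-bones FedAvg sketch: local training on siloed data, then size-weighted
# averaging of model weights. Raw data never leaves a site.
import numpy as np

def local_step(w, X, y, lr=0.1, epochs=20):
    # Plain logistic regression via gradient descent as a stand-in for the real model.
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(0)
sites = [(rng.normal(size=(100, 8)), rng.integers(0, 2, size=100)) for _ in range(3)]

w_global = np.zeros(8)
for _ in range(10):
    local_weights = [local_step(w_global.copy(), X, y) for X, y in sites]
    sizes = np.array([len(y) for _, y in sites], dtype=float)
    # Federated averaging: weight each site's update by its dataset size.
    w_global = np.average(local_weights, axis=0, weights=sizes)
```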


About This Digest

This digest is automatically curated from leading AI and tech news sources, filtered for relevance to AI security and the ML ecosystem. Stories are scored and ranked based on their relevance to model security, supply chain safety, and the broader AI landscape.
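
The exact scoring model isn't documented here; as a minimal illustration of the idea, a keyword-weighted relevance score could rank candidate stories along these lines.

```python
# Minimal illustration of keyword-weighted relevance scoring and ranking
# (an assumption for illustration; not the digest's actual scoring model).
SECURITY_KEYWORDS = {"adversarial": 3, "jailbreak": 3, "watermark": 2,
                     "supply chain": 2, "robustness": 1, "privacy": 1}

def relevance_score(title: str, summary: str) -> int:
    text = f"{title} {summary}".lower()
    return sum(weight for kw, weight in SECURITY_KEYWORDS.items() if kw in text)

stories = [
    {"title": "New jailbreak attack on LVLMs", "summary": "adversarial prompts bypass guardrails"},
    {"title": "GPU price update", "summary": "market news"},
]
ranked = sorted(stories, key=lambda s: relevance_score(s["title"], s["summary"]), reverse=True)
```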

Want to see how your favorite models score on security? Check our model dashboard for trust scores on the top 500 HuggingFace models.