← Back to Blog

AI News Digest: January 13, 2026

Daily roundup of AI and ML news - 8 curated stories on security, research, and industry developments.

Here's your daily roundup of the most relevant AI and ML news for January 13, 2026. Today's digest includes 1 security-focused story. We're also covering 7 research developments. Click through to read the full articles from our curated sources.

Security & Safety

1. Anthropic Launches Claude AI for Healthcare with Secure Health Record Access

Anthropic has become the latest Artificial intelligence (AI) company to announce a new suite of features that allows users of its Claude platform to better understand their health information. Under an initiative called Claude for Healthcare, the company said U.S. subscribers of Claude Pro and Ma...

Source: The Hacker News (Security) | 21 hours ago

Research & Papers

2. Paraphrasing Adversarial Attack on LLM-as-a-Reviewer

arXiv:2601.06884v1 Announce Type: cross Abstract: The use of large language models (LLMs) in peer review systems has attracted growing attention, making it essential to examine their potential vulnerabilities. Prior attacks rely on prompt injection, which alters manuscript content and conflates ...

Source: arXiv - Machine Learning | 1 hours ago

3. Stress Testing Machine Learning at $10^{10}$ Scale: A Comprehensive Study of Adversarial Robustness on Algebraically Structured Integer Streams

arXiv:2601.06117v1 Announce Type: new Abstract: This paper presents a large-scale stress test of machine learning systems using structured mathematical data as a benchmark. We evaluate the robustness of tree-based classifiers at an unprecedented scale, utilizing ten billion deterministic samples...

Source: arXiv - Machine Learning | 1 hours ago

4. Overcoming the Retrieval Barrier: Indirect Prompt Injection in the Wild for LLM Systems

arXiv:2601.07072v1 Announce Type: cross Abstract: Large language models (LLMs) increasingly rely on retrieving information from external corpora. This creates a new attack surface: indirect prompt injection (IPI), where hidden instructions are planted in the corpora and hijack model behavior onc...

Source: arXiv - AI | 1 hours ago

5. Adversarial Attacks on Medical Hyperspectral Imaging Exploiting Spectral-Spatial Dependencies and Multiscale Features

arXiv:2601.07056v1 Announce Type: cross Abstract: Medical hyperspectral imaging (HSI) enables accurate disease diagnosis by capturing rich spectral-spatial tissue information, but recent advances in deep learning have exposed its vulnerability to adversarial attacks. In this work, we identify tw...

Source: arXiv - AI | 1 hours ago

6. TabImpute: Accurate and Fast Zero-Shot Missing-Data Imputation with a Pre-Trained Transformer

arXiv:2510.02625v3 Announce Type: replace Abstract: Missing data is a pervasive problem in tabular settings. Existing solutions range from simple averaging to complex generative adversarial networks, but due to each method's large variance in performance across real-world domains and time-consum...

Source: arXiv - Machine Learning | 1 hours ago

7. Accelerating Targeted Hard-Label Adversarial Attacks in Low-Query Black-Box Settings

arXiv:2505.16313v3 Announce Type: replace-cross Abstract: Deep neural networks for image classification remain vulnerable to adversarial examples -- small, imperceptible perturbations that induce misclassifications. In black-box settings, where only the final prediction is accessible, crafting t...

Source: arXiv - Machine Learning | 1 hours ago

8. Structure-Aware Diversity Pursuit as an AI Safety Strategy against Homogenization

arXiv:2601.06116v1 Announce Type: new Abstract: Generative AI models reproduce the biases in the training data and can further amplify them through mode collapse. We refer to the resulting harmful loss of diversity as homogenization. Our position is that homogenization should be a primary concer...

Source: arXiv - AI | 1 hours ago


About This Digest

This digest is automatically curated from leading AI and tech news sources, filtered for relevance to AI security and the ML ecosystem. Stories are scored and ranked based on their relevance to model security, supply chain safety, and the broader AI landscape.

Want to see how your favorite models score on security? Check our model dashboard for trust scores on the top 500 HuggingFace models.