
AI News Digest: January 14, 2026

Daily roundup of AI and ML news - 8 curated stories on security, research, and industry developments.

Here's your daily roundup of the most relevant AI and ML news for January 14, 2026. Today's digest includes 2 security-focused stories and 6 research developments. Click through to read the full articles from our curated sources.

Security & Safety

1. Anthropic Launches Claude AI for Healthcare with Secure Health Record Access

Anthropic has become the latest artificial intelligence (AI) company to announce a new suite of features that allows users of its Claude platform to better understand their health information. Under an initiative called Claude for Healthcare, the company said U.S. subscribers of Claude Pro and Ma...

Source: The Hacker News (Security) | 1 day ago

2. n8n Supply Chain Attack Abuses Community Nodes to Steal OAuth Tokens

Threat actors have been observed uploading a set of eight packages on the npm registry that masqueraded as integrations targeting the n8n workflow automation platform to steal developers' OAuth credentials. One such package, named "n8n-nodes-hfgjf-irtuinvcm-lasdqewriit," mimics a Google Ads integ...

Source: The Hacker News (Security) | 1 day ago

Research & Papers

3. A New Formulation for Zeroth-Order Optimization of Adversarial EXEmples in Malware Detection

arXiv:2405.14519v2 Announce Type: replace Abstract: Machine learning malware detectors are vulnerable to adversarial EXEmples, i.e., carefully-crafted Windows programs tailored to evade detection. Unlike other adversarial problems, attacks in this context must be functionality-preserving, a cons...

Source: arXiv - Machine Learning | 1 hour ago

4. CausAdv: A Causal-based Framework for Detecting Adversarial Examples

arXiv:2411.00839v2 Announce Type: replace Abstract: Deep learning has led to tremendous success in computer vision, largely due to Convolutional Neural Networks (CNNs). However, CNNs have been shown to be vulnerable to crafted adversarial perturbations. This vulnerability of adversarial examples...

Source: arXiv - Machine Learning | 1 hour ago

5. Measuring and Fostering Peace through Machine Learning and Artificial Intelligence

arXiv:2601.05232v2 Announce Type: replace-cross Abstract: We used machine learning and artificial intelligence: 1) to measure levels of peace in countries from news and social media and 2) to develop on-line tools that promote peace by helping users better understand their own media diet. For ne...

Source: arXiv - Machine Learning | 1 hour ago

6. IGAN: A New Inception-based Model for Stable and High-Fidelity Image Synthesis Using Generative Adversarial Networks

arXiv:2601.08332v1 Announce Type: cross Abstract: Generative Adversarial Networks (GANs) face a significant challenge of striking an optimal balance between high-quality image generation and training stability. Recent techniques, such as DCGAN, BigGAN, and StyleGAN, improve visual fidelity; howe...

Source: arXiv - AI | 1 hour ago

7. QueryIPI: Query-agnostic Indirect Prompt Injection on Coding Agents

arXiv:2510.23675v2 Announce Type: replace-cross Abstract: Modern coding agents integrated into IDEs orchestrate powerful tools and high-privilege system access, creating a high-stakes attack surface. Prior work on Indirect Prompt Injection (IPI) is mainly query-specific, requiring particular use...

Source: arXiv - AI | 1 hour ago

8. Synergy over Discrepancy: A Partition-Based Approach to Multi-Domain LLM Fine-Tuning

arXiv:2511.07198v3 Announce Type: replace Abstract: Large language models (LLMs) demonstrate impressive generalization abilities, yet adapting them effectively across multiple heterogeneous domains remains challenging due to inter-domain interference. To overcome this challenge, we propose a par...

Source: arXiv - Machine Learning | 1 hour ago


About This Digest

This digest is automatically curated from leading AI and tech news sources, filtered for relevance to AI security and the ML ecosystem. Stories are scored and ranked based on their relevance to model security, supply chain safety, and the broader AI landscape.
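To give a feel for what "scored and ranked based on relevance" can mean in practice, here is a minimal, hypothetical sketch of keyword-weighted scoring. The term lists, weights, and title bonus below are illustrative assumptions for this post, not the digest's actual algorithm.

```python
# Hypothetical relevance scorer: weighted keyword hits, with a bonus
# when a term appears in the story title. Terms and weights are
# illustrative, not the digest's real configuration.

SECURITY_TERMS = {"supply chain": 3, "prompt injection": 3, "oauth": 2,
                  "adversarial": 2, "malware": 2}
ML_TERMS = {"llm": 1, "fine-tuning": 1, "gan": 1, "machine learning": 1}


def score_story(title: str, summary: str) -> int:
    """Score a story by weighted keyword hits in its title and summary."""
    text = f"{title} {summary}".lower()
    score = 0
    for terms, title_bonus in ((SECURITY_TERMS, 2), (ML_TERMS, 1)):
        for term, weight in terms.items():
            if term in text:
                score += weight
                if term in title.lower():  # title hits count extra
                    score += title_bonus
    return score


stories = [
    ("A Partition-Based Approach to Multi-Domain LLM Fine-Tuning",
     "adapting large language models across heterogeneous domains"),
    ("n8n Supply Chain Attack Abuses Community Nodes",
     "npm packages used to steal developers' OAuth credentials"),
]

# Highest-scoring stories surface first in the digest.
ranked = sorted(stories, key=lambda s: score_story(*s), reverse=True)
```

With these toy weights, the security story outranks the research paper, matching the digest's bias toward model and supply chain security.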

Want to see how your favorite models score on security? Check our model dashboard for trust scores on the top 500 HuggingFace models.