← Back to Blog

AI News Digest: January 12, 2026

Daily roundup of AI and ML news - 8 curated stories on security, research, and industry developments.

Here's your daily roundup of the most relevant AI and ML news for January 12, 2026. We're also covering 7 research developments. Click through to read the full articles from our curated sources.

Research & Papers

1. PromptScreen: Efficient Jailbreak Mitigation Using Semantic Linear Classification in a Multi-Staged Pipeline

arXiv:2512.19011v2 Announce Type: replace-cross Abstract: Prompt injection and jailbreaking attacks pose persistent security challenges to large language model (LLM)-based systems. We present PromptScreen, an efficient and systematically evaluated defense architecture that mitigates these threat...

Source: arXiv - Machine Learning | 1 hours ago

2. Spectral Masking and Interpolation Attack (SMIA): A Black-box Adversarial Attack against Voice Authentication and Anti-Spoofing Systems

arXiv:2509.07677v4 Announce Type: replace-cross Abstract: Voice Authentication Systems (VAS) use unique vocal characteristics for verification. They are increasingly integrated into high-security sectors such as banking and healthcare. Despite their improvements using deep learning, they face se...

Source: arXiv - AI | 1 hours ago

3. The Echo Chamber Multi-Turn LLM Jailbreak

arXiv:2601.05742v1 Announce Type: cross Abstract: The availability of Large Language Models (LLMs) has led to a new generation of powerful chatbots that can be developed at relatively low cost. As companies deploy these tools, security challenges need to be addressed to prevent financial loss an...

Source: arXiv - AI | 1 hours ago

4. Generalizable Blood Pressure Estimation from Multi-Wavelength PPG Using Curriculum-Adversarial Learning

arXiv:2509.12518v1 Announce Type: cross Abstract: Accurate and generalizable blood pressure (BP) estimation is vital for the early detection and management of cardiovascular diseases. In this study, we enforce subject-level data splitting on a public multi-wavelength photoplethysmography (PPG) d...

Source: arXiv - Machine Learning | 1 hours ago

5. Transferability of Adversarial Attacks in Video-based MLLMs: A Cross-modal Image-to-Video Approach

arXiv:2501.01042v4 Announce Type: replace-cross Abstract: Video-based multimodal large language models (V-MLLMs) have shown vulnerability to adversarial examples in video-text multimodal tasks. However, the transferability of adversarial videos to unseen models - a common and practical real-worl...

Source: arXiv - Machine Learning | 1 hours ago

6. HogVul: Black-box Adversarial Code Generation Framework Against LM-based Vulnerability Detectors

arXiv:2601.05587v1 Announce Type: cross Abstract: Recent advances in software vulnerability detection have been driven by Language Model (LM)-based approaches. However, these models remain vulnerable to adversarial attacks that exploit lexical and syntax perturbations, allowing critical flaws to...

Source: arXiv - AI | 1 hours ago

7. Hi-ZFO: Hierarchical Zeroth- and First-Order LLM Fine-Tuning via Importance-Guided Tensor Selection

arXiv:2601.05501v1 Announce Type: new Abstract: Fine-tuning large language models (LLMs) using standard first-order (FO) optimization often drives training toward sharp, poorly generalizing minima. Conversely, zeroth-order (ZO) methods offer stronger exploratory behavior without relying on expli...

Source: arXiv - Machine Learning | 1 hours ago

Tech & Development

8. Ask HN: Cursor (LLM) Costs

Hey guys just a simple question to all of you who heavily leverage LLM coding daily. I essentially have cursor run in 1-3 projects in parallel all day, asking for the next steps implementation like every ~10 +- 5 minutes, and with Claude opus 4.5 it just works for me naturally without much discus...

Source: Hacker News - AI | 6 hours ago


About This Digest

This digest is automatically curated from leading AI and tech news sources, filtered for relevance to AI security and the ML ecosystem. Stories are scored and ranked based on their relevance to model security, supply chain safety, and the broader AI landscape.

Want to see how your favorite models score on security? Check our model dashboard for trust scores on the top 500 HuggingFace models.