Here's your daily roundup of the most relevant AI and ML news for January 15, 2026, covering 8 research developments. Click through to read the full articles from our curated sources.
Research & Papers
1. From Adversarial Poetry to Adversarial Tales: An Interpretability Research Agenda
arXiv:2601.08837v1 Announce Type: cross Abstract: Safety mechanisms in LLMs remain vulnerable to attacks that reframe harmful requests through culturally coded structures. We introduce Adversarial Tales, a jailbreak technique that embeds harmful content within cyberpunk narratives and prompts mo...
Source: arXiv - Machine Learning | 1 hour ago
2. MORE: Multi-Objective Adversarial Attacks on Speech Recognition
arXiv:2601.01852v2 Announce Type: replace-cross Abstract: The emergence of large-scale automatic speech recognition (ASR) models such as Whisper has greatly expanded their adoption across diverse real-world applications. Ensuring robustness against even minor input perturbations is therefore cri...
Source: arXiv - Machine Learning | 1 hour ago
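The truncated abstract doesn't show how MORE combines its objectives, but the general pattern of a multi-objective adversarial perturbation is easy to illustrate. Below is a minimal sketch, assuming a simple weighted-sum scalarization of an attack objective and an imperceptibility penalty against a stand-in differentiable model; the toy model, the losses, and the `lambda_percept` weight are all assumptions, not the paper's method.

```python
# Illustrative only: a weighted-sum multi-objective adversarial perturbation
# against a stand-in differentiable "ASR" model. Nothing here reproduces the
# MORE algorithm; model, losses, and trade-off weight are assumptions.
import torch

torch.manual_seed(0)

# Stand-in model: maps a waveform to per-frame logits over a toy vocabulary.
model = torch.nn.Sequential(
    torch.nn.Conv1d(1, 8, kernel_size=9, stride=4),
    torch.nn.ReLU(),
    torch.nn.Conv1d(8, 5, kernel_size=9, stride=4),  # 5 "token" classes
)
for p in model.parameters():
    p.requires_grad_(False)  # only the perturbation is optimized

waveform = torch.randn(1, 1, 16000)        # 1 s of fake 16 kHz audio
target = torch.zeros(1, dtype=torch.long)  # steer frames toward class 0
delta = torch.zeros_like(waveform, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-3)

lambda_percept = 10.0  # assumed trade-off weight between the two objectives

for step in range(200):
    logits = model(waveform + delta)              # (1, 5, frames)
    frames = logits.permute(0, 2, 1).reshape(-1, 5)
    # Objective 1: push the transcription toward the attacker's target.
    attack_loss = torch.nn.functional.cross_entropy(
        frames, target.expand(frames.size(0)))
    # Objective 2: keep the perturbation quiet (L2 proxy for imperceptibility).
    percept_loss = delta.pow(2).mean()
    loss = attack_loss + lambda_percept * percept_loss  # weighted-sum scalarization
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Weighted-sum scalarization is the simplest way to trade off competing objectives; dedicated multi-objective methods typically negotiate the trade-off more carefully, e.g., with Pareto-based updates.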
3. QueryIPI: Query-agnostic Indirect Prompt Injection on Coding Agents
arXiv:2510.23675v3 Announce Type: replace-cross Abstract: Modern coding agents integrated into IDEs orchestrate powerful tools and high-privilege system access, creating a high-stakes attack surface. Prior work on Indirect Prompt Injection (IPI) is mainly query-specific, requiring particular use...
Source: arXiv - AI | 1 hour ago
4. An Attention Infused Deep Learning System with Grad-CAM Visualization for Early Screening of Glaucoma
arXiv:2505.17808v2 Announce Type: replace-cross Abstract: This research work reveals the strengths of intertwining a deep custom convolutional neural network with a disruptive Vision Transformer, both fused together with a radical Cross-Attention module. Here, two high-yielding datasets for arti...
Source: arXiv - AI | 1 hour ago
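The abstract names a CNN branch and a Vision Transformer branch fused by cross-attention. As a rough illustration of what such a fusion module can look like, here is a minimal sketch in which CNN feature-map locations attend to ViT tokens; the dimensions, the attention direction, and the residual/LayerNorm choices are assumptions, not the paper's architecture.

```python
# A minimal sketch of cross-attention fusion between a CNN feature map and
# ViT tokens, in the spirit of the abstract. Shapes and design choices are
# illustrative assumptions.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, cnn_feats, vit_tokens):
        # cnn_feats: (B, C, H, W) from the CNN branch -> (B, H*W, C) queries
        q = cnn_feats.flatten(2).transpose(1, 2)
        # ViT tokens act as keys/values so each CNN location can pull in
        # global context from the transformer branch.
        fused, _ = self.attn(query=q, key=vit_tokens, value=vit_tokens)
        return self.norm(q + fused)  # residual connection, then LayerNorm

cnn_feats = torch.randn(2, 256, 14, 14)  # hypothetical CNN output
vit_tokens = torch.randn(2, 197, 256)    # hypothetical ViT tokens (CLS + 196)
out = CrossAttentionFusion()(cnn_feats, vit_tokens)
print(out.shape)  # torch.Size([2, 196, 256])
```

For the Grad-CAM part, one would typically hook the gradients at the last convolutional layer of the CNN branch to produce the class-activation heatmap.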
5. VIGIL: Defending LLM Agents Against Tool Stream Injection via Verify-Before-Commit
arXiv:2601.05755v2 Announce Type: replace-cross Abstract: LLM agents operating in open environments face escalating risks from indirect prompt injection, particularly within the tool stream where manipulated metadata and runtime feedback hijack execution flow. Existing defenses encounter a criti...
Source: arXiv - AI | 1 hour ago
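The snippet cuts off before VIGIL's mechanism, but the verify-before-commit idea itself is straightforward: stage each proposed tool call, check it against a policy, and execute only if the check passes. The sketch below uses a hypothetical allowlist and toy injection markers; none of it reproduces VIGIL's actual verifier.

```python
# A minimal sketch of the generic verify-before-commit pattern: a proposed
# tool call is staged and checked before execution. The checks, allowlist,
# and markers are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class ToolCall:
    name: str
    args: dict

ALLOWED_TOOLS = {"read_file", "search_docs"}          # assumed allowlist
SUSPICIOUS = ("ignore previous", "new instructions")  # toy injection markers

def verify(call: ToolCall) -> bool:
    """Reject calls outside the allowlist or carrying injection-like text."""
    if call.name not in ALLOWED_TOOLS:
        return False
    blob = " ".join(str(v) for v in call.args.values()).lower()
    return not any(marker in blob for marker in SUSPICIOUS)

def commit(call: ToolCall, registry: dict):
    """Execute only after verification succeeds (verify-before-commit)."""
    if not verify(call):
        raise PermissionError(f"blocked unverified tool call: {call.name}")
    return registry[call.name](**call.args)

registry = {"read_file": lambda path: f"<contents of {path}>"}
print(commit(ToolCall("read_file", {"path": "notes.txt"}), registry))
```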
6. Comprehensive Machine Learning Benchmarking for Fringe Projection Profilometry with Photorealistic Synthetic Data
arXiv:2601.08900v1 Announce Type: cross Abstract: Machine learning approaches for fringe projection profilometry (FPP) are hindered by the lack of large, diverse datasets and comprehensive benchmarking protocols. This paper introduces the first open-source, photorealistic synthetic dataset for F...
Source: arXiv - Machine Learning | 1 hour ago
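For readers unfamiliar with FPP: the technique projects phase-shifted sinusoidal fringe patterns onto a scene and recovers depth from the observed phase. A minimal sketch of synthetic N-step fringe generation and the standard wrapped-phase recovery follows; the resolution, fringe frequency, and contrast values are arbitrary placeholders, not the paper's dataset parameters.

```python
# A small sketch of N-step phase-shifted fringe generation, the standard
# input for FPP pipelines: I_n = A + B*cos(2*pi*f*x + 2*pi*n/N).
import numpy as np

H, W, N = 480, 640, 4        # image size and number of phase shifts (assumed)
A, B, freq = 0.5, 0.4, 8.0   # mean intensity, modulation, fringes per image

x = np.linspace(0.0, 1.0, W)
patterns = np.stack([
    A + B * np.cos(2 * np.pi * freq * x + 2 * np.pi * n / N)
    for n in range(N)
])                                                   # (N, W)
patterns = np.broadcast_to(patterns[:, None, :], (N, H, W))

# Wrapped phase recovery by the standard N-step least-squares formula.
n = np.arange(N).reshape(N, 1, 1)
num = np.sum(patterns * np.sin(2 * np.pi * n / N), axis=0)
den = np.sum(patterns * np.cos(2 * np.pi * n / N), axis=0)
phase = -np.arctan2(num, den)                        # wrapped to (-pi, pi]
print(patterns.shape, phase.shape)                   # (4, 480, 640) (480, 640)
```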
7. RIFT: Repurposing Negative Samples via Reward-Informed Fine-Tuning
arXiv:2601.09253v1 Announce Type: new Abstract: While Supervised Fine-Tuning (SFT) and Rejection Sampling Fine-Tuning (RFT) are standard for LLM alignment, they either rely on costly expert data or discard valuable negative samples, leading to data inefficiency. To address this, we propose Rewar...
Source: arXiv - Machine Learning | 1 hour ago
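The abstract is cut off before the RIFT objective, but the underlying idea of keeping negative samples can be illustrated with a generic reward-weighted (advantage-style) sequence loss: completions above a baseline are reinforced, those below are actively suppressed rather than discarded. The weighting below is an illustrative choice, not RIFT's actual formulation.

```python
# A generic sketch of reward-weighted fine-tuning that uses negative samples
# instead of discarding them (as rejection sampling does). Advantage-style
# weighting is an assumption, not the paper's loss.
import torch
import torch.nn.functional as F

def reward_weighted_loss(logits, tokens, rewards, baseline):
    """
    logits:   (B, T, V) model outputs for B sampled completions
    tokens:   (B, T)    the sampled token ids
    rewards:  (B,)      scalar reward per completion
    baseline: float     e.g. batch mean reward; negatives get pushed down
    """
    logp = F.log_softmax(logits, dim=-1)
    token_logp = logp.gather(-1, tokens.unsqueeze(-1)).squeeze(-1)  # (B, T)
    seq_logp = token_logp.sum(dim=-1)                               # (B,)
    advantage = rewards - baseline       # > 0 reinforces, < 0 suppresses
    return -(advantage * seq_logp).mean()

B, T, V = 4, 16, 1000
logits = torch.randn(B, T, V, requires_grad=True)
tokens = torch.randint(V, (B, T))
rewards = torch.tensor([1.0, 0.2, -0.5, 0.8])
loss = reward_weighted_loss(logits, tokens, rewards, rewards.mean().item())
loss.backward()
```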
8. Exploring Fine-Tuning for Tabular Foundation Models
arXiv:2601.09654v1 Announce Type: new Abstract: Tabular Foundation Models (TFMs) have recently shown strong in-context learning capabilities on structured data, achieving zero-shot performance comparable to traditional machine learning methods. We find that zero-shot TFMs already achieve strong ...
Source: arXiv - Machine Learning | 1 hour ago
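The abstract suggests comparing zero-shot TFMs with fine-tuned ones. One common fine-tuning recipe, shown as a hedged sketch below, freezes the pretrained backbone and trains only a new task head; the `TabEncoder` stand-in and all hyperparameters are hypothetical and do not reflect any specific TFM's interface.

```python
# A hedged sketch of one common fine-tuning recipe for a pretrained tabular
# model: freeze the backbone, train only a new task head. The backbone here
# is a stand-in, not a real tabular foundation model.
import torch
import torch.nn as nn

class TabEncoder(nn.Module):
    """Stand-in for a pretrained tabular backbone: rows -> embeddings."""
    def __init__(self, n_features=10, dim=64):
        super().__init__()
        self.proj = nn.Linear(n_features, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x):                  # x: (B, n_features)
        h = self.proj(x).unsqueeze(1)      # each row as a 1-token sequence
        return self.encoder(h).squeeze(1)  # (B, dim)

backbone = TabEncoder()                    # pretend this is pretrained
for p in backbone.parameters():
    p.requires_grad = False                # freeze the foundation model

head = nn.Linear(64, 2)                    # new task-specific head
opt = torch.optim.AdamW(head.parameters(), lr=1e-3)

X, y = torch.randn(128, 10), torch.randint(2, (128,))
for epoch in range(5):
    logits = head(backbone(X))
    loss = nn.functional.cross_entropy(logits, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```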
About This Digest
This digest is automatically curated from leading AI and tech news sources, filtered for relevance to AI security and the ML ecosystem. Stories are scored and ranked based on their relevance to model security, supply chain safety, and the broader AI landscape.
Want to see how your favorite models score on security? Check our model dashboard for trust scores on the top 500 HuggingFace models.