Here's your daily roundup of the most relevant AI and ML news for January 01, 2026. Today's digest includes one security-focused story and seven research developments. Click through to read the full articles from our curated sources.
Security & Safety
1. Trust Wallet Chrome Extension Hack Drains $8.5M via Shai-Hulud Supply Chain Attack
Trust Wallet on Tuesday revealed that the second iteration of the Shai-Hulud (aka Sha1-Hulud) supply chain outbreak in November 2025 was likely responsible for the hack of its Google Chrome extension, ultimately resulting in the theft of approximately $8.5 million in assets. "Our Developer GitHub...
Source: The Hacker News (Security) | 13 hours ago
Research & Papers
2. Adversarial Lens: Exploiting Attention Layers to Generate Adversarial Examples for Evaluation
arXiv:2512.23837v1 Announce Type: cross Abstract: Recent advances in mechanistic interpretability suggest that intermediate attention layers encode token-level hypotheses that are iteratively refined toward the final output. In this work, we exploit this property to generate adversarial examples...
Source: arXiv - Machine Learning | 1 hour ago
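The excerpt describes reading adversarial signal out of intermediate attention layers. Purely as an illustration, here is a minimal logit-lens-style sketch: an intermediate hidden state is projected through the unembedding head, and a single FGSM-style step on the input embeddings pushes that intermediate hypothesis away from the clean prediction. The model choice (gpt2), the mid-layer readout, and the FGSM step are all assumptions; the paper's actual construction is not in the excerpt.

```python
# Hedged sketch, NOT the paper's method: logit-lens readout of an
# intermediate layer plus one FGSM-style step on input embeddings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")            # placeholder model
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

ids = tok("The capital of France is", return_tensors="pt").input_ids
embeds = model.get_input_embeddings()(ids).detach().requires_grad_(True)

out = model(inputs_embeds=embeds, output_hidden_states=True)
mid = out.hidden_states[len(out.hidden_states) // 2]       # intermediate layer
lens_logits = model.lm_head(model.transformer.ln_f(mid))   # logit-lens readout

target = out.logits[0, -1].argmax()  # the clean model's next-token prediction
loss = torch.nn.functional.cross_entropy(lens_logits[0, -1:], target.unsqueeze(0))
loss.backward()                      # gradient through the intermediate hypothesis

adv = embeds + 0.01 * embeds.grad.sign()  # push the hypothesis off-target
with torch.no_grad():
    adv_pred = model(inputs_embeds=adv).logits[0, -1].argmax()
print(tok.decode([adv_pred.item()]))
```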
3. Projection-based Adversarial Attack using Physics-in-the-Loop Optimization for Monocular Depth Estimation
arXiv:2512.24792v1 Announce Type: cross Abstract: Deep neural networks (DNNs) remain vulnerable to adversarial attacks that cause misclassification when specific perturbations are added to input images. This vulnerability also threatens the reliability of DNN-based monocular depth estimation (MD...
Source: arXiv - Machine Learning | 1 hour ago
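Because the projector-camera-scene pipeline cannot be differentiated through, physics-in-the-loop optimization typically treats it as a black box. The sketch below uses simple random search as a stand-in; `project`, `capture`, and `depth_model` are hypothetical callables, and the paper's actual optimizer is not described in the excerpt.

```python
# Hedged sketch of one physics-in-the-loop attack round. The physical
# pipeline (project -> scene -> capture) is treated as a black box,
# so a naive random search stands in for the paper's optimizer.
import numpy as np

def attack_round(pattern, depth_model, project, capture, sigma=0.05, trials=8):
    def score(p):
        project(p)                      # display the pattern into the scene
        img = capture()                 # photograph the perturbed scene
        return depth_model(img).mean()  # attack goal: inflate predicted depth

    best, best_score = pattern, score(pattern)
    for _ in range(trials):
        cand = np.clip(pattern + sigma * np.random.randn(*pattern.shape), 0, 1)
        if (s := score(cand)) > best_score:
            best, best_score = cand, s  # keep candidates that fool the model more
    return best, best_score
```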
4. RAJ-PGA: Reasoning-Activated Jailbreak and Principle-Guided Alignment Framework for Large Reasoning Models
arXiv:2508.12897v2 Announce Type: replace Abstract: Large Reasoning Models (LRMs) face a distinct safety vulnerability: their internal reasoning chains may generate harmful content even when the final output appears benign. To address this overlooked risk, we first propose a novel attack paradig...
Source: arXiv - AI | 1 hour ago
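The core risk here is that a reasoning chain can be harmful even when the final answer looks benign. Purely as an illustration of the defensive half of the framework, the sketch below gates the final output on a screen of every intermediate reasoning step; `generate_with_reasoning` and `score_harm` are hypothetical stand-ins, not the paper's API.

```python
# Hedged sketch of reasoning-chain gating (the alignment side only).
# Both model.generate_with_reasoning() and score_harm() are
# hypothetical placeholders for whatever the paper actually uses.
def guarded_generate(model, prompt, score_harm, threshold=0.5):
    chain, answer = model.generate_with_reasoning(prompt)
    # Screen every intermediate step, not just the final output.
    if any(score_harm(step) > threshold for step in chain):
        return "[withheld: reasoning chain violated safety principles]"
    return answer
```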
5. TabMixNN: A Unified Deep Learning Framework for Structural Mixed Effects Modeling on Tabular Data
arXiv:2512.23787v1 Announce Type: new Abstract: We present TabMixNN, a flexible PyTorch-based deep learning framework that synthesizes classical mixed-effects modeling with modern neural network architectures for tabular data analysis. TabMixNN addresses the growing need for methods that can han...
Source: arXiv - Machine Learning | 1 hour ago
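For readers unfamiliar with mixed-effects modeling, the basic combination the abstract gestures at is easy to sketch: a shared network for the fixed effects plus a learned per-group random intercept. The architecture below is illustrative only; TabMixNN's actual design is not described in the excerpt.

```python
# Hedged sketch of a neural mixed-effects model: fixed effects f(x)
# from a shared MLP plus a per-group random intercept b_g. Not the
# actual TabMixNN architecture.
import torch
import torch.nn as nn

class MixedEffectsNet(nn.Module):
    def __init__(self, n_features, n_groups, hidden=32):
        super().__init__()
        self.fixed = nn.Sequential(                  # fixed effects f(x)
            nn.Linear(n_features, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.b = nn.Embedding(n_groups, 1)           # random intercept b_g

    def forward(self, x, group_ids):
        return self.fixed(x).squeeze(-1) + self.b(group_ids).squeeze(-1)

# A ridge penalty on the intercepts plays the role of the Gaussian
# prior on random effects in classical mixed models:
#   loss = mse(pred, y) + lam * model.b.weight.pow(2).mean()
```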
6. Enhancing LLM-Based Neural Network Generation: Few-Shot Prompting and Efficient Validation for Automated Architecture Design
arXiv:2512.24120v1 Announce Type: cross Abstract: Automated neural network architecture design remains a significant challenge in computer vision. Task diversity and computational constraints require both effective architectures and efficient search methods. Large Language Models (LLMs) present ...
Source: arXiv - AI | 1 hour ago
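The workflow the abstract implies (prompt an LLM for an architecture, then validate it cheaply before committing to training) can be sketched in a few lines. The prompt format, the `llm_generate` call, and the spec grammar below are all invented for illustration.

```python
# Hedged sketch: few-shot prompt an LLM for an architecture spec, then
# validate cheaply by instantiating it and running one forward pass.
# llm_generate() and the spec format are hypothetical.
import torch
import torch.nn as nn

FEW_SHOT_PROMPT = """You design PyTorch image classifiers.
Example: input 3x32x32, 10 classes -> [("conv", 32), ("conv", 64), ("fc", 10)]
Example: input 1x28x28, 10 classes -> [("conv", 16), ("fc", 10)]
Now: input 3x32x32, 100 classes ->"""

def build(spec, in_ch=3):
    layers, ch = [], in_ch
    for kind, width in spec:
        if kind == "conv":
            layers += [nn.Conv2d(ch, width, 3, padding=1), nn.ReLU()]
            ch = width
        elif kind == "fc":
            layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(ch, width)]
    return nn.Sequential(*layers)

def validate(spec):
    """Cheap gate: reject specs that fail to instantiate or shape-check."""
    try:
        build(spec)(torch.randn(1, 3, 32, 32))
        return True
    except Exception:
        return False

# spec = parse(llm_generate(FEW_SHOT_PROMPT))  # hypothetical LLM call
# if validate(spec): train(spec)               # only now pay for training
```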
7. Privacy-Preserving Semantic Communications via Multi-Task Learning and Adversarial Perturbations
arXiv:2512.24452v1 Announce Type: cross Abstract: Semantic communication conveys task-relevant meaning rather than focusing solely on message reconstruction, improving bandwidth efficiency and robustness for next-generation wireless systems. However, learned semantic representations can still l...
Source: arXiv - Machine Learning | 1 hour ago
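One common way to express "useful for the task, useless to an eavesdropper" is an adversarial multi-task objective. The sketch below uses gradient reversal as a stand-in; the paper's actual mechanism (explicit adversarial perturbations, per the title) may well differ.

```python
# Hedged sketch: train the semantic encoder so a task head succeeds
# while an adversary head recovering a private attribute fails.
# Gradient reversal is used here as a stand-in mechanism.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x
    @staticmethod
    def backward(ctx, g):
        return -g  # flip gradients flowing back into the encoder

encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 16))
task_head = nn.Linear(16, 10)  # intended semantic task
adv_head = nn.Linear(16, 2)    # eavesdropper's private attribute

def loss_fn(x, y_task, y_private):
    z = encoder(x)  # transmitted semantic representation
    task_loss = F.cross_entropy(task_head(z), y_task)
    # The adversary head learns to succeed; reversed gradients teach
    # the encoder to make it fail.
    adv_loss = F.cross_entropy(adv_head(GradReverse.apply(z)), y_private)
    return task_loss + adv_loss
```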
8. ALF: Advertiser Large Foundation Model for Multi-Modal Advertiser Understanding
arXiv:2504.18785v3 Announce Type: replace Abstract: We present ALF (Advertiser Large Foundation model), a multi-modal transformer architecture for understanding advertiser behavior and intent across text, image, video, and structured data modalities. Through contrastive learning and multi-task o...
Source: arXiv - Machine Learning | 1 hour ago
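The abstract's mention of contrastive learning across modalities maps naturally onto a CLIP-style symmetric InfoNCE loss, sketched below. ALF's actual objective and encoders are not described in the excerpt.

```python
# Hedged sketch of a cross-modal contrastive objective: symmetric
# InfoNCE over a batch of paired embeddings. Illustrative only.
import torch
import torch.nn.functional as F

def contrastive_loss(text_emb, image_emb, temperature=0.07):
    t = F.normalize(text_emb, dim=-1)
    v = F.normalize(image_emb, dim=-1)
    logits = t @ v.t() / temperature   # pairwise similarities
    labels = torch.arange(len(t))      # matched pairs sit on the diagonal
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.t(), labels)) / 2
```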
About This Digest
This digest is automatically curated from leading AI and tech news sources, filtered for relevance to AI security and the ML ecosystem. Stories are scored and ranked based on their relevance to model security, supply chain safety, and the broader AI landscape.
Want to see how your favorite models score on security? Check our model dashboard for trust scores on the top 500 HuggingFace models.