Here's your daily roundup of the most relevant AI and ML news for January 28, 2026. Today's digest includes one security-focused story and five research developments, along with industry and development news. Click through to read the full articles from our curated sources.
Security & Safety
1. Jellyfin LLM/"AI" Development Policy
Article URL: https://jellyfin.org/docs/general/contributing/llm-policies/
Comments URL: https://news.ycombinator.com/item?id=46801976
Points: 100 | Comments: 51
Source: Hacker News - ML Security | 1 hour ago
Research & Papers
2. LLM-VA: Resolving the Jailbreak-Overrefusal Trade-off via Vector Alignment
arXiv:2601.19487v1 Announce Type: new Abstract: Safety-aligned LLMs suffer from two failure modes: jailbreak (answering harmful inputs) and over-refusal (declining benign queries). Existing vector steering methods adjust the magnitude of answer vectors, but this creates a fundamental trade-off -...
Source: arXiv - Machine Learning | 18 hours ago
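For readers new to the technique the abstract references: vector steering adds a learned direction to a model's hidden states at inference time to push generations toward or away from refusal. Below is a minimal sketch of that baseline idea in PyTorch, not the paper's method; the hidden size, steering strength, and the random refusal direction are all illustrative assumptions.

```python
# Minimal sketch of activation vector steering (illustrative only).
# A "refusal direction" is added to a layer's output via a forward hook.
import torch
from torch import nn

hidden_dim = 64
alpha = 2.0  # steering strength; magnitude-based methods tune this scalar

# In practice this direction is estimated, e.g. from the mean difference
# between activations on harmful vs. benign prompts. Random here.
refusal_direction = torch.randn(hidden_dim)
refusal_direction /= refusal_direction.norm()

def steering_hook(module, inputs, output):
    # Shift the layer's output along the refusal direction.
    return output + alpha * refusal_direction.to(output.dtype)

# Demo on a stand-in layer; with a real LM you would hook a decoder block,
# e.g. model.model.layers[20].register_forward_hook(steering_hook).
layer = nn.Linear(hidden_dim, hidden_dim)
handle = layer.register_forward_hook(steering_hook)
out = layer(torch.randn(1, 8, hidden_dim))
handle.remove()
print(out.shape)
```

Per the abstract, tuning only the magnitude (alpha above) trades jailbreak robustness against over-refusal, which is the gap the paper's vector-alignment approach targets.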
3. Native LLM and MLLM Inference at Scale on Apple Silicon
arXiv:2601.19139v1 Announce Type: new Abstract: The growing adoption of Apple Silicon for machine learning development has created demand for efficient inference solutions that leverage its unique unified memory architecture. However, existing tools either lack native optimization (PyTorch MPS) ...
Source: arXiv - Machine Learning | 18 hours ago
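For context on the baseline the abstract contrasts against: PyTorch reaches Apple GPUs through its MPS backend, and Apple Silicon's unified memory means the CPU and GPU share one pool. A minimal device-selection sketch follows; the tensor shapes are placeholders.

```python
# Selecting Apple's Metal Performance Shaders (MPS) backend in PyTorch,
# the path the abstract describes as lacking native optimization.
import torch

if torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

# On Apple Silicon, CPU and GPU share a unified memory pool, so
# host/device transfers are cheap compared to discrete GPUs.
x = torch.randn(1, 8, 4096, device=device)
w = torch.randn(4096, 4096, device=device)
print((x @ w).shape, device)
```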
4. Bandits in Flux: Adversarial Constraints in Dynamic Environments
arXiv:2601.19867v1 Announce Type: new Abstract: We investigate the challenging problem of adversarial multi-armed bandits operating under time-varying constraints, a scenario motivated by numerous real-world applications. To address this complex setting, we propose a novel primal-dual algorithm ...
Source: arXiv - Machine Learning | 18 hours ago
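As rough intuition for a primal-dual bandit scheme (not the paper's algorithm): the primal player runs an EXP3-style update on Lagrangian-penalized losses, while a dual variable tracks accumulated constraint violation. Everything below, including the synthetic losses and constraint signals, is an illustrative assumption.

```python
# Toy primal-dual adversarial bandit: EXP3 on penalized losses, with a
# dual variable lam tracking constraint violation. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
K, T = 5, 10_000
eta, step = 0.05, 0.01      # primal and dual step sizes
weights = np.zeros(K)       # log-weights over arms
lam = 0.0                   # dual variable (Lagrange multiplier)

for t in range(T):
    probs = np.exp(weights - weights.max())
    probs /= probs.sum()
    arm = rng.choice(K, p=probs)

    loss = rng.random()              # adversarial loss in [0, 1]
    cost = rng.random() - 0.5        # time-varying constraint signal

    # Importance-weighted estimate of the penalized loss for the played arm.
    penalized = (loss + lam * cost) / probs[arm]
    weights[arm] -= eta * penalized  # primal (EXP3-style) update

    lam = max(0.0, lam + step * cost)  # dual ascent on violation

print("final play distribution:", np.round(probs, 3))
```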
5. Learning to Detect Unseen Jailbreak Attacks in Large Vision-Language Models
arXiv:2508.09201v4 Announce Type: replace-cross Abstract: Despite extensive alignment efforts, Large Vision-Language Models (LVLMs) remain vulnerable to jailbreak attacks. To mitigate these risks, existing detection methods are essential, yet they face two major challenges: generalization and ac...
Source: arXiv - AI | 18 hours ago
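Detectors in this space are often lightweight probes trained on pooled model activations or embeddings. Here is a generic sketch of that pattern with scikit-learn; the features and labels are synthetic placeholders, not anything from the paper.

```python
# Generic jailbreak-detector sketch: a linear probe over pooled
# activations. Features and labels here are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
dim = 512
# Stand-ins for pooled hidden states of benign vs. jailbreak prompts.
benign = rng.normal(0.0, 1.0, size=(200, dim))
attack = rng.normal(0.5, 1.0, size=(200, dim))

X = np.vstack([benign, attack])
y = np.array([0] * 200 + [1] * 200)

probe = LogisticRegression(max_iter=1000).fit(X, y)
print("train accuracy:", probe.score(X, y))
```

The generalization challenge the abstract highlights is precisely that such probes are fit to seen attack families and can miss unseen ones.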
6. LLM-Generated Explanations Do Not Suffice for Ultra-Strong Machine Learning
arXiv:2509.00961v2 Announce Type: replace-cross Abstract: Ultra Strong Machine Learning (USML) refers to symbolic learning systems that not only improve their own performance but can also teach their acquired knowledge to quantifiably improve human performance. We introduce LENS (Logic Programmi...
Source: arXiv - Machine Learning | 18 hours ago
Industry News
7. Tiny startup Arcee AI built a 400B-parameter open source LLM from scratch to best Meta’s Llama
30-person startup Arcee AI has released a 400B model called Trinity, which it says is one of the biggest open source foundation models from a U.S. company.
Source: TechCrunch - AI | 6 hours ago
Tech & Development
8. Getting a Custom PyTorch LLM onto the Hugging Face Hub
Article URL: https://www.gilesthomas.com/2026/01/custom-automodelforcausallm-frompretrained-models-on-hugging-face
Comments URL: https://news.ycombinator.com/item?id=46802931
Points: 1 | Comments: 0
Source: Hacker News - AI | just now
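Judging by the URL, the post covers making AutoModelForCausalLM.from_pretrained work for a custom architecture. The general transformers mechanism looks roughly like the sketch below; the TinyConfig/TinyForCausalLM class names and the "tiny-custom" model type are illustrative, not from the post.

```python
# Sketch of registering a custom architecture with transformers' Auto
# classes so from_pretrained can resolve it. Names are illustrative.
import torch
from torch import nn
from transformers import (AutoConfig, AutoModelForCausalLM,
                          PretrainedConfig, PreTrainedModel)

class TinyConfig(PretrainedConfig):
    model_type = "tiny-custom"  # unique key for the Auto registry

    def __init__(self, vocab_size=1000, hidden_size=64, **kwargs):
        self.vocab_size = vocab_size
        self.hidden_size = hidden_size
        super().__init__(**kwargs)

class TinyForCausalLM(PreTrainedModel):
    config_class = TinyConfig

    def __init__(self, config):
        super().__init__(config)
        self.embed = nn.Embedding(config.vocab_size, config.hidden_size)
        self.lm_head = nn.Linear(config.hidden_size, config.vocab_size)

    def forward(self, input_ids, **kwargs):
        return self.lm_head(self.embed(input_ids))

# Local registration. Pushing to the Hub additionally involves
# register_for_auto_class() so the custom code ships with the checkpoint.
AutoConfig.register("tiny-custom", TinyConfig)
AutoModelForCausalLM.register(TinyConfig, TinyForCausalLM)

model = TinyForCausalLM(TinyConfig())
print(model(torch.randint(0, 1000, (1, 8))).shape)  # (1, 8, 1000)
```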
About This Digest
This digest is automatically curated from leading AI and tech news sources, filtered for relevance to AI security and the ML ecosystem. Stories are scored and ranked based on their relevance to model security, supply chain safety, and the broader AI landscape.
Want to see how your favorite models score on security? Check our model dashboard for trust scores on the top 500 HuggingFace models.