AI News Digest: February 12, 2026

Daily roundup of AI and ML news - 8 curated stories on security, research, and industry developments.

Here's your daily roundup of the most relevant AI and ML news for February 12, 2026. Today's digest includes one security-focused story and seven research developments. Click through to read the full articles from our curated sources.

Security & Safety

1. Show HN: LLM Welcome – explicitly opt in for AI contributions on your GH issues

LLM Welcome is a GitHub app that you, as a maintainer, install on your project. Issues you then label with "llm welcome" are listed on the site and offered to anyone who prompts their agent to find and solve issues. This makes it extremely explicit about the work you'd be happy to have ...

Source: Hacker News - ML Security | 1 hour ago

Research & Papers

2. from Benign import Toxic: Jailbreaking the Language Model via Adversarial Metaphors

arXiv:2503.00038v5 Announce Type: replace-cross Abstract: Current studies have exposed the risk of Large Language Models (LLMs) generating harmful content by jailbreak attacks. However, they overlook that the direct generation of harmful content from scratch is more difficult than inducing LLM t...

Source: arXiv - AI | 18 hours ago

3. GPU-Fuzz: Finding Memory Errors in Deep Learning Frameworks

arXiv:2602.10478v1 Announce Type: cross Abstract: GPU memory errors are a critical threat to deep learning (DL) frameworks, leading to crashes or even security issues. We introduce GPU-Fuzz, a fuzzer locating these issues efficiently by modeling operator parameters as formal constraints. GPU-Fuz...

Source: arXiv - Machine Learning | 18 hours ago

4. Spatiotemporal Field Generation Based on Hybrid Mamba-Transformer with Physics-informed Fine-tuning

arXiv:2505.11578v5 Announce Type: replace-cross Abstract: This research confronts the challenge of substantial physical equation discrepancies encountered in the generation of spatiotemporal physical fields through data-driven trained models. A spatiotemporal physical field generation model, nam...

Source: arXiv - AI | 18 hours ago

5. Deep learning outperforms traditional machine learning methods in predicting childhood malnutrition: evidence from survey data

arXiv:2602.10381v1 Announce Type: new Abstract: Childhood malnutrition remains a major public health concern in Nepal and other low-resource settings, while conventional case-finding approaches are labor-intensive and frequently unavailable in remote areas. This study provides the first comprehe...

Source: arXiv - Machine Learning | 18 hours ago

6. AD$^2$: Analysis and Detection of Adversarial Threats in Visual Perception for End-to-End Autonomous Driving Systems

arXiv:2602.10160v1 Announce Type: cross Abstract: End-to-end autonomous driving systems have achieved significant progress, yet their adversarial robustness remains largely underexplored. In this work, we conduct a closed-loop evaluation of state-of-the-art autonomous driving agents under black-...

Source: arXiv - AI | 18 hours ago

7. When the Prompt Becomes Visual: Vision-Centric Jailbreak Attacks for Large Image Editing Models

arXiv:2602.10179v1 Announce Type: cross Abstract: Recent advances in large image editing models have shifted the paradigm from text-driven instructions to vision-prompt editing, where user intent is inferred directly from visual inputs such as marks, arrows, and visual-text prompts. While this p...

Source: arXiv - AI | 18 hours ago

8. A Jointly Efficient and Optimal Algorithm for Heteroskedastic Generalized Linear Bandits with Adversarial Corruptions

arXiv:2602.10971v1 Announce Type: new Abstract: We consider the problem of heteroskedastic generalized linear bandits (GLBs) with adversarial corruptions, which subsumes various stochastic contextual bandit settings, including heteroskedastic linear bandits and logistic/Poisson bandits. We propo...

Source: arXiv - Machine Learning | 18 hours ago


About This Digest

This digest is automatically curated from leading AI and tech news sources, filtered for relevance to AI security and the ML ecosystem. Stories are scored and ranked based on their relevance to model security, supply chain safety, and the broader AI landscape.

Want to see how your favorite models score on security? Check our model dashboard for trust scores on the top 500 HuggingFace models.