← Back to Blog

AI News Digest: February 17, 2026

Daily roundup of AI and ML news - 8 curated stories on security, research, and industry developments.

Here's your daily roundup of the most relevant AI and ML news for February 17, 2026. We're also covering 8 research developments. Click through to read the full articles from our curated sources.

Research & Papers

1. Lorica: A Synergistic Fine-Tuning Framework for Advancing Personalized Adversarial Robustness

arXiv:2506.05402v3 Announce Type: replace-cross Abstract: The growing use of large pre-trained models in edge computing has made model inference on mobile clients both feasible and popular. Yet these devices remain vulnerable to adversarial attacks, threatening model robustness and security. Fed...

Source: arXiv - Machine Learning | 18 hours ago

2. Stay in Character, Stay Safe: Dual-Cycle Adversarial Self-Evolution for Safety Role-Playing Agents

arXiv:2602.13234v1 Announce Type: new Abstract: LLM-based role-playing has rapidly improved in fidelity, yet stronger adherence to persona constraints commonly increases vulnerability to jailbreak attacks, especially for risky or negative personas. Most prior work mitigates this issue with train...

Source: arXiv - AI | 18 hours ago

3. Adversarial Network Imagination: Causal LLMs and Digital Twins for Proactive Telecom Mitigation

arXiv:2602.13203v1 Announce Type: cross Abstract: Telecommunication networks experience complex failures such as fiber cuts, traffic overloads, and cascading outages. Existing monitoring and digital twin systems are largely reactive, detecting failures only after service degradation occurs. We p...

Source: arXiv - AI | 18 hours ago

4. AISA: Awakening Intrinsic Safety Awareness in Large Language Models against Jailbreak Attacks

arXiv:2602.13547v1 Announce Type: cross Abstract: Large language models (LLMs) remain vulnerable to jailbreak prompts that elicit harmful or policy-violating outputs, while many existing defenses rely on expensive fine-tuning, intrusive prompt rewriting, or external guardrails that add latency a...

Source: arXiv - AI | 18 hours ago

5. Machine Learning as a Tool (MLAT): A Framework for Integrating Statistical ML Models as Callable Tools within LLM Agent Workflows

arXiv:2602.14295v1 Announce Type: cross Abstract: We introduce Machine Learning as a Tool (MLAT), a design pattern in which pre-trained statistical machine learning models are exposed as callable tools within large language model (LLM) agent workflows. This allows an orchestrating agent to invok...

Source: arXiv - AI | 18 hours ago

6. Improving Data Efficiency for LLM Reinforcement Fine-tuning Through Difficulty-targeted Online Data Selection and Rollout Replay

arXiv:2506.05316v4 Announce Type: replace-cross Abstract: Reinforcement learning (RL) has become an effective approach for fine-tuning large language models (LLMs), particularly to enhance their reasoning capabilities. However, RL fine-tuning remains highly resource-intensive, and existing work ...

Source: arXiv - AI | 18 hours ago

7. Detecting Jailbreak Attempts in Clinical Training LLMs Through Automated Linguistic Feature Extraction

arXiv:2602.13321v1 Announce Type: new Abstract: Detecting jailbreak attempts in clinical training large language models (LLMs) requires accurate modeling of linguistic deviations that signal unsafe or off-task user behavior. Prior work on the 2-Sigma clinical simulation platform showed that manu...

Source: arXiv - AI | 18 hours ago

8. SkillJect: Automating Stealthy Skill-Based Prompt Injection for Coding Agents with Trace-Driven Closed-Loop Refinement

arXiv:2602.14211v1 Announce Type: cross Abstract: Agent skills are becoming a core abstraction in coding agents, packaging long-form instructions and auxiliary scripts to extend tool-augmented behaviors. This abstraction introduces an under-measured attack surface: skill-based prompt injection, ...

Source: arXiv - AI | 18 hours ago


About This Digest

This digest is automatically curated from leading AI and tech news sources, filtered for relevance to AI security and the ML ecosystem. Stories are scored and ranked based on their relevance to model security, supply chain safety, and the broader AI landscape.

Want to see how your favorite models score on security? Check our model dashboard for trust scores on the top 500 HuggingFace models.