Here's your daily roundup of the most relevant AI and ML news for February 25, 2026. Today's digest includes two security-focused stories and five research developments. Click through to read the full articles from our curated sources.
Security & Safety
1. Anthropic Says Chinese AI Firms Used 16 Million Claude Queries to Copy Model
Anthropic on Monday said it identified "industrial-scale campaigns" mounted by three artificial intelligence (AI) companies, DeepSeek, Moonshot AI, and MiniMax, to illegally extract Claude's capabilities to improve their own models. The distillation attacks generated over 16 million exchanges wit...
Source: The Hacker News (Security) | 1 day ago
2. Show HN: I cut LLM API bill by 55% with a Python text compressor, no AI involved
Article URL: https://agentready.cloud/hn
Comments URL: https://news.ycombinator.com/item?id=47151514
Points: 1 | Comments: 0
Source: Hacker News - ML Security | just now
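The post above gives no implementation details, so as a purely hypothetical illustration of what "text compression with no AI involved" could mean for LLM prompts, here is a minimal sketch that collapses redundant whitespace and blank lines before text is sent to an API (the function name and approach are assumptions, not the author's actual method):

```python
import re

def compress_prompt(text: str) -> str:
    """Collapse runs of spaces/tabs and drop blank lines.

    For many prompts this reduces token count without
    changing the meaning of the text.
    """
    lines = [re.sub(r"[ \t]+", " ", line).strip() for line in text.splitlines()]
    return "\n".join(line for line in lines if line)

prompt = "  Summarize   this \n\n\n  in   one sentence.  "
print(compress_prompt(prompt))  # "Summarize this\nin one sentence."
```

Actual savings would depend heavily on how whitespace-heavy the original prompts are; the 55% figure in the post presumably reflects a more aggressive scheme.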
Research & Papers
3. Intent Laundering: AI Safety Datasets Are Not What They Seem
arXiv:2602.16729v2 Announce Type: replace-cross Abstract: We systematically evaluate the quality of widely used AI safety datasets from two perspectives: in isolation and in practice. In isolation, we examine how well these datasets reflect real-world adversarial attacks based on three key prope...
Source: arXiv - Machine Learning | 9 hours ago
4. A Statistical Learning Perspective on Semi-dual Adversarial Neural Optimal Transport Solvers
arXiv:2502.01310v4 Announce Type: replace Abstract: Neural network-based optimal transport (OT) is a recent and fruitful direction in the generative modeling community. It finds its applications in various fields such as domain translation, image super-resolution, computational biology and other...
Source: arXiv - Machine Learning | 9 hours ago
5. ICON: Indirect Prompt Injection Defense for Agents based on Inference-Time Correction
arXiv:2602.20708v1 Announce Type: new Abstract: Large Language Model (LLM) agents are susceptible to Indirect Prompt Injection (IPI) attacks, where malicious instructions in retrieved content hijack the agent's execution. Existing defenses typically rely on strict filtering or refusal mechanisms...
Source: arXiv - AI | 9 hours ago
6. AdapTools: Adaptive Tool-based Indirect Prompt Injection Attacks on Agentic LLMs
arXiv:2602.20720v1 Announce Type: cross Abstract: The integration of external data services (e.g., Model Context Protocol, MCP) has made large language model-based agents increasingly powerful for complex task execution. However, this advancement introduces critical security vulnerabilities, par...
Source: arXiv - AI | 9 hours ago
7. Wireless Federated Multi-Task LLM Fine-Tuning via Sparse-and-Orthogonal LoRA
arXiv:2602.20492v1 Announce Type: new Abstract: Decentralized federated learning (DFL) based on low-rank adaptation (LoRA) enables mobile devices with multi-task datasets to collaboratively fine-tune a large language model (LLM) by exchanging locally updated parameters with a subset of neighbori...
Source: arXiv - Machine Learning | 9 hours ago
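For readers unfamiliar with the low-rank adaptation (LoRA) idea this paper builds on: instead of updating a full weight matrix W, LoRA trains two small factors B and A and computes W x + B(A x). A generic NumPy sketch (not the paper's sparse-and-orthogonal variant) of the forward pass:

```python
import numpy as np

d, k, r = 8, 8, 2  # layer dimensions and LoRA rank (r << d, k)
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))         # frozen base weight
A = rng.standard_normal((r, k)) * 0.01  # trainable down-projection
B = np.zeros((d, r))                    # trainable up-projection, zero-initialized

x = rng.standard_normal(k)
y = W @ x + B @ (A @ x)  # LoRA forward pass: base output + low-rank update

# Because B starts at zero, the adapted model initially matches the base model
assert np.allclose(y, W @ x)
```

In the federated setting the paper studies, devices would exchange only the small A and B factors rather than the full W, which is what makes LoRA attractive for bandwidth-constrained wireless fine-tuning.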
Tech & Development
8. Show HN: A live Python REPL with an agentic LLM that edits and evaluates code
I built PyChat.ai, an open-source Python REPL written in Rust that embeds an LLM agent capable of inspecting and modifying the live Python runtime state. A sample interaction:

py> def succ(n):
py>     n + 1
py> succ(42)
None
ai> why is succ not working?
ai> Thinking...
Source: Hacker News - AI | just now
About This Digest
This digest is automatically curated from leading AI and tech news sources, filtered for relevance to AI security and the ML ecosystem. Stories are scored and ranked based on their relevance to model security, supply chain safety, and the broader AI landscape.
Want to see how your favorite models score on security? Check our model dashboard for trust scores on the top 500 HuggingFace models.