Here's your daily roundup of the most relevant AI and ML news for December 30, 2025. We're also covering 8 research developments. Click through to read the full articles from our curated sources.
Research & Papers
1. Multilingual Hidden Prompt Injection Attacks on LLM-Based Academic Reviewing
arXiv:2512.23684v1 Announce Type: cross Abstract: Large language models (LLMs) are increasingly considered for use in high-impact workflows, including academic peer review. However, LLMs are vulnerable to document-level hidden prompt injection attacks. In this work, we construct a dataset of app...
Source: arXiv - AI | 13 hours ago
2. Prompt Injection attack against LLM-integrated Applications
arXiv:2306.05499v3 Announce Type: replace-cross Abstract: Large Language Models (LLMs), renowned for their superior proficiency in language comprehension and generation, stimulate a vibrant ecosystem of applications around them. However, their extensive assimilation into various services introdu...
Source: arXiv - AI | 13 hours ago
3. Unrolled Creative Adversarial Network For Generating Novel Musical Pieces
arXiv:2501.00452v3 Announce Type: replace-cross Abstract: Music generation has emerged as a significant topic in artificial intelligence and machine learning. While recurrent neural networks (RNNs) have been widely employed for sequence generation, generative adversarial networks (GANs) remain r...
Source: arXiv - Machine Learning | 13 hours ago
4. Involuntary Jailbreak: On Self-Prompting Attacks
arXiv:2508.13246v3 Announce Type: replace-cross Abstract: In this study, we disclose a worrying new vulnerability in Large Language Models (LLMs), which we term \textbf{involuntary jailbreak}. Unlike existing jailbreak attacks, this weakness is distinct in that it does not involve a specific att...
Source: arXiv - AI | 13 hours ago
5. PHANTOM: Physics-Aware Adversarial Attacks against Federated Learning-Coordinated EV Charging Management System
arXiv:2512.22381v1 Announce Type: cross Abstract: The rapid deployment of electric vehicle charging stations (EVCS) within distribution networks necessitates intelligent and adaptive control to maintain the grid's resilience and reliability. In this work, we propose PHANTOM, a physics-aware adve...
Source: arXiv - Machine Learning | 13 hours ago
6. Hierarchical Pedagogical Oversight: A Multi-Agent Adversarial Framework for Reliable AI Tutoring
arXiv:2512.22496v1 Announce Type: cross Abstract: Large Language Models (LLMs) are increasingly deployed as automated tutors to address educator shortages; however, they often fail at pedagogical reasoning, frequently validating incorrect student solutions (sycophancy) or providing overly direct...
Source: arXiv - AI | 13 hours ago
7. Towards Reliable Evaluation of Adversarial Robustness for Spiking Neural Networks
arXiv:2512.22522v1 Announce Type: cross Abstract: Spiking Neural Networks (SNNs) utilize spike-based activations to mimic the brain's energy-efficient information processing. However, the binary and discontinuous nature of spike activations causes vanishing gradients, making adversarial robustne...
Source: arXiv - AI | 13 hours ago
8. EquaCode: A Multi-Strategy Jailbreak Approach for Large Language Models via Equation Solving and Code Completion
arXiv:2512.23173v1 Announce Type: cross Abstract: Large language models (LLMs), such as ChatGPT, have achieved remarkable success across a wide range of fields. However, their trustworthiness remains a significant concern, as they are still susceptible to jailbreak attacks aimed at eliciting ina...
Source: arXiv - AI | 13 hours ago
About This Digest
This digest is automatically curated from leading AI and tech news sources, filtered for relevance to AI security and the ML ecosystem. Stories are scored and ranked based on their relevance to model security, supply chain safety, and the broader AI landscape.
Want to see how your favorite models score on security? Check our model dashboard for trust scores on the top 500 HuggingFace models.