As we close out 2025, it's worth reflecting on how dramatically the AI security landscape has changed. What started as niche concerns discussed in academic papers became front-page news and boardroom priorities. Here's our review of the year that made AI security impossible to ignore.
The Wake-Up Calls
Supply Chain Vulnerabilities Exposed
The AI ecosystem's dependency on shared models and libraries created a massive attack surface. In 2025, we saw:
- Poisoned model weights uploaded to public repositories, affecting thousands of downstream applications
- Dependency confusion attacks targeting popular ML frameworks
- Credential leaks from improperly sanitized training pipelines
These weren't theoretical risks anymore. Organizations reported actual breaches traced back to compromised AI components.
The SBOM Mandate Momentum
Following the 2021 Executive Order on cybersecurity, 2025 saw real enforcement around Software Bills of Materials. But AI models presented unique challenges:
- Traditional SBOMs don't capture model weights, training data provenance, or fine-tuning history
- The AI supply chain is more complex, with models built on models built on models
- Licensing became a minefield as foundation model terms varied wildly
This is exactly why we built HuggingHugh - to bring transparency to AI model dependencies and risks.
Key Security Developments
SafeTensors Adoption Accelerated
The push away from pickle-based serialization hit a tipping point. Major model providers made SafeTensors the default, and platforms began warning users about pickle files. Our data shows:
- 73% of the top 500 models now use SafeTensors (up from ~40% in early 2025)
- Models using SafeTensors score significantly higher on our trust assessments
- The "No Pickle Files" factor became a key differentiator
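As an added illustration of the "No Pickle Files" factor (example code written for this post, not HuggingHugh's scanner), the check below labels each file in a model repository by inspecting the bytes rather than trusting the filename: pickle streams are recognised by the PROTO opcode, and a safetensors file is accepted only if its 8-byte length prefix and JSON header are well formed. The suffix list and the sample repositories are assumptions constructed for the example.

```python
import json
import pickle
import struct
from pathlib import Path

# Suffixes that conventionally hold pickle-serialized weights; a conservative
# heuristic chosen for this example, not HuggingHugh's actual rule set.
PICKLE_SUFFIXES = {".pkl", ".pickle", ".pt", ".pth", ".bin", ".ckpt"}
REQUIRED_TENSOR_KEYS = {"dtype", "shape", "data_offsets"}

def looks_like_pickle(data: bytes) -> bool:
    """Pickle protocol 2 and later begin with the PROTO opcode 0x80 and a protocol byte."""
    return len(data) >= 2 and data[0] == 0x80 and 2 <= data[1] <= 5

def parse_safetensors_header(data: bytes):
    """Return the JSON header of a safetensors blob, or None if the layout is invalid.
    Layout: 8-byte little-endian header length, JSON header, then raw tensor bytes."""
    if len(data) < 8:
        return None
    (n,) = struct.unpack("<Q", data[:8])
    if n == 0 or n > len(data) - 8:
        return None
    try:
        header = json.loads(data[8:8 + n])
    except ValueError:  # covers both JSONDecodeError and UnicodeDecodeError
        return None
    if not isinstance(header, dict):
        return None
    for name, entry in header.items():
        if name == "__metadata__":
            continue
        if not isinstance(entry, dict) or not REQUIRED_TENSOR_KEYS.issubset(entry):
            return None
    return header

def classify(name: str, data: bytes) -> str:
    """Label one file as 'safetensors', 'malformed-safetensors', 'pickle' or 'other'."""
    suffix = Path(name).suffix.lower()
    if suffix == ".safetensors":
        return "safetensors" if parse_safetensors_header(data) is not None else "malformed-safetensors"
    if looks_like_pickle(data) or suffix in PICKLE_SUFFIXES:
        return "pickle"
    return "other"

def no_pickle_files(files: dict) -> tuple:
    """The 'No Pickle Files' factor: passes only when no file is pickle-based; lists offenders."""
    flagged = sorted(n for n, d in files.items() if classify(n, d) == "pickle")
    return (not flagged, flagged)

# Constructed sample repositories (not real models): one clean, one with a pickle checkpoint.
_header = json.dumps({"w": {"dtype": "F32", "shape": [2], "data_offsets": [0, 8]}}).encode()
GOOD_REPO = {"model.safetensors": struct.pack("<Q", len(_header)) + _header + bytes(8), "config.json": b"{}"}
BAD_REPO = {"pytorch_model.bin": pickle.dumps({"w": [1.0, 2.0]}, protocol=4), "config.json": b"{}"}
```

Only `pickle.dumps` is used, to construct the sample; the scanner never deserialises anything it inspects.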
Model Cards Got Serious
What started as optional documentation became expected. HuggingFace's model card guidelines evolved, and the community began demanding:
- Clear license declarations
- Training data disclosures
- Known limitations and biases
- Security contact information
Vulnerability Scanning for ML
Traditional security tools weren't designed for ML pipelines. 2025 saw the emergence of specialized scanners that understand:
- Python dependency chains in ML contexts
- GPU driver and CUDA vulnerabilities
- Container security for ML workloads
- Model-specific attack vectors
What We Learned From Scanning 500+ Models
Running HuggingHugh daily across hundreds of models taught us patterns:
- Verified organizations score higher - but verification isn't enough
- Popular doesn't mean secure - some of the most downloaded models have concerning dependencies
- Updates matter - stale models accumulate vulnerabilities
- License clarity varies wildly - many models have ambiguous or conflicting terms
The average trust score across our scanned models is around 60/100 - showing significant room for improvement industry-wide.
Looking Ahead to 2026
Predictions
- AI-specific CVEs will increase - as the attack surface grows, so will documented vulnerabilities
- Regulatory pressure will mount - expect more requirements around AI transparency
- Model provenance will become standard - knowing where your model came from will be table stakes
- Security scoring will proliferate - more tools like HuggingHugh will emerge
What Organizations Should Do Now
- Inventory your AI assets - know what models you're using and where they came from
- Establish model governance - create policies for model selection and updates
- Automate security checks - integrate scanning into your ML pipelines
- Plan for incidents - have a response plan for compromised models
Conclusion
2025 proved that AI security isn't optional. The same models powering innovation can become vectors for attack if not properly vetted. As we enter 2026, the organizations that treat AI security as a first-class concern will be the ones that thrive.
HuggingHugh will continue scanning and scoring models daily, helping you make informed decisions about which models to trust. Check our dashboard to see how your favorite models stack up.
Here's to a more secure 2026.