AI platform Hugging Face says hackers stole auth tokens from Spaces
AI platform Hugging Face says that its Spaces platform was breached, allowing hackers to access authentication secrets for its members.
Hugging Face Spaces is a repository of AI apps created and submitted by the community's users, allowing other members to demo them.
"Earlier this week our team detected unauthorized access to our Spaces platform, specifically related to Spaces secrets," warned Hugging Face in a blog post.
"As a consequence, we have suspicions that a subset of Spaces' secrets could have been accessed without authorization."
Hugging Face says it has already revoked the authentication tokens exposed in the compromised secrets and has notified those impacted by email.
However, it recommends that all Hugging Face Spaces users refresh their tokens and switch to fine-grained access tokens, which give organizations tighter control over who can access their AI models.
The company is working with external cybersecurity experts to investigate the breach and report the incident to law enforcement and data protection agencies.
The AI platform says it has been tightening security over the past few days in response to the incident.
"Over the past few days, we have made other significant improvements to the security of the Spaces infrastructure, including completely removing org tokens (resulting in increased traceability and audit capabilities), implementing key management service (KMS) for Spaces secrets, robustifying and expanding our system’s ability to identify leaked tokens and proactively invalidate them, and more generally improving our security across the board. We also plan on completely deprecating “classic” read and write tokens in the near future, as soon as fine-grained access tokens reach feature parity. We will continue to investigate any possible related incident."
As Hugging Face grows in popularity, it has also become a target for threat actors who attempt to abuse it for malicious activities.
In February, cybersecurity firm JFrog found approximately 100 malicious AI/ML models on the platform designed to execute code on a victim's machine. One of the models opened a reverse shell, giving a remote threat actor access to the device running the code.
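JFrog's finding reflects a well-known property of Python's pickle format, which many legacy model files are built on: deserializing untrusted data can execute arbitrary code. A minimal, harmless sketch of the mechanism, with `print` standing in for a real payload:

```python
import pickle

# An object's __reduce__ method names a callable that pickle invokes
# during deserialization. A malicious "model" file swaps in something
# like os.system with a reverse-shell command; we use print instead.
class Payload:
    def __reduce__(self):
        # A real attack would return (os.system, ("<shell command>",))
        return (print, ("code ran during pickle.load, before any model is used",))

malicious_bytes = pickle.dumps(Payload())
result = pickle.loads(malicious_bytes)  # executes print(...) as a side effect
```

This is why Hugging Face promotes safetensors, a weights-only format that stores raw tensors and cannot trigger code execution on load.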
More recently, security researchers at Wiz discovered a vulnerability that allowed them to upload custom models and leverage container escapes to gain cross-tenant access to other customers' models.
source: BleepingComputer