Microsoft Cracks Down on Malicious Copilot AI Use
Microsoft's Digital Crimes Unit is pursuing legal action to disrupt cybercriminals who build malicious tools that bypass the security guardrails of generative AI (GenAI) services and use them to create harmful content.
According to an unsealed complaint filed in the Eastern District of Virginia, although the company goes to great lengths to build and harden secure AI products and services, cybercriminals continue to evolve their tactics and bypass its security measures.
"With this action, we are sending a clear message: the weaponization of our AI technology by online actors will not be tolerated," said Microsoft in a blog post about the lawsuit.
In the court filings that were unsealed on Jan. 13, Microsoft noted that it had "observed a foreign-based threat-actor group develop sophisticated software that exploited exposed customer credentials scraped from public websites."
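Credentials exposed in public repositories or websites, as described in the filing, are commonly caught by pattern-based secret scanning on the defensive side. As a minimal illustrative sketch (the key patterns below are generic assumptions, not Microsoft's actual detection logic or any provider's real key format), such a scanner might look like:

```python
import re

# Illustrative patterns for common credential shapes; real scanners
# use provider-specific rules and entropy checks on top of regexes.
SECRET_PATTERNS = {
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"]?([A-Za-z0-9]{20,})"),
    "bearer_token": re.compile(r"(?i)bearer\s+([A-Za-z0-9\-._~+/]{20,}=*)"),
}

def find_exposed_secrets(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_value) pairs found in text."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(1)))
    return hits

sample = 'config: api_key = "AbC123xyz890AbC123xyz890"'
print(find_exposed_secrets(sample))
```

Defenders run scans like this over their own public footprint to revoke leaked keys before attackers scrape them; the attackers described in the complaint effectively automated the same discovery step for abuse.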
The group used these credentials to access accounts with generative AI services and alter the capabilities of those services, then resold this unlawful access to other malicious actors, along with instructions on how to use the tools to create harmful content.
Since discovering the group's actions, Microsoft has revoked access and enhanced safeguards to mitigate this kind of activity in the future.
As the company continues to seek out proactive measures it can take alongside legal action, it highlights a report, "Protecting the Public From Abusive AI-Generated Content," that provides recommendations for organizations and governments to protect the public from AI-created threats.
source: DarkReading