Microsoft AI researchers inadvertently exposed a vast trove of sensitive information, including private keys and passwords, while publishing open-source training data on GitHub.
Cloud security startup Wiz discovered the oversight while researching the accidental exposure of cloud-hosted data.
Its researchers found a GitHub repository from Microsoft’s AI team containing instructions for downloading AI models for image recognition via an Azure Storage link.
Wiz discovered that the provided URL unintentionally granted access to the entire storage account, not just the intended files, exposing 38 terabytes of confidential data.
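Links like these are typically Azure shared access signature (SAS) URLs, and the token in this case was reportedly scoped to the whole storage account with overly broad permissions. As a rough sketch of the narrower alternative, the snippet below uses the azure-storage-blob Python SDK to mint a read-only token bound to a single container with a short expiry; the account, key, and container names are hypothetical stand-ins, not values from the incident.

```python
from datetime import datetime, timedelta, timezone

from azure.storage.blob import ContainerSasPermissions, generate_container_sas

# Hypothetical values for illustration; not the names involved in the incident.
ACCOUNT_NAME = "examplestorage"
ACCOUNT_KEY = "<storage-account-key>"
CONTAINER = "ai-models"

# Scope the token to one container, read/list only, expiring in 24 hours,
# rather than issuing an account-wide token that exposes every container.
sas_token = generate_container_sas(
    account_name=ACCOUNT_NAME,
    container_name=CONTAINER,
    account_key=ACCOUNT_KEY,
    permission=ContainerSasPermissions(read=True, list=True),
    expiry=datetime.now(timezone.utc) + timedelta(hours=24),
)

# A README sharing this URL exposes only this one container, and only
# until the token expires.
download_url = f"https://{ACCOUNT_NAME}.blob.core.windows.net/{CONTAINER}?{sas_token}"
print(download_url)
```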
The exposure encompassed personal backups of two Microsoft employees’ computers, passwords to Microsoft services, secret keys, and a large volume of internal Microsoft Teams messages.
“AI unlocks huge potential for tech companies,” Wiz co-founder and CTO Ami Luttwak told TechCrunch. “However, as data scientists and engineers race to bring new AI solutions to production, the massive amounts of data they handle require additional security checks and safeguards.”
Luttwak added: “With many development teams needing to manipulate massive amounts of data, share it with their peers or collaborate on public open source projects, cases like Microsoft’s are increasingly hard to monitor and avoid.”
Following the investigation, Microsoft confirmed that no customer data had been compromised. The company also expanded its GitHub secret-scanning service to better detect this kind of accidental exposure.
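Monitoring of this sort rests, in part, on pattern-based secret scanning of public code. The sketch below is a simplified, hypothetical illustration of the idea, not Microsoft’s actual tooling (production scanners match many more credential formats and validate hits): it flags Azure Storage URLs that carry a SAS “sig=” signature parameter.

```python
import re

# Simplified signature for Azure Storage URLs that include a SAS token.
# Real secret scanners cover many credential formats and verify matches.
SAS_URL_PATTERN = re.compile(
    r"https://[a-z0-9]+\.blob\.core\.windows\.net/\S*"
    r"[?&]sig=[A-Za-z0-9%+/=]+"
)

def find_sas_urls(text: str) -> list[str]:
    """Return Azure Storage URLs in the text that carry a SAS signature."""
    return SAS_URL_PATTERN.findall(text)

if __name__ == "__main__":
    readme = (
        "Download the models from "
        "https://examplestorage.blob.core.windows.net/ai-models"
        "?sv=2020-08-04&sig=abc123%2Fdef%3D"
    )
    for url in find_sas_urls(readme):
        print("possible exposed SAS URL:", url)
```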
Earlier this month, Microsoft reported that the recent Chinese hack targeting top U.S. officials originated from the breach of a corporate account belonging to one of the company’s engineers.
That account breach was attributed to a hacking group tracked as Storm-0558, which is believed to have intercepted numerous emails from high-ranking U.S. figures.