The Unintended Consequences of AI: Exposed GitHub Repositories Remain Accessible

The rise of generative AI chatbots has ushered in a new era of innovation, but it has also raised concerns about data privacy and security. A recent discovery by Lasso, an Israeli cybersecurity company, highlights an unintended consequence of AI-powered tools: thousands of once-public GitHub repositories that have since been made private can still be accessed through Microsoft Copilot, a generative AI chatbot.

The Issue: Cached Data and AI-Powered Access

Lasso’s research revealed that data exposed to the internet, even briefly, can linger in generative AI chatbots such as Microsoft Copilot. Sensitive information, including intellectual property, corporate data, access keys, and tokens, can therefore still be retrieved through Copilot even after the original source is no longer publicly available.
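
One half of the problem described above is easy to verify yourself: whether a repository is still publicly visible. As a minimal sketch (not Lasso's tooling), the check below asks GitHub's REST API for a repository without authentication; GitHub answers 404 for private or deleted repositories. The `opener` parameter is an illustrative seam added here so the function can be exercised without a network call.

```python
import urllib.request
import urllib.error

def repo_is_public(owner, repo, opener=urllib.request.urlopen):
    """Return True if GitHub's REST API serves repos/{owner}/{repo}
    to an unauthenticated client, i.e. the repository is public.
    GitHub responds 404 when a repository is private or deleted."""
    url = f"https://api.github.com/repos/{owner}/{repo}"
    try:
        with opener(url) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False
```

A result of `False` here only tells you the repository is private *now*; as Lasso's findings show, it says nothing about what a chatbot's cache retained from when it was public.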

The Scope of the Problem

Lasso’s investigation identified more than 20,000 GitHub repositories that had since been made private but whose data was still accessible through Copilot, affecting more than 16,000 organizations. The affected companies include Amazon Web Services, Google, IBM, PayPal, Tencent, and Microsoft itself.

The Implications

This discovery raises significant concerns about data privacy and security. That Copilot can surface sensitive information even after it is no longer publicly available underscores the need for more robust security measures. Lasso’s findings also highlight the importance of rotating or revoking any keys and tokens that were ever exposed.
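
The practical takeaway is to treat any credential that was ever committed to a public repository as compromised. A first step is simply finding such credentials. The sketch below uses two illustrative regular expressions for common credential formats; it is a toy, not the rule set of a real secret scanner such as gitleaks or trufflehog, which cover far more patterns.

```python
import re

# Illustrative patterns only; real scanners maintain much larger rule sets.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token":   re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
}

def find_secrets(text):
    """Return (pattern_name, match) pairs for every credential-shaped
    string found in text."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits
```

Any hit in a file that was ever public should trigger rotation of that key, since removing the file or making the repository private does not claw back cached copies.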

Microsoft’s Response

Lasso reported its findings to Microsoft in November 2024, but the company classified the issue as “low severity,” describing the caching behavior as “acceptable.” Microsoft disabled the caching feature in December 2024, yet Lasso notes that Copilot could still access the data, suggesting the fix was only partial.

Conclusion

The unintended consequences of AI-powered tools highlight the need for more robust security measures and greater transparency. As generative AI chatbots become increasingly prevalent, it’s essential to address the potential risks and ensure that sensitive information is protected.