Study Finds Regulated Data at Risk in Use of Generative AI Apps: Netskope Threat Labs Report
Artificial intelligence has become increasingly prevalent in business, with generative AI (genAI) applications now in use at a large majority of enterprises. That growing adoption, however, brings new security challenges, particularly around the sharing of regulated data.
The latest research from Netskope finds that more than a third of the sensitive data shared with genAI applications is regulated data, exposing businesses to the risk of costly data breaches. While most businesses now block certain genAI apps to limit that exposure, the report points to a continued need for more robust data loss prevention (DLP) controls, since blocking alone does not inspect what users actually share.
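The report does not prescribe a particular DLP implementation, but the core mechanic is content inspection before data leaves the organization. The sketch below illustrates the idea with two deliberately simplistic regex detectors; the pattern names, regexes, and blocking logic are all hypothetical, and production DLP engines rely on far richer detection (validated identifiers, exact-match dictionaries, ML classifiers).

```python
import re

# Hypothetical patterns for two common classes of regulated data.
# Illustrative only; real DLP detection is much more sophisticated.
REGULATED_PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def find_regulated_data(text: str) -> list[str]:
    """Return the names of regulated-data patterns found in the text."""
    return [name for name, pattern in REGULATED_PATTERNS.items()
            if pattern.search(text)]

def should_block(prompt: str) -> bool:
    """Block the upload to a genAI app if any pattern matches."""
    return bool(find_regulated_data(prompt))

if __name__ == "__main__":
    prompt = "Summarize this record: SSN 123-45-6789, balance due..."
    print(find_regulated_data(prompt))  # ['us_ssn']
    print(should_block(prompt))         # True
```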
One encouraging finding from the research is that many enterprises are implementing real-time user coaching to guide interactions with genAI apps. The approach appears effective at mitigating data risks: a significant share of users altered their actions after receiving a coaching alert.
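Mechanically, real-time coaching means interrupting a risky action with a warning and letting the user reconsider, rather than silently blocking. Here is a minimal sketch of that flow, using a terminal prompt as a stand-in for what would really be a security-client dialog; the SSN regex and function names are illustrative assumptions, not anything from the report.

```python
import re

# Illustrative pattern only; real detectors are far more robust.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def coach_and_submit(prompt: str) -> None:
    """Warn the user about apparent regulated data, then let them decide."""
    if SSN.search(prompt):
        print("Warning: this message appears to contain a Social Security number.")
        print("Company policy restricts sharing regulated data with genAI apps.")
        if input("Send anyway? [y/N] ").strip().lower() != "y":
            print("Message not sent.")
            return
    # Stand-in for the real call to the genAI app.
    print("Prompt sent.")

if __name__ == "__main__":
    coach_and_submit("Customer SSN is 123-45-6789, please draft a letter.")
```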
It is clear that enterprises need to adapt their risk frameworks specifically to AI and genAI usage. Netskope recommends several tactical steps to address generative AI risk: assessing current AI usage, implementing core security controls, planning for advanced controls, and continuously measuring and refining those controls.
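The first of those steps, assessing current AI usage, often begins with an inventory built from web proxy or secure web gateway logs. A rough sketch of that kind of tally follows, under assumed inputs: the log format, the domain shortlist, and the function name are all made up for illustration, and a real inventory would draw on a maintained app catalog rather than a hard-coded set.

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical shortlist of genAI app domains for illustration.
GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def genai_usage(log_lines):
    """Count requests per genAI domain from simple 'user url' log lines."""
    counts = Counter()
    for line in log_lines:
        user, url = line.split()[:2]
        host = urlparse(url).hostname or ""
        if host in GENAI_DOMAINS:
            counts[host] += 1
    return counts

if __name__ == "__main__":
    sample = [
        "alice https://chat.openai.com/c/123",
        "bob https://example.com/page",
        "alice https://claude.ai/chat/456",
    ]
    print(genai_usage(sample))
    # Counter({'chat.openai.com': 1, 'claude.ai': 1})
```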
As businesses continue to embrace AI technology, it is essential that they prioritize data security and implement strong controls to protect sensitive information. By taking proactive steps to mitigate the risks associated with genAI apps, businesses can harness the power of AI while safeguarding their data against potential breaches.