Potential Risk of Sensitive Data Breach

A report by Netskope has revealed that healthcare workers regularly attempt to upload sensitive data to unapproved locations on the internet.

Netskope Threat Labs has published its latest threat report dedicated to healthcare, revealing that workers in the industry regularly attempt to upload sensitive data to unapproved locations on the web or in the cloud.

Generative AI applications such as ChatGPT and Google Gemini, the two most commonly used by healthcare workers, are often involved in data policy violations as their use in the workplace goes mainstream.

Key findings revealed that 81 percent of all data policy violations in healthcare organisations over the last twelve months involved regulated healthcare data, meaning data protected by local, national or international regulations, including sensitive medical and clinical information. Passwords and keys, source code and intellectual property accounted for the remaining 19 percent, and many of those violations stemmed from individuals uploading sensitive data to personal Microsoft OneDrive or Google Drive accounts.

GenAI has become ubiquitous in the sector, and genAI applications are now used in 88 percent of healthcare organisations. A large proportion of data policy violations are now occurring in the context of genAI usage by healthcare workers, with 44 percent involving regulated data, 29 percent source code, 25 percent intellectual property, and two percent passwords and keys. Additional risks of data leaks can come from applications that leverage user data for training, or that incorporate genAI features, which are used in 96 percent and 98 percent of healthcare organisations, respectively.

More than two in three genAI users in healthcare send sensitive data to their personal genAI accounts while at work. This behaviour hinders security teams' visibility into genAI-related activity among their staff and, without proper data protection guardrails, their ability to detect and prevent data leaks.

“GenAI applications offer innovative solutions, but also introduce new vectors for potential data breaches, especially in high-pressure, high-stakes environments like healthcare, where workers and practitioners often need to operate with speed and agility,” said Gianpietro Cutolo, Cloud Threat Researcher at Netskope Threat Labs.

“Healthcare organisations must balance the benefits of genAI with the deployment of security and data protection guardrails to mitigate those risks.” 

In doing so, they can consider deploying organisation-approved genAI applications among the workforce to centralise genAI usage in applications approved, monitored, and secured by the organisation, and reduce the use of personal accounts and “shadow AI”. The use of personal genAI accounts by healthcare workers, while still high, has already declined from 87 percent to 71 percent over the past year, as organisations increasingly shift towards organisation-approved genAI solutions. 

Deploying strict Data Loss Prevention (DLP) policies to monitor and control access to genAI applications, and to define the type of data that can be shared with them, provides an added layer of security should workers attempt risky actions. The proportion of healthcare organisations deploying DLP policies for genAI has increased from 31 percent to 54 percent over the past year. Organisations can also consider deploying real-time user coaching, a tool that alerts employees when they are about to take a risky action. For example, if a healthcare worker attempts to upload a file into ChatGPT that includes patient names, a prompt will ask the user if they want to proceed. A separate report shows that a large majority of employees (73 percent) across all industries do not proceed when presented with coaching prompts.
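To make the DLP-plus-coaching flow concrete, here is a minimal sketch of the general idea in Python. This is not Netskope's implementation: the pattern names, the regexes and the `confirm` callback are all simplified assumptions, and a real DLP engine relies on far richer detectors (dictionaries, exact data matching, machine-learning classifiers) than a couple of regular expressions.

```python
import re

# Hypothetical, simplified detectors for regulated data.
# Real DLP products use much more sophisticated matching.
PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
}


def scan_upload(text: str) -> list[str]:
    """Return the names of all detectors that match the outgoing text."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]


def coach_user(text: str, confirm) -> str:
    """Coaching flow: if regulated data is detected, ask the user
    (via the `confirm` callback, standing in for a UI prompt)
    whether they really want to proceed."""
    hits = scan_upload(text)
    if not hits:
        return "allowed"
    return "allowed" if confirm(hits) else "blocked"
```

In this sketch, `coach_user("Patient MRN: 12345678", lambda hits: False)` returns `"blocked"`, mirroring the reported behaviour where most employees back out when shown a coaching prompt.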

“In the healthcare sector, the rapid adoption of genAI apps and growing use of cloud platforms have brought new urgency to protecting regulated health data,” said Cutolo.

“As genAI becomes more embedded in clinical and operational workflows, organisations are accelerating the rollout of controls like DLP and app blocking policies to reduce risk. Healthcare organisations are making progress, but continued focus on secure, enterprise-approved solutions will be critical to ensure data remains protected in this evolving landscape.”
