Generative AI presents companies of all sizes with opportunities to increase efficiency and drive innovation. With this opportunity comes a new set of cybersecurity requirements, particularly around data, that has begun to reshape the responsibilities of data security teams. The 2024 Microsoft Data Security Index focuses on key statistics and actionable insights to help you secure the data used and referenced by your generative AI applications.
84% of surveyed organizations want to feel more confident about managing and discovering data input into AI apps and tools. This report includes research that provides actionable, industry-agnostic insights and guidance to better secure the data used by your generative AI applications.
In 2023, we commissioned our first independent research that surveyed more than 800 data security professionals to help business leaders develop their data security strategies. This year, we expanded the survey to 1,300 security professionals to uncover new learnings on data security and AI practices.
Some of the top-level insights from our expanded research are:
On average, organizations are juggling 12 different data security solutions, creating complexity that increases their vulnerability. This is especially true for the largest organizations: On average, medium enterprises use nine tools, large enterprises use 11, and extra-large enterprises use 14. In addition, 21% of decision-makers cite the lack of consolidated and comprehensive visibility caused by disparate tools as their biggest challenge and risk.
Fragmented solutions make it difficult to understand data security posture: data sits in silos, and disparate workflows can limit comprehensive visibility into potential risks. When tools don’t integrate, data security teams have to build processes to correlate data and establish a cohesive view of risks, which can lead to blind spots and make it challenging to detect and mitigate risks effectively.
As a result, the data also shows a strong correlation between the number of data security tools used and the frequency of data security incidents. In 2024, organizations using more data security tools (11 or more) experienced an average of 202 data security incidents, compared to 139 incidents for those with 10 or fewer tools.
In addition, a growing area of concern is the rise in data security incidents tied to the use of AI applications, which nearly doubled from 27% in 2023 to 40% in 2024. Attacks involving AI apps not only expose sensitive data but can also compromise the functionality of the AI systems themselves, further complicating an already fractured data security landscape.
In short, there’s an increasingly urgent need for more integrated and cohesive data security strategies that can address both traditional and emerging risks linked to the use of AI tools.
User adoption of generative AI increases the risk and exposure of sensitive data. As AI becomes more embedded in daily operations, organizations recognize the need for stronger protection: 96% of surveyed companies reported some level of reservation about employee use of generative AI. However, 93% also reported taking proactive action, with new controls around employee use of generative AI at some stage of development or implementation.
Unauthorized AI applications can access and misuse data, leading to potential breaches. This unauthorized use often involves employees logging in with personal credentials or using personal devices for work-related tasks, and 65% of organizations admit that their employees are using unsanctioned AI apps.
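To make the shadow AI problem concrete, here is a minimal sketch in Python of flagging unsanctioned AI app usage from web proxy logs. The event schema, domain lists, and risk weighting are illustrative assumptions, not any specific product’s telemetry:

```python
# A minimal sketch of flagging unsanctioned AI app usage from web proxy logs.
# The log format, domain lists, and field names are illustrative assumptions.
from collections import Counter

SANCTIONED_AI_DOMAINS = {"copilot.microsoft.com"}          # apps approved by IT (example)
KNOWN_AI_DOMAINS = SANCTIONED_AI_DOMAINS | {
    "chat.example-ai.com",                                  # hypothetical consumer AI app
    "api.example-llm.net",                                  # hypothetical LLM API endpoint
}

def find_shadow_ai(proxy_events):
    """Return a risk-weighted count of unsanctioned AI destinations per user.

    Each event is assumed to be a dict like:
    {"user": "alice@contoso.com", "dest": "chat.example-ai.com", "device_managed": False}
    """
    hits = Counter()
    for event in proxy_events:
        dest = event["dest"].lower()
        if dest in KNOWN_AI_DOMAINS and dest not in SANCTIONED_AI_DOMAINS:
            # Personal or unmanaged devices are a stronger shadow-AI signal.
            weight = 2 if not event.get("device_managed", True) else 1
            hits[(event["user"], dest)] += weight
    return hits

if __name__ == "__main__":
    sample = [
        {"user": "alice@contoso.com", "dest": "chat.example-ai.com", "device_managed": False},
        {"user": "bob@contoso.com", "dest": "copilot.microsoft.com", "device_managed": True},
    ]
    for (user, dest), score in find_shadow_ai(sample).items():
        print(f"{user} -> {dest} (risk weight {score})")
```

In practice, the traffic data and a maintained catalog of AI apps would come from a cloud access security broker or similar tooling rather than a hand-curated list.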
Given these concerns, it is important for organizations to implement the right data security controls to mitigate these risks and ensure that AI tools are used responsibly. Currently, 43% of companies are focused on preventing sensitive data from being uploaded into AI apps, while another 42% are logging all activities and content within these apps for potential investigations or incident response. Similarly, 42% are blocking user access to unauthorized tools, and an equal percentage are investing in employee training on secure AI use.
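As one illustration of the first control, the following is a minimal sketch of scanning a prompt for sensitive data before it is sent to an AI app. The regular expressions and block action are simplified assumptions; production DLP engines rely on far richer classifiers and sensitivity labels:

```python
# A minimal sketch of checking a prompt for sensitive data before it leaves
# for an AI app. Patterns and policy action are simplified assumptions.
import re

SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{20,}\b"),
}

def check_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_types) for a prompt bound for an AI app."""
    matches = [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]
    return (len(matches) == 0, matches)

allowed, findings = check_prompt("Summarize this: card 4111 1111 1111 1111, due March")
if not allowed:
    # Block the upload and record the event for later investigation.
    print(f"Blocked: prompt contains {', '.join(findings)}")
```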
To implement the right data security controls, customers need to increase visibility into their AI application usage and into the data flowing through those applications. In addition, they need a way to assess the risk levels of emerging generative AI applications and to apply conditional access policies to those applications based on a user’s risk level.
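A minimal sketch of what such risk-based conditional access might look like appears below. The risk tiers, scoring, and decisions are illustrative assumptions rather than any product’s actual policy engine:

```python
# A minimal sketch of risk-based conditional access for AI apps. Tiers,
# thresholds, and decisions are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_risk: str      # "low" | "medium" | "high", e.g. from an identity-protection signal
    app_risk: str       # risk tier assigned to the AI app after assessment
    device_managed: bool

RISK_SCORE = {"low": 0, "medium": 1, "high": 2}

def decide(request: AccessRequest) -> str:
    """Map combined user and app risk to an access decision."""
    score = RISK_SCORE[request.user_risk] + RISK_SCORE[request.app_risk]
    if not request.device_managed:
        score += 1                               # unmanaged devices raise the bar
    if score >= 3:
        return "block"
    if score >= 1:
        return "allow_with_session_controls"     # e.g. block paste of sensitive data
    return "allow"

print(decide(AccessRequest(user_risk="medium", app_risk="high", device_managed=True)))  # block
```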
Finally, they need to be able to access audit logs and generate reports that help them assess their overall risk levels and provide transparency for regulatory compliance.
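For instance, a minimal sketch of aggregating raw AI-app audit events into a summary report might look like the following; the event schema here is an assumption for illustration:

```python
# A minimal sketch of turning raw AI-app audit events into a summary report
# for risk review and compliance evidence. The event schema is an assumption.
import json
from collections import Counter
from datetime import datetime, timezone

def summarize(events):
    """Aggregate audit events into per-app counts of each action type."""
    report = {"generated_at": datetime.now(timezone.utc).isoformat(), "apps": {}}
    by_app = Counter((e["app"], e["action"]) for e in events)
    for (app, action), count in by_app.items():
        report["apps"].setdefault(app, {})[action] = count
    return report

audit_events = [
    {"app": "example-ai-chat", "action": "upload_blocked", "user": "alice@contoso.com"},
    {"app": "example-ai-chat", "action": "prompt_logged", "user": "bob@contoso.com"},
]
print(json.dumps(summarize(audit_events), indent=2))
```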
Traditional data security measures often struggle to keep up with the sheer volume of data generated in today’s digital landscape. AI, however, can sift through this data, identifying patterns and anomalies that might indicate a security threat. Regardless of where they are in their generative AI adoption journeys, organizations that have implemented AI-enabled data security solutions often gain both increased visibility across their digital estates and increased capacity to process and analyze incidents as they are detected.
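To ground the idea of pattern and anomaly detection, here is a minimal sketch that flags users whose daily activity spikes far above their own baseline. A simple z-score stands in for the far richer models an AI-enabled data security tool would use, and the data is invented for illustration:

```python
# A minimal sketch of anomaly detection over per-user daily activity counts.
# A z-score against each user's own baseline stands in for richer AI models.
from statistics import mean, stdev

def anomalies(daily_counts: dict[str, list[int]], threshold: float = 3.0):
    """Yield (user, today) where today's count is > threshold std devs above baseline."""
    for user, history in daily_counts.items():
        *baseline, today = history
        if len(baseline) < 2:
            continue                              # not enough history for a baseline
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (today - mu) / sigma > threshold:
            yield user, today

downloads = {
    "alice@contoso.com": [12, 9, 14, 11, 10, 240],   # sudden spike: flagged
    "bob@contoso.com":   [30, 28, 33, 31, 29, 32],   # steady: not flagged
}
for user, count in anomalies(downloads):
    print(f"Anomalous download volume for {user}: {count} files")
```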
77% of organizations believe that AI will accelerate their ability to discover unprotected sensitive data, detect anomalous activity, and automatically protect at-risk data. 76% believe AI will improve the accuracy of their data security strategies, and an overwhelming 93% are at least planning to use AI for data security.
Organizations already using AI as part of their data security operations report fewer alerts as well. On average, organizations using AI security tools receive 47 alerts per day, compared to an average of 79 alerts among those that have yet to implement similar AI solutions.
AI’s ability to analyze vast amounts of data, detect anomalies, and respond to threats in real time offers a promising avenue for strengthening data security. This optimism is also driving investments in AI-powered data security solutions, which are expected to play a pivotal role in future security strategies.
As we look to the future, customers are looking for ways to streamline how they discover and label sensitive data, provide more effective and accurate alerts, simplify investigations, make recommendations to better secure their data environments, and ultimately reduce the number of data security incidents.
So, what can be made of this new generative AI revolution, especially as it pertains to data security? For those beginning their adoption roadmap or looking for ways to improve, the full report offers three broadly applicable recommendations.
Gain deeper insights about generative AI and its influence on data security by exploring Data Security Index: Trends, insights, and strategies to keep your data secure and navigate generative AI. There you’ll also find in-depth sentiment analysis from participating data security professionals, providing even more insight into common thought processes around generative AI adoption. For further reading, you can also check out the Data Security as a Foundation for Secure AI Adoption white paper.
To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.