In a recent study, cloud-native network detection and response company ExtraHop revealed a troubling trend: businesses are grappling with the security implications of employees’ use of generative AI.
Their new research report, The Generative AI Tipping Point, sheds light on the challenges organisations face as generative AI technology becomes more prevalent in the workplace.
The report delves into how organisations approach the use of generative AI tools, revealing a significant cognitive dissonance among IT and security leaders. Some 73 percent of these leaders admitted that their employees frequently use generative AI tools or large language models (LLMs) at work. Despite this, a striking majority admitted to being uncertain about how to effectively address the associated security risks.
When asked about their concerns, IT and security leaders said they worry more about the possibility of inaccurate or nonsensical responses (40%) than about critical security issues such as exposure of customer and employee personally identifiable information (PII) (36%) or financial loss (25%).
Raja Mukerji, Co-Founder and Chief Scientist at ExtraHop, said: “By blending innovation with strong safeguards, generative AI will continue to be a force that will uplevel entire industries in the years to come.”
One of the study’s more surprising revelations was the ineffectiveness of generative AI bans. About 32 percent of respondents said their organisations had prohibited the use of these tools. Yet only five percent reported that employees never used them – indicating that bans alone are not enough to curb their use.
The study also highlighted a clear desire for guidance, particularly from government agencies. A significant 90 percent of respondents expressed a need for government involvement, with 60 percent supporting mandatory regulations and 30 percent supporting government standards for businesses to adopt voluntarily.
Despite feeling confident in their current security infrastructure, the study revealed gaps in key security practices.
While 82 percent felt confident in their security stack’s ability to protect against generative AI threats, less than half had invested in technology to monitor the use of generative AI. Worryingly, only 46 percent had established policies governing acceptable use, and only 42 percent provided training to users on the safe use of these tools.
The findings come in the wake of the rapid adoption of technologies such as ChatGPT, which have become an integral part of modern businesses. Business leaders are urged to understand their employees’ generative AI usage so they can identify potential security vulnerabilities.
A full copy of the report can be found here.
(Photo by Henny Stander on Unsplash)
See also: BSI: Closing ‘AI trust gap’ key to unlock benefits
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo, taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.
Explore other upcoming corporate tech events and webinars powered by TechForge here.