A recent study by Software AG highlights the widespread use of Shadow AI across workplaces.
- Half of the surveyed employees are using AI tools not issued by their company.
- Nearly half of these employees would not abandon these tools even if banned by their employers.
- The study underscores the necessity for enterprises to develop comprehensive AI strategies.
- Cybersecurity and data governance remain key concerns amidst the rising AI usage.
A study conducted by Software AG found that 50% of employees are using Shadow AI, defined as AI tools not issued by their employers. The survey of 6,000 knowledge workers in the US, UK, and Germany was conducted between September 13-25, 2024.
The study further reveals a strong attachment to these tools: 46% of workers say they would continue using them even if their employer imposed a total ban. This presents a significant challenge for businesses seeking to address employee reliance on unauthorised AI resources.
Steve Ponting, Director at Software AG, indicated that while 2023 was a period of AI experimentation, 2024 will see GenAI become more entrenched in workplaces. Currently, 75% of knowledge workers use AI, a figure anticipated to rise to 90%, driven by AI’s ability to save time (83%), make jobs easier (81%), and improve productivity (71%).
As AI adoption escalates, so do risks such as cyber attacks, data leaks, and regulatory breaches; organisations must therefore establish robust plans to mitigate them.
Notably, the study found that 47% of employees believe their use of AI tools will expedite their career advancement, indicating a future where AI integration will be integral to job success.
Many workers prefer personal AI tools because of the independence they offer (53%), while 33% say their IT departments do not provide the tools they need. This points to the need for businesses to reassess their provision of official AI tools.
Although employees acknowledge the risks, with high awareness of cybersecurity (72%) and data governance (70%) issues, only a minority take proactive risk management measures such as running security scans (27%) or reviewing data policies (29%).
J-M Erlendson, Global Evangelist at Software AG, stressed that consistent AI users tend to be better at mitigating risks than infrequent users. He urged organisations to implement comprehensive training programmes to prepare staff as AI usage proliferates.
Erlendson also commented on the persistence of Shadow AI, stating that it contributes to operational disorder within organisations. He emphasised the importance of transparent frameworks and understanding employee preferences for AI tools along with required training.
The findings emphasise the critical need for companies to address both the opportunities and challenges posed by Shadow AI use.