AI restriction policies are unworkable, active oversight is required
02/06/2025 | ComputerWeekly
An opinion piece in ComputerWeekly highlights a growing challenge concerning the unapproved use of artificial intelligence (AI) tools by employees in the workplace. Commonly known as shadow AI, the issue is similar to the risk posed by unauthorised USB drives and personal cloud storage, although the stakes are higher than with previous shadow IT concerns.
The core issue, much like the rise of shadow IT, stems from employees' desire for efficiency and their wish to "work smarter, not harder." However, each interaction with an unvetted AI system introduces a potential exposure point for sensitive data.
The article identifies the core risk as the combination of readily accessible, powerful AI tools and the implicit trust users place in AI-generated outputs. It cautions against employees making decisions based on unverified AI content, likening AI to a confident intern who, while helpful, requires oversight and verification.
Forward-thinking organisations are now moving beyond simple restriction policies, opting instead to develop frameworks that embrace AI's value while integrating appropriate safeguards to manage these evolving risks.
A related article considers the CISO's role in deploying secure AI systems.
As generative AI tools become increasingly integrated into enterprise operations, they introduce both transformative potential and substantial risks. The challenge for CISOs is balancing innovation with securing data, ensuring cross-border compliance, and preparing for the inherent unpredictability of large language models and AI agents.
The consequences of a compromised or poorly governed AI tool are significant, potentially leading to sensitive data exposure, violations of global data protection laws, or critical decisions being made based on false or manipulated information.
The article proposes that CISOs fundamentally reassess their cybersecurity strategies and policies across three key domains: data use, data sovereignty, and AI safety.
On data use, the article stresses that the most immediate risk associated with AI adoption is not malicious actors, but rather a lack of understanding. Many organisations are integrating third-party AI tools without a comprehensive grasp of how their data will be utilised, stored, or shared.
Meanwhile, a third article examines the risk associated with open source AI solutions.