Gartner predicts that by 2026 over 80% of workplaces will be using AI to some degree. While this will no doubt boost productivity, business managers must create policies that dictate how AI will be used in order to mitigate privacy and other risks associated with AI.

In 2023, a Samsung employee uploaded proprietary code into ChatGPT, asking the AI to review it and recommend efficiency and other improvements. The model 'learned' from the uploaded code and was able to offer snippets of that same code to other users looking for enhancements to their own code. The problem was that Samsung considered the code intellectual property.

As a result, Samsung created a policy dictating how AI systems can be used in the workplace, and other organizations, from banks to telecoms, followed suit. This example shows the importance of having a written policy that dictates how AI can be used and what information can be provided to an AI system.

AI policies should be subsets of broader data and privacy policies, which classify data and dictate how it is stored, viewed, and moved. At a minimum, an AI policy should:

  • Define acceptable uses of AI, starting from an 'allow' rule rather than a 'deny' rule.
  • Define a process for confirming copyright and accuracy before publishing AI-generated content.
  • Prohibit inputting sensitive data into AI models.
  • Check AI output for bias, hallucinations, and poisoning effects before publishing.
  • Require employees to disclose their use of AI.
  • Be reviewed and updated regularly, because AI is a nascent technology.
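The "no sensitive data" rule in particular lends itself to partial automation. Below is a minimal sketch of a pre-submission check that scans a prompt for sensitive patterns before it ever reaches an AI service; the pattern names and regexes are illustrative assumptions, not a production data-loss-prevention ruleset.

```python
import re

# Illustrative patterns only; a real deployment would draw on the
# organization's own data-classification rules and DLP tooling.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

def safe_to_submit(prompt: str) -> bool:
    """True only if no sensitive pattern matched; the caller would block
    or require review for anything that fails this check."""
    return not check_prompt(prompt)
```

A gate like this cannot catch everything (proprietary source code, for instance, has no reliable regex signature), which is why the policy still needs the human-facing rules above alongside any tooling.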
