How can information security leaders ensure that employees' use of artificial intelligence tools to boost organizational productivity is not putting sensitive company data at risk of leakage? A glance at LinkedIn or the news will make most of us wonder which AI apps are safe to use, which aren't, and which should be blocked, limited, or restricted.
As reported by Mashable, Samsung employees inadvertently leaked confidential company information to ChatGPT on at least three separate occasions in April, prompting the company initially to restrict the length of employee ChatGPT prompts to one kilobyte. Then, on May 1st, Samsung decided to temporarily ban employee use of generative AI tools on company-owned devices, as well as on non-company-owned devices running on internal networks.
Whether the concern is protecting intellectual property (IP), personally identifiable information (PII), or any other private or confidential data, the Samsung example gives CISOs everywhere good reason to scrutinize the risk of similar data leakage events happening to their organizations. It also creates an opportunity to put technology, policies, and processes in place to prevent them. That means having sound data loss prevention (DLP) and application control capabilities that let administrators discover which AI tools are in use across the organization, assess the data security risks each tool presents, determine whether it is safe or needs to be blocked, and then block it or allow its use, either in full or in a limited capacity.
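To make that discover-assess-decide workflow concrete, here is a minimal sketch in Python. It is purely illustrative: the class, field names, and risk rules are assumptions for the sake of the example, not the behavior of any particular DLP or application control product.

```python
from dataclasses import dataclass

# Hypothetical access decisions an administrator might apply to a discovered AI app.
ALLOW, LIMITED, BLOCK = "allow", "limited", "block"

@dataclass
class AIAppAssessment:
    name: str
    handles_sensitive_data: bool   # could the app receive IP, PII, or other confidential data?
    vendor_retains_prompts: bool   # does the vendor store or train on submitted prompts?
    has_enterprise_controls: bool  # e.g. tenant isolation, opt-out of training, audit logs

def decide_policy(app: AIAppAssessment) -> str:
    """Map a discovered AI app's risk assessment to an access decision (hypothetical rules)."""
    if app.handles_sensitive_data and app.vendor_retains_prompts:
        # Highest risk: sensitive data could end up stored outside the organization.
        return BLOCK
    if app.handles_sensitive_data or not app.has_enterprise_controls:
        # Moderate risk: permit use, but pair it with DLP rules such as prompt
        # size limits or content inspection rather than open access.
        return LIMITED
    return ALLOW

# Example: apps surfaced during discovery, with assessments filled in by the admin.
discovered = [
    AIAppAssessment("gen-ai-chat", True, True, False),
    AIAppAssessment("code-assistant", True, False, True),
    AIAppAssessment("image-generator", False, False, False),
]

for app in discovered:
    print(f"{app.name}: {decide_policy(app)}")
```

In practice the assessment inputs would come from the organization's own discovery and risk review, and the "limited" tier is where controls like Samsung's one-kilobyte prompt cap would sit.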

See below:
https://umbrella.cisco.com/blog/controlling-chaptgpt-risk-keeping-your-users-productive-and-data-safe