On AI: Exactly How Private And Secure Is Your Company’s Data On ChatGPT?
The risks of using AI assistants with regard to sensitive data
(This column originally appeared in Forbes)
According to a survey recently released by Cox Communications (a client of my firm), two-thirds of small business owners invested in AI for their companies last year, and 53 percent plan to invest even more in AI in 2024.
No, most of these businesses are not operating robots or developing their own large language models…yet. In 2024, "invested in AI" mostly means that small businesses and their employees are using generative AI platforms like ChatGPT, Google Gemini, Microsoft Copilot and Anthropic's Claude for back-office tasks: analyzing spreadsheets, attending and recording meetings, drafting emails, performing research, creating policies and reviewing contracts.
But there’s a concern here.
Using these platforms requires employees to share and upload data. And, given the enormous advances in AI assistants, it's safe to assume that over the next few years businesses will be leaning on cloud-based AI platforms like these to run more efficiently. Already GPT-4o can "see" and "hear" conversations. Google's Gemini can recognize everything in an office, from where a lost pair of glasses is sitting to the software code displayed on an open monitor. All of this information is going somewhere to be processed, right? It is. It's going right into the hands of those big technology companies. Should we be concerned? Damn right we should.
While governments, governors and tech companies issue edicts, mandates and promises about the safe use of AI, businesses have an equally pressing concern: Given the amount of data these AI apps need to do as promised, exactly how private and secure is our actual data? Spoiler alert: you’re not going to be happy with the answer.
OpenAI's data privacy policy is not unlike Microsoft's or Google's. The company says it won't sell your data and will only share it with your consent. However, its policy does stipulate that it can use your data to:
-Provide and improve services.
-Communicate with you.
-Ensure compliance with legal requirements.
-Conduct research and analysis.
-Enhance security and prevent fraud.
All of these uses give OpenAI wide latitude. For example, is using your company's proprietary data necessary to "provide and improve services"? Or to "conduct research and analysis"? Or to help the company "ensure compliance with legal requirements"? Good luck fighting the army of attorneys who will be deployed to defend these actions. And what difference would it make at that point? Your data would have already been sucked into the company's learning model.
So no, your data is not completely private. Nor is it completely secure. I’m sure OpenAI employs the best and the brightest security professionals to ensure that the critical information you’ve uploaded is protected, safe and free from data breaches. Just like Twitter/X, Adobe, Dropbox and LinkedIn. Or Trello, Vanderbilt University, Boeing and Sony. Feeling better now?
This is an opinion column so here's my opinion: your company's data is not completely private, nor is it completely safe, whether it's with OpenAI, Google, Microsoft or any AI platform. Sure, these companies all have policies, but those policies are written to favor their own interests and can be easily defended against any small organization that dares to question them. And yes, their security protocols operate at levels of quality most companies could never afford. But, as we've seen, none of this provides a 100 percent guarantee.
So how are my smartest clients addressing these concerns? The same way they address all issues impacting their decisions. By weighing risks and rewards.
Future AI tools will use our data to perform many core functions in our businesses, from stocking shelves to autonomously fixing issues. Yes, there are data loss and misuse risks, and smart business leaders will be weighing those risks against the rewards, just as we do as individuals every time we drive our cars, eat fast food, swim in the ocean or go on a blind date with someone we met online.
The risks of your data being stolen or used without your knowledge or consent are very real. However, the rewards of using AI platforms like ChatGPT — better customer service, increased sales, higher levels of productivity, more profits as a result — are pretty significant. There will never be a concrete answer to the data privacy or security issue. Just a continuing evaluation of what’s at stake.
Originally published at https://www.forbes.com.