Dangerous Tools. What Are the Threats to “Ordinary Users” from ChatGPT, Gemini, and Copilot?

Luc Williams

Just as no one encourages giving up the Internet in the name of our own safety and that of our loved ones, it would be equally naive to believe that tools based on generative artificial intelligence, such as ChatGPT, Gemini (from Google), or Copilot (from Microsoft), will stay out of our homes. Technological progress is largely irreversible: we will use AI en masse, and that is that. As with the Internet, the fundamental question is how to do it without harming ourselves. Cheryl Winokur Munk sought the answer for CNBC.

How to check if AI respects your data?

Data. We could stop there, because data is the key to most of the threats posed by widely available AI tools such as ChatGPT, Gemini, or Copilot. As Munk reminds us, each of the companies providing these tools has its own privacy policy, which should be read carefully. To gauge how transparent that policy is, we can turn to the questions privacy experts believe we should ask of each tool before we start using it. Here they are:

What data does the company intend to collect, and how will it use what you provide in exchange for access to the tool?

Can you turn off data sharing?

Can you limit how a company can use your data and for how long?

Can you remove your data from the company's database?

Is the account cancellation procedure simple and intuitive?

If a company doesn’t provide easy-to-find answers to these questions, you can safely assume that protecting your data isn’t a priority. A positive example, cited by Jodi Daniels, a privacy expert at Red Clover Advisors, is the proofreading tool Grammarly, which displays these rules prominently and in plain language.

Don’t let your acid reflux become training material

I have written elsewhere that we are capable of feeling empathy toward generative AI; here, following the experts Munk interviewed, let me add that it is definitely not worth trusting it. As Andrew Frost Moroz, founder of the alternative web browser Aloha Browser, says, sensitive data should never be fed into large language models.

If you shrug at this point because you don’t talk to ChatGPT about anything private, I advise you not to let your guard down. Imagine you have a paper document, a wall of text that you want as editable text in your word processor. You photograph the document, send the photo to the chatbot, and a moment later you can paste the text into Word. Very convenient and… risky. If that document is, for example, a notarial deed or a medical note about your health, the data goes straight to the company providing the tool. Are you sure you want the model to keep training on the description of your gastric reflux?

In short, AI tools should be used wisely. Translating a text from a language you don’t read, summarizing a Wikipedia article, drawing conclusions from some data – absolutely. But if the article concerns your embarrassing illness, or the data is your employer’s confidential information – watch out.

The model will not “unlearn” what it learned from your data

It is also worth remembering that even if we have become so accustomed to Copilot or Gemini that we cannot imagine giving them up, we can still protect ourselves from possible abuse. Each of the companies offering free AI tools, as mentioned above, has its own privacy policy.

Using Gemini, we can specify how long the company may store our data and to what extent. In the case of ChatGPT, we can stipulate that our conversations are not to be used for further training of the model. It is worth doing so: apart from the pleasant sense of making a small contribution to the technology’s development, we gain nothing from the model being improved on our texts, and it can expose us to risks. What risks? Until now, when we shared data with a tool, taking it back was technically trivial – it was enough to delete it. Once a model has been trained on our data, matters get complicated: it cannot “unlearn” what it has learned from them.

As for Microsoft’s Copilot, the company does not use or share our data without express consent. We can, however, grant that consent in the Power Platform administration panel. Doing so may make the tool’s features more effective, but it also means giving up control over our data, so it is worth weighing carefully whether the gain justifies the trade-off. And remember: even after giving such consent, we can withdraw it at any time.
