Elon Musk encouraged his followers to upload medical documents such as MRI scans and X-rays to his AI chatbot, Grok. Despite numerous concerns, many of his fans did just that, and in some cases even made their results publicly available, Futurism reports.
Data served to AI: private information made public
Experts warn against sharing such information with Grok. Doing so raises security concerns and also highlights the lack of transparency around Musk’s companies. “It’s very personal information, and it’s not entirely clear what Grok intends to do with it,” Bradley Malin, a professor of biomedical informatics at Vanderbilt University, told The New York Times.
Chatbot and data privacy protection
People sharing their medical data with Elon Musk’s chatbot may believe they are protected by the Health Insurance Portability and Accountability Act (HIPAA). The NYT notes, however, that the protection offered by this US federal law, which prevents doctors from sharing private health information, does not extend beyond medical settings. Once records are shared elsewhere, for example on a social networking site, they are no longer protected.
That is a stark contrast to what happens when technology companies enter formal partnerships with hospitals to obtain data, Bradley Malin said, since those arrangements are defined by detailed agreements on how the information is stored, shared and used. “Posting personal information to a place like Grok is more like: ‘Woo-hoo! Let’s throw in this data and hope the company will do what I expect it to do,’” Malin told the NYT.
Dangerous diagnoses, disclosure of confidential information
Inaccurate answers can also put patients at risk. Grok, for example, incorrectly identified a broken collarbone as a dislocated shoulder. Doctors also said the chatbot failed to recognize a “textbook case” of tuberculosis and, in another case, mistook an image of a benign cyst for testicles.
There are also concerns about how the chatbots themselves use the information. Large language models are trained on users’ conversations to improve their capabilities, which means anything you type could potentially become training data, so the risk of inadvertently exposing confidential information is real.
Insufficient chatbot privacy policy
At the same time, the privacy policies of X and xAI, Grok’s developer, are insufficient. The latter, for example, says it will not sell user data to third parties, but according to the NYT it does share data with “affiliated companies.”
As Futurism writes, there are plenty of reasons to doubt that the data is protected. After all, Musk himself brazenly encouraged people to submit medical records, even though xAI’s own policy states that it “does not intend to collect sensitive personal information,” including health and biometric data.