
Experts Alarmed by People Uploading Their Medical Scans to Elon Musk’s Grok AI

Image by Matteo Della Torre / NurPhoto via Getty / Futurism

To celebrate Grok’s new image-understanding capabilities, Elon Musk has encouraged his followers to share medical documents like MRI scans and X-rays with the AI chatbot, which is integrated into X-formerly-Twitter.

“This is still early stage, but it is already quite accurate and will become extremely good,” Musk wrote in a tweet at the end of last month. “Let us know where Grok gets it right or needs work.”

Despite the baffling privacy implications, many of his fans have done just that. In some cases, they’ve even shared their results publicly.

Now, experts are warning against sharing such information with Grok, echoing security concerns about chatbots at large but also emphasizing the particular lack of transparency around Musk’s companies.

“This is very personal information, and you don’t exactly know what Grok is going to do with it,” Bradley Malin, a professor of biomedical informatics at Vanderbilt University, told The New York Times.

People sharing their medical information with Musk’s chatbot may be under the impression that it’s protected by the Health Insurance Portability and Accountability Act, or HIPAA.

But the protections enshrined by the federal law, which prevent your doctor from sharing your private health info, do not extend beyond the medical purview, the NYT notes. Once you put it out in the open, like on a social media site, it’s fair game.

This stands in stark contrast to tech companies’ official partnerships with hospitals to obtain data, Malin said, which come with detailed agreements stipulating how that information is stored, shared, and used.

“Posting personal information to Grok is more like, ‘Wheee! Let’s throw this data out there, and hope the company is going to do what I want them to do,'” Malin told the NYT.

The risk of inaccurate answers may also put patients in danger. Grok, for instance, misidentified a broken clavicle as a dislocated shoulder, according to the report. Doctors responding to Musk’s tweet also found that the chatbot failed to recognize a “textbook case” of tuberculosis, and in another case mistook a benign cyst for testicles.

Then there are concerns about how the chatbots themselves use the information, because the underlying large language models rely on users’ conversations to fine-tune their capabilities. That means potentially anything you tell one could be used to train it, and given these models’ proclivity for hallucinating, the risk of one inadvertently blurting out sensitive information is not unfounded.

To that end, the privacy policies of X and of Grok developer xAI are unsatisfying. The latter’s, for example, claims that it will not sell user data to third parties, but that it does share data with “related companies,” per the NYT.

There’s reason enough to doubt how faithfully these policies are enforced in practice, however: Musk brazenly encouraged people to submit medical documents even though xAI’s policy states it “does not aim to collect sensitive personal information,” including health and biometric data.

Still, it’s possible that Musk’s companies have explicit guardrails around health information shared with Grok that haven’t been made public, according to Matthew McCoy, an assistant professor of medical ethics and health policy at the University of Pennsylvania.

“But as an individual user, would I feel comfortable contributing health data? Absolutely not,” McCoy told the NYT.

More on Grok: Sam Altman Points Out That Elon Musk’s “Anti-Woke” Grok AI Seems to Actually Support Kamala Harris
