Healthcare Technology Company Settles Texas Attorney General Allegations Regarding Accuracy of Generative AI Products

On September 18, 2024, the Texas Office of the Attorney General (“OAG”) announced that it reached “a first-of-its-kind settlement with a Dallas-based artificial intelligence healthcare technology called Pieces Technologies” (“Pieces”) to resolve “allegations that the company deployed its products at several Texas hospitals after making a series of false and misleading statements about the accuracy and safety of its products.”

According to the press release, “at least four major Texas hospitals have been providing their patients’ healthcare data in real time to Pieces so that its generative AI product can ‘summarize’ patients’ condition and treatment for hospital staff.”  Pieces developed “a series of metrics to claim that its healthcare AI products were ‘highly accurate,’ including advertising and marketing the accuracy of its products and services by claiming an error rate or ‘severe hallucination rate’ of ‘<1 per 100,000.’”  The OAG claimed that its “investigation found that these metrics were likely inaccurate and may have deceived hospitals about the accuracy and safety of the company’s products” in violation of the Texas Deceptive Trade Practices Act.

Among other restrictions, the settlement requires that Pieces provide customers with “documentation that clearly and conspicuously discloses any known or reasonably knowable harmful or potentially harmful uses or misuses of its products or services,” including:

  • “the type of data and/or models used to train its products and services;”
  • an “explanation of the intended purpose and use of its products and services, as well as any training or documentation needed to facilitate proper use of its products and services;”
  • “any known, or reasonably knowable, limitations of its products or services, including risks to patients and healthcare providers;” and
  • “any known, or reasonably knowable, misuses of a product or service that can increase the risk of inaccurate outputs or increase the risk of harm to individuals.”

Pieces also agreed under the settlement to include certain disclosures, such as how the relevant standards were calculated, in any marketing or advertising that contains statements “regarding any metrics, benchmarks, or similar measurements describing the outputs of its generative AI products.”

The OAG signaled that healthcare AI may become an enforcement priority.
