Massachusetts Attorney General Artificial Intelligence Guidance: What Healthcare Providers and Health IT Developers Need to Know

May 6, 2024

Amid the rapid expansion of Artificial Intelligence (AI) use and applications, the Massachusetts Attorney General’s recent release of AI guidance should serve as a reminder to healthcare providers and health IT developers of the legal risks that come with using AI.

While Massachusetts does not have specific AI legislation, obligations under Massachusetts consumer protection, anti-discrimination, and data security laws remain in force. On April 16, 2024, Massachusetts Attorney General Andrea Campbell issued an Attorney General Advisory on the Application of the Commonwealth’s Consumer Protection, Civil Rights, and Data Privacy Laws to Artificial Intelligence (the “Advisory”). The Advisory stresses that while AI has tremendous benefits for society, it is not without its risks.

Instances of AI producing incorrect and biased information have highlighted the urgency of regulatory intervention. Particularly alarming are cases where AI is exploited to deceive consumers, such as deepfakes, voice cloning, or chatbots used for fraudulent purposes to collect sensitive personal data. In light of these challenges, the Advisory points out that existing Massachusetts law, such as Chapter 93A, the Commonwealth’s consumer protection statute governing business practices, will play a pivotal role in holding AI developers and providers accountable.

For example, the AG identified the following acts or practices as unfair or deceptive:

  • Falsely advertising the quality, value, or usability of AI systems.
  • Supplying an AI system that is defective, unusable, or impractical for the purpose advertised.
  • Misrepresenting the reliability, manner of performance, safety, or condition of an AI system.
  • Offering an AI system for sale or use in breach of warranty, in that the system is not fit for the ordinary purposes for which such systems are used, or is unfit for the specific purpose for which it is sold where the supplier knows of that purpose.
  • Misrepresenting audio or video content of a person to deceive another into engaging in a business transaction or supplying personal information as if to a trusted business partner, as in the case of deepfakes, voice cloning, or chatbots used to commit fraud.

Beyond the examples listed above, the Advisory warns that failure to comply with federal consumer protection statutes, the Commonwealth’s standards for safeguarding personal information used by AI systems, or the Commonwealth’s Anti-Discrimination Law may itself constitute an unfair or deceptive act or practice.

The Advisory identifies potential liability under Chapter 93A, but Chapter 93A does not encompass the full range of laws and statutes applicable to AI or its diverse applications. The Advisory also serves as a reminder that the AG does not need new legislation to regulate the use of AI: existing laws and regulations governing consumer products and practices extend to AI systems. These laws remain the cornerstone for safeguarding consumers against potential risks associated with AI, including bias, lack of transparency, and infringement of data privacy rights.

Compliance with Massachusetts statutes and regulations aimed at protecting the public's health, safety, and welfare is imperative for ensuring the responsible development and deployment of AI systems. By upholding these regulatory standards, stakeholders—such as healthcare providers and health IT developers—can mitigate the potential risks posed by AI while fostering innovation and consumer trust in the Commonwealth.

Safeguarding consumer interests in the era of AI necessitates a multifaceted approach that combines regulatory oversight, transparency, and accountability. By addressing the challenges outlined in the Advisory and adhering to existing regulatory frameworks, policymakers, industry stakeholders, and consumers can collectively navigate the evolving landscape of AI while promoting ethical and responsible AI adoption.