
Dentons On Call

Making health law a little more accessible and a lot less daunting.


Ep. 58 – Addressing Potential Discrimination in Patient Care Decision Support Tools

By Susan Freed
April 24, 2025
  • Podcast

Effective May 1, 2025, the Section 1557 regulations require covered healthcare providers to take reasonable steps to identify and mitigate the risk of discrimination when they use AI and other emerging technologies in patient care that use race, color, national origin, sex, age, or disability as input variables. Whether a provider has taken reasonable steps to mitigate discrimination risks will depend on a variety of factors, including the provider's size and resources; how the provider is utilizing the tool; whether the provider customized the tool; and the processes the provider has in place to evaluate the tool for potential discrimination. These requirements do not apply to AI tools used outside of patient care, such as in billing or scheduling.

Providers utilizing AI tools to support patient care decisions should have a process in place to evaluate those tools for potential discrimination, both before purchase and on an ongoing basis thereafter. The evaluation should identify whether the tool uses race, color, national origin, sex, age, or disability as input variables and, if so, what information, if any, is publicly available on its potential for bias or discrimination. Providers should also reach out to the product's developer and/or the entity through which the provider purchased the tool for additional information.
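To make that first screening step concrete, here is a minimal sketch, in Python, of how a provider's informatics or compliance team might check a tool's documented input variables against the characteristics named in Section 1557. The tool names, variable names, and inventory structure are hypothetical illustrations, not anything prescribed by the rule:

    # Hypothetical first-pass screen: flag any patient care decision support
    # tool whose documented input variables include a Section 1557 protected
    # characteristic, so staff know to research the bias literature and
    # contact the developer. Names below are made up for illustration.
    PROTECTED_CHARACTERISTICS = {"race", "color", "national_origin", "sex", "age", "disability"}

    def protected_inputs(input_variables: list[str]) -> set[str]:
        """Return which protected characteristics appear among a tool's inputs."""
        return {v for v in input_variables if v.lower() in PROTECTED_CHARACTERISTICS}

    # Illustrative tool inventory (not real products).
    inventory = {
        "sepsis_risk_score": ["age", "heart_rate", "lactate", "wbc_count"],
        "scheduling_optimizer": ["appointment_type", "clinic_location"],
    }

    for tool, variables in inventory.items():
        hits = protected_inputs(variables)
        if hits:
            print(f"{tool}: uses {sorted(hits)} -> review publicly available "
                  f"bias information and contact the developer")

A simple check like this will not satisfy the rule by itself, but it shows how a tool inventory can be screened systematically rather than ad hoc.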

If an AI tool has the potential for bias or discrimination, the provider should consider how to address that potential in the way its staff uses the tool, including educating end users about the potential for bias and discrimination and about any recommendations or best practices developed to minimize it. Provider policies should also address how patients, providers, and others can submit a complaint regarding bias or discrimination in the use of an AI tool and how such complaints will be handled.
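As one illustration of what ongoing evaluation could look like in practice, the sketch below compares how often a tool recommends an intervention for two patient groups and flags large gaps for human review. The 10-percentage-point threshold and the sample data are assumptions chosen for illustration; the rule does not prescribe any particular metric or cutoff:

    # Hypothetical ongoing monitoring: compare a tool's recommendation rate
    # across two patient groups and flag notable gaps for human review.
    def recommendation_rate(outcomes: list[bool]) -> float:
        """Fraction of patients for whom the tool recommended the intervention."""
        return sum(outcomes) / len(outcomes) if outcomes else 0.0

    def disparity_flag(group_a: list[bool], group_b: list[bool],
                       threshold: float = 0.10) -> bool:
        """Flag for review if the rate gap exceeds an (assumed) 10-point threshold."""
        return abs(recommendation_rate(group_a) - recommendation_rate(group_b)) > threshold

    # Illustrative data only.
    group_a = [True, True, False, True, True]    # e.g., patients over 65
    group_b = [False, True, False, False, True]  # e.g., patients 65 and under

    if disparity_flag(group_a, group_b):
        print("Recommendation rates differ notably between groups; escalate for review.")

A flagged disparity is a prompt for investigation, not proof of discrimination; clinical context may explain the gap, which is why the output routes to human review rather than an automatic conclusion.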

On our podcast this week, we discuss steps small providers can take to address potential discrimination in patient care decision support tools and meet the rule's requirements. We have also updated our Sample Generative AI Policy to address the rule's requirements for patient care decision support tools:

Download: Ep. 58 Sample Generative AI Policy

Subscribe to our podcast.


About Susan Freed

Susan helps health care providers and health plans operate successfully in a challenging regulatory and reimbursement landscape. She approaches each client's problems with practical solutions tailored to that client's needs.


RELATED POSTS

  • Ep. 46 – Creating Compliance Champions (Podcast) – By Susan Freed
  • Ep. 17 – New Section 1557 Regulations: What the new non-discrimination rules mean for providers (Podcast) – By Susan Freed
  • Three Habits of Highly Successful Compliance Officers (Podcast) – By Susan Freed
