Last week, we kicked off a three-part series on artificial intelligence, starting with a discussion of how to manage AI risks related to employees. This week, we’re covering the basic processes providers should adopt when purchasing AI technology or services that use AI technology.
Understand When AI Technology May Be Relevant
A strong starting point is building one simple question into your contracting process:
“Does this product or service involve artificial intelligence?”
This should become as routine as asking whether a vendor will handle protected health information for HIPAA purposes.
To implement this, organizations may need to:
- Update contracting or procurement policies.
- Educate staff who review vendor agreements.
- Train leadership to flag potential AI use in services.
Sometimes the easiest way to identify AI risk is simply to ask the vendor directly whether the product or service incorporates AI technology.
Evaluate the AI Technology
If the answer is yes, the next step is conducting a basic evaluation before signing the contract.
This process does not have to be complex, but organizations should understand how the technology will be used and what data will be shared. Based on those answers, the organization should evaluate internally whether the technology is appropriate for its intended use.
For tools involved in patient care, clinical leadership should review safety information. For tools involving employment decisions or billing functions, HR and billing leaders should vet them. If the tool uses recordings or surveillance, confirm what consents your state law may require.
Providers should document how they evaluated the tool and why they are comfortable using it. Vendor-provided materials, public research, and internal expert review can support that documentation.
Understand the Data Being Shared
A critical part of evaluating AI tools is understanding what information the system will receive.
If the tool will access protected health information, confidential business information, or data covered by non-disclosure or confidentiality agreements, the provider should confirm that the vendor’s practices align with their legal obligations, such as requiring a Business Associate Agreement or specific confidentiality provisions.
Ensure the Contract Contains Basic Protections
After deciding to move forward with the technology, ensure the agreement with the vendor or contractor contains basic protections for the organization and meets its legal requirements. For small providers, a contract review checklist can help evaluate vendor agreements and terms of use. Legal counsel should be consulted in the process and can help you build your own checklist. For a sample, check out our free contract review checklist template below.
Monitor AI Tools Over Time
Compliance efforts should not stop once the contract is signed. Organizations should continue monitoring AI tools by:
- Checking in with vendors about updates or new information.
- Gathering feedback from staff using the technology.
- Watching for potential bias or discrimination issues.
- Reviewing how the tool performs in practice.
- Ensuring the tool is used only as intended.
Setting a cadence and process for reviewing AI tools can minimize compliance risks.
Related Episodes:
Ep. 14 – Implementing AI and Mitigating Compliance Risks – Part I
Ep. 15 – Implementing AI and Mitigating Compliance Risks – Part II
Ep. 97 – AI and Your Employees