Image courtesy of PLI; from left to right: Van Lindberg, Peter Schildkraut, me, and Ron Eritano
Glynna Christian and Van Lindberg co-chaired a terrific and very comprehensive two-day program “Artificial Intelligence Law” at PLI this January in New York. I served as faculty for two of the panels, “Update on the Regulation of AI,” and “Contracting Around AI: Important Considerations.”
The first panel covered the EU AI Act, the Executive Order on AI, state-level AI legislation, the prospects of federal legislation, and briefly touched on legal regimes in other countries. We didn't have slides for this panel, but I can share some of the related resources here:
- Co-panelist Ron Eritano’s Normandy Group put together a summary of the federal AI-related legislation proposed in the 118th Congress
- Everything I covered with respect to the EU AI Act is also available in Part 2 of my post “Choose Your Own Adventure: The EU AI Act and Openish AI”
- The Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence
- Co-Panelist Kristen Johnston pointed us to a great resource maintained by EPIC, tracking AI-related law across all the states
The second panel covered unique AI-related risks, contractual risk-mitigation measures, vendor screening, and vendor management. Glynna Christian and her firm, Holland & Knight, were kind enough to allow me to reprint a suggested clause Glynna included as part of the slide deck for this panel. It is a sample customer-favorable clause on the use of AI technologies:
[Except as otherwise described in each SOW,] Vendor represents and warrants that it will not perform any Services that uses or incorporates, in whole or in part, any AI Tools (or depends in any way upon any AI Tools), including without limitation, any collection or processing of any [Customer Data or Personal Information] using any AI Tools. “AI Tools” means any and all deep learning, machine learning, and other artificial intelligence technologies, including any and all (i) algorithms, heuristics, models, and methodologies, whether in source code, object code, human readable form or other form, (ii) proprietary algorithms, software or other IT Systems that make use of or employ expert systems, natural language processing, computer vision, automated speech recognition, automated planning and scheduling, neural networks, statistical learning algorithms (like linear and logistic regression, support vector machines, random forests, k-means clustering), or
reinforcement learning, and (iii) proprietary embodied artificial intelligence and related hardware or equipment. With respect to any and all AI Tools described in an SOW approved by Customer, Vendor further represents and warrants that:
(a) each applicable SOW accurately identifies and fully describes all AI Tools;
(b) the AI Tools will (i) perform with a high degree of accuracy in accordance with the Specifications and (ii) not produce materially inaccurate results when used in accordance with the Documentation;
(c) Vendor will monitor the performance of the AI Tools to ensure continued accuracy in accordance with the Specifications, including processes and policies for the regular assessment and validation of the AI Tools’ outputs;
(d) Vendor has obtained, and is in compliance with, all rights and licenses necessary to use all AI Tools as described in the applicable SOW;
(e) Vendor has complied with all the Laws [and industry standards] applicable to (i) Vendor’s development and provision of all AI Tools as described in the applicable SOW and (ii) Customer’s use of all of the AI Tools as described in the applicable SOW;
(f) [Vendor will comply with all Customer policies and procedures relating to the use of AI Tools];
(g) Vendor will notify Customer at least [X] days prior to any [material] changes pertaining to any of the AI Tools (in whole or in part);
(h) Vendor will cooperate and comply with Customer’s privacy, security, and proprietary rights questionnaires and assessments concerning all such AI Tools and all proposed changes thereto;
(i) Vendor will, upon Customer’s request, allow Customer (or its agent) to audit or review all Services for usage of AI Tools and will provide Customer with all related necessary assistance;
(j) there have been no interruptions in use of any such AI Tool in the past [X] months;
(k) Vendor (i) retains and maintains information in human-readable form that explains or could be used to explain the decisions made or facilitated by the AI Tools, and (ii) maintains such information in a form that can readily be provided to Customer or Governmental Authorities upon request;
(l) Vendor maintains or adheres to [industry standard] policies and procedures relating to the ethical or responsible use of AI Tools at and by Vendor, including policies, protocols and procedures for (i) developing and implementing AI Tools in a way that promotes transparency, accountability and human interpretability; (ii) identifying and mitigating bias in training data or in the algorithmic model used in AI Tools, including without limitation, implicit racial, gender, or ideological bias; and (iii) management oversight and approval of employees’ use or implementation of AI Tools (collectively, “Vendor AI Policies”);
(m) there has been (i) no actual or alleged non-compliance with any such Vendor AI Policies; (ii) no actual or alleged failure of any AI Tool to satisfy the requirements or guidelines specified in any such Vendor AI Policies; (iii) no Claim alleging that any training data used in the development, training, improvement or testing of any AI Tool was falsified, biased, untrustworthy or manipulated in an unethical or unscientific way; and no report, finding or impact assessment by any employee, contractor, or third party that makes any such allegation; and (iv) no request from any Governmental Authority concerning any AI Tool.
Co-panelist Jason Mark Anderman also contributed a customer-friendly clause to the slide deck related to training, confidentiality and privacy:
Training and Instance Confidentiality Limits. Company will ensure that the Services and Software, provided via a third-party cloud (“Cloud Service Provider”) and AI environment (“Cloud AI Service Environment”), shall maintain strict confidentiality and security of Customer Data constituting personal data or confidential information of Customer’s clients (“Personal/Client Data”). The Personal/Client Data will be securely retained within the specific, dedicated Cloud AI Service Environment allocated for the Company, and will not contribute to the training of the Company’s, or the Cloud Service Provider’s AI models, nor be utilized by any third party outside of the Customer’s expressly nominated clients (in writing). Upon receipt of a notice from Customer, Company will remove all Customer Data from the Cloud AI Service Environment. Company will ensure that the governing contractual terms (e.g., terms of service) issued by the Cloud Service Provider include provisions materially consistent with this provision, and will identify the foregoing to Customer. Company will allow Customer to first approve in writing a given Cloud Service Provider and its Cloud AI Service Environment, such approval not to be unreasonably withheld or delayed. If there is any conflict or ambiguity between this provision and the rest of the Agreement, this provision governs and controls.
Here are some other related resources:
- My panel chair from the first panel, Peter Schildkraut, is the lead author on a great article describing the FTC’s settlement with Rite Aid regarding its use of facial recognition technology. It’s relevant because the settlement essentially spells out what the FTC thinks a good AI vendor management policy needs to include in order to avoid FTC charges of unfair and deceptive practices. The short of it is that I think it’s nearly impossible to do what the FTC requires unless a company either has staff with fairly deep AI expertise or hires consultants who do
- Microsoft’s AI Security Risk Assessment
