Heather Meeker and I co-chaired this year’s OSS program for PLI once again: “Open Source Software 2024 – From Compliance to Cooperation.” Check it out for a smattering of open source-related topics, both beginner and expert. I also presented a new segment with Yusuf Safari, “Open Source and Artificial Intelligence – Practical Licensing Issues” (aka “Open Source and Machine Learning – Practical Licensing Issues”). It covers the AI licensing landscape, “ethical” licenses, the data licensing landscape, the EU AI Act’s treatment of “open” AI, and AI corporate policies. Slides for the presentation along with speaker notes are available here.
Keynote at the International Conference on Learning Representations (ICLR)
Me with ICLR's two principal organizers, Been Kim (Research Scientist at DeepMind) and Yisong Yue (Machine Learning Professor at Caltech), at the Organizing Committee reception.
This past spring I was invited to present a keynote at the 12th International Conference on Learning Representations (ICLR) in Vienna: “Copyright Fundamentals for AI Researchers.” If you were not one of the lucky 2,000 people in the audience – not to worry – ICLR has posted a video of my talk and the slides here. The keynote explored the current state of copyright law with respect to AI in the U.S., potential claims and defenses, as well as practical tips for minimizing legal risk.
PLI’s Artificial Intelligence Law Program 2024
Image courtesy of PLI, from left to right, Van Lindberg, Peter Schildkraut, me, and Ron Eritano
Glynna Christian and Van Lindberg co-chaired a terrific and very comprehensive two-day program “Artificial Intelligence Law” at PLI this January in New York. I served as faculty for two of the panels, “Update on the Regulation of AI,” and “Contracting Around AI: Important Considerations.”
The first panel covered the EU AI Act, the Executive Order on AI, state-level AI legislation, the prospects of federal legislation, and briefly touched on legal regimes in other countries. We didn’t have slides for this panel, but I can share some of the related resources here:
- Co-panelist Ron Eritano’s Normandy Group put together a summary of the federal AI-related legislation proposed in the 118th Congress
- Everything I covered with respect to the EU AI Act is also available in Part 2 of my post “Choose Your Own Adventure: The EU AI Act and Openish AI”
- The Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence
- Co-Panelist Kristen Johnston pointed us to a great resource maintained by EPIC, tracking AI-related law across all the states
The second panel covered unique AI-related risks, contractual risk-mitigation measures, vendor screening, and vendor management. Glynna Christian and her firm, Holland & Knight, were kind enough to allow me to reprint a suggested clause Glynna included as part of the slide deck for this panel. It is a sample customer-favorable clause on the use of AI technologies:
[Except as otherwise described in each SOW,] Vendor represents and warrants that it will not perform any Services that uses or incorporates, in whole or in part, any AI Tools (or depends in any way upon any AI Tools), including without limitation, any collection or processing of any [Customer Data or Personal Information] using any AI Tools. “AI Tools” means any and all deep learning, machine learning, and other artificial intelligence technologies, including any and all (i) algorithms, heuristics, models, and methodologies, whether in source code, object code, human readable form or other form, (ii) proprietary algorithms, software or other IT Systems that make use of or employ expert systems, natural language processing, computer vision, automated speech recognition, automated planning and scheduling, neural networks, statistical learning algorithms (like linear and logistic regression, support vector machines, random forests, k-means clustering), or reinforcement learning, and (iii) proprietary embodied artificial intelligence and related hardware or equipment. With respect to any and all AI Tools described in an SOW approved by Customer, Vendor further represents and warrants that:
(a) each applicable SOW accurately identifies and fully describes all AI Tools;
(b) the AI Tools will (i) perform with a high degree of accuracy in accordance with the Specifications and (ii) not produce materially inaccurate results when used in accordance with the Documentation;
(c) Vendor will monitor the performance of the AI Tools to ensure continued accuracy in accordance with the Specifications, including processes and policies for the regular assessment and validation of the AI Tools’ outputs;
(d) Vendor has obtained, and is in compliance with, all rights and licenses necessary to use all AI Tools as described in the applicable SOW;
(e) Vendor has complied with all the Laws [and industry standards] applicable to (i) Vendor’s development and provision of all AI Tools as described in the applicable SOW and (ii) Customer’s use of all of the AI Tools as described in the applicable SOW;
(f) [Vendor will comply with all Customer policies and procedures relating to the use of AI Tools];
(g) Vendor will notify Customer at least [X] days prior to any [material] changes pertaining to any of the AI Tools (in whole or in part);
(h) Vendor will cooperate and comply with Customer’s privacy, security, and proprietary rights questionnaires and assessments concerning all such AI Tools and all proposed changes thereto;
(i) Vendor will, upon Customer’s request, allow Customer (or its agent) to audit or review all Services for usage of AI Tools and will provide Customer with all related necessary assistance;
(j) there have been no interruptions in use of any such AI Tool in the past [X] months;
(k) Vendor (i) retains and maintains information in human-readable form that explains or could be used to explain the decisions made or facilitated by the AI Tools, and (ii) maintains such information in a form that can readily be provided to Customer or Governmental Authorities upon request;
(l) Vendor maintains or adheres to [industry standard] policies and procedures relating to the ethical or responsible use of AI Tools at and by Vendor, including policies, protocols and procedures for (i) developing and implementing AI Tools in a way that promotes transparency, accountability and human interpretability; (ii) identifying and mitigating bias in training data or in the algorithmic model used in AI Tools, including without limitation, implicit racial, gender, or ideological bias; and (iii) management oversight and approval of employees’ use or implementation of AI Tools (collectively, “Vendor AI Policies”);
(m) there has been (i) no actual or alleged non-compliance with any such Vendor AI Policies; (ii) no actual or alleged failure of any AI Tool to satisfy the requirements or guidelines specified in any such Vendor AI Policies; (iii) no Claim alleging that any training data used in the development, training, improvement or testing of any AI Tool was falsified, biased, untrustworthy or manipulated in an unethical or unscientific way; and no report, finding or impact assessment by any employee, contractor, or third party that makes any such allegation; and (iv) no request from any Governmental Authority concerning any AI Tool.
Co-panelist Jason Mark Anderman also contributed a customer-friendly clause to the slide deck related to training, confidentiality and privacy:
Training and Instance Confidentiality Limits. Company will ensure that the Services and Software, provided via a third-party cloud (“Cloud Service Provider”) and AI environment (“Cloud AI Service Environment”), shall maintain strict confidentiality and security of Customer Data constituting personal data or confidential information of Customer’s clients (“Personal/Client Data”). The Personal/Client Data will be securely retained within the specific, dedicated Cloud AI Service Environment allocated for the Company, and will not contribute to the training of the Company’s, or the Cloud Service Provider’s AI models, nor be utilized by any third party outside of the Customer’s expressly nominated clients (in writing). Upon receipt of a notice from Customer, Company will remove all Customer Data from the Cloud AI Service Environment. Company will ensure that the governing contractual terms (e.g., terms of service) issued by the Cloud Service Provider include provisions materially consistent with this provision, and will identify the foregoing to Customer. Company will allow Customer to first approve in writing a given Cloud Service Provider and its Cloud AI Service Environment, such approval not to be unreasonably withheld or delayed. If there is any conflict or ambiguity between this provision and the rest of the Agreement, this provision governs and controls.
Here are some other related resources:
- My panel chair from the first panel, Peter Schildkraut, is the lead author on a great article describing the FTC’s settlement with Rite Aid regarding its use of facial recognition technology. It’s relevant because the settlement essentially spells out what the FTC thinks a good AI vendor management policy needs to include in order to avoid FTC charges of unfair and deceptive practices. The short of it is that I think it’s nearly impossible to do what the FTC requires unless a company either has staff with fairly deep AI expertise or it hires consultants who do.
- Microsoft’s AI Security Risk Assessment
Co-Chairing PLI’s Annual “Open Source Software – From Compliance to Cooperation” Program + Slides to New Presentation “Licensing ‘Open’ AI Models”

I co-chaired the Practising Law Institute’s annual “Open Source Software 2023 – From Compliance to Cooperation” program with Heather Meeker again this September. The program is a day-long continuing legal education event with a variety of open source licensing and compliance experts covering both introductory and advanced topics as well as recent developments in OSS licensing.
As part of the program, Luis Villa, founder and general counsel of Tidelift, and I presented a session titled “Licensing ‘Open’ AI Models” (it’s called “Open Source and Artificial Intelligence” on the PLI site). We did a deep dive on what “open” AI licenses currently entail, the legal and technical pros and cons of using “open” AI models, the applicability of open source principles to the AI domain, and how “open” AI licenses interact with traditional OSS licenses. This presentation is useful for anyone thinking of using or releasing a publicly available AI model.
Aaron Williamson of Williamson Legal (former counsel for the Software Freedom Law Center and general counsel to the Fintech Open Source Foundation) and I did a session titled “OSS in Transactions, Licensing and M&A” where we took a close look at contractual provisions related to open source software and provided some advice on where and how they should be implemented. Our presentation was loosely based on a white paper we co-authored titled “IoT and the Special Challenges of Open Source Software Licensing,” which was published in the ABA’s Landslide magazine.
Co-Chairing PLI’s Annual “Open Source Software – From Compliance to Cooperation” Program
This September I co-chaired the Practising Law Institute’s annual “Open Source Software – From Compliance to Cooperation” program with Heather Meeker. The program is a day-long continuing legal education event with a variety of open source licensing and compliance experts covering both introductory and advanced topics as well as recent developments in OSS licensing. We had an all-star lineup this year, so it’s a terrific way for attorneys to pick up 6 CLE credits, including 1 elimination of bias credit.
As part of the program, Aaron Williamson of Williamson Legal (former counsel for the Software Freedom Law Center and general counsel to the Fintech Open Source Foundation) and I did a session titled “OSS in Transactions, Licensing and M&A” where we did a deep dive on contractual provisions related to open source software and provided some advice on where and how they should be implemented. Our slides are available for download here. Our presentation was loosely based on a white paper we co-authored titled “IoT and the Special Challenges of Open Source Software Licensing,” which was also published in the ABA’s Landslide magazine.
