AI Addenda — Navigating the Legal Landscape Across the Software Lifecycle

April 21, 2025 Aly Dossa & Marcus Burnside

The rise of artificial intelligence has rapidly transformed industries, driving innovation and efficiency across organizations. However, introducing AI into business operations brings a complex set of legal and operational risks. AI addenda have accordingly become a key element in software contracts, expanding those agreements beyond traditional legal terms.

This article outlines how AI addenda play a crucial role across different stages of the software lifecycle — from contracting and purchasing to implementation and operation — with a focus on the legal challenges and risks at each phase. It assumes a basic understanding of how AI software products are used: a user enters a prompt and receives an output.
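
For readers who want a concrete picture of that prompt-in, output-out interaction, the short Python sketch below models it. The ProviderClient class and its complete method are hypothetical placeholders for illustration, not any particular vendor's SDK.

```python
# Minimal sketch of the prompt-in, output-out model described above.
# "ProviderClient" is a hypothetical stand-in, not a specific vendor's SDK.

class ProviderClient:
    """Stand-in for a generative AI provider's API client."""

    def complete(self, prompt: str) -> str:
        # A real client would send the prompt to the provider's service
        # and return the generated text.
        return f"[model output for: {prompt!r}]"


client = ProviderClient()
print(client.complete("Summarize the indemnification clause in plain English."))
```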

Software Lifecycle for an Entity

The lifecycle of software within an entity is typically divided into three stages: contracting/purchasing, implementation and operation. Arguably, there is a fourth stage, in which software is decommissioned or otherwise removed from an entity’s workflow, but this article will not address it. Each stage introduces unique legal risks, making AI addenda increasingly integral at every stage. Understanding the legal implications and anticipating potential challenges throughout these stages is critical to ensuring that organizations can effectively mitigate risk and remain compliant, both with emerging AI regulations and with the application of existing rules to AI.

Stage 1 — Contracting/Purchasing: How Is Software Sold?

Software is typically sold through licenses, and the rise of cloud computing has shifted sales toward a service model, typically under a software-as-a-service (SaaS) agreement. Essentially, rather than purchasing the product, a customer is granted temporary access to the service. Because such agreements give the customer much less control over the service, organizations must consider a variety of legal factors that influence both their operational ability and their legal exposure.

Previously, the rise of cloud computing enabled software providers to offer customers a more uniform and secure solution, typically documented in a SaaS agreement. These agreements focused on standard legal terms such as liability, intellectual property ownership and indemnity and, perhaps, added issues related to availability and security, which were addressed through largely technical addenda, such as data protection addenda and service level agreements.

However, as AI becomes a core component of many software services, the data produced by customers has become critical to advancing software providers’ services. As such, contracts are evolving to include provisions tailored to the specific risks associated with AI technology.

Master Services Agreement (MSA)

At the contracting stage, a master services agreement is typically used for long-term relationships, often spanning five or more years. The MSA outlines the framework of general terms between the customer and the software provider, enabling the customer to purchase a number of different services from that provider.

As such, the MSA is often supplemented by a number of addenda containing the technical and legal obligations tied to particular services and jurisdictions. Each addendum adds a new layer of review, typically by a different individual or group within the organization. Increasingly, MSAs also include an AI addendum, which implicates a variety of novel legal and operational challenges.

Moving from On-Premises to Cloud

As organizations transition from on-premises software solutions to cloud-based systems, and particularly cloud-based systems that utilize AI, they face new risks in areas such as data ownership, security and compliance. Systems that integrate AI introduce further complexities: the AI’s data storage, processing and functionality may span multiple jurisdictions with different regulatory requirements. This necessitates a more comprehensive review and may require input from a large number of individuals within an organization.

Traditional Legal Terms vs. AI Risks

Previously, the contract-review process focused primarily on traditional legal issues like liability, intellectual property rights and indemnification. However, as AI, and by extension control of data, becomes increasingly integrated into software solutions, contracts now also require review of several other legal areas, such as:

  • Privacy: The use of AI often involves the processing of large datasets, including personal data. Organizations must ensure that AI systems comply with privacy regulations such as Europe’s General Data Protection Regulation, the California Consumer Privacy Act, the Texas Data Privacy and Security Act and others.
  • Cybersecurity: AI can introduce vulnerabilities, and contracts need to include clauses addressing how data will be protected against breaches or leakage (such as users retrieving training data through creative prompts).
  • Data Ownership: AI systems often rely on third-party data, raising questions about the ownership and licensing of this training data as well as output data that is generated by the AI systems.
  • AI-Specific Risks: New risks arise due to the unique nature of AI, requiring further scrutiny. As AI continues to develop, new areas of concern, such as hallucinations (AI-generated outputs that are inaccurate or nonsensical), need to be addressed specifically within the contract. Further, AI providers may be required to take actions in response to legal requirements that reduce the efficacy of their AI products.

Each of these new areas is growing in importance and is increasingly treated as a separate addendum in contracts, with AI becoming a specific section of growing complexity.

AI Addendum Considerations

The inclusion of AI-specific clauses introduces a unique set of challenges:

AI Definition: One of the major issues is the lack of a clear, consistent definition of AI across legal documents. Various technologies are labeled as AI, but they can differ widely in capabilities and risks. For example, some definitions encompass even the most basic rule-based algorithms, while others are limited to more sophisticated generative AI systems.

Rapidly Changing Technology: AI technology evolves rapidly, making it difficult to foresee all potential risks. As discussed above, MSAs are typically intended to remain in place for five or more years, while AI technology is advancing far faster than contracts can. For example, the transformer architecture underpinning modern generative AI systems such as ChatGPT was first published in 2017, merely five years before ChatGPT itself was released. Paradigm-shifting discoveries in AI thus arrive within the lifespan of a typical MSA. Contracts must anticipate these changes by including provisions that allow for frequent updates or amendments as the technology evolves.

Technical and Legal Language: AI sections in contracts often blend technical jargon with legal terms. Reviewers must have expertise in both domains to fully understand and evaluate the risks, focusing on areas such as:

— Product Use: How the AI will be deployed and used within the organization.

— Output Ownership: Who owns the AI’s output, especially if it generates creative work or decisions.

— Hallucinations: The risks associated with AI systems generating misleading or incorrect results.

— AI Training: How the AI system is trained, including the sources of training data and the potential biases introduced.

At this stage, the AI addendum is still a rapidly developing part of software contracts, and there has not yet been an opportunity for standardization across the industry. Thus, reviewing an AI addendum for what is present and, more importantly, what is not is essential to readying your organization to implement AI products.

Stage 2 — Implementation Issues: Readying the AI Product

Once a contract is in place, organizations face several key legal and operational considerations during the implementation phase. Testing the AI product and ensuring it aligns with the organization’s intended use case are central tasks, as is confirming that the MSA provides for the issues encountered when implementing the product. This concern has always been present in software contracts, but, as with everything else involving AI, the risks are heightened because your organization’s use case may unexpectedly misalign with the capabilities of the AI product. Further, controlling which data is used as inputs to the AI product, and which data it outputs and how that output is used (commonly referred to as data lineage), is critical to establishing guardrails against the risks AI products implicate. As such, data governance and compliance issues emerge as major points of concern.
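
To make the data lineage concept concrete, the following sketch records each prompt and output alongside a timestamp and user, so the organization can later trace what went into the AI product and what came out. The record fields are illustrative assumptions, not a standard schema.

```python
# Illustrative data-lineage logging: each AI call is recorded so the
# organization can later trace which data went in and which came out.
# The field names are illustrative, not a standard schema.

import json
from datetime import datetime, timezone

def log_lineage(user: str, prompt: str, output: str,
                path: str = "lineage.jsonl") -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "prompt": prompt,
        "output": output,
    }
    # Append one JSON record per call to a running lineage log.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```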

Testing Your Use Case

Ensuring that the AI system functions as intended within the organization’s specific use case is essential. Misalignment between the AI’s capabilities and the organization’s needs can lead to operational failures and legal challenges. This requires thorough testing of the AI product, often in a controlled environment, before full deployment, which further implicates the need to integrate legal and technical groups within an organization.
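
A controlled-environment test can be as simple as replaying representative prompts and checking each output for content the use case requires. The sketch below assumes a complete function like the one in the earlier sketch; the prompts and expected phrases are hypothetical examples, not a recommended test suite.

```python
# Sketch of a controlled use-case test: replay representative prompts and
# flag outputs missing content the use case requires. The prompts and
# expected phrases are hypothetical examples.

from typing import Callable

TEST_CASES = [
    ("Summarize this NDA in one sentence.", "confidential"),
    ("Identify the governing-law clause.", "governing law"),
]

def run_use_case_tests(complete: Callable[[str], str]) -> list[str]:
    """Return the prompts whose outputs failed the content check."""
    failures = []
    for prompt, must_contain in TEST_CASES:
        if must_contain.lower() not in complete(prompt).lower():
            failures.append(prompt)
    return failures
```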

Who Can Use the AI Product?

Limiting access to the AI system is critical, especially when considering data security and regulatory compliance. Identifying who in the organization can access the AI and under what circumstances is an important step in integrating an AI product into your organization’s workflow. This may involve the creation of internal user agreements or policies outlining acceptable use.
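
In practice, such a policy is often backed by a technical gate. The sketch below shows one simple form, a role allowlist checked before any AI call; the roles themselves are illustrative assumptions, not a recommendation.

```python
# Simple role-based gate enforcing an internal acceptable-use policy:
# only users in approved roles may invoke the AI product. The roles
# listed here are illustrative.

ALLOWED_ROLES = {"legal", "engineering"}

def check_access(user_role: str) -> None:
    """Raise if the user's role is not cleared to use the AI product."""
    if user_role not in ALLOWED_ROLES:
        raise PermissionError(
            f"Role {user_role!r} is not authorized to use the AI product."
        )
```

A gate like check_access would typically be called at the start of any internal wrapper that forwards prompts to the provider, so the policy is enforced in one place.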

Data Input Restrictions

Not all data is appropriate for input into AI systems, and the wrong data can lead to inaccurate outputs or breaches of privacy. Legal considerations must address what information can and cannot be input into the AI system to avoid potential compliance violations, particularly when dealing with sensitive personal data. Again, this may require both a legal understanding of what the MSA states and a technical understanding of what the AI product can do.
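
One technical guardrail is to screen prompts before they leave the organization. The sketch below blocks inputs matching a pattern that resembles a U.S. Social Security number; a single regex is only an illustration, and a production deployment would rely on a vetted data-loss-prevention tool.

```python
# Illustrative pre-submission screen: reject prompts that appear to
# contain a U.S. Social Security number. A single regex is only a
# sketch; real deployments would use a vetted DLP tool.

import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def screen_input(prompt: str) -> str:
    """Return the prompt unchanged, or raise if it appears to contain an SSN."""
    if SSN_PATTERN.search(prompt):
        raise ValueError("Prompt appears to contain an SSN; blocked by policy.")
    return prompt
```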

Data Governance and Impact Assessments

Data governance ensures that the AI product complies with regulatory requirements, while impact assessments evaluate potential legal and ethical concerns. AI products must undergo assessments to ensure they do not violate privacy laws or introduce bias into decision making. As part of the acquisition and implementation of an AI product, each party — the customer and the provider — should implement data governance and, depending on the data or uses implicated, impact assessments as well.

If you have armed yourself with a proper AI addendum, many of these implementation issues will likely have been considered already. Further, while each AI product is unique and thus carries unique issues, establishing these implementation steps enables an organization to adapt them quickly to each new AI product.

Stage 3 — Operational Issues: Ongoing Management of AI Systems

The operational phase is when the AI product is fully integrated into the organization’s processes. Managing updates, monitoring compliance, and ensuring ongoing oversight are critical to managing AI risk and adhering to legal obligations.

Updates from the Provider

AI systems require ongoing updates to stay current with evolving technologies and regulatory requirements. These updates may require reassessment of the implementation steps, including compliance checks, testing and adjustments to ensure the system operates within legal boundaries. Further, AI systems differ from traditional software in that errors in their outputs can be more difficult to identify, so it may be beneficial to require additional testing of updated versions before rolling them out to users.
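
One way to operationalize that extra testing is a pre-rollout regression check: replay a fixed prompt set against the updated version and flag any output that diverges from a previously approved answer for human review. The sketch below uses exact-match comparison, which is a deliberate simplification since generative outputs can vary between runs.

```python
# Sketch of a pre-rollout regression check: replay approved prompts
# against the updated AI product and flag changed outputs for human
# review. Exact-match comparison is a simplification.

from typing import Callable

def regression_check(complete_new: Callable[[str], str],
                     approved: dict[str, str]) -> list[str]:
    """Return prompts whose new output differs from the approved output."""
    return [
        prompt for prompt, old_output in approved.items()
        if complete_new(prompt) != old_output
    ]
```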

Compliance and Legal Requirements

AI systems must be continually monitored to ensure that their use remains compliant with applicable laws, including data protection regulations, intellectual property rights and consumer protection laws. This includes determining what must be disclosed to customers and clients about the use of AI, as well as how to handle feedback and concerns regarding the technology.

Ongoing Oversight Requirements

Legal teams must ensure continuous oversight of the AI product, particularly in industries subject to strict regulatory frameworks, such as healthcare and finance. Regular audits and reviews of the AI system’s performance and outputs may be required to mitigate legal risk and ensure compliance with evolving laws. Further, many jurisdictions are presently contemplating AI regulations, so lawyers must monitor these legal developments while ensuring an operational pathway exists to implement new requirements.

Conclusion

As AI technology continues to shape business operations, it introduces a new layer of complexity to the software lifecycle. Legal professionals and organizations must adapt to the shifting landscape by addressing AI-related risks across contracting, implementation and operational phases. The growing need for AI addenda, encompassing everything from privacy and cybersecurity to data ownership and hallucinations, underscores the importance of a proactive and comprehensive approach to managing AI risks. By focusing on the unique challenges that AI presents, organizations can navigate the legal complexities of integrating AI into their operations while safeguarding their interests and remaining compliant with evolving regulations.

Aly Dossa is a shareholder at Chamberlain Hrdlicka, where he chairs the Data Security & Privacy and Intellectual Property Practices. His combination of legal and technical knowledge positions him well to advise clients on privacy compliance, data breaches, cybersecurity, and cross-border data transfers. As a Certified Information Privacy Professional (CIPP/US) and Certified Information Privacy Manager (CIPM), Aly offers informed guidance on navigating the evolving landscape of data privacy regulations.

Marcus Burnside is a senior associate at Chamberlain Hrdlicka, where he focuses his practice on intellectual property for both domestic and foreign clients. With knowledge of both mechanical and electrical engineering, Marcus assists clients in a broad range of technologies, including computer hardware, oil and gas, automation technologies, robotics, communication hardware and software, data storage, gas turbines, and green energy technologies including solar and wind. Marcus’ intellectual property experience enables him to assist clients in drafting patent applications, responses to office actions, and appeal briefs.

