The views expressed are those of the authors at the time of writing. Other teams may hold different views and make different investment decisions. The value of your investment may become worth more or less than at the time of original investment. While any third-party data used is considered reliable, its accuracy is not guaranteed. For professional, institutional, or accredited investors only.
The world is sprinting into the future: the 2022 launch of ChatGPT fueled widespread interest in generative AI (gen AI). S&P 500 earnings calls exploded with gen AI chatter, roughly doubling AI mentions year over year to hit a record high in 2Q23.1 We believe the hype is warranted, as gen AI has the potential to be a critical tool for creating efficiencies and productivity gains for companies of all sizes. In fact, one paragraph in this paper was written by Wellington’s gen AI tool. And we’re not alone: One-third of companies are already using gen AI regularly, and 40% are gearing up to invest in the technology.2 Crucially, however, only 21% of companies have AI risk management policies in place.3
The gen AI era is upon us — viewed by many as teeming with untapped potential — but are companies ready to responsibly harness its might? We believe that gen AI governance policies and procedures are an essential risk management practice for private companies that leverage the technology, regardless of size or stage.
As “responsible AI” is a key area of interest for Wellington’s portfolio companies, this paper highlights important considerations for companies looking to develop appropriate gen AI governance measures and avoid a range of “irresponsible” practices.
Private companies are naturally seeking to grow and scale their operations. Gen AI can be a useful tool in this endeavor, but companies should be aware of the risks involved while seeking efficiencies.
Customer harm
Companies that rely on gen AI to make decisions or generate content often remove the human element of review. Importantly, AI-generated recommendations can be susceptible to errors, limitations, and bias that may result in adverse outcomes for customers and reputational damage for the company. In one of several recent examples, the MyCity chatbot, designed to offer legal advice to local businesses in New York City, was found to be advising users to break the law, recommending that they discriminate based on income and fire employees on illegal grounds.4 In another, Air Canada was sued by a customer after its chatbot gave incorrect fare information.5 AI leaders across industries are actively developing specialty frameworks to help mitigate inaccuracies and reduce customer harm, particularly in high-risk areas such as health care, employment, finance, and legal services, among others.
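To illustrate one common mitigation, the sketch below shows a simple human-in-the-loop gate that holds AI-generated answers on high-risk topics for expert review before they reach a customer. This is a minimal sketch only; every name in it (generate_answer, classify_topic, queue_for_human_review) is a hypothetical stand-in, not a reference to any specific framework mentioned above.

```python
# Minimal human-in-the-loop guardrail sketch. All names here are
# hypothetical stand-ins, not any vendor's actual API.

HIGH_RISK_TOPICS = {"legal", "employment", "finance", "health"}

def generate_answer(question: str) -> str:
    # Stand-in for a call to a gen AI model.
    return f"Draft answer to: {question}"

def queue_for_human_review(draft: str) -> str:
    # Stand-in for routing the draft to a review queue; here we only flag it.
    return f"[PENDING HUMAN REVIEW] {draft}"

def classify_topic(question: str) -> str:
    # Naive keyword check; in practice this might be a tuned classifier.
    q = question.lower()
    for topic in HIGH_RISK_TOPICS:
        if topic in q:
            return topic
    return "general"

def answer_with_review(question: str) -> str:
    # Answers touching high-risk topics are held for expert sign-off
    # instead of being returned to the customer directly.
    draft = generate_answer(question)
    if classify_topic(question) in HIGH_RISK_TOPICS:
        return queue_for_human_review(draft)
    return draft

print(answer_with_review("Can I fire an employee for taking legal leave?"))
```

The design choice worth noting is that escalation is triggered by the topic of the request, not the content of the draft, so risky queries are caught even when the model's answer looks plausible.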
Vendor vulnerabilities
In addition to evaluating the risks of using gen AI internally, companies should understand how their third-party vendors and partners use gen AI in support of their contracted services. Third parties that don’t monitor outcomes for accuracy and bias, or that lack appropriate security measures, could inadvertently put your company at risk. We believe it is especially critical to conduct regular due diligence on third-party providers and their gen AI services and to establish contractual terms up front that cover liability for errors and incidents.
Data privacy and copyright infringement
As gen AI usage has become widespread, the world has seen a number of legal actions concerning the unauthorized use of data to train models. Issues of individual privacy and licensing may arise depending on how the data was sourced. While the majority of cases thus far have involved developers of gen AI, we believe companies that leverage gen AI to support their business may be at risk of legal action if appropriate precautions are not taken.
Environmental considerations
Cloud-based gen AI, reliant on energy-intensive GPUs, is predicted to increase data center electricity use at a 1% – 2% compound annual growth rate.6 These centers also consume a significant amount of water for cooling (0.2 – 1.6 liters per kWh).7 As AI evolves, factors like model efficiency, chip and data center architecture, and edge computing will likely shape its significant environmental footprint. We think companies should pay close attention to how gen AI use impacts both their own and their customers’ environmental risks and objectives.
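To put the cited intensity figures in context, the short sketch below applies the 0.2 – 1.6 liters-per-kWh range to a hypothetical gen AI workload. The query volume and per-query energy figures are illustrative assumptions only, not Wellington estimates.

```python
# Back-of-the-envelope estimate applying the water-intensity range cited
# above (0.2 - 1.6 liters of cooling water per kWh). The workload figures
# below are illustrative assumptions, not measured values.

QUERIES_PER_DAY = 1_000_000   # assumed daily gen AI query volume
KWH_PER_QUERY = 0.003         # assumed energy per query
WATER_L_PER_KWH = (0.2, 1.6)  # range cited in this paper

daily_kwh = QUERIES_PER_DAY * KWH_PER_QUERY
water_low = daily_kwh * WATER_L_PER_KWH[0]
water_high = daily_kwh * WATER_L_PER_KWH[1]

print(f"Electricity: {daily_kwh:,.0f} kWh/day")
print(f"Cooling water: {water_low:,.0f} - {water_high:,.0f} liters/day")
```

Under these assumptions, a million queries a day would draw roughly 3,000 kWh and 600 – 4,800 liters of cooling water daily, which illustrates how quickly the footprint scales with usage.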
As regulators work to address these risks, companies leveraging gen AI face an unclear regulatory landscape. Global privacy laws are well established and apply to the use of AI technologies by companies worldwide, but there are different levels of risk and compliance requirements depending on the geographical scope of a company’s operations. These laws apply to companies using gen AI with customer data or the data of individuals in scope for a relevant regulation. Furthermore, the EU AI Act, passed in March 2024, expands upon the tenets of GDPR to include additional requirements for companies using AI technologies. We believe companies should be mindful of the GDPR and EU AI Act’s compliance requirements as they set global standards for best practice in data privacy and AI governance. Importantly, given how quickly AI is evolving, regulations may not always be able to keep up. Ultimately, we think self-regulation will be necessary, and companies should understand their culpability for any negative externalities of the technologies they employ.
GDPR applies to any company or entity that a) processes personal data and is established in the EU, or b) is established outside of the EU but conducts business with, or collects data of, individuals located in the EU.8 The basic tenets of GDPR — that individuals have a right to privacy and that privacy warrants protection — are reflected in privacy laws around the world.
The EU AI Act applies to anyone who is using or developing AI, and a company is considered in scope if an output from their system is intended for use in the EU. Given the recency of the Act, it is yet to be determined how “intent” will be interpreted in the Act’s enforcement. However, in our view, a prudent company will consider itself in scope if its AI system is likely to be used in the EU. Furthermore, we believe it is likely that the EU AI Act will set the tone for emerging legislation globally and is thus worth using as a standard for compliance. Those who are in scope will be required to a) disclose content that was generated by AI, b) design the model to prevent it from generating illegal content, and c) publish summaries of copyrighted data used for training.9
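As a concrete illustration of the disclosure requirement in a), the sketch below tags AI-generated content with a human-readable notice and basic provenance metadata before publication. The structure and field names are our own illustrative assumptions; the Act does not prescribe this specific format.

```python
# Illustrative sketch of requirement (a) above: attaching a disclosure to
# AI-generated content before publication. Field names and notice text
# are assumptions for illustration, not a format prescribed by the Act.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DisclosedContent:
    text: str
    model_id: str = "example-model-v1"  # hypothetical model identifier
    generated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def render(self) -> str:
        # Append a human-readable notice; the metadata above supports
        # machine-readable disclosure and audit trails.
        return f"{self.text}\n\n[This content was generated by AI.]"

summary = DisclosedContent(text="Quarterly update drafted by our gen AI tool.")
print(summary.render())
```

Recording the model identifier and generation timestamp alongside the notice also supports the kind of audit trail that the Act's other requirements, and investor due diligence, may call for.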
AI governance best practices can help companies navigate this evolving risk and regulatory landscape while harnessing the efficiency gains of gen AI. Gen AI governance may look different across company stages, but we believe the following core principles are broadly applicable. Industry groups such as Responsible Innovation Labs are also seeking to support private companies as they develop proper oversight for these emerging technologies.
Drawing on public company best practices, below we outline five measures that we believe all companies using gen AI internally should consider taking to help mitigate today’s risks.
The gen AI landscape continues to evolve, and a company’s AI governance policies and procedures will often require regular evaluation and revision. As usage of gen AI scales, we believe companies should consider building a dedicated team led by the chief information officer (or equivalent) with leadership interaction up to the board level. The gen AI team would be charged with a regular cadence of reporting to the board. Over time, boards should consider whether adding a member with tech experience is warranted given the level of gen AI usage internally.
APPENDIX A: QUESTIONS YOUR INVESTORS MAY ASK
The following list outlines key questions that private companies might expect to receive from investors regarding their AI governance practices.
Current use
Governance and risk management
Future-proofing
APPENDIX B: ADDITIONAL BEST PRACTICES FOR AI DEVELOPERS
In addition to the risks and governance considerations outlined above, we believe the following areas are particularly relevant to companies that develop gen AI technologies.
APPENDIX C: AI GOVERNANCE RESOURCES
1 FactSet, “Second-highest number of S&P 500 companies citing ‘AI’ on earnings calls over past 10 years,” 15 March 2024.
2 QuantumBlack AI by McKinsey, “The state of AI in 2023: Generative AI’s breakout year,” 1 August 2023.
3 Ibid.
4 AP News, 3 April 2024.
5 Forbes, 19 February 2024.
6 Proprietary Wellington analysis as of 30 June 2024.
7 Publicly reported figures by major tech companies.
8 European Commission, “Who does the data protection law apply to?”
9 European Parliament, “EU AI Act: first regulation on artificial intelligence.”