AI governance for private companies

Multiple authors
8 min read
2025-07-30

The views expressed are those of the authors at the time of writing. Other teams may hold different views and make different investment decisions. The value of your investment may become worth more or less than at the time of original investment. While any third-party data used is considered reliable, its accuracy is not guaranteed. For professional, institutional, or accredited investors only. 

The world is sprinting into the future as the 2022 launch of ChatGPT has fueled widespread interest in generative AI (gen AI). S&P 500 earnings calls exploded with gen AI chatter, with AI mentions roughly doubling year over year to hit a record high in 2Q23.1 We believe the hype is warranted, as gen AI has the potential to be a critical tool for creating efficiencies and productivity gains for companies of all sizes. In fact, one paragraph in this paper was written by Wellington’s gen AI tool. And we’re not alone: One-third of companies are already using gen AI regularly and 40% are gearing up to invest in the technology.2 Crucially, however, only 21% of companies have AI risk management policies in place.3

As “responsible AI” is a key area of interest for Wellington’s portfolio companies, this paper highlights important considerations for companies looking to develop appropriate gen AI governance measures and avoid a range of “irresponsible” practices.

Gen AI risks for private companies

Private companies are naturally seeking to grow and scale their operations. Gen AI can be a useful tool in this endeavor, but companies should be aware of the risks involved while seeking efficiencies.

Customer harm

Companies that rely on gen AI to make decisions or generate content often remove the human element of review. Importantly, AI-generated recommendations can be susceptible to errors, limitations, and bias that may result in adverse outcomes for customers and reputational damage for the company. As one of several recent examples, the MyCity chatbot, designed to help local businesses in New York City navigate city regulations, was found to be advising users to break the law, for instance by recommending discrimination based on income and the firing of employees on illegal grounds.4 In another, Air Canada was held liable after its chatbot gave a customer incorrect fare information.5 AI leaders across industries are actively developing specialty frameworks to help mitigate inaccuracies and reduce customer harm, particularly in high-risk areas such as health care, employment, finance, and legal services.

Vendor vulnerabilities

In addition to evaluating the risk of using gen AI internally, companies should understand how their third-party vendors and partners are using gen AI in support of their contracted services. Third parties that do not monitor outcomes for accuracy and bias, or that lack appropriate security measures, could inadvertently put your company at risk when you rely on their support. We believe it is especially critical to conduct regular due diligence on third-party providers and their gen AI services and to establish contractual terms up front that cover liability for errors and incidents.

Data privacy and copyright infringement

As gen AI usage has become widespread, the world has seen a number of legal actions concerning the unauthorized use of data to train models. Issues of individual privacy and licensing may arise depending on how the data was sourced. While the majority of cases seen thus far have involved developers of gen AI, we believe companies that leverage gen AI to support their business may be at risk for legal action if appropriate precautions are not employed.

Environmental considerations

Cloud-based gen AI, reliant on energy-intensive GPUs, is predicted to increase data center electricity use at a 1% – 2% compound annual growth rate.6 These centers also consume a significant amount of water (0.2 – 1.6 liters of water per kWh) for cooling.7 As AI evolves, factors like model efficiency, chip and data center architecture, and edge computing will likely shape its significant environmental footprint. We think companies should pay close attention to how gen AI use impacts both their own and their customers’ environmental risks and objectives.
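
To put these figures in context, a back-of-envelope calculation can translate an assumed gen AI electricity draw into a cooling-water range using the 0.2 – 1.6 liters-per-kWh figure above. The short Python sketch below is illustrative only; the 500,000 kWh annual workload is a hypothetical assumption, not Wellington data.

  # Hypothetical back-of-envelope estimate of gen AI cooling-water use.
  # Assumption (illustrative only): a company's gen AI workloads draw
  # 500,000 kWh of data center electricity per year.
  annual_kwh = 500_000
  low_l_per_kwh, high_l_per_kwh = 0.2, 1.6    # range cited above

  low_estimate = annual_kwh * low_l_per_kwh    # 100,000 liters per year
  high_estimate = annual_kwh * high_l_per_kwh  # 800,000 liters per year

  print(f"Estimated cooling water use: {low_estimate:,.0f} - {high_estimate:,.0f} liters per year")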

Emerging gen AI regulations

As regulators work to address these risks, companies leveraging gen AI face an unclear regulatory landscape. Global privacy laws are well established and apply to the use of AI technologies by companies worldwide, but there are different levels of risk and compliance requirements depending on the geographical scope of a company’s operations. These laws apply to companies using gen AI with customer data or the data of individuals in scope for a relevant regulation.8 Furthermore, the EU AI Act, passed in March 2024, expands upon the tenets of the GDPR to include additional requirements for companies using AI technologies.9 We believe companies should be mindful of the GDPR’s and the EU AI Act’s compliance requirements, as they set global standards for best practice in data privacy and AI governance. Importantly, given how quickly AI is evolving, regulations may not always be able to keep up. Ultimately, we think self-regulation will be necessary, and companies should understand their culpability for any negative externalities of the technologies they employ.

AI governance best practices

AI governance best practices can help companies navigate this evolving risk and regulatory landscape while harnessing the efficiency gains of gen AI. Gen AI governance may look different across company stages, but we believe the following core principles are broadly applicable. Industry groups such as Responsible Innovation Labs are also seeking to support private companies as they develop proper oversight for these emerging technologies.

Drawing on public company best practices, we outline below five measures that we believe all companies using gen AI internally should consider to help mitigate today’s risks.

Test and monitor gen AI outputs

  • Test the systems in the manner customers/employees are using them (e.g., go through the exercise of how a customer/employee would interact with gen AI).
  • Review outputs for accuracy, completeness, and the presence of bias.
  • Be able to determine how and why AI came to make a particular recommendation, prediction, or decision (this is often referred to as model “interpretability,” “explainability,” or “traceability”).
  • Have a structure in place to report the results of this testing to company leadership regularly (cadence may depend on the level of usage); a simple review-log sketch follows this list.
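
As one illustration of the review and reporting practices above, the Python sketch below shows how a small team might log gen AI outputs to a file for periodic human review and summary reporting to leadership. It is a minimal sketch, not a prescribed framework; the field names, file name, and sample use case are hypothetical and would need to be adapted to a company’s own tooling.

  # Minimal sketch of an output-review log for gen AI responses.
  # All names and fields are hypothetical; adapt to your own tooling.
  import csv
  from dataclasses import dataclass, asdict
  from datetime import datetime, timezone

  @dataclass
  class ReviewRecord:
      timestamp: str        # when the output was generated (UTC, ISO 8601)
      use_case: str         # e.g., "customer support chatbot"
      prompt: str           # what the user or employee asked
      output: str           # what the model returned
      reviewer: str = ""    # filled in during periodic human review
      accurate: str = ""    # reviewer judgment, e.g., "yes" / "no"
      bias_flag: str = ""   # reviewer judgment, e.g., "yes" / "no"
      notes: str = ""

  def log_output(path: str, use_case: str, prompt: str, output: str) -> None:
      """Append a generated output to a CSV file for later human review."""
      record = ReviewRecord(
          timestamp=datetime.now(timezone.utc).isoformat(),
          use_case=use_case,
          prompt=prompt,
          output=output,
      )
      with open(path, "a", newline="") as f:
          writer = csv.DictWriter(f, fieldnames=list(asdict(record).keys()))
          if f.tell() == 0:  # new file: write the header row first
              writer.writeheader()
          writer.writerow(asdict(record))

  # Example: log one chatbot response for the weekly review sample
  log_output("genai_review_log.csv", "customer support chatbot",
             "What is your refund policy?", "Refunds are available within 30 days.")

Sampling from such a log on a set cadence gives reviewers a concrete basis for the accuracy, completeness, and bias checks described above, and a simple artifact to report to leadership.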

Establish accountability and support employees

  • Assign a qualified individual to be accountable for gen AI governance, eventually looking to build out a team to accommodate increased usage of the technology.
  • Consider how a job function is changing with gen AI, and whether employees using the technology have an adequate level of support and governance. This might take the form of structured training and/or procedural requirements for review and oversight.
  • Stay ahead of potential employee concerns about roles and responsibilities. Communicate effectively on how work is evolving with AI as a productivity enhancer and offer reskilling opportunities to support employee transitions.
  • Implement layers of access for employees using gen AI (e.g., access only for those who need it) and have mechanisms in place to ensure that the technology is only being used for its intended purposes; a simple access-mapping sketch follows this list.
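
To make the layered-access point concrete, the Python sketch below maps roles to approved gen AI use cases and denies anything not explicitly listed. It is a minimal sketch under assumed role and use-case names; in practice this logic would typically live in a company’s identity and access management tooling.

  # Hypothetical role-based access map for internal gen AI tools.
  # Roles, use cases, and the check itself are illustrative only.
  ALLOWED_GENAI_USE = {
      "support_agent": {"draft_customer_reply"},
      "marketing": {"draft_copy", "summarize_research"},
      "engineering": {"code_assistant"},
      # roles not listed here have no gen AI access by default
  }

  def may_use_genai(role: str, use_case: str) -> bool:
      """Return True only if the role is explicitly approved for the use case."""
      return use_case in ALLOWED_GENAI_USE.get(role, set())

  assert may_use_genai("support_agent", "draft_customer_reply")
  assert not may_use_genai("support_agent", "code_assistant")  # not an intended purpose
  assert not may_use_genai("contractor", "draft_copy")         # unlisted role is denied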

Manage data and third-party vendors

  • Track and monitor the customer data that is being used by gen AI, as well as data that is pulled in from other content providers. All data sources should have the required authorization (i.e., customer consent, licensing); a simple source-registry sketch follows this list.
  • Conduct thorough due diligence on vendors to understand their gen AI usage and cybersecurity measures. For gen AI providers, this should include an understanding of the underlying foundation model. Vendors should only have access to information necessary for their role, with particular scrutiny on access to customer data. Contracts with vendors should be written with clear expectations of data usage and who is liable in the event of an incident.
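
As a simple illustration of the data-tracking point above, the Python sketch below keeps a registry of data sources feeding gen AI along with their authorization basis, and flags any source without documented authorization. The source names and fields are hypothetical.

  # Hypothetical registry of data sources used by gen AI, with authorization basis.
  from dataclasses import dataclass

  @dataclass
  class DataSource:
      name: str
      contains_customer_data: bool
      authorization: str  # e.g., "customer consent", "license agreement", "none"

  registry = [
      DataSource("crm_support_tickets", True, "customer consent"),
      DataSource("licensed_market_research", False, "license agreement"),
      DataSource("scraped_web_articles", False, "none"),  # flagged for review below
  ]

  # Surface any source feeding gen AI without a documented authorization basis
  unauthorized = [s.name for s in registry if s.authorization == "none"]
  if unauthorized:
      print("Review required - sources without documented authorization:", unauthorized)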

Be transparent with customers

  • Collect informed consent from customers for any data that is stored and used by gen AI. Ensure users are aware of how their data is being used by AI systems as well as the risks. Customers should also be notified any time they are interacting with gen AI, and content that is AI generated should be marked accordingly.

Engage leadership and formalize policies

  • Add gen AI governance as a board agenda topic to seek out perspectives from stakeholders and outside advisors.
  • Document and communicate the company’s commitment to strong AI governance and responsible usage of the technology, both internally and to external stakeholders. Some companies have implemented AI governance policies that include:
    • Who is accountable internally and mechanisms for keeping company leadership informed of gen AI usage.
    • Procedures for preventing errors and bias in outcomes, offering transparency and explainability to relevant stakeholders (e.g., regulators).
    • References to broader company privacy and cybersecurity policies, and how they specifically apply to gen AI usage internally.
    • Policies for monitoring downstream energy and water usage of cloud providers.

Preparing for the future of AI

The gen AI landscape continues to evolve, and a company’s AI governance policies and procedures will often require regular evaluation and revision. As usage of gen AI scales, we believe companies should consider building a dedicated team led by the chief information officer (or equivalent) with leadership interaction up to the board level. The gen AI team would be charged with a regular cadence of reporting to the board. Over time, boards should consider whether adding a member with tech experience is warranted given the level of gen AI usage internally. 

APPENDIX A: QUESTIONS YOUR INVESTORS MAY ASK

The list below outlines key questions private companies might expect to receive from investors regarding their AI governance practices.

Current use

  • Where are you using gen AI (areas of business, use cases)? How long has it been in use and how do you identify the most beneficial applications?
  • How do you assess the potential intended and unintended impacts of a specific tool on operations, decision making, and productivity?
  • What training and controls do you have in place to ensure humans can intervene, when necessary, in the outputs or actions coming from gen AI?
  • How are you following/tracking model interpretability (i.e., understanding how the model reaches a conclusion)?

Governance and risk management

  • Do you have an AI governance policy, and what specific risks and opportunities does it cover?
  • Do you have a dedicated individual/team for gen AI governance and risk management? What are their responsibilities and how does activity get reported to leadership? How do they ensure continual self-education?

Future proofing

  • How are you managing the evolution of gen AI and its benefits and risks as both the technology and external context change?
  • How would you respond to emerging regulation and reporting requirements?

APPENDIX B: ADDITIONAL BEST PRACTICES FOR AI DEVELOPERS 

In addition to the risks and governance considerations outlined above, we believe the following areas are particularly relevant to companies that develop gen AI technologies.

  • Set the tone
    • Establish clear principles, governance, and a controls process for AI development.
  • Product usage beyond intended use
    • Consider whether there are broader applications for your product, particularly use cases that may be prohibited, and embed protections into your product to prevent such usage.
    • Understand existing technologies used by customers and interoperability with your product.
  • Legal obligations and liability in customer contracts
    • Similar to the considerations noted in the vendor section above, AI developers should also review their customer contracts for clarity of legal obligations and liability for errors and incidents.
  • Nondiscrimination and fairness in model development and auditing
    • Continually test the product to detect and mitigate bias.
    • Ensure representativeness and quality of data sets.
  • Disclosure of system capabilities and limitations
    • Notify users about what the product is, and is not, capable of.
  • Efficiency best practices to reduce energy and water usage
    • Track and monitor energy usage of cloud providers hosting your product.
    • Explore ways to reduce the energy and water usage of your product (e.g., code design efficiencies).
  • Human capital of data labeling/annotation
    • Evaluate labor practices for the data labeling and annotation workforce.

APPENDIX C: AI GOVERNANCE RESOURCES

1 FactSet, “Second-highest number of S&P 500 companies citing ‘AI’ on earnings calls over past 10 years,” 15 March 2024.
2 QuantumBlack AI by McKinsey, “The state of AI in 2023: Generative AI’s breakout year,” 1 August 2023.
3 Ibid.
4 AP News, 3 April 2024.
5 Forbes, 19 February 2024.
6 Proprietary Wellington analysis as of 30 June 2024.
7 Publicly reported figures by major tech companies.
8 European Commission, “Who does the data protection law apply to?”
9 European Parliament, “EU AI Act: first regulation on artificial intelligence.”

Experts

Andrew Morales
Associate Director, Value Creation, Private Investments

Caroline Conway
ESG Analyst
