Gen AI risks for private companies
Private companies are naturally seeking to grow and scale their operations. Gen AI can be a useful tool in this endeavor, but companies should be aware of the risks involved while seeking efficiencies.
Customer harm
Companies that rely on gen AI to make decisions or generate content often remove the human element of review. Importantly, AI-generated recommendations are susceptible to errors, limitations, and bias that may result in adverse outcomes for customers and reputational damage for the company. In one of several recent examples, the MyCity chatbot, designed to offer legal advice to local businesses in New York City, was found to be advising users to break the law, recommending that they discriminate based on income and fire employees on illegal grounds.4 In another, Air Canada was sued by a customer after its chatbot gave incorrect fare information.5 AI leaders across industries are actively developing specialty frameworks to help mitigate inaccuracies and reduce customer harm, particularly in high-risk areas such as health care, employment, finance, and legal services.
Vendor vulnerabilities
In addition to evaluating the risk of using gen AI internally, companies should understand how their third-party vendors and partners are using gen AI in support of their contracted services. Third parties that don’t monitor outcomes for accuracy and bias, or that lack appropriate security measures, could inadvertently put the companies that rely on them at risk. We believe it is especially critical to conduct regular due diligence on third-party providers and their gen AI services and to establish contractual terms up front that cover liability for errors and incidents.
Data privacy and copyright infringement
As gen AI usage has become widespread, the world has seen a number of legal actions concerning the unauthorized use of data to train models. Issues of individual privacy and licensing may arise depending on how the data was sourced. While the majority of cases seen thus far have involved developers of gen AI, we believe companies that leverage gen AI to support their business may be at risk for legal action if appropriate precautions are not employed.
Environmental considerations
Cloud-based gen AI, reliant on energy-intensive GPUs, is predicted to increase data center electricity use at a 1%–2% compound annual growth rate.6 These centers also consume a significant amount of water for cooling (0.2–1.6 liters per kWh).7 As AI evolves, factors like model efficiency, chip and data center architecture, and edge computing will likely shape its significant environmental footprint. We think companies should pay close attention to how gen AI use affects both their own and their customers’ environmental risks and objectives.
Emerging gen AI regulations
As regulators work to address these risks, companies leveraging gen AI face an unclear regulatory landscape. Global privacy laws are well established and apply to the use of AI technologies by companies worldwide, but the level of risk and the compliance requirements differ depending on the geographical scope of a company’s operations. These laws apply to companies using gen AI with customer data or with the data of individuals covered by a relevant regulation. Furthermore, the EU AI Act, passed in March 2024, expands upon the tenets of the EU’s General Data Protection Regulation (GDPR) to include additional requirements for companies using AI technologies. We believe companies should be mindful of the GDPR’s and EU AI Act’s compliance requirements, as they set global standards for best practice in data privacy and AI governance. Importantly, given how quickly AI is evolving, regulations may not always keep pace. Ultimately, we think self-regulation will be necessary, and companies should understand their culpability for any negative externalities of the technologies they employ.