Viewpoints

Op-Ed: How funds, boards can navigate the AI revolution

December 12, 2023

By Hassell McClellan, John Hancock Group of Funds, & Robi Krempus, Manulife Investment Management

As asset managers pursue operationalization of generative artificial intelligence (AI) in mutual fund operations, investment processes, marketing strategies, and distribution initiatives, regulatory responses are emerging with implications for fund boards.

 

Are mutual fund boards up to the imminent governance challenge?

 

The potential of generative AI is being hailed as practically limitless, a phenomenon that is likely to cascade through every industry and create an exponential array of new opportunities and challenges. For the moment, integration of generative AI in asset management continues to emerge, but mutual fund boards over time will be held accountable for having asked timely and appropriate risk-assessment questions. For example:

 

  • Were we, as trustees and boards, sufficiently attentive to the technology’s potential capabilities, power, and limitations?
  • Did we understand all sources of risk?
  • Did we establish or appropriately influence the development of relevant governance and regulatory guardrails to manage the risk of the technology’s ability to learn and make independent decisions?

 

Answering these questions will require prudent and proactive actions by boards and regulators to appropriately understand the AI genie as it unleashes its power on financial services. While the focus is frequently on the myriad significant product and service transformations the technology may bring, questions of governance should rise quickly to the top of the agenda.

 

Intelligence without Consciousness

A first step toward a proactive approach is to understand the building blocks of AI, specifically data and computation. Neither of these is new; the concept of data and the practice of computation are well established. What is new about generative AI is the complexity of the mathematics involved, the sophistication of its algorithms, and the creation of remarkable neural networks modeled on the human brain. And what’s been built by various companies in this space is elegant not only in the way it ingests enormous amounts of data, but also in how it instantaneously performs a blizzard of complex computations.

 

But generative AI goes beyond data and computation. It’s designed to mimic certain basic human learning patterns associated with becoming “smarter” or more intelligent. And how do humans get smarter? We go to school, we learn, we study, and we acquire information. Generative AI is about creating this kind of intelligence; it goes beyond automation and predictive calculations. It's nothing less than a new form of intelligence without consciousness, intelligence that can learn on its own and create novel solutions.

 

A second imperative step is to ask: At what is AI particularly adept? This question again relates directly to human beings and many tasks we routinely perform. As an example, generative AI is very good at summarizing text and identifying its sentiment. For asset managers, there’s enormous potential here, particularly in using generative AI to support an adviser’s internal keepers of knowledge. Generative AI tools could help make information readily available in combinations that provide hidden competitive advantages, including boosted analytical prowess and more effective investment decision making.

 

AI in IM

Mutual fund boards of trustees are stewards with the mandate to act in shareholders’ best interests. Therefore, finding efficiencies and lowering costs are always important goals. Generative AI does have the potential to enhance efficiencies in investment management and mutual funds. But more importantly, AI has the potential to make fund governance and oversight more effective, not just faster—because doing the wrong things faster doesn’t do shareholders any favors. Boards and shareholders don’t benefit from speed alone; better decision making in all fund components is the golden chalice.

 

For mutual fund companies, those fund components include the parts that manufacture the funds themselves, the parts that perform fund oversight, and the parts that distribute and market the funds. And, of course, there are the technology components that underpin all operational functions and can lend efficiency to specific tasks as well as the whole.

 

Fundamentally critical, the manufacturing component is about a fund complex’s efforts to create investment products that seek to provide attractive risk-adjusted returns for shareholders. Based on the work of analysts and portfolio managers, human intelligence is at the core of actively managed mutual funds. It’s a world of collecting information about individual companies, securities, industries, and economies to create sophisticated models that project scenarios for future company results, yields, and security prices. In other words, many different inputs come from various places that analysts and managers must synthesize according to their investment process before ultimately making an investment decision on behalf of shareholders. And there must also be a connection with the sales side of asset management companies.

 

When we consider generative AI, we find that it has the ability to analyze and create content, which gets to the heart of the complex tasks performed by analysts and portfolio managers. The technology therefore has the potential to provide useful summaries that may even sharpen a human understanding of the available information. This implies that AI could enhance decision making, which could ultimately prove an advantage in manufacturing and operating fund products.

 

But we must not overlook the important role of data in the asset management industry as we march toward an AI-driven world. Before generative AI can analyze and summarize data, a key task will be to develop the database itself. In reality, there’s much potentially relevant data that’s not well understood, either because it isn’t collected systematically or because it’s resisted or ignored as unimportant. Collecting less organized or underappreciated data and making it available to generative AI tools could facilitate even better results in assessing issues like relative performance—and doing so in imaginative ways may provide new insights. Integration into fund boards’ 15(c) processes would seem a particularly ripe area for reaping the benefits of AI’s capabilities to analyze data.

 

AI in Governance

From a strategic governance perspective, it may be useful to think of AI as an “assisting,” as opposed to a “disruptive,” tool, one that can potentially provide an edge to fund management and governance ecosystems. It can be trained to learn about stewardship principles, to test its own hypotheses for governance, and to help trustees/directors and boards to see and better understand the fruits of such analyses.

 

Boards will particularly benefit from development of a coherent, integrated strategy around generative AI. This includes a clear understanding of how advisers and asset managers are integrating AI and where it may affect the role of boards and trustees as fiduciaries.

 

Boards will also need to develop a perspective on the salient risks associated with artificial intelligence technology. It may become an active intelligence—without consciousness—but it’s still based on processing data and information through the medium of computer code. As a result, a limitation of AI models is that they’re only as good as the data we give them and how we instruct them to manage or manipulate that data. That implies the technology is still very much subject to the biases of its creators; the models are not automatically immune to those biases, and their construction may embed certain unintended values.

 

Mutual fund trustees and directors, for example, will want to develop insights into how shareholders are protected against biases within generative AI that may have unintended consequences. This may include asking questions that press for explanations of generative AI models and clarify whether biases exist before granting access to new data and applications.

 

Importantly, the mathematical and network complexity of generative AI models makes testing and understanding their quality an inherently challenging task for boards. For that reason, an essential protocol of internal governance needs to be developed wherever generative AI is operable in a company’s systems. It will be crucial to ensure that the outputs produced are accurate, as unbiased as possible, and ethically principled. Boards may want to be catalysts for rethinking their organizational approach to data science governance.

 

The overriding quandary for trustees and directors is determining how to ensure the organizations they oversee are taking a holistic approach to generative AI and how to identify the system of governance for its development and application.

 

Regulation Is Inevitable (and Potentially for the Better)

It is imperative for trustees to be consistently aware of how AI applications are being viewed from a regulatory perspective and the impact of emerging regulatory initiatives.

 

Just as regulators have guidelines and rules around disclosures to ensure that shareholders are appropriately informed about financial products, it’s likely regulators will impose similar guardrails on technological development such as generative AI. This may be essential to sustain investors’ confidence by giving them transparent views into how their funds operate in an increasingly complicated, technologically assisted investment environment.

 

It will be incumbent upon fund trustees not to equate regulation in these areas with the imposition of an undue burden; guidelines and rules can be prudently imposed and may prove quite helpful in making sure everyone is playing by the same basic rules. That won’t satisfy everyone, but sound and appropriately developed regulation of AI may in fact dispel apprehensions, helping make the investment management industry less fragile and sustaining the confidence of shareholders and investors.

 

Robust governance in our new era of generative AI will necessitate that mutual fund boards educate themselves quickly and comprehensively about this technology and be able to ask the right questions of investment advisers, asset managers, and decision makers to minimize the risks and potential harm of AI applications. For fund boards, it will be increasingly imperative to try to “see around the corner” regarding AI’s potential impact before it permeates the systems we oversee.


The foundation for this article began as a conversation between Hassell McClellan, chairman of the John Hancock Group of Funds, and Robi Krempus, global head of advanced analytics at Manulife Investment Management. The views expressed are those of the two individuals, not of the board, funds, or firms they represent.


Hassell McClellan (pictured above, left) has been an independent director of John Hancock Funds since 2012 and chair since 2017. He is associate professor of finance and policy (retired) at Boston College’s Carroll School of Management, where he was on the faculty for 29 years.

 

Robi Krempus (pictured above, right) is global head of advanced analytics at Manulife Investment Management. Prior to his current position, he was head of analytics and data management for John Hancock Investment Management, a company of Manulife Investment Management.

 

 
