Viewpoints

Fund Directors: Here’s what you should know about AI

August 6, 2019

By Jilaine Bauer

In 2016, the White House published its report "Preparing for the Future of Artificial Intelligence," with 23 specific recommendations for further action by federal agencies and others to help ensure AI applications are used for the public good. Since then, the financial services industry and its regulators have moved quickly to broaden deployment of AI to solve all sorts of problems, make better decisions, increase processing speed, and reduce costs. In turn, this has created a need for those who lead and govern to keep pace with how AI is being used in data analytics, product design, operations, risk management, and compliance.

 

Unlike robotic process automation, which uses preconfigured software to mimic human actions, AI is technology that simulates the human functions of sensing, learning, inference, and reasoning to perform tasks. Narrow (or weak) AI performs these functions using information from a defined data set, whether it's Siri conversing with you, Watson winning at Jeopardy!, or a trading algorithm outwitting a portfolio manager. General (or strong) AI attempts to replicate the fluidity and flexibility of human thought to address more abstract problems. As these tools are embraced, it is critical to be sure they function and perform within the bounds expected of human behavior, as measured by responsibility, transparency, auditability, incorruptibility, and predictability. Fiduciary duties of mutual fund directors and good governance require it, and regulators expect no less.
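
The distinction is easier to see in miniature. The sketch below is purely illustrative (the order-flagging scenario, function names, and every number are invented): the first function is RPA-style automation faithfully repeating a rule a human wrote, while the second "learns" its own rule from labeled history, which is narrow AI reduced to its simplest possible form.

```python
# Illustrative contrast: preconfigured automation vs. narrow AI.
# The order-flagging scenario and all numbers here are invented.

def rpa_style_rule(order_size: float) -> bool:
    """RPA: a human wrote the rule; software just repeats it."""
    return order_size > 10_000  # fixed threshold, unchanged until reprogrammed

def fit_threshold(history: list[tuple[float, bool]]) -> float:
    """Narrow AI in miniature: learn a flagging threshold from labeled history
    by picking the cutoff that best separates orders a reviewer flagged (True)
    from those they did not (False)."""
    def errors(cut: float) -> int:
        return sum((size > cut) != flagged for size, flagged in history)
    return min((size for size, _ in history), key=errors)

history = [(500, False), (8_000, False), (12_000, True), (40_000, True)]
learned_cut = fit_threshold(history)

print(rpa_style_rule(15_000))   # True -- the rule a human hard-coded
print(15_000 > learned_cut)     # True -- the rule inferred from the data
```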

 

1. Know that knowing the 'metes and bounds' of AI is not enough

Directors need to know more than the "metes and bounds" of an AI application to determine whether it is "fit for purpose." Anyone who has been led astray by a poor GPS app can attest to that! While directors don't need to read scope documents, technical specs, or user manuals, they do need to understand and be comfortable with the business use cases and how the application aligns with business goals. They also should know what data is being used, how it is obtained and maintained, and how it will be secured and protected. Directors must also understand the steps taken to make sure business (and compliance) requirements are accurately defined and interpreted. Finally, directors should be satisfied that the process for designing, engineering, testing, and maintaining the application is sufficiently robust. And while directors necessarily rely on others to perform these activities, they cannot satisfy their due care obligations without also making sure those others have the requisite skills, experience, and resources.
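
One concrete artifact directors can ask to see is a test that is expressly tied to a stated business requirement. The sketch below is hypothetical; the requirement ID, function, and thresholds are invented for illustration and stand in for whatever the application actually does:

```python
# Hypothetical acceptance test linking a documented business requirement
# to observable behavior. REQ-017, score_application, and all values are
# invented stand-ins for this illustration.

def score_application(annual_income: float, requested_amount: float) -> str:
    """Stand-in for the AI application under test."""
    if requested_amount > 0.5 * annual_income:
        return "refer_to_human"
    return "approve"

def test_req_017_large_requests_get_human_review() -> None:
    # REQ-017: requests above 50% of income must never be auto-approved.
    assert score_application(60_000, 40_000) == "refer_to_human"

def test_req_017_small_requests_may_be_automated() -> None:
    assert score_application(60_000, 5_000) == "approve"

if __name__ == "__main__":
    test_req_017_large_requests_get_human_review()
    test_req_017_small_requests_may_be_automated()
    print("REQ-017 checks passed")
```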

 

2. Use—but don't entrust your business to—the intelligence of AI

Whether the activity involves products, marketing and sales, customers, investment and trading, operations, or regulatory requirements, compliance with a requirement does not always lend itself to a "yes" or "no" answer. Often the answer is contextual and nuanced. Sometimes the question is posed in a manner engineered to elicit a specific response, making it necessary to "reverse engineer" how the answer was reached. For these and other good reasons, AI can inform, but cannot replace, the judgment and critical thinking of humans who are knowledgeable about both the business requirements and the limitations of the tool. In words adapted from author Neil Gaiman's description of the value of a 21st-century librarian: "Google can bring you back a hundred thousand answers. A human being can bring you back the right one!"
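
One common way to put this into practice is a confidence threshold: the tool disposes of routine cases and routes contextual or ambiguous ones to a knowledgeable person. A minimal sketch, with invented names and values:

```python
# Minimal human-in-the-loop routing: the model answers only when clearly
# confident; nuanced cases go to a person. All names and values are invented.

from dataclasses import dataclass

@dataclass
class ModelOutput:
    answer: str        # e.g., "compliant" / "non_compliant"
    confidence: float  # 0.0 - 1.0

def route(output: ModelOutput, threshold: float = 0.9) -> str:
    """Defer to a human reviewer whenever the model is not clearly confident."""
    if output.confidence >= threshold:
        return output.answer
    return "escalate_to_human_review"

print(route(ModelOutput("compliant", 0.97)))  # routine case: model answers
print(route(ModelOutput("compliant", 0.62)))  # nuanced case: human judgment
```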

 

3. Good governance is good for AI

Since AI can produce both good and bad outcomes, it's important to consider adopting and clearly communicating principles and guidelines for its development and use. Areas for consideration include:

 

  • The purposes for which intelligence may be created and used;
  • How and the extent to which development and use will be governed and supervised;
  • Whether and how third parties may access and use the intelligence;
  • Privacy considerations, including how customer data may be used and whether consents must be obtained;
  • Protocols to avoid creating, and to reduce, biases in underlying algorithms (a simple screen is sketched after this list);
  • Protocols to promote and test for security and safety; and
  • Interpretation and application of relevant government regulations and technology standards.
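
To make the bias item above concrete, one simple screen compares favorable-outcome rates across groups, in the spirit of the familiar "four-fifths" rule of thumb. The data, group labels, and trigger below are illustrative only, not a complete fairness protocol:

```python
# Illustrative bias check: compare favorable-outcome rates between groups
# (a "four-fifths rule" style screen). The data and labels are invented.

def selection_rate(outcomes: list[bool]) -> float:
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a: list[bool], group_b: list[bool]) -> float:
    """Ratio of the lower group's favorable rate to the higher group's."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# True = favorable outcome (e.g., application approved) for each individual.
group_a = [True, True, True, False, True]    # 80% favorable
group_b = [True, False, False, True, False]  # 40% favorable

ratio = disparate_impact_ratio(group_a, group_b)
print(f"impact ratio: {ratio:.2f}")
if ratio < 0.8:  # common rule-of-thumb trigger; yours may differ
    print("flag for review: outcomes differ materially across groups")
```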

 

Examples of AI principles include the Asilomar AI Principles developed by the Future of Life Institute in 2017 and Google’s AI Principles.


4. AI implementation

In the words of Amit Kalantri, an IT executive, author and magician: "Be creative while inventing ideas, but be disciplined while implementing them." What does good AI implementation look like?

 

  • The development team should be experienced in developing the type of AI application under consideration for the intended use—preferably in the financial services industry.
  • They should be familiar with the financial jargon used to describe business use cases and requirements. Those use cases and requirements should be discussed not just with the development team, but also with testers, users, audit and compliance personnel, and other stakeholders.
  • The technology solution must be capable of change and adaptation without undue difficulty, delay, or expense throughout the development life cycle.
  • Computer logic (including fields and values) and test plans should be clearly explained and associated with relevant business requirements in documentation.
  • Types of customer data should be clearly identified, and a separate analysis should be performed to determine the applicable privacy and cybersecurity requirements and to confirm that the data use complies with corporate policies and fiduciary or other standards.
  • If the AI application is dependent on use of third-party systems, determine whether they should adhere to your AI governance principles and whether they need to be named in privacy disclosures and consents to use.
  • Make sure provisions are made (and documented) for monitoring, reviewing, and "refreshing" the AI application as business needs, requirements, and the operating environment change (a minimal example is sketched after this list).
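
As a sketch of what a monitoring provision might look like in practice, the hypothetical hook below compares live inputs against a baseline captured during model development and raises an alert when they drift apart. The statistic, trigger, and numbers are invented for illustration:

```python
# Illustrative monitoring hook: alert when live inputs drift away from the
# data the model was built on. All names, thresholds, and data are invented.

from statistics import mean, stdev

def drift_score(baseline: list[float], live: list[float]) -> float:
    """How many baseline standard deviations the live mean has shifted."""
    return abs(mean(live) - mean(baseline)) / stdev(baseline)

baseline_order_sizes = [900, 1_100, 1_000, 950, 1_050]      # from development
live_order_sizes = [2_900, 3_100, 3_000, 2_950, 3_050]      # this week's inputs

score = drift_score(baseline_order_sizes, live_order_sizes)
print(f"drift score: {score:.1f}")
if score > 3.0:  # example trigger; set and document your own
    print("alert: inputs have shifted; review and possibly refresh the model")
```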

 

Artificial intelligence is fast becoming a major driver in the financial services industry. It is incumbent upon mutual fund directors to help ensure it is used for the benefit of investors and that it does them no harm.  


Jilaine Bauer provides practical advice to boards, compliance officers, executives, and staff of mutual funds, investment advisers, broker-dealers, insurance companies, and banks, drawing on the "insider's perspective" she has curated as general counsel, chief compliance officer, consultant, and mentee. After four years at a global fintech company, she now is a "1st gen" (for her age) proponent of using technology smartly to solve problems, accelerate analytics, and free up time for critical thinking. Her J.D. is from Loyola University (Chicago), and her B.S. is from the University of Illinois (Champaign-Urbana). She can be found on LinkedIn and at www.jilainebauer.com.

 

 
