Amid all the excitement about the
potential uses of artificial intelligence in financial advice, there is still a lot of uncertainty about how this new technology will fit into the existing regulatory framework governing financial services firms.
Current laws do not provide sufficient principles for the broad adoption of AI, said Rebecca Eisner, a partner at the law firm Mayer Brown. Firms looking to adopt AI technology face complex legal questions that may require a rethinking of fundamental compliance principles.
"Regulatory frameworks require transparency controls and auditability, and yet these are very challenging concepts for deep learning and artificial intelligence," she said during a recent webinar that examined the legal implications of AI in financial services.
For example, tort and agency laws are written on the presumption that the actors involved are human. If an AI engine recommends an investment that breaches a fiduciary duty and leads to losses, what recourse does the investor have? Who can be held responsible when intelligent machines are making decisions?
Perhaps a firm doesn't want to use AI for investments but is interested in a machine that automates data processing and document production. Current patent and copyright law is designed to protect only human creators, and the firm may have legal difficulty establishing ownership of forms created by a computer.
Executives at financial institutions need to think through these issues and prepare a robust due-diligence process when building or buying AI technology, Ms. Eisner said. Ongoing monitoring will be required, and management will have to be able to answer questions about how the AI is being used, where the data come from and how the firm is complying with privacy requirements to ensure it isn't harming customers.
Beyond the fact that existing regulations don't accommodate AI, developers should decide early on where compliance fits into an AI system, said David Beam, another Mayer Brown partner. Is there a digital overlay, is a human monitoring the system, or can rules be embedded directly into it?
Management also has to decide who will be held responsible for any violations by the AI, and it must be able to explain the technology's decision process. If regulators come knocking, they're going to want to know the specific reasons a computer performed a specific task, Mr. Beam said.
While some firms see AI as a way to gain an advantage over the competition, Mayer Brown partner Alex Lakatos said the industry should resist the temptation to "hide the secret sauce."
"Artificial intelligence can't be a black box," Mr. Lakatos said. "Secrecy is not the path to brand loyalty … it's not going to be popular to have a system that can't be explained."