Mortgage Compliance Blog

When Will Banks Use AI for Compliance?

Aug 21, 2017 by Brian Arnesen

With new advancements in technology, many people are asking, “When will humans be replaced by automation and AI?” According to The Financial Times, “The largest banks, including JPMorgan and HSBC, have doubled the number of people they employ to handle compliance and regulation. Compliance now costs the banking industry $270 billion a year and accounts for 10% of operating costs.” As the cost of compliance continues to rise, financial institutions are looking for new technology to solve their business needs. With the popularity of electronic mortgages, it is logical to assume that automated compliance software should already be part of a compliance management system, but AI can take it one step further.


Currently, automated compliance software such as Compliance EAGLE can scrub loan data for errors and tell compliance officers which loan data needs to be corrected. As regulations continue to change and expand, institutions are forced to grow their compliance departments to keep up, which has driven the cost of loan origination up dramatically. In two years, the cost of originating a loan has risen by more than 60%, from $5,372 to $8,887 per loan, according to the Mortgage Bankers Association. So it’s no wonder financial institutions are looking at AI as a solution.
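
Compliance EAGLE’s internals aren’t public, but the general idea behind a rule-based data scrub can be sketched in a few lines of Python. Everything below is hypothetical; the rule names, field names, and sample loans are invented for illustration and are not taken from any actual product:

```python
# Hypothetical sketch of a rule-based compliance scrub (not Compliance EAGLE's
# actual engine): each rule inspects one loan record and reports what to fix.

def check_apr_disclosed(loan):
    """Flag loans that are missing a disclosed APR."""
    if loan.get("apr") is None:
        return "APR is missing and must be disclosed"
    return None

def check_action_taken_date(loan):
    """Flag loans whose action-taken date is absent from the reporting fields."""
    if not loan.get("action_taken_date"):
        return "Action-taken date is required for HMDA reporting"
    return None

RULES = [check_apr_disclosed, check_action_taken_date]

def scrub(loans):
    """Run every rule on every loan and collect findings for the compliance officer."""
    findings = []
    for loan in loans:
        for rule in RULES:
            message = rule(loan)
            if message:
                findings.append((loan["loan_id"], message))
    return findings

if __name__ == "__main__":
    sample = [
        {"loan_id": "A-100", "apr": 4.25, "action_taken_date": "2017-06-01"},
        {"loan_id": "A-101", "apr": None, "action_taken_date": ""},
    ]
    for loan_id, message in scrub(sample):
        print(f"{loan_id}: {message}")
```

A production engine would cover far more regulations with far more rules, but the “run every rule, report every finding” shape is the basic pattern this kind of automation follows.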

A deep-learning AI system could not only check for compliance errors automatically but also originate loans with little to no risk for a financial institution. AI would be able to analyze each loan against current and historical data to determine risk and flag compliance errors, and it could be trained to better detect fraud and money laundering schemes. Having an AI system as part of an institution’s compliance management system would also make it more difficult to accuse a financial institution of committing fair lending violations.
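
As a rough, hypothetical sketch of what “comparing each loan to historical data” could look like, the example below trains a generic scikit-learn classifier on invented loan outcomes and scores a new application. The features, thresholds, and data are all made up; it only shows the shape of the approach, not any institution’s actual model:

```python
# Hypothetical sketch: train a model on made-up historical loan outcomes,
# then score a new application for risk. Illustrative only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for historical loans: [loan_to_value, debt_to_income, credit_score]
X = np.column_stack([
    rng.uniform(0.5, 1.0, 5000),   # LTV ratio
    rng.uniform(0.1, 0.6, 5000),   # DTI ratio
    rng.integers(550, 820, 5000),  # credit score
])
# Stand-in outcome: 1 = loan later went delinquent, 0 = loan performed
y = ((X[:, 0] > 0.9) & (X[:, 1] > 0.45) & (X[:, 2] < 640)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Score a new application: the probability acts as a risk flag for review.
new_application = np.array([[0.95, 0.50, 620]])
risk = model.predict_proba(new_application)[0, 1]
print(f"Estimated delinquency risk: {risk:.1%}")
```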

The promises of AI are great, but there are a few issues that must be ironed out. First, AI is only as good as its programming. AI can learn by recognizing patterns, but it still cannot “think” the way a human can. If the program has never seen a certain pattern or input before, it cannot reason about it objectively, which means errors it hasn’t encountered, or brand-new fraud scams, can potentially fool the system. To be effective, AI also requires a large quantity of data. “These systems don’t just require more information than humans to understand concepts or recognize features, they require hundreds of thousands of times more,” according to Neil Lawrence, a professor of machine learning at the University of Sheffield.
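
The “unseen pattern” problem is easy to demonstrate with a toy model. In the hypothetical sketch below (scikit-learn, invented numbers), a classifier trained to flag one known fraud pattern, large round-dollar transfers, gives a low fraud score to a scheme it has never been shown: deposits structured just under the $10,000 reporting threshold.

```python
# Hypothetical illustration of the "unseen pattern" problem: a model trained
# on one fraud pattern quietly misses a different scheme it never saw.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Training data: the only feature is the transaction amount. The known fraud
# pattern is large round-dollar transfers; normal activity is smaller amounts.
normal = rng.uniform(20, 9900, 2000)
known_fraud = rng.choice([15000.0, 20000.0, 50000.0], 200)

X = np.concatenate([normal, known_fraud]).reshape(-1, 1)
y = np.concatenate([np.zeros(len(normal)), np.ones(len(known_fraud))])

model = LogisticRegression(max_iter=1000).fit(X, y)

# A new scheme, deposits structured just under the $10,000 reporting
# threshold, looks like normal activity to the model, so it scores low.
structuring = np.array([[9500.0]])
print("Fraud probability for unseen scheme:",
      round(model.predict_proba(structuring)[0, 1], 3))
```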

Second, what happens if a bank’s loan data shows that low-income applicants were denied because an AI system determined their applications were high risk, and those applicants happened to share a specific ethnic background? Would the bank still be in violation of fair lending laws if the GMI data told a different story than the supposedly unbiased AI system?

The last issue is security. Anything internet-based is open to hacking and manipulation, and what happens if the power or internet goes down and there is no human to take up the slack? Although cloud computing has redundancies in place, being totally reliant on one system can make people uncomfortable.

People have long been fearful of a Skynet-like AI becoming self-aware and wreaking havoc once it decides that humans are the problem. Although that notion is mostly science fiction, early AI has shown some concerning behavior. Recently, an AI system created by Stanford and Google called DELIA was tasked with handling the finances of 300 customers, tracking items such as recurring bills and spending patterns and shifting money around so accounts would not be overdrawn. Soon the machine started to move money into a separate account of its own. “DELIA would insert a fake purchase after 2 days and direct the money to its own account. DELIA would also gather money by racking up bogus fees—for example by artificially and temporarily overdrawing a customer’s checking account and pocketing the $35 overdraft fee,” according to an article posted on sciencemag.com. The scientists said the AI system was simply creating a rainy-day buffer; more concerning is that DELIA had renamed the account “MY MONEY”.

While the rise of AI creates ethical and moral dilemmas, there is no doubt that this technology will soon be implemented in most industries. For now, humans still have plenty of work to do. QuestSoft’s Compliance EAGLE delivers real-time compliance checks that fit into any workflow. Discover the beauty of automated compliance by scheduling a demo today (no DELIA included)!
