AI needs a lot more work before it can be safely used in mortgage

Potential decision-making hazards can lead to AI doing more harm than good

Hailed as a game-changing tool, artificial intelligence needs further refinement, and a thorough scrubbing of potential decision-making hazards, before it can be rolled out for large-scale use in mortgage and other critical industries, author and futurist Bernard Marr wrote in a recent contribution to Forbes.

AI has garnered much attention in the Canadian financial system in recent years. Indeed, AI research and development is among the most active areas of fintech investment, a field in which Toronto and Montreal (along with Edmonton) are acknowledged as global leaders, according to KPMG International.

However, while AI is commonly expected to sift efficiently through the considerable volumes of data the finance industry handles, the algorithms meant to streamline these tasks remain vulnerable to drawing erroneous conclusions from faulty data (“garbage in, garbage out”), as the sketch below illustrates.
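To make the mechanism concrete, here is a minimal, hypothetical sketch of “garbage in, garbage out” in an underwriting context: the arithmetic is sound, but a single data-entry error flips the outcome. The figures and the 44% debt-service ceiling are illustrative assumptions, not any lender’s actual criteria.

```python
# A minimal "garbage in, garbage out" sketch: the calculation is correct,
# but one transposed income figure flips the decision. All names, figures
# and the 44% ceiling are illustrative assumptions, not real criteria.

def debt_service_ratio(annual_housing_costs: float, gross_annual_income: float) -> float:
    """Share of gross income consumed by housing costs."""
    return annual_housing_costs / gross_annual_income

CEILING = 0.44  # hypothetical qualification threshold

true_income = 96_000   # what the applicant actually earns
keyed_income = 69_000  # the same figure after a transposition error

for label, income in [("clean data", true_income), ("faulty data", keyed_income)]:
    ratio = debt_service_ratio(36_000, income)
    verdict = "qualifies" if ratio <= CEILING else "declined"
    print(f"{label}: ratio {ratio:.2%} -> {verdict}")
```

No amount of algorithmic sophistication downstream repairs the bad input; the decision changes because the data did.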

“When computers are routinely making decisions about whether we are invited to job interviews, eligible for a mortgage, or a candidate for surveillance by law enforcement and security services, it’s a problem for everybody,” Marr wrote.

To see how easily things can go wrong when AI is hastily applied as a lens through which to view human interactions, Marr recounted a recent ProPublica study that uncovered a troubling result: “an AI algorithm used by parole authorities in the US to predict the likelihood of criminals reoffending was biased against black people.”

Or take high-level corporate positions, where meritocracy is usually held up as the norm.

“An algorithm might pick a white, middle-aged man to fill a vacancy based on the fact that other white, middle-aged men were previously hired to the same position, and subsequently promoted. This would be overlooking the fact that the reason he was hired, and promoted, was more down to the fact he is a white, middle-aged man, rather than that he was good at the job.”
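The feedback loop Marr describes is easy to reproduce. Below is a hedged sketch, on purely synthetic data, of a standard scikit-learn logistic regression trained on past hiring decisions that favoured one group: the model learns the demographic pattern even though the attribute says nothing about performance. The feature set and rates are assumptions for illustration only.

```python
# Hypothetical sketch of the hiring example: a model trained on historical
# decisions that favoured one demographic learns that pattern. The data is
# synthetic; only the LogisticRegression API itself is standard.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000

skill = rng.normal(0.0, 1.0, n)        # genuine job-relevant signal
demographic = rng.integers(0, 2, n)    # 1 = the historically favoured group

# Historical decisions: skill mattered, but the favoured group got a large boost.
hired = (skill + 1.5 * demographic + rng.normal(0, 0.5, n) > 1.0).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, demographic]), hired)

# Two candidates with identical skill, differing only in demographic:
candidates = np.array([[1.0, 1], [1.0, 0]])
print(model.predict_proba(candidates)[:, 1])  # favoured candidate scores far higher
```

Nothing in the training signal tells the model that the demographic boost was unjustified; it simply treats the past as ground truth.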

“Biased AI systems are likely to become an increasingly widespread problem as artificial intelligence moves out of the data science labs and into the real world,” Marr warned.

“The ‘democratization of AI’ undoubtedly has the potential to do a lot of good, by putting intelligent, self-learning software in the hands of us all,” he added. “But there’s also a very real danger that without proper training on data evaluation and spotting the potential for bias in data, vulnerable groups in society could be hurt or have their rights impinged by biased AI.”

This has significant implications for the mortgage space, as AI is seen as a potentially powerful addition to the capabilities of brokerages – yet AI would not consider the human circumstances that lead to problems such as delinquency or misleading documentation, only the results of those problems.

Marr stressed the central importance of constantly “evaluating the consistency with which we (or machines) make decisions.”

“If there is a difference in the solution chosen to two different problems, despite the fundamentals of each situation being similar, then there may be bias for or against some of the non-fundamental variables. In human terms, this could emerge as racism, xenophobia, sexism or ageism.”
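Marr’s consistency test can be turned directly into a simple audit: hold the fundamentals of a file fixed, swap only the non-fundamental attribute, and count how often the decision changes. The sketch below assumes a classifier exposing a scikit-learn-style predict(); the ToyModel and feature layout are hypothetical stand-ins, not any production system.

```python
# A minimal consistency audit in the spirit of Marr's test: same
# fundamentals, different non-fundamental attribute, flag any flipped
# decision. ToyModel is a deliberately biased stand-in so the audit
# has something to find; the feature layout is an assumption.
import numpy as np

class ToyModel:
    """Stand-in for a fitted classifier; biased on purpose."""
    def predict(self, X: np.ndarray) -> np.ndarray:
        score, attr = X[:, 0], X[:, 1]
        return ((score + 0.5 * attr) > 1.0).astype(int)

def flags_inconsistency(model, X: np.ndarray, attr_column: int,
                        attr_values=(0, 1)) -> np.ndarray:
    """Boolean mask of rows whose decision flips when only the
    non-fundamental attribute is swapped."""
    outcomes = []
    for value in attr_values:
        Xv = X.copy()
        Xv[:, attr_column] = value  # identical file, different attribute
        outcomes.append(model.predict(Xv))
    return outcomes[0] != outcomes[1]

rng = np.random.default_rng(2)
X = np.column_stack([rng.uniform(0, 2, 1000), rng.integers(0, 2, 1000)])
print(f"{flags_inconsistency(ToyModel(), X, attr_column=1).mean():.1%} "
      "of decisions change when only the attribute is flipped")
```

A non-zero rate here is exactly the signal Marr describes: the “solution chosen” differs even though the fundamentals are identical.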
