Anthropomorphism and AI: Dialing back the fear knob

As humans, we love projecting our emotions onto non-human things like animals, objects and weather events. I’m told the psychological term for this is anthropomorphism, which is difficult to say three times super fast. We especially seem to attribute human-like characteristics to things we can’t fully understand or have no means of explaining – like my first car, which had a ‘bedtime’ since it would always break down at 2 a.m. in the middle of nowhere.

Anthropomorphizing has some helpful secondary effects for us. It allows us to feel more connected and empathetic, which can result in better treatment of others. It can inspire imagination and creativity as we take in the world around us, especially for children as they play with their toys. However, it can also lead to false assumptions about what is actually happening and to misdiagnosis of situations. It also turns out that the desire for social connection has a nasty flip side: if we can’t read something’s emotional intent, we tend to assume it is out to get us. I can’t think of a better example of this tendency than our approach to artificial intelligence (AI), and especially AI in mortgage.

Even the name “Artificial Intelligence” speaks to something cold and emotionless – something not quite real, even a facade. Words like “shady” are already creeping into my head as I write that. Until recently, most uses of AI were behind the scenes, operating squarely in the world of software and IT and somewhat shielded from anthropomorphism. This is especially true for machine learning, since much of the “AI” happens before you ever interact with it as a user. For instance, we started working with machine learning for automated valuation models back in 2017. But since the output is ultimately a predicted value and a confidence score, I don’t remember a lot of complaints about the AI coming for our jobs. Then ChatGPT came on the scene.

Perhaps it is the conversational aspect of ChatGPT and other GenAI-based tools that captured our emotional attention – or the rapid rise of AI-powered companion apps, which all but demand that we project our emotions onto them for the best experience. Suddenly, Generative AI has lit up the right side of our brains, producing creative content, imagery and even music. Reactions to the use of all AI disciplines have become much more emotion-based than before. Unfortunately, much of that discourse jumps straight to worst-case scenarios like rampant job replacement and heavily biased automated decisions that we seemingly have no control over. It certainly doesn’t help that GenAI models’ results can be inaccurate or completely made-up “hallucinations.”

The fact is that AI-based technology has no intent or emotional capability at all. It is a tool like every innovation preceding it, and we as humans get to decide how and when it is used. Like any powerful tool, it is essential to put proper fundamentals in place – training, testing, accountability and continuous improvement – before implementing it. Defining the role that we allow AI to play in our processes can be an effective way to dial down the fear level and avoid rushing to the scariest potential outcome. Let’s quickly look at the roles of AI for Prediction, AI for Communication and AI for Decisioning.

AI for Prediction

Machine learning is a proven discipline for creating remarkably accurate predictions and quantitative information. At Clear Capital, we started using machine learning for our automated valuation model in 2017. We found that it not only performed better than statistical models in predicting the sale value of a house, but it also adapted better to local market nuances with far less effort and a faster build time. Suddenly, we could refresh predictions for the entire country in a few hours rather than weeks, which allowed for rapid iteration and improvement.

One by one, industry players began making the switch to machine learning-based models as well. A machine learning model can be supervised, the data used for training can be governed, and results can be tested and even made explainable given the appropriate design. The use of machine learning is already firmly embedded in business and consumer apps alike.
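To make the prediction role concrete, here is a toy sketch of the idea behind a valuation model: estimate a home’s value from similar comparable sales and report a confidence score alongside it. The data, the nearest-neighbor approach, and the confidence formula are all illustrative assumptions for this column, not Clear Capital’s actual model.

```python
# Toy AVM sketch: predict a value plus a confidence score from comps.
# All figures and the k-NN method are illustrative, not a production model.
from math import sqrt
from statistics import mean, stdev

# Hypothetical comparable sales: (square_feet, bedrooms, sale_price)
comps = [
    (1400, 3, 310_000), (1550, 3, 335_000), (1620, 3, 342_000),
    (1800, 4, 390_000), (2100, 4, 455_000), (2400, 5, 515_000),
]

def predict_value(sqft, beds, k=3):
    """Average the k most similar comps; a narrow spread means higher confidence."""
    # Distance in a crudely scaled feature space (scaling is illustrative only)
    ranked = sorted(comps, key=lambda c: sqrt(((c[0] - sqft) / 100) ** 2 + (c[1] - beds) ** 2))
    nearest = [price for _, _, price in ranked[:k]]
    estimate = mean(nearest)
    # Confidence: 1 minus the coefficient of variation of the nearest prices
    confidence = max(0.0, 1.0 - stdev(nearest) / estimate)
    return round(estimate), round(confidence, 2)

value, conf = predict_value(1600, 3)
```

Because the output is a number and a confidence score rather than a conversation, a model like this stays squarely in the “tool” category; the supervision, governance and testing mentioned above all apply to how the comps and scaling are chosen.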

AI for Communication

At its core, Generative AI is really about language and communication – hence the name Large Language Model. Text-based language, but also visual language via imagery, drives our communication. When ChatGPT was unveiled to the world, there was almost an audible sigh of relief from prolific internet search users. Finally, there was a way to engage with vast amounts of information in a conversation rather than a list of ranked results that, over time, had morphed into a longer and longer list of advertisements.

The new combinations of generated content are different from our normal patterns, yet well-formed enough to spark imagination and creativity. The ability to summarize large quantities of conversational data, such as emails and documents, can create efficiency, reduce duplicative tasks, and even automate communication back to clients.
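A minimal sketch of how that summarization step might be framed: gather the source documents into a single prompt for a large language model. The `send_to_llm` call at the end is a hypothetical placeholder, not a real API, and the prompt wording is an assumption for illustration.

```python
# Sketch of assembling documents into one summarization prompt for an LLM.
# `send_to_llm` is a hypothetical placeholder, not any vendor's real API.
def build_summary_prompt(documents, audience="loan officer"):
    """Combine source documents into a single summarization prompt."""
    body = "\n\n".join(f"--- Document {i + 1} ---\n{text}"
                       for i, text in enumerate(documents))
    return (f"Summarize the following {len(documents)} documents "
            f"for a {audience}. Flag anything uncertain rather than guessing.\n\n{body}")

docs = ["Borrower emailed asking about closing dates.",
        "Appraisal report: value supported by three comparables."]
prompt = build_summary_prompt(docs)
# In practice, the model's response would be reviewed by a human before any
# client communication goes out (hypothetical call):
# summary = send_to_llm(prompt)
```

Note that even in this sketch, the instruction to flag uncertainty rather than guess is doing real work: it is one small guardrail against the accuracy problems described next.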

But herein lies the problem. Language is messy, often biased and subjective. The internet data used to train large language models is not always accurate. And like any AI model, the results are not always accurate either. The lack of accuracy and the possibility of “hallucinations” become troublesome when language interpretation is essential to automating decisions.

AI for Decisioning

In order to automate decisions, especially complex decisions like underwriting a mortgage, a couple of foundational capabilities are needed: accurate analysis of facts and accurate alignment of documentation to requirements, all within an ethical understanding of the regulatory framework. Communication and language interpretation are just as essential as calculations.

It is evident that over the past several years our industry has made significant advances in the use of AI for quantitative output like property value predictions and identification of likely comparable properties. More recently, the use of deep learning for image recognition has been exploding, enabling accurate square footage measurements from imagery and detection of specific home features. All of this has led to automation and efficiency across several underwriting tasks, especially for collateral.

It is also clear that we are at the beginning of the journey in using Generative AI to automate communication and language analysis in mortgage. There is a lot of promise in summarizing documents for an underwriter, or in AI-generated communication that assists a loan officer in working with a borrower. But we are still a long way off from these areas being fully automated in all cases. The accuracy just isn’t there yet to earn the trust and confidence needed to make the loan process safe and consistent for the consumer without a human in the loop.
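The “human in the loop” idea can be sketched as a simple routing rule: automate only when the model’s confidence clears a threshold and the file is complete; escalate everything else to a person. The threshold value and labels below are illustrative assumptions, not industry standards.

```python
# Toy sketch of human-in-the-loop decision routing for a loan file.
# The 0.9 threshold and the labels are illustrative assumptions only.
def route_decision(model_confidence, docs_complete, threshold=0.9):
    """Return who should make the call for this loan file."""
    if docs_complete and model_confidence >= threshold:
        return "automated"       # clears the bar under this sketch's assumptions
    return "human_underwriter"   # anything uncertain or incomplete goes to a person
```

The design choice worth noting is that the default path is the human one: automation has to earn its way in file by file, which is exactly the trust-and-confidence posture the paragraph above describes.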

That being said, the heightened concerns about deploying GenAI for automated decisioning should not cause us to take a step back from the existing use of machine learning and deep learning techniques for prediction. We need to be specific in our language and in our descriptions of AI disciplines and their applications. AI is not out to replace us; it doesn’t care. But how we use it could be the difference in the level of success for multiple professions.

A Balanced Perspective

In conclusion, the intersection of anthropomorphism and AI presents both opportunities and challenges in various domains, including mortgage lending. While anthropomorphism enriches our emotional connection and fosters creativity, it can also lead to misconceptions and unfounded fears, particularly when applied to artificial intelligence.

AI, devoid of intent or emotions, is merely a tool shaped by human decisions and actions. Understanding its capabilities and limitations is crucial in mitigating fear and harnessing its potential effectively. By delineating clear roles for AI, such as prediction, communication, and decision-making, stakeholders can navigate the evolving landscape of AI in mortgage lending with prudence and foresight.

The strides made in leveraging machine learning for predictive analytics and deep learning for image recognition signify significant progress in automating underwriting tasks. However, the journey toward leveraging generative AI for communication and language analysis is still in its infancy. While promising, the accuracy and reliability of these AI applications remain areas of concern, necessitating continued refinement and scrutiny to ensure the safety and consistency of the loan process for consumers.

As we move forward, it is imperative to approach AI adoption with a balanced perspective, emphasizing rigorous training, testing, accountability, and continuous improvement. By doing so, we can dial down the fear knob and pave the way for responsible and beneficial integration of AI into mortgage lending practices, ultimately enhancing efficiency while preserving trust and confidence in the lending process.

This column does not necessarily reflect the opinion of HousingWire’s editorial department and its owners.

To contact the editor responsible for this piece: [email protected]
