The right chatbot for being wrong

Does non-humanity in technology help us be more human?


The doom slinging about AI (that's a new term I apparently made up: doom slinging) has felt relentless and exhausting to me. But I saw a really positive development recently and wanted to share a couple of thoughts about it.

About two weeks ago, or two weeks as of me recording this segment, the journal Science published a study, "Durably reducing conspiracy beliefs through dialogues with AI." The researchers had over 2,000 people who believed in various conspiracy theories talk to a chatbot for an average of around 8 minutes, to see if a chatbot could maybe get through to them and challenge their conspiratorial beliefs.

The tl;dr is that the researchers saw an average 20% reduction in participants' belief in their conspiracy theory, about 1 in 4 people fully stopped believing in their chosen conspiracy theory, AND these changes lasted for at least two months afterwards. All from 8 minutes of chatbot conversation. How? Why?

There's definitely more to figure out about the qualitative side of things, but from what I read, it sounded to me like three factors played a major role:

First, the chatbot listened and was able to take it ALL in: all the different pieces of information that had added up, in each person's unique brain, to a belief in a conspiracy. And I just want to pause here and say that we all have misinformed beliefs sometimes; it is impossible not to in the chaotic storm of information we all weather every day. So I don't want to make this us vs. them with conspiracy believers. These study participants are us, and they had a bunch of "facts" that were wrong. So the chatbot took it all in, and THEN, second factor:

The chatbot was able to provide facts in response to every single piece of inaccurate information, unlike a person trying to intervene by providing facts. Now, I don't know if you've ever tried to talk someone out of a conspiracy belief, but I have, and I can tell you… even with all the compassion I could possibly muster, I got frustrated after maybe the third wrong thing I had to address. The chatbot in this study didn't have an emotional reaction; it just provided more information.

Which brings us to factor three, which is the most interesting to me: a chatbot is NOT a person. It's not a human being who is going to judge you, who will forever remember that time you were wrong, or who will even witness that you changed your mind. It's a piece of technology that provided some facts, and it gave the participants in this study the space and the grace to take in those facts and decide that something else might be true after all.

So my thought here is this: despite all the doom slinging about how AI is going to take over everything, I think there is an untapped strength in letting technology be technology and pairing it with human intentions and human care for others. The medical field has been at the forefront of this approach, and I think every other field could be taking notes. Thanks for listening.

Sources: https://www.science.org/content/article/ai-chatbot-shows-promise-talking-people-out-conspiracy-theories and https://mitsloan.mit.edu/press/can-ai-talk-us-out-conspiracy-theories