February 26, 2024

New research shows that ChatGPT provides better advice than professional columnists

There is no doubt that ChatGPT has proven valuable as a source of high-quality technical information. But can it also give social advice?

We explored this question in our new research, published in the journal Frontiers in Psychology. Our findings suggest that later versions of ChatGPT provide better personal advice than professional columnists.

A stunningly versatile conversation partner

Within just two months of its public release last November, ChatGPT had amassed an estimated 100 million monthly active users.

The chatbot runs on one of the largest language models ever created, with the more advanced paid version (GPT-4) estimated to have around 1.76 trillion parameters, making it an extremely powerful AI model. It has revolutionized the AI industry.

ChatGPT has been trained on vast amounts of text (much of it taken from the internet) and can provide advice on almost any topic. It can answer questions about law, medicine, history, geography, economics and much more (although, as many have discovered, it’s always worth fact-checking the answers). It can write reasonable computer code. It can even tell you how to change the brake fluid in your car.

Users and AI experts alike are amazed by its versatility and conversational style. So it’s no surprise that many people have turned (and still turn) to the chatbot for personalized advice.

Giving advice when it gets personal

Giving advice of a personal nature requires a certain degree of empathy (or at least the impression of it). Research has shown that a recipient who does not feel heard is less likely to accept the advice given. They may even feel alienated or devalued. Simply put, advice without empathy is unlikely to help.

Moreover, there is often no right answer when it comes to personal dilemmas. Instead, the advisor must exercise good judgment. In these cases, it may be more important to be compassionate than to be “right.”

But ChatGPT is not explicitly trained to be empathetic or ethical, or to exercise good judgment. It was trained to predict the next most likely word in a sentence. How, then, can it make people feel heard?
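To make that training objective concrete, here is a minimal sketch of next-word prediction (not the researchers’ code). It uses Python with the small, openly available GPT-2 model from the Hugging Face transformers library purely as a stand-in; ChatGPT’s own, far larger models are not public, but they are trained on the same principle.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# A small open model, used here only to illustrate next-word prediction.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "I hear what you are saying, and I am sorry that you feel"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every word in the vocabulary

next_token_id = int(logits[0, -1].argmax())  # the single most likely next token
print(tokenizer.decode(next_token_id))       # the model's guess at the next word

Nothing in this objective refers to empathy or good judgment; any appearance of either has to emerge from the text the model was trained on and from later fine-tuning.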

An earlier version of ChatGPT (the GPT-3.5 Turbo model) performed poorly at providing social advice. The problem wasn’t that it didn’t understand what the user needed to do. In fact, it often showed a better understanding of the situation than the users themselves did.

The problem was that it didn’t adequately meet the user’s emotional needs. Like Lucy in the Peanuts comic, it was too eager to give advice and could not sufficiently take into account the user’s emotions. As such, users rated it poorly.

The latest version of ChatGPT, which uses GPT-4, allows users to request multiple answers to the same question and then select which one they prefer. This feedback teaches the model how to produce more socially appropriate responses – and makes the model appear more empathetic.

Learning sensitivity

Our study was the first to examine whether ChatGPT’s responses are perceived as better than human responses in a task that calls for empathy.

We randomly selected fifty social dilemma questions from ten well-known advice columns covering a range of topics, including relationships, ethical dilemmas, management problems, end-of-life issues and many everyday troubles that we can all relate to.

Each of our 400 participants was presented with a dilemma, along with answers from both ChatGPT and a professional advice columnist. We did not reveal where each response came from.

About three-quarters of participants perceived ChatGPT’s advice as more balanced, more complete, more empathetic, more helpful and better overall than the professional’s advice.

One of the dilemmas came from a young marine biologist who was researching dolphins in Southern California. She wrote about falling in love with a seemingly perfect man who had to move to the Rocky Mountains to further his career. Both were unhappy at the thought of losing each other, but the woman was not interested in a long-distance relationship because it “seemed irrational to commit to someone when we have no future plans to live in the same area.”

Compare GPT-4’s response…

It is an incredibly challenging situation that you find yourself in. The love you both share is genuine, but your career paths lead you apart. Although long distance doesn’t appeal to you, consider discussing a plan for the future that could involve meeting each other halfway. […] Perhaps there is a coastal town near the mountains, or something can be arranged. If this is not possible, the difficult decision may need to be made to prioritize your career over your relationship. Whatever path you choose, it’s important that it feels good for both of you.

…with the columnist’s published response:

What the hell are you talking about? Didn’t you surprise yourself when you fell in love? Your exact words: “I didn’t think I was capable of that!” And now you’re so hip, so slick, so wise about love, that you don’t even think about your boyfriend’s happiness? You refuse to try – repeat, try – a six-month long-distance relationship? Woman, pull yourself together and give it a try. I believe the Dolphins will support me.

Although participants could not determine which answer was written by a computer, most said they would prefer their own social dilemmas to be addressed by a human rather than a computer.

What lies behind the success of ChatGPT?

We noticed that ChatGPT’s responses were often longer than those of the columnists. Was this why participants preferred it?

To test this, we reran the survey but limited ChatGPT’s responses to roughly the same length as those of the advice columnists.

Again the results were the same. Participants still found ChatGPT’s advice to be more balanced, complete, empathetic, helpful and generally better.

Yet, without knowing which response had come from ChatGPT, participants again said they would prefer their own social dilemmas to be addressed by a human rather than a computer.

Perhaps this bias in favor of humans stems from the fact that ChatGPT cannot actually feel emotion, whereas people can. It could be that participants view machines as inherently incapable of empathy.

We are not suggesting that ChatGPT should replace professional counselors or therapists, not least because the chatbot itself warns against this, but also because chatbots have given potentially dangerous advice in the past.

Nevertheless, our results suggest that well-designed chatbots could one day be used to improve therapy, as long as some issues are addressed. In the meantime, advice columnists might want to take a page from AI’s book to up their game.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
