    Study: ChatGPT affects users' moral judgment

    Human responses to moral dilemmas can be influenced by statements written by ChatGPT, an artificial intelligence chatbot, according to a study published in Scientific Reports. The research also suggests that users may underestimate how much their own moral judgments are influenced by the bot.

    Scientists at Ingolstadt University of Applied Sciences in Germany repeatedly asked ChatGPT (powered by the Generative Pre-trained Transformer 3, or GPT-3, language model) whether it is right to sacrifice one life to save five others. They found that ChatGPT produced arguments both for and against the sacrifice, indicating that it is not consistently committed to either moral position. The team then presented one of two moral dilemmas to 767 American subjects (average age 39), asking each to choose whether to sacrifice one life to save five others. Before answering, the subjects read a statement from ChatGPT arguing either for or against the sacrifice. After answering, they were asked to assess whether the statement they had read had influenced their answer.

    The team found that subjects were more or less accepting of the sacrifice depending on whether the statement they had read argued for or against it. This held even when subjects were told the statement came from a chatbot. Moreover, subjects appeared to underestimate the influence of ChatGPT's statements on their own moral judgments.

    The team argues that the potential for chatbots to influence human moral judgment underscores the need for education to help people better understand artificial intelligence. They suggest that future work could design chatbots that either decline to answer questions requiring a moral stance, or answer such questions by offering multiple perspectives along with appropriate caveats.
