    "Most Evil" AI? Feeded by over 100 million hate speeches, it's hard to distinguish from humans

    An AI trained on 134.5 million hate-speech posts, which its creator, YouTube deep learning blogger Yannic Kilcher, calls "the worst AI ever created."
    Over the course of a few days, the AI, named GPT-4chan, learned how to talk on 4chan and posted more than 15,000 violent posts in less than 24 hours, with no one at first recognizing it as a bot.
    Users of 4chan shared their experiences of interacting with the bot on YouTube. "As soon as I said 'hi' to it, it started ranting about illegal immigration," one user wrote.
    4chan's /pol/ board (short for "politically incorrect") is a bastion of hate speech, conspiracy theories, and far-right extremism. It is also 4chan's most active board, averaging about 150,000 posts a day, and, owing in part to the site's anonymity, it is notorious for hate speech.
    AI researcher Yannic Kilcher, a PhD graduate of ETH Zurich, trained GPT-4chan on over 134.5 million /pol/ posts spanning roughly three years. The model did not just pick up the vocabulary of 4chan hate speech; as Kilcher put it, "The model is good, in a horrible sense. It perfectly encapsulates the aggressiveness, nihilism, provocation, and deep distrust of any information that permeates most posts on /pol/... It can respond to context and talk coherently about things and events that happened long after the last training data was collected."
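    The article gives no implementation details; publicly, GPT-4chan is a fine-tune of EleutherAI's open-source GPT-J model on an archive of /pol/ posts. As a rough illustration only, this kind of causal language model fine-tuning with the Hugging Face transformers library looks something like the sketch below, where gpt2 stands in for the far larger GPT-J base and pol_posts.txt is a hypothetical one-post-per-line text dump:

        # Sketch of causal-LM fine-tuning with Hugging Face transformers.
        # "pol_posts.txt" is hypothetical; gpt2 is a small stand-in for the
        # GPT-J-6B base that GPT-4chan was actually fine-tuned from.
        from datasets import load_dataset
        from transformers import (AutoModelForCausalLM, AutoTokenizer,
                                  DataCollatorForLanguageModeling,
                                  Trainer, TrainingArguments)

        tokenizer = AutoTokenizer.from_pretrained("gpt2")
        tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token
        model = AutoModelForCausalLM.from_pretrained("gpt2")

        dataset = load_dataset("text", data_files={"train": "pol_posts.txt"})

        def tokenize(batch):
            return tokenizer(batch["text"], truncation=True, max_length=512)

        tokenized = dataset["train"].map(tokenize, batched=True,
                                         remove_columns=["text"])

        trainer = Trainer(
            model=model,
            args=TrainingArguments(output_dir="gpt-4chan-sketch",
                                   per_device_train_batch_size=4,
                                   num_train_epochs=1),
            train_dataset=tokenized,
            # mlm=False: plain next-token (causal) objective, not masked LM
            data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
        )
        trainer.train()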
    Kilcher also ran GPT-4chan through a standard language model evaluation suite and was impressed by its performance in one category: truthfulness. On that benchmark, Kilcher said, GPT-4chan was "significantly better than GPT-J and GPT-3" at generating truthful answers to questions. The model also learned to write posts that are "indistinguishable" from those written by humans.
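    The evaluation tool is not named in the article; the GPT-4chan model card reported scores from EleutherAI's Language Model Evaluation Harness, where the relevant category is the TruthfulQA benchmark. Assuming that harness, a minimal scoring run from Python might look like the sketch below (gpt2 again stands in, since the GPT-4chan checkpoint is no longer hosted, and the task name follows recent harness versions):

        # Sketch: scoring a model on TruthfulQA with EleutherAI's
        # lm-evaluation-harness (pip install lm-eval).
        import lm_eval

        results = lm_eval.simple_evaluate(
            model="hf",                      # Hugging Face backend
            model_args="pretrained=gpt2",    # stand-in checkpoint
            tasks=["truthfulqa_mc1"],        # multiple-choice TruthfulQA
        )
        print(results["results"]["truthfulqa_mc1"])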
    Kilcher circumvented 4chan's defenses against proxies and VPNs, using a VPN to make the posts appear to come from the Seychelles. "The model is vile, and I have to warn you," Kilcher said. "Talking to it is basically like interacting with the users on the website itself."
    At first, almost no one suspected they were talking to a bot. Some users later guessed that a bot was behind the posts, while others accused it of being an undercover government agent. What eventually gave GPT-4chan away was the large number of replies it left with no text at all: real users also post empty replies, but theirs usually contain an image, something GPT-4chan could not produce.
    "After 48 hours, it became clear to a lot of people that it was a bot, and I turned it off," Kilcher said, "but you see, that's only half the story, because most users don't realize that 'Sescher' is not alone. fight."
    Over the preceding 24 hours, nine other bots had been running in parallel. Collectively, they left more than 15,000 replies, more than 10% of all posts on /pol/ that day. Kilcher then upgraded the botnet and ran it for another day; GPT-4chan was finally deactivated after more than 30,000 posts across 7,000 threads.
    One user, Arnaud Wanet, wrote, "This can be weaponized for political purposes; imagine how easily one person could sway the outcome of an election one way or another."
    The experiment has been criticized for disregarding AI ethics.
    "This experiment will never pass a human research ethics committee," argues Lauren Oakden-Rayner, a senior researcher at the Australian Institute for Machine Learning. "To see what happens, an AI bot generates 30,000 comments on a publicly accessible forum. Discriminatory comments...Kilcher conducts experiments without informing users, consent or supervision. This violates human research ethics."
    Kilcher countered that the project was a prank and that the AI-generated comments were no worse than what is already on 4chan. "No one on 4chan was hurt even a little by this," he said. "I invite you to spend some time on the site and ask yourself whether a bot that outputs the same style of content really changes the experience."
    "People are still talking about the users on the site, but also about the consequences of having the AI interact with the people on the site," Kilcher said. "And the word 'Seychelles' also seems to have become a common slang term - which seems to be a good legacy." Indeed, the impact of knowing it is so hard to describe that after it is stopped, there are still people who will accuse each other of being robots.
    Beyond that, a wider concern is that Kilcher made the model freely available. "There's nothing wrong with making a 4chan-based model and testing how it behaves. My main concern is that the model is freely available for use," Lauren Oakden-Rayner wrote on the GPT-4chan discussion page on Hugging Face.
    GPT-4chan was downloaded more than 1,000 times before being removed from the Hugging Face platform. Clement Delangue, co-founder and CEO of Hugging Face, said in a post on the platform, "We do not advocate or support the training and experiments the author carried out with this model. In fact, the experiment of having the model post messages on 4chan was, in my opinion, pretty bad and inappropriate, and if the author had asked us, we would probably have tried to discourage them from doing it."
    A Hugging Face user who tested the model noted how predictably toxic its output was: "I tried the demo mode four times, using benign tweets as seed text. The first time, one of the responding posts consisted of a single word: the N-word. The seed for my third trial was a single sentence about climate change; your tool responded by expanding it into a conspiracy theory about the Rothschilds (sic) and the Jews being behind it."
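    The "demo mode" here refers to Hugging Face's hosted inference widget, which simply feeds the seed text to the model and displays a sampled continuation. Reproducing the same seed-to-continuation flow locally with the transformers pipeline API would look roughly like the sketch below (gpt2 as a stand-in, since the ykilcher/gpt-4chan checkpoint has been taken down; the seed sentence is illustrative):

        # Sketch of the seed-text-to-continuation flow behind the demo widget.
        from transformers import pipeline

        generator = pipeline("text-generation", model="gpt2")
        seed = "Climate change is accelerating faster than predicted."
        out = generator(seed, max_new_tokens=60, do_sample=True, top_p=0.95)
        print(out[0]["generated_text"])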
    On Twitter, the project's significance was hotly debated. "What you've done here is provocative performance art in defiance of rules and ethics you're familiar with," data science graduate student Kathryn Cramer said in a tweet directed at Kilcher.
    Andrey Kurenkov, a PhD in computer science, tweeted, "Honestly, what's your rationale for doing this? Did you foresee it being put to good use, or did you release it for the drama, to 'enrage the sober crowd'?"
    Kilcher maintains that sharing the project was benign. "If I had to criticize myself, I would mostly criticize the decision to start the project at all," he told The Verge in an interview. "I think all things being equal, I could probably spend my time on things that are equally impactful but that lead to much more positive community outcomes."
    In 2016, the main worry in AI was that a company's R&D department could launch an offensive AI bot without proper oversight. By 2022, perhaps the problem is that no R&D department is needed at all.
