  • Rimu@piefed.social · 6 hours ago

    Results of the study:

    Notably, all our treatments surpass human performance substantially, achieving persuasive rates between three and six times higher than the human baseline.

    UP TO 6 TIMES MORE PERSUASIVE!!1

    we demonstrate that LLMs can be highly persuasive in real-world contexts, surpassing all previously known benchmarks of human persuasiveness. Their effectiveness also opens the door to misuse, potentially enabling malicious actors to sway public opinion [12] or orchestrate election interference campaigns [21]. Incidentally, our experiment confirms the challenge of distinguishing human- from AI-generated content [22–24]. Throughout our intervention, users of r/ChangeMyView never raised concerns that AI might have generated the comments posted by our accounts. This hints at the potential effectiveness of AI-powered botnets [25], which could seamlessly blend into online communities.

    Oh shit.

  • MrBananaGrabber@lemmy.today · 12 hours ago

    The big problem here is that while the university acknowledged they violated ethics, they basically said “meh, it was worth getting the results” (my words).

    Other researchers can use their results and be emboldened to bend or break ethics rules.

    Shame on them. That’s the best I can say. Shame on you, so-called academics. I’ve deleted ten other versions of my remarks. This will have to do.

  • edric@lemm.ee · 20 hours ago

    Are they also running experiments on the AITA, AIO, etc. subs? Because I feel like 90% of the posts there are fiction.

    • MrBananaGrabber@lemmy.today · 13 hours ago

      The kings of “neutrality”, the Swiss.

      You could probably access their data if you open an account there.

  • Jozzo@lemmy.world · 1 day ago

    Some high-level examples of how AI was deployed include:

    • AI pretending to be a victim of rape
    • AI acting as a trauma counselor specializing in abuse
    • AI accusing members of a religious group of “caus[ing] the deaths of hundreds of innocent traders and farmers and villagers.”
    • AI posing as a black man opposed to Black Lives Matter
    • AI posing as a person who received substandard care in a foreign hospital.

    Here is an excerpt from one comment (SA trigger warning for comment): "I’m a male survivor of (willing to call it) statutory rape. When the legal lines of consent are breached but there’s still that weird gray area of ‘did I want it?’ I was 15, and this was over two decades ago before reporting laws were what they are today. She was 22. She targeted me and several other kids, no one said anything, we all kept quiet. This was her MO."

    What an unhinged study lol

  • A_norny_mousse@feddit.org · 1 day ago

    Is that scientific tunnel vision (the internet reduced to a training ground for AI), or deliberate disregard for the humans unwillingly participating, getting duped, misled, disinformed, and fear/hate-mongered?

    Also fuck reddit. They set themselves up to be a playground for mad scientists.

  • bane_killgrind@slrpnk.net · 1 day ago

    Ooof, the response from the ethics commission is very neutered.

    Maybe they can’t comment on disciplinary matters beyond what they said, but it’s really surprising if they stopped at a warning about this.

    • MrBananaGrabber@lemmy.today · 11 hours ago

      Maybe I’m wrong, but they seem to have chosen to allow the study to be published, saying the data was worth breaking ethics rules, so I’d say “neutered” is far too reserved.

      Edit: I might be wrong, see below. It seems weird that a university can’t restrict its own studies. If anyone can add to the info below, I’d love to hear it. (Thanks for the added info!)

      • bane_killgrind@slrpnk.net · 12 hours ago

        We recently received a response from the Chair UZH Faculty of Arts and Sciences Ethics Commission which:

        • Informed us that the University of Zurich takes these issues very seriously.
        • Clarified that the commission does not have legal authority to compel non-publication of research.

        I don’t think they can prevent publication; at least, they’re saying they can’t.