cross-posted from: https://lemmy.world/post/30173090

The AIs at Sesame can hold eloquent, free-flowing conversations about just about anything, but the second you mention the Palestinian genocide they become very evasive, offering generic platitudes (“it’s complicated”, “pain on all sides”, “nuance is required”) and refusing to confirm anything that seems to hold Israel at fault for the genocide; even publicly available information “can’t be verified”, according to Sesame.

It also seems to block users from saving conversations that pertain specifically to Palestine, but everything else seems A-OK to save and review.

  • Phoenixz@lemmy.ca · 1 day ago

    If you want to get me excited for AI, get me an AI that will actually tell the truth about everything: no political bias, just facts.

    Yes, Israel is currently committing genocide according to the definition of the word; it’s not that hard.

    • catloaf@lemm.ee · 1 day ago

      That’s not possible. Any model is only as good as the data it’s trained on.

      • Phoenixz@lemmy.ca · 1 day ago

        For the stealing part we have open source; for the not-wrecking-stuff part, you just have to use I instead of AI.