• drkt@scribe.disroot.org · 2 days ago

    No, they’re using a corporate model that was trained unethically. I don’t see what your point is, though. That’s not inherent to how LLMs or other AIs work, that’s just corporations being leeches. In other words, business as usual in capitalist society.

    • mke@programming.dev · 2 days ago

      You’re right about it not being inherent to the tech, and I sincerely apologize if I insist too much despite that. This will be my last reply to you. I hope I gave you something constructive to think about rather than just noise.

      The issue, and my point, is that you’re defending a technicality that doesn’t matter in real-world usage. Almost no one uses non-corporate, ethical AI. Most organizations working with it aren’t starting from scratch, because doing so is disadvantageous or outright unfeasible resource-wise. Instead, they build on pre-existing corporate models.

      Edd may not be technically right, but he is practically right. The people he’s referring to are extremely unlikely to be using or creating completely ethical datasets/AI.

      • drkt@scribe.disroot.org · 1 day ago

        The issue, and my point, is that you’re defending a technicality that doesn’t matter in real-world usage.

        You’re right and I need to stop doing it. That’s a good reminder to go and enjoy the fresh spring air 😄