• LuxSpark@lemmy.cafe · 2 months ago

    Does AI have views? Anyway, I would throw up this message if questioned about these subjects because you don’t want to pollute your AI with bigoted bullshit.

    • bitcrafter@programming.dev · 2 months ago

      Supposedly the following is a real problem:

      For example, one major AI model changed the race or sex of historical figures — including the Pope, the Founding Fathers, and Vikings — when prompted for images because it was trained to prioritize DEI requirements at the cost of accuracy.

      Given the administration’s immense reputation for veracity—the biggest we have ever seen—I see no reason to doubt this.

      • Rhaedas@fedia.io · 2 months ago

        The GOP is all about the free market, right? So let the free market decide who wants a model that gives out information weighted in one direction or another. If you want accuracy, you aren’t going to buy into a model that skews things in a different direction. Oh right, they only talk about the free market when it works in their best interests…

        • bitcrafter@programming.dev · 2 months ago

          This executive order only sets policy for AI use by the federal government, not for how AIs must behave for the entire country.

          • floofloof@lemmy.ca · 2 months ago (edited)

            Still, it takes a lot of work and resources to design and train a model, so American AI companies may self-censor everywhere so that they don’t have to do the work twice: once for the US Government and once for general use.

            Hopefully they’ll just wrap uncensored models in additional filters when they’re serving the US Government, or add an instruction to answer as a Nazi would, and the rest of us can avoid those Nazified versions. But I don’t trust the AI techbros.

            • bitcrafter@programming.dev · 2 months ago

              I agree that’s a legitimate concern, though I would hope that even in that case there would be less popular alternative models people could use, just as those of us who want to stay away from the big social networks can use Lemmy. That wouldn’t save us from AI chatbots subtly reprogramming the population the way Facebook did with its algorithm, though…