• Hegar@fedia.io

    Increasing wealth has only ever been observed to fuel greater inequality.

    I don’t see any evidence that the value created by increasing automation will be distributed any more evenly.

    We produce enough food for everyone and still let people starve - equal access to AI is even harder to justify than equal access to food.

    • Pennomi@lemmy.world

      I’m not so sure about that. When we compare medieval wealth inequality to now, it was worse back then. Ew, a link to Reddit, but it’s got good info.

      Not saying we don’t need to fix things… we need to destroy even the concept of billionaires. While things are bad, and trending worse, they’re not yet “literally eat the rich” bad.

      • Hegar@fedia.io

        I’m not sure that link does have good info.

        That’s a zero-point comment on AskHistorians from 11 years ago, with no sources listed, no details, and little explanation. The follow-up comments have a little more info, but only from 1870, and even then they’re only talking about land, not wealth. The only source linked is a NY Review of Books article that 404s.

        I think it’s fairly safe to assume that wealth inequality was lower before industrialization. Industrialization really supercharges the power of capital, encouraging and rewarding larger and larger accumulations of it. Before that, it’s also much harder to get reliable data.

        Aristotle in the Politics mentions a plan to cap wealth inequality at 1:5 - once you had more than five times what the poorest citizen had, your wealth would be redistributed. He thinks it too radical, but could you imagine anyone even talking about capping CEO pay at five times the janitor’s? That’s unthinkable to us.

    • JackGreenEarth@lemm.ee

      You would just have to let a superintelligent (aligned) AI robot loose and prompt it to produce enough food for everyone. It wouldn’t even take any ongoing maintenance once the robot had been created. If benefiting everyone else has no negative consequences for the creators, and there are any empathetic people on the board of creators, I don’t see why it wouldn’t be programmed to benefit everyone.

      • Pennomi@lemmy.world

        As long as it doesn’t generate any negative externalities, sure. That’s a huge alignment problem though.

        • JackGreenEarth@lemm.ee

          True, and I have my doubts about the alignment problem ever being solved. But that’s a technical problem - a separate conversation from whether attempting it is worthwhile in the first place.