• petrol_sniff_king@lemmy.blahaj.zone · 6 days ago

    Imagine, however, that a machine objectively makes better decisions than any person.

    You can’t know if a decision is good or bad without a person to evaluate it. The situation you’re describing isn’t possible.

    the people who deploy a machine […] should be accountable for those actions.

    How is this meaningfully different from just having them make the decisions in the first place? Are they too stupid?

    • psud@aussie.zone · edited 2 days ago

      You can evaluate effectiveness by company profits. One program might manage a business well enough to steadily increase profit; another might make a sharp profit before profits crash (maybe by firing important workers). Investors will demand the best CEObots.

      Edit to add: of course any CEObot will be more sociopathic than any human CEO. They won’t care about literally anything unless a score is attached to it.
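
      A minimal sketch of what that looks like, with made-up metric names and weights (not a real system, just an illustration): anything that isn’t in the weight table simply doesn’t exist as far as the bot’s score is concerned.

          # Toy illustration: a "CEObot" objective that only sees scored metrics.
          # All metric names and weights here are hypothetical.
          company_state = {
              "quarterly_profit": 1_200_000,  # scored, so the bot optimizes it
              "employee_wellbeing": 0.2,      # unscored, so the bot ignores it
              "long_term_research": 0.1,      # unscored, also ignored
          }

          score_weights = {"quarterly_profit": 1.0}  # only profit carries a weight

          def ceobot_score(state, weights):
              # Metrics without a weight contribute nothing to the score,
              # so the optimizer has no reason to preserve them.
              return sum(weights.get(metric, 0.0) * value
                         for metric, value in state.items())

          print(ceobot_score(company_state, score_weights))  # only profit counts

      Firing the important workers bumps quarterly_profit now and wrecks the unscored metrics, and the score never notices until profit itself falls.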

      • petrol_sniff_king@lemmy.blahaj.zone · 23 hours ago

        This… requires a person to look at the profit numbers. To care about them, even. I’m not really sure what you’re getting at.

        I think you’re saying that computers can be very good at chess, but we are the ones who decide what the rules of chess are.