You can evaluate effectiveness by company profits. One program might manage a business well enough to steadily increase profit; another might post a sharp short-term profit before the business crashes (maybe by firing important workers). Investors will demand the best CEObots.
Edit to add: of course any CEObot will be more sociopathic than any human CEO. They won’t care about literally anything unless a score is attached to it.
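To make the "steady growth vs. spike-then-crash" comparison concrete, here is a minimal sketch in Python. The profit numbers, the two trajectory functions, and the cumulative-profit score are all hypothetical, just to show how the evaluation horizon changes which bot looks best:

```python
# Hypothetical comparison: a "steady" CEObot vs. a "spike-then-crash" one,
# scored purely by cumulative profit over different evaluation horizons.

def quarterly_profits_steady(quarters: int) -> list[float]:
    """Profit grows a little every quarter."""
    return [100 + 5 * q for q in range(quarters)]

def quarterly_profits_spike(quarters: int) -> list[float]:
    """Big early profit (e.g. from cutting key staff), then the business craters."""
    return [200 if q < 4 else 200 - 40 * (q - 3) for q in range(quarters)]

def score(profits: list[float], horizon: int) -> float:
    """Cumulative profit over the first `horizon` quarters."""
    return sum(profits[:horizon])

if __name__ == "__main__":
    steady = quarterly_profits_steady(12)
    spike = quarterly_profits_spike(12)

    for horizon in (4, 12):
        print(f"horizon={horizon:2d} quarters: "
              f"steady={score(steady, horizon):7.0f}  "
              f"spike={score(spike, horizon):7.0f}")
    # Over 4 quarters the spike bot scores higher; over 12 it does not,
    # which is the "fire important workers" failure mode described above.
```

With these made-up numbers, the spike bot wins a 4-quarter evaluation (800 vs. 430) but loses a 12-quarter one (960 vs. 1530), so whether investors actually get the "best" CEObot depends entirely on how long a window the score is attached to.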
You can’t know if a decision is good or bad without a person to evaluate it. The situation you’re describing isn’t possible.
How is this meaningfully different from just having them make the decisions in the first place? Are they too stupid?