hexaglycogen [they/them, he/him]

  • 2 Posts
  • 8 Comments
Joined 2 years ago
Cake day: January 12th, 2024




  • From my understanding, misalignment is just shorthand for something going wrong between what action is intended and what action is taken, and that seems like a perfectly serviceable word to have. I don’t think “poorly trained” captures stuff like goal mis-specification (i.e., asking it to clean my house and it washes my laptop and folds my dishes), and it feels a bit too broad. Misalignment specifically refers to when the AI seems to be “trying” to do something it’s not supposed to be doing, not just doing something badly.

    I’m not familiar with the rationalist movement — that’s the whole “long-term utilitarianism” philosophy, right? I feel that misalignment is a neutral enough term and don’t really think it makes sense to try to avoid using it, but I’m not super involved in the AI sphere.