

go in expecting a deconstruction of red scare propaganda from a modern lens
leave with more anticommunist slop


to be honest i was under the impression that he was based entirely because of his name and nothing else


From my understanding, misalignment is just a shorthand for something going wrong between what action is intended and what action is taken, and that seems like a perfectly serviceable word to have. I don’t think “poorly trained” captures stuff like goal mis-specification well (i.e., asking it to clean my house and it washes my laptop and folds my dishes), and it feels a bit too broad. Misalignment refers specifically to when the AI seems to be “trying” to do something it’s just not supposed to be doing, not just to it doing something badly.
I’m not familiar with the rationalist movement; that’s the whole “long term utilitarianism” philosophy, right? I feel that misalignment is a neutral enough term and don’t really think it makes sense to try to avoid using it, but I’m not super involved in the AI sphere.


you have to touch it with your body


i’d like to clarify, the stick can’t be broken, but that’s not because it’s especially durable. it’s a regular stick, but any time it’s in a situation where it would be broken, something conveniently happens to prevent it.


but what if i want to wait tables at femboy hooters just for fun without the profit motive can i do that
I very much feel the same way. it’s easier to give gifts than do chores.
I was thinking I had more to say but I don’t actually, I think it really just comes down to that for me.