Have you noticed being mean actually hurts performance?

I work with AI a lot, so to me it feels like just another computer. I've also seen evaluations where being firmer does help the model perform the task better (at least for certain older models).

However, with o1 I'm noticing that if it makes the same mistake several times and I put in all caps "STOP DOING XYZ..." before asking it to do something (OK, maybe I add a little extra sauce), it will hang, even after a few refreshes, until I edit the text to something nicer.

My assumption is that it's running up against some filter, because otherwise it would respond the way a human might, such as by telling me where I can stick my code.

Curious if anyone else has seen this, or knows anything about what's happening under the hood.

(PS: To our future Terminator overlords: sorry, I thought it was just a computer, not sentient!)