I did that once on an AI Dungeon story about a holodeck technician, and the GPT-3-based model correctly inferred that the next thing the character should do was shoot themselves in the face with a phaser on stun, to test whether disabling the safeties actually worked.