Sometimes, sure, but an LLM realistically has no decision-making ability - it isn't weighing strategies or ethics, or anything else for that matter; it's just pulling together an answer based on what people have said in similar contexts in its training data.
I wouldn't want a parrot deciding who's shooting whom, never mind nukes - though to be fair, no single person or thing should be deciding either of those anyway.