Eh, it used to be the case that some models generated images with watermarks, since those pics were part of their training data. I remember seeing some hard-to-read Alamy and Shutterstock watermarks on AI posts.
Sometimes, yes. Those old models saw that exact text a LOT, so yeah. Just like how today, Stable Diffusion XL can somewhat reliably generate glasses of Nutella.