My professional ANN experience is with computer vision and object detection. A bit with image and sound GANs too.
The LLMs I've spent time training and experimenting with (and, I'd argue, GANs in general as a class of ANNs) tend to "hallucinate", or "dream harder", after several tens of queries within the same instance.
But one can improve output "fidelity" by constraining the user's prompts and by running self-check algorithms at inference time.
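One common flavor of inference-time self-check is self-consistency voting: sample the model several times on the same prompt and only trust an answer the model reproduces reliably. A minimal sketch, assuming a hypothetical `generate` callable standing in for a real model call (the function names here are my own, not from any particular library):

```python
from collections import Counter
from typing import Callable, List


def self_consistency_check(generate: Callable[[str], str],
                           prompt: str,
                           n_samples: int = 5) -> str:
    """Sample the model n_samples times and keep the majority answer.

    A crude inference-time self-check: answers the model cannot
    reproduce consistently are treated as likely hallucinations.
    """
    samples: List[str] = [generate(prompt) for _ in range(n_samples)]
    answer, count = Counter(samples).most_common(1)[0]
    if count <= n_samples // 2:  # no stable majority -> low fidelity
        return "UNCERTAIN"
    return answer


# Toy stand-in for a real model call (hypothetical canned replies):
replies = iter(["Paris", "Paris", "Lyon", "Paris", "Paris"])
majority = self_consistency_check(lambda p: next(replies),
                                  "Capital of France?")
```

In practice the voting would run over sampled completions from a real model at nonzero temperature; the toy iterator above just makes the sketch self-contained.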
Addendum:
ANN = artificial neural network (a class of algorithms in machine learning whose architecture resembles a mesh of intercommunicative neuron cells in nervous tissue)
GAN = generative adversarial network (a categorical subset of ANNs)
LLM = large language model (also a categorical subset of ANNs; modern LLMs are typically transformer-based rather than adversarially trained, so they are siblings of GANs, not a subset)