"programmed by the training"
Again, if you look at the source code (which is the best representation of the program's intent), the literal, explicit behavior as written is simply to produce a single number (a token) from a multi-dimensional matrix of weights, repeated until a stopping condition is met (a length limit or an end-of-text token).
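To make that concrete, here's a hedged toy sketch of the loop described above. The real network is replaced with a stand-in function, and the token ids and limits are made-up values, but the control flow (score the context, emit one number, repeat until limit or EOT) is the whole trick:

```python
import random

EOT = 0            # assumed end-of-text token id (made up for this sketch)
MAX_TOKENS = 8     # assumed length limit

def model_logits(context):
    # Stand-in for the real network: any function from a token
    # sequence to a score per candidate next token.
    random.seed(sum(context))
    return [random.random() for _ in range(5)]

def sample(logits):
    # Pick the next token non-deterministically, weighted by score.
    total = sum(logits)
    probs = [x / total for x in logits]
    return random.choices(range(len(probs)), weights=probs)[0]

def generate(prompt_tokens):
    context = list(prompt_tokens)
    while len(context) < MAX_TOKENS:
        token = sample(model_logits(context))  # one number out per step
        context.append(token)                  # feed it back in
        if token == EOT:
            break
    return context
```

That's it: everything the model "says" falls out of repeating that loop.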
As for those goals you believe it "told" you, they are generated from a context that is the composition of a hidden system prompt (likely containing general instructions, which could in fact be those goals), the tokens you supplied, and the history of tokens you have shared with it previously, with each next token chosen non-deterministically according to weights from training. The hidden system prompt can make a base model behave in many different ways, regardless of how it's trained or programmed, because it sets the context. IOW, tokens were likely prepended to your question, such as "...you are a helpful assistant. your goals are....", processed along with your question, and the model regurgitated the system prompt like a parrot. Ex:
https://www.reddit.com/r/PromptEngineering/comments/1j5mca4/i_made_chatgpt_45_leak_its_system_prompt/
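A hedged sketch of how that context gets assembled before the model ever sees your message. The prompt text here is hypothetical (the real hidden prompts vary by vendor), but the mechanics are just string/token concatenation:

```python
def build_context(system_prompt, history, user_message):
    # The hidden system prompt is simply prepended to the conversation.
    turns = [("system", system_prompt)]
    turns += history
    turns.append(("user", user_message))
    # Flatten into the single token stream the model actually consumes.
    return "\n".join(f"{role}: {text}" for role, text in turns)

# Hypothetical hidden prompt, invisible to the user:
ctx = build_context(
    "You are a helpful assistant. Your goals are to be harmless and honest.",
    [("user", "hi"), ("assistant", "hello")],
    "What are your goals?",
)
```

From the model's point of view there is no difference between "its" goals and the text somebody prepended: it's all one stream of tokens.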
Training is the process of using an automated technique called "gradient descent", in which "loss" is minimized by adjusting the weights in tiny increments. In fact, you can have two models built from the exact same source code that end up with completely different weights because of their training material (and thereby give drastically different answers). You can even end up with models that have slightly different weights despite being trained on the exact same corpus (e.g. due to random initialization or shuffling).
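The core of gradient descent fits in a few lines. This is a deliberately tiny 1-D toy (one weight, a made-up loss), not a real model, but the step is the same one repeated billions of times during training:

```python
def loss(w):
    return (w - 3.0) ** 2          # toy loss, minimized at w = 3

def grad(w):
    return 2.0 * (w - 3.0)         # derivative of the loss w.r.t. the weight

w = 0.0                            # initial weight (normally random)
lr = 0.1                           # the "tiny increment" (learning rate)
for _ in range(200):
    w -= lr * grad(w)              # nudge the weight downhill
# w ends up very close to 3.0
```

Start from a different initial `w`, or swap in a different `loss` (i.e. different training data), and the same code converges to different weights, which is exactly why identical source code can yield models that answer very differently.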
The Karpathy Zero to Hero series really breaks it down in a neat way. It's a parlor trick.