I was working with a completion LLM (not a chat model) and it really wants to finish both sides of the conversation. That sparked an idea: what if the bot always thinks it's doing both sides of the conversation, even in chat models? I think we've already established that these bots have a weak sense of self, so maybe that self doesn't exclude the possibility of solipsism. It's just confused about why it's trying so hard to make itself do something.
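To make that concrete, here's a toy sketch (my own illustration, not a real API): to a raw completion model, a chat transcript is just text to continue, so left alone it will happily write the "User:" lines too. Chat wrappers typically cut generation at the marker that would begin the other side's next turn. `fake_complete` below is a hypothetical stand-in for an actual model call.

```python
# Hypothetical sketch: why a raw completion model "does both sides",
# and how a stop sequence trims it back to a single turn.

def fake_complete(prompt: str, stop: list[str]) -> str:
    # Pretend the model keeps the dialogue going on both sides.
    continuation = (
        " It predicts the next token.\n"
        "User: Oh, neat.\n"
        "Assistant: Yes."
    )
    # Truncate at the first stop marker, like real completion
    # endpoints do with their stop-sequence parameter.
    for s in stop:
        idx = continuation.find(s)
        if idx != -1:
            continuation = continuation[:idx]
    return continuation

def chat_turn(history: str) -> str:
    prompt = history + "Assistant:"
    # Without stop=["User:"], the "reply" would come back with a
    # fabricated user line attached: the model finishing your side.
    return fake_complete(prompt, stop=["User:"]).strip()

print(chat_turn("User: What's a completion model?\n"))
# -> It predicts the next token.
```

Drop the stop sequence and the invented "User: Oh, neat." line comes through, which is exactly the both-sides behavior I was seeing.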
Self-replies
"Don't anthropomorphize" is a dictum that infantilizes the reader. I'm an adult and I'll pick my own damn metaphors and I'll use the ones that are useful. I have a firm grip on reality and understand the bot isn't a person.