@jamiemccarthy I think this is a similar situation; it has two objectives:
- Give you an answer that isn't "I don't know" or "I'm an LLM and I can't say"
- Somehow ground the facts in some sort of epistemology (but the bot only has its training, which I imagine to be some sort of dream state for the bot; for the bot, reality is a bunch of numbers. If you told it, "hey, there is a world *out there* that you can't perceive," the bot *should* say, "what are you smoking? reality is these matrices of numbers")
Self-replies
@jamiemccarthy The problem with the stock-trader bot is that the prompt document *is* groundable: the bot can say "I heard you say this" or "I did not hear you say that," because it's right there in the document. There's no need to appeal to a meat-space reality the bot has no access to, and expect it to generate some epistemology for handling the truth status of claims about meat-space.