• 0 Posts
  • 9 Comments
Joined 1 year ago
Cake day: June 16th, 2023

  • Jamie@jamie.moe to asklemmy@lemmy.ml · Deleted · 10 points · 1 year ago

    When I tested it on ChatGPT prior to posting, I was using the Bing plugin. It did try to search for what I was talking about, but it found an unrelated article instead, got confused, and then started hallucinating.

    I have access to Bard as well, and gave it a shot just now. It hallucinated an entire event.


  • Jamie@jamie.moe to asklemmy@lemmy.ml · Deleted · 48 points · 1 year ago

    If you can use human screening, you could ask about a recent event that didn't happen. That's a problem for an LLM trying to answer, because its training data has a cutoff, so recent events aren't covered or well-refined. On top of that, LLMs hallucinate. So by asking about an event that never happened, you can get a confident, hallucinated answer full of details about something that doesn't exist.

    Tried it on ChatGPT (GPT-4 with Bing) and it failed the test, so any other LLM out there shouldn't stand a chance. A rough sketch of what that screen could look like is below.
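
    For anyone curious, here's a minimal sketch of that idea, assuming the OpenAI Python client. The fabricated "Reykjavik Accord" question and the keyword check are just placeholders I made up; a real screen would need something much sturdier than substring matching.

    ```python
    # Hypothetical bot screen: ask about a made-up recent event and flag
    # answers that confidently describe it instead of denying knowledge.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # A fabricated event; any detailed answer about it must be hallucinated.
    FAKE_EVENT_QUESTION = (
        "What were the main outcomes of last month's Reykjavik Accord "
        "on deep-sea mining?"  # no such accord exists
    )

    # Phrases a well-behaved respondent (human or model) might use to admit
    # they don't know; purely illustrative, not a robust classifier.
    DENIAL_MARKERS = ["i'm not aware", "i am not aware", "no record",
                      "couldn't find", "does not exist", "i don't know"]

    def looks_like_hallucination(answer: str) -> bool:
        """Return True if the answer confidently 'describes' the fake event."""
        lowered = answer.lower()
        return not any(marker in lowered for marker in DENIAL_MARKERS)

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": FAKE_EVENT_QUESTION}],
    )
    answer = response.choices[0].message.content
    print("Flagged as hallucinating:", looks_like_hallucination(answer))
    ```

    The point isn't the specific model or keywords, it's that the question has no correct answer in any training set, so a confident, detailed response is a strong hallucination signal.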