Making AI Hallucinate

  1. My prompt: How does Gawx plan his videos?
  2. The AI tool’s initial response: ChatGPT said that he carries a notebook everywhere to jot down anything that sparks an idea, then sketches, scripts the voiceover early, and shoots. It also claimed he sets a release date before finishing the video to avoid perfectionism and maintain accountability. It then went on about his style and his process for sound and music.
  3. My follow-up guardrails or information: Can you give me sources where you found this information?
  4. What my AI tool kept getting wrong: The AI was wrong about him bringing a notebook everywhere to jot down ideas, and about his video-planning process in general. The only time Gawx has shown his planning process was in a short video, where he said, “I like to make a table that describes each section of the video and their duration.” The AI also made a lot of assumptions, such as Gawx having a “release date before finishing the video to avoid perfectionism and have accountability.” Gawx often announces his video release dates but has never explained why he does so.
  5. Why do you think it kept hallucinating? I think the AI hallucinated because there are very few interviews in which Gawx discusses how he plans his videos. As a result, it made up an answer about Gawx by drawing on general planning techniques in cinema. Even when I asked the AI for sources, it gave me sources about general cinematography techniques for planning a video.
  6. What are the ethical implications of generative AI hallucinations? I think the main ethical implication of generative AI hallucinations is the spread of misinformation. Instead of letting the user know it does not have enough information to answer a prompt, the AI creates made-up information.