Steve Brown
2.7K posts

Steve Brown
@brown2020
Focused on Artificial Intelligence.
Malibu, CA · Joined November 2007
2K Following · 4.5K Followers
Steve Brown retweeted

what i wish someone had told me:
blog.samaltman.com/what-i-wish-so…
Steve Brown retweeted

# On the "hallucination problem"
I always struggle a bit when I'm asked about the "hallucination problem" in LLMs. Because, in some sense, hallucination is all LLMs do. They are dream machines.
We direct their dreams with prompts. The prompts start the dream, and based on the LLM's hazy recollection of its training documents, most of the time the result goes someplace useful.
It's only when the dreams go into deemed factually incorrect territory that we label it a "hallucination". It looks like a bug, but it's just the LLM doing what it always does.
At the other end of the extreme consider a search engine. It takes the prompt and just returns one of the most similar "training documents" it has in its database, verbatim. You could say that this search engine has a "creativity problem" - it will never respond with something new. An LLM is 100% dreaming and has the hallucination problem. A search engine is 0% dreaming and has the creativity problem.
All that said, I realize that what people *actually* mean is they don't want an LLM Assistant (a product like ChatGPT etc.) to hallucinate. An LLM Assistant is a much more complex system than just the LLM itself, even if one is at the heart of it. There are many ways to mitigate hallucinations in these systems - using Retrieval Augmented Generation (RAG) to more strongly anchor the dreams in real data through in-context learning is maybe the most common one. Disagreements between multiple samples, reflection, verification chains. Decoding uncertainty from activations. Tool use. All are active and very interesting areas of research.
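One of the mitigations named above - checking for disagreement between multiple samples - can be sketched in a few lines. This is a minimal illustration, not the post's implementation: `samples` stands in for repeated completions from some hypothetical LLM call, and a simple majority vote plus an agreement score flags low-confidence (hallucination-prone) answers.

```python
from collections import Counter

def majority_answer(samples):
    """Pick the most common answer among repeated model samples.

    Low agreement across samples is a cheap signal that the
    answer may be hallucinated and needs verification.
    """
    counts = Counter(samples)
    best, votes = counts.most_common(1)[0]
    return best, votes / len(samples)

# Toy stand-in for 5 samples from a (hypothetical) LLM:
samples = ["Paris", "Paris", "Lyon", "Paris", "Paris"]
answer, agreement = majority_answer(samples)
# Here the vote picks "Paris" with agreement 0.8; a threshold
# (say, agreement < 0.6) could trigger retrieval or verification.
```

In a real system the threshold and the sampling temperature would need tuning, and the comparison would usually normalize or semantically cluster answers rather than match strings exactly.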
TLDR I know I'm being super pedantic, but the LLM has no "hallucination problem". Hallucination is not a bug; it is the LLM's greatest feature. The LLM Assistant has a hallucination problem, and we should fix it.
Okay I feel much better now :)

LLM Visualization of Nano-GPT with 85,684 parameters. GPT-4 is 2 million times that. AI is the most complex thing ever created by humans. From bbycroft.net/llm
Steve Brown retweeted

@maria_avdv This is the story that we should be paying a lot more attention to.

@IAPonomarenko Shameful. And thank you @IAPonomarenko for being strong and speaking the truth.
Steve Brown retweeted

@elonmusk Definitely a big step forward in reducing the friction and increasing the opportunity for the drunk tweeting market...

The 21st century definition of creativity is the part that AI cannot predict. #AI #creativity

@IAPonomarenko @IAPonomarenko is my other most trusted and respected source in Ukraine. If only more people had your courage and your clarity. Please let us know how we can help you.

@anders_aslund @anders_aslund one of my most trusted sources on this issue.

Read my new book: "Russia's Crony Capitalism: The Path from Market Economy to Kleptocracy" (Yale UP, May 2019)
amzn.to/2WgLENO


@AskFrontier @AskFrontier This is day 6 with no internet, no information about when it will be restored, and no meaningful communication from @FrontierCorp about whether they are going to fix it.

@brown2020 They work primarily via email. However, I will be updating the assigned account manager about this. ~Steph

@AskFrontier Still no Internet / phone service from Frontier since LA rainstorm. It's been out all week. No clarity from Frontier on when it will be fixed.

@AskFrontier That was days ago, I replied with contact info as requested, but no one actually called.

@brown2020 Hi, Steve. An account manager has emailed you about the issue. Please check your inbox and spam folder for them to assist you. ~Steph
