David Holmes

497 posts


@DavidHolmes

Long-time blogger & digital curator | sound engineer & musician | Internet music aficionado. Specialized lists available.

UK, London · Joined March 2007
242 Following · 877 Followers
David Holmes@DavidHolmes·
A cherished item for ambient music fans: 'The Box,' a special collection of early recordings by UK producer Dennis Huddleston, AKA 36 @3sixrecordings. Lovingly remixed and remastered for superior sound quality. 3six.net/album/the-box
David Holmes@DavidHolmes·
Today’s computer algorithms can change music, but vibrations from the past remain sublimely beautiful even when stretched in time - 400% slower. ‘Music can change the world’ - Ludwig van Beethoven youtu.be/6xxmL7xzZC4?si…
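The kind of extreme slowdown the tweet describes can be sketched with a naive overlap-add (OLA) loop in NumPy. This is purely an illustrative toy, not the tool behind the linked video: real stretchers such as phase vocoders or paulstretch also align phase between frames, which this sketch skips, so it will sound smeary. The function name and parameters are assumptions for the example.

```python
import numpy as np

def time_stretch_ola(signal, stretch=4.0, frame=2048, hop=512):
    """Naive overlap-add (OLA) time stretch: step through the input
    more slowly than through the output, so overlapping windowed
    frames replay `stretch` times longer at roughly the same pitch."""
    analysis_hop = hop / stretch             # input advances more slowly
    window = np.hanning(frame)
    n_frames = int((len(signal) - frame) / analysis_hop)
    out = np.zeros(n_frames * hop + frame)
    for i in range(n_frames):
        start = int(i * analysis_hop)
        out[i * hop : i * hop + frame] += signal[start:start + frame] * window
    peak = np.abs(out).max()
    return out / peak if peak > 0 else out   # crude peak normalization

# 1 second of a 440 Hz tone at 44.1 kHz, stretched "400% slower"
sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
slow = time_stretch_ola(tone, stretch=4.0)
print(len(slow) / len(tone))  # roughly 4x longer
```

The slow analysis hop is the whole trick: the output is assembled from the same audio material, just revisited at a quarter of the speed.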
David Holmes reposted
Chris Stokel-Walker@stokel·
Grok, xAI's LLM chatbot, started referring to OpenAI in responses over the weekend. It could be the first warning sign of a significant collapse in our collective knowledge, experts fear. By me for @FastCompany fastcompany.com/90998360/grok-…
David Holmes@DavidHolmes·
Vera Molnár, an early pioneer of computer generative art, passed away on December 7th at the age of 99. She began using computers creatively in 1968, and her NFT auction at Sotheby’s this year generated $1.2 million. She will be deeply missed. youtu.be/8tNESHtfkr0?si…
David Holmes reposted
Andrej Karpathy@karpathy·
# On the "hallucination problem"

I always struggle a bit when I'm asked about the "hallucination problem" in LLMs. Because, in some sense, hallucination is all LLMs do. They are dream machines.

We direct their dreams with prompts. The prompts start the dream, and based on the LLM's hazy recollection of its training documents, most of the time the result goes someplace useful.

It's only when the dreams go into deemed factually incorrect territory that we label it a "hallucination". It looks like a bug, but it's just the LLM doing what it always does.

At the other end of the extreme, consider a search engine. It takes the prompt and just returns one of the most similar "training documents" it has in its database, verbatim. You could say that this search engine has a "creativity problem" - it will never respond with something new. An LLM is 100% dreaming and has the hallucination problem. A search engine is 0% dreaming and has the creativity problem.

All that said, I realize that what people *actually* mean is that they don't want an LLM Assistant (a product like ChatGPT etc.) to hallucinate. An LLM Assistant is a lot more complex system than just the LLM itself, even if one is at the heart of it. There are many ways to mitigate hallucinations in these systems - using Retrieval Augmented Generation (RAG) to more strongly anchor the dreams in real data through in-context learning is maybe the most common one. Disagreements between multiple samples, reflection, verification chains. Decoding uncertainty from activations. Tool use. All are active and very interesting areas of research.

TLDR: I know I'm being super pedantic, but the LLM has no "hallucination problem". Hallucination is not a bug, it is the LLM's greatest feature. The LLM Assistant has a hallucination problem, and we should fix it.

Okay I feel much better now :)
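The RAG mitigation Karpathy mentions can be sketched very roughly in plain Python: retrieve the document most relevant to the query, then place it in the prompt so the model's "dream" is anchored in real data via in-context learning. The word-overlap scorer and toy corpus below are illustrative assumptions - production retrievers score documents with embeddings, not word counts.

```python
def word_overlap(query: str, doc: str) -> float:
    """Crude relevance score: fraction of query words present in the doc."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

def retrieve(query: str, corpus: list[str]) -> str:
    """Return the corpus document with the highest overlap score."""
    return max(corpus, key=lambda doc: word_overlap(query, doc))

def build_prompt(query: str, corpus: list[str]) -> str:
    """In-context learning: prepend the retrieved evidence to the question,
    so the model is anchored to it rather than free-dreaming an answer."""
    context = retrieve(query, corpus)
    return (
        f"Context:\n{context}\n\n"
        f"Answer using only the context.\nQuestion: {query}"
    )

# Toy corpus drawn from this timeline, purely for illustration
corpus = [
    "Vera Molnar began making computer generative art in 1968.",
    "36 is the alias of UK ambient producer Dennis Huddleston.",
]
prompt = build_prompt("Who is the producer known as 36?", corpus)
print(prompt)
```

The resulting prompt would then be sent to the LLM; the other mitigations he lists (sampling disagreement, verification chains, tool use) wrap the model in a similar way rather than changing the model itself.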
David Holmes reposted
Open Culture@openculture·
Download 448 Free Art Books from The Metropolitan Museum of Art goo.gl/Z66KQS
David Holmes@DavidHolmes·
Hatchet - a new kind of social music player and fan community. Listen to and share music with friends across platforms: hatchet.is
David Holmes@DavidHolmes·
Mindfulness meditation app with a choice of ambient background sounds & guided sessions from 2 to 20 mins: calm.com by @tewy