๐š…๐š’๐š—๐šŒ๐šŽ๐š—๐š

1.3K posts

๐š…๐š’๐š—๐šŒ๐šŽ๐š—๐š banner
๐š…๐š’๐š—๐šŒ๐šŽ๐š—๐š

๐š…๐š’๐š—๐šŒ๐šŽ๐š—๐š

@wivincent

in pursuit of the numinous ๐Ÿ‘จโ€๐Ÿ’ป๐Ÿชข๐Ÿฑ

Tower of Babel · Joined January 2023
304 Following · 743 Followers
Pinned Tweet
๐š…๐š’๐š—๐šŒ๐šŽ๐š—๐š
In case you wanted to know how to find a treasure: 🏴‍☠️

"Eighteen Bold letters Preserved In a clearing": There are 18 gigantic concrete letters piled in a field near India Basin Park, remnants of a sign created by Michael Manwaring and removed in 2020. They're intended to eventually be repurposed as park signage. The physical letters are uppercase I, B, and P, lowercase the rest. This wouldn't be locatable from the Internet alone.

"a dark room's view...": The Camera Obscura and its view of surfers, and surf fishermen, reeling.

"dry ruin's gate": Fleishhacker Pool. Only a meager doorway remains of this once-great saltwater pool. For those looking at Sutro Baths, we'd note that those ruins are still quite wet.

"solar's tall mast": The Point of Infinity, on what is technically Yerba Buena Island but colloquially referred to as Treasure(!) Island. The sculpture is billed as a quasi-sundial by artist Hiroshi Sugimoto. Yes, this clue is within the city of San Francisco.

"chart a historic cross": Draw lines between these points and, in true treasure-hunting fashion, X really does mark the spot. The X lands just off the Historic Trail in Mount Sutro Open Space Preserve.

"To trace the true route...night...pack a light": From the X there are small indicators which glow under UV light, directing you about 5 minutes down the trail and up a side trail to a rock outcropping.

"steadfast basin, where feet part and agree": The chest was buried about 8 inches deep, in a hollow nested amid large boulders. The trail briefly and clearly splits in two, parting around the rocks.
๐š…๐š’๐š—๐šŒ๐šŽ๐š—๐š tweet media
19 replies · 31 reposts · 651 likes · 103.8K views
๐š…๐š’๐š—๐šŒ๐šŽ๐š—๐š
- "machinenglish": hidden-layer-to-hidden-layer A2A (agent-to-agent) communication, not decoded into English/JSON
Andrej Karpathy @karpathy

We're missing (at least one) major paradigm for LLM learning. Not sure what to call it; possibly it already has a name: system prompt learning?

Pretraining is for knowledge. Finetuning (SL/RL) is for habitual behavior. Both of these involve a change in parameters, but a lot of human learning feels more like a change in system prompt. You encounter a problem, figure something out, then "remember" something in fairly explicit terms for the next time, e.g. "It seems when I encounter this and that kind of problem, I should try this and that kind of approach/solution." It feels more like taking notes for yourself, i.e. something like the "Memory" feature, but used not to store per-user random facts but general/global problem-solving knowledge and strategies. LLMs are quite literally like the guy in Memento, except we haven't given them their scratchpad yet.

Note that this paradigm is also significantly more powerful and data-efficient, because a knowledge-guided "review" stage is a much higher-dimensional feedback channel than a scalar reward.

I was prompted to jot down this shower of thoughts after reading through Claude's system prompt, which currently seems to be around 17,000 words, specifying not just basic behavior style/preferences (e.g. refuse various requests related to song lyrics) but also a large amount of general problem-solving strategy, e.g.:

"If Claude is asked to count words, letters, and characters, it thinks step by step before answering the person. It explicitly counts the words, letters, or characters by assigning a number to each. It only answers the person once it has performed this explicit counting step."

This is to help Claude solve problems like counting the r's in strawberry. Imo this is not the kind of problem-solving knowledge that should be baked into weights via Reinforcement Learning, or at least not immediately/exclusively. And it certainly shouldn't come from human engineers writing system prompts by hand.

It should come from system prompt learning, which resembles RL in its setup, except the learning algorithm differs (edits vs. gradient descent). A large section of the LLM system prompt could be written via system prompt learning; it would look a bit like the LLM writing a book for itself on how to solve problems. If this works, it would be a new and powerful learning paradigm, with a lot of details left to figure out (how do the edits work? can/should you learn the edit system itself? how do you gradually move knowledge from the explicit system text into habitual weights, as humans seem to do? etc.).

0 replies · 0 reposts · 1 like · 107 views
Desh Raj @rdesh26
After 2 wonderful years, I left Meta this week. During this time, I worked on several projects related to speech and LLMs:
- Built the first multi-channel audio foundation model with M-BEST-RQ (arxiv.org/abs/2409.11494)
- Made ASR with SpeechLLMs faster (arxiv.org/abs/2409.08148) and more accurate (ieeexplore.ieee.org/document/10890…)
- Shipped the first production-ready full-duplex voice assistant (about.fb.com/news/2025/04/i…)
- Improved Moshi's reasoning capability with chain-of-thought (arxiv.org/abs/2510.07497)

I am grateful to my managers for having my back on critical projects, and fortunate to have collaborated with several brilliant researchers and engineers during this time. As to what's next: I am still in NYC and continuing to do speech research. More on that later!
6 replies · 2 reposts · 98 likes · 40.6K views
๐š…๐š’๐š—๐šŒ๐šŽ๐š—๐š retweetledi
Charles Foster @CFGeek
The mechanization of agriculture didnโ€™t wait for a โ€œdrop-in substitute for a field workerโ€. Neither will the mechanization of knowledge work wait for a โ€œdrop-in substitute for a remote workerโ€.
Declaration of Memes @LibertyCappy

How close are we to this?

6 replies · 9 reposts · 121 likes · 31.3K views