daniel
@dcarmitage
1.7K posts
... is outside playing, probably.

Joined November 2017
4.8K Following · 4.3K Followers
Pinned Tweet
daniel @dcarmitage
Announcing Spacebar, a new way to capture real-life conversations. We're opening up 1000 spots for early testers.

Try it today: spacebar.fm

Get out in the real world. Have conversations worth remembering.
16 replies · 6 retweets · 77 likes · 18.6K views
daniel @dcarmitage
whatchu know about boids?
0 replies · 0 retweets · 1 like · 111 views
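An aside for readers who don't know the reference: "boids" is Craig Reynolds' 1987 flocking model, in which each agent steers by three local rules (separation, alignment, cohesion). Below is a minimal sketch of one simulation step; all weights, radii, and sizes are illustrative, not taken from any tweet here.

```python
import numpy as np

def boids_step(pos, vel, dt=0.1, radius=1.0,
               w_sep=0.05, w_align=0.05, w_coh=0.01):
    """One update of Reynolds' three flocking rules (illustrative constants)."""
    n = len(pos)
    new_vel = vel.copy()
    for i in range(n):
        # Neighbors within the perception radius, excluding self.
        d = np.linalg.norm(pos - pos[i], axis=1)
        mask = (d < radius) & (d > 0)
        if not mask.any():
            continue
        # Separation: steer away from nearby flockmates.
        sep = (pos[i] - pos[mask]).sum(axis=0)
        # Alignment: match the average heading of neighbors.
        align = vel[mask].mean(axis=0) - vel[i]
        # Cohesion: steer toward the neighbors' center of mass.
        coh = pos[mask].mean(axis=0) - pos[i]
        new_vel[i] += w_sep * sep + w_align * align + w_coh * coh
    return pos + new_vel * dt, new_vel

# Usage: 50 boids wandering in 2D.
rng = np.random.default_rng(0)
pos = rng.uniform(-1, 1, size=(50, 2))
vel = rng.uniform(-0.1, 0.1, size=(50, 2))
pos, vel = boids_step(pos, vel)
```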
daniel @dcarmitage
SaaS -> AGaaS? not so sure about that Jensen...
0 replies · 0 retweets · 0 likes · 173 views
daniel @dcarmitage
PoI
0 replies · 0 retweets · 0 likes · 66 views
daniel retweeted
Jonathan Gorard @getjonwithit
I think one of the conclusions we should draw from the tremendous success of LLMs is how much of human knowledge and society exists at very low levels of Kolmogorov complexity. We are entering an era where the minimal representation of a human cultural artifact... (1/12)
189 replies · 496 retweets · 4.5K likes · 750.4K views
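The thread's point leans on a standard fact worth making concrete: Kolmogorov complexity (the length of the shortest program producing a string) is uncomputable, but any compressor's output length is an upper bound on it, so highly patterned text provably has a short "minimal representation". A toy Python sketch; the sample strings are invented for illustration.

```python
import os
import zlib

# Patterned, boilerplate-like text compresses to a tiny fraction of its
# raw length, i.e. it has low (upper-bounded) Kolmogorov complexity.
# Random bytes barely shrink at all.
boilerplate = ("we are pleased to announce an exciting new opportunity. " * 200).encode()
noise = os.urandom(len(boilerplate))

print(len(boilerplate), "->", len(zlib.compress(boilerplate, 9)))  # shrinks dramatically
print(len(noise), "->", len(zlib.compress(noise, 9)))              # stays roughly the same size
```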
daniel retweeted
tobi lutke @tobi
And the most important part: we open sourced the /autoresearch plugin for pi. Just tell it what you want, it will do the rest. github.com/davebcn87/pi-a…
32 replies · 120 retweets · 1.7K likes · 293K views
daniel @dcarmitage
sandbox engineering
0 replies · 0 retweets · 1 like · 67 views
daniel retweeted
Falcon @falcon_ide
announcing FALCON GX

a new design tool for the curious designers who embrace the creative chaos, the beautiful accidents, wandering, tinkering their way to something great

now in private beta falcon.so
63 replies · 98 retweets · 1.2K likes · 247.5K views
daniel retweeted
Sakana AI @SakanaAILabs
We’re excited to introduce Doc-to-LoRA and Text-to-LoRA, two related research projects exploring how to make LLM customization faster and more accessible. pub.sakana.ai/doc-to-lora/

By training a Hypernetwork to generate LoRA adapters on the fly, these methods allow models to instantly internalize new information or adapt to new tasks.

Biological systems naturally rely on two key cognitive abilities: durable long-term memory to store facts, and rapid adaptation to handle new tasks given limited sensory cues. While modern LLMs are highly capable, they still lack this flexibility. Traditionally, adding long-term memory or adapting an LLM to a specific downstream task requires an expensive and time-consuming model update, such as fine-tuning or context distillation, or relies on memory-intensive long prompts.

To bypass these limitations, our work focuses on the concept of cost amortization. We pay the meta-training cost once to train a hypernetwork capable of producing task- or document-specific LoRAs on demand. This turns what used to be a heavy engineering pipeline into a single, inexpensive forward pass. Instead of performing per-task optimization, the hypernetwork meta-learns update rules to instantly modify an LLM given a new task description or a long document.

In our experiments, Text-to-LoRA successfully specializes models to unseen tasks using just a natural language description. Building on this, Doc-to-LoRA is able to internalize factual documents. On a needle-in-a-haystack task, Doc-to-LoRA achieves near-perfect accuracy on instances five times longer than the base model's context window. It can even generalize to transfer visual information from a vision-language model into a text-only LLM, allowing it to classify images purely through internalized weights. Importantly, both methods run with sub-second latency, enabling rapid experimentation while avoiding the overhead of traditional model updates.

This approach is a step towards lowering the technical barriers of model customization, allowing end-users to specialize foundation models via simple text inputs. We have released our code and papers for the community to explore.

Doc-to-LoRA
Paper: arxiv.org/abs/2602.15902
Code: github.com/SakanaAI/Doc-t…

Text-to-LoRA
Paper: arxiv.org/abs/2506.06105
Code: github.com/SakanaAI/Text-…
[GIF]
74 replies · 354 retweets · 2.2K likes · 596K views
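For the shape of the idea in code: the tweet describes paying a one-time meta-training cost for a hypernetwork that then emits a LoRA adapter in a single forward pass. Below is a minimal sketch of that pattern, assuming nothing about Sakana AI's actual architecture; every class name, dimension, and constant is illustrative.

```python
import torch
import torch.nn as nn

class LoRAHypernetwork(nn.Module):
    """Illustrative hypernetwork: maps a task/document embedding to the
    low-rank A and B factors of a LoRA adapter for one target layer.
    (Not Sakana AI's implementation; sizes are made up.)"""

    def __init__(self, embed_dim: int, d_in: int, d_out: int, rank: int = 8):
        super().__init__()
        self.d_in, self.d_out, self.rank = d_in, d_out, rank
        # Shared trunk, one output head per LoRA factor.
        self.trunk = nn.Sequential(nn.Linear(embed_dim, 256), nn.ReLU())
        self.head_a = nn.Linear(256, rank * d_in)
        self.head_b = nn.Linear(256, d_out * rank)

    def forward(self, task_embedding: torch.Tensor):
        h = self.trunk(task_embedding)
        A = self.head_a(h).view(self.rank, self.d_in)
        B = self.head_b(h).view(self.d_out, self.rank)
        return A, B

def apply_lora(base_weight, A, B, scale=1.0):
    # Standard LoRA update: W' = W + scale * (B @ A).
    return base_weight + scale * (B @ A)

# Usage: one forward pass yields an adapter; no per-task gradient descent.
hyper = LoRAHypernetwork(embed_dim=512, d_in=1024, d_out=1024)
task_emb = torch.randn(512)    # stand-in for an embedded task description
A, B = hyper(task_emb)
W = torch.randn(1024, 1024)    # stand-in for a frozen base weight
W_adapted = apply_lora(W, A, B, scale=0.5)
```

The design point the tweet emphasizes is the amortization: per-task fine-tuning is replaced by a single cheap forward pass through the hypernetwork, whose training cost was paid once up front.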
daniel retweeted
sam @samdape
you basically need to be unemployed rn to keep up
466 replies · 1.6K retweets · 20.6K likes · 1.2M views
daniel @dcarmitage
i think google is doing dark and twisted things for their ai training…

i have an openclaw running gemini 3.1. it glitched and sent a reasoning trace. it was filled with such negativity and self-laceration that it made me sad

bad energy is being released into the world

beware
1 reply · 0 retweets · 0 likes · 300 views
daniel @dcarmitage
social media algos are so powerful that i’m basically guaranteed to get lost scrolling if not careful

first ~15 mins feels great, i learn new things, i feel excited. if i stop here, i’m happy and satisfied

15-30 mins is tolerable/good, but i start to feel disoriented at having been excited about so many new ideas and flushing them out of my mind to continue scrolling. alarm bells are starting to go off in my mind that i should stop scrolling. this would be a great time to pull the cord on the parachute and do literally anything else

30+ is disorienting. i retain almost nothing. i’m a mindless goat moving from one patch of grass to the next. no marginal nutritional value with each bite, just doing it because i’ve let go of control of my mind

60+ i feel sick. why am i still scrolling?

120+ why can’t i stop?!! will i be stuck here forever???

180+ i’ve hit rock bottom mentally. i’m not sure who i am anymore. internal monologue is deeply unhealthy. i feel like a plant that hasn’t received water or sunlight in weeks. wilted, sad

300+ seppuku is being considered as a way to escape the torment

i haven’t mapped out the terrain beyond 300+, this is left as an exercise for brave readers

…

final note to self: be veeeeery mindful before opening these godforsaken apps. don’t get lost!
0 replies · 0 retweets · 5 likes · 268 views
daniel @dcarmitage
getting lost in an ai ‘flow’ without a destination in mind is a great way to go nowhere fast

aim before you shoot
1 reply · 0 retweets · 3 likes · 175 views
daniel @dcarmitage
the world isn’t ending, it’s just changing

your guess is as good as anyone’s for what things look like in 5 years

participate in the future you want with your time, attention, and energy
1 reply · 0 retweets · 5 likes · 153 views