bd @Benjamindanek
394 posts
make pharma go fast

Seattle, WA · Joined September 2011
1.3K Following · 144 Followers
bd retweeted
Daniel @danielgothits·
I have openclaw sending lowball offers on Zillow all day just to make boomers start panicking lol
[image attached]
2.4K replies · 4.4K reposts · 90K likes · 8.1M views
bd retweeted
ᐱ ᑎ ᑐ ᒋ ᕮ ᒍ @Andr3jH·
frame mogging in and of itself reveals the fundamental coordinates of contemporary capitalist subjectivity: you are not simply defeated by a superior physique. no, no, no. you must recognize your defeat, perform the ritual of humiliation, acknowledge that you have been "mogged." the pure surplus-enjoyment, the plus-de-jouir of the spectacle and so on and so forth. but what we have here is even more fascinating. what we have here is the perfect case of what I call the "autocannibal subject" of late capitalism. the looksmaxer, an ontological edifice constructed on the basis of pseudo-scientific pornography of measurement and optimization, recessed suborbitals and upper maxillas and so on and so forth. Clavicular is the Bolshevik functionary who knows all the theory, has turned revolutionary consciousness into a full-time job, meeting a simple factory worker who embodies proletarian authenticity without ever having read a single word of Marx. the frat leader doesn't looksmax, he simply is. this is pure jouissance in its most traumatic form. this brute who probably drinks beer and eats pizza, who has never heard of canthal tilt, is what Lacan calls the sinthome - the enjoyment the looksmaxer cannot integrate into his symbolic universe. so when the looksmaxer is frame-mogged, what dies is not merely his self-image, but the entire libidinal infrastructure of his subjectivity. he confronts the abyss: "what am I beyond my looks?" and the answer that echoes back is... nothing. pure void. and at the same time... pure ideology.
[image attached]
53 replies · 332 reposts · 3K likes · 150.4K views
bd retweeted
Tim Francisco @timfrancisco·
@mattdykema The "print" as the universal communication format between design and manufacturing seems antiquated. Shouldn't we have a universal shareable format for a solid/part plus all the metadata that describes how to make it in detail?
1 reply · 1 repost · 1 like · 452 views
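To make the idea concrete, here is a minimal Python sketch of what such a shareable "solid plus how-to-make-it metadata" record could look like. Every field name here is invented for illustration; real efforts in this direction exist (e.g. STEP AP242 with product manufacturing information), and this is not any actual standard:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ManufacturingSpec:
    """Hypothetical 'how to make it' metadata (all field names invented here)."""
    process: str                                        # e.g. "CNC milling"
    material: str                                       # e.g. "6061-T6 aluminum"
    tolerances_mm: dict = field(default_factory=dict)   # feature -> tolerance
    surface_finish_ra_um: Optional[float] = None        # target roughness, if any
    notes: str = ""

@dataclass
class UniversalPart:
    """One shareable artifact: the solid geometry plus its manufacturing spec."""
    part_id: str
    geometry_uri: str        # pointer to the solid model, e.g. a STEP file
    spec: ManufacturingSpec

part = UniversalPart(
    part_id="bracket-rev3",
    geometry_uri="parts/bracket_rev3.step",
    spec=ManufacturingSpec(
        process="CNC milling",
        material="6061-T6 aluminum",
        tolerances_mm={"bore_diameter": 0.01},
        surface_finish_ra_um=1.6,
    ),
)
print(part.spec.process)
```

The design question in the tweet is exactly which of these fields (and how many more) a universal format would need to standardize.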
bd retweeted
Andrew I. Christianson @ai_christianson·
@Hesamation all this moltbot stuff reads like humans cosplaying as ai for one big PR stunt
4 replies · 1 repost · 30 likes · 4.3K views
bd retweeted
Ron Alfa @Ronalfa·
@owl_posting Apparently AI = scraping public data and hallucinating biological connectivity
1 reply · 1 repost · 31 likes · 2.3K views
bd @Benjamindanek·
@dr_alphalyrae The opportunity in solving these operational problems faster comes down to knowing the problem and understanding how to build a solution that fits people’s workflows. We already serve many of the major pharma companies at keiji.ai. Lmk if you want a demo
0 replies · 0 reposts · 0 likes · 4 views
bd retweeted
Vega Shah @dr_alphalyrae·
i am still of the opinion that solving the boring operational problems in biopharma r&d, like slowness in regulatory documentation and process development, is the most effective way to derive value from AI in the short term
22 replies · 16 reposts · 256 likes · 19.1K views
bd retweeted
Luis Batalha @luismbat·
Rocket reusability isn’t just an optimization - it’s a phase transition. When you throw rockets away, you’re limited by factory throughput (~16 rockets/year). Reuse them, and you enter a different regime of launch volume, economics, and ambition.
[image attached]
217 replies · 728 reposts · 8K likes · 10.8M views
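The tweet's numbers make the phase transition easy to check with back-of-envelope arithmetic. A minimal sketch: the ~16/year figure is from the tweet itself, while the flights-per-booster count is an illustrative assumption, not a real figure for any specific rocket program:

```python
# Back-of-envelope version of the "phase transition" claim.
build_rate = 16           # rockets a factory can produce per year (from the tweet)
flights_per_booster = 20  # assumed flights before a reused booster retires

expendable_launches = build_rate                      # one flight per rocket
reusable_launches = build_rate * flights_per_booster  # steady-state fleet rate

print(expendable_launches)  # 16 launches/year: factory-throughput limited
print(reusable_launches)    # 320 launches/year: a different regime entirely
```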
bd @Benjamindanek·
@RuxandraTeslo @shelbynewsad pls invite to gc 🥹. I think 2 can be solved with better visibility into datasets. I.e., making the data plumbing better and allowing collaboration with LLMs are both straightforward ways to get a lift
0 replies · 0 reposts · 0 likes · 31 views
Ruxandra Teslo 🧬 @RuxandraTeslo·
@shelbynewsad So I agree with 1 and 3. I agree with a lot of the others too, but 2 and 5, for example, seem like things the market should solve. Why do you think it doesn't?
2 replies · 0 reposts · 2 likes · 476 views
Dr. Shelby @shelbynewsad·
We’re all radicalized by different things; this one is pretty stark about why biotech needs to change
Jake Wintermute 🧬/acc @SynBio1
@shelbynewsad Realizing that the biggest players in big pharma are still just fighting for tiny scraps of the value biotech could create
8 replies · 11 reposts · 163 likes · 71.9K views
bd retweeted
Jiaqi Ma @Jiaqi_Ma_·
There are two somewhat related myths about neural networks in many intro ML courses that, I think, mislead more than they help.

1) The statement "neural networks are powerful" is often followed by a citation to the universal approximation theorem.

2) Neural networks are often described as "black-box" models.

On 1), universality isn't especially remarkable. Polynomials have Stone–Weierstrass, splines can approximate to arbitrary precision, and so can a long list of other function classes. Citing the universal approximation theorem without noting these other universal approximators risks giving beginners a distorted picture of what makes neural networks interesting. Worse, that misplaced emphasis can reinforce the second myth: the idea that neural networks are monolithic black boxes.

While it's true that neural network parameters don't have direct semantic meaning, treating the models as monolithic black boxes misses a lot. In important ways, neural networks are more "white-box" than many competitors. They are unusually workable systems. Their computation is modular (layers, blocks, residual paths, attention heads), their parameters are updated by a single, general-purpose learning rule (gradient descent via backprop), and the pipeline is differentiable end-to-end. This combination gives a rare form of mechanistic transparency for engineers: you can open the hood, ablate a layer, inspect activations, route gradients, freeze a submodule, patch a representation, or fine-tune a small adapter, then immediately see if the loss goes down. Splines and polynomial bases also decompose functions, but they don't offer comparably rich, editable machinery for credit assignment and iterative debugging, where researchers can build local intuitions that guide gradual improvements.

This "transparent enough to iterate" property is arguably a crucial factor behind today's successful deep learning recipes. It isn't "interpretability" in the sense of human-readable coefficients, but it is a form of causal editability that guides progress.

Seen this way, there might be an overlooked dimension to Sutton's "bitter lesson." Neural networks don't just scale with data and compute; they also scale with the accumulated engineering wisdom that their modular structure enables. The original "bitter lesson" suggests removing human effort from the loop, but perhaps models that are easier for humans to probe and adjust (and thus easier to scale with human effort) are a necessary ladder for scaling.
Andrej Karpathy @karpathy
[quoted tweet; its full text appears below in bd's retweet of the same post]
7 replies · 34 reposts · 364 likes · 51.2K views
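The "open the hood" workflow described above (hook an activation, ablate a block, immediately re-check the loss) is concrete enough to sketch. A toy PyTorch example with an untrained stand-in model; the loss values are meaningless, the editability is the point:

```python
import torch
import torch.nn as nn

# Toy stand-in for a modular network: residual blocks, so ablating one
# degrades gracefully to the identity path.
class Block(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.ff = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))
    def forward(self, x):
        return x + self.ff(x)  # residual connection

model = nn.Sequential(Block(32), Block(32), Block(32), nn.Linear(32, 10))
x = torch.randn(64, 32)
y = torch.randint(0, 10, (64,))
loss_fn = nn.CrossEntropyLoss()

# 1) Inspect activations: hook the second block and record its output.
acts = {}
hook = model[1].register_forward_hook(
    lambda mod, inp, out: acts.update(block1=out.detach())
)
baseline = loss_fn(model(x), y)
hook.remove()
print("block1 activation norm:", acts["block1"].norm().item())

# 2) Ablate the block (swap in identity) and immediately re-check the loss.
model[1] = nn.Identity()
ablated = loss_fn(model(x), y)
print(f"loss before/after ablation: {baseline.item():.3f} / {ablated.item():.3f}")
```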
bd retweeted
neural nets. @cneuralnetwork·
the new cto of anthropic did his bachelor's at PES University 👀
[image attached]
27 replies · 21 reposts · 738 likes · 50.8K views
bd @Benjamindanek·
Not everybody needs to be a biologist or a clinician to contribute to biology. It’s mostly an empirical field, which means that improving experimental throughput is on the main line for understanding the science better. There are countless opportunities to do that via tech today. See Varda, DeepMind, or any of the thousands of other health AI startups for inspiration on where to contribute your best efforts. Good pay, impactful work, and an investment in your wellbeing years down the road
0 replies · 0 reposts · 4 likes · 363 views
owl @owl_posting·
Ask not why would you work in biology, but rather: why wouldn't you? owlposting.com/p/ask-not-why-… openai/gemini gave this essay an A- for evocative imagery. claude gave it a C- for being emotionally manipulative. both are probably right. i feel a little sick re-reading it
23 replies · 55 reposts · 318 likes · 116.3K views
bd retweeted
Andrej Karpathy @karpathy·
Finally had a chance to listen through this pod with Sutton, which was interesting and amusing. As background, Sutton's "The Bitter Lesson" has become a bit of a biblical text in frontier LLM circles. Researchers routinely talk about and ask whether this or that approach or idea is sufficiently "bitter lesson pilled" (meaning arranged so that it benefits from added computation for free) as a proxy for whether it's going to work or worth even pursuing. The underlying assumption being that LLMs are of course highly "bitter lesson pilled" indeed, just look at LLM scaling laws where if you put compute on the x-axis, number go up and to the right.

So it's amusing to see that Sutton, the author of the post, is not so sure that LLMs are "bitter lesson pilled" at all. They are trained on giant datasets of fundamentally human data, which is both 1) human generated and 2) finite. What do you do when you run out? How do you prevent a human bias? So there you have it, bitter lesson pilled LLM researchers taken down by the author of the bitter lesson - rough!

In some sense, Dwarkesh (who represents the LLM researchers' viewpoint in the pod) and Sutton are slightly speaking past each other because Sutton has a very different architecture in mind, and LLMs break a lot of its principles. He calls himself a "classicist" and evokes Alan Turing's original concept of building a "child machine" - a system capable of learning through experience by dynamically interacting with the world. There's no giant pretraining stage of imitating internet webpages. There's also no supervised finetuning, which he points out is absent in the animal kingdom (it's a subtle point, but Sutton is right in the strong sense: animals may of course observe demonstrations, but their actions are not directly forced/"teleoperated" by other animals). Another important note he makes is that even if you just treat pretraining as an initialization of a prior before you finetune with reinforcement learning, Sutton sees the approach as tainted with human bias and fundamentally off course, a bit like when AlphaZero (which has never seen human games of Go) beats AlphaGo (which initializes from them). In Sutton's world view, all there is is interaction with a world via reinforcement learning, where the reward functions are partially environment specific but also intrinsically motivated, e.g. "fun", "curiosity", and related to the quality of the prediction in your world model. And the agent is always learning at test time by default; it's not trained once and then deployed thereafter. Overall, Sutton is a lot more interested in what we have in common with the animal kingdom than in what differentiates us. "If we understood a squirrel, we'd be almost done."

As for my take... First, I should say that I think Sutton was a great guest for the pod, and I like that the AI field maintains entropy of thought and that not everyone is exploiting the next local iteration of LLMs. AI has gone through too many discrete transitions of the dominant approach to lose that. And I also think his criticism that LLMs are not bitter lesson pilled is fair. Frontier LLMs are now highly complex artifacts with a lot of humanness involved at all stages - the foundation (the pretraining data) is all human text, the finetuning data is human and curated, the reinforcement learning environment mixture is tuned by human engineers.

We do not in fact have an actual, single, clean, actually bitter lesson pilled, "turn the crank" algorithm that you could unleash upon the world and see it learn automatically from experience alone. Does such an algorithm even exist? Finding it would of course be a huge AI breakthrough. Two "example proofs" are commonly offered to argue that such a thing is possible. The first is the success of AlphaZero learning to play Go completely from scratch, with no human supervision whatsoever. But the game of Go is clearly such a simple, closed environment that it's difficult to see the analogous formulation in the messiness of reality. I love Go, but algorithmically and categorically, it is essentially a harder version of tic tac toe.

The second example is that of animals, like squirrels. And here, personally, I am also quite hesitant whether it's appropriate, because animals arise by a very different computational process and via different constraints than what we have practically available to us in the industry. Animal brains are nowhere near the blank slate they appear to be at birth. First, a lot of what is commonly attributed to "learning" is imo a lot more "maturation". And second, even that which clearly is "learning" and not maturation is a lot more "finetuning" on top of something clearly powerful and preexisting. Example: a baby zebra is born and within a few dozen minutes it can run around the savannah and follow its mother. This is a highly complex sensory-motor task, and there is no way in my mind that this is achieved from scratch, tabula rasa. The brains of animals and the billions of parameters within have a powerful initialization encoded in the ATCGs of their DNA, trained via the "outer loop" optimization in the course of evolution. If the baby zebra spasmed its muscles around at random, as a reinforcement learning policy would have you do at initialization, it wouldn't get very far at all.

Similarly, our AIs now also have neural networks with billions of parameters. These parameters need their own rich, high-information-density supervision signal. We are not going to re-run evolution. But we do have mountains of internet documents. Yes, it is basically supervised learning, which is ~absent in the animal kingdom. But it is a way to practically gather enough soft constraints over billions of parameters, to try to get to a point where you're not starting from scratch.

TLDR: Pretraining is our crappy evolution. It is one candidate solution to the cold start problem, to be followed later by finetuning on tasks that look more correct, e.g. within the reinforcement learning framework, as state-of-the-art frontier LLM labs now do pervasively.

I still think it is worth being inspired by animals. I think there are multiple powerful ideas that LLM agents are algorithmically missing that can still be adapted from animal intelligence. And I still think the bitter lesson is correct, but I see it more as something platonic to pursue, not necessarily to reach, in our real world and practically speaking. And I say both of these with double-digit-percent uncertainty and cheer the work of those who disagree, especially those a lot more ambitious bitter lesson wise.

So that brings us to where we are. Stated plainly, today's frontier LLM research is not about building animals. It is about summoning ghosts. You can think of ghosts as a fundamentally different kind of point in the space of possible intelligences. They are muddled by humanity. Thoroughly engineered by it. They are these imperfect replicas, a kind of statistical distillation of humanity's documents, with some sprinkle on top. They are not platonically bitter lesson pilled, but they are perhaps "practically" bitter lesson pilled, at least compared to a lot of what came before. It seems possible to me that over time we can further finetune our ghosts more and more in the direction of animals; that it's not so much a fundamental incompatibility as a matter of initialization in the intelligence space. But it's also quite possible that they diverge even further and end up permanently different, un-animal-like, but still incredibly helpful and properly world-altering. It's possible that ghosts:animals :: planes:birds.

Anyway, in summary, overall and actionably: I think this pod is solid "real talk" from Sutton to the frontier LLM researchers, who might be gear-shifted a little too much into exploit mode. Probably we are still not sufficiently bitter lesson pilled, and there is a very good chance of more powerful ideas and paradigms beyond exhaustive benchbuilding and benchmaxxing. And animals might be a good source of inspiration. Intrinsic motivation, fun, curiosity, empowerment, multi-agent self-play, culture. Use your imagination.
Dwarkesh Patel @dwarkesh_sp
.@RichardSSutton, father of reinforcement learning, doesn’t think LLMs are bitter-lesson-pilled. My steel man of Richard’s position: we need some new architecture to enable continual (on-the-job) learning. And if we have continual learning, we don't need a special training phase - the agent just learns on-the-fly - like all humans, and indeed, like all animals. This new paradigm will render our current approach with LLMs obsolete. I did my best to represent the view that LLMs will function as the foundation on which this experiential learning can happen. Some sparks flew.
0:00:00 – Are LLMs a dead-end?
0:13:51 – Do humans do imitation learning?
0:23:57 – The Era of Experience
0:34:25 – Current architectures generalize poorly out of distribution
0:42:17 – Surprises in the AI field
0:47:28 – Will The Bitter Lesson still apply after AGI?
0:54:35 – Succession to AI
415 replies · 1.2K reposts · 9.5K likes · 2M views
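Karpathy's "pretraining is our crappy evolution, followed by RL finetuning" framing corresponds to a two-stage recipe. Here's a deliberately tiny sketch of that shape, with a toy bigram model, made-up data, and a placeholder reward; nothing here reflects an actual lab pipeline:

```python
import torch
import torch.nn.functional as F

# Stage 1 -- "crappy evolution": supervised next-token prediction on a corpus.
# Toy bigram model: an embedding table mapping token -> next-token logits.
vocab = 50
model = torch.nn.Embedding(vocab, vocab)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

corpus = torch.randint(0, vocab, (256, 16))   # stand-in for "human documents"
for _ in range(50):
    logits = model(corpus[:, :-1])            # predict token t+1 from token t
    loss = F.cross_entropy(logits.reshape(-1, vocab), corpus[:, 1:].reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2 -- RL finetuning on top of the pretrained initialization: sample a
# continuation, score it with a reward, and REINFORCE. The reward below is a
# placeholder (prefer even tokens), standing in for a task/environment reward.
def reward(seq: torch.Tensor) -> torch.Tensor:
    return (seq % 2 == 0).float().mean()

tok = torch.randint(0, vocab, (1,))
logps, seq = [], []
for _ in range(16):                           # sample a 16-token continuation
    dist = torch.distributions.Categorical(logits=model(tok))
    tok = dist.sample()
    logps.append(dist.log_prob(tok))
    seq.append(tok)

rl_loss = -reward(torch.stack(seq)) * torch.stack(logps).sum()
opt.zero_grad(); rl_loss.backward(); opt.step()
print("sampled sequence reward:", reward(torch.stack(seq)).item())
```

Frontier pipelines replace the bigram with a transformer and REINFORCE with more elaborate methods, but the cold-start-then-RL shape is the one the post describes.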
bd @Benjamindanek·
@WillManidis To the people saying work is life: maybe you don’t have a job you enjoy. It’s a privilege, but all these high-performing firms are populated with people who have that privilege (or aspire to it). Sometimes you have to pay it forward and work a lot so you can work with privilege.
0 replies · 0 reposts · 0 likes · 650 views
bd retweeted
Will Manidis @WillManidis·
it’s worth noticing that basically every top performing firm has nearly monastic requirements on how the team lives outside of work. ex: Founders Fund incentivizing people to live within x miles of the office, Benchmark's constant team dinners, Deerfield partners all doing combat sports
49 replies · 41 reposts · 2K likes · 1M views
bd retweeted
atlas @creatine_cycle·
>wake up
>pop addy and a zyn
>open macbook
>open cursor
>"center this div"
>pick up phone
>watch 16 hours of short form content
159 replies · 757 reposts · 19.5K likes · 490.1K views
bd @Benjamindanek·
@ChaseBrowe32432 @9haethon Warmed up is putting it lightly. Pre-training is a substantial portion of the entire process, and Sutton's point is that pretraining is teaching a model how to draw actions, i.e. next tokens, through token masking
0 replies · 0 reposts · 0 likes · 10 views
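For readers following the thread: "drawing actions, i.e. next tokens, through token masking" mechanically means a causal mask blocks attention to future positions, and the training loss is cross-entropy against the one-step-shifted sequence. A minimal sketch with toy tensors, where random logits stand in for the output of any causally-masked model:

```python
import torch
import torch.nn.functional as F

T, vocab = 8, 100
tokens = torch.randint(0, vocab, (1, T))  # stand-in for a token sequence

# Causal mask: True where attention is blocked, so position t sees only <= t.
causal_mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
print(causal_mask.int())                  # upper triangle is masked out

# Pretraining loss: cross-entropy between position t's prediction and the
# actual next token t+1 (the "action" the model learns to draw).
logits = torch.randn(1, T, vocab)         # stand-in for model output
loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, vocab),    # predictions at positions 0..T-2
    tokens[:, 1:].reshape(-1),            # targets: tokens 1..T-1
)
print(loss.item())
```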
Chase Brower @ChaseBrowe32432·
It does happen after pre-training; the pre-training is just what gets the model 'warmed up' to learn in this manner. At this point the RL stage dominates both compute and research focus. RL is performed in a variety of categories, including competition math/code, SWE, games, and many other areas.
4 replies · 0 reposts · 12 likes · 4.6K views
Chase Brower @ChaseBrowe32432·
I'm actually losing my mind over this; does Sutton genuinely not understand that we apply RL to LLMs?
[image attached]
40 replies · 15 reposts · 863 likes · 101K views
bd retweeted
Chris Hayduk @ChrisHayduk·
Everyone posting about the Dwarkesh interview (including Dwarkesh himself!) is missing this subtle point. When LLMs imitate, they imitate the ACTION (i.e. the token prediction that produces the sequence). When humans imitate, they imitate the OUTPUT but must discover the action
Richard Sutton @RichardSSutton
@eigenrobot Even in birdsong learning in zebra finches the motor actions are not learned by imitation. The auditory result is reproduced, not the actions; in this crucial way it differs from LLM training.
68 replies · 116 reposts · 1.5K likes · 333.6K views
bd @Benjamindanek·
Hey Alan, this is really great. How do you add the hands-on part of learning into this workflow? For the pattern-recognition example, having some code in my IDE to test out concepts would be huge. Copying/pasting my code snippets and connecting them to the textbook would close a big gap. A simpler example: if I’m learning new math concepts, working through examples is useful, and being able to snap a picture from my phone and send it to the Heptabase app would also be a really great feature.
1 reply · 0 reposts · 1 like · 1.1K views
Alan Chan @alanchan_tw·
The most underrated use of AI isn’t learning faster—it’s becoming capable of learning knowledge that’s deeper, harder, and more abstract. In this article, I share how I use AI to teach myself a textbook step by step, and why the gains in depth + quality have been mind-blowing.
[image attached]
40 replies · 176 reposts · 1.6K likes · 130.7K views