Reuben Marks

2K posts


@ReubenMarks5

exploring possibility space - currently building GoodMeals, a universal recipe/meal plan generator.

Joined May 2019
970 Following · 244 Followers
Reuben Marks reposted
François Chollet@fchollet·
AI automates tasks, not jobs, and when a task gets cheaper, demand for the job grows. AI cannot automate jobs end-to-end because it lacks autonomy and cannot operate without supervision. There is still not a single job from 2022 that can be performed end-to-end by AI, not even translator or customer support associate.
James Pethokoukis ⏩️⤴️@JimPethokoukis

"A decade ago, AI was supposed to replace radiologists. Today, radiologists make more than $500,000 per year, and their employment continues to grow, see chart below. Reading scans is a task, not a job, and when the task gets cheaper, demand for the job grows."

141 replies · 231 reposts · 1.5K likes · 137.5K views
Reuben Marks reposted
William MacAskill@willmacaskill·
In collaboration with Christian Tarsney, I’ve developed a new theory of population ethics, which I call the Saturation View. I think that, from a purely intellectual perspective, it’s probably the best idea I’ve ever had. It was certainly great fun to work on.

The motivation is that many views of population ethics, like the total view, suffer from some major problems. Some are already widely discussed:

The Repugnant Conclusion: For any utopian outcome, there’s always another outcome containing an enormous number of barely-positive lives that is better.

Fanaticism: For any guaranteed utopian outcome, there’s always some gamble with a vanishingly small probability of an even better outcome that has higher expected value.

Infinitarian Paralysis: Given that the universe contains an infinite number of both positive and negative lives, no finite or infinite change to the world makes any difference to overall value.

These are pretty bad! But there’s another less-discussed problem, too:

The Monoculture Problem: Given fixed resources, the best-possible future consists essentially only of qualitatively identical replicas of a small number of lives.

Essentially all extant impartial accounts of population ethics suffer from the monoculture problem. It follows from Pareto and Anonymity alone; you don't need totalism. And perfectly replicable digital minds mean this is a real issue that future generations will face. But a monoculture seems far from ideal. Endless galaxies containing nothing but the same blissful experience, repeated and repeated, seem impoverished, like a song with only one note.

The Saturation View deals with all these problems at once, using broadly the same machinery for all of them. The core idea is that the realisation value of a type of life (or experience) is determined both by the wellbeing of that life and by how many very similar lives there are in the world. Endlessly creating replicas of the same identical life becomes progressively less valuable, tending to an upper bound. The total value of a world is given by the integral of realisation value over the space of types.

Think of types of life as forming a landscape. Adding different types of life lights up different parts of the landscape. The value of the world is given by how fully illuminated the landscape is.

Why does this help? In brief:

Monoculture: Because there are diminishing returns to increasing the wellbeing of very similar types, there’s greater value in having a diversity of lives.

Repugnant Conclusion: The classic path to the Repugnant Conclusion requires trading a utopian world for an enormous population of barely-positive lives. But, on the Saturation View, barely-positive lives can only illuminate a tiny corner of the landscape. The path to the Repugnant Conclusion is blocked.

Fanaticism: Total achievable value is bounded above. That means no tiny-probability gamble can have arbitrarily high expected value.

Infinite ethics: In any infinite universe, the value of a world is finite and well-defined, even if some locations have infinite wellbeing. Unlike other approaches, this does not depend on spatiotemporal structure or a choice of ultrafilter.

Separability: Like nearly all non-totalist views, Saturationism is non-separable; background populations can affect how we rank options. But the violations are tame: worlds with sufficiently different populations simply add, and at small scales the view behaves just like totalism.

If the Saturation View is right, then the best future isn't the one where we've found the optimal experience and copy-pasted it across the cosmos. The best future is the one where we've gone exploring, and we've fully lit up the landscape of possible experiences.
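The thread's core mechanism, diminishing returns per life-type with a per-type upper bound, is easy to sketch numerically. Here is a minimal toy model in Python; the exponential saturation curve, the constants, and the `realisation_value`/`world_value` names are my own illustrative assumptions, not MacAskill and Tarsney's actual formalism:

```python
import math

def realisation_value(wellbeing, n_copies, cap=1.0, rate=1.0):
    """Toy saturating value: copies of a life-type add value with
    diminishing returns, bounded above by `cap` per type."""
    return cap * (1.0 - math.exp(-rate * wellbeing * n_copies))

def world_value(population):
    """population maps life-type -> (wellbeing, n_copies); total value
    is the sum (a discrete 'integral') over the space of types."""
    return sum(realisation_value(w, n) for w, n in population.values())

# Monoculture: one blissful life, copy-pasted a million times,
# can never be worth more than a single type's cap.
mono = {"bliss": (10.0, 1_000_000)}

# Diversity: five distinct life-types each light up their own
# region of the landscape, so their contributions add.
diverse = {f"life_{i}": (10.0, 1) for i in range(5)}

print(world_value(mono))     # ~1.0: saturated at the cap
print(world_value(diverse))  # ~5.0: diversity wins
```

Because each type's contribution is capped, total achievable value is bounded above, which is what blocks both the Repugnant Conclusion (a barely-positive life here contributes almost nothing) and fanatical tiny-probability gambles in this toy version.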
86 replies · 27 reposts · 464 likes · 106.4K views
Donald Hoffman@donalddhoffman·
Pioneering research by the Levin Lab is exploring multi-scale collective intelligence in biological systems. Here I discuss the potential of a newly discovered “trace logic” to model their findings and to offer a recursive theory of agency. youtube.com/watch?v=YnfaT5…
16 replies · 33 reposts · 155 likes · 12.8K views
Visa is doing marketing consults (see pinned!)
@nosilverv the teacher’s job is not to give the student every bit of information and to assemble it for them. this is essentially an impossible task in a single sitting because of the constraints of human cognition. you have to light a fire rather than fill a bucket (or build a Rome)
1 reply · 6 reposts · 152 likes · 2.1K views
Ideas Guy@nosilverv·
I… do not understand how you're supposed to ever convey anything given the ginormous amount of background concepts you need to convey first
82 replies · 45 reposts · 1.3K likes · 184.6K views
Reuben Marks reposted
Nikita Bier@nikitabier·
@ianmiles I’ll be honest. Even without the rockets, this is my hell.
181 replies · 92 reposts · 13.9K likes · 304.8K views
Reuben Marks reposted
maurer8photography@maurer8photo·
Looking up always pays off. Private KC-135 refueling an F-35 just after pixels started disappearing from the desert blue sky. Really cool to be able to see all of this! KC-135, N572MA
[three photos]
17 replies · 87 reposts · 1.6K likes · 104.9K views
Reuben Marks reposted
Curiosity@CuriosityonX·
🚨: This isn't an aerial photo taken by a helicopter or drone. This is a high-resolution satellite image taken from orbit, looking down at the Artemis I rocket as it prepares to leave Earth. Technology looking at technology.
[photo]
102 replies · 599 reposts · 7.5K likes · 132.3K views
Reuben Marks@ReubenMarks5·
@parmita Yes but only for a second. Must get back to expanding the light cone
0 replies · 0 reposts · 2 likes · 379 views
Parmita Mishra@parmita·
Can we all just take a second, pause, and acknowledge that the two shootings at MIT and Brown were related. That is wild.
13 replies · 6 reposts · 132 likes · 10.1K views
Reuben Marks reposted
Andrew McCalip@andrewmccalip·
I’m in total agreement with John on this one. The rocket analogy is dead on.

There’s a specific disease in engineering where we treat the last sliver of efficiency like it’s sacred, and we treat time like it’s free. Then we act shocked when the result is a bespoke artifact with a multi-year lead time. There's often an alternate universe where one could increase the absolute output by 10x and simply eat the losses. It still nets out positive.

I always think about the area under the curve of engineer-hours. All the effort that goes into these grotesquely beautiful optimized structures. At some point you have to ask the annoying question: would it have been faster and cheaper to accept lower performance and build a system that scales? Starship is basically that argument made out of steel.

Transformer discourse feels similar. Losses are real heat. Lead time is also real money. If the project misses the window, the most efficient transformer on Earth is functionally useless. I once waited 11 months to get upgraded from a 200A service to 800A, and by the time it was ready I’d already moved out.

The killer is that tightening constraints doesn’t scale linearly. It explodes complexity. "Make losses smaller" sounds like one knob, but it drags in core steel, lamination process, stacking quality, insulation system, cooling margins, QA, testing, and sometimes an entire supply chain that only a couple of plants can handle.

This is the part people miss. When margins get thin, everything couples. A slightly bigger fastener costs cents. A design that can’t tolerate inefficiency forces you to pay for world-class structures, thermal, manufacturing, and quality, just to qualify a rivet or bolted joint. Not being able to run inefficiently ripples out into third- and fourth-order consequences and balloons cost.

My mind always anchors back to the ASME pressure vessel code. It’s a cheat code for civilization. Someone did the hard work once, baked conservative rules into a standard, and now thousands of teams don’t have to re-derive fracture mechanics for every tank they want to weld. It’s rated for pressure and temperature. Follow the rules and you don’t have to think about it. That’s the beauty of abstraction.

You see that beauty everywhere once you notice it. In any commercial building you’re surrounded by standard conduit sizes, wire gauges, NEMA enclosures, schedule 40 pipe, flange patterns, breaker panels, UL listings. The same handful of materials and geometries repeating. It’s not that no one could optimize each one. It’s that society decided the target was throughput, safety, and repeatability, not heroics.

I think parts of the grid need more of that mindset. Not be sloppy. Not ignore heat. Just stop worshipping the last nine when it creates a one-off supply chain. Sometimes the economically correct transformer is the one that’s slightly worse on paper and available this year.

It blows my mind what we obsess over and what we shrug at. We treat time like it’s infinite and efficiency like it’s sacred. It isn’t. The real scarce resource is years. The thought that I might only see a few dozen big transformer cycles left in my whole career is equal parts terrifying and motivating.

Anyway, the point is, the ASME vessel code is awesome.
[photo]
John Carmack@ID_AA_Carmack

I have always struggled to appreciate why the big transformers take multiple years to source, and this article didn’t really clear it up. I wonder if there is a hangup on absolute efficiency akin to the rocket engine Isp fixation, and trading off a little bit of efficiency results in better economics when lead time has a cost. Instead of using highly specialized, long-lead-time “grain-oriented electrical steel and high-purity, insulated copper”, just use whatever you can buy in bulk and eat some percentage of inefficiency. For a lot of projects like solar farms and data centers, their economics are not dominated by the low-order bits of the cost of electricity.
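The lead-time-versus-efficiency tradeoff in this exchange is straightforward to quantify with a back-of-the-envelope model. All numbers below are hypothetical, chosen only to show the shape of the argument, not real transformer specs:

```python
def energy_delivered_mwh(efficiency, lead_time_yr,
                         horizon_yr=10.0, capacity_mw=100.0):
    """Energy delivered over a planning horizon: nothing while
    waiting out the lead time, derated by efficiency afterwards."""
    operating_yr = max(0.0, horizon_yr - lead_time_yr)
    return capacity_mw * 8760.0 * operating_yr * efficiency

# Grain-oriented-steel unit: near-perfect, but a 3-year queue.
optimized = energy_delivered_mwh(efficiency=0.995, lead_time_yr=3.0)

# Commodity-steel unit: eats ~2% extra losses, ships in 6 months.
available = energy_delivered_mwh(efficiency=0.975, lead_time_yr=0.5)

print(f"optimized: {optimized:,.0f} MWh")  # 6,101,340
print(f"available: {available:,.0f} MWh")  # 8,113,950
```

Under these assumptions the "worse" transformer delivers roughly a third more energy over the decade: the two-percentage-point efficiency penalty never comes close to repaying two and a half lost years.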

69 replies · 130 reposts · 1.1K likes · 87.1K views
Reuben Marks@ReubenMarks5·
Simultaneously refreshing and terrifying
Peter Girnus 🦅@gothburz

Last quarter I rolled out Microsoft Copilot to 4,000 employees. $30 per seat per month. $1.4 million annually. I called it "digital transformation." The board loved that phrase. They approved it in eleven minutes. No one asked what it would actually do. Including me.

I told everyone it would "10x productivity." That's not a real number. But it sounds like one. HR asked how we'd measure the 10x. I said we'd "leverage analytics dashboards." They stopped asking.

Three months later I checked the usage reports. 47 people had opened it. 12 had used it more than once. One of them was me. I used it to summarize an email I could have read in 30 seconds. It took 45 seconds. Plus the time it took to fix the hallucinations. But I called it a "pilot success." Success means the pilot didn't visibly fail.

The CFO asked about ROI. I showed him a graph. The graph went up and to the right. It measured "AI enablement." I made that metric up. He nodded approvingly. We're "AI-enabled" now. I don't know what that means. But it's in our investor deck.

A senior developer asked why we didn't use Claude or ChatGPT. I said we needed "enterprise-grade security." He asked what that meant. I said "compliance." He asked which compliance. I said "all of them." He looked skeptical. I scheduled him for a "career development conversation." He stopped asking questions.

Microsoft sent a case study team. They wanted to feature us as a success story. I told them we "saved 40,000 hours." I calculated that number by multiplying employees by a number I made up. They didn't verify it. They never do. Now we're on Microsoft's website. "Global enterprise achieves 40,000 hours of productivity gains with Copilot." The CEO shared it on LinkedIn. He got 3,000 likes. He's never used Copilot. None of the executives have. We have an exemption. "Strategic focus requires minimal digital distraction." I wrote that policy.

The licenses renew next month. I'm requesting an expansion. 5,000 more seats. We haven't used the first 4,000. But this time we'll "drive adoption." Adoption means mandatory training. Training means a 45-minute webinar no one watches. But completion will be tracked. Completion is a metric. Metrics go in dashboards. Dashboards go in board presentations. Board presentations get me promoted. I'll be SVP by Q3.

I still don't know what Copilot does. But I know what it's for. It's for showing we're "investing in AI." Investment means spending. Spending means commitment. Commitment means we're serious about the future. The future is whatever I say it is. As long as the graph goes up and to the right.

0 replies · 0 reposts · 0 likes · 28 views
Reuben Marks@ReubenMarks5·
Where attention goes, energy flows
Autism Capital 🧩@AutismCapital

@IterIntellectus That's exactly what's happening. Whatever you focus on grows. By constantly focusing on "the problem," instead of focusing on the behaviors that are the exact opposite of whatever "the problem" is, they are growing and watering neurosis. Constantly talking about a problem reinforces it.

0 replies · 0 reposts · 0 likes · 26 views
Reuben Marks reposted
Michael Levin@drmichaellevin·
Final version is out: authors.elsevier.com/c/1mEoa5bD-sxf… "Neural cellular automata: Applications to biology and beyond classical AI" @LPiolopez Benedikt Hartl

"Neural Cellular Automata (NCA) represent a powerful framework for modeling biological self-organization, extending classical rule-based systems with trainable, differentiable (or evolvable) update rules that capture the adaptive, self-regulatory dynamics of living matter. By embedding Artificial Neural Networks (ANNs) as local decision-making centers and interaction rules between localized agents, NCA can simulate processes across molecular, cellular, tissue, and system-level scales, offering a multiscale competency architecture perspective on evolution, development, regeneration, aging, morphogenesis, and robotic control. These models not only reproduce canonical, biologically inspired target patterns but also generalize to novel conditions, demonstrating robustness to perturbations and the capacity for open-ended adaptation and reasoning through embodiment.

Given their immense success in recent developments, we here review the current literature on NCAs that is relevant primarily for biological or bioengineering applications. Moreover, we emphasize that beyond biology, NCAs display robust and generalizing goal-directed dynamics without centralized control, e.g., in controlling or regenerating composite robotic morphologies, or even on cutting-edge reasoning tasks such as ARC-AGI-1. In addition, the same principle of iterative state refinement is reminiscent of modern generative Artificial Intelligence (AI), such as probabilistic diffusion models. Their governing self-regulatory behavior is constrained to fully localized interactions, yet their collective behavior scales into coordinated system-level outcomes.

We thus argue that NCAs constitute a unifying, computationally lean paradigm that not only bridges fundamental insights from multiscale biology with modern generative AI, but also has the potential to design truly bio-inspired collective intelligence capable of hierarchical reasoning and control."
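For a concrete sense of the locality the abstract emphasizes, here is a minimal untrained NCA sketch in Python with NumPy. The architecture (one channel, a 3x3 neighborhood, a tiny random two-layer update network) is my own toy illustration, not a model from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared update rule: a tiny 2-layer network applied identically
# at every cell. Weights are random (untrained) for illustration.
W1 = rng.normal(scale=0.1, size=(9, 8))
W2 = rng.normal(scale=0.1, size=(8, 1))

def nca_step(grid):
    """One NCA step: each cell perceives its 3x3 neighborhood and
    applies a small residual update. All interactions are local."""
    h, w = grid.shape
    padded = np.pad(grid, 1, mode="wrap")  # toroidal boundary
    # Shape (h, w, 9): each cell's flattened neighborhood.
    patches = np.stack([padded[i:i + h, j:j + w]
                        for i in range(3) for j in range(3)], axis=-1)
    hidden = np.tanh(patches @ W1)        # local "perception"
    delta = (hidden @ W2)[..., 0]         # per-cell state change
    return np.clip(grid + delta, 0.0, 1.0)

grid = rng.random((16, 16))
for _ in range(10):                       # iterative state refinement
    grid = nca_step(grid)
print(grid.shape)                         # (16, 16)
```

In the trained versions the review covers, the same shared weights would be optimized by gradient descent or evolution so that this purely local iteration grows, and regenerates after damage, a global target pattern.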
25 replies · 97 reposts · 548 likes · 32.1K views
Reuben Marks@ReubenMarks5·
@gfodor Keep your friends close, and your enemies closer.
0 replies · 0 reposts · 0 likes · 17 views
gfodor.id
gfodor.id@gfodor·
Not sure how Elon didn’t talk Trump out of giving Sam the light cone. Anyway
26 replies · 2 reposts · 140 likes · 8.3K views
Joscha Bach@Plinz·
This conversation between Adam Brown and @dwarkesh_sp is the most intellectually delightful podcast in the series (which is a high bar). Adam's casual brilliance, his joyful curiosity and the scope of his arguments on the side of life are exhilarating. dwarkeshpatel.com/p/adam-brown
27 replies · 62 reposts · 906 likes · 94.7K views