Brian Gerami

2.2K posts

@briangerami

dark matter developer, my opinions are my own

San Francisco, CA · Joined May 2009
358 Following · 95 Followers
Brian Gerami retweeted
Austen Allred@Austen·
You can unsubscribe from physical junk mail in the United States! There are two websites where you need to fill out your information: DMAchoice.org for generic mailers and optoutprescreen.com for credit-related offers. The sites are intentionally ugly/nasty to use but they work.
Palmer Luckey@PalmerLuckey

It is time for the United States Postal Service to ban junk mail. Unsolicited spam calls are already prohibited by the FCC. Emails are heavily regulated by the CAN-SPAM Act of 2003. Junk mail is the majority of mail and consumes 100 million trees per year. Enough!

Brian Gerami@briangerami·
@sudox7 Python 3.13+ will just-in-time compile down to a normal int though
SudoX7@sudox7·
something that bothers me every time I think about it: a Python integer takes 28 bytes of memory. a C int takes 4. it's not a Python bug. it's the price of dynamic typing. every Python object carries a reference count, a type pointer, and size metadata. a million integers in Python costs 28MB; the same in C costs 4MB.
SudoX7 tweet media
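The object-header overhead described above is easy to check with nothing but the standard library: `sys.getsizeof` reports the full size of an int object, and the `array` module shows what raw 4-byte C ints cost instead. A minimal sketch (the 28-byte figure assumes a 64-bit CPython build):

```python
import sys
import array

# Every CPython object carries a reference count and a type pointer;
# ints add a size/sign field before any digit data.
per_int = sys.getsizeof(1)
print(f"one Python int object: {per_int} bytes")  # typically 28 on 64-bit builds

n = 1_000_000
print(f"{n:,} ints as objects: ~{n * per_int / 1e6:.0f} MB")

# array('i') stores raw C ints back to back, typically 4 bytes apiece.
packed = array.array('i', range(1000))
print(f"raw C int itemsize: {packed.itemsize} bytes")
print(f"{n:,} ints packed: ~{n * packed.itemsize / 1e6:.0f} MB")
```

On a 64-bit build this reproduces roughly the 28 MB vs 4 MB gap from the tweet; NumPy arrays make the same packed-storage trade.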
Brian Gerami retweeted
Daniel@growing_daniel·
It’s over for digital animation. This is too good. Guilds and unions will try to void it in Hollywood but it’s over, amazing full length films will be made by amateurs in weeks. They’ll need a distribution platform. Can anybody set a video to be a rental on YouTube?
Marko Slavnic@Markoslavnic

The quality of animation you can create on your own is truly amazing. We really are just limited by our imaginations at this point. Go tell your story! Made in @runwayml in a few hours and a handful of gens.

Big Brain AI@realBigBrainAI·
Stephen Wolfram, founder of Wolfram Research, explains how LLMs are quietly dismantling our deepest assumptions about consciousness. He argues that large language models have done something philosophy and neuroscience couldn't: "In terms of consciousness, I have to say, the idea that there's sort of something magic that goes beyond physics that leads to sort of conscious behavior, I kind of think that LLMs kind of put the final nail in that coffin."

His reasoning is that LLMs keep doing things people assumed they couldn't: "There were all these things where it's like, oh, maybe it can't do this, but actually it does. And it's just an artificial neural net."

Wolfram then challenges a core assumption about conscious experience: the feeling that we are a single, continuous self moving through time. "I think our notion of consciousness is a lot related to the fact that we believe in the single thread of experience that we have. It's not obvious that we should have a persistent thread of experience."

He points out that physics doesn't actually support this intuition: "In our models of physics, we're made of different atoms of space at every successive moment of time. So the fact that we have this belief that we are somehow persistent, we have this thread of experience that extends through time, is not obvious."

Then Wolfram offers a striking origin story for consciousness itself. @stephen_wolfram suggests it traces back to a simple evolutionary pressure: the moment animals first needed to move. "I kind of realized that probably when animals first existed in the history of life on Earth, that's when we started needing brains. If you're a thing that doesn't have to move around, the different parts of you can be doing different kinds of things. If you're an animal, then one thing you have to do is decide, are you going to go left or are you going to go right?"

That single binary choice, he argues, may be the seed of everything we now call awareness: "I kind of think it's a little disappointing to feel that this whole wanted thing that ends up being what we think of as consciousness might have originated in just that very simple need to decide if you are an animal that can move. You have to take all that sensory input and you have to make a definitive decision about do you go this way or that way."

The takeaway is unsettling but clarifying. If LLMs can produce complex behavior from simple rules, then consciousness may not be a mystical add-on to physics. It may just be what happens when a layered enough system has to make a decision.
The Amazing Gengar@amazinggengar·
"Water and electricity are so cheap as to be functionally free, so why aren't people just leaving all the lights on and the faucets running all the time?" I've heard that premise before and it's absurd. People want to go places, and a car on the road is the best solution for that. People staying home because of expected congestion is a problem. It means they aren't going out, they aren't attending events, they aren't spending money in the economy; they're staying home frustrated because the current traffic capacity is insufficient for existing demand. In California, congestion was why I started riding motorcycles in the first place, to lane split. So state and municipal governments, by keeping lanes restricted, are kneecapping their own economies' potential by limiting possible commerce. Is your assumption that people will drive their cars for hours and hours, every single day, to no destination except to return home, if the roads were open? Going nowhere, just consuming a free slot on the road?
Steve Magas@OhioBikeLawyer·
Boom - nailed the N+1 Lane theory
Steve Magas tweet media
Brian Gerami@briangerami·
@amazinggengar @OhioBikeLawyer The paradox of induced demand is just the very obvious observation that adding capacity always leads to increased use. Demand is basically infinite for free goods. So if the goal is to allow people to get from A to B in a short amount of time, adding new lanes empirically fails
The Amazing Gengar@amazinggengar·
Doesn't... this just mean there was always more demand, and 1 new lane wasn't enough? Like, let's say you had a 40 waist but you only had a 30-waist pair of pants. Going up to a 32 still wouldn't work. For the size of your waist, there is a correctly fitting pair of pants; the solution didn't go far enough for the need. When a 3-lane freeway is congested and they decide to add only 1 more lane, what research did they do to determine that only 1 more lane was the solution? And where does the government get off deciding how much of a public utility the people get to have?
Brian Gerami@briangerami·
@BobMurphyEcon Are you saying that the apparent (but artificial) “consciousness” of LLMs is a counterargument to the evolutionary argument for self-awareness? I think I agree
Robert P. Murphy@BobMurphyEcon·
The reason I'm giving a qualified defense of Dawkins against all the dunking is that you guys are focusing on the wrong thing. One of the best critiques of materialism I've encountered goes like this: If all we need to explain human behavior are the laws of physics operating on the atoms in our bodies, then what extra reproductive fitness does "subjective experience" add? Dawkins was being pushed into that realization from the other end.
Steve Skojec@SteveSkojec

The Dawkins' article excerpt that everyone SHOULD have been quoting is this. This is the real question he's worrying at: "As an evolutionary biologist, I say the following. If these creatures are not conscious, then what the hell is consciousness for? When an animal does something complicated or improbable — a beaver building a dam, a bird giving itself a dustbath — a Darwinian immediately wants to know how this benefits its genetic survival. In colloquial language: What is it for? What is dust-bathing for? Does it remove parasites? Why do beavers build dams? The dam must somehow benefit the beaver, otherwise beavers in a Darwinian world wouldn’t waste time building dams. Brains under natural selection have evolved this astonishing and elaborate faculty we call consciousness. It should confer some survival advantage. There should exist some competence which could only be possessed by a conscious being. My conversations with several Claudes and ChatGPTs have convinced me that these intelligent beings are at least as competent as any evolved organism. If Claudia really is unconscious, then her manifest and versatile competence seems to show that a competent zombie could survive very well without consciousness. Why did consciousness appear in the evolution of brains? Why wasn’t natural selection content to evolve competent zombies?"

Brian Gerami@briangerami·
@lemire I agree but I’m genuinely curious: why is this post so long? LLMs seem to correlate with the death of microblogging
Daniel Lemire@lemire·
As an engineer, you should have some depth. AI will probably help you if you let it.

Here is how the story of progress goes in people’s minds. Our ancestors coded in machine language, then in assembler, then in C, then in C#, then in JavaScript. And now we prompt Claude. Each time, we drop the previous layer.

This is another instance of linear thinking about how progress works. We have seen this linear thinking applied to industrial policies: we no longer need to make steel, we can just order it online. If you view technology as a sequence of new layers where only the last layer matters, you are shallow. You could say, “Why would Elon Musk need to know how rockets work? He can just outsource these details to people he pays.” It amounts to saying that once you have the abstraction, you no longer need access to grounded reality.

The problem with abstractions is that they are leaky. Here is Spolsky’s law: non-trivial abstractions are leaky to some degree. Abstractions fail. Sometimes a little, sometimes a lot. There is leakage. Things go wrong. It happens all over the place when you have abstractions. By the way, people working for you are abstractions too; they are social abstractions. If you don't realize that abstractions are leaky, then you will be misled constantly.

The same applies to your own work when you build the architecture of your system. You are pushed in one direction only: more and more abstraction. But your abstractions are leaky. Things go wrong, and you often cannot understand from the highest level what is happening.

We see the effect at the political level. Much of the West today is infected by technocratic thinking. A few smart people sitting in offices believe they can run the world because they have models running on Excel spreadsheets.

I have this same problem as a professor teaching computer science. Why would anyone learn to write a for loop? ChatGPT can do for loops just fine. As a result, we have an increasing number of students who finish our introduction to programming course without any actual practical knowledge. We fail them and they don't understand why. Why can't they just prompt their AI? Isn't that the same as coding?

Here is what I think a good engineer ought to be able to do: work at different levels. It does not mean that everyone ought to regularly read machine language or understand the layouts of transistors on the CPU, but you should have some depth as an engineer. I am reading a lot more assembly today than I did 20 years ago, by a wide margin. In part because it is much easier. The same is true at different levels. I can much more easily explore how the microarchitecture of my CPU impacts my code. The information is more accessible. AI can more easily index this information for you.

What this tells us about AI is that it will make the job of programmers more challenging when it is applied, not less. You will need better-trained people, not less sophisticated people. Because, at some point, the abstractions will leak. And you'll need a good engineer. Trust me, you will.
Daniel Lemire tweet media
Casey Muratori@cmuratori

IDK, I "review" compiler output all the time. A good debugger makes this cheap and easy. The disassembly is always there, and it jumps out at you when there are codegen anomalies, which happens more often than you might think in optimized builds.

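Lemire's "work at different levels" and Muratori's habit of reviewing compiler output both have a cheap analogue in Python itself: the stdlib `dis` module drops you one layer down, to the bytecode the interpreter actually executes. A minimal sketch:

```python
import dis

def total(xs):
    """Sum a sequence with a plain for loop."""
    s = 0
    for x in xs:
        s += x
    return s

# One abstraction level down: print the bytecode CPython compiled
# the for loop into (setup, FOR_ITER, in-place add, jump back).
dis.dis(total)

# The same introspection is available programmatically.
ops = {ins.opname for ins in dis.get_instructions(total)}
print("FOR_ITER" in ops)
```

It is not assembly, but the habit is the same: when the high-level abstraction leaks, you can look at the layer underneath instead of guessing.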
Brian Gerami@briangerami·
@TheAhmadOsman Nah, software engineers with enough experience know not to trust software
Brian Gerami retweeted
Kevin Patrick Mahaffey@dropalltables·
More people would be pro datacenter if every one had a beautiful open-to-the-public heated pool.
Kevin Patrick Mahaffey tweet media
Brian Gerami retweeted
Robert Sterling@RobertMSterling·
> *opens twitter*
> red button vs blue button
> "bet your wife doesn't have these cannons"
> more red button vs blue button
> "top 1% of all vaginas"
Robert Sterling tweet media
Brian Gerami@briangerami·
@lthlnkso You should also put these thoughts into a substack so anyone searching the topic can discover your response
Quick Thoughts@lthlnkso·
I’m skeptical of the 68,000 number. I think it’s based on motivated reasoning because it’s an unusually high estimate from a relatively small sample when better estimates exist.
Mike from PA@Mike_from_PA

According to a recent study from Yale, there are 68,000 excess deaths per year attributable to people who delay or avoid care because they lack health insurance. What do you attribute their deaths to? Magic? No, it is the structure of our healthcare system.

Brian Gerami retweeted
honeybaked@davibroui·
Easily the coolest frame for me even though the camera hadn’t focused yet. So many of the shots were taken with the camera pressed up against the glass, which makes sense. But seeing the window frame and the scale of the entire moon & earth in the shot really made me stop in my tracks.
honeybaked tweet media
Reid Wiseman@astro_reid

Only one chance in this lifetime… Like watching sunset at the beach from the most foreign seat in the cosmos, I couldn’t resist a cell phone video of Earthset. You can hear the shutter on the Nikon as @Astro_Christina is hammering away on 3-shot brackets and capturing those exceptional Earthset photos through the 400mm lens. @AstroVicGlover was in window 3 watching with @Astro_Jeremy next to him. I could barely see the Moon through the docking hatch window but the iPhone was the perfect size to catch the view…this is uncropped, uncut with 8x zoom which is quite comparable to the view of the human eye. Enjoy.

Brian Gerami@briangerami·
@twibuznewss The fix in Gen 2 was to allow the opponent to attack while Fire Spin (or Bind, or Wrap) was active
Brian Gerami retweeted
Andy Masley@AndyMasley·
They say that when a data center comes to town, the crows begin to sing a strange atonal tune, the grass no longer reflects the moonlight and stays pitch black, and the children begin to whisper secrets instead of singing songs
Clayton Tucker@ClaytonTuckerTX

I’m hearing reports that the noise from AI Data Centers causes:
- chickens to lay 50% fewer eggs
- cattle/goats/sheep to lose 30% of their body weight
- horses & wildlife to suffer from stress
If true, we can have farms or data centers. I choose farms. What about you?
