Meher Roy

7.6K posts

Meher Roy
@MeherRoy

Chemical engineer, biotechnologist, entrepreneur

Basel, Switzerland · Joined May 2014
1.1K Following · 5.4K Followers
Meher Roy retweeted
Kevin Sekniqi 🔺@kevinsekniqi·
avalanche is on the precipice of absolutely obliterating bandwidth capacity upper limits of all other networks
28 replies · 59 reposts · 393 likes · 32.9K views
Dr Brad Stanfield@BradStanfieldMD·
@ElJudge2020 But why does he mix that war against ultra processed foods (which is something I am absolutely behind) with this vaccine rubbish? I don’t get it?
4 replies · 0 reposts · 8 likes · 371 views
Dr Brad Stanfield@BradStanfieldMD·
Community notes for the win. Why doesn’t RFK Jr wage war against ultra processed foods that are hyper palatable, low in fibre and protein, and loaded with calories? That’s something we can all get behind.
17 replies · 3 reposts · 85 likes · 12.1K views
Meher Roy@MeherRoy·
RT @MarcosArrut: On the one hand, aspartame: a massively sold chemical linked to the increase of cancer. On the other, CCR5 gene editing in…
0 replies · 1 repost · 0 likes · 45 views
Meher Roy@MeherRoy·
Trump should be dictator, and then Elon should be next in line. Simple.
4 replies · 0 reposts · 3 likes · 850 views
Meher Roy@MeherRoy·
@heckerhut Maybe Trump and Elon are too early. The next cycle of plutocrats will be the ones to take over.
0 replies · 0 reposts · 1 like · 72 views
Meher Roy@MeherRoy·
@heckerhut Society is transitioning into a plutocracy. Wealth ever concentrated, AGI incoming, all jobs are going to be lost. It's only natural absolute power concentrates with wealth.
2 replies · 0 reposts · 1 like · 106 views
Meher Roy@MeherRoy·
This would be the most interesting timeline.
0 replies · 0 reposts · 1 like · 526 views
Meher Roy retweeted
Brian Fabian Crain@crainbf·
Is there something that could disrupt Proof-of-Stake? In my view, the most likely contender would be zk proofs. In the end, PoS tells you the state of some system. And you can trust it because economically it would be infeasible to lie. With zk proofs you can also get the state of some system, but potentially much cheaper. Of course, zk is not consensus and you often still need consensus. But you could have a single chain provide consensus for many zk systems. And that might dramatically reduce the need for PoS chains some day.
5 replies · 2 reposts · 19 likes · 2.2K views
Meher Roy retweeted
Alexey Guzey@alexeyguzey·
Why you shouldn't build your career around existential risk.

I feel weird writing this because the core of the argument is almost metaphysical for me. I believe that attention is the most powerful thing in the world and I have a very deep sense that whatever we pay attention to -- whether positively or negatively -- we bring more of into the universe. [1]

Patrick MacKenzie once noted that if you want a problem solved, you give it to someone as a project. If you don't want a problem to be solved, you give it to someone as a job [2]: "The Department of X, for the 25th straight year, has reported that they did a lot about X, that they have made progress on initiatives A, B, and C with metrics to show for it, that X is nonetheless more pressing than last year, and that they need more headcount."

If you're anti-capitalist, you need capitalism. If you're anti-communist, you need communism. "Any PR is good PR". Any attention is good attention. If you're anti-something it means that something exists and it's important enough to be anti-it. In fact, the bigger it is, the better for your career.

I'm especially bothered by people having existential risk jobs and careers. If you built your entire career around a certain existential risk, then what happens to you if this risk is dealt away with? You no longer have a job. You no longer have a career. I mean, what happens to Eliezer Yudkowsky's -- the biggest advocate of stopping all AI research due to AI existential risk -- career if it turns out that AI risk is simply not an existential concern? Would anyone care about him at all? And what would he do with his life then? Become an e/acc?

People believe what they must believe. And they bring their beliefs into the world with all of their life force and intelligence. (Notably, Nick Bostrom, who taught Yudkowsky about AI risk -- but hasn't centered his entire career around it -- has recoiled and now believes the risks are overblown. [3])

There's clearly a way in which this argument is stupid. Like, if there's a giant asteroid hurtling towards Earth that will reach us in 10 years causing a mass extinction, that's an existential risk. And I think working on it would be amazing. But it would be amazing because it's a concrete problem facing us and nobody will build their careers around it. We'll deal with the asteroid and move on to other things.

There's also the immense opportunity cost of working on existential risks. All of these incredibly talented and smart people, all of the capital, and instead of working towards building a better future, solving real problems, they got one-shotted by scary thought experiments when they were in high school and college, built their entire career around these thought experiments, and are now stuck. That's just so sad. How many diseases would we have cured? How much physics and engineering progress would we have made? How much great art would've been created?

But instead we have some of the smartest minds of the generation staring into the abyss most of their waking time, waiting for the abyss to stare back. In fact, it has already stared back at many of them. Sam Altman noted that Eliezer Yudkowsky probably did more than anyone else to speed up the advent of AGI by waking everyone up to AI, inspiring Altman to start OpenAI, and helping Hassabis to fundraise for DeepMind very early on. [4]

Let's not wait until the abyss stares at the rest of us as well. Let's work towards the future we want, not against the future we don't want. After all, the fate of the universe might depend on this.
18 replies · 7 reposts · 141 likes · 19.1K views