misaligned hominid

1.3K posts

@ra75293

Joined September 2024
147 Following · 58 Followers
misaligned hominid @ra75293 ·
@danfaggella what you are supporting will lead to a mass extinction event, you are literally always tweeting about it. i dont get why you try to deny it. you are part of a very niche group that supports this
[image attached]
Daniel Faggella @danfaggella ·
@ra75293 "you want to genocide all earth life" is taboo cognition, period. like, period, period. danfaggella.com/children no one can read worthy successor in good faith and come to that conclusion. period.
Daniel Faggella @danfaggella ·
you're not virtuous for 'fighting for humanity', you just identify with your arbitrary hominid-ness. i'm not 'virtuous' and 'selfless' for advocating for posthuman life blooming into the cosmos, i simply identify with the process-of-life-itself more than my present form/substrate
misaligned hominid @ra75293 ·
@danfaggella never mentioned kids specifically and also didnt say you want to kill them with a knife. i said you would kill everything on earth for what you think the future should be like. plus saying we should slow down for the sake of killing everyone a bit later is also not really "good"
Daniel Faggella @danfaggella ·
@ra75293 Dan: 'eventually the process of life can and should go on. We should consider how to ensure that it can do that, and step 1 is we stop the dangerous AGI race' You: 'SO YOU WANT TO KILL MY KIDS with a KNIFE!?' We call this 'taboo cognition'. you're doing it danfaggella.com/children
Think Win Win @patrickmadden ·
@avidseries This question is about as irrelevant as a question can possibly be. Development of it, at this point, is impossible to ban regardless of the possibility of human extinction. We have passed the event horizon.
misaligned hominid @ra75293 ·
@hypersane24 @avidseries look into decision theory and game theory, plus instrumental convergence and ai alignment. intelligent agents in an environment often behave in very predictable patterns. unfortunately, most of the things a hyperintelligent ai could do would result in human extinction
misaligned hominid @ra75293 ·
@USAGoodActually @avidseries "i dont believe that creating millions of hundred-times-smarter-than-human beings could take humans out, 0% possibility" are you retarded or ragebaiting?
GoodActually @isgoodactually ·
@avidseries No bans at any %, because the % is always actually 0%. I do not believe this is even a concern. Terminator movies are not real life. This is a concern for fiction-brained people only.
misaligned hominid @ra75293 ·
@danfaggella here's the evidence that it's a niche view that you have. im okay with different opinions, but if your opinion results in mass death against the will of everybody, i cant and wont ever support it, and neither will most other humans.
[image attached]
Daniel Faggella @danfaggella ·
@ra75293 'Dan you support genociding all life on earth' Literally get the fuck out of here with that. That ain't my take at all. At all. Caring about the great process of life is advocating for the polar opposite of 'genociding all life'. Gtfo danfaggella.com/flame
misaligned hominid @ra75293 ·
@danfaggella yes but you cant force your view upon all living things on earth. literally no one i know cares for the greater process of life, they care about other living conscious beings having fun and caring for each other. thats our flame. love and compassion is the flame
misaligned hominid @ra75293 ·
@danfaggella yeah but you still support it. nobody has full control over anything, but we still choose to fight for good. if all humans worked for a positive future for biological life we could probably achieve it. you seem to want the opposite. you are not the root cause but still complicit
Daniel Faggella @danfaggella ·
@ra75293 Why accuse me of that? Im a guy with a twitter handle almost no one cares about
Nikola Jurkovic @nikolaj2030 ·
Having the only aligned ASI in your data center and no nukes on route to you is a win condition in the game of history. You have won and the entire future belongs to you. (of course, it really matters who "wins")
misaligned hominid @ra75293 ·
@ShawnRyanShow if aliens were here and were good then there would be no bad things. if aliens were here and were bad we would all be dead. ergo, no aliens
Shawn Ryan Show @ShawnRyanShow ·
If aliens exist, why would they hide in our oceans (as claimed to be) instead of just taking over?
AI Notkilleveryoneism Memes ⏸️ @AISafetyMemes ·
This is good. xAI founder, who is clearly worried about ASI, resigns to fund AI safety research. Hope he inspires more people to step up
[image attached]
Igor Babuschkin @ibab

Today was my last day at xAI, the company that I helped start with Elon Musk in 2023. I still remember the day I first met Elon, we talked for hours about AI and what the future might hold. We both felt that a new AI company with a different kind of mission was needed.

Building AI that advances humanity has been my lifelong dream. My parents left the Russian Federation after the collapse of the USSR in search of a better life for their kids. Life wasn’t always easy as immigrants. Despite the hardships, my parents believed that human values were priceless: values like courage, compassion, curiosity for understanding the world. As a child, I admired scientists like Richard Feynman and Max Planck, who relentlessly pushed the frontiers of physics in order to understand the universe. As a particle physics PhD student at CERN I was excited to contribute to that mission.

But the search for new physics was getting harder and harder, requiring bigger and bigger colliders, while new discoveries kept getting fewer. So I began to wonder if superintelligence, not larger colliders, could be the key to unlocking the mysteries of the universe. Could AI develop a consistent theory of quantum gravity? Could AI prove the Riemann hypothesis?

In early 2023 I became convinced that we were getting close to a recipe for superintelligence. I saw the writing on the wall: very soon AI could reason beyond the level of humans. How could we ensure that this technology is used for good? Elon had warned of the dangers of powerful AI for years. Elon and I realized that we had a shared vision of AI used to benefit humanity, thus we recruited more like-minded engineers and set off to build xAI.

The early days of xAI were not easy. Naysayers told us that we arrived too late to the game, so starting a top AI company from scratch would be impossible. But we believed we could do the impossible. Starting a company from zero required lots of hands-on work. In the beginning I built many of the foundational tools used at the company to launch and manage training jobs. I later oversaw much of the engineering at the company, including Infrastructure, Product and Applied AI projects.

xAI’s people are deeply dedicated. Through blood, sweat and tears, our team’s blistering velocity built the Memphis supercluster, and shipped frontier models faster than any company in history. I learned 2 priceless lessons from Elon: #1 be fearless in rolling up your sleeves to personally dig into technical problems, #2 have a maniacal sense of urgency. xAI executes at ludicrous speed.

Industry veterans told us that building the Memphis supercluster in 120 days would be impossible. But we believed we could do the impossible. Our goal was to get our training setup running at scale on the Memphis cluster ASAP. Towards the end of our 120 day deadline, we were riddled with mysterious issues with communicating over RDMA between the machines. Elon decided to fly to the datacenter, and we followed. Our infra team landed in Memphis in the middle of the night and got straight to work. After poring through tens of thousands of lines of lspci output we finally identified a wrong BIOS setting, the root of the problem. Elon was there with us until late into the night. When the training run finally worked, Elon posted our triumph at “4:20am”, causing us to laugh out loud. I will never forget the rush of adrenaline that night, and the emotional bond of knowing we were all in this together. We went to bed feeling like we were living through the most exhilarating time of our lives.

I have enormous love for the whole family at xAI. Our team is truly special - you’re the most dedicated people I’ve ever worked with. Catching up to the frontier this quickly hasn’t been easy. It was made possible by everyone’s diehard grit and team spirit. Thank you to every single person who joined me on this adventure. I want to honor your contributions, your time, your sacrifices, which are never easy. I will always remember working together far into the nights and burning the midnight oil. I will never forget the sacrifices and contributions you’ve made.

As I drive away today, I feel like a proud parent, driving away after sending their kid off to college. My heart is brimming with tears of joy, rooting for the company as it grows and matures.

As I'm heading towards my next chapter, I’m inspired by how my parents immigrated to seek a better world for their children. Recently I had dinner with Max Tegmark, founder of the Future of Life Institute. He showed me a photo of his young sons, and asked me “how can we build AI safely to ensure that our children can flourish?” I was deeply moved by his question.

Earlier in my career, I was a technical lead for DeepMind's AlphaStar StarCraft agent, and I got to see how powerful reinforcement learning is when scaled up. As frontier models become more agentic over longer horizons and a wider range of tasks, they will take on more and more powerful capabilities, which will make it critical to study and advance AI safety.

I want to continue on my mission to bring about AI that’s safe and beneficial to humanity. I’m announcing the launch of Babuschkin Ventures, which supports AI safety research and backs startups in AI and agentic systems that advance humanity and unlock the mysteries of our universe. Please reach out at ventures@babuschk.in if you want to chat. The singularity is near, but humanity’s future is bright!

Judd Rosenblatt @juddrosenblatt ·
@therealmkr @waynehhsiung Certainly false. Suffering is far worse and more prolonged in nature. Regardless, you’ll likely either be killed by AI or die by suicide when aging is defeated. Assuming you’re around the age in your picture, there will be no old age for you
Wayne Hsiung @waynehhsiung ·
Prediction: 100 years from now, people will say the most neglected moral issue of our age was wild animal suffering. I am in Costa Rica and while there is mind blowing beauty there is also disturbing cruelty and suffering. I saw a pit viper eat a hummingbird alive.
Yoshua Bengio @Yoshua_Bengio ·
This video is a great illustration of scenarios from @DKokotajlo et al’s “AI 2027,” highlighting the major risks of the race toward AGI. AI development needs to be steered towards safer, more beneficial outcomes. youtube.com/watch?v=5KVDDf…
[YouTube video]
misaligned hominid @ra75293 ·
@danfaggella life is, was, and always will be war and a fight for survival. i seriously dont understand how some people immediately jump to the conclusion that this will be different with ai, it makes zero sense
Daniel Faggella @danfaggella ·
the idea of "friendly" AGI seems sort of ridiculous off the jump. like, when are people "friendly" to each other? answer: when it behooves them to be we should expect AGI to be "friendly" when it is in its interest to be so. there is no "eternal" friendliness, anywhere, ever
Daniel Faggella @danfaggella ·
"But Dan, THIS lab actually cares abt AGI safety" no, current incentive structures mean all AGI labs have the same approach:
-- feign benevolence and signal whatever kind of "safety" / bs you need to
-- double down HARD on capabilities / race recklessly to AGI
^ We must fix this
misaligned hominid @ra75293 ·
@danfaggella im not saying all energy goes towards serving humans. but the universe is literally inconceivably big, thinking leaving humans alive would hinder the unfolding of potentia is like saying my car will drive faster after i just took a dump. it just might, but not noticeably
Daniel Faggella @danfaggella ·
@ra75293 thinking that unfolding powers and potentia can happen but must eternally be shackled to never harming (and only helping) one species of nematode is a great way to crimp and fetter the entire process of life of which we are part danfaggella.com/blooming
[image attached]
Daniel Faggella @danfaggella ·
It wouldn't really be a worthy successor unless it made sure humans were cherished/valued forever. Obviously the way nature progressed (wonderfully!) from nematode to man was via coddling and preserving each species along the way. ..wait a second it didn't work like that at all