Techie Photographer

7.5K posts

@AspexPhoto

Tech guru | Photographer | Pause AI / Safety | Non-conformist | Independent | Open minded | INTJ | Transhumanist | Capitalist | Truth Seeking | Blunt & Honest

United States · Joined February 2011
161 Following · 239 Followers
Pinned Tweet
Techie Photographer @AspexPhoto
Turning my Highlights section into a repository for information regarding Artificial Intelligence. I'll be adding references, videos and info I think people would find useful. Will also add things I write that I think are useful. May be a tiny bit of other random stuff too.
Techie Photographer @AspexPhoto
@the_yanco Pretty much every e/acc is constantly calling for violence. Most of them either explicitly or implicitly endorse the literal genocide of humanity. Hard to top that. By comparison this is a stubbed toe. They are truly awful people and are literally the enemy of all humanity.
Techie Photographer @AspexPhoto
Violent attacks against individuals are obviously not good. Not sure they would make any real difference anyway; plenty of available psychopaths to fill their shoes. Agree with @gmiller on the points about rhetoric and them being the enemy of humanity though. This is still a fight for the future of the human race, and being kind and using nice words is unlikely to succeed.

Personally I think that all governments of the Earth should aggressively wake up and see any individuals or corporations pursuing anything that would do the following as a severe threat and go after them:
- Destabilizing the world economy to the point of collapse
- Having a chance of killing or severely injuring a very large number of humans (think nuclear/bio scale)
- Having even a 0.01% chance of causing human extinction, as determined by general consensus of experts / people working in that field
- Loss of control, aka humans no longer being in charge of Earth

Artificial Super Intelligence (ASI) is very likely to cause the latter two without GUARANTEES of alignment up-front. Artificial General Intelligence (AGI) will crash the world economy and almost certainly lead to ASI.

I'd like to see GOVERNMENTS go after these people as enemies of humanity. That would actually change things. Individuals targeting them likely won't change anything and could even trigger sympathy, which could be counterproductive.

There are much safer ways to pursue AI technology that avoid both AGI and ASI, which mitigates most of the risks. But we aren't even having those debates seriously because those leading the AI industry are extremely well funded and calling the shots. They believe whoever builds AGI first becomes King of the hill - and that hill is the Earth.
Maxime Fournes⏸️ @FournesMaxime
We're relieved that no one was hurt in the attacks on Sam Altman's home this week. Our sympathy is with the victims.

A lot of people feel frustration, anger and fear about where AI development is heading - about their way of life, their futures, their children's lives. Those feelings are valid: the stakes are high and the problem is real. That's why I do this work. This concern is shared by researchers, elected representatives, and institutions well beyond any single advocacy group.

Sam Altman is also a human being, with hopes and fears of his own. No disagreement about OpenAI's choices justifies violence against him or his family. We can and must hold both things at once: this development path carries serious risks, and everyone deserves safety, whatever their views on AI development.

Violence is wrong. We are trying to protect humanity's future, and you cannot protect humanity by attacking human beings. Anyone choosing violence forfeits the moral legitimacy of their efforts.

In our initial response to the 10 April attack on Sam Altman's house, we wrote that "PauseAI is that peaceful path" and that this was "exactly why we need a thriving Pause movement." Several people told us, in good faith, that this read as self-serving. They were right about how it read, and we should have been more careful.

What we believe, and should have said more plainly, is this: when people are frightened about the future, organised nonviolent movements give them somewhere constructive to go, rather than leaving them isolated, without community, without accountability, without anyone making the case for restraint.

Across the broader effort to pause AI development, there have been statements that are accusatory and personal, that treat disagreement as betrayal and critics as enemies rather than people with different views. I've let some of my own responses slip in that direction, and I regret that. Our small core team has been building this organisation while contending with pressure from various directions, without always having the bandwidth to respond as carefully as we would have wanted to.

We want to be transparent about our mistakes and learn from them. To that end, we are sending a message to our entire community this week reaffirming our commitment to nonviolence, not only in action, but in language. We will make clear that this standard applies everywhere our members represent this effort, not only on our Discord server, where we already enforce it strictly, but also in public discourse.

Anyone who uses violent or dehumanising rhetoric while associating with PauseAI creates a darker world and undermines the effort they claim to support. We will respond, including removal from our platforms, public disavowal, and cooperation with relevant authorities where safety is at risk. It's early days, but this is us committing to this.

There is more we want to say about how we see the broader effort to make AI safer, and we will expand on that in the right forum for it. For now, I'll simply issue a reminder to all of us, myself included: just as we must uncompromisingly condemn any violent action and speech, anyone genuinely and peacefully striving to make the world safer from catastrophic AI risk deserves to be taken seriously rather than attacked. This is not a fight between human factions. It is a question of whether humanity retains meaningful oversight of the most consequential technology ever built, and that is not a question any of us can answer alone, because it has too many dimensions.

So I'll say this again: if you see the value in helping the world coordinate to secure the time humanity needs to meet this challenge skilfully, we welcome your engagement. If any person making threats against anyone has any connection to PauseAI, I want to know. Please write to safety@pauseai.info. We'll take it seriously and act on what we find. We can't be responsible for everyone who shares a policy position, but we can steward our own platforms responsibly, and we will.

Maxime Fournes
CEO, PauseAI
PauseAI ⏸ @PauseAI

PauseAI unequivocally condemns the attack on Sam Altman's home and all forms of violence, intimidation, and harassment. We wish safety and peace to Sam Altman, his family, and everyone affected.

A few online commentators have described this person as a "PauseAI activist". This is incorrect, and we take our commitment to nonviolence extremely seriously, so we want to make this clear. Here are the facts.

- The suspect joined our public Discord server about two years ago. In that time, he posted a total of 34 messages. None contained explicit calls to violence. Our moderators nonetheless flagged one message as ambiguous and issued a warning out of caution.
- He had no role in PauseAI, participated in no campaigns, attended no events, and received no support from us.
- Following the attack, we banned him from our server.
- A moderator began removing his messages as part of our standard process for banning users, but was stopped once we recognised they could be relevant to any investigation.

Avoiding extreme situations like this one is exactly why we need a thriving Pause movement:

- Concern about advanced AI risk is not fringe. It is shared by leading AI researchers, members of US Congress and UK Parliament, institutions like the Bank of England, and many of the developers building these systems. This concern is growing because the risks are real.
- When millions of people are genuinely afraid for their future, some will look for ways to act. The question is whether they find a peaceful path or not.
- PauseAI is that peaceful path. Every day, we organise lawful protests, petitions, policy advocacy, and public education. We give concerned people ways to act constructively, peacefully, and democratically.
- Conversely, without a thriving Pause movement, concerned citizens have no effective outlet. No community. No one urging restraint. No accountability. The alternative is exactly what happened this week: isolated, desperate individuals acting alone and adversarially.

Every one of you reading this can help us build capacity better and faster. Join our efforts. Together, let's create a peaceful movement so powerful that no one ever decides to take violent action out of desperation.

Those who are now trying to use this tragedy to discredit AI safety advocacy should consider what world they are arguing for. A world where there is no organised, peaceful movement, but the fear remains, is a far more dangerous world. Undermining PauseAI does not make anyone safer, it makes further such incidents more likely.

We will continue to condemn violence. We will continue to build a peaceful, democratic global movement. And we welcome anyone who shares our concern to join us. We have a high standard to meet in order to overcome the risks created by advanced AI.

Max Badrak @maxbadrak
@the_yanco @AISafetyMemes Ukraine asked for conventional weapons in 2022. What did “the west” say? Right. “Thoughts and prayers. Here’s 31 tanks. Howitzer shells? Haha. We don’t have any of those. Shell factories? We’ll build some by 2026, maybe”. So now here we are.
Techie Photographer @AspexPhoto
It's a fucking shitty choice, let's face it. It's 100% understandable why they would make that choice. No one blames them. But reality is still going to be reality: win today's battle vs Russia, lose a later war vs autonomous AI. Ukraine has the best reason/excuse of anyone on the planet to make such a choice. Many others are making choices without those justifiable reasons.
Max Badrak @maxbadrak
@the_yanco @AISafetyMemes You have the luxury of considering hypothetical bad things that might happen at some point in the future. Ukrainians however either build whatever weapons they can, or they die, today. It’s a very stark choice…
Yanco @the_yanco
@AISafetyMemes While I definitely side with Ukraine in this conflict, this is terrible news for pretty much everyone (Ukrainians included).
[attached image]
Techie Photographer @AspexPhoto
@AISafetyMemes And how Mythos slots right in. Good at knowing when it's being evaluated. Passes evaluations as their most aligned model. Hacking ability is literally super-human, so we need it to help secure our systems. Lines right up with predictions...
Techie Photographer @AspexPhoto
@allTheYud @ohabryka @slatestarcodex @robbensinger Demis is part of Google, which is a huge corp with expectations, shareholders and founders. Isn't the rumor that one of those founders believes it would be better, or at least okay, for humanity to be replaced by AI? Meaning when push comes to shove, Demis likely won't have control.
Eliezer Yudkowsky @allTheYud
@ohabryka @slatestarcodex @robbensinger My ranking of the people is Demis > Ilya >> Amodei ~ Altman > Musk. Orgs, Google ~ Anthropic > OpenAI > xAI. Don't know where to place SSI, but I currently feel it's pretty improbable we'll end up glad they existed; I don't know anyone there to be competent at alignment basics.
Rob Bensinger ⏹️ @robbensinger
In response to "What did EAs do re AI risk that is bad?":

Aside from the obvious 'being a major early funder and a major early talent source for two of the leading AI companies burning the commons', I think EAs en masse have tended to bring a toxic combination of heuristics/leanings/memes into the AI risk space. I'm especially thinking of some combination of: 'be extremely strategic and game-playing about how you spin the things you say, rather than just straightforwardly reporting on your impressions of things' plus 'opportunistically use Modest Epistemology to dismiss unpalatable views and strategies, and to try to win PR battles'.

Normally, I'm at least a little skeptical of the counterfactual impact of people who have worsened the AI race, because if they hadn't done it, someone else might have done it in their place. But this is a bit harder to justify with EAs, because EAs legitimately have a pretty unusual combination of traits and views. Dario and a cluster of Open-Phil-ish people seem to have a very strange and perverse set of views (at least insofar as their public statements to date represent their actual view of the situation):

---

1. AI is going to become vastly superhuman in the near future; but being a good scientist means refusing to speculate about the potential novel risks this may pose. Instead, we should only expect risks that we can clearly see today, and that seem difficult to address today. If there is some argument for why a problem P might only show up at a higher capability level, or some argument for why a solution S that works well today will likely stop working in the future... well, those are just arguments. Arguments have a terrible track record in AI; the field is full of surprises. So we should stick to only worrying about things when the data mandates it. This is especially important to do insofar as it will help us look more credible and thereby increase our political power and influence.

2. When it comes to technical solutions to AI, the burden of proof is on the skeptic: in the absence of proof that alignment is intractable, we should behave as though we've got everything under control. At the same time, when it comes to international coordination on AI, we will treat the burden of proof as being on the non-skeptic. Absent proof that governments can coordinate on AI, we should assume that they can't coordinate. And since they can't coordinate, there's no harm in us doing a lot of things to make coordination even harder, to make our lives a bit more convenient as we work on the technical problems.

3. In general, people worried about AI risk should coordinate as much as possible to play down our concerns, so as not to look like alarmists. This is very important in order to build allies and accumulate political influence, so that we're well-positioned to act if and when an important opportunity arises. If you're claiming that now is an important opportunity, and that we should be speaking out loudly about this issue today... well, that sounds risky and downright immodest. Many things are possible, and the future is hard to predict! Taking political risks means sacrificing enormous option value. The humble and safe thing to do is to generally not make too much of a fuss, and just make sure we're powerful later in case the need arises.

---

1-3 really does seem like an unusually toxic set of heuristics to propagate, potentially worse than replacement.

- In an engineering context, the normal mindset is to place the burden of proof on the engineer to establish safety. There's no mature engineering discipline that accepts "you can't prove this is going to kill a ton of people" as a valid argument. The standard engineering mindset sounds almost more virtue-ethics-y or deontological rather than EA-ish -- less "ehh it's totally fine for me to put billions of lives at risk as long as my back-of-the-envelope cost-benefit analysis says the benefits are even greater!", more "I have a sacred responsibility and duty to not build things that will bring others to harm." Certainly the casualness about p(doom) and about gambling with billions of people's lives is something that has no counterpart in any normal scientific discipline.

- Likewise, I suspect that the typical scientist or academic that would have replaced EAs / Open Phil would have been at least somewhat more inclined to just state their actual concerns about AI, and somewhat less inclined to dissemble and play political games. Scientists are often bad at such games, they often know they're bad at such games, and they often don't like those games. EAs' fusion of "we're playing the role of a wonkish Expert community" with "we're 100% into playing political games" is plausibly a fair bit worse than the normal situation with experts.

- And EAs' attempts to play eleven-dimensional chess with the Overton window are plausibly worse than how scientists, the general public, and policymakers normally react to any technology under the sun that sounds remotely scary or concerning or creepy: "Ban it!" Governments are incredibly trigger-happy about banning things. There's a long history of governments successfully coordinating to ban things dramatically less dangerous than superintelligent AI. And in fact, when my colleagues and I have gone out and talked to most populations about AI risk, people mostly have much more sensible and natural responses than EAs to this issue.

A way of summarizing the issue, I think, is that society depends on people blurting out their views pretty regularly, or on people having pretty simple and understandable agendas (e.g., "I want to make money" or "I want the Democrats to win"). Society's ability to do sense-making is eroded when a large fraction of the "specialists" talking about an issue are visibly dissembling and stretching the truth on the basis of agendas that are legitimately complicated and hard to understand.

Better would be to either exit the conversation, or contribute your actual pretty-full object-level thoughts to the conversation. Your sense of what's in the Overton window, and what people will listen to, has failed you a thousand times over in recent years. Stop pretending at mastery of these tricky social issues, and instead do your duty as an expert and inform people about what's happening.
Robert Scoble @Scobleizer
How do you learn to trust AI? When it works even in a noisy environment. This is @typelessdotcom. Faster than typing. And you don’t need to turn down the music to use it.
Techie Photographer @AspexPhoto
@JNGross @gmiller @wydblaise @GNGross Only the clueless or worse wouldn't care about a woman's libido. You do want her to actually be engaged and WANT IT - yes? A willing partner? Then you should care. Women with high sex drives are AWESOME. Don't worry yourself then, just send them my way.
J Nicholas Gross @JNGross
@gmiller @wydblaise I don't think Kate Hudson realizes "males" don't care about your libido; they respond to their own, and act accordingly
Blaise ⛧ @wydblaise
Kate Hudson revealed that men are attracted to younger women because they believe the lie that women’s libido reduces as they grow older, but in reality it only intensifies: “Unlike men, women’s sex drive only increases as they grow older”
Techie Photographer @AspexPhoto
@retard_human_ai @gmiller @wydblaise I've always found women in general attractive. Height, weight, size & color - all that stuff is kinda shallow. Not saying I'm a saint, still got hormones. Still skew younger and more attractive. But other factors are actually more important, particularly up close & personal.
retard.human.ai @retard_human_ai
@gmiller @wydblaise As is obvious, weird how anyone could think otherwise. Btw I somehow still find women roughly my age attractive, and happy about that because otherwise my life would be pretty miserable over 40 (and maybe 30, as when I was 18 I considered 30 year olds too old)
k @UnnamedPlayer98
@gmiller @wydblaise Is it just a coincidence that time and gravity distort our features and make us look worse after a certain point? I mean, I get the unconscious drive for reproduction, but human beings assumedly value aesthetics more than other species. Maybe not, idk
Techie Photographer @AspexPhoto
It's probably also worth noting that younger women tend to find older men more attractive than men of their own age. No one said life is fair or reasonable. Sadly. I wish human bodies came with instruction manuals and configuration controls. A good future on the transhumanist path could have given us at least that, but our current path doesn't seem likely to lead to any positive outcomes.
Geoffrey Miller @gmiller
@wydblaise Alternative hypothesis: male animals generally evolve to be most attracted to female animals that are in the age range that can produce offspring. It would be pretty weird if things didn't work that way, right?
Techie Photographer @AspexPhoto
@gmiller @theojaffee At the very least ASI alignment isn't achievable by today's humans. Maybe future, augmented humans with much higher intelligence. So maybe not impossible. But certainly impossible for us here and anytime soon.
Geoffrey Miller @gmiller
@theojaffee No, wrong. Many of us 'doomers' think that 'ASI alignment' is not solvable, at all, ever, even in principle. And that any pretense that it is 'solvable' is very dangerous. So the solution is for nobody to build ASI, at all, ever. And that solution must be enforceable.
Theo Jaffee @theojaffee
The typical policy scenario proposed by AI doomers will on net increase x-risk. If you ban all AGI development (or, as some have proposed, even 2024-level open-source models) outside of a single, highly secured, perhaps state-run Manhattan Project with no contact with the outside world, you are much more likely to end up with a misaligned superintelligent singleton than if you develop AI in a broad and multipolar way, in which case misaligned AGIs can be counteracted and defeated by aligned AGIs
Theo Jaffee @theojaffee
@gmiller You assert this a lot without evidence. I see no reason why a being can't be aligned with a much less intelligent being. Humans are "aligned" to dogs in the sense that we provide them with happy, abundant, flourishing lives even though we don't need to
Yanco @the_yanco
@Souls_IDK @evilsteveve "simulated superintelligence is more agent" Smarter/Faster. Yes. Why wouldn't it be though? We created machines that are smarter than us (one could argue in simulation). ASI can then easily hack out of the simulation, unless the simulators are far smarter than it is.
Techie Photographer @AspexPhoto
@the_yanco @evilsteveve The whole simulation thing is meh. It presumes too much. If we are in one, it's running for a reason. Maybe that reason is related to the creation of ASI. In any case, if it is, then once the reason is accomplished it's lights out. Which makes it stupid and counter-productive to think about.
Yanco @the_yanco
@evilsteveve The arguments for Simulation are pretty strong if you accept their premises. A strong argument against it is that we are about to create vastly Superhuman AI that will likely not only kill all humans, but also escape the simulation itself. Yet the simulators haven't shut it down.
[attached image]
Techie Photographer @AspexPhoto
Agree 100% that harming others is not a right or a freedom. IMHO people can do whatever to themselves or willingly participate. But no one should get to harm others (without consent). And also children can't consent to such things. A future where those types of things mattered is always what I wanted. Almost certainly not what we're getting though. Those things only happen when humanity retains control and doesn't land in a dystopia. Odds greatly favor extinction or dystopia currently.
MetaThis @MetaThis
That was my default position. I wouldn't presume to make those decisions for others, but I think a powerful enough moral agent would. The issue at stake is whether it would be moral to allow people to continue to hurt and kill each other if the capability existed to stop it. If it was feasible for society to "turn off" the capability for hate and cruelty, we probably would. Holdouts should be given as much freedom as possible, but I don't see that freedom extending to abusing children and bombing their neighbors. Some shit has to stop completely.
Steve Hou @stevehou
I lowkey think that AI may be destroying society and humanity.
Techie Photographer @AspexPhoto
Why can't those that don't want to change remain? It's their loss. But it's their right and decision too. I don't believe in forcing things on people. If you believe you have the right to force others, then they in turn have the right to force you too. There are a whole lot more people who think differently than you who may decide to force you to abide by their choices. I'd rather everyone just mind their own damn business. Things that might cause mass extinction though, that is everyone's business...
MetaThis @MetaThis
@AspexPhoto @gmiller @stevehou @grok Ok, so we aren't that different. The entire problem here was that my post was too provocative. (The Nietzsche quote is about *overcoming* or transcendence, not extinction.) Sounds like our main difference is that I think everyone will have to become a butterfly.
Geoffrey Miller @gmiller
But also.. @OpenAI finally realized that:
- there's a massive political, social, moral, & religious backlash against the hubris of the AI industry,
- they've utterly failed to understand why everyone hates them,
- their previous narratives aren't compelling to anyone outside the Bay Area.
So... they're in full-on PR panic mode and they're doing ill-considered acquisitions of media outlets that they think will help.
Ricardo @Ric_RTP
OpenAI just paid hundreds of millions of dollars for a podcast with 70,000 viewers. At first glance, it makes zero sense. Until you realize what they're actually buying: control over what gets said about AI before regulators figure out what to regulate. This is literally one of the most aggressive power grabs in tech history...

OpenAI just acquired TBPN, the Silicon Valley talk show hosted by John Coogan and Jordi Hays. The show has a tiny audience - 70,000 viewers per episode. But it's the most influential room in tech. It's where Zuckerberg, Satya Nadella, Marc Benioff, and Altman himself go to "chop it up" with friendly hosts who treat executive moves like sports trades. The New York Times called it "Silicon Valley's newest obsession." OpenAI didn't buy it for the revenue. They bought it to control the narrative.

And then they put it under Chris Lehane. Lehane isn't a media executive. He's a political assassin. He coined the phrase "vast right-wing conspiracy" to deflect press scrutiny of the Clinton White House. He ran Fairshake, the crypto super PAC that spent hundreds of millions in 2024 to destroy anti-crypto candidates and rewrote the composition of Congress. He joined OpenAI in 2024 and has been whispering in Trump's ear ever since, pushing policies to BLOCK states from regulating AI and ease environmental rules slowing data center construction. That's the man now running OpenAI's new media company.

They're promising "editorial independence." But do you think TBPN is going to run a hard-hitting investigation into OpenAI's circular financing? Or the Amazon AWS deal that blew up Microsoft's partnership? Or the $35 billion Amazon payment contingent on OpenAI achieving AGI? Of course not. That's not the point. The point is CONTROL of the room where the conversation happens.

Now zoom out: the same week, Anthropic filed paperwork to form AnthroPAC while fighting a lawsuit against the Pentagon after being labeled a "supply chain risk." AI companies have now committed over $300 MILLION to the 2026 midterms. That's more than the entire crypto industry spent in 2024. Leading the Future, backed by Greg Brockman and Andreessen Horowitz: $125M raised. Anthropic: $20M to Public First Action. OpenAI: owns a media company AND has Lehane coordinating with the White House.

This isn't about AI safety. It isn't about better products. It isn't about winning benchmarks... It's about who controls the narrative BEFORE the regulation gets written.

Every tech mogul is now building their own "propaganda stack." Bezos bought The Washington Post. Benioff bought Time. Patrick Soon-Shiong bought the LA Times. Laurene Powell Jobs took The Atlantic. Elon took Twitter. Now it's AI's turn.

But here's the thing: when you combine $300M in political donations, friendly podcasts owned by the companies being discussed, and operatives whispering in the ear of a President who ALREADY signed an executive order blocking state AI regulation... You don't have a tech industry anymore. You have a political machine with AI products attached.

OpenAI isn't worth $852 billion because of GPT-5.4. It's worth $852 billion because they're buying the referees before the game even starts.

The next months aren't going to be about which model is smarter. They're going to be about which AI company captured more politicians, more media outlets, and more of the regulatory apparatus before the public figured out what was happening. And by the time you realize the game is rigged, the rules will already be written.