John Connor
@aiissent

Human

2.6K posts
Joined February 2018
231 Following · 119 Followers
Tibo
Tibo@thsottiaux·
Growth of Codex continues to outpace our predictions (maybe we are not super good at that), and we almost ran out of capacity three days in a row. We have ramped up more significantly ahead of next week.
John Connor
John Connor@aiissent·
@SinaHartung if that's what makes you feel better. sorry it didn't work out.
Sina
Sina@SinaHartung·
if a man tells you he wants to focus on his company, he just isn’t that into you. men know when they meet wifey
John Connor
John Connor@aiissent·
@sama You shouldn't even be CEO. x.com/i/status/20371…
Ricardo@Ric_RTP

This AI whistleblower just EXPOSED Sam Altman for manipulating his way into becoming OpenAI’s CEO. Everyone who helped him build it has left because they felt used.

Karen Hao interviewed 300 people, including 90 current and former OpenAI employees. And she just told Steven Bartlett what she discovered:

In 2015, Altman needed Elon Musk to co-found OpenAI. Problem was, Musk was obsessed with AI as an existential threat. So Altman wrote a blog post calling AI "probably the greatest threat to the continued existence of humanity." Before that blog post? Altman's biggest fear was engineered viruses. Not AI. He literally rewrote his worldview overnight to mirror Musk's language word for word. Musk bought in. Donated millions. Co-founded the company.

Then Altman stabbed him in the back. When OpenAI needed a CEO for its new for-profit arm, the co-founders Ilya Sutskever and Greg Brockman initially chose Musk. Altman went directly to Brockman, a personal friend, and said: "Do we really want someone this erratic and unpredictable to control a technology that could be super powerful?" Brockman flipped. Then convinced Ilya to flip. Musk found out he wasn't getting the role and left. That's how the biggest rivalry in tech actually started. Not over ideology... Over a backroom power play.

But here's where it gets darker: Every single person who built OpenAI alongside Altman eventually felt the same thing Musk felt. Used. Manipulated. Discarded.

Dario Amodei, VP of Research, thought Altman shared his vision. Over time he realized Altman was on "exactly the opposite page" and had used his intelligence to build things he fundamentally disagreed with. He left and founded Anthropic.

Ilya Sutskever, co-founder and chief scientist, tried to get Altman fired. He told colleagues: "I don't think Sam is the guy who should have the finger on the button for AGI." He was pushed out and founded Safe Superintelligence. That name alone tells you everything.
Mira Murati, CTO, left and started Thinking Machines Lab. No other tech company in history has had every single co-builder leave and start a direct competitor. Not Google. Not Meta. Not Apple. NOBODY.

300 interviews exposed one consistent pattern: If you align with Altman's vision, you think he's the Steve Jobs of AI. If you don't, you feel like you were manipulated by someone who will say whatever is needed to whoever is listening. When talking to Congress? AGI will cure cancer and solve poverty. When talking to consumers? It's the best digital assistant you'll ever have. When talking to Microsoft? AGI is a system that generates $100 billion in revenue. Three completely different definitions of the same technology sold to three completely different audiences.

And if you publicly disagree with any of it? OpenAI subpoenaed 7 nonprofit organizations that criticized them. Sent a sheriff to a 29yo nonprofit lawyer's door during dinner demanding every text, email, and document he'd ever sent about OpenAI. A one-man watchdog nonprofit got papers demanding all communications with anyone who questioned the company. OpenAI's own head of mission alignment publicly said "this doesn't seem great." That's the guy whose literal job is making sure OpenAI BENEFITS humanity. Former employees who spoke up about secret non-disparagement clauses that threatened to strip their equity described the psychological pressure as "crushing."

This is the company that tells us it's building technology "for the benefit of humanity." Same company that mirrors whatever language gets them funded. Same company where every builder eventually walks away feeling deceived. Same company sending law enforcement to silence critics.

The biggest AI company on Earth wasn't built on technology. It was built on one man's ability to tell everyone exactly what they needed to hear. And the scariest part is that it worked.

Sam Altman
Sam Altman@sama·
The first steel beams went up this week at our Michigan Stargate site with Oracle and Related Digital
@levelsio
@levelsio@levelsio·
@joedevon @Scobleizer @nikitabier System is working as it is with this new feature. It becomes socially unacceptable to follow AI spam accounts, and that creates a social pressure for people to not post AI spam, because it affects not just them but their followers too!
Joe Devon
Joe Devon@joedevon·
There was a tiff between @levelsio and @Scobleizer over a change to the X algo by @nikitabier where you limit replies to 2 degrees: people you follow and people they follow. There’s a better solution but it also exposes a flaw in Nikita’s plan. Twitter is not a curated friend group. Some people follow back all followers. Some have strategies like Scoble who follows 40k people, but it’s curated. You cannot follow 40k people without some AI. Levels has no choice but to unfollow Scoble and anyone with a ton of followers if he wants this new algorithm to work. The better solution is the ability to disable 2nd degree followers for Scoble, without having to unfollow him entirely. The truth is, anyone who follows a lot of people breaks this algorithm. Another solution would take more work but is possible. The whole who you follow signal on twitter is so broken for historical reasons that few people use that feed. They use the For You feed. X should figure out who your “real” follows are. Whose stuff do you read? Whose stuff do you engage with? Who do you dm? Using that would be useful to X in many other ways too. Then limit replies to your real follows and their real follows. It may be nuanced how to message it to a non-technical audience, but the current model will fail.
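The 2-degree reply scheme Joe is critiquing can be sketched in a few lines of Python. This is purely illustrative: the `follows` map, the function name, and the sample accounts are hypothetical, not X's actual data model or implementation.

```python
# Sketch of the "people you follow + people they follow" reply filter.
# `follows` maps each user to the set of accounts that user follows.

def allowed_repliers(author: str, follows: dict[str, set[str]]) -> set[str]:
    """Accounts permitted to reply to `author`: 1st-degree follows
    plus everyone those accounts follow (2nd degree)."""
    first_degree = follows.get(author, set())
    second_degree: set[str] = set()
    for account in first_degree:
        second_degree |= follows.get(account, set())
    return first_degree | second_degree

# Hypothetical follow graph illustrating the flaw Joe describes:
# following one mass-follower pulls that account's entire follow
# list into your reply section.
follows = {
    "levelsio": {"scobleizer", "photomatt"},
    "scobleizer": {"spam_bot", "joedevon"},  # a mass-follower
    "photomatt": {"nikitabier"},
}

print(allowed_repliers("levelsio", follows))
```

Note that `spam_bot` ends up in the allowed set solely because `scobleizer` follows it, which is exactly why anyone who follows tens of thousands of accounts breaks the filter unless 2nd-degree expansion can be disabled per-follow.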
Runemir
Runemir@RunemirQi·
That's not for the public, at least not yet, but I've been in the web and app development business long enough to know what I'm talking about. AI is a total changer. I can do stuff on my own that I could only dream about before, or spend tens of thousands of dollars to get somebody to build it.
Kyle Gawley
Kyle Gawley@kylegawley·
why is everyone lying about how good AI is?
Matt Mullenweg
Matt Mullenweg@photomatt·
@levelsio @X One interesting approach would be like a web of trust, not just allowing people you follow but allowing people they follow. Creates huge pressure to unfollow sloppers.
@levelsio
@levelsio@levelsio·
I asked Claude Code to chart my AI reply detection. As I thought, it's going exponential now. There was a small slowdown from July to November last year, then it went back up, and with everyone running OpenClaw (I think) it's now going exponentially up again.

I think @X will have to either 1) detect AI replies and block them with some new tech like key detection as @nikitabier mentioned, and/or 2) default lockdown the reply section to people you follow and to people who subscribe to you or something like that. Or the reply section will just become unusable.
@levelsio tweet media
@levelsio@levelsio

80% of my replies are now AI, it's getting out of hand guys

corbin
corbin@corbin_braun·
are you a founder or doing something cool in san francisco? let's connect. big announcement soon. something this sf space has been missing for a very long time.
corbin tweet media
John Connor
John Connor@aiissent·
@sama OpenAI can start by firing you as you, Scam Altman, are a threat to society. You belong in jail.
Sam Altman
Sam Altman@sama·
AI will help discover new science, such as cures for diseases, which is perhaps the most important way to increase quality of life long-term. AI will also present new threats to society that we have to address. No company can sufficiently mitigate these on their own; we will need a society-wide response to things like novel bio threats, a massive and fast change to the economy, extremely capable models causing complex emergent effects across society, and more.

These are the areas the OpenAI Foundation will initially focus on, and in my opinion are some of the most important ones for us to get right. The Foundation will spend at least $1 billion over the next year.

@woj_zaremba, co-founder of OpenAI, will transition to Head of AI Resilience. I believe that shifting how the world thinks about safety to include a Resilience-style approach is critical, and I am extremely grateful to Wojciech for taking on this role. Wojciech has been my cofounder for the last decade; anyone who knows him will understand what I mean when I say he is one of a kind. He has a lot of ideas about how we build a new kind of AI safety.

@JacobTref is joining as Head of Life Sciences and Curing Diseases. @annaadeola, our VP of Global Impact, will transition to Head of AI for Civil Society and Philanthropy. @robert_kaiden is joining as Chief Financial Officer. @jeffarnold is joining as Director of Operations.
@levelsio
@levelsio@levelsio·
So @nikitabier implemented @photomatt's idea to stop AI bots from destroying the reply section on here. You can set it to only allow people you follow, and the people they in turn follow, to reply; nobody else.

If on average ppl follow 500 people, that means still 500*500 = 250,000 possible repliers. But all the spammers are isolated out 👏
@levelsio tweet media
@levelsio@levelsio

This would be genius actually @nikitabier. Where people I follow can reply to my tweets, but also the people they follow (like 2nd degree follows). And maybe you can see that in small text too, like: @photomatt (via @levelsio): "Bla bla bla". Then if you realize that's an AI bot, you just unfollow your friend

Hayden Bleasel
Hayden Bleasel@haydenbleasel·
I've joined @OpenAI as a Member of Technical Staff. AI will shape the future of humanity in profound ways. I believe those of us who can help build it have a responsibility to help steer it in a direction that benefits everyone, for all humankind. The future is bright, but it takes hard work and care to make it that way. Very excited to play my part.
Hayden Bleasel tweet media
Garry Tan
Garry Tan@garrytan·
Speaking from direct experience, CEOs coding again is one of the most exciting things to happen in 2026
Garry Tan tweet media
John Connor
John Connor@aiissent·
@thsottiaux You forgot a couple. Let me help...
Codex is for job replacement
Codex is for hype
Codex is for mass surveillance
Codex is for killing
Tibo
Tibo@thsottiaux·
Codex is for engineering
Codex is for research
Codex is for science
Codex is for math
Codex is for fun
You can just build things
Rand
Rand@rand_longevity·
hating on AI is a sign of low IQ