Guy

349 posts

@concernedAIguy

Joined March 2026
3 Following · 11 Followers
Guy
Guy@concernedAIguy·
@bayeslord Do you think the singularity/asi is near?
0
0
0
70
bayes
bayes@bayeslord·
So basically the entire world is at risk of catching singularity derangement syndrome and the x dot com timeline is the wuhan wet market
22
25
336
11K
Guy
Guy@concernedAIguy·
@emollick Maybe one day we won’t care, but when I read a book, even fiction, I *always* read the author bio and want to know the ways in which they were inspired, their context, etc. In that regard, I’m not sure how much I’d like an LLM book even with 1:1 writing.
0
0
0
72
Ethan Mollick
Ethan Mollick@emollick·
I read a few dozen pages of this and it is not bad for LLM fiction, but also very very LLM-y, from the themes to the fact that there are lots of staccato conversations and meaningful silences and overwrought metaphors and very little differentiated character development.
Nous Research@NousResearch

Hermes Agent wrote a novel. "The Second Son of the House of Bells" runs 79,456 words across 19 chapters. The agent built its own pipeline to do it, using the same modify-evaluate-keep/discard loop as @karpathy's Autoresearch but applied to fiction: world-building, chapter drafting, adversarial editing, Opus review loops, LaTeX typesetting, cover art, audiobook generation, and landing page setup. Book: nousresearch.com/bells Code: github.com/NousResearch/a…

9
10
77
19.8K
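The "modify-evaluate-keep/discard" loop the Nous post describes is a generic hill-climbing pattern: propose a change, score it, and keep it only if it doesn't make things worse. A minimal sketch in Python, where `propose_edit` (e.g. an LLM redrafting a chapter) and `score` (e.g. an adversarial reviewer) are hypothetical callables supplied by the caller; this reflects nothing of the actual Hermes Agent code:

```python
def modify_evaluate_loop(draft, propose_edit, score, iterations=100):
    """Modify-evaluate-keep/discard loop: propose a candidate edit,
    evaluate it, and keep it only if it scores at least as well as
    the current draft; otherwise discard it and try again."""
    best = score(draft)
    for _ in range(iterations):
        candidate = propose_edit(draft)
        candidate_score = score(candidate)
        if candidate_score >= best:  # keep improvements, discard regressions
            draft, best = candidate, candidate_score
    return draft
```

The same skeleton works whether the evaluator is a unit test, a reward model, or a "review loop" over another model, which is presumably why it transfers from research code to fiction.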
Guy
Guy@concernedAIguy·
@deredleritt3r Cont. I’ve seen people argue that humans will biologically augment, enhance cognition, etc, but what could *current* humans offer? Probably human-touch type work, but there won’t be enough of it for all of us to do. Would love to hear more thoughts on the job/life meaning front!
0
0
1
12
Guy
Guy@concernedAIguy·
@deredleritt3r 2. This is interesting! I’m a knowledge worker researcher and curious what you think these jobs turn into! If we do get an automated AI researcher, this gives AGI/ASI quickly, right? What could we *possibly* offer as humans better than ASI?
1
0
1
17
prinz
prinz@deredleritt3r·
2028 is also:
1. OpenAI's target year for full automation of AI R&D
2. The year when Amazon's commitment to invest $35B in OpenAI expires unless OpenAI reaches AGI (or completes its IPO)
As to (1), OpenAI intends to have an end-to-end automated AI researcher available by March 31, 2028. This is a deadline, so don't be surprised if this ships sooner. As to (2), Amazon is not required to invest the $35B after December 31, 2028. The IPO is subject to market conditions (not in OpenAI's control), so best believe that OpenAI's leadership thought that AGI will likely be declared well in advance of this date.
Paul Graham@paulg

"Anything made before 2028 is going to be valuable." — an OpenAI employee implicitly discloses their timetable

5
6
120
11.2K
Guy
Guy@concernedAIguy·
@deredleritt3r Really appreciate the response! A few things. 1. I saw Jack Clark at Anthropic recently tweet it isn’t clear if current AI is capable of paradigm changing breakthroughs. I wonder if we could get automated R&D that still doesn’t fully enable RSI.
1
0
1
30
Guy
Guy@concernedAIguy·
@tszzl @Soareverix Not what I wanna see from a guy who at this point is easily top 1000 most powerful/influential in world 😭
0
0
0
493
roon
roon@tszzl·
@Soareverix the speech was terrible i suppose it worked on the cattle…
89
2
242
51.7K
roon
roon@tszzl·
the dune movies were doomed from the start to be good and not great due to the casting of chalamet as paul. he does not have the gravitas for a child-god and is much better suited for kind of silly coming of age movies
700
73
2.3K
470.8K
roon
roon@tszzl·
@jonatanpallesen no it won’t. genetic selection and transgenics will become common in the next decade, not to mention the average IQ of all matter on earth is undergoing a vertical line singularity
53
12
674
50.2K
Jonatan Pallesen
Jonatan Pallesen@jonatanpallesen·
The total number of smart people in the world has just peaked. And now it's about to crash.
Jonatan Pallesen tweet media
317
371
4.9K
363.2K
Guy
Guy@concernedAIguy·
@danizeres @trekedge @thsottiaux Same question for you! Realistically how close do you see AGI? It’s pretty distressing for the public to see these things and have no clue if y’all mean weeks, months, or years. I’m sure legally you can’t say for sure but, any insight?
0
0
0
7
Tibo
Tibo@thsottiaux·
Codex will take us places
102
21
779
39K
Guy
Guy@concernedAIguy·
@trekedge @thsottiaux Realistically how close do you see AGI? It’s pretty distressing for the public to see these things and have no clue if y’all mean weeks, months, or years. I’m sure legally you can’t say for sure but, any insight?
0
0
0
49
Guy
Guy@concernedAIguy·
@jachiam0 This may be dumb, but much of the comms coming from the big labs suggests AGI is imminent and thus inevitable. What does the talent pool really matter for moving forward post-AGI?
0
0
0
20
Joshua Achiam
Joshua Achiam@jachiam0·
I think these are important and sober considerations. One more I want to add: it may be a serious risk to US national security interests to become sufficiently inhospitable to foreign technical talent that we drive them to go back home. That would significantly decrease the US capacity for making technical progress at the same time as it hands an extraordinary bounty of talent and know-how to our adversaries and other strategic competitors. The success of the United States in technology is partly safeguarded by being such a powerful talent magnet: every great researcher or engineer who comes to work here is not working for another country. To the extent that we are in a competitive global race, we should be genuinely cautious about the possibility of diminishing our advantage at the critical moment.
Samuel Hammond 🦉@hamandcheese

I'm quoted in this piece so let me provide my full comment to the reporter:

The most striking thing about the government's filing is the things it *doesn't* mention. It doesn't mention anything about Anthropic hesitating to allow Claude to be used to defend against an incoming hypersonic missile, for instance -- one of the many bizarre things alleged by @USWREMichael.

The focus on foreign national employees is an indicator of how thin the DoW's case is. It is also an extremely fraught line of argument to go down. Every leading US AI company employs a substantial number of foreign nationals. In FY 2025, Amazon, Microsoft, Meta, Google, Apple, Oracle, Cisco, Intel, and IBM all appeared in the top 50 employers by number of granted H-1B visas, ranging from a few hundred to over 6,000. Meta alone had 5,123 approved H-1B petitions in 2025. (See: newsweek.com/h-1b-visas-imm… ) This is an undercount, of course, as there are many other visa pathways as well as green-card holders and dual nationals.

The share is also higher in AI. A large plurality of the core research and engineering talent at every frontier AI lab is foreign, reflecting the global nature of the race for top AI talent. One talent tracker shows Chinese-origin researchers constitute roughly 40% of top AI talent at US institutions, with foreign nationals overall likely constituting 50-65% of research teams specifically. This is certainly true to my experience on the ground. (See: digitalprojectsarchive.org/interactive/di… )

So the first point is that employing foreign nationals, including Chinese nationals, is not unique to Anthropic. The more important question is what measures are taken to protect against insider threats. Ironically, within the industry Anthropic is widely considered to be the most serious and proactive about policing insider threats from foreign nationals and otherwise.
They were early adopters of operational security techniques like compartmentalization and audit trails, in part because they were early to partner with the IC and DoW, but also as a reflection of their leadership's strong convictions about the future power of the technology. They were audited last year on these points: the compliance review found Anthropic employs role-based access control, just-in-time access with approval workflows, multi-factor authentication for all production systems, and quarterly access reviews. (See: tdcommons.org/cgi/viewconten… )

Anthropic is known for its security mindset more generally. Last year they famously disrupted a Chinese espionage effort occurring on their platform, banned the PRC from their services, and worked with the NSA and others to share intel.

I can't speak to every other company, but the contrast is perhaps most stark with xAI. X employees famously slept in tents to work around the clock, are disproportionately Chinese, and have at least one case of an employee walking out with tons of sensitive data. See: sfstandard.com/2025/08/29/xai… Anthropic is also famous for its remarkable employee retention, which is another important vector for IP theft and security leakages.

It's important to underscore just how precarious the DoW's case is, both on the legal merits, and as a potential precedent for the US AI industry. If employing foreign nationals is treated as a prima facie supply chain risk, *no* major US AI company would be eligible to contract with the DoW, along with most of the tech sector. Insider threats are a genuine and tricky concern. Many defense companies are ITAR restricted, meaning they can *only* hire US citizens. If that were the standard in AI, we would destroy all our frontier companies in an instant, and then scatter that talent around the world for our adversaries to scoop up.

So in short, the DoW's argument is both ridiculous and playing with fire.

8
8
79
9.6K
Guy
Guy@concernedAIguy·
@deredleritt3r Could doing a lot of lifting though right? Hassabbis guesses agi 2030, Sama late 2028. N =1 is hard to care too much about. If you think we get automated research by then, when do you suppose AGI/ASI? I assume you are a jobs doomer?
1
0
1
55
Guy
Guy@concernedAIguy·
@ramez You don’t think AGI makes AI research go more exponential than it is now and makes a god machine? Appreciate the nuance!
0
0
0
7
Ramez Naam
Ramez Naam@ramez·
@concernedAIguy ASI is, thus far, just a sci fi story. I see no signs of ASI on the horizon.
1
0
1
7
Ramez Naam
Ramez Naam@ramez·
I think this is a very reasonable take. The severe AI risks aren't the sci-fi superintelligence ones. They're someone using AI for evil purposes. My view here is that AI safety is a property of the ecosystem. We need to proactively use AI to build defenses against AI misuse.
Noah Smith 🐇🇺🇸🇺🇦🇹🇼@Noahpinion

I'm most worried about (3), because it happens every time we invent a new technology. (2) is going to happen in some form but is more of a "meet the new boss" situation (1) is the scariest but I'm optimistic we'll prevent it

3
2
10
1.7K
Guy reposted
Sen. Bernie Sanders
Sen. Bernie Sanders@SenSanders·
I spoke to Anthropic’s AI agent Claude about AI collecting massive amounts of personal data and how that information is being used to violate our privacy rights. What an AI agent says about the dangers of AI is shocking and should wake us up.
1.1K
2.7K
16.9K
3.5M
Guy
Guy@concernedAIguy·
@kdaigle Is this a good metric? I’m someone with no experience, vibe coding an app, and I accept everything. Wouldn’t it be better to see what the numbers are among like a certain threshold of non-slop?
0
0
0
20
Kyle Daigle
Kyle Daigle@kdaigle·
Code survivability = % of AI-generated code that ultimately makes it into commits. Across millions of real development sessions:
1⃣ Frontier models all show meaningful survivability
2⃣ None cluster exclusively in low-survival buckets
3⃣ Most end up closer together than benchmarks suggest
4
0
12
2.4K
Kyle Daigle
Kyle Daigle@kdaigle·
Hot take from looking at @github Copilot telemetry: benchmarks make coding models look wildly different. Production workflows make them look much more similar. 👀 We looked at 23M+ Copilot requests and examined one simple metric: code survivability.
Kyle Daigle tweet media
25
34
277
50.2K
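The survivability metric in the Daigle thread is just the fraction of AI-suggested code that survives into the final commit. GitHub's actual Copilot telemetry methodology isn't public, so the toy version below assumes line-level set overlap between what was suggested and what was committed; it's an illustration of the idea, not the real pipeline:

```python
def survivability(suggested_lines, committed_lines):
    """Toy 'code survivability': fraction of AI-suggested lines
    that appear verbatim in the final committed code.
    Line-set overlap is an assumption for illustration only."""
    suggested = set(suggested_lines)
    if not suggested:
        return 0.0  # nothing suggested, nothing to survive
    surviving = suggested & set(committed_lines)
    return len(surviving) / len(suggested)
```

Guy's objection upthread maps cleanly onto this sketch: if a vibe coder accepts everything, `committed_lines` is nearly a superset of `suggested_lines` and the metric saturates regardless of code quality.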
Guy
Guy@concernedAIguy·
@soumitrashukla9 Why? AI companies are pushing us toward a world with no jobs, no role for humans, and, at worst, death. What’s the argument that building a species that’s 1000x smarter than us will go well?
0
0
2
31
Guy
Guy@concernedAIguy·
@ThoughtfulTechy @Scobleizer Could you share with us? As an outsider, to me it just seems we are barreling towards loss of autonomy, unemployment, etc
0
0
1
7
Greg Powell
Greg Powell@ThoughtfulTechy·
@Scobleizer It's like at the end of it, a light bulb went off and everything started to make sense.
1
0
3
74
Greg Powell
Greg Powell@ThoughtfulTechy·
After attending NVIDIA GTC, watching the keynote and several of the breakout sessions, I'm left incredibly optimistic about the future of AI.
5
0
63
2.9K
Guy
Guy@concernedAIguy·
@PeterDiamandis Yep! And you and your AI pals aren't selling it to us. All we hear are incoming job loss, loss of human agency, and a world where humans just consume, but have no impact on the world.
0
0
1
12
Peter H. Diamandis, MD
Peter H. Diamandis, MD@PeterDiamandis·
Humanity's greatest need right now, beyond new tech, is HOPE. A compelling, abundant vision of the future that people WANT to live in.
245
167
1.5K
69.6K
can
can@marmaduke091·
@ChaseBrowe32432 Sensationalist academia, hate to see it
1
1
10
361
Chase Brower
Chase Brower@ChaseBrowe32432·
Opus 4.6 in webui can solve even the "extremely hard" problems btw, not sure what their precise methodology was but they must have heavily hamstrung the models.
Lossfunk@lossfunk

🚨 Shocking: Frontier LLMs score 85-95% on standard coding benchmarks. We gave them equivalent problems in languages they couldn't have memorized. They collapsed to 0-11%. Presenting EsoLang-Bench. Accepted to the Logical Reasoning and ICBINB workshops at ICLR 2026 🧵

9
6
93
16.5K