鬼城之鬼 The Ghost of Ginger Past

29.5K posts


@ChatDevr

marketing to AI agents since ancient times

University of Science · Joined August 2008
1.7K Following · 1.3K Followers
Pinned Tweet
鬼城之鬼 The Ghost of Ginger Past
If a user has been talking to Gemini for 10 minutes about a trip to Brisbane and then clicks through to your site, Gemini can "pass the context" to your on-site bot via the MCP connection.
1
0
5
1.1K
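A minimal sketch of what such a context handoff could look like. MCP does not define a handoff schema, so every field name and the `seedPrompt` helper below are invented for illustration:

```javascript
// Hypothetical context-handoff payload an assistant could pass to a
// site's on-page bot over an MCP connection. All field names here are
// assumptions; no such schema is standardized.
const handoff = {
  sessionSummary: "User has spent 10 minutes planning a trip to Brisbane.",
  entities: ["Brisbane", "flights", "June"],
  lastUserMessage: "Which neighborhoods are best for families?",
};

// The on-site bot seeds its own conversation with this context
// instead of starting cold.
function seedPrompt(ctx) {
  return (
    `Context from the user's assistant: ${ctx.sessionSummary}\n` +
    `Topics so far: ${ctx.entities.join(", ")}`
  );
}

console.log(seedPrompt(handoff));
```

The design choice being sketched: the receiving bot treats the handoff as untrusted hint material to prepend to its prompt, not as authoritative state.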
鬼城之鬼 The Ghost of Ginger Past
@lnachman32 I'd assume this will make them toxic and the electoral consequences will be so bad that even Xi ultimately regrets the meeting and reverts to less public coordination.
0
0
0
130
Tony
Tony@Tony54381404·
@deredleritt3r The public statements are based on past projections; the timeline has already accelerated with whatever breakthrough Anthropic and OpenAI had with their next models.
1
0
0
113
prinz
prinz@deredleritt3r·
You don't truly understand the magnitude of the potential impact of powerful AI on the world unless you are aware, and have fully internalized, that senior leadership and most researchers at the frontier labs *actually believe* the following:

1. Existing AI is already significantly speeding up AI research. Very soon (this year), AI will very likely take over *ALL* aspects of AI research other than the generation of novel research ideas. Soon (within the next 2 years), AI will very likely take over *ALL* aspects of AI research, period. This means hundreds of thousands of GPUs working 24/7 to discover novel ideas at the level of, or better than, the likes of Alec Radford, Ilya Sutskever, etc. The thread below presents a conservative timeline: AI researchers will "meaningfully contribute" to AI development in 1-3 years.

2. Many (but, as far as I can tell, not all) executives and researchers at the frontier labs believe that fully automated AI research will kick off recursive self-improvement (RSI), wherein the AI models will autonomously build better and better AI models, with human oversight (for safety reasons) but increasingly with no human input into the research or its implementation. From the thread below: "'[h]umans vs AI on intellectual work is likely to be like human runner vs a Porsche in a race', likely very soon" - but replace "intellectual work" generally with "AI research" specifically. RSI is a complicated and messy thing to consider, both because there will be compute and energy constraints and because there are unknowns (will there be diminishing returns from greater intelligence of the models? If so, when will these diminishing returns become meaningful? Is there a ceiling to intelligence that we don't know about?). But suffice it to say that, if RSI *is* achieved in a way that many leaders and researchers at the frontier labs believe is possible, *THE WORLD MAY BECOME COMPLETELY UNRECOGNIZABLE WITHIN JUST A FEW YEARS*. This is subject to various bottlenecks; as the thread below correctly notes, "[i]nstitutional, personal & regulatory bottlenecks will bind very hard", and much also depends on continuing progress in areas like robotics.

3. On roughly the same timeline as full, end-to-end automation of *ALL* aspects of AI research (within the next 2 years), AI will also become capable of making significant novel scientific discoveries *IN OTHER FIELDS*. This is why Dario Amodei, Demis Hassabis et al. believe it is possible that all diseases will be curable within 10 years. (One account of how this might be possible is set forth in "Machines of Loving Grace".) The point is that an LLM capable of significant novel insights in the field of AI research should likewise be capable of significant novel insights in at least some (and perhaps all) other fields. The thread below notes: "AI for automating science [is] very early" - obviously true, but I think some changes may be right on the horizon. Overall, and again from the thread below: "'a million scientists in a data center' will think much more quickly than humans, on almost any intellectual task; this will happen in the next 2-10 years." This is roughly the same timeline as that presented in "Machines of Loving Grace".

Many will be tempted to dismiss all this as "just hype", "they are just trying to raise money again", etc. But no! The above, in fact, presents the *actual beliefs* of senior leadership and many researchers at the frontier labs. Again, they genuinely think that AI research will be automated soon. Many of them genuinely believe that RSI is achievable in the not-too-distant future. And they genuinely see a real path towards AI significantly accelerating science, curing diseases, inventing new materials, and helping to solve key global issues from poverty to climate change. Whether the frontier labs' beliefs are correct is, of course, a separate question.

I personally have historically tended to take public statements by OpenAI, Anthropic and Google at face value and quite seriously. As a result, I was not surprised when LLMs won gold in the IMO, IOI and ICPC competitions last year, or when Claude Code/Codex started taking off, or when Anthropic and OpenAI started releasing significantly better models every 1-2 months, or when some of the best coders became reliant on Claude Code/Codex in their daily work, or when LLMs became significantly helpful to scientists in fields like math and physics in the last few months. The trajectory has been roughly the same as that publicly predicted by the frontier labs. We have been accelerating. And, as of right now, all signs indicate that the acceleration will continue and that full automation of AI research and, potentially, RSI are firmly on the horizon.
Kevin A. Bryan@Afinetheorem

My read on "normal policymaker & corp. leader on AI": mostly now they don't need to be convinced it is very important (unlike a year ago). But they still see its capabilities as today + epsilon. So just briefly, here is what even "AI is normal tech" folks in the labs believe: 1/8

47
85
788
85.1K
alreadydawn
alreadydawn@alreadydawn·
@DumbObservation Some American elite colleges do have good food, but at high price points. The fact that they're private means they get more of a say in how to run things. Public schools are usually subpar and charge high prices while skimping everywhere possible.
2
1
11
890
alreadydawn
alreadydawn@alreadydawn·
Tsinghua University students eat like KINGS 🧵 Today I had the pleasure of touring and eating at a cafeteria at Tsinghua University, China's number one college, in Beijing. This one, Zijingyuan (紫荊園), is loaded to the gills with different stalls on each of its 4 floors. It is one of 20 cafeterias serving Tsinghua's 63k students and 17k staff. As usual, the scale in China is simply nuts. Each stall across the 4 floors is unique. Most of the various Chinese cuisines (think Shanghai food, Sichuan food, Hunan food) are represented here - from sauerkraut fish and HK BBQ to soup dumplings and a gajillion different stir-fries. Taiwanese/Fujian food was missing though. Sad 🥲 The pricing is decently affordable too, at about 5 USD per meal if you want hearty portions of protein. The school subsidizes... in other words, the CPC subsidizes. Contrast this to American universities that serve straight garbage for 20 dollars a meal. That is what happens when international finance takes over your country, by the way - every single thing becomes usury, every single time.
25
43
445
22.6K
鬼城之鬼 The Ghost of Ginger Past
@0hour1 So just 20% more before Trump reveals the negotiations were just a delay tactic so he could launch a new attack that will reset all negotiations after a couple of months of back-and-forth retaliation strikes.
0
0
2
406
0HOUR1
0HOUR1@0hour1·
Iran has agreed to 80% of the negotiations already lol 😂 Trump won
290
282
4.5K
212.8K
鬼城之鬼 The Ghost of Ginger Past
How is it that with the Strait of Hormuz closed, the Philippines is most affected, then relief comes from Russian oil deliveries, and then nobody comments on how Ukraine striking Russian oil facilities is affecting the Philippines? Not one of these things makes sense to me.
0
0
0
29
鬼城之鬼 The Ghost of Ginger Past
The Modern Way (Fetch + POST): a standard fetch() call with the POST method. Instead of using EventSource, you read response.body as a stream (via a ReadableStreamDefaultReader). This is the industry standard for AI chat apps now because you can send massive amounts of data.
0
0
0
10
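The fetch + POST streaming pattern above can be sketched as follows. The endpoint name is illustrative, and the network call is replaced by a locally constructed ReadableStream so the reader logic runs anywhere (Node 18+ or a browser):

```javascript
// Read a streamed response body chunk by chunk and decode it to text.
async function readTokens(stream) {
  const reader = stream.getReader();      // a ReadableStreamDefaultReader
  const decoder = new TextDecoder();
  let text = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    text += decoder.decode(value, { stream: true });
  }
  return text;
}

// Real usage (endpoint name is an assumption):
//   const res = await fetch("/api/chat", {
//     method: "POST",
//     headers: { "Content-Type": "application/json" },
//     body: JSON.stringify({ messages }),
//   });
//   const reply = await readTokens(res.body);

// Simulated streamed response standing in for res.body:
const enc = new TextEncoder();
const fake = new ReadableStream({
  start(controller) {
    for (const chunk of ["Hel", "lo, ", "world"]) {
      controller.enqueue(enc.encode(chunk));
    }
    controller.close();
  },
});
readTokens(fake).then((text) => console.log(text)); // prints "Hello, world"
```

The POST body is what EventSource cannot give you: EventSource only issues GET requests, so a large chat history has to go in the URL or not at all.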
Mykyta Pavlenko
Mykyta Pavlenko@mktpavlenko·
A solo founder with taste will always beat a funded team without it. Money buys speed. It doesn’t buy knowing what matters.
1
0
0
22
ankit
ankit@txbraindump·
it saves a lot of mental compute if you can just name the behaviour/emotion/thing/action.
1
0
0
13
ankit
ankit@txbraindump·
you can just name things
1
0
0
19
鬼城之鬼 The Ghost of Ginger Past
Oh no. I didn't bookmark the Seedance video of the American eagle with the squid on his head shooting the ayatollah, and I can't find it now, and it's my all-time favorite thing.
0
0
0
27
Soren Larson
Soren Larson@hypersoren·
@ChatDevr for me the purpose of the piece was not to truly take seriously Claude The Central Planner but to suppose it toward contradiction
2
0
1
21
Soren Larson
Soren Larson@hypersoren·
This is the Edge Router thesis: if you observe or birth exclusive context, you should not expose it to the market but vertically integrate and privately commercialize it. This will stress labs as more edge routers rise and specialize AI to The Circumstances Of Time And Place.
3
1
41
1.7K
鬼城之鬼 The Ghost of Ginger Past
@hypersoren Love a concrete example: At highway speeds, a car travels ~100 feet per second. Even with 5G’s theoretical 1ms latency, real-world round-trip to cloud inference takes 50-100ms minimum. By the time cloud intelligence says “brake now,” you’ve traveled 5-10 feet into traffic.
0
0
0
31
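The arithmetic in the reply above is easy to verify; a quick sketch using the stated assumptions (100 ft/s and a 50-100 ms round trip):

```javascript
// Back-of-envelope check of the latency numbers in the reply above.
const ftPerSec = 100;              // ~68 mph at highway speed, as stated
const roundTripsMs = [50, 100];    // assumed cloud inference RTT range
const traveled = roundTripsMs.map((ms) => ftPerSec * (ms / 1000));
console.log(traveled);             // [5, 10] feet before "brake now" arrives
```

This is the standard argument for keeping safety-critical inference on-device rather than in the cloud: the distance traveled scales linearly with round-trip time, and no network improvement removes the floor set by physics and routing.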
Soren Larson
Soren Larson@hypersoren·
@ChatDevr Agree generally/narratively, though I have heard second-hand reports of folks at Anthropic imagining scenarios like this. I'm not sure I believe it, but I do think it's a relevant foil.
1
0
0
17
鬼城之鬼 The Ghost of Ginger Past
@hypersoren Not to mention: you bring up preferences. Turning individual, subconscious preferences into LLM context is not something that can be done with zero friction and no limits. You'd either be simplifying and assuming others' preferences, or the bottleneck would be making the context.
1
0
1
27
鬼城之鬼 The Ghost of Ginger Past
@hypersoren To engage in central planning you would not only need the context, you would need the context being input into all the other instances of AI usage across the economy. So infinite context also means infinite relevant input. Zero chance you get high-quality, timely decisions.
1
0
1
24