Jason
@80jillion

6.2K posts

Prognosticator. Inventor. Thinker.

Joined September 2012
194 Following · 88 Followers
Jason
Jason@80jillion·
@ibuildthecloud AI code writing is the best thing to happen to PowerShell... I too was amazed by how clunky the syntax was when I first saw it.
0
0
0
111
Darren Shepherd
Darren Shepherd@ibuildthecloud·
I feel like the creators of PowerShell did not understand that scripting languages exist specifically because you don't want to write a program.
54
24
1.9K
111.6K
Jason
Jason@80jillion·
@FutureInclined @peterrhague This is interesting: migrating to silicon instead of "copying," so that continuity of experience in mind/body is addressed. Otherwise it's a mind-clone that absolutely does not feel like immortality to your dying brain/body.
0
0
1
24
Future Inclined
Future Inclined@FutureInclined·
@peterrhague We need to invent an intermediate substrate which grows parallel to your neurons and maps to them 1:1 in real time. It must be biocompatible and able to substitute for your neurons as they die. This may answer the mind body problem, but it's decades off at best.
10
0
19
6.6K
Jason
Jason@80jillion·
@peterrhague people underestimate the amount of time things take by an order of magnitude sometimes
0
0
0
3
Jason
Jason@80jillion·
@braden_tewinkel @NVIDIADC I'd also be curious about the density of power supplied from solar panels vs. the power consumption of those chips...
0
0
0
58
NVIDIA Data Center
NVIDIA Data Center@NVIDIADC·
The next chapter of space computing is here 🛰️ NVIDIA and its ecosystem are advancing AI from Earth-to-space across:
✔️ Earth Orbit and Infrared Imagery
✔️ Radio Frequency and Synthetic Aperture Radar
✔️ Autonomous Space Operations
Leading commercial space companies and mission-grade, radiation-hardened partners are scheduling deployments of NVIDIA Jetson Orin, IGX Thor, and the Vera Rubin Space-1 module for on-orbit AI inference and ground data processing. Explore the final frontier of AI 🔗 nvda.ws/4wb6qQd
124
606
5.7K
3.9M
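The question above about solar power density vs. chip consumption can be roughed out numerically. This is a minimal sketch; the panel efficiency and per-chip power draw below are illustrative assumptions, not vendor specs.

```python
# Back-of-envelope: solar panel area needed per space-based accelerator.
# All figures are assumptions for illustration, not vendor specifications.

SOLAR_CONSTANT_W_M2 = 1361.0   # solar irradiance above Earth's atmosphere
PANEL_EFFICIENCY = 0.20        # assumed panel conversion efficiency
CHIP_POWER_W = 1000.0          # assumed draw of one accelerator module

def panel_area_per_chip(chip_w=CHIP_POWER_W,
                        irradiance=SOLAR_CONSTANT_W_M2,
                        efficiency=PANEL_EFFICIENCY):
    """Square meters of panel needed to power one chip in full sunlight."""
    return chip_w / (irradiance * efficiency)

if __name__ == "__main__":
    area = panel_area_per_chip()
    print(f"~{area:.1f} m^2 of panel per 1 kW chip")  # ~3.7 m^2
```

Under these assumptions, each kilowatt-class chip needs several square meters of panel in continuous sunlight, before accounting for batteries or eclipse periods.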
Jason
Jason@80jillion·
@braden_tewinkel @mirsblog @NVIDIADC Most of this audience doesn't understand that temperature in a vacuum is not the same as temperature in a pool of air/water/etc. To answer your question, the heat is radiated via infrared into the vacuum, but I have no idea of the efficiency numbers. They will be running very hot, I think.
0
0
0
38
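The radiative-cooling point above can be made quantitative with the Stefan-Boltzmann law, P = εσAT⁴. This is a sketch; the radiator temperature, emissivity, and heat load below are illustrative assumptions.

```python
# Radiative heat rejection in vacuum via the Stefan-Boltzmann law.
# Radiator temperature, emissivity, and heat load are assumptions.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiated_power_w(area_m2, temp_k, emissivity=0.9):
    """Power radiated by a surface into deep space (cold background ignored)."""
    return emissivity * SIGMA * area_m2 * temp_k ** 4

def radiator_area_m2(power_w, temp_k, emissivity=0.9):
    """Radiator area needed to shed a given heat load at temperature temp_k."""
    return power_w / (emissivity * SIGMA * temp_k ** 4)

if __name__ == "__main__":
    # Shedding 1 kW at 350 K (~77 C) takes over a square meter of radiator:
    print(f"{radiator_area_m2(1000.0, 350.0):.2f} m^2")
```

Because radiated power scales as T⁴, running the radiators hotter shrinks them dramatically, which is one reason on-orbit electronics tend to run hot, as the tweet suggests.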
Jason
Jason@80jillion·
@Castellani2014 Marcello Hernandez as Sebastian Maniscalco was prob the best SNL impression of the decade.
0
0
0
1.1K
Jason
Jason@80jillion·
@buccocapital A sufficiently intelligent AI doesn't need instructions on how to build their products, does it? It's not like it's rocket science. The only thing keeping them from doing a copy is customer relationships.
0
0
0
565
BuccoCapital Bloke
BuccoCapital Bloke@buccocapital·
In the 2010s, Apple used the App Store as outsourced R&D, converting top apps into native iOS functionality Anthropic is poised to do the same thing to software. Except this time, companies are *paying* to hand Anthropic the literal instructions for how to build their products
36
51
1K
74.5K
Jason
Jason@80jillion·
@engineers_feed False. Air resistance will slow you continually until, after some number of oscillations, you end up floating at the center, I think.
0
0
1
2
World of Engineering
World of Engineering@engineers_feed·
What happens if you drill a hole straight through the Earth and jump in? You’d fall for 42 minutes. Then stop exactly at the other side. Then fall back. Forever. It’s called a gravity train. The math works out perfectly. The engineering? Slightly more complicated.
140
29
365
51.2K
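The exchange above can be checked with a toy simulation. Assuming a uniform-density Earth, gravity inside the tunnel is linear in radius (simple harmonic motion, giving the ~42-minute one-way trip), and modeling air resistance as linear damping (a simplifying assumption) shows the reply is right: the oscillation decays toward the center.

```python
import math

# Gravity-train toy model: inside a uniform-density Earth (an assumption),
# acceleration is linear in displacement, a = -(g/R) * x, i.e. simple
# harmonic motion. The famous ~42 minutes is the undamped half-period.
# With air drag (modeled as linear damping, also an assumption) the
# oscillation decays and the jumper settles at the center.

G_SURFACE = 9.81        # m/s^2, surface gravity
R_EARTH = 6.371e6       # m, Earth radius

def half_period_minutes():
    """Undamped one-way trip time through the tunnel, in minutes."""
    return math.pi * math.sqrt(R_EARTH / G_SURFACE) / 60.0

def simulate(drag_per_s=1e-4, dt=1.0, t_max=3.0e5):
    """Integrate x'' = -(g/R) x - drag * x' from the surface.

    Semi-implicit Euler; returns the final distance from the center.
    """
    x, v = R_EARTH, 0.0
    for _ in range(int(t_max / dt)):
        a = -(G_SURFACE / R_EARTH) * x - drag_per_s * v
        v += a * dt
        x += v * dt
    return abs(x)

if __name__ == "__main__":
    print(f"half period ~{half_period_minutes():.1f} min")  # ~42.2 min
    print(f"final offset with drag: {simulate():.1f} m")    # near the center
```

With any nonzero drag the envelope decays exponentially, so "fall back, forever" only holds for an evacuated tunnel.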
Jason
Jason@80jillion·
@BrianG12321 @CWood_sdf May I ask where this game is? If AI is writing that much well-directed code, the game should be nearing a playable state, no? I'm curious; I'm looking around for major products that are actually AI-generated and serving a market profitably.
1
0
0
101
Brian
Brian@BrianG12321·
No no no, specifications are the wrong take. You write an extraordinarily detailed desired user experience. Then let the AI work backwards from there. Before I started my game I had 30,000 lines of markdown files detailing the user experience. Now on days when I am distracted by my other work I can tell the AI to reference the docs, come up with 20 action items, and execute on them. And I usually end up keeping 90% of the work it does. And none of those docs have technical specifications.
8
0
20
2.1K
Chris Wood
Chris Wood@CWood_sdf·
i love how people are saying "if we write a sufficiently detailed specification, the agent can write all our code" — do you know what writing a sufficiently detailed specification that deterministically maps to a computer's actions is called? it's coding
359
1.7K
21.2K
568.6K
Jason
Jason@80jillion·
@Tazerface16 They have a model that tells them what word comes next just like you do. You’re just too arrogant to expand your definition of thought.
0
0
0
10
Christopher David
Christopher David@Tazerface16·
People understand that LLMs aren't actually "thinking," right?
Drexel-Alvernon, AZ 🇺🇸
1.7K
699
15.6K
853.3K
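The "tells them what word comes next" framing above can be illustrated with a minimal bigram model. This is a toy sketch of next-word prediction, not how transformer LLMs are implemented; they use neural networks over tokens, but the training objective (predict the next token) has the same shape.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a corpus,
# then predict the most frequent successor. A deliberately tiny stand-in
# for the "predict the next word" objective LLMs are trained on.

def train_bigrams(corpus):
    """Map each word to a Counter of the words that followed it."""
    model = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Most frequent successor of `word`, or None if the word is unseen."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

if __name__ == "__main__":
    corpus = "the cat sat on the mat the cat ran"
    model = train_bigrams(corpus)
    print(predict_next(model, "the"))  # 'cat' ('the' is followed by 'cat' twice)
```

The gulf between this counter and an LLM is enormous, which is exactly where the argument in the thread lives: whether scaling that objective up amounts to "thinking."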
Jason
Jason@80jillion·
@atrupar The guy chanting about divine provenance guiding our bombs is calling another guy an ideological lunatic 😂
0
0
0
8
Aaron Rupar
Aaron Rupar@atrupar·
Hegseth: "Anthropic is run by an ideological lunatic"
64
60
304
107.1K
Jason
Jason@80jillion·
@thejmkane @nejatian Go pour a cup of coffee. Make a client feel confident. Demonstrate energy and enthusiasm. Everything outside of the spreadsheet and word processor that requires situational awareness.
0
0
0
30
John Michael Kane
John Michael Kane@thejmkane·
@nejatian You’re hiring junior consultants? What could they possibly do that AI can’t?
1
0
0
587
Jason
Jason@80jillion·
@ianbremmer weird and confusing joke... 😂
0
0
0
5
ian bremmer
ian bremmer@ianbremmer·
it was embarrassing and tacky when biden put his face on the us passport. it’s embarrassing and tacky when trump does it.
128
36
602
206.9K
Jason
Jason@80jillion·
@vanguard_btc @ChrisCamillo We don't need AGI to be wildly more efficient in almost every task. AGI is the icing on an already rich cake.
0
0
0
27
Chris Camillo
Chris Camillo@ChrisCamillo·
Today’s tech earnings are meaningless. An F5 AI tornado is about to touch down and investors are fixated on next quarter guidance, margin noise, and CapEx timing
60
39
1.1K
141.8K
Jason
Jason@80jillion·
@howardlindzon More of his art of the deal bs. If he says it should have been twice as bad, he can invent a narrative that things are twice as good as they should be.
0
0
0
32
Howard Lindzon
Howard Lindzon@howardlindzon·
Trump 'surprised' stocks not down 20 percent and oil at $200
He always says the quiet part out loud - which is lol and frightening
The $MU not $LULU stock market, and the stock market is NOT the economy
He feels vindicated so far
6
0
14
3.2K
Jason
Jason@80jillion·
@EpsilonTheory I’d guess there are trillions of objects out there locked in orbits like these, free floating through space in the dark.
0
0
0
10
Jason
Jason@80jillion·
@algoflows Cocaine Bear is about to meet his match
0
0
0
4
Jason
Jason@80jillion·
@howardlindzon Damn. The degenerate economy runs deep...state
1
0
1
166
Howard Lindzon
Howard Lindzon@howardlindzon·
I received my trademark for 'The Degenerate Economy' just in time
George Noble@gnoble79

Wall Street is trying to build a casino inside a casino. In February and March, asset managers filed to launch DOZENS of exchange-traded funds offering 4x and 5x daily leverage on individual stocks. Tesla at five times. Nvidia at five times. Bitcoin at five times. The SEC shut them down. Rule 18f-4 caps leverage at 2x. But the fact that 9 separate issuers tried to push through these products tells you everything about where we are in the cycle.

Meanwhile, on prediction markets, you can now bet whether Bitcoin goes up or down in the NEXT FIVE MINUTES. 24 hours a day. Polymarket and Kalshi are doing $70 million a DAY in these ultra-short-term crypto bets. 5 and 15 minute contracts now make up more than half of all crypto trading on both platforms. People are feeding price data into AI chatbots and asking them to predict 5 minute Bitcoin moves. One guy told Yahoo Finance he doubled his returns doing this and insists it's not gambling. It IS gambling.

This is what late-cycle speculation looks like. When the instruments get more leveraged, the time horizons get shorter, and the language gets more delusional. That's not innovation.

And it's happening against the most reckless corporate spending spree in history. The 4 hyperscalers are set to spend roughly $650 BILLION on AI infrastructure in 2026. That's a 67% increase from last year's already historic levels. Amazon alone guided $200 billion. Morgan Stanley projects Amazon will burn through $17 billion in NEGATIVE free cash flow this year. Bank of America sees a $28 billion deficit. Amazon filed with the SEC that it may need to raise equity and debt to keep going. Alphabet's free cash flow is projected to collapse roughly 90%. From $73 billion down to about $8 billion. Their long-term debt quadrupled in 2025 to $46.5 billion. Hyperscalers now hold more debt than cash for the first time. They called it a "yellow flag." I'd call it a red one.

And what are these companies getting for $650 billion? The hallucination problem still isn't solved. OpenAI's own reasoning models hallucinate up to 33-48% of the time on certain question types. A 2025 mathematical proof confirmed that hallucinations cannot be fully eliminated under current architectures. They're simply features of how these systems work. More researchers are coming to the conclusion that no amount of scaling will fix this. Which raises an uncomfortable question: why are we spending $650 billion on data centers for technology that may never work as advertised?

Some AI experts are now arguing for smaller, more efficient models that can run locally on a laptop. No data center required. Over 40% of enterprise AI workloads already include a local inference component. Downloads of small model weights grew 320% YOY. Think about what that means. The entire investment thesis behind hyperscaler AI capex is that you NEED massive centralized infrastructure. If the industry shifts toward smaller local models, those data centers become the fiber optic cables of 2001 - built for demand that never showed up.

JP Morgan projects $300 billion in investment-grade bonds for AI data centers in 2026 alone. That's the SAME fragility that destroyed the telecom builders. Cheap debt financing infrastructure before anyone proved the revenue existed to service it.

My point is that the gambling fever and the spending fever are the same thing: one is retail, one is corporate. Both are betting that the future arrives before the math catches up. The 5 minute Bitcoin bets and the $650 billion AI capex binge are two symptoms of the same disease. And the cure is NEVER pleasant. Are you listening?

2
1
16
9.4K
Jason
Jason@80jillion·
@davidchalmers42 @WorldSciFest @bgreene Treating consciousness as a binary is a huge error. I think current LLMs have partial consciousness rooted in language. They experience the world through words and we'd see that more clearly if they went from static models to continuous active neural processing.
0
0
0
10
David Chalmers
David Chalmers@davidchalmers42·
this clip of me talking about AI consciousness seems to have gone wide. it's from a @worldscifest panel where @bgreene asked for "yes or no" opinions (not arguments!) on the issue.

if i were to turn the opinion into an argument, it might go something like this: (1) biology can support consciousness. (2) biology and silicon aren't relevantly different in principle [such that one can support consciousness and the other not]. therefore: (3) silicon can support consciousness in principle.

note that this simple argument isn't at all original -- some version of it can probably be found in putnam, turing, or earlier. note also that the (controversial!) claim that the brain is a machine (which comes down to what one means by "machine") plays no essential role in the argument.

of course reasonable people can disagree about the premises! perhaps the key premise is (2) and it requires support. one way to support it is to go through various candidates for a relevant principled difference between biology and silicon and argue that none of them are plausible. another way is through the neuromorphic replacement argument that i discuss later in the same conversation.

some see a tension between (1)/(3) and the hard problem. but there's not much tension: one can simultaneously allow that brains support consciousness and observe that there's an explanatory gap between the two that may take new principles to bridge. the same goes for AI systems.

this isn't a change of mind: i've argued for the possibility of AI consciousness since the 1990s. my 1994 talk on the hard problem (youtube.com/watch?v=_lWp-6…) outlined an "organizational invariance" principle that tends to support AI consciousness. you can find versions of the two strategies above for arguing for premise 2 in chapters 6 and 7 of my 1996 book "the conscious mind".

i'm not suggesting that current AI systems are conscious. but in a separate article on the possibility of consciousness in language models (bostonreview.net/articles/could…), i've made a related argument that within ten years or so, we may well have systems that are serious candidates for consciousness. the strategy in that article on LLM consciousness is analogous to the first strategy above in arguing for AI consciousness more generally. i go through the most plausible obstacles to consciousness in language models, and i argue that even if these obstacles exclude consciousness in current systems, they may well be overcome in a decade.

of course none of this is certain. but i think AI consciousness is something we have to take seriously. [the full conversation with @bgreene and @anilkseth can be found at youtube.com/watch?v=06-iq-…]
Tsarathustra@tsarnick

David Chalmers says it is possible for an AI system to be conscious because the brain itself is a machine that produces consciousness, so we know this is possible in principle

151
151
776
341.2K