George

153 posts

@george

Owner of a camera.

🌦️ Joined October 2010
53 Following · 1.2K Followers
Vincenzo Landino@vincenzolandino·
This is the sound of a NA Ferrari V12 😮‍💨
42 · 140 · 1.1K · 43.8K
Value Select@valueselectTV·
I prayed for a huge milestone to come, and it came. Value Select is going international*. Tickets drop everywhere Thursday, March 19th. Visit the Patreon for early access! So thankful for Peter, Domenica, and the good folks @RealGoodTouring
4 · 3 · 81 · 2.8K
PATINA RESEARCH@patinaresearch·
Everyone complaining about the new F1 cars being mid: watch this video to understand what they took from us.
69 · 492 · 3K · 65.4K
George@george·
@FabienWeibel Let me pay to Dark Souls-invade people's sessions as a crab and Godzilla their castles.
[GIF attached]
1 · 0 · 15 · 2.7K
Fabien Weibel - Wishlist Sandcastle
I've improved the rake tool and made it much more precise. I've also made a few changes to the sounds! Follow me for more updates and relaxing beach dioramas! Wishlist Sandcastle ➡️ s.team/a/3216520
96 · 529 · 6K · 603.6K
Bibawen@asimbawe·
19 · 448 · 3.9K · 84.7K
Torkie@TorkieTweets·
Same space, same weight, same everything, yet the older version is double the price? #EscapefromTarkov
[3 images attached]
27 · 2 · 300 · 60.4K
ahmed baokbah 🇸🇦 🏎️🛩️
No reliability issues
Ferrari PU rocketship off the line
Charles Leclerc flying
Lewis latebrakemilton
DNASF26
Ballerina rear wing
Mater🅱️lan might be real finally
Daniel Valente 🏎️@F1GuyDan

🚨 Fastest laps of the entire Bahrain preseason
1. Charles Leclerc - 1:31.992 (C4)
2. Charles Leclerc - 1:32.240 (C4)
3. Charles Leclerc - 1:32.289 (C4)
4. Charles Leclerc - 1:32.297 (C4)
5. Charles Leclerc - 1:32.655 (C3)
FERRARI HYPE TRAIN IS ALIVE

42 · 504 · 7.3K · 522.7K
George@george·
This is a compliment.
0 · 0 · 0 · 690
George@george·
Never had a game make me feel as shit as Tarkov does.
1 · 0 · 1 · 728
George@george·
@63skies_ Me trying to do my tour quest for the fourth time.
0 · 0 · 1 · 226
lukas@63skies_·
most aware labs marine
22 · 30 · 878 · 93.7K
George@george·
London being self-aware.
[image attached]
1 · 0 · 0 · 808
Motorsport MP4@MotorsportMP4·
THIS IS OUR YEAR 💀
108 · 821 · 11.6K · 807.2K
Bibawen@asimbawe·
49 · 2.2K · 16.4K · 199.1K
George@george·
@F1 Banger.
0 · 0 · 0 · 1.1K
Bibawen@asimbawe·
41 · 362 · 3K · 44.9K
F1 TROLL@f1trollofficial·
[image attached]
186 · 1.5K · 15.4K · 235.4K
Liron Shapira@liron·
Today's Extropic launch raises some new red flags. I started following this company when they refused to explain the input/output spec of what they're building, leaving us waiting for clarification. Here are 3 red flags from today:

1. From extropic.ai/writing/inside…:

"Generative AI is Sampling. All generative AI algorithms are essentially procedures for sampling from probability distributions. Training a generative AI model corresponds to inferring the probability distribution that underlies some training data, and running inference corresponds to generating samples from the learned distribution. Because TSUs sample, they can run generative AI algorithms natively."

This is a highly misleading claim about the algorithms that power the most useful modern AIs, on the same level of gaslighting as calling the human brain a thermodynamic computer. IIUC, as far as anyone knows, the majority of AI computation work doesn't match the kind of input/output that you can feed into Extropic's chip.

The page says:

"The next challenge is to figure out how to combine these primitives in a way that allows for capabilities to be scaled up to something comparable to today's LLMs. To do this, we will need to build very large TSUs, and invent new algorithms that can consume an arbitrary amount of probabilistic computing resources."

Do you really need to build large TSUs to research whether LLM-like applications could possibly benefit from this hardware? I would've thought it'd be worth spending a couple $million investigating that question via a combination of theory and modern cloud supercomputing hardware, instead of spending over $30M on building hardware that might be a bridge to nowhere.

Their own documentation for THRML (their open-source library) says:

"THRML provides GPU‑accelerated tools for block sampling on sparse, heterogeneous graphs, making it a natural place to prototype today and experiment with future Extropic hardware."

So you're saying you don't yet know how your hardware primitives could *in principle* be applied to useful applications of some kind, and you created this library to help do that kind of research using today's GPUs… Why not release the Python library (THRML) earlier, do the bottlenecking research you said needs to be done earlier, and engage the community to help get you an answer to this key question by now? Why did you wait until the launch of this extremely niche, tiny-scale hardware prototype to come forward and explain this make-or-break bottleneck, and only publicize your search for potential partners with relevant "probabilistic workloads" now, when the cost of waiting was $30M and 18 months?

2. From extropic.ai/writing/tsu-10…:

"We developed a model of our TSU architecture and used it to estimate how much energy it would take to run the denoising process shown in the above animation. What we found is that DTMs running on TSUs can be about 10,000x more energy efficient than standard image generation algorithms on GPUs."

I'm already seeing people on Twitter hyping the 10,000x claim. But anyone who's followed the decades-long saga of quantum computing companies claiming "quantum supremacy" with similar hype figures knows how much care needs to go into defining that kind of benchmark. In practice, it tends to be extremely hard to point to situations where a classical computing approach *isn't* much faster than the claimed "10,000x faster thermodynamic computing" approach. The Extropic team knows this, but opted not to elaborate on the conditions that could reproduce the hype benchmark they wanted to see go viral.

3. Their terminology has been switched to "probabilistic computer": "We designed the world's first scalable probabilistic computer."

Until today, they were using "thermodynamic computer" as their term, and claimed in writing that "the brain is a thermodynamic computer". One could give them the benefit of the doubt for pivoting their terminology. It's just that they were always talking nonsense about the brain being a "thermodynamic computer" (in my view the brain is neither that nor a "quantum computer"; it's very much a neural net algorithm running on a classical computer architecture), and this sudden terminology pivot is consistent with them having been talking nonsense on that front.

Now for the positives:

* Some hardware actually got built!
* They explain how its input/output potentially has an application in denoising, though as mentioned, they are vague on the details of the supposed "10,000x thermodynamic supremacy" they achieved on this front.

Overall: this is about what I expected when I first started asking about the input/output 18 months ago. They had a legitimately cool idea for a piece of hardware, but no plan for making it useful, only the vague beginnings of theoretical research that had a chance to make it useful. They seem to have made respectable progress getting the hardware into production (the amount that $30M buys you), and seemingly less progress finding reasons why this particular hardware, even after 10 generations of refinements, will be of use to anyone.

Going forward, instead of responding to questions about your device's input/output by "mogging" people and saying it's a company secret, and tweeting hyperstitions about your thermodynamic god, I'd recommend being more open about the seemingly giant life-or-death question that the tech community might actually be interested in helping you answer: whether someone can write a Python program in your simulator providing stronger evidence that some kind of useful "thermodynamic supremacy" with your hardware concept can ever be a thing.
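For context on the primitive being debated: "block sampling on sparse graphs" means partitioning a graphical model's variables into groups that are conditionally independent given the rest, so each group can be resampled in parallel. A minimal sketch in plain NumPy (a toy 1-D Ising chain with a made-up `gibbs_ising_chain` helper; this illustrates the general technique only, and is not THRML's or Extropic's actual API):

```python
import numpy as np

def gibbs_ising_chain(n=16, beta=1.0, steps=200, seed=0):
    """Block Gibbs sampling of spins s_i in {-1, +1} on a periodic
    1-D Ising chain with coupling strength beta (toy example)."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=n)
    for _ in range(steps):
        # "Block" update: even-indexed sites are conditionally
        # independent given odd-indexed sites (and vice versa),
        # so each half can be resampled simultaneously.
        for parity in (0, 1):
            idx = np.arange(parity, n, 2)
            left = s[(idx - 1) % n]             # neighbour to the left
            right = s[(idx + 1) % n]            # neighbour to the right
            field = beta * (left + right)       # local field on the block
            # P(s_i = +1 | neighbours) for the Ising conditional
            p_up = 1.0 / (1.0 + np.exp(-2.0 * field))
            s[idx] = np.where(rng.random(idx.size) < p_up, 1, -1)
    return s

spins = gibbs_ising_chain()
print(spins)
```

The even/odd split is exactly the structure that makes block updates parallelizable, on a GPU today or, in principle, on dedicated sampling hardware; the open question in the thread is whether workloads shaped like this cover anything people actually want from modern generative AI.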
[image attached]
113 · 44 · 1.1K · 326.2K