Jordan Reynolds

139 posts

@joreyn82

VP AI & Autonomy @ Rockwell Automation // Industrial Autonomy // AI for the physical world // Austin TX

Joined October 2012
638 Following · 226 Followers
Jordan Reynolds (@joreyn82)
@Jason @RoKhanna It means employees are now obligated to produce $25 an hour plus a consumer-surplus margin for the employer, or they are economically infeasible to hire. This harms entry-level and early-skills workers and creates barriers to entry.
0 · 0 · 10 · 459
@jason (@Jason)
If this passes, what impact (if any) do you think @RoKhanna’s $25 minimum wage will have on unemployment rate in California? Show your work.
273 · 29 · 643 · 109.9K
Jordan Reynolds (@joreyn82)
@avidseries The West Coast immediately transitions to "West" once you've reached the eastern foothills of the Cascades.
0 · 0 · 1 · 279
i/o (@avidseries)
This is a pretty good map of US regions. I like the nuance regarding Texas. I have just a few minor quibbles — for example, I think the diffuse border between the West and West Coast should be farther west in some places (like in Oregon).
[image]
250 · 42 · 1.3K · 113.2K
Best Movie Moments 🍿 (@BestMovieMom)
For A Knight’s Tale (2001), lances were made of hollowed balsa wood with internal hinges to shatter safely on impact. The "splinters" were actually linguine pasta and wood chips packed inside to create a spectacular visual without injuring actors.
66 · 375 · 13.1K · 1.9M
HustleBitch (@HustleBitch_)
🚨 FAKE FILETS ARE BEING SERVED AT YOUR FAVORITE RESTAURANTS - AND HE JUST FOUND ONE

A cattle rancher ordered an expensive filet at a steakhouse and immediately stopped eating. "I should have known when it came perfectly round. I should've suspected something." He pulled the steak apart and said: "You can definitely tell this steak is GLUED together."

Food experts note that additives like transglutaminase ("meat glue") are FDA-approved and that filet mignon can naturally appear uniform due to parallel grain - but the video has still sparked debate over how "premium" filets are judged, presented, and trusted by diners.

The texture looked off. The cut separated in layers. Once you start looking closely… you can't unsee it.

How long has this been happening - and who decided customers didn't need to know?
697 · 3.2K · 8.6K · 5.6M
Hamid AI (@XGhost_slayer)
Only for a genius. No cheating.
[image]
3.3K · 147 · 432 · 157.5K
James Altucher (@jaltucher)
we get it poets: things are like other things.
19 · 3 · 85 · 8.2K
Mishi Vibes 🇺🇲 (@Mishi_2210_)
Crack the password. Let's try it if you're a genius - 0.0001% will crack the password.
[image]
15.4K · 491 · 4K · 2.5M
Jordan Reynolds (@joreyn82)
@theepicmap The motion path of glaciers parallels the diagonal path of the Rocky Mountains, which is a function of the plate-subduction geometry.
1 · 1 · 12 · 3.9K
Epic Maps 🗺️ (@theepicmap)
Is there a reason why most of Canada's largest lakes are situated on the same line?
[image]
1.1K · 234 · 6.4K · 1.4M
Jordan Reynolds (@joreyn82)
@stevesi @Jason If the belief is that an increasing velocity of labor displacement will negatively impact employment, and if we have already seen an increasing velocity of labor displacement, then we should also see a negative response in overall employment - but we aren't seeing this. Why?
0 · 0 · 0 · 20
@jason (@Jason)
Good question... The gasoline-powered tractor debuted in the 1890s, and by 1930 about 15% of farmers used one. It wasn't until the labor shortages of the 1940s (due to the war) that it surged. It took 50 years.

PCs debuted in the early 1980s, and the internet in 1995. It took ~20 years for the IBM PC (and internet) to take over the typing pool, messengers, and mail room.

Waymo debuted its self-driving in Arizona in late 2020... so it's been four years, and we are going to see full-scale adoption of the technology in the next five years. Self-driving will take ~10 years from debut to massive job loss.

The pattern is always faster deployment. Here's your trend:
Farms: 50 years
Offices: 20 years
Driving: 10 years
AI knowledge work: < seven years
Jordan Reynolds (@joreyn82)

Jason - how do you reconcile this position with historical observations on tech breakthroughs that drive inflection points in automation? The Industrial Revolution, electrification, the personal computer, the internet, etc. In each of these transitional periods jobs were displaced, but the productivity effects associated with capital displacing labor meant that new products and services were accessible and new jobs were created. Most importantly, overall employment went up, not down. Is your position that “it’s somehow different now?”

35 · 23 · 194 · 68.8K
Jordan Reynolds (@joreyn82)
@Jason Thanks - I think you bring up reasonable points here, but consider that the increasing speed of deployment will also correlate with an increased speed of deploying new products and services in domains where humans are still required.
2 · 0 · 2 · 427
Jordan Reynolds (@joreyn82)
Jason - how do you reconcile this position with historical observations on tech breakthroughs that drive inflection points in automation? The Industrial Revolution, electrification, the personal computer, the internet, etc. In each of these transitional periods jobs were displaced, but the productivity effects associated with capital displacing labor meant that new products and services were accessible and new jobs were created. Most importantly, overall employment went up, not down. Is your position that “it’s somehow different now?”
1 · 0 · 14 · 72.7K
@jason (@Jason)
Politicians are terrified about AI job loss, which they know will be colossal. In the case of the Trump administration the best they can do is run out the clock and spin it - but the public sees it clearly:

1. 10M+ drivers and factory workers will lose their jobs in the next decade in America.
2. Millions of entry-level white-collar jobs are frozen and will eventually be eliminated, because AI is amazing at the stuff you learn and do in the first 5-10 years at an office.
3. There are many unknown, speculative innovations coming - like humanoid robots becoming $1 an hour to operate and maintain.

No one but @elonmusk and a handful of Chinese factory experts who have built scaled dreadnought factories could actually predict the outcome of #3.

The question is whether founders will create enough new startups to replace these jobs. No one is coming to save you; your job has already been retired - they just haven't told you yet. Start a company now. Control your destiny.
Dan Primack (@danprimack)

Let's assume @DavidSacks is correct, and it's just 3% or so of total layoffs. That feels BIG to me. Not in terms of percentage, but because every AI exec/investor says we're still in the very early innings of AI. If it's already accounting for 3% of layoffs today, the losses tomorrow could be extraordinary. Particularly once robotics layer gets added. Also, this doesn't account for new positions not being opened because AI could fill roles.

15 · 67 · 585 · 165.1K
Ido Irani (@IdoIrani)
@joreyn82 @JohnHolbein1 That could explain a difference between a z of 4 and a z of 2, not a difference between 2.05 and 1.95. The sharp jump is artificial.
3 · 1 · 94 · 4.1K
John B. Holbein (@JohnHolbein1)
Look at the distribution of z-values from medical research!
[image]
125 · 306 · 5.9K · 1.4M
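Ido Irani's point above - that a sharp discontinuity in published z-values right at 1.96 looks artificial, i.e. produced by selection rather than by the underlying effects - can be illustrated with a minimal simulation. The effect mix and publication rates below are hypothetical assumptions chosen for illustration, not figures from the study being discussed:

```python
import random

random.seed(0)

# Hypothetical mix of studies: 70% null effects, 30% real effects
# shifted by 2 standard deviations.
raw = [random.gauss(0, 1) + (2.0 if random.random() < 0.3 else 0.0)
       for _ in range(100_000)]
z = [abs(v) for v in raw]

# Selective publication: significant results (|z| > 1.96) always appear;
# non-significant ones appear only 30% of the time (assumed rate).
published = [v for v in z if v > 1.96 or random.random() < 0.3]

# Count published z-values in narrow windows just below and just above 1.96.
just_below = sum(1 for v in published if 1.86 <= v < 1.96)
just_above = sum(1 for v in published if 1.96 <= v < 2.06)
print(just_below, just_above)
```

The underlying density of z-values is smooth through 1.96, so the jump in published counts at the threshold is produced entirely by the publication filter - which is exactly the sense in which the sharp jump in the plotted distribution is artificial.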
Jordan Reynolds (@joreyn82)
#1 is not a misleading claim, as you described. It's a relatively fair and consistent explanation of how LLMs work. LLM training is a process of inferring the distribution that underlies the training data, and inference is a process of sampling from this distribution. What is there to argue with here?
1 · 0 · 0 · 67
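The "training infers a distribution, inference samples from it" framing in the reply above can be made concrete with a toy example. Here a maximum-likelihood Gaussian fit stands in for a real generative model, and all the numbers are arbitrary illustration values:

```python
import random
import statistics

random.seed(1)

# "Training data": draws from an unknown process (here, secretly N(5, 2)).
data = [random.gauss(5.0, 2.0) for _ in range(10_000)]

# "Training": infer the distribution underlying the data
# (maximum-likelihood fit of a Gaussian: sample mean and std dev).
mu = statistics.fmean(data)
sigma = statistics.pstdev(data)

# "Inference": generate new samples from the learned distribution.
generated = [random.gauss(mu, sigma) for _ in range(5)]
print(mu, sigma)
```

An LLM replaces the Gaussian with a far richer family of distributions over token sequences, but the two-phase structure (fit a distribution, then sample from it) is the same.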
Liron Shapira (@liron)
Today's Extropic launch raises some new red flags. I started following this company when they refused to explain the input/output spec of what they're building, leaving us waiting to get clarification. Here are 3 red flags from today:

1. From extropic.ai/writing/inside…: "Generative AI is Sampling. All generative AI algorithms are essentially procedures for sampling from probability distributions. Training a generative AI model corresponds to inferring the probability distribution that underlies some training data, and running inference corresponds to generating samples from the learned distribution. Because TSUs sample, they can run generative AI algorithms natively."

This is a highly misleading claim about the algorithms that power the most useful modern AIs, on the same level of gaslighting as calling the human brain a thermodynamic computer. IIUC, as far as anyone knows, the majority of AI computation work doesn't match the kind of input/output that you can feed into Extropic's chip.

The page says: "The next challenge is to figure out how to combine these primitives in a way that allows for capabilities to be scaled up to something comparable to today's LLMs. To do this, we will need to build very large TSUs, and invent new algorithms that can consume an arbitrary amount of probabilistic computing resources."

Do you really need to build large TSUs to research whether it's possible for LLM-like applications to benefit from this hardware? I would've thought it'd be worth spending a couple $million on investigating that question via a combination of theory and modern cloud supercomputing hardware, instead of spending over $30M on building hardware that might be a bridge to nowhere.

Their own documentation for THRML (their open-source library) says: "THRML provides GPU-accelerated tools for block sampling on sparse, heterogeneous graphs, making it a natural place to prototype today and experiment with future Extropic hardware."

You're saying you lack a way your hardware primitives could *in principle* be applied toward useful applications of some kind, and you created this library to help do that kind of research using today's GPUs… Why would you not just release the Python library (THRML) earlier, do the bottlenecking research you said needs to be done earlier, and engage the community to help get you an answer to this key question by now? Why did you wait all this time to first launch this extremely niche, tiny-scale hardware prototype before explaining this make-or-break bottleneck, and only publicize your search for potential partners with relevant "probabilistic workloads" now, when the cost of not doing so was $30M and 18 months?

2. From extropic.ai/writing/tsu-10…: "We developed a model of our TSU architecture and used it to estimate how much energy it would take to run the denoising process shown in the above animation. What we found is that DTMs running on TSUs can be about 10,000x more energy efficient than standard image generation algorithms on GPUs."

I'm already seeing people on Twitter hyping the 10,000x claim. But for anyone who's followed the decades-long saga of quantum computing companies claiming to achieve "quantum supremacy" with similar kinds of hype figures, you know how much care needs to go into defining that kind of benchmark. In practice, it tends to be extremely hard to point to situations where a classical computing approach *isn't* much faster than the claimed "10,000x faster thermodynamic computing" approach. The Extropic team knows this, but opted not to elaborate on the kind of conditions that could reproduce this hype benchmark that they wanted to see go viral.

3. The terminology they're using has been switched to "probabilistic computer": "We designed the world's first scalable probabilistic computer."

Until today, they were using "thermodynamic computer" as their term, and claimed in writing that "the brain is a thermodynamic computer". One could give them the benefit of the doubt for pivoting their terminology. It's just that they were always talking nonsense about the brain being a "thermodynamic computer" (in my view the brain is neither that nor a "quantum computer"; it's very much a neural net algorithm running on a classical computer architecture). And this sudden terminology pivot is consistent with them having been talking nonsense on that front.

Now for the positives:
* Some hardware actually got built!
* They explain how its input/output potentially has an application in denoising, though as mentioned, they are vague on the details of the supposed "10,000x thermodynamic supremacy" they achieved on this front.

Overall: This is about what I expected when I first started asking for the input/output 18 months ago. They had a legitimately cool idea for a piece of hardware, but didn't have a plan for making it useful, though they had some vague beginnings of theoretical research that had a chance to make it useful. They seem to have made respectable progress getting the hardware into production (the amount that $30M buys you), and seemingly less progress finding reasons why this particular hardware, even after 10 generations of successor refinements, is going to be of use to anyone.

Going forward, instead of responding to questions about your device's input/output by "mogging" people and saying it's a company secret, and tweeting hyperstitions about your thermodynamic god, I'd recommend being more open about the seemingly giant life-or-death question that the tech community might actually be interested in helping you answer: whether someone can write a Python program in your simulator giving stronger evidence that some kind of useful "thermodynamic supremacy" with your hardware concept can ever be a thing.
[image]
112 · 43 · 1.1K · 326.4K
Jordan Reynolds (@joreyn82)
@paulg @SteveStuWill Much lower sample size for the primary-education study - if they had a comparable sample size of 80k+ you'd probably see a regression toward the mean.
0 · 0 · 0 · 192
Paul Graham (@paulg)
@SteveStuWill Why do you think the correlation is lower for academic achievement in secondary and tertiary education than in primary education? Is it because in the later stages, dumb people have more freedom to choose easy subjects?
27 · 2 · 110 · 22.9K
Steve Stewart-Williams (@SteveStuWill)
IQ scores only predict how well you do on IQ tests... and just a few other things. [Link below.]
[image]
114 · 238 · 2.3K · 189.5K
Jordan Reynolds (@joreyn82)
@Jason The statement is misleading. In prior elections the Ds had a *higher* percentage of the popular vote. It was a “closer election” precisely because the Ds *lost* this popular vote margin advantage, not because they gained it.
1 · 0 · 2 · 2.7K
@jason (@Jason)
Technically, she's correct on a percentage basis of votes [the Gore/Bush toss-up in 2000 was in the last year of the 20th century, and that was the narrowest, obviously]. Realistically, the Democrats need to banish Kamala and the Bidens from their party and lean into moderates who will fight for minimum wage, healthcare, and childcare.
Spencer Hakimian (@SpencerHakimian)

Kamala Harris: “It was the closest election for president of the United States in the 21st Century.”

25 · 7 · 250 · 164.7K