YouAndYourBS

7.7K posts

@YouAndYourBS

We're all sick of it.

Joined May 2016
743 Following · 427 Followers
J.C. Parets @JC_ParetsX ·
Why do people keep comparing today’s market to 1999-2000? I’ve genuinely tried to find the similarities and I’m struggling. Can anyone explain the top conspiracy theories behind why this is supposedly the same environment?
198 replies · 11 reposts · 418 likes · 272K views
YouAndYourBS @YouAndYourBS ·
@dampedspring @JC_ParetsX I’m old, lived through it. They don’t feel the same at all, and they have virtually nothing in common. Internet companies weren’t taking in billions in real user revenues. Industries weren’t being transformed. 1999 was about the promise that this would happen. It took time.
1 reply · 0 reposts · 1 like · 131 views
Andy Constan @dampedspring ·
@JC_ParetsX A big difference is you were in high school. Not having lived through the experience probably makes it quite confusing for you.
19 replies · 2 reposts · 185 likes · 18.9K views
YouAndYourBS @YouAndYourBS ·
@thedimitri @johnthenoticer Wouldn’t a systemically racist society prevent people of similar competence from having similar outcomes, specifically for a race that is being marginalized and oppressed?
0 replies · 0 reposts · 1 like · 322 views
Dimitri @thedimitri ·
@johnthenoticer This doesn't really disprove anything about systemic racism's existence; it proves that if two people are essentially equally competent, they'll have similar outcomes.
58 replies · 0 reposts · 32 likes · 15.4K views
John Rain @johnthenoticer ·
In the United States, white people earn significantly more than black people on average. But as soon as you compare blacks and whites with the same IQ, that gap disappears like magic... This is one of the clearest pieces of evidence that systemic racism isn't what's driving the raw overall income disparity between the two groups.
[image attached]
175 replies · 1.3K reposts · 9.7K likes · 1.2M views
YouAndYourBS @YouAndYourBS ·
@marceelias Stop LARPing as a ’70s revolutionary. You’re not fighting for democracy; you’re hoping Republicans don’t even exist and your team can run everything.
0 replies · 0 reposts · 1 like · 9 views
Marc E. Elias @marceelias ·
I often say that the fight for democracy is the fight of our generation. But let me be clear, if you aren’t fighting for Black voting rights right now then you aren’t really fighting for democracy.
3.8K replies · 5.8K reposts · 19.5K likes · 457.7K views
ben hylak @benhylak ·
so instead of saying “funky cabins within 2 hour drive” i will have to keep filling out your patient intake form
[image attached]
Brian Chesky @bchesky

@benhylak The ChatGPT interface doesn’t work for this. We’ve already tried it.

75 replies · 10 reposts · 1.3K likes · 355.8K views
Swann Marcus @SwannMarcus89 ·
I swear to Christ the Democratic Party’s message at this point is “all of the most progressive places are unaffordable shitholes where nobody can afford an apartment, therefore you should vote for us”
[image attached]
266 replies · 1.4K reposts · 12.1K likes · 310.6K views
Jaynit @jaynitx ·
MrBeast: "If my mental health was a priority I wouldn't be as successful as I am"
"I obviously never would have buried myself alive for seven days. There's a reason no one makes videos like me, not even close. Because no one wants to live the life I live"
"There were months I'm flying 200 days a year on a plane. To get these videos done I do everything"
"Something I always tell myself is how you feel right now is why no one else does what you do. If you push through this that's just even more of a reason why no one will ever be who you are"
"Once you make a couple million dollars why would you live the life I live? Why would you not take weekends off? Why would you not prioritize your sanity? It makes no sense. But that's why no one else does it"
131 replies · 136 reposts · 2.2K likes · 841.3K views
Morgan @morganlinton ·
Officially canceling our Anthropic plan; it’s Codex + Cursor for my little 16-person eng team. Anthropic is great for companies that can spend $2,000/mo and up per engineer, but not affordable for us. Codex really upped their game recently, and with GPT 5.5, it’s just so good, and so token efficient. Still using Cursor plenty; my team still looks at and reviews a lot of code. But with Cursor, we’ve never hit a limit, and Composer 2 is pretty awesome for most stuff. Testing out Droid as well and seeing some good early results with Droid + GLM 5.1, but still more testing to do before rolling it out to the whole team. My guess is many more engineering leaders will be sending messages like this. Anthropic makes great stuff but phew, it’s so darn token hungry. My team loves Codex and Cursor, onward!
[image attached]
326 replies · 126 reposts · 3.2K likes · 374.2K views
YouAndYourBS @YouAndYourBS ·
@G_S_Bhogal This optimistic take reminds me of when I was a kid and the internet was brand new, and I thought to myself “this will end human ignorance.” Still hope you are right though.
0 replies · 0 reposts · 0 likes · 17 views
Gurwinder @G_S_Bhogal ·
The more people are exposed to the polite tone of chatbots, the more they’ll internalize this way of speaking. Just as humans trained chatbots to be courteous, so chatbots will eventually return the favor.
21 replies · 16 reposts · 194 likes · 11K views
eric zakariasson @ericzakariasson ·
orchestrate a swarm of agents
here's a visualization of the swarm and how it's using multiple planners, verifiers, and workers
try it today with /add-plugin orchestrate and then /orchestrate [goal]
Cursor @cursor_ai

Introducing /orchestrate, a skill that recursively spawns agents to tackle your most ambitious tasks with the Cursor SDK. We’ve used it to:
- Autoresearch our internal skills, cutting token use by 20% while improving evals
- Cut cold start times on our internal backend by 80%

56 replies · 100 reposts · 1.7K likes · 391.6K views
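Neither tweet shows what /orchestrate actually runs under the hood. As a rough illustration of the planner/verifier/worker pattern they describe, here is a minimal toy loop; every name in it (Task, planner, worker, verifier, orchestrate) is hypothetical and is not the Cursor SDK API.

```python
from dataclasses import dataclass, field

# Toy sketch of a planner/verifier/worker orchestration loop.
# All names here are hypothetical stand-ins, not the Cursor SDK.

@dataclass
class Task:
    goal: str
    subtasks: list["Task"] = field(default_factory=list)
    result: str | None = None

def planner(task: Task) -> list[Task]:
    # Hypothetical planner: split a goal into smaller steps.
    parts = task.goal.split(" then ")
    return [Task(goal=p) for p in parts] if len(parts) > 1 else []

def worker(task: Task) -> str:
    # Hypothetical worker: do the leaf-level work (placeholder here).
    return f"done: {task.goal}"

def verifier(task: Task) -> bool:
    # Hypothetical verifier: accept or reject a worker's output.
    return bool(task.result)

def orchestrate(task: Task, depth: int = 0, max_depth: int = 3) -> Task:
    # Recursively spawn sub-agents until tasks are leaf-sized, then run
    # workers and gate each result through the verifier (one retry).
    if depth < max_depth:
        task.subtasks = planner(task)
    if task.subtasks:
        for sub in task.subtasks:
            orchestrate(sub, depth + 1, max_depth)
        task.result = "; ".join(s.result or "" for s in task.subtasks)
    else:
        task.result = worker(task)
        if not verifier(task):
            task.result = worker(task)
    return task

print(orchestrate(Task("fetch data then clean it then plot it")).result)
```

The recursion mirrors the "recursively spawns agents" wording: the planner keeps splitting until tasks are leaf-sized, workers execute the leaves, and a verifier gates each result before it is merged upward.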
DawZYN @D_SAUC3Y ·
@Noahpinion If you did, you wouldn't be talking about millionaires and billionaires as if they were the same.
9 replies · 0 reposts · 37 likes · 4.1K views
Noah Smith 🐇🇺🇸🇺🇦🇹🇼 @Noahpinion ·
I am concerned that the Dems are becoming the party of "millionaires who resent billionaires". "I made my millions fair and square, but you cheated and exploited the workers to make your billions, you capitalist pig!"
Marco Foster @MarcoFoster_

AOC: “There’s a certain level of wealth and accumulation that is unearned. You can’t earn a billion dollars. You just can’t earn that. You can get market power, you can break rules, you can abuse labor laws, you can pay people less than what they’re worth, but you can’t earn that”

375 replies · 677 reposts · 8.7K likes · 605.7K views
YouAndYourBS @YouAndYourBS ·
@Jason We used to have a bunch of people in my company's Slack using them. Just checked, no one is. That fad (that was a moral imperative and showed you were a good human being!) sure died fast.
0 replies · 0 reposts · 0 likes · 51 views
YouAndYourBS @YouAndYourBS ·
@privetavdey Irritating way to select something that I'd like to toggle quickly.
0 replies · 0 reposts · 1 like · 42 views
Alexander Avdeev @privetavdey ·
Can't decide if this is smart or not
123 replies · 20 reposts · 1.4K likes · 113.8K views
Austen Allred @Austen ·
And now it's time to see what my little brother has been working on for the past couple of years: an AI model fully built on a sub-quadratic sparse-attention architecture. Result?
- 12 million token reasoning model
- 150 tokens/second
- 1/5 the cost of Opus
Alexander Whedon @alex_whedon
[quoted: the SubQ announcement, reproduced in full below]
23 replies · 9 reposts · 171 likes · 36.3K views
ᴅᴀɴɪᴇʟ ᴍɪᴇssʟᴇʀ 🛡️
I don’t know how good this new 12 million context system is, or if it’s hype or whatever, but I think it definitely shows a point I’ve been making since 2023. We really suck at everything.
- The chips are primitive
- The research and training and inference systems are primitive
- Our RL approaches are primitive
- We’ve barely started building harnesses
Everything we’re doing is massively inefficient right now. And there are thousands of vectors for improvement. And many of them are multiplicative.
Most people think we’re at like 88% of AI’s capabilities, and we’re pushing to hit 92% or eventually 97% or something.
Nah. This is us at .0003%
Everything we have is Punch Card AI. And as the AI gets better it will reveal that it’s similar for our understanding of medicine, physics, chemistry, etc.
This is barely even day 0. This is pre-history.
Alexander Whedon @alex_whedon
[quoted: the SubQ announcement, reproduced in full below]
55 replies · 41 reposts · 353 likes · 52.5K views
YouAndYourBS @YouAndYourBS ·
@alex_whedon You're giving Martin Shkreli a run for his money. This is so obviously BS. You're telling me you beat OpenAI and Anthropic and Google to this breakthrough that uses (checks notes) 1,000x less compute? You're going to jail.
0 replies · 0 reposts · 0 likes · 88 views
Alexander Whedon @alex_whedon ·
Introducing SubQ - a major breakthrough in LLM intelligence. It is the first model built on a fully sub-quadratic sparse-attention architecture (SSA), and the first frontier model with a 12 million token context window, which is:
- 52x faster than FlashAttention at 1MM tokens
- Less than 5% the cost of Opus
Transformer-based LLMs waste compute by processing every possible relationship between words (standard attention). Only a small fraction actually matter. @subquadratic finds and focuses only on the ones that do. That's nearly 1,000x less compute and a new way for LLMs to scale.
1.5K replies · 2.9K reposts · 23.1K likes · 12.5M views
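The announcement states the claim but not the mechanism. Here is a minimal NumPy sketch of the idea it gestures at (each query attending only to its top-k keys), with the caveat that this toy still computes all n² scores before discarding most of them, so it is not actually sub-quadratic; locating the top keys without scoring every pair is the hard part any real SSA-style method would have to solve, and nothing here reflects SubQ's unpublished implementation.

```python
import numpy as np

def full_attention(Q, K, V):
    # Standard attention: every query scores every key, O(n^2) pairs.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return (w / w.sum(axis=-1, keepdims=True)) @ V

def topk_sparse_attention(Q, K, V, k):
    # Toy "sparse" attention: each query keeps only its k best keys.
    # NOTE: this still computes all n^2 scores before discarding most,
    # so it is illustrative only; a genuinely sub-quadratic method must
    # locate the top keys WITHOUT scoring every pair.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    n = scores.shape[-1]
    keep = np.argpartition(scores, n - k, axis=-1)[:, n - k:]
    masked = np.full_like(scores, -np.inf)
    np.put_along_axis(masked, keep,
                      np.take_along_axis(scores, keep, axis=-1), axis=-1)
    w = np.exp(masked - masked.max(axis=-1, keepdims=True))
    return (w / w.sum(axis=-1, keepdims=True)) @ V

rng = np.random.default_rng(0)
n, d = 256, 64
Q, K, V = rng.standard_normal((3, n, d))
# k == n reproduces full attention exactly; small k approximates it
# while conceptually needing only ~n*k of the n^2 interactions.
print(np.allclose(full_attention(Q, K, V), topk_sparse_attention(Q, K, V, k=n)))
print(np.abs(full_attention(Q, K, V) - topk_sparse_attention(Q, K, V, k=16)).max())
```

For scale: at a 12-million-token context, full attention is roughly 1.4e14 score pairs per head per layer; a "1,000x less compute" claim implies keeping on the order of 12K keys per query instead of all 12M.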
YouAndYourBS @YouAndYourBS ·
@daniel_mac8 Transparently Theranos. If they had this, they would be releasing it and blowing people's minds.
0 replies · 0 reposts · 0 likes · 359 views
Dan McAteer @daniel_mac8 ·
SubQ is either the biggest breakthrough since the Transformer...
> 52x faster than FlashAttention at 1mm tok context
> 20x cheaper than Opus
...or it's AI Theranos. Requested early access so hopefully can investigate soon.
[image attached]
Alexander Whedon @alex_whedon
[quoted: the SubQ announcement, reproduced in full above]
54 replies · 49 reposts · 921 likes · 105.5K views