YouAndYourBS

@benhylak The ChatGPT interface doesn’t work for this. We’ve already tried it.



When I was at Apple, I loved working on the micro-interactions you see all over the OS. Now that I’m not at Apple, I still like solving the little problems that really annoyed me. In this case, I designed a backspace button with a speed controller: just press it to delete letter by letter, then stretch it to immediately delete word by word, without having to wait (like it usually does on the OS), and stretch a little more to speed-delete through words… I’m also working on another one where you can restore words if you over-deleted by accident 😜 (it also has haptic feedback, which makes it really fun)
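As a rough sketch of the tiering logic described above (press deletes by letter, stretching switches to words, stretching further speeds up), here is a minimal model in Python. The pixel thresholds and repeat rates are invented for illustration; they are not values from the actual design.

```python
# Sketch of the tiered backspace described above: a plain press deletes by
# letter, stretching jumps to word deletion, stretching further speeds up.
# Thresholds and rates below are made-up illustrative values.
def backspace_tier(stretch_px: float) -> tuple[str, float]:
    """Map how far the button is stretched to (delete unit, repeats/sec)."""
    if stretch_px < 20:
        return ("letter", 10.0)   # plain press: delete letter by letter
    if stretch_px < 60:
        return ("word", 4.0)      # light stretch: jump straight to words
    # heavy stretch: speed-delete through words, rate grows with stretch
    return ("word", 4.0 + (stretch_px - 60) * 0.2)

def apply_backspace(text: str, unit: str) -> str:
    """Delete one unit (a letter or a trailing word) from the end."""
    if unit == "letter":
        return text[:-1]
    return text[: max(text.rstrip().rfind(" "), 0)]  # drop trailing word

assert backspace_tier(0) == ("letter", 10.0)
assert apply_backspace("hello world", "word") == "hello"
```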





Introducing /orchestrate, a skill that recursively spawns agents to tackle your most ambitious tasks with the Cursor SDK. We’ve used it to:
- Autoresearch our internal skills, cutting token use by 20% while improving evals
- Cut cold start times on our internal backend by 80%
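The post gives no implementation details, but the core pattern (an agent that decomposes a task, spawns child agents for the subtasks, then synthesizes their results) can be sketched generically. Everything below is hypothetical: `run_agent` is a stand-in placeholder for whatever call the Cursor SDK actually exposes, not its real API.

```python
# Hypothetical sketch of recursive agent orchestration. `run_agent` is a
# placeholder, not the real Cursor SDK API; only the pattern is shown.
from dataclasses import dataclass

@dataclass
class Result:
    summary: str

def run_agent(prompt: str) -> Result:
    """Placeholder: imagine this dispatches one agent and returns its output."""
    return Result(summary=f"(agent output for: {prompt!r})")

def orchestrate(task: str, depth: int = 0, max_depth: int = 2) -> Result:
    # Leaf case: we've recursed far enough, hand the task to a single agent.
    if depth >= max_depth:
        return run_agent(task)
    # Otherwise ask a planner agent to split the task into subtasks,
    # recurse on each, then synthesize the child results.
    plan = run_agent(f"Split into 2-4 independent subtasks, one per line:\n{task}")
    subtasks = [line for line in plan.summary.splitlines() if line.strip()]
    child_results = [orchestrate(sub, depth + 1, max_depth) for sub in subtasks]
    merged = "\n".join(r.summary for r in child_results)
    return run_agent(f"Synthesize these subtask results for '{task}':\n{merged}")

print(orchestrate("Cut cold-start times on the backend").summary)
```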


AOC: “There’s a certain level of wealth and accumulation that is unearned. You can’t earn a billion dollars. You just can’t earn that. You can get market power, you can break rules, you can abuse labor laws, you can pay people less than what they’re worth, but you can’t earn that”


The incidence of pronouns in social media bios and email signatures is down at least 70% since the peak in 2020.


“Yes!” Democrat Katie Porter enthusiastically said she supports taxpayer-funded healthcare for illegal immigrants during a California gubernatorial primary debate. The former congresswoman insisted that restoring that coverage to undocumented migrants is “what Californians deserve.” Porter gained momentum in the polls after frontrunner and fellow Democrat Eric Swalwell suspended his campaign following a number of sexual misconduct allegations.


Introducing SubQ - a major breakthrough in LLM intelligence. It is the first model built on a fully sub-quadratic sparse-attention architecture (SSA), and the first frontier model with a 12 million token context window, which is:
- 52x faster than FlashAttention at 1MM tokens
- Less than 5% the cost of Opus

Transformer-based LLMs waste compute by processing every possible relationship between words (standard attention). Only a small fraction actually matter. @subquadratic finds and focuses only on the ones that do. That’s nearly 1,000x less compute and a new way for LLMs to scale.
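For intuition, here is a minimal NumPy sketch of top-k sparse attention, where each query attends only to its k highest-scoring keys instead of all n. This is a generic illustration of the sparse-attention idea; the post does not describe SubQ's actual SSA mechanism, so nothing here should be read as its implementation.

```python
# Minimal sketch of top-k sparse attention (illustrative only; NOT SubQ's
# actual SSA, which the post does not describe).
import numpy as np

def topk_sparse_attention(Q, K, V, k=8):
    """Each query attends only to its k highest-scoring keys.

    Dense attention uses all n*n query-key scores; keeping only the
    top-k per query moves the attention cost from O(n^2) toward O(n*k).
    (Note: this toy version still *scores* all pairs; real sub-quadratic
    systems avoid that, e.g. via routing, hashing, or block sparsity.)
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)               # (n, n) raw scores
    # Threshold each row at its k-th largest score, mask everything below it.
    kth = np.partition(scores, -k, axis=-1)[:, -k:].min(axis=-1, keepdims=True)
    masked = np.where(scores >= kth, scores, -np.inf)
    # Softmax over the surviving entries (masked-out scores become weight 0).
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                          # (n, d) attended values

n, d = 512, 64
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
out = topk_sparse_attention(Q, K, V, k=16)      # each token uses 16 of 512 keys
```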

