Shaq

585 posts

@shareqhusain

arsenal fan. passionate about tech for good. food is life.

London · Joined January 2015
374 Following · 113 Followers
Shaq@shareqhusain·
@WelBeast Insane how so many forget and are ungrateful so fast!
0 replies · 0 reposts · 0 likes · 363 views
WelBeast@WelBeast·
For the first time in Arsenal’s history we’ll play in Back to back Champions League semifinals. Thank you Mikel Arteta. You have changed our lives.
494 replies · 1.8K reposts · 14.5K likes · 166K views
Shaq@shareqhusain·
Obsessed builders who go the last mile to truly solve consumer/business challenges can, as before, still build massive companies from scratch. The bar is maybe higher, but for the greats, acceleration is probably going to be even faster than ever before. So, tl;dr: let builders build, and we are going to have a faster cycle of startup discovery: bigger, faster winners and cheaper losers (who can discover faster that they have no advantage to win).
0 replies · 0 reposts · 1 like · 134 views
andrew chen@andrewchen·
“Ok, this startup is cool, but …” 1980: … what if IBM builds this? 1995: … what if Microsoft builds this? 2010: … what if Google builds this? Today: … what if builds this? The reality is, if founders listened to the “what if” pessimists we’d never have any startups or new products. That’s why they’re building and the pundits aren’t. My observation: when these huge waves happen, these new markets are so damn big there will be tens of thousands of new viable companies, hundreds of unicorns, and a few iconic companies that become generational. The big cos play a role but can never compete with the glorious open market known as capitalism. So for all the “what if” people: sit down, log off X for a bit, and let the founders do their thing. And let’s cheer them on when they do.
141 replies · 125 reposts · 1.1K likes · 81.2K views
Alex@alexmoneypenny·
My belief is very low right now. Yesterday hurt, and City have the momentum. But it is a fact that if Arsenal win two games next week, they are: 1) in the UCL semi-finals, likely against a team they beat 4-0 earlier this season; 2) 9 clear in the PL with 5 to go. It feels so far away, but it’s literally two wins. This sport, man. The margins. Absolutely insane. 😅
104 replies · 153 reposts · 2.6K likes · 131.3K views
Shaq@shareqhusain·
It was always going to be difficult against the greatest manager, against a team that can buy Semenyo and Guehi to reinforce in winter. Our backs are against the wall and belief is low. But now is the moment heroes are made. We can still win a magnificent double! It’s time to step up and make history, and I think we will do it at City. 2-1 to the Gunners #cyog
0 replies · 0 reposts · 0 likes · 161 views
Triple M@Tripple____M·
Arsenal fans only, do you think we will beat Manchester City next week?
1.6K replies · 55 reposts · 1.1K likes · 164.2K views
Shaq@shareqhusain·
@Matin_Zubimendi Ok, just realised this is a parody account, hah, but the words are still true!
0 replies · 0 reposts · 0 likes · 52 views
Shaq@shareqhusain·
You’re a top, top player. In the heat of the moment mistakes can happen; don’t sweat it. True fans are 100% behind you, and we know you and the team will bring a big trophy home. Play some front-foot, high-tempo football at City and beat them in their own stadium to 🤫 the naysayers. CYOG!
1 reply · 0 reposts · 1 like · 3.6K views
fã Zubimendi 🇪🇸@Matin_Zubimendi·
To every Arsenal fan, I don’t even know where to begin, but I have to say I’m sorry. What happened yesterday is on me. One moment, one decision, and it changed everything. I’ve replayed it over and over in my head and it hurts knowing I let the team and every single one of you down. I know what this club means. I know what it means to fight for every point, every position, every dream. And in a moment where we needed composure, I made the wrong choice. There are no excuses for that. Seeing the disappointment, the anger, I understand it. Honestly, I feel it too. Probably even more. Because when you wear this badge, you carry millions with you, and yesterday I didn’t carry it the way I should have. All I can promise is this: I won’t hide. I won’t shy away from it. I will take it, learn from it and use it. Because moments like this either break you or build you, and I refuse to let it break me. I will come back stronger. For the team. For this club. For you. I’m truly sorry.
[image attached]
2.4K replies · 1K reposts · 13.6K likes · 3M views
Shaq retweeted
Andrej Karpathy@karpathy·
Judging by my tl there is a growing gap in understanding of AI capability. The first issue I think is around recency and tier of use. I think a lot of people tried the free tier of ChatGPT somewhere last year and allowed it to inform their views on AI a little too much. This is a group of reactions laughing at various quirks of the models, hallucinations, etc. Yes I also saw the viral videos of OpenAI's Advanced Voice mode fumbling simple queries like "should I drive or walk to the carwash". The thing is that these free and old/deprecated models don't reflect the capability in the latest round of state of the art agentic models of this year, especially OpenAI Codex and Claude Code. But that brings me to the second issue. Even if people paid $200/month to use the state of the art models, a lot of the capabilities are relatively "peaky" in highly technical areas. Typical queries around search, writing, advice, etc. are *not* the domain that has made the most noticeable and dramatic strides in capability. Partly, this is due to the technical details of reinforcement learning and its use of verifiable rewards. But partly, it's also because these use cases are not sufficiently prioritized by the companies in their hillclimbing because they don't lead to as much $$$ value. The goldmines are elsewhere, and the focus comes along. So that brings me to the second group of people, who *both* 1) pay for and use the state of the art frontier agentic models (OpenAI Codex / Claude Code) and 2) do so professionally in technical domains like programming, math and research. This group of people is subject to the highest amount of "AI Psychosis" because the recent improvements in these domains as of this year have been nothing short of staggering. When you hand a computer terminal to one of these models, you can now watch them melt programming problems that you'd normally expect to take days/weeks of work. 
It's this second group of people that assigns a much greater gravity to the capabilities, their slope, and various cyber-related repercussions. TLDR: the people in these two groups are speaking past each other. It really is simultaneously the case that OpenAI's free and I think slightly orphaned (?) "Advanced Voice Mode" will fumble the dumbest questions in your Instagram's reels and, *at the same time*, OpenAI's highest-tier and paid Codex model will go off for 1 hour to coherently restructure an entire code base, or find and exploit vulnerabilities in computer systems. This part really works and has made dramatic strides because of 2 properties: 1) these domains offer explicit reward functions that are verifiable, meaning they are easily amenable to reinforcement learning training (e.g. unit tests passed, yes or no, in contrast to writing, which is much harder to explicitly judge), but also 2) they are a lot more valuable in b2b settings, meaning that the biggest fraction of the team is focused on improving them. So here we are.
staysaasy@staysaasy

The degree to which you are awed by AI is perfectly correlated with how much you use AI to code.

1.1K replies · 2.4K reposts · 20K likes · 4M views
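A minimal sketch of the "verifiable reward" idea the thread describes: a candidate program is scored purely by whether it passes explicit unit tests, yes or no. The `solve` function name, the candidate strings, and the test cases are all hypothetical, invented here for illustration; this is not code from the thread.

```python
# Illustrative sketch: a verifiable reward in the unit-test sense.
# A candidate solution gets reward 1.0 only if every test passes.

def verifiable_reward(candidate_src: str, tests: list) -> float:
    """Execute candidate code, then score it against explicit test cases."""
    namespace = {}
    try:
        exec(candidate_src, namespace)  # defines a hypothetical solve()
        for args, expected in tests:
            if namespace["solve"](*args) != expected:
                return 0.0
        return 1.0
    except Exception:
        return 0.0

# Two hypothetical model-proposed solutions and their tests:
good = "def solve(a, b):\n    return a + b\n"
bad = "def solve(a, b):\n    return a - b\n"
tests = [((1, 2), 3), ((0, 0), 0)]

print(verifiable_reward(good, tests))  # 1.0
print(verifiable_reward(bad, tests))   # 0.0
```

The binary, machine-checkable signal is what makes domains like programming "easily amenable" to reinforcement learning, in contrast to judging prose quality.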
Shaq@shareqhusain·
Hi, we don’t know each other, but this would be so cool! I’m COO at a Series A startup, and I use Claude Code so much to do my board work, team management, and hands-on product management work! Everything from strategy, making decks, making prototypes, synthesizing research… Would be so cool to get pro tips from an insider!!!
0 replies · 0 reposts · 0 likes · 166 views
Thariq@trq212·
I want to do some streams where I work with non-technical people using Claude Code to figure out how they might be able to improve their process. My feeling is that just a few tips could make a big difference in efficiency. Any mutuals interested?
698 replies · 80 reposts · 3.4K likes · 186.9K views
Shaq@shareqhusain·
@andrewchen It’s hard to do this with senior hires you are trying to poach, though!
0 replies · 0 reposts · 1 like · 343 views
andrew chen@andrewchen·
Noticing a trend of startups replacing standard resumes/interviews with week-long (or at least 3-day weekend) in-office trials. Makes sense in a world of AI-generated resumes and interview responses. Turns out the best signal for whether someone can do a job is watching them actually do the job. Took us 100 years of HR to rediscover apprenticeships!!! 😂
199 replies · 82 reposts · 1.5K likes · 175.1K views
Shaq retweeted
NASA@NASA·
Even in darkness, we glow. In this image of Earth taken by the Artemis II crew, we can see the electric lights of human activity. In the lower right, sunlight illuminates the limb of the planet.
[image attached]
4K replies · 45.3K reposts · 325.8K likes · 9.9M views
Shaq retweeted
Arthur MacWaters@ArthurMacwaters·
I think about this often
[image attached]
32 replies · 163 reposts · 1.2K likes · 39.7K views
Shaq retweeted
kepano@kepano·
More and more people are using Obsidian as a local wiki to read things your agents are researching and writing. It works best with a separate Obsidian vault that you can fill with content, e.g. via Obsidian Web Clipper.
Andrej Karpathy@karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web UI), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.

48 replies · 92 reposts · 2.1K likes · 168.2K views
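A minimal sketch of the "compile" step in the workflow above: walk a raw/ directory of markdown sources and emit a wiki index file with a one-line summary and a backlink per document. This is my own illustration, not Karpathy's actual tooling; in the real workflow an LLM writes the summaries, so the `summarize` stub here (first non-empty line, truncated) is a hypothetical stand-in.

```python
# Illustrative sketch of the raw/ -> wiki compile step described above.
from pathlib import Path

def summarize(text: str) -> str:
    """Stand-in for an LLM summary: first non-empty line, truncated."""
    for line in text.splitlines():
        if line.strip():
            return line.strip()[:80]
    return "(empty)"

def compile_index(raw_dir: Path, wiki_dir: Path) -> Path:
    """Build wiki/index.md with a backlinked entry per raw/*.md document."""
    wiki_dir.mkdir(parents=True, exist_ok=True)
    entries = []
    for doc in sorted(raw_dir.glob("*.md")):
        entries.append(f"- [[{doc.stem}]]: {summarize(doc.read_text())}")
    index = wiki_dir / "index.md"
    index.write_text("# Wiki index\n\n" + "\n".join(entries) + "\n")
    return index
```

An auto-maintained index like this is what lets the agent answer questions without a separate RAG stack at small scale: it reads the index, then follows the backlinks it needs.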
Aesthetics 𝕏@aestheticsguyy·
Post a picture YOU took. Just a pic. No description
[image attached]
3.8K replies · 2.6K reposts · 32.1K likes · 1.2M views
Shaq@shareqhusain·
@bcherny @trq212 @wongmjane @BenLesh Great leaders front up in difficult moments. So this is great to see. The world got more of a glimpse of what an amazing product you and team have built (life changing). Keep going!!!🚀
0 replies · 0 reposts · 3 likes · 3.4K views
Boris Cherny@bcherny·
Mistakes happen. As a team, the important thing is to recognize it’s never an individual’s fault — it’s the process, the culture, or the infra. In this case, there was a manual deploy step that should have been better automated. Our team has made a few improvements to the automation for next time, with a couple more on the way.
321 replies · 834 reposts · 11K likes · 1.4M views
Ben Lesh@BenLesh·
Apparently Bun might be the cause of Anthropic leaking the Claude Code source code today. A 3-week-old bug where source maps are hosted when they shouldn't be. It's wild there were no tests to catch such an issue. github.com/oven-sh/bun/is…
22 replies · 13 reposts · 466 likes · 211K views
Aakash Gupta@aakashgupta·
The guy who helped build React, the most popular workaround for the browser's layout engine, just said the workaround isn't sufficient and built the replacement himself.

Cheng Lou's resume is the context that makes this announcement hit different. He worked on React at Facebook. Created ReasonML and ReScript. Built Messenger's frontend. Now runs Midjourney's entire UI stack on Bun. Every single role was a fight against the same enemy: the browser's rendering pipeline.

Here's why this matters beyond the engineering flex. The web was built to render documents. Static HTML, flowing text, pages you scroll through. CSS layout was designed for that world. Then we started building applications inside the document renderer: spreadsheets, design tools, messaging apps, AI chat interfaces. Every one of those applications has to ask the browser permission to know how big text is. That question triggers reflow. Reflow locks the main thread. At 60fps you get 16 milliseconds per frame. Spend those milliseconds on layout recalculation and the user sees jank.

The industry's answer for the last decade has been to work around the problem. Virtual DOM (React) batches the writes. CSS containment limits the blast radius. content-visibility skips offscreen layout. FastDOM separates reads from writes. Every solution accepts that the browser owns text measurement and tries to call it less often.

Cheng Lou's answer: stop calling it at all. Measure text in pure TypeScript. Skip the DOM. Skip CSS. Skip reflow entirely. Zero layout passes. The performance improvement, per his demo, is categorical. 0.05ms versus 30ms. Zero reflows versus five hundred.

The person who understands the browser rendering pipeline better than almost anyone alive just built the tool that makes part of it unnecessary. That tells you where application-grade UI is heading.
Cheng Lou@_chenglou

My dear front-end developers (and anyone who’s interested in the future of interfaces): I have crawled through depths of hell to bring you, for the foreseeable years, one of the more important foundational pieces of UI engineering (if not in implementation then certainly at least in concept): Fast, accurate and comprehensive userland text measurement algorithm in pure TypeScript, usable for laying out entire web pages without CSS, bypassing DOM measurements and reflow

54 replies · 348 reposts · 3.7K likes · 944.1K views
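A conceptual sketch of the core idea in the thread above: if you have a font's per-glyph advance widths, line width is a pure sum, so layout questions like word wrap never touch the DOM and never trigger reflow. Cheng Lou's actual implementation is in TypeScript and uses real font metrics; the Python below, with its made-up width table, only illustrates the shape of the technique.

```python
# Conceptual sketch: userland text measurement with a hypothetical
# per-glyph width table (units for a 16px em; NOT real font metrics).
GLYPH_WIDTHS = {"i": 4, "l": 4, "m": 12, "w": 12, " ": 5}
DEFAULT_WIDTH = 8  # fallback advance for glyphs not in the table

def measure_text(text: str, font_size: float = 16.0) -> float:
    """Width in px: a pure sum of advances, no layout engine involved."""
    units = sum(GLYPH_WIDTHS.get(ch, DEFAULT_WIDTH) for ch in text)
    return units * (font_size / 16.0)

def wrap_lines(text: str, max_width: float) -> list:
    """Greedy word wrap driven only by the userland measurement."""
    lines, current = [], ""
    for word in text.split():
        candidate = word if not current else current + " " + word
        if measure_text(candidate) <= max_width or not current:
            current = candidate
        else:
            lines.append(current)
            current = word
    if current:
        lines.append(current)
    return lines

print(measure_text("il"))            # 8.0
print(wrap_lines("mm mm mm", 30.0))  # ['mm', 'mm', 'mm']
```

Because every call is a pure function over static tables, measurements can run off the main thread or ahead of time, which is where the zero-reflow numbers in the demo come from.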
Shaq retweeted
Dave Kline@dklineii·
You’re thinking about luck all wrong. Successful people did not have their names randomly pulled from a hat. They took steps to increase the probability of good things happening. More connections -> More collisions -> More success. Luck is just another word for hustle.
3 replies · 5 reposts · 24 likes · 1.6K views
Shaq@shareqhusain·
We’re building to be the AI for home life, with moving as the wedge (a super high ARPU opportunity, as the scope is everything from mortgages to broadband, energy, insurance, refurb, furniture, energy efficiency…). Think OpenClaw meets Amazon for services! The challenge now is building reliability, and then consumer trust in full agentic delegation to run home life!
1 reply · 0 reposts · 1 like · 216 views
andrew chen@andrewchen·
Consumer AI won’t be won by wrapping the smartest model - instead I'm convinced it'll have the following characteristics:
- AI-native functionality reinvents the UX enough to move the needle
- delivers enough new AI UX with "good enough" models
- ARPU reliably outruns inference cost (as the latter goes down)
- retention ends up stronger than non-AI incumbents
- creates margin to fund distribution channels
Thus, I am particularly bullish about high-ARPU consumer sectors (particularly with whale dynamics) like personal finance, health, productivity, gaming, etc - these categories already have willingness to pay, which means you can afford heavier models, more iterations, and better UX. Particularly variations of these ideas where an agent can dramatically improve the outcomes. Meanwhile, low-ARPU categories get trapped in a race to the bottom, forced into cheaper models, worse experiences, and fragile retention loops. Particularly true for global, low-ARPU categories like content creation tools and communication apps, etc. As I mentioned earlier, it seems like 18-24 months before we can wrap AI functionality with remnant ads and it just works. Can def see a huge mega explosion of AI consumer in 2027 as this flips on, which will be exciting.
95 replies · 31 reposts · 365 likes · 40.5K views
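A back-of-envelope sketch of the unit economics in the thread above: the thesis is that a consumer AI product works when ARPU reliably outruns inference cost, leaving per-user margin to fund distribution. All the dollar figures below are hypothetical, chosen only to contrast a high-ARPU category with a low-ARPU one.

```python
# Illustrative sketch with made-up numbers, not data from the thread.

def monthly_margin(arpu: float, inference_cost: float, other_cogs: float) -> float:
    """Per-user contribution margin in $/month."""
    return arpu - inference_cost - other_cogs

# Hypothetical high-ARPU category (e.g. personal finance with whale dynamics)
high = monthly_margin(arpu=30.0, inference_cost=6.0, other_cogs=4.0)  # 20.0

# Hypothetical low-ARPU category (e.g. a content creation tool): the same
# model bill eats nearly the whole ARPU, leaving almost nothing for growth.
low = monthly_margin(arpu=2.0, inference_cost=1.5, other_cogs=0.4)  # ~0.1

print(high, low)
```

The asymmetry is the point: the high-ARPU product can afford heavier models and paid acquisition out of its margin, while the low-ARPU one is pushed toward cheaper models and a worse experience, which is the race to the bottom the thread describes.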