DB @dbull888
1.9K posts

Retail investor
NYC · Joined June 2025
267 Following · 253 Followers
bubble boi @bubbleboi
God please help me god please I promise if the S&P goes down on Monday I will sell and go back to the Bronx and teach kids electrical engineering please I’ll be good please forgive me.
10 replies · 1 repost · 171 likes · 8.1K views
DB retweeted
big strong guy, he’s a big strong guy
smoking some pork belly in nyc. having friends and family over. the alcohol is flowing. there's much pain in the world, but not in this backyard.
[image]
8 replies · 1 repost · 56 likes · 2.7K views
DB retweeted
Tokyo Deep Value @TokyoDeepValue
Japan has a 112-year-old gas monopoly that heats an entire prefecture. Zero competitors. No debt. ¥5B cash on the balance sheet. Trading at 42 cents on the dollar of book value. 1.9× EBITDA. The reason it's cheap: nobody on Wall Street has ever heard of it. $9537.T (bio for more info)
[image]
44 replies · 52 reposts · 1.2K likes · 148.2K views
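For anyone who wants to sanity-check the two multiples in that tweet, here is a rough sketch in Python. Only the 0.42x price/book, the ¥5B cash, the zero debt, and the 1.9x EV/EBITDA come from the tweet; the book value input is an illustrative assumption, not a reported figure for $9537.T.

```python
# Back-of-envelope reconstruction of the tweet's multiples.
# book_value is an assumed input; everything else follows from the tweet.
book_value = 30.0                        # ¥B shareholders' equity (assumed)
market_cap = 0.42 * book_value           # "42 cents on the dollar of book value"
cash, debt = 5.0, 0.0                    # ¥5B cash, no debt

ev = market_cap - cash + debt            # enterprise value
ebitda = ev / 1.9                        # implied by the 1.9x EV/EBITDA claim

print(f"price/book = {market_cap / book_value:.2f}x")  # 0.42x
print(f"EV         = ¥{ev:.1f}B")                      # ¥7.6B
print(f"EV/EBITDA  = {ev / ebitda:.1f}x")              # 1.9x
```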
Unemployed Capital Allocator
This is frightening as fuck but for a completely different reason.
Tech Layoff Tracker @TechLayoffLover

Had drinks with 30 CTOs last night at an off-the-record gathering in Palo Alto.

Every single one showed me the same internal PowerPoint slide: "2026 AI Headcount Targets: Path to 70% Cost Reduction."

The numbers will make you physically sick.

Fintech CTO planning to cut a 280-person engineering org down to 43 "AI orchestrators" by September. Same product roadmap. Same delivery expectations.

Healthcare CTO already eliminated his entire manual QA department. 67 people. Replaced with 3 senior engineers running autonomous testing agents that ship code directly to production.

SaaS CTO walked me through his "human depreciation timeline": 340 engineers today, 89 planned for 2027. Customer support going from 120 humans to 12 "escalation specialists" managing AI conversations.

The most chilling part: they're all using the exact same consulting deck from McKinsey called "The 30% Organization."

One CTO literally said "hiring humans for code is like hiring horses for transportation."

Another showed me Slack screenshots where his L7s are asking if they should train their replacements.

The consensus was unanimous: if you can't manage 10 AI agents by Christmas, you're not making it to New Year's.

Every single one of them is planning to announce these cuts as "AI transformation success stories" while their stock options vest at record highs built on the backs of workers they're about to execute.

The future of engineering is 3 humans with 50 AI agents in a WeWork somewhere while 500 families lose their homes.

28 replies · 1 repost · 115 likes · 73.6K views
DB @dbull888
@churrascooooo Mine is so fire. All the premium selections and they don’t mark up that much. Blessed.
0 replies · 0 reposts · 2 likes · 96 views
CHURRASCO SAUDADE @churrascooooo
my corner store has the worst selection of items in the city from top to bottom, it's truly appalling, but i gotta roll with them because they're the home team, it is what it is
4 replies · 0 reposts · 59 likes · 2.1K views
DB @dbull888
SahBabii has officially entered the music rotation
0 replies · 0 reposts · 1 like · 50 views
DB retweeted
Andrej Karpathy @karpathy
LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web ui), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
2.1K replies · 5K reposts · 43.9K likes · 12.6M views
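The ingest step above is concrete enough to sketch. Below is a minimal version of the incremental raw/ → wiki compile loop in Python. The raw/ and wiki/ layout, the index file, and the [[backlink]] convention come from the tweet; call_llm() is a hypothetical stand-in for whatever model client you use, and the file handling is an assumption, not Karpathy's actual script.

```python
# Minimal sketch of the incremental raw/ -> wiki compile loop described above.
# call_llm() is a hypothetical placeholder, not a real client; the directory
# layout (raw/, wiki/, wiki/index.md) follows the tweet.
from pathlib import Path

RAW = Path("raw")
WIKI = Path("wiki")

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; wire up your own provider's client here."""
    raise NotImplementedError

def compile_wiki() -> None:
    WIKI.mkdir(exist_ok=True)
    index_lines = []
    for src in sorted(p for p in RAW.rglob("*") if p.is_file()):
        article = WIKI / f"{src.stem}.md"
        if not article.exists():  # incremental: only compile new sources
            text = src.read_text(errors="ignore")
            article.write_text(call_llm(
                "Summarize this source as a markdown wiki article, "
                "with [[backlinks]] to related concepts:\n\n" + text[:20000]
            ))
        index_lines.append(f"- [[{article.stem}]] (from {src.name})")
    # keep a short index so the agent can orient itself without fancy RAG
    (WIKI / "index.md").write_text("\n".join(index_lines) + "\n")

if __name__ == "__main__":
    compile_wiki()
```

A real version would also handle the linting and Q&A passes the tweet describes; the same pattern applies, with the LLM reading index.md first and then pulling individual articles into context.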
taobanker @taobanker
HIGHLY-PAID, HIGHLY-REGULATED BANKERS MOMENTS AFTER YOU FILE YOUR "CONFIDENTIAL" IPO
[image]
3 replies · 5 reposts · 363 likes · 14K views
DB @dbull888
@volcrushed It's a tourist attraction now. Really pisses me off. All luxury I see in NYC I immediately assume is fake
0 replies · 0 reposts · 1 like · 40 views
vol @volcrushed
Especially in the past two years, the stigma around buying/wearing fake designer goods has all but gone away. A decade ago Canal St. was never this crowded, and people would press you if you popped out with anything high-end about where you got it, price, etc...
3 replies · 0 reposts · 9 likes · 821 views
DB retweeted
Sisyphus @0xSisyphus
[image]
32 replies · 84 reposts · 731 likes · 38.3K views
DB @dbull888
@volcrushed Cook king. This is why I’m nice to everyone and do favors whenever I can
0 replies · 0 reposts · 2 likes · 42 views
vol @volcrushed
The same people that wouldn't push my resume when the market was hot are asking me for internal referrals. God is good and hell naw
1 reply · 0 reposts · 9 likes · 551 views
DB @dbull888
Listening to Trump's speech right now. Lol we're gonna open down 3%. This war isn't ending anytime soon
0 replies · 1 repost · 3 likes · 117 views
DB @dbull888
God I love Buffett so much. Only guy I’ve ever idolized besides Druck. I’m going to cry when he dies.
Tom Elliott @tomselliott

Warren Buffett vs. the @federalreserve: "I wish they had a 0% inflation target … once you start saying you're going to tolerate 2%, that compounds pretty dramatically over time. And you're saying to people, if you're getting less than 2% on your money, you're going backwards."

0 replies · 0 reposts · 3 likes · 137 views
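For what Buffett's "compounds pretty dramatically" means in numbers: at a steady 2% target, inflation quietly erodes nearly half of a dollar's purchasing power over a few decades. A quick sketch in Python; the horizons are illustrative, not from the quote.

```python
# Purchasing power of $1 after n years of steady 2% inflation.
for years in (10, 20, 30):
    remaining = 1.02 ** -years  # deflate $1 by 2% compounded annually
    print(f"after {years} years: $1.00 buys what ${remaining:.2f} buys today")
# -> $0.82 after 10 years, $0.67 after 20, $0.55 after 30
```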