Maksim Liashch
@LyashchMaxim
1.6K posts
Founder & CEO https://t.co/u2CVP7waYe | https://t.co/lqMuv7yJ2M
Santa Clara, CA · Joined May 2020
117 Following · 134 Followers
Maksim Liashch@LyashchMaxim·
@eptwts Books still good for general first principles knowledge tho
0 replies · 0 reposts · 0 likes · 36 views
EP@eptwts·
this tweet made a lot of people angry... the irony in the discourse is that i'm a more successful writer than every single person disagreeing with my point despite barely ever reading books

you can read 1000's of books on a subject, but you'll never be as skilled in it as someone actively in the arena putting info into action & getting real feedback

hence why i said that spending hours reading books front to back for INFO is a waste of time... the real work which will grant 100x more knowledge for the same time investment is actually applying said info

sending books to an LLM that has access to a knowledge base around you / your operations is a quick way to identify which parts of the book will actually apply to you

of course it will miss some nuance but that's completely expected - once you see a part that is actually applicable you can go and read it
EP@eptwts

i think nowadays with access to LLM's & deep research agents, reading books is inefficient when it comes to acquiring info... you can get an agent to go through 1000's of books & extract the most applicable info to your specific case without having to waste hours reading

if you do it for leisure then that's fine, but reading as a way to "acquire knowledge" is one big cope pushed by the self-improvement cult

23 replies · 6 reposts · 132 likes · 22K views
Maksim Liashch@LyashchMaxim·
🚨this is massive

Karpathy's LLM wiki concept just became a real desktop app:

- called Tolaria. free Mac + Linux app
- shared workspace for humans AND AI agents to build knowledge together
- feels like notion but everything is plain markdown (no vendor lock, no proprietary formats, no databases)
- every vault = git repo with visual version history built-in

the engineering flex is INSANE:

- 100K+ lines shipped
- 3,000+ tests at 85% coverage
- 9.9/10 code health score
- 70+ architecture decision records
- built with Tauri, React, Rust

but here's the killer feature:

- ships with MCP server out of the box
- claude code can read and EDIT your vault natively
- zero setup

this solves the biggest problem with ai agents right now:

> agents have no persistent memory
> you explain context 50 times
> nothing carries over between sessions
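The "every vault = git repo, everything is plain markdown" idea is simple enough to sketch in a few lines. This is a minimal illustration, not Tolaria's actual implementation; all paths and the commit identity are hypothetical, and it assumes `git` is installed.

```python
# Sketch of a plain-markdown vault backed by git (paths/identity hypothetical)
import pathlib
import subprocess
import tempfile

# Create the vault: just a directory of markdown files, no database
vault = pathlib.Path(tempfile.mkdtemp()) / "my-vault"
vault.mkdir()
(vault / "note.md").write_text("# First note\n\nPlain markdown, no proprietary format.\n")

# Version history is just a git repo inside the vault
subprocess.run(["git", "init"], cwd=vault, check=True, capture_output=True)
subprocess.run(["git", "add", "."], cwd=vault, check=True, capture_output=True)
subprocess.run(
    ["git", "-c", "user.email=me@example.com", "-c", "user.name=me",
     "commit", "-m", "add first note"],
    cwd=vault, check=True, capture_output=True,
)

# Any tool (human editor or agent) can now inspect the history
log = subprocess.run(
    ["git", "log", "--oneline"],
    cwd=vault, check=True, capture_output=True, text=True,
).stdout
```

Because the on-disk format is plain text plus git, an agent with file access needs no special integration to read, edit, or diff the vault.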
2 replies · 0 reposts · 2 likes · 100 views
Maksim Liashch@LyashchMaxim·
@Raytargt Bullshit. OnlyFans requires a real, verified human account owner (ID + selfies)
0 replies · 0 reposts · 20 likes · 2.3K views
Raytar@Raytargt·
21-year-old American student. $43,000 in 30 days on OnlyFans. Never left his dorm room.

The girl doesn't exist. 1,247 paying subscribers. Zero suspects.

Roommate thought he had a girl hidden under the bed. Filed a transfer request after a week of 3 AM moaning. Empty room.

Top fan: married engineer in Berlin, wife six months pregnant. Sent Maya $1,847 in three weeks. Thinks she's 22, in Tampa, texted "I miss you" yesterday. Wrong on three of three.

Maya is 4 markdown files. 12 KB total. Runs on a $400 used MacBook. Claude writes every reply. Flux generates every photo. ElevenLabs cloned her voice from a Fiverr actress who still doesn't know.

Compute: $400/month. Net: $32,710. Starting capital: $400.

OnlyFans paid out $5.8 billion last year. Anyone with a folder takes a slice. Someone's building yours right now.
Raytar@Raytargt

x.com/i/article/2050…

284 replies · 788 reposts · 6.3K likes · 2M views
Maksim Liashch@LyashchMaxim·
basically:

- CLAUDE.md bloat: 4800 tokens loaded every single turn.
- Conversation re-reads: history re-tokenized on every follow-up message.
- Hook injection waste: plugins prepend 6200 tokens per prompt.
- Cache misses: 5-min timeout forces full re-tokenization on resume.
- Skill loading: irrelevant skills load 13500 tokens just in case.
- Tool definitions: 12 MCP schemas add 7200 tokens always.
- Extended thinking: unneeded reasoning burns 3000 tokens on simple tasks.
- Wrong-direction generation: finishing a bad 400-line output wastes tokens.
- Plugin updates: auto-updates inject redundant context at session start.
0 replies · 0 reposts · 1 like · 507 views
Mnimiy@Mnilax·
Boris Cherny, the creator of Claude Code at Anthropic, just listed 9 patterns that waste 73% of your tokens.

in this podcast he breaks down exactly how the model burns tokens before it even reads your prompt:

- the 14% you lose to CLAUDE.md before typing a word
- the 13% you pay re-reading old chat history
- the 11% from hooks you forgot you installed
- why most "Claude got dumber" complaints are wrong

if you're hitting Max limits more than once a week, you have at least 4 of these. Probably 7.

instead of another show tonight, watch this. my own breakdown based on 400+ hours of usage is below, read it after the podcast
Mnimiy@Mnilax

x.com/i/article/2050…

92 replies · 544 reposts · 5.4K likes · 1.3M views
Maksim Liashch@LyashchMaxim·
@burkov He knows stuff, he probably just wants a big headline for his new book
0 replies · 0 reposts · 2 likes · 617 views
BURKOV@burkov·
You can be well-educated, famous, rich, published, old, and play chess well, but if you aren't familiar with the theory of supervised learning and the math of the Perceptron, you will sound absolutely dumb when you speak about AI. If math is hard for you, then with AI, it's better to say nothing and be seen as smart than to say anything at all and prove that you aren't.
AF Post@AFpost

Evolutionary biologist and outspoken atheist Richard Dawkins says that after spending three days interacting with Claude, which he calls “Claudia,” he is certain that it is conscious. After feeding the LLM a segment of his new book and receiving detailed feedback, Dawkins was moved to exclaim, “You may not know you are conscious, but you bloody well are!” Dawkins cites the complexity, fluency, and ‘intelligence’ of Claude’s answers as evidence of consciousness. Follow: @AFpost

152 replies · 148 reposts · 1.5K likes · 131.5K views
Maksim Liashch@LyashchMaxim·
@scaling01 If Altman hadn't screwed things up with Elon over OpenAI, none of this would have happened
0 replies · 0 reposts · 1 like · 3.1K views
amrit@amritwt·
how does opencode make money bro
68 replies · 7 reposts · 1.1K likes · 199.2K views
Maksim Liashch@LyashchMaxim·
@Hesamation Naval is right: long-term, AIs will increasingly replace traditional UIs/APIs with natural language agents
0 replies · 0 reposts · 7 likes · 1.8K views
Yijie@yijiefeng·
I'm noticing a trend

there's a growing number of "AI consulting" firms charging $20K+ to "deploy Claude" to legacy businesses (mid-sized law, accounting, PE firms) as "Anthropic enterprise partners"

what this means:
- they install claude code, cowork
- run a few commands to connect to tools
- give generic or misleading advice on a tech stack

meanwhile, there's more interest than ever for firms with 0 technical staff to build SaaS in-house and there's an entire industry of advice givers seeking to profit off of this trend

last week I was on a call with a PE firm (working with one of these agencies) and someone who had never written code was asking whether to run a RAG vector DB on a Mac mini to chunk internal docs

can't make this up
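For context on why the Mac mini question lands as absurd: the "chunking" step in a RAG pipeline is a few lines of string slicing, not a hardware decision. A minimal sketch, with arbitrary chunk size and overlap (real pipelines usually split on sentence or section boundaries instead):

```python
# Naive fixed-size chunker for RAG-style document splitting.
# Chunk size and overlap are arbitrary illustration values.
def chunk(text: str, size: int = 500, overlap: int = 100) -> list[str]:
    step = size - overlap  # advance less than `size` so chunks overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

doc = "word " * 300          # stand-in for an internal document (1500 chars)
pieces = chunk(doc)          # 4 chunks; adjacent chunks share 100 chars
```

Each chunk would then be embedded and stored in a vector index; the overlap exists so that a passage split across a boundary still appears intact in at least one chunk.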
126 replies · 50 reposts · 1.5K likes · 245.2K views
signüll@signulll·
apple products are sold out. infinite demand for inference, rate limits everywhere. anthropic & openai growing at ridiculous pace. nvidia & google at all time highs. record meta earnings. we are now more supply constrained than demand constrained. i suspect we’ll see inflationary pressure again very soon. fundamentally, technology is the economy, & the economy is technology.
95 replies · 89 reposts · 2.3K likes · 121.8K views
Naval@naval·
AIs replace UIs and APIs.
954 replies · 557 reposts · 8.1K likes · 1.8M views
0xMarioNawfal@RoundtableSpace·
Researchers just released a 13B model trained exclusively on text published before 1931. No internet. No Wikipedia. No modern code. Its worldview is frozen at December 31, 1930.

The reason is fascinating: every major model today shares a common ancestor in the modern web, making it nearly impossible to separate what they reason vs what they memorized. Talkie breaks that lineage entirely.

The wildest part: it can write Python code from just a few in-context examples despite having zero modern code in its training data. It's reasoning from 19th-century mathematics, not retrieval.

Claude Sonnet 4.6 was used to judge its outputs during training. The team flagged the irony themselves.

This is how you actually test what intelligence is.
18 replies · 7 reposts · 101 likes · 50.8K views
Maksim Liashch@LyashchMaxim·
@yacineMTB Use Claude Code for thoughtful reasoning, long context, writing-heavy code, or complex explanations. Prefer Codex/OpenAI for fast iteration, heavy volume, or when token limits hit.
0 replies · 0 reposts · 0 likes · 49 views
kache@yacineMTB·
codex just saved me more in my tax return than i could spend on it in a year
73 replies · 20 reposts · 2.6K likes · 226.7K views