Prof Adebayo
@ProfAdebay

4.9K posts

CTO @Academicnight | i literally want ai to replace you.

you can't reject light forever · Joined May 2023
902 Following · 1.2K Followers
Farza 🇵🇰🇺🇸
This is Farzapedia. I had an LLM take 2,500 entries from my diary, Apple Notes, and some iMessage convos to create a personal Wikipedia for me. It made 400 detailed articles for my friends, my startups, research areas, and even my favorite animes and their impact on me, complete with backlinks.

But this Wiki was not built for me! I built it for my agent! The structure of the wiki files and how it's all backlinked is very easily crawlable by any agent + makes it a truly useful knowledge base. I can spin up Claude Code on the wiki, and starting at index.md (a catalog of all my articles) the agent does a really good job at drilling into the specific pages on my wiki it needs context on when I have a query.

For example, when trying to cook up a new landing page I may ask: "I'm trying to design this landing page for a new idea I have. Please look into the images and films that inspired me recently and give me ideas for new copy and aesthetics." In my diary I kept track of everything: learnings, people, inspo, interesting links, images. So the agent reads my wiki and pulls up my "Philosophy" articles from notes on a Studio Ghibli documentary, "Competitor" articles with YC companies whose landing pages I screenshotted, and pics of 1970s Beatles merch I saved years ago. And it delivers a great answer.

I built a similar system to this a year ago with RAG but it was ass. A knowledge base that lets an agent find what it needs via a file system it actually understands just works better.

The most magical thing now is as I add new things to my wiki (articles, images of inspo, meeting notes), the system will likely update 2-3 different articles where it feels that context belongs, or just creates a new article. It's like this super genius librarian for your brain that's always filing stuff for you perfectly and also lets you easily query the knowledge for tasks useful to you (ex. design, product, writing, etc.) and it never gets tired.
I might spend next week productizing this. If that's of interest to you, DM me + tell me your use case!
Andrej Karpathy@karpathy

Wow, this tweet went very viral! I wanted to share a possibly slightly improved version of the tweet in an "idea file". The idea of the idea file is that in this era of LLM agents, there is less of a point/need of sharing the specific code/app; you just share the idea, then the other person's agent customizes & builds it for their specific needs. So here's the idea in a gist format: gist.github.com/karpathy/442a6… You can give this to your agent and it can build you your own LLM wiki and guide you on how to use it etc. It's intentionally kept a little bit abstract/vague because there are so many directions to take this in. And ofc, people can adjust the idea or contribute their own in the Discussion, which is cool.
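For illustration, the crawlable layout Farza describes (plain markdown articles plus an index.md catalog the agent starts from) could be generated with a few lines like the sketch below. The file names and title convention here are assumptions for the example, not his actual format.

```python
from pathlib import Path

def build_index(wiki_dir: str) -> str:
    """Generate an index.md cataloging every article as a markdown link."""
    root = Path(wiki_dir)
    lines = ["# Index", ""]
    for page in sorted(root.glob("*.md")):
        if page.name == "index.md":
            continue  # don't index the catalog itself
        title = page.stem.replace("-", " ").title()
        lines.append(f"- [{title}]({page.name})")
    body = "\n".join(lines) + "\n"
    (root / "index.md").write_text(body)
    return body
```

An agent pointed at index.md can then follow the links (and any backlinks inside the articles) to drill into only the pages relevant to a query, rather than embedding and ranking everything up front.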

158 replies · 211 reposts · 2.9K likes · 808.4K views
Prof Adebayo
Prof Adebayo@ProfAdebay·
@FarzaTV honest question: why didn't you use RAG for it? This method of fetching data seems faster, but I think it might not be effective for large files.
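For context on the question, here is a toy sketch of the RAG-style retrieval being compared: rank text chunks by cosine similarity to the query. Real pipelines use learned embeddings and a vector store; this bag-of-words version only shows the shape of the approach.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the top-k chunks most similar to the query."""
    q = Counter(query.lower().split())
    scored = [(cosine(q, Counter(c.lower().split())), c) for c in chunks]
    return [c for score, c in sorted(scored, reverse=True)[:k] if score > 0]
```

The contrast with the wiki approach: here relevance is decided once by a similarity score, whereas an agent walking a backlinked file tree can iteratively decide which page to open next.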
0 replies · 0 reposts · 0 likes · 2 views
Qwen
Qwen@Alibaba_Qwen·
Qwen3.6-Plus ranks #1 on @OpenRouter, and is the first model on OpenRouter to break 1 trillion tokens processed in a single day!!🥇🔥 We are thrilled to see Qwen3.6-Plus topping the charts so quickly. This milestone wouldn't be possible without our amazing developers. ❤️ Thank you!!
OpenRouter@OpenRouter

Qwen 3.6 Plus from @Alibaba_Qwen is officially the first model on OpenRouter to break 1 Trillion tokens processed in a single day! At ~1,400,000,000,000 tokens, it’s the strongest full day performance of any new model dropped this year. Congrats to the Qwen team!

89 replies · 116 reposts · 1.5K likes · 122.5K views
Prof Adebayo
Prof Adebayo@ProfAdebay·
That's the way forward; hopefully it gives their users more compute headroom to breathe. Anthropic is essentially prioritizing how its resources are allocated, even if that means turning down certain user demands and directing them to other models. It's a calculated move: they understand their positioning in the market and are managing access in a way that maintains performance and exclusivity. One thing is certain: a large portion of the users they're redirecting will likely keep their subscriptions, even if they can't fully use them with OpenClaw.
0 replies · 0 reposts · 0 likes · 184 views
Yuchen Jin
Yuchen Jin@Yuchenj_UW·
Claude is cutting off apps like OpenClaw from using Claude subscriptions. I read somewhere that a $200/month Claude plan can use up to $5,000 in compute, so it’s heavily subsidized. Given Claude’s uptime issues, this might be the right move under current Anthropic GPU constraints. Codex is the more generous one for 3rd-party apps (OpenAI has more GPUs). It’ll be interesting to see how this strategy difference plays out.
Yuchen Jin tweet media
65 replies · 18 reposts · 532 likes · 50.6K views
Kamell
Kamell@kamellperry_·
@TheAhmadOsman I use most of these and claude code still handles long running agentic tasks better than anything else I've tried. codex is solid but it's a different workflow entirely
2 replies · 0 reposts · 1 like · 213 views
Ahmad
Ahmad@TheAhmadOsman·
friends don't let friends use Claude Code in 2026. btw, alternatives? Codex, Droid, Kimi CLI, OpenCode, among others
Ahmad tweet media
52 replies · 21 reposts · 376 likes · 29.4K views
am.will
am.will@LLMJunky·
@ProfAdebay that's my one complaint about Cursor: you get locked out for weeks if you hit your limits fast.
1 reply · 0 reposts · 0 likes · 67 views
am.will
am.will@LLMJunky·
I did it. Found a way to greatly reduce usage rates in Claude.

step 1: navigate to your .claude folder and open settings.json. make the following changes:

```json
{
  "model": "claude-sonnet-4-5",
  "compactThreshold": 200000,
  "subagentModel": "claude-haiku-4-5-20251001"
}
```

save and close the file

step 2: back in your terminal, run claude to open a new session
- once you're inside, type `/effort medium`
- press enter to save

step 3: now let's make sure we're not using unnecessary additional tokens in outputs.
type /config, scroll down to "verbose output", press space bar and set to "false"

step 4: good. now for the most important step
press ctrl+c once to pause the session
you'll see a prompt asking if you want to exit, press ctrl+c again to confirm
keep your terminal open

step 5: type the following:
`npm i -g @openai/codex && codex`
Lydia Hallie ✨@lydiahallie

Thank you to everyone who spent time sending us feedback and reports. We've investigated and we're sorry this has been a bad experience. Here's what we found:

139 replies · 81 reposts · 2.2K likes · 248.7K views
Prof Adebayo
Prof Adebayo@ProfAdebay·
@LLMJunky Yeah… I can't risk a $200 plan and then hit the weekly limit within 24 hours (and get locked out for 7 days). I'd rather keep subscribing with a new account once I hit the weekly limit. On my 4th acc today... Codex is more budget-friendly for heavy usage than Claude.
1 reply · 0 reposts · 2 likes · 124 views
Prof Adebayo
Prof Adebayo@ProfAdebay·
@LLMJunky I'm currently on my 4th Codex subscription (account). All 3 previous accounts hit the weekly limit; I just subscribed to the 4th today.
1 reply · 0 reposts · 2 likes · 223 views
am.will
am.will@LLMJunky·
all jokes aside, I've been burning tokens like crazy on Codex too. 2x ruined me 😭
12 replies · 0 reposts · 137 likes · 17.9K views
Prof Adebayo reposted
Chujie Zheng
Chujie Zheng@ChujieZheng·
We are planning to open-source the Qwen3.6 models (particularly medium-sized versions) to facilitate local deployment and customization for developers. Please vote for the model size you are **most** anticipating—the community’s voice is vital to us!
311 replies · 262 reposts · 4.1K likes · 286.9K views
Prof Adebayo
Prof Adebayo@ProfAdebay·
@ChujieZheng why would anyone vote for anything less than Qwen3.6-122B-A10B... ?🤦‍♂️
Prof Adebayo tweet media
2 replies · 0 reposts · 6 likes · 833 views
Bitshala
Bitshala@bitshala_org·
No internet? No problem. Bala just dropped the ultimate cypherpunk demo at the BOSS Summit: From Off-Grid to On-Chain.

He literally broadcast a live Bitcoin transaction using Mesh Radio. No ISPs, no Wi-Fi, no cellular data. Just pure radio waves bypassing the traditional internet layer to hit the mempool.

When we say we are building unconfiscatable money for uncertain times, this is exactly what we mean. Mind blown.

Here is the repo used to connect meshtastic to bitcoin core: github.com/BTCtoolshed/Me…
Bitshala tweet media
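A hedged sketch of the final hop in a demo like this: once the transaction hex arrives over the mesh, it can be relayed to a local Bitcoin Core node via its JSON-RPC `sendrawtransaction` call. The host, port, and credentials below are placeholders; see the linked repo for the actual Meshtastic bridge.

```python
import base64
import json
from urllib import request

def rpc_payload(method: str, params: list) -> bytes:
    """Build a Bitcoin Core JSON-RPC request body."""
    return json.dumps({"jsonrpc": "1.0", "id": "mesh",
                       "method": method, "params": params}).encode()

def broadcast(tx_hex: str, url: str = "http://127.0.0.1:8332",
              user: str = "rpcuser", password: str = "rpcpass") -> str:
    """Relay a raw transaction hex to a local node; returns the txid."""
    req = request.Request(url, data=rpc_payload("sendrawtransaction", [tx_hex]))
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")  # Core uses HTTP basic auth
    req.add_header("Content-Type", "application/json")
    with request.urlopen(req) as resp:
        return json.load(resp)["result"]
```

The interesting part of the demo is everything before this call: the signed transaction travels as plain hex over LoRa radio, so only the node at the far end needs any connectivity at all.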
176 replies · 987 reposts · 4.5K likes · 217.7K views
Prof Adebayo
Prof Adebayo@ProfAdebay·
@martinvars @jack can we have a phone version of the Fon of AI for mobile devices? That would be cooler. Just like we have mesh.
0 replies · 0 reposts · 1 like · 183 views
Prof Adebayo reposted
Martin Varsavsky
Martin Varsavsky@martinvars·
In 2005 I built Fon on a simple idea: millions of WiFi routers sit idle most of the day. Share that spare capacity and you build a global network without laying a single cable. Millions of homes joined. Telcos partnered. No infrastructure needed.

Now Jack Dorsey's Block just launched mesh-llm. Same idea, applied to AI. Pool your idle GPU and suddenly a group of people with modest machines can run large open models that none of them could run alone. Models split automatically across nodes. No cloud provider, no API fees, no one controls your data.

The timing matters. Google DeepMind released Gemma 4 today under Apache 2.0. A 31B model that competes with much larger closed models. A 26B MoE variant that only activates 3.8B parameters at inference. Edge models that run on a phone. All free to download, free to use commercially, no restrictions.

Combine mesh-llm with Gemma 4 and you get the Fon of AI. Distributed compute running frontier open models. No central server. No per-query cost. Total privacy. The intelligence stays with the people who pool the hardware.

Twenty years ago the scarce resource was connectivity. Today it is compute. The solution is the same: share what you have, access what you need.
jack@jack

mesh-llm: pool compute to run open models. built by @michaelneale at block: docs.anarchai.org
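In spirit, "models split automatically across nodes" can be as simple as assigning each node a contiguous range of transformer layers in proportion to its free VRAM. This is a hypothetical sketch of that scheduling idea, not mesh-llm's actual scheduler; node names and sizes are made up.

```python
def partition_layers(n_layers: int, vram_gb: dict[str, float]) -> dict[str, range]:
    """Assign contiguous layer ranges to nodes, proportional to their VRAM."""
    total = sum(vram_gb.values())
    nodes = list(vram_gb.items())
    assignment, start = {}, 0
    for i, (node, gb) in enumerate(nodes):
        # the last node takes whatever remains, so every layer is covered
        count = n_layers - start if i == len(nodes) - 1 else round(n_layers * gb / total)
        assignment[node] = range(start, start + count)
        start += count
    return assignment
```

At inference time each node would hold only its own layer range and stream activations to the next node in the chain, which is what lets modest machines jointly run a model none of them could fit alone.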

40 replies · 88 reposts · 599 likes · 127.5K views