Prof Adebayo

4.9K posts


@ProfAdebay

CTO @Academicnight | i literally want ai to replace you.

you can't reject light forever

Joined May 2023
903 Following · 1.2K Followers
Naz Bent
Naz Bent@Bent302·
@ProfAdebay I pulled the server out of a scrap bin. The RAM, 3060, and the CPU were purchased Q1 2025. Everything else was this year.
Naz Bent
Naz Bent@Bent302·
I have no clue how this monstrosity even booted.

RTX 3090 FE 24GB
2x RTX 5060 Ti 16GB (32GB)
Quadro RTX 5000 16GB
72GB total VRAM
256GB system RAM
22-core, 44-thread Intel Xeon processor

All-in cost ~$3,000 #opensource #localllama #qwen #ai #nvidia
0xSero
0xSero@0xSero·
GPT-5.3-Codex is still the best coding agent, no doubt about it. GPT-5.4 is better at computer use, but doesn't match the sheer autistic power Codex holds.
Tj Dunham
Tj Dunham@RealTjDunham·
@DSBatten now what if that same device could run AI inference, contributing its compute towards running the biggest models trustlessly? Been building something like that. I believe it's what Satoshi would've wanted for AI models.
Prof Adebayo
Prof Adebayo@ProfAdebay·
Honestly, this would be really cool with local LLMs. In fact, it can be used as a memory layer for local dense LLMs.
Farza 🇵🇰🇺🇸@FarzaTV

This is Farzapedia. I had an LLM take 2,500 entries from my diary, Apple Notes, and some iMessage convos to create a personal Wikipedia for me. It made 400 detailed articles for my friends, my startups, research areas, and even my favorite animes and their impact on me, complete with backlinks.

But this Wiki was not built for me! I built it for my agent! The structure of the wiki files and how it's all backlinked is very easily crawlable by any agent + makes it a truly useful knowledge base. I can spin up Claude Code on the wiki and, starting at index.md (a catalog of all my articles), the agent does a really good job at drilling into the specific pages on my wiki it needs context on when I have a query.

For example, when trying to cook up a new landing page I may ask: "I'm trying to design this landing page for a new idea I have. Please look into the images and films that inspired me recently and give me ideas for new copy and aesthetics". In my diary I kept track of everything from learnings, people, and inspo to interesting links and images. So the agent reads my wiki and pulls up my "Philosophy" articles from notes on a Studio Ghibli documentary, "Competitor" articles with YC companies whose landing pages I screenshotted, and pics of 1970s Beatles merch I saved years ago. And it delivers a great answer.

I built a similar system to this a year ago with RAG but it was ass. A knowledge base that lets an agent find what it needs via a file system it actually understands just works better.

The most magical thing now is as I add new things to my wiki (articles, images of inspo, meeting notes), the system will likely update 2-3 different articles where it feels that context belongs, or just creates a new article. It's like this super genius librarian for your brain that's always filing stuff for you perfectly and also lets you easily query the knowledge for tasks useful to you (ex. design, product, writing, etc) and it never gets tired.
I might spend next week productizing this, if that's of interest to you DM me + tell me your usecase!

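Farza's setup boils down to a file-system knowledge base with an index.md entry point that an agent crawls link by link. A minimal sketch of how such a catalog could be generated; the flat folder layout and the first-line "# Title" heading convention are assumptions for illustration, not his actual setup:

```python
from pathlib import Path

def build_index(wiki_dir: str) -> str:
    """Build an index.md catalog linking every article in a wiki folder.

    Assumes each article is a flat .md file whose first line is an
    '# Title' heading (a guessed layout, not Farza's actual one).
    """
    lines = ["# Index", ""]
    for path in sorted(Path(wiki_dir).glob("*.md")):
        if path.name == "index.md":
            continue  # don't index the catalog itself
        text = path.read_text(encoding="utf-8").strip()
        first = text.splitlines()[0] if text else ""
        # Use the heading as the link title, falling back to the filename.
        title = first.lstrip("# ").strip() or path.stem
        lines.append(f"- [{title}]({path.name})")
    return "\n".join(lines) + "\n"
```

An agent pointed at the generated index.md can then follow links article by article instead of relying on embedding retrieval, which is the tradeoff Farza contrasts with his earlier RAG attempt.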
Andrej Karpathy@karpathy

Wow, this tweet went very viral! I wanted to share a possibly slightly improved version of the tweet as an "idea file". The idea of the idea file is that in this era of LLM agents, there is less of a point/need to share the specific code/app; you just share the idea, then the other person's agent customizes & builds it for your specific needs. So here's the idea in a gist format: gist.github.com/karpathy/442a6… You can give this to your agent and it can build you your own LLM wiki and guide you on how to use it, etc. It's intentionally kept a little bit abstract/vague because there are so many directions to take this in. And ofc, people can adjust the idea or contribute their own in the Discussion, which is cool.

Prof Adebayo
Prof Adebayo@ProfAdebay·
@FarzaTV honest question: why didn't you use RAG for it? This method of fetching data seems faster, but I think it might not be effective for large files
Qwen
Qwen@Alibaba_Qwen·
Qwen3.6-Plus ranks #1 on @OpenRouter, and is the first model on OpenRouter to break 1 Trillion tokens processed in a single day!! 🥇🔥 We are thrilled to see Qwen3.6-Plus topping the charts so quickly. This milestone wouldn't be possible without our amazing developers. ❤️ Thank you!!
OpenRouter@OpenRouter

Qwen 3.6 Plus from @Alibaba_Qwen is officially the first model on OpenRouter to break 1 Trillion tokens processed in a single day! At ~1,400,000,000,000 tokens, it’s the strongest full day performance of any new model dropped this year. Congrats to the Qwen team!

Prof Adebayo
Prof Adebayo@ProfAdebay·
That’s the way forward; hopefully it gives their users more compute headroom to breathe. Anthropic is essentially prioritizing how its resources are allocated, even if that means turning down certain user demands and directing them to other models. It’s a calculated move: they understand their positioning in the market and are managing access in a way that maintains performance and exclusivity. One thing is certain: a large portion of the users they’re redirecting will likely keep their subscriptions, even if they can’t fully use them with OpenClaw.
Yuchen Jin
Yuchen Jin@Yuchenj_UW·
Claude is cutting off apps like OpenClaw from using Claude subscriptions. I read somewhere that a $200/month Claude plan can use up to $5,000 in compute, so it’s heavily subsidized. Given Claude’s uptime issues, this might be the right move under current Anthropic GPU constraints. Codex is the more generous one for 3rd-party apps (OpenAI has more GPUs). It’ll be interesting to see how this strategy difference plays out.
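Taking the secondhand figures in Yuchen's tweet at face value (neither number is verified), the implied subsidy is simple arithmetic:

```python
# Figures quoted secondhand in the tweet above; not verified.
plan_price_usd = 200       # monthly price of the Claude plan
compute_value_usd = 5_000  # compute reportedly consumable per month

subsidy_multiple = compute_value_usd / plan_price_usd
print(f"Implied subsidy: {subsidy_multiple:.0f}x the subscription price")  # 25x
```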
Kamell
Kamell@kamellperry_·
@TheAhmadOsman I use most of these and claude code still handles long running agentic tasks better than anything else I've tried. codex is solid but it's a different workflow entirely
Ahmad
Ahmad@TheAhmadOsman·
friends don’t let friends use Claude Code in 2026 btw. alternatives? Codex, Droid, Kimi CLI, OpenCode, among others
am.will
am.will@LLMJunky·
@ProfAdebay that's my one complaint about cursor: you get locked out for weeks if you hit your limits fast.
am.will
am.will@LLMJunky·
I did it. Found a way to greatly reduce usage rates in Claude.

step 1: navigate to your .claude folder and open settings.json. make the following changes:

{
  "model": "claude-sonnet-4-5",
  "compactThreshold": 200000,
  "subagentModel": "claude-haiku-4-5-20251001"
}

save and close the file

step 2: back in your terminal, run claude to open a new session
- once you're inside, type `/effort medium`
- press enter to save

step 3: now let's make sure we're not using unnecessary additional tokens in outputs.
- type /config
- scroll down to "verbose output"
- press space bar and set to "false"

step 4: good. now for the most important step
- press ctrl+c once to pause the session
- you'll see a prompt asking if you want to exit, press ctrl+c again to confirm
- keep your terminal open

step 5: type the following: `npm i -g @openai/codex && codex`
Lydia Hallie ✨@lydiahallie

Thank you to everyone who spent time sending us feedback and reports. We've investigated and we're sorry this has been a bad experience. Here's what we found:

Prof Adebayo
Prof Adebayo@ProfAdebay·
@LLMJunky Yeah… I can’t risk a $200 plan and then hit the weekly limit within 24 hours (and get locked out for 7 days). I’d rather keep subscribing with a new account once I hit the weekly limit. on my 4th acc today... Codex is more budget friendly for heavy usage than Claude
Prof Adebayo
Prof Adebayo@ProfAdebay·
@LLMJunky I'm currently on my 4th Codex subscription (account). All 3 previous accs hit the weekly limit; I just subscribed to the 4th today.
am.will
am.will@LLMJunky·
all jokes aside, I've been burning tokens like crazy on codex too. 2x ruined me 😭
Chujie Zheng
Chujie Zheng@ChujieZheng·
We are planning to open-source the Qwen3.6 models (particularly medium-sized versions) to facilitate local deployment and customization for developers. Please vote for the model size you are **most** anticipating—the community’s voice is vital to us!