wayneb

1.1K posts

@wayneb

Currently building apps that solve my problem (and hopefully others too). https://t.co/QQKuXGhBVw https://t.co/X1hyCgMewp more coming soon!

Joined April 2007
348 Following · 197 Followers
wayneb
wayneb@wayneb·
I’m really hoping Elon gives retail Tesla investors in the UK/EU some kind of allocation in a SpaceX IPO
English
1
0
2
51
Matthew Berman
Matthew Berman@MatthewBerman·
Looking for some agent-addicted people to test a new project I've been working on. Comment below and I'll send you access.
English
431
7
347
40.5K
Anvisha
Anvisha@anvisha·
We raised $7.5M to kill AI slop. Introducing Moda: the world's first design agent with taste. RT+ comment “Moda” and we’ll design your brand for FREE.
English
2.6K
1.8K
8K
4.4M
wayneb
wayneb@wayneb·
My OpenClaw memory went from 65% → 93%. Fully local. $0 a month.

OpenClaw ships with both key memory features OFF. Most people never check.

This is what I've found to be the best memory architecture to get as close to 100% fidelity as possible. Five layers, each one catches what the others miss:

> Auto extraction after every conversation
> Three parallel search agents
> Compressed history stays searchable
> Git auto-commits every hour
> Agent must write before it replies

The model was never the bottleneck, the memory architecture was.

Follow and DM me for the config files 👇
wayneb tweet media
English
0
1
0
18
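The five layers in the post above can be sketched as a toy pipeline (my own illustration with hypothetical class and method names; this is not OpenClaw's actual config or API, which the post does not show):

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

class MemoryStore:
    """Toy five-layer memory pipeline (hypothetical names, not OpenClaw's API)."""

    def __init__(self):
        self.facts = []    # extracted facts from past conversations
        self.archive = []  # compressed-but-still-searchable history

    # Layer 1: auto-extract facts after every conversation
    def extract(self, transcript):
        facts = [line for line in transcript.splitlines() if line.strip()]
        self.facts.extend(facts)

    # Layer 3: compress old history but keep it searchable
    def compress(self, transcript):
        self.archive.append(transcript[:200])  # stand-in for real summarization

    # Layer 2: three parallel search agents over different stores
    def search(self, query):
        def agent(store):
            return [item for item in store if query.lower() in item.lower()]
        with ThreadPoolExecutor(max_workers=3) as pool:
            results = pool.map(agent, [self.facts, self.archive, self.facts[-10:]])
        merged = []
        for hits in results:
            merged.extend(x for x in hits if x not in merged)  # dedupe across agents
        return merged

    # Layer 4: git checkpoint (would run hourly via cron/systemd in practice)
    def checkpoint(self, repo_dir):
        subprocess.run(["git", "-C", repo_dir, "add", "-A"], check=True)
        subprocess.run(["git", "-C", repo_dir, "commit", "-m",
                        "memory checkpoint", "--allow-empty"], check=True)

    # Layer 5: the agent must write to memory before it replies
    def reply(self, user_msg, draft):
        self.extract(user_msg)  # write first...
        return draft            # ...then answer

store = MemoryStore()
store.extract("user prefers dark mode\nproject deadline is Friday")
print(store.search("deadline"))  # → ['project deadline is Friday']
```

The ordering in `reply` is the point of layer five: by forcing a write before every answer, nothing said in a turn can be lost even if the session dies immediately after.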
wayneb
wayneb@wayneb·
@aiDotEngineer @OpenAI Opening any more spots? The waitlist seems like it could be infinite, so no chance to attend
English
0
0
0
52
AI Engineer
AI Engineer@aiDotEngineer·
We are excited to welcome @OpenAI to the AIE Expo for the first time as Platinum sponsors for AIE EU! OAI has shipped SO much for AI Engineers this year alone, and this is the best place to catch up:

- Meet the team at the Ask OpenAI lounge (bring your hardest tasks and best questions!)
- Hear keynotes from @steipete and @lopopolo
- Get hands-on with in-depth Codex workshops from @kagigz and @reach_vb!

See you April 8-10 in London! AI Engineers 💙 @OpenAIDevs!
AI Engineer tweet media
English
11
15
106
22.3K
wayneb
wayneb@wayneb·
I've followed Elon's companies for years. Reusable rockets. EVs. Neural interfaces. Humanoid robots. I invested in Tesla many years ago. So yes, I think Elon is our modern-day genius!

But... I think Saturday night's TERAFAB announcement is the moment it all clicked into one picture for me. And honestly, I'm still processing it.

Here's what I think most of the coverage is completely missing: Terafab isn't really about chips. It's about closing the last bottleneck.

SpaceX can get things to orbit cheaper than anyone. Tesla builds autonomous vehicles and robots. xAI trains frontier AI models. But all of them are capped by one thing: chips. And the entire planet's output covers only 2% of what these projects need at scale. So Musk builds his own fab.

But here's where it gets genuinely wild:

> 80% of the output is going to SPACE
> SpaceX filed with the FCC for 1 million orbital AI satellites
> Solar in orbit = 5x stronger than Earth's surface. Cooling in vacuum = free.
> Musk says orbital AI will be CHEAPER than terrestrial AI within 2-3 years.

And the robots? 1-10 billion Optimus units per year. Millions of them will build and operate the factory that makes their own chips.

He said: "We're starting a galactic civilisation."

I know that sounds like science fiction. So did landing rockets on drone ships. Now it's just Tuesday.

Whether this fully materialises or not, I believe 10 years from now we'll look back at this presentation as the moment everything changed.

What's the part that blows your mind most?
wayneb tweet media
English
0
0
0
27
wayneb
wayneb@wayneb·
Holy cow! This GitHub repo blew past 10k+ stars in 7 days. It lays out an entire AI agency, with engineers, designers, growth marketers and product managers all structured as distinct roles. Super easy to follow, even if you are new to this space. Check first comment 👇
wayneb tweet media
English
1
0
0
25
wayneb
wayneb@wayneb·
@levelsio I run into these issues sometimes too. Drives me nuts
English
0
0
0
4
@levelsio
@levelsio@levelsio·
Claude Code with Opus 4.6 was so dumb today I finally had to write my own code again A sad state of affairs 🥹
@levelsio tweet media
English
457
23
1.7K
245.5K
wayneb
wayneb@wayneb·
@zach_yadegari Congrats man! I guess you didn't need that top ranking on the App Store after all ;-)
English
0
0
0
30
Zach Yadegari
Zach Yadegari@zach_yadegari·
Cal AI has been acquired by MyFitnessPal 🚨

Henry and I started Cal AI as 17-year-old high school students with one mission: make calorie tracking easier with AI. In just 18 months, we’ve helped millions of people lose millions of pounds. And we broke $50m in ARR along the way.

We are at an incredible inflection point in history where ANYBODY can build a product that can improve lives and make millions.

As founders, we get a lot of praise. The truth is that this would not have been possible without our incredible 30+ person team. We are so proud of what this team has accomplished, and are thankful to everyone that has been instrumental in Cal AI’s development and success.

Cal AI will continue as a separate app from MyFitnessPal. The combined team will share resources to continue helping people achieve their fitness goals!
English
1.1K
468
11.4K
6.9M
wayneb
wayneb@wayneb·
I built an SEO audit tool this month that doesn't just score your Google ranking: it scores how visible you are to AI search engines too. ChatGPT, Perplexity, and Google AI all pull from the same signals. Most sites score under 4 out of 10. This tool will help you hit 8+/10. DM if you want to join the waitlist
wayneb tweet media
English
0
0
0
41
wayneb
wayneb@wayneb·
Built 6 sites this year. Every single one needed SEO fixes after launch, even with the best prompts, even with AI building the whole thing. And that's before GEO: ranking on ChatGPT, Perplexity, AI Overviews. So I just built the fix. Waitlist open 👇
wayneb tweet media
English
2
0
1
52
wayneb
wayneb@wayneb·
We ran 3 SaaS sites through a GEO audit to see how likely AI search engines are to cite you. Average score: 4/10 😯 This means when someone asks ChatGPT to recommend a tool in their category, most of them don't get mentioned. And not because they're bad products, but because their content isn't structured for AI. GEO is the gap every SEO tool ignores. We built a score for it. More coming 👀 Join waitlist 👇
English
1
0
0
21
wayneb
wayneb@wayneb·
Anthropic said no to the Pentagon. Hard line on military AI. Won't do it. Mission over money. OpenAI said yes. Signed with the Department of War this week. Classified networks. National security contracts. Anthropic just handed OpenAI the most powerful customer on earth: the US government. When you leave a gap, someone fills it. OpenAI filled it.
English
1
0
0
45
wayneb
wayneb@wayneb·
🚨 Breaking: OpenAI closed a $110B round. Amazon $50B, Nvidia $30B, SoftBank $30B. A whopping valuation of $730B
English
0
0
0
14
wayneb
wayneb@wayneb·
@hardmaru Super interesting. Thanks for sharing.
English
0
0
1
74
hardmaru
hardmaru@hardmaru·
Instead of forcing models to hold everything in an active context window, we can use hypernetworks to instantly compile documents and tasks directly into the model's weights. A step towards giving language models durable memory and fast adaptation. Blog: pub.sakana.ai/doc-to-lora/
Sakana AI@SakanaAILabs

We’re excited to introduce Doc-to-LoRA and Text-to-LoRA, two related research projects exploring how to make LLM customization faster and more accessible. pub.sakana.ai/doc-to-lora/ By training a Hypernetwork to generate LoRA adapters on the fly, these methods allow models to instantly internalize new information or adapt to new tasks.

Biological systems naturally rely on two key cognitive abilities: durable long-term memory to store facts, and rapid adaptation to handle new tasks given limited sensory cues. While modern LLMs are highly capable, they still lack this flexibility. Traditionally, adding long-term memory or adapting an LLM to a specific downstream task requires an expensive and time-consuming model update, such as fine-tuning or context distillation, or relies on memory-intensive long prompts.

To bypass these limitations, our work focuses on the concept of cost amortization. We pay the meta-training cost once to train a hypernetwork capable of producing task- or document-specific LoRAs on demand. This turns what used to be a heavy engineering pipeline into a single, inexpensive forward pass. Instead of performing per-task optimization, the hypernetwork meta-learns update rules to instantly modify an LLM given a new task description or a long document.

In our experiments, Text-to-LoRA successfully specializes models to unseen tasks using just a natural language description. Building on this, Doc-to-LoRA is able to internalize factual documents. On a needle-in-a-haystack task, Doc-to-LoRA achieves near-perfect accuracy on instances five times longer than the base model's context window. It can even generalize to transfer visual information from a vision-language model into a text-only LLM, allowing it to classify images purely through internalized weights. Importantly, both methods run with sub-second latency, enabling rapid experimentation while avoiding the overhead of traditional model updates.
This approach is a step towards lowering the technical barriers of model customization, allowing end-users to specialize foundation models via simple text inputs. We have released our code and papers for the community to explore. Doc-to-LoRA Paper: arxiv.org/abs/2602.15902 Code: github.com/SakanaAI/Doc-t… Text-to-LoRA Paper: arxiv.org/abs/2506.06105 Code: github.com/SakanaAI/Text-…

English
66
231
2.5K
302.8K
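The mechanism Sakana describes (a hypernetwork that emits LoRA adapters in one forward pass, leaving the base weights frozen) can be sketched in a few lines of NumPy. This is a toy illustration under my own assumptions: the shapes, the scaling, and the random linear "hypernetwork" are stand-ins, not Sakana's released code:

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, rank = 64, 4                             # base layer width, LoRA rank
W = rng.normal(size=(d_model, d_model)) * 0.02    # frozen base weight matrix

# "Hypernetwork": maps a document embedding to flat LoRA parameters.
# In Doc-to-LoRA this is a trained network; here it's just a random linear map.
emb_dim = 32
H = rng.normal(size=(2 * d_model * rank, emb_dim)) * 0.01

def generate_lora(doc_embedding):
    """One inexpensive forward pass → low-rank adapter matrices A and B."""
    flat = H @ doc_embedding
    A = flat[: d_model * rank].reshape(rank, d_model)
    B = flat[d_model * rank:].reshape(d_model, rank)
    return A, B

def adapted_forward(x, doc_embedding, alpha=1.0):
    A, B = generate_lora(doc_embedding)
    # LoRA update: W'x = Wx + (alpha/rank) * B(Ax); W itself stays frozen
    return W @ x + (alpha / rank) * (B @ (A @ x))

doc = rng.normal(size=emb_dim)   # stand-in for an embedded document
x = rng.normal(size=d_model)     # stand-in for a hidden activation

base_out = W @ x
adapted_out = adapted_forward(x, doc)
print(adapted_out.shape)         # the adapter shifts the layer's output
```

With a trained hypernetwork, "internalizing" a new document just means regenerating A and B from its embedding, which is why the per-document cost is a single forward pass rather than a fine-tuning run.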