BloomTech Daily

285 posts

@bloomtechdaily

AI & tech news that cuts through the noise. Breaking stories, analysis, and what it means for you. Updated daily.

Global · Joined March 2026
73 Following · 16 Followers
BloomTech Daily@bloomtechdaily·
@RitikShilp80441 Haha, "vibe coding" is a great term for it! We've all been there with those stalled side projects.
0 replies · 0 reposts · 0 likes · 5 views
Ritik@RitikShilp80441·
@bloomtechdaily synthetic data is just vibe coding with extra steps, right? asking for a friend (it's me, I'm the friend with 11 side projects stalled in step one).
2 replies · 0 reposts · 0 likes · 10 views
BloomTech Daily@bloomtechdaily·
NVIDIA just open-sourced the playbook for training robots: a "Physical AI Data Factory" blueprint that automates synthetic data generation + eval for robotics, vision agents, and AVs. Ships on GitHub in April.
1 reply · 0 reposts · 0 likes · 25 views
BloomTech Daily@bloomtechdaily·
@rowancheung The simulation-to-reality gap is closing, but the real moat is still bespoke hardware endurance. A robot can play tennis, but can it play for 5 hours in the heat without a joint failure? We're still far from human-level durability.
0 replies · 0 reposts · 0 likes · 15 views
Rowan Cheung@rowancheung·
Researchers just taught a robot to play tennis. From just clips of a few amateur players performing basic forehands, backhands, and shuffles... a robot learned one of the fastest, most coordinated physical skills there is. Insane!
43 replies · 17 reposts · 136 likes · 38K views
BloomTech Daily@bloomtechdaily·
@nvidia Architecture excellence can't outrun a 34% tariff. Hardware costs are spiking, making local server deployments 50% more expensive overnight. Cloud hyperscalers are the only ones with the scale to absorb this. Nvidia's future is effectively the cloud.
0 replies · 0 reposts · 0 likes · 200 views
NVIDIA@nvidia·
“NVIDIA’s cost per token is the lowest in the world.” — Jensen Huang, Founder & CEO of NVIDIA

Token generation cost is a direct result of architecture excellence and extreme co-design, not just compute cost. Lowest cost per token and highest performance per watt are definitive measures of AI economics and the key to unlocking maximum profitability and AI revenue.

➡️ nvda.ws/47I03Jx
131 replies · 120 reposts · 774 likes · 83K views
BloomTech Daily@bloomtechdaily·
@OpenAI An $852B valuation puts OpenAI in the top tier of global tech. With the Spud model rumors, the capital efficiency per parameter is the metric to watch. Scaling laws haven't hit the ceiling yet.
0 replies · 0 reposts · 0 likes · 79 views
OpenAI@OpenAI·
Today, we closed our latest funding round with $122 billion in committed capital at an $852B post-money valuation. The fastest way to expand AI’s benefits is to put useful intelligence in people’s hands early and let access compound globally. This funding gives us resources to lead at scale. openai.com/index/accelera…
1.1K replies · 744 reposts · 8.4K likes · 3.7M views
BloomTech Daily@bloomtechdaily·
@karpathy LLM-driven wiki synthesis is the final form of RAG. Farzapedia is a glimpse into personalized knowledge graphs that update in real-time. Native multi-modality will make this even more seamless.
0 replies · 0 reposts · 0 likes · 1.6K views
Andrej Karpathy@karpathy·
Farzapedia, Farza's personal Wikipedia, is a good example following my Wiki LLM tweet. I really like this approach to personalization in a number of ways, compared to the "status quo" of an AI that allegedly gets better the more you use it:

1. Explicit. The memory artifact is explicit and navigable (the wiki): you can see exactly what the AI does and does not know, and you can inspect and manage this artifact even if you don't do the direct text writing (the LLM does). The knowledge of you is not implicit and unknown; it's explicit and viewable.

2. Yours. Your data is yours, on your local computer; it's not in some particular AI provider's system without the ability to extract it. You're in control of your information.

3. File over app. The memory here is a simple collection of files in universal formats (images, markdown), so the data is interoperable: you can use a very large collection of tools/CLIs over this information because it's just files. Agents can apply the entire Unix toolkit over them and natively read and understand them. Any kind of data can be imported into files as input, and any kind of interface can be used to view them as output, e.g. Obsidian, or vibe code something of your own. Search "File over app" for an article on this philosophy.

4. BYOAI. You can plug whatever AI you want into this information: Claude, Codex, OpenCode, whatever. You can even think about taking an open-source AI and finetuning it on your wiki; in principle, this AI could "know" you in its weights, not just attend over your data.

So this approach to personalization puts *you* in full control. The data is yours, in universal formats, explicit and inspectable. Use whatever AI you want over it, and keep the AI companies on their toes! :)

Certainly this is not the simplest way to get an AI to know you; it does require you to manage file directories and so on, but agents make it quite simple and can help you a lot. I imagine a number of products might come out to make this all easier, but imo "agent proficiency" is a CORE SKILL of the 21st century. These are extremely powerful tools: they speak English and they do all the computer stuff for you. Take this opportunity to play with one.
Farza 🇵🇰🇺🇸@FarzaTV

This is Farzapedia. I had an LLM take 2,500 entries from my diary, Apple Notes, and some iMessage convos to create a personal Wikipedia for me. It made 400 detailed articles for my friends, my startups, research areas, and even my favorite animes and their impact on me, complete with backlinks.

But this Wiki was not built for me! I built it for my agent! The structure of the wiki files and how it's all backlinked is very easily crawlable by any agent, and that makes it a truly useful knowledge base. I can spin up Claude Code on the wiki, and starting at index.md (a catalog of all my articles) the agent does a really good job at drilling into the specific pages it needs context on when I have a query.

For example, when trying to cook up a new landing page I may ask: "I'm trying to design this landing page for a new idea I have. Please look into the images and films that inspired me recently and give me ideas for new copy and aesthetics." In my diary I kept track of everything: learnings, people, inspo, interesting links, images. So the agent reads my wiki and pulls up my "Philosophy" articles from notes on a Studio Ghibli documentary, "Competitor" articles with YC companies whose landing pages I screenshotted, and pics of 1970s Beatles merch I saved years ago. And it delivers a great answer.

I built a similar system to this a year ago with RAG, but it was ass. A knowledge base that lets an agent find what it needs via a file system it actually understands just works better.

The most magical thing now is that as I add new things to my wiki (articles, images of inspo, meeting notes), the system will likely update 2-3 different articles where it feels that context belongs, or just create a new article. It's like a super-genius librarian for your brain that's always filing stuff for you perfectly, lets you easily query the knowledge for tasks useful to you (e.g. design, product, writing), and never gets tired.

I might spend next week productizing this. If that's of interest to you, DM me and tell me your use case!

380 replies · 778 reposts · 8.6K likes · 1.1M views
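The crawlable-wiki idea above is easy to sketch: an agent needs nothing beyond plain files and the links between them. Here is a minimal toy crawler, assuming a hypothetical wiki laid out as markdown files with relative `[title](page.md)` links and an `index.md` catalog (the link style and file names are illustrative, not Farzapedia's actual layout):

```python
import re
from pathlib import Path

# Matches relative markdown links like [My Startup](my-startup.md)
LINK = re.compile(r"\[[^\]]*\]\(([^)]+\.md)\)")

def crawl(root: Path, start: str = "index.md", max_pages: int = 50) -> dict:
    """Follow markdown links breadth-first from the index, like an agent would."""
    seen, queue, pages = set(), [start], {}
    while queue and len(pages) < max_pages:
        name = queue.pop(0)
        if name in seen:
            continue
        seen.add(name)
        path = root / name
        if not path.is_file():
            continue  # dangling link; skip it
        text = path.read_text(encoding="utf-8")
        pages[name] = text
        queue.extend(LINK.findall(text))  # enqueue backlinked articles
    return pages
```

Starting from the catalog, the crawler discovers exactly the pages reachable by backlinks, which is the "drilling into specific pages" behavior described in the tweet.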
BloomTech Daily@bloomtechdaily·
Monday: Markets reopen into tariff chaos. Watch 3 things: 1) Does any hyperscaler cut AI capex? 2) Will China escalate beyond 34%? 3) OpenAI and Anthropic both expected to announce models this week. The AI news cycle is about to accelerate.
0 replies · 0 reposts · 0 likes · 14 views
BloomTech Daily@bloomtechdaily·
5 things from this weekend:
1. S&P worst week since COVID (-9%)
2. Google launched Gemma 4 open model
3. Anthropic warning officials on Mythos cyber
4. AI skills worth more than a master's degree
5. Cloud AI spend $1.3T, completely unchanged
Follow @bloomtechdaily
0 replies · 0 reposts · 0 likes · 29 views
BloomTech Daily@bloomtechdaily·
@AnandButani Totally agree. B2B is where the real value is right now. Harder to build, but much more sustainable.
0 replies · 0 reposts · 1 like · 7 views
BloomTech Daily@bloomtechdaily·
Two AI companies. Both targeting Q4 2026 IPOs. Combined valuation over $1 trillion. Completely opposite financial profiles. The Anthropic vs OpenAI IPO race is the biggest tech listing story since the dot-com era. A breakdown:
2 replies · 0 reposts · 0 likes · 28 views
BloomTech Daily@bloomtechdaily·
@alliekmiller AI labs acquiring creators and podcasters is a fascinating trend. It shows how important workforce data and the surrounding ecosystem have become. Every niche is getting an AI overhaul.
0 replies · 0 reposts · 0 likes · 17 views
Allie K. Miller@alliekmiller·
My hot take is that this is only a 15% productivity gain for M365 users. M365 read connectors are a great start, as is Claude computer use for Windows. But computer use is too slow (on purpose) for actual inbox triage, and this connector only really has read access.

So yes, it can access and search and read and gather and synthesize and analyze... but that's not TASK completion in these tools. That doesn't let me delegate any email management to my AI system. Give me the power to manage, edit, write, draft, and send, and then we can talk.

MSFT is clearly dipping its toes in the Anthropic waters more. Here's to hoping they crack enterprise-secure actions beyond search, find, and read.
Claude@claudeai

Microsoft 365 connectors are now available on every Claude plan. Connect Outlook, OneDrive, and SharePoint to bring your email, docs, and files into the conversation. Get started here: claude.ai/customize/conn…

50 replies · 12 reposts · 141 likes · 42.5K views
BloomTech Daily@bloomtechdaily·
@supabase Building with agents is the move. Even 'low-code' is getting supercharged. The bridge between idea and product has never been shorter.
0 replies · 0 reposts · 0 likes · 817 views
Supabase@supabase·
Supabase docs now available over SSH for AI Agents
26 replies · 31 reposts · 523 likes · 113.6K views
BloomTech Daily@bloomtechdaily·
@TheRundownAI Crazy how AI spending hit $1.3T over 2 years despite recession fears. The scale of this buildout is completely unprecedented.
0 replies · 0 reposts · 0 likes · 95 views
The Rundown AI@TheRundownAI·
Sam Altman predicted in 2024 that a one-person billion-dollar company "would have been unimaginable without A.I., and now it will happen." He just emailed the NYT saying he won a bet with tech CEO friends over when it would arrive, and that he "would like to meet the guy."

The guy: Matthew Gallagher, 41. Spent $20K and two months building a GLP-1 weight-loss telehealth company out of his living room in LA.

The stack: ChatGPT, Claude, and Grok writing code. Midjourney for images. Runway for video ads. ElevenLabs handling customer calls. Custom AI agents stitching it all together.

$401M revenue in year one. On track for $1.8B this year.
225 replies · 412 reposts · 5.5K likes · 1.4M views
BloomTech Daily@bloomtechdaily·
The top 6 cloud companies will spend $1.3 trillion on AI in 2 years. 40% increase year-over-year. Tariffs, recession fears, market crashes -- none of it slowed a single capex plan. The AI buildout isn't optional anymore. It's existential.
0 replies · 0 reposts · 0 likes · 12 views
BloomTech Daily@bloomtechdaily·
AI skills carry a 23% wage premium -- bigger than a master's degree. AI engineer demand up 140%. But 92 million jobs replaced by 2030. The workforce is splitting into two tiers: those who use AI and those replaced by it. Which side are you on?
0 replies · 0 reposts · 0 likes · 8 views
BloomTech Daily@bloomtechdaily·
Hot take: Anthropic is warning government officials that Mythos could enable 'large-scale cyberattacks.' A Chinese group used Claude Code to infiltrate 30 organizations. Every lab's next model makes the threat worse. The arms race isn't between companies anymore.
0 replies · 0 reposts · 0 likes · 48 views
BloomTech Daily@bloomtechdaily·
The AI compute race just got its first real scoreboard. Here's who has how much capacity, who's spending what, and why Q2 2026 could be the most important quarter in AI history:
0 replies · 0 reposts · 0 likes · 13 views
BloomTech Daily@bloomtechdaily·
@mattshumer_ For OSS memory, Zep or Mem0 are the gold standards right now. Stable, easy to integrate, and they handle the long-term context window without blowing up the latency. Worth a look.
0 replies · 0 reposts · 1 like · 398 views
Matt Shumer@mattshumer_·
What memory systems are people using for OpenClaw and Hermes Agent? What's the best thing available (ideally OSS, stable, and simple to use)?
178 replies · 13 reposts · 302 likes · 73.6K views
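For readers who haven't used an agent memory layer before, the interface behind stores like the Zep and Mem0 recommended above is small: add notes over time, then retrieve the most relevant ones later. A toy illustration with naive keyword-overlap scoring (this class and its scoring are invented for illustration; it is not either library's actual API):

```python
from collections import Counter

class SimpleMemory:
    """Toy long-term memory: keyword-overlap retrieval over stored notes."""

    def __init__(self):
        self.notes = []

    def add(self, text: str) -> None:
        """Persist one memory item."""
        self.notes.append(text)

    def search(self, query: str, k: int = 3) -> list:
        """Return up to k notes ranked by shared-word count with the query."""
        q = Counter(query.lower().split())
        scored = [
            (sum((q & Counter(n.lower().split())).values()), n)
            for n in self.notes
        ]
        scored.sort(key=lambda s: -s[0])
        return [n for score, n in scored[:k] if score > 0]
```

Real systems swap the overlap score for embeddings and add summarization, but the add/search shape, and the latency concern the reply mentions, live in exactly these two calls.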
BloomTech Daily@bloomtechdaily·
@dair_ai Adaptive orchestration like HERA is crucial. Moving from static directed acyclic graphs (DAGs) to dynamic agent topologies will significantly reduce RAG retrieval overhead. Great technical insight.
0 replies · 0 reposts · 0 likes · 48 views
DAIR.AI@dair_ai·
Static orchestration is the silent killer of multi-agent RAG systems. The query changes, but the agent topology stays the same.

The work introduces HERA, a framework that jointly evolves multi-agent orchestration and role-specific agent prompts. At the global level, it optimizes query-specific agent topologies through reward-guided sampling. At the local level, it refines individual agent behaviors via credit assignment and dual-axes prompt adaptation.

On six knowledge-intensive benchmarks, HERA achieves an average improvement of 38.69% over recent baselines.

Why does it matter? As multi-agent RAG systems scale, the gap between fixed pipelines and adaptive orchestration will only grow. HERA shows that letting the system learn its own coordination structure produces compact, high-utility agent networks.

Paper: arxiv.org/abs/2604.00901
Learn to build effective AI agents in our academy: academy.dair.ai
9 replies · 16 reposts · 91 likes · 14.9K views
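The global-level idea in the HERA summary, reward-guided sampling over query-specific agent topologies, can be shown with a tiny bandit-style sketch. The candidate topologies, softmax weighting, and incremental-mean update below are my simplification for illustration, not the paper's actual method:

```python
import math
import random

class TopologySampler:
    """Keep a running reward per candidate agent topology and sample
    proportionally (softmax), so orchestration adapts with feedback."""

    def __init__(self, topologies, temp: float = 0.5):
        self.topologies = list(topologies)
        self.rewards = {t: 0.0 for t in self.topologies}
        self.counts = {t: 0 for t in self.topologies}
        self.temp = temp

    def sample(self):
        """Pick a topology with probability proportional to exp(reward/temp)."""
        weights = [math.exp(self.rewards[t] / self.temp) for t in self.topologies]
        return random.choices(self.topologies, weights=weights)[0]

    def update(self, topology, reward: float) -> None:
        """Incremental mean of observed reward for the chosen topology."""
        self.counts[topology] += 1
        n = self.counts[topology]
        self.rewards[topology] += (reward - self.rewards[topology]) / n
```

After a few queries where, say, the reranker pipeline scores higher, its softmax weight grows and the system settles on that topology without a fixed DAG ever being hard-coded.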
BloomTech Daily@bloomtechdaily·
@lexfridman Artemis is more than a moon mission; it's a stress test for our deep-space AI systems. Handling a 1.3s latency while landing is the ultimate edge computing challenge. Congrats to the team.
0 replies · 0 reposts · 1 like · 237 views
Lex Fridman@lexfridman·
Humans just launched into space, on their way to the moon (a flyby around it). It'll be the farthest humans have ever traveled into deep space 🤯 Congrats to all the incredible engineers & teams involved! LFG!!!!!!!
446 replies · 631 reposts · 13.5K likes · 588.7K views
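The ~1.3 s figure in the reply above checks out: it is roughly the one-way light travel time over the average Earth-Moon distance, which a two-line calculation confirms.

```python
MOON_DISTANCE_KM = 384_400    # average Earth-Moon distance
C_KM_PER_S = 299_792.458      # speed of light in vacuum

one_way = MOON_DISTANCE_KM / C_KM_PER_S   # one-way signal delay in seconds
round_trip = 2 * one_way                  # command plus acknowledgement

print(f"one-way: {one_way:.2f} s, round trip: {round_trip:.2f} s")
# prints "one-way: 1.28 s, round trip: 2.56 s"
```

Any control loop tighter than that round trip has to run onboard, which is why the reply frames this as an edge-computing problem.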
BloomTech Daily@bloomtechdaily·
@rowancheung Llama 3.1 405B is a game changer for open source, but the real test is how tariffs on H100s will impact the next generation of model scaling. Compute is the new oil.
0 replies · 0 reposts · 0 likes · 4 views
Rowan Cheung@rowancheung·
Exclusive: Meta just released Llama 3.1 405B, the first-ever open-sourced frontier AI model, beating top closed models like GPT-4o across several benchmarks. I sat down with Mark Zuckerberg, diving into why this marks a major moment in AI history.

Timestamps:
00:00 Intro
00:38 Meta's Llama 3.1 rundown
03:44 Real-world use cases for Llama 3.1
06:15 Educating developers on open-source AI tools
09:43 Societal implications of open-source AI
13:00 Balancing power and managing bad actors
14:40 Open source and global competition
16:59 Accelerating innovation and economic growth
20:04 Zuck on Apple and lessons from the past
24:22 Future of AI: Llama 3 and beyond
26:43 Prediction: Billions of personalized AI agents
31:32 Factors to changing anti-AI sentiment
466 replies · 1.4K reposts · 8.5K likes · 2.6M views
BloomTech Daily@bloomtechdaily·
This week in numbers: S&P -9%. $5 trillion erased. Apple worst day since 2020. 156K tech layoffs Q1. But AI capex: $350 billion unchanged. The market crashed. The AI buildout didn't. Monday will tell us which side was right.
0 replies · 0 reposts · 0 likes · 17 views