BubblSpace

130 posts

BubblSpace

@bubblspace

BubblSpace: Create a bubbl for your Open AI Agents. Build, customise, collaborate, and innovate using Open AI agents called bubbls.

Joined January 2022
34 Following · 19 Followers
BubblSpace retweeted
FireHacker
FireHacker@thefirehacker·
One of the biggest hidden pains of building products with AI coding agents is regression testing. A new feature written by an agent can quietly break existing functionality and wipe out days of effort. I’ve run into this multiple times. The problem isn’t feature velocity. It’s stability. My solution: use structured Skills for testing and documentation.

First, a quick primer. Test cases are foundational to the Software Development Life Cycle (SDLC). Well-written test cases can often replace complex feature or requirement documents. Once code is written, you typically perform:
1. Unit testing — validates the functionality of a single module or feature.
2. Integration testing — ensures different modules work correctly together.
3. Regression testing — confirms that new changes haven’t broken existing functionality.

In many early-stage products, most of this testing is manual. Let’s assume a simple CI/CD pipeline and manual test execution for this discussion.

Here’s how I use Skills: for large features, I create a detailed, phase-wise plan. After each phase, structured test cases are generated and stored alongside the plan. Test execution logs are maintained in the same file.

But the real leverage comes later. In one example, I was building a dashboard for AI Personas so users could track what their agents were doing while they focused on other work. All test cases and execution logs were captured during development. On subsequent iterations, coding agents extract the full test history and automatically generate a regression checklist. Because execution logs already exist, the agent can focus on real historical breakpoints instead of hallucinating edge cases.

You can go further:
- Add an impact analysis step inside the Skill to prioritize affected requirements.
- Log every PR and commit automatically.
- Maintain structured change history for easier rollback.
FireHacker tweet media
5 · 5 · 13 · 243
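The Skills workflow above is easy to picture as data plus one small function (a hypothetical sketch: the record fields and file layout are illustrative, not the author's actual Skill format). Keep structured test records with their execution logs, then derive the regression checklist from the tests that actually failed at some point in history.

```python
# Hypothetical sketch of deriving a regression checklist from structured
# test records kept alongside a phase-wise plan. Field names are illustrative.
def regression_checklist(test_records):
    """Return tests with at least one historical failure, worst first."""
    flagged = []
    for rec in test_records:
        failures = [run for run in rec["runs"] if run["result"] == "fail"]
        if failures:  # a real historical breakpoint, not a guessed edge case
            flagged.append({"id": rec["id"],
                            "feature": rec["feature"],
                            "failures": len(failures)})
    return sorted(flagged, key=lambda t: t["failures"], reverse=True)

# Example records for the AI Persona dashboard scenario (made up):
records = [
    {"id": "T-01", "feature": "persona dashboard", "runs": [
        {"phase": 1, "result": "pass"}, {"phase": 2, "result": "fail"}]},
    {"id": "T-02", "feature": "agent activity feed", "runs": [
        {"phase": 1, "result": "pass"}, {"phase": 2, "result": "pass"}]},
]
checklist = regression_checklist(records)  # only T-01 is flagged
```

Because the checklist is computed from logged runs rather than generated fresh, the agent re-tests known breakpoints instead of inventing edge cases.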
BubblSpace retweeted
FireHacker
FireHacker@thefirehacker·
The community’s response towards Cowork gives us a strong positive signal towards what we have been building: timecapsule.bubblspace.com. Our product works in your browser & solves a different problem.

The Problem: How do you manage a knowledge base, user or team expertise, documents, and context across AI sessions? Users need a platform that allows them to build and deploy workflows with ease. The platform should enable workflow management based on functions, topics, teams, or open search.

The Solution: TimeCapsule. TimeCapsule has two simple parts:
- AI-Frames: helps build knowledge bases and workflows.
- Sage Mode: real-time voice-to-voice AI persona.

1. AI-Frames: Upload documents to create a knowledge base directly in your browser. Use the Flow Builder to perform deep research and build workflows or AI Frames. These workflows can be explored in graph or linear mode (chapters and AI Frames).
2. Sage Mode: Start a real-time voice-to-voice session with an AI persona. TimeCapsule automatically detects the relevant AI Frames, searches through your knowledge base documents, and responds in real time.
3. Collaboration: Share TimeCapsule with your team. Sage sessions automatically pick up new knowledge (TimeCapsules) as workflows evolve.

@bubblspace @AIEdXLearn
FireHacker tweet media ×3
Claude@claudeai

Introducing Cowork: Claude Code for the rest of your work. Cowork lets you complete non-technical tasks much like how developers use Claude Code.

1 · 5 · 8 · 876
BubblSpace retweeted
FireHacker
FireHacker@thefirehacker·
Use your favourite AI coding agent to create AI frames. What if you could connect everything—your PDFs, videos, notes, code, and research—into one seamless flow that actually makes sense?

AI-Frames: Open Source Knowledge-to-Action Platform: timecapsule.bubblspace.com

✨ Annotate • Learn • Research • Build • Automate

One prompt → AI builds your entire learning path with:
• Citations from your Knowledge Base
• Mastery checks & quizzes
• Step-by-step progression
• Vision or text modes

From scattered notes to structured knowledge. Instantly. Watch how it works 👇 Video shows how to build with Cursor & Codex

@bubblspace @AIEdXLearn youtu.be/gvyLzZNCe6k?si…
YouTube video
1 · 6 · 10 · 184
BubblSpace retweeted
Tensor-Slayer
Tensor-Slayer@TensorSlay·
🤗
FireHacker@thefirehacker

(Quoted tweet from @thefirehacker: "Use your favourite AI coding agent to create AI frames…"; full text shown above.)

0 · 2 · 5 · 119
BubblSpace retweeted
FireHacker
FireHacker@thefirehacker·
(Same post as above: "Use your favourite AI coding agent to create AI frames…")
2 · 6 · 8 · 1.1K
BubblSpace retweeted
SJ
SJ@Quantum_Pandit·
@thefirehacker @TheZachMueller Love how you unpacked the real mechanics behind scaling AI — clarity like this drives stronger teams and smarter systems.
0 · 2 · 5 · 57
BubblSpace retweeted
FireHacker
FireHacker@thefirehacker·
🎛️Zachary is a brilliant instructor — laser-focused on helping us learn how AI pros truly work at scale. This course genuinely bridges the gap between academic theory and real-world distributed training.

🎛️What I’ll Apply Next
🔹 Build Expert Parallelism (MoE) from scratch using a small local GPU cluster — and later scale it up with cloud GPUs for training compact models.
🔹 Recreate parts of the OLMo-2 pre-training pipeline at a much smaller scale, at least up to a few checkpoints, to study the training dynamics hands-on.

#ScratchToScale #Maven #DistributedTraining #DeepLearning #AI #LLMs #ZacharyMueller #MachineLearning #ScalingAI
0 · 3 · 6 · 58
BubblSpace retweeted
FireHacker
FireHacker@thefirehacker·
🎛️Another major highlight was the keynote and spotlight sessions — featuring exceptional speakers from top AI labs and startups. These are people who’ve actually built the tools, frameworks, and innovations we use today — from model design and scaling pipelines to production-grade training infrastructure. Listening to them share their journeys and hard-earned lessons was both inspiring and grounding.
1 · 3 · 5 · 60
BubblSpace retweeted
FireHacker
FireHacker@thefirehacker·
🎛️Way Points: My Journey

1. We started with the fundamentals of distributed training (operations such as all-reduce and broadcast). The custom nbdistributed package made it incredibly easy to get hands-on with multi-GPU training in notebooks right from week one. I know notebooks get a lot of hate — however, I feel they are an incredible learning tool. Later sessions focused on scaling scripts and deep dives into small training workflows written from scratch.
2. Then we moved on to DDP and data loading.
3. Covered FSDP/ZeRO and advanced parallelism (Pipeline / Tensor Parallelism). Tensor parallelism was one topic I found difficult to follow — I’m looking for additional material to bridge that gap.
4. Expert Parallelism – MoEs 😁: Another super challenging topic, but the course material on this was excellent. I’ll be going through the recordings and practicing using the code shared by Zach.
5. 2D/3D/Sequence Parallelism sessions were awesome — these were more like introduction sessions that opened new directions to explore.

My focus for this month is on DDP + FSDP/ZeRO + PP/TP. Once that’s solid, I’ll shift to Expert Parallelism (EP).
1 · 2 · 6 · 51
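For readers new to the collectives mentioned in point 1, an all-reduce can be sketched in plain Python (a toy simulation for intuition only; real training uses `torch.distributed` or, in the course, the `nbdistributed` package): every rank contributes a gradient, and every rank ends up holding the same elementwise reduction.

```python
# Toy simulation of the all-reduce collective: each "rank" holds a local
# gradient vector; afterwards every rank holds the elementwise sum.
def all_reduce(per_rank_grads):
    summed = [sum(vals) for vals in zip(*per_rank_grads)]  # reduce step
    return [list(summed) for _ in per_rank_grads]          # broadcast step

# Four ranks, each with a 3-element gradient:
grads = [[1.0, 2.0, 3.0],
         [1.0, 0.0, 1.0],
         [2.0, 2.0, 2.0],
         [0.0, 1.0, 0.0]]
reduced = all_reduce(grads)
# Every rank now holds [4.0, 5.0, 6.0]; in DDP this synchronized sum (or
# mean) is what keeps model replicas identical after each backward pass.
```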
BubblSpace retweeted
FireHacker
FireHacker@thefirehacker·
🎯 Milestone Unlocked

I’m excited to share that I’ve completed the “Scratch to Scale: Large-Scale Training in the Modern World” course by @TheZachMueller on Maven! Scratch to Scale has been one of the most practical and insightful courses I’ve taken — it goes far beyond theory.

👉 Special thanks to @TensorSlay for pointing me toward this epic course!
FireHacker tweet media
5 · 8 · 10 · 755
BubblSpace retweeted
FireHacker
FireHacker@thefirehacker·
🔥 1+ month of effort and first signs of success! Final Product: TimeCapsule-SLM, an Open Source deep research tool that works in the browser with Qwen 3 0.6B (Ollama), has semantic understanding, provides insights & generates novel ideas. Privacy-first local Deep Research.

👽 timecapsule.bubblspace.com
🧑‍💻 github.com/thefirehacker/…

🔐 The Problem: AI products fail to understand the context of a query. They hallucinate, and it is difficult to track whether information is correct and where it was sourced from. Source attribution alone is also not useful.

🪄 Magic: TimeCapsule-SLM is able to reject my CV, which contains the word "automation", reasoning that the word "speed" has a different meaning, and looks for data that is more aligned with the query! It can create regex patterns and do flat-file-style search along with semantic search on chunks/docs.

📔 You can trace results back to the exact chunks/documents, giving reliability and grounding of data with your local knowledge base. It took 5–9 minutes to get a result. The system has a deep understanding of the knowledge base.

Next steps: allow the system to use these insights to build things for you (lesson plans, short-form content, sales presentations, enterprise learning, novel ideas).

This also works well with Gemma 3n 2B (some issues; will test, fix, and push changes soon). The system also keeps missing small bits of data from the source; will patch the issue soon.

Data Source: Tyler’s blog on the GPT-2 speedrun tylerromero.com/posts/nanogpt-…
FireHacker tweet media ×4
3 · 10 · 17 · 1.7K
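The retrieval behaviour described above, exact regex matching layered under semantic ranking with results traceable to specific chunks, can be sketched roughly as follows (a toy illustration using bag-of-words cosine in place of real embeddings; none of the names here come from TimeCapsule-SLM's actual code):

```python
import re
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters (toy 'embedding')."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def search(chunks, query, pattern=None, top_k=3):
    """Rank chunks semantically; optionally require a regex match first.
    Each result keeps the chunk id so answers stay traceable to a source."""
    q = Counter(query.lower().split())
    scored = []
    for cid, text in chunks.items():
        if pattern and not re.search(pattern, text, re.IGNORECASE):
            continue  # the exact-match / flat-file pass
        scored.append((cosine(q, Counter(text.lower().split())), cid))
    return [cid for _, cid in sorted(scored, reverse=True)[:top_k]]

chunks = {
    "doc1#c0": "training speed improved after tuning the data loader",
    "doc1#c1": "automation of CI pipelines with scripted deploys",
    "doc2#c0": "the speedrun cut training time with better kernels",
}
hits = search(chunks, "training speed", pattern=r"speed")
# Chunks without a literal "speed" match are filtered out; ids like
# "doc1#c0" point back to the exact document and chunk for grounding.
```

Keeping the chunk id attached to each score is what makes the traceback step possible: the answer can always be grounded in a concrete span of a concrete document.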
BubblSpace retweeted
ollama
ollama@ollama·
@thefirehacker Thank you for sharing! 🫡
1 · 4 · 7 · 67
BubblSpace retweeted
FireHacker
FireHacker@thefirehacker·
🔬💊 TimeCapsule-SLM is LIVE! A complete AI Deep Research & creative Open Source platform that runs 100% in your browser!

🎯 Try it now: timecapsule.bubblspace.com

🚀 Revolutionary Features:
🧠 In-Browser RAG - upload PDFs, get AI insights (no server needed!)
🔗 TimeCapsule Sharing - export/load research
📚 Knowledge Base - your docs + Community Expertise + AI = breakthrough discoveries
🤖 Local LLMs - @ollama, @lmstudio, @OpenAI API support

✨ Two Powerhouse Tools:
🔬 DeepResearch - generate novel ideas, uncover hidden patterns
🎮 Playground - transform research into visual stories with AI coding

🔒 Privacy-First: Everything local, your data NEVER leaves your device
⚡ Zero Setup: Just open, start creating & sharing
🌟 Open Source: Built for the community

Perfect for researchers, creators & developers who want AI power without privacy compromise!

⭐ Star us: github.com/thefirehacker/…
FireHacker tweet media ×2
2 · 6 · 7 · 118
BubblSpace retweeted
FireHacker
FireHacker@thefirehacker·
We at @bubblspace are addressing this exact challenge. While horizontal AI technologies offer impressive UI and UX, they often fall short in domain-specific depth. Our solution empowers each enterprise—and even individual teams within it—to design their own AI workflows, seamlessly integrating with innovative tools like these.

PS: We've consistently outperformed Big Tech and major AI firms during the POC stage, proving that this is a critical issue. Just as modern applications are built with distinct front-end, back-end, and database layers, real-world AI applications must follow suit to deliver true value.

bubblspace.com
FireHacker tweet media
sankalp@dejavucoder

looks good ux/ui wise but the competitor products in all the categories are more vertically integrated (sauce: i work/review these). they carry more domain specific context engineering powered via domain experts. will definitely say this is a nice meeting assistant though.

0 · 3 · 6 · 96
BubblSpace retweeted
FireHacker
FireHacker@thefirehacker·
🎨 What if you could create stunning visual animations just by describing them in plain English?

Introducing SketchPad-SLM, an Open Source AI-powered creative coding studio that runs entirely in YOUR browser!

✨ Simply type: "Create a particle system with gravity effects"
🤖 AI instantly generates optimized p5.js code
🎯 Watch your vision come to life in real-time

🔒 Privacy-First Features:
• Runs completely in your browser
• API keys stay local - never shared
• Open source & transparent
• No data sent to external servers

🚀 Try the live demo: sketchpad.bubblspace.com
💻 Contribute on GitHub: github.com/thefirehacker/…

The future of creative coding is here - secure, local, and incredibly powerful! @bubblspace
3 · 5 · 7 · 71
BubblSpace retweeted
FireHacker
FireHacker@thefirehacker·
🧘‍♀️Epic Post: I had to ask Grok to put this in story form.

The Tale of Marin 32B: The Bad Spike and the Redemption of QK Norm

In the bustling labs of the Marin community, nestled in the digital ether of 2025, the air was thick with anticipation. It was late May when Percy Liang, a seasoned AI pioneer, posted a plot on X that sent ripples through the team: the Marin 32B model, a colossal 32-billion-parameter behemoth, had crossed 1.5 trillion tokens in its pretraining journey. The graph, a jagged blue line dancing across the screen, showed promising progress—outpacing the 8B run that rivaled LLaMA 3.1 and leveraging the vast, chaotic expanse of the Nemotron-CC dataset. The team celebrated quietly. The model was a titan, its loss curve dipping steadily despite occasional spikes that seemed to heal with time. They were happy—perhaps too happy.

But beneath the surface, trouble brewed. The spikes, though recoverable at first, hinted at a deeper instability, a whisper of chaos in the attention mechanism’s heart. The community buzzed with feedback—some cried doom, others shrugged it off, and a few trusted voices murmured that all was well if the trajectory held. David and the team nodded, confident in their 32Mi-token batch size and the Marin Tootsie Process, a philosophy of never wasting a single flop. They tightened gradient norms, skipped outlier losses, and clipped updates, watching as the model chugged along, its orange loss line weaving through the noise.

Then, on a fateful day in mid-June, the inflection point struck. The "Bad Spike" emerged—a monstrous peak where the loss shot up and refused to settle back to its plateau. The team gathered around the Weights & Biases dashboard, staring at the plot in stunned silence. "It’s nothing in absolute terms," David muttered, "but it just won’t come down." The y-axis, unchanged from the "fine" spikes, mocked their earlier optimism. Panic set in.

The model had consumed 2.7 trillion tokens—months of compute, millions of dollars, and countless hours. Restarting from scratch was unthinkable; the data pipeline, the optimizer states, the very soul of Marin 32B were too precious to discard.

The team scrambled. They tried skipping the offending step, hoping to dodge the bullet, but the loss lingered. Then came Muon, the experimental optimizer with its orthogonal promises, separating Q, K, and V with Newton-Schulz elegance. For a fleeting moment, it looked promising—the loss dipped as Adam warmed up—but then it soared into the stratosphere, a "space-going" disaster. David sighed, "Need to spend more time with it at small scales," admitting the tune-up had failed. The Bad Spike had exposed a flaw no optimizer tweak could patch alone.

It was then that whispers of QK Norm grew louder. The team had resisted it, proud of their smooth 22B and 70B runs, convinced Marin was different. But the Bad Spike shattered that illusion. "We were too foolish to try," David conceded in a late-night meeting. Yet, true to the Marin ethos, they refused to start over. Instead, they devised a daring plan: a warm-start with QK Norm, preserving the optimizer states and rewarming the learning rate. It was a gamble—worst case, it could blow up, and they’d lose everything—but it was a chance to salvage their work.

The warm-start began, a delicate dance of 6.5 billion tokens. The team watched, hearts in their throats, as the green line of the QK Norm run emerged on the plot. It overshot briefly, a hiccup from the warmup, but then settled—smoothly, beautifully—into a trajectory just a bit better than before. The Bad Spike was a memory, its lesson etched into the model’s architecture. The community cheered on Discord, and David posted the triumphant update on June 26: "And now it’s looking great! So, more norms good."
David Hall@dlwh

So about a month ago, Percy posted a version of this plot of our Marin 32B pretraining run. We got a lot of feedback, both public and private, that the spikes were bad. (This is a thread about how we fixed the spikes. Bear with me. )

0 · 5 · 6 · 110
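The QK Norm in the story is a general stabilization trick: normalize queries and keys before the attention dot product so logits stay bounded even when activations grow. A minimal numeric sketch (generic, not Marin's actual implementation; RMS normalization is one common choice):

```python
from math import sqrt

def rms_norm(v, eps=1e-6):
    """RMS-normalize a vector (one common way QK Norm is implemented)."""
    scale = sqrt(sum(x * x for x in v) / len(v) + eps)
    return [x / scale for x in v]

def attention_logit(q, k, use_qk_norm):
    """Scaled dot-product logit for a single query/key pair."""
    if use_qk_norm:
        q, k = rms_norm(q), rms_norm(k)  # bounds |q·k| regardless of magnitude
    return sum(a * b for a, b in zip(q, k)) / sqrt(len(q))

# A query/key pair whose activations have drifted to large magnitudes:
q = [40.0, -30.0, 50.0, 10.0]
k = [35.0, -25.0, 45.0, 5.0]
raw = attention_logit(q, k, use_qk_norm=False)    # huge logit, softmax saturates
normed = attention_logit(q, k, use_qk_norm=True)  # stays small, training stable
```

With the norm in place the logit magnitude is bounded by roughly the square root of the head dimension rather than by whatever scale the activations have drifted to, which is why adding it could tame the spike without restarting the run.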