LiquidMetal AI

518 posts


@LiquidMetalAI

The First Development Framework Designed For AI Coding Assistants

United States · Joined May 2024
68 Following · 111 Followers
LiquidMetal AI retweeted
Devpost@devpost·
Let's celebrate the winning projects in the @LiquidMetalAI AI Champion Ship! 🚢💨🏆 This mission? Pushing @LiquidMetalAI beyond the ordinary. The results? Elite execution across the board—from the "Ghost Protocol" defense in Project Sentinel to high-impact literacy tools like TeddyTales. Check out the full fleet of champions: 🔗liquidmetal.devpost.com/project-gallery
Csaba Kissi@csaba_kissi·
Which one is your preferred hosting?
sam@samgoodwin89·
Cloudflare has a nasty habit of rolling out breaking changes to their API. We've been broken twice in two weeks by unannounced and undocumented changes. If you broke prod like this at AWS, everyone would be fired.
Guillermo Rauch@rauchg·
Observability should tell *you* what’s broken. And fix it. Autonomously. The entire o11y industry is built around giving users the burden of making dashboards, setting up alerts, instrumenting code… This is why we call Vercel “self-driving infrastructure”.
LiquidMetal AI@LiquidMetalAI·
Agreed. Full-fidelity traces and logs must be built into everything you create, build, and deploy. We do this on our Raindrop AI infra platform for backends. You also need versioning of apps and data, otherwise your agent is fixing in prod instead of just tagging a section of prod as your canary and rolling forward or back as needed. docs.liquidmetal.ai/reference/logg…
LiquidMetal AI@LiquidMetalAI·
@levie Yes, but it is not just coding repos: it's also data repositories, files, buckets, tables, databases… We call it "Annotations," and every agentic platform needs to support it or it will never get up to speed. docs.liquidmetal.ai/concepts/annot…
Aaron Levie@levie·
“Structure codebases to be agent-first”

So much of our work today is structured around the inherent context that we all collectively have around the workflows we’re involved in. We know the projects we’re working on, why we’re working on them, who else is working on them, what tools to use for those projects, what the rough best practices are, and so on. All of that context basically comes for free by virtue of what our role is, our inherently large context windows, our background experience, etc.

That same context doesn’t come for free for agents. It takes work to ensure that agents get that context and understanding and not veer off down the wrong rabbit holes. While this is most pronounced in agentic coding, the same methodologies will come for all areas of knowledge work. Soon, we will need to figure out how to structure our work in a way that lets agents jump into a workflow and intently get up to speed.

There will be a huge premium in your business process or team for being able to have authoritative documented approaches to how things get done. And ensuring that information is kept up to date and available to the right people and agents at any time. It’s best to keep watching what’s happening in agentic coding as it’s coming for the rest of work, too.
Greg Brockman@gdb

Software development is undergoing a renaissance in front of our eyes. If you haven't used the tools recently, you are likely underestimating what you're missing. Since December, there's been a step-function improvement in what tools like Codex can do. Some great engineers at OpenAI yesterday told me that their job has fundamentally changed since December. Prior to then, they could use Codex for unit tests; now it writes essentially all the code and does a great deal of their operations and debugging. Not everyone has yet made that leap, but it's usually because of factors besides the capability of the model.

Every company faces the same opportunity now, and navigating it well — just like with cloud computing or the Internet — requires careful thought. This post shares how OpenAI is currently approaching retooling our teams towards agentic software development. We're still learning and iterating, but here's how we're thinking about it right now.

As a first step, by March 31st, we're aiming that: (1) for any technical task, the tool of first resort for humans is interacting with an agent rather than using an editor or terminal; (2) the default way humans utilize agents is explicitly evaluated as safe, but also productive enough that most workflows do not need additional permissions.

In order to get there, here's what we recommended to the team a few weeks ago:

1. Take the time to try out the tools. The tools do sell themselves — many people have had amazing experiences with 5.2 in Codex, after having churned from codex web a few months ago. But many people are also so busy they haven't had a chance to try Codex yet, or got stuck thinking "is there any way it could do X" rather than just trying.
- Designate an "agents captain" for your team — the primary person responsible for thinking about how agents can be brought into the team's workflow.
- Share experiences or questions in a few designated internal channels.
- Take a day for a company-wide Codex hackathon.

2. Create skills and AGENTS.md.
- Create and maintain an AGENTS.md for any project you work on; update the AGENTS.md whenever the agent does something wrong or struggles with a task.
- Write skills for anything that you get Codex to do, and commit them to the skills directory in a shared repository.

3. Inventory and make accessible any internal tools.
- Maintain a list of tools that your team relies on, and make sure someone takes point on making each one agent-accessible (such as via a CLI or MCP server).

4. Structure codebases to be agent-first. With the models changing so fast, this is still somewhat untrodden ground and will require some exploration.
- Write tests which are quick to run, and create high-quality interfaces between components.

5. Say no to slop. Managing AI-generated code at scale is an emerging problem, and will require new processes and conventions to keep code quality high.
- Ensure that some human is accountable for any code that gets merged. As a code reviewer, maintain at least the same bar as you would for human-written code, and make sure the author understands what they're submitting.

6. Work on basic infra. There's a lot of room for everyone to build basic infrastructure, which can be guided by internal user feedback. The core tools are getting a lot better and more usable, but there's a lot of infrastructure that currently goes around the tools, such as observability, tracking not just the committed code but the agent trajectories that led to it, and central management of the tools that agents are able to use.

Overall, adopting tools like Codex is not just a technical but also a deep cultural change, with a lot of downstream implications to figure out. We encourage every manager to drive this with their team, and to think through other action items — for example, per item 5 above, what else can prevent a lot of "functionally-correct but poorly-maintainable code" from creeping into codebases.
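The AGENTS.md recommendation is concrete enough to sketch. AGENTS.md is an open convention: freeform Markdown that coding agents read for project context; there is no required schema. A minimal example follows — every command, path, and rule in it is a hypothetical placeholder, not from any real project:

```markdown
# AGENTS.md

## Setup
- Install dependencies with `npm install` (placeholder; substitute your project's real command).

## Testing
- Run `npm test` before proposing any change; keep the suite fast enough for agents to run often.

## Conventions
- Prefer small, reviewable changes; a human is accountable for every merge.

## Known pitfalls
- Record here anything the agent got wrong last time, so the mistake isn't repeated.
```

The "Known pitfalls" section is where the post's advice to "update the AGENTS.md whenever the agent does something wrong" would accumulate over time.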

LiquidMetal AI@LiquidMetalAI·
58 modules. Zero infra tickets. Most stacks can’t say that.

As systems grow, every new agent, job, or workflow usually comes with new services, new dashboards, and new on-call surface area. You end up paying a “platform tax” just to keep the thing running. Raindrop is built to make that trade-off disappear.

In our AI ChampionSHIP program, one team decomposed their system into 58 modules because that’s what the product needed, not what the infra allowed. They described those modules in a Raindrop manifest, and the platform handled:
→ packaging and deployment
→ routing between modules
→ autoscaling and health
→ observability out of the box

No custom Kubernetes setup. No separate deployment pipelines. No infra management on their side. That’s the point of Raindrop: design the architecture your product deserves, without turning your team into an infrastructure org.
LiquidMetal AI@LiquidMetalAI·
Meet AuditGuardX, #2 Grand Prize Winner in the AI ChampionSHIP. It turns enterprise compliance into an AI-native workflow: it reads policies across dozens of frameworks, spots gaps, and regenerates compliant documents, all in minutes instead of months, so teams can ship and stay audit-ready at the same time. Under the hood, AuditGuardX runs serverless on Raindrop + Vultr: Raindrop’s SmartMemory, SmartBuckets, and SmartInference orchestrate document analysis, voice chat, and semantic search. AuditGuardX is a sharp example of what happens when one builder treats Raindrop as the backend and focuses all their energy on real enterprise impact. Congrats @patsinfotech devpost.com/software/audit…
LiquidMetal AI@LiquidMetalAI·
@janwilmake liquidmetal.ai - MCP support, build an API or agent and deploy globally. AI-infused building blocks like a RAG object store or natural-language SQL. A lot of Cloudflare under the hood, except inference.
Jan Wilmake@janwilmake·
who's building a Cloudflare Workers-native agent orchestration layer
Bhavani.py@Bhavani_00007·
be honest, which one is best for hosting?
LiquidMetal AI@LiquidMetalAI·
Meet the first Grand Prize winner of the AI ChampionSHIP: Hakivo. Hakivo turns Congress into an AI product. It tracks bills, summarizes them in plain language, and delivers NPR-style audio briefings so anyone can actually follow what’s happening in government. Under the hood, it runs 58 Raindrop modules (agents, services, tasks, queues) without managing a single piece of infrastructure. SmartBuckets power semantic search over thousands of bills. SmartMemory keeps long-running conversations grounded. Hakivo is our kind of winner: ambitious, real-world, and built to go beyond the demo. Congrats @tarikjmoody. Learn more at the link below! Thanks to @Vultr @cerebras @elevenlabs @CloudflareDev @stripe @WorkOS — the best sponsors of the AI ChampionSHIP.
LiquidMetal AI@LiquidMetalAI·
You shouldn’t be reinventing conversational memory.

In the AI ChampionSHIP program, one team took a different route. They needed an agent that stayed context-aware across sessions. Instead of building custom storage and retrieval for conversation history, they defined a SmartMemory in their Raindrop manifest and plugged it into the agent. Raindrop handled:
→ storing and indexing the right parts of every conversation
→ retrieving relevant context on the next interaction
→ keeping state consistent across long-lived sessions

No bespoke “memory service.” No manual history management or one-off databases to keep alive. SmartMemory turns cross-session context from an infra project into a configuration decision. For builders and engineers, that means you spend less time worrying about how to remember, and more time deciding what your agent should remember to actually be useful. Try it out: liquidmetal.ai
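For contrast, here is a minimal generic sketch of the kind of hand-rolled memory layer the team avoided: append conversation turns to a file, then score stored turns by keyword overlap to pull context into the next session. This is purely illustrative; the `FileMemory` name and every method on it are hypothetical, and none of this is Raindrop's SmartMemory API.

```python
import json
import os


class FileMemory:
    """A deliberately naive cross-session memory store (hypothetical example).

    Real systems would add embeddings, indexing, and retention policies;
    that gap is exactly the "infra project" the post is talking about.
    """

    def __init__(self, path: str):
        self.path = path

    def append(self, session_id: str, role: str, text: str) -> None:
        # Persist one conversation turn as a JSON line.
        with open(self.path, "a") as f:
            f.write(json.dumps({"session": session_id, "role": role, "text": text}) + "\n")

    def relevant(self, query: str, k: int = 3) -> list[str]:
        # Load all turns and rank them by naive keyword overlap with the query.
        turns = []
        if os.path.exists(self.path):
            with open(self.path) as f:
                turns = [json.loads(line) for line in f]
        q = set(query.lower().split())
        scored = sorted(turns, key=lambda t: len(q & set(t["text"].lower().split())), reverse=True)
        return [t["text"] for t in scored[:k]]
```

Every piece of this (storage format, ranking, retention) becomes an ongoing maintenance surface, which is the trade the tweet argues a managed memory primitive removes.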
LiquidMetal AI@LiquidMetalAI·
You shouldn’t be designing chunking strategies for a living. But that’s where a lot of RAG work ends up: tuning window sizes, fiddling with overlaps, wiring embedding pipelines, then rebuilding it all when the use case changes.

In our AI ChampionSHIP program, one builder dropped that entire layer of work. They plugged their data into SmartBuckets, and Raindrop handled the rest: ingest, chunking, embeddings, retrieval, all wired into the agent flow. Instead of debugging yet another indexing script, they spent their time on prompts, behaviors, and what the agent should actually do for users.

That’s what SmartBuckets are for: a production-grade RAG pipeline out of the box, so your energy goes into product decisions, not plumbing. For teams shipping AI features, those hours add up fast. Try it out: liquidmetal.ai
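The knob-tuning the tweet describes looks roughly like this hand-rolled fixed-window chunker; `window` and `overlap` are exactly the parameters teams end up re-tuning per use case. A generic illustration of the pattern, not SmartBuckets' actual ingestion code:

```python
def chunk_text(text: str, window: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size windows that overlap by `overlap`
    characters, so context isn't lost at chunk boundaries.
    Illustrative only: production chunkers also handle sentence
    boundaries, tokenization, and document structure.
    """
    if overlap >= window:
        raise ValueError("overlap must be smaller than window")
    step = window - overlap
    return [text[i:i + window] for i in range(0, max(len(text) - overlap, 1), step)]
```

Change the corpus (code vs. prose vs. tables) and these defaults stop working, which is why the post frames chunking as plumbing best pushed into the platform.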
LiquidMetal AI@LiquidMetalAI·
Shipping an AI agent shouldn’t take weeks. But for most teams, it does. Not because the idea is complicated, but because everything around it is: tool wiring, state, retries, observability, “just one more” glue service. Raindrop’s SmartComponents are designed to skip that part.

Here’s what one AI ChampionSHIP hackathon participant says: “SmartComponents are insanely fast to prototype with – went from zero to working agent in < 48h.”

SmartComponents give you ready-made pieces for:
→ RAG pipelines
→ model calls and routing
→ memory + context handling
→ control flow and monitoring

You’re not reinventing agent architecture every time. You’re assembling from components that already know how to work together. The result: idea → working agent in days, not sprints. Less time on boilerplate, more time on what the agent actually does for users. That speed compounds into more experiments, faster learning, and a much shorter path from “we should try this” to “it’s live.” Try it now: liquidmetal.ai