James


@DrizzleORM Nice, now we just need Effect v4 to actually ship their docs

overheard at spacexai:
"let's rename it X-code!"
"oh wait..."
"ok how about Code-X"
SpaceX@SpaceX
SpaceXAI and @cursor_ai are now working closely together to create the world’s best coding and knowledge work AI. The combination of Cursor’s leading product and distribution to expert software engineers with SpaceX’s million H100 equivalent Colossus training supercomputer will allow us to build the world’s most useful models. Cursor has also given SpaceX the right to acquire Cursor later this year for $60 billion or pay $10 billion for our work together.
James retweeted

The hardest thing about agents and backends is durability. @workflowsdk fixes this.
That LLM you're calling *will* go down. That service *will* rate limit you. That database *will* unexpectedly slow down. You *will* get paged 💀
I've been looking for a unicorn for a decade. I wanted the level of reliability of combining stuff like SQS / Kafka / microservices, and I absolutely did not want *that* at the same time 😂
Truly reliable systems like that are notoriously difficult to reason about, to develop locally, to test, to simulate, to deploy… Workflow SDK solves that without compromises.
We're doing what Next.js did for the frontend, but for one of the most important problems of the new generation of backend applications.
Notably, Workflow SDK has an incredible self-hosting and multi-cloud story from day 0. We've taken amazing lessons from Next.js and poured them into the many Worlds (adapters) you can deploy to.
Congrats to Pranay and the Workflow team on a generational ship: vercel.com/blog/a-new-pro…

Vercel@vercel
Vercel Workflows is GA. Your code is the orchestrator. Ship agents, backends, or any long-running process without managing queues, retries, or workers. vercel.com/blog/a-new-pro…
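The failure modes in this thread map to exactly the retry-and-backoff plumbing teams hand-roll today, which a durable workflow runtime absorbs. A minimal sketch of that plumbing (`retryWithBackoff` and the injectable `sleep` are illustrative, not any SDK's API):

```typescript
// Illustrative sketch of hand-rolled retry + exponential backoff: the
// in-process version of what a durable workflow runtime persists for you.
async function retryWithBackoff<T>(
  fn: () => Promise<T>,
  maxAttempts = 5,
  baseDelayMs = 100,
  // sleep is injectable so tests don't actually wait
  sleep: (ms: number) => Promise<void> = (ms) => new Promise((r) => setTimeout(r, ms)),
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn(); // the LLM / service call that *will* fail sometimes
    } catch (err) {
      lastError = err;
      // back off 100ms, 200ms, 400ms, ... before the next attempt
      if (attempt < maxAttempts - 1) await sleep(baseDelayMs * 2 ** attempt);
    }
  }
  throw lastError;
}
```

Note this only survives within a single process: if the process dies mid-retry, all progress is lost, which is the durability gap the thread is about.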

@HeyGarrison We've only used vercel's. You can see it for yourself if you are logged in. vercel.com/ai-gateway/mod…

@HeyGarrison Chunky is bad: it makes the model response come out a half or full sentence at a time.
Token-level streaming - or closer to it - feels buttery.
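The difference can be sketched as a rechunking transform: buffering tokens into sentence-sized flushes is what produces the chunky feel (illustrative code, not any provider's API):

```typescript
// Illustrative sketch: rechunking a token stream into sentence-sized
// pieces means fewer, larger UI updates, i.e. the "chunky" feel.
// Forwarding each token as it arrives is the buttery alternative.
async function* sentenceChunks(tokens: AsyncIterable<string>): AsyncGenerator<string> {
  let buffer = "";
  for await (const tok of tokens) {
    buffer += tok;
    // flush only on sentence boundaries
    if (/[.!?]\s*$/.test(buffer)) {
      yield buffer;
      buffer = "";
    }
  }
  if (buffer) yield buffer; // flush any trailing partial sentence
}
```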

@robertcourson if they degrade and then release a ~new amazing~ model (2x more expensive btw) then you'll be eager to make the switch

@jayair @kitlangton I thought we were done with slop features? (How can I buy this)

@SheardLuke @cramforce @computesdk Cook faster ;)
Before we entrench ourselves in the technical debt of Azure Blob and E2B

@SharingPsyche @cramforce @computesdk Sounds like you'll like something we're cooking at the moment then 🍳

Vercel Sandboxes are now the fastest sandboxes that use real VMs as the security boundary, based on the @computesdk benchmark. The team has been absolutely cooking on this.
And the best thing: because we have a unified Fluid Compute stack across Sandbox, Builds, and Functions, these wins are often shared across the stack.
On the feature side there is a really exciting roadmap ahead as well.
My favorites (all driven by feature requests from our customers):
- Persistent sandboxes (in beta, GA imminent)
- The fully mutable firewall also becomes fully programmable


@SheardLuke @cramforce @computesdk Chatbot -> user uploads file to chat -> create sandbox if not exist -> mount remote store -> ai has sandbox with user files.
Nice for a projects concept too, because multiple sandboxes can mount the same file store (many chats in the same project)
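That get-or-create flow can be sketched with an in-memory registry (the `Sandbox` shape and `createSandbox` factory here are hypothetical stand-ins, not the real Vercel Sandbox API):

```typescript
// Hypothetical sketch of "create sandbox if not exist -> mount remote store".
// Several chats can each get a sandbox that mounts the same project store.
type Sandbox = { id: string; mounts: string[] };

const sandboxes = new Map<string, Sandbox>();

function getOrCreateSandbox(
  chatId: string,
  projectStore: string,
  createSandbox: (id: string) => Sandbox = (id) => ({ id, mounts: [] }),
): Sandbox {
  let sb = sandboxes.get(chatId);
  if (!sb) {
    sb = createSandbox(chatId);   // create sandbox if it doesn't exist
    sb.mounts.push(projectStore); // mount the shared project file store
    sandboxes.set(chatId, sb);
  }
  return sb;
}
```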

@SheardLuke @cramforce @computesdk Mounting remote storage - using rclone, blobfuse, etc. The compute unit is then mostly ephemeral and you dynamically mount a blob or file store
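As a concrete sketch, the sandbox would run something like `rclone mount` at boot; here is an illustrative helper that builds that invocation (the remote and mount-point names are made up):

```typescript
// Sketch: build the `rclone mount` argv you'd spawn inside the sandbox
// at boot to attach a remote store. Flags shown are real rclone flags.
function rcloneMountArgs(remote: string, mountPoint: string): string[] {
  return [
    "mount",
    `${remote}:`,                 // the configured rclone remote
    mountPoint,                   // where the files appear inside the sandbox
    "--vfs-cache-mode", "writes", // cache writes locally, upload asynchronously
    "--daemon",                   // run the mount in the background
  ];
}

// e.g. child_process.spawn("rclone", rcloneMountArgs("projectstore", "/mnt/project"))
```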











