Runpod
645 posts

Runpod banner
Runpod @runpod
AI Infrastructure developers trust 💜 https://t.co/PWVZocfp6D
Joined March 2022
8 Following · 7.7K Followers

Pinned Tweet
Runpod @runpod
Introducing Flash. An open source Python SDK for building cloud-native AI apps. Define your hardware, functions, and dependencies in code. No Docker, no config files. github.com/runpod/flash
[tweet media]
2 replies · 7 reposts · 21 likes · 3.1K views
Runpod @runpod
Runpod is @OpenAI's infrastructure partner for Parameter Golf, the first challenge in the Model Craft series. Train the best language model that fits in a 16MB artifact in under 10 minutes on 8×H100s. Together with OpenAI, we’re distributing up to $1M in credits across the challenge period to support experimentation and broaden participation. Enter the challenge and request credits 👇
7 replies · 6 reposts · 54 likes · 21.5K views
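A quick back-of-envelope on what a 16MB artifact buys, assuming the file is almost entirely raw weights (no serialization overhead) and "16MB" means 16 MiB:

```python
# Parameter budget for a 16MB model artifact at common weight precisions,
# assuming the artifact is essentially all raw weights.
ARTIFACT_BYTES = 16 * 1024 * 1024  # 16 MiB

for name, bytes_per_param in [("fp32", 4), ("fp16/bf16", 2), ("int8", 1)]:
    params = ARTIFACT_BYTES // bytes_per_param
    print(f"{name}: ~{params / 1e6:.1f}M parameters")
# fp32: ~4.2M, fp16/bf16: ~8.4M, int8: ~16.8M
```

So the challenge is roughly "what's the best language model you can train at a few million parameters", with quantization choices trading parameter count against precision.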
Runpod @runpod
The AI market looks nothing like the narrative. We have the production data to prove it. The State of AI: what 500K developers are actually running in 2026. Read the full report now 👇
[tweet media]
3 replies · 1 repost · 6 likes · 2.2K views
Runpod @runpod
Heading to @QCon London next week! If you’re working on:
• GPU infrastructure
• inference at scale
• agent systems
• production AI workloads
Let’s talk about what you’re building. Come meet us on-site 👇
1 reply · 0 reposts · 5 likes · 463 views
Runpod @runpod
Give your coding agent a Flash skill and start building right away. npx skills add runpod/skills
[tweet media]
1 reply · 0 reposts · 1 like · 444 views
Runpod reposted
Andrew Jiang @andrewjiang
The brilliance of @karpathy is being able to distill vastly complex concepts and make them simple to understand and implement at a small scale. All it took was Claude Code and $10 on @runpod to spin up a single H100, and I had a world class ML researcher working on autopilot. I'm taking the general concept of autoresearch and applying it to an inference pipeline I've been working on (no GPU needed thankfully). Everything is so fun now.
[tweet media]

Quoted tweet from Andrej Karpathy @karpathy:
I packaged up the "autoresearch" project into a new self-contained minimal repo if people would like to play over the weekend. It's basically nanochat LLM training core stripped down to a single-GPU, one-file version of ~630 lines of code, then:
- the human iterates on the prompt (.md)
- the AI agent iterates on the training code (.py)
The goal is to engineer your agents to make the fastest research progress indefinitely and without any of your own involvement. In the image, every dot is a complete LLM training run that lasts exactly 5 minutes. The agent works in an autonomous loop on a git feature branch and accumulates git commits to the training script as it finds better settings (of lower validation loss by the end) of the neural network architecture, the optimizer, all the hyperparameters, etc. You can imagine comparing the research progress of different prompts, different agents, etc. github.com/karpathy/autor… Part code, part sci-fi, and a pinch of psychosis :)

53 replies · 167 reposts · 2.4K likes · 316.7K views
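The loop described above can be sketched in miniature. Everything here is a stand-in (the fake `train_run`, a single learning-rate hyperparameter, a plain list for "commits"); the real repo trains an actual LLM and commits improvements to a git feature branch:

```python
import random

# Toy sketch of an autoresearch-style loop: propose a hyperparameter
# tweak, run a fixed-length training job, and record a "commit" only
# when validation loss improves. Illustrative only.
random.seed(0)

def train_run(lr: float) -> float:
    """Stand-in for a 5-minute training run: returns a fake val loss."""
    return abs(lr - 3e-4) * 1000 + random.uniform(0, 0.05)

best_loss = float("inf")
commits = []   # stand-in for git commits on a feature branch
lr = 1e-3      # the one hyperparameter the "agent" iterates on here

for step in range(20):
    candidate = lr * random.uniform(0.5, 1.5)  # agent proposes a tweak
    loss = train_run(candidate)
    if loss < best_loss:                       # keep only improvements
        best_loss, lr = loss, candidate
        commits.append((step, lr, loss))

print(f"{len(commits)} commits, best val loss {best_loss:.3f}")
```

The interesting part is what's *not* in the loop: the human only edits the prompt that steers the agent, and the commit history becomes a measurable record of research progress per prompt or per agent.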
Runpod @runpod
$15K in GPU credits. Stanford TreeHacks went by in a flash. More coming soon 👀
0 replies · 3 reposts · 10 likes · 1.6K views
Dev Shah @0xDevShah
openclaw for training robots...? (not exactly... will share more updates this weekend…) next, excited to play with @theworldlabs PS: @runpod you beauty, what would i do without you!!
5 replies · 7 reposts · 86 likes · 5.8K views
Runpod @runpod
Runpod is officially HIPAA and GDPR compliant. Healthcare orgs and EU companies can now train and deploy AI models on our GPU cloud with enterprise-grade security protections for sensitive data. 🔒 🔗 runpod.io/press/runpod-a…
2 replies · 2 reposts · 7 likes · 1.8K views
Runpod reposted
Marko Denic @denicmarko
AI teams don’t need “cheap GPUs” - they need infrastructure that lets them move now.
Runpod is an AI infrastructure platform built for speed and on-demand access:
↳ Spin up serverless endpoints for inference and APIs without managing infra
↳ Multi-node instant clusters available on demand for large models (not weeks of provisioning)
↳ API-first workflow that feels like it was designed by developers, for developers
For researchers and startup teams under deadlines:
↳ No buying hardware or waiting on traditional cloud queues
↳ Launch experiments, RFP prototypes, or production inference fast
↳ Pay-as-you-go so you don’t commit before you validate
It scales with your project:
↳ Start small for testing and experimentation
↳ Access a wide GPU range (5090s, A40s, H100s) as needs grow
↳ Move to serverless auto-scaling when usage spikes
↳ Skip ops overhead and focus on shipping models
The real value isn’t GPUs - it’s removing the friction between idea and deployment.
Worth exploring for your next model or inference pipeline? Check it out here: fandf.co/4qU2Ab4
Thanks to @runpod for sponsoring this post.
[tweet media]
12 replies · 5 reposts · 34 likes · 15.1K views
Runpod @runpod
Serverless vs. Pods — how to pick the right deployment model for your AI workload 👇
Serverless: Scales to zero, pay-per-second, no infra management. Best for stateless inference and bursty traffic.
Pods: Dedicated GPUs, full control, persistent storage. Best for training, fine-tuning, and interactive dev.
The move? Build in Pods. Scale with Serverless. Teams fine-tune in Pods, then containerize and deploy inference to Serverless for production. Match the model to the workload, and don't default to one approach.
Follow us for weekly AI infrastructure breakdowns 😉
2 replies · 0 reposts · 4 likes · 844 views
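The rule of thumb above can be written down as a tiny helper. This is a hypothetical illustration, not a Runpod API; the workload categories come straight from the tweet:

```python
# Toy decision helper encoding the Serverless-vs-Pods rule of thumb:
# training / fine-tuning / interactive dev -> Pods (dedicated, persistent);
# stateless or bursty inference -> Serverless (scale-to-zero, per-second).
# Illustrative only, not a Runpod API.
def pick_deployment(workload: str, bursty: bool = False) -> str:
    if workload in {"training", "fine-tuning", "interactive-dev"}:
        return "pod"
    if workload == "inference" and bursty:
        return "serverless"
    # Steady-state stateless inference also defaults to serverless
    # for scale-to-zero and pay-per-second billing.
    return "serverless"

print(pick_deployment("fine-tuning"))             # pod
print(pick_deployment("inference", bursty=True))  # serverless
```

The "build in Pods, scale with Serverless" pattern means a team would pass through both branches over a model's lifecycle: `pod` while iterating, `serverless` once the container ships.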
Runpod reposted
Aryan Keluskar @soydotrun
Learning a dance or workout from videos is tough. It's passive, there's no feedback, and you don't know something's gone wrong till your back starts hurting. So at @hackwithtrees, we built JiggleWiggle to solve this problem. It can convert any YouTube video or live Zoom session into an interactive coach. Built on @Render using @HeyGen @Zoom @Runpod @OpenAI @Modal
[tweet media]
2 replies · 1 repost · 12 likes · 7.6K views
Prime Intellect @PrimeIntellect
Introducing Lab: A full-stack platform for training your own agentic models Build, evaluate and train on your own environments at scale without managing the underlying infrastructure. Giving everyone their own frontier AI lab.
133 replies · 291 reposts · 2.5K likes · 745K views