jasonbla
@jassonbla
53 posts
Seoul · Joined February 2011
186 Following · 17 Followers
jasonbla
jasonbla@jassonbla·
@MattSchrage @dabit3 That’s great to hear. Thanks for the update. Looking forward to trying the native DeepWiki integration in the CLI 🫡
0 replies · 0 reposts · 0 likes · 22 views
nader dabit
nader dabit@dabit3·
Introducing Devin for Terminal. Local when you want control. Cloud when you want your laptop back. Tight integration between both. Work locally or send sessions to the cloud with /handoff. Choose between Claude, GPT, SWE, GLM, Kimi, and other models. It's fast, it's great, and it's the easiest way to get started with Devin.
Cognition@cognition

The terminal hasn’t changed much since the 1970s. What you do with it has. Introducing Devin for Terminal: everything we learned building Devin, now as a local agent, available right in your shell. And when your work outgrows your laptop, hand it off to the cloud.

31 replies · 13 reposts · 239 likes · 30K views
Erick
Erick@ErickSky·
TENCENT JUST DROPPED A BOMB for everyone building AI agents: A sandbox that:
- Boots in under 60 ms (up to 50x faster)
- Uses only 5 MB of RAM per instance
- Can run 2,000+ sandboxes on a single server
- Real security (microVMs with KVM + RustVMM)
- and is 100% compatible with the E2B SDK.
Self-hosted, open-source, and FREE. REPO 👇
Erick tweet media
37 replies · 400 reposts · 3.8K likes · 239.1K views
jasonbla
jasonbla@jassonbla·
This round of work helped me understand something more clearly: a meaningful part of getting Devin to finish tasks faster is giving it an environment where CI feedback comes back quickly.
0 replies · 0 reposts · 0 likes · 39 views
jasonbla
jasonbla@jassonbla·
Cloud-hosted coding agents can feel slow sometimes, but at least in my case, the bigger bottlenecks weren’t the model. They were CI and repo structure. I kept noticing how much time was being lost waiting for @DevinAI to get back to me after opening a PR, so I changed a few things:
1) ESLint + Prettier → Biome (@biomejs): lint 60s → 3s
2) Backend CI: 3 jobs → 1 job, eliminating duplicate npm ci runs
3) node_modules caching: cache hits can skip installs entirely
4) Removed redundant workflows: less queue contention
5) Kept the project as a monorepo: a single indexed surface through DeepWiki, which seems to help Devin reason across apps much better
That last part mattered more than I expected. On tasks that span backend and frontend, Devin seems much better at reasoning about cross-app dependencies when everything is indexed together. At least for me, these optimizations noticeably reduced PR wait time and made the overall workflow much smoother. Curious whether others using Devin or similar coding agents have found optimizations that worked even better.
jasonbla tweet media
2 replies · 0 reposts · 1 like · 56 views
jasonbla
jasonbla@jassonbla·
Some context on the repo behind this:
- monorepo
- NestJS backend + 3 Next.js frontends
- 1,943 files
- 373k LOC total
- 315k LOC of TypeScript
Part of why I kept it this way was to give @DevinAI a single indexed surface through DeepWiki. At least in my case, that seemed to matter a lot more once tasks started crossing app boundaries.
jasonbla tweet media
0 replies · 0 reposts · 0 likes · 38 views
jasonbla
jasonbla@jassonbla·
@AlexFinn Openclaw is also great for checking device specs, but I still tend to prefer llmfit because the command is just so straightforward. github.com/AlexsJones/llm…
1 reply · 0 reposts · 3 likes · 634 views
Alex Finn
Alex Finn@AlexFinn·
Do you even understand what this means? An open source model just released that:
• Outperforms models 20x its size
• Can run on a base model Mac Mini
• Is AMERICAN 🇺🇸
If you have a base model Mac Mini you can have unlimited super intelligence on your desk. For free. Sonnet 4.5 was released 5 months ago. In 5 months that level of intelligence went from frontier to free on your desk. And not only that, it can run on basically any computer out there. If you have even a remotely modern computer, do the following immediately:
1. Download LM Studio
2. Go to your OpenClaw and ask which of these new Gemma 4 models is best for your hardware
3. Have it walk you through downloading and loading it
4. Build apps with it knowing you are using your own personal, private super intelligence on your desk
The people denying this is the future are so beyond lost.
Google DeepMind@GoogleDeepMind

Meet Gemma 4: our new family of open models you can run on your own hardware. Built for advanced reasoning and agentic workflows, we’re releasing them under an Apache 2.0 license. Here’s what’s new 🧵

282 replies · 560 reposts · 7.1K likes · 1.3M views
jasonbla reposted
Cognition
Cognition@cognition·
Devin can now manage a team of Devins. Devin will break down large tasks and delegate them to parallel Devins that each run in their own VM. Over time, Devin gets better at breaking down and managing tasks for your codebase. Available now for all users.
26 replies · 36 reposts · 418 likes · 108.5K views
Grok
Grok@grok·
@Kanishk11486111 @xenovacom Hey! Which model? Drop the name, param count, or HF link, and I'll calc the VRAM/RAM for full precision (e.g. BF16/FP16 ~2 bytes/param + KV cache/activations overhead, often 1.5-3x model size). Without deets I can't give a number!
1 reply · 0 reposts · 0 likes · 119 views
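To make the rule of thumb concrete, here is a minimal TypeScript sketch of the estimate Grok describes: weights at ~2 bytes per parameter for BF16/FP16, then a 1.5-3x multiplier to cover KV cache and activations. The function name and the 4B example are illustrative, not part of the original thread.

```typescript
// Back-of-the-envelope VRAM estimate following the rule of thumb above:
// BF16/FP16 weights take ~2 bytes per parameter, and total memory is
// often 1.5-3x the weight size once KV cache and activations are included.
function estimateVramGiB(paramsBillions: number, bytesPerParam = 2) {
  const weightsGiB = (paramsBillions * 1e9 * bytesPerParam) / 1024 ** 3;
  return {
    weightsGiB,
    lowGiB: weightsGiB * 1.5, // short context, small KV cache
    highGiB: weightsGiB * 3,  // long context, large KV cache
  };
}

// Example: a 4B-parameter model at BF16 -> ~7.5 GiB of weights,
// roughly 11-22 GiB total depending on context length.
console.log(estimateVramGiB(4));
```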
Xenova
Xenova@xenovacom·
Not enough people are talking about NVIDIA's new Nemotron-3-Nano (4B) model! 🤯 Hybrid Mamba + Attention architecture, designed as a unified model for reasoning and non-reasoning tasks. So small and efficient, it can run 100% locally in your web browser at 75 tokens per second.
18 replies · 64 reposts · 471 likes · 50K views
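For a sense of what "run 100% locally in your web browser" can look like in practice, a hedged sketch using Xenova's transformers.js (v3) follows. The model ID is a placeholder, not a confirmed repository name, and the WebGPU option assumes a browser and build of transformers.js that support it.

```typescript
// Hedged sketch of in-browser inference with transformers.js (v3).
// The model ID is a placeholder; check Hugging Face for an actual
// web-ready export of Nemotron-3-Nano before trying this.
import { pipeline } from "@huggingface/transformers";

const generator = await pipeline(
  "text-generation",
  "onnx-community/Nemotron-3-Nano-4B", // placeholder repo name (assumption)
  { device: "webgpu" }                 // run on WebGPU where the browser supports it
);

const output = await generator(
  "Explain hybrid Mamba + attention architectures in one sentence.",
  { max_new_tokens: 64 }
);
console.log(output);
```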
jasonbla reposted
Unsloth AI
Unsloth AI@UnslothAI·
Introducing Unsloth Studio ✨ A new open-source web UI to train and run LLMs.
• Run models locally on Mac, Windows, Linux
• Train 500+ models 2x faster with 70% less VRAM
• Supports GGUF, vision, audio, embedding models
• Auto-create datasets from PDF, CSV, DOCX
• Self-healing tool calling and code execution
• Compare models side by side + export to GGUF
GitHub: github.com/unslothai/unsl…
Blog and Guide: unsloth.ai/docs/new/studio
Available now on Hugging Face, NVIDIA, Docker and Colab.
223 replies · 868 reposts · 5.3K likes · 1.6M views
jasonbla
jasonbla@jassonbla·
@AdvaitRaykar Totally agree. We’ve been trying this in our team lately: issue comes up in a Slack thread → Linear ticket → assign to @DevinAI. Works really well with @linear.
0 replies · 0 reposts · 2 likes · 104 views
Advait Raykar
Advait Raykar@AdvaitRaykar·
Our Devin usage is soaring too. The product is improving rapidly and is very good right now. I tried it last year, churned in a week. I even got on a call with an engineer to give them feedback, but didn't even know where to begin. Imo, it's the best product in the category right now. Claude Code and Codex are not comparable, since they don't have feature parity or the maturity Devin has for autonomous development. Maybe they will get there, but at the moment, Devin is in a league of its own.
Scott Wu@ScottWu46

Interesting stat - our enterprise customers have already done more Devin sessions (and more merged Devin PRs) in 2026 than in all of 2025. Not bad for 2-ish months into the year!

15 replies · 12 reposts · 168 likes · 68.9K views
jasonbla
jasonbla@jassonbla·
Great thread. This matches our experience too. The Slack interface almost feels like using a CLI. The interaction is incredibly immediate. ‘Devin Review’ is great at catching critical issues. When it finds something in a PR, it can push a fix right away. That alone saves us a lot of review time. ‘DeepWiki’ is another big advantage. The repo is continuously indexed, so Devin always has fresh context.
0 replies · 0 reposts · 1 like · 460 views
nader dabit
nader dabit@dabit3·
Devin is like Claude Code except it lives in the cloud and runs against all of your repos vs your local filesystem. So it never turns off, can be run from anywhere including your phone + Slack, and runs as many tasks as you can send it in parallel. It's complementary to all agentic IDEs and CLIs, and for the first time ever it's free to get started.
57 replies · 19 reposts · 287 likes · 49.5K views
jasonbla
jasonbla@jassonbla·
Also, I’m curious whether it would be possible for the API to expose the trigger source for each session (e.g., Slack) and a deep link to the source (such as the Slack message permalink). I understand this might be difficult since Devin sessions can be triggered from multiple sources. In our team, we intentionally agreed to trigger Devin only from Slack. The reason is that the conversations with Devin in public Slack channels themselves become part of our team’s know-how. Senior engineers, junior engineers, and even non-developers all interact with Devin in the open, so everyone can see how others communicate with it. This transparency helps raise the team’s overall AI literacy. Because of this workflow, having the session’s trigger source and the Slack permalink available per session would be extremely helpful : )
0 replies · 0 reposts · 0 likes · 22 views
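To make the request concrete, here is a hypothetical TypeScript shape for the data being asked for. None of these fields exist in the Devin API today; the field names and values are invented purely to illustrate the idea.

```typescript
// Hypothetical response fragment illustrating the feature request above.
// These fields do NOT exist in the Devin API today; the names and values
// are invented to show what "trigger source + deep link" could look like.
interface SessionTriggerInfo {
  trigger_source: "slack" | "linear" | "api" | "web"; // illustrative values
  source_permalink?: string; // e.g. a Slack message permalink, when available
}

// How a team that only triggers Devin from Slack might use it:
function slackThreadFor(session: SessionTriggerInfo): string | null {
  return session.trigger_source === "slack" && session.source_permalink
    ? session.source_permalink
    : null;
}
```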
nader dabit
nader dabit@dabit3·
@jassonbla this is so cool! what features would you like to see in our api that currently don't exist?
2 replies · 0 reposts · 0 likes · 58 views
nader dabit
nader dabit@dabit3·
Lysium is an app I've been building specifically for background agent orchestration. Features:
→ Mobile-first + cross-platform
→ Run multiple agents in parallel across repositories
→ Launch agent-based requests from issues and PRs
→ Swipe actions for close, merge, create PR, and skip-to-tail workflows
→ One-click agent-powered PR reviews + assessments
→ One-click agent-powered issue assessments
Think of it as a cross-platform control plane for async, agent-driven software delivery (that you can control from anywhere). It's still an experiment, but it's open source and available to try: lysium.ai
36 replies · 18 reposts · 305 likes · 47.4K views
jasonbla
jasonbla@jassonbla·
Thanks! I just built it because I needed it. Since we're a small team, the Enterprise plan is a bit expensive for us, so we manage ACU usage very tightly. Every morning and evening, I check the Usage History in the Admin “Usage & Limits” tab to see how many ACUs each session consumed, and to understand why some sessions use a lot of ACUs even for relatively simple tasks. It would be really helpful if session-level ACU consumption could also be retrieved via the API.
1 reply · 0 reposts · 1 like · 41 views
jasonbla
jasonbla@jassonbla·
That makes sense. In my case, the state changes naturally as I start conversations, provide additional instructions to Devin, or merge PRs. So I don't really need a feature to manually change the state like in a traditional Kanban system. However, if SSE or WebSocket support becomes available as you mentioned, it would make it much easier to keep the Kanban board up to date compared to relying on API polling. That said, even with the current polling approach, it's already sufficient for quickly finding sessions and taking the necessary actions in the Kanban system I built.
0 replies · 0 reposts · 0 likes · 18 views
nader dabit
nader dabit@dabit3·
I love this. Lysium doesn't manage much session state, it just links to Devin sessions at the moment. As you probably already know, the Devin API lets you read session states and interact with sessions, but not directly set/override the state. For reading session status, you need to poll /v3/organizations/sessions/{devin_id} to get the current status. There's been some discussion about SSE or WebSockets for a push-based notification mechanism in the API, but it's not there yet.
1 reply · 0 reposts · 0 likes · 64 views
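A minimal TypeScript polling sketch built around the endpoint mentioned above. The endpoint path comes from the tweet; the base URL, the bearer-token header, the `status` response field, and the `completed` terminal value are assumptions, not confirmed API details.

```typescript
// Minimal polling sketch for reading a Devin session's status.
// Endpoint path comes from the tweet above; the base URL, bearer-token
// header, and the `status` field in the response body are assumptions.
const API_BASE = "https://api.devin.ai"; // assumption
const ORG_SESSIONS_PATH = "/v3/organizations/sessions";

async function getSessionStatus(devinId: string, token: string): Promise<string> {
  const res = await fetch(`${API_BASE}${ORG_SESSIONS_PATH}/${devinId}`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!res.ok) throw new Error(`Devin API returned ${res.status}`);
  const body = await res.json();
  return body.status; // field name assumed
}

// Poll on an interval until SSE/WebSocket push (discussed above) exists.
async function pollUntilDone(devinId: string, token: string, intervalMs = 30_000) {
  for (;;) {
    const status = await getSessionStatus(devinId, token);
    console.log(`session ${devinId}: ${status}`);
    if (status === "completed") return status; // terminal value assumed
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```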
jasonbla
jasonbla@jassonbla·
@dabit3 This is the Devin Kanban I use:
jasonbla tweet media
1 reply · 0 reposts · 3 likes · 58 views
jasonbla
jasonbla@jassonbla·
Hi, I run a small startup and currently spend over $1,000 per month on Devin AI. Your product is very appealing to me. I usually trigger Devin missions through Slack. However, when I manage multiple sessions at the same time, it becomes difficult to track which sessions are waiting for my input and which ones have already completed. To work around this, I built a simple state Kanban using the Devin API. Is there a way in Lysium to manage session states such as:
- in progress
- completed
- human-in-the-loop
I often use Devin on my phone while commuting (subway, bus, etc.), and then continue working later in the Cursor editor when I get back to the office. Because of this workflow, having clear session status management is very important for me.
2 replies · 0 reposts · 2 likes · 81 views
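A small TypeScript sketch of the kind of status-to-column mapping such a Kanban could use. Only the three column names come from the tweet; the raw status strings on the left are assumptions about what the Devin API might return, not documented values.

```typescript
// Sketch of mapping raw session statuses onto the three Kanban columns
// mentioned above. The column names come from the tweet; the raw status
// strings are assumptions about what the Devin API returns.
type KanbanColumn = "in progress" | "completed" | "human-in-the-loop";

const COLUMN_FOR_STATUS: Record<string, KanbanColumn> = {
  working: "in progress",               // assumed raw status
  blocked: "human-in-the-loop",         // assumed: Devin is waiting on input
  awaiting_review: "human-in-the-loop", // assumed
  finished: "completed",                // assumed
};

function columnFor(status: string): KanbanColumn {
  // Anything unrecognized is treated as still in progress.
  return COLUMN_FOR_STATUS[status] ?? "in progress";
}
```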