Jacob
@jakecodes
2.1K posts

Building Kai and 1Medium https://t.co/7FFToZ0W6Q https://t.co/6OIaxvug3H

Philadelphia · Joined December 2015
814 Following · 2.6K Followers
Jacob @jakecodes
Shipped 10 releases for Kai this week (v0.9.14 → v0.9.23). Now:
• capture/push is change-aware (no longer O(all files))
• CI skips unaffected work
• AI- vs human-written code is tracked
You + AI can actually understand your codebase. github.com/kaicontext/kai…
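Kai's internals aren't published in this thread, but the general technique behind "change-aware, not O(all files)" is a content-hash manifest: hash each file, compare against the last capture, and only process what differs. A minimal sketch (all names hypothetical, not Kai's actual API):

```python
import hashlib


def file_digest(path):
    # Hash file contents so unchanged files can be skipped on the next capture.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def changed_files(paths, manifest):
    """Return only paths whose content hash differs from the stored manifest,
    updating the manifest in place. Cost scales with changed files, not all files."""
    changed = []
    for p in paths:
        digest = file_digest(p)
        if manifest.get(p) != digest:
            changed.append(p)
            manifest[p] = digest
    return changed
```

On a second run with an up-to-date manifest, `changed_files` returns an empty list, which is what lets a capture/push step become effectively free for untouched files.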
Jacob @jakecodes
Next version of Kai: created a pull-through cache on Artifact Registry. One command, zero Dockerfile changes. The first pull hits Docker Hub; everything after is cached.
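Artifact Registry supports this pattern via a remote ("pull-through") repository. A configuration sketch, with hypothetical project, location, and repository names (the tweet doesn't show Kai's actual setup):

```shell
# Create a remote repository that proxies and caches Docker Hub.
gcloud artifacts repositories create dockerhub-cache \
  --project=my-project \
  --repository-format=docker \
  --location=us-central1 \
  --mode=remote-repository \
  --remote-docker-repo=DOCKER-HUB \
  --description="Pull-through cache for Docker Hub"

# Pull through the cache instead of hitting Docker Hub directly;
# the image reference changes, but no Dockerfile contents do.
docker pull us-central1-docker.pkg.dev/my-project/dockerhub-cache/library/node:22
```

The first pull populates the cache; subsequent pulls of the same tag are served from Artifact Registry.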
Jacob @jakecodes
This release gives you Kai grep. I now use Kai grep instead of Claude Code's grep. Yesterday I had Claude do a refactor. A plain text search made it look safe, but Kai grep flagged a hard dependency in another module. I would have shipped a production bug. github.com/kailayerhq/kai
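Kai grep's implementation isn't shown here, but the gap between text search and structural search is easy to demonstrate. As a rough illustration using Python's `ast` module (purely hypothetical relative to Kai, which targets other languages too): a text grep matches comments and unrelated strings, while walking the syntax tree finds only genuine call sites.

```python
import ast

SOURCE = """
def save(record):
    return db.write(record)

def flush():
    # The word "save" below is only in this comment: text search over-matches.
    pass

class Archiver:
    def run(self, rec):
        return save(rec)  # the one real call site
"""


def call_sites(source, name):
    """Find genuine call expressions of `name` by walking the syntax tree,
    ignoring comments, strings, and attribute accesses that merely contain it."""
    tree = ast.parse(source)
    return [
        node.lineno
        for node in ast.walk(tree)
        if isinstance(node, ast.Call)
        and isinstance(node.func, ast.Name)
        and node.func.id == name
    ]
```

Here a naive substring search reports three lines mentioning `save`, but the tree walk reports exactly one real call, and conversely it will surface a call site that a refactor's text diff made look unrelated.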
Jacob @jakecodes
@TalAdam Incredible! We're in the process of releasing Kai Server as open source today.
Jacob @jakecodes
Super excited because in a few seconds Kai will be deployed by Kai. Once this is done, Kai Server will be open sourced, and you can do this too on your own hardware or cloud. Your agents will be so happy.
Jacob @jakecodes
A few pilot teams told me the same thing after using Kai: "This gives me peace of mind shipping AI-generated code." AI can generate changes faster than humans can review. Kai gives both the developer and the AI agent a shared understanding of the repo. github.com/kailayerhq/kai
Jacob @jakecodes
@_brian_johnson That's so cool! It would be great to have that in Kai directly; maybe you'd like to contribute.
Jacob @jakecodes
Claude Code: 100k tokens, 20 min for my refactor. With Kai: 20k tokens, 2 min. Semantic infrastructure for AI agents: call graphs, dependencies, impact analysis, structured rather than grepped.
claude mcp add kai -- npx -y kai-mcp
Open source: github.com/kailayerhq/kai
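"Impact analysis" over a call graph usually means a reverse reachability query: given a changed symbol, find everything that transitively calls it. A self-contained sketch of that idea (the graph and symbol names are invented for illustration; this is not Kai's data model):

```python
from collections import deque

# Hypothetical call graph: each function maps to the functions it calls.
CALLS = {
    "api.handler": ["auth.check", "orders.create"],
    "orders.create": ["db.insert", "pricing.total"],
    "pricing.total": ["pricing.tax"],
    "auth.check": [],
    "db.insert": [],
    "pricing.tax": [],
}


def impacted(changed, calls):
    """BFS over the reverse call graph: return every function that
    transitively calls the changed symbol."""
    callers = {}
    for src, dsts in calls.items():
        for dst in dsts:
            callers.setdefault(dst, set()).add(src)
    seen, queue = set(), deque([changed])
    while queue:
        cur = queue.popleft()
        for caller in callers.get(cur, ()):
            if caller not in seen:
                seen.add(caller)
                queue.append(caller)
    return seen
```

Editing `pricing.tax` impacts `pricing.total`, `orders.create`, and `api.handler`; a structured query like this is what replaces handing an agent the full grep output.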
Jacob @jakecodes
@mipsytipsy @boristane Testing is the most interesting stage to watch. We're already asking agents to write tests. The question shifts from "can it generate?" to "how do we validate confidently?" Observability closes the loop after deploy. The next frontier is tightening the loop before it.
Charity Majors @mipsytipsy
Every stage of the traditional SDLC is collapsing, except monitoring. And monitoring needs to evolve. Observability becomes the feedback mechanism that drives the entire loop...the connective tissue of the whole system. 🙌 Read the rest from @boristane, boristane.com/blog/the-softw…
Jacob @jakecodes
Hours of coding made 10-minute CI feel free. The feedback loop? Not your problem. Now an agent does the coding in 5 minutes, and CI is the feedback loop. Nothing changed in your pipeline. Everything changed in your baseline.
Jacob @jakecodes
@yuxiyou "Not understanding Vue" is a feeling, but once agents write most of the code, not knowing how to test it may be the bigger challenge. My Chinese isn't very fluent, please bear with me 🙏
尤雨溪 @yuxiyou
Today I had AI build a very complex Vue-related feature, at the framework level. At first the page was completely broken, and I thought even 4.6 Opus had finally hit its limit. But after I had it spin up a real server and debug with Playwright, it actually fixed it. Reading some parts of its implementation closely was tiring even for me. It feels like the "doesn't understand Vue" meme may really come true one day…
Jacob @jakecodes
@youyuxi I think "most loved" is the best benchmark.
Evan You @youyuxi
Vite & Vitest continue to be the most loved technologies in the JS ecosystem (screenshot from State of JS 2025). Rolldown and oxlint appeared in the rankings too! Next year we will not only see Rolldown, tsdown, oxlint and oxfmt rising, but also something that unifies all of them on the list…
Jacob @jakecodes
@Lethain The eval-as-signal vs. eval-as-gate distinction mirrors the same problem in traditional CI: the more heuristic your test selection, the less you trust it as a blocking check. Determinism is what earns gate status.
Will Larson @Lethain
Wrote up a series of nine short posts on interesting problems we encountered while building an internal agent framework (virtual files, compaction, LLM vs code-driven workflows, etc). lethain.com/agents-series/
Jacob @jakecodes
@dhh When code generation drops below a minute, validation time dominates cycle time.
DHH @dhh
Kevin: "I just raced Claude and Kimi K2.5 against that bug that Ryan was talking about. K2.5 fixed it in 21s. Claude took just over a minute to make the plan, then about 2 minutes to execute on it. Both had the same fix, though." (K2.5 is now my main driver. Opus just backup.)
Jacob @jakecodes
The number one CI strategy most teams rely on is "just run everything." It works as long as compute is cheap and teams are small. But what happens when PR volume doubles? What happens when agents generate 10x more changes?
Jacob @jakecodes
I replaced mocha with howth.run for API tests, without rewriting a single test. CI dropped from 12m to 1.5m, an ~87.5% reduction. v0.5.31
Jacob @jakecodes
New JS/TS runtime. Bundler. Test runner. Currently getting close to 2x Bun speeds. howth.run
Jason Freedman @jasonfreedman
Next up, Israel! I'll be in Tel Aviv and would love to meet people who want to chat startups, investing, Orange Collective/Y Combinator, and hang with me and my family! Old friends and new friends! I'm hosting a few events. Comment below or DM if you're interested.