Christopher
@communicating
25.9K posts

Optimist, Geek, Building @AgletsAI. Dot Connector, Tool Builder, Info Hacker & Coder. Into Edge & Physical AI, Agents, Small LLMs, and making hard things easier

Now YVR | Soon Everywhere · Joined June 2008
297 Following · 2K Followers
Christopher@communicating·
A really interesting pdf parser lib. Designed to run as a CLI. No Python. Local. Will have to test it though. Right in the readme @llama_index points to their cloud product for more advanced tasks, so it might just be for simple, clean pdfs? Still cool! github.com/run-llama/lite…
Christopher@communicating·
Oops. Is the new Conductor model an unlicensed Kimi fine-tune? If it’s a fine-tune, that’s not the issue (use what works); it’s the fact they apparently didn’t license it and are now, post-discovery, hiding the fact? I bet (hope) this is just a misunderstanding but it looks bad…
Fynn@fynnso

was messing with the OpenAI base URL in Cursor and caught this: accounts/anysphere/models/kimi-k2p5-rl-0317-s515-fast. So composer 2 is just Kimi K2.5 with RL. At least rename the model ID.

Christopher@communicating·
One of my favourite local ASR models just got a significant upgrade in capabilities while slashing its size. The only thing I think it’s missing (that the old model had) is fill-in-the-middle capabilities, but I’ll need to dig deeper to be sure. huggingface.co/ibm-granite/gr…
Christopher@communicating·
As I build my own (software only) infra layer I continue to be inspired by the work @baseten is doing

Innovations like this help push costs down while still maintaining the quality customers see (demand), which is huge

They did “write the book” on infra so it’s not surprising 😉
Rachel Rapp@rapprach

Today we're launching BDN, our infra team's answer to cold starts 1) for big models, and 2) for co's that need to burst scale (two of the most painful scenarios for cold starts). Spoiler: it's all about the weights.

Christopher@communicating·
@ArelAvellino @LangChain Well stated. Why don’t more production systems follow this simple rule: Isolation first, then controlled sharing. Nice job with fleets.
Arel Avellino@ArelAvellino·
@LangChain The memory-per-agent model is the right call. Shared memory across a fleet is where multi-agent systems fall apart - you get state collisions, context bleed, agents stepping on each other's work. Isolation first, then controlled sharing.
LangChain@LangChain·
Introducing LangSmith Fleet: an enterprise workspace for creating, using, and managing your fleet of agents. Fleet agents have their own memory, access to a collection of tools and skills, and can be exposed through the communication channels your team uses every day.

Fleet includes:
→ Agent identity and credential management with “Claws” and “Assistants”
→ Sharing and permissions to control who can run, clone, and edit (just like Google Docs)
→ Custom Slack bots so each agent has its own identity in Slack

Try Fleet: smith.langchain.com/agents?skipOnb…
Read the announcement: blog.langchain.com/introducing-la…
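The "isolation first, then controlled sharing" rule above can be sketched in a few lines. This is a hypothetical illustration, not LangSmith Fleet's actual API: each agent keeps a private store, and cross-agent visibility only happens through an explicit, value-by-value hand-off.

```python
# Hypothetical sketch of memory-per-agent isolation with controlled
# sharing. Each agent owns a private dict; sharing copies one value
# rather than handing out a reference to the whole store.

class Agent:
    def __init__(self, name: str):
        self.name = name
        self._memory = {}       # private, per-agent state
        self._shared_in = {}    # values explicitly shared by other agents

    def remember(self, key, value):
        self._memory[key] = value

    def recall(self, key):
        return self._memory.get(key)

    def share(self, other: "Agent", key):
        # Controlled sharing: a single value crosses the boundary,
        # tagged with its origin, so there is no silent state collision.
        other._shared_in[(self.name, key)] = self._memory[key]


planner = Agent("planner")
coder = Agent("coder")

planner.remember("plan", "refactor auth module")
coder.remember("plan", "local scratch note")   # no collision: stores are isolated

planner.share(coder, "plan")                   # explicit, scoped hand-off
```

With a single shared store, the second `remember("plan", ...)` would have clobbered the first; here each agent keeps its own view and the planner's plan arrives in the coder's inbox under an explicit `("planner", "plan")` key.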
Christopher@communicating·
LLMs are UNIX-style stream processors where “everything is a text stream.”

Ignore this fact and you’re fighting the LLM. Ignoring it is also why most expressions of agentic loops (outside of coding harnesses) currently fail, badly.

Note: this is also why CLIs work so well
Christopher@communicating·
I almost can’t believe it’s working! As agents evolve, the future looks remarkably like the 70s/80s as three core truths are reinforced:

Bell Labs got it right
Simplicity beats complexity every time; composability scales
Advanced behavior emerges from basic primitives
Christopher@communicating·
@gregisenberg This is one of the bigger steps towards letting anyone be a builder. Encoding taste is freaking hard. This new iteration of Stitch is getting really close to letting anyone who has taste express it. Looks like I may be going short on Figma, etc. 👍
Christopher@communicating·
Years of data fragmentation are bottlenecking enterprise agent adoption. The solution requires a 3-part stack:

A Virtual Context Layer (unified, secure data access **for agents**, not humans)
An Orchestration Layer (observable, managed workflows)
A Governance Layer (control risk)
Christopher@communicating·
Naming things is hard. I prefer Virtual Context System (VCS) but, like almost every other three-letter acronym in our industry, those letters are not only taken but well established. Sticking with Virtual Context Layer (VCL) I suppose.
Christopher@communicating·
@gneubig It’s not the case any longer. In a pick-your-foundation-model-of-choice environment we’re dealing with core management (orchestration, planning, context) and deployment (security, governance, and again orchestration) issues now. How we apply these processes to the harness is the unlock
Graham Neubig@gneubig·
In 2025 the biggest barrier to using agents effectively was definitely agentic model accuracy. That no longer feels like as much of a barrier -- context, security, and cost definitely play an almost bigger role now.
Christopher@communicating·
@badlogicgames @dmwlff @sprobertson @swyx @dillon_mulroy Absolutely. To be clear this was a bravo for Pi not a boo for CC. I just like seeing credit given when inspiration is leveraged. If we’re not all learning from each other we’re doing something wrong. CC was at the beachhead of the CLI craze no doubt about that. 🍺
Dillon Mulroy@dillon_mulroy·
thoughts after day 1 of using pi full time

- less is more
- i don't miss subagents like i thought i would
- /tree is an insanely good context management primitive (and partially why i haven't reached for subagents yet)
- based only on vibes, i think having a minimum system prompt is improving code quality
- telling pi to copy opencode's webfetch and websearch tools was a good play
Christopher@communicating·
@dillon_mulroy What about opencode’s webfetch and websearch tools in particular are you a fan of?
Christopher@communicating·
@dmwlff @badlogicgames @sprobertson @swyx @dillon_mulroy Maybe I’m wrong but it feels like I’ve seen a lot of inspiration from Pi heading into CC in the past bit which is amazing. From the original repo to all the extensions Pi is sure having a positive impact. Hopefully the inspiration is being acknowledged.
Christopher@communicating·
@OfficialLoganK Indeed it is. Our reality going forward is review in general (across domains) becoming the dominant bottleneck, and figuring out how we handle that. I’ve been thinking about this, along with the front end of the pipeline (orchestration) and the guts (governance), a lot.
Logan Kilpatrick@OfficialLoganK·
The bottleneck has so quickly moved from code generation to code review that it is actually a bit jarring. None of the current systems / norms are setup for this world yet.
Christopher@communicating·
@evalstate That’s, I think, the most exciting part of the addition: adding small, focussed or vertical models to offload specific tasks. Having the infra for that basically automated, for free, is massive. 👍🙏 Your agent is quickly becoming a “reference agent” for a lot of functionality.
Shaun Smith@evalstate·
Yes, the llama.cpp thing is nice as it makes it very easy to download models, and not having to configure windows, output lengths etc. is super convenient. Qwen3.5-9B is small and capable. As a subagent, you can just ask a big model to tune it for a task you have in mind (keep history off etc.)
Shaun Smith@evalstate·
fast-agent 0.6.0... big update: Anthropic 1M Context Window defaults, Google model improvements... and llama.cpp support. Discovers and sets model parameters and capabilities (e.g. Vision) from llama.cpp servers.