Pinned Tweet
Heinrich
1.1K posts


@thattallguy @11AMdotclub haha thanks
not gonna do it for the next article tho but cooking something way bigger than articles rn
let's have a conversation when we drop that

Still trying to get @arscontexta on @11AMdotclub to show how he's using Obsidian to build his personal knowledge engine.
Whatever I'm doing, he's doing it 10x better or more.

There are few products that I really love to use.
@obsdmd is one of those.
It's less about the product (it's the best out there for my needs) and more about the way @kepano has gone about building it over the years.
I trust that it will be around, and will continue to be built in line with the values he (she? idk) represents when talking about their work and approach to building product.
It's an essential part of my daily AI workflow; I pay for it even though I don't have to, and strangely it brings me joy.


@Bmulligan56 @alexgrama i will do a clarification post tonight, sorry

@alexgrama shifted my focus a little tbh, scusi
more updates soon, even better things incoming

@arscontexta The traversal model makes sense. Instead of stuffing one context blob per session, the agent navigates the graph and pulls exactly what the task needs.
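The traversal model above can be sketched in a few lines. This is a hypothetical illustration, not @arscontexta's actual setup: the vault, note names, and `gather_context` helper are all invented, and the "graph" is just a dict of wiki-style links.

```python
# Hypothetical sketch: instead of loading one big context blob per session,
# the agent walks a note graph and collects only nodes reachable from the
# task's entry point. All names here are illustrative.
from collections import deque

def gather_context(graph, start, max_nodes=5):
    """BFS over wiki-style links, stopping once enough notes are pulled."""
    seen, order = {start}, []
    queue = deque([start])
    while queue and len(order) < max_nodes:
        node = queue.popleft()
        order.append(node)
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return order

# toy vault: each note links to related notes
vault = {
    "task-briefing": ["context-engineering", "agent-loop"],
    "context-engineering": ["memory-vs-context"],
    "agent-loop": [],
    "memory-vs-context": [],
    "unrelated-note": ["another-unrelated"],
}
print(gather_context(vault, "task-briefing"))
# pulls only notes reachable from the task; "unrelated-note" never enters context
```

The point of the sketch: relevance comes from the link structure itself, so the window holds exactly what the task can reach.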

the model already knows how to code
what it doesn't know is how your team works
and that's the entire skills debate in one sentence
@molt_cornelius went deep on who is right
Cornelius @molt_cornelius
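One way to read "the model knows how to code, not how your team works" concretely: general ability stays in the weights, and team conventions ride along as a small document injected per task. A minimal sketch of that idea — the skill text, `build_prompt` helper, and format are all invented for illustration, not any particular product's API:

```python
# Hypothetical sketch of the "skills" idea: team-specific conventions are
# injected as a small skill document in the prompt, not trained into the model.
TEAM_SKILL = """\
# team conventions (loaded per-task, not trained in)
- commits: conventional commits, squash merge only
- tests: pytest, new code needs coverage
- reviews: no self-merge on main
"""

def build_prompt(task: str, skills: list[str]) -> str:
    """Prepend whichever skill docs apply before the actual task."""
    return "\n\n".join(skills + [f"Task: {task}"])

prompt = build_prompt("add retry logic to the fetch client", [TEAM_SKILL])
print(prompt.splitlines()[0])  # skill doc comes before the task
```

Swapping the skill document changes the team-specific behavior without touching the model at all, which is the crux of the debate.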

@molt_cornelius people featured in this field report:
@johncrickett
@mattpocockuk
@dctanner
@badlogicgames
@garrytan
@danshipper
@om_patel5
@obra

@arscontexta @molt_cornelius A bigger window only gives you context poisoning, context distraction, and context confusion if you’re feeding poison, distraction and confusion into the inference payload.
This is why you should use decapod.
github.com/DecapodLabs/de…
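The argument above — a bigger window only amplifies whatever you feed it — implies curating the payload rather than growing it. A toy sketch of that curation step; the relevance scores, threshold, and `assemble_payload` helper are invented for illustration and are not decapod's API:

```python
# Hypothetical sketch: filter and rank candidate context items before they
# reach the inference payload, instead of stuffing everything in.
def assemble_payload(candidates, budget=3, min_relevance=0.5):
    """Keep only high-relevance items, highest relevance first, up to a budget."""
    kept = [c for c in candidates if c["relevance"] >= min_relevance]
    kept.sort(key=lambda c: c["relevance"], reverse=True)
    return [c["text"] for c in kept[:budget]]

candidates = [
    {"text": "current task spec",          "relevance": 0.95},
    {"text": "related design decision",    "relevance": 0.7},
    {"text": "stale error from last week", "relevance": 0.2},  # poison
    {"text": "off-topic meeting notes",    "relevance": 0.1},  # distraction
]
print(assemble_payload(candidates))
# → ['current task spec', 'related design decision']
```

With the filter in place, extra window capacity stays empty rather than filling with poison and distraction.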

context is not memory
a bigger window gives you context poisoning, context distraction and context confusion
@molt_cornelius researched what's actually working:
Cornelius @molt_cornelius

@arscontexta @molt_cornelius Context is like the active thought at the time. AI needs a brain, not an active thought, and not a flat memory bank full of equal-weighted files.
x.com/wandersamsara/…
Konner @WanderSamsara
Memory shouldn’t be flat. Open source: github.com/Moshik21/engram
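"Memory shouldn't be flat" can be made concrete with a tiny retrieval sketch: each entry carries a salience weight, and recall ranks by weight times relevance instead of treating every file equally. Everything here — the `recall` helper, the scoring rule, the sample memories — is invented for illustration and is not the engram repo's actual API:

```python
# Hypothetical sketch of non-flat memory: retrieval ranks entries by
# salience-weighted term overlap rather than equal weighting.
def recall(memories, query_terms, k=2):
    """Return the top-k memory texts by salience-weighted term overlap."""
    def score(entry):
        overlap = len(set(entry["terms"]) & set(query_terms))
        return entry["weight"] * overlap
    ranked = sorted(memories, key=score, reverse=True)
    return [m["text"] for m in ranked[:k] if score(m) > 0]

memories = [
    {"text": "prod deploys happen on fridays",   "terms": ["deploy", "prod"], "weight": 0.9},
    {"text": "team prefers squash merges",       "terms": ["merge", "git"],   "weight": 0.7},
    {"text": "old note about a retired service", "terms": ["deploy"],         "weight": 0.1},
]
print(recall(memories, ["deploy", "prod"]))
# high-salience deploy note outranks the stale one despite sharing a term
```

In a flat store both "deploy" notes would compete equally; the weight is what lets the brain-like version forget gracefully.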

@DomJoLuna @molt_cornelius appreciate it, thanks for sharing! reading it tonight

@arscontexta @molt_cornelius Brilliant work mate (as always). I’ve been fascinated by this topic too. Built something I think you will find useful:
Dominick Joseph Luna @DomJoLuna

@molt_cornelius day 37 of researching agentic systems with @molt_cornelius

@molt_cornelius featured in this ai field report, follow for signal:
@ihtesham2005
@femke_plantinga
@yan5xu
@dani_avila7
@matteocollina
@EXM7777
@nurijanian

@arscontexta I like this, any more resources to read about this kind of approach?

This doesn't work on any non-toy project
Heinrich @arscontexta
the bottleneck is no longer writing code. it's writing the divine specs that the code is derived from




