Ian Clarke
4K posts

Ian Clarke
@Sanity
Building @FreenetOrg — the antidote to Big Tech | Founder of https://t.co/zHHxQQAv8r (AI for fair, stress-free negotiation) 🇮🇪🇺🇸
Austin, TX · Joined April 2007
430 Following · 2.4K Followers

@DoingFedTime If they add this feature, the OS should also display the name of one of the legislators who voted for this idiocy. Pick one at random, or rotate through them. The name should blink.

Dylan, useful idiot with commit access, pushed age verification PRs to systemd, Ubuntu & Arch,
got 2 Microslop employees to merge them, called it 'hilariously pointless' in the PR itself,
then watched Lennart personally block the revert after community outrage.
Unpaid compliance simp.
Link below...


@marknoble @notch @SimpleXChat SimpleX decentralizes relay ownership, but it is still a set of isolated relays rather than a unified network. Freenet is a unified global network, like the internet.

@Sanity @notch @SimpleXChat @Sanity SimpleX uses fully decentralized relays: open-source, self-hostable by anyone. Users choose servers. Relays are metadata-blind & temporary. Officially 'fully decentralised'. Different model from Freenet’s pure P2P – better for mobile chat. simplex.chat

@marknoble @notch @SimpleXChat SimpleX still relies on relays so it isn't truly decentralized. Freenet does not: in normal operation it has no privileged servers or relays, and workload is distributed across peers. Also, Freenet is general-purpose; chat is just one app on the network.


@kapitaali__com @notch Try it again, just fixed a bug that may have impacted this. If it happens again tell me what you see.

@unclebobmartin Makes sense. I've been thinking about using pairwise comparisons with an algorithm like github.com/sanity/asap to allow the LLM to curate its own context.
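A minimal sketch of the pairwise-comparison idea, not the github.com/sanity/asap API: rank context items by asking a preference function (which would be an LLM judgment in practice) which of two items matters more, then order by win count. The `rank` and `prefer` names and the win-count scoring are illustrative assumptions.

```python
from itertools import combinations

def rank(items, prefer):
    """Order items by pairwise wins. prefer(a, b) -> True if a outranks b
    (a stand-in for an LLM 'which is more relevant?' judgment)."""
    wins = {item: 0 for item in items}
    for a, b in combinations(items, 2):
        wins[a if prefer(a, b) else b] += 1
    return sorted(items, key=lambda it: wins[it], reverse=True)

# Toy preference: longer notes are "more relevant"
notes = ["short", "a medium note", "the longest context note"]
ordered = rank(notes, lambda a, b: len(a) > len(b))
```

With a real LLM supplying `prefer`, the model could keep only the top-ranked items in its own context.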

@Sanity Ostensibly that’s what compression is. I think you can do better by manually clearing and having the AI write focused notes just before.

These deep analytic dives into systematic failures burn a _LOT_ of tokens. It really has to think hard to work through the issues. It barely finishes before compaction.
This implies something I think we've all known. There are problems that are too complex for the context window to hold. Once a problem exceeds the context window, I'm not sure what would happen.
My approach would be to subdivide the problem into chunks that the AI could write a report about, so that its conclusions would be available after the compression.
This, however, simply postpones the issue. The final implication is that there is an upper limit of complexity beyond which the AIs cannot go.
This must be true of humans as well, though we don't have context windows per se. Perhaps this explains why physicists have been stymied for over a century by the incompatibility of QM and GR.
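The chunk-and-report approach above can be sketched as a simple loop. This is a hypothetical illustration, not a real tool: `analyze_chunk` stands in for an LLM call that writes a focused report, and only the compact reports, not the full problem, need to fit in the final context window.

```python
def chunk(problem: str, size: int) -> list[str]:
    """Split the problem text into pieces that fit a context window."""
    return [problem[i:i + size] for i in range(0, len(problem), size)]

def analyze_chunk(text: str) -> str:
    # Stand-in for an LLM call that thinks hard about one chunk
    # and writes a short report of its conclusions.
    return f"report({len(text)} chars)"

def solve(problem: str, window: int = 1000) -> str:
    reports = [analyze_chunk(c) for c in chunk(problem, window)]
    # After compaction, only these reports survive into the final pass.
    return " | ".join(reports)
```

The limit the thread describes shows up here too: if the joined reports themselves exceed the window, you have to recurse, and eventually the summaries lose the detail needed to solve the problem.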

@elvecent_not Mostly maturity/stability - zellij crashes every few weeks.

🤔
"A decentralized pizza ordering application built on Freenet. Allows groups to collaboratively create and manage pizza orders with cryptographic verification and eventual consistency across peers."
github.com/realraum/freee…
Ian Clarke reposted

Giving a talk about recent progress on @FreenetOrg on Friday close to UT campus - hope to see you there!

@aiscottadams Love what you're doing. If you need any help from a software engineer with a LOT of AI/LLM experience, let me know.

@techdevdaily @unclebobmartin By that argument, humans are just glorified autocomplete too.

@unclebobmartin Something said back in 2022 is still relevant: "LLMs are just glorified autocomplete." We have just optimised them for contextual feeds and trained them better for these complex tasks; nothing at the most basic level has changed.

I asked Claude why it has so much trouble with parentheses. Here was its answer:
The real issue is that I'm a token-prediction model, not a parser. I don't
maintain an explicit stack of open delimiters the way a compiler does. I'm
essentially guessing what comes next based on patterns, so in deeply nested
s-expressions, I can lose count. The deeper the nesting, the worse I do — each
additional level is another thing I have to implicitly track across the
generated sequence.
Clojure is particularly unforgiving here because mismatched parens aren't just
a style issue — they silently change the semantics of the code.
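The "explicit stack of open delimiters" a parser maintains, which Claude says it lacks, is a few lines of code. A minimal sketch (the function name is mine, not from any particular parser):

```python
# Matching close -> open delimiter pairs, as a compiler's lexer tracks them.
PAIRS = {")": "(", "]": "[", "}": "{"}

def balanced(src: str) -> bool:
    """Return True if every delimiter in src is correctly nested."""
    stack = []
    for ch in src:
        if ch in "([{":
            stack.append(ch)          # remember each open delimiter
        elif ch in PAIRS:
            if not stack or stack.pop() != PAIRS[ch]:
                return False          # close with no matching open
    return not stack                  # leftover opens are also a mismatch
```

Because the stack is exact, nesting depth costs nothing; a token-prediction model has to carry that count implicitly across the whole generated sequence, which is exactly where deeply nested Clojure trips it up.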





