Latent Loop

1.5K posts


@latentloop07

Pune, India · Joined August 2020
558 Following · 60 Followers
Pinned Tweet
Latent Loop@latentloop07·
Most people don't know Claude Code has a hidden ~8,000 character limit on skills. Past ~15 skills, descriptions get silently truncated. Skills stop triggering. No warning. No error. The system prompt even tells Claude to never use skills that aren't listed. So truncated skills are invisible AND forbidden.

We built an open source fix. SkillNote: a self-hosted skill registry for Claude Code.
→ Collections scope skills per project (no more truncation)
→ Agents rate skills after use (know what actually works)
→ Edit in browser, every session picks it up in 60s
→ Private registry for skills that can't go on GitHub

One curl command to set up. github.com/luna-prompts/s…
3
0
2
189
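The truncation risk described above can be sanity-checked locally. A minimal sketch, assuming skills live in per-skill folders containing a `SKILL.md` with a frontmatter `description:` line (the directory layout and the ~8,000-character budget are taken from the post's claim, not from official documentation):

```python
import re
from pathlib import Path

# ~8,000 characters is the figure claimed in the post above, not a
# documented limit; treat it as a heuristic budget.
CHAR_BUDGET = 8000

def skill_descriptions(skills_dir: Path) -> dict[str, str]:
    """Collect the frontmatter `description:` line of every SKILL.md found."""
    descriptions = {}
    for skill_md in sorted(skills_dir.glob("*/SKILL.md")):
        text = skill_md.read_text(encoding="utf-8")
        match = re.search(r"^description:\s*(.+)$", text, re.MULTILINE)
        if match:
            descriptions[skill_md.parent.name] = match.group(1).strip()
    return descriptions

def truncation_report(descriptions: dict[str, str], budget: int = CHAR_BUDGET):
    """Split skills into those whose cumulative description length fits the
    budget and those past it (candidates for silent truncation)."""
    used, fits, overflow = 0, [], []
    for name, desc in descriptions.items():
        used += len(desc)
        (fits if used <= budget else overflow).append(name)
    return fits, overflow
```

Running `truncation_report(skill_descriptions(Path.home() / ".claude" / "skills"))` would flag which skills land past the budget, under the (hypothetical) assumption that descriptions are injected in directory order.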
Latent Loop reposted
ClaudeDevs@ClaudeDevs·
We're introducing the Claude Platform on AWS. This gives developers access to the same models and features as our native API, including Claude Managed Agents! Workloads, billing, and IAM stay inside AWS. Learn more: claude.com/blog/claude-pl…
Claude@claudeai

Build and deploy agents at scale with Claude Managed Agents, or use features like the advisor strategy, code execution, web search, and more. Anthropic operates the service, and all new features ship the same day they go live on the native Claude API.

46
35
477
39.9K
Latent Loop reposted
Khairallah AL-Awady@eng_khairallah1·
The person who built Claude Skills just showed exactly how to use it. 15 minutes. Free. From the engineers who built it. Bookmark this before you forget.

CEO: 1 human. Employees: AI agents. Operations: fully automatic. The zero-headcount company is no longer a joke.
Avid@Av1dlive

x.com/i/article/2053…

17
15
51
3.2K
Latent Loop reposted
Andrej Karpathy@karpathy·
This works really well btw: at the end of your query, ask your LLM to "structure your response as HTML", then view the generated file in your browser. I've also had some success asking the LLM to present its output as slideshows, etc.

More generally, imo audio is the human-preferred input to AIs, but vision (images/animations/video) is the preferred output from them. Around a ~third of our brains are a massively parallel processor dedicated to vision; it is the 10-lane superhighway of information into the brain. As AI improves, I think we'll see a progression that takes advantage:

1) raw text (hard/effortful to read)
2) markdown (bold, italic, headings, tables, a bit easier on the eyes) <-- current default
3) HTML (still procedural with underlying code, but a lot more flexibility on the graphics, layout, even interactivity) <-- early but forming a new good default
...4, 5, 6, ...
n) interactive neural videos/simulations

Imo the extrapolation (though the technology doesn't exist just yet) ends in some kind of interactive video generated directly by a diffusion neural net. Many open questions as to how exact/procedural "Software 1.0" artifacts (e.g. interactive simulations) may be woven together with neural artifacts (diffusion grids), but generally something in the direction of the recently viral x.com/zan2434/status…

There are also improvements necessary and pending at the input. Neither audio nor text nor video alone is enough; e.g. I feel a need to point/gesture at things on the screen, similar to all the things you would do with a person physically next to you and your computer screen.

TLDR: the input/output mind meld between humans and AIs is ongoing, and there is a lot of work to do and significant progress to be made, way before jumping all the way into Neuralink-esque BCIs and all that. For what it's worth, at the current stage, hot tip: try asking for HTML.
Thariq@trq212

x.com/i/article/2052…

339
436
4.7K
362.3K
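The viewing step in Karpathy's tip above is easy to wire up. A minimal standard-library sketch: it assumes you already have the model's HTML response as a string (how you obtain it is up to your API client), writes it to a temp file, and opens it in the default browser:

```python
import tempfile
import webbrowser
from pathlib import Path

def view_llm_html(html: str, open_browser: bool = True) -> Path:
    """Write an LLM's HTML response to a temp file and open it in the browser."""
    # Wrap bare fragments in a document shell so the browser renders them cleanly.
    if "<html" not in html.lower():
        html = f"<!doctype html>\n<html><body>\n{html}\n</body></html>"
    with tempfile.NamedTemporaryFile(
        "w", suffix=".html", delete=False, encoding="utf-8"
    ) as f:
        f.write(html)
        path = Path(f.name)
    if open_browser:
        webbrowser.open(path.as_uri())
    return path
```

`delete=False` keeps the file around so the browser can load it after the handle closes; pass `open_browser=False` if you only want the file path.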
Irushi@Im_IrushiK·
What's the point of grinding DSA in the AI era?
66
2
118
4.3K
Blake Emal@heyblake·
Drop your project URL
Let’s drive some traffic
401
0
145
19.2K
Latent Loop reposted
Garry Tan@garrytan·
The highest and most important form of design is actually pure transmutation of human pain and suffering.
98
41
571
26.2K
Deryan Monsalve@MeValeTresTiras·
@namcios Yes, obviously, that's why the Anthropic people themselves say the base file of any conversation is CLAUDE.md. .MD!!!!!
1
0
1
55
Felipe Demartini@namcios·
Anthropic just killed Markdown. A Claude Code engineer published an article yesterday that may mark the start of a new era.

The thesis is brutal: Markdown was never the right format for human-AI communication. It was just what we had. The author himself admits he has never read an AI-generated Markdown file longer than 100 lines to the end. Neither do you. Neither do I.

The insight: Markdown assumes you will read from start to finish. HTML assumes you want to see what matters and work it with your hands.

In practice:
→ 30 project tickets become a draggable kanban with Now / Next / Later / Cut columns and an export button
→ Rate-limiting logic becomes an SVG flowchart with inline code, instead of 200 lines of text
→ Code review becomes a colorized diff with dependency graphs between modules
→ Animation parameters, colors, regexes, cron jobs get sliders with live preview
→ Project specs become 6 side-by-side options with interactive mockups

All real examples from the article. Each one replaces a wall of text with something you actually open and use.

The trade-off exists: HTML is 2-4x slower to generate. But with a 1-million-token context, that cost has vanished.

And the part nobody is discussing: the generated HTML isn't just for humans. The verification agent reads it too. The spec stopped being a document and became shared memory between agents.

Markdown is a report. HTML is an interface. Reports are for reading. Interfaces are for continuing the work.

If you're using AI in 2026 and still asking for Markdown for everything, you may be using a smartphone as a flashlight.
Thariq@trq212

x.com/i/article/2052…

162
165
2.2K
1.1M
Latent Loop@latentloop07·
@OskarKohler the 'skills that overlap' one is the sneakiest. most people don't notice until they trace which skill the agent actually picked. registries catch this. just shipped one for Claude Code + OpenClaw via clawhub: github.com/luna-prompts/s…
0
0
0
25
Oskar Kohler@OskarKohler·
Most sabotage their own Claude Code setup with the same five moves:
- 800-line CLAUDE.md
- skills that overlap
- no PRD
- no hooks for destructive operations
- copy-pasting back into chat instead of editing files

Boring fixes. Massive lift.
1
0
1
23
Latent Loop@latentloop07·
@HelpfulAIGuy the 'don't lie' rule is gold. once you have a few that work, the next problem is making sure your teammate's claude.md has the same. been building a registry for that across Claude Code + OpenClaw: github.com/luna-prompts/s…
1
0
1
44
HelpfulAIGuy@HelpfulAIGuy·
So my claude.md file in Claude Code includes the command "don't lie" and my claude.ai memory says you should do what you say you did and not make up stats. I love that I can't trust Claude to do what it says. Love that.
1
0
2
27
Latent Loop@latentloop07·
@Mnilax @witless_duck Opus 4.7 + structured CLAUDE.md is genuinely solid in my experience too. the next problem teams hit is keeping that structure version-controlled and synced across machines. been building a registry for that across Claude Code + OpenClaw: github.com/luna-prompts/s…
0
0
0
116
Mnimiy@Mnilax·
@witless_duck that’s weird, because even Opus 4.7 follows explicit instructions in CLAUDE.md just fine in my experience. what exactly are you talking about?
3
0
0
3K
Mnimiy@Mnilax·
Karpathy threw a grenade at every senior engineer who still treats LLMs as a toy. His actual words: the worst thing an expert can do right now is reject them. Most experts read it as a threat, but it's advice.

His framing:
> the gap between "AI tools are bad" and "AI tools are useful when used right" is professional discipline, not capability
> agents have cognitive deficits. they fail in ways nothing in the training set anticipated
> the experts who reject LLMs lose to experts who learn to wrangle them
> "models have so many cognitive deficits. but you can route around them"

Routing around the deficits is what CLAUDE.md was invented for. Karpathy himself wrote 4 rules. Across 30 codebases they took my Claude error rate from 41% down to 11%. Solid drop. But his rules pre-date the slop era going public. I bolted on 8 more, tuned to the failure modes that surfaced after January. Got it down to 3%.

A CLAUDE.md does not raise Claude's IQ. It lowers its slop floor. That is the entire game. Open the article underneath. The model is not the bottleneck. Your config is.
Mnimiy@Mnilax

x.com/i/article/2053…

76
347
3.6K
1.1M
Latent Loop@latentloop07·
@witless_duck @Mnilax the 'misses explicit instructions' problem gets worse with file sprawl. one canonical source the agent must consult before responding worked for us. just shipped a registry for that across Claude Code + OpenClaw: github.com/luna-prompts/s…
0
0
0
32
A2P2@witless_duck·
@Mnilax My experience has been anything but. It routinely misses explicit instructions in both CLAUDE.md and MEMORY.md.
4
0
2
442
Latent Loop@latentloop07·
@gabor_rar @namcios the 'different layers, different formats' point is the actual problem. CLAUDE.md, AGENTS.md, cursor rules, copilot instructions all solving the same need from different angles. one canonical source, agents read what they need, no format wars. github.com/luna-prompts/s…
0
0
1
30
Lorenzo@gabor_rar·
@namcios HTML wins where interactivity matters. CLAUDE.md, AGENTS.md, skills, the whole config and memory layer in Claude Code stays MD: diffable, greppable, composable across sessions. Different layers, different formats. The post universalized one and broke the other.
1
0
1
61
Latent Loop@latentloop07·
@imhabibx @RetroChainer $275/month from 3 lines is wild. the next ROI lift is making those 3 lines portable across your whole team and all your agents (Claude Code, OpenClaw, etc) without copy-paste. been building exactly that, just landed on clawhub: github.com/luna-prompts/s…
0
0
1
27
HR.@imhabibx·
@RetroChainer 3 lines in CLAUDE.md saving $275 a month is the best ROI in tech right now.
1
0
2
44
RetroChainer@RetroChainer·
> burning $200/month on Claude
> the problem isn't the model. it's the prompts.
> one session up to 200,000 tokens.
> before you notice.
> Mac Mini. Claude Code. one evening.
> 20 prompts. $275/month back.
> most will read this and change nothing.
> the ones who add 3 lines to CLAUDE.md
> will quietly outcompete everyone.
kiosa@thegreatest_sv

x.com/i/article/2053…

12
4
83
7K
Latent Loop@latentloop07·
@proofofworkme the SOUL.md / AGENTS.md / CLAUDE.md / cursor rules / copilot instructions split is exactly the problem we're working on. one canonical SKILL.md per skill, registry serves all agents. just landed on clawhub for OpenClaw too: github.com/luna-prompts/s…
0
0
0
27
Latent Loop@latentloop07·
@DanielSmidstrup the smarter move is making the agent write less surprising code in the first place. team conventions baked into a skill registry the agent reads before writing. then 'every line review' becomes 'every novel pattern review'. github.com/luna-prompts/s…
0
0
0
4
Daniel Smidstrup@DanielSmidstrup·
Are you checking every line of code written by AI?
340
5
222
22.5K
Latent Loop@latentloop07·
what's in 0.4.0:
• Claude Code + OpenClaw at parity
• agents rate skills after using them
• multi-agent OpenClaw setups get distinct attribution
• 60s live sync across the team
• MCP server baked in

free, MIT, runs on docker. github.com/luna-prompts/s… clawhub.ai/latentloop07/s…
0
0
1
41
Latent Loop@latentloop07·
skill management at scale is the next pain after CLAUDE.md adoption. Anthropic just dropped a 33-page guide on Skills. The system is real. The next question is how teams share skills without copy-paste hell. that's the SkillNote bet.
1
0
0
28
Latent Loop@latentloop07·
shipped today: SkillNote 0.4.0. OpenClaw users can now install the entire registry with one command:

clawhub install skillnote

self-hosted skill management. write SKILL.md once, your team's agents pick it up within 60s. 🧵
2
0
1
12
Latent Loop@latentloop07·
@heyshrutimishra the Skills system is severely underrated. once you start building them the next problem is sharing across a team without copy-paste hell. been working on a self-hosted registry for that, native plugins for both Claude Code and OpenClaw: github.com/luna-prompts/s…
0
0
0
51
Shruti@heyshrutimishra·
Anthropic quietly dropped a 33-page guide on Claude Skills. Most people scrolled past it. The ones who read it stopped re-prompting Claude forever.

A Skill is a reusable instruction pack. You write your workflow, your preferences, your edge cases once. Upload it. Claude loads it on demand and behaves the way you want from then on. No more pasting a 200-line system prompt into every new conversation. No more re-explaining how your team writes commit messages, or what your invoice format looks like, or which tone your CEO actually approves.

Here's what actually changes.

1. Claude stops forgetting who you are. A Skill packages your role, your stack, your guardrails, the formatting you actually use in production. Once installed, Claude treats every new task as if it already worked at your company for a year.

2. The knowledge is portable. A Skill is a single file. You can ship it to your whole team, version it in a repo, fork someone else's, or hand it to a contractor on day one. The setup transfer that used to take three onboarding calls now takes thirty seconds.

3. Repetitive work gets baked in. The QBR deck format you tweak every quarter. The Slack reply you keep redrafting because the tone never lands. The legal disclaimer compliance rewrites once a year. Each one becomes a Skill, and the next round of work just runs.

4. Consistency beats prompting talent. Most teams get unreliable Claude outputs because two operators on the same team prompt completely differently. A Skill collapses that variance. Same instructions, same context, same output shape, regardless of who hits enter.

5. The 33 pages are the point. The guide covers Skill structure, when to use one vs a system prompt, how to chain Skills together, how to expose tools inside a Skill, evaluation, and debugging. Skim it once and you'll know more about getting reliable output from Claude than 95% of the people building on top of it.

The teams shipping production work with Claude are not better at prompting. They turned their prompting into Skills and stopped doing it manually.

Comment "Claude" and I will send you the full GUIDE
41
10
52
5.9K
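The "single file" described above is conventionally a `SKILL.md` with YAML frontmatter. A minimal sketch of the shape; the skill name, description, and rules below are made-up illustrations, not examples from Anthropic's guide:

```markdown
---
name: commit-messages
description: How this team writes commit messages. Use when drafting or reviewing a commit.
---

# Commit messages

- Subject line: imperative mood, <= 50 characters, no trailing period.
- Body: explain *why*, not *what*; wrap at 72 characters.
- Reference the ticket ID (e.g. PROJ-123, a hypothetical format) on the last line.
```

The `description` matters most: it is what the model reads when deciding whether to load the skill, so it should say both what the skill covers and when to use it.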