Myles
@MylesLX

70 posts

Building fintech products with AI @ PEAK6 • @GauntletAI C3 • Ex-@Marlins Analytics Director

Joined October 2025
267 Following · 91 Followers

Myles@MylesLX·
@kimmonismus Opus 4.5 is cheaper and faster, and available in Cursor. It’s also just as competent as 4.7 for 80% of tasks
0 replies · 0 reposts · 0 likes · 944 views

Myles@MylesLX·
GitHub links in your Cursor agent window now hyperlink to the new Cursor diff and PR flow. They've just disintermediated the GitHub UI completely.
0 replies · 0 reposts · 0 likes · 27 views

Myles@MylesLX·
Opus 4.5 is still the best coding model, and it's not close. For 80-90% of tasks it thinks for the right amount of time to keep you in the loop. It's reasonably cost-effective per token given the complexity of your prompt, intelligent enough to handle hard tasks in large repos, and, more importantly, delivers that intelligence fast enough that you get things done more quickly than with the latest frontier models. It's also really effective in the Cursor harness.
0 replies · 0 reposts · 0 likes · 39 views

Myles@MylesLX·
Cursor, Claude, and Codex are all converging on the same agents UI. The terminal is going away. Desktop apps are the future.
0 replies · 0 reposts · 1 like · 166 views

Myles@MylesLX·
After you build it, agents should be able to use your application like a real person: evaluate all user flows and test cases, spin up computer-use or browser subagents to generate detailed reports, screenshots, and videos of what didn't work, then fire off coding agents to generate PRs with fixes. Reviewer agents sweep the PRs to check for regressions and summarize the feature changes and fixes for the human. Development can be largely automated with the right infrastructure, but design, planning, and enumerating test cases and user flows have never been more important. (A rough sketch of this loop follows below.)
Dan Shipper 📧@danshipper

clear that this is how we'll be doing most of our work for the next 10 years: agent running continuously on the left, application that you + the agent use on the right

0 replies · 0 reposts · 0 likes · 71 views

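A minimal sketch of that loop in Python. Every runner here (run_browser_subagent, open_fix_pr, review_pr) is a hypothetical placeholder standing in for real computer-use, coding-agent, and reviewer tooling, not any vendor's API; only the orchestration shape comes from the post above.

from dataclasses import dataclass, field

@dataclass
class FlowReport:
    flow: str
    passed: bool
    notes: str = ""
    screenshots: list[str] = field(default_factory=list)

def run_browser_subagent(flow: str) -> FlowReport:
    # Placeholder: a computer-use / browser subagent drives the app through
    # one user flow and records what broke (notes, screenshots, video).
    return FlowReport(flow=flow, passed=False, notes=f"'{flow}' returned a 500 on submit")

def open_fix_pr(report: FlowReport) -> str:
    # Placeholder: hand the failure report to a coding agent that opens a PR.
    return f"https://example.com/prs/fix-{report.flow.replace(' ', '-')}"

def review_pr(pr_url: str) -> str:
    # Placeholder: a reviewer agent sweeps the PR for regressions and
    # summarizes the change for a human.
    return f"{pr_url}: no regressions found; summary ready for human review"

def qa_cycle(user_flows: list[str]) -> list[str]:
    # Evaluate every enumerated flow, file fixes for failures, review the fixes.
    summaries = []
    for flow in user_flows:
        report = run_browser_subagent(flow)
        if report.passed:
            continue
        pr_url = open_fix_pr(report)
        summaries.append(review_pr(pr_url))
    return summaries

if __name__ == "__main__":
    for line in qa_cycle(["sign up", "link bank account", "send payment"]):
        print(line)

The stubs are the easy part to swap out; as the post argues, the leverage is in enumerating user_flows and test cases well.
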
Myles@MylesLX·
How did Cursor go from the worst-value to the best-value coding agent in under 6 months? Anthropic's enterprise plan devalues the utility of Claude Code.
0 replies · 0 reposts · 1 like · 43 views

Myles@MylesLX·
Is everyone really going to KYC themselves for their agent? That's a lot of responsibility for agent payments when they already have a valid credit card.
0 replies · 0 reposts · 1 like · 31 views

Myles@MylesLX·
There's still so much friction and human intervention required to set up payments agents, even with supposedly seamless setups like Stripe Link or Ramp Agent Cards.
0 replies · 0 reposts · 3 likes · 95 views

Myles@MylesLX·
Businesses will tolerate voice agents, but customers won't. Financial incentives matter if humans are going to put up with agent interfaces.
0 replies · 0 reposts · 1 like · 24 views

Myles@MylesLX·
Claude Traces should be its own product: Claude embedded in your traces with perfect context of your agent sessions.
0 replies · 0 reposts · 1 like · 41 views

Myles@MylesLX·
While most AI companies are diverting compensation and spend away from humans, Floop is using AI to create more jobs for people
1 reply · 0 reposts · 1 like · 36 views

Myles@MylesLX·
@karpathy Every company needs versioned knowledge bases of its documentation for coding agents to use as context.
0 replies · 0 reposts · 0 likes · 16 views

Andrej Karpathy@karpathy·
LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki, I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web ui), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually, it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
2.9K replies · 7.1K reposts · 58.5K likes · 21M views

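A minimal sketch of the raw/ -> wiki/ compile step described above, assuming an Obsidian-style wiki of .md files. complete() is a stand-in for whatever LLM call you actually use, and the prompt, paths, and index format are illustrative rather than prescribed.

from pathlib import Path

RAW = Path("raw")
WIKI = Path("wiki")

def complete(prompt: str) -> str:
    # Stand-in for a real LLM call; swap in your provider's SDK here.
    # This stub just echoes a truncated prompt so the sketch runs end to end.
    return prompt[:500]

def compile_wiki() -> None:
    # Summarize each raw source into a wiki article and rebuild the index,
    # so the agent (and you, in Obsidian) can navigate the knowledge base.
    WIKI.mkdir(exist_ok=True)
    index_lines = []
    for src in sorted(RAW.glob("*.md")):
        prompt = (
            "Compile this source document into a concise wiki article with "
            "backlinks to related concepts:\n\n" + src.read_text()
        )
        (WIKI / src.name).write_text(complete(prompt))
        index_lines.append(f"- [[{src.stem}]]")
    (WIKI / "index.md").write_text("\n".join(index_lines) + "\n")

if __name__ == "__main__":
    compile_wiki()

The Q&A and linting passes can then operate on wiki/ plus index.md directly, without retrieval machinery, which matches the "no fancy RAG needed at this scale" point above.
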
Myles@MylesLX·
@mc_anthropic Subagent traces look like the browser dev tools network tab, but instead of loading clients you're loading agent traces. Well done!
1 reply · 0 reposts · 1 like · 44 views

Michael Cohen@_hi_mc·
our new Memory suite of APIs works out of the box in Claude Managed Agents. they're also portable by design and can be used in any other AI product. the building blocks are coming together. mix and match what you need. go to production in minutes. scale your usage along with your business. then go touch grass while Claude does the rest.
Claude@claudeai

Memory on Claude Managed Agents is now in public beta. Your agents can now learn from every session, using an intelligence-optimized memory layer that balances performance with flexibility.

1 reply · 2 reposts · 35 likes · 6.3K views