Roberto Talamas
@RobertoTalamas

2K posts

co-founder @VanarthAI. teaching robots to pick stocks so I don't have to

Joined May 2014
1.2K Following · 6.8K Followers
Roberto Talamas@RobertoTalamas·
"how do i enable my agent to do X" has been way more productive than "how do i get ai to help me with X" one makes you a delegator. the other keeps you the operator. as the operator you are still the bottleneck. ai is just sitting next to you while you do the work. the enable framing changes what you build. you start wiring up tools, context, access. and that compounds. every capability you give the agent is one less thing that needs you in the loop
Roberto Talamas@RobertoTalamas·
data science is full of silent failures. code runs, output looks reasonable, no errors thrown but the result can be wrong for n reasons. so when i hit one of these i dont just accept the model output. i tell it to convince me. "i dont believe you. convince me in full that you implemented xyz correctly." because ive gone through the painful process of finding these bugs myself, i have a good sense of where things go wrong. so i prod on the suspicious spots. push until it either defends correctly or cracks. the result is great either way. if it holds up → can go to sleep. if its wrong → model is now fully aware of where the problem is, which makes debugging faster and keeps it from happening again. noticed claude leans into this more than codex. it will argue its case, walk through every edge case, genuinely try to convince you. codex is less motivated. turns out "convince me" is a free audit.
Roberto Talamas retweeted
Affine@affine_io·
One team can't build the best AI. So we stopped trying. We built an open arena instead. Here's what happened👇
Roberto Talamas retweeted
nic carter@nic_carter·
It should be pretty obvious at this point that AI is a "force multiplier" not a "labor substitute". It helps experts be better at things they are already good at. It doesn't let beginners match experts. If you can't write, anything you write with AI will be unmitigated slop. If you aren't a software engineer, anything you vibecode with AI will have security holes and won't be able to scale past a toy demo. If you blindly trust AI to deliver on a research task without knowing the subject matter, you won't be able to fact-check it. There's this weird misconception of AI as something that completely levels the playing field. I don't see it that way at all. There are mathematicians deriving novel lemmas with off-the-shelf models. Normal people can't do that. AI is a tool that makes experts better. It doesn't make everyone into an expert.
Roberto Talamas retweeted
billy@billyhumblebrag·
Haha those doofuses at ai2027 predicted we'd have professional level hacking abilities and the top ai company would be at $26B in revenue in May 2026. It's April and we already have superhuman hacking and $30B in revenue, why would you take forecasters this bad seriously???
Roberto Talamas@RobertoTalamas·
fundamental research has always worked the same way. analyst reads a filing, interprets what it means and puts a number in a model. the data is public and the estimate is in the model. but the step in between, the reasoning that turned one into the other, that lives in the analyst's head and goes nowhere. there is no record, structure or way to query it later.

thats the black box in fundamental research. not the data. not the estimate. the interpretation.

agent systems crack that open. you can capture the full reasoning chain. which sentence drove which inference and how strong the evidence was and what it implied for the estimate. all structured and traceable. full auditability from signal to source.

and because the reasoning flows through a single pipeline from source to estimate, vertically integrating it means when you find a flaw in how growth is being interpreted or how competitive dynamics are weighted you can fix it once and it revalues every company in the universe simultaneously. a 20 person analyst team cant do this. each analyst has their own mental model and correcting a systematic bias means retraining 20 people and hoping they all apply it consistently.

these two things are complementary. auditability lets you trace a signal all the way down to the source and find where the reasoning breaks. vertical integration lets you fix it once and propagate that fix across the entire universe at once. one finds the problem and the other fixes it everywhere.
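The reasoning-chain idea above can be sketched as a data structure. This is a minimal illustration, not any real system's schema: every name here (`ReasoningStep`, `source_sentence`, `evidence_strength`, the example filing sentence) is an assumption made up for this sketch.

```python
# Minimal sketch of a structured, auditable reasoning chain from
# source sentence -> inference -> estimate. All field names and the
# example content are hypothetical, for illustration only.
from dataclasses import dataclass, field

@dataclass
class ReasoningStep:
    source_sentence: str      # the exact filing text that drove the inference
    inference: str            # what the agent concluded from it
    evidence_strength: float  # 0..1, how strong the evidence was
    estimate_impact: str      # which model input it moved, and how

@dataclass
class ReasoningChain:
    ticker: str
    estimate: str                              # e.g. "FY25 revenue growth: 12%"
    steps: list[ReasoningStep] = field(default_factory=list)

    def trace(self) -> list[str]:
        """Walk from the estimate back to each source sentence."""
        return [f'{s.estimate_impact} <- {s.inference} '
                f'(strength {s.evidence_strength:.2f}) <- "{s.source_sentence}"'
                for s in self.steps]

chain = ReasoningChain(
    ticker="XYZ",
    estimate="FY25 revenue growth: 12%",
    steps=[ReasoningStep(
        source_sentence="Backlog grew 30% year over year.",
        inference="Demand is accelerating into FY25.",
        evidence_strength=0.8,
        estimate_impact="raised revenue growth assumption",
    )],
)
for line in chain.trace():
    print(line)
```

Because every step carries its source sentence, "find where the reasoning breaks" becomes a query over these records rather than a conversation with an analyst; fixing a flawed inference rule and re-running the pipeline is the vertical-integration half.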
Roberto Talamas@RobertoTalamas·
when i run multiple coding agents in parallel, the obvious friction i find is context switching. every time i jump from one session to the next it takes me ~10 seconds to reload what the agent is doing. doesnt sound like much but it compounds fast.

gets worse when the tasks are completely unrelated. going from a data pipeline to portfolio optimization to signal research back to back. each switch is a full mental reset.

the reason some switches are so much harder than others, i think, comes down to verification. when output is visual (a rendered widget, a chart, a UI change) the load drops. can verify in seconds. when its abstract (matrix math, big data transforms, complex computations) have to actually think. cant just look and know if its right.

the real bottleneck isnt context switching. its how many outputs i can actually verify without letting correctness slip.
Roberto Talamas@RobertoTalamas·
understanding the decay profile of your alphas is one of those things that touches everything in a portfolio but rarely gets talked about. say you build a signal and it backtests well. most people stop there. but if you dont know when the alpha peaks and how fast it decays you are flying blind on every decision that comes after.

if your signal peaks at t+5 thats telling you to rebalance every 5 days. not monthly. holding longer than the signals useful life just adds noise and dilutes alpha. your optimizer needs to know this to set the right turnover constraints.

short horizon signals like t+1 or t+3 look great on paper but get entirely consumed by spreads and market impact at size. if the signal only becomes significant at t+10 thats actually better. slower pace means lower turnover. you capture more alpha because you arent giving it back in transaction costs.

the horizon profile also helps you validate the mechanism. alpha from fundamental signals should in theory work over quarters not days. if it peaked at t+1 and went flat by t+5 that would be suspicious. probably capturing post earnings drift or short term momentum instead of the valuation view you intended.

if alpha peaks at t+10 and you hold to t+21 thats 11 extra days of idio risk for no expected reward. directly reduces your information ratio. knowing the horizon lets you size the position to exit at the right time.

and once you know the profiles you can stack signals intelligently. one peaks at t+3 another at t+21. they are complementary. combining them fills out the return profile across horizons. without knowing the decay you cant do this. you are just blending blindly.

people think alpha is all in the research. but operational alpha is a huge component that can make your lps a lot of money.
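The decay profile described above is typically measured by correlating today's signal with forward returns at a grid of horizons. A minimal sketch on synthetic data; the horizon grid, the toy return-generating process, and the 0.1 coefficient linking lagged signal to returns are all assumptions for illustration.

```python
# Sketch of a decay profile: mean cross-sectional Spearman rank IC of
# today's signal against h-day forward returns, for several horizons.
# Synthetic data only -- not a real signal or universe.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_days, n_stocks = 252, 100
signal = pd.DataFrame(rng.standard_normal((n_days, n_stocks)))
# toy returns weakly driven by the signal 5 days ago, so the
# profile should show up around t+5 rather than t+1
returns = 0.1 * signal.shift(5) + pd.DataFrame(
    rng.standard_normal((n_days, n_stocks)))

def decay_profile(signal, returns, horizons=(1, 3, 5, 10, 21)):
    """Mean cross-sectional rank IC at each forward horizon."""
    profile = {}
    for h in horizons:
        # h-day forward return: sum of returns over t+1 .. t+h
        fwd = returns.rolling(h).sum().shift(-h)
        # Spearman rank IC per day = Pearson correlation of row ranks
        daily_ic = signal.rank(axis=1).corrwith(fwd.rank(axis=1), axis=1)
        profile[f"t+{h}"] = daily_ic.mean()
    return pd.Series(profile)

print(decay_profile(signal, returns))
```

Plotting this series against the horizon gives the peak and decay rate the tweet talks about; the rebalance-frequency and signal-stacking decisions then read directly off that curve.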
Roberto Talamas retweeted
Lawrence Chen@lawrencecchen·
Introducing cmux Claude Code Agent Teams: `cmux claude-teams --dangerously-skip-permissions` Teammates/subagents spawn as native cmux pane splits. They stack vertically in a right column and auto-equalize as agents spawn and exit. `cmux claude-teams` automatically sets CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1 and shims tmux on PATH with cmux's tmux-compat layer, so you don't need to update your Claude config. All arguments forward to Claude Code, and it works over cmux SSH too. Out in the latest version of cmux (0.63.x).
Diran Li@diran_li·
Today I’m stepping into the CEO role at Messari. After conversations with Eric and the board, we agreed this is the right step for the company’s next chapter. This transition also includes a difficult decision: we’ve parted ways with many teammates who helped build Messari into what it is today. I’m incredibly grateful for their work and the impact they’ve had on the company. They’re an exceptionally talented group, and I’m eager to help connect them with teams that are hiring. Looking ahead, we’re doubling down on Messari as an AI-first company serving institutions through research and AI products. The industry and the world are changing quickly, but our mission remains the same: helping customers navigate crypto with confidence.
Roberto Talamas retweeted
Andrej Karpathy@karpathy·
@nummanali tmux grids are awesome, but i feel a need to have a proper "agent command center" IDE for teams of them, which I could maximize per monitor. E.g. I want to see/hide toggle them, see if any are idle, pop open related tools (e.g. terminal), stats (usage), etc.
Mark Gadala-Maria@markgadala·
A Bloomberg Terminal costs $24,000 a year. Someone just recreated one using Perplexity Computer for $200 a month. Bloomberg's moat was never the data, that's increasingly commoditized. It was the interface: thousands of keyboard shortcuts, proprietary screens, and muscle memory that finance professionals spent years learning. The switching cost wasn't price, it was retraining. AI agents collapse that moat. If Computer can replicate the interface and pull equivalent data from public sources, the only remaining lock-in is the chat network and real-time feeds. One is a social product. The other is a licensing negotiation. Bloomberg did $12.6 billion in revenue last year selling terminals. The first credible open-source alternative just got built in an afternoon.
hampton @hamptonism

Perplexity just became the first AI company to truly go head-to-head with the Bloomberg Terminal... Using Perplexity Computer (with no local setup or single LLM limitation), it was able to build me a terminal with real-time data to analyze $NVDA using Perplexity Finance:

Ryan Watkins@RyanWatkins_·
“Dude I have 10 agents running while I sleep. No one is prepared for AGI in 2 years man.” “So what are you building?” “Bro all my smartest friends are vibe coding until 3am every night. It’s all about agency. Intelligence is a commodity man.” “So what are you building?” “Do you even study exponentials? Have you seen the latest METR chart? You’re going to be stuck in the permanent underclass bro.” “So what are you building?” “Did you even setup OpenClaw? I’m maxing out my token budget everyday man.” “So what are you building?” “I promise you I’m 10x more productive bro! You just don’t understand! Please bro just…. I know you use this stuff everyday too, but you must not be prompting it right! Please broo…”
Roberto Talamas retweeted
Simon Willison@simonw·
Short musings on "cognitive debt" - I'm seeing this in my own work, where excessive unreviewed AI-generated code leads me to lose a firm mental model of what I've built, which then makes it harder to confidently make future decisions simonwillison.net/2026/Feb/15/co…