Brett Caughran

3.9K posts


@FundamentEdge

Completed my hedge fund tour of duty (Maverick, D.E. Shaw, Citadel, Schonfeld). Adjunct at ASU. Now building an exceptional analyst training firm. DMs open!

Scottsdale, AZ · Joined September 2018
4.2K Following · 51.8K Followers
Pinned Tweet
Brett Caughran@FundamentEdge·
LAUNCHING ANALYST ACADEMY, APRIL 6TH COHORT. Hello! We are very excited to be launching our next cohort of Analyst Academy. What is Analyst Academy? Six weeks and ~60 hours of everything I wanted to teach my junior analysts (but never had the time). The basic tools needed to survive & thrive on the buy-side:
- How to build your investment process
- How to conduct a deep-dive research process
- How to figure out what will move the stock
- How to generate investment ideas
- How to prepare for a management meeting
- How to manage the deluge of information
- How to develop & communicate a thesis
- How to be a good buy-side analyst
- And much more
In the Analyst Academy, we teach the "hedge fund equity research process" and present what we believe are best practices for buy-side equity research. We have hosted over 1,500 students from all walks of the buy-side: analysts from large multi-managers, Tiger-style funds, long-only firms, and family offices. Increasingly, Fundamental Edge is building internal training programs in partnership with Top 100 Asset Managers. Roughly 2/3 of students are already on the buy-side, with the remainder working to break in. This is a rigorous, intensive curriculum, not an investing-for-beginners program. If you would like to learn more, I am embedding a ~30-minute information session on the Analyst Academy here. To enroll in the next Analyst Academy cohort, see our website at www(dot)fundamentedge(dot)com; the banner at the top will bring you right to Academy information. If this resonates with your current step in your buy-side journey, I would be absolutely thrilled to have you in our next cohort of Analyst Academy. Thanks!
Brett Caughran@FundamentEdge·
I completely agree with Alix. Learning the analog way and getting reps is a critical foundation before you augment with AI. Just like I can't debug code because I have no foundations as a coder, those who don't learn the analog way won't be able to debug a thesis or an earnings preview. In a game of inches like investing, that matters a lot. OGs can look at a company model and find the error in 60 seconds. This is a skill forged through many years and many reps. Bypassing the building of this muscle is a risky decision. In theory, younger investors should be leading the charge with AI augmentation for investing. In practice, I see so many young people building bridges to nowhere with complex coding tools that are technically intriguing but useless in practice. The most impressive investors I've seen in terms of AI augmentation are experienced investors who know exactly what problem to solve and why. And as a PM, managing a junior analyst churning out AI slop at 10x speed with false conviction sounds like a nightmare. I'd much rather have that junior analyst go slow, build rigor, and very selectively deploy AI in the early years. Crawl, walk, run.
Alix Pasquet@alixpasquet

Analog training about to become the edge "I would have people on Wall Street learn the old fashioned way [without LLMs] for the first six or twelve months…I sound like an old man, but let’s walk into the room rather than run…I’m a believer in the tools but I also think it’s stunting the growth of this generation…it could lead to degradation in your 30s that you won’t be able to come back from. If you don’t know how to do anything, then you don’t know how to do anything, and competing on your raw smarts isn’t enough because everyone is smart."

Brett Caughran@FundamentEdge·
@ShanuMathew93 Upload a model into Chat with 5.4 Pro and give it a detailed prompt. I did that and was very impressed
Brett Caughran@FundamentEdge·
Will be very interested to see a Claude CoWork equivalent here. One thing I've noticed: when carefully prompted, GPT 5.4 is an unbelievable model (higher ceiling than Opus 4.6 for investment research tasks). Some of the 5.4 Pro outputs I've generated are very impressive (like "how did it do that" impressive). It has been very fluent in Excel via Chat. For whatever reason, though, it seems more "prompt sensitive" than Claude, and one-shot Auto 5.4 also regularly disappoints (and give me a break with the emojis, please). Claude is more steady and consistent irrespective of context/prompt. So it makes me wonder if 5.4 Pro in a CoWork-style harness with context/skills and Excel fluency will be yet another key step in these tools becoming institutional grade and surpassing the bar to impress institutional asset managers.
The Wall Street Journal@WSJ

Exclusive: OpenAI’s top executives are finalizing plans for a major strategy shift to refocus the company around coding and business users on.wsj.com/3N6CFyr

Brett Caughran@FundamentEdge·
The most important advice I could give to individual investors adopting AI: don't optimize for speed, optimize for rigor. We play a game where average is punished. There is plenty of empirical evidence of power-law dynamics in public equity investing: 70%+ of long-only funds underperforming the index, no observable alpha in sell-side ratings at T+1, and high full-cycle churn of multi-manager PMs (by design). <5% of investors/firms earn substantially all the alpha. Beating the market isn't a check-the-box exercise; it's a game of multi-dimensional genius built on the back of incredible day-in, day-out discipline. It requires not just good process, but real talent. An average thesis is a losing thesis. An average PM is a losing PM. And AI tools can get you to average quality very quickly. That's almost completely useless, functionally. So, optimize for rigor, because rigor is what it takes to outperform. A fast thesis is just AI slop (and will probabilistically lose money).

A few pieces of advice we give when we train analysts, and how AI shifts that advice.

"How you spend your time is your most important decision." One of AI's most incredible characteristics in investing is the ability to cut bad ideas quickly. Accelerated hypothesis formation, front-end risk checklists, pattern recognition engines, "what's in the stock" first cuts, automated 7-year models, comparative efficiency / margin uplift scoping, management capability reports, "what I have to believe" reports, systematized guidance hockey-stick screening...I could go on and on. You can automate the "sniff test" part of hypothesis formation, and where I wasted 3 days in the past only to disqualify an idea, I can now disqualify that idea in 30 minutes. Today, this is about where I see 85% of AI's usefulness in investing (though as quantitative accuracy and agentic usability increase, use cases are expanding rapidly). Use AI to be more rigorous about how you spend your time.
"You will walk in with 10 things on your to-do list, complete 4, and walk out with 12 things on your to-do list...that your PM wants yesterday." In 13 years as a professional investor, I never once felt like my to-do list was complete. Always another idea to evaluate, another comparative margin analysis, another customer call, another hospital conference to fly to, another deep cohort analysis of some industry data, etc. Do I go on that bus tour, or do I stay at the office and go deep on this idea in my pipeline? We are Pareto-optimizing every day, it's always a tradeoff, and so much goes undone. The idea that public equity investors can be rigorous on every idea under coverage is obviously a myth...if you ever think you're "done", you don't know the nature of the game you are playing. I covered ideas as closely as anyone at a firm with 3 ideas per investment professional, and STILL never felt like I knew enough about them, let alone when I ran an 80-stock portfolio. In embracing AI tools in my job now, I have this very unusual feeling of getting my to-do list done and searching for more to do! It's a strange and invigorating feeling. Use AI to deepen rigor, augmenting the work you *should* do but never had the time for.

"Focus on the key drivers; stock picking is a game of Pareto optimization where 2-3 key variables are deterministic to stock outcomes." I could spend 365 days a year analyzing just a couple of companies. It would be very boring, but I could do it. And that 347th data point or channel check wouldn't move the needle *most of the time*, but sometimes it would. In the Tiger cub world, we would often bemoan the PM's push to go deeper, deeper, deeper ("man, isn't this enough work?"), but sometimes it would matter (and when position sizes are multiple hundreds of millions, it matters).
You could channel-check weekly for months with no change, but one week those checks catch a key inflection before the market sniffs it out. Six industry conferences may be useless, but that 7th may uncover key changes and inflections in the industry that become an investible signal. The high-velocity multi-manager world, relative to the Tiger cub world, is very much a Pareto optimization game. In multi-manager world, I covered 300 stocks, so it had to be...I was drowning just to complete the desktop research functions of that coverage, let alone any primary research. Find the 2-3 variables that will move the stock, analyze those, ignore the rest.

Well, many of the "Tiger" research playbooks are now accessible to a higher-velocity firm. Not at 100% quality, but 85% is immensely better than nothing in deepening insight & rigor. And, for that matter, a long-only can now build an 85%-as-good earnings preview that may, once or twice a year, lead to a key entry/exit decision around quarters (without spending 30% of their working hours on earnings prep). Catching earnings inflections isn't as critical as it is at a multi-manager, but that "process blending" is now more possible. Maybe with AI augmentation, we can spend human time on the 2-3 key variables that matter, but spend machine time on the other 16 important variables, resulting in deeper and broader rigor. Because sometimes that 15th variable matters.

"The only certainty is your initial thesis will be wrong...how you babysit your thesis will determine failure or success." Securities are priced by discounting future cash flows, and no one has a perfect crystal ball. Building an investment thesis is a hubristic exercise, effectively saying "I and I alone can see the future here."
In reality, investing is a deeply Bayesian exercise where priors are constantly updated by new data, and the best investors are stoically non-emotional about cutting when the fact pattern changes (emotional attachment to a broken thesis is the deepest sign of an amateur). Thesis tracking is a game of 24/7, always-on, constant paranoia that the next 8-K is going to be some disgusting tape bomb. A wider net, a less emotional lens (tracking is the most deeply emotional part of investing, particularly when an idea goes against you, i.e. a great use case for a dispassionate machine), and systematic tracking of a broad set of data (structured & unstructured) is an absolute game changer in answering the simple question "is my thesis right or wrong?" This is where recent native data capabilities and emerging Excel capabilities are so interesting (data trackers were the first big use case that moved AI out of chatbots for me in investing): track key drivers across multiple dimensions, understand how those key drivers translate back into revenues/profits/cash flow and business inflections, and send signals back to me in an easy-to-use dashboard. Use AI to deeply enhance tracking rigor.

I could go on and on. So, I agree with Bucket and sort of shake my head when I see people say "build an investment thesis in 1/10th the time". I think to myself, "why in the hell would you want to do that?" Firms have finite capital, and if you run a 30-stock long-only book with a 3-year duration, you don't want 10x the ideas or 10x the turnover. What you do want is a more rigorous understanding of those 30 ideas, and a more rigorous evaluation of your idea set and idea pipeline, ensuring you are allocating capital to the best available ideas at any time. You want a deeper set of priors grounded in past study of markets, industries, companies and managements. Rigor, not speed. Deeper, not faster (though by doing many things faster, you build the time capacity to go deeper).
This is why simple things like building AI capability to update models for Qs are helpful...in the triage moment of earnings, less time keying in a PR and more time evaluating what changed and why. There's also an irreducible value to the human element in investing that people don't talk about enough. Decades of failed quantamental initiatives and the elusiveness of quantifying human alpha have etched that firmly in my belief system. If you think we can take the human out of the loop in fundamental investing, I am game to deeply and passionately argue the other side. I plan to share more on this.

The TLDR of all of this: even in Q4 '25 we considered ourselves in "winter training" for AI. It wasn't yet spring, so take some time to learn the tools in anticipation/hope that they become more institutionally useful. Well, in just a few months, we've come out of winter and are entering spring. The rapidity of improvement is remarkable. We are going to lean into that, as it's such a fascinating time to be alive and such a fascinating time to be an investor. Stay tuned for more.
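The "track key drivers, send signals back in a dashboard" loop described above can be sketched in a few lines. A minimal illustration, assuming hypothetical driver names, thesis values, and tolerance bands (none of these come from the tweet):

```python
# Minimal sketch of a key-driver thesis tracker (illustrative only;
# all drivers, expected values, and tolerances are hypothetical).
from dataclasses import dataclass

@dataclass
class Driver:
    name: str         # e.g. "same-store sales growth"
    expected: float   # the thesis assumption (the prior)
    tolerance: float  # deviation that should trigger a human review

def check_drivers(drivers, latest):
    """Compare the latest readings against thesis assumptions.

    Returns (name, actual, deviation) for any driver outside its
    tolerance band -- the "is my thesis right or wrong?" signal
    routed back to the analyst.
    """
    alerts = []
    for d in drivers:
        actual = latest.get(d.name)
        if actual is None:
            continue  # no new datapoint this period
        deviation = actual - d.expected
        if abs(deviation) > d.tolerance:
            alerts.append((d.name, actual, deviation))
    return alerts

thesis = [
    Driver("sss_growth_pct", expected=4.0, tolerance=1.5),
    Driver("gross_margin_pct", expected=42.0, tolerance=0.8),
    Driver("unit_volume_growth_pct", expected=6.0, tolerance=2.0),
]
latest = {"sss_growth_pct": 1.8, "gross_margin_pct": 42.3}

for name, actual, dev in check_drivers(thesis, latest):
    print(f"ALERT {name}: {actual} vs thesis ({dev:+.1f})")
```

The point of the sketch is the division of labor: the machine watches every tracked dimension dispassionately, and only out-of-band readings consume human attention.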
Bucket Shop Capital@bucketshopcap

TL is now flooded with AI garbage output. Claude-generated summaries of 10-Ks, earnings calls, etc. If you ever thought the time commitment of reading basic source docs was what was holding you back, you might be brain dead. Maybe at least now buyside headcount can go -80%.

Brett Caughran@FundamentEdge·
Having spent a lot of the last two weeks experimenting with different user interfaces for Claude Code (including via terminal, Cursor and VS Code), one of my primary observations is how frustrating it can be to get up to a baseline level of operability with these tools. Just the basics have taken me many hours, many YouTube videos, and numerous times cursing into an LLM-as-tutor, "I said explain it to me like a 12-year-old." Maybe I'm dense; I'm certainly not very technically proficient, and my brain goes haywire staring at any coding language, to be quite honest. But perhaps I'm a litmus test of the average end user of these tools (a user base that needs someone to help them build their Bloomberg Launchpad, after all), and in that sense I doubt rolling out VS Code across many users of an investment organization is a viable plan to drive consistent adoption of AI.

The good news is how fast the user experience is evolving. I contrast that maddening experience with the much more user-friendly experience of using Claude Co-Work, and even Claude Code direct in the desktop app, and particularly the incredible usefulness of agentic work platforms like Perplexity Computer, a delightful and powerful user experience (not sponsored). It makes me wonder if that user experience is indicative of what's to come: multi-LLM agentic workflow tools tied to your select MCPs/APIs, your internal data and notes, and trained via natural language to create a Skills architecture that works the way you work (much like investors set up their Bloomberg Launchpads right now, i.e. in a heterogeneous fashion unique to their specific workflow).

Do I lose something not being close to the code? I'm not sure I even know what that means (but all the arrogant YouTubers tell me it's important). But I know I'm shocked by the usefulness of the outputs I'm getting out of Claude Co-Work and Perplexity Computer, and that's what matters to me. For that reason, I wonder if learning VS Code for investors is akin to learning to construct prompts by hand in summer 2025, i.e. mostly wasted learning that will quickly be subsumed by the next iteration of technology. For the vast majority of users (i.e. investors in a front-office seat), it seems like we are very close to a level of intuitive operability with agentic work systems.
Justin Gray@justingray0·
Certainly the capabilities will improve such that you won't "need" to understand all the levels of abstraction. I'm biased, but I can't help but feel understanding and managing the entire context window and the layer beneath will give an edge. Else the agent will keep building and building and eventually get confused.
Brett Caughran@FundamentEdge·
Yeah, interesting point. Historically, investors have not used much software to support their investment process outside of Bloomberg. The back office uses lots of software, but in the front office the workflows are so heterogeneous that software hasn't found product-market fit. Even tools like RMS systems that seem like logical fits have had uneven adoption. So front-office software that accounts for the heterogeneity of the investment process and feels native to each specific investor's workflow would be something genuinely new.
Michael Yuan@myuan95·
In terms of OpenAI / Anthropic? More like Cowork where it abstracts away the complexity In terms of Finance-specific tools: I strongly believe vertical software will emerge. There’s just a million workflows Claude can’t own, because they’re trying to serve billions of users. E.g. niche Excel macros will beat Claude trying to freestyle its way through a customer cube
Michael Yuan@myuan95·
I strongly believe all the “skills, MCP, hooks” stuff will get abstracted away The problem is most AI companies are building for the Silicon Valley crowd rn. They historically haven’t been great at understanding users who aren’t engineers I’ve spent time in both fields. And it’d blow your mind how wide the gap can be
Brett Caughran@FundamentEdge·
Yeah, that's correct. The more I dig, the more I realize I wasn't even scratching the surface. I doubt even after a hundred hours I'd be a top decile technical user of these tools. So I'm going to go deep into my power alley of dissecting and articulating the highly nuanced workflows of the investor, and then partner with a technical superstar to build the working prototype of the AI Native Fund
Brooker@BrookerBelcourt·
Takes 100 hours to get to a level that you feel confident in the coding tools. My bet is you are working only in CLAUDE.md files and haven't got the right architecture of adding modular skills + commands + plugins. An earnings preview should call multiple skills: investment-philosophy, earnings-preview, dashboard, pm-review, email-sender. After that u can start pulling in sub-agents, then u will be convinced that there is no better tool.
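The modular setup Brooker describes might look something like this on disk; a hypothetical sketch following Claude Code's convention of one SKILL.md (with YAML frontmatter) per skill, where only the skill names come from the tweet and all file contents are invented for illustration:

```text
.claude/skills/
  investment-philosophy/SKILL.md   # how this desk frames a thesis
  earnings-preview/SKILL.md        # the preview workflow itself
  dashboard/SKILL.md               # output formatting for results
  pm-review/SKILL.md               # checklist applied before anything ships
  email-sender/SKILL.md            # delivery step

# Example SKILL.md for earnings-preview (hypothetical contents):
---
name: earnings-preview
description: Build a pre-earnings preview for a covered name. Use when
  the user asks for an earnings preview.
---
1. Load investment-philosophy for how this desk frames a thesis.
2. Pull consensus vs. the analyst's model; isolate the 2-3 key drivers.
3. Render via dashboard, run pm-review, then hand off to email-sender.
```

Each skill stays small and auditable, and the orchestration Brooker mentions (commands, plugins, sub-agents) layers on top of this base.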
Brett Caughran@FundamentEdge·
@tropicalvalue Fully agree. Requires the buy-in of those star alpha generators to meticulously document and overlay their process, which heretofore was not a possibility
Tropical Value@tropicalvalue·
Many funds do not have procedural, deterministic workflows; they only have a clear view of the investment mandate, not the minutiae of day-to-day routines (which I've found to be more common in pods). There's a reason for that: they don't want to straitjacket a star alpha generator. Or worse, sometimes they truly have not thought through a process, and are more on the "art" side of investing. This is the real bottleneck.
Brett Caughran@FundamentEdge·
This is the exact light-bulb moment I've had over the last two weeks. Helping firms become AI native is going to be much less about the technical complexity of the actual tooling. There's so much capex and engineering ingenuity pointed at the problem of making AI user interfaces intuitive to use. It is already happening.

What's much more of a gating factor in deploying AI in the investment process is guiding firms through the creation of their own AI exoskeleton. That's harder than it seems, because even within firms, investment process is highly heterogeneous. Every investor has a Bloomberg Launchpad that looks a bit different, and that will be true for agentic AI co-pilots. The way your Asian banks analyst consumes news, evaluates industry data, and builds models is different than your biotech analyst. Chatbots couldn't handle these differences, but agents can.

So successful adoption requires a cultural decision at the firm level, but also the careful crafting of the mental exoskeleton, investor by investor, wrapping your investment process in AI. I can't get this idea off my mind. I'm building my team to do this and would love to be in touch if this resonates with you (both those on a parallel process to share notes, but also firms where we could possibly be of assistance).
Ethan Mollick@emollick

I am not sure "Forward Deployed AI Engineers" are going to deliver on what a lot of companies are hoping for. They are useful, yes, but AI applications are far less of a technical issue, and much more about rethinking the deep expertise & structure of your organization around AI.

Brett Caughran@FundamentEdge·
Wait, I’m confused. Are MCPs dead or not??
Brett Caughran@FundamentEdge·
Fordham Stock Pitch Competition. Fundamental Edge is very proud to once again serve as a sponsor of the annual Roger F. Murray Stock Pitch Competition, hosted by Fundamental Edge instructor and Fordham Professor Paul Johnson (author of Pitch the Perfect Investment). The competition is open to all undergraduate students in the U.S., and I'm sure it will be a great event and lively debate. You can register and submit your ideas via the link I will include in the replies.
The Restler@ProfitofParadox·
@FundamentEdge Most PMs will barely engage with a well-structured pitch. Good luck with the Python script on a boomer PM. Excel is the shared language if/when they want to get serious.
Brett Caughran@FundamentEdge·
Been hearing this argument more lately. And from a technical capability standpoint, probably true. However, I think this misses the role of Excel in the investing process...Excel is a simple, trusted, mostly bug-free, deterministic tool for analyzing historical fundamentals and making forecasts about future fundamentals (where the alpha lives). You'd be surprised by how simplistic the models of many great investors look, and this reflects the reality that most investments hinge on 2-3 key variables.

The model is also a core communication tool. "I'd love to build my models in Python, but my CIO still wants to see the spreadsheet" is a common response. An Excel file can be e-mailed, saved locally on a laptop for mgmt meetings / HQ visits, and very easily validated (your model may be 800 rows, but generally only ~5-10% of the inputs need rigorous triple-checking, as they could make or break the model output...i.e. a Q3 D&A figure from 6 years ago isn't going to make or break a thesis, but a clean, properly adjusted year-ago gross profit number that aligns with mgmt's soft guide of GM bps trajectory could).

I think some people are also glossing over the fact that IDE-to-MCP isn't accurate yet. It's better, but multi-document retrieval capabilities are not yet mature. A 70%-accurate Excel model is highly frustrating, particularly when you have offloaded the experience of building it and don't have the personal context to debug. In our green/yellow/red-light AI tools rubric, coding agent models have shifted from red light to yellow light, but won't move to green light until 95%+ accuracy is achieved. So it's confidence & usability. Spreadsheets aren't perfect, but they don't hallucinate. Analysts aren't perfect, but they triple-check and validate the ~5-10% key inputs (or they don't last long).

"But analysts make mistakes..." Yes, they do, but great analysts know where model inaccuracy is acceptable / non-core, and where an input is mission-critical they obsess over checks, validations, and a multi-approach modeling structure. Human accuracy in thesis-contingent areas of the model, in my experience, is 99.99% (I can only think of one or two brutal thesis-contingent mistakes across 5 firms; both had real career implications for the analyst and were a very bad look for the PM who should have caught it). And Excel spreadsheets are just intensely useful up, down and across the organization, as the analyst who runs the spreadsheet is rarely the end investment decision-maker...the Excel sheet is a thinking tool and a communication tool, above all else. This could change, but it requires CIO/PM/MD sponsorship, which I'm not sensing is happening (yet). So, with that context, I'll take the under on the end of spreadsheets (while joyfully experimenting with the new tools...)
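The "triple-check the ~5-10% of inputs that matter" discipline lends itself to a mechanical check. A toy sketch in Python with entirely hypothetical line items, values, and tolerance, not any real model:

```python
# Toy sketch of validating the handful of mission-critical model inputs
# against reported source figures (all numbers hypothetical).

CRITICAL_TOLERANCE_PCT = 0.5  # thesis-contingent inputs need a near-exact match

def validate_inputs(model_inputs, reported):
    """Flag critical inputs that disagree with the reported source.

    model_inputs: {line_item: (model_value, is_critical)}
    reported:     {line_item: source_value}
    Returns (line_item, model_value, source_value) for each failure.
    """
    failures = []
    for item, (value, is_critical) in model_inputs.items():
        if not is_critical or item not in reported:
            continue  # non-core items don't make or break the thesis
        source = reported[item]
        pct_diff = abs(value - source) / abs(source) * 100
        if pct_diff > CRITICAL_TOLERANCE_PCT:
            failures.append((item, value, source))
    return failures

model_inputs = {
    "year_ago_gross_profit": (412.0, True),  # thesis-contingent: GM trajectory
    "q3_dna_6yrs_ago": (18.7, False),        # non-core, inaccuracy acceptable
    "guide_midpoint_rev": (1250.0, True),
}
reported = {"year_ago_gross_profit": 408.0, "guide_midpoint_rev": 1250.0}

for item, value, source in validate_inputs(model_inputs, reported):
    print(f"CHECK {item}: model {value} vs reported {source}")
```

The design choice mirrors the tweet: effort is concentrated on the thesis-contingent inputs, while the D&A figure from six years ago is deliberately exempt from the check.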
andrew chen@andrewchen

prediction re the end of spreadsheets AI code gen means that anything that is currently modeled as a spreadsheet is better modeled in code. You get all the advantages of software - libraries, open source, AI, all the complexity and expressiveness. think about what spreadsheets actually are: they're business logic that's trapped in a grid. Pricing models, financial forecasts, inventory trackers, marketing attribution - these are all fundamentally *programs* that we've been writing in the worst possible IDE. No version control, no testing, no modularity. Just a fragile web of cell references that breaks when someone inserts a row. The only reason spreadsheets won is that the barrier to writing real software was too high. A finance analyst could learn =VLOOKUP in an afternoon but couldn't learn Python in a month. AI code gen flips that equation completely. Now the same analyst describes what they want in plain English, and gets a real application - with a database, a UI, error handling, the works. The marginal effort to go from "spreadsheet" to "software" just collapsed to near zero. this is a massive unlock. There are ~1 billion spreadsheet users worldwide. Most of them are building janky software without realizing it. When even 10% of those use cases migrate to actual code, you get an explosion of new micro-applications that look nothing like traditional software. Internal tools that used to live in a shared Google Sheet now become real products. The "shadow IT" spreadsheet that runs half the company's operations finally gets proper infrastructure. The interesting second-order effect: the spreadsheet was the great equalizer that let non-technical people build things. AI code gen is the *next* great equalizer, but the ceiling is 100x higher. We're about to see what happens when a billion knowledge workers can build real software.

Aryaman Iyer@AryamanIyer3·
@FundamentEdge been building ai that generates excel models from sec filings. the output is still excel because analysts want formulas they can audit and change. code is better for generation but worse for consumption. the format analysts trust won and that's not changing soon