Kevin

3.8K posts

@100baggerhunt

📕 Building my junior AI-analyst @ https://t.co/HEbKKa48XG Hunting for small hidden winners ⬇️

Join 11,000 hunters ➡️
Joined October 2022
590 Following · 12.9K Followers
Pinned Tweet
Kevin @100baggerhunt
I've spent years obsessing over 100-baggers. I recently found Anna Yartseva's study that analyzed 464 ten-baggers over 24 years. It challenged everything I thought I knew. FCF yield dominated every other factor. Here's why I'm rebuilding my approach:🧵
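As a minimal illustration of the factor the thread leads with: FCF yield is just trailing free cash flow divided by market capitalization, and a screen ranks a universe by it. The tickers and figures below are invented for the sketch.

```python
# FCF yield = trailing free cash flow / market capitalization.
# Illustrative only: tickers and figures are made up.
def fcf_yield(free_cash_flow: float, market_cap: float) -> float:
    return free_cash_flow / market_cap

universe = {
    "AAA": {"fcf": 120.0, "mcap": 1_000.0},  # 12% yield
    "BBB": {"fcf": 30.0,  "mcap": 1_500.0},  # 2% yield
    "CCC": {"fcf": 80.0,  "mcap": 800.0},    # 10% yield
}

# Rank the universe by FCF yield, highest first.
ranked = sorted(
    universe,
    key=lambda t: fcf_yield(universe[t]["fcf"], universe[t]["mcap"]),
    reverse=True,
)
```

A real screen would pull these inputs from filings rather than hard-coded numbers, but the ranking step is the same.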
142 replies · 535 reposts · 4.4K likes · 1M views
Kevin @100baggerhunt
6. You haven't written down what would make you sell
2 replies · 0 reposts · 0 likes · 467 views
Kevin @100baggerhunt
6 signs your investment thesis is actually just a story you told yourself 🧵
1 reply · 2 reposts · 2 likes · 2.1K views
Kevin @100baggerhunt
Sometimes, I take days or weeks going through a company. For this one, it clicked in a matter of hours ⬇️
0 replies · 0 reposts · 1 like · 787 views
Yann @yanndine
5 AI Agents running our entire social media operation.

These are the 5 Claude agents we use to replace a $12k/month content team.

I used to spend hours briefing writers, waiting on revisions, and watching engagement tank because nobody actually understood the ICP. Which hooks were working. What topics the audience actually wanted. How to turn a post into pipeline.

Now? 5 Claude agents handle everything - built in Claude Code, connected directly to Notion.

The agents:

1. Lead Magnet Engineer
Reads your ICP docs, brand guidelines, and tool stack directly from Notion - then builds complete playbooks and creates them in your Draft database automatically.
Outputs:
→ Full lead magnet in Notion format - ready to publish, no editing needed
→ Tool recommendations pulled from your actual stack
→ Brand voice matched to your guidelines
→ Notion link delivered the moment it's done

2. Social Media Expert
Trained on your top 50 posts with real engagement data. Generates 3-5 LinkedIn post variations per topic, picks the strongest one, and explains exactly why it will outperform the others.
Outputs:
→ Hook variations scored against your own engagement benchmarks
→ Post structure following proven 1-3 line paragraph format
→ CTA matched to goal - lead magnet, authority, or engagement
→ Gut-check: flags anything that sounds like ChatGPT wrote it

3. Creative Director
Generates complete design briefs - not vague direction, but exact dimensions, text placement, color codes, and ready-to-use Midjourney prompts your designer or AI tool can execute immediately.
Outputs:
→ LinkedIn preview briefs at 1200x627px with full copy hierarchy
→ Lead magnet covers and workflow diagrams
→ Before/after comparisons and tool stack visualizations
→ Designer-ready brief so there's no back and forth

4. Research Analyst
Analyzes your LinkedIn post CSV and surfaces rising topics with momentum before they peak. Spots competitor gaps. Pulls ICP pain points directly from comment sections.
Outputs:
→ Daily or weekly trend report with momentum status per topic
→ Specific tactical angles your competitors haven't covered yet
→ Content format recommendations with shelf life estimates
→ "4 posts in 48hr on AI agent pricing, 2.3x engagement, zero comparison frameworks published" - that level of specific

5. Performance Tracker
Turns raw engagement data into actual decisions. Not "this post got 500 reactions" - but "contrarian hooks outperform your baseline, double AI content, kill generic LinkedIn advice."
Outputs:
→ Top and underperformer breakdown with pattern analysis
→ Next 7-day content calendar with optimal posting rhythm
→ CTA response rates by type
→ Specific recommendations fed back to each agent for the next cycle

If you run content, manage a team doing outbound, or post on LinkedIn to generate pipeline - this replaces your entire social media setup overnight.

Comment "AGENTS" and I'll send you the full system - all 5 Claude project instructions, Notion integration steps, and copy-paste input formats so you can have it running today.
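The "picks the strongest one" step of the Social Media Expert agent amounts to scoring each draft hook against the account's own engagement baseline and keeping the best. This is a hypothetical sketch: the hooks, rates, and scoring rule are all invented for illustration, not taken from the actual system.

```python
# Hypothetical sketch of the "pick the strongest variation" step.
# Each candidate pairs a hook with a predicted engagement rate
# (in a real system the prediction would come from a model).
def score_hook(baseline_rate: float, predicted_rate: float) -> float:
    # Score = predicted engagement relative to the account's own baseline.
    return predicted_rate / baseline_rate

candidates = [
    ("Most content teams are overstaffed.", 0.045),
    ("We replaced a $12k/month team with 5 agents.", 0.081),
    ("Here is our content workflow.", 0.030),
]
baseline = 0.040  # account's historical engagement rate (assumed)

# Keep the hook with the highest score relative to baseline.
best_hook, best_rate = max(candidates, key=lambda c: score_hook(baseline, c[1]))
```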
167 replies · 15 reposts · 133 likes · 14.9K views
Kevin @100baggerhunt
@Thebullwhisper @markcboon Thanks for the info. Based on what I read, they would start building it as a micro grid, but would eventually connect to the grid. This gives them speed, which others don't have.
0 replies · 0 reposts · 0 likes · 14 views
The Bull Whisperer @Thebullwhisper
$NUAI - do they have what it takes to get multiple hyperscalers to move in?

Power availability & certainty? Yes - 450MW from TurbineX, a deal facilitated by Thunderhead Energy, NUAI's partner. Fully funded by institutional infrastructure investor Harbert for up to 1500MW of natgas-to-electricity.

Fiber connectivity? Yes - the MOU with Globe-link fiber for 1600 new miles of fiber in Texas connecting major data center hubs. Imo, this is a direct flirt with Microsoft's ambitions for their super AI WAN. Msft is planning to lay their ultra-fast hollow-core fiber, joining together several data centers to create a super AI WAN. Since data centers often host several hyperscalers, this AI WAN might become something all of them join forces to develop as a matter of national security. The Globe-link fiber route would enable it.

Carbon capture and verification? Yes - Context Labs' Context AI™ (currently Msft Azure is the only known data center partner associated with Context Labs). Fully funded by Mawgan Capital.

Institutional backing? Yes - two huge capex parts funded by Harbert and Mawgan so far.

The 450MW behind-the-meter plan means no power blockade risk. The fiber backbone MOU secures connectivity, and institutional partners in energy infrastructure lower financing risk.

What else could the first anchor tenant possibly want? If it really is Msft moving in, I guess a couple of Russian hookers with chlamydia could convince them to finally sign the deal.
6 replies · 1 repost · 33 likes · 4.4K views
Anish Moonka @AnishA_Moonka
This is one of my most ambitious investment projects, and honestly, it's still very early.

I built a system where 26 AI agents work together across 6 phases to produce deep equity research. The kind of analysis that takes a team of analysts weeks, condensed into an hour. And with models getting better every few months, this is only going to scale from here.

Here's how it works.

Phase 1: Four agents go out in parallel to collect data.
> SEC filings, earnings call transcripts, market data, insider transactions, and competitive intelligence.
> They cross-reference across sources; nothing gets taken at face value.
> Each agent produces a full, detailed output and a compressed briefing that gets passed forward.

Phase 2: Six agents break down the financials.
> Revenue quality and growth trajectory with deceleration tracking.
> GAAP vs non-GAAP margin reconciliation.
> Balance sheet liquidity with dilution trajectory mapped against revenue growth.
> Operating and free cash flow with SBC-adjusted FCF.
> Historical beat/miss patterns on guidance.
> Each dimension scored 1-5 with a weighted composite, and the key financial tension identified.

Phase 3: Four agents handle the strategic layer.
> Competitive moats scored 0-3 across five dimensions: network effects, switching costs, cost advantages, intangible assets, and efficient scale.
> Each one gets an AI-disruption overlay: does AI strengthen or weaken this specific moat source?
> Management is evaluated on founder-led vs. professional track record, capital-allocation track record, insider alignment based on actual transaction data, and governance structure.
> Industry analysis includes TAM sizing, adoption curve positioning, and an AI disruption taxonomy (additive vs. substitutive vs. deflationary, with specific evidence).
> Sector vulnerability decomposes the stock's movement into sector-wide and company-specific components.

Phase 4: Five agents build the forward-looking picture.
> Head-to-head peer ranking across 4-6 alternatives on upside potential, risk/reward, growth durability, and moat strength.
> Revenue evolution modeled by stream, with per-stream gross margins and blended valuation-multiple implications.
> Competitive threat scorecards with actual data cards per competitor: users, revenue, funding, growth signal, enterprise presence, classified as share taker vs TAM expander vs disruptive substitute.
> Quarterly scenario progressions through 2027 for bull, base, and bear, with branching metrics identified.
> And a thesis narrative that opens with the critical question the market is debating and takes a side.

Phase 5: Four agents on risk and valuation.
> Top 7 risks ranked by probability times impact, with timeframes.
> Three-scenario DCF with Year 5 revenue, FCF margin, WACC, terminal growth, and a sensitivity matrix.
> Comparable company analysis across 5-8 peers with growth-adjusted multiples.
> Valuation convergence blending DCF at 40%, comps at 30%, historical at 10%, and revenue evolution at 20% into a probability-weighted expected value.

Phase 6: Three agents synthesize everything the other 23 produced into a final report structured in five parts.
> Investment case with thesis narrative.
> Investment debate with bull/bear arguments, each carrying confirmation and invalidation triggers.
> The numbers, with every section ending in "what this means for the thesis."
> Supporting evidence with competitive data cards and a catalyst calendar.
> And a verdict with entry levels, sizing framework, add/cut triggers at specific metric thresholds, and a time horizon.

The architecture handles context window limits by having each phase produce compressed briefings (600-800 words) that get passed to the next phase. Full outputs stay available if an agent needs to dig deeper. Parallel execution within each phase, sequential across phases. File-based handoffs between every stage.

One important caveat: ignore the specific buy/sell recommendations and price targets in this version. The valuation models need more work. But what this system already does really well is give you a deep, structured understanding of the business: all the qualitative and quantitative aspects you need to actually make your own informed decision. The moat understanding, competitive dynamics, revenue quality breakdown, risk matrix, scenario mapping. That's the real value right now.

Built it for US stocks first. India is next (to be added soon with data pipeline API help). Ran the first full test on Amazon. Still a lot to improve, but even at this stage, the depth of understanding it produces is genuinely useful.

Link to the full report and all 6 phase outputs in the next tweet 🧵
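The Phase 5 "valuation convergence" step is just a fixed-weight blend of the four per-method fair-value estimates (DCF 40%, comps 30%, historical 10%, revenue evolution 20%). A minimal sketch with invented per-method estimates:

```python
# Fixed blend weights as described in the thread.
WEIGHTS = {"dcf": 0.40, "comps": 0.30, "historical": 0.10, "revenue_evolution": 0.20}

def blended_fair_value(estimates: dict) -> float:
    # Weights must sum to 1 for a probability-weighted expected value.
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return sum(WEIGHTS[k] * estimates[k] for k in WEIGHTS)

# Hypothetical per-method fair-value estimates for one stock.
estimates = {"dcf": 210.0, "comps": 190.0, "historical": 170.0, "revenue_evolution": 200.0}
fair_value = blended_fair_value(estimates)
# 0.4*210 + 0.3*190 + 0.1*170 + 0.2*200 ≈ 198.0
```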
71 replies · 141 reposts · 1.2K likes · 136.7K views
Anish Moonka @AnishA_Moonka
Why use 26 agents when you could use 5?

This was the core architectural decision, and it comes down to three things.

> Context window limits. One agent trying to pull SEC filings, parse earnings transcripts, gather market data, and scan competitive news simultaneously runs out of context space and starts dropping details. Specialization means each agent goes deep on one thing rather than shallow on everything.

> Parallelism. Four agents running simultaneously in Phase 1 means the phase finishes at the slowest agent's time, not at the sum of all four. Multiply that across six phases, and the time savings are massive.

> Quality through focus. Six financial analysts each scoring one dimension (income statement, balance sheet, cash flow, segments, historical trends, composite) produce sharper results than one agent trying to hold all of that in working memory at once. When you ask a model to do too many things at once, everything gets a little worse. When you let it focus, each output gets meaningfully better.

The tradeoff is cost and orchestration complexity. More agents mean more tokens, and you need a clean handoff system between phases. I solved the handoff problem with compressed briefings: each agent writes a full output and a 600-800-word briefing that gets passed forward. Downstream agents read briefings by default and only pull full files when they need to go deeper.

Total cost runs $10-20 in Opus 4.6 tokens per analysis. The next step is to cut costs by 10x.
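The "parallel within a phase, sequential across phases" pattern described above can be sketched with stub coroutines. Everything here is a simplification: the agent names are taken from the thread, but the agents are stand-ins for real model calls, and the handoff is a joined string rather than the 600-800-word briefing files the author describes.

```python
import asyncio

# Each agent is a stub coroutine; a real system would call a model API here.
async def agent(name: str, briefing_in: str) -> str:
    await asyncio.sleep(0)  # stand-in for model latency
    return f"{name} briefing based on: {briefing_in}"

async def run_phase(agents: list, briefing_in: str) -> list:
    # All agents in a phase run concurrently, so the phase finishes when
    # the slowest agent finishes, not after the sum of all runtimes.
    return await asyncio.gather(*(agent(a, briefing_in) for a in agents))

async def pipeline() -> str:
    briefing = "ticker: AMZN"
    # Phases run sequentially; each consumes the previous phase's briefings.
    for phase in [
        ["sec", "transcripts", "market", "competitive"],   # Phase 1 (data)
        ["revenue", "margins", "balance_sheet"],           # Phase 2 (subset)
    ]:
        outputs = await run_phase(phase, briefing)
        briefing = " | ".join(outputs)  # compressed handoff to the next phase
    return briefing

result = asyncio.run(pipeline())
```

The same shape extends to all six phases; only the agent lists and the briefing format change.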
4 replies · 2 reposts · 39 likes · 6K views
Kevin @100baggerhunt
@Thebullwhisper Was this confirmed? It could be a solution, but a 100MW battery pack with 4h capacity is an additional ~$50M in capex. We would need to know the size of the backup capacity and what kind of MWh they would spec.
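Sanity-checking the arithmetic in this reply: 100 MW sustained for 4 hours is 400 MWh of storage, so ~$50M of capex implies roughly $125/kWh, a plausible ballpark for grid-scale battery packs.

```python
# Back-of-envelope check on the figures in the reply.
power_mw = 100          # battery power rating
hours = 4               # discharge duration
capex_usd = 50_000_000  # ~$50M capex as stated

energy_mwh = power_mw * hours                     # stored energy in MWh
cost_per_kwh = capex_usd / (energy_mwh * 1_000)   # convert MWh -> kWh
```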
1 reply · 0 reposts · 0 likes · 47 views
Kevin @100baggerhunt
Because a hyperscaler doesn't want their data center to stop running. If you have a single supply of gas and a series of turbines, that creates a risk if for some reason that supply stops. So the connection to the grid serves as a backup system. I think the bigger competitors already have their slot in the queue; I need to check my data. I'm just trying to assess what the real-world advantage of a behind-the-meter solution is. Ideally, there would be a fast track for those who only need the connection for backup purposes. That would be a win.
2 replies · 0 reposts · 0 likes · 68 views
The Bull Whisperer @Thebullwhisper
@100baggerhunt Why would they need a grid connection? As backup? Could nearby big plants like Vistra Energy or Quail Run provide backup? Do the hyperscalers have a spot in the ERCOT queue? And could they utilize that spot in the queue with the $NUAI build?
1 reply · 0 reposts · 3 likes · 303 views
Kevin @100baggerhunt
@Investmentideen @stockgutter $DUOL is like a game. People get addicted to it. But actually learning anything? Surface-level at best.
0 replies · 0 reposts · 1 like · 83 views
Paul, not a CFA @Investmentideen
Genuine question. I understand the bear case for $DUOL, but has anybody ever tried to learn a language with an LLM? To me, this doesn't seem to be an actual option.
24 replies · 0 reposts · 44 likes · 12.9K views
Kevin retweeted
Floebertus @Floebertus
Sadly, Duolingo reported before the publishing of my February update letter. $DUOL
3 replies · 1 repost · 16 likes · 6K views
Kevin retweeted
Bilbel Capital @bilbelcapital
Bilbel Capital’s 2025 Annual Letter is out!
Returns since inception (February 2022):
Bilbel Capital: 2,092.4%
S&P 500: 58.1%
bilbelcapital.com
6 replies · 7 reposts · 38 likes · 11.5K views