jon cooper

2.6K posts

@jdc

SF · Joined March 2007
1.6K Following · 271 Followers
jon cooper reposted
Brianne Kimmel@briannekimmel·
fwiw, Starface’s annual revenue is better than the majority of AI startups’. If pimple stickers can bring in $150M, I’m confident there are many similar-sized businesses yet to be built. The problem is we need more founders like Julie who deeply understand culture & don’t care about VC approval
jon cooper reposted
Scott Wu@ScottWu46·
Devin Review caught the axios supply chain attack for multiple Cognition customers before the attack was publicly known. These attacks will be 10x more frequent in the age of AI; it is critical that repo maintainers start using AI for defense as well. (Showing one example below where Devin Review caught the attack within an hour of its release; text lightly edited for anonymization.)
jon cooper reposted
Calif@calif_io·
We asked Claude to find a bug in Vim. It found an RCE. Just open a file, and you’re owned. We joked: fine, we’ll switch to Emacs. Then Claude found an RCE there too. Full story: blog.calif.io/p/mad-bugs-vim…
jon cooper reposted
Guillermo Rauch@rauchg·
When Opus 4.5 came out, it was a one-way door to a new way of engineering. Agents now do most of our coding. Knowing the inherent flaws and over-confidence of LLMs, we sent a clear message to our teams: vibing and mission-critical infrastructure don’t go together. We’re sharing some of our early internal guidance on how we’re “agenting responsibly”, prioritizing security, durability, and availability at all times. vercel.com/blog/agent-res…
jon cooper reposted
Aaron Levie@levie·
We dramatically underestimate how much change management it is going to take to automate most knowledge-worker tasks. Between data sitting in legacy environments or systems without good APIs, missing context for doing the task, less technical teams, and other factors, there’s still a lot of work to drive real AI transformation in an enterprise. This is actually great news if you’re building right now, because the opportunity is to build the software bridges that make this easier, or to build new services firms to help with this change management. Opportunity is all around for those looking.
Jason Shuman@JasonrShuman

Silicon Valley thinks AI agents are a $20/mo self-serve subscription. Main Street is paying local agencies $10,000 just to turn them on.

Everyone assumes AI will be bought primarily online like Slack or Zoom. I think they are wrong. Some of the biggest winners in the AI boom won't be the software vendors. It will be the humans installing it.

Here is the reality of SMBs right now:
• 54% lack internal AI expertise.
• 41% have data quality too poor for AI to even work.
• 41% already prefer buying AI through a local IT provider.

You cannot "1-click install" a genius AI into a messy CRM or a 15-year-old server. It will just execute the wrong tasks at the speed of light.

The AI software will be cheap, and a lot will absolutely be bought online. Making it actually work for a messy, real-world business will be expensive. Very bullish on the "Do It For Me" economy being back.

jon cooper reposted
Bryan Johnson@bryan_johnson·
Awoke this morning feeling a child-like vibe of life. Bubbling excitement for what the day holds. Novelty in the smallest of things. Unburdened by the clouds of worry that dim an adult’s life. Possibility painting the horizon.
Blueprintsmb@blueprintsmb22·
Grabbed breakfast today with another SMB owner (via ETA) who, like me, acquired after a career on Wall Street. The theme of the breakfast was that nobody really prepares people for what life can be like once you turn 40. We both now regularly see friends lose their W2s, and this trend has accelerated in the last 12 months. A number of our friends have been looking for over 6 months, with some well past the 2-year mark of their job search.

The reality is that seats for high-income white-collar W2 roles are few and far between right now. In my old hedge fund world, there really aren’t that many portfolio manager seats (a big pod like Citadel has 3 divisions with something like 30-35 long/short equity portfolio managers), and many portfolio managers don’t want to hire 40+ year old senior analysts. At 40+ in many industries you are now deemed expensive, and there is an increasingly high likelihood hiring managers will ask how you envision using AI in the role you are interviewing for. For those not prepared for those types of questions, it could be a long process to find a new seat.

The loss of identity is rarely talked about either. If you were a senior executive at a publicly traded company, who are you now? Your job or title shouldn’t be your identity, but if you have spent 20 years building up a network and reputation, it is often difficult to distinguish the two.

The sooner one starts playing out their career roadmap 10-15 years ahead, the better the chances they will be prepared for the nonlinearity of life that tends to accelerate later in one’s career. Saving, investing, building a strong business network, investing in new skills, etc. seem increasingly important, especially with AI. Are we being too bearish, or are others seeing the same?
jon cooper@jdc·
@PI010101 I am curious how this relates to the notion of superposition in mechanistic interpretability. Does it?
Paata Ivanisvili@PI010101·
The Johnson–Lindenstrauss lemma says something quite remarkable: if you have an astronomical number N of vectors of large size (say, in a very high-dimensional Euclidean space), then you can linearly map them into a much lower-dimensional space, of dimension about log(N), in such a way that the distances between the vectors are almost preserved. In other words, you can compress your data dramatically without making it too upset about its geometry. A random matrix with i.i.d. standard Gaussian entries will most likely do the job.
Google Research@GoogleResearch

Introducing TurboQuant: Our new compression algorithm that reduces LLM key-value cache memory by at least 6x and delivers up to 8x speedup, all with zero accuracy loss, redefining AI efficiency. Read the blog to learn how it achieves these results: goo.gle/4bsq2qI

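The lemma in the tweet above is easy to check numerically: project Gaussian data through a random matrix and compare pairwise distances before and after. A minimal numpy sketch; the sizes, eps, and the 8·log(N)/eps² target dimension are illustrative choices, not the tight constant from the lemma.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d = 200, 10_000                          # N vectors in a d-dimensional space
eps = 0.25                                  # allowed relative distortion
k = int(np.ceil(8 * np.log(n) / eps**2))    # target dimension ~ log(N) / eps^2

X = rng.normal(size=(n, d))

# Random projection: i.i.d. standard Gaussian entries, scaled by 1/sqrt(k)
# so squared lengths are preserved in expectation.
P = rng.normal(size=(d, k)) / np.sqrt(k)
Y = X @ P

def pairwise_sq_dists(A):
    """Squared Euclidean distances between all rows of A."""
    g = np.sum(A * A, axis=1)
    return g[:, None] + g[None, :] - 2.0 * (A @ A.T)

# Ratio of compressed to original squared distances, off-diagonal pairs only.
mask = ~np.eye(n, dtype=bool)
ratio = pairwise_sq_dists(Y)[mask] / pairwise_sq_dists(X)[mask]
print(f"k = {k}, distortion ratio in [{ratio.min():.3f}, {ratio.max():.3f}]")
```

With these settings the 200 vectors drop from 10,000 dimensions to a few hundred while every pairwise distance ratio stays close to 1, which is the lemma's promise in miniature.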
jon cooper reposted
Max Weinbach@mweinbach·
GPT 5.4 in Codex implemented TurboQuant in MLX in like 25 minutes, given the model weights and the PDF report. Sorta insane this is where we are now
Dwarkesh Patel@dwarkesh_sp·
There's more to my latest Jane Street mid-roll than meets the eye. Might be worth another listen... Timestamp 1:31:42
jon cooper@jdc·
Facts.
Brett Caughran@FundamentEdge

AI won't kill fundamental investing, because more information doesn't kill alpha. We have decades of priors here (Excel, Bloomberg, alt data... all democratized analysis & information gathering, and didn't kill alpha). As measured by factor volatility, stocks are less efficient and more alpha-rich than ever (and empirically, the ability of multi-eight-figure market-neutral multi-managers to consistently grind out 10-15% returns in an idio-maximized way proves this point... 15 years ago a $10bn hedge fund was considered to be impossibly large).

Innovations in investment process have shifted alpha pools, for sure, and systematic investors have arbitraged many old, reliable fundamental alpha pools. But as the players at the poker table have shifted, the constraints of those new players have created new alpha pools. Long-duration fundamental investing has been gutted, and definitionally, competing against a group of non-fundamental (quants, factor/thematic investors, indexers) and duration-constrained (multi's) investors should be a huge competitive advantage long term (however frustrating in the near term). To wit, a 9-month thesis where I "look through" the next two prints is now considered a long-term thesis.

Rigorous investment process serves investment judgment, but real alpha generation fits a power-law distribution, and there is some ineffable "nose for money" that the great investors have that cannot necessarily be trained. Investing is a very hard game that cannot be distilled to a reinforcement learning sandbox (by the time it is, the regime will have shifted and new drivers move stocks). AI has no sense of materiality, no true discernment, and lacks the context of N-of-1 situations (if you haven't noticed, we are living in an N-of-1 world!). There is an irreducible element of humanness that is critical to success in fundamental investing, and that won't change. What does this all mean?

In my opinion, there is no better time to be starting a career as an investor. My first year on the desk, I spent a lot of time doing grunt work: updating Nielsen files, updating models for my PM, creating same-store-sales master files, building question lists for CEO meetings, etc. I can automate all of this now and get more quickly to the deep, value-added parts of learning the investment process.

Will AI drive alpha? This is a debate people are having, which I find sort of silly. When used correctly, by the right investor, of course it will. Ask any great investor whether, given another 4 hours of research time per day, the quality of their research would improve. That's kind of a dumb question... of course it would. Compressing the mechanical part of your job to focus more on the artisanal part of the job is Step 1, and with agentic systems accelerating fast, it is now in the strike zone of possibility. This is before we start to layer in a broader monitoring net and use cases to go deeper and build more rigor, finding signals in unstructured data that were missed before, as well as turning your investment genius into a co-pilot pattern recognition system. The future is very bright for fundamental investing, in my opinion.

jon cooper reposted
Brett Caughran@FundamentEdge·
It's really remarkable how fast AI tools for Excel have evolved. Even three months ago I found them almost completely unusable. Today, I was able to update my Uber model for the last four quarters in a fraction of the time, accurately, even counting the time I spent debugging and validating the key inputs.

The three big unlocks for me were: creating my own skills files, which are recipe cards encoding an incredibly detailed dissection of every step of the financial modeling process (put together in an 86-page document, then crafted into six distinct modeling skills... unfortunately, I won't be sharing this at this time, but will consider it in the future); connecting the Daloopa MCP to Claude in Claude Excel for accurate data; and creating a validation space in Perplexity Computer to do final checks and debugging. (I am not sponsored by Daloopa or Perplexity, or any vendor for that matter.)

Obviously this AI-augmented process is only valuable to the extent that it is 98%+ accurate, and 100% accurate on critical metrics. Validation has to be a systematic process blending coding tools and human validation checklists (i.e. hand-checking key model variables and understanding where in the model there is tolerance for mistakes, and where there isn't). But the ability of new LLMs to read & analyze models (particularly GPT 5.4) and the rise of agentic workspaces like Perplexity Computer to route tasks to the right LLMs seems to be resulting in big progress here. Really exciting stuff. I have been a huge skeptic here... Excel-based models are the foundation of institutional decision making, and they are no place for AI slop. With the technology improving, particularly workflows around systematic validation, that skepticism is melting.
jon cooper reposted
Alix Pasquet@alixpasquet·
In NY, investors tend to look at the rest of the world the way this famous New Yorker magazine cover does 👇🏼: NY is the center of everything, the giant, and the rest of the country and the world is very small. By the way, this tendency creates a fun behavioral dynamic to exploit.

Always remember the saying: "A desk is a dangerous place from which to view the world." So get off your ass and get into the field. Go visit companies, try new products, talk to customers, go to conferences, and make it a goal to travel to one different country per year, both for fun and research.

Field research, in this ongoing regime change catalyzed by these new AI tools, is about to become even more important. It's hard to develop the ability to see change at its early stages, so you can exploit it, while sitting at your desk in NY, analyzing funnymentals fed to you by management teams and their PR representatives.
jon cooper reposted
Govind@DeepknowledgeU·
Latency numbers every programmer must know
jon cooper@jdc·
@ianzepp @gauntletai Wow. I knew Nano Banana was excellent at prompt adherence but this is next level. What was the prompt-to-generate-the-prompt?
Duke Ian@ianzepp·
# Slide 4: Analog — Continuous Signal + Noise Model

## Style Brief
(Same as slide 1 — warm parchment, pencil sketch, dense lab notebook feel.)

## Layout: Encode/Decode Split with Margin Notes

## Prompt
Technical pencil sketch on warm aged parchment paper with subtle coffee stain marks, worn edges, and faint ruled lines showing through like notebook paper. Dense hand-drawn diagram in dark brown ink with slightly uneven hand-lettered labels. 16:9 aspect ratio. The composition should feel like a dense page from a researcher's lab notebook.

TOP: Bold hand-lettered title "4. ANALOG — CONTINUOUS SIGNAL + NOISE MODEL" centered, with a double underline. Smaller subtitle: "signal grows linearly, noise grows as square root of N". The main area is divided into upper ENCODE and lower DECODE sections by a hand-drawn horizontal dashed line.

UPPER SECTION labeled "ENCODING" in a sketched tab:

Far left: A pinned note card titled "PER-FRAME FORMULA" containing a large hand-lettered equation: "frame[cell] = sign * (S + payload_bias) + noise" with annotations below each term: "sign" has arrow to "+1 or -1 from QR", "S" has arrow to "signal_strength", "payload_bias" has arrow to "+delta or -delta from payload bit", "noise" has arrow to "uniform random from PRNG". The card has a pushpin and slightly curled corner.

Center: A large hand-drawn graph taking up significant space, titled "SNR IMPROVEMENT" in bold lettering. The x-axis is labeled "N frames" with tick marks at 1, 4, 16, 64, 256. The y-axis is labeled "accumulated value". Two hand-drawn curves: a steep straight line rising steeply labeled "Signal = N * S (linear)" drawn in bold strokes, and a much flatter curve labeled "Noise = sqrt(N) * amplitude" drawn in thinner strokes. The growing gap between them is cross-hatched and labeled "SNR = sqrt(N)". Below the graph, a small table shows: "N=1: SNR=1" then "N=4: SNR=2" then "N=16: SNR=4" then "N=64: SNR=8".

Right side: A hand-drawn 3D perspective sketch of a height field or terrain map, showing peaks and valleys arranged in a QR-like pattern. Tall peaks are labeled "+N*S (white module)" and deep valleys labeled "-N*S (black module)". A horizontal plane cuts through at zero labeled "threshold plane: sign = QR". The varying heights of peaks above the threshold are annotated "taller peak = payload bit 1" and "shorter peak = payload bit 0". Title above: "ACCUMULATED HEIGHT FIELD".

Top margin annotation: "Grid, min 2 frames, signal_strength must exceed noise_amplitude"

LOWER SECTION labeled "DECODING" in a sketched tab:

Left side: A stack of overlapping frame sketches labeled "N float frames" with wavy lines suggesting continuous values, flowing into a large arrow labeled "sum all frames" leading to an accumulated grid with float values like "+4.8 -3.2 +5.1 -4.7".

Center: A critical step shown as a large diagram. The accumulated grid sits at left. A second grid labeled "expected noise sum" sits below it, drawn with dashed borders and annotation "reconstructed from shared PRNG seed — same key, same noise". A large minus sign between them leads to a third grid labeled "CLEANED FIELD" with cleaner values. Annotation: "noise is deterministic, so we subtract it exactly". This is shown as the key insight with double-underline on "subtract it exactly".

Center-right: From the cleaned field, two forking arrows. Upper arrow labeled "SIGN (threshold at 0)" leads to a QR grid with label "recovered QR". Lower arrow labeled "MAGNITUDE (distance from baseline)" leads to a comparison diagram showing actual magnitude versus expected baseline "N * signal_strength", with residuals marked as bit=1 (above) and bit=0 (below). A small majority-vote tally for multi-cell positions.

Right margin: A small sketched note card titled "DETERMINISTIC NOISE" containing: "encoder seeds: Prng::from_key(qr_key, frame_index)" and "decoder reconstructs identical sequence" and "therefore: noise cancels PERFECTLY" with "perfectly" underlined.

BOTTOM LEFT: Bold hand-lettered callout in a rough box: "FROM DISCRETE TO CONTINUOUS" with smaller text: "binary flipped biased coins. analog adds real-valued signal plus noise. now we have a formal noise model: SNR improves as sqrt(N). and deterministic noise means perfect cancellation on decode."

BOTTOM CENTER: Properties table in a sketched frame. Left column "HAS": continuous float values, formal SNR model, deterministic noise cancellation, payload in magnitude, grayscale appearance. Right column "LACKS": multi-layer structure, overlapping windows, spatial permutation, boundary-free carrier.

BOTTOM RIGHT: Torn-edge paper with arrow pointing right: "NEXT: what if L1 accumulated outputs THEMSELVES became frames for a second layer? Recursive steganography — a QR hidden inside a QR..." with "recursive" double-underlined.

No people, no faces, no color beyond dark brown ink on warm parchment. Dense margin annotations, thin leader lines, pushpins on note cards.
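The encode/decode scheme the prompt describes (per-frame signal plus deterministic PRNG noise, accumulation, exact noise cancellation, sign for the QR and magnitude for the payload) can be sketched as a toy simulation. All parameter names and values here are illustrative assumptions, not taken from the actual project.

```python
import numpy as np

# Hypothetical toy version of the slide's scheme:
#   frame[cell] = sign * (S + payload_bias) + noise(key, frame_index)
# Decoder sums N frames, subtracts the exactly-reconstructed noise,
# then reads sign (QR module) and magnitude (payload bit).
S, DELTA, NOISE_AMP, N_FRAMES = 1.0, 0.25, 3.0, 16
GRID = 8
KEY = 1234  # shared key seeding the PRNG on both sides

def noise(frame_index, shape):
    # Encoder and decoder derive identical noise from (key, frame_index).
    rng = np.random.default_rng((KEY, frame_index))
    return rng.uniform(-NOISE_AMP, NOISE_AMP, size=shape)

rng = np.random.default_rng(0)
qr_sign = rng.choice([-1.0, 1.0], size=(GRID, GRID))  # +-1 QR modules
payload = rng.integers(0, 2, size=(GRID, GRID))       # hidden bits

# --- encode: N noisy frames ---
bias = np.where(payload == 1, DELTA, -DELTA)
frames = [qr_sign * (S + bias) + noise(i, (GRID, GRID))
          for i in range(N_FRAMES)]

# --- decode: accumulate, cancel deterministic noise, threshold ---
acc = sum(frames)
acc -= sum(noise(i, (GRID, GRID)) for i in range(N_FRAMES))
qr_rec = np.sign(acc)                                   # sign -> QR
payload_rec = (np.abs(acc) > N_FRAMES * S).astype(int)  # magnitude -> bit

assert np.array_equal(qr_rec, qr_sign)
assert np.array_equal(payload_rec, payload)
```

After the noise subtraction the cleaned field is N * sign * (S ± DELTA) up to floating-point error, so both the sign threshold at zero and the magnitude threshold at N*S recover their bits with a wide margin, which is the "noise cancels perfectly" insight the slide double-underlines.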
Duke Ian@ianzepp·
Using Claude to generate prompts for Gemini & Nano Banana produces exceptional results. This is for a QR-focused temporal steganography side project using skills I've picked up in the @gauntletai program.
jon cooper reposted
Brett Caughran@FundamentEdge·
This is the exact light-bulb moment I've had over the last two weeks. Helping firms become AI native is going to be much less about the technical complexity of the actual tooling. There's so much capex and engineering ingenuity pointed at the problem of making AI user interfaces intuitive to use. It is already happening.

What's much more of a gating factor in deploying AI in the investment process is guiding firms through the creation of their own AI exoskeleton. That's harder than it seems, because even within firms, investment process is highly heterogeneous. Every investor has a Bloomberg Launchpad that looks a bit different. And that will be true for agentic AI co-pilots. The way your Asian banks analyst consumes news, evaluates industry data, and builds models is different from your biotech analyst's. Chatbots couldn't handle these differences, but agents can.

So successful adoption requires a cultural decision at the firm level, but also the careful crafting of the mental exoskeleton, investor by investor, wrapping your investment process in AI. I can't get this idea off my mind. I'm building my team to do this and would love to be in touch if this resonates with you (both those on a parallel path who want to share notes, and firms where we could possibly be of assistance).
Ethan Mollick@emollick

I am not sure "Forward Deployed AI Engineers" are going to deliver on what a lot of companies are hoping for. They are useful, yes, but AI applications are far less of a technical issue, and much more about rethinking the deep expertise & structure of your organization around AI.

Niels Hoven 🐮@NielsHoven·
Alpha School is trying a new entrepreneurial high school track: make $1 million by graduation or get your tuition back. When I say that our current education system is failing our highest-potential students, this is what supporting high achievers looks like when it's done right. Our top students absolutely have the potential to be millionaires before they graduate high school, but today they're held back by anti-excellence ideologues who insist that we not let anyone pull ahead.
Nat Eliason@nateliason

Make $1m by graduation. Or get 100% of your tuition refunded. That's the promise of the new high school for entrepreneurs Cameron and I are launching this fall through @AlphaSchoolATX. We need 2-3 coaches to help make it happen. DM us or apply!
