fraser (@Fraser)

314 posts

VC at @sparkcapital. Past: head of product at @openai; co-founder/CEO of an AI startup that was acquired by Airbnb.

Joined December 2006
258 Following · 13.6K Followers
fraser reposted
Poke @interaction
Starting today, personal superintelligence is just one tap away. No download, no signup. Text Poke for free now: Poke.com 🌴
0:00 – What's Poke?
0:50 – Introducing Poke Recipes
1:25 – Create a Recipe in 10 seconds
1:43 – Earn on Poke
2:44 – Build with npx poke
12:58 – Recap
13:36 – Parisian Love
132 replies · 85 reposts · 845 likes · 265.9K views
stetson 🤠 @___stets___
@Fraser Wherever I’m at, the rocket launch is getting streamed
1 reply · 0 reposts · 1 like · 8 views
stetson 🤠 @___stets___
Space nerd? Same. 🚀 I set up a Rocket Launch Tracker through Poke that monitors every major upcoming launch – SpaceX, NASA, Blue Origin, Rocket Lab, ESA, and more. It sends a notification 20 minutes before liftoff with a direct live stream link. No more missing launches. Grab it: poke.com/r/rIiF1I3_7Pc
Quoted: Poke @interaction launch announcement (text duplicated above)
1 reply · 1 repost · 10 likes · 742 views
oGeneo @LordArche
Stop guessing how long the security line is. Just built a TSA Wait Times recipe on @interaction's Poke. Real-time wait times for any airport, straight to your texts: poke.com/r/GiPjrFepzRw
3 replies · 1 repost · 14 likes · 655 views
fraser @Fraser
I'm a daily user of Poke. This type of product – a helpful personal assistant, universally available via messaging – will be one of the most important products of this era. Text is the universal interface. It's intuitive, without a learning curve. And while a button can only do what a button says, a text box can do anything the user can articulate. There's a reason why, over the history of the consumer internet, only two UI paradigms have reached billion-user scale: the media feed and the chat interface. Poke.com – now available for everyone
Quoted: Poke @interaction launch announcement (text duplicated above)
2 replies · 3 reposts · 25 likes · 3.6K views
fraser reposted
Mike Krieger @mikeyk
More than a million people are now signing up for Claude every day. To everyone choosing to make @claudeai part of how they work and think: welcome.
162 replies · 225 reposts · 3.9K likes · 651K views
fraser reposted
Elicit @elicitorg
The Elicit API is now available in preview for Pro and Teams users. You can search 138M+ papers and generate Research Reports from your code, scripts, or AI tools. Get your API key at elicit.com/settings and check out docs.elicit.com
Elicit tweet media
4 replies · 7 reposts · 38 likes · 7.5K views
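The Elicit announcement above describes calling the API from code but shows no request syntax. In the sketch below, the endpoint path (`/v1/search`) and body fields (`query`, `limit`) are assumptions for illustration, not taken from docs.elicit.com; only the domain and the API-key location (elicit.com/settings) come from the tweet. A minimal sketch of assembling a bearer-authenticated request with Python's standard library:

```python
import json
import urllib.request

# NOTE: the endpoint path and body fields below are hypothetical --
# check docs.elicit.com for the real API shape. Only elicit.com and
# the key location (elicit.com/settings) come from the announcement.
API_KEY = "YOUR_API_KEY"  # generated at elicit.com/settings (Pro/Teams preview)

def build_search_request(query: str, limit: int = 10) -> urllib.request.Request:
    """Assemble (but do not send) a bearer-authenticated JSON POST."""
    body = json.dumps({"query": query, "limit": limit}).encode("utf-8")
    return urllib.request.Request(
        "https://api.elicit.com/v1/search",  # assumed path, not documented here
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_search_request("protein language models")
print(req.full_url, req.get_method())
```

Actually sending it would be `urllib.request.urlopen(req)`; swapping in `requests` or an async client changes nothing about the bearer-token pattern.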
fraser @Fraser
Little nudges, adjustments, and taps... human intuition that helps us get things done when working with our hands. @GeneralistAI is showing emergent behavior where the model starts to react, correct, and recover in real-time. Weirdly human to see.
Andy Zeng@andyzengineer

x.com/i/article/2016…

1 reply · 1 repost · 6 likes · 1.2K views
fraser reposted
Nabeel Hyatt @nabeel
Congrats to Q.ai on the acquisition by Apple, the second largest in their history. In 2022, Aviad cold emailed out of the blue. He barely even told me what he was up to. But from the very first call it was obvious I had met a force of nature, and a kindred spirit. These folks have really made magic – oh, how I wish this wasn't in stealth so you all could see. But with Aviad & team inside of Apple, the magic is sure to hit us all soon enough. We at @sparkcapital are so happy to have had the chance to partner with them. Congrats to the Q team! reuters.com/business/apple…
Nabeel Hyatt tweet media
29 replies · 24 reposts · 483 likes · 60K views
rapha @rapha_gl
the proliferation of "chat sidebars everywhere" will end within 2 years
3 replies · 0 reposts · 13 likes · 1.8K views
Jungwon @jungofthewon
you call it recency bias, i call it updating quickly based on new evidence
1 reply · 0 reposts · 4 likes · 553 views
fraser reposted
Generalist @GeneralistAI
More pretraining improves GEN-0 real-robot performance (via blind A/B evals with closed-loop rollouts). Improvements are significant in the low-data regime, but the best models thrive with both pretraining and ample post-training. See blog addendum: generalistai.com/blog/nov-04-20…
Generalist tweet media
5 replies · 28 reposts · 187 likes · 79.4K views
fraser reposted
fraser @Fraser
"Any task that frontier AI can sort of do today, it'll likely be able to do reliably one year from now" applies to biology, too, suggesting a remarkably optimistic future. We at @sparkcapital are thrilled to continue to support Ali and the amazing team at @ProfluentBio.
Ali Madani@thisismadani

We've raised $106M from @AltimeterCap @JeffBezos @sparkcapital @insightpartners @airstreet. The number is a green light to push harder. AI provides a credible path to design biology with precision and control. This will yield an age of abundance in curing disease and beyond.

1 reply · 0 reposts · 5 likes · 1K views
fraser @Fraser
@kenbwork Great analysis. On the "slow market restructuring" ... Going from experimental science to ML predictions means overcoming cultural change, power structures, etc. Perhaps improbable?
0 replies · 0 reposts · 0 likes · 104 views
Kenny Workman @kenbwork
On the EvoScale acquisition: these are unintuitively difficult businesses to build. It's important to understand that practical integration of molecular foundation models into discovery pipelines is essentially swapping a process step (e.g. hybridoma) with a function call (e.g. a model that predicts the thing that binds).

1/ There are *many* of these steps, and each requires largely orthogonal models of underlying biology (binding, immunogenicity, tox, PK/PD, etc.). In consequence, a general reasoning analogue for molecular tasks seems quite far off. More likely you focus, but cover a small surface area of the pipeline.

2/ Hard to capture value. You either wait for a notoriously slow market to restructure the discovery process around said function calls, or you vertically integrate and build drugs. No need to discuss the difficulty/timeline of the latter. If the former, it seems difficult to differentiate from competitors as techniques appear to converge in performance (e.g. inspect the Germinal release on the heels of the Chai-2 whitepaper).

These models *will* be very useful, but the rails here are very unclear and the more existential thing to derisk imo. Drug hunters will likely compose many such function calls throughout discovery from multiple vendors.
Kenny Workman tweet media
8 replies · 9 reposts · 78 likes · 7.6K views
fraser @Fraser
Empire of AI by @_karenhao is by far the most accurate telling of the era when I was at OpenAI, which was an important few years – from the first commercial step to shortly after the launch of ChatGPT. There is one important piece that is incorrect: the portrayal of @sama. He's presented as some Machiavellian and reckless leader, and the facts don't support that.

I joined OpenAI when we were about 100 people and purely a research lab. As head of product, I helped transition OpenAI from a research org to one deploying our research as products. During this time a number of large and complex decisions were worked through. There were no easy and obvious solutions to any of these, and many of these decisions were seemingly at odds with past decisions. Complex situations often look very different to different people, and there were dynamics at OpenAI during this time that made everything more challenging – from the org's structure to philosophical belief structures and much in between.

The weirdness of OpenAI at this time appealed to me – the unusual structure felt like it created space for something different, and the differing beliefs (while exhausting at times) felt necessary for navigating genuinely novel territory. But that same weirdness created real tensions as we worked through three major challenges. First, the Microsoft partnership: how do we take billions from a tech giant without compromising independence and our mission? Second, productization: how do we go from a research lab to shipping products without abandoning our original purpose? Third, deployment: how do we deploy AI research fast enough to matter while being careful enough to be responsible?

In the moment, none of these had obvious answers. The right path forward was uncertain, and reasonable people disagreed – often strongly – about what we should do. Led by Sam, we worked through each of these tensions carefully and deliberately. With the fullness of time and the ability to see how things actually played out, I believe the evidence shows we reached the right decisions on all three.

When negotiating the early Microsoft deal, the entire term sheet was shared with everyone at the org. We'd add questions and comments, and then Sam would host an endless meeting where we'd talk through the questions, discuss the spirit of what we cared about, gather feedback on what missed the mark, etc. Each iteration of the term sheet, month after month, progressed like this. Some opposed the partnership, but their voices were always heard and attempts to address their concerns were made. In hindsight, a deal of this sort was required – there was no other viable path – but Sam ensured that our independence and our mission were preserved while spending time working through everyone's concerns.

The first product roadmap spent considerable time articulating why shipping product supported our mission and how we could do so safely. I spent significant time working through my colleagues' concerns about productization because getting buy-in across the org on the why was essential to doing it right. With Sam's full support, we consistently slowed down our product work and made decisions that hurt our business and metrics. We refused to allow entire use-cases we felt we couldn't handle responsibly. We learned what was required – technically and operationally – to comfortably support select use-cases and prioritized that work. We fired some of our biggest customers because we were concerned about misuse. We didn't get everything right during this era, but we did an excellent job identifying, sizing, and mitigating risk while building one of the most widely-used products in history. This wasn't luck; it was the result of the deliberate, sometimes frustrating culture Sam insisted we work through.

On deployment, many of us believed that deployment was essential to the safety strategy (not separate and something to fear). Learning to deploy the research responsibly would require practice, and the time to practice was when the stakes were lowest. And so we embraced an iterative deployment strategy. While other labs struggled with misuse and PR crises, we consistently deployed without major incidents, and we learned and improved with each model release. We all understood that being able to shape the norms and standards of AI was critical to our mission. Sam argued that writing policy memos could only go so far, and that we'd be in a much stronger position to define norms aligned with our values if we were consistently the first to deploy responsibly. His argument proved more correct than many of us realized at the time.

One question I've reflected on a lot is why brilliant, well-intentioned people have such different views of this era and Sam's leadership. I have respect for many who have framed Sam's leadership negatively, and count many of them as friends, and so it's somewhat uncomfortable to share my conclusion. Over the years, when I've listened to people share examples of what they saw as problematic behavior, I've noticed that it often traces back to one of these dynamics: someone who lost an internal debate and attributed it to bad faith rather than legitimate disagreement; someone who struggled to accept that complex situations made previous plans untenable; someone unfamiliar with how large organizations with multiple stakeholders actually function; or someone who pursued power and lost. I don't say this to dismiss the substance of these perspectives – the concerns about Microsoft, productization, and deployment were real. But I think these underlying dynamics shaped how people interpreted complex, ambiguous situations.

When I joined I was told we'd only ever be 200 people. For reasons I understood, we had to abandon this idea. I didn't feel lied to or misled. I understood we were navigating novel territory where plans had to evolve. Not everyone experienced it that way, and I understand why. But those different experiences don't mean Sam was acting in bad faith.

With several years of distance, I believe the major decisions from that era have held up remarkably well. That doesn't mean we got everything right or that the concerns weren't legitimate – but it does suggest Sam was navigating these tensions with more wisdom than many give him credit for.
16 replies · 28 reposts · 389 likes · 107.8K views