
sracha
@sracha
2.2K posts
WYRD Show: How to live a weird life https://t.co/QtUORs85Jw





Notion doesn’t have a storytelling & community intern. My pitch? To be the first. @NotionHQ


My conversation with @im_roy_lee, founder of Cluely.

00:00 Authentic people are disliked
5:00 Reputation with 10 vs 10 million people
8:45 Risk framework when launching Interview Coder
11:24 No regrets on burning bridges
15:01 Looking down on people?
16:23 Being a founder is not that hard
22:53 Taking risk without rich parents
25:23 The 2 years between Harvard & Columbia
28:56 5 months of conscious comatose
32:35 How to be shameless: Ego vs Arrogance
41:28 "I wanted everyone to recognize the person my ego thought I was"
44:10 Why do controversial people (Trump, Kanye, Elon) win?
49:55 Learning everything in front of the world

Look up WYRD Show on YouTube, Spotify, Apple Podcasts, etc. Enjoy!



How to overcome the fear of being judged



We’ve raised $75M in new funding from Sequoia and Spark Capital, partnering with @sonyatweetybird, @MikowaiA, and @YasminRazavi, all of whom are deeply supportive of our long-term mission. We’ve also brought on angels & advisors including @karpathy, @tszzl, and @_milankovac_.

Our early results with FDM-1 moved computer use from a data-constrained regime to a compute-constrained one; this latest round of funding unlocks several orders of magnitude of compute scaling for that work. With the FDM model series we have a path to scale agentic capabilities through video pretraining, and we expect to achieve superhuman performance on general computer tasks in the same way that current language models have superhuman performance on coding tasks.

We’re also now able to invest in the blue-sky research necessary for our long-term mission of building aligned general learners. To realize the civilizationally transformative impacts of AI, models must generalize far outside their training distributions, actively exploring and building skills in new environments. This capability represents a substantial shift from the current paradigm of model training. We believe that current alignment techniques are insufficient to predictably and safely steer a model with human-level learning capabilities, so we’re studying small versions of this problem in controlled environments to develop a science of alignment for general learners.

We’re a team of 6 people in San Francisco. We’re hiring world-class researchers and engineers to help us achieve our mission. If that’s you, please get in touch.


