Andrew Yeolo

327 posts

@AndrewYeolo

VC Investor @OIFVC | DMs are open

Australia · Joined March 2021
366 Following · 408 Followers
Pinned Tweet
Andrew Yeolo @AndrewYeolo ·
Strong margin expansion in tech and SaaS imminent due to 5x+ productivity unlock for dev teams from GPT-3+, so R&D as % of revenue will decline. Also startups can build an MVP and product with way fewer devs. Bullish
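The margin claim in the pinned tweet can be sanity-checked with toy numbers. Everything below is an illustrative assumption (the tweet gives no figures); only the "5x+" multiplier comes from the tweet.

```python
# Toy sanity check: if dev productivity rises 5x, R&D as a % of revenue falls.
# All figures are hypothetical assumptions, not data from the tweet.
revenue = 100.0          # $M, hypothetical SaaS company
rnd_spend = 30.0         # $M R&D budget, mostly engineering payroll
dev_share = 0.8          # assumed fraction of R&D that is dev labor

productivity_gain = 5.0  # the tweet's "5x+ productivity unlock"

# Holding output constant, the dev-labor slice shrinks by the gain factor;
# the non-labor slice (cloud, tooling, etc.) is assumed unchanged.
new_rnd = rnd_spend * (1 - dev_share) + rnd_spend * dev_share / productivity_gain

old_pct = rnd_spend / revenue * 100  # R&D at 30% of revenue before
new_pct = new_rnd / revenue * 100    # ~10.8% after, under these assumptions
```

Under these made-up inputs, R&D drops from 30% to roughly 11% of revenue, which is the direction the tweet argues for; real companies would redeploy some of that capacity rather than cut it.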
Andrew Yeolo @AndrewYeolo ·
Excited for the launch of @rocksolidHQ and its first official rETH vault with @Rocket_Pool! Thrilled to support @wardy_ben and @paladin_eth on their mission to democratise access to secure DeFi strategies as stablecoins proliferate and finance increasingly moves onchain, and to invest alongside @CastleIslandVC, @Rocket_Pool, @GSR_io, @kindredventures, @theBBFund, @ideoVC, and @StanfordSBA!
RockSolid Network@rocksolidHQ

1/ Excited to share that RockSolid has raised $2.8M in pre-seed funding led by @CastleIslandVC w/ backing from @Rocket_Pool, @GSR_io, @kindredventures, @theBBFund, @ideoVC, @StanfordSBA, and others. Today, we're also launching the first official rETH vault with @Rocket_Pool.

Andrew Yeolo retweeted
Startup Archive @StartupArchive_ ·
Marc Andreessen explains how to identify fake founders

“There are definitely people that come in [to pitch us] and present themselves to be something they’re not. They’ve read all the books. They will have listened to this interview. They study everything and they construct a facade…. And the amount of this is exactly correlated with the NASDAQ.”

As Marc explains, when stock prices are high and tech is hot, there are a lot of people who decide being a tech founder is a fast track to high status:

“They’re fundamentally oriented for social status — they’re trying to get the social status without the substance. And there are always other places to go to get social status. So after 2000, the joke was B2B meant back to banking and B2C meant back to consulting — which is, the people who showed up to be in tech were like, yeah, screw it. This is over. I’m going to go back to Goldman Sachs or McKinsey where I can be high status. So you get this flushing kind of effect that happens in a downturn. But in a big upswing, you get a lot of people showing up with, let’s say, public persona without the substance to back it up.”

How does Marc identify these people? He uses the same technique that homicide detectives use to find out if you’re innocent — keep asking increasingly detailed questions:

“You ask increasingly detailed questions and people have trouble making things up and things just fuzz into obvious BS, and fake founders basically have the same problem. They’re able to relay a conceptual theory of what they’re doing… But as they get into the details, it just fuzzes out. Whereas the true people that you want to back can do it. What you find is they’ve spent 5 or 10 or 20 years obsessing over the details of whatever it is they’re about to do. And they’re so deep in the details and they know so much more about it than you ever will.”

Video source: @hubermanlab (2023)
Andrew Yeolo retweeted
Reservoir @reservoir0x ·
Henlo — we’re live on Berachain! 🐻 Jump into Reservoir Market (link in bio) to:
- Discover trending NFT collections
- Easily mint your favorite Beras
- Trade NFTs on Secondary
And it’s all powered by Reservoir NFT.
Reservoir tweet media
Peter | Relay @ptrwtts ·
Reservoir has raised a Series A led by @usv!

What makes Reservoir special is how we've responded to the exploding number of blockchains. We're not just "multi" chain. We're "every" chain. It's deep in the DNA of how we design our products, and we're starting to see it pay off with the success of @RelayProtocol

But we're just getting started. So. Many. Chains. Plus some insane new products that we haven't announced yet. All pushing towards unleashing the potential of onchain assets.

Super thankful to everyone who has supported us along the way, especially the earliest believers: @nickgrossman, @jessewldn, @ashegan, @dberenzon, @yonatansela

p.s. come join us, we're hiring!
Reservoir@reservoir0x

Reservoir has raised a $14M Series A round led by @usv. We’re here to build the world’s best developer tools for token trading on every chain.

Andrew Yeolo @AndrewYeolo ·
Great take @AndrewYNg. Seems that foundation models are commoditising and value will accrue to the app/software layer, as you’ve been saying. There should still be a lot of $ for the foundation layer, similar to cloud storage and compute. It’s incredible what innovation occurs under constraints, which @deepseek_ai showed.

I also think that @Google, @Meta and @x will win the foundation model war (if they can justify the ROI). The best models advance via compute, huge amounts of data (G Suite, YouTube, FB, Insta, Twitter), and algorithmic breakthroughs (as per DeepSeek). @Microsoft has swarms of enterprise data but lacks real-time consumer data, which could explain their @tiktok_us interest (together with a potential bargain acquisition price).

Agree that demand for intelligence will have no ceiling and more efficiency will see more consumption, i.e. Jevons paradox x.com/andrewyng/stat…
Andrew Ng@AndrewYNg

The buzz over DeepSeek this week crystallized, for many people, a few important trends that have been happening in plain sight: (i) China is catching up to the U.S. in generative AI, with implications for the AI supply chain. (ii) Open weight models are commoditizing the foundation-model layer, which creates opportunities for application builders. (iii) Scaling up isn’t the only path to AI progress. Despite the massive focus on and hype around processing power, algorithmic innovations are rapidly pushing down training costs.

About a week ago, DeepSeek, a company based in China, released DeepSeek-R1, a remarkable model whose performance on benchmarks is comparable to OpenAI’s o1. Further, it was released as an open weight model with a permissive MIT license. At Davos last week, I got a lot of questions about it from non-technical business leaders. And on Monday, the stock market saw a “DeepSeek selloff”: The share prices of Nvidia and a number of other U.S. tech companies plunged. (As of the time of writing, some have recovered somewhat.) Here’s what I think DeepSeek has caused many people to realize:

China is catching up to the U.S. in generative AI. When ChatGPT was launched in November 2022, the U.S. was significantly ahead of China in generative AI. Impressions change slowly, and so even recently I heard friends in both the U.S. and China say they thought China was behind. But in reality, this gap has rapidly eroded over the past two years. With models from China such as Qwen (which my teams have used for months), Kimi, InternVL, and DeepSeek, China had clearly been closing the gap, and in areas such as video generation there were already moments where China seemed to be in the lead.

I’m thrilled that DeepSeek-R1 was released as an open weight model, with a technical report that shares many details. In contrast, a number of U.S. companies have pushed for regulation to stifle open source by hyping up hypothetical AI dangers such as human extinction. It is now clear that open source/open weight models are a key part of the AI supply chain: Many companies will use them. If the U.S. continues to stymie open source, China will come to dominate this part of the supply chain and many businesses will end up using models that reflect China’s values much more than America’s.

Open weight models are commoditizing the foundation-model layer. As I wrote previously, LLM token prices have been falling rapidly, and open weights have contributed to this trend and given developers more choice. OpenAI’s o1 costs $60 per million output tokens; DeepSeek R1 costs $2.19. This nearly 30x difference brought the trend of falling prices to the attention of many people. The business of training foundation models and selling API access is tough. Many companies in this area are still looking for a path to recouping the massive cost of model training. Sequoia’s article “AI’s $600B Question” lays out the challenge well (but, to be clear, I think the foundation model companies are doing great work, and I hope they succeed). In contrast, building applications on top of foundation models presents many great business opportunities. Now that others have spent billions training such models, you can access these models for mere dollars to build customer service chatbots, email summarizers, AI doctors, legal document assistants, and much more.

Scaling up isn’t the only path to AI progress. There’s been a lot of hype around scaling up models as a way to drive progress. To be fair, I was an early proponent of scaling up models. A number of companies raised billions of dollars by generating buzz around the narrative that, with more capital, they could (i) scale up and (ii) predictably drive improvements. Consequently, there has been a huge focus on scaling up, as opposed to a more nuanced view that gives due attention to the many different ways we can make progress. Driven in part by the U.S. AI chip embargo, the DeepSeek team had to innovate on many optimizations to run on less-capable H800 GPUs rather than H100s, leading ultimately to a model trained (omitting research costs) for under $6M of compute. It remains to be seen if this will actually reduce demand for compute. Sometimes making each unit of a good cheaper can result in more dollars in total going to buy that good. I think the demand for intelligence and compute has practically no ceiling over the long term, so I remain bullish that humanity will use more intelligence even as it gets cheaper.

I saw many different interpretations of DeepSeek’s progress here on X, as if it was a Rorschach test that allowed many people to project their own meaning onto it. I think DeepSeek-R1 has geopolitical implications that are yet to be worked out. And it’s also great for AI application builders. My team has already been brainstorming ideas that are newly possible only because we have easy access to an open advanced reasoning model. This continues to be a great time to build! [Original text: deeplearning.ai/the-batch/issu… ]

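The price gap and the Jevons-paradox point in the quoted post can be made concrete with a toy constant-elasticity demand model. Only the two token prices come from the post; the elasticity and scale constant are made-up assumptions for illustration.

```python
# Token prices as quoted in the post ($ per million output tokens).
o1_price = 60.00   # OpenAI o1
r1_price = 2.19    # DeepSeek-R1

ratio = o1_price / r1_price  # ~27x, the "nearly 30x difference"

# Jevons paradox toy model: constant-elasticity demand q = k * p^(-e).
# With elasticity e > 1, cutting the price *raises* total spend p * q.
# k and elasticity are arbitrary illustrative values, not estimates.
def total_spend(price, k=1000.0, elasticity=1.5):
    quantity = k * price ** -elasticity  # tokens demanded at this price
    return price * quantity              # total dollars spent

before = total_spend(o1_price)
after = total_spend(r1_price)
# after > before: cheaper tokens, more total dollars flowing to tokens.
```

With elasticity above 1, total spend is k · p^(1−e), which grows as price falls, matching the "more efficiency will see more consumption" point; with elasticity below 1 the opposite would hold, so the paradox is an empirical question, not a law.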
Andrew Yeolo retweeted
@jason @Jason ·
Important advice from Brother Kimbal here: be very careful with venture debt, it’s a ticking time bomb for startups. Having a small amount for financing hardware or as a safety net in a predictable business is just fine — but it’s not to be used to extend your runway.
Kimbal Musk 🤠 @kimbal

Never use debt to extend your runway. Debt is a great tool for working capital. But if you use it to extend your runway, you are betting on both your execution and the timing of the markets. Instead, focus on raising the capital you need and take the lower valuation with good terms for your company. If your company succeeds, you will forget the lower valuation very quickly.

David Sacks @DavidSacks ·
Some important announcements today by President Trump on his Tech Team! I look forward to working with:
-- Michael Kratsios - Director of the White House Office of Science and Technology Policy (OSTP), and an Assistant to the President for Science and Technology.
-- Dr. Lynne Parker - Executive Director of the Presidential Council of Advisors for Science and Technology (PCAST).
-- Bo Hines - Executive Director of the new Presidential Council of Advisers for Digital Assets (the "Crypto Council").
-- Sriram Krishnan - Senior Policy Advisor for Artificial Intelligence at OSTP.
This extraordinary team is a testament to the amazing talent that President Trump is attracting to his Administration!
Unofficial Trump on X@trump_repost

I am pleased to announce the brilliant Team that will be working in conjunction with our White House A.I. & Crypto Czar, David O. Sacks. Together, we will unleash scientific breakthroughs, ensure America's technological dominance, and usher in a Golden Age of American Innovation!

Michael J.K. Kratsios will be the new Director of the White House Office of Science and Technology Policy (OSTP), and an Assistant to the President for Science and Technology. In my First Term, Michael was unanimously confirmed by the U.S. Senate as Chief Technology Officer of the United States at the White House. He also served as the Under Secretary of Defense for Research & Engineering at the Pentagon, and received the DoD's Distinguished Public Service Medal. He graduated from Princeton, and is a Distinguished Fellow at Stanford.

Dr. Lynne Parker will serve as Executive Director of the Presidential Council of Advisors for Science and Technology (PCAST), and Counselor to the Director of the Office of Science and Technology Policy. PCAST will assemble America's most distinguished minds in science and technology to advise our Administration on critical issues like Artificial Intelligence. As previously announced, David Sacks will be chairing PCAST. Dr. Parker previously served as Deputy U.S. CTO, and Founding Director of the National Artificial Intelligence Initiative Office. She received her PhD in Computer Science from MIT.

Additionally, Bo Hines will be the Executive Director of the Presidential Council of Advisers for Digital Assets (the "Crypto Council"), a new advisory group composed of luminaries from the Crypto industry, and chaired by our Crypto Czar, David Sacks. Bo is a graduate of Yale University, and Wake Forest University Law School. In his new role, Bo will work with David to foster innovation and growth in the digital assets space, while ensuring industry leaders have the resources they need to succeed. Together, they will create an environment where this industry can flourish, and remain a cornerstone of our Nation's technological advancement.

Sriram Krishnan will serve as Senior Policy Advisor for Artificial Intelligence at the White House Office of Science and Technology Policy. Working closely with David Sacks, Sriram will focus on ensuring continued American leadership in A.I., and help shape and coordinate A.I. policy across Government, including working with the President's Council of Advisors on Science and Technology. Sriram started his career at Microsoft as a founding member of Windows Azure.

Congratulations to all!

Shehan @TheCryptoCPA ·
Heads up, crypto investors! The IRS is making BIG changes starting Jan 1, 2025, and if you don’t prep now, you could pay more in taxes or face penalties. Here’s what you need to do before the year-end to avoid penalties, save money, and stay ahead of the game. 🧵
Andrew Yeolo retweeted
Elon Musk @elonmusk ·
Wise words from a recent interview with @geoffreyhinton, one of the smartest people in the world regarding AI
Andrew Yeolo retweeted
Meghan Reynolds @MeghanKReynolds ·
Heard from VC LPs this week: some 👍🏻 feedback from my talk at @Jason’s @liquiditypod conference. Sharing a slide here that resonated. GPs, if it’s hard to raise, this basic formula explains it. Complexity is so often underestimated by GPs, but LPs see right through it…
Meghan Reynolds tweet media
Andrew Yeolo retweeted
Elon Musk @elonmusk ·
“Tesla is far ahead in self-driving cars” – Jensen Huang
Andrew Yeolo @AndrewYeolo ·
@scottfarkas Phenomenal innings @scottfarkas! From pioneering SaaS PLG and Aussie tech with @mcannonbrookes, to your ethical and thoughtful leadership, and philanthropy, you've made a huge impact. All the best with your family, philanthropy and Skip!
Andrew Yeolo retweeted
Jamin Ball @jaminball ·
One of my biggest takeaways from the Llama3 announcement - data gravity is king. And access to high quality, annotated, LLM-ready data will drive which companies truly utilize AI vs those who don't.

As the number of accessible, highly performant models continues to grow (both proprietary and open source), it becomes less important to ask "which model should I use or tie myself to" and more important to ask "how do I deploy / serve / train / fine tune / monitor a model on MY OWN data?"

I don't think we'll see companies move data to the models. I think they'll move models to the data. It's easier - the gravity is with the data.

It's not just about how you train / fine tune a model initially. It's really about how you continuously re-train / fine tune the models you have in production as new data comes in, and monitor quality drift. It's much easier to do this if the models are sitting alongside the data. Data isn't stagnant, and models shouldn't be either.

As many have shouted from the rooftops before, you don't have an AI strategy without a data strategy. And I think all of what I've just said is a big reason why.

Data platforms have a lot of potential here. A big beneficiary here are the hyperscalers. Databricks has also done a killer job. Snowflake is playing catchup, but has probably done more than people give credit for.

So I think when looking to find big AI winners, a key question is "who has the data?"
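The continuous re-train / drift-monitor loop the tweet describes can be sketched in miniature. Every name below (fine_tune, drift, the threshold) is a hypothetical stand-in for illustration, not any real library's API; the "model" is a single number so the control flow stays visible.

```python
# Sketch of "move the model to the data": as new batches arrive where the
# data lives, check drift and fine-tune only when quality has slipped.
import random

random.seed(0)  # deterministic toy data

def fine_tune(model, batch):
    """Hypothetical update: nudge the 1-parameter 'model' toward the batch mean."""
    target = sum(batch) / len(batch)
    return model + 0.5 * (target - model)

def drift(model, batch):
    """Hypothetical drift metric: gap between the model and live data."""
    return abs(model - sum(batch) / len(batch))

model = 0.0            # starts far from the data distribution
DRIFT_THRESHOLD = 0.3  # arbitrary illustrative cutoff

for step in range(5):
    # New production data arrives in place; the model comes to it.
    batch = [random.gauss(1.0, 0.1) for _ in range(32)]
    if drift(model, batch) > DRIFT_THRESHOLD:
        model = fine_tune(model, batch)  # re-train alongside the data
```

After a few batches the model settles near the data's mean and updates stop firing, which is the point of the pattern: monitoring and re-training are one loop sitting next to the data, not a separate pipeline that ships data to the model.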
Andrew Yeolo retweeted
Startup Archive @StartupArchive_ ·
Peter Thiel: “Buzzwords are a tell, like in poker, that the company is bluffing and undifferentiated” “I think all trends are overrated… if you hear the words ‘big data’ and ‘cloud computing’, you need to run away as fast as you possibly can. Just think ‘fraud’ and run away… All these buzzwords are a tell—like in poker—that the company is bluffing and undifferentiated… I think the things that are underrated are the ones where there are no buzzwords and it doesn’t actually fit into any pre-existing categories” Video Source: @twistartups