quentinclark

292 posts

@quentinclark

MD at General Catalyst, former CTO Dropbox, SAP, Microsoft product exec. Host of Equivalent to Magic podcast. Listen here: https://t.co/pCV47nnork

San Francisco, CA · Joined June 2008
174 Following · 2.4K Followers
quentinclark retweeted
Johannes Landgraf@jolandgraf·
love the microsite from @loujaybee touching on the false summit of coding agents, and how to address the system bottlenecks standing between you and a self-driving codebase: background-agents.com
quentinclark retweeted
Martian@withmartian·
$1,000,000 to understand how LLMs write code. Announcing: The Martian Interpretability Challenge. Understanding the inner workings of LLMs is the greatest scientific challenge of our age. Let's solve it. Apply here: withmartian.com/prize 🧵👇
quentinclark retweeted
Byungkyu Park@byungkyu_p·
We're ready to get steamrolled @sama. Automate repetitive admin work across emails, spreadsheets, CRMs, and more with just a prompt. Available now @ maton.ai
quentinclark retweeted
Abhishek Bhardwaj@abshkbh·
Today, I'm excited to launch Arrakis: an open-source and self-hostable sandboxing service designed to let AI Agents execute code and operate a GUI securely. GitHub: github.com/abshkbh/arrakis Watch Claude code a live Google Docs clone using Arrakis. Having a VM sets it free -🧵
quentinclark retweeted
Hemant Taneja@htaneja·
.@GeneralCatalyst has raised ~$8Bn of new capital, across core VC, Creation strategy, and SMAs, to invest in the most ambitious entrepreneurs driving transformation and resilience in AI, Defense, Climate & Energy, Industrials, Healthcare, and FinTech: generalcatalyst.com/stories/fundxii
quentinclark retweeted
General Catalyst@generalcatalyst·
.@codeiumdev eliminates coding tedium, enabling developers and organizations to dream bigger. By processing 100 million lines of code in an instant, Codeium’s generative AI-powered platform makes coding a superpower for enterprises like our portfolio company, @anduriltech.

Need to adjust all call sites and queries to conform to the semantics of the new API signature or schema? Cortex is the reasoning engine that will get this done so developers can focus on more creative tasks. Forge, Codeium’s code review assistant, expedites code review cycle times and enhances code review culture.

“The future of coding isn’t just about writing lines of code faster—it’s about enabling developers to think bigger, push boundaries, and achieve the extraordinary.” -@_mohansolo, Codeium’s Co-Founder & CEO

We’re proud to double down on our partnership with Varun, Douglas, and the entire team by leading Codeium’s $150M Series C raise.

More from the Codeium team → codeium.com/blog/series-c-…
@TechCrunch exclusive → techcrunch.com/2024/08/29/git…
quentinclark retweeted
Atila@atiorh·
I❤️‍🔥Open + Diffusion + Transformer = Stable Diffusion 3
argmax@argmax

On-device Stable Diffusion 3

We are thrilled to partner with @StabilityAI for on-device inference of their latest flagship model! We are building DiffusionKit, our multi-platform on-device inference framework for diffusion models.

Given Argmax's roots in Apple, our first step was to bring Stable Diffusion 3 to Mac. We have optimized the memory consumption and latency for both MLX and Core ML. We will open-source this project alongside Stability AI's upcoming open-weights release.

Until then, we will share inference performance data in the coming days and work on compressing the models. Don't hesitate to reach out if you want your diffusion models on-device.

quentinclark retweeted
Martian@withmartian·
🎉 Martian was named one of the top 100 AI companies in the world by @CBinsights! 🎉 It’s an honor to join the likes of @OpenAI, @databricks, @perplexity_ai, and others in 2024’s AI 100.
quentinclark retweeted
Niko Bonatsos@bonatsos·
1/ There hasn't been a better time to be an early stage consumer founder in many many years... Why now? - Gen Z'ers are just different than those before them and their time has come to create the world they want to live in.
Martian@withmartian·
🚀Introducing The LLM Inference Provider Leaderboard leaderboard.withmartian.com - a live-updated, unbiased eval of API Inference products.

Featuring: @abacusai, @anyscalecompute, @DeepInfra, @DecartAI, @FireworksAI_HQ, @LeptonAI, @togethercompute, @perplexity_ai, @replicate, as well as @OpenAI and @AnthropicAI models

For each provider's Mixtral-8x7B and Llama-2-70B-Chat public endpoint, we benchmark cost, rate limit, P50 & P90 of throughput & TTFT, and average daily collections over time for long-term tracking.

At Martian, we route each API request to the best LLM to reduce cost, reduce latency, and get the best performance. So finding the best providers is an important problem for us. We found that there's a > 5x cost difference, > 6x throughput variation, and even larger rate limit discrepancies among providers!

Choosing between different LLMs is only part of the equation -- the selection of different inference endpoints is also crucial to get the best performance for your use case. See highlights of provider performance in🧵👇
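[Editor's note: the leaderboard above reports P50 & P90 of TTFT per endpoint. As a minimal illustration of what such a summary computes — the function name, sample data, and interpolation method here are assumptions, not Martian's published methodology — a sketch:]

```python
import statistics

def summarize_ttft(samples_ms):
    """Summarize time-to-first-token samples (in milliseconds) the way a
    provider leaderboard might: median (P50) and tail (P90) latency.
    Uses the stdlib's linearly interpolated quantiles; a real benchmark
    may define its percentiles differently.
    """
    if not samples_ms:
        raise ValueError("need at least one sample")
    ordered = sorted(samples_ms)
    # statistics.quantiles with n=10 returns the nine cut points P10..P90,
    # so index 8 is the 90th percentile.
    deciles = statistics.quantiles(ordered, n=10)
    return {"p50": statistics.median(ordered), "p90": deciles[8]}

# Hypothetical TTFT measurements from one provider endpoint:
observations = [120, 135, 150, 160, 180, 200, 240, 300, 450, 900]
print(summarize_ttft(observations))
```

Note how a single 900 ms straggler barely moves the P50 but dominates the P90, which is why leaderboards report both.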
quentinclark retweeted
Muddy@feelmuddy·
On April 16, 2024, @sfsailingclub is introducing Muddy.