Wayne Robins

3.7K posts

@wayner

Father, founder @paveteam, formerly @google @bcg @twitter.

San Jose, CA · Joined December 2008
3.3K Following · 952 Followers
Wayne Robins @wayner
@dpatil I'm following up on my tweet, LinkedIn message, and email about your intern idea. I've been engaged in this space for 6 years, delivered meaningful results, and would love to help program manage it.
0 replies · 0 reposts · 0 likes · 18 views
Wayne Robins @wayner
@karenfinerman you and @timseymour are my favorite panelists. Do y'all follow @benthompson or his close friends, the best tech analysts IMHO? Would love to see him interact with y'all, alongside Dan Ives, Gene Munster, and any of the other regulars.
0 replies · 0 reposts · 0 likes · 22 views
Wayne Robins reposted
Jacob Bank @jebank
🚀 Today we’re launching a brand new @relay! If you want an AI team that works for you, now’s the time to start. Here’s what makes our AI agents different:

Anyone can create agents. You work with AI agents just like you work with people. You ask your agent to do things for you and give it feedback to get better. No code, JSON, terminal, or MCP needed.

Agents are predictable and reliable. You teach your agent skills with simple prompts, and it turns those into easily understandable, consistent workflows. Plus, your agent can keep a human in the loop for anything high stakes. No random actions you can’t explain.

To try it out, head over to @relay and get started for free. I can’t wait to hear your feedback. p.s. like and RT to get a bonus code for 500 extra AI credits per month for a year. 🙏
22 replies · 46 reposts · 116 likes · 7.5K views
Wayne Robins @wayner
@XcelEnergyCO power is out in Greenwood Village, as is your phone system, website, and outage map. Running a world-class operation over there. When will this be fixed?
1 reply · 0 reposts · 2 likes · 419 views
Wayne Robins @wayner
@levie Can you share use-case examples to bring this to life? If it's too proprietary, can you talk thematically vs. specifics?
0 replies · 0 reposts · 1 like · 29 views
Aaron Levie @levie
5 years from now, probably 95% of the tokens used by AI agents will be used on tasks that humans never did before.

I just met with about 30 enterprises across 2 days and a dinner, and some of the most interesting use cases that keep coming up for AI agents are about bringing automated work to areas that companies would not have been able to apply labor to before.

Most of the world hasn’t quite caught on to this point yet. We imagine AI as dropping into today’s workflows and just taking what we already do and making it more efficient by 20% or something. Yet most companies realize that most of the time they’re doing far less than they could because of the cost or limited capacity of talent.

This shows up in different ways across every industry. In real estate it’s ideas like being able to read and analyze every lease agreement for every trend and business opportunity possible. In life sciences it’s being able to rapidly do drug discovery or improve quality by looking through errors in data. In financial services it’s being able to look through all past deals and figure out better future monetization. In legal it’s being able to execute on contracts or legal work for previously unprofitable segments or projects.

And these are just the Box AI use cases that deal with documents and content. The same is going to be true in coding, where companies tackle software projects they wouldn’t have done before. Security of all systems and events they couldn’t get to. And so on.

If you are working on AI agents right now, the big opportunity is to bring enterprises “work” for problems that they couldn’t do before because it was nearly impossible to afford or scale. And if you’re deploying AI agents in an enterprise, consider what things you’d do more of (or differently) if the cost and speed of labor became 100X cheaper and faster. This is going to get you the real upside of automation.
155 replies · 240 reposts · 2.1K likes · 794.1K views
Wayne Robins @wayner
@lyft I have 2 Chase Sapphire promos this month that weren't applied to recent rides. Your AI support wasted my time. Now I've been waiting for 10 minutes for a terrible agent experience. Can you please DM me to expedite fixing this error and help prevent it from happening again?
2 replies · 0 reposts · 0 likes · 131 views
Grant Lee @thisisgrantlee
Gamma crossed $50M ARR with 28 employees and more cash in the bank than we had raised ($23M). In hindsight: we got here because we ignored common VC advice. Examples of glaringly bad advice that you should ignore to save yourself $10M+ and years of time, like we did at Gamma:
229 replies · 254 reposts · 2.8K likes · 479K views
Wayne Robins @wayner
Does anyone know why the 10yr yield round-tripped and closed at 4.5% today?
1 reply · 1 repost · 6 likes · 106 views
Wayne Robins reposted
Artificial Analysis @ArtificialAnlys
Llama 4 independent evals: Maverick (402B total, 17B active) beats Claude 3.7 Sonnet, trails DeepSeek V3 but is more efficient; Scout (109B total, 17B active) is in line with GPT-4o mini, ahead of Mistral Small 3.1.

We have independently benchmarked Scout and Maverick as scoring 36 and 49 in the Artificial Analysis Intelligence Index respectively.

Key results:
➤ Maverick sits ahead of Claude 3.7 Sonnet but behind DeepSeek’s recent V3 0324
➤ Scout sits in line with GPT-4o mini, ahead of Claude 3.5 Sonnet and Mistral Small 3.1
➤ Compared to DeepSeek V3, Llama 4 Maverick has ~half the active parameters (17B vs 37B) and ~60% of the total parameters (402B vs 671B). This means that Maverick achieves its score much more efficiently than DeepSeek V3. Maverick also supports image inputs, while DeepSeek V3 does not
➤ Both Maverick and Scout place consistently across evals, with no obvious weaknesses across general reasoning, coding, and maths

Key model details:
➤ The Llama 4 ‘herd’ includes Scout, Maverick, and Behemoth; all are large Mixture of Experts (MoE) models, the first time that Meta has released MoE models
➤ Behemoth (2T total, 288B active) is not being released today, but Meta discloses that it was used for co-distillation into Scout and Maverick
➤ Multimodal: all three models take text and image input, natively trained on image inputs (this likely varies from Meta’s adapter approach in Llama 3.2). They can take multiple images, and Meta claims they should work well with up to 8 images; stay tuned for visual reasoning benchmarks next week!
➤ Pricing: we’re tracking 6 providers and are benchmarking a median price of $0.24/$0.77 per million input/output tokens for Maverick, and $0.15/$0.4 for Scout, lower than DeepSeek V3 and >10X cheaper than OpenAI’s leading GPT-4o endpoint
➤ Long context: Maverick supports a 1M-token context window, Scout supports a 10M-token context window; we will be monitoring availability of long-context capabilities across providers and testing in greater detail in the coming days
➤ Style: in our early testing we have noticed responses are a lot more structured and uniform in their approach across prompts

Key training details:
➤ Pre-training: Maverick is trained on ~22T tokens, and Scout on ~40T; Meta also shared that the overall training dataset was >30T tokens (more than double Llama 3’s 15T; Llama 2 was only 1.8T) of more diverse data than previously (text, images, video stills)
➤ Post-training: involved supervised fine-tuning, online reinforcement learning (RL), and direct preference optimization techniques to optimize performance. Meta shared that they achieved “a step change in performance” by filtering the dataset to focus on ‘hard’ prompts, which improved coding, math, and scientific reasoning capabilities
➤ Meta disclosed that training consumed 1,999 tons of CO2; this represents ~99,950 oak tree-years 🌲

One note from our evals: our results for multi-choice evals (MMLU Pro and GPQA Diamond) are materially lower than Meta’s claimed results. The key driver of the difference appears to be that Scout and Maverick frequently fail to follow our answer-formatting instruction. We request an answer format of ‘Answer: A’. Full details of our prompts and answer-extraction techniques are available in our methodology disclosure. Further analysis below 👇
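The answer-formatting failure described above can be illustrated with a minimal extraction sketch. This is a hypothetical illustration, not Artificial Analysis's actual harness: the tweet only says the requested format is 'Answer: A', so the regex and the A–D choice range here are assumptions.

```python
import re

def extract_answer(response: str):
    """Look for the requested 'Answer: A' format in a model response.

    Returns the choice letter if the model followed the formatting
    instruction, or None when it did not -- the failure mode that
    depresses multi-choice scores in the evals above.
    """
    match = re.search(r"Answer:\s*([A-D])\b", response)
    return match.group(1) if match else None

# A compliant response yields an extractable answer.
compliant = extract_answer("Reasoning about the options... Answer: B")  # 'B'

# A response that ignores the requested format scores zero even if
# the underlying choice is correct.
noncompliant = extract_answer("The correct option is (B).")  # None
```

A stricter or looser extractor (e.g. falling back to scanning for a bare parenthesized letter) would shift scores, which is why the methodology disclosure the tweet mentions matters when comparing against Meta's claimed numbers.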
27 replies · 88 reposts · 633 likes · 127.2K views
Wayne Robins @wayner
@MelissaLeeCNBC I want to win the 2025 acronym challenge. Can we build a fantasy acronym challenge for you?
0 replies · 0 reposts · 0 likes · 16 views
Wayne Robins @wayner
@MelissaLeeCNBC Do you prefer AI tips for megacaps, chips, and startups as Twitter DMs or emails?
0 replies · 0 reposts · 0 likes · 11 views
David Hoang @davidhoang
I’m at the phase of my tech career where I can’t play above the rim and toward the basket. Officially in my fadeaway jump shot era to keep going.
4 replies · 0 reposts · 21 likes · 3K views
liam @xyzfennell
drop your favorite design studio websites below
72 replies · 33 reposts · 787 likes · 103.3K views
Wayne Robins reposted
Suhail @Suhail
first time founders care about product
second time founders care about distribution
third time founders care about retention
81 replies · 207 reposts · 2.3K likes · 207.9K views
Nichole Wischoff @NWischoff
Looking for a payroll provider/good medical - who do folks love?
25 replies · 1 repost · 22 likes · 18.9K views