Thomas Tobin

525 posts

@tomjtobin

industrializing creativity one sentence at a time. building https://t.co/btWAPsdMuB to bring generative ai to the sales process

Joined April 2009
180 Following · 705 Followers
Thomas Tobin @tomjtobin
A couple of times I’ve tried to get sparklines into a product, where a small way of showing historical context makes sense. Then, as a PM, you get arguments over charting libraries and developer time, and you end up with a compromise. Now, the PM drives the agent until it’s done.
Thomas Tobin tweet media
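As an aside, the sparkline idea itself is tiny to implement. A minimal Python sketch (illustrative only, not the product code from the tweet) that maps a series of values onto Unicode block characters:

```python
# Render a list of numbers as an inline unicode sparkline,
# the kind of small historical-context chart the tweet describes.
BARS = "▁▂▃▄▅▆▇█"

def sparkline(values):
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid division by zero for a flat series
    # Scale each value into one of the 8 bar heights.
    return "".join(BARS[int((v - lo) / span * (len(BARS) - 1))] for v in values)
```

For example, `sparkline([0, 7])` returns `"▁█"`.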
0
0
0
31
Thomas Tobin @tomjtobin
I am low-key convinced MCP was a plot to drive up LLM token usage by 50x
1
0
1
48
Thomas Tobin @tomjtobin
@levie Do you think this ends the “scientist PM” who tests and optimizes their way to success in small experiments? Can you succeed that way given the many possible variables in how to do a job or task, what to deliver as context, and the ways those choices could impact the AI?
0
0
2
44
Aaron Levie @levie
When doing product management for AI agents, the biggest shift is that the user you think about making successful is the agent. They are effectively your new “customer”. And the whole game is to figure out how to get the agent the most relevant context possible to execute their task, which is why context engineering is such a big deal.

By default, AI agents know absolutely nothing about the work they're supposed to do. You have to take this raw form of intelligence and mold it directly to the workflow you're trying to automate. Imagine telling a super smart person (with no domain expertise by default) who just joined your company that you want them to review a contract in line with all your other contracts, or write code for a new product, or come up with a medical diagnosis for a patient, or estimate the risk of a particular initiative. That new person has near-infinite permutations of things they *could* go off and do to solve that task. They would likely need instructions on how to perform the task, a clear set of goals and “rules” you expect the output to follow, and then they would need to talk to colleagues, review internal documentation, review past work in a similar area, and so on.

Well, the issue is the same for the AI agent. Except they're at a huge disadvantage: they can't "easily" talk to colleagues, they often don't have access to the same resources a person does by default, and they don't know by default which part of their knowledge to tap into for a task. But most importantly, they have an inherent limit on the information they can consume at each step of the process (the size of the context window). And when they get too much data they perform worse at the task (context rot). Entire companies will win or lose based on their ability to get the *right* context to the agent.

Thus, AI agent PMs will require a very different type of expertise:
- Deeply understand the domain you're building agents for. In an ideal world you have actually worked in that space; if you haven't, a good chunk of your time should be spent studying the actual people who work in that space. Understanding every single step of their work is going to be critical.
- Think through essentially what a human would need to know to do the task across instructions, rules, existing data sources, best practices, etc. Then figure out how you actually get this data to the model (context engineering).
- Figure out what end-user UX and features are needed to supply the agent with the right context to perform the task: things like task queues, connecting to data sources, how to re-prompt the user during a long-running task, how the user can review the output work and make modifications, and so on.
- Run evals on the AI agents with every tweak of the agent's instructions, every model improvement, and any other variable that changes in the process. Figuring out how these changes affect real-world customer environments is especially critical.

Lots of work changes for engineers and PMs in a world of building AI agents, and we’re just at the start of thinking through what this looks like.
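The context-assembly loop described above can be sketched in a few lines of Python. Everything here (the function names, the word-count token proxy, the `score` callback) is a hypothetical illustration of the idea, not an actual product API:

```python
# Hypothetical sketch of "context engineering": pack the most relevant
# material for an agent into a fixed token budget.
def build_context(task, instructions, rules, documents, budget_tokens, score):
    # Instructions and rules are non-negotiable: always included.
    parts = [instructions, rules]
    used = sum(len(p.split()) for p in parts)  # crude word-count token proxy
    # Add the documents most relevant to the task until the budget is spent;
    # overfilling the window degrades performance ("context rot").
    for doc in sorted(documents, key=lambda d: score(task, d), reverse=True):
        cost = len(doc.split())
        if used + cost > budget_tokens:
            continue
        parts.append(doc)
        used += cost
    return "\n\n".join(parts)
```

A toy relevance score could be word overlap between the task and a document; in practice this is where retrieval, rerankers, and domain knowledge do the real work.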
40
125
605
64.5K
Thomas Tobin @tomjtobin
@shcallaway Exposure Package™ (for your LinkedIn)
Title: “Founder” (title case)
Cap table status: not that
LinkedIn polish: we provide a monthly founder-thought-leadership prompt
Auto-endorsements: “Product Strategy,” “Zero-to-One,” and “Founder Energy” from at least three board members
0
0
1
24
Sherwood @shcallaway
Someone please remind me to do an April Fools’ post about hiring for a new role: “Founder.” No need to start your own company when you can be a Founder (in name only, of course) at mine
1
0
3
345
Thomas Tobin @tomjtobin
@shcallaway Isn't it:
- 1970s: Single Terminal
- 1980s: Multiple Terminals
- 2000s: Terminal with 5 buttons
- 2010s: Multiple flying Terminals
- 2025: Single Terminal
0
0
1
18
Sherwood @shcallaway
Evolution of Computer Interfaces
- 1970s: Terminal
- 1980s: GUI
- 2000s: Touch Screens
- 2010s: AR/VR
- 2025: Terminal
6
0
17
701
alex @shitshowdotinfo
Printing this out and pasting it above every thermostat at work
alex tweet media
94
712
9.9K
618.9K
Simon @simonmxtthews
@DJSnM i think it’s tumbling :(
1
0
31
2K
Scott Manley @DJSnM
And for the first time in a long time, Starship is in its planned (sub)orbit
Scott Manley tweet media
24
59
1.9K
77.5K
Defender @DefenderOfBasic
@KevinF_26 Oh yes, I see what you mean! I think you're right
2
0
2
152
Defender @DefenderOfBasic
no seriously, why can't my mom right-click on a file and have it be on the internet????
Defender @DefenderOfBasic

@CarltonMackall @glitch in my ideal world, every OS has a built-in "start a web server". Double-click on a file, now it's hosted & publicly available, off your laptop, at your IP address. Why isn't this how computers work???

29
4
260
134.7K
Dennis Wingo @wingod
This should win science cartoon of the year award!
Dennis Wingo tweet media
33
408
2.3K
121.1K
Peter Yang @petergyang
What's the best alternative to Calendly?
777
46
638
653.8K
Thomas Tobin @tomjtobin
@Kellblog Does this work the other way around? When doing European expansion, do US founders and operators feel there’s no experience?
1
0
1
49
Peter Kazanjy @Kazanjy
What were the best-timed ZIRP exits?
- Divvy
- Mailchimp
- Chorus
- SalesLoft
- Drift
- Slack
- Afterpay
Who else?
94
5
321
173.8K
Luiza Jarovsky, PhD @LuizaJarovsky
🚨 BREAKING: Claude AI did not pass my privacy & reputational harm test. I gave it my LinkedIn profile and asked it for my bio. It's nearly 85% wrong, including claims that I worked at Google and Anthropic. To help protect against privacy harm, it should refrain from providing AI-generated bios or reports about people. Please revise.
Luiza Jarovsky, PhD tweet media
6
16
83
9.3K
Thomas Tobin @tomjtobin
@Kellblog Two other milestones:
Google using page links as the way to rank page content: 1996
OpenAI giving up on links and using Reddit posts with >3 karma as a link-quality signal: 2019 (noted in gregoreite.com/drilling-down-…)
0
2
2
504
Dave Kellogg @Kellblog
The assassination of the hyperlink continues
Born: 1966
Popularized: 1990 (invention of the WWW browser)
Died: 2024
Murdered by: Twitter, LinkedIn, et alia.
Dave Kellogg tweet media
7
2
26
2.6K
Thomas Tobin @tomjtobin
@RuneKek @ednewtonrex The other part of this is that companies don’t do this in a vacuum. Somebody is going to drive this model, claim credit, and get paid. Those people still get paid; they just have more tools.
0
0
1
234
Rune @RuneKek
@ednewtonrex What if it’s a company that scales by hiring more humans?
6
0
8
8.7K
Ed Newton-Rex @ednewtonrex
OpenAI just released a model that can generate 1-minute videos. You simply cannot argue that these models don't / won't compete with the content they're trained on, and with the human creators behind that content.

What is the model trained on? Did the training data providers consent to their work being used? The total lack of info from OpenAI on this doesn't inspire confidence.

Across the AI industry, people's work is being exploited without consent to build products that compete with that work. This must be controlled by regulators.
OpenAI @OpenAI

Introducing Sora, our text-to-video model. Sora can create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions. openai.com/sora

Prompt: “Beautiful, snowy Tokyo city is bustling. The camera moves through the bustling city street, following several people enjoying the beautiful snowy weather and shopping at nearby stalls. Gorgeous sakura petals are flying through the wind along with snowflakes.”

107
812
4.3K
1.2M