Chris Olson

3.6K posts

@chrisolson

@mach1ai 🇺🇸 Former: @Traceup, @ReplicatedHQ, @Amplifyla, USC, Iowa.

Austin, Texas · Joined August 2007
366 Following · 2K Followers
Chris Olson@chrisolson·
LGA tower frequency is 118.7. JFK has two tower frequencies, 119.1 and 123.9. I don't think this was a turn-of-the-dial mistake; I think the pilot used muscle memory and just hit the wrong muscle.
Chris Olson@chrisolson·
Tesla is putting on quite the show in downtown Austin with the announcement of Terafab.
Chris Olson@chrisolson·
Operational AI has to move across systems, maintain context, and decide what happens next. @mach1ai powers complex real-world processes with automation that doesn’t just run. It thinks.
Chris Olson@chrisolson·
During the holidays, Instagram should offer a feature where you can scroll nothing but ads for gift inspiration.
Chris Olson@chrisolson·
I'm a lifelong Democrat. But Gavin Newsom’s recent tactics remind me of a marketing quote: when you start copying your competitor, you end up looking like their asshole.
Chris Olson@chrisolson·
@GrantM Maybe YouTube hasn’t been optimized for playback being at 10x ;)
Grant Miller@GrantM·
Why does the YouTube app make my phone heat up so much? It’s been happening for months. Same videos played on Spotify are fine. It’s so odd that such a popular app would have this problem. I assume it’s my hardware, but it’s not that old.
Chris Olson@chrisolson·
Isn’t this just like when FB rolled out a redesign? History shows it makes noise, but never stops more people from using it. That’s a pretty good place for OpenAI to be.
Sam Altman@sama

If you have been following the GPT-5 rollout, one thing you might be noticing is how much of an attachment some people have to specific AI models. It feels different and stronger than the kinds of attachment people have had to previous kinds of technology (and so suddenly deprecating old models that users depended on in their workflows was a mistake).

This is something we’ve been closely tracking for the past year or so but still hasn’t gotten much mainstream attention (other than when we released an update to GPT-4o that was too sycophantic). (This is just my current thinking, and not yet an official OpenAI position.)

People have used technology including AI in self-destructive ways; if a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that. Most users can keep a clear line between reality and fiction or role-play, but a small percentage cannot. We value user freedom as a core principle, but we also feel responsible in how we introduce new technology with new risks.

Encouraging delusion in a user that is having trouble telling the difference between reality and fiction is an extreme case and it’s pretty clear what to do, but the concerns that worry me most are more subtle. There are going to be a lot of edge cases, and generally we plan to follow the principle of “treat adult users like adults”, which in some cases will include pushing back on users to ensure they are getting what they really want.

A lot of people effectively use ChatGPT as a sort of therapist or life coach, even if they wouldn’t describe it that way. This can be really good! A lot of people are getting value from it already today. If people are getting good advice, leveling up toward their own goals, and their life satisfaction is increasing over years, we will be proud of making something genuinely helpful, even if they use and rely on ChatGPT a lot.

If, on the other hand, users have a relationship with ChatGPT where they think they feel better after talking but they’re unknowingly nudged away from their longer term well-being (however they define it), that’s bad. It’s also bad, for example, if a user wants to use ChatGPT less and feels like they cannot.

I can imagine a future where a lot of people really trust ChatGPT’s advice for their most important decisions. Although that could be great, it makes me uneasy. But I expect that it is coming to some degree, and soon billions of people may be talking to an AI in this way. So we (we as in society, but also we as in OpenAI) have to figure out how to make it a big net positive.

There are several reasons I think we have a good shot at getting this right. We have much better tech to help us measure how we are doing than previous generations of technology had. For example, our product can talk to users to get a sense for how they are doing with their short- and long-term goals, we can explain sophisticated and nuanced issues to our models, and much more.

Chris Olson@chrisolson·
GPT-5 in voice mode describes my custom instructions before it actually starts answering my questions. “Absolutely, let’s dive into that with a bit of that Ben Thompson meets Adam Liptak meets Heather Cox Richardson kind of vibe.” Strange.
Chris Olson@chrisolson·
@benhylak I always add "Reduce the number of adjectives you use when responding" to my custom instructions. I don't know exactly when it happened, but all of these models have become quite verbose...
ben (is hiring engineers)@benhylak·
so far, i find the output of grok 4 to be long, winding and a little incoherent. there are often really good bits and chunks. maybe there's something about the writing style not sticking for me. it sounds really good, but often not that useful?
Chris Olson@chrisolson·
MCPs are reinforcing that prompt engineering will be around for a while, right? Or maybe they can only be reliably used with good reasoning models.
Chris Olson@chrisolson·
The Essential Role of Operators in AI Adoption

The dawn of AI agents has prompted discussions about their role in the future of work. While some imagine autonomous systems handling complex tasks independently, practical deployment shows a different reality: businesses need skilled operators to oversee AI agents.

The greatest businesses deploying AI capabilities will be those that adopt technology that helps them effectively oversee and manage their AI to achieve success, rather than simply releasing autonomous agents with minimal oversight. The goal is transforming businesses through AI, moving beyond merely artificial intelligence toward artificial capabilities, where AI is already smart, but operators must enable it to be truly capable.

Understanding how AI agents work in business reveals they need attention. Working with these agents should feel like managing a person. The operator becomes the point of contact between your business and AI models, translating business needs into AI actions. Rather than autonomous behavior, it's a sophisticated form of delegation and instruction, demanding human intelligence to formulate and refine.

A collaborative cycle emerges where AI can recognize its shortcomings, but humans must interpret these signals and adjust the AI's parameters. The power of this model lies in its economics: one skilled operator can oversee AI agents that accomplish what previously required entire teams, creating efficiency gains that transform businesses.

This points to a clear conclusion: the future of AI in business is not one of manager-less operations, but rather a dynamic partnership where skilled operators guide AI to achieve unprecedented scale. This approach may not be what everyone believes, but I'm convinced it's the path that will succeed long term.

The market will ultimately decide who's right, but my experience shows that businesses investing in technology for their operators to oversee, manage, and continuously improve their AI capabilities will be the true winners in the AI era. The future belongs to those who embrace this symbiotic relationship between operators and AI systems.
Chris Olson@chrisolson·
I got the opportunity to share my experience deploying AI agents across Trace's operations, transforming processes that paved the way for @mach1ai's foundation. Thanks for the great conversation @jjacobs22! youtu.be/kVcn9acgpuU?si…
Marc Campbell@marccampbell·
When did Grok 3 become available in the API? I can't believe I missed this.
Chris Olson@chrisolson·
@jonfavs I know this is what you were getting at with the reference to the 8th, but just to really hammer the point home: it feels like either way you look at it, it’s both cruel and/or unusual…
Jon Favreau@jonfavs·
Yeah, so here's why the Constitution doesn't allow the state to round up Americans and send them to be tortured in a foreign gulag because they lit cars on fire:

1) The 14th amendment means the state can't strip you of citizenship - whether naturalized (the case you cite) or U.S. born (post-14th, Congress did not have the power to strip citizenship from anyone born here). Deportation is, by definition, for non-citizens.

2) The 6th amendment means the state can't imprison you without due process, which at the very least includes a trial (courts have ruled this also covers non-citizens)

3) The 4th amendment means the state can't just bust down your door without a warrant because they suspect you may have a bad tattoo

4) The 8th amendment means the state can't send you to a brutal dictator's prison where inmates have been tortured and killed just because you vandalized a car made by the president's top advisor

Also, grow the fuck up. You're 48 and you talk like a 5-year-old who just learned his first bad word
Chamath Palihapitiya@chamath

You’re probably not a total retard so you know that prior to Afroyim v Rusk (1967), Congress had broad power to strip citizenship. You may want to brush up on your Con Law before this issue is revisited and you can do a watercolor amicus brief or whatever chicken scratches you are capable of.

Chris Olson@chrisolson·
@Bolt__AI What is the secret behind the document analysis for o1? Is it using OCR to process the document and putting it into context?
BoltAI@Bolt__AI·
BoltAI v1.23.0 released ✨

Some notable changes:
- The o1 series models now support Document Analysis
- Estimate token & cost for Gemini models
- Reworked the chat input field: faster and better
- Other bug fixes

Update when you can ✌️
Daniel Nguyen@daniel_nguyenx·
Claude Prompt Caching is amazing 🤯

When sending a subsequent message:
• without cache: uses 35k input tokens, costs ~$0.1
• with cache: uses just 497 tokens, costs ~$0.01 (cost for cache reads: 35k x 0.3 / 1M = $0.01)

Available in @Bolt__AI v1.18.3 ✌️
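The cache arithmetic in the tweet can be sketched in a few lines of Python. The per-token rates below are assumptions based on Anthropic's published Claude 3.5 Sonnet pricing at the time ($3 per 1M input tokens, with cache reads billed at 10% of that); the token counts are the ones quoted in the tweet.

```python
# Rough cost comparison for a cached vs. uncached follow-up message.
# Rates are assumptions (Claude 3.5 Sonnet-era pricing, in USD):
INPUT_RATE = 3.00 / 1_000_000       # $ per regular input token
CACHE_READ_RATE = 0.30 / 1_000_000  # $ per token read from cache (10% of input)

def message_cost(cached_tokens: int, fresh_tokens: int) -> float:
    """Cost of one request: cached prefix reads plus freshly-sent tokens."""
    return cached_tokens * CACHE_READ_RATE + fresh_tokens * INPUT_RATE

# Without cache: all 35k context tokens billed at the full input rate.
without_cache = message_cost(0, 35_000)      # 35_000 * $3/1M ≈ $0.105
# With cache: the 35k-token prefix is a cheap cache read; only ~497 new tokens
# are billed at the full rate.
with_cache = message_cost(35_000, 497)       # ≈ $0.0105 + $0.0015 ≈ $0.012

print(f"without cache: ${without_cache:.3f}")
print(f"with cache:    ${with_cache:.4f}")
```

This matches the tweet's numbers: the cache reads alone come to 35k × 0.3 / 1M ≈ $0.01, roughly a 10x saving on the repeated context.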
Dave@_heydave·
Just found something interesting in my Finder after the latest @NotionHQ update 👀 Anyone else?
Chris Olson@chrisolson·
Is there a way to reset the X "algo"? It has become nothing but TikTok/Reel videos. I know some might say, "well, it's because that's what you're engaging in." But it's really not; I feel like if I watch one video or get sent one in a DM, it's all I see.