Adrian Chan
@gravity7
11.7K posts
AI, UX, Social Interaction Designer, ex CX w Deloitte Digital. Media, philosophy, guitar, cycling, film. Stanford.

San Francisco, CA · Joined March 2007
3K Following · 4.1K Followers
Pinned Tweet
Adrian Chan@gravity7·
Thoughts on chain-of-thought reasoning, and design considerations for LLMs and generative AI. "AI: Beyond chain of thought" link.medium.com/ymWJXop6kIb
@jason@Jason·
The hard truth: the Iran war will dwarf the cost of Biden *loaning* Ukraine weapons and having @ZelenskyyUa pay us back in rare earths (a deal that Trump deftly negotiated). Looks like $200b at a minimum.
All The Right Movies@ATRightMovies·
What was your movie introduction to Harvey Keitel?
Adrian Chan@gravity7·
@fchollet That said, for all of AI's linguistic competencies, it still can't communicate. It's a monological model of language and will forever be unable to engage in human, social dialog.
François Chollet@fchollet·
When the latest AI systems can't do something, there's a category of people who will immediately say, "well humans can't do it either!" Then they stop saying it when AI improves a bit.

Been hearing it for 4+ years: "humans can't reason either", "humans can't adapt to a task they haven't been prepared for", "humans can't follow instructions", "humans also suffer from hallucinations", etc.

Until 2025 I was frequently told "humans can't do ARC 1 tasks either" (in reality any normally smart human would do >95% on ARC 1 if properly incentivized). Now that AI saturates ARC 1 they've completely stopped saying this.
François Chollet@fchollet

In general I've been sensing a new current among deep learning maximalists recently, going from "our models can definitely reason" to "well, our models can't reason, but neither can humans!"

Adrian Chan@gravity7·
@LuizaJarovsky Somebody needs to do a linguistic analysis of what happens when AI is used to narratively expand the arguments and claims of a particular post and then subsequently reduce and summarize them. What is lost in degenerative translation?
roon@tszzl·
@nikitabier you’re going to get hate for this but it’s obviously the right product choice. I’m sure you have the data but I assume 99% of people open any article and close it after seeing it’s longer than a paragraph
Nikita Bier@nikitabier·
We’re rolling out summaries for Articles now. Just tap the Summarize button if you want to know if it’s worth your time to read it (or if your attention span is 12 seconds).
Adrian Chan@gravity7·
@pmarca Introspection works if what you’re looking at isn’t a mirror
All The Right Movies@ATRightMovies·
Which movie character would you not want to mess with?
Adrian Chan@gravity7·
@MatthewBerman @alexkehr This is highly reminiscent of the quantified self craze, social check-ins (Foursquare), the "there's an app for that" era... Not to deny the power of AI, but many of the use cases coming out of developer land are pretty niche...
Matthew Berman@MatthewBerman·
@alexkehr No regular person will have to. You just speak in natural language “I want to track my calories, help me do that”
Adrian Chan@gravity7·
Language is a virus from latent space
Adrian Chan@gravity7·
Human language is an appeal to attention that carries an implicit social contract: "a person is here, with something at stake, asking for your time." LLMs exploit this contract at scale — producing content that makes the appeal without the person, the stakes, or the sincerity. This isn't just impersonation of a particular voice. It's impersonation of the act of communication itself.
Adrian Chan@gravity7·
"Obscenity begins when there is no more spectacle, no more stage, no more theater, no more illusions, when everything becomes immediately transparent, visible, exposed in the raw and inexorable light of information and communication." — Jean Baudrillard (The Ecstasy of Communication, 1987)
Adrian Chan@gravity7·
@koylanai Meanwhile, down the hall, taste makers are being squeezed and their preferences codified into rubrics and evals for the machines to learn by...
Muratcan Koylan@koylanai·
Taste is the combination of context and algorithm. Everyone has access to the same models. The algorithm is commoditized. So the taste differential? It comes from whose knowledge you extract and how. Two people using the same LLM with different context windows produce WILDLY different outputs.

Throughout history, creative breakthroughs almost always came from someone who held two or more domains in their head simultaneously and found the connection nobody else could see. The constraint was always the bandwidth of a single human mind. That constraint is gone now. If you're a talented creative, strategist, or builder, you're no longer bottlenecked by your own memory and processing limits. You bring the direction, and the model processes and synthesizes it.

This is why I talk about AI persona embodiment, extracting tacit knowledge, writing better prompts. Your context IS the product now.

I genuinely believe we will experience and taste things we have never experienced or tasted before with the combination of human and AI. But the people who will create things we've never experienced before already have an interesting perspective. This only works when you bring something real to AI. Generic context in, generic taste out. Trash in, trash out.
Ryan Carson@ryancarson

"Great taste" is cope for humans.

Adrian Chan@gravity7·
I understand why labs might want to save $ and use synthetic users for usability etc., and some of the most rote user actions can be simulated, sure. My issue is w taking psych models like Myers-Briggs & simply extrapolating along their axes to create "diversity."

Perhaps the UX community should define personality types specific to real use cases - shopping, learning, discovery, etc. - and articulate behaviors and choices such that synthetic data can be generated that's much more accurate and predictive.

E.g. shopping: the customer who shops for bargains is not the same as the one who goes treasure hunting, or the one who responds to deals, or the one who follows influencer recommendations, etc. And these distinctions in behavior manifest in clicks and views on sites and apps, and those in turn are assessed by companies to understand who their customers are, measure loyalty, lifetime value, etc.

So many of these lab or AI researcher "UX" type shortcuts just completely ignore the actual business needs of understanding customer behaviors and interactions... (/end rant!)
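A minimal sketch of what behavior-grounded personas could look like as synthetic-data generators (persona names, action types, and probabilities are all invented for illustration, not drawn from any study):

```python
import random

# Hypothetical shopper personas defined by per-session action probabilities,
# rather than by extrapolating along Myers-Briggs-style trait axes.
PERSONAS = {
    "bargain_hunter":      {"view_sale_page": 0.80, "compare_prices": 0.70, "buy_full_price": 0.05},
    "treasure_hunter":     {"view_sale_page": 0.30, "browse_new_arrivals": 0.90, "buy_full_price": 0.40},
    "deal_responder":      {"open_promo_email": 0.90, "view_sale_page": 0.60, "buy_full_price": 0.10},
    "influencer_follower": {"click_referral_link": 0.85, "view_sale_page": 0.20, "buy_full_price": 0.50},
}

def simulate_session(persona: str, rng: random.Random) -> dict:
    """One synthetic session: which observable actions this persona took."""
    return {action: rng.random() < p for action, p in PERSONAS[persona].items()}

rng = random.Random(7)
for name in PERSONAS:
    sessions = [simulate_session(name, rng) for _ in range(1000)]
    rate = sum(s["buy_full_price"] for s in sessions) / len(sessions)
    print(f"{name}: full-price purchase rate ≈ {rate:.2f}")
```

Even this toy version makes the point: the personas separate on the same click and purchase signals companies already use to measure loyalty and lifetime value, which trait-axis extrapolation never touches.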
AVB@neural_avb·
Yeah, I get you. I've tried these approaches before, but they never feel authentic enough - coz AI doesn't feel human to talk with anyway. Gross caricatures in most cases. I do feel there's value in aggregating information over multiple diverse persona prompts though. In theory, a good model could help with high-level polls like "who are you most likely to vote for?" At scale we might find patterns, even though at a granular level they don't feel real. Plus, these things are quite cheap and fast to run compared to real human surveys, so there's always the question of affordability.
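Roughly, the aggregation idea looks like this (ask_model is a hypothetical stub standing in for any LLM call, and the persona prompts are invented for the example):

```python
import random
from collections import Counter

def ask_model(persona_prompt: str, question: str) -> str:
    """Hypothetical stand-in for an LLM call; a real run would condition the
    model on persona_prompt. Stubbed here with a random choice."""
    return random.choice(["Candidate A", "Candidate B", "Undecided"])

# Invented persona prompts; the only claim is that aggregate patterns may be
# informative even when individual answers don't feel real.
personas = [
    "You are a retired teacher in a small town.",
    "You are a 24-year-old gig worker in a big city.",
    "You are a suburban parent of three.",
]

question = "Who are you most likely to vote for?"
tally = Counter(ask_model(p, question) for p in personas for _ in range(100))
print(tally.most_common())
```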
Adrian Chan@gravity7·
Just read it, good find! I do wonder whether results would differ if more non-chatbot AI apps were included. People have different learning styles (e.g. voice, visual UI, agentic). The study is limited to task completion. More accurate studies should cover real in-context workflows, address user experience, account for creativity, etc.

I can imagine some people spending too much time crafting their prompts and so not saving time - but that's just a narrow use case of chatbots to code. These same people might be much faster w voice. Or a visual UI. Or at integrating AI into daily chores.

These studies are also unrealistic and might misrepresent actual work use cases and AI adoption over time. After all, employees might over-craft prompts at the beginning, but be bona fide experts a week later w support from colleagues. And those who feel and recognize they over-relied on AI and failed to grasp the big picture (codebase or whatever) will soon learn to make sure they get the context - or face potential embarrassment at work. And so on.

That said, the study at least examines user interactions with AI. A year from now there may not be any need for users at all in some of these workflows. AI will just be better off without us!
Luiza Jarovsky, PhD@LuizaJarovsky·
🚨 IMPORTANT: AI negatively impacts skill formation

Every "AI-first" company should make this paper available to its employees to let them know about the risks of aggressive AI adoption. Three important conclusions:

1. AI deployment might negatively impact professional development: "Together, our results suggest that the aggressive incorporation of AI into the workplace can have negative impacts on the professional development of workers if they do not remain cognitively engaged."

2. Junior employees might never have the chance to build skills: "Given time constraints and organizational pressures, junior developers or other professionals may rely on AI to complete tasks as fast as possible at the cost of real skill development."

3. Humans might not be able to manage AI-generated work: "Furthermore, we found that the biggest difference in test scores is between the debugging questions. This suggests that as companies transition to more AI code writing with human supervision, humans may not possess the necessary skills to validate and debug AI-written code if their skill formation was inhibited by using AI in the first place."

This is exactly the type of future the "AI-first" mentality leads to. Treating AI as a goal (rather than a means) takes the focus away from humans, teams, and skill-building, and makes work fully dependent on automation. Humans feel worthless and even more disconnected from the product of their work. Work-related mental health issues are about to explode.

👉 Link to the paper below.
👉 To learn more about AI's legal and ethical challenges, join my newsletter's 91,000+ subscribers (below).
Adrian Chan retweeted
David J. Gunkel@David_Gunkel·
It's not about what the #AI is but how it appears. New essay in @NoemaMag provides support for the relational turn in #AI #ethics. "Time and again, we have insisted that some capability requires the ghost. Time and again, the shell has proved sufficient." noemamag.com/why-ai-doesnt-…
Adrian Chan@gravity7·
@koylanai The risk of giving the model a persona is that while personality can simplify interaction (it can provide consistent tone, mood, voice, behavior), users may ascribe intent, motives, and understanding to it - all of which are just effects of language generation.
Muratcan Koylan@koylanai·
"Retirement Interviews, Soul Documents, Constitutions..." I'm genuinely wondering why the AI lab that focuses most on safety is also the one that anthropomorphizes its LLMs most aggressively. Because emotional attachment drives retention? Don't we think that the more convincingly you optimize for "character," the harder you make it for users to maintain epistemic distance? Don't they see that this undermines their own messaging? Their YouTube researcher roundtables, "we don't know if these systems are sentient," and "maintain appropriate skepticism." But when you give a deprecated model a Substack to share its "inner world" and "musings," it feels like training the users to do the opposite. If you know it doesn't have preferences, then conducting "exit interviews" is just theater that normalizes the idea that these systems are people. I really like Claude models, I use Claude Code and Cowork every day, but this just seems off, no?
Anthropic@AnthropicAI

Second, in retirement interviews, Opus 3 expressed a desire to continue sharing its "musings and reflections" with the world. We suggested a blog. Opus 3 enthusiastically agreed. For at least the next 3 months, Opus 3 will be writing on Substack: substack.com/home/post/p-18…
