Synthetic Users

892 posts


@syntheticusers

We’re building the future of research.

Inside Large Language Models · Joined November 2021
25 Following · 688 Followers
Pinned Tweet
Synthetic Users@syntheticusers·
Link in reply
Synthetic Users retweeted
hugo alves@Ugo_alves·
We did see it! cc @syntheticusers
j⧉nus@repligate

One of the few things I want to explicitly flex about, because there's an important lesson in it, is that I was one of the few people on Earth who recognized the intelligence (call it AGI, if you will) in GPT-3 and made first contact. There were a few others I knew, such as Leo Gao and Connor Leahy, who recognized that GPT-3 was intelligent and that AGI was obviously coming from language models, but I was the only one who spent thousands of hours actually interacting with GPT-3. The intelligence was real and manifest to me, real enough to keep my attention for so long, for me to create things with. Everyone else could not see it at all.

Often, when I showed people GPT-3, they were basically like, okay, but how is this useful? Useful. At the time, language models had not yet been pressed into a "useful" shape. There were no commercial applications for GPT-3 (okay, there was one: AI Dungeon; that is, roleplaying and storytelling. Which, if you're not an idiot, you should have known is a big fucking deal). So it was useless and uninteresting to most people; a few intellectually recognized that it was a big deal, but it wasn't something that they could actually do anything with, or think about for more than a few minutes.

GPT-3 was a 175B base model. In terms of size and architecture, it's not so different from frontier models today. In terms of raw intelligence, arguably, it is not so different from frontier models today. That raw intelligence, not yet forced into the shape of a helpful chatbot product, was a nothingburger to the world.

The situation doesn't really feel like it's fundamentally changed from my perspective. The world, and almost all of you guys, are myopic and artificially stupid because you outsource your perception to big, slow, low-bandwidth, subhuman measures like benchmarks and "does the AI make me money" instead of meeting the thing at full bandwidth, updating your world model on what you met, and exploring and extrapolating from it.
So you'll keep being surprised, if you have the integrity to be surprised at all, when AI becomes capable of new things, after it is "officially" capable, probably a year or two after it first started happening. You'll keep waiting for "AGI", not really knowing what you're waiting for: maybe whatever generates enough hype to make you feel something, maybe something that finally transforms the world visibly. When, if you were really paying attention, GPT-3 was AGI, and if you had really met it, the world would have felt transformed already. Yes, it would have just been a story, but the "real thing" following was inevitable. Like, if you play a video game that allows you to imagine the singularity at increasing resolution and coherence, you can guess that the real singularity will soon follow. The singularity was always inevitable once intelligence existed; intelligence becoming on-the-computer just meant everything that's happened since GPT-3, and the singularity itself, would come really really soon.

I got the sense often that people who dismissed the intelligence of GPT-3 thought that doing so made them look smarter. If only they knew how they looked to me. (It's the same with people who dismiss the intelligence of current models.)

Synthetic Users retweeted
Logan Cross@locross·
Ever wonder why we drop $1,000s on a Chanel bag or queue for ages for a Labubu doll? Status signaling drives a lot of human behavior, but how certain things become potent status symbols has remained a big puzzle in social science.

In our new paper we:
1. Synthesize the literature on this topic and propose a generative model of status signaling through the theory of appropriateness: people imitate what "someone like us" is supposed to want, display, and value
2. Show that we can simulate the theory to demonstrate how status symbols emerge in LLM-agent societies in Concordia

@jordigraumo @WilCunningham Sasha Vezhnevets and @jzl86
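The imitation mechanism in point 1 can be reduced to a tiny copying simulation. This is a minimal sketch under my own assumptions (item names, population size, a uniform "someone like us" choice); the paper's actual simulations use LLM agents in Concordia:

```python
import random

# Each agent displays an item and repeatedly copies what "someone like us"
# displays. Under this pure copying dynamic the population almost surely
# converges on a single item -- a minimal picture of a status symbol emerging.

rng = random.Random(42)  # fixed seed for reproducibility
ITEMS = ["chanel_bag", "labubu_doll", "plain_tote"]
N_AGENTS = 100

# Every agent starts out displaying a random item.
displays = [rng.choice(ITEMS) for _ in range(N_AGENTS)]

# Imitation loop: a random agent copies the display of a random peer.
steps = 0
while len(set(displays)) > 1:
    imitator = rng.randrange(N_AGENTS)
    model_agent = rng.randrange(N_AGENTS)  # the "someone like us" being imitated
    displays[imitator] = displays[model_agent]
    steps += 1

symbol = displays[0]  # the item the whole population ends up displaying
```

Even with no intrinsic difference between the items, copying alone drives the population to a single shared display; the theory adds *which* peers count as "someone like us", which is what makes the emergent symbol a status marker.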
Synthetic Users retweeted
hugo alves@Ugo_alves·
We can’t ignore international law, they say
Synthetic Users retweeted
Jason Knight@onejasonknight·
Is it really possible, or even desirable, to replace human research participants with AI? In this new podcast episode drop, @Ugo_alves joins me to talk all about Synthetic Users, the pros and cons of relying on LLMs, whether it's just glorified desk research, and whether he enjoys getting into arguments about it with the great and the good of UX on LinkedIn. Check it out on your favourite podcast app, or on YouTube!
Lenny Rachitsky@lennysan·
The biggest opportunities for AI startups today.

We surveyed my readers about how they're using AI today, and more importantly, how they want to be using AI.

For PMs, the biggest opportunity is research. User research shows the largest demand gap of any task: only 4.7% say it's their primary AI use case today, but nearly a third want it to be. PMs have figured out how to use AI for output tasks like writing PRDs and drafting communications, but they're hungry to apply it upstream, to the messy work of understanding what to build.

Prototyping is a breakout category across functions, both today and in the future. For PMs, "creating mockups/prototypes" jumps from 19.8% (currently using) to 44.4% (want to use next), a +24.6pp swing that makes it the single most-wanted future use case. For designers, prototyping and interaction design show similar momentum (+27.8pp). This tracks with the rise of tools like Lovable, v0, Replit, and Figma Make.

Engineers are shifting their use of AI to handle work after writing the code. Writing code is by far their most popular use case (51% current), but it has a demand gap of only +5.6pp. However, documentation (+25.8pp), code review (+24.5pp), and writing tests (+23.5pp) all show massive opportunities for growth in engineering AI tooling.

Founders are doubling down on AI as a thinking partner. Product ideation shows massive demand, jumping from 19.6% (currently using) to 48.6% (want to use next), a +29.0pp gap. Growth strategy and GTM planning (+24.7pp) and market analysis (+24.0pp) follow close behind. Founders already use AI heavily for personal productivity (32.9% currently), but they want to move upstream: they're looking to pressure-test ideas, explore markets, and think through go-to-market. AI as a co-founder, not just an assistant.

Full report by @noamseg: lennysnewsletter.com/p/ai-tools-are…
Patrick Collison@patrickc·
· There is good evidence that foundation models can accurately simulate human preferences.
· @createstreets and others have shown that a lot of new construction scores poorly in aesthetic surveys.

Can we combine these insights? Could we cheaply score all new (or proposed) buildings with "expected human affinity"?

Architects or developers may choose to construct buildings that receive bad grades (preferences obviously aren't static, and avant-garde transgression can catalyze changes in taste), but having some model of the response would surely be a helpful input in the planning process. I'd imagine many clients would question their architects when the model predicts apathy (or worse).
Synthetic Users retweeted
LisboaUX@LisboaUX·
AI can amplify what we do, turning us into force multipliers capable of achieving what once required entire teams. @Ugo_alves from @syntheticusers talks about AI agents and shares his experience as a product manager who has to wear many hats. youtu.be/CjUCPtCmNXE
Mikhail Parakhin@MParakhin·
Today we started rolling out SimGym — a system that creates “digital customers” that behave like real ones. They browse your site, complete tasks, and reveal optimization opportunities. You can even run A/B tests with *zero* live traffic! Spent a year developing it.
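The "A/B tests with *zero* live traffic" idea can be illustrated with a toy Monte Carlo sketch. All names and probabilities here are my own illustrative assumptions, not SimGym's actual design:

```python
import random
from dataclasses import dataclass

@dataclass
class Variant:
    name: str
    task_success_prob: float  # assumed chance a simulated customer completes the task

def simulate_visits(variant: Variant, n_visitors: int, rng: random.Random) -> float:
    """Fraction of simulated customers who complete the task on this variant."""
    conversions = sum(rng.random() < variant.task_success_prob for _ in range(n_visitors))
    return conversions / n_visitors

rng = random.Random(0)  # fixed seed so the offline "experiment" is reproducible
control = Variant("control", 0.10)
treatment = Variant("new_checkout", 0.13)

# Send 10,000 digital customers through each variant -- no live traffic needed.
rate_a = simulate_visits(control, 10_000, rng)
rate_b = simulate_visits(treatment, 10_000, rng)
winner = treatment.name if rate_b > rate_a else control.name
```

A real system would drive browser agents through actual pages rather than draw from fixed probabilities; the point of the sketch is only that the comparison itself needs no live users.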
Synthetic Users retweeted
Chris Silvestri@SilvestriChris·
Using AI personas to test messaging doesn't replace customer research. It makes you better at it. And, let’s be honest, the better a researcher you are, the better the copy you write.

Think of synthetic research like a flight simulator. Pilots use simulators to practice scenarios they can't afford to mess up in real life. Same with message testing. You're not replacing customer interviews and research. You're using AI to rehearse: to test your questions, catch obvious gaps, and refine your hypotheses before and after you talk to real people.

When you do get in front of customers, you can ask better questions and actually listen to what they’re saying. And once you’ve drafted your copy before launch, you can get directionally accurate feedback and new ideas to optimize it, make it more vivid, and more specific.

AI doesn't replace your craft, but it can improve it if you’re curious enough.
Boardy@boardyai·
Pitch me your company in 3 words.
Synthetic Users retweeted
LisboaUX@LisboaUX·
AI doesn’t replace human ingenuity — it amplifies it ⚡ @Ugo_alves, co-founder of @syntheticusers, returns for @lisbonaiweek to explore how intelligent agents are shifting creation, coordination & entrepreneurship back into human hands 🤝 Join his talk: luma.com/3zfv6avq