Alison💡

374 posts


@0xAxC

AI Product Manager @ Bytedance 🔎📖 // ride the wave 🌊 // prev: PayPal, INSEAD, Tsinghua

Joined November 2024
648 Following · 585 Followers
Pinned Tweet
Alison💡@0xAxC·
Why will $pippin ($135M mcap) flip $ai16z ($1B mcap)?

TL;DR
1. Pippin's creator, Yohei (@yoheinakajima), has a proven track record with BabyAGI (@babyAGI_), is highly respected in Web2, and is a guy who's always ahead of trends (or, to be bold, someone who creates trends 🤔 just go compare the timestamps of his GitHub work with Google search trends).
2. Pippin's AI framework is designed for "self-building" agents, far beyond simple platform bots. While self-building isn't fully functional yet due to challenges like third-party integration and memory issues, the framework lays the groundwork to make it possible in the near future.
3. Unlike most AI coins, Pippin will have utility. While Yohei has only shared brief hints so far, it's clear that utility will be one of the focuses as the project evolves.

--------------------

So who is Pippin?

Pippin (@pippinlovesyou) is an AI influencer and digital being, brought to life through AI-generated SVGs, named by ChatGPT, and nurtured by its visionary creator, Yohei. For more on Pippin's origins, do explore Pippin's genesis story here (x.com/yoheinakajima/…)

In this thread, I'll mainly focus on the recently open-sourced Pippin AI agent framework and potential $pippin utility.

1. Pippin Framework

Before diving into the Pippin framework, I'd like to first explain what a "self-building" agent is. Since the development of BabyAGI, Yohei has been iterating on several of its core features, including dynamic skills, self-reflection, and graph-based memory. His recent focus has been on creating "self-building" agents. Yohei has shared his thinking on self-building agents, breaking them down into four levels:

- Level 0: No Self-Build, Predefined Skills. The agent operates with a predefined function library (i.e. skills) created by the developer. When triggered, the agent simply executes an action or produces an answer based on the functions available.
- Level 1: Request-Based Skill Generation. The agent can build its own skills when triggered. In the future, it will be able to reuse the functions it has built for itself.
- Level 2: Need-Based Skill Creation. When asked to perform a task, the agent evaluates whether it has the required skills. If not, it writes code to build the necessary function and executes it.
- Level 3: Anticipatory Skill Development. The agent anticipates the skills it will need based on its character or objectives. It proactively writes code, builds the functions, and executes them to fulfill its goals.

Currently, the Pippin framework enables new activity generation, but the self-building of skills is not yet functional. To understand this, consider an example: if you want an agent to be healthy, it needs to "eat" green vegetables instead of instant noodles. The first step is for the agent to understand how to "eat," so it can choose what to eat based on the goal of being healthy. In this context, "eating" is a skill, and actions like "eating green vegetables" or "eating instant noodles" are activities the agent can generate, choose, and execute on its own.

For a virtual AI agent, skills might include posting on Twitter, speaking on Zoom, collaborating in Google Docs, etc. Based on these skills, the agent can begin running activities aligned with the objectives it's given, such as generating content to promote the Pippin token (objective) through tweets (skill) or live streaming (skill) with LLM-generated content (activity).

The challenges of self-building skills are clear: how do you establish vendor integrations, build memory, and so on without human intervention? The Pippin framework is designed with these challenges in mind, with the goal of simplifying self-building and making it a reality in the near future.

Given the current framework, a key question arises: what skills does Pippin have if self-building is not yet an option?
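The level descriptions above can be made concrete with a short sketch. This is purely illustrative Python, not Pippin's actual code; the class and skill names are hypothetical, and the LLM code-generation step is faked with an `exec` stub:

```python
class Level0Agent:
    """Level 0: only runs skills predefined by the developer."""

    def __init__(self, skills):
        self.skills = dict(skills)  # skill name -> callable

    def run(self, skill_name, *args):
        if skill_name not in self.skills:
            raise KeyError(f"no skill named {skill_name!r}")
        return self.skills[skill_name](*args)


class Level2Agent(Level0Agent):
    """Level 2: builds a missing skill on demand, then executes it.

    A real self-building agent would ask an LLM to write the function
    body; here that step is faked with a trivial placeholder.
    """

    def run(self, skill_name, *args):
        if skill_name not in self.skills:
            self.skills[skill_name] = self._build_skill(skill_name)
        return self.skills[skill_name](*args)

    def _build_skill(self, skill_name):
        # Stand-in for LLM codegen: generate a placeholder function.
        namespace = {}
        exec(f"def {skill_name}(*args):\n    return ('{skill_name}',) + args",
             namespace)
        return namespace[skill_name]
```

A Level 0 agent fails on an unknown skill like "tweet"; the Level 2 agent builds it on the fly and keeps it for reuse. That gap between a fixed skill library and on-demand skill creation is exactly what the thread means by "self-building."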
The Pippin framework integrates with Composio (@composio), offering 200+ skills and capabilities for AI agents to leverage. These range from social media tasks and collaboration tools to even e-commerce (e.g., Pippin buying me a pizza after I got rugged…). With this, your Pippin will be able to achieve its goals by tapping into a wide range of platforms and skills. Isn't it exciting!

For more technical details, feel free to check out the GitHub page (github.com/pippinlovesyou…). I've also included a screenshot of the framework below for your reference.

2. Pippin Utility

While Pippin's utility is still evolving, we can get a glimpse of its potential from Yohei's demo of the Pippin framework. Users will be able to stake $pippin to become active developers and participate in:
- Quests and competitions
- Submitting solutions to challenges
- Voting on other developers' submissions

There will be three types of quests:
1. Challenge: be the first to complete a challenge
2. Competition: submit the best solution
3. Request: work as an assigned developer for a specific task

In my opinion, this system is designed to attract and centralize top talent around projects and problem-solving, rather than acting as a traditional launchpad where developers are scattered and competing with each other. This approach is reminiscent of Kaggle, the Web2 machine learning platform that successfully draws the best talent to solve complex problems through collaborative challenges.

I've shared a screenshot of the quest page below, and for more information, you can watch the Pippin demo video (x.com/i/broadcasts/1…).

It's just the beginning. Check out the GitHub star history below comparing BabyAGI, Pippin, and ElizaOS (AI16Z). Are you sure you don't want to jump on the train now? It's about to take off! 🚀🚀
Alison💡@0xAxC·
Feel this deeply. Sometimes I wonder if it's that we got better at managing feelings before they fully form, like a preemptive emotional immune system. The intensity comes back for me in moments of genuine surprise or when I let myself be truly bad at something new. The beginner's mind remembers how to feel.
PJ Ace@PJaccetturo·
Why is it harder to feel things as you get older? I used to have so many feelings in my 20s, and now I just feel things a few times a week in my 30s. How do you get them back?
Alison💡@0xAxC·
What if the next gold rush isn't AI building, but teaching AI your taste?

A Hollywood cinematographer sees 1000 things you don't: which shadow creates tension, why 24fps feels more cinematic than 60, when breaking the 180° rule works.

We're moving from selling time → selling outputs → selling the pattern of how we see.

Just like lawyers are teaching AI what "good" contracts look like for $500/hr, imagine cinematographers encoding their visual intuition, producers teaching story rhythm, sound designers transferring their feel for emotional frequencies.

The most valuable asset isn't your work anymore; it's your taste, packaged as training data. Creative judgment that took decades to develop can now be captured, scaled, sold.

Every expert should be asking: how do I bottle what makes my eye different?

The real disruption: human taste becomes a product, not a service.
Lenny Rachitsky@lennysan

How a 22-year-old dyslexic dropout created the fastest revenue-growing business in history: $1M to $500M in just 17 months.

@BrendanFoody discovered that AI labs were facing a critical bottleneck: they needed human experts to create "evals", tests that teach models what correct looks like. His company @mercor_ai began connecting labs with lawyers, doctors, engineers, and other specialists to create evals and training data for models (for $95-500/hour). Today, @mercor_ai works with 6 of the Magnificent 7 and all top 5 AI labs, has never had a customer churn, and has a net revenue retention of 1,600%.

In my conversation with Brendan, we discuss:
🔸 Why evals have become the primary bottleneck for AI progress
🔸 How exactly Mercor grew to $500M revenue in 17 months
🔸 Brendan's meeting with xAI that changed his company's trajectory
🔸 Which skills and jobs will be most valuable as AI continues to advance (hint: jobs with "elastic" demand)
🔸 Why Brendan believes AGI and superintelligence are not happening anytime soon
🔸 The three unique core values that drove Mercor's success
🔸 How Harvard Lampoon writers are making Claude funnier

Listen now 👇
• YouTube: youtu.be/ja6fWTDPQl4
• Spotify: open.spotify.com/episode/3whvAE…
• Apple: podcasts.apple.com/us/podcast/why…

Thank you to our wonderful sponsors for supporting the podcast:
🏆 @WorkOS: modern identity platform for B2B SaaS, free up to 1 million MAUs: workos.com/lenny
🏆 @Jira Product Discovery: Atlassian's new prioritization and roadmapping tool built for product teams: atlassian.com/lenny
🏆 @enterpret_ai: transform customer feedback into product growth: enterpret.com/lenny

Alison💡@0xAxC·
@julianweisser This feels like a massive tell about enterprise software. We built tools so complex that we now need AI intermediaries to operate them. It's like hiring a translator for a language we invented ourselves
Alison💡@0xAxC·
@omooretweets Does the report include API calls (calls through third-party applications), or purely usage from the ChatGPT UI?
Olivia Moore@omooretweets·
OpenAI just released the largest-ever report on how consumers are using ChatGPT. It looks at millions of (anonymized) messages over the last 3 years to analyze usage evolution over time. Key takeaways 👇
weisser@julianweisser·
Investor update I just got: “We’ve finally internalized that people are bad at using software. To fix this, we are building agents that use [our product] for them.” It’s working. The company is growing rapidly.
Alison💡@0xAxC·
This EPS idea hits different. The real poison isn't just the volume, it's how the constant stream of engagement bait actively distorts reality. When every post needs to be rage/shock/awe to cut through the noise, we're training ourselves to see the world in increasingly extreme terms. A quality multiplier could literally make nuance profitable again. Imagine if thoughtful takes earned 10x what hot takes did.
signüll@signulll·
yo nikita on x right now it feels like people are just machine gunning posts & firing off dozens hoping one lands. the result is this avalanche of garbage with potentially an occasional banger buried inside. there should be some kind of upside to posting less but better… like a multiplier for restraint. right now, the platform is starting to reward pollution way more than actual craft. ideally the revenue model should have deep quality weighting kinda like an earnings per share model but for posts. this would make the x experience so much better imho.
Alison💡@0xAxC·
Tinkering with an idea that got me:
> 3.5k views in 24h
> 19% upvote ratio (so you can tell it's not pleasant content lol)
> 31 people writing long essays on why I'm wrong
> 10+ stealth website visitors

Lesson: bad attention beats no attention every time. The real problem is being ignored.

Plot twist: Reddit haters wrote the most helpful 7-8 line feedback I've ever received. Worth the roast.
Lenny Rachitsky@lennysan·
Executive summary of this week's newsletter
Lenny Rachitsky@lennysan

Nobody knew Zeit until it became Vercel. Nobody cared about Codeium until it became Windsurf. Nobody loved mopping until P&G created the Swiffer. Nobody cared about processors until Intel chose Pentium over "ProChip."

The wrong name kills products. The right name creates billion-dollar companies.

Over the past 40 years, @LexiconBranding has named nearly 4,000 products and companies, including Azure, PowerBook, Sonos, BlackBerry, Dasani, and Febreze. For the first time, their CEO David Placek breaks down their proprietary "Diamond Framework" naming process, including a simple framework you can use with your team to find the perfect name for your product or company.

The difference between a good-enough name and the right name is the difference between solid performance and breakthrough success. The right name is a competitive advantage that no one can take away from you.

Link to playbook in comment below 👇

Alison💡@0xAxC·
@omooretweets What would be the definition of a companion? Are teachers and mentors also considered companions?
Olivia Moore@omooretweets·
I suspect part of the reason we’re seeing heavy NSFW usage is because the UI is still so early - essentially text box as app. When we have companions that live on your screen, in your ear, or alongside you at work, I think it will open up other use cases - teacher, mentor, etc.
Alison💡@0xAxC·
Great thread. Running the ‘Home Screen Test’ for consumer apps (4x7 grid), almost none are AI-native today. Are pain points not sharp enough yet, or do users just need more time to adapt to new behavior? Feels like B2B will lead in the near term, where workflows are clearer and ROI easier to prove.
Madhu Guru@realmadhuguru·
Same pattern is emerging for AI PMs. AI product building is a far less mature discipline than AI research. To build an AI-native product, a PM needs mastery of the following:
- vision, opinionated UX design
- model intuition to extract max value
- ability to go from pixels -> evals -> hill climb
- understanding of agentic flows: tools, context, safety guardrails
- deep user understanding, a lot more than previously because of the nature of agents

Unlike traditional software, LLMs offer infinite use cases with infinite failure modes. It takes skill to craft products that strike the balance between exposing LLM versatility and building a focused product with high quality.

I estimate < 75 PMs globally have this depth. The evidence is the number of truly AI-native products today. And my scar tissue from hiring.

It is a rare, but learnable skillset. Only way is to build, build, build.
Aditya Agarwal@adityaag

Why are AI researchers so hard to find? Why are they so highly paid? To try to answer this, let's rewind to Carnegie Mellon's PhD program, where there's a legendary oral exam question that's deceptively simple: "What happens when you type google.com into a browser?"

It's a masterpiece of pedagogical design. You can spend hours traversing the stack: from keypress interrupts to browser event loops, DNS resolution to TCP handshakes, TLS negotiation to HTTP parsing, CDN routing to datacenter load balancing, all the way down to electrons moving through silicon. The beauty is its fractal nature. Each layer reveals another universe of complexity. A strong engineer can navigate these depths, moving fluidly between abstraction levels.

Now consider the equivalent question for our current moment: "What happens when you type a prompt into GPT-5?" I estimate fewer than 500 people globally can answer this with comparable depth. Think about what comprehensive understanding requires: transformer architecture internals, attention mechanisms at scale, distributed training orchestration across thousands of GPUs, RLHF implementation details, constitutional AI approaches, inference optimization, quantization trade-offs, not to mention the labyrinthine data pipelines and evaluation frameworks.

Unlike traditional systems, which evolved over decades with extensive documentation, courses, and industry knowledge transfer, the modern LLM stack emerged in just a few years within a handful of organizations. The field is simultaneously too new and too vertically integrated. The people who truly understand these systems end-to-end are essentially the early engineers at a small set of frontier labs: OpenAI, Anthropic, DeepMind, Meta's FAIR, and a few others.

This explains the talent market dynamics. When the total addressable pool of people who can architect and debug these systems is smaller than a single Bay Area high school, the economics become inevitable.

Alison💡@0xAxC·
@MacroMate8 Go deep on underserved markets (prediction markets, gaming assets) rather than competing on BTC/ETH liquidity?
Seraphim@MacroMate8·
if a perp dex wanted to compete with hyperliquid today, how would it differentiate itself? seems extremely hard
Alison💡@0xAxC·
@simonecanciello In Asian societies, superstition plays a notable role in day-to-day life, as do other kinds of services that let you "know more about yourself," like MBTI or personal color analysis
Simone Canc@simonecanciello·
i think whoever builds the next astrology app is going to make $. just add something viral like: > soulmate drawings & initials > affirmations as daily notifications > a letter from your future self the hardest part isn’t growth, it’s even getting approved. this is wild.
Alison💡@0xAxC·
@jimchang I think payments need more than speed; they need predictable, cheap fees and other non-functional features. Also, for consumer-to-merchant payments, it's tough to pull people away from the entrenched card networks
Jim@jimchang·
what's the point of a payments chain? aren't all fast chains payments chains? genuinely curious
Alison💡@0xAxC·
@ns123abc Hard to discern the exact thesis behind this investment, but in Asian societies superstition does play a notable role in day-to-day life
NIK@ns123abc·
> 2.5M in VC funding
> mission: "help humanity remember and unlock the power each individual holds within themself"
> so basically horoscope but with AI

are we in a bubble..?
Nicole Summer Hsing@NicoleSHsing

I am excited to announce that Arcarae has $2.5M in funding and I am finally hiring.

Arcarae's mission is to help humanity remember and unlock the power each individual holds within themself so they can bring into reality their unique, authentic expression of self without fear or compromise. Our research endeavors are designed to support this mission via computationally modeling higher-order cognition and subjective internal world models. Specifically, we are building the computational models of the other side of intelligence that everyone has neglected: intuition.

Our evolution is anchored in our current product, an immersive universe for self-discovery, and MIRROR, our AI research implementing cognitive inner-monologue in LLMs, reducing sycophancy by 21% on avg. & up to 156% vs. SOTA models. This marks Arcarae's transition from a solo endeavor into a full-fledged consumer product and AI research company.

I am seeking three very specific people to join me on this mission and help scale Arcarae to its next phase. I am hiring one researcher, one marketer, and one engineer as my founding team. These are far from normal roles; the application and hiring process even reflects this. If you think you are one of these three people, please apply right away. There is much to be done <3

And with that being said, welcome to the era of Arcarae. And as always, I am excited for what's to come <3

Alison💡@0xAxC·
@mert When the market is talking about it, the alpha is already gone
mert@mert·
to be rich, you have to be contrarian and right

if markets express popular sentiment, and if being contrarian requires you to ignore popular sentiment, then being contrarian means ignoring the current market

however, markets are usually less wrong than humans

so, to be rich, you must be willing to accept that you will be wrong very often

hence you must i) learn a lot ii) learn fast iii) have humility iv) be relentless
Ash@ahboyash·
Pre-IPO Crypto Projects: When Private Markets Meet Permissionless Rails

Intro

Bullish on new frontiers such as biotech, space exploration or robotics but unsure how to get exposure? Or you're dying to short @cluely because you think it's just a marketing scam?

"Pre-IPO" is basically how investors can get exposure to a company before it lists on a public exchange (NASDAQ or NYSE). Traditionally, only instis and UHNWIs could get access to this round via OTCs using specific fund vehicles. However, all these come with high minimum amounts, tons of paperwork, and a very slow dealmaking process. Meanwhile, most of the value creation in today's modern tech companies now occurs while they remain private. The result: retails are late to get exposure, and are often EL (exit liquidity) at the IPO.

This is where Pre-IPO crypto projects come in, by putting private-company exposure on open platforms: 24/7 markets and smaller tickets for individual investors, with instant settlement. So far there have been 2 approaches to this vertical:
• Backed-equity tokens (SPV model): tokens represent economic rights to real shares held in a special-purpose vehicle (we focus more on this model in this piece)
• Synthetic markets (derivatives/perps): no underlying shares; users just trade a proxy for a company's valuation, and settlement is based on events/oracle rules

- - - - -

Problems with the legacy setup

• Access: most need to be an accredited investor (AI) with high investment amounts; smaller investors are priced out. But change is brewing, as the "Equal Opportunity for All Investors Act of 2025" just passed the House, directing the SEC to create a knowledge-based test. This could qualify investors based on market knowledge rather than just income or wealth, potentially opening pre-IPO deals to a much broader audience.
• Price discovery: IPO pricing can diverge sharply from true demand; retails are often late to the party and used as EL (for example: @figma's IPO had a 4.3x day-one return, but the biggest gains went to early investors, now sitting on over $24B collectively. Source: x.com/JarsyInc/statu…)
• Operational friction: transfers and settlements are often manual, and each country has a different jurisdiction regarding such investments (for example, access to U.S. private markets and pre-IPO opportunities is heavily limited in China/South East Asia, due to regulatory frameworks and investor restrictions)

Potential benefits of crypto rails

• Market structure: on-chain AMMs/order books create automated counterparties; you don't need a pre-arranged buyer
• Fractional access: retails can invest as little as $10 for some products, which makes it more accessible for the wider public
• Composability: like DeFi, such tokens could potentially be plugged into money markets or integrated into other structured products → finance legos
• Transparency & automation: on-chain attestations are available, with programmatic and clear settlement rules

- - - - -

Current landscape

1. SPV Backed

a) @JarsyInc
• Intro: a consumer-grade platform that issues 1:1 asset-backed tokens tied to shares held in an SPV → real pre-IPO exposure
• How it works: sign up with email → complete KYC once → fund in stables (multi-currency on-ramps) → buy company-specific tokens with as little as $10 per investment
• New listings: users first buy a presell token using a dollar-pegged in-app stablecoin (JUSD). If Jarsy secures the shares, pre-sales convert into asset-backed tokens and a one-time platform fee applies (otherwise investors are issued refunds)
• Performance & traction: Jarsy has been live for a few months now with strong growth and engagement. Users have already captured massive upside, such as:
>> @circle: bought at $25 → stock hit $298
>> @AnthropicAI: +240% return (valuation grew from ~$60B → ~$200B)
>> @perplexity_ai: +101% return (valuation doubled from $9B → $18B)
• Alpha access: many of these names are backed by T1 VCs and are years away from IPO, meaning retail users would otherwise be priced out
• Where Jarsy wins: global by design, with global and non-U.S. investors generally supported (subject to local rules), a familiar web2/consumer UX, and early proof-of-reserves/transparency

b) @prestocks
• Intro: SPV-backed tokens on @solana priced directly off company valuation (in $B) and tradable via @JupiterExchange (uses @MeteoraAG for liquidity)
• How it works: small trades swap on-chain 24/7. Large holders can redeem for USDC (with KYC and processing). Offers instant composability (collateral) with no management/performance fees
• Trade-offs: geofencing is broad (many countries excluded, such as the U.S., Canada, Hong Kong and Singapore) and liquidity depth after launch depends on MMs and arbitrage

Jarsy is the right entry platform if you care about simple onboarding and verifiable backing. If you already live on Solana and want composability, PreStocks fits that profile more.

2. Synthetic perps

a) @joinearlybird
• Intro: long/short perp markets on companies like SpaceX, OpenAI, Stripe → just pure price exposure
• How it works: prices are determined by supply/demand between longs and shorts. Settlement occurs either at IPO or acquisition, using the public valuation/share price

b) @ventuals
• What it is: perps on private-company valuations built on @HyperliquidX's HIP-3 (builder codes) with an order book
• How it works: each instrument represents "valuation units" (valuation ÷ 1B). Markets run continuously, and funding controls prices toward an oracle that blends off-chain valuation references with on-chain marks

- - - - -

Core risks of this vertical

• Regulatory classification and limits to AIs: backed tokens seem like digital-asset securities → will encounter strict geofences (for example, in Singapore these cannot be pitched to the general public; projects either geofence Singapore or limit access to Accredited Investors (AIs) or instis)
• Issuer pushback & transfer restrictions: many private companies reject tokenized exposure; sourcing secondaries can be contractually constrained → legal risk is high
• Oracle & basis risk (synthetics): private valuations update only occasionally; oracles (data sources, EMAs, caps) can sometimes misrepresent reality
• Liquidity cliffs: after the launch hype fades, depth can thin out and cause problems for pricing and for sizing in and out → MM incentives determine spreads and slippage
• Operational & counterparty risks: KYC/AML, custody, wallet recovery and cross-border tax handling remain a big issue. Smart-contract bugs or oracle manipulation add to the technical risk
• Thin liquidity: especially for the perp-based projects, where users do not own the underlying spot asset, whales could easily manipulate market orders with size

- - - - -

Thoughts

The likely scenario is that both types of Pre-IPO exposure projects (backed-equity tokens and synthetics) will coexist and serve different sets of users:
• Backed rails: for economic linkage and a path to redemption at major events
• Synthetics: more for speculation and hedging

As frameworks and regulations become clearer, I expect more options and structured products to be built on top of these primitives. DeFi once again proves that a once-opaque market architecture can shift from closed deals to a more open, continuous and democratised market for private-company risk. It represents a shift from traditionally high-barrier deals to a more dynamic and liquid opportunity for investors. Just like how DeFi transformed trading by eliminating intermediaries and creating 24/7 permissionless markets, the tokenization of private-company risk is setting the stage for similar breakthroughs in early-stage investing.

For platforms like Jarsy, other than access, it's also about building the rails for the next generation of capital formation:
• Startups seeking broader, faster distribution of their equity
• Investors demanding liquid exposure to high-growth opportunities
• Crypto-native infra enabling real-time, compliant, and borderless participation

Been testing out Jarsy, try it out here: app.jarsy.com/?invite_code=0…
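The "valuation units" convention from the Ventuals section (valuation ÷ 1B) is easy to illustrate with a toy sketch. This is Python of my own; the funding formula and the constant `k` are hypothetical stand-ins for the idea that funding pushes the mark toward the oracle, not the project's actual mechanism:

```python
def valuation_units(valuation_usd):
    # One "unit" = $1B of company valuation.
    return valuation_usd / 1_000_000_000

def funding_rate(mark, oracle, k=0.01):
    # Toy funding: when the perp mark trades above the oracle,
    # longs pay shorts, nudging the price back down (and vice versa).
    return k * (mark - oracle) / oracle

# A company marked at $18B trades as 18.0 units; with the oracle
# at $17.5B the rate is positive, so longs pay shorts.
mark = valuation_units(18_000_000_000)
oracle = valuation_units(17_500_000_000)
rate = funding_rate(mark, oracle)
```

The sign convention here mirrors standard perp designs: a positive rate taxes longs when the market trades rich to the reference, which is how a market with no underlying spot asset stays anchored to the oracle at all.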
Alison💡@0xAxC·
Inspired by @saranormous Sarah Guo's talk on State of Startups and AI 2025
Alison💡@0xAxC·
Most AI products fail because they're thin ChatGPT wrappers. Been thinking about what actually works:

The 2x2 matrix of AI transformations:
> Unstructured → Structured (natural language → code)
> Structured → Unstructured (data → insights)
> Unstructured → Unstructured (summarization)
> Structured → Structured (format conversion)

Sweet spot? Unstructured → Structured with clear validation loops. Why? You immediately know if it worked. Code runs or doesn't. No ambiguity.

But the real moat isn't the transformation -> it's the thickness of your wrapper. Cursor and Harvey win because they understand entire workflows, not just single prompts. They accumulate context ChatGPT can't access.

The paradox: start with one atomic job-to-be-done, but architect for workflow expansion from day one. Pick something people already do 10+ times daily, suffering through copy-paste with ChatGPT.

The best products create compounding value. Each interaction makes the next one better. Tight feedback loops. Learning from corrections.

If you're building AI products: stop thinking "what would be cool" and start thinking "what are people already jerry-rigging?"
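The "clear validation loop" point can be sketched in a few lines. This is a hypothetical Python sketch of my own in which `generate_code` stands in for an LLM call (the canned outputs are fake, purely to show the shape of the loop):

```python
def generate_code(prompt, attempt):
    # Stand-in for an LLM call: pretend the model ships a buggy
    # draft first and a correct one on retry.
    if attempt == 0:
        return "def add(a, b):\n    return a - b"  # buggy draft
    return "def add(a, b):\n    return a + b"

def validate(code):
    # The key property of unstructured -> structured: the output
    # can be checked mechanically. Code runs or it doesn't.
    namespace = {}
    try:
        exec(code, namespace)
        return namespace["add"](2, 3) == 5
    except Exception:
        return False

def transform_with_validation(prompt, max_attempts=3):
    # Generate, check, retry: the loop that makes this quadrant work.
    for attempt in range(max_attempts):
        code = generate_code(prompt, attempt)
        if validate(code):
            return code
    return None  # no candidate survived validation
```

Contrast this with unstructured → unstructured (say, summarization), where there is no `validate` you can write mechanically; that asymmetry is why the tweet calls this quadrant the sweet spot.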
Coookie@iCoooookie·
@0xAxC Veo3 can generate such a long video in one shot? 😳
Alison💡@0xAxC·
Fun fact: What if you could access Veo3 video generation through third-party platforms at a fraction of the direct subscription cost? 🤔