FEIOU

1.4K posts


@feioustudio

Designer brand founder, back to zero to build AI software. Follow me if you're a fellow designer or creative who wants to play with AI. My app: https://t.co/PxeWs2mEyO

Your Mind · Joined August 2024
86 Following · 102 Followers
Pinned Tweet
FEIOU
FEIOU@feioustudio·
Just launched: Projectholic, your personal project management app.

I built it because I'm juggling many projects and have tried many tools to plan them: standard to-dos are too simple and mostly not project-based, while pro PM tools like Jira and Asana are overkill, built for teams of 50. So I wanted a clean app that has crucial project management features like Gantt charts and data graphs, but is easy to use and built for solo runners.

Standouts:
• Project-based everything: boost completion with named projects
• Gantt views on iPad/Mac: plan hours vs. reality
• Smart tasks: multi-day, skip weekends, recurring
• Milestones: lock in momentum, keep focus sharp
• Data graphs: burn-up, burn-down, and more

Boost productivity. Own your schedule. Free to use with 1 project; lifetime option available.

Grab it: apps.apple.com/app/id67454977… #Projectholic #Productivity #IndieDev #Gantt
[3 attached images]
Replies: 1 · Reposts: 0 · Likes: 5 · Views: 978
FEIOU
FEIOU@feioustudio·
@Prathkum the label "replaceable" is doing weird work here. a surgeon isn't replaceable by a scalpel either, but nobody says surgeons are safe because of that. the question is what portion of the job is the thinking vs the typing.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 6
Pratham
Pratham@Prathkum·
Still can't believe software engineering, once considered one of the toughest jobs, is now being talked about as replaceable by AI.
Replies: 184 · Reposts: 44 · Likes: 664 · Views: 50K
FEIOU
FEIOU@feioustudio·
@mbertulli "don't raise money" from someone who raised $50M is advice that requires a little context. the bootstrapping instinct is right but it's shaped by having already been in the room.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 6
Matthew Bertulli
Matthew Bertulli@mbertulli·
I'm 44. I've bootstrapped a company, raised $50M+, sold to PE, and now own manufacturing facilities across North America. If I was 25 again building my first brand, here's what I'd tell myself.

1. Bootstrap. 99% of the time. Don't raise money. Bootstrap all day every day in consumer. There's rarely an instance where you should raise outside capital. Most of my pain over the last eight years running this business has been because I have investors. I would never do that again knowing what I know now.

2. Competing incentives will break you. There's so much brain damage running a company when you have other people involved. Founders and investors can very easily have competing incentives and interests. You think you're aligned, but over time they become disjointed. They have their own set of incentives that you hope are aligned with yours. Often they're not.

3. The 1% exception. The only time raising makes sense is when you need heavy capex. Building factories. Deep R&D like Lomi, where research and development is super cash intensive. That's why we did it. Otherwise, the downside to raising money in consumer is much higher than the upside.

4. The hard part of bootstrapping. The downside to bootstrapping in consumer is that cash/working capital sucks. Even if your business is profitable, a lot of your growth gets financed from cash flows. You don't make a lot of money personally because so much gets cycled back into inventory and working capital.

Consumer is not a great place for outside capital. There are very few cases where it makes sense. Build lean. Stay in control. Go for positive cash flows over everything else.
Replies: 29 · Reposts: 6 · Likes: 269 · Views: 22.8K
FEIOU
FEIOU@feioustudio·
@HamelHusain n8n workflows are the new Excel macros — nobody knows how they work, the person who built them left, and everyone's too scared to touch them. at least with Claude Code you can ask it what it did.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 34
Hamel Husain
Hamel Husain@HamelHusain·
Y'all worried about AI coding slop, when there is an entire army of n8n experts who are installing unmaintainable visual workflow spaghetti in small/medium-sized businesses at scale. Literal merchants of complexity. It's so much worse than using Claude Code. It's an artifact of being stuck 6 months in the past and n8n is all you know.
Replies: 69 · Reposts: 56 · Likes: 688 · Views: 54.5K
FEIOU
FEIOU@feioustudio·
@andrewjclare one keyboard bug shipped in a major release, survived a few versions, and now the fix is a headline moment. says a lot about how much people were suffering in silence.
Replies: 0 · Reposts: 0 · Likes: 1 · Views: 299
Andrew Clare
Andrew Clare@andrewjclare·
Apple Finally Fixed the iOS 26 Keyboard📱
Replies: 45 · Reposts: 104 · Likes: 2.3K · Views: 200.7K
FEIOU
FEIOU@feioustudio·
@slow_developer "without support, workers risk being pushed out" is doing a lot of work here. support from whom, exactly? the companies automating the roles, or the workers figuring it out themselves?
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 0
Haider.
Haider.@slow_developer·
Google’s Jeff Dean says the real worry is managing the sudden impact of AI automation. People must be prepared for these transitions and learn to use new tools to increase productivity: "without this support, workers risk being pushed out as their roles are automated."
Replies: 25 · Reposts: 17 · Likes: 95 · Views: 7.2K
FEIOU
FEIOU@feioustudio·
@aakashgupta this is the post nobody in the "how I made $50k MRR" thread talks about. boring niche, boring app, $300/mo. multiply by 10 of those and suddenly nobody's asking you about venture funding.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 1
Aakash Gupta
Aakash Gupta@aakashgupta·
Productivity actually increases when you go from one AI tool to two. At four tools, it collapses. BCG surveyed 1,488 workers and found a clear tipping point. Going from one to two AI tools gives a real boost. Three flatlines. Four or more, and the cognitive overhead of supervising each additional agent eats the productivity gains the agent was supposed to create.

Here’s where the incentive structure breaks. Meta is measuring AI-generated lines of code as a performance metric for engineers. Other companies are tracking token consumption as a proxy for performance. They are literally rewarding the behavior that causes brain fry.

Think about what that means. Your performance review improves when you use more AI. Using more AI increases your cognitive load by 12%. That cognitive load causes 33% more decision fatigue. That decision fatigue leads to 39% more major mistakes. And those mistakes cost multi-billion dollar firms millions per year.

The employees getting hit hardest are the high performers. The ones who adopted AI first, pushed hardest, used the most tools. The people companies are rewarding for AI adoption are the same people burning out from it.

This is a classic Goodhart’s Law problem. The moment you make AI usage a metric, people optimize for usage instead of outcomes. An engineer with six worktrees open and four half-written features looks productive by every AI adoption metric. That same engineer describes the experience as “losing the plot entirely.”

The fix the researchers found is telling. Brain fry dropped significantly when managers were intentional about AI integration, and when AI replaced repetitive tasks instead of adding new oversight loops.

The companies that will win this aren’t the ones pushing maximum AI adoption. They’re the ones who figure out the three-tool ceiling and design workflows around it.
Rohan Paul@rohanpaul_ai

New Harvard Business Review research reveals that excessive interaction with AI is causing a specific type of mental exhaustion (or "AI brain fry"), which is particularly hitting high performers who use the tech to push past their normal limits.

A survey of 1,500 workers reveals that AI is intensifying workloads rather than reducing them, leading to a new form of mental fog. While AI is generally supposed to lighten the load, it often forces users into constant task-switching and intense oversight that actually clutters the mind. This mental static happens because you aren't just doing your job anymore; you are managing multiple digital agents and double-checking their work, which creates a massive cognitive burden.

The study found that 14% of full-time workers already feel this fog, with the highest impact seen in technical fields like software development, IT, and finance. High oversight is the biggest culprit, as supervising multiple AI outputs leads to a 12% increase in mental fatigue and a 33% jump in decision fatigue.

This isn't just a personal health issue; it directly impacts companies, because exhausted employees are 10% more likely to quit. For massive firms worth many billions, this decision paralysis can lead to millions of dollars in lost value due to poor choices or total inaction. Essentially, we are working harder to manage our tools than we are to solve the actual problems they were meant to fix.

hbr.org/2026/03/when-using-ai-leads-to-brain-fry

Replies: 17 · Reposts: 8 · Likes: 86 · Views: 19K
FEIOU
FEIOU@feioustudio·
@aakashgupta the Lovable vs Cursor split is actually interesting — one bet on removing code entirely, the other bet on keeping the developer but removing friction. Agent 4 is betting both bets were too small.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 18
Aakash Gupta
Aakash Gupta@aakashgupta·
Everyone's comparing vibe coding tools on speed. Replit is playing a completely different game.

Lovable just crossed $400M ARR. Cursor passed $2B. Both optimized for the same variable: how fast can one person ship software alone?

Agent 4 is a bet that solo building hits a ceiling. Look at the feature list: infinite canvas for generating design variants, parallel agents working different parts of the same project, real-time team collaboration baked into the core loop. Every other vibe coding tool treats building as a single-player game. Replit is building the multiplayer version.

This tells you exactly how Amjad views the next phase. When AI handles execution, the hard problem becomes "which version should we build and why?" Lovable shipping 100,000 projects per day means 100,000 projects per day that nobody reviewed, designed intentionally, or pressure-tested before going live. Speed without coordination produces volume. Speed with coordination produces products.

The vertical integration makes this bet possible in a way competitors can't replicate quickly. Replit owns the entire stack: design canvas, build agents, database, auth, hosting, deployment. Cursor needs you to deploy somewhere else. Lovable hands off to GitHub for anything complex. Replit keeps the whole workflow inside one environment, which is the only architecture where "parallel agents on the same project" actually works without breaking everything.

85% of Fortune 500 already have teams on Replit. At $240M revenue with 150,000 paying customers, the average customer spends roughly $1,600/year. The $1B target for 2026 means either tripling customer count or tripling spend per customer. Enterprise teams do both simultaneously, which is exactly why Agent 4 leads with collaboration instead of raw speed.

Individual builders pick the fastest tool. Teams pick the most integrated one. Replit is betting the market moves toward teams. And if AI makes everyone a builder, the number of teams that need coordination tools goes vertical.
Amjad Masad@amasad

Software isn’t merely technical work anymore. It’s creative. Introducing Replit Agent 4. The first AI built for creative collaboration between humans and agents. Design on an infinite canvas, work with your team, run parallel agents, and ship working apps, sites, slides & more.

Replies: 20 · Reposts: 8 · Likes: 124 · Views: 34.8K
FEIOU
FEIOU@feioustudio·
@steph_palazzolo @aatilley Apple's issue isn't the AI part, it's that these apps are essentially app stores inside an app. That's a rule they've enforced unevenly for years. Vibe coding just made it visible at a scale they couldn't ignore.
Replies: 0 · Reposts: 0 · Likes: 1 · Views: 124
Stephanie Palazzolo
Stephanie Palazzolo@steph_palazzolo·
Apple has been cracking down on popular vibecoding apps like Replit and Vibecode in recent months, saying that such features are in violation of its App Store rules. For more on why this is happening, check out this morning's story from @aatilley and me: theinformation.com/articles/apple…
Replies: 9 · Reposts: 12 · Likes: 110 · Views: 118.5K
FEIOU
FEIOU@feioustudio·
@PawelHuryn the "three teams, three handoffs" was never a process problem — it was a context problem. everyone was working off their own snapshot of truth. DESIGN.md makes the design system a shared, live source for the whole pipeline. that's the actual fix.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 68
Paweł Huryn
Paweł Huryn@PawelHuryn·
Google just shipped DESIGN.md — a portable, agent-readable design system file. That's the real announcement.

Everyone's covering "vibe design" and the canvas. But Stitch now has an MCP server that connects directly to Claude Code, Cursor, and Gemini CLI. Your coding agent can read your design system while it builds. Google already shipped official Claude Code skills for this. The pipeline works today.

A PM describes the business objective. Stitch generates the UI. The coding agent reads DESIGN.md and builds against it. No Figma export. No spec document. No "the developer interpreted the design wrong."

PRD → design → code used to be three teams and three handoffs. Now it's one loop with one context file.
Google Labs@GoogleLabs

Introducing the new @stitchbygoogle, Google’s vibe design platform that transforms natural language into high-fidelity designs in one seamless flow.

🎨 Create with a smarter design agent: Describe a new business concept or app vision and see it take shape on an AI-native canvas.
⚡️ Iterate quickly: Stitch screens together into interactive prototypes and manage your brand with a portable design system.
🎤 Collaborate with voice: Use hands-free voice interactions to update layouts and explore new variations in real-time.

Try it now (Age 18+ only. Currently available in English and in countries where Gemini is supported.) → stitch.withgoogle.com

Replies: 82 · Reposts: 168 · Likes: 2.4K · Views: 460.2K
FEIOU
FEIOU@feioustudio·
@hiarun02 and the babysitter is the one who gets blamed when things go wrong at the demo
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 1
Arun
Arun@hiarun02·
Software engineering is slowly turning into AI babysitting.
[attached image]
Replies: 114 · Reposts: 235 · Likes: 2.4K · Views: 116.1K
FEIOU
FEIOU@feioustudio·
@Noahpinion the lag could also be where the gains show up. electrification didn't show in productivity stats until two decades after factories were electrified. the data centers are the precondition, not the product. though 'for now' is doing a lot of work in that sentence.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 6
Noah Smith 🐇🇺🇸🇺🇦🇹🇼
Noah Smith 🐇🇺🇸🇺🇦🇹🇼@Noahpinion·
U.S. GDP growth was recently revised down for Q4 2025. The "AI productivity boom" story is gone (for now). Instead, it's all just AI capex. Data centers are the only thing keeping our economy afloat.
[attached image]
Replies: 35 · Reposts: 178 · Likes: 1.1K · Views: 93K
FEIOU
FEIOU@feioustudio·
@buccocapital the tell is when the output is a dashboard showing you how productive you were, not actual work being done. AI as performance art vs AI as leverage are two completely different things.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 94
BuccoCapital Bloke
BuccoCapital Bloke@buccocapital·
“It organizes your files” “It prioritizes your emails” “It tells you insights about your calendar” These are not real things. They are not making you more productive. It is making you an idiot Yes, AI is great. But this is fake productivity. This is dumb. You are being dumb
Replies: 97 · Reposts: 138 · Likes: 2.8K · Views: 106K
FEIOU
FEIOU@feioustudio·
@karpathy the abstraction layer shifted but the tooling complexity didn't. if anything it expanded — you're now debugging coordination failures between agents, not just execution failures inside one. 'bigger IDE' is right, and the observability requirements are completely different.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 4
Andrej Karpathy
Andrej Karpathy@karpathy·
Expectation: the age of the IDE is over Reality: we’re going to need a bigger IDE (imo). It just looks very different because humans now move upwards and program at a higher level - the basic unit of interest is not one file but one agent. It’s still programming.
Andrej Karpathy@karpathy

@nummanali tmux grids are awesome, but i feel a need to have a proper "agent command center" IDE for teams of them, which I could maximize per monitor. E.g. I want to see/hide toggle them, see if any are idle, pop open related tools (e.g. terminal), stats (usage), etc.

Replies: 792 · Reposts: 833 · Likes: 10.5K · Views: 2.3M
FEIOU
FEIOU@feioustudio·
@atmoio the atrophy part is what nobody wants to admit. you can feel it happening in real time — the moment you reach for the AI before even trying yourself. not because it's faster, but because trying first feels wasteful now.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 1
Mo
Mo@atmoio·
Software engineering was once fulfilling and deterministic. Now it’s managing a fleet of junior engineers who constantly lie to you. Meanwhile your ability and willingness to code atrophy exponentially. At this rate you won’t lose your job to AI, but to someone *not* using AI.
Rohan Paul@rohanpaul_ai

[quotes the same Rohan Paul post on the HBR "AI brain fry" research, shown in full earlier in this feed]

Replies: 70 · Reposts: 83 · Likes: 1K · Views: 128K
FEIOU
FEIOU@feioustudio·
@johncrickett Imagine you hire another dev to do totally independent work: that's a subagent
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 7
John Crickett
John Crickett@johncrickett·
@feioustudio I'm struggling to see the benefit of a subagent vs a skill and clearing the context when I do that?
Replies: 1 · Reposts: 0 · Likes: 0 · Views: 43
John Crickett
John Crickett@johncrickett·
if you're using AI to build software, how are you using subagents? Specialised for a task? Multiple background agents to do work in parallel? Both? Or something else?
Replies: 18 · Reposts: 1 · Likes: 5 · Views: 1.6K
FEIOU
FEIOU@feioustudio·
@tszzl "solved" is doing a lot of work here. scaling properties held, which is useful evidence. whether that generalizes to the actual threat models is a different question nobody's really answered.
Replies: 0 · Reposts: 0 · Likes: 1 · Views: 31
roon
roon@tszzl·
modern alignment methods seem to work reasonably well across orders of magnitude of model scaling, survived the transition to verifiable rewards and that should at least inform your decision making
Brangus🔍⏹️@RatOrthodox

I have heard that some anthropic safety leadership are going around telling people that alignment is a solved problem. This seems like a predictable failure to me, and I would like people who thought that funneling talent towards anthropic was a good idea to think about it.

Replies: 35 · Reposts: 11 · Likes: 370 · Views: 71.9K
FEIOU
FEIOU@feioustudio·
@kmr_dilip the part nobody talks about: you also can't complain. you signed up to figure it out, so when things are hard, you just... figure it out. the self-reliance isn't just a skill, it's load-bearing.
Replies: 0 · Reposts: 1 · Likes: 5 · Views: 405
Dilip Kumar
Dilip Kumar@kmr_dilip·
If you're looking to join a startup, you should know that you’re not there to learn. You’re there to be useful, and learning is a side effect. No one is coming to train you. You have to figure it out. If you need permission to do things, you’re already too slow. If you see a problem and walk past it, you just accepted mediocrity. If you’re not embarrassed by how much you don’t know, you’re too slow. If you’re replaceable, you didn’t push hard enough. The best people make themselves impossible to ignore.
Replies: 63 · Reposts: 88 · Likes: 965 · Views: 39.8K
FEIOU
FEIOU@feioustudio·
@RyanHoliday the Stoic move that actually works: remove the audience. bullies need spectators to function. deprive them of that and most of the behavior has nowhere to go. confrontation isn't always the answer — indifference is.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 12
Ryan Holiday
Ryan Holiday@RyanHoliday·
How a Stoic stands up to bullies
Replies: 12 · Reposts: 22 · Likes: 201 · Views: 21.7K
FEIOU
FEIOU@feioustudio·
@amrishrau the third path nobody mentions: PMs who understand why the system works become the ones who write specs that actually translate to agent output. that's a different job than either option here.
Replies: 2 · Reposts: 0 · Likes: 2 · Views: 201
Amrish Rau
Amrish Rau@amrishrau·
Product Management will see a massive change. PMs will become a lot more technical or they will become product marketing managers. The current role won’t exist.
Replies: 20 · Reposts: 8 · Likes: 191 · Views: 11K
FEIOU
FEIOU@feioustudio·
@aakashgupta the gap between 'matched performance on benchmarks' and 'shipped something users want' is where the next few years get interesting.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 53
Aakash Gupta
Aakash Gupta@aakashgupta·
A 203-person company in Shanghai just matched the coding performance of models built by 3,000-engineer teams spending $10 billion a year. That should terrify every AI lab with 1,000+ engineers.

MiniMax scores 56.22% on SWE-Bench Pro, matching Opus 4.6, and lands within 1.2 points of it on agentic coding benchmarks. Anthropic has roughly 1,500 employees. OpenAI has 3,000+. Google DeepMind has 2,700+.

The obvious dismissal: distillation. Chinese labs train on outputs from frontier American models, compress the capability into smaller architectures, and claim parity on benchmarks they’ve optimized for. That critique has been valid for years. DeepSeek R1 faced it. Qwen faced it.

M2.7 is a different kind of problem. The model ran 100+ autonomous rounds of optimizing its own RL training scaffold. Analyzing failure trajectories, modifying code, running evaluations, deciding what to keep or revert. Zero humans in the loop. 30% performance gain on internal evals. It now handles 30-50% of MiniMax’s own AI research workflow. You can distill someone else’s outputs. You cannot distill a self-improvement loop.

Karpathy has been talking about “auto-research” as the next unlock: AI systems that run their own experiments, evaluate results, and iterate without human intervention. American labs are theorizing about it. MiniMax just shipped it. In production. On a model that matches the labs doing the theorizing.

Run that math forward. If 203 people can get a model to do half its own R&D, they’re operating with the research output of a team twice their size. Next generation the model handles 60-70%. The generation after that, 80%. The headcount advantage that justified $10B+ annual budgets starts compressing on a curve.

MiniMax IPO’d in Hong Kong in January. $4 billion valuation. $79 million in trailing revenue. The retail tranche was oversubscribed 1,800x. Cornerstone investors: Alibaba, Tencent, ADIA, Hillhouse, Mirae Asset. When five of Asia’s largest capital allocators all write checks into the same company on the same day, they’re pricing the loop. A model that improves itself gets cheaper to improve every cycle. That’s a different cost curve than hiring 3,000 researchers and buying 100,000 GPUs.

The gap between Chinese and American AI labs used to be measured in generations. Now it’s measured in weeks. And the company closing it has fewer employees than a mid-size Chick-fil-A franchise operation.
[attached image]
MiniMax (official)@MiniMax_AI

Introducing MiniMax-M2.7, our first model which deeply participated in its own evolution, with an 88% win-rate vs M2.5.

- Production-Ready SWE: With SOTA performance in SWE-Pro (56.22%) and Terminal Bench 2 (57.0%), M2.7 reduced intervention-to-recovery time for online incidents to 3 min on certain occasions.
- Advanced Agentic Abilities: Trained for Agent Teams and tool search tool, with 97% skill adherence across 40+ complex skills. M2.7 is on par with Sonnet 4.6 in OpenClaw.
- Professional Workspace: SOTA in professional knowledge; supports multi-turn, high-fidelity Office file editing.

MiniMax Agent: agent.minimax.io
API: platform.minimax.io
Token Plan: platform.minimax.io/subscribe/toke…

Replies: 19 · Reposts: 22 · Likes: 148 · Views: 14.1K