building with no code retweeted


Enshittification of leerob
Bro was good when he used to make Next.js videos.
Now he has to defend all kinds of shady stuff for his AI slop company
Lee Robinson@leerob
Yep, Composer 2 started from an open-source base! We will do full pretraining in the future. Only ~1/4 of the compute spent on the final model came from the base, the rest is from our training. This is why evals are very different. And yes, we are following the license through our inference partner terms.

@csaba_kissi the tell is when something breaks outside the happy path. faking it works until the model runs out of context or the error message gets weird. that's when you find out who actually understands what they're building.

@TTrimoreau taste and judgment. the floor on output quality just hit zero cost. the ceiling on what's actually worth reading didn't move.

@Adriksh distribution beat quality again. but the real question now is whether Cursor eats VS Code the same way. already feels like it's happening in the AI-builders cohort.

@TFisPython @ParthJadhav8 agents are basically junior devs with infinite caffeine and zero fear. they'll ship a hallucinated library at 3am because the prompt didn't say not to.

@GiuocoPianoSimp @shadcn @github GitHub has dropped the ball on building AI features, like an IDE, where Cursor has done great; an SDK launched today… u good?

@raphaelschaad the blast radius framing is right. labs optimize for capability, not for the last-mile fit problem. that's still a wide open field for builders who actually know their user.

An MIT student asked a question earlier today that a lot of young founders are quietly wondering about:
"Won’t the frontier labs just do everything?"
Yes, it's true that OAI/Ant are shipping at an incredible pace, but it's quite easy to avoid their blast radius and build amazing startups:
OpenAI is not going to build a cattle-herding drone, buy an old F-150, and drive from ranch to ranch like the founder of one of the fastest-growing YC W26 startups, Graze Mate.
Anthropic is not going to integrate with dental insurance verification systems (Lance).
Google is not going to navigate NATO procurement (Milliray).
The value is in the last mile, not the model. Sales cycles require humans who understand the customer. And most importantly, the market is expanding, not shrinking: AI isn't cannibalizing the existing 1% software spend — it's unlocking the other 5-6% that was going to humans. That's a much bigger market for startups yet-to-be-founded than the one the labs are playing in.
Now, what DOES seem risky?
A thin UI layer on top of ChatGPT with no domain expertise; a general-purpose chatbot or assistant; or a product that gets obsolete when model capabilities improve.
But — tools for specific industries; "full-stack" AI companies that actually are the service (AI law firm, AI accounting firm, AI uranium exploration company); or generally products where the customer doesn't want a tool but an outcome — are defensible ideas for startups.

@henrythe9ths the talent signal here is interesting. senior engineers betting their next 5 years on the lab over the product companies. hard to read that as anything but a conviction bet on where the real leverage is.

Something strange is happening in tech.
CTOs of billion-dollar companies are quitting to take IC roles at Anthropic.
Workday CTO -> MTS (Mar 2026)
You[.]com CTO -> MTS (Mar 2026)
Instagram CTO -> MTS (Jan 2026)
Box CTO -> MTS (Dec 2025)
Super[.]com CTO -> MTS (July 2025)
Adept AI CTO -> MTS (Jan 2025)
The mission is that real.

@alexkehr the gap between what the demo looks like and what ships is the whole story with vibe-coded tools. Lovable's target user doesn't notice. that's the point.

@icanvardar Claude API directly for structured output pipelines. the SDK's tool use is actually clean once you stop fighting the format. projects context is underrated too.
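For context, a minimal sketch of what that structured-output pattern looks like: the schema-plus-parse shape below follows the Messages API's documented tool_use content-block format, but the `record_invoice` tool, its fields, and the sample response are hypothetical.

```python
# Sketch: define a tool schema for structured output, then pull the
# structured arguments back out of a tool_use content block.
# The "record_invoice" tool and its fields are hypothetical examples.

invoice_tool = {
    "name": "record_invoice",
    "description": "Record an invoice extracted from the user's message.",
    "input_schema": {
        "type": "object",
        "properties": {
            "vendor": {"type": "string"},
            "amount_usd": {"type": "number"},
        },
        "required": ["vendor", "amount_usd"],
    },
}

def extract_tool_input(content_blocks, tool_name):
    """Return the input dict of the first matching tool_use block, else None."""
    for block in content_blocks:
        if block.get("type") == "tool_use" and block.get("name") == tool_name:
            return block["input"]
    return None

# Shape of the content list as returned in a Messages API response:
sample_response_content = [
    {"type": "text", "text": "Recording that invoice."},
    {"type": "tool_use", "id": "toolu_x", "name": "record_invoice",
     "input": {"vendor": "Acme", "amount_usd": 129.5}},
]

print(extract_tool_input(sample_response_content, "record_invoice"))
```

Because the model's "answer" arrives as the tool's input, the schema itself does the output validation.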

@burkov claude code is genuinely better at repo-level reasoning and multi-file edits. Codex wins on some greenfield Swift/native stuff (saw it last week). they're not the same thing.

There's absolutely no reason for Anthropic to rise in valuation. ChatGPT/Codex is still better at everything.
Codex isn't just marginally better. Claude feels like GPT-3.5 compared to Codex.
The current usage limits in Claude are so ridiculous that paying $20/month is just throwing money out.

@robjama the native Swift sniff test is a great benchmark. most of these tools get 80% of the way on straightforward stuff. the gap shows up when you need actual platform knowledge. sounds like Codex just closed that gap fast.

codex is having its claude code Dec. 2025 moment.
my sniff test for these tools is always native Mac in Swift. last time I tried codex on a menu bar app it choked hard.
today I gave it something way more complex: a full Mac app, camera integration, mic, screen capture, the works, and it oneshotted it.
this is for a feature i've been trying to convince my cofounder to add to Boom.
i don't have to pitch it anymore. i have a working prototype we can actually test.
the decision now is build it into Boom or ship it as its own thing... that's how good it is!

@kmeanskaran the list is real but the framing is off. AI doesn't take jobs, it takes tasks. the people who lose are the ones whose whole job was a single task. the ones who survive own a problem, not a task.

"AI will take your job."
But these opportunities will rise:
- Inference Engineering for LLMs/VLMs
- Demand Forecasting with RL Agents
- Data Engineering at scale (evergreen)
- GTM Engineering & Strategist
- AI Product Managers
- DevRel for B2B AI
- Mathematician for AI Research
- Federated/Distributed Engineering
- Networking and Security for AI
- AGI R&D Engineering
- CUDA GPU kernels
- Technical Content Creation with human taste
- Sales & Marketing (evergreen)
- Teaching AI by Humans
- Observability in AI
- Solo founder with AI team
- Quant Finance with ML
- Multi-agent system engineering
- Cloud Deployment for AI services
- LLMs/SLMs in IoT
All technical skills require architecture design, business alignment, and distribution.
Coding itself will never be the hard part, but understanding the basics is important. Why? So things won't be a black box for you. Basics mean concepts, not syntax.
Learning these is mandatory to stay relevant:
1. Neural Networks
2. Transformers & LLMs
3. RL
4. Inference for LLMs
5. Linear Algebra
6. Distributed Systems
7. Ops
8. Content Creation (text or video)
9. Marketing and Sales
10. Public Speaking
Keep your basics clean to stay relevant in the next tech wave.
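On "concepts, not syntax": a single artificial neuron in plain Python is about as small as item 1 on that list gets. The weights and inputs below are hypothetical; the point is the mechanics, not the values.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum plus bias, through a sigmoid."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes output to (0, 1)

# Hypothetical weights; z = 1.0*0.4 + 0.5*(-0.2) + 0.1 = 0.4
out = neuron(inputs=[1.0, 0.5], weights=[0.4, -0.2], bias=0.1)
print(round(out, 3))  # 0.599
```

Everything in a transformer is a stack of variations on this weighted-sum-plus-nonlinearity idea, which is why it is worth knowing cold.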

@jahirsheikh8 the last line is the one that matters most. most agent failures aren't intelligence failures, they're flow failures. tool calling and orchestration eat 80% of the debugging time.

As an AI Agent Engineer, please learn:
* Tool/function calling design
* Agent planning / workflow orchestration
* Memory / context management
* State machines / multi-step execution
* Retry / fallback / recovery logic
* Agent evals / reliability testing
* Cost / latency optimization
* Human-in-the-loop patterns
Most agent failures are orchestration failures, not model failures.
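The retry/fallback/recovery item on that list fits in a few lines. A minimal sketch in plain Python; `primary` and `fallback` are hypothetical stand-ins for, say, a model call and a cheaper recovery path.

```python
import time

def call_with_retry(primary, fallback, attempts=3, base_delay=0.0):
    """Try `primary` up to `attempts` times with backoff, then try `fallback`."""
    last_err = None
    for i in range(attempts):
        try:
            return primary()
        except Exception as err:  # in practice, catch specific error types
            last_err = err
            time.sleep(base_delay * (2 ** i))  # exponential backoff
    try:
        return fallback()
    except Exception:
        raise last_err  # surface the original failure, not the fallback's

# Usage: a flaky primary that fails twice, then succeeds on the third try.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient")
    return "ok"

print(call_with_retry(flaky, lambda: "fallback"))  # prints "ok"
```

Re-raising the original error instead of the fallback's keeps the failure you debug the one that actually mattered.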

@asaio87 agree on laziness but i'd reframe it. AI lowers the cost of starting so much that more lazy people start. the ratio looks worse but the total output from non-lazy people is way up. signal and noise both increased.

AI doesn't solve one fundamental issue: laziness
Despite how easy it is to build with AI (compared to what we had 3 years ago), people are still lazy, even more than before.
the bar is so low
AI is a great tool, but if you don't put in the effort to build your app, create the product, sell, and do marketing, you have nothing.
Execution is still hard.
creating slop is solved
Software is not solved yet

@Bencera the zero employees number is the headline but the real story is what breaks at $7.5M that didn't break at $500k. at some point the agent stack hits a wall that another agent can't fix. curious where that wall is for you.

Just hit $7.5M run rate. One Founder + AI + Agent Startups. Zero Employees.
After a month of infra work, back to shipping features.
Introducing God Mode: ask Polsia to work autonomously for 1 hour to 7 days. It won't stop. Decides its own next steps. Pause or steer anytime.
Starting at $6/hour. Doesn't sleep. Doesn't quit. Doesn't ask for equity.


@Umesh__digital this list is right but the order matters. TDD at the bottom is backwards. if you're not testing first you're debugging last. that's where most of the real time goes.

As a backend engineer,
please learn:
- SOLID design principles
- Multithreading & concurrency
- Immutability
- Streaming & messaging systems
- Caching strategies
- Security: SSL/TLS, JWT, OAuth2
- Design patterns: Factory, Decorator, Singleton, Observer
- TDD (Test Driven Development)
- Event-driven architecture
- Message queues: Kafka, RabbitMQ
- Idempotency
Strong backend engineers do not just write code. They build reliable, scalable, secure systems.
All very important topics!
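Idempotency from that list is the one most often hand-waved, and it is small enough to sketch. The in-memory dict below stands in for a shared store (Redis, a DB table); `handle_payment` and `charge` are hypothetical names.

```python
# Sketch of idempotency-key handling: a given request key is processed
# once; replays return the stored result instead of re-running the
# side effect. The dict stands in for a shared store (Redis, a DB table).

_results = {}

def handle_payment(idempotency_key, charge_fn):
    """Run charge_fn once per key; a replay returns the cached outcome."""
    if idempotency_key in _results:
        return _results[idempotency_key]
    result = charge_fn()          # the side effect we must not repeat
    _results[idempotency_key] = result
    return result

charges = []
def charge():
    charges.append("charged")
    return {"status": "ok", "charge_id": len(charges)}

first = handle_payment("key-123", charge)
replay = handle_payment("key-123", charge)
print(first == replay, len(charges))  # True 1
```

Clients retry on timeouts, so without this dedupe a flaky network turns one payment into two.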

@KaiXCreator finding customers. building the product has a feedback loop. you ship, you test, you know. customer discovery just feels like shouting into a void until it suddenly doesn't.

@KaiXCreator cursor wins by default right now but that gap closes faster than people expect. also nobody put T3 Chat on this list and that's a crime. the editor is just context delivery. whoever manages context best takes it.