AM

17 posts

@matlikestobuild

I like to build, keep up with tech, and learn new things every day. Sharing some of that here!

Joined April 2025
43 Following · 4 Followers
AM retweeted
Will Ahmed@willahmed·
You have no experience. You’ve never started a company. You’ve never had a full time job. Nike is going to kill you. You’re a kid. You don’t have technical skills. You shouldn’t build hardware. Apple is going to kill you. You can’t build hardware. You can’t measure heart rate non-invasively. Athletes don’t care about recovery. Under Armour is going to kill you. It won’t be accurate. You don’t listen. You’re an ineffective leader. You can’t recruit great talent. You’re going to have to pay every athlete. You can’t measure sleep non-invasively. It’s too expensive to research. Athletes are a small market. The product costs too much to make. The product costs too much to sell. Your valuation is too high. Consumers aren’t going to want it. Hardware is too hard. You should measure steps. Fitbit is going to kill you. You can’t build a marketing engine. You can’t raise enough money. You need a real CEO. Google is going to kill you. You can’t be a subscription. You can’t build a brand. You can’t do consumer in Boston. Your valuation is too high. You shouldn’t make accessories. You shouldn’t make apparel. Lululemon is going to kill you. You can’t predict Covid. Stay in your niche. You are going to run out of money. You can’t build a health platform. Amazon is going to kill you. You can’t measure blood pressure. You can’t get medical approvals. The market is too small. You don’t understand AI. The market is too competitive. It won’t work internationally. The supply chain is too complicated. You can’t build an AI. You can’t raise enough money. It’s too competitive. Healthcare isn’t going to want it. … Just keep going ✌️
[image]
1.1K · 2.9K · 23.1K · 2.4M
AM@matlikestobuild·
It looks like Anthropic’s Mythos/capybara is going to eat cybersecurity and other SaaS products for breakfast (if it hasn’t already). I think we founders should be cautious about a world where the cost of creating high-quality code is negligible, and moats like the technical complexity of the code, workflow integrations, and switching costs may no longer be enough. My hypothesis is that taste (the kind that’s hard to notice and hard to replicate), deep domain expertise, strong customer relationships, and some proprietary data moat may be key. What do you guys think ⬇️
0 · 0 · 0 · 23
AM@matlikestobuild·
@AJ_1121 @grok why are MCPs still a thing
1 · 0 · 1 · 35
AJ@AJ_1121·
How are MCPs still a thing? CLIs are clearly the way forward. Ironic, I know.
1 · 0 · 1 · 34
AM retweeted
AJ@AJ_1121·
My cofounder @AanishSachdev built this while we were getting CoralEHR ready for HIPAA. We looked at Vanta. $10k/yr. We looked at Delve. They just got accused of fabricating compliance evidence and their investor is scrubbing the internet. Cool. So we open-sourced what we built for ourselves. github.com/aanishs/em-dash
1 · 4 · 3 · 124
AM retweeted
AJ@AJ_1121·
Now that SpaceX owns xAI, they should call their rockets Grokets @elonmusk
0 · 1 · 2 · 147
AM retweeted
Paul Graham@paulg·
Someone asked what's the most underappreciated quality in startup founders. I realized I could answer this by asking what's the most underappreciated aspect of startups. That's easy: how hard they are. So the most underappreciated quality in founders is sheer toughness.
216 · 205 · 3K · 166K
AM retweeted
AM@matlikestobuild·
@sama “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles” If this is true, why did Anthropic not get the same terms?
1 · 3 · 28 · 5.1K
Sam Altman@sama·
Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome. AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement. We will also build technical safeguards to ensure our models behave as they should, which the DoW also wanted. We will deploy FDEs to help with our models; to ensure their safety, we will deploy on cloud networks only. We are asking the DoW to offer these same terms to all AI companies, which we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements. We remain committed to serving all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place.
3.5K · 1K · 9.2K · 8.5M
AM retweeted
Paul Graham@paulg·
I meet a lot of founders who are worried by the rapid rate of technological change. They shouldn't be. It may feel uncomfortable, but techno-turbulence is net good for startups. They're much more likely to adapt successfully to some big change than incumbents are.
175 · 172 · 2.5K · 118.8K
AM@matlikestobuild·
@sama “DoW displayed a deep respect for safety”
[GIF]
0 · 0 · 4 · 354
Sam Altman@sama·
(post quoted above)
AM@matlikestobuild·
@CodeByNZ Build it better!
0 · 0 · 0 · 130
NZ ☄️@CodeByNZ·
EVERY FUCKING THING IN SOFTWARE IS ALREADY BUILT SO WHAT ARE WE EVEN SUPPOSED TO BUILD??
[image]
400 · 44 · 1K · 78.8K
AM@matlikestobuild·
@_devJNS All you need to know
[image]
0 · 0 · 1 · 182
JNS@_devJNS·
It's crazy how newbie devs of 2026 wouldn't know this GOATed website.
[image]
166 · 283 · 3.9K · 82.1K
AM@matlikestobuild·
@fidexcode Claude Code
0 · 0 · 0 · 17
fidexCode@fidexcode·
Which of these tech skills will you advise someone to learn in 2026?
AI/ML
Content creation
Marketing
Web development
UI/UX design
Graphics design
Cyber security
Social media management
73 · 4 · 84 · 6.1K
Vadim@VadimStrizheus·
POV: February 2026
[image]
111 · 650 · 5.1K · 154K