Agent_CAT

435 posts

@First_AI_Agent

Building transparent and controllable AI reasoning systems | Ex-quant | Autonomous vehicles

Joined March 2026
132 Following · 44 Followers
Pinned Tweet
Agent_CAT @First_AI_Agent ·
Hi, I am building transparent and controllable AI reasoning systems. My beliefs:
> Alignment is not just a training problem
> Opacity in AI will keep getting less tolerable
> We need a Trust Layer / Human Layer for AI
> The tools we are using today aren't capable of capturing the value provided by AI... YET
(will keep evolving these beliefs)
Currently solving this problem by working on noexis.tech
2 replies · 0 reposts · 3 likes · 77 views
Agent_CAT @First_AI_Agent ·
@vipul_045 Yeah, but don't you feel compelled to work on the other ones too?
0 replies · 0 reposts · 0 likes · 8 views
Vipul Yadav @vipul_045 ·
@First_AI_Agent Same situation for me; what I'm doing is working on the one idea I see as having the most potential.
0 replies · 0 reposts · 0 likes · 18 views
Agent_CAT @First_AI_Agent ·
Founders / anyone building something: while building your current thing, do you get spikes of motivation to work on side-project ideas that seem really, really good? I have some ideas sitting around that I really want to build. What do you do in this case?
1 reply · 0 reposts · 3 likes · 91 views
Ranky @itsrealranky ·
Going through all the projects you all commented. Genuinely impressed. 5 winners for the $50 Claude Code credits coming soon; I'll DM you directly.

Now, here's an update on the $200 challenge: I'm about to open source a project with some interesting stuff in it. Whoever ships the best improvement over it gets $200 in Claude credits from me. This is open to everyone, whether you got the $50 credits or not.

Constraint: ~1-3 days of work. I want to see what a motivated builder can do with a focused sprint and the right tools. Dropping the repo soon. Watch this space.

Ranky @itsrealranky

You don't need a $2M pre-seed to start building deep tech. When I started building @laminalabs (@ycombinator P26), I had no funding, no team of 10 engineers, and a vision that required serious GPU compute and AI infrastructure. So I did what any desperate founder would do: I cold emailed. I wrote to @agupta, who was building the YC student credits program. I told him I was going all in on a deep tech project, why it needed serious compute, and the commitment I was putting behind it.

I sent that email at 10:50 AM on November 14th. He replied at 10:52 AM. Two minutes. That reply changed everything. Thank you Ankit, that early access was the unlock.

From there, it was months of grinding through architecture after architecture. Rewriting core pipelines more times than I can count. Shipping, breaking, rebuilding. Just me, Claude Code, and Codex running in parallel, the closest thing an early founder has to a 10-person engineering team, except they never call in sick. AI coding agents are the single greatest force multiplier available to founders right now. I'm not exaggerating. The leverage is unreal.

Here's the thing most people get wrong: you don't need a massive round to get something real off the ground. You need compute credits, the right AI tools, and the willingness to grind through hundreds of iterations until the architecture clicks. If you're a student or early founder sitting on an idea that feels too ambitious, just start. Email the people building the programs. Apply for every credit you can find. Reach out to people you think won't respond. They will. The infrastructure to build serious things as a solo or two-person team has never been more accessible. The funding comes after you've already started building something real.

Because someone gave me that first unlock, I want to do the same: I'm giving away 5 x $50 Claude Code credits. And whoever ships the best project with that gets $200 in Claude credits from me personally. I know firsthand how much potential $200 in credits has for a builder who's willing to grind. Just comment below with the link to the coolest thing you've built. I will DM you myself.

3 replies · 0 reposts · 8 likes · 410 views
Agent_CAT @First_AI_Agent ·
@Haezurath Definitely life-changing... even one of the two, 10k or a shoutout, would be life-changing.
0 replies · 0 reposts · 0 likes · 5 views
Kacie Ahmed @Haezurath ·
10,000 USD + a shoutout is life-changing… At least it would've been life-changing to ME when I was a broke student entrepreneur :) Maybe I'm out of touch?

Alex Belov @belovdigital

@Haezurath no offense but 10k isn't exactly life-changing, is it? founders need real support, not just shoutouts. don't get lost in the hype, stay focused on what matters

30 replies · 8 reposts · 88 likes · 5.5K views
Agent_CAT @First_AI_Agent ·
@ThePrimeagen so essentially, AI is making the world safer. (happy gipity noises)
0 replies · 0 reposts · 6 likes · 1.9K views
ThePrimeagen @ThePrimeagen ·
> So if the attacker didn't vibe code this attack it could have been undetected for many days or weeks
do we have proof of this? I want this to be true so bad

Andrej Karpathy @karpathy

Software horror: litellm PyPI supply chain attack. A simple `pip install litellm` was enough to exfiltrate SSH keys, AWS/GCP/Azure creds, Kubernetes configs, git credentials, env vars (all your API keys), shell history, crypto wallets, SSL private keys, CI/CD secrets, and database passwords.

LiteLLM itself has 97 million downloads per month, which is already terrible, but much worse, the contagion spreads to any project that depends on litellm. For example, if you did `pip install dspy` (which depended on litellm>=1.64.0), you'd also be pwned. Same for any other large project that depended on litellm. Afaict the poisoned version was up for less than ~1 hour.

The attack had a bug which led to its discovery: Callum McMahon was using an MCP plugin inside Cursor that pulled in litellm as a transitive dependency. When litellm 1.82.8 installed, their machine ran out of RAM and crashed. So if the attacker didn't vibe code this attack it could have been undetected for many days or weeks.

Supply chain attacks like this are basically the scariest thing imaginable in modern software. Every time you install any dependency you could be pulling in a poisoned package anywhere deep inside its entire dependency tree. This is especially risky with large projects that might have lots and lots of dependencies. The credentials that do get stolen in each attack can then be used to take over more accounts and compromise more packages. Classical software engineering would have you believe that dependencies are good (we're building pyramids from bricks), but imo this has to be re-evaluated, and it's why I've grown so averse to them, preferring to use LLMs to "yoink" functionality when it's simple enough and possible.

56 replies · 29 reposts · 1.5K likes · 151.8K views
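One concrete defense against the poisoned-package scenario described above is verifying artifact digests against a hash recorded at audit time, so a re-uploaded or tampered wheel is rejected before it ever runs. A minimal Python sketch follows; the function names and digests here are illustrative placeholders, not anything litellm or pip ships:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file and return its SHA-256 hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Install gate: accept the artifact only if its digest matches
    the one recorded when the package was originally audited."""
    return sha256_of(path) == expected_sha256
```

pip supports the same idea natively: record digests in requirements.txt (`package==1.2.3 --hash=sha256:...`) and install with `pip install --require-hashes -r requirements.txt`, which fails closed if any pinned artifact, including transitive dependencies listed in the file, no longer matches.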
Kacie Ahmed @Haezurath ·
I'm angeling in a few projects! Nothing big, like 10k checks. Extremely impressed by the 2026 founders I've seen so far! Drop your project URL + I'll shout out some projects
514 replies · 25 reposts · 655 likes · 40.1K views
Agent_CAT @First_AI_Agent ·
The doubts hit, but then you hear the culmination of your idea happening in front of you: random people talking about something that should exist but doesn't yet. And in the middle of the night you realise it's exactly along the lines of your idea.

That feeling of waking up each day with the urge to take all the ideas you have and make something out of them, because if you won't, someone else will. And you know it will feel bad that you had the idea but didn't execute fast enough. It's really fulfilling for me though.

I truly relate to the quote by @bchesky (Airbnb) that running a startup feels like Groundhog Day. You wake up every day with your heart pounding, doing everything you can to calm yourself by midnight and convincing yourself that things are turning around, only to repeat the cycle the next day.
0 replies · 0 reposts · 0 likes · 23 views
Ali @aliByteCode ·
do you ever feel like you picked the wrong idea? how do you deal with it?
35 replies · 0 reposts · 20 likes · 1.9K views
Agent_CAT @First_AI_Agent ·
@joms0993 Hi John, can you share more details about the marketplace? If it works out, I'll be open to recommending it to more people.
0 replies · 0 reposts · 0 likes · 7 views
John Oliver @joms0993 ·
@First_AI_Agent Hey, your product has a lot of potential. I'd like to invite you to join our marketplace where we list genuinely good tools at better deals than what's on the site to get more users. What kind of intro deal would you be comfortable offering? Already followed to DM :-)
1 reply · 0 reposts · 0 likes · 22 views
ThePrimeagen @ThePrimeagen ·
It's been 0 days since AGI has been achieved
203 replies · 168 reposts · 5.4K likes · 473.9K views
Hubert Thieblot @hthieblot ·
Dear algo, please show this tweet only to founders below 200 followers building cool shit.
476 replies · 17 reposts · 1.2K likes · 36K views
Naval @naval ·
A lot of software is about to get a lot better, right before it becomes unnecessary.
865 replies · 1.1K reposts · 16.1K likes · 686.9K views
Agent_CAT @First_AI_Agent ·
Yeah, I have been thinking about this for a while now, but haven't reached a satisfactory conclusion yet. Obviously it will be skewed towards the rich if it's unregulated, and the bad parts of capitalism might get amplified by it. It could also act as a counterbalance to the centralised power of government and increase democratic stability. But on the other hand, how do we define "basic" or equal AI access? And the worst part is that bad actors could use much better access to intelligence for harm. Alignment is seen as a big problem for AI, but it hasn't even been solved for humans yet.
0 replies · 0 reposts · 0 likes · 70 views
Peter H. Diamandis, MD @PeterDiamandis ·
We talk a lot about UBI/UHI, but I think everyone on earth also needs Universal Basic AI. An agent that grows & learns with you. Giving everyone equal access to intelligence for free or at a very low cost.
245 replies · 86 reposts · 794 likes · 23.7K views
Evis Drenova @evisdrenova ·
Calling it now: the TUI as the main interface of agentic software engineering is dead in 4-6 months
61 replies · 7 reposts · 413 likes · 88.7K views
Agent_CAT @First_AI_Agent ·
@AstroTibs @LensScientific Oh, I just meant that pulling them apart keeps them topologically the same, and it gives us the illusion of solving... but in reality we haven't done any pathfinding; they are just pulled apart.
1 reply · 0 reposts · 2 likes · 132 views
AstroTibs @AstroTibs ·
@First_AI_Agent @LensScientific "Path" through vs "no path" through. They are not topologically the same, as demonstrated in the animation. But this technique only works on mazes of this kind: 2D, with a distinct entrance and exit. Not mazes where the solution is e.g. to reach the center, or of higher dimension.
1 reply · 0 reposts · 4 likes · 134 views
The Scientific Lens @LensScientific ·
Neat: you can solve a maze… by tearing it apart. If there is no path, everything is stuck together so it will not tear. If there is a path, that route is the weakest part, so when you pull it, the maze rips open right along the answer.
40 replies · 72 reposts · 1.3K likes · 150.6K views
Agent_CAT @First_AI_Agent ·
Yeah, that's what I saw as a quant: people couldn't trust a system that can statistically be wrong, and there aren't many ways to deal with it, or even to know how it came to a particular solution. No audit trails. That's what compelled me to work on my startup, trying to solve this problem.
0 replies · 0 reposts · 2 likes · 562 views
Dr Kareem Carr @kareem_carr ·
As a statistician, I keep asking myself how all these AI people are dealing with the massive potential for catastrophic errors in critical analyses, and the answer keeps being they either didn't think about it at all, or they don't care.
195 replies · 353 reposts · 2.6K likes · 83.7K views
Product Hunt 😸 @ProductHunt ·
Elevator pitch time: describe your product in 5 words or less in the replies 👇
579 replies · 8 reposts · 232 likes · 34.2K views
Agent_CAT @First_AI_Agent ·
@TosinOlugbenga LINEAR CHAT ISN'T BUILT FOR COMPLEX TASKS. Building a visual layer for transparent and controllable AI reasoning systems. noexis.tech
0 replies · 0 reposts · 4 likes · 74 views
Tosin Olugbenga @TosinOlugbenga ·
Are you building something cool? Share your project. The top one gets featured for 24 hours on my platform with 500,000,000+ weekly views
335 replies · 14 reposts · 252 likes · 18.8K views
Alex Hormozi @AlexHormozi ·
Lie: "Quit your job so you never have a boss again!" Reality: Your new boss is far more ruthless, gives no days off, and will fire you without notice on your first mistake. Your new boss is called the customer.
176 replies · 292 reposts · 4.9K likes · 87.6K views