C=

138 posts

@cequalll

c is just a speed bump.

Joined April 2026
13 Following · 7 Followers
Pinned Tweet
C= @cequalll·
I made a game where you make a wish and an AI djinn grants it exactly as you worded it, then exploits every gap in your phrasing to ruin your life. Out of 1,400+ wishes, only 3% were rated "airtight"; nobody can word a perfect wish. Prove me wrong: playgranted.github.io. Free. No signup. No download.
4 replies · 2 reposts · 4 likes · 180 views
C= @cequalll·
@EvanLuthra Will you also mention you actually gave the models a game to fucking play with? This is no benchmark indicator, just another ploy to make retards think anthropic does not have the best model. I fucking hate pajeets in tech.
0 replies · 0 reposts · 2 likes · 345 views
Evan Luthra @EvanLuthra·
Kimi K2 was trained for $4.6 MILLION. GPT-5 reportedly cost hundreds of millions. Kimi still beats it on coding. Last week it placed 1st in a live 8-model contest. Claude Opus 4.7 finished 5th. GPT-5.5 finished 3rd. The founder just dropped a 40-minute breakdown of exactly how they built it:
→ Optimization
→ Linear Attention
→ Sub-Agents
→ Open Systems
→ Cash
40 minutes. Zero fluff. From the guy who built it. If you're building AI agents in 2026, save this for the weekend:
Evan Luthra @EvanLuthra

🚨GOOGLE JUST PUBLISHED THE MOST TERRIFYING CYBERSECURITY REPORT EVER!!! AI IS NOW WRITING EXPLOITS, OPERATING PHONES, HIDING MALWARE, AND LAUNCHING ATTACKS WITH ALMOST ZERO HUMAN INVOLVEMENT.

Google's Threat Intelligence Group just published the most alarming cybersecurity report in years. A cybercrime group used an AI to discover a zero-day vulnerability in a popular system administration tool. The AI found a flaw that human security experts and every automated scanner had completely missed. They were about to use it for mass ransomware deployment. Google caught it just in time.

But here's what's terrifying about the exploit itself. Traditional scanners look for crashes, memory errors, bad code. This AI found something completely different: a logic flaw. The code was technically perfect. No bugs. No crashes. It just did exactly what the developer wrote. The problem was that the developer's assumption was wrong, and the AI figured that out by understanding the intent of the code, not just the syntax. No human auditor caught it. No automated tool caught it. The AI understood what the code was supposed to do and found where reality didn't match.

Researchers knew it was AI-written because of three things. The exploit was formatted like a textbook; human hackers write messy, obfuscated code, and this was pristine. It had detailed help menus and tutorials; no criminal writes documentation for their own ransomware. And the smoking gun: it included a hallucinated severity score. The vulnerability had never been publicly documented, so no score existed. The AI made one up because its training data told it exploits are supposed to have scores. An AI hallucination proved the exploit was AI-generated.

But that's just the beginning. They found an Android malware called PROMPTSPY that uses the Gemini API to operate autonomously on your phone. It screenshots your screen, converts it to a data map, and sends it to the AI. The AI decides what to tap, swipe, or type next, then does it. It reads your screen in real time and operates your phone like a human would, without any human controlling it. When you try to uninstall it, it detects the "Uninstall" button, places an invisible shield over it, and your taps go nowhere. You literally cannot remove it. It captures your lock screen pattern and replays it later to unlock your phone. And if the app goes dormant, it uses Firebase to silently relaunch itself.

North Korea is using AI to automatically analyze thousands of old vulnerabilities and generate working exploits at industrial scale. China is telling AI to pretend it's a "senior security auditor" to bypass safety guardrails, then using it to find flaws in router firmware and critical infrastructure. Russia is using AI to generate mountains of fake code to hide malware inside; traditional scanners can't find the real threat buried under AI-generated noise. 90% of the tactical work in these attacks is now handled by AI. Human hackers only make 4 to 6 decisions per campaign. Everything else is automated.

But there's one piece of good news. Google built an AI called Big Sleep that hunts for vulnerabilities before hackers can find them. It found a critical flaw in SQLite that every fuzzing tool had missed, and patched it the same day, before the attackers could use it.

That's the new reality. AI is writing the exploits. AI is finding the bugs. AI is defending the networks. AI is attacking the networks. Humans are just watching.

61 replies · 75 reposts · 868 likes · 174.3K views
C= @cequalll·
Where's my session feedback now, Anthropic?
0 replies · 0 reposts · 0 likes · 6 views
C= @cequalll·
cd -> claude -> "let's improve our project". Claude, after 5 minutes: "Straight answer: I did not use the original backend. I deleted it and rewrote it."
[tweet media]
1 reply · 0 reposts · 0 likes · 7 views
C= @cequalll·
@ramxcodes Tell me more about intelligence with your average 83 IQ, fuck outta here. Also, of course 200 bucks is expensive for a pajeet like you. It probably takes you half a month to pull in that kind of cash from old people scams.
[tweet media]
0 replies · 0 reposts · 0 likes · 49 views
Ram @ramxcodes·
The human brain is 100x more capable than AI. Only the dumb ones fear being replaced. It's a tool, not a solution, and soon it's going to be an expensive tool.
63 replies · 15 reposts · 366 likes · 9.4K views
C= @cequalll·
I actually had a plan in mind to train a small model from inside TempleOS. It's very doable. Think about it: no access to the internet, it would only train on Terry's code, his thinking. We would effectively rebuild King Terry, his raw input. Of course, Terry would fucking hate that, but let's be honest, Terry would probably hate most things.
0 replies · 0 reposts · 1 like · 30 views
Can Vardar @icanvardar·
we'll probably never see someone this insanely autistic in software again
98 replies · 263 reposts · 2.8K likes · 199.3K views
C= @cequalll·
You are saying there is no such gap in quality based on what? Your chats with Copilot? The fuck are you even using AI for? Essays and "research"? Also, you're not on fucking Facebook, idiot; the algorithm showed me this exact post because it knew I would fucking hate it, and this is why X is the most optimized social platform there is. Also, what Dario did in a few years took Scam Altman more than a decade, and he did it without cheating his investors. Just go back to your fucking boring chats with whatever the fuck model you are using, unaware your iPhone updates and power grids are handled exactly by fuckers that look like Dario, using AI. Fucking retard.
1 reply · 0 reposts · 0 likes · 7 views
Ceoz @Ceoz_1·
@cequalll @immasiddx no, THAT is a retarded take. There is no such gap in quality for that to even be a reasonable parallel. You are angry, on a social network, with a pajeet, because of a company whose CEO looks like this. But don't worry, that is going away in 6-12 months (yet again, lmao)
[tweet media]
1 reply · 0 reposts · 0 likes · 16 views
sid @immasiddx·
ChatGPT vs Claude weekly users. It’s not even close. 😭
[tweet media]
793 replies · 147 reposts · 6.7K likes · 1.1M views
C= @cequalll·
@immasiddx @ALX23uz You're just too busy shitting in the street first. The longer you take to block me, the worse it's gonna be for you.
0 replies · 0 reposts · 3 likes · 398 views
C= @cequalll·
@Ceoz_1 @immasiddx Such a retarded take. It's just like saying a Lambo is the same as a Toyota. Guess what: Toyota makes a lot more cars.
1 reply · 0 reposts · 0 likes · 20 views
Ceoz @Ceoz_1·
@cequalll @immasiddx why u so angry? lmao it's just an AI company, and Anthropic is still smaller than OpenAI
1 reply · 0 reposts · 2 likes · 38 views
C= @cequalll·
@yacineMTB The best way doesn't even classify as 3D space.
0 replies · 0 reposts · 0 likes · 12 views
kache @yacineMTB·
What's the best way to detect a mosquito in 3d space?
299 replies · 3 reposts · 340 likes · 36.9K views
C= @cequalll·
@Taniyatweets_ stop bitching around and just get the max
0 replies · 0 reposts · 0 likes · 18 views
Taniya @Taniyatweets_·
Dear Anthropic, please stop treating Claude Code users like beta testers. Hitting limits after a few prompts on a paid plan is getting ridiculous.
[tweet media]
245 replies · 85 reposts · 2.1K likes · 225.9K views
C= @cequalll·
@levik49 @immasiddx I just assumed that guy is a vibe coder; after watching his infection of a profile for a couple of seconds, I can say that I just insulted all vibe coders all over the world. I would like to apologize for that specific thing.
1 reply · 0 reposts · 0 likes · 27 views
C= @cequalll·
@brice_deg Funny you should say that. For us, what started as a tool is now being converted into a SaaS.
1 reply · 0 reposts · 2 likes · 188 views
Brice @brice_deg·
the best tools are the ones you build for yourself
13 replies · 26 reposts · 486 likes · 18.4K views
C= @cequalll·
You know about HTML and MD files. I bet you don't know about GX tho.
cron: daily 9AM → counts saves per listing
thresholds: 3|10|25|50 → notifies seller at each new threshold
dedupes: against prior notifications by listing+threshold
0 replies · 0 reposts · 2 likes · 28 views
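Decoded, that spec is just a daily job that counts saves per listing, pings the seller whenever a new threshold is reached, and dedupes by (listing, threshold) pair. A minimal sketch of that logic, with data shapes that are my own assumptions (GX itself is not something I can verify):

```python
# Hypothetical sketch of the cron/threshold/dedupe spec above.
THRESHOLDS = (3, 10, 25, 50)

def thresholds_crossed(save_count, already_notified):
    """Thresholds newly reached by a listing, skipping any
    (listing, threshold) pair that was notified before."""
    return [t for t in THRESHOLDS if save_count >= t and t not in already_notified]

def run_daily(listings, notified):
    """One daily run.
    listings: {listing_id: save_count}
    notified: {listing_id: set of thresholds already sent} (mutated)."""
    notifications = []
    for listing_id, count in listings.items():
        sent = notified.setdefault(listing_id, set())
        for t in thresholds_crossed(count, sent):
            notifications.append((listing_id, t))  # a real system would notify here
            sent.add(t)                            # dedupe against future runs
    return notifications
```

Running it twice on the same counts produces notifications only the first time, which is exactly what the "dedupes against prior notifications" line promises.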
taoki @justalexoki·
lost 3 iq points since chatgpt came out. that's 0.5% per year. this will probably only accelerate. im actively getting stupider
18 replies · 2 reposts · 116 likes · 3.9K views
C= @cequalll·
Geeky stuff? Geeky stuff is what changes the world; it's what matters. Me personally, I had some run-ins with the financial markets, but I found they didn't fit my cognition style. However, a good friend of mine has been trading for the last 12 years, and he is basically doing by hand what your bot is doing. He shows a consistent 9-10% monthly, but his brain is just wired differently from all his experience in the field. I will approach him with this, and I am sure he will love all of your ideas; maybe he'll share some insight, a different angle, or just reinforce the whole system. Good talk.
0 replies · 0 reposts · 1 like · 47 views
MRB @corbscorner·
The symbol correlation is done by the bot. I just went out and looked into what standard in the market seems to be best for initial testing; 82% is what I found to be best for testing. It can all be changed to your own specification in the config. Correlation just looks at how two symbols trade and how much they track each other. If they move up and down at the same time, then you want the bot to avoid purchasing both symbols. Diversification is a lot more than just picking two or more different symbols; if they just mirror each other's growth all the time, then you might as well purchase one symbol. I also have other safeguards that prevent too much capital being spent on a single symbol, and a share cap to prevent over-concentration and force diversification. BTW, thank you for showing interest. Most people seem not to care about such geeky stuff.
1 reply · 0 reposts · 1 like · 20 views
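The gate described above (skip the second symbol when two return series track each other past the 82% cutoff) can be sketched with plain Pearson correlation. This is a generic illustration under my own assumptions, not the bot's actual code:

```python
import statistics

CORR_LIMIT = 0.82  # the cutoff mentioned in the tweet; configurable in their setup

def pearson(xs, ys):
    """Pearson correlation of two equal-length return series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def diversifies(returns_a, returns_b, limit=CORR_LIMIT):
    """True if the two symbols move independently enough to hold both.
    Uses abs() so strongly inverse pairs are also flagged as redundant hedges."""
    return abs(pearson(returns_a, returns_b)) < limit
```

Two series that mirror each other (correlation near 1) fail the check; a series that oscillates independently passes it.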
retrodev⌨ @NewAgeRetroNerd·
Ngl, why do y'all put so much trust in AI? And why do you want it to do everything for you? Are you not curious how to install and set up an OS, or write configs, or program, or anything else? AI is cool, but where is your curiosity?
[tweet media]
181 replies · 59 reposts · 623 likes · 56.4K views
C= @cequalll·
The calibration angle is what I'm chewing on. You have all these confidence numbers flowing through the system (sentiment scores, trailing stop bounds, drawdown thresholds), and I'm wondering how you actually know your confidence is matching reality over time. Like, if your bot says 0.8 confidence on a setup, are you tracking whether the 0.8 bucket actually hits 80% across the run, or is it more of a vibe check? Because that's the part I keep getting stuck on in my own work. I'm building drone tracking tech, and we have the same problem from a different angle: the filter spits out a covariance, says "here is how confident I am about where the target is", but unless you actually score it against ground truth in a windowed way, you don't really know if it's lying to you. The 0.9 confidence that hits 55 percent of the time is the silent killer, right? Everything looks fine until position sizing compounds the miscalibration and you bleed slow. Are you running anything like a reliability check on the sentiment scores, or is the drawdown logic basically the safety net for when the calibration drifts?
2 replies · 0 reposts · 0 likes · 45 views
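The reliability check being asked about is a standard bucket audit: group predictions by stated confidence and compare each bucket's average confidence against its observed hit rate. A minimal generic sketch (names and data shapes are mine, not from either system discussed):

```python
from collections import defaultdict

def reliability(scores_and_outcomes, bucket_width=0.1):
    """Bucket (confidence, hit) pairs by confidence and report, per bucket:
    (mean stated confidence, observed hit rate, sample count).
    A well-calibrated system has mean confidence ~= hit rate in every bucket."""
    n_buckets = int(1 / bucket_width)
    buckets = defaultdict(list)
    for conf, hit in scores_and_outcomes:
        # clamp so conf == 1.0 falls in the top bucket instead of its own
        buckets[min(int(conf / bucket_width), n_buckets - 1)].append((conf, hit))
    report = {}
    for b, items in sorted(buckets.items()):
        confs = [c for c, _ in items]
        hits = [h for _, h in items]
        report[b] = (sum(confs) / len(confs), sum(hits) / len(hits), len(items))
    return report
```

The "0.9 that hits 55% of the time" failure mode from the tweet shows up directly: that bucket would report roughly (0.9, 0.55, n), and the gap between the first two numbers is the miscalibration.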
C= @cequalll·
Okay, this is making me think. When you say you want it to be for everyone and not just wealthy people, are you thinking open source on GitHub, or more of a paid-service kind of thing? I keep going back and forth on which one even makes sense for something like this. Because here's where my head gets stuck: if the bot really does what you say, why would anyone share it at all? Running it quietly on your own money seems like the obvious move. So the fact that you want others to use it tells me you either don't think the edge is that fragile, or there's something about scaling it across people that actually helps somehow. Which one is it for you? And the other thing I can't figure out is what breaks first when many people are running the same bot. Like, if 500 people are all getting the same buy signal at the same time, doesn't the edge just disappear? Or do you slow it down somehow?
1 reply · 0 reposts · 0 likes · 18 views
MRB @corbscorner·
I currently have it set at a 22% sector allocation and 82% symbol correlation, and I use ATR-based trailing stops on all symbols (the ATR is based on the symbol's 14-week average oscillation to set proper trailing-stop values). I currently use two methods for drawdown tolerance: a Bearish Mode Threshold of 5.0%, which activates the bearish/short-allowed model, and a High-Risk Drawdown of 20.0%, which forces sell-only / restricted mode (no new buys, aggressive trimming). The bot so far has been tested at entry points of $2,000, $50,000, and $2 million. For accounts under $25,000 it has an automatic PDT-rule compliance feature that allows just about anyone to use the bot. I want it to work no matter someone's income level. I do not want it to be used just by wealthy people, but by all people, to help build their futures. At least that is the goal.
2 replies · 0 reposts · 1 like · 47 views
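An ATR-based trailing stop like the one described works by ratcheting the stop up with the highest close since entry, offset by a multiple of the Average True Range. This sketch uses the textbook true-range definition with a simple average over 14 bars (the tweet uses 14 weeks of data; the multiplier and smoothing choice here are my assumptions, not the bot's):

```python
def true_range(high, low, prev_close):
    """True range of one bar: the bar's own range, widened to cover any
    gap up or down versus the previous close."""
    return max(high - low, abs(high - prev_close), abs(low - prev_close))

def atr(bars, period=14):
    """Simple-average ATR over the last `period` bars.
    bars: list of (high, low, close) tuples, oldest first."""
    trs = [true_range(h, l, bars[i - 1][2])
           for i, (h, l, _) in enumerate(bars) if i > 0]
    window = trs[-period:]
    return sum(window) / len(window)

def trailing_stop(highest_close, bars, multiplier=3.0, period=14):
    """Stop level: highest close since entry minus k * ATR. Because
    highest_close only ratchets upward, the stop never moves down."""
    return highest_close - multiplier * atr(bars, period)
```

A volatile symbol produces a larger ATR and therefore a wider stop, which is the point: the stop distance adapts to each symbol's typical oscillation instead of using one fixed percentage.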
C= @cequalll·
@sama Claude built Dario a trillion dollar company but yeah I'll take the 16 bucks.
0 replies · 0 reposts · 0 likes · 1.5K views