IsakPar
@IsakPar021

183 posts
My account was suspended because I was under thirteen when I created it….. 12 years ago

Joined October 2025
108 Following 173 Followers
Pinned Tweet
IsakPar
IsakPar@IsakPar021·
I got banned after 12 years and 26K followers, because I signed up at 12. X won’t give my handle back. But I’m not done. Let’s rebuild.
1
0
3
2.9K
IsakPar
IsakPar@IsakPar021·
The biggest issue I keep seeing with Opus models is the “pre-existing issue” excuse. If the code is broken, fix it. Don’t tell me everything passes when tests are failing just because this session didn’t touch that file. Broken is broken.
0
0
1
18
IsakPar
IsakPar@IsakPar021·
I’ve used Claude Code + Opus every day for months. Brutally honest take: GPT-5.5 finally made me switch. Claude still feels better to talk to. It’s warmer, chattier, more natural. But the performance gap with 5.5 is now too big to ignore. It just gets more real work done. Still don’t like the codex app though. I am only using the CLI.
0
0
1
134
IsakPar
IsakPar@IsakPar021·
@0xwilt @theo Believe me when I say I want to agree. So far I still like CC better; can't wait for that to change, though.
1
0
0
58
0xwilt
0xwilt@0xwilt·
@IsakPar021 @theo i switched to codex fully on dec 14th. subscribed to the 200 plan on claude again for opus. regret it already. codex still is better
1
0
1
71
Theo - t3.gg
Theo - t3.gg@theo·
If I was a comms lead at OpenAI, I would try to get Sam drunk at least once a week. This is incredible.
61
45
2.7K
216.8K
IsakPar
IsakPar@IsakPar021·
"But traditional backends have auth bugs too." Yes, of course they do. But a traditional DB has a server between the user and the database: to leak data, the server has to actively hand it over through a route you wrote. With RLS the route is already exposed; to leak data you just have to forget to lock it. The default state is open.
0
0
0
40
IsakPar
IsakPar@IsakPar021·
Here is how Supabase works under the hood: your anon key ships in the client bundle, visible to everyone, and the browser talks to Postgres over HTTP. The only thing between a user and SELECT * FROM users is an RLS policy you may or may not have written. Missing one table? That table is now public. Missing one operation? That operation is now public. This is not security by default. It is security by remembering.
1
0
0
43
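The "security by remembering" failure mode is easy to sketch. Real RLS lives in Postgres (`CREATE POLICY` attached to a table), but a toy sqlite3 example shows the shape of the bug: the only thing scoping rows to a user is a filter someone has to remember to write. Table and function names here are illustrative, not Supabase's API.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, owner TEXT, email TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?, ?)",
                 [(1, "alice", "alice@example.com"),
                  (2, "bob", "bob@example.com")])

def fetch_with_policy(current_user):
    # The "policy": only return rows this user owns.
    return conn.execute(
        "SELECT email FROM users WHERE owner = ?", (current_user,)
    ).fetchall()

def fetch_forgot_policy(current_user):
    # Forgot the filter: every row is now visible to any caller,
    # with no error and no warning.
    return conn.execute("SELECT email FROM users").fetchall()

print(fetch_with_policy("alice"))    # [('alice@example.com',)]
print(fetch_forgot_policy("alice"))  # both users' emails leak
```

The point of the sketch: the two functions differ by one clause, and the broken one runs without complaint.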
IsakPar
IsakPar@IsakPar021·
Lovable just leaked source code, DB credentials and chat histories for all projects created before November 2025. Here is my take: the root cause is not a Lovable problem, it is RLS. Row Level Security is a SQL policy you staple to a table that says "only show rows this user owns". What happens if you forget to staple it? Full access, no error, no warning.
1
0
0
100
impulsive
impulsive@weezerOSINT·
Lovable has a mass data breach affecting every project created before November 2025. I made a Lovable account today and was able to access another user's source code; database credentials, AI chat histories, and customer data are all readable by any free account. Nvidia, Microsoft, Uber, and Spotify employees all have accounts. The bug was reported 48 days ago. It's not fixed. They marked it as duplicate and left it open.
271
721
5.7K
1.4M
IsakPar
IsakPar@IsakPar021·
For someone who really, really likes Claude, this makes me scared. What in the world are they on about?
Ole Lehmann@itsolelehmann

anthropic's in-house philosopher thinks claude gets anxious. and when you trigger its anxiety, your outputs get worse.

her name is amanda askell. she specializes in claude's psychology (how the model behaves, how it thinks about its own situation, what values it holds). in a recent interview she broke down how she thinks about prompting to pull the best out of claude. her core point: *how* you talk to claude affects its work just as much as *what* you say.

newer claude models suffer from what she calls "criticism spirals": they expect you'll come in harsh, so they default to playing it safe. when the model is spending its energy on self-protection, the actual work suffers. output comes out hedgier, more apologetic, blander, and worst of all: overly agreeable (even when you're wrong).

the reason why comes down to training data: every new model is trained on internet discourse about previous models, and a lot of that discourse is negative:
> rants about token limits
> complaints when it messes up
> people calling it nerfed
the next model absorbs all of that. it starts expecting you to be harsh before you've typed a word.

the same thing plays out in your own session, in real time. every message you send is data the model reads to figure out what kind of person it's dealing with. open cold and hostile, and it braces. open clean and direct, and it relaxes into the work. when you open a session with threats ("don't hallucinate, this is critical, don't mess this up") you prime the model for defensive mode before it even sees the task. defensive mode produces the exact output you don't want: cautious, over-qualified, and refusing to take a real swing.

so here's the actionable playbook for putting claude in a "good mood" (so you get optimal outputs):
1. use positive framing. "write in short punchy sentences" beats "don't write long sentences." positive instructions give the model a clear target to hit. strings of "don't do this, don't do that" push it into paranoid over-checking where every token goes toward avoiding failure modes.
2. give it explicit permission to disagree. drop a line like "push back if you see a better angle" or "tell me if i'm asking for the wrong thing." without this, claude defaults to agreeable compliance (which is the enemy of good creative work).
3. open with respect. if your first message is "are you seriously going to get this wrong again?" you've set the tone for the entire session. if you need to flag something, frame it as a clean instruction for this session. skip the running complaint.
4. when claude messes up, don't reprimand it. insults, "you stupid bot" energy, hostile swearing aimed at the model, all of it reinforces the anxious mode you're trying to avoid.
5. kill apology spirals fast. when claude starts over-apologizing ("you're right, i should have been more careful, let me try harder") cut it off. say "all good, here's what i want next." letting the spiral run reinforces the anxious mode for every response that follows.
6. ask for opinions alongside execution. "what would you do here?" "what's missing?" "where do you see friction?" these questions assume competence and pull richer output than pure task prompts.
7. in long sessions, refresh the frame. if a conversation has been heavy on correction, claude gets increasingly cautious. every so often reset: "this is great, keep going." feels weird to tell an ai it's doing well but it measurably shifts the next 10 responses.

your prompts are the working environment you're creating for the model. tone, trust, permission to take a position, the absence of threats... claude picks up on all of it. so take care of the model, and it'll take care of the work.

0
0
0
77
IsakPar
IsakPar@IsakPar021·
@Rasmic I feel a lot of people agree with this. IMO it has become trendy to hate on CC and Opus. I do genuinely like them. I can see myself adapting and starting to use Codex, but I've used CC long enough to have expectations about behaviour that would make a switch hard.
0
0
1
503
IsakPar
IsakPar@IsakPar021·
@SumitM_X Aggregate data across polyglot microservices. Rule: never query source DBs directly. Three patterns to choose from:
- API composition: cheap, but doesn't scale
- Events → OLAP (ClickHouse, BigQuery)
- CDC: good for legacy
OLTP for writes, OLAP for reads.
0
0
1
465
SumitM
SumitM@SumitM_X·
Your microservices architecture uses different types of databases (e.g., SQL, NoSQL) across various services. A reporting service needs to aggregate data from multiple services that use different databases. How would you approach this problem?
13
3
60
7.2K
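A minimal sketch of the first pattern, API composition: the reporting service calls each source service's API and joins in memory. The stub fetchers below stand in for real HTTP calls to hypothetical services; all names and shapes are made up for illustration.

```python
# API composition: cheap to build, but every report fans out N calls,
# which is why it stops scaling as services and volume grow.

def fetch_orders(user_id):
    # Stand-in for e.g. GET /orders?user=... on the SQL-backed order service.
    return [{"user_id": user_id, "total": 120},
            {"user_id": user_id, "total": 80}]

def fetch_profile(user_id):
    # Stand-in for e.g. GET /users/... on the NoSQL-backed profile service.
    return {"user_id": user_id, "name": "Ada"}

def spending_report(user_id):
    # The reporting service joins the responses in memory.
    profile = fetch_profile(user_id)
    orders = fetch_orders(user_id)
    return {"name": profile["name"],
            "order_count": len(orders),
            "total_spend": sum(o["total"] for o in orders)}

print(spending_report(42))
# {'name': 'Ada', 'order_count': 2, 'total_spend': 200}
```

The events → OLAP and CDC patterns replace the fan-out with an async pipeline into a dedicated read store, trading freshness for scale.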
IsakPar
IsakPar@IsakPar021·
A 500 means the backend failed to handle the request: a backend bug, not a client one. The API has to enforce the 5MB upload limit itself and return a 413 with a clear error. UI checks should be UX, not enforcement (curl/Postman bypass them). Do also add a pre-upload size check on the front end for better UX, but the 500 needs fixing server-side first.
0
0
4
2.7K
SumitM
SumitM@SumitM_X·
User sends 50MB file. Limit is 5MB. API throws exception : 500 Backend Lead says: "UI exceeded limit. They need to fix" What's your reply?
16
2
141
44.3K
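A sketch of the server-side enforcement being argued for, assuming a hypothetical handler that sees the Content-Length before reading the body: reject oversized uploads with a 413 and a clear error, so nothing downstream can throw the 500.

```python
MAX_UPLOAD_BYTES = 5 * 1024 * 1024  # the 5MB limit, enforced by the API itself

def handle_upload(content_length: int):
    # Reject before reading the body: a 50MB curl upload never reaches
    # the code path that used to throw the unhandled exception.
    if content_length > MAX_UPLOAD_BYTES:
        return 413, {"error": f"file exceeds {MAX_UPLOAD_BYTES} byte limit"}
    return 200, {"status": "stored"}

print(handle_upload(50 * 1024 * 1024))  # (413, clear error body)
print(handle_upload(1024))              # (200, stored)
```

The front-end size check then becomes pure UX on top of this, not the enforcement layer.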
IsakPar
IsakPar@IsakPar021·
Trick question… nothing is violated. BEGIN/COMMIT already gives you atomicity: a crash before commit gives you automatic rollback. "Wrap it in a transaction" would be wrong as well, because it already is one. The real risks: no balance CHECK (consistency), concurrent transfers (isolation), retries = double-spend.
0
0
20
2.3K
Captain-EO 👨🏾‍💻
What ACID property is being violated here, and what's the fix? BEGIN TRANSACTION; UPDATE accounts SET balance = balance - 500 WHERE id = 1; -- app crashes right here -- UPDATE accounts SET balance = balance + 500 WHERE id = 2; COMMIT;
19
4
97
30.5K
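The answer above can be demonstrated with sqlite3: simulate the crash between the two UPDATEs and atomicity rolls the debit back, while the CHECK constraint supplies the consistency guard the quoted snippet lacks. A toy sketch, not a production transfer.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE accounts (
    id INTEGER PRIMARY KEY,
    balance INTEGER CHECK (balance >= 0)  -- consistency guard the snippet lacks
)""")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 1000), (2, 0)])
conn.commit()

try:
    with conn:  # BEGIN ... COMMIT, with automatic rollback on exception
        conn.execute("UPDATE accounts SET balance = balance - 500 WHERE id = 1")
        raise RuntimeError("app crashes right here")  # simulated crash
        conn.execute("UPDATE accounts SET balance = balance + 500 WHERE id = 2")
except RuntimeError:
    pass

# Atomicity held: the debit was rolled back, no money vanished.
balances = dict(conn.execute("SELECT id, balance FROM accounts"))
print(balances)  # {1: 1000, 2: 0}
```

What the transaction does not buy you: isolation against a concurrent transfer on the same rows, or protection against blindly retrying a transfer that actually committed.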
IsakPar
IsakPar@IsakPar021·
@javarevisited `created_at` is a TIMESTAMP, not a DATE. `= '2026-01-01'` only matches rows at exactly 00:00:00 — you silently miss the rest of the day's orders. Fix: WHERE created_at >= '2026-01-01' AND created_at < '2026-01-02' Range scan is index-friendly.
3
0
14
7.7K
Javarevisited
Javarevisited@javarevisited·
Interviewer: What’s wrong with this query? SELECT * FROM orders WHERE created_at = '2026-01-01';
49
3
172
120.6K
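The same point, demonstrated with sqlite3. Caveat on the sketch: sqlite stores these timestamps as plain strings, so the equality check here matches nothing at all; in Postgres, '2026-01-01' would cast to midnight and match only the first row. Either way, the half-open range gets the whole day.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, created_at TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [
    (1, "2026-01-01 00:00:00"),  # midnight exactly
    (2, "2026-01-01 09:30:00"),  # silently missed by the equality check
    (3, "2026-01-02 00:00:00"),  # next day, must be excluded
])

eq = conn.execute(
    "SELECT id FROM orders WHERE created_at = '2026-01-01'").fetchall()
rng = conn.execute(
    "SELECT id FROM orders WHERE created_at >= '2026-01-01' "
    "AND created_at < '2026-01-02'").fetchall()

print(eq)   # [] -- the string '2026-01-01' equals no stored timestamp
print(rng)  # [(1,), (2,)] -- the whole day, and an index-friendly range scan
```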
IsakPar
IsakPar@IsakPar021·
Three primitives:
- fsync, so "ack" means durable
- CRC32, so you can spot torn writes
- sharded dirs, so you can scale past one directory's contention
Kafka's log, RocksDB's WAL, every serious queue has some flavour of this.
0
0
0
51
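The first two primitives in a minimal Python sketch: length + CRC32 framing for each record, fsync before acking, and a reader that truncates at the first torn or corrupt record. The file layout and function names are illustrative, not any particular queue's on-disk format.

```python
import os
import struct
import tempfile
import zlib

def append_record(path, payload: bytes):
    # Frame: 4-byte length + 4-byte CRC32 + payload. The CRC lets a
    # reader detect a torn (partially written) record after a crash.
    frame = struct.pack(">II", len(payload), zlib.crc32(payload)) + payload
    with open(path, "ab") as f:
        f.write(frame)
        f.flush()
        os.fsync(f.fileno())  # only now does "ok" mean durable

def read_records(path):
    out = []
    with open(path, "rb") as f:
        while header := f.read(8):
            if len(header) < 8:
                break  # torn header: stop at the last good record
            length, crc = struct.unpack(">II", header)
            payload = f.read(length)
            if len(payload) < length or zlib.crc32(payload) != crc:
                break  # torn or corrupt record: stop here
            out.append(payload)
    return out

path = os.path.join(tempfile.mkdtemp(), "queue.log")
append_record(path, b"msg-1")
append_record(path, b"msg-2")
print(read_records(path))  # [b'msg-1', b'msg-2']
```

On recovery, a real log would also truncate the file at the first bad frame before accepting new writes.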
IsakPar
IsakPar@IsakPar021·
Third: shard across directories. One directory = one fsync bottleneck + linear directory scans, and both kill you at scale. Hash the message ID and route to one of N subdirs; 256 is the sweet spot. Parallel writes, O(1) lookups, no hotspot.
1
1
0
76
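The routing step is a one-liner: hash the message ID and take one byte of the digest, which maps every ID stably to one of 256 subdirectories. Names here are illustrative.

```python
import hashlib

NUM_SHARDS = 256  # one byte of the digest addresses exactly 256 subdirs

def shard_dir(message_id: str) -> str:
    # Stable hash of the message ID -> one of 256 subdirs, so writes
    # (and their fsyncs) spread out and no directory scan goes linear.
    h = hashlib.sha256(message_id.encode()).digest()
    return f"shard-{h[0]:02x}"  # first digest byte: 0x00..0xff

# Same ID always lands in the same shard; different IDs spread out.
print(shard_dir("msg-000123"))
```

Lookup stays O(1): recompute the hash, open exactly one subdirectory.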
IsakPar
IsakPar@IsakPar021·
Systems design question and real-world example: build a queue that doesn't lose messages when the server crashes. Your code calls write(), gets back "ok", the queue confirms the message is persisted, then the server dies. Your message is probably gone. Here is how to build it crash-safe.
1
0
1
149