Abdelrahman Ali
@zboon101 · Data Scientist · Jordan · Joined September 2020
323 Following · 16 Followers · 97 posts
Abdelrahman Ali @zboon101·
We used to say "use Adam everywhere, it'll just work," and now we'll say "use Claude Code everywhere, it'll just work." Relevant work here includes GEPA and Karpathy's recent /autosearch.
Abdelrahman Ali @zboon101·
You don't really care which hyperparameters or architecture the agent used; you just get the final code, optimized for your specific metrics. It's another level of abstraction.
Abdelrahman Ali @zboon101·
The coding agent is becoming more than a code writer; it's becoming part of an optimization loop. Ask someone what algorithm they're using to optimize something and they'll just say it's a coding-agent optimizer that loops until a criterion is satisfied.
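The "loop until a criterion is satisfied" pattern can be sketched as a generic harness. Everything here is a toy stand-in: `propose` plays the role of the coding agent emitting a new candidate, and `evaluate` plays the role of your metric.

```python
def agent_optimize(propose, evaluate, target, max_iters=50):
    """Generic 'loop until the criterion is satisfied' optimizer:
    propose a candidate, score it, keep the best, stop at the target."""
    best = propose(None)
    best_score = evaluate(best)
    for _ in range(max_iters):
        if best_score >= target:
            break  # criterion satisfied
        candidate = propose(best)
        score = evaluate(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

# Toy stand-in for the agent: step upward until the metric hits its target.
best, score = agent_optimize(
    propose=lambda prev: 0 if prev is None else prev + 1,
    evaluate=lambda x: -abs(x - 3),   # metric peaks at x = 3
    target=0,
)
# best == 3, score == 0
```

A real coding-agent optimizer would replace `propose` with an LLM call that rewrites code given the previous best attempt and its score.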
Abdelrahman Ali @zboon101·
@matvelloso They are writing very, very clear skills and .md files; I hope they treat us the same way they treat LLMs
Mat Velloso @matvelloso·
The most impressive change AI has caused is that engineers are now writing detailed specs
Abdelrahman Ali @zboon101·
Reading this thread about RLMs, I get the feeling it's all about giving the LLM more freedom, heeding the bitter lesson: let the LLM build the algorithm instead of forcing it to follow our rules.
Harrison Chase@hwchase17

Good thread on whether RLMs are just coding agents or not. I am still in the camp that they are basically just coding agents with some distinct features (just like some coding agents have different subagent/swarm features). E.g. @lateinteraction, in response to your points, isn't that just a coding agent with: 1. files as input (can be done with UX; e.g. in Claude, when you post long snippets it makes it a file, at least in the UI) 2. just give the coding agent a CLI command that invokes itself/a subagent 3. with (2) done it can trivially save it to files. Omar, am I missing anything?

Abdelrahman Ali @zboon101·
@yazins You can build a mapping between Uthmani and MSA Quranic text. A two-pointer walk over each aya would build it. You may find that some words in MSA span multiple words in Uthmani, or vice versa, but those cases are few and can be handled manually.
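A minimal sketch of the idea, assuming a rough normalization (strip combining marks and tatweel, unify alef wasla with plain alef) rather than any official standard; real one-to-many word spans would land in the manual-review list:

```python
import unicodedata

def normalize(token: str) -> str:
    """Rough normalization sketch (not a full standard): NFKD-decompose,
    drop combining marks (harakat, superscript alef, hamza seats), drop
    tatweel, and unify alef wasla with plain alef."""
    decomposed = unicodedata.normalize("NFKD", token)
    stripped = "".join(c for c in decomposed if not unicodedata.combining(c))
    return stripped.replace("\u0640", "").replace("\u0671", "\u0627")

def build_mapping(uthmani_tokens, msa_tokens):
    """Two-pointer walk over one aya: pair tokens whose normalized forms
    match, and collect positions that need manual review (e.g. one word
    spanning several words in the other orthography)."""
    mapping, manual = [], []
    i = j = 0
    while i < len(uthmani_tokens) and j < len(msa_tokens):
        if normalize(uthmani_tokens[i]) == normalize(msa_tokens[j]):
            mapping.append((i, j))
        else:
            manual.append((i, j))  # span mismatch: resolve by hand
        i += 1
        j += 1
    return mapping, manual

mapping, manual = build_mapping(["بِسْمِ", "ٱللَّهِ"], ["بسم", "الله"])
# mapping == [(0, 0), (1, 1)], manual == []
```

Cases like الصلوة vs الصلاة (a spelling difference in the base letters, not just diacritics) would fall into `manual`, which matches the "few cases, handled manually" point above.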
yazin @yazins·
Been struggling with this, need help: comparing LLM Quran output to canonical text is harder than expected. The same verse can be written differently:
• الصلوة vs الصلاة (Uthmani vs modern)
• والنبيّن vs والنبيين (single vs double ya)
• يَسْـَٔلُونَكَ vs يسألونك (floating vs carrier hamza)
Is there a standard normalization approach for Arabic text comparison? Or a canonical mapping between Uthmani ↔ modern spellings?
yazin@yazins

Now you can test your favorite model to see how well it does (or doesn't) quote the Quran. Link 👉 quranvalidator.com

Abdelrahman Ali @zboon101·
People are showing more respect to LLMs than to humans; they're writing skills.md files clearer than their blogs and tutorials.
Abdelrahman Ali @zboon101·
Nowadays, I can't say that I'm skilled at debugging and fixing issues, or at solving complex problems with advanced algorithms; I used to have fun and feel the challenge. Now it's more about building than developing, about ideas rather than implementation: fun in some aspects, boring in others.
Andrej Karpathy@karpathy

A few random notes from Claude coding quite a bit over the last few weeks.

Coding workflow. Given the latest lift in LLM coding capability, like many others I rapidly went from about 80% manual+autocomplete coding and 20% agents in November to 80% agent coding and 20% edits+touchups in December. I.e. I really am mostly programming in English now, a bit sheepishly telling the LLM what code to write... in words. It hurts the ego a bit, but the power to operate over software in large "code actions" is just too net useful, especially once you adapt to it, configure it, learn to use it, and wrap your head around what it can and cannot do. This is easily the biggest change to my basic coding workflow in ~2 decades of programming, and it happened over the course of a few weeks. I'd expect something similar to be happening to well into double-digit percent of engineers out there, while awareness of it in the general population feels well into low single-digit percent.

IDEs / agent swarms / fallibility. Both the "no need for IDE anymore" hype and the "agent swarm" hype are imo too much for right now. The models definitely still make mistakes, and if you have any code you actually care about I would watch them like a hawk, in a nice large IDE on the side. The mistakes have changed a lot: they are not simple syntax errors anymore, they are subtle conceptual errors that a slightly sloppy, hasty junior dev might make. The most common category is that the models make wrong assumptions on your behalf and just run along with them without checking. They also don't manage their confusion: they don't seek clarifications, they don't surface inconsistencies, they don't present tradeoffs, they don't push back when they should, and they are still a little too sycophantic. Things get better in plan mode, but there is some need for a lightweight inline plan mode. They also really like to overcomplicate code and APIs: they bloat abstractions, they don't clean up dead code after themselves, etc. They will implement an inefficient, bloated, brittle construction over 1000 lines of code, and it's up to you to be like "umm, couldn't you just do this instead?" and they will be like "of course!" and immediately cut it down to 100 lines. They still sometimes change/remove comments and code they don't like or don't sufficiently understand as side effects, even if it is orthogonal to the task at hand. All of this happens despite a few simple attempts to fix it via instructions in CLAUDE.md. Despite all these issues, it is still a net huge improvement, and it's very difficult to imagine going back to manual coding. TLDR: everyone has their developing flow; my current one is a few CC sessions on the left in ghostty windows/tabs and an IDE on the right for viewing the code + manual edits.

Tenacity. It's so interesting to watch an agent relentlessly work at something. They never get tired, they never get demoralized, they just keep going and trying things where a person would have given up long ago to fight another day. It's a "feel the AGI" moment to watch it struggle with something for a long time just to come out victorious 30 minutes later. You realize that stamina is a core bottleneck to work and that with LLMs in hand it has been dramatically increased.

Speedups. It's not clear how to measure the "speedup" of LLM assistance. Certainly I feel net way faster at what I was going to do, but the main effect is that I do a lot more than I was going to do, because 1) I can code up all kinds of things that just wouldn't have been worth coding before, and 2) I can approach code that I couldn't work on before because of a knowledge/skill gap. So certainly it's a speedup, but it's possibly a lot more an expansion.

Leverage. LLMs are exceptionally good at looping until they meet specific goals, and this is where most of the "feel the AGI" magic is to be found. Don't tell it what to do; give it success criteria and watch it go. Get it to write tests first and then pass them. Put it in the loop with a browser MCP. Write the naive algorithm that is very likely correct first, then ask it to optimize it while preserving correctness. Change your approach from imperative to declarative to get the agents looping longer and gain leverage.

Fun. I didn't anticipate that with agents programming feels *more* fun, because a lot of the fill-in-the-blanks drudgery is removed and what remains is the creative part. I also feel less blocked/stuck (which is not fun), and I experience a lot more courage because there's almost always a way to work hand in hand with it to make some positive progress. I have seen the opposite sentiment from other people too; LLM coding will split up engineers based on those who primarily liked coding and those who primarily liked building.

Atrophy. I've already noticed that I am slowly starting to atrophy in my ability to write code manually. Generation (writing code) and discrimination (reading code) are different capabilities in the brain. Largely due to all the little, mostly syntactic details involved in programming, you can review code just fine even if you struggle to write it.

Slopacolypse. I am bracing for 2026 as the year of the slopacolypse across all of github, substack, arxiv, X/instagram, and generally all digital media. We're also going to see a lot more AI hype productivity theater (is that even possible?), alongside actual, real improvements.

Questions. A few of the questions on my mind:
- What happens to the "10X engineer", the ratio of productivity between the mean and the max engineer? It's quite possible that this grows *a lot*.
- Armed with LLMs, do generalists increasingly outperform specialists? LLMs are a lot better at fill in the blanks (the micro) than grand strategy (the macro).
- What does LLM coding feel like in the future? Is it like playing StarCraft? Playing Factorio? Playing music?
- How much of society is bottlenecked by digital knowledge work?

TLDR: Where does this leave us? LLM agent capabilities (Claude & Codex especially) have crossed some kind of threshold of coherence around December 2025 and caused a phase shift in software engineering and closely related fields. The intelligence part suddenly feels quite a bit ahead of all the rest of it: integrations (tools, knowledge), the necessity for new organizational workflows, processes, diffusion more generally. 2026 is going to be a high-energy year as the industry metabolizes the new capability.
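The "give it success criteria and watch it go" loop can be sketched as a tiny declarative harness. The agent and test runner below are toy stand-ins, not a real Claude/Codex integration:

```python
def loop_until_green(agent_step, run_tests, max_iters=10):
    """Declarative harness: the success criterion is 'tests pass'.
    agent_step(feedback) revises the code; run_tests() -> (ok, feedback)."""
    ok, feedback = run_tests()
    for _ in range(max_iters):
        if ok:
            break
        agent_step(feedback)
        ok, feedback = run_tests()
    return ok

# Toy stand-in: a buggy first attempt that the "agent" fixes on failure.
state = {"double": lambda x: x + 1}

def run_tests():
    ok = state["double"](21) == 42
    return ok, None if ok else "double(21) != 42"

def agent_step(feedback):
    # A real agent would read the failure and edit files; here we just patch.
    state["double"] = lambda x: 2 * x

passed = loop_until_green(agent_step, run_tests)
# passed == True
```

In practice `run_tests` would shell out to a real test suite and `agent_step` would be an agent invocation fed the failure output; the harness itself stays this small.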

Abdelrahman Ali retweeted
Ibraheem Tuffaha 🥛 @IbraheemTuffaha·
We just raised our seed round 🥛

I met @jawad_shreim & @anas_y_abdallah working on a small project and we clicked instantly. Months later we went our separate ways, but we all knew we had to build something together. Late 2023, we came back together and started @MilkStrawAI: optimize your AWS cloud in 10 minutes, save up to 50%.

My AI background helped us build the recommender engine that powers our optimization. But I also taught myself Ruby on Rails and shipped our first version with the team in early 2024. As CPO, I became the glue: translating between our business vision and technical execution, making sure we build what actually matters.

We raised $600k pre-seed in 2024 to scale. And as we worked with more clients, we learned something: cloud isn't just expensive, it's hard to see. So we built observability into MilkStraw. Clients now manage all regions and resources from one tab. See more, save more. Optimization is polished. Now we're expanding.

We also brought on incredible talent to help us get there:
→ @alihfadel, Founding Engineer
→ @zboon101, AI Engineer
→ @abderizik, Design Engineer
→ @Gyamm994, Founder Associate

With this round, we're making MilkStraw the place you go to manage your cloud. Grateful to our investors, our customers, and to this team for believing in what we're building. Let's milk the cloud ☁️🥛
Abdelrahman Ali retweeted
أحمد السيد @ahmadyusufals·
May God accept the masked hero among the martyrs, elevate his rank, and recompense the nation with good. God is the one whose help is sought and in whom we place our trust, and there is no power or strength except through Him.
Abdelrahman Ali retweeted
Jawad🥛 @jawad_shreim·
We're going to make a big announcement soon! We're on a mission to help startups save up to 75% on AWS and change the way they interact with their infra, without the engineering overhead or account-control tradeoffs. And it's working 🚀
Abdelrahman Ali retweeted
sanket patel @realsanketp·
Blogged: non-obvious things I learned about GEPA.
Abdelrahman Ali @zboon101·
@brankopetric00 I recommend using @MilkStrawAI to enjoy RI savings without the 3-year commitment; it's the best solution for out-of-the-box AWS cost optimization.
Branko @brankopetric00·
Startup paying $12k/month for AWS. Raised Series A. Immediately hired a cloud architect. First recommendation: move everything to reserved instances. Projected savings: $4k/month.

We reviewed the plan:
- Committed to 3 years
- Locked into specific instance types
- Company pivoting every 6 months
- Infrastructure changed 4 times in last year

Six months later:
- Company pivoted to AI workloads
- Needed GPU instances
- Reserved instances: useless
- Would have lost $35k in sunk costs

We stayed on-demand. Paid the premium. Saved the flexibility. Reserved instances are a trap for startups. Pay the on-demand tax until you're boring. Then optimize.
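A back-of-the-envelope way to sanity-check such a commitment, using the figures from the tweet ($12k/month on demand, $4k/month projected savings, 3-year term, pivot at month 6). The model is a deliberate simplification: it assumes the whole bill is committed and the remaining committed months become pure sunk cost after the pivot, so its loss comes out far larger than the ~$35k Branko cites (his bill was presumably only partly reserved).

```python
def ri_outcome(on_demand_monthly, ri_savings_monthly, term_months, pivot_month):
    """Net effect of a reserved-instance commitment vs staying on demand,
    assuming the RIs become useless at pivot_month (simplified: the
    remaining committed months are pure sunk cost)."""
    saved = ri_savings_monthly * min(pivot_month, term_months)
    ri_monthly = on_demand_monthly - ri_savings_monthly
    sunk = ri_monthly * max(term_months - pivot_month, 0)
    return saved - sunk  # positive means the commitment paid off

# Tweet's scenario: pivot after 6 months of a 36-month term.
net = ri_outcome(12_000, 4_000, 36, 6)
# net == 4_000*6 - 8_000*30 == -216000

# Break-even: earliest pivot month at which the commitment is not a loss.
breakeven = next(m for m in range(1, 37) if ri_outcome(12_000, 4_000, 36, m) >= 0)
# breakeven == 24
```

Under these assumptions you'd need your architecture to survive unchanged for 24 of the 36 months just to break even, which is the tweet's point about companies that pivot every 6 months.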
Abdelrahman Ali @zboon101·
Interestingly, people are searching for definitions of fundamental human terms: sentience, intelligence, consciousness, learning, etc., even though those terms were investigated centuries ago. And as we dig deeper, we realize the greatness of our creation even more.
Abdelrahman Ali @zboon101·
We've realized that engineers perform complex tasks beyond coding and that the coding part is becoming easier, though not as easy as it might seem. This applies to all other professions too. So before deciding whether AI will replace X, we need to define what X actually does.
Abdelrahman Ali @zboon101·
One of the things I like about the talks surrounding LLMs and AGI is this movement of redefinition. We are trying to redefine our jobs and responsibilities. What are the responsibilities of a software engineer that an LLM can't do?