Update on the project comparison: I built a "timekeeping" app in simultaneous one-shot sessions with Claude Code Opus4.6 and GPT5.4. I hope this is helpful. A clear winner emerged almost immediately. Give feedback and let me know if I left anything out. I gave both models the exact same instructions with identical plans (created by Grok for fairness). Here's what happened:
-Both finished the first pass in about the same time, within a few minutes of each other, roughly 40 min total. GPT produced its output in stages, so a little more time (about 20 min) was required; notes below.
-GPT produced an "initial MVP" purely for viewing, and did this several times. Rather than delivering everything at once, GPT paused for confirmation at several steps. I like this approach: catch errors in stages. It took extra time, but it was welcome.
-Claude's first pass was more complete, but not everything worked; refactoring was required on most of the windows, especially for errors involving CRUD operations. I had to request refactoring several times and am still trying to resolve the CRUD issues. Not resolved.
-I liked the frontend GPT5.4 generated; it's very pleasing to use.
-Claude had several parts that did not include add, edit, or delete options. I could not add teams or clients. After the "Add" buttons were included, they did not work.
-I tried to enter a new client in Claude, but it would not add. It froze, and then the record did not save; same for projects.
-GPT5.4 is building everything in steps and requests checks as we go along; I kind of like this feature. So far everything is working, but the next phase is the CRUD, which is where problems are usually found. Essentially no errors.
Summary with winners (sorry, not in a table):
Frontend design: GPT
Fully functioning: GPT (after stepping through the phased output); everything worked with no issues.
CRUD: GPT. Claude is still not ready; it has taken several passes and still does not work. I'm not going to keep trying.
Fields included: GPT was much more thorough, including all the fields specified in the design plan. Claude took the minimal approach and left fields out.
Followed the design: GPT. I'm still checking this, but it appears every design element was followed. Claude omitted many items.
Code review: Tie. This was done at a high level, but the code from both seemed cogent without strange patterns; linting was self-checked by both, and no problems were found (not the best analysis, but fine for purposes here).
Token use: Claude 45k; GPT recorded zero tokens because tracking has to be configured up front and I didn't do that. Apologies on this; as mentioned, I have the max subscriptions on everything, so I usually don't pay attention to the costs unless I hit a wall, which rarely happens.
Winner analysis
Here's my assessment. GPT5.4 clearly won this one. It followed the design, gave steps for checking, and after adding the CRUD operations everything worked without error. Plus the frontend design is so pleasing and easy to work with. I would actually enjoy working in the GPT app and might end up using it internally, though I'm not sure about making it a product offering. I'm a huge Claude fan in general and use it for many things, but Claude Code (CLI) needs work. I think Opus 4.6 is smart enough, but for some reason it does not produce rock-solid results out of the gate. This has happened with several projects.
Screenshots:
Claude's first pass (the plain-looking one) and GPT's first pass (the fancier-looking window), followed by GPT's project entry window and then Claude's project entry window. I'm not sure how the images will show, but GPT's windows have much more verbiage and the very light tan background. Claude's images are black and white.



