Fabio Angela
@FabioAngela79
2.4K posts

Fabio Angela banner

L'arroganza umilia anche quando hai ragione, l'umiltà esalta anche quando hai torto. (Arrogance humiliates even when you are right; humility exalts even when you are wrong.) Longtime developer, so many ideas, so much passion, too much maybe!

Italia · Joined May 2015
186 Following · 68 Followers

Pinned Tweet
Fabio Angela
Fabio Angela@FabioAngela79·
"Generate a demoscene 64k video using any tools of your choice, such as ffmpeg, external utilities, or any libraries, you name it. The output should feature MOD-style tracker music with sampled sounds." Codex 5.4 on my custom agent infra (no mcp used) @josephdviviano @tibo let's play
2
0
1
193
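For anyone curious what the MOD-style audio half of such a challenge involves: tracker music is built from short sampled waveforms sequenced into patterns. Here is a minimal sketch, not from my actual harness (the file name, note pattern, and tune are invented for illustration), that synthesizes a square-wave "chip" pattern with the Python stdlib; ffmpeg could then mux the result with rendered frames:

```python
import math
import struct
import wave

RATE = 44100  # sample rate in Hz

def square_sample(freq_hz: float, seconds: float, volume: float = 0.4) -> bytes:
    """Render a square wave (the classic tracker 'chip' timbre) as 16-bit mono PCM."""
    frames = bytearray()
    for n in range(int(RATE * seconds)):
        t = n / RATE
        # The sign of a sine gives a square wave at the same frequency.
        value = volume if math.sin(2 * math.pi * freq_hz * t) >= 0 else -volume
        frames += struct.pack("<h", int(value * 32767))
    return bytes(frames)

# A tiny "pattern": note frequencies played back to back, tracker style.
pattern = [440.0, 330.0, 440.0, 550.0]  # hypothetical four-note loop

with wave.open("chiptune.wav", "wb") as out:
    out.setnchannels(1)
    out.setsampwidth(2)   # 16-bit samples
    out.setframerate(RATE)
    for freq in pattern:
        out.writeframes(square_sample(freq, 0.25))  # quarter-second per note
```

The resulting WAV could then be combined with generated frames via something like `ffmpeg -framerate 30 -i frame_%04d.png -i chiptune.wav out.mp4` (an illustrative invocation; the frame files are hypothetical).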
Fabio Angela
Fabio Angela@FabioAngela79·
Since they keep talking about AI as a commodity, the price can't go up. I think over time we should see it get way lower, because of tech-stack improvements, breakthroughs, etc. If we see it increase instead, it would just mean "intelligence capture", and then we are all going to be fckd
0
0
0
15
Tyler
Tyler@rezoundous·
What are the chances OpenAI reduces Codex usage limits in May?
93
0
243
33.3K
arb8020
arb8020@arb8020·
gpt-5.5 prompt for codex seems to have a duplicated line trying to get it to not talk about creatures? "Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant to the user's query. [...] Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant to the user's query" gh link: github.com/openai/codex/b… (#L55)
184
161
3K
944.7K
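Duplication like this is easy to catch mechanically before a prompt ships. A small stdlib sketch (the sample prompt below is abridged for illustration, not the real Codex prompt):

```python
from collections import Counter

def find_duplicate_lines(text: str, min_len: int = 20) -> list[str]:
    """Return non-trivial lines that appear more than once, in first-seen order."""
    counts = Counter(line.strip() for line in text.splitlines())
    seen = set()
    dupes = []
    for line in text.splitlines():
        line = line.strip()
        if len(line) >= min_len and counts[line] > 1 and line not in seen:
            seen.add(line)
            dupes.append(line)
    return dupes

prompt = """You are a coding agent.
Never talk about goblins, gremlins, or other creatures unless relevant.
Be concise.
Never talk about goblins, gremlins, or other creatures unless relevant.
"""
print(find_duplicate_lines(prompt))
```

The `min_len` threshold skips short boilerplate lines (blank lines, "Be concise.") that legitimately repeat.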
0xSero
0xSero@0xSero·
I'm convinced that reasoning was a hacky, stupid solution that was mistakenly baked deep into all our tools. It's more expensive, and it's rarely beneficial outside of very novel problems, which, despite what we claim, are rarely the bulk of what we do. Now taking it out, yikes
0xSero@0xSero

OAI has to be prioritising low/no thinking GPT-5.5 inference, it's so fast. I think this is better than high thinking, so much less annoying, so much more token efficient.

55
5
258
24.8K
Fabio Angela
Fabio Angela@FabioAngela79·
@stevibe github is microsoft, they literally "own" the models (chatgpt at least).
0
0
0
70
stevibe
stevibe@stevibe·
Copilot is going credit-based June 1. Pay per token, just like the API. Anyone reselling someone else's model was always going to end up here. You can't subsidize inference forever when you don't own the model. This is why local LLMs matter. Not because they're frontier — they're not. But you actually own the intelligence. Fixed cost.
stevibe tweet media
35
15
315
31K
GitHub
GitHub@github·
Starting June 1st, GitHub Copilot will move to a usage-based billing model as GitHub Copilot supports more agentic and advanced workflows. In early May, you'll see a preview bill experience, giving visibility into projected costs before the transition. 👉 Read more about the upcoming change: github.blog/news-insights/…
475
895
2.8K
3.4M
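The "fixed cost" argument comes down to simple break-even arithmetic. A sketch with entirely hypothetical prices (hardware, electricity, and API rates vary a lot; plug in your own numbers):

```python
# All figures are hypothetical, for illustration only.
gpu_cost_eur = 1500.0          # one-off: a used 3090-class card
power_eur_per_month = 25.0     # electricity for moderate daily use
api_eur_per_mtok = 2.0         # blended API price per million tokens
tokens_per_month = 50_000_000  # a heavy agentic-coding workload

api_monthly = api_eur_per_mtok * tokens_per_month / 1_000_000
months_to_break_even = gpu_cost_eur / (api_monthly - power_eur_per_month)

print(f"API: {api_monthly:.0f} EUR/month vs local power: {power_eur_per_month:.0f} EUR/month")
print(f"Hardware pays for itself after ~{months_to_break_even:.1f} months")
```

With these made-up numbers the card pays for itself in under two years; the point is only that local cost is flat in token volume while API cost scales linearly with it.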
Brett Stuart
Brett Stuart@bstuartTI·
This is NULL 📽️🍿 A 15-minute short film. 250,000 Runway credits. Created in 4 days. The all-nighters nearly broke me, but this is the thing I’m most proud of making in film or AI. Thanks for hosting AIF & CPP @c_valenzuelab Thanks for amazing anime inspo's @PsyopAnime
198
183
1.5K
62.4K
Juventus News Live
Juventus News Live@juvenewslive·
‼️‼️ | The UEFA president Ceferin is ready to exclude Italian teams from European competitions and strip Italy of Euro 2032 hosting rights if the FIGC is placed under administration following the resignation of his deputy Gravina. [@corriere via @mirkonicolino]
28
23
276
55.8K
Fabio Angela
Fabio Angela@FabioAngela79·
@thsottiaux Let's cheer this news with a reset! I'm under 10% and my reset is in 48h
0
0
1
21
Tibo
Tibo@thsottiaux·
Looking at the traffic dashboard for Codex just now, it would be scary if we didn't have a lot more compute coming online in the coming weeks. All according to plan fortunately.
251
99
4.9K
184.8K
Fabio Angela
Fabio Angela@FabioAngela79·
@antirez Do you find it that different from qwen3.6 27b dense?
0
0
2
1.4K
antirez
antirez@antirez·
DeepSeek v4 Flash with *local inference* after 24h of playing with it: even with the 2-bit selective quantization GGUF, it is the FIRST time I feel I have a frontier model running on my computer. This is *crazy*, and probably a much stronger change in the landscape than PRO.
46
105
1.8K
117.2K
Fabio Angela
Fabio Angela@FabioAngela79·
@thsottiaux oh, an easy one: in the Codex app (Windows), when you show the generated image, there is no way to zoom it. On my 4K monitor it's so small that I always have to open it in an image editor to see it properly
0
0
4
169
Tibo
Tibo@thsottiaux·
It's the little things that matter. What are some small papercuts you have noticed in Codex? We'll fix as many as possible in the next week.
2K
57
2.3K
258.6K
Fabio Angela
Fabio Angela@FabioAngela79·
@TheAhmadOsman I've had just one 3090 Ti for 5 years, but I want to host image gen, TTS, and ASR too, all together
1
0
0
126
Ahmad
Ahmad@TheAhmadOsman·
Qwen 3.6 27B is still the release of 2026 for me despite everything else that has come out. Pair it with a couple of RTX 3090s and you're set even if they banned AI everywhere
53
38
713
58.3K
Fabio Angela
Fabio Angela@FabioAngela79·
@TheAhmadOsman Yeah, I'm in normal speed now on my Pro account otherwise I won't make it :(
Fabio Angela tweet media
0
0
0
6
Fabio Angela
Fabio Angela@FabioAngela79·
I see a lot of you OpenAI guys asking for it (as a long-time fullstack dev I know the power of feedback), and since I'm pretty sure you guys enjoy vibe-enjoying between agent sessions, why don't you add a companion in the Codex app? I did something similar for my custom agent harness so that I can tweak the orchestrator itself by talking. It's the future x.com/FabioAngela79/…
0
0
0
25
Fabio Angela
Fabio Angela@FabioAngela79·
Hey @thsottiaux, since you're so eager to get feedback on Codex so you can fix it, why not add an agent within the Codex app that we can use to send you feedback, one that can reply back with advice if the feedback is something that can already be fixed? (Just don't use up our tokens, eh!) I suppose this is something we'll see in many apps as soon as API prices drop (2 orders of magnitude :D)
0
0
0
5
Fabio Angela
Fabio Angela@FabioAngela79·
Good, Antigravity had a huge lead on this but they threw it away with the way they "package the tools" to be easy to buy... btw, in your implementation it's not clear what happens when you have it hidden/resized/minimized. What's its behavior? Does it internally resize the view to proper sizes to test it?
0
0
0
99
James Sun
James Sun@JamesZmSun·
Today, we launched browser use inside Codex to further close the build & verify loop for local development! Now, you can ask Codex to build your front end and test it like a user would by clicking through the app. Codex sees everything a user sees through vision & checks the network/console logs to help debug & fix any issues that it finds. This change brings us closer to fully autonomous coding agents that deliver high-quality, tested changes. Watch Codex test my app in the browser, catch & fix a real bug, and then do that loop again with a brand-new feature.
192
234
3.1K
216.7K
Fabio Angela
Fabio Angela@FabioAngela79·
@Isnfndndxn @JamesZmSun I second this. Also, it's not clear what happens when you have it hidden/resized/minimized. What's its behavior? Does it internally resize the view to proper sizes to test it?
0
0
0
17
Dannyd
Dannyd@Isnfndndxn·
@JamesZmSun Definitely prefer this over other MCPs. One other slight enhancement: viewports. Often the fixes are for mobile or desktop, so an easy toggle would make the annotate feature even easier. Thanks James!
1
0
4
212
Fabio Angela
Fabio Angela@FabioAngela79·
@b1a_iwnl @xenovacom where do you see a slower decode?! btw, from the video it's clearly prefill that got the huge boost, while decode is ~15% better
0
0
0
53
b1a
b1a@b1a_iwnl·
@xenovacom Isn't it only the prefill that's sped up, while decode is slower?
1
0
0
1.5K
Xenova
Xenova@xenovacom·
Opus 4.7 just wrote a custom WebGPU kernel that runs Qwen3.5 up to 13x faster using a fused LinearAttention op! 🤯 Agentic kernel optimization is the future. Now live in 🤗 Transformers.js v4.2.0! P.S. I've updated all our previous demos to use this new version. Enjoy!
35
68
794
77.1K
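Why a huge prefill speedup can coexist with only a modest end-to-end win: prefill dominates time-to-first-token, while decode dominates long generations. A worked example with hypothetical timings (not measurements from the demo):

```python
# Hypothetical baseline timings for one long request.
prefill_s = 2.6   # processing the prompt (compute-bound, parallel)
decode_s = 10.0   # generating the answer token by token

fast_prefill = prefill_s / 13.0   # 13x prefill speedup, as claimed
fast_decode = decode_s / 1.15     # ~15% decode improvement

before = prefill_s + decode_s
after = fast_prefill + fast_decode

print(f"time-to-first-token: {prefill_s:.2f}s -> {fast_prefill:.2f}s")
print(f"end-to-end latency:  {before:.2f}s -> {after:.2f}s")
```

With these numbers, time-to-first-token drops by over 90% while total latency improves far less, because decode remains the bulk of the work; a short-generation workload would see the opposite balance.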
Fabio Angela
Fabio Angela@FabioAngela79·
btw, don't you think we'll be flooded by a lot of low-quality software (not just frontend) in the short term, before we hopefully get to a better place? Humans are actually the bottleneck in agentic coding, because if you have to review all the changes, you are slowing the agent down. This means most people go YOLO on dev, and now even business owners, hyped by AI that "can do anything", are pushing toward agentic coding with low guardrails (which means going into production with software you have no control over). How can this go? In the company I'm following, where I'm leading a group of people, I'm pushing AI hard for internal tools, prototypes, brainstorming, devops scripts, documentation, etc., but for production code I require PRs and that people understand every LOC they are trying to push. I don't think many are doing that, because it feels like "slowing down". Given how fast agents are improving, I don't think this phase will last long anyway, but the short-term risk is losing control of your product
0
0
0
53
antirez
antirez@antirez·
@FabioAngela79 This line of thought assumes you can't write any prompt to tell the agent how to do things.
3
0
5
3.3K
antirez
antirez@antirez·
My POV on front-end of 2026
antirez tweet media
55
50
750
254K
Fabio Angela
Fabio Angela@FabioAngela79·
Well, kinda yes, that's one of the problems. If your argument is generic, you should consider who the average person using the code is. With AI everybody feels like a coder, so they take whatever the model spits out. And the people you now see who aren't proper "old school" developers would hardly be able to steer the model to produce code outside their own expertise. You need to know what to ask for in order to steer the model, so how can you expect the LLM to fix that when its default output is bloated apps, single-file god objects, and so forth? Agents working on proper, structured code can give better results because they can follow the good practices you've implemented in your code base, while for new projects you have to be stricter with specifications. Do you expect the average user will do that?
0
0
1
344
Fabio Angela
Fabio Angela@FabioAngela79·
(Translated from Italian.) I went down the time tunnel looking for one of my first articles... haha, what a horrifying explanation back then! web.archive.org/web/2003091716… Anyway, it was exactly with projects like my website that I tested myself and improved, and even today, after... damn, 26 years... I keep creating and studying... What times... the site had reached 10k subscribers back when there was nothing in Italy, and shortly after, sites like HTML.it started to take off... I also remember a community back then that I struggle to recognize today; I even remember a couple of dinners in Milan with webmasters of similar sites, mamma mia... truly a lifetime ago... For me it was never about business like it was for the other sites that later succeeded (in fact, when the Swiss host I was using went belly-up, I discovered the true importance of keeping backups at home...); for me it was a gym, and the gratitude in the thank-you messages was enough, as were the supportive emails when, at 23, I was diagnosed with Hodgkin lymphoma and received moral support during treatment while keeping the site and everything around it running. Damn... I've led myself into a memory tunnel, completely unprompted... sorry, back to writing code :)
0
0
1
49
Fabio Angela
Fabio Angela@FabioAngela79·
I don't share your hope that LLMs will fix the problem of bloated web apps, because they are trained mostly on exactly that kind of bloated web app, so you can't expect them to escape it. On the contrary, I'd expect code quality to decrease if the big labs keep scraping the web (basically feeding on their own "crap"). I think at some point we'll see a new programming language or framework designed for agent use, and then, maybe, things could heal. P.S. I'm almost your age ('79, and Italian of course) and I understand what you mean. I'm a full-stack dev, I never stop learning anything I find interesting and that fits my "pleasure"; I consider programming an art and the list of things you mentioned as tools. If you remember a website from the 2000s for Italian devs, redangel.it, it was mine; here's the first snapshot on Web Archive of the first version of the site web.archive.org/web/2001020105… (20060102013228 for one of the second versions). Damn, I'm getting emotional thinking about the old times lol, another life. I watched one of your interviews on YouTube in Italian and found many connections with the way I live dev, and not only that; I thought it would have been nice to have a talk, even if I prefer spending time coding to talking with people lol, but who knows. I can be more or less attached to some tech (e.g. I love C#) and I stay out of the "religion wars" (Linux vs Windows, IDE vs VIM/terminal, etc.). I know a lot of things in the field, but the most important thing is realizing that the more you know, the less you know. Now with LLMs I've kind of lost some of the pleasure I had in coding (because agent coding makes you code way, way less and be more of an "agent manager"), but at the same time it's like I'm on a constant dopamine rush. It's weird to describe, but I bet many are in the same boat. Ciao!
3
0
5
3.9K