hoy

67 posts

@Yohuyt

Joined April 2022
24 Following · 3 Followers
hoy@Yohuyt·
@DFintelligence Anis, please tell us you're ragebaiting... "Absolutely every scene has images modified with AI in terms of retouching, color grading, etc." No: I work in the industry and that's just false, it's actually the opposite.
0 replies · 0 reposts · 0 likes · 19 views
Defend Intelligence (Anis Ayari)
I think it will be unenforceable. Because where does AI begin and end? Does that mean Chris Evans could never have received an Oscar for his role as Captain America, because his head was composited onto another actor's body with visual effects that use AI? Absolutely every VFX solution in film currently uses AI. Absolutely every scene has images modified with AI in terms of retouching, color grading, etc. So, as usual, where do we draw the line? Same for screenplays: does translating or auto-correcting with AI count?
Agence France-Presse@afpfr

🎬 Actors and screenplays generated by artificial intelligence will not be eligible for the Oscars, the Academy announced Friday in new rules ⤵️

29 replies · 2 reposts · 106 likes · 30.9K views
hoy@Yohuyt·
@HuguUu_FR @DFintelligence I'm almost certain his post is ragebait to get good stats on X; imagining that generative AI is currently used by the big VFX studios is absurd
0 replies · 0 reposts · 0 likes · 11 views
HugUuUu@HuguUu_FR·
@DFintelligence "Absolutely every solution on the planet uses AI"? I work at the biggest VFX company on the planet and you're talking nonsense.
1 reply · 0 reposts · 3 likes · 124 views
hoy@Yohuyt·
@aescripts Monetizing open-source projects without clearly contributing back or adding real improvements is a bad look. You are just selling repackaged open-source work without clear added value @CorridorDigital
1 reply · 0 reposts · 29 likes · 776 views
hoy@Yohuyt·
@reach_vb Reached the limit as well, as a $200 Pro user... "no excuses"
0 replies · 0 reposts · 2 likes · 1K views
Vaibhav (VB) Srivastav@reach_vb·
ICYMI: ChatGPT Pro gets 2x Codex rate limits, 24/7 through May 31. No excuses. Just build.
58 replies · 29 reposts · 976 likes · 76.7K views
Eric@ericmitchellai·
If you are experiencing any response quality issues with GPT-5.4 Pro, please let me know here! Have seen some chatter but no concrete issues
108 replies · 19 reposts · 489 likes · 181.6K views
hoy@Yohuyt·
@thsottiaux You already did the reset 1 week ago...
0 replies · 0 reposts · 0 likes · 1.2K views
Tibo@thsottiaux·
Hi! To celebrate its 1-year anniversary, I have allowed Codex to reset its own rate limits across all plans. Enjoy all the new features.
420 replies · 153 reposts · 4.4K likes · 286.6K views
hoy@Yohuyt·
@joshbuildings @thsottiaux Good question, I don't know more, but it sounds like they're targeting businesses, their biggest margin
0 replies · 0 reposts · 0 likes · 116 views
Tibo@thsottiaux·
I realize yesterday’s Codex reset came at a bit of an unfortunate time, given the last one was almost exactly a week ago. To really celebrate the 3M I’ll reset again tomorrow. Thanks for the feedback!
643 replies · 298 reposts · 6.6K likes · 560.1K views
Josh@joshbuildings·
@thsottiaux Do resets not apply to the business plan?
3 replies · 0 reposts · 7 likes · 6.3K views
hoy@Yohuyt·
@PawelHuryn M4 Max 128 GB, takes more than a minute for a simple "hi" in my case
0 replies · 0 reposts · 0 likes · 18 views
Paweł Huryn@PawelHuryn·
@Yohuyt Yes. Prefill is slow. Claude Code adds 30-40K tokens to each request (system prompt, MCPs). What's your HW? In my tests it's slow on M4 16 GB, faster on my laptop (32 GB RAM, 8 GB GPU).
1 reply · 0 reposts · 0 likes · 123 views
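The slowdown discussed above is mostly prefill arithmetic. A minimal sketch: the ~30-40K token overhead is from the tweet, but the prefill throughput figure is a hypothetical assumption, not a measured number for any of the machines mentioned.

```python
# Why a simple "hi" can take over a minute locally:
# Claude Code prepends ~30-40K tokens (system prompt, MCPs) to each request,
# so the model must prefill all of that before generating anything.

def prefill_seconds(prompt_tokens: int, prefill_tok_per_s: float) -> float:
    """Time spent prefilling the prompt before the first output token."""
    return prompt_tokens / prefill_tok_per_s

# Assumed numbers: 35K-token prompt, 500 tok/s local prefill (hypothetical).
print(prefill_seconds(35_000, 500))  # ~70 seconds before any output
```

At an assumed 500 tok/s, a 35K-token prefix alone accounts for roughly a minute of latency, which matches the "more than a minute for a simple 'hi'" report above.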
Paweł Huryn@PawelHuryn·
Gemma 4 has function calling built in. Good luck actually using it. I tested it with Claude Code yesterday. It worked as chat only — couldn't retrieve files, couldn't call tools. Just a conversation model in an agentic shell. Ollama: tool parser still broken as of v0.20.1. mlx-lm: doesn't parse the format at all. llama.cpp: just merged the fix.
Min Choi@minchoi

Less than 48 hours ago, Google dropped Gemma 4. Minds are blown. And people are already coming up with wild use cases. 10 examples:

32 replies · 7 reposts · 141 likes · 75.2K views
hoy@Yohuyt·
@PawelHuryn but hugely slow inside claude code
1 reply · 0 reposts · 1 like · 77 views
Paweł Huryn@PawelHuryn·

After the llama.cpp fix, we can finally use gemma-4 with Claude Code:
Step 1:
- Windows: winget install llama.cpp
- MacOS: brew install llama.cpp
More: github.com/ggml-org/llama…
Step 2:
- llama-server -hf ggml-org/gemma-4-E2B-it-GGUF (huggingface.co/ggml-org/gemma…)
OR
- llama-server -hf ggml-org/gemma-4-E4B-it-GGUF (huggingface.co/ggml-org/gemma…)
Step 3: Add this to your settings.local.json
{ "env": { "ANTHROPIC_BASE_URL": "http://127.0.0.1:8080" }, "model": "gemma4" }
P.S. Fixes for Ollama and mlx-lm are still in progress.

3 replies · 1 repost · 26 likes · 5.8K views
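For readability, the settings.local.json fragment from the tweet above, pretty-printed. The field names and placement (including "model" at the top level) are exactly as given in the tweet's own snippet, not verified against Claude Code's settings documentation.

```json
{
  "env": {
    "ANTHROPIC_BASE_URL": "http://127.0.0.1:8080"
  },
  "model": "gemma4"
}
```

The ANTHROPIC_BASE_URL override is what redirects Claude Code's API calls to the local llama-server instance on port 8080.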
hoy@Yohuyt·
@thsottiaux What happened to business/team plans' 5h limits?
0 replies · 0 reposts · 0 likes · 670 views
Tibo@thsottiaux·
Does anyone have a breakdown of how much value you get from your various AI subscriptions across providers, compared to API prices?
184 replies · 15 reposts · 949 likes · 120.7K views
hoy@Yohuyt·
@t1amak Looks like they did something to team/business accounts, hitting 5h limits at only 13% of the weekly quota (it was 30% before)
0 replies · 0 reposts · 0 likes · 32 views
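A quick sanity check on the numbers in the post above. The 13% and 30% figures are from the post itself; treating them as the share of the weekly quota consumed when the 5h limit trips gives a rough tightening factor.

```python
# Share of the weekly quota at which the 5h limit trips, per the post above.
old_share = 0.30  # before the change
new_share = 0.13  # after the change

# How much tighter the effective 5h cap became.
print(round(old_share / new_share, 1))  # roughly 2.3x less 5h headroom
```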
☩
@t1amak·
wtf happened with Codex 5h usage limits? :D I haven't even started working and the limit dropped from 57% to under 25% lmao
1 reply · 0 reposts · 0 likes · 95 views
thatvirtualboy@thatvirtualboy·
Anyone else's Codex 'Usage limit' drunk lately? How am I suddenly burning through 5 hours of usage in 1 hour?
1 reply · 0 reposts · 2 likes · 168 views
hoy@Yohuyt·
@janusch_patas No need to criticize one tech to sell another. They both have pros and cons.
0 replies · 0 reposts · 1 like · 95 views
Ziad@ziad_makes·
I take back everything I said
1 reply · 0 reposts · 1 like · 231 views
hoy@Yohuyt·
@rohanvarma On a side note, I find it confusing that the app shows a thread's time as when the chat was created, instead of the time since the last message
0 replies · 0 reposts · 0 likes · 5 views
Rohan Varma@TheRohanVarma·
Some interesting data we pulled today showed that ~40% of Codex users use multiple surfaces, between the App, CLI, and IDE extensions. Everyone seems to have a primary preference, but a bigger-than-expected chunk of users launch codex agents outside of their primary interface. If you use multiple surfaces, I'm curious why? And what could we do to improve the experience?
155 replies · 2 reposts · 282 likes · 36.6K views