Ferran Adell

2.3K posts


@ferranadell

philosophy, videogames (VGBL), generative AI, and expanded education. Gamer. Youtube: Videojocs i Educació.

Montseny · Joined August 2009
316 Following · 281 Followers
Pinned Tweet
Ferran Adell @ferranadell
If anyone is interested in closely following the project we have just started, Videojocs i Educació en català (vjeducació.org), you can follow the account @vjeducacio, where I publish all the new episodes, resources, dossiers, articles, etc. #educació #videojocs #encatalà
Ferran Adell retweeted
Via lliure RAC1 @vialliure
🖥️ What should we do if we have Windows 10 on our computer, now that we know it will stop receiving updates worldwide as of Tuesday, October 14? 🗣️ We talked about it with @ferranadell, computing expert at @UOCuniversitat ⬇️ rac1.cat/a-la-carta/det…
Ferran Adell retweeted
LudovicCreator @LudovicCreator
🎨 AI VIDEO TOOLS 🎨

Last time I compared a single text-to-video prompt across 9 AI video tools. This time I ran 10 random prompts through 9 different models. The point was not to say which one is better, just to see how they react. I put every model under the same conditions: ONE SINGLE SHOT, no prompt enhancement.

It was impossible to share 90 videos in a thread on X, so I put everything in a Notion file (see screenshots). There are 10 subpages, each with the prompt and the 9 videos, always in the same place (with the tool name in the video caption).

I couldn't do Google Veo 2, not having access; if anyone has time to run those prompts, please send me a Google Drive link in DMs with the videos and I will add them, crediting you. For SORA I just got access; I will update when I get the Veo 2 ones.

The Notion link is in the first comment. If you use Notion you can duplicate it; if not, no worries, you can use the link as a website. Bookmark for later, and share if you find this interesting. 1/2
Ferran Adell retweeted
Gary Marcus @GaryMarcus
o3 "ARC-AGI" postmortem megathread: why things got heated, what went wrong, and what it all means

@KevinRoose, of Hard Fork and NYT, was so impressed with OpenAI's rollout that he joked "of course they have to announce AGI the day my vacation starts". For many people, what sealed the deal, or led them to conclude, wrongly, that o3 necessarily "must be a step to AGI", was o3's performance on @fchollet's ARC-AGI. Yesterday war erupted over what was actually done. Here's what you should know:

1. As NYU prof @LakeBrenden pointed out, the test should never have been called ARC-AGI. Even Chollet acknowledged this in his blog, saying "it's not an acid test for AGI". At *most* the test is necessary for AGI; it certainly isn't sufficient. Critical things like factuality, compositionality, and common sense aren't even addressed.

2. The video should have been much clearer about what was actually tested and what was actually trained. To the average listener it may have sounded like the AI took the test cold, with a few sample items, as a human would, but that's not what actually happened.

3. What was actually done (pretraining on what I believe was hundreds of public examples) is NOT comparable to what humans require. Such pretraining is not uncommon in the field, but it was not made clear in the video. Altman saying that the test wasn't "targeted" added to the confusion.

4. Because of the pretraining, and the lack of comparability, what was actually shown was disappointing. @Thom_Wolf, cofounder of HuggingFace, wrote: "people commenting that it's normal to train on the train set but somehow I would have expected/hoped that as we're nearing AGI-level capabilities we would not need to really fine-tune/specifically train the model on any specific downstream task"

5. Two graphs, one presented by OpenAI and one by Chollet, were misleading. As DeepMind's @olcan pointed out, the Chollet blog version made the breakthrough seem bigger than it really was by omitting results from others, such as the @jacobandreas lab at MIT. The same was true of the OpenAI graph: the MIT result (halfway between o1 and o3) and many other results weren't shown, making the breakthrough relative to the field seem far bigger than it really was.

6. As the scientist @AdanBecerraPhD put it (and Chollet publicly agreed), the best thing would have been to present data for the "base model" without the pretraining. This is what many people thought they saw, and that number matters scientifically. Unfortunately, the key test was not done.

7. The way in which influencers tried to frame my legitimate criticism as being exclusively about my alleged personal bias was intellectually dishonest. Many others, including researchers from HuggingFace, DeepMind, NYU, and Huawei, and even Chollet himself, shared many of my concerns. Every single point I made was shared by at least one other researcher with a PhD.

Conclusions:
- The problem wasn't the task per se (a fine addition to our benchmark collection), or even how it was administered (legitimate relative to the test's rules); it's the impression OpenAI conveyed, which left many (not all) people believing that more had been shown than actually was.
- We still don't have a solid test of what o3 does without the pretraining, in the case that would be more comparable to humans. Because the wrong experiment was performed, and key data weren't given, we can't compare directly with humans. (And the best humans still outperformed the model.)
- Until there is considerable external scientific scrutiny (so far there has been none), we won't really know exactly what the o3 advance is or how important it is.
- What we saw is not AGI. Both Chollet and OpenAI's @anupk24 made this clear, but only after the video.
- People in the media probably shouldn't even joke about it being AGI.
The media should be asking hard questions, not fanning hype.
Ferran Adell retweeted
Alex Reibman 🖇️ @AlexReibman
OpenAI's biggest rival is shaking things up. Anthropic invited 200+ elite hackers to their SF headquarters to see what's possible with Claude. Here's what we saw at the @AnthropicAI x @MenloVentures Builder Day Hackathon (🧵):
Ferran Adell retweeted
Iris van Rooij 💭 @IrisVanRooij
Will AI soon surpass the human brain? According to OpenAI, Google DeepMind, and others, it is inevitable. However, researchers at @Radboud_Uni and other institutes present a new proof that those claims are overblown and unlikely ever to come to fruition. ru.nl/en/research/re… @o_guest
Ferran Adell retweeted
MetaData.Cat @MetaDataCat
🖼️ Creating dreamlike photographs of memories that were never photographed. That is the mission of 'Memòries Sintètiques', the project by @domesticstream at @dissenyhub that has already collected the intimate moments of more than 300 Barcelona residents metadata.cat/reportatge/475…
Ferran Adell retweeted
María del Mar Sánchez @mallemar
They've created a game called "Publish or Perish". You have to get more citations than everyone else, sabotage peer review, and make sarcastic comments. 🤣 I want it.
Jay Van Bavel, PhD @jayvanbavel

I played “Publish or Perish” last night with my lab. It was a blast—it pokes fun at a lot of absurdities of academia. A perfect 👍 game for a lab party! You try to gain as many citations as possible while publishing papers, sabotaging your highly cited peers, and making snarky comments. Kudos to @baixx062 for developing the game. Reach out to him to get a copy.

Ferran Adell @ferranadell
This is serious... Even more so considering where our information ecosystem is heading with generative AI tools.
Mozilla @mozilla

We're calling on @Meta to maintain its current transparency tool @CrowdTangle; a crucial, real-time tool used to monitor the spread of political disinformation and hate speech. Sign our petition here to help us get #Meta's attention: mzl.la/4bwP2KI

Ferran Adell retweeted
Winix @WinixRun
Today I got a phishing SMS from "Correos". Since my business partner was restarting servers and I couldn't work, I thought... I'm going to have some fun. If this thread helps anyone understand a little better how these scams work, that's a win. 🧵⬇️
Ferran Adell retweeted
Jim Fan @DrJimFan
OpenAI is expected to demo a real-time voice assistant tomorrow. What does it take to deliver an immersive, or even magical, experience?

Almost all voice AI goes through 3 stages:
1. Speech recognition, or "ASR": audio -> text1; think Whisper.
2. An LLM that plans what to say next: text1 -> text2.
3. Speech synthesis, or "TTS": text2 -> audio; think ElevenLabs or VALL-E.

Last year, I made the figure below to show how to make Siri/Alexa 10x better. However, naively going through the 3 stages results in huge latency. User experience falls off a cliff if we have to wait 5 seconds for *each* reply. It breaks the immersion and feels lifeless even if the synthesized audio itself sounds real. Natural dialogues fundamentally don't work like this. We humans:
- think about what to say next at the same time as we listen and speak;
- inject "yes, hmm, huh" at appropriate moments;
- predict when the other person will finish and immediately take over;
- decide to talk over the other person organically, without being offensive;
- handle interruptions gracefully (currently, AI assistants either cannot be interrupted, which is super frustrating, or simply stop when they detect an audio event and lose their train of thought);
- engage in group chat. We are so good at multi-agent conversations.

It's not as simple as making each of the 3 neural nets faster, sequentially. Solving real-time dialogue requires us to rethink the whole stack, overlap each component as much as possible, and learn how to make interventions in real time. Or, perhaps even better, just have 1 NN mapping audio to audio. End-to-end always wins. I'll sketch out how to design such a model and its training pipeline. Meanwhile, let's wait and see how far OpenAI pushes it!
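The three-stage pipeline Jim Fan describes can be sketched as a naive sequential loop. This is a minimal illustration, not any real system: the stage functions below are hypothetical stand-ins (fixed strings plus artificial delays), not actual Whisper, LLM, or ElevenLabs calls. The point it shows is that a strictly sequential pipeline makes the user wait for the *sum* of all three stage latencies before hearing anything.

```python
import time

# Hypothetical stand-in stages. A real system would call an ASR model
# (e.g. Whisper), an LLM, and a TTS engine here; these stubs just
# simulate each stage's latency with time.sleep.

def asr(audio: bytes) -> str:
    time.sleep(0.05)  # simulated speech-recognition latency
    return "what's the weather?"

def llm(text: str) -> str:
    time.sleep(0.05)  # simulated LLM generation latency
    return f"Reply to: {text}"

def tts(text: str) -> bytes:
    time.sleep(0.05)  # simulated speech-synthesis latency
    return text.encode()

def naive_turn(audio: bytes) -> bytes:
    # The stages run strictly one after another, so total latency is
    # asr + llm + tts -- nothing can be spoken until all three finish.
    return tts(llm(asr(audio)))

start = time.perf_counter()
reply = naive_turn(b"...")
elapsed = time.perf_counter() - start  # roughly 0.15 s: the sum of the stages
```

The overlap Jim Fan argues for would mean streaming partial ASR output into the LLM and partial LLM output into TTS, so the first audio comes back after roughly one stage's latency rather than three.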
Ferran Adell @ferranadell
@rod2nordcat We're at Riells/Breda; the train to Sants should have left at 20:29. It isn't even announced on the screens, no information over the PA, no information on the Twitter accounts. We've internalized that a 20-minute delay isn't serious, so it isn't even reported :(