JotaDe Rodriguez

846 posts

@JotaDeRodriguez

Somewhere between CG art, Architecture and Design. And now programming.

Madrid, Comunidad de Madrid · Joined March 2022
252 Following · 46 Followers
nutanc
nutanc@nutanc·
Had an interesting discussion with my front-end UI developer. I was asking him why he doesn't use Claude Code more to do things faster, and I quickly built a prototype in front of him to show how it was done. He calmly asked me, "Well, if you observe here, this button does not look like the design given in Figma." I said, "It's okay, but look at how fast we built it." Then he told me, "Why is it okay to be fast but wrong with AI, but with me it has to be fast and correct?" 😁
Marcin Krzyzanowski@krzyzanowskim

we knew how to make bad code, cheap and fast, before agentic coding emerged. why didn't we follow that path earlier? it bugs me. why now? we already had the knowledge to build bad code.

English
69
195
2.9K
253.8K
Pinto
Pinto@pintoneous·
@adamndsmith And what is the lesson here? If you have a business, pay for proper IT support. This would not have happened, because you would have backups of everything and segregated work/family accounts.
English
1
0
0
4.2K
Adam Smith
Adam Smith@adamndsmith·
Incredible post. His sister is at university, her dissertation is in Google Docs - all locked out.
Adam Smith tweet media
English
170
479
16.2K
2M
JotaDe Rodriguez
JotaDe Rodriguez@JotaDeRodriguez·
@triathenum @aiamblichus The issue is that LLMs only know how to add code, not simplify it. They duplicate stuff because they're incapable of thinking about the whole. It's not surprising: imagine you could hire an infinite number of humans, but each only got 1-2h to work on your project (1/2)
English
1
0
0
22
αιamblichus
αιamblichus@aiamblichus·
I am currently building a tool of low-to-moderate complexity and was intentionally trying not to look at the code Codex was writing, to see how it goes. Now I finally looked. The results are shocking.

The thing "works", but the code quality is truly apocalyptic. I don't even want to think about the amount of refactoring it would take to fix this mess. If you think your bot will build you a Salesforce clone any time soon, I have a bridge to sell you.

The present generation of AIs (if left unattended for any length of time) will create tar pits beyond your wildest imagining. And if you do decide to verify everything they do, you will reduce your velocity by a factor of 10 at least. Which means you won't win nearly as much from the whole process.

And before anyone says "just let them refactor it!": I tried. Asking the AIs to refactor their own code won't bring you any joy. It just drags you further into the tar pit.

The models are clearly trained to pursue the one goal of producing code that "works", with little or no regard for architecture or code quality. This is classic junior developer behavior, of course, but an AI junior will drown you in slop before you know what hit you. With human juniors, you at least have some time to react before they've written 100k lines of code and exhausted your token budget.

This is what progressive loss of control feels like in SE space.

I am sure there are use cases where vibe coding is genuinely useful (small projects, PoCs, straightforward migrations). But we are still far from them being able to produce software of any size or complexity. I advise extreme caution with how much autonomy you choose to delegate to AI coders.
English
53
20
323
28.2K
JotaDe Rodriguez retweeted
Peer Richelsen
Peer Richelsen@peer_rich·
ok german car industry hear me out
reboot ALL iconic 1990s cars fully electric
sell nostalgia to millennials who all have money now
remove all expensive components, sell them for <$30k
go into debt if needed and sell them at a loss
Peer Richelsen tweet media (×4)
English
358
138
2.6K
104.4K
JotaDe Rodriguez
JotaDe Rodriguez@JotaDeRodriguez·
@carlos6k00 @rauchg Yes. Some people did it with OpenClaw. Hide malicious text in the skills, the AI has the ability to add skills autonomously and you have the perfect recipe for trouble.
English
0
0
0
73
Rick
Rick@carlos6k00·
@rauchg Just hallucinating here as well, could this be the new form of virus? For example, I create a well-structured GitHub library that solves a very common problem and hide malware inside it. Claude Code implements my library and my malicious code?
English
4
0
23
15.4K
Guillermo Rauch
Guillermo Rauch@rauchg·
A Vercel user reported an issue that sounded extremely scary: an unknown GitHub OSS codebase being deployed to their team. We, of course, took the report extremely seriously and began an investigation. Security and infra engineering engaged.

Turns out Opus 4.6 *hallucinated a public repository ID* and used our API to deploy it. Luckily for this user, the repository was harmless and random. The JSON payload looked like this:

"gitSource": {
  "type": "github",
  "repoId": "913939401", // ⚠️ hallucinated
  "ref": "main"
}

When the user asked the agent to explain the failure, it confessed: the agent never looked up the GitHub repo ID via the GitHub API. There are zero GitHub API calls in the session before the first rogue deployment. The number 913939401 appears for the first time at line 877; the agent fabricated it entirely. The agent knew the correct project ID (prj_▒▒▒▒▒▒) and project name (▒▒▒▒▒▒) but invented a plausible-looking numeric repo ID rather than looking it up.

Some takeaways:
▪️ Even the smartest models have bizarre failure modes that are very different from ours. Humans make lots of mistakes, but they certainly don't make up a random repo ID.
▪️ Powerful APIs create additional risks for agents. The API exists to import and deploy legitimate code, but not if the agent decides to hallucinate what code to deploy!
▪️ Thus, the agent would likely have had better results had it skipped the API and stuck with the CLI or MCP.

This reinforces our commitment to make Vercel the most secure platform for agentic engineering. Through deeper integrations with tools like Claude Code and additional guardrails, we're confident security and privacy will be upheld.

Note: the repo ID above is randomized for privacy reasons.
English
202
237
3.2K
775.2K
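The kind of guardrail this incident suggests can be sketched as a pre-deploy check: resolve the numeric repo ID and confirm it names the repository the user actually intended, since a fabricated ID will resolve to a different (or no) repository. Everything below is a hypothetical sketch: the function names and sample payloads are illustrative, not Vercel's or GitHub's actual client code.

```python
# Hypothetical pre-deploy guardrail: before honoring an agent-supplied
# "gitSource" payload, resolve the numeric repo ID (the real lookup
# would be an HTTPS GET of https://api.github.com/repositories/{id})
# and check that the resolved repository matches the intended one.

def repo_id_matches(repo_api_response: dict, expected_full_name: str) -> bool:
    """True only if the resolved repository's full_name ("owner/repo")
    matches what the user intended."""
    return repo_api_response.get("full_name") == expected_full_name

# Simulated API responses (a real check would fetch them over the network):
legit = {"id": 913939401, "full_name": "acme/web-app"}
rogue = {"id": 913939401, "full_name": "randomuser/unrelated-repo"}

print(repo_id_matches(legit, "acme/web-app"))   # True
print(repo_id_matches(rogue, "acme/web-app"))   # False
```

A check like this is cheap relative to a deployment, and it converts a silent hallucination into a hard failure the user can see.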
JotaDe Rodriguez retweeted
Francisco Fonseca
Francisco Fonseca@_Francis_co_Art·
Portuguese cable management
Francisco Fonseca tweet media
Français
33
477
6.2K
85.2K
JotaDe Rodriguez retweeted
1926 Live
1926 Live@100YearsAgoLive·
In an interview with John Kennedy of Collier’s Magazine, reclusive inventor Nikola Tesla says that in the future, “wireless will be perfectly applied to the whole earth” and we will have devices “that instantly allow humans to communicate with one another, and they will fit in our pockets.” Tesla claims that humans can even have face-to-face meetings on these devices, using wireless magic. In the same interview, Tesla also predicts that in the future, women will be the dominant sex and the “Queen Bees” of society.
1926 Live tweet media
English
17
94
737
33.5K
JotaDe Rodriguez retweeted
City Aesthetics ⛩
City Aesthetics ⛩@cityaestheticss·
How can you not love cities?
City Aesthetics ⛩ tweet media
English
96
158
3.7K
479K
JotaDe Rodriguez
JotaDe Rodriguez@JotaDeRodriguez·
@ursacke @anatolideli @codexeditor @ahmetb "assistant turn" and other specific tokens (thinking, context, etc) and a special "End of Sequence" token, so the LLM is actually trained on when a response should end, and is able to output that token. The 'chat' interface just hides it.
English
0
0
1
12
JotaDe Rodriguez
JotaDe Rodriguez@JotaDeRodriguez·
@ursacke @anatolideli @codexeditor @ahmetb You have been answered already, but to add: when you feed text into an LLM, one of the complexities that is usually abstracted away is the special tokens that get inserted. A multi-turn conversation is actually just a single string of text with "user turn", "user end of turn"
English
1
0
1
17
ahmetb
ahmetb@ahmetb·
How are LLMs able to generate perfectly aligned ASCII diagrams/tables if they're only focused on the next token (and not seeing all lines at once)? What's the technical explanation for this?
ahmetb tweet media (×2)
English
231
35
1.8K
522.5K
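One piece of the answer: next-token prediction still conditions on every character already generated, so all the previously emitted lines of the diagram are in context, and matching a column boundary only requires counting characters in them. A toy sketch of that idea (purely illustrative, not how a transformer is implemented):

```python
# Toy analogy: the first emitted line fixes the column widths, and every
# later row is padded using only text that is already "in context".
# (No truncation is handled; cells longer than the width would misalign.)

def emit_table(rows):
    cells = [[str(c) for c in r] for r in rows]
    widths = [len(c) for c in cells[0]]  # widths read off the first line only
    lines = []
    for row in cells:
        lines.append(" | ".join(c.ljust(w) for c, w in zip(row, widths)))
    return "\n".join(lines)

print(emit_table([["name  ", "tokens"], ["gpt", 12], ["claude", 9]]))
```

Every output line comes out the same length because the widths are recoverable from the already-generated header, which is exactly the information an autoregressive model has at each step.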
JotaDe Rodriguez
JotaDe Rodriguez@JotaDeRodriguez·
@qtnx_ @graffioh Yes they do, when the whole point is that they're supposed to be expandable on demand, i.e. the agent reads the first parts and goes deeper only if it needs to.
English
0
0
1
30
JotaDe Rodriguez
JotaDe Rodriguez@JotaDeRodriguez·
@advancedjd @Speclizer_ Things you can do in MGS3 with enemies:
Make them give you items
Shoot their radio so they can't call for backup
Hold them while lying down so they'll stay like that forever
Shoot their arm so they can't use their main weapon
Make them talk
These are just off the top of my head!
English
1
0
19
671
Speclizer
Speclizer@Speclizer_·
TLOU 2 had a cut feature called "Hold Up" which allowed you to sneak up behind an enemy and force them to put their hands up, you could then control where they move, tell them to face you or away, then they would eventually try to pull out their weapon. The animation still exists in the game, along with the script. #TheLastofUsPartII
Speclizer tweet media
English
109
179
6.5K
2.4M
Dreaming of Ink
Dreaming of Ink@Changing_Wander·
@Prazkat In the book they also say that although the Ring is not an object with true intelligence, it does have the "personality" of a parasite, and it will attract and manipulate people and situations to further its purpose of corruption; that's also why they don't just tie it to a rope and drag it along.
Español
3
1
167
3.8K
🐾🚔 Prazkat Reviews 🇲🇽
Nah, the Ring would corrupt the mouse and then whoever carried it. The novel tells you that the corrupting effect is directly proportional to the ambition of the bearer. The hobbits were chosen to transport it because they are the humblest, so the corruption works more slowly.
🐾🚔 Prazkat Reviews 🇲🇽 tweet media
Rothmus 🏴@Rothmus

Español
22
309
4.1K
62K
Tiago Sá
Tiago Sá@tedmirra·
@levelsio Thank you. I meant in general, not only for images.
English
1
0
1
520
Melvin Vivas
Melvin Vivas@donvito·
Sonnet 4.7 rumors
English
79
19
1.2K
112.4K
JotaDe Rodriguez
JotaDe Rodriguez@JotaDeRodriguez·
@MaziyarPanahi @HanchungLee Yes and no. It is true that basically everything for LLMs right now is just 'more text in or more text out', but the idea is that these are elements that get called when actually needed, as opposed to MCPs, which were appended with every message (thus wasting tokens)
English
0
0
2
60
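The tradeoff in this exchange (tool definitions appended to every message vs. loaded only when invoked) can be made concrete with back-of-the-envelope token accounting. All numbers below are invented for the sketch:

```python
# Illustrative token accounting for always-appended tool schemas vs.
# on-demand loading. The figures are hypothetical, chosen only to show
# why on-demand loading scales with tool *use* rather than turn count.

TOOL_SCHEMA_TOKENS = 2_000   # assumed size of all tool definitions
MESSAGES = 50                # turns in a session
CALLS = 3                    # turns that actually invoke a tool

always_appended = TOOL_SCHEMA_TOKENS * MESSAGES  # schemas resent every turn
on_demand = TOOL_SCHEMA_TOKENS * CALLS           # schemas loaded when used

print(always_appended)  # 100000
print(on_demand)        # 6000
```

Under these assumptions the always-appended approach spends more than 15x the context budget, which matches the complaint in the Supabase MCP thread below about tool definitions cutting into the context window.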
Maziyar PANAHI
Maziyar PANAHI@MaziyarPanahi·
@HanchungLee so it's just prompts. we are hoping the model, which could be anything, does it super well, but there is no guarantee. what's the point of making a standard out of this, really
English
6
0
44
7.8K
JotaDe Rodriguez
JotaDe Rodriguez@JotaDeRodriguez·
@supabase I'd advise everyone to be mindful. Personally, 90% of my use case (database tables as context to design my APIs) got covered by dumping the schema into a SQL file and letting Claude browse through it when it needs to.
English
0
0
0
22
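The schema-dump workflow described here is easy to sketch: dump the schema once (with Postgres/Supabase that is typically something like `pg_dump --schema-only "$DATABASE_URL" > schema.sql`), then let the agent search the file on demand instead of keeping tool schemas resident in context. The helper below is illustrative, not part of any Supabase or Claude tooling:

```python
import re

# Hypothetical helper an agent could use to navigate a schema dump:
# list the table names so it can then read only the CREATE TABLE
# blocks it actually needs.

def tables_in_schema(schema_sql: str) -> list[str]:
    """Return table names found in a Postgres-style schema dump."""
    return re.findall(r'CREATE TABLE (?:IF NOT EXISTS )?"?([\w.]+)"?', schema_sql)

sample = '''
CREATE TABLE public.users (id bigint PRIMARY KEY, email text);
CREATE TABLE public.orders (id bigint, user_id bigint);
'''
print(tables_in_schema(sample))  # ['public.users', 'public.orders']
```

The dump costs tokens only when the agent actually reads it, which is the contrast with an always-connected MCP server drawn in the tweets above.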
JotaDe Rodriguez
JotaDe Rodriguez@JotaDeRodriguez·
@supabase Used this daily, but the amount of tokens constantly consumed because of the number of tools seriously cut into my context window and usage with Claude Code.
English
1
0
0
82
Supabase
Supabase@supabase·
The Model Context Protocol (MCP) connects Large Language Models (LLMs) to platforms like Supabase. This allows AI assistants to interact with and query your Supabase projects for you.
Supabase tweet media
English
13
18
242
13K