Federico Diaz Sparta
@FDiazsparta
35K posts

35 - IG @ulanbatar.arg Music, Automation, E-commerce Growth & Marketing, Advertising, Langs. ENG, ESP, RUS, POR(BR), FRA, GER.

Buenos Aires, Argentina · Joined May 2009
3.3K Following · 981 Followers
Lunatika @lunatika_shd
Panini has always worked this way: in batches. It really isn't random distribution. And it doesn't only happen with the World Cup stickers. I remember when I was a kid, to fill the Simpsons album we had to buy from different places, because the batches that arrived at the nearby kiosk always contained the same stickers. You have to buy a little on ML, a little at Carrefour, a little on Rappi. And so on.

Quoting Fundamentalistas del Mundial @mundialistas26:
Sharing my experience here. If you're one of the people who ended up with 600 of Iraq and Uzbekistan, the ones from ML came from a different batch for me. Lots of South Africa, Switzerland, Brazil, Argentina. Cheers.

9 replies · 11 reposts · 857 likes · 125.6K views
Federico Diaz Sparta retweeted
Polymarket @Polymarket
BREAKING: Vibe-coding platform Lovable reportedly suffered a breach that exposed users' AI chat histories, source code, & database credentials.

576 replies · 919 reposts · 8.4K likes · 2.1M views
Federico Diaz Sparta retweeted
Shaun Anderson @Hobo_Web
👀 "I published a completely blank website. A white page, zero visible content. But seven layers of structured data underneath: JSON-LD, llms.txt, Ed25519-signed entity claims. Within 36 hours it became the no. 1 cited source in Perplexity. ChatGPT cited it independently. No human ever saw content on that page."

27 replies · 32 reposts · 420 likes · 42.3K views
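The setup described in that tweet (machine-readable entity claims on a page with no visible content) can be sketched with a minimal JSON-LD block. This is an illustration only: the organization name and URLs below are made-up placeholders, and the Ed25519 signing layer the tweet mentions is omitted, since it needs a third-party cryptography library.

```python
import json

# A minimal schema.org entity claim in JSON-LD, the format crawlers and
# LLM retrieval pipelines read from <script type="application/ld+json">
# tags. All names and URLs are hypothetical placeholders.
entity_claim = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Co",
    "url": "https://example.com",
    "sameAs": ["https://en.wikipedia.org/wiki/Example"],
}

# Serialized form, embeddable in an otherwise blank HTML page.
script_tag = (
    '<script type="application/ld+json">'
    + json.dumps(entity_claim)
    + "</script>"
)
print(script_tag)
```

The page can render nothing at all while this tag still gives parsers a fully structured description of the entity.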
Artu Grande e/acc 🇦🇷 @ArtuGrande
This weekend I'm running 21km. During the week I felt muscle strain in my calves. I have to go back to physio and rest a few days. The most intense athletic challenge of my life is coming: @RaidColumbia ⛰️🏃‍♂️ How it looks / how I feel (?

Quoting Artu Grande e/acc 🇦🇷 @ArtuGrande:
In 10 days I'm running 21km with losquehacen.com.ar. This Friday I'm donating blood for the first time in my life 🩸 On May 9 and 10 I'm running my first trail race, in Jujuy: the RAID de Los Andes.
- Stage 1: 12K in Salinas Grandes (3,415 m above sea level)
- Stage 2: 24K from Tumbaya to Purmamarca (Cerro de los 7 Colores)
Running in our country surrounded by mountains is a privilege ⛰️ Here's a video below for those who don't know it:

4 replies · 1 repost · 14 likes · 1.3K views
fran @frangss_
We'd been sitting on this news for days, just in case, so that everything would go well. SANOA IS NOW AVAILABLE AT AXION 🟣🤩 (The exact address is in the reply to this tweet.) Sanoa has no ceiling; how much it has grown in these 3 months was unthinkable for us 🫡🫶

21 replies · 13 reposts · 542 likes · 78.5K views
Federico Diaz Sparta retweeted
Artu Grande e/acc 🇦🇷 @ArtuGrande
🇦🇷 @rauchg (CEO of @Vercel) anticipates a possible IPO for 2027, looking for the "perfect timing".
💰 Vercel is valued at ~US$9.3 billion (2025).
📈 Annual revenue of US$300M and a run rate of US$340M as of February 2026.
🤖 Big AI players use Vercel: @Meta AI, @claudeai (@Anthropic).
🧠 On AI: Rauch says models already beat humans on most tests.
⚡ At Vercel they talk more about "superintelligence" than about AGI.
🏢 Impact on jobs: CEOs are redesigning companies around much smaller teams thanks to AI.
👨‍💻 Extreme case: a CTO who used to have thousands of engineers could now operate with ~100.
🇦🇷 Opportunity for Argentina:
- Key potential in global AI infrastructure
- An energy advantage for data centers
- Rauch wants to connect US CEOs with Argentina's energy sector
This podcast episode gets more surreal by the day; in a few years they'll say it was AI 👇🇦🇷🧉

Quoting Artu Grande e/acc 🇦🇷 @ArtuGrande:
Sharing some mates with @rauchg 🧉🇦🇷 Founder and CEO of @Vercel, the company behind @nextjs and @v0, which just closed a Series F round yesterday at a USD 9.3B valuation. From Lanús to the world, Guillermo shares a unique perspective on how to spot trends before they go mainstream, the programmer's role in the AI era, and how to build global products from anywhere in the world 🌍.
🎙️ It was an enormous privilege to have this conversation in Spanish, like a coffee chat between friends, talking about technology, community, and the future.
🕒 Episode index
[00:05:27] Congratulations on the Series F and a look back at the journey of @vercel and open source.
[00:07:19] His mission in one sentence: turning ideas into products at global scale.
[00:09:53] How to train your instinct for spotting technology waves (Socket.io, Next.js, React).
[00:16:32] The programmer's new role in the era of AI, copilots, and agents.
[00:26:27] How AI and @v0 lower the barriers to starting a tech company.
[00:33:11] The power of building software from anywhere and reaching global markets.
[00:40:04] Creating vs. consuming in the Salta puna: why humanity needs more software creators.
[00:46:18] Networking and community: leading with educational content and product demos that solve real problems.
[00:49:34] His view on Web3, crypto, and their connection to AI in the context of @EFDevcon in 🇦🇷.
[00:57:10] The future of the Internet: AI Cloud, @neuralink, and the power of individuals.
It's a dream come true to have had the chance to interview Guillermo in such a special week for Vercel. Thanks to everyone who has supported my content for years, and my mission of bringing education and technology to Argentina's emerging communities, and especially to my family, who believed in me from the start. With great pride, from Salta to the world 🇦🇷🌎 - Artu

3 replies · 10 reposts · 104 likes · 19.3K views
Renato Piermarini @renapiermarini
Remember this? I have more lore 🍿 Peugeot handed over a new car, and this time it has no cosmetic flaws BUT... suspense... NOTHING ON IT WORKS, IT'S FULL OF ELECTRICAL PROBLEMS. How can they deliver brand-new cars like this? And on top of that, they replaced it with a WORSE one!!!!

Quoting Matias Arcas @MatiasArcas:
Hi @peugeotarg, this is how a 408gt 0km was delivered to me at an official dealership @dzapatillas

119 replies · 53 reposts · 1.2K likes · 203.3K views
Federico Diaz Sparta retweeted
Anish Moonka @anishmoonka
Orcas eat great white sharks. They hunt seals, dolphins, and baby whales. They have never killed a single human in the open ocean. Not once, in all of recorded history.

An orca's brain weighs up to 15 pounds. Yours weighs about 3. They have roughly double the brain cells we do in the regions that handle complex thought. A neuroscientist at Emory named Lori Marino put an orca brain in an MRI and found these animals can tell different species apart underwater. They do it by sending out clicks that bounce off everything around them and come back as a kind of 3D sound map (this is called echolocation). From 500 feet away, an orca knows you're a human and not a seal. It skips you on purpose.

So why don't orcas hunt humans? The answer is culture. Orcas around the world are divided into at least 10 separate populations, each with its own food rules, its own language, and its own way of hunting. All of it learned from their mothers. One population eats only fish. Another eats only marine mammals like seals and sea lions. These two populations can live in the exact same water and never swap a single meal. A baby orca learns what food is from its mother, and that list stays the same for life.

In the Pacific Northwest, one population called the Southern Residents eats almost nothing but Chinook salmon. Scientists have documented them killing harbor porpoises 78 times over six decades, carrying the dead porpoises in their mouths, and never once eating them. Even when the group was starving. A 2023 study in Marine Mammal Science looked at all 78 cases and concluded it was play. These orcas would rather go hungry than eat something their culture says isn't food.

Researchers studying whale behavior in 2001 found that orca cultural traditions "appear to have no parallel outside humans." Each family group has its own dialect, its own version of the language. Calves spend about two years just learning how to make all the sounds their family uses. Mothers will slow down a hunt on purpose so their young can watch.

In 2005, a 12-year-old kid was swimming in Helm Bay, Alaska, when an orca came at him at full speed. At the very last second, the orca seemed to realize it was charging a human. It bent its entire body in half and turned back to open water.

In captivity, it goes differently. SeaWorld's Tilikum killed three people during his life in a concrete tank. Research from 2016, published in the journal Animals, traced it to psychological collapse from being locked away from the family bonds orcas need to stay stable.

I think calling this a "mystery" undersells the science. Orcas decide what to eat based on culture, not instinct. No orca mother has ever taught her calf to hunt humans, so no orca hunts humans.

Only about 75 of those salmon-eating Southern Residents are still alive. Their pregnancy failure rate is 69% because we've destroyed their salmon runs. They won't break their food culture to survive. Whether we care enough to protect their culture is the part that actually matters.

Quoting Nature is Amazing ☘️ @AMAZlNGNATURE:
One of the biggest mysteries to me is how Orcas, the ocean's most efficient predators, have never attacked humans in the wild... almost like they know something we don't.

734 replies · 16.4K reposts · 95.6K likes · 7.3M views
Federico Diaz Sparta retweeted
Ruben Hume @rubenhume
Amazonia has a new logo, and it's a masterpiece. The letters were extracted from satellite images of the river itself.

174 replies · 9.3K reposts · 116.1K likes · 5M views
Lauti @lautirshaid
In Buenos Aires the coding community is really well established, but the data community doesn't exist, or nobody talks about it, I don't know. Does anyone have recommendations for getting into that bubble? If there isn't one, I'm up for building it.

42 replies · 16 reposts · 453 likes · 47.9K views
Federico Diaz Sparta retweeted
Aakash Gupta @aakashgupta
The conspiracy version of this is wrong. The real version is worse.

Anthropic published a postmortem last September documenting three separate infrastructure bugs that degraded Claude's quality for weeks. Routing errors sent requests to wrong server pools. A compiler bug corrupted token selection. An adaptive thinking system started under-allocating reasoning on complex turns. 30% of Claude Code users got misrouted during the affected period. None of that was intentional. All of it produced exactly the pattern in this chart.

Here's what actually drives the decline. Every AI company faces the same constraint: inference costs scale linearly with users but revenue doesn't. Quantization (compressing model weights from 16-bit to 8-bit or 4-bit) cuts GPU memory by 2-4x. Adaptive thinking allocation reduces compute per request. Batching groups requests to maximize throughput. Each optimization is individually rational. Each one shaves quality by a few percent. Stack five of them under peak load and users feel it.

The timing matches launches perfectly because launch day has minimum users on the new model and maximum GPU allocation per request. Three months later you have 10x the users on the same infrastructure. The quality delta between "launch day inference budget" and "Tuesday afternoon at peak load inference budget" is the entire gap in that chart.

Benchmarks miss this because benchmarks run on dedicated hardware with no load balancing, no quantization, no request batching. The model that scores 92% on MMLU in a lab scores 92% on MMLU in production too. But the user experience of interacting with that model through six layers of inference optimization at 4pm EST? That's a different product.

The real problem is that "intentional nerfing" gives companies too much credit. Intentional nerfing implies control. What's actually happening is that nobody fully understands how inference optimization degrades the long tail of capabilities until users report it weeks later.

Quoting Marcin Krzyzanowski @krzyzanowskim:
"Anthropic, OpenAI, and Google release their new models with high quality from day one, then slowly nerf them until the next model, so when the next model hits, it's perceived as a bigger jump than it actually is" sounds about right for what's happening.

25 replies · 48 reposts · 379 likes · 53.5K views
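The "2-4x" quantization figure in the thread above is straight arithmetic over bit widths. A minimal sketch, assuming a hypothetical 70B-parameter model (the model size is an illustration, not a figure from the thread):

```python
# Back-of-the-envelope memory math behind weight quantization:
# memory for weights scales linearly with bits per weight, so going
# from 16-bit to 8-bit or 4-bit cuts it by 2x or 4x respectively.

def weight_memory_gb(n_params: float, bits_per_weight: int) -> float:
    """GPU memory needed just to hold the weights, in gigabytes."""
    return n_params * bits_per_weight / 8 / 1e9

n_params = 70e9  # a hypothetical 70B-parameter model

for bits in (16, 8, 4):
    gb = weight_memory_gb(n_params, bits)
    print(f"{bits:>2}-bit weights: {gb:6.1f} GB "
          f"({16 // bits}x smaller than 16-bit)")
# → 16-bit: 140.0 GB, 8-bit: 70.0 GB, 4-bit: 35.0 GB
```

This only counts the weights themselves; activations, KV cache, and batching overhead come on top, which is why serving stacks combine quantization with the other optimizations the thread lists.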