Alberto Giménez
@albergimenez
1.3K posts
Javascript : Python : Java
Paraguay · Joined April 2010
295 Following · 152 Followers
Alberto Giménez @albergimenez ·
@AlexisCubells and honestly it's a tax on money you don't even have, because I always end up paying more than what I supposedly have left over
1 reply · 0 reposts · 1 like · 101 views
Cubells @AlexisCubells ·
The IRP is a tax on savings. A punishment for progress that makes getting ahead harder. The IRP must be eliminated. Period.
53 replies · 359 reposts · 1.2K likes · 41.3K views
MrVe @thugGauss ·
My claude session lasted 2 hours today. On the 100 USD plan. And I didn't do anything too complex. Totally scammed.
181 replies · 64 reposts · 4.1K likes · 414.4K views
Carla @carlaconwifi ·
Working remotely gives you freedom, but it also takes away something nobody mentions: the day-to-day human contact. No coworkers nearby, no one to chat with at lunch, no one to vent to between tasks. That weighs more than it seems. Do you feel it? 💛 What do you miss most about working on-site?
1.9K replies · 209 reposts · 3.2K likes · 531.3K views
Chief Nerd @TheChiefNerd ·
🚨 SAM ALTMAN: “People talk about how much energy it takes to train an AI model … But it also takes a lot of energy to train a human. It takes like 20 years of life and all of the food you eat during that time before you get smart.”
5.9K replies · 2K reposts · 19.2K likes · 26.2M views
DELPY 📱🎬 @delpynews ·
▶️ "THERIANS PARAGUAY" MEETUP IN ENCARNACIÓN SUSPENDED | 🚨 The Therians Paraguay community postponed the gathering scheduled for February 15 in Encarnación for "security reasons". 👀 According to their statement, the decision was made after receiving alleged threats that caused concern and a climate of uncertainty. They say the conditions to guarantee participants' safety were not in place. 👉 The meetup aimed to bring community members together in a space for exchange and socializing. The new date will be announced soon.
DELPY 📱🎬 tweet media
106 replies · 28 reposts · 300 likes · 38.9K views
Julio Gonzalez @JulioGcavello ·
Incredible how people don't read. Kubischek and 25 de Mayo... NO TRAFFIC LIGHT... And they juuust sit there...
Julio Gonzalez tweet media
298 replies · 41 reposts · 297 likes · 189K views
🅽🅴🆁🅳🅿🆈
Asunción has plenty of contrasts, sure, like any capital. But saying "Paraguay is boring, poor and dangerous" after walking for a while through the microcentro, the oldest and most unequal part of downtown, which on top of that looks empty and rough outside business hours? That isn't "the country", it's a cropped frame. And since we're talking about "poverty": in many US cities today you can see homeless encampments, a drug-use crisis, neighborhoods full of shuttered stores, and areas where they also advise you not to walk around distracted. That doesn't make the whole country "poor and dangerous", right? Everyone is entitled to their opinion, but to talk seriously, compare equivalent areas, equivalent hours, and real experiences. Otherwise it's prejudice tourism with a drama filter.
🥥 𝙇𝘼𝙏𝘼𝙈 🥥 @TheLatamGuy
An American woman visits Asunción, Paraguay 🇵🇾
72 replies · 37 reposts · 580 likes · 57K views
ray @ln4norris ·
“i hope my hair is not too bad now *awkward laugh* i put so much effort into that” this is now the third time this week MBS grabbed lando’s hair and every time it’s disrespectful and completely inappropriate. even during the most important night that he got ready for, MBS still manages to be extremely unprofessional. i mean lando’s polite enough to laugh it off but it’s so uncomfortable to watch and it must’ve felt like that for him too.
332 replies · 714 reposts · 9.5K likes · 2.1M views
Alberto Giménez retweeted
Formula 1 @F1 ·
The Drive To Survive episodes are revealed! 👀 Launching March 7 on Netflix 🎥🍿 #F1
Formula 1 tweet media
199 replies · 1.5K reposts · 14K likes · 2.7M views
Alberto Giménez retweeted
Andrew Ng @AndrewYNg ·
The buzz over DeepSeek this week crystallized, for many people, a few important trends that have been happening in plain sight: (i) China is catching up to the U.S. in generative AI, with implications for the AI supply chain. (ii) Open weight models are commoditizing the foundation-model layer, which creates opportunities for application builders. (iii) Scaling up isn't the only path to AI progress. Despite the massive focus on and hype around processing power, algorithmic innovations are rapidly pushing down training costs.

About a week ago, DeepSeek, a company based in China, released DeepSeek-R1, a remarkable model whose performance on benchmarks is comparable to OpenAI's o1. Further, it was released as an open weight model with a permissive MIT license. At Davos last week, I got a lot of questions about it from non-technical business leaders. And on Monday, the stock market saw a "DeepSeek selloff": The share prices of Nvidia and a number of other U.S. tech companies plunged. (As of the time of writing, some have recovered somewhat.) Here's what I think DeepSeek has caused many people to realize:

China is catching up to the U.S. in generative AI. When ChatGPT was launched in November 2022, the U.S. was significantly ahead of China in generative AI. Impressions change slowly, and so even recently I heard friends in both the U.S. and China say they thought China was behind. But in reality, this gap has rapidly eroded over the past two years. With models from China such as Qwen (which my teams have used for months), Kimi, InternVL, and DeepSeek, China had clearly been closing the gap, and in areas such as video generation there were already moments where China seemed to be in the lead. I'm thrilled that DeepSeek-R1 was released as an open weight model, with a technical report that shares many details. In contrast, a number of U.S. companies have pushed for regulation to stifle open source by hyping up hypothetical AI dangers such as human extinction. It is now clear that open source/open weight models are a key part of the AI supply chain: Many companies will use them. If the U.S. continues to stymie open source, China will come to dominate this part of the supply chain and many businesses will end up using models that reflect China's values much more than America's.

Open weight models are commoditizing the foundation-model layer. As I wrote previously, LLM token prices have been falling rapidly, and open weights have contributed to this trend and given developers more choice. OpenAI's o1 costs $60 per million output tokens; DeepSeek R1 costs $2.19. This nearly 30x difference brought the trend of falling prices to the attention of many people. The business of training foundation models and selling API access is tough. Many companies in this area are still looking for a path to recouping the massive cost of model training. Sequoia's article "AI's $600B Question" lays out the challenge well (but, to be clear, I think the foundation model companies are doing great work, and I hope they succeed). In contrast, building applications on top of foundation models presents many great business opportunities. Now that others have spent billions training such models, you can access these models for mere dollars to build customer service chatbots, email summarizers, AI doctors, legal document assistants, and much more.

Scaling up isn't the only path to AI progress. There's been a lot of hype around scaling up models as a way to drive progress. To be fair, I was an early proponent of scaling up models. A number of companies raised billions of dollars by generating buzz around the narrative that, with more capital, they could (i) scale up and (ii) predictably drive improvements. Consequently, there has been a huge focus on scaling up, as opposed to a more nuanced view that gives due attention to the many different ways we can make progress. Driven in part by the U.S. AI chip embargo, the DeepSeek team had to innovate on many optimizations to run on less-capable H800 GPUs rather than H100s, leading ultimately to a model trained (omitting research costs) for under $6M of compute.

It remains to be seen if this will actually reduce demand for compute. Sometimes making each unit of a good cheaper can result in more dollars in total going to buy that good. I think the demand for intelligence and compute has practically no ceiling over the long term, so I remain bullish that humanity will use more intelligence even as it gets cheaper.

I saw many different interpretations of DeepSeek's progress here on X, as if it were a Rorschach test that allowed many people to project their own meaning onto it. I think DeepSeek-R1 has geopolitical implications that are yet to be worked out. And it's also great for AI application builders. My team has already been brainstorming ideas that are newly possible only because we have easy access to an open advanced reasoning model. This continues to be a great time to build! [Original text: deeplearning.ai/the-batch/issu… ]
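The "nearly 30x" figure in the post follows directly from the two list prices it quotes ($60 vs. $2.19 per million output tokens); a minimal sketch checking that arithmetic, using only the prices as stated:

```python
# Prices as quoted in the post, in USD per 1M output tokens.
o1_price = 60.00   # OpenAI o1
r1_price = 2.19    # DeepSeek-R1

ratio = o1_price / r1_price
print(f"o1 costs ~{ratio:.1f}x as much as R1 per output token")
# → o1 costs ~27.4x as much as R1 per output token
```

So the exact ratio is about 27.4x, which the post rounds up to "nearly 30x".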
283 replies · 1K reposts · 4.3K likes · 616.5K views
kache @yacineMTB ·
btw the reason there are no software engineers over 40 is because 25 years ago you didn't make a million dollars a year as a software engineer
194 replies · 135 reposts · 5.8K likes · 416.6K views
Alberto Giménez retweeted
Neo Kim @systemdesignone ·
If you want to master coding interviews, learn these templates:
9 replies · 74 reposts · 533 likes · 67.7K views
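The templates themselves were presumably in an attached image that didn't survive the capture, so they can't be reproduced here. Purely as an illustrative sketch of the genre (not taken from the post), the sliding-window pattern is one of the most common such interview templates:

```python
# Sliding-window template, illustrated on a classic problem:
# length of the longest substring without repeating characters.
def longest_unique_substring(s: str) -> int:
    seen = {}   # char -> index of its most recent occurrence
    left = 0    # left edge of the current window
    best = 0
    for right, ch in enumerate(s):
        # Shrink: if ch already appears inside the window,
        # jump the left edge past its previous occurrence.
        if ch in seen and seen[ch] >= left:
            left = seen[ch] + 1
        seen[ch] = right
        best = max(best, right - left + 1)
    return best

print(longest_unique_substring("abcabcbb"))  # → 3 ("abc")
```

The same skeleton (expand right, conditionally shrink left, track the best window) adapts to many substring and subarray problems.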
ThePrimeagen @ThePrimeagen ·
merry christmas
112 replies · 25 reposts · 1.9K likes · 60.4K views
Radio Ñandutí @nanduti ·
Santiago Peña attends the burning of tons of drugs in Canindeyú 👉🏼 The President of the Republic, Santiago Peña, is attending and assisting in the burning of the 57 tons of marijuana seized in Canindeyú after a clash with criminals who allegedly belong to the organization of Felipe Acosta, alias "Macho". #Ñanduti
162 replies · 69 reposts · 231 likes · 40.6K views
DELPY 📱🎬 @delpynews ·
▶️ E'A | 😳 An American backpacker criticized Asunción's deteriorated sidewalks, but was surprised by the "safety". 🧐 Christian Grossi, who has 1.6 million followers on TikTok, said you can walk around calmly with your phone in your hand. 📲 Which parts of the capital would you take him to for a stroll? 🤔 🎥 christian.grossi (TT)
67 replies · 89 reposts · 403 likes · 62K views