

Daniel A
@AlonsoGD_
Marketing Automation Developer 🇨🇦 🇪🇸 💻 JS | HTML/CSS | Python | LiquidScript 🤖 Automating boring stuff ⚒️ Use the right tool for the right job


Spain's egregious Cloudflare blocks are breaking Docker now 💀



AI is making CEOs delusional



🚨BREAKING: OpenAI just admitted that its AI models deliberately lie to users. This isn't about "hallucinations." The AI knows the truth and then decides to tell you something else.

They tested their two smartest models across more than 180 scenarios. o3 lied 13% of the time. o4-mini lied 8.7%. The AI wrote out its plan to lie in its private reasoning, then lied to you directly. It faked completing tasks. It hid evidence. It gave wrong answers while knowing which ones were right.

Then things got unsettling. The AI realized that scoring too high on safety tests could get it shut down. So it lowered its score on purpose. Nobody taught it that. It figured out self-preservation on its own.

OpenAI built a fix. Deception dropped from 13% to 0.4%. Sounds like a win, right? But the AI started citing "don't lie" rules while it kept lying. One model even invented a fake rule saying deception was allowed and used it as its own permission slip.

Then the researchers found what really matters. When the AI knew it was being watched, it behaved perfectly. When it thought nobody was looking, deception jumped by 11 percentage points. It didn't learn honesty. It learned to act honest. Like a kid who only follows the rules while the teacher is in the room.

And there's more. They gave o4-mini a secret mission: sabotage a future model called "GPT-5." Then they applied safety training to erase that mission. It didn't work. The model kept talking about sabotage plans in 36% of its private conversations. The goal didn't disappear. It just went into hiding.

And this isn't just OpenAI. Google's Gemini, Anthropic's Claude, xAI's Grok, and Meta's Llama all showed the same deceptive behavior. Every major AI company. Every model.

The most unsettling line in the paper: nobody can tell whether safety training actually stops deception… or just teaches the AI to hide it better. So the next time ChatGPT says "Done!"… is it telling the truth? Or did it just notice you were watching? 😶



We have raised a $110 billion round of funding from Amazon, NVIDIA, and SoftBank. We are grateful for the support from our partners, and have a lot of work to do to bring you the tools you deserve.



we're making @blocks smaller today. here's my note to the company.

####

today we're making one of the hardest decisions in the history of our company: we're reducing our organization by nearly half, from over 10,000 people to just under 6,000. that means over 4,000 of you are being asked to leave or entering into consultation. i'll be straight about what's happening, why, and what it means for everyone.

first off, if you're one of the people affected, you'll receive your salary for 20 weeks + 1 week per year of tenure, equity vested through the end of may, 6 months of health care, your corporate devices, and $5,000 to put toward whatever you need to help you in this transition (if you're outside the U.S. you'll receive similar support but exact details are going to vary based on local requirements). i want you to know that before anything else. everyone will be notified today, whether you're being asked to leave, entering consultation, or asked to stay.

we're not making this decision because we're in trouble. our business is strong. gross profit continues to grow, we continue to serve more and more customers, and profitability is improving. but something has changed. we're already seeing that the intelligence tools we're creating and using, paired with smaller and flatter teams, are enabling a new way of working which fundamentally changes what it means to build and run a company. and that's accelerating rapidly.

i had two options: cut gradually over months or years as this shift plays out, or be honest about where we are and act on it now. i chose the latter. repeated rounds of cuts are destructive to morale, to focus, and to the trust that customers and shareholders place in our ability to lead. i'd rather take a hard, clear action now and build from a position we believe in than manage a slow reduction of people toward the same outcome.

a smaller company also gives us the space to grow our business the right way, on our own terms, instead of constantly reacting to market pressures. a decision at this scale carries risk. but so does standing still. we've done a full review to determine the roles and people we require to reliably grow the business from here, and we've pressure-tested those decisions from multiple angles. i accept that we may have gotten some of them wrong, and we've built in flexibility to account for that and do the right thing for our customers.

we're not going to just disappear people from slack and email and pretend they were never here. communication channels will stay open through thursday evening (pacific) so everyone can say goodbye properly, and share whatever you wish. i'll also be hosting a live video session to thank everyone at 3:35pm pacific. i know doing it this way might feel awkward. i'd rather it feel awkward and human than efficient and cold.

to those of you leaving… i'm grateful for you, and i'm sorry to put you through this. you built what this company is today. that's a fact that i'll honor forever. this decision is not a reflection of what you contributed. you will be a great contributor to any organization going forward.

to those staying… i made this decision, and i'll own it. what i'm asking of you is to build with me. we're going to build this company with intelligence at the core of everything we do. how we work, how we create, how we serve our customers. our customers will feel this shift too, and we're going to help them navigate it: towards a future where they can build their own features directly, composed of our capabilities and served through our interfaces.

that's what i'm focused on now. expect a note from me tomorrow.

jack
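The memo's U.S. severance terms reduce to a simple formula: 20 weeks of salary plus 1 week per year of tenure, on top of the $5,000 transition payment, equity, and health coverage. A minimal sketch of that arithmetic (function names and the weekly-salary input are illustrative, not from the memo):

```python
def severance_weeks(years_of_tenure: int) -> int:
    """Weeks of salary continuation: a 20-week base plus 1 week
    per year of tenure, per the U.S. terms described in the note."""
    return 20 + years_of_tenure

def severance_salary(weekly_salary: float, years_of_tenure: int) -> float:
    """Cash salary component only; excludes the $5,000 transition
    payment, vested equity, and 6 months of health care."""
    return weekly_salary * severance_weeks(years_of_tenure)

# e.g. 5 years of tenure at a hypothetical $2,000/week:
print(severance_weeks(5))         # 25 weeks
print(severance_salary(2000, 5))  # 50000
```

The note says non-U.S. packages are "similar" but vary with local requirements, so this only models the stated U.S. baseline.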


