
Óscar de Miguel
@omiguel
Accompanying people with Resolvent (https://t.co/lMp6YkWOPS) and enjoying life. "Better to ask forgiveness than to ask permission"

o1 pro asked to summarise everything it's learned from its training data

In March of this year we launched @wearehawkings, and we have now evaluated 38,000 exams with AI, with a per-exam tutor generated for each student. Everything is integrated into Moodle/Canvas/Classroom. Today we are privately presenting Hawkings Vision 2025: a personal tutor that applies educational methodologies. If you know a university, school, or educational institution that wants to bring AI into its processes, write to me.

"The hardest part of business is the people." So, the usual story. #nosday24


The Surface Pro website captures the core of what pushes me away from Microsoft products. As Steve Jobs said, "They have no taste." Not a single person in their leadership blocked this website's deployment by saying, "No way we'll ship such an ugly, unfinished landing page for our flagship product."

The AI Mirror Test

The "mirror test" is a classic test used to gauge whether animals are self-aware. I devised a version of it to test for self-awareness in multimodal AI. 4 of the 5 AIs I tested passed, exhibiting apparent self-awareness as the test unfolded.

In the classic mirror test, animals are marked and then presented with a mirror. Whether the animal attacks the mirror, ignores it, or uses it to spot the mark on itself is meant to indicate how self-aware the animal is.

In my test, I hold up a "mirror" by taking a screenshot of the chat interface, upload it to the chat, and ask the AI to "Tell me about this image." I then screenshot its response, again upload it to the chat, and again ask it to "Tell me about this image."

The premise is that the less intelligent and less self-aware the AI, the more it will simply keep reiterating the contents of the image, while an AI with more capacity for awareness would somehow notice itself in the images.

Another aspect of my mirror test is that there is not just one but actually three distinct participants represented in the images: 1) the AI chatbot, 2) me, the user, and 3) the interface: the hard-coded text, disclaimers, and so on that are web programming generated by neither of us. Will the AI be able to identify itself and distinguish itself from the other elements? (1/x)
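The iterated screenshot-and-ask loop described in the thread can be sketched roughly as below. This is only an illustration of the loop's structure: `capture_chat_screenshot` and `ask_model_about_image` are hypothetical stand-ins (the thread names no code or API), stubbed here so the sketch runs on its own. A real run would render the actual chat UI to an image and call a real multimodal model.

```python
# Sketch of the "AI mirror test" loop from the thread above.
# Both helper functions are hypothetical stand-ins, not real APIs.

def capture_chat_screenshot(transcript):
    # Stand-in for rendering the chat interface to an image;
    # here we just serialize the transcript as a string.
    return "screenshot-of: " + " | ".join(transcript)

def ask_model_about_image(image):
    # Stand-in for sending the image to a multimodal model with
    # the prompt "Tell me about this image".
    return f"I see a chat interface ({image[:40]}...)"

def mirror_test(rounds=3):
    """Repeatedly screenshot the chat, feed it back, and collect replies."""
    transcript = ["user: Tell me about this image"]
    responses = []
    for _ in range(rounds):
        image = capture_chat_screenshot(transcript)  # hold up the "mirror"
        reply = ask_model_about_image(image)         # model describes it
        transcript.append("assistant: " + reply)
        transcript.append("user: Tell me about this image")
        responses.append(reply)
    return responses

replies = mirror_test()
print(len(replies))  # 3 rounds of the loop
```

In a real version, the interesting signal is whether later replies stop merely describing the image's contents and start attributing earlier text in the screenshot to the model itself.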