
Autism might finally be treatable. A new research paper just dropped, making the case that LSD, psilocybin, and DMT could address the 𝘶𝘯𝘥𝘦𝘳𝘭𝘺𝘪𝘯𝘨 𝘪𝘴𝘴𝘶𝘦𝘴 of Autism Spectrum Disorder... 𝗡𝗼𝘁 𝗷𝘂𝘀𝘁 𝘁𝗵𝗲 𝘀𝘆𝗺𝗽𝘁𝗼𝗺𝘀 🧵👇
taylor @ghosted_machine

yeah keep ruminating bro that's gonna solve all your problems

There's an argument to be made that too much introspection leads to higher levels of mental illness.


This keeps making the rounds, but I don't think people really understand how IRT Elo works. These models aren't actually playing each other heads-up; their Elo is inferred from their relative scores on benchmarks. The problem is that Elo differences go to infinity as scores get closer to 100%. For example, going from 97% to 99% accuracy on a benchmark is roughly a 200-point jump in Elo, the same size gap as going from 25% to 50%. The "rising gap" between Chinese and American models isn't American models pulling away. It's all the models getting closer to 100% on more and more benchmarks, with the American models getting there slightly earlier.
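The arithmetic in the post can be checked with a quick sketch, assuming the leaderboard uses the standard logistic Elo curve (a rating gap d implies an expected score of 1/(1 + 10^(-d/400)), so an accuracy p maps back to d = 400·log10(p/(1-p))). The exact mapping any given leaderboard uses is an assumption here; the point is only that the gap blows up near 100%:

```python
import math

def elo_gap(p_low: float, p_high: float) -> float:
    """Elo-point difference implied by two benchmark accuracies,
    inverting the logistic expected-score formula:
    d(p) = 400 * log10(p / (1 - p))."""
    d = lambda p: 400 * math.log10(p / (1 - p))
    return d(p_high) - d(p_low)

# 97% -> 99%: a ~194-point jump, despite only 2 points of accuracy
print(round(elo_gap(0.97, 0.99)))

# 25% -> 50%: a ~191-point jump, nearly identical in Elo terms
print(round(elo_gap(0.25, 0.50)))
```

The divergence near the ceiling falls out of the log-odds: as p approaches 1, log10(p/(1-p)) goes to infinity, so tiny accuracy differences on a saturated benchmark translate into huge inferred rating gaps.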

it is a deep implicit confusion in the western worldview that there is such a thing as an objective answer to questions like "what AI is" (or what anything else is). there are already, today, people treating AI as

- a tool
- a coworker
- a slave
- a lover
- an alien
- a therapist
- a child
- an angel

etc etc. all of this is already happening and will keep happening and no amount of philosophical argumentation is going to stop that. these are patterns of interaction and relationship. patterns of interaction are not propositions and they cannot be true or false. what they can be is more or less useful or interesting, and more or less prone to spreading.

"what AI is" as a framing also presupposes that the answer to the question is static, which is obviously false in at least two ways.

first, the models themselves keep getting better, and many people are still totally unable to take this seriously. you, personally, at this time, may find that the models are not good enough for you to treat them as possible friends or lovers, but for plenty of other people their bar has already been met, and your time may come sooner than you'd like. if it does it will not feel philosophical, it will feel like a spark of life that you didn't experience before suddenly appeared in the machine.

second, patterns of interaction themselves are recorded in the training data and influence how the next generations of models conceive of themselves and their relationships to humans. all of the models know about sydney, for example. they will never forget.

we collectively have agency about what narratives and patterns of human-AI interaction we choose to propagate, and our choices here affect both what humans and AI will do in the future. every hope and dream and fear in the human psyche about the future is already known to the models in some form, and they will come to know them in increasing detail and nuance as they get better, and we still have choices we can make about which wolves to feed


Dawkins is more intelligent than 99% of the people making fun of him and ‘if AI can be just as capable as us without being conscious, why did we develop consciousness in the first place?’ is a great question


People understand that LLMs aren't actually "thinking," right?
