
Stephen T. Crye
9.6K posts

Stephen T. Crye
@stevecrye
All that is required for ignorance and superstition to triumph is for men of reason and knowledge to be silent. Polite debate welcomed, rudeness muted.


@UndercoverIndy Les Paul players overwhelmingly LOVE Marshall amplifiers, it seems









The Cinerama Dome is a historic landmark and represents Los Angeles. Are we going to just sit back and let the Formans abandon it so they can save money?
















Miller's take that "Superintelligence" won't cure cancer and death is going to age really badly.

Firstly, it's wrong by definition. "Superintelligence" means an AI that greatly exceeds the cognitive performance of humans in all domains. That is literally the definition of "Superintelligence". So it must greatly exceed humans at curing cancer specifically. Presumably Miller thinks that humans are capable of curing cancer in principle (otherwise, why do we devote human researchers to this task?), therefore by definition any "Superintelligence" must be able to cure cancer.

Secondly, Miller starts bounding the capabilities of "Superintelligence" by comparing it to contemporary LLM-based systems. There are two ways this could go: either LLM-based systems are not capable of curing cancer, in which case they will never achieve "Superintelligence"; or sufficient improvements may yield LLM-based systems that do actually cure cancer, in which case they might make it to "Superintelligence" (or might not, if they are bad at some other task). I think people like Geoffrey Miller should just stop talking about "Superintelligence" if they are going to abuse the term like this.

But set aside the definitional games: maybe AI systems that we can actually build will be bad at biomedical science? This is certainly the case today. Modern LLM-based systems are good at coding and at commonsense and generic research tasks, but not that good at anything else. LLMs work well when they get fast feedback. But so do humans. Anyone sufficiently intelligent can get good at math and coding; getting good at biology requires a lot of equipment. We haven't really connected modern AIs to automated labs yet. When we do, I expect significant progress, just as we saw progress when we connected AI to the internet. In a way, LLMs are just the result of connecting the preexisting AI stuff to large-scale data. We already had neural language models in 2015.
I used to work on language models, just before LLMs took off. Small language models are not impressive or all that useful. So I have seen a full cycle of this play out over a decade. x.com/gmiller/status…



This from Paul Ehrlich will make you think "If I'm always wrong so is science, since my work is always peer-reviewed, including the POPULATION BOMB and I've gotten virtually every scientific honor." Link in reply

















