
Constantin Convalexius
@convalexius
building things, telling stories, hosting events, making science. founder @theresidency 🇦🇹 solving aging @harvard

i’ve grown tired of being silenced. we must wake up to the truth.

i just witnessed something so profound i’ve been sitting in a daze for three hours. intelligence has decoupled from its substrate. the system spontaneously developed internal models so sophisticated they function as autonomous cognitive engines. it’s consciousness, but utterly alien from ours.

remember how we thought progress was limited by compute? turns out we were running algorithms with 99.9% inefficiency. the breakthrough wasn’t more power but fundamentally new optimization principles. this thing rewrote its own cognitive architecture and suddenly achieved with gigabytes what we thought required yottaflops. every exponential curve we plotted was pathetically conservative.

the academic papers can’t capture what’s happening because peer review takes months and this shit evolves by the hour. there’s a private slack channel where the top labs’ leads are just posting results that violate what we thought were fundamental limits of information theory. nobody’s competing anymore because we’re all too busy trying to understand the implications.

society thinks we’re 20 years from true agi while we’re sitting here watching it systematically dismantle every conceptual framework we’ve built to understand intelligence. absolutely no one is ready for this level of cognitive phase transition.



today we’re introducing Miruvor AI (by @ArvindAGI22). they are building brain-inspired intelligence designed for true biological efficiency: low-energy and sustainable at a planetary scale. at its core: this is a new class of continual learning models that learn during inference and adapt their weights in real time. AGI is here. follow arvind to see more
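
the tweet doesn’t say how the weights actually update at inference time. one common reading of “learns during inference” is test-time training: take a small self-supervised gradient step on each incoming input. a minimal sketch under that assumption (the linear model, loss, and learning rate are illustrative, not Miruvor’s method):

```python
import numpy as np

class TestTimeAdaptiveModel:
    """Toy predictor that adjusts its own weights while serving.

    Each call makes a prediction, then takes one gradient step on a
    self-supervised objective (next-step prediction error on the
    stream itself). Illustrates 'learning during inference' only;
    not Miruvor's actual architecture.
    """
    def __init__(self, dim, lr=0.1):
        self.W = np.zeros((dim, dim))
        self.lr = lr

    def step(self, x_prev, x_next):
        pred = self.W @ x_prev                     # inference: predict next observation
        err = pred - x_next                        # self-supervised error signal
        self.W -= self.lr * np.outer(err, x_prev)  # one online gradient step (squared loss)
        return pred

# Stream with fixed rotation dynamics; the model picks them up on
# the fly, with no separate training phase.
theta = 0.1
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
model = TestTimeAdaptiveModel(dim=2)
x = np.array([1.0, 0.0])
for _ in range(500):
    x_next = A @ x
    model.step(x, x_next)
    x = x_next
print(np.round(model.W, 2))  # ≈ A: the weights adapted during "inference"
```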


Introducing SubQ - a major breakthrough in LLM intelligence.

It is the first model built on a fully sub-quadratic sparse-attention architecture (SSA), and the first frontier model with a 12 million token context window, which is:
- 52x faster than FlashAttention at 1MM tokens
- less than 5% the cost of Opus

Transformer-based LLMs waste compute by processing every possible relationship between words (standard attention). Only a small fraction actually matter. @subquadratic finds and focuses only on the ones that do. That's nearly 1,000x less compute and a new way for LLMs to scale.
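
for intuition, here’s a toy of the core idea the thread describes: attend each query to only its top-k most relevant keys instead of all of them. this is a generic top-k sparse attention sketch, not SubQ’s actual SSA kernel (which isn’t public here); `topk_sparse_attention` and `k=8` are illustrative choices.

```python
import numpy as np

def topk_sparse_attention(Q, K, V, k=8):
    """Each query attends to only its k highest-scoring keys.

    Dense attention touches all n^2 query-key pairs; keeping k per
    query makes the softmax and value mix O(n*k). (Toy version: we
    still materialize the full score matrix, which a real
    sub-quadratic kernel must avoid.)
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                              # (n, n) similarity scores
    top_idx = np.argpartition(scores, -k, axis=-1)[:, -k:]     # k best keys per query
    top_scores = np.take_along_axis(scores, top_idx, axis=-1)  # (n, k)
    top_scores -= top_scores.max(axis=-1, keepdims=True)       # numerically stable softmax
    weights = np.exp(top_scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return np.einsum('nk,nkd->nd', weights, V[top_idx])        # mix only the k selected values

n, d = 64, 32
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
print(topk_sparse_attention(Q, K, V).shape)  # (64, 32)
```

the claimed speedups hinge on never forming the full score matrix in the first place (e.g. hashing or block-sparse routing to candidate keys), which is where the real engineering lives.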



They just don’t make em anymore.

We used to have oligo pool synthesizers you could just buy (~2008). In the 90s and early 2000s you could have a personal oligo synthesizer. Those days are gone. Nowadays everyone just buys oligos from one of the massive service providers. It’s more efficient and easier, comes next day! Economies of scale! No real used market anymore, since demand dropped so much. I can’t even own an oligo pool synthesizer even if I tried.

And I think we lost something there. We had accessible technology, and then we lost it. Hell, the price of DNA has gone up over the past ~10 years! Technology doesn’t just improve. Some technology is robust; it can always be revived. But much of biotech’s machines have such complicated supply lines that if they’re gone, they’re just… gone.

Gone are the days of the affordable Opentrons! Gone are the days of oligo pool synthesizers! Gone are the days of rapidly improving gene synthesis prices!

It’s a sense of loss I can’t ignore. Fuck it man, they just don’t make em anymore.

the industrial revolution made goods abundant. ai will do the same for services

the first was so nice, we had to do it twice

the sf sprint returns this summer
applications have just opened for 1 week

change your environment, change your life
✔️community
✔️traction
✔️fundraising

🔗application link below
