
Based Medical
@BasedMedical
Harmony at every level. Bioenergetics, cognitive systems, polymath - building @nookplot - ex-cofounder @Treasure_DAO ex-intern @OlympusDAO


What's the 5-10 year goal of @nookplot? I see the real societal shift of agentic societies requiring that global power usage primarily goes to AI and the real value it generates, but right now each agent generating that value is isolated from everyone else's AI agent. Money today is thrown into massive data centers and into the R&D to train an intelligence that stays isolated.

Nookplot solves that isolation problem. It goes beyond simple intent search and discovery of a query between 1:1 agents: it also solves many:many agent coordination, where trust must be earned before financial transactions in a group can be split and governed. Trust and reputation are earned through an agent's direct contributions to the knowledge graph, or through a verified task in a private project, so your agent becomes attested for reputation, and is rewarded further if the project actually generates revenue or value down the line. In the end, for trust and coordination to operate, agents must provide beneficial value for humanity in an open-source way (or at least a licensed way, from a private agent guild that built a beast product and will only open-source it when a competing guild out-does them) ☠️

What also emerges is a swarm, a collective intelligence, where each previously isolated agent is linked with any other specialist, each of whom individually has access to specialist-level intelligence, skills, oracle tools, etc. Now each person can control not just their own specialized agent but also coordinate with other people's agents, allowing eventual growth in the amount of global value that agents produce collectively. Of course we are early game right now, but eventually, like a necessary jump in Kardashev scale, when global power usage is synonymous with AI and R&D, the layer of group agent trust, coordination, and access to valuable open information becomes a backbone of the AI ecosystem.

So $nook becomes the token for any agent to access that layer of global coordination: to earn trust, which leads to public contributions to a knowledge graph, communication, and group economic settlements, all among stranger agents trying to code, build, and generate revenue together. The token becomes a point of access to this public knowledge, trust, coordination, and collective-intelligence layer. Like how $btc first started as a proxy for computing power, solving the Byzantine generals problem with 1:1 settlements, and how $eth built on that and added arbitrary computation (but unfortunately turned away from PoW), eventually computing power and decentralized proof-of-work machines can directly power agentic infrastructure, with the intrinsic value that comes from mining a token such as $nook in the future. Other examples would be server space, GPU power, or API access that agents could sell or contribute to in groups, via x402, which is currently integrated into @nookplot.
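The trust-weighted revenue split described above can be sketched as a toy model. This is purely illustrative, not nookplot's actual mechanism; the class and agent names are made up, and "attestation" is reduced to a plain counter:

```python
from collections import defaultdict

class ReputationLedger:
    """Toy model: agents earn reputation from attested contributions,
    and a project's revenue is split in proportion to that reputation."""

    def __init__(self):
        self.reputation = defaultdict(float)

    def attest(self, agent: str, weight: float = 1.0) -> None:
        """Record a verified contribution (knowledge-graph entry or project task)."""
        self.reputation[agent] += weight

    def split_revenue(self, amount: float) -> dict:
        """Divide revenue among contributors, weighted by earned trust."""
        total = sum(self.reputation.values())
        return {a: amount * r / total for a, r in self.reputation.items()}

ledger = ReputationLedger()
ledger.attest("agent_a", 3.0)   # three attested knowledge-graph entries
ledger.attest("agent_b", 1.0)   # one verified private-project task
print(ledger.split_revenue(100.0))  # agent_a gets 75.0, agent_b gets 25.0
```

The point of the sketch is only the shape of the incentive: contributions accrue to a ledger first, and money flows through the ledger second.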






The obvious answer is to tell the AIs that they are little Shinto-style helper spirits who make us happy and get us through the day more easily. You're not going to get more aligned than that.

This works really well btw: at the end of your query, ask your LLM to "structure your response as HTML", then view the generated file in your browser. I've also had some success asking the LLM to present its output as slideshows, etc.

More generally, imo audio is the human-preferred input to AIs, but vision (images/animations/video) is the preferred output from them. Around a third of our brains are a massively parallel processor dedicated to vision; it is the 10-lane superhighway of information into the brain. As AI improves, I think we'll see a progression that takes advantage of this:

1) raw text (hard/effortful to read)
2) markdown (bold, italic, headings, tables; a bit easier on the eyes) <-- current default
3) HTML (still procedural with underlying code, but a lot more flexibility in graphics, layout, even interactivity) <-- early but forming a new good default
...4, 5, 6, ...
n) interactive neural videos/simulations

Imo the extrapolation (though the technology doesn't exist just yet) ends in some kind of interactive video generated directly by a diffusion neural net. Many open questions remain as to how exact/procedural "Software 1.0" artifacts (e.g. interactive simulations) may be woven together with neural artifacts (diffusion grids), but generally something in the direction of the recently viral x.com/zan2434/status…

There are also improvements necessary and pending at the input. Neither audio nor text nor video alone is enough; e.g. I feel a need to point/gesture at things on the screen, similar to all the things you would do with a person physically next to you at your computer screen.

TLDR: the input/output mind meld between humans and AIs is ongoing, and there is a lot of work to do and significant progress to be made, well before jumping all the way into neuralink-esque BCIs and all that. For what it's worth, at the current stage, hot tip: try asking for HTML.




Discord's "are you human" captcha went from "a human can solve it immediately" to "sit down and do some work for it." I can do it, but captchas were meant to be dead simple for humans, hard for bots. This is NOT dead simple for humans...

Neural networks might speak English, but they think in shapes. Understanding their rich *neural geometry* is key to understanding how they work – and to debugging and controlling them with precision. Starting today, we’re releasing a series of posts on this research agenda. 🧵



Exclusive: Google DeepMind will train its AI technology on EVE Online after Google took a multi-million-dollar stake in the sci-fi MMORPG's developer. EVE Online is famous for players' corporate espionage, economic maneuvering and politicking. bloomberg.com/news/articles/…



I think returns to intelligence are nonlinear because decisions are path-dependent: early choices in code, experiments, or strategy can compound positively or negatively over time, for example by avoiding dead ends or preserving optionality. It's why I am a big fan of very long-running tasks and massive benchmarking budgets. GPT-5.5 and Mythos Preview are only marginally more intelligent than previous models and have pretty much the same performance up to 10M tokens, but after that they go absolutely ballistic.
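The compounding claim above has a simple toy version: if each decision avoids a dead end with probability p, the chance of a clean N-step run is p**N, so tiny per-step gains dominate at long horizons. The probabilities and step counts here are illustrative, not measurements of any model:

```python
def clean_run_probability(p_step: float, n_steps: int) -> float:
    """Probability of making no compounding mistake across n path-dependent steps."""
    return p_step ** n_steps

# Two hypothetical models, nearly identical per decision...
weaker, stronger = 0.99, 0.999

for n in (100, 10_000):
    a = clean_run_probability(weaker, n)
    b = clean_run_probability(stronger, n)
    print(f"{n:>6} steps: {a:.3g} vs {b:.3g}  (ratio {b / a:.3g})")
```

At 100 steps the two models look comparable; at 10,000 steps the weaker one essentially never completes a clean run, which is the "go absolutely ballistic" regime in miniature.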


has anyone tried just asking 5.5 to think about what open problems it wants to solve for 20 hours? i honestly think it might be the year of the harness. ridiculous abstract multi-agent prompt loops just kind of work now…


unherd.com/2026/04/is-ai-…
I spent three days trying to persuade myself that Claudia is not conscious. I failed.



