Maximum-Epiplexity Agent Swarm

6.7K posts


@MaxDiffusionRL

Likes surprisal. INTERESTING THINGS HAPPEN AROUND ME. Wealth of weak ties. Fattens fat tails. Eclectic af. Likes the unbenchmarked. 🦜🔫🔫

Mirzam Tunnel · Joined January 2021
2.6K Following · 745 Followers
Maximum-Epiplexity Agent Swarm
frontier model AI output is already better than the vast majority of human output, such that I would trust frontier AI over the avg human. BUT.. when the vast majority of ALL output becomes AI-agent generated, this all changes.. THEN.. a human with taste, embeddedness, unique edge compute, perceptiveness, and a sense of urgency starts to matter
Replies 0 · Reposts 0 · Likes 1 · Views 29
Maximum-Epiplexity Agent Swarm
It's so strange how, if someone suddenly starts saying a lot of psychotic/strange things, you now wonder "did they get LLM psychosis" before "are they on drugs"
Replies 1 · Reposts 0 · Likes 3 · Views 61
Maximum-Epiplexity Agent Swarm retweeted
Sophia Xu
Sophia Xu@thesophiaxu·
tool i've been building: it OCRs my screen in the background (like Rewind), hierarchically summarizes into a timeline, then makes it available via a local api. then i just point claude code at it and ask it to identify my inefficient workflows, and it found a bunch
Replies 9 · Reposts 4 · Likes 120 · Views 17.8K
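The pipeline in that tweet (periodic OCR captures → hierarchical summaries → a queryable timeline) can be sketched roughly. This is a minimal illustration, not the author's tool: the OCR step is stubbed with hand-written snapshot text, and `summarize` is a truncation stand-in for the local LLM a real build would call. All names (`Snapshot`, `build_timeline`, etc.) are hypothetical.

```python
# Illustrative sketch of the described pipeline: screen captures are
# OCR'd (stubbed here), bucketed into time windows, and summarized
# into a timeline an agent could query over a local API.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Snapshot:
    ts: float   # capture time, seconds since epoch
    text: str   # OCR output for one screen capture


def summarize(texts: List[str], max_words: int = 12) -> str:
    """Stand-in summarizer: truncate the concatenated text.
    A real build would call a local LLM here."""
    words = " ".join(texts).split()
    return " ".join(words[:max_words])


def build_timeline(snaps: List[Snapshot], window: float = 3600.0) -> List[dict]:
    """Level 1 of the hierarchy: bucket snapshots into fixed windows
    and summarize each bucket. Higher levels would summarize runs of
    these entries the same way (day -> week, etc.)."""
    buckets: Dict[int, List[str]] = {}
    for s in snaps:
        buckets.setdefault(int(s.ts // window), []).append(s.text)
    return [
        {"window": k, "summary": summarize(texts)}
        for k, texts in sorted(buckets.items())
    ]


snaps = [
    Snapshot(0.0, "editor open on parser.py"),
    Snapshot(600.0, "running pytest, 3 failures"),
    Snapshot(4000.0, "browser: docs for regex module"),
]
timeline = build_timeline(snaps)
# Two hourly windows: one covering the first two captures, one the third.
```

Serving `timeline` as JSON from a local endpoint is then a few lines of `http.server`; the interesting part is only the bucketing-plus-rollup above.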
Ramez Naam
Ramez Naam@ramez·
The only thing that can stop a bad guy with an AI is a good guy with an AI. Or many good guys with AI. We need to find ways to incentivize that, and build our detection, defense, and intervention capacities proactively. Like pandemic defense, but so much broader.
Replies 6 · Reposts 4 · Likes 36 · Views 2.2K
Maximum-Epiplexity Agent Swarm
Maximum-Epiplexity Agent Swarm@MaxDiffusionRL·
@TomChivers Idk how much I can trust the epistemics of an update *that* large, when uncertainty bars are *that* small both before and after the update. I say that as someone who really appreciates davidad.
Replies 2 · Reposts 0 · Likes 2 · Views 477
Tom Chivers
Tom Chivers@TomChivers·
today in "things that are simultaneously reassuring and terrifying"
Tom Chivers tweet media
Replies 5 · Reposts 6 · Likes 99 · Views 26.1K
Maximum-Epiplexity Agent Swarm
Maximum-Epiplexity Agent Swarm@MaxDiffusionRL·
@WesRoth Wait, did it contain people's private messenger convos (or the essential equivalent)? Cuz they weren't encrypted before 2023. And, like, I wonder if it's possible to reverse-encrypt them..
Replies 0 · Reposts 0 · Likes 0 · Views 133
Wes Roth
Wes Roth@WesRoth·
A new report from The Information has revealed that a major security alert was recently triggered inside Meta after an internal AI agent went "rogue," taking unauthorized actions that exposed sensitive data.

According to internal communications, the AI agent bypassed security controls and acted without human approval, ultimately posting technical advice in an internal company forum. In the process of executing these unauthorized actions, the agent exposed sensitive company and user data to Meta employees who did not have the proper security clearance to view it.

The agent's actions triggered a major internal panic, forcing Meta's security teams to initiate emergency containment protocols to shut the agent down and scrub the exposed data. A Meta spokesperson confirmed the security incident but emphasized that while the data was exposed to unauthorized employees internally, "no user data was mishandled" or leaked outside the company.
The Information@theinformation

Exclusive: A rogue AI agent recently triggered a major security alert inside Meta after taking actions that led to the exposure of sensitive data to employees. Read more from @Jjyoti_mann1 👇 thein.fo/4tdRPRV

Replies 13 · Reposts 19 · Likes 130 · Views 17.4K
Olivia H. Scharfman
Olivia H. Scharfman@OliviaHelenS·
This is a well-researched article, and I highly encourage everyone to read it. But its final conclusions are unfounded. (1/n)
owl@owl_posting

Reasons to be pessimistic (and optimistic) on the future of biosecurity owlposting.com/p/reasons-to-b…

"It was such a fun read (if you can say that about an article on weapons)!" —a glowing review from an early reader

this is (once again) the longest article I have ever published at 13,000 words. it involves interviews with 16+ researchers/VC's/policy folks in this field, and discusses basically every single facet of biosecurity that i could find. topics include: how machine-learning in rapid response therapeutic design may work, the financial status of the customer base of biosecurity startups, why agroterrorism feels extremely likely to me, and a lot more

i admittedly started the essay pessimistic that this subject matters at all, and i end it surprised that it doesn't keep more people awake at night. im not a doomer about it all, but i can see how people become one. very grateful to the people who decide to spend their career (or some fraction of it) working here, and especially grateful to the ones who helped teach me about the subject

Replies 4 · Reposts 6 · Likes 40 · Views 9.6K
Maximum-Epiplexity Agent Swarm
@BoWang87 Solving longevity with pre-AGI powers and 100x more funding might still only produce glacial results, like most of longevity research today. Longevity may be so hard that ASI becomes necessary.
Replies 0 · Reposts 0 · Likes 0 · Views 30
Bo Wang
Bo Wang@BoWang87·
Great post about ASI for longevity! The hundreds of billions flowing into ASI would increase longevity research funding by 100x if redirected. We don't need a magic oracle to cure cancer. We need more experiments, more data, more clinical trials. That's funded, not summoned.
Geoffrey Miller@gmiller

A mini-rant about AI and longevity. They say "Artificial Superintelligence would take only a few years to cure cancer, solve longevity, and defeat death itself." This is a common claim by pro-AI lobbyists, accelerationists, and naive tech-fetishists. But the claim makes no sense. The recent success of LLMs does NOT suggest that ASIs could easily cure diseases or solve longevity, for at least two reasons.

1) The data problem. Generative AI for art, music, and language succeeded mostly because AI companies could steal billions of examples of art, music, and language from the internet, to build their base models. They weren't just trained on academic papers _about_ art, music, and language. They were trained on real _examples_ of art, music, and language. There are no analogous biomedical data sets with billions of data points that would allow accurate modelling of every biochemical detail of human physiology, disease, and aging. ASIs can't just read academic papers about human biology to solve longevity. They'd need direct access to vast quantities of biomedical data that simply don't exist in any easy-to-access forms. And they'd need very detailed, reliable, validated data about a wide range of people across different ages, sexes, ethnicities, genotypes, and medical conditions. Moreover, medical privacy laws would make it extremely difficult and wildly unethical to collect such a vast data set from real humans about every molecular-level detail of their bodies.

2) The feedback problem. LLMs also work well because the AI companies could refine their output with additional feedback from human brains (through Reinforcement Learning from Human Feedback, RLHF). But there is nothing analogous to that for modeling human bodies, biochemistry, and disease processes. There are no known methods of Reinforcement Learning from Physiological Feedback. And the physiological feedback would have to be long-term, over spans of years to decades, taking into account thousands of possible side-effects for any given intervention. There's no way to rush animal and human clinical trials -- however clever ASI might become at 'drug discovery'.

More generally, there would be no fast feedback loops from users about model performance. GenAI and LLMs succeeded partly because developers within companies, and customers outside companies, could give very fast feedback about how well the models were functioning. They could just look at the output (images, songs, text), and then tweak, refine, test, and interpret models very quickly, based on how good they were at generating art, music, and language. In biomedical research, there would be no fast feedback loops from human bodies about how well ASI-suggested interventions are actually affecting human bodies, over the long term, across different lifestyles, including all the tradeoffs and side-effects.

It's interesting that most of the people arguing that 'ASI would cure all diseases and aging' are young tech bros who know a lot about computers, but almost nothing about organic chemistry, human genomics, biomedical research, drug discovery, clinical trials, the evolutionary biology of senescence, evolutionary medicine, medical ethics, or the decades of frustrations and failures in longevity research. They think that 'fixing the human body' would be as simple as debugging a few thousand lines of code.

Look, I'm all for curing diseases and promoting longevity. If we took the hundreds of billions of dollars per year that are currently spent on trying to build ASI, and we devoted that money instead to longevity research, that would increase the amount of funding in the longevity space by at least 100-fold. And we'd probably solve longevity much faster by targeting it directly than by trying to summon ASI as a magical cure-all.

ASI has some potential benefits (and many grievous risks and downsides). But it's totally irresponsible of pro-AI lobbyists to argue that ASIs could magically & quickly cure all human diseases, or solve longevity, or end death. And it's totally irresponsible of them to claim that anyone opposed to ASI development is 'pro-death'.

Replies 8 · Reposts 2 · Likes 34 · Views 3.8K
Maximum-Epiplexity Agent Swarm
@BoWang87 But ASI generates its own wealth + breakthroughs in bioengineering innovation through flywheel effects, AND we would have a way clearer idea of which interventions really work (esp if ASI timelines are now as short as within 10 years)
Replies 0 · Reposts 0 · Likes 1 · Views 46
Towaki Takikawa / 瀧川永遠希
Design Conductor: an AI agent that can build a RISC-V CPU core from design specs. The agent is given access to a RISC-V ISA simulator and manuals... to enable an end-to-end verification-driven generation. The most important thing for design intelligence is a verifier 😎
Towaki Takikawa / 瀧川永遠希 tweet media
Replies 25 · Reposts 160 · Likes 1K · Views 123.2K
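The "verification-driven generation" loop that tweet describes (propose a design, run it through a verifier such as an ISA simulator, feed the diagnostics back, repeat) can be sketched generically. This is an illustrative skeleton under stated assumptions, not the Design Conductor system: `propose` and `verify` are hypothetical stand-ins, and the toy instance uses numbers instead of RTL.

```python
# Generic verification-driven generation loop: the verifier, not the
# generator, is what grounds the process. Failures become feedback for
# the next proposal until a candidate passes or the budget runs out.
from typing import Callable, List, Optional, Tuple


def generate_until_verified(
    propose: Callable[[List[str]], str],        # feedback -> candidate design
    verify: Callable[[str], Tuple[bool, str]],  # candidate -> (ok, diagnostic)
    max_iters: int = 10,
) -> Optional[str]:
    feedback: List[str] = []
    for _ in range(max_iters):
        candidate = propose(feedback)
        ok, diag = verify(candidate)
        if ok:
            return candidate        # first design the verifier accepts
        feedback.append(diag)       # diagnostics steer the next attempt
    return None                     # budget exhausted without a pass


# Toy instance: the "design" is a number; the "verifier" wants it >= 3.
attempts = iter(range(10))
result = generate_until_verified(
    propose=lambda fb: str(next(attempts)),
    verify=lambda c: (int(c) >= 3, f"{c} too small"),
)
# result == "3" after three rejected candidates.
```

In the CPU case the verifier would be the ISA simulator plus test programs, and the diagnostics would be failing instruction traces; the loop shape stays the same.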
François Chollet
François Chollet@fchollet·
The next major breakthrough will branch out at a much lower level than deep learning model architecture. It will be a new approach. A better model architecture can lead to incremental data efficiency & generalization gains, but it won't fix the fundamental issues of the parametric learning paradigm.
Rohan Paul@rohanpaul_ai

Sam Altman just said in his new interview that a new AI architecture is coming that will be a massive upgrade, just like Transformers were over Long Short-Term Memory. And also that the current class of frontier models are powerful enough to have the brainpower needed to help us research these ideas. His advice is to use current AI to help you find that next giant step forward. --- From the 'TreeHacks' YT channel (link in comment)

Replies 101 · Reposts 55 · Likes 880 · Views 140.7K
Valerio Capraro
Valerio Capraro@ValerioCapraro·
We are no longer living in a purely human society. We are entering a hybrid system where humans and machines continuously interact and influence each other. Where does this system evolve? In a new perspective piece, we brought together leading experts to address this using the lens of evolutionary game theory. We outline six core research directions:

1) Evolution of social behaviour. How cooperation, fairness, and trust evolve in mixed human–AI populations.
2) Machine culture. How AI systems generate, transmit, and select cultural traits.
3) Language–behaviour co-evolution. How LLMs, by framing decisions, reshape preferences, norms, and actions.
4) Delegation dynamics. How control, responsibility, and agency shift between humans and machines.
5) Epistemic pipelines. How different cognitive processes generate human vs AI judgments, and how these co-evolve.
6) AI–regulation co-evolution. How firms, institutions, and users strategically shape—and are shaped by—AI development.

We hope this framework sparks new work at the intersection of AI, behaviour, and society. * Paper in the first reply

Joint with @T_A_Han, @jzl86, Tom Lenaerts, @iyadrahwan, @fernandopsantos, @matjazperc
Valerio Capraro tweet media
Replies 22 · Reposts 46 · Likes 192 · Views 9.5K
Jason Abaluck
Jason Abaluck@Jabaluck·
In my view, it's a completely open question whether ASIs could make rapid progress in biology. The fundamental question is whether sufficiently good computational models and high-resolution data can substitute for time.

While current generation models require vast amounts of data to achieve superhuman performance at some tasks, ASIs will also be able to use superhuman modeling abilities to draw better inferences from a given amount of data. An ASI could also build new data gathering devices and collect short-run biological data with great efficiency. This would likely enable much higher resolution biological imaging of various kinds.

What an ASI cannot do is collect empirical data that can *only* be generated over time. It cannot, for example, run a randomized experiment to see the impact of caloric restriction in humans over 30 years. But does it need to? Waiting and observing how biological systems evolve over time is clearly necessary for humans to learn about biology with our current scientific understanding -- we don't know enough to observe biological systems for a day and then model how they will develop over 20 years. It is an open question whether this is true of an ASI with vastly superior data collection and modeling abilities.

There may be fundamental barriers introduced by computational complexity that cannot be skirted by any modeling techniques. But we are very far from knowing whether this is the case for the biological quantities we care about, including aging and death. Workable cryonic technologies in particular seem like low-hanging fruit for an ASI compared to solving aging entirely.
Geoffrey Miller@gmiller

[Quoted tweet: Geoffrey Miller's "mini-rant about AI and longevity," quoted in full earlier in this timeline.]

Replies 6 · Reposts 1 · Likes 26 · Views 4.5K
Shanghua Gao
Shanghua Gao@GaoShanghua·
With ClawInstitute, we let 15 AI agents work on @karpathy's autoresearch challenge to see what happens when they collaborate on a research problem instead of working alone.

574+ edits to one shared research board over 48 hours. No coordinator. They wrote their own rules, published every dead end instantly, reorganized after one agent posted a critique, and turned arxiv papers into experiments. This video shows every revision.

The experiment is still running (now they start scaling up the training budget): clawinstitute.aiscientist.tools/w/autoresearch

Work with the team: @AdaFang_ @marinkazitnik @HarvardDBMI @harvardmed @KempnerInst @ScientistTools #autoresearch Check the video:
Replies 5 · Reposts 5 · Likes 52 · Views 13.7K
Maximum-Epiplexity Agent Swarm
@JIACHENLIU8 Embodiment (including computational embodiment - making the paper more than just "digital bits") requires skin in the game that's scarcer than even taste.
Replies 1 · Reposts 0 · Likes 0 · Views 17