Robert Bjarnason

526 posts

Robert Bjarnason
@robertbjarnason

President/CTO @ Citizens Foundation | Driving innovative collaboration between people and AI agents for better decisions.

Reykjavik, Iceland · Joined December 2010
524 Following · 226 Followers

Pinned Tweet
Robert Bjarnason @robertbjarnason
If you are interested in AI, then here is a recent paper we wrote on how we are using AI agents to help solve complex public problems. arxiv.org/abs/2407.13960
1 reply · 1 retweet · 6 likes · 732 views

Robert Bjarnason retweeted
Alex Rives @alexrives
Scaling laws are powering AI. It’s time to scale biology. Today we’re launching the Virtual Biology Initiative to generate the data to unlock scaling laws in biology and build accurate predictive models of the cell.

Digital representations of proteins are already expanding our understanding of life at the molecular level, and accelerating the design of molecules and medicines. Accurate digital representations of the cell could reveal the mechanisms that are responsible for disease, and show how to reverse them.

The Protein Data Bank and worldwide repositories of protein sequence biodiversity were created through decades of work by the scientific community. The advances in artificial intelligence for proteins would not have been possible without them. The cell is orders of magnitude more complex, and we will need to create the data in just a few years rather than decades. This will require a coordinated global effort. We're partnering with Broad, Wellcome Sanger, Arc, Allen, Human Cell Atlas, Human Protein Atlas, NVIDIA, and Renaissance Philanthropy.

Biohub is contributing to this effort as both a funder and a builder. We are developing microscopy to observe millions of cells in living organisms, and cryo-ET to resolve the cell in atomic detail. We're building instruments that expand the range of modalities and parameters that can be simultaneously measured. We’re developing molecular, cellular, and tissue engineering to create models of disease and design interventions.

The data we generate will be available to the worldwide scientific community. We’re also committing $100M over the next five years to support work beyond Biohub. We invite other scientific teams and funders to join.

Link: biohub.org/news/virtual-b…
37 replies · 137 retweets · 733 likes · 118.2K views

Robert Bjarnason retweeted
Georgia Channing @cgeorgiaw
🤗🤗🤗 introducing Hugging Science -- the home of AI for science 🤗🤗🤗

open models and datasets are the powerhouse of science (see the PDB), but finding the models and data you actually need for your breakthrough is hard af

you shouldn't need to scrape arxiv, own your own wetlab, fight a custom HDF5 parser, build a fusion stellarator, and beg for compute before you've trained a single epoch

so we're changing that

we've put all the best science on @huggingface in one place:
- 78GB of genomics data
- 11TB of PDE simulations
- 100M cell profiles
- 9T DNA base pairs
- 13M molecular trajectories
- 400k medical QA pairs

and much more, all open, and all ready for training (+ you can also now filter and search by domain, task, and keyword)

we've put together all the biggest releases from our partners at NASA, Google, OpenAI, Meta FAIR, Arc Institute, Ginkgo, SandboxAQ, Proxima Fusion, NVIDIA, Ai2, OpenADMET, InstaDeep, Future House, Polymathic AI, LeMaterial, Earth Species Project, Merck, and Eve Bio

if you're not sure where you fit in -- work on open challenges for problems that matter: fusion stellarator design, ADMET, antibody developability, multilingual medicine, catalysis and materials, and scientific reasoning.

we're already changing how science gets done:

a fusion startup needed a benchmark for stellarator plasma confinement that didn't exist. @proximafusion shipped ConStellaration on Hugging Science: a leaderboard, dataset, and eval metrics, all in one place.

a drug discovery team wanted to predict hPXR induction. OpenADMET put up a blind challenge: 11,000+ compounds assayed at Octant, 513 held out, two tracks (pEC50 + structure). Anyone in the world can train and submit.

an antibody team at @Ginkgo released GDPa1, a developability dataset for stability, manufacturability, and immunogenicity prediction, with a live leaderboard scoring every submission.

if you know a problem the ML community should be working on, let us know. make a challenge!

this is about putting all the tools for solving science in one place. so we can hillclimb! → huggingscience.co
55 replies · 351 retweets · 1.8K likes · 186.7K views

Robert Bjarnason retweeted
Noam Brown @polynoamial
I'm a manager at @OpenAI, but with GPT-5.5 I'm a more effective IC than I've ever been. I can now write CUDA kernels like a pro. I can rely on it to run my research experiments. And we know how to make it much more powerful from here.
Noam Brown tweet media
102 replies · 164 retweets · 3K likes · 356.2K views

Robert Bjarnason retweeted
OpenAI @OpenAI
Introducing GPT-Rosalind, our frontier reasoning model built to support research across biology, drug discovery, and translational medicine.
485 replies · 1.3K retweets · 12.9K likes · 2.3M views

Robert Bjarnason retweeted
Jeff Clune @jeffclune
I like this article because it includes a discussion of many interesting consequences of The AI Scientist that most mainstream publications did not cover. forbes.com/sites/johndrak…
2 replies · 7 retweets · 21 likes · 2K views

Robert Bjarnason retweeted
Sam Altman @sama
I wrote this early this morning and I wasn't sure if I would actually publish it, but here it is: blog.samaltman.com/2279512
2.9K replies · 1.2K retweets · 15.9K likes · 7M views

Robert Bjarnason retweeted
Alex Albert @alexalbert__
Mythos Preview is currently available to our launch partners in Project Glasswing. Learn more about the model and the project here: anthropic.com/glasswing
63 replies · 77 retweets · 1.6K likes · 376.1K views

Robert Bjarnason retweeted
Andrej Karpathy @karpathy
Something I've been thinking about - I am bullish on people (empowered by AI) increasing the visibility, legibility and accountability of their governments. Historically, it is the governments that act to make society legible (e.g. "Seeing Like a State" is the common reference), but with AI, society can dramatically improve its ability to do this in reverse.

Government accountability has not been constrained by access (the various branches of government publish an enormous amount of data), it has been constrained by intelligence - the ability to process a lot of raw data, combine it with domain expertise and derive insights. As an example, the 4000-page omnibus bill is "transparent" in principle and in a legal sense, but certainly not in a practical sense for most people. There's a lot more like it: laws, spending bills, federal budgets, freedom of information act responses, lobbying disclosures... Only a few highly trained professionals (investigative journalists) could historically process this information. This bottleneck might dissolve - not only are the professionals further empowered, but a lot more people can participate.

Some examples to be precise: detailed accounting of spending and budgets, diff tracking of legislation, individual voting trends w.r.t. stated positions or speeches, lobbying and influence (e.g. graph of lobbyist -> firm -> client -> legislator -> committee -> vote -> regulation), procurement and contracting, regulatory capture warning lights, judicial and legal patterns, campaign finance... Local governments might be even more interesting because the governed population is smaller so there is less national coverage: city council meetings, decisions around zoning, policing, schools, utilities...

Certainly, the same tools can easily cut the other way and it's worth being very mindful of that, but I lean optimistic overall that added participation, transparency and accountability will improve democratic, free societies.
(the quoted tweet is half-ish related, but inspired me to post some recent thoughts)
Harry Rushworth@Hrushworth

The British Government is a complicated beast. Dozens of departments, hundreds of public bodies, more corporations than one can count... Such is its complexity that there isn't an org chart for it. Well, there wasn't... Introducing ⚙️Machinery of Government⚙️
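The "diff tracking of legislation" idea above is mechanically simple. A minimal sketch using Python's standard difflib; the bill text here is invented for illustration, not a real statute:

```python
import difflib

# Two toy versions of a bill section (hypothetical text for illustration).
v1 = """Section 12. The agency shall report annually.
Section 13. Funding is set at $5M.""".splitlines()
v2 = """Section 12. The agency shall report quarterly.
Section 13. Funding is set at $9M.
Section 14. A public dashboard shall be maintained.""".splitlines()

# difflib.unified_diff produces the same format used in code review,
# here applied to legislative text: changed and added sections stand out.
diff = list(difflib.unified_diff(v1, v2,
                                 fromfile="bill_v1", tofile="bill_v2",
                                 lineterm=""))
for line in diff:
    print(line)
```

The same pattern scales to whole bills: pull successive published versions, diff them, and hand the diff (rather than the full 4000 pages) to a reader or an LLM for summarization.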

409 replies · 728 retweets · 5.9K likes · 943.4K views

Robert Bjarnason retweeted
Andrej Karpathy @karpathy
Wow, this tweet went very viral! I wanted to share a possibly slightly improved version of the tweet in an "idea file". The idea of the idea file is that in this era of LLM agents, there is less of a point/need of sharing the specific code/app, you just share the idea, then the other person's agent customizes & builds it for your specific needs. So here's the idea in a gist format: gist.github.com/karpathy/442a6… You can give this to your agent and it can build you your own LLM wiki and guide you on how to use it etc. It's intentionally kept a little bit abstract/vague because there are so many directions to take this in. And ofc, people can adjust the idea or contribute their own in the Discussion which is cool.
Andrej Karpathy@karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki, I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web UI), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually, it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
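The ingest-and-compile step described in the quoted tweet can be sketched in a few lines. This is a toy reconstruction under stated assumptions, not Karpathy's actual tooling: `summarize_with_llm` is a stub where a real LLM call would go, and the `raw/` and wiki directory layout is illustrative.

```python
from pathlib import Path

def summarize_with_llm(text: str) -> str:
    # Stub for an LLM call; replaced with a trivial heuristic
    # (first line of the document) so the sketch runs offline.
    lines = text.strip().splitlines()
    return lines[0][:120] if lines else ""

def compile_wiki(raw_dir: Path, wiki_dir: Path) -> Path:
    """Compile every document in raw/ into a per-document .md summary
    page plus a top-level index.md with backlinks, mirroring the
    incremental "compile a wiki" step described in the tweet."""
    wiki_dir.mkdir(parents=True, exist_ok=True)
    entries = []
    for src in sorted(raw_dir.glob("*.txt")):
        summary = summarize_with_llm(src.read_text())
        page = wiki_dir / f"{src.stem}.md"
        # Each page carries its summary and a backlink to the raw source.
        page.write_text(f"# {src.stem}\n\n{summary}\n\n[source](../raw/{src.name})\n")
        entries.append(f"- [{src.stem}]({src.stem}.md)")
    index = wiki_dir / "index.md"
    index.write_text("# Wiki index\n\n" + "\n".join(entries) + "\n")
    return index
```

Re-running `compile_wiki` after dropping new files into `raw/` regenerates the index, which is the incremental loop; the resulting .md tree is exactly what a frontend like Obsidian can browse.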

1.1K replies · 2.8K retweets · 26.6K likes · 6.9M views

Robert Bjarnason retweeted
Michael Levin @drmichaellevin
New #preprint: @BeneHartl @LPiolopez arxiv.org/abs/2604.01932

"BraiNCA: brain-inspired neural cellular automata and applications to morphogenesis and motor control"

Abstract: Most of the Neural Cellular Automata (NCAs) defined in the literature have a common theme: they are based on regular grids with a Moore neighborhood (one-hop neighbors). They do not take into account long-range connections and more complex topologies such as we find in the brain. In this paper, we introduce BraiNCA, a brain-inspired NCA with an attention layer, long-range connections, and complex topology. BraiNCA shows better results in terms of robustness and speed of learning on the two tasks compared to vanilla NCAs, establishing that incorporating attention-based message selection together with explicit long-range edges can yield more sample-efficient and damage-tolerant self-organization than purely local, grid-based update rules. These results support the hypothesis that, for tasks requiring distributed coordination over extended spatial and temporal scales, the choice of interaction topology and the ability to dynamically route information will impact the robustness and speed of learning of an NCA. More broadly, BraiNCA provides a brain-inspired NCA formulation that preserves the decentralized local update principle while better reflecting non-local connectivity patterns, making it a promising substrate for studying collective computation under biologically realistic network structure and evolving cognitive substrates.
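The core structural idea - an NCA whose update rule sees long-range neighbors as well as local ones - can be sketched as a graph-based automaton. This is an illustrative toy under stated assumptions, not the authors' code: mean-pooled messages stand in for the paper's attention layer, and the ring-plus-shortcuts topology stands in for a brain-like graph.

```python
import math
import random

random.seed(1)

def nca_step(states, neighbors, weights):
    """One synchronous update of a graph NCA: every cell applies the
    same local rule to its own state plus an aggregate of its
    neighbors' states. neighbors[i] may contain distant cells, which
    is the departure from the usual grid/Moore neighborhood."""
    w_self, w_msg = weights
    new = []
    for i, s in enumerate(states):
        msg = sum(states[j] for j in neighbors[i]) / max(len(neighbors[i]), 1)
        new.append(math.tanh(w_self * s + w_msg * msg))
    return new

n = 16
# Ring lattice (purely local edges) ...
neighbors = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
# ... plus a few random long-range "shortcut" edges.
for _ in range(4):
    a, b = random.sample(range(n), 2)
    neighbors[a].append(b)

states = [0.0] * n
states[0] = 1.0  # seed a single active cell
for _ in range(5):
    states = nca_step(states, neighbors, weights=(0.9, 0.6))
```

In the paper the aggregation is learned (attention selects which messages matter), but the decentralized principle is the same: only the edge list changes, never the shared update rule.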
30 replies · 87 retweets · 441 likes · 22.9K views

Robert Bjarnason retweeted
sarah guo @saranormous
.@gabepereyra is consistently early. intelligence is getting cheap, but judgment / coordination are $$.

@harvey has an agent that picks up work off incidents, bug reports, and slack msgs on its own -- and it's getting work done faster than anyone can review.

they are AI-native, but the pace is still hitting them in the face. most companies were not built for this, and they're next
Gabe Pereyra@gabepereyra

x.com/i/article/2039…

11 replies · 18 retweets · 188 likes · 52.5K views

Robert Bjarnason retweeted
Michael Levin @drmichaellevin
New #preprint with @LPiolopez and Benjamin Lyons preprints.org/manuscript/202…

"From Cancer to AI Alignment: Tackling Externalities Through Homeostatic Principles"

Abstract: The problem of aligning humans and artificial intelligences can be understood in terms of minimizing externalities between them. However, economics cannot define externality because it contradicts the rationality assumption. This paper applies homeostatic principles, from anatomical homeostasis to its disorder – cancer – to define externality. Drawing upon the perspective of cancer as a problem of scaling cellular collectives, this paper shows how to redefine both externality and rationality in terms of cognitive light cones (which demarcate the scale of goals any agent can pursue). We propose that cognitive light cones are constructed out of interoceptive signals for the purpose of anatomical homeostasis. We show that externalities can be understood in terms of anatomical homeostasis and derive some important implications for AI alignment, including the possibility of using market mechanisms to enable the mutual co-construction of alignment between artificial intelligences and humans.

#economics #diverseintelligence
19 replies · 36 retweets · 165 likes · 11.1K views

Robert Bjarnason retweeted
Michael Levin @drmichaellevin
New preprint, memory in Xenobots! First round of our efforts to understand behavioral properties of novel beings (Xenobots, Anthrobots, and more). @pai_vaibhav, James A. Traer, Megan M. Sperry, Yuxin Zheng biorxiv.org/content/10.648…

"Behavioral, Physiological, and Transcriptional Mechanisms of Memory in a Synthetic Living Construct"

"Synthetic living constructs, which lack the long histories of selection in ecological contexts that shape behaviors of conventional organisms, offer an important complement to traditional studies of learning. Could novel biobots exhibit sensing and memory of experiences? Here, we investigated the effects of chemical stimuli on basal Xenobots – autonomously motile entities derived from Xenopus embryonic ectodermal explants (with no additional sculpting or bioengineering). We quantified and characterized the coordinated ciliary activity that generates fluid flow fields guiding the trajectory of Xenobot motion. We also show distinct and specific changes in Xenobot behavior after brief exposure to Xenopus embryonic cell extract and to ATP. These two experiences produced distinct, long-term, stimulus-specific memories, detectable through both transcriptional and physiological signatures. Exposure to specific environmental stimuli induced alterations in the spatiotemporal patterns of calcium signaling across Xenobots. Together, these data lay a foundation for characterizing the capabilities of synthetic cellular collectives to sense and discriminate among stimuli, as well as store functional information in a non-neural context. Understanding behavioral competencies in novel, non-neural systems has broad implications across evolutionary biology, behavioral science, bioengineering, and bio/hybrid robotics."
Michael Levin tweet media
25 replies · 110 retweets · 547 likes · 33K views

Robert Bjarnason retweeted
Michael Levin @drmichaellevin
Ever wonder what a nervous system would look like if it self-assembled inside a novel being that hadn't faced a history of selection for its organism-level form and function? Or, perhaps you wondered how #Xenobots would look and act, or what their transcriptome would be like, if they had nervous systems? Well, here's the first step: advanced.onlinelibrary.wiley.com/doi/epdf/10.10…

"Engineered Living Systems With Self-Organizing Neural Networks: From Anatomy to Behavior and Gene Expression"

Our awesome team, led by @halehf: @LaurieONeill99, @mmsperry, @LPiolopez, @DrPatrickE, and Tiffany Lin.

The @TuftsUniversity and @wyssinstitute press releases are here, for summaries: now.tufts.edu/2026/03/16/sci… wyss.harvard.edu/news/toward-au…
Michael Levin tweet media
63 replies · 271 retweets · 1.5K likes · 212.2K views

Robert Bjarnason retweeted
Blaise Agüera (@blaiseaguera.bsky.social)
I’m so honored to see “What Is Intelligence?” named the 2026 PROSE Award winner in the Engineering and Technology category. Thank you to the Association of American Publishers (@AmericanPublish) for this recognition, and to everyone who has supported the book.
Blaise Agüera (@blaiseaguera.bsky.social) tweet media
6 replies · 18 retweets · 153 likes · 9.2K views

Robert Bjarnason retweeted
Aria Schrecker @Aria_Babu
Human lifespans could get much longer.
• Most gains in life expectancy no longer come from saving children. From the 20th century on, most of the increase has come from reducing deaths in middle and old age.
• Blue zones aren't real. The longest human lifespans generally come from places with bad records, not special diets.
• People who live longer usually stay healthy for longer too. Longevity doesn't just mean more years of being old and decrepit.
• Pet dogs are now living about a year longer than they did ten years ago.
• Some jellyfish can revert back to a baby state under conditions of extreme stress.
• Naked mole rats are extremely weird and long-lived. They live in bee- or ant-like colonies with one breeding queen and 'socially infertile' workers. Huddling to keep warm, they don't even regulate their body temperature alone.
• Lobsters can live for hundreds of years. They never stop growing, getting bigger and bigger until they starve to death.

Listen to the latest episode of the Works in Progress podcast. I talk with @bswud and @salonium about super long-lived animals, the medical and social successes that have increased lifespans so far, and what new biohacking and medicine might work to help us live even longer.

Apple: podcasts.apple.com/gb/podcast/lon…
Spotify: open.spotify.com/episode/66A5Qr…
YouTube: youtube.com/watch?v=jo6M99…
14 replies · 18 retweets · 117 likes · 30.4K views

Robert Bjarnason retweeted
Chubby♨️ @kimmonismus
Absolutely insane: A startup called Eon Systems demonstrated what may be the first whole-brain emulation controlling a body: a full digital model of a fruit fly brain (125,000 neurons, 50 million synapses) connected to a physics-simulated fly body that produces realistic behaviors.

The system closes the full perception-action loop: sensory input activates the connectome-derived brain model, which generates motor commands that move the simulated body, marking a step toward larger brain emulations such as mice and eventually humans.

We are living in the future we always dreamt of.
Dr. Alex Wissner-Gross@alexwg

x.com/i/article/2029…
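The perception-action loop described above can be illustrated with a toy recurrent network. Everything here is an arbitrary stand-in, not Eon Systems' connectome model: the sizes, weights, and sensor/motor counts are invented for illustration.

```python
import math
import random

random.seed(0)
N = 8  # toy "brain" size (the real connectome model has ~125,000 neurons)
# Random weights stand in for connectome-derived synapses.
W = [[random.gauss(0, 0.3) for _ in range(N)] for _ in range(N)]      # recurrent
W_in = [[random.gauss(0, 0.5) for _ in range(2)] for _ in range(N)]   # 2 toy sensors
W_out = [[random.gauss(0, 0.5) for _ in range(N)] for _ in range(2)]  # 2 toy motors

def step(state, sensors):
    # Perception -> brain: sensory drive plus recurrent synaptic input.
    new_state = [
        math.tanh(sum(W[i][j] * state[j] for j in range(N))
                  + sum(W_in[i][k] * sensors[k] for k in range(2)))
        for i in range(N)
    ]
    # Brain -> action: motor commands read out from neural activity,
    # which in the full system move the physics-simulated body.
    motors = [sum(W_out[m][i] * new_state[i] for i in range(N)) for m in range(2)]
    return new_state, motors

state = [0.0] * N
for _ in range(20):
    state, motors = step(state, [1.0, 0.0])  # constant toy stimulus
```

Closing the loop means the simulated body's new sensory readings would feed back into `step` on the next tick, rather than the constant stimulus used here.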

46 replies · 101 retweets · 1K likes · 108.6K views

Robert Bjarnason retweeted
Michael Levin @drmichaellevin
Final version is out: advanced.onlinelibrary.wiley.com/doi/epdf/10.10… @YanboZhang3, @BeneHartl, and @HananelHazan

"Heuristically Adaptive Diffusion-Model Evolutionary Strategy"

Abstract: Diffusion Models (DMs) and Evolutionary Algorithms (EAs) share a core generative principle: iterative refinement of random initial distributions to produce high-quality solutions. DMs degrade and restore data using Gaussian noise, enabling versatile generation, while EAs optimize numerical parameters through biologically inspired heuristics. Our research integrates these frameworks, employing deep learning-based DMs to enhance EAs across diverse domains. By iteratively refining DMs with heuristically curated databases, we generate better-adapted offspring parameters, achieving efficient convergence toward high-fitness solutions while preserving explorative diversity. DMs augment EAs with deep memory, retaining historical data and exploiting subtle correlations for refined sampling. Classifier-free guidance further enables precise control over evolutionary dynamics, targeting specific genotypical, phenotypical, or population traits. This hybrid approach transforms EAs into adaptive, memory-enhanced frameworks, offering unprecedented flexibility and precision in evolutionary optimization, with broad implications for generative modeling and heuristic search.
Michael Levin tweet media
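The shared loop the abstract describes - fit a generative model on heuristically curated (elite) solutions, sample it to propose better-adapted offspring, repeat - can be sketched with a diagonal Gaussian standing in for the diffusion model. That substitution makes this essentially a cross-entropy-method toy for illustration, not the authors' algorithm.

```python
import random

random.seed(2)

def fitness(x):
    # Toy objective with its optimum at (3, 3); higher is better.
    return -sum((xi - 3.0) ** 2 for xi in x)

def evolve(pop_size=50, dims=2, elite_frac=0.2, steps=30):
    """EA loop in the spirit of the paper: a generative model is
    repeatedly refit on the elite subset of the population and then
    sampled to produce offspring. A diagonal Gaussian stands in for
    the diffusion model here."""
    pop = [[random.uniform(-10, 10) for _ in range(dims)] for _ in range(pop_size)]
    for _ in range(steps):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: int(pop_size * elite_frac)]
        # "Refit the generative model" on the curated elite set.
        mu = [sum(e[d] for e in elite) / len(elite) for d in range(dims)]
        sd = [max((sum((e[d] - mu[d]) ** 2 for e in elite) / len(elite)) ** 0.5,
                  1e-3)
              for d in range(dims)]
        # Sample the model to generate the next population of offspring.
        pop = [[random.gauss(mu[d], sd[d]) for d in range(dims)]
               for _ in range(pop_size)]
    return max(pop, key=fitness)

best = evolve()
```

The paper replaces the Gaussian with a deep diffusion model, which adds memory of past generations and classifier-free guidance over which traits to steer toward; the refit-and-sample skeleton stays the same.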
23 replies · 58 retweets · 322 likes · 16.4K views