Far El (@far__el)
building a new kind of AI @ruliad
7.4K posts · Joined April 2022
2.5K Following · 14.3K Followers
Far El@far__el·
compiled simple playable games into a transformer while replicating perceptra's research on "LLMs = computer". interesting ideas coming out of this one
Far El
Far El@far__el·
@tobi Easier said than done when the ecosystem is littered with unnecessary obstacles
Far El
Far El@far__el·
@foomagemindset @francoisfleuret Wrong abstraction imo. We don’t need to go down to the level of molecules, elements, or quantum effects, so I highly doubt that a carbon-based or even a biological substrate is required to emulate some exotic forms of consciousness
Kassandra Popper@foomagemindset·
@far__el @francoisfleuret Which parts of natural consciousness systems are important? Does it need to be carbon-based, or can we get by with in-silico simulation? I personally doubt carbon is a prerequisite, but without a theory of what is required we don’t know.
François Fleuret@francoisfleuret·
If we completely understood the science of self awareness and consciousness, and it happened that equipping AIs with it makes them far stronger, should we do it?
Far El@far__el·
While working on the emulated fruit fly brain for a bit, I’ve been thinking about the implications of mapping, uploading and emulating brains, and the potential for some actual transfer of memories, skills and knowledge blows my mind.

I went into it curious to learn more about biological brains to support our superintelligence research (and we did gain some massive insights from this deep dive, more on this very soon). But it also made it clear to me that this tech is no longer a sci-fi pipe dream. It’s early, but the pantheon is not a mirage. It’s actually terrifyingly awesome what will be possible in the next 5-10 years with brain tech.

In this gung-ho world where AI is being rushed into war and surveillance, this technology is inevitable, especially when you combine brain emulation/upload with all the other neurotech advancements… Best thing to do is make sure that we are as ahead of the curve as possible and have alternatives and defences in place for what’s to come, in order to maintain our cranial sovereignty.
Far El@far__el·
Just found out eon open-sourced their code here: github.com/eonsystemspbc/… Will review it later, but it seems like my method is very similar
Far El@far__el·
To answer some questions: yes, this is a full brain emulation; no, it is not RL-trained. There is no training whatsoever; the brain architecture is running the inference based on its existing config of neurons and synapses. I’m literally running the fly brain: I mapped inputs and outputs to the brain and am running an SNN setup over the full ~150K-neuron connectome with its existing synapses, downloaded from Flywire.ai, in order to take actions in the env.

I’ve tried to stay as close to the science as possible for the interface between the brain and the neuromechfly “digital body”. The main rule is ensuring that it is indeed the connectome performing the actions based on stimuli. Scientists fortunately already mapped this brain in significant detail, including which neurons (or groups of them) are responsible for which actions. Some liberties were obviously taken in the interface in order to connect the action neurons for body movement. But the brain is itself taking forward, backward, steering, grooming, feeding and other actions purely based on vision and olfaction stimuli.

It’s a very limited emulation; the fidelity of the interface could probably be much higher. Aiming to expand on the brain emulation software, the interface and the environment to figure out what the hell is going on here and how far we can take it.
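The setup described above — stimulus driving a spiking network over a fixed connectome, with designated motor neurons read out as actions — can be sketched roughly as follows. This is an illustrative toy, not the actual code: the network size, neuron indices, leaky integrate-and-fire dynamics, and constants are all assumptions, and a real run would load the ~150K-neuron connectome and its synapse weights from a connectome dump rather than generating them randomly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a connectome: N neurons, sparse signed synapse weights.
# (A real run would load ~150K neurons and their synapses; sizes and
# neuron-group indices here are purely illustrative.)
N = 1000
W = rng.normal(0, 0.5, (N, N)) * (rng.random((N, N)) < 0.01)

SENSORY = np.arange(0, 50)    # hypothetical vision/olfaction input neurons
MOTOR = np.arange(950, 1000)  # hypothetical action-readout neurons

def step(v, spikes, stimulus, tau=0.9, threshold=1.0):
    """One leaky integrate-and-fire step: leak, synaptic input, stimulus."""
    v = tau * v + W.T @ spikes          # decay potentials, add synaptic drive
    v[SENSORY] += stimulus              # inject external sensory stimulus
    new_spikes = (v >= threshold).astype(float)
    v[new_spikes == 1] = 0.0            # reset neurons that fired
    return v, new_spikes

v = np.zeros(N)
spikes = np.zeros(N)
motor_rates = np.zeros(N)
for t in range(200):
    stim = np.full(len(SENSORY), 0.3)   # constant sensory drive, for the demo
    v, spikes = step(v, spikes, stim)
    motor_rates += spikes

# Downstream, motor-neuron firing rates would be mapped to body actions
# (forward, steering, grooming, etc.) in the digital-body interface.
action_drive = motor_rates[MOTOR] / 200
```

The key property the tweet insists on is preserved here: nothing is trained, and all behavior emerges from the fixed synaptic weights responding to stimuli.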
Far El@far__el·
i spent some time recreating the fruit fly "brain" upload. it's an interesting idea to simulate destructive brain uploads. the fly connectome is truly taking actions (completely untrained) that i could only call fruitflyish. i still have lots of questions so will explore this area further.
[GIF]
nisten🇨🇦e/acc@nisten·
Yo I called it @far__el
Dima Mikielewicz@dimamikielewicz

OpenAI published a repo with the code to orchestrate AI agents, built primarily with Elixir (96.1%): github.com/openai/symphony. While explaining why they chose Elixir, they say that:
- It is great for supervising long-running processes
- It has an active ecosystem of tools and libraries
- It supports hot code reloading without stopping actively running subagents, which is very useful during development

Amazing news for the Elixir community; I hope even more people will appreciate how amazing Elixir is for agentic AI systems. #myelixirstatus

Far El@far__el·
manifest dystopia
chiefofautism@chiefofautism·
someone connected LIVING BRAIN CELLS to an LLM. Cortical Labs grew 200,000 human neurons in a lab and kept them alive on a silicon chip; they taught the neurons to play Pong, then DOOM. now someone wired them into an LLM... real brain cells firing electrical impulses to choose every token the AI generates. you can see which channels were stimulated and the feedback from the neurons in choosing that letter or word
Far El@far__el·
@mayfer When I was playing with LoRAs over a year ago, we indeed found you can superpose LoRAs (kind of like a versioning system)
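The superposition idea in the exchange above can be sketched as a weighted sum of low-rank deltas applied to a shared base weight. A minimal illustration with made-up sizes (not any particular library's API):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, rank = 64, 4

# Base weight matrix of one layer (illustrative size).
W = rng.normal(size=(d_model, d_model))

# Two independently trained LoRA adapters: each is a low-rank delta B @ A.
A1, B1 = rng.normal(size=(rank, d_model)), rng.normal(size=(d_model, rank))
A2, B2 = rng.normal(size=(rank, d_model)), rng.normal(size=(d_model, rank))

def superpose(W, adapters, weights):
    """Merge several LoRA deltas into the base weight as a weighted sum."""
    W_merged = W.copy()
    for (A, B), alpha in zip(adapters, weights):
        W_merged += alpha * (B @ A)
    return W_merged

W_both = superpose(W, [(A1, B1), (A2, B2)], [0.5, 0.5])

# Scaling an adapter's weight toward zero "rolls back" its contribution,
# which is what makes the versioning-system analogy work.
W_rollback = superpose(W, [(A1, B1), (A2, B2)], [0.5, 0.0])
```

Because the deltas add linearly, adapters can be layered, rescaled, or removed independently, much like commits being applied and reverted.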
murat 🍥@mayfer·
if representation space continues to work better, 1. i expect to see diffusion generating, infilling or annealing LoRAs (or DoRA etc) 2. if LoRAs could be superposed that's basically the goal of an RNN
Sakana AI@SakanaAILabs

We’re excited to introduce Doc-to-LoRA and Text-to-LoRA, two related research projects exploring how to make LLM customization faster and more accessible. pub.sakana.ai/doc-to-lora/ By training a Hypernetwork to generate LoRA adapters on the fly, these methods allow models to instantly internalize new information or adapt to new tasks.

Biological systems naturally rely on two key cognitive abilities: durable long-term memory to store facts, and rapid adaptation to handle new tasks given limited sensory cues. While modern LLMs are highly capable, they still lack this flexibility. Traditionally, adding long-term memory or adapting an LLM to a specific downstream task requires an expensive and time-consuming model update, such as fine-tuning or context distillation, or relies on memory-intensive long prompts.

To bypass these limitations, our work focuses on the concept of cost amortization. We pay the meta-training cost once to train a hypernetwork capable of producing task- or document-specific LoRAs on demand. This turns what used to be a heavy engineering pipeline into a single, inexpensive forward pass. Instead of performing per-task optimization, the hypernetwork meta-learns update rules to instantly modify an LLM given a new task description or a long document.

In our experiments, Text-to-LoRA successfully specializes models to unseen tasks using just a natural language description. Building on this, Doc-to-LoRA is able to internalize factual documents. On a needle-in-a-haystack task, Doc-to-LoRA achieves near-perfect accuracy on instances five times longer than the base model's context window. It can even generalize to transfer visual information from a vision-language model into a text-only LLM, allowing it to classify images purely through internalized weights. Importantly, both methods run with sub-second latency, enabling rapid experimentation while avoiding the overhead of traditional model updates.

This approach is a step towards lowering the technical barriers of model customization, allowing end-users to specialize foundation models via simple text inputs. We have released our code and papers for the community to explore.
Doc-to-LoRA Paper: arxiv.org/abs/2602.15902 Code: github.com/SakanaAI/Doc-t…
Text-to-LoRA Paper: arxiv.org/abs/2506.06105 Code: github.com/SakanaAI/Text-…

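The core mechanism the quoted thread describes — a hypernetwork whose single forward pass emits a LoRA adapter from a task representation — can be sketched as below. This is a toy illustration of the idea only: the sizes, the two-layer architecture, and the assumption of a precomputed task embedding are mine, not Sakana's implementation, and the hypernetwork here is randomly initialized rather than meta-trained.

```python
import numpy as np

rng = np.random.default_rng(0)
emb_dim, hidden, d_model, rank = 32, 64, 16, 2

# Hypernetwork parameters. In the described setting these would be
# meta-trained once across many tasks/documents; here they are random.
W1 = rng.normal(0, 0.1, (emb_dim, hidden))
W2 = rng.normal(0, 0.1, (hidden, rank * d_model * 2))

def hyper_lora(task_embedding):
    """One forward pass: task embedding -> LoRA matrices A and B."""
    h = np.tanh(task_embedding @ W1)
    flat = h @ W2
    A = flat[: rank * d_model].reshape(rank, d_model)
    B = flat[rank * d_model :].reshape(d_model, rank)
    return A, B

# Stand-in for an encoded natural-language task description or document.
task_embedding = rng.normal(size=emb_dim)

A, B = hyper_lora(task_embedding)
delta_W = B @ A  # low-rank update to apply to a frozen base layer
```

The point of the amortization argument is visible in the structure: per-task optimization is replaced by a single matrix-multiply pipeline, so producing a new adapter costs one forward pass instead of a fine-tuning run.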
Far El@far__el·
@geoffreyhinton We need to automate and modernize our government with AI to eliminate bureaucracy and corruption.
Geoffrey Hinton@geoffreyhinton·
Our lying Ontario premier has just stolen $50 from every single person in Ontario. The estimated cost of repairing our beloved Science Center was 200 million dollars. The firm that made the estimate was told to multiply it by 1.85 to make it bigger. We were then told it would be better to build a new science center. Now we learn the new center will be smaller and will cost a billion dollars before the cost overruns. The only win is that the extensive parking lots of the old Science Center will be available for his developer friends.
Far El@far__el·
what comes after llms will scare even the most devout accelerationist into a doomer 😂
Far El@far__el·
I find it amusing that an ASI will possibly use an LLM for menial tasks not worth its compute
Far El@far__el·
@elonmusk @aaronburnett Starlink subsidizes the Moon, which subsidizes Mars and helps lay the foundation for interplanetary human civilization.
Elon Musk@elonmusk·
@aaronburnett The Moon would establish a foothold beyond Earth quickly, to protect life against risk of a natural or manmade disaster on Earth. We would continue to launch directly from Earth to Mars while possible, rather than Moon to Mars, as fuel is relatively scarce on the Moon.
Far El@far__el·
@TeenyLightHouse most immediately to me and many others, significant and accelerated advancements in bio/medicine/longevity