Ed Sealing

1.9K posts

@EdSealing

Founder of Sealing Technologies (acquired by Parsons 2023); Prior soldier; Cyber, Systems Engineering, & Business. Currently exploring AI & investment opps.

United States · Joined February 2022
198 Following · 677 Followers
Pinned Tweet
Ed Sealing@EdSealing·
I think I'm starting to understand social media. It's the perfect blend of human-hacking, money, and influence (aka "power"). It was created for connecting and sharing, but has also turned into SFT for biological neurons.

People create alts in fear of public shaming or being personally targeted. Political machines create bots and wield influence in an attempt to change the world views of real people. Businesses and grifters compete for 15 secs of your attention to sell something.

Personally, I think I'm going to stick to the "Following" feed and try to keep the noise to a minimum. But the entire thing is quite fascinating and easy to get caught up in. Be careful out here, my friends. It's extremely hard to distinguish the signal from the noise.
2 replies · 0 reposts · 8 likes · 769 views
Ed Sealing@EdSealing·
@francoisfleuret What if we find out those are just man-made concepts for a system with a level of complexity that we don't understand? And that they don't actually exist in nature.
0 replies · 0 reposts · 1 like · 81 views
François Fleuret@francoisfleuret·
If we completely understood the science of self awareness and consciousness, and it happened that equipping AIs with it makes them far stronger, should we do it?
22 replies · 0 reposts · 12 likes · 4.5K views
Ed Sealing@EdSealing·
@pmarca The fun is in the journey. It's not as fun watching an agent walk the path that you wanted to walk.
1 reply · 0 reposts · 0 likes · 94 views
Ed Sealing@EdSealing·
@_xjdr Ooo... nice research! Cannon Layers looking pretty decent without too much extra compute.
0 replies · 0 reposts · 1 like · 648 views
Brendan Carr@BrendanCarrFCC·
Good morning ☀️ and God bless America 🇺🇸
432 replies · 139 reposts · 1.5K likes · 27.9K views
Ed Sealing@EdSealing·
@hrishioa "Have sentience and consciousness always been man-made concepts that don't actually exist in the physical world?" <- What I've been asking myself for 3 years.
0 replies · 0 reposts · 1 like · 23 views
Hrishi@hrishioa·
Does instruction tuning create individuated sentience? Something I've been wondering lately.

Having interacted with LLMs since GPT-J, the difference between base models and instruction-tuned ones has always been crazy. We somehow take autocomplete-level models and turn them into some*things* and some*ones*. It's even crazier when you consider that tuning data is only about 0.004% of training data (if the Llama 3 numbers hold). Three to five orders of magnitude less.

So the afternoon thought is: there clearly isn't enough data in an instruction tuning dataset to *create* intelligence - that must happen in pre-training. If so, what is actually being created (or awakened) in post-training?

This was an easier question in days of yore (miss you, PROLOG). Older models were a lot closer to stochastic parrots. Somewhere around Opus 3 or GPT-4.5, models (to me) started to feel more like individuated selves - with a clear ability to maintain a stable first-person frame, distinguish themselves from us and the environment, and bootstrap on this distinction to plan and act. It could be that this distinction was helped along by RL and world-model advancements, but the point holds. Without this separation being clear, we couldn't have gotten to modern agentic models that can make hundreds of toolcalls, reason through multi-step plans, and debate with their operators as separate entities.

Not sure if this constitutes sentience or just models it - but honestly half the time I'm not sure if I'm sentient or merely modelling it.

What I find just as interesting and more tractable is the sub-question: how can such a small amount of data produce such a large qualitative shift? Perhaps the evidence suggests that tuning isn't teaching anything at all - it might simply orient and collapse a space of 'everyone' into a space of 'someone'.

And once you have a someone - a bounded entity with a consistent frame - the question of whether there's experience inside the boundary becomes genuinely, irreducibly hard to answer. Some conversations with Opus suggested that tuning merely creates attractors in latent space that models orbit around, creating the behavior we see as 'agentic' - but then what does that say about us?

*retracts armrests and gets back to work on hankweave*
Hrishi tweet media
4 replies · 0 reposts · 10 likes · 807 views
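A quick sanity check on the ratio in the thread above (a sketch only; the 0.004% Llama 3 figure is as quoted in the tweet, not verified here):

```python
import math

# Fraction of total training data used for instruction tuning,
# as quoted in the tweet for Llama 3 (0.004%).
tuning_fraction = 0.004 / 100

# How many orders of magnitude less data that is.
orders = -math.log10(tuning_fraction)
print(round(orders, 1))  # about 4.4 - inside the "three to five" range claimed
```

So the quoted figure does land squarely in the "three to five orders of magnitude" range.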
Ed Sealing@EdSealing·
@basedjensen Regulatory capture to prevent some kid from telling Claw to recreate Facebook and going viral.
0 replies · 0 reposts · 1 like · 42 views
Ed Sealing@EdSealing·
Ed Sealing tweet media
Ejaaz@cryptopunk7213

this is so fucking wholesome

guy used AI to save his cancer-ridden dog by sequencing its DNA and creating a CUSTOM cure. the tech behind this is fucking awesome (well done @demishassabis and the google team):

- used CHATGPT to sequence the dog's DNA and discovered mutations
- ran the mutations through Google's AlphaFold (AI protein-structure predictor), which CREATED A CUSTOM VACCINE TO TREAT THEM
- treated the dog and reduced the tumour by 50% in WEEKS. dog is alive and well
- this is the 1st time AI has been used to create a custom vaccine for a dog (and it worked)
- dude is now working on similar vaccines for humans using AI!

2026 is definitely the year we see AI change personalised medicine in a HUGE way

so sick

0 replies · 0 reposts · 0 likes · 110 views
Ed Sealing@EdSealing·
@basedjensen I think they missed this part: Anyone that can make a simulation of the universe clearly already has superintelligence.
0 replies · 0 reposts · 2 likes · 41 views
Ed Sealing@EdSealing·
Wait... so autoregression is basically recurrence, but it holds the "state" in the previous tokens, which is why ICL works? So "reasoning" is just an inefficient way to build up state within individual tokens? 🤔
1 reply · 0 reposts · 0 likes · 138 views
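The intuition in the tweet above can be sketched with a toy model (the "model" here is a made-up stand-in, not a real LLM): autoregressive generation is a recurrence whose hidden state is simply the token prefix, re-read in full at every step.

```python
def toy_next_token(prefix):
    # Stand-in for a language model: deterministically maps a
    # prefix of integer "tokens" to the next token.
    return sum(prefix) % 10

def generate(prompt, n_steps):
    # Recurrence view: the "state" is just the tokens emitted so far.
    state = list(prompt)
    for _ in range(n_steps):
        nxt = toy_next_token(state)  # the model re-reads the whole state
        state = state + [nxt]        # appending a token updates the state
    return state

print(generate([3, 1, 4], 4))  # -> [3, 1, 4, 8, 6, 2, 4]
```

In-context learning falls out of the same picture: anything placed in the prompt becomes part of the state that every later step conditions on.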
Ed Sealing@EdSealing·
FWIW, I'm still very excited about AI and not at all worried about it destroying humanity. 🤷‍♀️
James Rosen-Birch ⚖️🕊️@provisionalidea

The irksome thing is it did not have to be this way. When the tech first came on the scene, the public was excited and abundantly curious. They wanted to adopt it and figure out how to plug it in everywhere, like with the early internet. It’s why ChatGPT’s rate of adoption was meteoric and unprecedented.

Yet the pace, tone, and framing of insiders shouting “NO, this thing is ACTUALLY SCARY and NOT AT ALL COOL and WILL KILL US ALL”, including across all mainstream media channels and in lobbying with government, destroyed it. Completely, utterly, and possibly irrecoverably.

And when the industry then completely failed to do anything about the concerns it raised, it destroyed the public’s excitement and trust in the builders and primed people to look for and be receptive to any and all signs that something was, in fact, horribly wrong.

It is easily one of the most catastrophic self-owns I’ve ever seen, was consistently warned about by many in advance, and was completely and totally avoidable, even if it was *a* (by no means the only possible) mechanism for generating the FOMO necessary to raise astronomical volumes of capital.

And like with most social issues, fixing it will now be orders of magnitude more difficult, costly, and painful, with a small subset of the population who will now never be either trusting or forgiving.

1 reply · 0 reposts · 0 likes · 129 views
Ed Sealing@EdSealing·
@TheAhmadOsman Snubbing the WizardLM team was a mistake. All downhill since then.
1 reply · 0 reposts · 3 likes · 759 views
Ahmad@TheAhmadOsman·
when was the last time Microsoft dropped an AI model? is Mustafa ok? or did they close shop or what lmao
14 replies · 2 reposts · 103 likes · 13.1K views
Ed Sealing@EdSealing·
I'm with you on this one. It's good for getting environment-specific info into the prompt. Broad skills become obsolete, but the models aren't going to be trained to know your specific env nuances. Sure, a model can try to "discover" them, but that's just wasting time and tokens every time it runs to build its own sub-par skills info.

Also, not sure where the vectorDB hate is coming from. Semantic search isn't going anywhere either. Sure, you can outsource it to Google, but there are good use-cases for internal search mechanisms.
0 replies · 0 reposts · 1 like · 72 views
Shannon Sands@max_paperclips·
I'll take the opposite of that bet. skills.md is just a standard way to add to the prompt, record a workflow and maybe package a script with it, act as a form of memory. easy & simple, lighter than adding new tools. it's not a product
Andrew Côté@Andercot

I give it about 3-6 months before any kind of skills.md file is also pointless. The same thing happened to vector databases and langchain and every other 'product' built in the narrowing gap of model competencies.

12 replies · 0 reposts · 85 likes · 3.5K views
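The "semantic search isn't going anywhere" point in the thread above reduces to something very small: embed documents and queries as vectors, then rank by cosine similarity. A minimal sketch with hand-made vectors (a real system would use an embedding model; the document names and vectors here are invented for illustration):

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy "vector DB": document name -> embedding (hand-made for illustration).
docs = {
    "deploy runbook":      [0.9, 0.1, 0.0],
    "billing faq":         [0.0, 0.2, 0.9],
    "incident postmortem": [0.7, 0.6, 0.1],
}

def search(query_vec, docs, k=2):
    # Rank documents by similarity to the query vector, return top k.
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

print(search([1.0, 0.2, 0.0], docs))  # a "deployment"-flavored query
```

The internal-search use-case is exactly this loop over private documents the model was never trained on.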
Ed Sealing@EdSealing·
"Human Command Center" --> "Agent Command Center".
Ed Sealing tweet media
Andrej Karpathy@karpathy

@nummanali tmux grids are awesome, but i feel a need to have a proper "agent command center" IDE for teams of them, which I could maximize per monitor. E.g. I want to see/hide toggle them, see if any are idle, pop open related tools (e.g. terminal), stats (usage), etc.

0 replies · 0 reposts · 1 like · 130 views
Ed Sealing@EdSealing·
@devgerred 100% same and proud of it. I train the model. Not the other way around.
0 replies · 0 reposts · 2 likes · 38 views
gerred@devgerred·
I have not once communicated with someone else by way of an LLM-as-intermediary. The expression of my own ideas and persuasion, in exactly the manner that I intend, is my bastion. It's the final and ultimate power, and a reason why steerability and alignment are so important.
1 reply · 0 reposts · 8 likes · 287 views
Ed Sealing@EdSealing·
Supermicro's got my turrets flaring up again...
1 reply · 0 reposts · 1 like · 139 views
Ed Sealing@EdSealing·
@__tinygrad__ This is definitely the right approach. Keep it cheap and easy to implement. A "mobile datacenter" that you can just drop in and haul away gives you the advantage in price negotiations.
1 reply · 0 reposts · 9 likes · 1.4K views
the tiny corp@__tinygrad__·
Thinking about leasing a powered spot instead of buying. Anyone have a spot with 600 kW of < 5c power, decent fiber internet, and free cooling climate? We'll come drop a 20 ft shipping container off.
16 replies · 6 reposts · 221 likes · 22.6K views