Wim
@WLWeertman
653 posts

Studying the intersections of deep learning and conservation.

Joined April 2023
106 Following · 57 Followers
Wim @WLWeertman:
@kloss_xyz Ehhhh, I deal with it when it happens. I already have local, but man, it's so much slower. I'm hoping that by the time they make it financially undoable, local will be good enough, or there will be a cheap API. It will be a few years before local catches up.
0 replies · 0 reposts · 0 likes · 6 views
klöss @kloss_xyz:
$200/mo for max frontier AI compute is a subsidy, and subsidies eventually end. The smartest builders I know are already moving their craziest workflows to open models and local hardware. The rest will wake up to subscription hikes and usage throttles soon enough. Own your own compute.
24 replies · 2 reposts · 75 likes · 3.9K views
Wim @WLWeertman:
@iamtrask Become a blacksmith
1 reply · 0 reposts · 1 like · 40 views
Zolden @ZoldenGames:
I'm making a physics simulation game. It's computed on the GPU, allowing a big, dynamic, destructible world. I haven't decided on gameplay yet; this video is a collection of experiments.
312 replies · 267 reposts · 7K likes · 1.1M views
Wim @WLWeertman:
@StasiTerumi @ayanoai66105244 @woke8yearold Not entirely sure if you are an argumentative transformer or a person lol. The money would come from demonstrating something people thought was impossible, even if it was mid. Obviously it's not going to be the same.
1 reply · 0 reposts · 0 likes · 12 views
Aleph @woke8yearold:
Imagine you had access to Claude 4.6 Opus and ChatGPT 5.4 in 2014. You have no idea what they are, but you can ask either of them questions via your computer, with infinite tokens. How much of an advantage would this really be? What could you do with this?
82 replies · 11 reposts · 1.6K likes · 112.5K views
Collin Rugg @CollinRugg:
JUST IN: FBI agents have just raided the home belonging to the suspect accused of throwing a Molotov cocktail at OpenAI CEO Sam Altman's home. The suspect is reportedly 21-year-old Daniel Moreno-Gama, who was "driven by strong anti-AI views." Moreno-Gama was carrying a manifesto when he was arrested in San Francisco that included "a list of other AI executives and investors along with their names and addresses," according to Fox.
290 replies · 939 reposts · 5.1K likes · 785.1K views
Wim @WLWeertman:
@johnlindquist I think it can be any of those issues. Sometimes it's the prompt. Sometimes you don't know the problem's spec at the start. Sometimes it's the model. I'd like to add another: sometimes it's the harness's fault, fucking up the context.
0 replies · 0 reposts · 0 likes · 5 views
John Lindquist @johnlindquist:
If AI takes 3+ tries to fix the bug, is the problem "hard"? Is the model "bad"? Or do you prompt poorly?
47 replies · 1 repost · 16 likes · 4.6K views
frisk2137 @ayanoai66105244:
@StasiTerumi @WLWeertman @woke8yearold Bullshit. Large by what standard? By today's standard? Then of course classic language models weren't "large," but for 2014 they maybe fucking were? And the world doesn't revolve around transformers.
2 replies · 0 reposts · 0 likes · 17 views
Stasi Terumi @StasiTerumi:
@ayanoai66105244 @WLWeertman @woke8yearold We didn't have LLMs in 2014. We had LMs. We didn't have Transformers, which allowed scaling networks to extreme sizes (even the LSTM doesn't fully solve the issue), and we certainly didn't have enough compute to train even some shitty 32B Transformer, let alone something large.
2 replies · 0 reposts · 0 likes · 34 views
Stasi Terumi @StasiTerumi:
@WLWeertman @woke8yearold Won't have enough compute. GPT-5.4 is dubiously sustainable financially even with all our VRAM. We didn't have neural networks in the 90s not because we didn't have the tech but because we didn't have the compute. Same with LLMs in 2014 (though we were lacking the tech too, mostly).
2 replies · 0 reposts · 0 likes · 48 views
Nature is Amazing ☘️ @AMAZlNGNATURE:
One of the biggest mysteries to me is how Orcas, the ocean's most efficient predators, have never attacked humans in the wild… almost like they know something we don't.
1.6K replies · 1.9K reposts · 25.2K likes · 9.8M views
Wim @WLWeertman:
@marty188586 Distillation will allow even very large models, far beyond current sizes, to be economically viable.
0 replies · 0 reposts · 0 likes · 22 views
Martin Chang @marty188586:
When does LLM scaling end economically? Models are 10x-100x larger for maybe a 1.3x-2x performance gain, varying by domain. Seriously?
11 replies · 1 repost · 54 likes · 6.1K views
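The distillation argument in this exchange can be sketched concretely. In the standard (Hinton-style) recipe, a small student model is trained to match the large teacher's temperature-softened output distribution rather than hard labels, which is what lets the student recover much of the teacher's behavior at a fraction of the serving cost. A minimal sketch with toy, made-up logits (all values here are illustrative, not from any real model):

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) between temperature-softened distributions.

    The student is pushed to match the teacher's full distribution over
    classes/tokens, not just its top-1 answer, so it also learns how the
    teacher ranks the wrong answers ("dark knowledge").
    """
    p = softmax(teacher_logits, T)  # soft targets from the big model
    q = softmax(student_logits, T)  # current student predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

# Toy check: a student that agrees with the teacher's ranking incurs a
# much smaller loss than one that ranks the options in reverse.
teacher = [4.0, 1.0, 0.5]
print(distillation_loss(teacher, [3.5, 1.2, 0.4]))  # small
print(distillation_loss(teacher, [0.4, 1.2, 3.5]))  # large
```

In practice this KL term is mixed with an ordinary cross-entropy loss on ground-truth labels, but the sketch captures the mechanism behind the "big model subsidizes a cheap model" economics.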
Wim @WLWeertman:
@yacineMTB Grandpa worked on nukes. Studied how they broke at the atomic scale during the explosion, before fission. Proper math guy. My dad has childhood memories of spending time in Pakistan. The USA probably sent gramps there, with others, to teach them the secrets.
0 replies · 0 reposts · 0 likes · 739 views
kache @yacineMTB:
like that's actually crazy. India vs Pakistan cricket fights could turn into nuclear armageddon doom
14 replies · 3 reposts · 1.2K likes · 90.4K views
kache @yacineMTB:
you know what blows my mind? Pakistan has nukes
468 replies · 345 reposts · 12.9K likes · 826.8K views
Wim @WLWeertman:
@Teknium Cool? Was this with Qwen? Finally been giving local a try and gotta say, I'm impressed.
1 reply · 0 reposts · 1 like · 333 views
Teknium (e/λ) @Teknium:
Hermes Agent now comes packaged with Karpathy's LLM-Wiki for creating knowledge bases and research vaults with Obsidian! In just a short bit of time, Hermes created a large body of research work from studying the web, code, and our papers to create this knowledge base around all of Nous' projects. Just `hermes update` and type /llm-wiki in a new message or session to begin :) github.com/NousResearch/h…
[image]
182 replies · 379 reposts · 3.6K likes · 477.4K views
Jesse Samuel @jwsaml:
Has anyone fully replaced their OpenClaw with Hermes?
[images]
304 replies · 13 reposts · 557 likes · 88.9K views
Wim @WLWeertman:
@MaziyarPanahi Amazing stuff is happening in the vision world. 0.6B is actually large for edge tasks. But now it seems so quaint. Big room for compression to make smaller models!
1 reply · 0 reposts · 0 likes · 88 views
Maziyar PANAHI @MaziyarPanahi:
Jets. Helicopters. Wildfire. Traffic. Crowds. Falcon Perception. 0.6B parameters. "Find every person." It finds every person. "Find the fire." It finds the fire. +21.9 over SAM 3 on spatial understanding. Running locally via MLX. No cloud. What should I throw at it next?
18 replies · 14 reposts · 201 likes · 17.8K views
Rand @rand_longevity:
are you gonna get a brain implant or wait for nanobots in your bloodstream?
84 replies · 4 reposts · 108 likes · 13.3K views
Wim @WLWeertman:
@yacineMTB It also seems like it would be relatively easy to simulate. Do you use a generator for the spaces? How do you have the drones avoid hitting kids? Purely drone design? Small and caged?
0 replies · 0 reposts · 0 likes · 36 views
kache @yacineMTB:
I honestly don't care what it's used for. I'll just sell it to groups that have highly aligned democratic values. If I'm being honest, the only thing I really want is to see it work
6 replies · 1 repost · 38 likes · 4.4K views
kache @yacineMTB:
I've had autonomous indoor flight for a while now, working with goals. The next goal is to crack 3 seconds per room. I want to have a swarm of drones completely saturating every single room; an operator should be able to know how many people are inside a building in a minute
30 replies · 4 reposts · 255 likes · 16K views
Wim @WLWeertman:
@shimetsu_nittan They look kinda like urchin spines, but that seems wrong
0 replies · 0 reposts · 0 likes · 3.4K views
死滅淡水魚 @shimetsu_nittan:
I was given a samegarei (roughscale sole), and when I peeled off the skin, the white side was covered with a bunch of spine-like things like this. Does anyone know what they are? Some were stuck in the flesh too.
[image]
388 replies · 187 reposts · 11.1K likes · 15.8M views
Wim @WLWeertman:
@DeeperThrill My expectation: François Chollet is largely correct about the smooth sphere of intelligence. This means open source will eventually be basically just as good, but it will be harder to buy compute or run the models.
0 replies · 0 reposts · 1 like · 50 views
Deep Thrill @DeeperThrill:
Open source keeps catching up but never surpassing. Will it ever? Or will the closed-source models always be the smartest, just paving the way for open source to follow in their wake?

Quoting AVB @neural_avb:
Underrated fact: open source will always catch up. A comparison of GPT 5.4, Opus 4.6, and Gemini 3.1 on highest settings vs open models Qwen, Minimax M-2.5, GLM-5, and Kimi K2.5: they are all 2-10x cheaper lol. Idk how long closed source can keep any moat besides marketing budget.

5 replies · 0 reposts · 9 likes · 1.4K views