Libre Computer

2.2K posts

@librecomputer

Changing the Landscape of Open Computing with reliable, composable, upstream, and secure SBCs and Computing Devices for professionals, hobbyists, and tinkerers.

Salt Lake City, UT · Joined July 2017
1K Following · 4.3K Followers
Libre Computer (@librecomputer)
Feel like it is only a matter of time before DRAM manufacturers become compute manufacturers.
0 replies · 0 reposts · 5 likes · 374 views

bret.dk (@bretweber)
@librecomputer Still trying to stick to a price point. I respect it
1 reply · 0 reposts · 0 likes · 34 views

Libre Computer (@librecomputer)
April will be amazing.
2 replies · 0 reposts · 4 likes · 456 views

bret.dk (@bretweber)
@librecomputer Libre Computer Green Bean releasing with 128MB of RAM?
2 replies · 0 reposts · 2 likes · 222 views

Libre Computer (@librecomputer)
Connecting with people in GPU sales/leasing/cloud compute, containerized/modular solutions, datacenter design & deployment space. Have 2MW+ cheap reliable power source for PoC, expandable to multiple sites.
0 replies · 0 reposts · 4 likes · 302 views

bret.dk (@bretweber)
How many of these do we think ended up cooked before they started putting these stickers on them?
bret.dk tweet media
5 replies · 1 repost · 24 likes · 1.2K views

Sahaj Sarup 🐧 (@sahajsarup)
Why does every freaking SBC OS have a complicated "startup script"? Whatever happened to just dropping to a console? Armbian/RPi/DietPi, whatever else. Everyone wants to change the password, connect to the network, update every darn thing, and do full desktop configuration 🤬
2 replies · 0 reposts · 1 like · 156 views

Libre Computer (@librecomputer)
@axboe @lffgz Do you ask it to write a bunch of md files guiding intuition? This is what we found most helpful through OpenClaw. Building context and groundwork of relative truths.
0 replies · 0 reposts · 0 likes · 39 views

Jens Axboe (@axboe)
@lffgz I don't think you can, because that skill comes from intuition built from years/decades of experience. Same goes for taste, though many experienced developers never achieve that.
1 reply · 0 reposts · 3 likes · 408 views

Libre Computer (@librecomputer)
@axboe @erwanaliasr1 We have been using Claude with great success. For architects who understand the implications in the stack, it is great. For vibe coding, it is a recipe for disaster.
1 reply · 0 reposts · 5 likes · 1.2K views

Libre Computer (@librecomputer)
UEFI extensibility? Device tree will always be superior.
0 replies · 1 repost · 3 likes · 386 views

bret.dk (@bretweber)
Hm, well this 1st iteration was generating deterministic pseudo-random data per file (each cycle of 95% of the disk's capacity gets a new file) using a seeded PRNG (Go's math/rand), bypassing the OS caches with O_DIRECT, and then reading it back to compare the SHA-256 hashes. I haven't deployed anything big time yet so I'm open to tweaking. This was a side quest that I haven't really dedicated much time to :D
1 reply · 0 reposts · 1 like · 45 views
bret.dk (@bretweber)
Almost at 50TiB written/read on the 32GB @Raspberry_Pi microSD card, and no smoke yet.. I should really try and find time to better understand the innards of flash storage, the increase (and then decrease) is interesting. Controller optimising for the workload for a while maybe?
bret.dk tweet media
1 reply · 1 repost · 11 likes · 619 views

Libre Computer (@librecomputer)
@bretweber @Raspberry_Pi The real test is writing 4K data randomly and explicitly synchronized. That will destroy poorly designed devices with only a few GB.
1 reply · 0 reposts · 2 likes · 176 views
Libre Computer (@librecomputer)
@bretweber @Raspberry_Pi NAND can take a lot of sequential writes; take your size and multiply by 1000 and it should not be an issue. You have to write random data (e.g. AES-encrypted via openssl) to really test, as flash controllers now have compression to improve perf without actually writing much.
1 reply · 0 reposts · 3 likes · 217 views
System64 (@System64fumo)
@librecomputer Depends on what “good” and “low hardware” means. I feel like a local translation or automation bot is helpful, and that can run on pretty low-ish hw (8GB RAM). But generally speaking you’re correct, actually useful coding assistants or similar do need beefy hw.
1 reply · 0 reposts · 1 like · 20 views

System64 (@System64fumo)
@librecomputer You guys have been promoting AI/Agent stuff lately. Would be super cool to see an LLM/agent run fully locally on upstream Linux and llama.cpp with HW accel, since rk3588 or similar can’t utilize the NPU/GPU with llama.cpp.
1 reply · 0 reposts · 1 like · 177 views

Libre Computer (@librecomputer)
@System64fumo Local models are only good for roles such as TTS or STT or image classification. Useful LLMs cannot run effectively on lower end hardware.
1 reply · 0 reposts · 1 like · 24 views

System64 (@System64fumo)
@librecomputer A small low power cluster that can utilize an NPU or GPU to accelerate an LLM would be very appealing because at the moment the only way to get hardware accelerated LLMs on ARM is with vendor kernels and very specific and broken AI tools.
2 replies · 0 reposts · 1 like · 32 views