Arcadian Computers (RKDN)

25.8K posts


@ArcadianComp

Computer consulting / repair center. We fix (and build) laptops / desktops / servers. Onsite support for business, and website design/hosting. Linux / Mac / Win

La Plata, MD · Joined December 2011
2K Following · 687 Followers
Arcadian Computers (RKDN) retweeted
Bell Labs
Bell Labs@BellLabs·
We're continuing to celebrate Claude Shannon's 110th birthday this month. Here's some short but shrewd wisdom from the father of information theory himself: "Information is the resolution of uncertainty." Shannon's quest to resolve uncertainty laid the foundations of modern AI.
4 replies · 23 reposts · 56 likes · 2.6K views
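Shannon's "information is the resolution of uncertainty" has a precise form: entropy, the average number of bits needed to resolve an outcome. A minimal sketch in Python (the example distributions are illustrative, not from the tweet):

```python
import math

def entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin resolves exactly one bit of uncertainty per flip.
print(entropy([0.5, 0.5]))   # 1.0
# A biased coin carries less information per flip.
print(entropy([0.9, 0.1]))   # ~0.469
# A certain outcome resolves nothing.
print(entropy([1.0]))        # 0.0
```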
Arcadian Computers (RKDN) retweeted
Oliver Prompts
Oliver Prompts@oliviscusAI·
Someone just built a Claude Code for electronics. It's called Blueprint. Type what you want to build and it generates wiring diagrams, bills of materials, and step-by-step assembly guides for your Arduino or Raspberry Pi project. 100% Free.
31 replies · 363 reposts · 2.5K likes · 126.5K views
Arcadian Computers (RKDN) retweeted
Igor Os
Igor Os@igor_os777·
UNIX introduced ‘/dev/null’—the system’s ultimate trash can—early in its life (Version 4 UNIX, 1973-ish). Feeding it data instantly discards it. Sysadmins lovingly refer to it as the bit bucket, digital abyss, or management’s feedback inbox. Coincidentally, its throughput remains unbeaten.
12 replies · 29 reposts · 165 likes · 7.1K views
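The bit bucket's behavior is easy to demonstrate from any language. A small Python sketch (`os.devnull` resolves to `/dev/null` on Unix systems):

```python
import os

# Writes to the null device succeed and are silently discarded;
# reads hit EOF immediately -- nothing ever comes back out.
with open(os.devnull, "wb") as bit_bucket:
    written = bit_bucket.write(b"x" * 1_000_000)  # a megabyte into the abyss

with open(os.devnull, "rb") as f:
    recovered = f.read()

print(written)    # 1000000 -- every byte "accepted"
print(recovered)  # b'' -- none of it retrievable
```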
Arcadian Computers (RKDN) retweeted
🌿 lithos
🌿 lithos@lithos_graphein·
The engineers behind this launch definitely deserve that bonus. What kind of new AI sorcery will this thing bring us?
6 replies · 5 reposts · 180 likes · 14.5K views
Arcadian Computers (RKDN) retweeted
William Ruider
William Ruider@ruider92545·
I don't blame people who voluntarily spend over $200 a month chatting with their beloved OpenClaws or Hermeses. But I'm surprised that so many people don't realize that certain requirements for working with sensitive or private customer data cannot be met without investing in professional NVIDIA or Apple hardware. Such work requires running AI locally.

The point isn't how intelligent these models are compared to frontier cloud models, but what can be done in an air-gapped environment with sensitive data and customer privacy. More and more standards, such as ISO/IEC 27001 and NIST CSF/800-53, require accreditation and appropriate hardware and software. At the same time, more and more people are realizing how risky working online is. That is why a growing number of specialists are able to meet the demand in this niche sector.
26 replies · 18 reposts · 239 likes · 24.6K views
Arcadian Computers (RKDN) retweeted
Wccftech
Wccftech@wccftech·
AMD taps GlobalFoundries for MI500’s co-packaged optics as the silicon photonics race with NVIDIA heats up. wccftech.com/amd-taps-globa…
0 replies · 2 reposts · 8 likes · 1.2K views
Arcadian Computers (RKDN) retweeted
Alex Cheema
Alex Cheema@alexocheema·
Ternus as CEO, Srouji as Chief Hardware Officer. The people who turned Apple Silicon into the leading local AI platform are now the ones steering the company. These were the people doing heterogeneous on-device AI 10 years ago (w/ CoreML on CPU/GPU/ANE). The software sucked - but the hardware + architecture was way ahead of its time. This is Apple leaning into local AI. This will define the next 20 years of Apple. Great news for local AI.
Mark Gurman@markgurman

BREAKING: Tim Cook steps down. Ternus to CEO.

24 replies · 112 reposts · 1.4K likes · 121.7K views
Arcadian Computers (RKDN) retweeted
Uros Popovic
Uros Popovic@popovicu94·
Most Linux tutorials end where the interesting part begins. Install guides. Distro comparisons. Ricing screenshots. And then nothing.

I'm building what comes next: The Linux Field Guide. Six series planned at the moment, tons of articles, for upper beginners ready to go past the desktop. Some of the series ideas for now:

- From Zero to Root (Alpine on QEMU from scratch)
- The C Layer (libc, syscalls, POSIX)
- Shell as a Programming Language
- Your Kernel Talks Through /proc
- Signals and Process Life
- Files Are Everything

The foundation for wielding Linux for real: building kernel images, rolling your own userspace, writing against system calls, running it on hardware you built. Start with the parts you touch every day. Go as deep as you want. A whole new website and newsletter coming soon! Follow along!
14 replies · 90 reposts · 960 likes · 41K views
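The "kernel talks through /proc" idea is easy to preview: on Linux, /proc is a virtual filesystem whose files have no disk backing and are generated by the kernel on every read. A small sketch (Linux-only; field names as documented in proc(5)):

```python
# Each line of /proc/self/status is "Key:\tvalue", written by the kernel
# at read time for whichever process opens it.
with open("/proc/self/status") as f:
    fields = dict(line.split(":\t", 1) for line in f if ":\t" in line)

print(fields["Name"].strip())  # executable name of this very process
print(fields["Pid"].strip())   # the PID the kernel assigned it
```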
Arcadian Computers (RKDN) retweeted
Tom's Hardware
Tom's Hardware@tomshardware·
New cost-effective DDR5 memory 'HUDIMMs' show around 50% reduction in throughput with single subchannel — Two HUDIMMs are as fast as a single stick of regular DDR5 RAM tomshardware.com/pc-components/…
0 replies · 3 reposts · 27 likes · 2.3K views
Arcadian Computers (RKDN) retweeted
Rosen.Tech
Rosen.Tech@RosenBridge_erg·
Rosen's wrapped version is called rsETH. Fortunately, that is not the asset trending right now. We built an open-source bridge because marketing in crypto can be deceptive compared to what happens under the hood. Attack surfaces don't matter until they do. They mattered today.
0 replies · 27 reposts · 100 likes · 1.5K views
Arcadian Computers (RKDN) retweeted
ℏεsam
ℏεsam@Hesamation·
this part of the KIMI K2.6 launch blog is insane:
> it deployed Qwen3.5-0.8B model locally on a Mac
> coded and optimized its inference in Zig
> (never knew you could do that)
> improved throughput from ~15 to ~193 tokens/sec
> made it 20% faster than LM Studio
> did 4,000+ tool calls, >12 hours of execution, 14 iterations
Kimi.ai@Kimi_Moonshot

Meet Kimi K2.6: Advancing Open-Source Coding
🔹 Open-source SOTA on HLE w/ tools (54.0), SWE-Bench Pro (58.6), SWE-bench Multilingual (76.7), BrowseComp (83.2), Toolathlon (50.0), Charxiv w/ Python (86.7), Math Vision w/ Python (93.2)

What's new:
🔹 Long-horizon coding - 4,000+ tool calls, over 12 hours of continuous execution, with generalization across languages (Rust, Go, Python) and tasks (frontend, devops, perf optimization).
🔹 Motion-rich frontend - videos in hero sections, WebGL shaders, GSAP + Framer Motion, Three.js 3D.
🔹 Agent Swarms, elevated - 300 parallel sub-agents × 4,000 steps per run (up from K2.5's 100 / 1,500). One prompt, 100+ files.
🔹 Proactive Agents - the K2.6 model powers OpenClaw, Hermes Agent, etc. for 24/7 autonomous ops.
🔹 Claw Groups (research preview) - bring your own agents; command your friends', bots & humans in the loop.

K2.6 is now live on kimi.com in chat mode and agent mode. For production-grade coding, pair K2.6 with Kimi Code: kimi.com/code
🔗 API: platform.moonshot.ai
🔗 Tech blog: kimi.com/blog/kimi-k2-6
🔗 Weights & code: huggingface.co/moonshotai/Kim…

23 replies · 87 reposts · 1.3K likes · 115.5K views
Arcadian Computers (RKDN) retweeted
SDF
SDF@sdf_pubnix·
Wow! The vintage systems are getting a workout thanks to @lauriewired. If something goes offline, we'll do our best to bring it back up. We did get a few new members, so we are grateful for your support and participation! Be sure to check out our restoration blog: icm.museum
LaurieWired@lauriewired

More Vintage Computing museums should rent out cloud access to their rare hardware. SDF (Super Dimension Fortress) does it, and it’s freaking awesome. I’m literally logged into a Sun SPARCstation…anyone can do this for free, right now. Just SSH in.

3 replies · 9 reposts · 58 likes · 2K views
Arcadian Computers (RKDN) retweeted
How To AI
How To AI@HowToAI_·
Yann LeCun was right the entire time. And generative AI might be a dead end.

For the last three years, the entire industry has been obsessed with building bigger LLMs. Trillions of parameters. Billions in compute. The theory was simple: if you make the model big enough, it will eventually understand how the world works.

Yann LeCun said that was stupid. He argued that generative AI is fundamentally inefficient. When an AI predicts the next word, or generates the next pixel, it wastes massive amounts of compute on surface-level details. It memorizes patterns instead of learning the actual physics of reality.

He proposed a different path: JEPA (Joint-Embedding Predictive Architecture). Instead of forcing the AI to paint the world pixel by pixel, JEPA forces it to predict abstract concepts. It predicts what happens next in a compressed "thought space."

But for years, JEPA had a fatal flaw: "representation collapse." Because the AI was allowed to simplify reality, it would cheat. It would simplify everything so much that a dog, a car, and a human all looked identical. It learned nothing. To fix it, engineers had to use insanely complex hacks, frozen encoders, and massive compute overheads.

Until today. Researchers just dropped a paper called "LeWorldModel" (LeWM). They completely solved the collapse problem. They replaced the complex engineering hacks with a single, elegant mathematical regularizer. It forces the AI's internal "thoughts" into a perfect Gaussian distribution. The AI can no longer cheat: it is forced to understand the physical structure of reality to make its predictions.

The results completely rewrite the economics of AI. LeWM didn't need a massive, centralized supercomputer. It has just 15 million parameters. It trains on a single, standard GPU in a few hours. Yet it plans 48x faster than massive foundation world models. It intrinsically understands physics. It instantly detects impossible events.

We spent billions trying to force massive server farms to memorize the internet. Now, a tiny model running locally on a single graphics card is actually learning how the real world works.
234 replies · 953 reposts · 5.8K likes · 427.6K views
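The collapse-versus-regularizer idea can be sketched without the paper: a penalty that pushes a batch of embeddings toward zero mean and identity covariance makes a fully collapsed representation (every input mapped to the same point) maximally costly. This is a toy illustration, not LeWM's actual formulation; the function name and penalty form are invented for the sketch:

```python
import numpy as np

def gaussian_regularizer(z):
    """Toy penalty pushing a batch of embeddings z (n, d) toward a
    standard Gaussian: squared distance of the batch mean from zero,
    plus squared distance of the batch covariance from the identity.
    Collapsed embeddings score worst: their covariance is all zeros."""
    mu = z.mean(axis=0)
    cov = np.cov(z, rowvar=False)
    d = z.shape[1]
    return float(mu @ mu + np.sum((cov - np.eye(d)) ** 2))

rng = np.random.default_rng(0)
healthy = rng.standard_normal((1024, 8))  # spread-out representations
collapsed = np.ones((1024, 8))            # every input mapped to one point

print(gaussian_regularizer(healthy))    # small: already near-Gaussian
print(gaussian_regularizer(collapsed))  # large: collapse is heavily penalized
```

The point of the toy: under such a penalty, "simplifying everything until a dog, a car, and a human look identical" is no longer free, so the encoder must keep distinctions to stay near the target distribution.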