trias
@trias
638 posts
crypto - AI and net zero
Jakarta Capital Region, Indonesia · Joined May 2012
339 Following · 678 Followers
trias reposted
Tech with Mak
Tech with Mak@techNmak·
In 1948, a 32-year-old at Bell Labs published a paper nobody fully understood. Engineers found it too mathematical. Mathematicians found it too engineering-focused. One prominent mathematician reviewed it negatively. That paper, "A Mathematical Theory of Communication", became the founding document of the digital age. The man was Claude Shannon, father of information theory.

At 21, he wrote the most important master's thesis of the 20th century. Working at MIT on an early mechanical computer, Shannon noticed its relay switches had exactly two states: open or closed. He had just taken a philosophy course introducing Boolean algebra, which also operated on two values: true and false. Nobody had ever connected these two things. His 1937 thesis proved that Boolean algebra and electrical circuits are mathematically identical, and that any logical operation can be built from simple switches. Howard Gardner called it "possibly the most important, and also the most famous, master's thesis of the century." Every digital computer ever built traces back to this insight.

At 29, he proved that perfect encryption exists. During WWII, Shannon worked on classified cryptography at Bell Labs. His work contributed to SIGSALY, the secure voice system used for confidential communications between Roosevelt and Churchill. In a classified 1945 memorandum, he mathematically proved that the one-time pad provides perfect secrecy: unbreakable not just computationally, but provably, permanently, against an adversary with infinite power. When declassified in 1949, it transformed cryptography from an art into a science and laid the foundations for DES, AES, and every modern encryption standard.

At 32, he defined what information is. His 1948 paper introduced one equation:

H = −Σ p(x) log p(x)

Shannon entropy: the average uncertainty in a probability distribution, and the minimum bits required to encode a message. Three things followed:
> He defined the bit, the fundamental unit of all information. His colleague John Tukey coined the name.
> He proved the channel capacity theorem: every communication channel has a maximum rate of reliable transmission. You can approach it. You can never exceed it.
> He unified telegraph, telephone, and radio into a single mathematical framework for the first time.

Robert Lucky of Bell Labs called it the greatest work "in the annals of technological thought."

Where his equation lives in AI today: Cross-entropy loss, the function training every classifier and language model, is derived directly from H. Decision tree splits use information gain, which is H applied to data. Perplexity, the standard LLM evaluation metric, is an exponentiation of cross-entropy. Every time a neural network trains, Shannon's formula runs inside it.

He also built the first AI learning device. In 1950, Shannon built Theseus, a mechanical mouse that navigated a maze through trial and error, learned the correct path, and repeated it perfectly. Mazin Gilbert of Bell Labs said: "Theseus inspired the whole field of AI." That same year he published the first paper on programming a computer to play chess. He co-organized the 1956 Dartmouth Workshop, the founding event of AI as a field.

The man: He rode a unicycle through Bell Labs hallways while juggling. He built a flame-throwing trumpet, a rocket-powered Frisbee, and Styrofoam shoes to walk on the lake behind his house. He called his home Entropy House. When asked what motivated him: "I was motivated by curiosity. Never by the desire for financial gain. I just wondered how things were put together."

In 1985, he appeared unexpectedly at a conference in Brighton. The crowd mobbed him for autographs. Persuaded to speak at the banquet, he talked briefly, then pulled three balls from his pockets and juggled instead. One engineer said: "It was as if Newton had showed up at a physics conference."

He died in 2001 after a decade with Alzheimer's, the cruel irony of information slowly leaving the mind of the man who defined what information was. Claude, the AI model, is named after Claude Shannon, the mathematician who laid the foundation for the digital world we rely on today.
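The entropy and cross-entropy relationship described in the tweet is easy to check numerically. A minimal sketch using only the standard library; the helper names are mine, not from any library, and log base 2 is used so the units are bits:

```python
import math

def entropy(p):
    """Shannon entropy H(p) = -sum p(x) * log2 p(x), in bits."""
    return -sum(px * math.log2(px) for px in p if px > 0)

def cross_entropy(p, q):
    """H(p, q) = -sum p(x) * log2 q(x): cost of coding p with a code built for q."""
    return -sum(px * math.log2(qx) for px, qx in zip(p, q) if px > 0)

fair = [0.5, 0.5]      # fair coin: maximum uncertainty for two outcomes
biased = [0.9, 0.1]    # biased coin: more predictable, lower entropy

print(entropy(fair))                 # 1.0 bit
print(entropy(biased))               # ~0.47 bits
print(cross_entropy(fair, biased))   # always >= entropy(fair)
print(2 ** cross_entropy(fair, biased))  # perplexity: exponentiated cross-entropy
```

This is exactly the structure the tweet points at: cross-entropy loss penalizes a model (q) for mismatching the data distribution (p), and perplexity is just 2 raised to that cross-entropy.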
189 replies · 2.1K reposts · 7.6K likes · 435K views

trias reposted
LocalAI
LocalAI@LocalAIx·
I benchmarked vLLM vs llama.cpp on dual Intel Arc Pro B70 GPUs (32GB each). Results are interesting.

The good: vLLM's prefill is 8-16x faster than llama.cpp. Same model, same precision (BF16 vs FP16), same hardware. The gap is real.

The bad: vLLM can't run Qwen3.5 on XPU (GDN attention unsupported). It also OOMs on Qwen2.5-14B on a single 32GB card at FP16. llama.cpp handles both fine.

Qwen2.5-14B, dual GPU, BF16/FP16:
pp128: llama.cpp 268 t/s vs vLLM 2,069 t/s
pp2048: llama.cpp 692 t/s vs vLLM 11,385 t/s
tg128: ~35 t/s both (tie)

Root cause: llama.cpp's SYCL flash attention uses scalar FP16 ops. vLLM uses Intel's XMX/DPAS matrix units via CUTLASS FMHA. The code literally has a "// Todo: Use the XMX kernel if possible" comment. Token generation is a dead tie because both engines are memory-bandwidth-bound during decode; XMX doesn't help there.

For B70 owners today: llama.cpp is the practical choice (runs everything, great quant support, no Docker needed). But there's a massive fixable gap in prefill that the community should know about.
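Taking the thread's quoted throughput numbers at face value, the prefill speedups land roughly in the claimed 8-16x band. A quick arithmetic check (variable names are mine):

```python
# Prefill throughput in tokens/s quoted in the thread: (llama.cpp, vLLM)
prefill = {
    "pp128": (268, 2_069),    # 128-token prompt
    "pp2048": (692, 11_385),  # 2048-token prompt
}

for name, (llama_tps, vllm_tps) in prefill.items():
    # Speedup = vLLM throughput over llama.cpp throughput
    print(f"{name}: vLLM is {vllm_tps / llama_tps:.1f}x faster")
```

The ratios come out near 7.7x and 16.5x, which is consistent with the "8-16x" headline; the gap grows with prompt length, as expected if the bottleneck is matrix-unit utilization during prefill rather than memory bandwidth.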
2 replies · 2 reposts · 6 likes · 442 views

trias reposted
Furkan Gözükara
Furkan Gözükara@FurkanGozukara·
Massive geopolitical shift. NHK World confirms Japan has perfected a revolutionary process to extract high purity lithium from dead batteries with a staggering 90 percent recovery rate. This brilliant technological leap guarantees Japan's absolute economic security.
248 replies · 5.9K reposts · 31.9K likes · 917.3K views

trias reposted
Alvaro Cintas
Alvaro Cintas@dr_cintas·
You can now fine-tune Gemma 4 completely FREE 🤯 No GPU. No credit card. No coding knowledge required. Just a browser and 500+ models to choose from.
→ Open the Unsloth Colab notebook
→ Pick your model + dataset
→ Hit Start Training
38 replies · 276 reposts · 2.3K likes · 169.4K views

trias reposted
How To AI
How To AI@HowToAI_·
🚨 Someone just open-sourced a tool that converts PDFs to Markdown at 100 pages per second. It's called OpenDataLoader. It runs entirely on CPU and handles complex layouts, tables, and nested structures like a senior dev. 100% free.
41 replies · 350 reposts · 2.7K likes · 176.6K views

trias reposted
Khairallah AL-Awady
Khairallah AL-Awady@eng_khairallah1·
🚨 BREAKING: Someone just built a headless browser from scratch that makes Chrome look like a joke for AI agents. It's called Lightpanda. Bookmark it for later.

Not a Chromium fork. Not a WebKit patch. A completely new browser written from zero in Zig with one purpose: headless performance for machines.

Here's the problem right now: Every AI agent doing web automation is running Chrome under the hood. A full desktop browser with CSS rendering, GPU compositing, image decoding, font rasterization. All of it running on a server, for an agent that will never see a single pixel. You're paying for 100% of Chrome and using 20% of it.

Lightpanda strips out everything your agent doesn't need and keeps everything it does:
→ Full JavaScript execution via V8: Ajax, Fetch, SPAs, infinite scroll, dynamic content
→ HTML parsing via html5ever (Mozilla's battle-tested parser)
→ Custom DOM engine built in Zig
→ No CSS layout. No image decoder. No GPU compositor. No font rasterizer
→ Built-in MCP server for direct AI agent integration

The benchmarks are brutal:
→ 100-page scrape: Chrome takes 25.2 seconds. Lightpanda does it in 2.3 seconds
→ Peak memory: Chrome uses 207MB. Lightpanda uses 24MB
→ At scale: 140 concurrent Lightpanda sessions fit in the same RAM as 9 Chrome sessions
→ At 100 tabs, Chrome takes over an hour. Lightpanda finishes in under 5 seconds

Still in beta, and Web API coverage is growing. But the founder built this after running 20 million Chrome crawls per day at his previous company and watching the infrastructure costs pile up. 100% open source. AGPL-3.0. (Link in the comments)
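The headline benchmarks above imply these ratios; this is just arithmetic on the tweet's own figures, not an independent measurement:

```python
# Figures quoted in the tweet: Chrome vs Lightpanda
chrome_scrape_s, lightpanda_scrape_s = 25.2, 2.3  # 100-page scrape, seconds
chrome_peak_mb, lightpanda_peak_mb = 207, 24      # peak memory, MB

print(f"scrape: {chrome_scrape_s / lightpanda_scrape_s:.1f}x faster")
print(f"memory: {chrome_peak_mb / lightpanda_peak_mb:.1f}x smaller footprint")
```

That works out to roughly an 11x speed advantage and an 8-9x memory advantage per session, which is the mechanism behind the "140 concurrent sessions vs 9" claim: a smaller per-session footprint multiplies directly into concurrency on fixed RAM.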
79 replies · 184 reposts · 1.5K likes · 121.9K views
trias
trias@trias·
@bahrozii @ainunnajib @aganurcholis @MiskinTV_ MU and NU were founded before the NKRI existed, so it's natural that they don't defer to the government; it's just a matter of ego. It has nothing to do with intelligence, because all of that is relative.
0 replies · 0 reposts · 0 likes · 15 views
Bachrozi
Bachrozi@bahrozii·
@ainunnajib @aganurcholis @MiskinTV_ MU people are smart; so smart they get stubborn, upholding science but forgetting wisdom and the importance of unity. Differences are actually normal; this is where the government's role comes in, as the arbiter of those differences. Follow the government and the disagreement is settled, it's as simple as that.
16 replies · 2 reposts · 18 likes · 7.6K views
SobatMiskinTV
SobatMiskinTV@MiskinTV_·
The references Indonesians use to determine the date of Eid (Lebaran):
- The government
- Muhammadiyah
- Saudi Arabia
- Mr. @ainunnajib
64 replies · 47 reposts · 585 likes · 125.8K views

trias reposted
Idris
Idris@7signxx·
In front of the UN Human Rights Council in Geneva, Hong Kong MP Dominic Lee made a shocking statement: "The West talks about human rights, but lets Israel get away with committing genocide." "The blood of Palestinians and Iranians is on their hands." "What moral credentials does America have? A country ruled by Epstein's followers!" ©prinzchal
689 replies · 18.3K reposts · 50.3K likes · 742.1K views

trias reposted
Haris Firdaus
Haris Firdaus@harisfirdaus·
Friends who need it: maps of Indonesia's administrative regions in GeoJSON format are at petanusa.web.id. Maps are available for provinces, regencies/cities, districts, and villages/sub-districts. Boundary data is taken from the Laravel Nusa API: nusa.creasi.dev
68 replies · 1.6K reposts · 5.8K likes · 164K views

trias reposted
JATOSINT
JATOSINT@Jatosint·
🇮🇩🇦🇪 Indonesian Police Paramilitary Unit #Brimob Women's Team in action during the first day of the ongoing @swat_challenge 2026
35 replies · 236 reposts · 1.6K likes · 76.2K views
Chao Huang
Chao Huang@huang_chao4969·
nanobot blew up way beyond what we expected: 8k+ stars & 1.1k forks on GitHub in just 4 days! 🤯 We hope everyone sees nanobot as a simpler, more lightweight alternative to OpenClaw with faster deployment. We've been pulling all-nighters integrating everyone's PRs and fixing issues on GitHub to make 🐈 nanobot faster and stronger. Huge shoutout to Xubin for the extremely hard work!

Key updates from these four days:
- Feishu/Lark channel support
- DeepSeek provider integration
- Natural language task scheduling
- vLLM/local LLM support
- Docker deployment + bug fixes

What features do you most want 🐈 nanobot to have next?

GitHub: github.com/HKUDS/nanobot

#Clawdbot #OpenClaw #AIAssistant #Agents
76 replies · 58 reposts · 730 likes · 50.4K views
Anthropic
Anthropic@AnthropicAI·
New Engineering blog: We tasked Opus 4.6, using agent teams, to build a C compiler. Then we (mostly) walked away. Two weeks later, it worked on the Linux kernel. Here's what it taught us about the future of autonomous software development. Read more: anthropic.com/engineering/bu…
868 replies · 2.5K reposts · 21.3K likes · 8.5M views

trias reposted
NgôThinh
NgôThinh@NgoThinh001110·
@Trias nice chart
0 replies · 1 repost · 4 likes · 372 views

trias reposted
Mischa van den Burg
Mischa van den Burg@mischavdburg·
CTOs would rather hire the guy who has a Kubernetes cluster running in his basement than someone with tons of certifications. Want to stand out in DevOps interviews? Learn why running a K8s homelab is your secret weapon to landing that next role. The full video shows you exactly why, in depth 👇
20 replies · 87 reposts · 768 likes · 58.2K views

trias reposted
Bloomberg Originals
Bloomberg Originals@bbgoriginals·
"I will still be here long after you're gone." @emilychangtv meets her digital persona, built by computer scientists working to reconstruct human identity based on personal data. More on Posthuman: trib.al/rlOcO45
0 replies · 3 reposts · 7 likes · 6.4K views