Hardhat Chad
@HardhatChad
7.5K posts
Foreman @OREsupply, Maintainer @STEELnew
Joined June 2023
1.7K Following · 16K Followers

Pinned Tweet
Hardhat Chad @HardhatChad
It's not enough that I succeed. Everyone else must win forever
61 replies · 97 reposts · 652 likes · 114.9K views
Hardhat Chad retweeted
Dwayne @CtrlAltDwayne
The best argument for Rust in 2026 is not memory safety or performance. It is that AI writes better Rust than it writes C++. The compiler feedback loop is so tight that models self-correct in real time. Every error message is a free training signal. Rust was accidentally designed for AI-assisted development 10 years before anyone knew that mattered.
110 replies · 172 reposts · 2.5K likes · 170.7K views
Hardhat Chad retweeted
vixhaℓ @TheVixhal
Computer science is gradually returning to the domain of physicists, mathematicians, and electrical engineers as large language models automate much of what we currently call software engineering. The field’s center of gravity is shifting away from manual code writing and toward deeper theoretical thinking, mathematical insight, and systems-level reasoning.
329 replies · 1.7K reposts · 15.5K likes · 947.8K views
Hardhat Chad retweeted
Jewish Badger @BadgerBetsMMA
. $ORE made more revenue than $SOL in the last 30d and you’re still sleeping on it? NGMI.
[image]
8 replies · 244 reposts · 309 likes · 6.7K views
Hardhat Chad retweeted
Todd Saunders @toddsaunders
The token cost to build a production feature is now lower than the meeting cost to discuss building that feature. Let me rephrase: it is literally cheaper to build the thing and see if it works than to have a 30-minute planning meeting about whether you should build it. It's wild when you think about it.

This completely inverts how you should run a software organization. The planning layer becomes the bottleneck because the building layer is essentially free. The cost of code has dropped to essentially zero.

The rational response is to eliminate planning for anything that can be tested empirically. Don't debate whether a feature will work. Just build it in 2 hours, measure it with a group of customers, and then decide to kill or keep it.

I saw a startup operating this way and their build velocity is up 20x. Decision quality is up because every decision is informed by a real prototype, not a slide deck and an expensive meeting. We went from "move fast and break things" to "move fast and build everything." The planning industrial complex is dead. Thank god.
374 replies · 569 reposts · 5.5K likes · 462.1K views
Hardhat Chad @HardhatChad
If you have some moat like regulatory capture or network effects, your options are different. But most SaaS businesses are undifferentiated and can be vibe coded in a day or two
0 replies · 1 repost · 6 likes · 626 views
Hardhat Chad @HardhatChad
INT2 quantization is the bear case for photonics and analog compute btw
Guri Singh @heygurisingh

Holy shit... Microsoft open sourced an inference framework that runs a 100B parameter LLM on a single CPU. It's called BitNet. And it does what was supposed to be impossible.

No GPU. No cloud. No $10K hardware setup. Just your laptop running a 100-billion parameter model at human reading speed.

Here's how it works: every other LLM stores weights in 32-bit or 16-bit floats. BitNet uses 1.58 bits. Weights are ternary: just -1, 0, or +1. That's it. No floats. No expensive matrix math. Pure integer operations your CPU was already built for.

The result:
- 100B model runs on a single CPU at 5-7 tokens/second
- 2.37x to 6.17x faster than llama.cpp on x86
- 82% lower energy consumption on x86 CPUs
- 1.37x to 5.07x speedup on ARM (your MacBook)
- Memory drops by 16-32x vs full-precision models

The wildest part: accuracy barely moves. BitNet b1.58 2B4T, their flagship model, was trained on 4 trillion tokens and benchmarks competitively against full-precision models of the same size. The quantization isn't destroying quality. It's just removing the bloat.

What this actually means:
- Run AI completely offline. Your data never leaves your machine
- Deploy LLMs on phones, IoT devices, edge hardware
- No more cloud API bills for inference
- AI in regions with no reliable internet

The model supports ARM and x86. Works on your MacBook, your Linux box, your Windows machine.

27.4K GitHub stars. 2.2K forks. Built by Microsoft Research. 100% Open Source. MIT License.

0 replies · 1 repost · 18 likes · 2K views
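The ternary trick the quoted tweet describes can be sketched in a few lines. This is a minimal illustration, assuming the "absmean" scheme described in the BitNet b1.58 paper (scale a weight matrix by its mean absolute value, then round every weight to -1, 0, or +1); the function names are illustrative and not part of Microsoft's actual bitnet.cpp API.

```python
# Sketch of BitNet-style 1.58-bit (ternary) weight quantization.
# Assumption: the absmean scheme from the BitNet b1.58 paper; names
# like quantize_ternary are hypothetical, not Microsoft's API.
import numpy as np

def quantize_ternary(w: np.ndarray):
    """Return (ternary int8 weights, per-matrix scale)."""
    scale = np.mean(np.abs(w)) + 1e-8          # absmean scale
    q = np.clip(np.rint(w / scale), -1, 1)     # round to {-1, 0, +1}
    return q.astype(np.int8), float(scale)

def ternary_matmul(x: np.ndarray, q: np.ndarray, scale: float):
    """y = x @ W ≈ (x @ q) * scale. Because q holds only -1, 0, +1,
    the inner products reduce to integer adds/subtracts — the part
    a plain CPU is already good at."""
    return (x @ q.astype(np.float32)) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 32)).astype(np.float32)   # full-precision weights
x = rng.normal(size=(4, 64)).astype(np.float32)    # activations

q, s = quantize_ternary(w)
y_exact = x @ w
y_approx = ternary_matmul(x, q, s)

print(sorted(set(q.ravel().tolist())))  # ternary: values drawn from {-1, 0, 1}
```

The memory claim falls out directly: an int8 ternary weight (and in packed form, ~1.58 bits, since log2(3) ≈ 1.58) replaces a 16- or 32-bit float, and the matmul no longer needs floating-point multiplies at all.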
Hardhat Chad retweeted
Naval @naval
It’s not about junior vs senior, it’s about “good with AI” vs “not good with AI.”
936 replies · 1.9K reposts · 17.7K likes · 858.3K views
Hardhat Chad retweeted
ORE @OREsupply
Send, receive, and earn ORE in private. Available today.
[image]
12 replies · 55 reposts · 146 likes · 7.8K views
Hardhat Chad retweeted
Nick Maselli @NickBuilds11
The term "Fullstack Engineer" now means "Hardware, Electrical, and Software" capability
78 replies · 165 reposts · 2K likes · 64.3K views
Hardhat Chad retweeted
ORE @OREsupply
ORE continues to redefine the mining primitive in 2026. With sustainable tokenomics, protocol revenue, staking yield, and open source code, ORE is building a digital store of value for the next generation of crypto.
Solana Daily @solana_daily

Top 10 Solana DApps by Revenue in the Last 30 Days 📊
1. @Pumpfun
2. @JupiterExchange
3. @AxiomExchange
4. @phantom
5. @tryfomo
6. @ant_fun_trade
7. @OREsupply
8. @MeteoraAG
9. @BagsApp
10. @TrojanOnSolana

7 replies · 57 reposts · 150 likes · 8.1K views
Hardhat Chad @HardhatChad
I have been surprised by how many ORE miners are active on Seeker
toly 🇺🇸 @toly

.@solanamobile is a special place right now. There is just enough interest and not enough noise for early stage founders to get their first 10k users.

18 replies · 15 reposts · 132 likes · 15K views
Hardhat Chad retweeted
Brian @Lothaen
We had a great turnout for the first Miner Side chat on the $ORE discord yesterday evening. Thanks to @HardhatChad, the crew from @minemoreapp, and @Starship_Fronk for hosting / MC. Looking forward to our next episode. I'll be working on the upload for Monday!
[image]
10 replies · 10 reposts · 42 likes · 2.1K views