md
@cryptokendefi
software engr
899 posts
Joined November 2022
36 Following · 262 Followers
Pinned Tweet
md · @cryptokendefi
Created an app to visually summarize accumulation (withdrawals from exchanges) and potential selling (deposits to exchanges). This is the accumulation data, with dust (< 10M Qus) removed. $QUBIC
[image]
8 replies · 10 retweets · 65 likes · 2.3K views
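For context, a minimal sketch of the classification the app tweet describes: tag each exchange-related transfer as accumulation (a withdrawal from an exchange) or potential selling (a deposit to an exchange), and drop dust below 10M Qus. The record shape and the function name `classify_flows` are illustrative assumptions; only the direction-based split and the 10M-Qus dust cutoff come from the tweet:

```python
# Sketch: classify exchange flows as accumulation vs. potential selling.
# Assumed record shape (hypothetical): {"from": addr, "to": addr, "amount": qus}
DUST_THRESHOLD = 10_000_000  # 10M Qus, the cutoff mentioned in the tweet

def classify_flows(transfers, exchange_addresses):
    """Split transfers into accumulation (exchange -> wallet) and
    potential selling (wallet -> exchange), ignoring dust."""
    accumulation, selling = [], []
    for t in transfers:
        if t["amount"] < DUST_THRESHOLD:
            continue  # ignore dust below 10M Qus
        from_exchange = t["from"] in exchange_addresses
        to_exchange = t["to"] in exchange_addresses
        if from_exchange and not to_exchange:
            accumulation.append(t)  # withdrawal from an exchange
        elif to_exchange and not from_exchange:
            selling.append(t)       # deposit to an exchange
    return accumulation, selling
```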
md retweeted
Come-from-Beyond · @c___f___b
A major update of #Aigarth is approaching. We are transforming #Qubic into a giant "anthill" where every miner will be searching for shares in a coordinated manner (like ants for food). But we are not trying to create #SwarmIntelligence; we are using it for something more ambitious.
87 replies · 383 retweets · 1.2K likes · 60.5K views
md retweeted
Qubic · @_Qubic_
Just over a week into Doge mining. 12 Doge blocks on mainnet. 65 TH/s stress tested. But the stat that matters most isn't in the numbers. It's the fact that Qubic is the first network running ASIC-based Doge mining and AI compute training in parallel. At the same time. Both at full capacity. No trade-offs, no time-sharing between the two. That's not a feature. That is an entirely different category of infrastructure.

Phase 1 is a stress test. Four independent pools connected. 1.3M+ pool shares accepted. 43.5K+ tasks distributed. The network has already been pushed to 65 TH/s. No Doge top-ups yet, and that's intentional. Phase 1 exists to prove the pipeline works before real value flows through it, and it does.

Phase 2 brings the Doge top-ups and full computor migration, and that's when the economics kick in. This is the stress test.
[image]
53 replies · 191 retweets · 741 likes · 57.2K views
md retweeted
r0wie · @r0wie_eth
All people worry about is CFB talking about price, but one thing he doesn't do is promote anything that doesn't benefit the speed or performance of the protocol. He is a true professional and degen lord. Sell your soul to the VCs and you become centralised. $TAO $QUBIC
3 replies · 12 retweets · 84 likes · 1.7K views
Qubic · @_Qubic_
Dogecoin mining is now live on Qubic mainnet. This is not your standard DOGE pool. Not even close. 🧵
[image]
46 replies · 188 retweets · 722 likes · 23.7K views
md retweeted
NoDrift · @ElGuiscardo
$QUBIC is rapidly gaining mining share in $DOGE and the reason is obvious: it is more profitable to mine $DOGE via $QUBIC. If you are wise, repoint your ASIC miners towards the $QUBIC pools and enjoy higher returns. Blind allegiance doesn't pay your bills.
2 replies · 15 retweets · 93 likes · 1.7K views
md retweeted
dogegod · @_dogegod_
The Dogecoin mining network has begun its major upgrade with Qubic integration on April 1, 2026. Full production by end of the month.
[images]
36 replies · 120 retweets · 652 likes · 23K views
md retweeted
cfb_token · @c_f_b_token
🚀 Look up in the sky, our mission has begun! #DogeMeetsQubic
⭐️ DOGE BLOCKS FOUND: 1
✅ DOGE BLOCKS CONFIRMED: 1
🔵 HASH 4613d3fdada57e4d38397a549716c93aa06e76ff1115802a7af03c3069ac725d
🟢 MINED ON 03 Apr 2026 05:23:24 UTC
[image]
4 replies · 25 retweets · 117 likes · 2.2K views
md · @cryptokendefi
@elitebreedss Congrats, $QUBIC team! 🥂🫣
0 replies · 0 retweets · 4 likes · 161 views
elitebreedss · @elitebreedss
A block was found... Congratulations $Qubic, this is just the beginning.
[image]
5 replies · 17 retweets · 155 likes · 1.8K views
md · @cryptokendefi
@The_Squale_ Ahaa, the timezone is based on the Qubic explorer's timezone 🥂
0 replies · 0 retweets · 0 likes · 75 views
MrUnknown · @The_Squale_
@cryptokendefi Nice! What time zone did you pick for the X-axis (UTC)? I think I'm seeing mine 🤣
1 reply · 0 retweets · 0 likes · 71 views
md · @cryptokendefi
@AllKimya The $QUBIC Incubation Lead doesn't support or want it, because he said it doesn't have any "business" value. So I'm trying to figure out how I can continue the development, since this was initially a PoC.
0 replies · 0 retweets · 2 likes · 223 views
md · @cryptokendefi
@crypto_with_seb
- Someone keeps spreading that it's an April Fools event even though it's not (some are aware it's a prank, but not all)
- The community kept building hype that the team could earn millions on day 1; the team (thankfully) clarified it by saying it has 3 phases
- Miners selling
1 reply · 0 retweets · 1 like · 329 views
Crypto Seb · @crypto_with_seb
Some laughed when I said it might be a "buy the rumor, sell the news" event 🤷🏼‍♂️ Don't worry, if mining $doge really delivers, this will be just a fart in the wind 😝 $Qubic will be fine.
[image]
27 replies · 11 retweets · 199 likes · 9K views
md · @cryptokendefi
@RealistYksk Cleanest dashboard of them all. What's the link? $QUBIC
1 reply · 0 retweets · 2 likes · 137 views
md · @cryptokendefi
@c_f_b_token $CFB meets $DOGE 🫵🥂
0 replies · 0 retweets · 4 likes · 60 views
md retweeted
cfb_token · @c_f_b_token
April 1, 2026 #DogeMeetsQubic The beginning of a new journey: $CFB will ride Doge into boundless space. There are no limits.
[image]
14 replies · 25 retweets · 140 likes · 3.1K views
md · @cryptokendefi
Also made a path finder that uses DFS to search all paths through network transfers. $QUBIC
[image]
0 replies · 1 retweet · 8 likes · 379 views
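For context, a minimal sketch of the technique the tweet names: a depth-first search that enumerates every simple (cycle-free) path between two addresses in a transfer graph. The adjacency-list shape and the function name `find_all_paths` are illustrative assumptions; only "DFS over network transfers" comes from the tweet:

```python
# Sketch: DFS enumeration of all simple paths between two addresses
# in a transfer graph (address -> list of addresses it has sent to).
def find_all_paths(graph, start, target, path=None):
    """Yield every simple (cycle-free) path from start to target."""
    path = (path or []) + [start]
    if start == target:
        yield path
        return
    for nxt in graph.get(start, ()):
        if nxt not in path:  # skip addresses already on this path
            yield from find_all_paths(graph, nxt, target, path)

# Example with a toy transfer graph:
graph = {"A": ["B", "C"], "B": ["C", "D"], "C": ["D"]}
for p in find_all_paths(graph, "A", "D"):
    print(" -> ".join(p))  # A -> B -> C -> D, A -> B -> D, A -> C -> D
```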
md · @cryptokendefi
Largest potential accumulator for this epoch. $QUBIC
[image]
0 replies · 0 retweets · 10 likes · 427 views
md retweeted
Come-from-Beyond · @c___f___b
Look, guys, Yann is going in the same direction as us. x.com/Ric_RTP/status…

Quoted: Ricardo · @Ric_RTP
The man who INVENTED modern AI just made a billion dollar bet that ChatGPT, Claude, and every AI company on earth is building the wrong technology.

Yann LeCun won the Turing Award in 2018 for creating the neural networks that made AI possible. He spent a decade running AI research at Meta. Oversaw the creation of Llama and PyTorch, the tools that half the AI industry runs on.

Then he quit. And raised $1.03 billion in a seed round. The LARGEST seed round in European history. $3.5 billion valuation before generating a single dollar of revenue. Bezos wrote the check. So did Nvidia. Samsung. Toyota. Temasek. Eric Schmidt. Mark Cuban. Tim Berners-Lee (the guy who invented the World Wide Web).

His new company is called AMI Labs. And it's built on one thesis: every AI company spending billions on large language models is wasting their money.

ChatGPT, Claude, Gemini, Grok. They all work the same way. They predict the next word in a sequence. See "the cat sat on the" and predict "mat." Scale that to trillions of words and you get something that sounds intelligent. But LeCun says it doesn't UNDERSTAND anything. It can't reason. It can't plan. It can't predict what happens when you push a glass off a table. A two year old can do that. GPT-5 cannot. That's why AI hallucinates. It doesn't have a model of how the world actually works. It just predicts words.

His solution? Something called JEPA. Instead of predicting words, it learns how the PHYSICAL WORLD works. Abstract representations of reality. Not language but physics.

Think about what that means. Current AI can write your emails. LeCun's AI could design a car, run a factory, operate a robot, or diagnose a patient without hallucinating and killing someone. The CEO of AMI said it perfectly: "Factories, hospitals, and robots need AI that grasps reality. Predicting tokens doesn't cut it."

And here's what's really crazy to me... LeCun isn't some outsider throwing rocks. He literally built the foundations that ChatGPT runs on. He knows exactly how these systems work because he helped create them. And after watching the entire industry sprint in one direction for three years, he raised a billion dollars to run the OPPOSITE way. No product. No revenue. No timeline. Just pure research. He told investors it could take YEARS to produce anything commercial. But they funded it anyway in just four months.

Meanwhile OpenAI just raised $120 billion and still can't stop their models from making things up. Anthropic is building AI so dangerous they're afraid to release it. Google is burning billions trying to catch up. And the guy who started it all says they're all solving the wrong problem.

Two Turing Award winners raised $2 billion in three weeks betting AGAINST the entire LLM approach. LeCun at AMI. Fei-Fei Li at World Labs. The smartest people in AI are quietly building the exit from the technology everyone else is betting their future on. Either they're wrong and the trillion dollar LLM industry keeps printing. Or they're right and every AI company on earth just built on a foundation that's about to crack.

35 replies · 183 retweets · 675 likes · 37.9K views
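As a concrete illustration of the "predict the next word" framing in the quoted thread, here is a toy bigram predictor trained on a made-up corpus. This is not JEPA and not any production LLM architecture, only the simplest possible instance of the next-token objective the thread is criticizing:

```python
# Toy next-token prediction: a bigram model that predicts the most
# frequent follower of the last word seen. Real LLMs use transformers
# over subword tokens; the objective is the same, the scale is not.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat sat on the rug".split()

follower_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follower_counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word, or None."""
    counts = follower_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "cat" (seen twice, vs "mat"/"rug" once each)
```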
md retweeted
Come-from-Beyond · @c___f___b
The rest of the world is starting to notice the problem that we at #Qubic have already found a potential solution for - x.com/sukh_saroy/sta….

Quoted: Sukh Sroay · @sukh_saroy
🚨 Breaking: Princeton researchers just ran the numbers on where AI is actually heading. The results should make every founder, investor, and policymaker stop what they are doing.

Training OpenAI's next-gen model consumes an estimated 11 billion kWh of electricity. That is enough to power every home in New York City for a full year. More than the annual output of a nuclear reactor. For one model. One training run. And that is before a single user asks a single question.

Every time someone uses a reasoning model like o1 or DeepSeek-R1, it costs 33 Wh of energy per query. A standard GPT-4 query costs 0.42 Wh. That is a 79x energy multiplier. Per query. At billions of queries per day.

Now here is what nobody is saying out loud. The industry's answer to this is Stargate. A $500 billion compute campus. 5 gigawatts of power. Enough to run 5 million homes. Owned by the same four companies that already control the technology. They are building a new kind of utility. Except you do not elect its board.

Meanwhile the models consuming all that energy still cannot reliably reason outside of math and code. Everywhere else they pattern-match. They hallucinate. They confabulate confidence. Princeton's argument is that this is not a scaling problem. It is a structural one. More parameters have not fixed it. More data has not fixed it. The architecture itself is the ceiling.

Their alternative: stop chasing one god-model and build thousands of small specialists instead. Each one trained on curated domain data. Each one grounded in verified knowledge. Each one small enough to run on your phone.

The energy comparison is not close. A cloud query to a reasoning model uses 33 Wh and 20 milliliters of water. The same query on a local specialist model uses 0.001 Wh. Zero water. That is 10,000 times more efficient.

AlphaFold did not beat biologists by knowing everything. It won by going impossibly deep in one domain. A 14 billion parameter model trained on medical knowledge graphs just outperformed GPT-5.2 on complex clinical reasoning. Depth beats breadth when the domain is defined.

The question nobody building these systems wants to answer: if the only path to general AI requires the energy output of a small nation, controlled by a handful of companies, running on hardware most of the world cannot access, is that actually intelligence? Or is it just the most expensive pattern matcher ever built?

24 replies · 192 retweets · 588 likes · 30K views
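A quick arithmetic check of the multipliers quoted in the thread above, taking its Wh figures at face value. The inputs are the thread's own claims, not verified measurements:

```python
# Sanity-check the energy ratios quoted in the thread.
# All inputs are the thread's own claims, not verified data.
reasoning_query_wh = 33.0   # claimed cost of an o1 / DeepSeek-R1 query
gpt4_query_wh = 0.42        # claimed cost of a standard GPT-4 query
local_query_wh = 0.001      # claimed cost of a local specialist query

print(reasoning_query_wh / gpt4_query_wh)   # ~78.6, matching the quoted "79x"
print(reasoning_query_wh / local_query_wh)  # 33000.0, even larger than the
                                            # quoted "10,000 times more efficient"
```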
sharkie 'l · @nasirasalis
These will be my $qubic August targets. I may be wrong, but it will happen. Just don't be bearish around me.
[image]
4 replies · 11 retweets · 109 likes · 2.2K views