blockcipher🔮

20.5K posts

blockcipher🔮

@0x_techner

Bullish on RWA & AI | Enthusiast | Math Lover | Sharing Insights on CT

CT · Joined April 2022
1.8K Following · 2.4K Followers
Kyle Figs - 6figs @realsixfig
I’ve been quietly building ChartForecaster on ICP with @caffeineai — and it’s already running live. Here’s a look at it in real time 👇 ChartForecaster will be the Pump.fun of trading apps. One platform. Every market: stocks, forex, commodities, and crypto. It doesn’t matter where traders come from (web2/3) — they’ll end up on $ICP.
[image attached]
13 replies · 34 reposts · 211 likes · 4.2K views
Pranjal Bora 🧭 @Crypto_Pranjal
Is the Tabi TGE finally coming? 😅 @TabiVibe just showed April’s progress bar almost full, and said they’re quietly preparing for the next big step. Hopefully this leads to the TGE. It’s been almost 3 years since my first interaction with Tabi. I even put $7K into GG conversion back then. Since that point… just waiting.
[image attached]
69 replies · 9 reposts · 222 likes · 20.2K views
blockcipher🔮 retweeted
caffeine @caffeineai
Your app lives in Caffeine. Now your code can too - on your terms. A lot of you have been asking for this. You want to open your codebase in your own editor, version it properly, work with your existing tools, and bring it back into Caffeine when you're done. That's exactly what GitHub import lets you do. Export to GitHub, tweak it however you want, and pull it straight back in. Your workflow, your tools.
[image attached]
19 replies · 75 reposts · 257 likes · 13.4K views
blockcipher🔮 retweeted
Sui @SuiNetwork
Starting today, anywhere @RedotPay is accepted, so is SUI. 7M+ RedotPay customers. 130M+ merchants. 100+ countries. $SUI and $USDC-Sui are live for real-world use. So, who’s buying the intern a coffee with SUI? ☕
[image attached]
42 replies · 152 reposts · 753 likes · 61.2K views
Cheeky Crypto @CheekyCrypto
Do you still hold $ICP?
60 replies · 18 reposts · 231 likes · 6.8K views
Singularity Code.icp @SingCodeX
Please, @jerrybanfield, stop this. The Internet Computer is supposed to support other ecosystems. Saying that all other ecosystems aren’t useful is a mistake. We should be onboarding other ecosystems, not disparaging them. I also believe that the Internet Computer has by far the best technology, but that technology should be used to strengthen and support other ecosystems. If some ecosystems do not provide real value, they will fade away naturally. I think you should change your marketing strategy, because this approach will only push people from other ecosystems away and make them less likely to ever use the Internet Computer. $ICP
Jerry Banfield @jerrybanfield

Honest Hedera $HBAR Crypto Review: Built for Institutions, Not for You

42 replies · 11 reposts · 119 likes · 10.2K views
blockcipher🔮 retweeted
Uosof Ahmadi🔦 @uosofahmadi
Bellscoin @BellsChain is the best undiscovered story in the entire meme space. It is the father of Doge and shares the same genesis message, “Nintondo,” because Dogecoin was forked from Bellscoin. It is also merge mined with Litecoin and Dogecoin, giving it one of the highest hash rates in the space. All confirmed by the founder of Dogecoin.
Shibetoshi Nakamoto @BillyM2k

@Sizzouu it was inscribed in the genesis block of bellscoin as a nod to nintendo / animal crossing; then, since dogecoin was a quick fork of bellscoin, it’s also in doge’s genesis block xD

14 replies · 43 reposts · 80 likes · 4.6K views
blockcipher🔮 retweeted
Ash Crypto @AshCrypto
🚨CRASH: Oil has crashed -8.46% in the last 30 MINUTES and dropped below $80 per barrel.
[image attached]
196 replies · 419 reposts · 3.2K likes · 147.2K views
Moses ∞ @mosesibb
We should honestly end the #Near vs #ICP war. Tech-wise, #ICP is clearly optimized for something most chains aren’t: storing and serving real data fully on-chain. ✊✅

And here’s the part many overlook 👇

#Near was never designed for heavy on-chain data storage. It supports it, but economically it discourages it. That’s why most Near apps rely on IPFS or Arweave for images, keeping only hashes on-chain. On the other hand, #ICP is built for this use case.

To put it into perspective:
Storing 1MB on #ICP costs about $0.005 per year.
Storing 1MB on #Near requires roughly $30–$50 in locked capital (refundable, but still a barrier).

Different designs, different trade-offs. So instead of arguing which is “better,” it’s more accurate to say:
👉 #ICP is optimized for fully on-chain apps
👉 #Near is optimized for efficient, scalable smart contracts with minimal state

Both have their place.
[image attached]
12 replies · 14 reposts · 107 likes · 2.8K views
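The cost comparison in the tweet above is easy to sanity-check with a few lines of Python. A minimal sketch: the per-MB figures come straight from the tweet, while the function names and the 500 MB example app are illustrative assumptions.

```python
# Back-of-the-envelope check of the storage economics in the tweet above.
# Per-MB figures come from the tweet: ~$0.005/MB/year on ICP (recurring),
# vs. roughly $30-$50/MB in refundable locked capital on NEAR.

ICP_COST_PER_MB_YEAR = 0.005        # USD per MB per year, recurring
NEAR_LOCKED_PER_MB = (30.0, 50.0)   # USD per MB, locked but refundable

def icp_cost(mb: float, years: float) -> float:
    """Total recurring cost of keeping `mb` of data on ICP for `years`."""
    return mb * ICP_COST_PER_MB_YEAR * years

def near_locked_capital(mb: float) -> tuple[float, float]:
    """Low/high range of capital that must sit locked to hold `mb` on NEAR."""
    lo, hi = NEAR_LOCKED_PER_MB
    return mb * lo, mb * hi

if __name__ == "__main__":
    app_mb = 500  # hypothetical media-heavy app storing 500 MB fully on-chain
    lo, hi = near_locked_capital(app_mb)
    print(f"ICP, 5 years: ${icp_cost(app_mb, 5):,.2f} spent")
    print(f"NEAR:         ${lo:,.0f}-${hi:,.0f} locked up front (refundable)")
```

At the tweet's rates, a 500 MB app would spend about $12.50 over five years on ICP versus $15,000–$25,000 locked up front on NEAR, which is exactly why Near apps tend to keep only hashes on-chain.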
OpenGradient (∇, ∇) @OpenGradient
If you’re around, drop “gAI” 👇🏻 intern is double checking something important 👀
606 replies · 100 reposts · 507 likes · 22.4K views
blockcipher🔮 @0x_techner
Ice Open Network's CEO explains a four-year BVI-based operation funded via token agreements with a service provider instead of fiat banking. The provider's sale of unlocked tokens on April 7, after losing confidence and having covered $18M+ in total costs, triggered the recent $ION price crash. Attached expenditure reports detail $18.08M USD spent (2022–Q1 2026), mainly on development ($11M), infrastructure, and marketing, alongside ~718M ICE and ~185M ION tokens used for 29+ exchange listings, KOL promotions, liquidity provision, and services. With $400K in monthly operating costs and ~1B tokens remaining in treasury, the team, which claims zero salaries and net losses, says it may cut costs by half, sell treasury tokens if needed, or shut down and burn its holdings if sufficient community momentum does not materialize in the coming days.
Ice Open Network @ice_blockchain

🚨 An Update from the CEO

I want to speak openly about the situation we are facing. For more than four years, our company has operated out of the BVI without a traditional bank account. Throughout that time, the business was funded primarily through token-based agreements. That meant development, infrastructure, marketing, legal, and many other operational costs were covered through tokens rather than fiat.

This was possible because we worked with a service provider who believed deeply in our vision and agreed to support the project in exchange for tokens. For over four years, that provider stood by us and helped us build. However, due to recent market conditions, he lost confidence in the project and decided to claim tokens that were scheduled to unlock after two years, on 7 April. That event triggered the crash you have seen and brought our collaboration with him to an end. It has also placed the company in a very difficult position.

Over the past four years, the total cost of building this project has exceeded $18,000,000 USD. We have invoices, records, and audit trails for every expense. During this entire period, @ice_z3us, @robertpreoteasa, and @ice_apoll0 did not take salaries, because we believed in the long-term vision and chose to keep building.

As many of you know, under our tokenomics, the team managed approximately 4.2 billion tokens across the team, treasury, and ecosystem allocations. Because the monthly operating costs of the project were so high, we entered into a long-term agreement with the service provider under which he would receive a larger amount of tokens after two years in return for supporting the business and helping us scale. That structure was meant to buy us time to build properly and reach a stronger position.

The reality is that the cost of operating the project became far greater than what could reasonably be recovered. The provider ultimately lost money on the arrangement, and after investing around $18,000,000, he chose to exit and sell the tokens he was entitled to. That is what brought us to where we are today.

At this moment, the company still holds a little over 1 billion tokens. As the attached data shows, and based on the average prices at which the provider sold in recent days, it is clear that the company has been operating at a loss from the very beginning. Even so, we kept going because we truly believed in the project.

We have seen many accusations claiming that we, as a team, dumped tokens on the community. That is simply not true. What happened was the termination of an agreement with a long-term service provider, and that outcome has now been reflected in the market.

The project's current operating cost is around $400,000 per month. Many people do not realize how expensive it is to keep a project like this alive at scale. Even if every token we had received had been sold, it still would not have fully covered the total costs and obligations of the business. We never lied when we said we believed in this project. In fact, we are the ones who have been hurt the most by this situation.

Because I want to remain fully transparent, I have to say this clearly: we are now reviewing whether it is possible to reduce costs significantly over the coming weeks, potentially by half. If we continue operating, it may require us to sell part of the remaining treasury tokens to cover essential expenses. We are no longer in a position where we can keep absorbing losses indefinitely, especially after already carrying losses of roughly $8 million.

What happens next depends on whether the project still has real support from the market and the community. We will watch the coming days carefully and assess whether there is enough confidence and momentum for us to continue building. If there is, we will keep going. If there is not, we will be forced to consider shutting the project down. And if that happens, I want to be clear: we will burn our remaining tokens, not sell them.

It is also important for the community to understand how much of the unlocked token supply was used to support the ecosystem. Out of the 4.2 billion tokens managed across these years, more than 900 million tokens were used for exchange campaigns, KOLs, and liquidity. Many people ask for listings, but few understand what listings actually require: exchange liquidity, market making, campaigns, promotions, and other associated costs. These are real costs, and they are substantial.

There is another truth I have avoided discussing publicly until now, but I believe it is important to say it. Exchanges do not value all user bases equally. Large user numbers from Tier 3 countries did not help with listings in the way many people assumed. In many cases, exchanges specifically asked us for performance and user metrics excluding those regions. This is an uncomfortable reality of the industry, but it is a reality nonetheless.

In the images attached, I have also shared detailed costs, including what different exchanges charged us and how many tokens were required for marketing and listing-related activity. I want people to better understand how this industry really works. We have nothing to hide, and the exchanges involved can confirm the commercial structures.

I am deeply saddened that we are in this position, but I owe you the truth. The documentation is there. The records are there. The transaction history is there. If anything, we are the ones who lost the most trying to make this vision real.

0 replies · 0 reposts · 1 like · 83 views
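The figures in the update above (roughly $400K per month in operating costs and a little over 1B treasury tokens) imply a simple runway calculation. A back-of-the-envelope sketch in Python; the token prices below are purely illustrative, since the post quotes no price, and selling into a thin market would shorten the real runway.

```python
# Runway sketch using the figures in the update above: ~$400K/month
# operating cost and a little over 1B tokens left in treasury.
# Token prices below are illustrative only; the post quotes none.

MONTHLY_BURN_USD = 400_000
TREASURY_TOKENS = 1_000_000_000

def runway_months(token_price_usd: float, cost_cut: float = 0.0) -> float:
    """Months the treasury covers if sold gradually at `token_price_usd`.

    `cost_cut` models the "reduce costs by half" option (0.5 halves burn).
    Ignores the price impact of selling, which would shorten the runway.
    """
    burn = MONTHLY_BURN_USD * (1.0 - cost_cut)
    return TREASURY_TOKENS * token_price_usd / burn

if __name__ == "__main__":
    for price in (0.001, 0.002, 0.005):  # hypothetical prices, not quotes
        print(f"${price:.3f}/token: {runway_months(price):5.1f} months, "
              f"or {runway_months(price, cost_cut=0.5):5.1f} if burn is halved")
```

At a hypothetical $0.001 per token, for example, the treasury covers about 2.5 months at the stated burn, or 5 months if costs are halved, which is consistent with the urgency in the post.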
NEAR Intents @near_intents
NEAR Intents is now integrated into Oisy Wallet, built by @dfinity. @oisy users can now perform intents-based cross-chain swaps directly within the wallet, enabling access to native tokens across Ethereum, Solana, BNB Chain, Base, Polygon, and Arbitrum.
[image attached]
OISY Wallet @oisy

OISY 2.0 is live. Cross-chain swaps powered by @near_intents. Yield through @harvest_finance Autopilot Vaults. OISY - a wallet that lives on a tamper-proof network. Crypto, stablecoins, stocks, commodities, protected by advanced cryptography. Accessible from any browser. Invisible if you choose.

22 replies · 56 reposts · 272 likes · 44.5K views
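For readers unfamiliar with the pattern behind this integration: an intent is a signed statement of the outcome the user wants (give X, receive at least Y) rather than a concrete transaction, and off-chain solvers compete to fulfill it. A conceptual sketch; every type and name below is a hypothetical illustration of the pattern, not the NEAR Intents or OISY API.

```python
# Conceptual sketch of the declarative "intent" pattern behind
# intents-based swaps. All names and types here are hypothetical
# illustrations of the idea, not the NEAR Intents or OISY API.
from dataclasses import dataclass

@dataclass(frozen=True)
class Intent:
    """What the user wants to happen, not how to execute it."""
    give_token: str      # e.g. "USDC" on the source chain
    give_amount: float
    want_token: str      # e.g. native "ETH" on another chain
    min_receive: float   # slippage floor the user signs off on

@dataclass(frozen=True)
class Quote:
    """A solver's offer to fulfill the intent."""
    solver: str
    receive_amount: float

def best_quote(intent: Intent, quotes: list[Quote]) -> Quote | None:
    """Pick the best fill that still satisfies the signed intent."""
    valid = [q for q in quotes if q.receive_amount >= intent.min_receive]
    return max(valid, key=lambda q: q.receive_amount, default=None)

if __name__ == "__main__":
    intent = Intent("USDC", 300.0, "ETH", min_receive=0.11)
    quotes = [Quote("solver-a", 0.109), Quote("solver-b", 0.113)]
    print(best_quote(intent, quotes))  # solver-b; solver-a is below the floor
```

The design point is that the wallet never constructs cross-chain transactions itself; it only validates that a solver's fill meets the signed minimum.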
blockcipher🔮 retweeted
Raoul Pal @RaoulGMI
Total Global Liquidity is rising
Global M2 is rising
US Total Liquidity is rising
US M2 is rising
China Total Liquidity is rising
ISM is rising

Try not to overthink it.
English
560
729
6.6K
600.2K
blockcipher🔮 @0x_techner
@kristoferlund So far, it hasn't been great for me! Maybe it is for newcomers, but for existing builds it's just been confusing and has broken so many files.
0 replies · 0 reposts · 0 likes · 33 views
blockcipher🔮 retweeted
Ash Crypto @AshCrypto
🚨CRASH: Oil has crashed -16% from $110 to $93 in just 60 minutes, one of the largest hourly drops in history.
[image attached]
239 replies · 1.1K reposts · 8.6K likes · 451.5K views
blockcipher🔮 retweeted
Kristofer @kristoferlund
Yes, we are live! Caffeine V3 is here. Someone asked if there is a list with all the changes. Here you go:
✅ Team of specialist AI agents builds your app
✅ Agents work in parallel waves for faster builds
✅ Composer orchestrates the entire build process
✅ Builder rescans project each run to stay accurate
✅ See real-time progress while your app builds
✅ Task checklist shows running, completed, failed work
✅ Stop a build anytime to change direction
✅ Errors detected and handled at every stage
✅ Failed tasks automatically retried with context
✅ Builder revisits earlier steps when problems appear
✅ Deployment failures trigger automatic repair attempts
✅ Build modes simplified to Instant and Guided
✅ Instant builds immediately without questions
✅ Guided asks clarifying questions before building
✅ Design agent creates a structured design brief
✅ Frontend follows shared visual design guidelines
✅ Influence the design brief from your prompt
✅ Version history shows all previous draft builds
✅ Revert instantly to earlier versions of your app
✅ Older projects automatically migrate to the latest system
✅ Previously stuck projects can build again
✅ App preview now the default main panel
✅ Code view moved to a dedicated tab
✅ Preview your app at desktop, tablet, phone sizes
✅ Inspect code changes with side-by-side diffs
✅ Collapse the sidebar for more workspace
✅ Search your projects directly in the sidebar
✅ Drag and drop files directly into the chat prompt
11 replies · 87 reposts · 290 likes · 7.2K views
blockcipher🔮 retweeted
caffeine @caffeineai
Caffeine's biggest update just dropped. Here's what's new:
Agentic build system - a Composer orchestrates specialist agents
New landing page experience - completely redesigned from the ground up
Updated dashboard - all your projects in one place
Discovery phase - the agent reads your codebase before building
Context as RAM - projects grow without hitting limits
Version history - track and restore previous builds
--
V3 is here.
54 replies · 232 reposts · 775 likes · 54.2K views
Kinic AI @kinic_app
AI memory is evolving. Kinic tech is evolving with it. Hosting this on a tamperproof blockchain (ICP) allows us to store memory in a portable, personal, and private way (vetKey). A new AI memory economy is created. Since we can create and own memory, we can sell and share it.
Andrej Karpathy @karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki, I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web ui), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually, it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.

2 replies · 8 reposts · 59 likes · 2.1K views
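As a concrete anchor for the "compile raw/ into a wiki" step Karpathy describes, here is a minimal sketch. The raw/ and wiki/ directory names follow the post; summarize() is a placeholder for whichever LLM call you wire in, since the post does not prescribe one.

```python
# Minimal sketch of the "compile raw/ into a wiki of .md files" step
# described above. Directory names follow the post; summarize() is a
# placeholder for whichever LLM you wire in.
from pathlib import Path

RAW = Path("raw")    # source documents (web clips, papers, notes as .md)
WIKI = Path("wiki")  # LLM-maintained markdown knowledge base

def summarize(text: str) -> str:
    """Placeholder: send `text` to an LLM and return a markdown summary
    with backlinks to related concepts. Swap in a real API call here."""
    raise NotImplementedError("wire up your LLM of choice")

def compile_wiki() -> None:
    """Incremental compile: only (re)summarize sources newer than their page."""
    WIKI.mkdir(exist_ok=True)
    for src in sorted(RAW.glob("*.md")):  # non-.md sources need their own ingest
        page = WIKI / src.name
        if page.exists() and page.stat().st_mtime >= src.stat().st_mtime:
            continue  # already compiled and up to date
        page.write_text(f"# {src.stem}\n\n{summarize(src.read_text())}\n")

if __name__ == "__main__":
    compile_wiki()
```

The mtime check is what makes the compile incremental, matching the post's point that the wiki is rebuilt piece by piece rather than regenerated wholesale.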