Toatse


🔬 We’ve just scheduled our March AMA.
We’ll be providing some updates on our latest Oracle release, partner news, and answering your community questions.
Join us.
📅 Thu 26 March
🕟 1600 UTC
Set a reminder. 👇
x.com/i/spaces/1kJzD…

This partnership has been a long time in the making and I'm so excited that we finally announced this. Being able to partner with Timpi is huge for @cheqd_io's #VerifiableAI roadmap. For some context...
Training any AI model requires mountains of data. In practice, this runs into 100s of TBs...which means the only companies that are realistically able to do it are often the Big Tech companies that have billions of dollars to throw at this problem. Which is why you'll see all frontier models come from players with extremely deep pockets, such as Google (which already indexes the web), OpenAI with Microsoft Bing, Meta/Facebook etc.
(Google is one of the sneakiest among these, btw. You can't opt out of having your site indexed for training their LLMs without also opting out of being included in their search index. Not being included in Google search results is a kiss of death for any traffic, and so people begrudgingly stay opted in. This is a kind of abuse of market power that I suspect will attract the attention of competition regulators soon - or maybe Google's strategy here is to do a landgrab on data as quickly as possible before the two antitrust cases brought against them break the company up.)
Once you train a model, you once again need a search engine feed for "fresh" results beyond the training dataset cutoff date. This is why ChatGPT used to say "according to my training data till Oct 2022..." and has only recently gained the capability at the free tier to plug that gap with Bing search results.
Why does this matter? If we truly want decentralised AI, we can't just be running the scraps of open source models released by Big Tech (and quite prominently, neither Google nor OpenAI open-sources theirs anyway).
Repeat after me: decentralised AI needs decentralised training data. And as it happens, getting this is incredibly hard...and why we're so pumped to be working with @Timpi_TheNewWay as they have solved exactly this problem, at scale, with one of the largest search engine indexes after Google/Bing.
So where do digital credentials come in? I can foresee at least two paths where @cheqd_io's identity stack comes into the picture:
1. Training data sets for AI (narrow OR broad) will shift from being distributed via centralised servers/channels (like they are now) to decentralised distribution. AI developers will need to validate that these datasets are untampered and understand the reputation of sources in the dataset at a granular level. Verifiable credentials - with privacy-preserving payments to the sources checking the quality of the data (in this case, Timpi/cheqd) - are a solution to this.
2. As a greater proportion of the content on the Web is AI-generated, model training starts degrading and at some point collapses entirely (at synthetic-data levels as low as 5-7%). There are two ways Content Credentials come into this:
- If/when Content Credentials become commonly-attached to content (say 2025+), search engines will need to parse these Verifiable Credentials as a signal to understand how much of a web page is AI vs human generated.
- Even when Content Credentials become popular, it's not guaranteed that every piece of content will be accompanied by one. So, similar to general quality signals for web data, one of the aspects that could be encoded into Content Credentials (or Verifiable Credentials) is a heuristic assessment of whether the site/content is AI vs human generated.
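To make the first path concrete, here's a minimal sketch of validating that a distributed training dataset is untampered via a signed attestation. Assumptions worth flagging: real Verifiable Credentials use DID-anchored public-key signatures (e.g. Ed25519), whereas this toy uses an HMAC with a shared issuer key for simplicity, and every function name here is my own illustration, not cheqd's or Timpi's actual API.

```python
import hashlib
import hmac
import json

def fingerprint_dataset(chunks):
    """Hash each data chunk, then hash the sorted digests into one dataset fingerprint.

    Sorting makes the fingerprint independent of chunk delivery order,
    which matters for decentralised distribution.
    """
    digests = sorted(hashlib.sha256(c).hexdigest() for c in chunks)
    return hashlib.sha256("".join(digests).encode()).hexdigest()

def issue_credential(issuer_key, dataset_id, fingerprint):
    """The data curator attests to a dataset's fingerprint."""
    claim = json.dumps({"dataset": dataset_id, "sha256": fingerprint}, sort_keys=True)
    proof = hmac.new(issuer_key, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "proof": proof}

def verify_credential(issuer_key, credential, chunks):
    """An AI developer checks the issuer's proof, then recomputes the fingerprint."""
    expected = hmac.new(issuer_key, credential["claim"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, credential["proof"]):
        return False  # credential itself was tampered with
    return json.loads(credential["claim"])["sha256"] == fingerprint_dataset(chunks)
```

Any single flipped byte in any chunk changes the fingerprint, so the consumer can detect tampering without trusting the distribution channel - only the issuer's key.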
In both scenarios, we at @cheqd_io believe payments for credential exchange can be incredibly useful for implementing payments for royalty/licensing checks when training AI models.
Extending these ideas further, as agentic AI becomes viable, AI agents themselves would need to prove what quality of data they have been trained on and their compliance with various global regulations, using Verifiable Credentials as a mechanism to show what permissions they've been given agent-to-agent or human-to-agent.
As we said in our original hypothesis for $CHEQ, MANY of these credential exchanges/interactions would require a payment mechanism to accomplish value exchange. We're starting to see some early implementations of this, e.g., AI agents paying each other using traditional fiat payment rails like Stripe...but I fully believe that, for this technology to scale, it will in practice require decentralised, privacy-preserving payments for credential exchange for #vAI.
Excited to build together with the @Timpi_TheNewWay team towards this vision!
Timpi@Timpi_TheNewWay
Toatse retweeted

Our partner @AnonyomeLabs has announced a pilot programme with Utah Valley University.
This initiative enables students to test secure digital versions of their student IDs and academic credentials in a mobile wallet environment, running on cheqd.
linkedin.com/posts/anonyome…

It’s a great joke 😂
…but one of the reasons to ask for a selfie (whether traditional KYC or World) is to check it’s the real person linked to those details sending the request, and not just someone who found that biographical data through a data breach.
(Not EVERY interaction should require that. There’s many, many use cases where this could be replaced with some form of selective disclosure or zk.)
Grafton (Disco)@satsdisco
Wow, look how convenient
Toatse retweeted

An Oracle update was recently initiated on the cheqd Mainnet.
What this means. (CHEQ/USDC)
With market conditions always changing, the price of $CHEQ, the network's utility token, keeps fluctuating. The oracle helps stabilise pricing at the consensus layer
- by using Simple Moving Averages (SMA), Exponential Moving Averages (EMA), and Weighted Moving Averages (WMA).
As stated by @matt_arn: "If the validators can't pull live price feeds during the consensus window, then the system falls back to a persisted value maintained via an Interchain Query (ICQ) to a Time-Weighted Average Price (TWAP)."
Token price is directly mapped to the fixed dollar cost of Identity Transactions on Mainnet 👍
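For intuition, a toy sketch of how those three averages differ on the same price window, and how a fixed dollar cost maps to a $CHEQ amount once an oracle price is agreed. The function names and the EMA smoothing factor are my own illustration, not cheqd's actual oracle implementation:

```python
def sma(prices):
    """Simple moving average: every price in the window weighted equally."""
    return sum(prices) / len(prices)

def ema(prices, alpha=0.5):
    """Exponential moving average: recent prices dominate, old ones decay geometrically."""
    value = prices[0]
    for p in prices[1:]:
        value = alpha * p + (1 - alpha) * value
    return value

def wma(prices):
    """Weighted moving average: linearly increasing weights, newest price heaviest."""
    weights = range(1, len(prices) + 1)
    return sum(w * p for w, p in zip(weights, prices)) / sum(weights)

def cheq_fee(fixed_usd_cost, cheq_usd_price):
    """Map a fixed dollar cost for an identity transaction to a $CHEQ amount."""
    return fixed_usd_cost / cheq_usd_price
```

If the oracle settles on $0.05/CHEQ, a $2.00 identity transaction costs 40 CHEQ; when the token price moves, the CHEQ amount adjusts so the dollar cost stays fixed.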

Support $CHEQ
Ultra Chemfy@bigbagz30k
If you think the @AnthropicAI @claudeai developments are shocking, why aren't you screaming for a solution? Decentralised Identity enables privacy via Decentralised Identifiers and Verifiable Credentials, now. Support @cheqd_io $cheq. If you don't make noise, it will never happen. cheqd.io/blog/2025-in-r…

If articles aren't your thing, I've crafted a YouTube video discussing this topic at length. youtube.com/watch?v=LJ6C24…
@cheqd_io $CHEQ @cCDAO_space

Toatse retweeted

I am ready to go hyper maxi mode on $CHEQ
Ultra Chemfy@bigbagz30k
I am ready to go hyper maxi mode on $cheq @cheqd_io
