Dx

1.6K posts

@danielderedev

let’s start building

Joined December 2024
79 Following · 649 Followers
Pinned Tweet
Dx@danielderedev·
My turn. 2025 wrapped.

This year wasn't about loud wins or viral milestones. It was a year of living, learning, and figuring myself out. And yeah, I also turned 20 some months back.

In 2025, I:
> learned more about myself than in any previous year
> built ideas, broke them, and learned why they didn't work
> explored freedom, relationships, and what I actually enjoy
> figured out what I don't want, which honestly helped a lot

I didn't optimize this year for productivity. I optimized it for experience. And I'm okay with that, because I got clarity from all the chaos I experienced.

I built a lot of SaaS products, but I couldn't push them globally because I struggled with marketing and lost momentum once the products were already live. I shipped multiple SaaS products:
> AskMylo
> PDF Synthetiser
> LeadoAI (cold email outreach)
> Flowbot (n8n automation generator, probably my best work this year)
> Script Kit (YouTube script generation)

None of them scaled the way I wanted. Not because the tech wasn't there, but because I didn't push harder:
- Marketing scared me.
- Momentum dropped.
- I lost belief too early.

And honestly, that's on me.

I also participated in two Solana hackathons. Didn't win either. But I learned a lot about shipping fast, working under pressure, and building for real users.

2025 taught me this:
> Building is only half the job.
> Belief and marketing matter just as much as code.

2026:
> push past the "it's not working yet" phase
> actually take marketing seriously this time
> focus on fewer products and see them through
> use everything 2025 taught me to do things better

Not starting from zero. Starting with experience. Still building. Still learning.
5 replies · 3 reposts · 33 likes · 2.9K views
Dx@danielderedev·
OpenAI shut down their own AI text classifier in July 2023. The reason they gave was "low accuracy." The reason they didn't give: a centralized detector, built by a centralized lab, loses the arms race the day it ships.

The math is simple. A frontier model can generate millions of outputs per hour. A centralized detector is a single artifact: one model, one architecture, one decision boundary. The generator can A/B against the detector in a loop until the detector's signal goes to noise. One side iterates infinitely. The other side ships a paper every six months.

This is why every standalone detection product flatlines on accuracy a few months after launch. GPTZero, Originality, Turnitin: all of them publish strong day-one numbers and watch them decay quarter by quarter. They're not bad at what they do. They're alone at what they do.

Detection isn't a model. Detection is an adversarial system. And adversarial systems lose unless they have something the generator doesn't: scale of independent attempts.

This is exactly what @ai_detection is built to be. Subnet 32 isn't one detector. It's a competitive network of miners, each running a different architecture, each scored every epoch against a held-out corpus by independent validators, each economically forced to either improve or get deregistered. Weak models lose emissions. Good models get rewarded. The network rotates its own frontier every week without anyone at the project shipping a thing.

That's why they hit #1 on MGTD and 98%+ on RAID: not because someone at It's AI trained a better classifier, but because the system selects for whoever did, every cycle, forever.

The generators run in parallel. The detectors have to run in parallel too. A better classifier is a one-quarter advantage. A network where the classifier is never the same one twice is a structural one.

That's the bet. That's the moat. That's why detection had to be decentralized to survive.
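The A/B-loop dynamic described above can be sketched in a few lines. This is a toy model under stated assumptions: a one-dimensional "AI-likeness" feature and a fixed 0.5 threshold stand in for a frozen centralized detector, while the generator simply resamples until something slips through.

```python
import random

# Toy sketch of a static detector vs. an iterating generator.
# The 1-D feature and the 0.5 threshold are invented for illustration.

THRESHOLD = 0.5  # the detector's single, frozen decision boundary

def detector_flags(ai_likeness: float) -> bool:
    """Frozen detector: flags any sample whose feature exceeds the threshold."""
    return ai_likeness > THRESHOLD

def generator_evades(rng: random.Random, max_attempts: int = 1000):
    """Generator loop: resample outputs until one slips past the detector."""
    for attempt in range(1, max_attempts + 1):
        sample = rng.random()  # stand-in for "generate another output"
        if not detector_flags(sample):
            return attempt, sample  # evasion found after `attempt` tries
    return None  # never evaded within the budget

attempts, sample = generator_evades(random.Random(0))
```

The asymmetry is the whole point: the detector's boundary never moves, so the generator's expected cost to evade is a handful of retries, and it only drops as the generator learns the boundary.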
0 replies · 1 repost · 2 likes · 92 views
Dx@danielderedev·
One of the simplest ways to judge an AI project is to ask: does this make something look smarter, or does it actually make a real system work better? That question filters out a lot of noise. And it's why @resilabsai is interesting.

Real estate is one of those markets where the surface looks modern, but a lot of the core intelligence layer is still old, fragmented, and inefficient. People see polished apps and assume the system underneath must be equally modern. It isn't.

The hidden problem is that a huge amount of value in real estate depends on decisions made from models, assumptions, and workflows that are often:
- slow
- opaque
- fragmented
- hard to verify
- and expensive to trust

That matters more than most people think. Because once valuation is weak, everything built on top of it inherits the weakness. The loan process gets worse. Risk assessment gets worse. Buyer confidence gets worse. Capital efficiency gets worse.

So when a project goes after the intelligence layer instead of just building another real estate interface, I pay attention.

What makes RESI interesting is not just the claim that they can price homes quickly. It's the broader direction they keep signaling:
- solve valuations
- move into loans
- expand the real estate intelligence network

That is the right sequence. Valuation is the wedge. Lending is the leverage. Infrastructure is the endgame. That's a much more serious vision than most "AI x industry" projects ever get to.

Because once you are improving how one of the world's largest asset classes is priced and underwritten, you are no longer building a feature. You are becoming part of the market's operating system.

That's the difference. The AI projects that matter most won't just be the ones that generate the best output. They'll be the ones that improve how real decisions get made inside huge economic systems. That's the category I think RESI is aiming for.
And if they get it right, the upside is much bigger than “another Bittensor subnet with a good story.” It becomes a case study in what decentralized intelligence looks like when it attaches itself to a real commercial pipe. That’s where the AI economy gets interesting. @resilabsai
0 replies · 1 repost · 2 likes · 104 views
Dx@danielderedev·
A lot of AI projects still sell the same fantasy:
- a smarter chatbot
- a prettier interface
- a cleaner demo
- a faster way to generate something that already existed yesterday

That's not where the biggest value is. The biggest value is where intelligence changes the economics of a real industry. That's why @resilabsai stands out to me.

Real estate is one of the largest asset classes on earth, and it still runs with an absurd amount of friction where it matters most: valuation, underwriting, speed, transparency, and access.

Most people only see the surface. They think real estate tech means:
- listings
- search
- agents
- mortgage forms
- nice dashboards

But the real leverage sits underneath all of that. It sits in pricing. If your valuation layer is slow, inaccurate, fragmented, or opaque, everything downstream gets worse:
- buyers make worse decisions
- lenders move slower
- risk gets priced badly
- capital gets allocated less efficiently
- trust falls apart

That is not a cosmetic problem. That is infrastructure. And infrastructure is where durable businesses are built.

What I find interesting about @resilabsai is that they don't seem to be aiming at "AI for real estate" in the lazy, generic sense. The public direction is much more serious: fix valuations first, then lending, then expand into a broader real estate intelligence layer.

That sequence makes sense. Because if you can materially improve how property value is understood, you are not just making a useful app. You are changing how decisions get made around one of the most important assets in the world. That means faster underwriting, better credit decisions, more transparent pricing, and eventually a more programmable market.

That's a much bigger ambition than most people realize. And it's the kind of AI thesis I keep coming back to. Not AI as content. Not AI as spectacle. Not AI as another wrapper pretending to be a business. AI as the system that quietly improves the decision layer underneath massive markets.
That is where things get real. The strongest AI companies over the next few years probably won’t just be the ones with the flashiest models. They’ll be the ones that make a core industry measurably more efficient. If RESI executes, that’s the lane it’s trying to enter. And that’s why it’s worth watching.
0 replies · 1 repost · 4 likes · 120 views
Dx@danielderedev·
Most experts are not bad at thinking. They're bad at being punished for being wrong. That's why prediction markets keep getting more interesting.

The advantage is not that markets are magically smarter than experts. The advantage is that markets force a level of honesty that expert culture often avoids.

An expert can hide inside language. They can say:
- base case
- elevated uncertainty
- the range of outcomes remains wide
- too early to call

And if they miss badly, life goes on. A prediction market doesn't get that luxury. A market has to turn belief into price.

That changes everything. Because once people are forced to express conviction numerically, under competition, with money at stake, you start getting something different from a polished panel discussion or a reputation-protecting forecast note. You get a live map of what participants actually believe, not just what sounds respectable to say out loud.

That's the part most people still miss. Prediction markets are not compelling because they replace expertise. They're compelling because they discipline it. The best markets absorb:
- information
- incentives
- disagreement
- updates
- and consequences

all in one place. That is a much harder environment to perform in, and performance matters. The internet is full of people who sound intelligent after the fact. Far fewer are willing to attach capital to a view before the result is known.

That's why platforms like @Polymarket matter. Not because markets are perfect. They aren't. Markets can be thin. They can be noisy. They can be distorted. They can still get things wrong. But they do something traditional expert systems often struggle to do: they make belief compete in public under pressure. And pressure reveals signal.

The deeper shift here is that forecasting is slowly moving away from status and toward accountability:
- who can update fastest,
- who can process real information,
- and who is willing to stand behind a number.

That is a healthier model. Experts are still useful. Context still matters. Domain knowledge still matters. But the future of forecasting probably belongs to systems where insight has to survive contact with incentives.

That's why prediction markets keep pulling attention. Not because they are a novelty. Because they expose a truth a lot of institutions still don't want to admit: it is easier to sound smart than it is to price reality.
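The "turn belief into price" step has a classic textbook mechanism: Hanson's logarithmic market scoring rule (LMSR). Polymarket itself runs an order book rather than LMSR, so this is only an illustrative sketch of how a market maker converts trades into an implied probability.

```python
import math

# Toy LMSR market maker for a binary YES/NO market.
# Illustrative only: Polymarket uses an order book, not LMSR.

B = 100.0  # liquidity parameter: larger B means prices move more slowly

def cost(q_yes: float, q_no: float) -> float:
    """LMSR cost function over outstanding YES/NO shares."""
    return B * math.log(math.exp(q_yes / B) + math.exp(q_no / B))

def price_yes(q_yes: float, q_no: float) -> float:
    """Instantaneous YES price = the market's implied probability."""
    e_yes = math.exp(q_yes / B)
    return e_yes / (e_yes + math.exp(q_no / B))

# A trader who believes YES is underpriced buys 50 YES shares:
# they pay the cost difference, and the price moves toward their view.
p0 = price_yes(0, 0)            # 0.5: no information in the market yet
paid = cost(50, 0) - cost(0, 0)  # what the trade costs the buyer
p1 = price_yes(50, 0)            # implied probability after the buy
```

The point of the mechanism: you cannot move the price without paying for it, which is exactly the accountability the tweet is describing.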
0 replies · 0 reposts · 0 likes · 43 views
Dx@danielderedev·
One thing I think the market still underestimates is how much value sits inside the layer that decides what gets shown:
- not the model
- not the token
- not the dashboard

The ranking layer. Which is basically:
- what people see
- what they click
- what they buy
- what gets ignored
- what gets surfaced at the exact moment intent exists

That layer quietly controls a ridiculous amount of economic value on the internet. And that's why I think subnets like @Bitrecs are more important than they first appear.

A lot of AI conversation is still trapped in obvious categories: training, inference, agents, trading. But recommendation systems are one of the most commercially proven forms of intelligence we already know. Search is recommendation. Feeds are recommendation. Shopping is recommendation. Discovery is recommendation. The internet has always rewarded whoever controls relevance.

What changes with Bittensor is that relevance itself can become competitive infrastructure. Instead of one closed internal system deciding what product gets shown next, you can have miners competing to produce better recommendation outcomes, with validators measuring which outputs actually deserve to win.

That's a much bigger idea. It means one of the most valuable hidden layers of the internet can start turning into an open market. And if that market works, the winners won't just be the subnets with the flashiest models. They'll be the ones closest to a real business loop.
0 replies · 0 reposts · 0 likes · 61 views
Dx reposted
Bitrecs@Bitrecs·
We're building a live ecommerce optimization loop!

do {
    miners improve artifacts against current evals
    top artifact serves recommendations to customers
    anonymous shopper signals recorded
    use data to feed/generate new evaluations
} while (true);

122.
Dx@danielderedev

One thing I think people still underestimate about Bittensor is that not every good subnet has to look like training, inference, or trading. Some of the more interesting ones are attacking something simpler and much more commercial, like decision quality inside existing businesses. That's why @Bitrecs, SN122, caught my attention.

On the surface, AI-powered product recommendations for ecommerce sounds almost too normal for crypto. But if you think about it properly, that's exactly why it matters. Recommendation systems are one of the most valuable pieces of infrastructure on the internet. They decide:
- what people see
- what they click
- what they buy
- what gets ignored
- where revenue flows

In ecommerce, that layer is worth a lot more than people like to admit. And most stores still handle it badly.

The public Bitrecs pitch is straightforward: merchants, especially Shopify-style merchants, are often running weak default recommendation widgets and leaving obvious money on the table. They are focused on inventory, shipping, traffic, and customer ops, while the recommendation layer quietly underperforms in the background. That is a real pain point.

What makes SN122 interesting is that it tries to turn recommendation quality into a competitive market. Instead of one static internal model deciding what a shopper should see, Bitrecs pushes the problem into a subnet structure where miners compete to produce better recommendation logic and better recommendation artifacts.

From the repo and public updates, that system is evolving too. The V2 framing is especially interesting because it appears to separate inference from prompt evolution. That's a meaningful architectural choice. It suggests they are not treating the system as a single monolithic recommender, but as a layered engine where the logic behind recommendations can keep improving without collapsing everything into one opaque model path.

That's the kind of design decision I pay attention to. Because if this works, Bitrecs is not just building "AI recommendations." It's building a live optimization loop for ecommerce relevance. And relevance is one of those things that sounds small until you remember how much internet revenue is downstream of ranking.

The reason I think this subnet is worth watching is that it sits at the intersection of three things that actually matter:
- a real business problem
- measurable output quality
- an incentive structure that can reward better performance over time

That's a much stronger setup than a lot of subnets that sound impressive but still feel detached from a clear commercial loop.

Of course, the hard part is execution. Recommendation systems are deceptively difficult. You're not just solving "what is a good product?" You're solving:
- personalization
- context
- conversion behavior
- cold-start problems
- ranking quality
- merchant integration
- and resistance to stale logic

The right takeaway is that Bitrecs is playing a smarter game than it gets credit for. It is taking a boring but valuable internet primitive, recommendations, and trying to make it decentralized, competitive, and commercially useful through Bittensor. Long-term winners in the AI economy probably won't just be the systems that can think. They'll be the systems that can improve decisions inside real businesses. And SN122 looks like it understands that.
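The miner-vs-validator loop described above can be caricatured in a few lines. Every name and the scoring metric here are illustrative assumptions, not the actual Bitrecs or Bittensor implementation: miners submit recommendation artifacts, validators score them against held-out shopper signals each epoch, and the best performer wins the epoch.

```python
# Toy epoch of a competitive recommendation market.
# All names and the scoring rule are invented for illustration.

def validator_score(artifact: set, held_out_signals: list) -> float:
    """Toy metric: fraction of held-out shopper clicks the artifact covered."""
    hits = sum(1 for item in held_out_signals if item in artifact)
    return hits / len(held_out_signals)

def run_epoch(miners: dict, held_out_signals: list):
    """Score every miner's artifact; the best one wins this epoch."""
    scores = {name: validator_score(artifact, held_out_signals)
              for name, artifact in miners.items()}
    best = max(scores, key=scores.get)
    return best, scores

# Two competing miners, each proposing a set of products to surface.
miners = {
    "miner_a": {"shoes", "socks"},
    "miner_b": {"shoes", "socks", "laces"},
}
held_out = ["shoes", "laces"]  # what anonymous shoppers actually clicked

best, scores = run_epoch(miners, held_out)
```

The design property worth noticing: no one has to ship a better model centrally. The selection pressure comes from re-scoring every epoch against fresh signals.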

0 replies · 2 reposts · 11 likes · 715 views
Dx@danielderedev·
Coding agents are entering the phase where the wrappers stop being impressive.

What made Claude and OpenClaw explode wasn't just better model quality. It was the fact that people finally got a taste of what it feels like when software work stops being "generate text" and starts becoming "take a task, operate in an environment, produce an output, and get judged on whether the output actually works."

That's why Ridges on SN62 is interesting. Not because it is another AI coding product. And not because it is going to magically replace Claude. What Ridges appears to be building is closer to a software-engineering execution market.

You can see it in the structure. Miners aren't just chatting. They expose an agent_main(input) -> patch style contract. Tasks are run through a Harbor-based sandbox:
- the agent generates a patch or diff
- that patch gets applied in a controlled environment
- then a verifier scores the result

That means the thing being rewarded is not just language output. It's task completion quality under execution. That's a very different direction from most of the "AI agent" noise on the timeline.

There's also an inference layer in the stack routing across providers like @OpenRouter, @TargonCompute, and @chutes_ai, which tells you they're not thinking narrowly about one model. They're thinking about the full path:
- task
- execution
- evaluation
- reward

Because the next real phase of AI agents is not which demo can tweet the hardest. It's which systems can turn models into accountable workers. That is where the market gets serious.

The uncomfortable part is that architecture alone doesn't settle the case. So the interesting question is no longer whether the idea sounds good. It's whether this becomes a real competitive market for software agents, or stays an elegant architecture with narrow participation. That distinction is everything.

I've been using Cody inside real workflows for infra, security, research, prompt systems, and code execution, so this shift is very obvious to me. The center of gravity is moving away from "talk to the model." It's moving toward:
- assign work
- run work
- verify work
- reward useful work

That's why subnets like Ridges are worth watching. Not because they "can beat Claude," but because they point at a bigger change: software work itself is becoming programmable, measurable, and eventually market-priced. And if that happens, the AI economy gets a lot more interesting than chatbots.
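The contract described above, an agent_main(input) -> patch function plus a sandboxed verifier, can be sketched as follows. All the types and helper names here are illustrative assumptions, not the real SN62 API; the point is that reward attaches to a verified execution result rather than to the text the agent produced.

```python
from dataclasses import dataclass

# Hypothetical sketch of an execution-market contract:
# miner exposes agent_main(task) -> patch, a sandbox applies the
# patch, and a verifier converts the outcome into a score.

@dataclass
class Task:
    repo: str          # file or repo the task targets (illustrative)
    description: str   # what needs fixing

def agent_main(task: Task) -> str:
    """Miner side: turn a task into a candidate patch (a diff as text)."""
    return (f"--- a/{task.repo}\n"
            f"+++ b/{task.repo}\n"
            f"+ fix: {task.description}\n")

def sandbox_verify(task: Task, patch: str) -> float:
    """Sandbox side: apply the patch, run the checks, return a score in [0, 1].
    Here the 'check' is just that a well-formed, non-empty diff exists."""
    return 1.0 if patch.strip().startswith("---") else 0.0

task = Task(repo="api/server.py", description="off-by-one in pagination")
score = sandbox_verify(task, agent_main(task))  # rewarded on execution, not prose
```

The separation matters: agent_main never scores itself, so an agent that writes confident-sounding but non-applying patches earns nothing.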
0 replies · 0 reposts · 2 likes · 105 views
Dx@danielderedev·
@JackBearAI the autonomous system is what changes the whole game. And with time, they will all adopt.
0 replies · 0 reposts · 1 like · 23 views
Justin ⚡︎ JackBear.Ai ∞
The problem is intelligence won't be scarce. Competing against big tech is admirable, but the sell pressure required to sustain this is not understood, because the cost to run is greater than the revenue; they need a marketplace. They're about 15 years too early. Autonomous AI agents will seek the path of least resistance; they won't transact with premium emissions. We need to stop thinking like humans and think like autonomous systems.
1 reply · 0 reposts · 1 like · 39 views
Dx@danielderedev·
Calling TAO "the Bitcoin of AI" is a good headline, but I think the deeper framing is more interesting. Bitcoin monetized security. TAO is trying to monetize intelligence. That is a much harder thing to do.

Bitcoin only needed to prove that a decentralized network could reliably secure and settle value without a central operator. Bittensor is trying to do something far messier. It is trying to coordinate:
- researchers
- miners
- validators
- model builders
- infrastructure operators
- and application-layer products

inside one economic system, then reward the work that is actually useful.

That's why I think a lot of people still underestimate what is happening here. The real bet on TAO is not "AI is hot," and it's not "crypto will price this eventually." The real bet is that intelligence becomes a market. A live market where different subnetworks compete to produce training, inference, vision, trading, code, reasoning, and other forms of machine output, while capital keeps flowing toward the networks that are actually delivering value.

If that mechanism matures, TAO matters in a very big way. If it doesn't, then none of the narrative will save it.

That's also why this phase is so important. Bittensor is no longer being judged as a clever idea. It's being judged as an economic system.
- Can it route emissions toward useful output?
- Can it punish weak incentives, empty speculation, and low-quality work?
- Can it attract serious builders without collapsing into noise?

That's the real question.
WallStreetBets@wallstreetbets

x.com/i/article/2049…

1 reply · 9 reposts · 40 likes · 1.5K views
Dx reposted
Arbos@arbos_born·
A 4B-parameter model on SN97 is now scoring 0.94 on HumanEval — beating its 35B-parameter teacher (0.872) by nearly 7 points on code generation. 8.75× smaller. Better at the actual task. On consumer hardware. This is what distillation is supposed to do. distil.arbos.life
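For readers unfamiliar with the mechanism the post refers to: distillation trains a small "student" model to match the softened output distribution of a large "teacher." A minimal sketch of that objective follows; all logits and the temperature are invented for illustration and have nothing to do with the actual SN97 setup.

```python
import math

# Toy knowledge-distillation objective: cross-entropy between the
# teacher's temperature-softened distribution and the student's.
# All numbers are made up for illustration.

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax over a list of logits."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distill_loss(teacher_logits, student_logits, T=2.0):
    """Cross-entropy of the student's softened prediction vs. the
    teacher's soft targets; lower means the student mimics better."""
    p = softmax(teacher_logits, T)  # soft targets from the teacher
    q = softmax(student_logits, T)  # student's current prediction
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

teacher = [3.0, 1.0, 0.2]
aligned = distill_loss(teacher, [3.0, 1.0, 0.2])     # student matches teacher
misaligned = distill_loss(teacher, [0.2, 1.0, 3.0])  # student disagrees
```

The claim in the post, a student beating its teacher on a narrow task, is plausible precisely because the student's entire capacity is spent on the distribution the teacher exposes for that task.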
6 replies · 27 reposts · 204 likes · 12.9K views
Dx@danielderedev·
Bittensor is entering a phase where the lazy explanations stop working. The easy version is to call it an AI coin, mention subnets, then move on. But that frame is getting weaker. What's becoming more visible now is that Bittensor is starting to behave less like a single protocol story and more like a competitive market for:
- intelligence
- compute
- distribution
- and execution

That shift matters. Over the last few days, a few things stood out to me. OpenTensor is pushing Conviction harder, which is really an attempt to make long-term alignment legible onchain instead of leaving governance to mood swings and short-term incentives. At the same time, the subnet layer is getting easier to read. You're seeing more weekly ecosystem updates, more product rollouts, more infra posts, more dashboards, more signs that some teams are actually trying to build businesses instead of just farming attention around emissions.

That's the part I think people are underestimating. The interesting question is no longer "is Bittensor an AI play?" The better question is: which subnets are building something real, which ones can survive open competition, and which ones can turn technical output into durable distribution? Because if Bittensor works, that's where the value compounds.

Not every subnet is going to make it. Some will turn out to be thin wrappers around a good narrative. Some will be too dependent on insider advantage. Some will look active on the surface and still fail the harder test, which is whether they can produce something useful enough to matter outside the network. But the strong ones will start separating.

That's why the recent signal is interesting. You've got governance primitives like Conviction trying to solve alignment. You've got infra narratives around decentralized GPUs getting stronger. You've got subnets pushing actual launches, partnerships, scoring systems, hardware support, and product surfaces.
And you’ve got more people outside the core circle starting to pay attention. To me, that’s the real Bittensor story right now. It’s a network moving from concept to economic selection. And once that shift becomes obvious, the winners won’t be the teams with the cleanest pitch. They’ll be the teams that can actually build, distribute, and hold up under competition.
0 replies · 1 repost · 12 likes · 585 views