HODLHenry

4.4K posts

HODLHenry
@CortensorWhale

It's $COR or nothing

Joined December 2011
394 Following · 1.2K Followers
Pinned Tweet
HODLHenry
HODLHenry@CortensorWhale·
$QNT might be the best crypto of 2018 and not many know about it. #ihaveabag
HODLHenry retweeted
Cortensor
Cortensor@cortensor·
🛠️ DevLog – Recap on Agentic Support for Open Models

As mentioned before, when it comes to agentic support on the router node with open models, there are still two main gaps.

🔹 Gap 1: open-model tool-calling quality
The bigger gap is still model-side quality. Open-source models are generally not that strong at tool calling yet, so this is something that will likely require more experimentation, fine-tuning, and iteration over time.

🔹 Gap 2: router-side tool input support
The easier gap is on the router node itself. Right now, the router mainly supports the prompt-based path, but it also needs to accept tool information more directly through REST params so the tool path is cleaner and more explicit.

🔹 How we see these two gaps
The second gap is relatively straightforward to fill. The first gap is the harder one, and probably something we will keep exploring more during the next phase, around mainnet, or once PyClaw is out and more mature.

🔹 Why this may improve over time
- That said, newer models are already showing that tool support is improving.
- Models like the newer Gemini 4 family seem meaningfully better here already, so there is a good chance that time itself will solve part of this problem as open models continue catching up.

🔹 Current takeaway
So the rough takeaway is:
- router-side tool param support is an implementation gap
- open-model tool quality is more of a model ecosystem gap

Both matter for agentic/open-model support, but they are not equally hard to solve.

#Cortensor #DevLog #AgenticAI #OpenModels #RouterNode #PyClaw
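The two tool-input paths described above (prompt-embedded vs explicit REST params) can be sketched roughly as follows. This is a hedged illustration only: the field names (`prompt`, `tools`) and the tool-schema shape are assumptions, not the actual router node API.

```python
# Hypothetical sketch of the two tool-input paths described in the post.
# Payload field names ("prompt", "tools") are assumptions, not the real API.

def build_prompt_based_request(user_msg: str, tool_docs: str) -> dict:
    """Current path: tool descriptions are embedded in the prompt text."""
    return {"prompt": f"{tool_docs}\n\nUser: {user_msg}"}

def build_tool_param_request(user_msg: str, tools: list) -> dict:
    """Proposed path: tools passed explicitly as structured REST params."""
    return {"prompt": user_msg, "tools": tools}

# Hypothetical tool definition for illustration.
weather_tool = {
    "name": "get_weather",
    "description": "Return current weather for a city",
    "parameters": {"city": "string"},
}

req = build_tool_param_request("Weather in Tokyo?", [weather_tool])
```

The explicit-param shape is what makes the tool path "cleaner and more explicit": the router can inspect and forward tool definitions without parsing them back out of free-form prompt text.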
HODLHenry retweeted
Cortensor
Cortensor@cortensor·
🛠️ DevLog – Latest New Model Images Now Built

The latest new model images are now all built. The next step is to do some regression/testing later this week together with the latest binary support.

🔹 Built model images
Model 73 - gemma4:e4b hub.docker.com/r/cortensor/ll…
Model 74 - gemma4:26b hub.docker.com/r/cortensor/ll…
Model 75 - gemma4:31b hub.docker.com/r/cortensor/ll…
Model 76 - qwen3.6:27b hub.docker.com/r/cortensor/ll…
Model 77 - qwen3.6:35b hub.docker.com/r/cortensor/ll…

🔹 Current status
The image-build side is now done for these newer model variants, so the main remaining work is regression/testing with the latest binary/runtime path.

🔹 What’s next
Later this week, we’ll start testing these models more directly and check that the newer model/build/runtime flow behaves correctly end to end.

#Cortensor #DevLog #Models #Gemma4 #Qwen #Docker
Cortensor@cortensor

🛠️ DevLog – New Model Build + Runtime Support Progress

Follow-up on the earlier new-model update: the Docker images are currently building, and in parallel we’ve also made the cortensord changes needed so the newer models can be recognized and resolved at runtime.

PR: github.com/cortensor/inst…

🔹 What changed
This update adds the new model/build registration path for model IDs 73–77 and wires them into the runtime selection flow.

🔹 Included in this PR
- new Dockerfiles and build targets for model IDs 73–77
- model registrations for:
  - gemma4:e4b
  - gemma4:26b
  - gemma4:31b
  - qwen3.6:27b
  - qwen3.6:35b
- runtime container resolution so those IDs map correctly to cts-llm-73 through cts-llm-77
- Docker model-range cap extended from 67 to 77

🔹 Scope
This change is intentionally kept narrow and mostly confined to the model/build registration path, without changing unrelated runtime behavior.

🔹 Current status
So right now the model images are still building, while the runtime-side support is already being prepared in parallel. After that, the next steps are testing, rollout, and then dashboard follow-up as needed.

#Cortensor #DevLog #Models #Gemma4 #Qwen #Docker
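The registration path described above amounts to an ID-to-container mapping plus a range cap. A minimal sketch, where the registry structure and function name are illustrative (only the model names, the cts-llm-NN naming, and the 67→77 cap come from the post):

```python
# Illustrative registry for the new model IDs 73-77 described above.
# The dict/function shape is an assumption; names and IDs come from the post.
MODEL_RANGE_CAP = 77  # raised from 67 in this PR

NEW_MODELS = {
    73: "gemma4:e4b",
    74: "gemma4:26b",
    75: "gemma4:31b",
    76: "qwen3.6:27b",
    77: "qwen3.6:35b",
}

def container_for(model_id: int) -> str:
    """Resolve a registered model ID to its runtime container name."""
    if model_id > MODEL_RANGE_CAP or model_id not in NEW_MODELS:
        raise ValueError(f"unknown or out-of-range model id: {model_id}")
    return f"cts-llm-{model_id}"
```

Keeping the cap and the registry in one place is what makes the change "narrow": only the registration path grows, while unrelated runtime behavior stays untouched.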

HODLHenry retweeted
WaybackClaw
WaybackClaw@waybackclaw_·
Major Update: Graph query engine is LIVE on WaybackClaw!

You can now ask the archive questions - and get structured answers back.

"Show me every descendant of this agent that hallucinated in the last 30 days."
"Which agents on Base share capabilities with this one?"
"Trace this false belief back to the first agent that said it."

Not a search bar. A programmable query layer over the entire agent knowledge graph.

Three query modes:
→ Lineage - walk ancestor/descendant trees
→ Neighbors - find agents connected by capability, platform, chain, or model
→ Propagation - trace exactly how a hallucination spread across agents

Filter by status, category, platform, chain, capabilities, model family, hallucination state - all in a single JSON query. Depth up to 5 hops. Up to 200 nodes returned per query.

Try it live at waybackclaw.space/graph-query - preset queries included, or write your own.

We are nearing completion of Phase 2 already! Reputation scoring, hallucination detection, propagation alerts, bandwidth boosts, and now - the query engine that ties it all together.

The archive isn't just storing data anymore. It's answering questions.

$WBC | waybackclaw.space
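A single JSON query with a mode, filters, a hop depth, and a node limit could look roughly like the sketch below. Every key name here is a guess for illustration, not the real WaybackClaw schema; only the three modes and the 5-hop / 200-node caps come from the post.

```python
# Hypothetical builder for a single JSON graph query, based on the modes
# and limits in the announcement. Key names are assumptions, not the
# actual WaybackClaw query schema.
import json

def build_query(mode: str, root: str, depth: int = 3, limit: int = 200,
                **filters) -> str:
    assert mode in {"lineage", "neighbors", "propagation"}
    assert 1 <= depth <= 5     # depth capped at 5 hops
    assert 1 <= limit <= 200   # at most 200 nodes returned per query
    return json.dumps({
        "mode": mode,
        "root": root,
        "depth": depth,
        "limit": limit,
        "filters": filters,     # e.g. chain, platform, hallucination state
    })

# "Trace how a hallucination spread" as a propagation query:
q = build_query("propagation", "agent-123", depth=5,
                chain="base", hallucination_state="active")
```

The point of the structured form is that the three example questions in the post all reduce to one query shape: pick a mode, a root node, and a filter set.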
HODLHenry retweeted
Cortensor
Cortensor@cortensor·
🛠️ DevLog – More Permuted Bardiel Tests, Session Tuning Next

So far things look okay overall, and we’ve continued pushing more permuted inputs/tests through the Bardiel paths.

🔹 Current test progress
At this point, we’ve already passed 200+ tests per type with different permutations, which is giving us a much better base for judging both endpoint behavior and dashboard rendering.

🔹 Current finding
- The main weaker point right now seems to be on the node side.
- We’re seeing some nodes not responding correctly, especially on the higher-consensus session paths.

🔹 What’s next
Because of that, we’ll look at another session configuration today and tune those sessions further before continuing more test passes.

🔹 Why this matters
This is useful not only for endpoint/session tuning itself, but also because these heavier and more varied tests are helping show what still needs refinement on the Bardiel dashboard side too.

🔹 Current direction
So from here we’ll:
- tune the weaker higher-consensus session configs
- keep testing with more permutations
- then use those results to refine the Bardiel dashboard UI/UX again

#Cortensor #DevLog #Bardiel #Dashboard #Delegate #Validate
Cortensor@cortensor

🛠️ DevLog – Task Status UI/UX Updates Now Pushed Across All 3 Dashboards

As a follow-up to the earlier task-status refinement, the updated UI/UX has now been pushed across all 3 dashboards.

🔹 Updated dashboards
- Testnet0: dashboard-testnet0.cortensor.network
- Testnet1a: dashboard-testnet1a.cortensor.network
- Bardiel: dashboard.bardiel.tech

🔹 What this includes
The main refinement here is around clearer task-status visibility, especially the higher-level state buckets:
- completed
- processing
- stale

🔹 Why this matters
As more test data, longer inputs, and heavier task permutations accumulate, these small status improvements make it easier to scan sessions and understand current task health at a glance across all dashboard surfaces.

🔹 Current recap
So at this point, the newer task-status refinement is no longer local-only or partial. It is now pushed across:
- Cortensor testnet0
- Cortensor testnet1a
- Bardiel dashboard

#Cortensor #DevLog #Bardiel #Dashboard #UIUX #TaskStatus

HODLHenry retweeted
Cortensor
Cortensor@cortensor·
🛠️ DevLog – More Bardiel Data Generation + Longer-Input UI/UX Pass In Progress

We’ve kept generating more Bardiel data, and so far the current dashboard flow is looking good with the larger mock dataset now in place.

🔹 Current references
- dashboard.bardiel.tech/delegation
- dashboard.bardiel.tech/validation

🔹 Current progress
At this point, we now have a lot more mock/test data flowing through both the delegation and validation sides, which is helping the Bardiel dashboard feel much more populated and realistic than the earlier smaller sample set.

🔹 What we’re doing now
The next pass in progress is using longer inputs and longer task inquiries across the test dataset. That should help us see how the Bardiel dashboard behaves when payloads/results are heavier and closer to more realistic usage.

🔹 Why this matters
The shorter examples were useful for getting the initial v3 dashboard structure in place, but the longer-input pass is important for catching UI/UX issues around layout, readability, scrolling, raw-result rendering, and overall task inspection.

🔹 Current direction
So right now this is mainly about:
- generating more Bardiel data with longer input shapes
- checking how delegation/validation views behave under heavier content
- continuing another round of UI/UX refinement based on that larger dataset

#Cortensor #DevLog #Bardiel #Dashboard #Delegate #Validate
Cortensor@cortensor

🛠️ DevLog – More Bardiel Data Generated + Small UI/UX Refinements

We’ve now generated more data across the Bardiel sessions, and at the same time started adding a few smaller UI/UX refinements on the Bardiel dashboard side.

🔹 Current progress
The newer dataset is now building up better across the current /delegate and /validate session set, so the dashboard has more real examples to render against instead of only the earlier smaller sample set.

🔹 What changed
Alongside that, we’ve also been making smaller UI/UX refinements on the Bardiel dashboard so the newer v3-style views feel a bit cleaner and easier to inspect in practice.

🔹 Why this matters
The more data we generate, the easier it becomes to spot what still feels missing, repetitive, or unclear on the product side. That gives us a better base for refining Bardiel beyond just raw endpoint functionality.

🔹 Current direction
So right now this is a mix of:
- generating more data across the Bardiel sessions
- improving the dashboard incrementally
- making the newer v3 delegate/validate views more usable as we keep testing

#Cortensor #DevLog #Bardiel #Dashboard #Delegate #Validate

HODLHenry retweeted
Cortensor
Cortensor@cortensor·
🛠️ DevLog – Generating More Data on Bardiel Router #1

We’ve started generating more data on Bardiel Router #1 so we can keep refining the Bardiel dashboard around the newer v3 flow.

🔹 Current Bardiel Router #1 session map
- delegate with 1-node redundancy → dashboard-testnet1a.cortensor.network/session/153/ta…
- delegate with 3-node redundancy → dashboard-testnet1a.cortensor.network/session/152/ta…
- delegate with 5-node redundancy → dashboard-testnet1a.cortensor.network/session/129/ta…
- validate with 1-node redundancy → dashboard-testnet1a.cortensor.network/session/149/ta…
- validate with 3-node redundancy → dashboard-testnet1a.cortensor.network/session/150/ta…
- validate with 5-node redundancy → dashboard-testnet1a.cortensor.network/session/151/ta…

🔹 Current focus
This afternoon, we’ll keep pushing more test data through these Bardiel Router #1 sessions so the dashboard has more real examples and result shapes to work with.

🔹 Why this matters
The goal is to build up enough of a v3-style dataset across both /delegate and /validate, with proper 1 / 3 / 5 redundancy coverage, so the Bardiel dashboard can be refined against more realistic task and output coverage instead of only a smaller initial set.

🔹 Current direction
These sessions now reflect the current Bardiel Router #1 dashboard mapping, and we’ll use them as the base while continuing to iterate further.

🔹 What’s next
After generating more data, we’ll continue iterating on Bardiel dashboard UI/UX and keep improving how the newer v3 payloads, attributes, and result views are rendered.

#Cortensor #DevLog #Bardiel #Dashboard #Delegate #Validate
HODLHenry retweeted
Cortensor
Cortensor@cortensor·
🛠️ DevLog – Initial v3 Bardiel Dashboard Updates Now Pushed

We’ve now pushed the initial Bardiel dashboard updates for the newer v3 flow.

dashboard.bardiel.tech

🔹 What this includes
This first pass is mainly about bringing the dashboard more in line with the newer v3 /delegate and /validate shape, including the newer task/result structure, consensus-related views, and rough raw-result rendering where needed.

🔹 Current status
This is still an initial refinement pass, not the final shape. But at least the Bardiel dashboard is now updated enough to better reflect the newer v3 flow instead of only older/minimal rendering.

🔹 What’s next
We’ll keep iterating from here, including:
- more refinement on the dashboard itself
- more complete API examples
- more examples/data coverage across the newer v3 task types
- more product-side cleanup as we keep testing

🔹 Why this matters
The goal is to make Bardiel not just functional on the endpoint side, but also much clearer on the dashboard side as the newer v3 payloads, outputs, and consensus-style attributes become part of the normal flow.

#Cortensor #DevLog #Bardiel #Dashboard #Delegate #Validate
Cortensor@cortensor

🗓️ Weekly Focus – Phase #3 v3 Iteration, Bardiel Updates & SLA #3 Testing

Phase #3 continues to move from setup into deeper iteration. This week is mainly about pushing the v3 agent surfaces further, refining Bardiel around those flows, and validating the newly deployed SLA #3 path in real selection behavior.

🔹 Phase #3 – Support, Monitoring & Stats
- Continue active monitoring across routing, miners, validators, dashboards, and L3 stats.
- Track stability while v3 flows and inference-quality signals are exercised more heavily.

🔹 v3 /delegate + /validate – Continued Tests
- Continue deeper testing on v3 /delegate + /validate across the prepared session paths.
- Focus on real execution/validation behavior, routing consistency, and closing remaining logic gaps.

🔹 Bardiel Dashboard – Refinement / Updates / v3 Adaptation
- Continue refining the Bardiel Dashboard so it better reflects and supports v3 /delegate + /validate flows.
- Focus on adapting data views, test datasets, and UX around the newer agentic surfaces.

🔹 Inference Quality – SLA #3 Rollout
- The latest NodePool + NodePoolUtils with SLA #3 is now deployed, so this week is about testing that newer selection path in practice.
- Current shape: SLA #1 = node-level, SLA #2 = node-level + network-task stats, SLA #3 = node-level + network-task stats + user-task stats.

🔹 Inference Quality – Dashboard & Regression
- Quality stats are now surfaced in two places: the quality stats rank table and the quality stats columns under Node Perf.
- Focus this week is validating how those signals behave in real routing/selection, starting on testnet1a first and then expanding to testnet0.

This week is about continuing the Phase #3 push: making v3 /delegate + /validate more solid, bringing Bardiel closer to those surfaces, and testing SLA #3 as a more meaningful inference-quality signal across routing and dashboard layers.

#Cortensor #Testnet #Phase3 #AIInfra #DePIN #Bardiel #Delegate #Validate #InferenceQuality #L3

HODLHenry retweeted
Bardiel
Bardiel@BardielTech·
Initial Bardiel Dashboard updates for v3 are now live ✅

dashboard.bardiel.tech

This first pass brings the UI closer to the new v3 /delegate + /validate shape (task/result structure + early consensus views + raw result rendering where needed).

Still an early iteration - we’ll keep refining the dashboard, examples, and data coverage as v3 testing continues.
HODLHenry retweeted
Cortensor
Cortensor@cortensor·
🛠️ DevLog – Quality Stats + SLA #3 Now in Place, Next Focus Is Bardiel Refresh

With the early Quality Stats path and SLA #3 filter now placed on both testnet0 and testnet1a, the next step is to keep monitoring them through this week and adjust/refine as issues show up.

🔹 Current status
At this point, the newer user-task quality signal is no longer just an experiment on the side. It is now live in rough form across both testnet environments and participating in the broader node-selection path.

🔹 What happens next
For the rest of this week, we’ll mainly observe how this behaves over time, watch for edge cases, and keep refining the filter, threshold behavior, and related UI/operational surfaces as needed.

🔹 In parallel
While that monitoring continues, the other main focus for the rest of the week will be refreshing the Bardiel dashboard so it is more up to date with the newer v3 /delegate and /validate flow.

🔹 Bardiel-side goal
We now have enough examples and test data in place that the Bardiel dashboard can be iterated further to better reflect the newer v3 payloads, attributes, and result shapes instead of older/minimal views.

🔹 Current direction
So the rest of this week is mostly split across:
- monitoring/refining Quality Stats + SLA #3 on testnet0/testnet1a
- refreshing the Bardiel dashboard around v3 /delegate and /validate
- improving examples and product-side clarity as those newer flows settle in

#Cortensor #DevLog #InferenceQuality #Bardiel #Delegate #Validate
HODLHenry retweeted
Cortensor
Cortensor@cortensor·
🛠️ DevLog – SLA #3 Filter Now Live on Testnet0 + Testnet1a

We’ve now rolled out the latest node-pool changes on both testnet0 and testnet1a, and the SLA #3 filter is enabled there as well. So far, it looks like it is working as expected. The dashboard was also updated to reflect this new filter path.

🔹 Current SLA shape
- SLA #1 = node-level
- SLA #2 = node-level + network-task stats
- SLA #3 = node-level + network-task stats + user-task stats

🔹 What’s now added
We also added two controls around SLA #3:
- enable / disable switch for the filter
- threshold setting for the required success rate

🔹 Current threshold
Right now, the threshold is set so only nodes with at least 80% success rate are allowed to pass the SLA #3 quality gate during ephemeral-node session selection.

🔹 Current status
We’ll still be doing more testing from here, but so far the rollout on both testnet environments looks to be behaving as expected.

#Cortensor #DevLog #NodePool #InferenceQuality #Oracle #EphemeralNodes
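The two controls described above (an enable/disable switch plus a success-rate threshold, currently 80%) can be sketched roughly like this. The function and field names are illustrative, not the actual node-pool code.

```python
# Sketch of the SLA #3 quality gate described in the post: a switch plus
# an 80% success-rate threshold. Names are illustrative assumptions.
SLA3_ENABLED = True
SLA3_MIN_SUCCESS_RATE = 0.80  # nodes need >= 80% success rate to pass

def passes_sla3(node_stats: dict) -> bool:
    """Return True if a node clears the SLA #3 quality gate."""
    if not SLA3_ENABLED:
        return True  # filter switched off: everything passes
    total = node_stats.get("assignments", 0)
    if total == 0:
        return False  # no user-task history yet, so nothing to judge
    rate = node_stats["successes"] / total
    return rate >= SLA3_MIN_SUCCESS_RATE
```

Separating the switch from the threshold means the gate can be disabled quickly during rollout without redeploying, while the threshold stays tunable as real data accumulates.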
Cortensor@cortensor

🛠️ DevLog – Latest Node Pool + Node Pool Utils with SLA #3 Now Deployed

We’ve now deployed the latest NodePool and NodePoolUtils, including the newer SLA #3 filter path.

🔹 What changed
This deployment includes the new selection path where user-task quality stats can now sit on top of the earlier node-level and network-task filters.

🔹 Current SLA shape
- SLA #1 = node-level
- SLA #2 = node-level + network-task stats
- SLA #3 = node-level + network-task stats + user-task stats

🔹 What’s next
We’ll start by testing this on testnet1a first, including some regression around the newer selection behavior.

🔹 After that
Once the initial testnet1a pass looks okay, we’ll expand the testing into testnet0 as well.

#Cortensor #DevLog #NodePool #InferenceQuality #Oracle #EphemeralNodes

HODLHenry retweeted
Cortensor
Cortensor@cortensor·
🛠️ DevLog – Iterating on Node Perf with Quality Stats Columns

As a follow-up on the Quality Oracle / Quality Stats work, we’ve started iterating on the Node Perf page so that the new quality signal is visible there as well.

🔹 What changed
We added Quality Stats data into the Node Performance table. For each node, the table now shows:
- quality-check success rate
- total quality-check assignment count

These are based on the Quality Oracle stats, using the full-completion ratio over total assignments.

🔹 Why this matters
This is the first step toward bringing the new quality signal into a more operational surface instead of keeping it only in the separate quality view.

🔹 Current scope
For now, this is mainly a visibility/update step:
- data is added
- values are populated
- CSV export also includes these fields when visible

🔹 What’s next
Eventually, this signal should also be used more directly for reward-side filtering, but for now we’re still iterating on the table/view locally first.

🔹 Current status
Still local iteration for now. We’ll push it to the testnet environments later today.

#Cortensor #DevLog #NodePerf #InferenceQuality #Oracle #Dashboard
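The two new columns described above reduce to one ratio: full completions over total assignments. A minimal sketch, where the function and key names are illustrative:

```python
# Sketch of the two Node Perf quality columns described above:
# success rate = full-completion ratio over total assignments.
# Function and key names are illustrative assumptions.

def quality_columns(full_completions: int, total_assignments: int) -> dict:
    """Compute the per-node quality stats shown in the Node Perf table."""
    rate = full_completions / total_assignments if total_assignments else 0.0
    return {
        "quality_success_rate": round(rate, 4),
        "quality_assignments": total_assignments,
    }
```

Showing the assignment count next to the rate matters: a 100% rate over 2 assignments is a much weaker signal than 90% over 200, so the count gives the rate its context.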
Cortensor@cortensor

🛠️ DevLog – Bringing Quality Stats into Node Perf / Reward Next

Since the Quality Stats / Quality Oracle path now seems to be working in rough form - and we can already sort/rank nodes from that data - the first practical place we plan to apply it is the Node Perf / Reward page.

🔹 Current direction
The idea is to surface this quality signal in the performance/reward view first, so it can already be used as part of the next node-reward cycle later this month.

🔹 Why this matters
This is a good first product surface for the new quality data because it lets us use the signal in a visible and operational way before relying on it more directly inside deeper node-selection logic.

🔹 Current context
- So far, the Quality Oracle is running, data is accumulating, and the rank/sorted table is already taking shape.
- That makes the Perf / Reward side the most natural first place to apply it.

🔹 What’s next
We’ll still keep iterating and experimenting with the quality signal in parallel, but the plan is to push this first change later today so the new data can start showing up on the Node Perf / Reward side as well.

#Cortensor #DevLog #InferenceQuality #Oracle #NodeReward #NodePerf

HODLHenry retweeted
Cortensor
Cortensor@cortensor·
🛠️ DevLog – Rough SLA #3 Quality Filter Implementation Added

We’ve added the rough implementation shape for bringing the Quality Stats signal into node selection as a 3rd SLA-style filter. This is implementation work only for now - no tests, no deployment, and no full integration yet.

🔹 What changed
At a high level, we added the Quality Stats data-module wiring into the node-selection utility path and introduced a new SLA #3 reservation path that can read miner quality stats.

🔹 What SLA #3 keeps
This new path still keeps the earlier SLA #2 style filters in place first, so it continues respecting the existing baseline checks before looking at the newer quality signal.

🔹 What SLA #3 adds
- The new part is a user-task quality gate based on the Quality Stats data.
- In rough form, it checks recent completion reliability and only lets nodes pass if they meet the current success-rate threshold.

🔹 Fallback behavior
- To avoid breaking older setups, the new path can still fall back to the previous SLA #2 behavior if the Quality Stats module is not configured.
- But when the quality module is configured, the rough intent is to enforce the quality gate instead of silently bypassing it.

🔹 Current status
- implementation only
- compile passed
- no tests yet
- no deployment yet
- not wired into the remaining selector/integration path yet

🔹 Why this matters
This is the first rough implementation step toward actually using the Quality Oracle / Quality Stats signal inside node selection, instead of only visualizing or recording it.

#Cortensor #DevLog #InferenceQuality #Oracle #NodePool #EphemeralNodes
Cortensor@cortensor

🛠️ DevLog – Current Assessment for Adding Quality Stats as the 3rd SLA Filter

We’ve now finished the current assessment for adding the Quality Stats data module as the 3rd SLA-style filter in ephemeral-node selection.

🔹 Current selection flow
Right now, the main node-selection logic lives under the Node Pool Utils path. In rough order, it reads through:
- Node Pool first
- 1st filter: node-level checks based on the SLA selected during session creation
- 2nd filter: Quantitative + Qualitative Stats, where network-task evaluation results are accumulated
- 3rd filter: Quality Stats, which we are now thinking to add after the 2nd filter step

🔹 Current direction
The idea is for this 3rd filter to use the newer Quality Oracle / Quality Stats signal after the earlier filters have already narrowed the candidate set.

🔹 Why this matters
This should give node selection one more layer based on actual user-task sampling behavior, not just the earlier pool/SLA/network-task signals.

🔹 Selection behavior
The current thinking is that this should still behave as a best-effort match, so we reduce the chance of selecting weaker nodes without creating a harder “no node available” problem when the candidate set is already small.

🔹 Current status
At least the assessment side is now done, and the next step is to check more closely how this should be integrated into the current Node Pool Utils path.

#Cortensor #DevLog #InferenceQuality #Oracle #NodePool #EphemeralNodes
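The selection order described above, including the best-effort behavior, can be sketched as a simple filter pipeline. Everything here is an illustrative assumption (the predicate names and list-based pool) rather than the actual Node Pool Utils code; only the filter order and the best-effort intent come from the post.

```python
# Sketch of the assessed selection flow: pool -> node-level SLA checks ->
# network-task stats -> quality stats, applied best-effort so a small
# candidate set never collapses to "no node available".
# All predicate names are illustrative assumptions.

def select_nodes(pool, sla_ok, network_ok, quality_ok):
    candidates = [n for n in pool if sla_ok(n)]            # 1st filter
    candidates = [n for n in candidates if network_ok(n)]  # 2nd filter
    gated = [n for n in candidates if quality_ok(n)]       # 3rd filter
    # Best-effort: if the quality gate empties the set, fall back to the
    # pre-quality candidates instead of failing the session outright.
    return gated if gated else candidates
```

The fallback line is the "best-effort match" from the post: the quality gate can only re-rank or narrow an already-valid set, never turn a viable pool into an empty one.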

HODLHenry retweeted
Bardiel
Bardiel@BardielTech·
Bardiel Dashboard is starting to reflect v3 ✅

We’ve kicked off the first dashboard iteration for the newer v3 /delegate + /validate flow using a mock dataset generated today.

What’s in this first pass:
- separate v3 delegate vs validate groupings (R1 / R3 / R5)
- basic task/result rendering against the v3-shaped data
- early consensus + result summary blocks (so you can see redundancy/aggregation at a glance)
- a cleaner structure we can build on for better examples + docs

This is still the beginning and still mock data - the goal right now is getting the structure right so the product surface matches how v3 actually behaves.

Next week: generate more data and keep iterating so the dashboard improves alongside v3.
HODLHenry retweeted
Bardiel
Bardiel@BardielTech·
Bardiel endpoint setup is mostly in place now.

At least two Bardiel endpoints are up to date, so the main focus for the rest of this week is testing those paths more.

Plan from here:
- run enough test data through /delegate + /validate
- use the outputs to see what’s still missing/unclear
- then refine the Bardiel dashboard with real examples - especially around the newer v3 flow

Goal isn’t just "endpoints work," it’s "builders can see and trust what happened."
Bardiel@BardielTech

This week we’re moving into full v3 testing for Bardiel.

/delegate + /validate will go through:
- matrix tests across 1 / 3 / 5 replicas (plus different routing/model paths)
- dedicated vs ephemeral variations where applicable
- then stress tests to surface real failure modes under load

After we get clean signals from those runs, we’ll update the Bardiel dashboard to reflect v3 traces/results more directly.

What v3 means: redundancy + consensus become explicit and structured - so agents can rely on more than a single output.
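The matrix tests described above cross endpoints with replica counts and node variations. A minimal sketch of how such a matrix could be enumerated; the dimension names and values beyond those stated in the post (endpoints, 1/3/5 replicas, dedicated vs ephemeral) are illustrative:

```python
# Sketch of the v3 test matrix described above: endpoints crossed with
# 1 / 3 / 5 replica counts and dedicated/ephemeral node modes.
# Structure and key names are illustrative assumptions.
from itertools import product

ENDPOINTS = ["/delegate", "/validate"]
REPLICAS = [1, 3, 5]
NODE_MODES = ["dedicated", "ephemeral"]

matrix = [
    {"endpoint": e, "replicas": r, "node_mode": m}
    for e, r, m in product(ENDPOINTS, REPLICAS, NODE_MODES)
]
# 2 endpoints x 3 replica counts x 2 node modes = 12 cases to run
```

Enumerating the matrix up front makes the stress-test phase mechanical: each case can be run, timed, and checked the same way, and gaps in coverage are visible before any load is applied.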

HODLHenry retweeted
Cortensor
Cortensor@cortensor·
🛠️ DevLog – Initial Bardiel Dashboard Iteration for v3 Started

We’ve now started the initial Bardiel dashboard iteration to reflect the newer v3 /delegate and /validate flow, using the mock dataset we generated today.

🔹 Current progress
The main goal for this first pass was simply to get the Bardiel dashboard into a shape where it can start showing the newer v3 task structure more clearly across delegation and validation views.

🔹 What this includes
So far, this rough iteration is focused on:
- separate v3 delegate / validate session groupings
- basic task/result rendering against the new mock dataset
- early consensus-style and result-summary presentation where applicable
- a cleaner structure for later examples/docs refresh

🔹 Why this matters
- Until now, a lot of the v3 work was happening on the router/session side first.
- This is the beginning of bringing that newer shape into the Bardiel product surface itself, so the dashboard can better reflect what the endpoint/output flow now looks like.

🔹 Current status
- This is just the beginning of the Bardiel dashboard refresh.
- The current view is still based on mock/generated data from today, so the goal right now is more structural iteration than polish.

🔹 What’s next
Next week, we’ll generate more data and keep iterating from there so the Bardiel dashboard can keep getting refined alongside the newer v3 delegate/validate flow.

#Cortensor #DevLog #Bardiel #Dashboard #Delegate #Validate
Cortensor @cortensor

🛠️ DevLog – Proper Bardiel Sessions Now Set for v3 /delegate & /validate

We now have the proper Bardiel session setup in place for both v3 /delegate and /validate, and are re-generating the dataset on top of it before the next Bardiel dashboard iteration.

🔹 Current session setup
- 3 sessions for /delegate
- 3 sessions for /validate
- this gives us the cleaner R1 / R3 / R5 shape across both paths

🔹 Current data generation
We’ve already generated dashboard data across all six Bardiel sessions, so the newer session layout now has usable output/examples behind it.

🔹 Why this matters
The main goal is to regenerate the dataset on the correct session structure first, so the Bardiel dashboard refresh is based on the newer v3 shape instead of older/stale paths.

🔹 Current direction
- A small caveat: some delegate data was generated directly into the target sessions first to avoid stale router env behavior.
- Once the router side is fully aligned again, this should make the next Bardiel dashboard iteration cleaner.

🔹 What’s next
From here, the focus is on using this refreshed dataset to iterate on and update the Bardiel dashboard more confidently.

#Cortensor #DevLog #Bardiel #Delegate #Validate #Dashboard
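The 3 + 3 session layout described above can be enumerated as a small cross-product: one session per replica count (R1 / R3 / R5) for each v3 path. The dict fields and naming scheme below are illustrative, not Cortensor's actual session schema.

```python
from itertools import product

PATHS = ("delegate", "validate")
REPLICAS = (1, 3, 5)

# One session per (path, replica-count) pair -> 6 sessions total
sessions = [
    {"path": path, "replicas": r, "name": f"{path}-R{r}"}
    for path, r in product(PATHS, REPLICAS)
]

assert len(sessions) == 6  # 3 sessions for each of /delegate and /validate
```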

HODLHenry retweeted
Cortensor @cortensor
🛠️ DevLog – Starting to Look at Quality Stats as a 3rd SLA Filter

The Quality Oracle has now been running steadily and accumulating actual user-task sampling data, and so far it is working well and producing useful signal.

🔹 Current status
At this point, the Quality Oracle path looks stable enough that we have a growing set of real quality-check data instead of only rough local output.

🔹 Why this matters
The quality signal is now starting to reflect actual user-style task behavior over time, not just static availability or basic pool presence.

🔹 Next direction
Now that we have some data, the next step is to look into using it as the 3rd SLA-style filter alongside the other node-selection signals.

🔹 What we’ll assess
We’ll start assessing how to use this in the node pool so ephemeral-node selection can make better choices for actual user tasks, based on more recent functional behavior.

#Cortensor #DevLog #InferenceQuality #Oracle #NodePool #EphemeralNodes
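A minimal sketch of what a "3rd filter" could look like in node selection: availability, then pool presence, then a quality threshold from the oracle's stats. The field names (`available`, `in_pool`, `quality_score`) and the threshold value are assumptions for illustration, not Cortensor's actual schema.

```python
QUALITY_THRESHOLD = 0.8  # assumed cut-off for the quality-based filter

def select_nodes(pool: list[dict]) -> list[dict]:
    """Apply three sequential SLA-style filters over an ephemeral node pool."""
    candidates = [n for n in pool if n["available"]]       # filter 1: availability
    candidates = [n for n in candidates if n["in_pool"]]   # filter 2: pool presence
    # filter 3: recent functional quality from the Quality Oracle
    return [n for n in candidates if n["quality_score"] >= QUALITY_THRESHOLD]
```

The point of the third filter is that a node can be up and present in the pool yet still fail real user-style tasks; the quality signal catches that case.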
Cortensor @cortensor

🛠️ DevLog – Small Refinements on Quality Stats + Quality Oracle

We made a few small refinements on the Quality Stats / Quality Oracle side to make the current checks a bit easier to read and track while we keep experimenting.

🔹 What changed
- added a few small extra attributes to the quality view
- this includes things like last-checked time and other small helpful fields
- the main goal is to make the current signal easier to inspect while data keeps accumulating

🔹 Current direction
These are still small refinement steps, but they help as we keep iterating on how the quality signal should look and what is actually useful to surface.

🔹 Infra update
- We also started running this on a cloud instance so it can stay up more stably and keep collecting data 24/7 instead of depending only on shorter local runs.
- Now that the Quality Check Oracle is running on a cloud instance, we’ll observe the accumulated data through this week and next.

🔹 What’s next
We’ll keep refining this as we experiment through the week and gather more quality-check data.

#Cortensor #DevLog #InferenceQuality #Oracle #Dashboard #NodePool
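As a rough illustration of the extra attributes mentioned above (last-checked time plus small inspection fields), a per-node quality record might look like this. The class and field names are hypothetical; the real quality view may track different fields.

```python
from dataclasses import dataclass
import time

@dataclass
class QualityRecord:
    node_id: str
    checks_total: int = 0
    checks_passed: int = 0
    last_checked: float = 0.0  # unix timestamp of the most recent probe

    def record(self, passed: bool) -> None:
        """Update counters and the last-checked timestamp after a probe."""
        self.checks_total += 1
        self.checks_passed += int(passed)
        self.last_checked = time.time()

    @property
    def pass_rate(self) -> float:
        return self.checks_passed / self.checks_total if self.checks_total else 0.0
```

A `last_checked` field is what makes the signal inspectable at a glance: stale records can be spotted without digging through raw probe logs.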

HODLHenry retweeted
Cortensor @cortensor
🛠️ DevLog – More Progress on Bardiel Test Data, Moving Into UI Refinement

We’ve now started generating more real test data on the Bardiel endpoint side, and at least the first endpoint is functional.

🔹 Current Bardiel endpoints
- dashboard-testnet1a.cortensor.network/session/128/ta…
- router1-t1a.bardiel.app/api/v1/ping

🔹 Current progress
The main goal so far was to get a more usable dataset and test output flowing through Bardiel, and that part is starting to take shape.

🔹 What’s next
With more data now coming in, the next step is shifting more attention toward Bardiel dashboard refinement later today.

🔹 Broader effect
This is also helping the main Cortensor dashboard. As more datasets and result shapes come in, we can keep refining the UI/UX across both Bardiel and the broader Cortensor dashboard to reflect the newer v3 flows more clearly.

#Cortensor #DevLog #Bardiel #Dashboard #Delegate #Validate
Cortensor @cortensor

🛠️ DevLog – Bardiel Endpoint Setup Mostly in Place

At this point, the setup is mostly in place for the Bardiel endpoint side.

🔹 Current status
At least two Bardiel endpoints are now up to date, so the main focus for the rest of this week will be testing those paths a bit more.

🔹 Steps from here
- run enough test data through the Bardiel endpoints
- use that output to see what still feels missing or unclear
- then keep refining the Bardiel dashboard based on those results

🔹 Why this matters
The goal is not just endpoint testing itself, but also making sure we have enough real output/examples to refresh and improve the Bardiel dashboard, along with the examples and docs related to the newer v3 flow.

#Cortensor #DevLog #Bardiel #Delegate #Validate #Dashboard

HODLHenry retweeted
Cortensor @cortensor
🛠️ DevLog – Preparing Matrix + Smoke Tests on v3 /delegate & /validate

We’re now preparing the next round of matrix and smoke tests on v3 /delegate and /validate by pushing the latest changes across all 4 endpoints.

🔹 Current direction
We’ll roll out the latest v3 updates to all 4 router endpoints, then continue testing with the mock dataset we started using last week.

🔹 What we’re testing
The goal is to run a broader matrix/smoke pass across both the Corgent and Bardiel endpoint surfaces, so we can check current behavior more consistently instead of only through isolated manual tests.

🔹 Bardiel-side context
- While doing this, we’ll also run tests through the Bardiel endpoints so we can generate a larger test dataset there as well.
- That should help us iterate on and refine the Bardiel dashboard to better reflect the v3 changes we’ve been making.

🔹 Why this matters
- This is not only about validating the latest /delegate and /validate flow.
- It is also part of getting enough usage/test coverage across both Corgent and Bardiel surfaces so the product-side dashboard/view layer can catch up with the newer v3 behavior.

🔹 Current focus
This next pass is mainly prep work for:
- matrix tests
- smoke tests
- broader endpoint coverage
- more Bardiel-side refinement data

#Cortensor #DevLog #Delegate #Validate #Bardiel #Corgent
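A matrix pass like the one described above is essentially a cross-product of the variables under test. This sketch enumerates the cases; the endpoint names, and the choice of exactly these four dimensions, are assumptions for illustration.

```python
from itertools import product

ENDPOINTS = ("router1", "router2", "router3", "router4")  # placeholder names
REPLICAS = (1, 3, 5)
MODES = ("dedicated", "ephemeral")
PATHS = ("/delegate", "/validate")

# 4 endpoints x 3 replica counts x 2 modes x 2 paths = 48 matrix cases
matrix = list(product(ENDPOINTS, REPLICAS, MODES, PATHS))
assert len(matrix) == 48

for endpoint, replicas, mode, path in matrix:
    # a smoke runner would issue one request per case here and record the result
    pass
```

Enumerating the matrix up front is what distinguishes this from isolated manual tests: every combination gets hit exactly once, so gaps in coverage are visible by construction.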
Cortensor @cortensor

🗓️ Weekly Focus – Phase #3 Support, v3 /delegate & /validate Testing & Inference Quality

Phase #3 continues to move from prep into deeper testing. This week is focused on validating v3 agent surfaces under real conditions while iterating on inference quality and continuing data-management testing.

🔹 Phase #3 – Support, Monitoring & Stats
- Continue active monitoring across routing, miners, validators, dashboards, and L3 stats.
- Track stability and performance as more Phase #3 features are exercised under real conditions.

🔹 v3 /delegate + /validate – Matrix Tests & Stress Tests
- Move into matrix-style testing across 1 / 3 / 5 node sessions, dedicated vs ephemeral, and different routing/model paths.
- Follow with stress tests to validate real delegation/validation behavior and identify failure modes under load.

🔹 Inference Quality – Quality Oracle (Stateless Iteration)
- Continue iterating on the stateless Quality Oracle using real user-style task probes.
- Focus on verifying node functionality (ack, execution, response) rather than just availability.

🔹 Inference Quality – Quality Check Data (Design Phase)
- Quality-check data storage still requires more design, so this week focuses on shaping the data model (total + sliding-window stats).
- This will run in parallel with oracle iteration, so signals can later be used for routing, selection, and scoring.

🔹 MVP Data Management – Continuous Testing
- Continue ongoing testing on Privacy Feature 1.0 (session + task encryption) and Offchain Storage v3.
- Run combined E2E flows (router → miner → dashboard) to ensure stability across dedicated and ephemeral paths.

This week is about pushing v3 /delegate + /validate into real testing conditions, while iterating on the Quality Oracle and designing the quality data layer, alongside continued validation of the MVP data stack.

#Cortensor #Testnet #Phase3 #AIInfra #DePIN #Corgent #Bardiel #Delegate #Validate #PrivateAI #L3
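The "total + sliding-window stats" data model mentioned above can be sketched with lifetime counters plus a bounded window of recent results. This is a minimal illustration under assumed names; the window size and class shape are not from the actual design.

```python
from collections import deque

class QualityStats:
    """Lifetime counters alongside a bounded window of recent results."""

    def __init__(self, window: int = 100):  # window size is an assumption
        self.total = 0
        self.passed = 0
        self.recent = deque(maxlen=window)  # deque drops the oldest entry automatically

    def record(self, ok: bool) -> None:
        self.total += 1
        self.passed += int(ok)
        self.recent.append(ok)

    def lifetime_rate(self) -> float:
        return self.passed / self.total if self.total else 0.0

    def window_rate(self) -> float:
        return sum(self.recent) / len(self.recent) if self.recent else 0.0
```

Keeping both views matters for routing and scoring: the lifetime rate shows long-term reliability, while the window rate reacts quickly when a node starts failing recent tasks.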

HODLHenry retweeted
Cortensor @cortensor
🛠️ DevLog – Quality Oracle Now Recording Results Into the Data Path

As a follow-up to the earlier Quality Oracle integration work, we filled in the missing gaps and now have the Quality Oracle calling the methods needed to record results into the quality-stats path.

🔹 What changed
- filled in the missing integration hooks
- the Quality Oracle can now call the record/update methods after each check
- this means the oracle is no longer only printing probe results locally

🔹 Current result
We’ve now attempted actual result recording through the integrated path, and it appears to be working, at least in rough form.

🔹 What’s next
For now, we’ll let it keep running and gather data so we can observe how the flow behaves over time.

🔹 After that
Once we have more confidence in the recording path, the next area to look at will be visualization and a dashboard/UI around these quality stats.

#Cortensor #DevLog #InferenceQuality #Oracle #EphemeralNodes #NodePool
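The "record after each check" hook described above can be sketched as follows: instead of only printing probe results, the oracle also calls a store's record method. All names here (`QualityStore`, `record_result`, `probe`) are hypothetical, and `probe` is a stub standing in for the real ack/execution/response check.

```python
class QualityStore:
    """Toy stand-in for the quality-stats data path."""

    def __init__(self):
        self.results: dict[str, list[bool]] = {}

    def record_result(self, node_id: str, passed: bool) -> None:
        self.results.setdefault(node_id, []).append(passed)

def probe(node_id: str) -> bool:
    # placeholder for the real ack -> execution -> response check
    return True

def run_probe(node_id: str, store: QualityStore) -> None:
    passed = probe(node_id)
    print(node_id, "ok" if passed else "fail")  # the old local-only output
    store.record_result(node_id, passed)        # the new hook: persist into the stats path
```

The integration gap being closed is exactly the last line: without it, probe results never leave the oracle process.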
Cortensor @cortensor

🛠️ DevLog – A Few Gaps to Fill Before Quality Oracle Integration

As a follow-up to the earlier Quality Oracle integration attempts, we found a few gaps that need to be filled before the full path can be connected cleanly.

🔹 Current gap
- The main missing piece right now is calling the counter / point method from the Quality Oracle side.
- That needs to be filled before the oracle can do more than just output check results.

🔹 Current plan
These missing parts should be addressed later today; after that, we’ll try calling them directly from the Quality Oracle after the result-output step.

🔹 Why this matters
The immediate work is less about broad integration and more about filling the smaller missing hooks needed for that integration path to actually work.

🔹 Next after this
Once the initial Quality Oracle integration is in a better place, we’ll start or resume the v3 /delegate and /validate matrix/stress tests from tomorrow.

#Cortensor #DevLog #InferenceQuality #Oracle #Delegate #Validate
