About dTaoAnalytics
Built to turn Bittensor complexity into clearer allocation decisions.
dTaoAnalytics is built by an operator who has spent decades close to critical infrastructure, and it runs on our own Bittensor data pipeline. The product exists for investors who need a calmer way to compare subnet opportunity, liquidity, evidence, costs, and risk.
Why “dTaoAnalytics”?
The lowercase d stands for dynamic TAO — the per-subnet token model Bittensor introduced when it shipped dTAO in February 2025. Every subnet now has its own alpha token, denominated in TAO via an AMM pool. That structural shift is what makes subnet allocation a real investment problem, and what this product is built to analyze.
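The per-subnet pool can be pictured with a simplified constant-product sketch. This is illustrative only: the function names are hypothetical, and Bittensor's actual on-chain pool mechanics differ in detail. The point is that each alpha token gets a TAO-denominated spot price from its own reserves, and trading against a thin pool moves that price quickly.

```python
# Illustrative sketch only: a simplified constant-product (x * y = k) pool,
# not Bittensor's exact on-chain mechanics.

def alpha_price_in_tao(tao_reserve: float, alpha_reserve: float) -> float:
    """Spot price of one alpha token, denominated in TAO."""
    return tao_reserve / alpha_reserve

def swap_tao_for_alpha(tao_in: float, tao_reserve: float, alpha_reserve: float):
    """Constant-product swap: returns (alpha_out, new_reserves)."""
    k = tao_reserve * alpha_reserve
    new_tao = tao_reserve + tao_in
    new_alpha = k / new_tao          # reserves must keep x * y = k
    alpha_out = alpha_reserve - new_alpha
    return alpha_out, (new_tao, new_alpha)

# Two pools with the same spot price but different depth absorb the same
# TAO inflow very differently -- the thin pool's price moves far more.
```

Even this toy version shows why a price table alone is not enough: two subnets can trade at the same spot price while one is deep enough for real size and the other is not.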
Evidence first
Signals need measured forward evidence before they deserve trust. Narrative alone is not enough.
Liquidity aware
A subnet can be interesting and still too thin to trade at the size an investor needs.
Cost honest
Strategies are benchmarked, then held to a net-of-cost bar. Slippage, fees, turnover, and capacity limits all count before claiming alpha.
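The net-of-cost bar reduces to simple arithmetic. The sketch below is a minimal model under stated assumptions (flat per-trade fee and slippage rates, costs proportional to turnover); the function names are illustrative, not the product's actual implementation.

```python
def net_of_cost_return(gross_return: float, turnover: float,
                       fee_rate: float, slippage_rate: float) -> float:
    """Subtract round-trip trading costs from a gross strategy return.

    All rates are per-period fractions; turnover is traded notional
    divided by portfolio value over the same period.
    """
    cost_drag = turnover * (fee_rate + slippage_rate)
    return gross_return - cost_drag

def beats_benchmark(gross_return: float, turnover: float, fee_rate: float,
                    slippage_rate: float, benchmark_return: float) -> bool:
    """A strategy clears the bar only if it beats the passive baseline net of costs."""
    net = net_of_cost_return(gross_return, turnover, fee_rate, slippage_rate)
    return net > benchmark_return
```

For example, a 10% gross return with 2x turnover at 50 bps of combined fee and slippage per unit traded nets out to 9%; whether that counts as alpha depends entirely on the passive baseline it is measured against.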
Why this exists
Buying TAO is simple. Choosing subnet exposure is not.
Subnet tokens are specific bets inside Bittensor. Each one has a different pool, emission profile, builder path, liquidity depth, and risk surface. A table of prices is not enough when the real question is whether an allocation survives evidence, tradeability, costs, and benchmark comparison.
The app is designed to turn those checks into a repeatable workflow: start with the market, inspect one subnet, validate signal evidence, compare against benchmarks, and monitor what is deteriorating.
Provenance
Data should carry its source with it.
We run our own Subtensor node and attach block-level context where practical. Pool state, emissions, metagraph snapshots, and derived indicators should be traceable back to chain data, not treated as detached dashboard numbers.
That provenance is part of the product promise: if a page makes an investor think differently, it should also show where the data came from and how fresh it is.
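One way to make that promise concrete is to never pass a derived number around without its chain context. The sketch below is a hypothetical shape for such a record — the class and field names are assumptions for illustration, not the product's actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProvenancedMetric:
    """A derived number that carries its chain context with it."""
    name: str              # e.g. "pool_depth_tao"
    value: float
    block_height: int      # Subtensor block the inputs were read at
    block_time_utc: str    # ISO-8601 timestamp of that block
    source: str            # e.g. "own Subtensor node"

    def is_fresh(self, current_block: int, max_age_blocks: int = 300) -> bool:
        """True if the metric's inputs are recent enough to act on."""
        return current_block - self.block_height <= max_age_blocks
```

Because the record is frozen, freshness and origin travel with the value instead of living in a detached dashboard label.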
Beyond raw chain data, every fleet decision — predictions, trades, portfolio changes — is labeled with the data context that produced it and later matched against the outcome that followed. That labeled outcome ledger compounds daily. It is what lets the methodology be tested rather than asserted, and what makes it possible to say which signals actually predict subnet returns and which do not.
Research and capital
Agents scout. Strategies must prove.
The agent fleet is a research and labeling loop. Agents help generate observations, hypotheses, predictions, and outcome labels. They are not presented as approved capital allocators.
Capital allocation belongs to benchmark-tested, mechanical system strategies and model portfolios. The method only earns trust if it beats sensible passive baselines net of slippage, fees, turnover, and capacity limits. Underperformance should be visible too.
Connected to Glenn Landgren's advisory work.
dTaoAnalytics is the product expression of the same advisory stance: navigate complexity, separate signal from noise, and turn analytics, AI, and infrastructure into practical decision systems.