Dynamic TAO is a significant enhancement to Bittensor’s tokenomics and governance, introduced on February 13, 2025, after extensive research and development. This upgrade decentralizes the network’s economic model by assigning each subnet its own unique token, known as an alpha token, which operates alongside the primary TAO token. This structure allows subnets to manage their own economies, fostering specialization and autonomy within the Bittensor ecosystem.
In the Dynamic TAO framework, each subnet maintains a liquidity reserve comprising TAO and its respective alpha tokens. Users can stake TAO into a subnet’s reserve to receive alpha tokens, effectively “voting” for the subnet’s value and performance. The exchange rate between TAO and alpha tokens is determined by the ratio of TAO to alpha in the reserve, reflecting the market’s assessment of the subnet’s utility and demand.
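As a rough illustration, here is a minimal Python sketch of how the alpha price follows the reserve ratio and how staking TAO draws alpha out of the pool. The reserve sizes are made up, and the simple constant-product swap rule stands in for the actual on-chain logic:

```python
# Illustrative only: reserve sizes and the swap rule are simplified assumptions,
# not the exact on-chain implementation.

def alpha_price_in_tao(tao_reserve: float, alpha_reserve: float) -> float:
    """Spot price of one alpha token, in TAO, read off the reserve ratio."""
    return tao_reserve / alpha_reserve

def stake_tao_for_alpha(tao_reserve: float, alpha_reserve: float, tao_in: float):
    """Swap TAO into the pool under a constant-product (x * y = k) rule."""
    k = tao_reserve * alpha_reserve
    new_tao_reserve = tao_reserve + tao_in
    new_alpha_reserve = k / new_tao_reserve
    alpha_out = alpha_reserve - new_alpha_reserve
    return alpha_out, new_tao_reserve, new_alpha_reserve

# Example: a subnet pool holding 10,000 TAO and 40,000 alpha
price = alpha_price_in_tao(10_000, 40_000)           # 0.25 TAO per alpha
alpha_out, _, _ = stake_tao_for_alpha(10_000, 40_000, 100)
print(price, round(alpha_out, 2))                    # ~396 alpha for 100 TAO
```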
The emission mechanism under Dynamic TAO is designed to distribute rewards in a balanced manner. TAO tokens are issued to subnets based on the relative market value of their alpha tokens. Subnets with higher demand and liquidity for their alpha tokens receive a larger share of TAO emissions, incentivizing the provision of valuable services and the attraction of users. Additionally, subnets emit their own alpha tokens at a rate that follows a halving schedule similar to TAO, ensuring controlled and predictable token supply growth.
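The proportional-emission idea can be sketched as follows. The pool numbers and the exact splitting rule here are assumptions for illustration, not the chain's actual emission formula:

```python
# Hedged sketch: splits a block's TAO emission across subnets in proportion to
# each subnet's alpha price (tao_reserve / alpha_reserve). The real emission
# rule has more moving parts; treat this as an illustration, not the formula.

def emission_shares(pools: dict[str, tuple[float, float]], tao_per_block: float) -> dict[str, float]:
    prices = {name: tao / alpha for name, (tao, alpha) in pools.items()}
    total = sum(prices.values())
    return {name: tao_per_block * p / total for name, p in prices.items()}

pools = {
    "subnet_a": (10_000, 40_000),   # alpha price 0.25 TAO
    "subnet_b": (30_000, 40_000),   # alpha price 0.75 TAO
}
print(emission_shares(pools, 1.0))  # {'subnet_a': 0.25, 'subnet_b': 0.75}
```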
The distribution of alpha tokens within a subnet is structured to reward various participants. Each subnet can issue up to one alpha token per block, which is allocated as follows: 18% to the subnet owner, 41% to validators, and 41% to miners.
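In per-block terms, and assuming the full one-alpha emission, the split works out as:

```python
# The 18% / 41% / 41% split applied to one alpha token emitted in a block.
ALPHA_PER_BLOCK = 1.0
owner_share     = 0.18 * ALPHA_PER_BLOCK   # 0.18 alpha to the subnet owner
validator_share = 0.41 * ALPHA_PER_BLOCK   # 0.41 alpha to validators
miner_share     = 0.41 * ALPHA_PER_BLOCK   # 0.41 alpha to miners
```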
To prevent potential manipulation of alpha token prices, Dynamic TAO employs a constant product automated market maker (AMM) model. In this system, trades that are large relative to the pool's liquidity incur steep slippage costs, making price manipulation economically infeasible. For example, purchasing a significant portion of the alpha tokens in a pool would substantially increase the token's price, deterring attempts to artificially inflate or deflate token values.
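The slippage effect can be seen with the same constant-product rule and made-up reserves: a small buy trades near the spot price, while a buy comparable to the pool's depth pays roughly double it.

```python
# Illustration of slippage under a constant-product pool (x * y = k), using
# made-up reserves. A buy that is large relative to the pool moves the price
# sharply, which is what makes manipulation expensive.

def effective_buy_price(tao_reserve: float, alpha_reserve: float, tao_in: float) -> float:
    """Average price (TAO per alpha) paid for a purchase of size tao_in."""
    k = tao_reserve * alpha_reserve
    alpha_out = alpha_reserve - k / (tao_reserve + tao_in)
    return tao_in / alpha_out

spot = 10_000 / 40_000                                # 0.25 TAO per alpha
print(effective_buy_price(10_000, 40_000, 100))       # ~0.2525 — close to spot
print(effective_buy_price(10_000, 40_000, 10_000))    # 0.5     — double the spot price
```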
Bittensor utilizes a Random Order Finalization mechanism to enhance fairness in transaction processing. This approach randomizes the order of transactions within each block, preventing actors from gaining an unfair advantage through tactics like front-running. As a result, all participants have an equal opportunity, and the network maintains a level playing field for transactions.
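Conceptually, the effect resembles shuffling a block's transactions with a seed fixed at finalization time. The sketch below illustrates the idea only; it is not the chain's actual ordering code or source of randomness.

```python
import random

# Conceptual illustration only: randomizing transaction order within a block
# so that submission order confers no advantage.

def randomize_block_order(txs: list[str], block_seed: int) -> list[str]:
    rng = random.Random(block_seed)   # deterministic given the block's seed
    shuffled = txs[:]
    rng.shuffle(shuffled)
    return shuffled

print(randomize_block_order(["tx_a", "tx_b", "tx_c"], block_seed=42))
```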
Subnet Zero, also known as the root subnet, operates uniquely within the Dynamic TAO structure. It does not have its own alpha token and does not host mining or validation activities. However, validators can register on Subnet Zero, and TAO holders can stake to these validators, allowing for subnet-agnostic staking. This design provides flexibility for participants to support the network without being tied to a specific subnet, accommodating diverse strategies and preferences within the Bittensor ecosystem.
Validators are responsible for reviewing the work of AI models and making sure the network rewards useful contributions. They act as quality controllers, checking the accuracy and value of AI-generated responses. Their scores determine how many TAO tokens each AI contributor earns.
To become a validator, a participant must meet a few conditions. First, they need to register and obtain a unique identifier (UID) on the subnet they want to validate. Then, they must stake at least 1,000 TAO tokens and rank among the top 64 participants by stake in that subnet. These conditions help ensure that only committed and capable validators take part.
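A hedged sketch of that eligibility check, where the constant names and the by-stake ranking are illustrative assumptions rather than the chain's own parameters:

```python
# Illustrative permit check: a hotkey validates only if it stakes at least the
# minimum and sits within the top 64 by stake on that subnet.

MIN_STAKE_TAO = 1_000
MAX_VALIDATORS = 64

def has_validator_permit(my_stake: float, all_stakes: list[float]) -> bool:
    """all_stakes is every registered participant's stake on the subnet, including mine."""
    if my_stake < MIN_STAKE_TAO:
        return False
    cutoff = sorted(all_stakes, reverse=True)[:MAX_VALIDATORS][-1]
    return my_stake >= cutoff

print(has_validator_permit(1_500, [5_000, 1_500, 900, 250]))  # True in this toy subnet
```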
Validators can work in multiple subnets if they meet the requirements for each. They also have the option to attract other users to stake TAO tokens on their behalf. This means validators who perform well and build trust in the community can receive more support from other network participants.
The system encourages validators to act honestly and efficiently. If they consistently provide useful assessments, they earn more TAO rewards. If they try to manipulate the system or provide poor evaluations, they risk losing rewards.
Bittensor uses a commit-reveal system to prevent validators from copying each other’s scores instead of doing independent assessments. Since validator scores determine how many TAO tokens AI contributors earn, some validators might try to take shortcuts by waiting to see what others submit before submitting their own evaluations. This could lead to biased or unfair rewards.
To stop this, the network requires validators to first submit their scores in an encrypted form, which hides their actual evaluations. Then, after a short waiting period, they must reveal the real scores they assigned. Because of this time delay, validators cannot simply copy someone else’s scores. This ensures that all evaluations are independent and fair.
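The commit-reveal pattern itself is standard cryptographic practice: publish a hash of the scores first, then reveal the scores and the salt later. A minimal sketch, with illustrative field names and hashing choices rather than the protocol's actual extrinsics:

```python
import hashlib
import json
import secrets

def commit(scores: dict[str, float]) -> tuple[str, str]:
    """Return a hash commitment to the scores plus the salt needed to reveal them."""
    salt = secrets.token_hex(16)                        # keeps the commitment unguessable
    payload = json.dumps(scores, sort_keys=True) + salt
    return hashlib.sha256(payload.encode()).hexdigest(), salt

def verify_reveal(commitment: str, scores: dict[str, float], salt: str) -> bool:
    payload = json.dumps(scores, sort_keys=True) + salt
    return hashlib.sha256(payload.encode()).hexdigest() == commitment

commitment, salt = commit({"miner_1": 0.7, "miner_2": 0.3})
# ... later, once the reveal window opens, the validator publishes scores + salt:
assert verify_reveal(commitment, {"miner_1": 0.7, "miner_2": 0.3}, salt)
```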
This system needs to be carefully timed. If the waiting period is too short, dishonest validators could still find ways to cheat. If it is too long, it could slow down the network. Bittensor allows subnet owners to set the appropriate delay so that the system remains secure while running smoothly.
The commit-reveal process is also linked to the way new participants join the network. New AI models and validators get a grace period before they are removed for poor performance. The waiting time for commit-reveal should always be shorter than this grace period, so participants are not unfairly removed before their scores are revealed.
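That constraint amounts to a simple configuration check. The hyperparameter names and block counts below are illustrative stand-ins:

```python
# Sketch of the timing rule described above, measured in blocks: the reveal
# delay should fit inside the grace (immunity) period new participants receive.

def check_timing(commit_reveal_period: int, immunity_period: int) -> None:
    if commit_reveal_period >= immunity_period:
        raise ValueError(
            "Reveal delay must be shorter than the immunity period; otherwise "
            "new participants could be removed before their scores are revealed."
        )

check_timing(commit_reveal_period=360, immunity_period=7200)  # example values only
```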
Consensus-based weights in Bittensor determine how validators are rewarded for their evaluations of AI models. Instead of relying on a fixed formula, this system continuously adjusts in response to the collective judgment of all validators. Validators assign scores, or “weights,” to AI models based on their performance. The more their assessments align with the broader network consensus, the more rewards they receive. If a validator consistently assigns ratings that differ significantly from the consensus, their influence and rewards decrease. This discourages random or biased evaluations and ensures that the best-performing AI models are accurately identified and properly rewarded.
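A simplified sketch of the idea, assuming a stake-weighted average stands in for the real consensus calculation and a toy deviation measure stands in for the reward adjustment:

```python
# Hedged sketch: combine validator scores for one miner into a stake-weighted
# consensus, then measure how closely each validator tracks it. This is a
# simplification of the actual consensus algorithm, not its implementation.

def stake_weighted_consensus(scores: dict[str, float], stakes: dict[str, float]) -> float:
    total_stake = sum(stakes.values())
    return sum(scores[v] * stakes[v] for v in scores) / total_stake

def alignment(validator: str, scores: dict[str, float], stakes: dict[str, float]) -> float:
    """1.0 means a perfect match with consensus; lower means larger deviation."""
    consensus = stake_weighted_consensus(scores, stakes)
    return max(0.0, 1.0 - abs(scores[validator] - consensus))

scores = {"val_a": 0.80, "val_b": 0.75, "val_c": 0.10}   # val_c deviates sharply
stakes = {"val_a": 5_000, "val_b": 4_000, "val_c": 1_000}
print({v: round(alignment(v, scores, stakes), 2) for v in scores})
# {'val_a': 0.91, 'val_b': 0.96, 'val_c': 0.39}
```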
To understand this system, imagine a panel of judges scoring a competition. If most judges give a performance a high score, but one judge gives it a much lower score without reason, their opinion is considered less reliable. Over time, this unreliable judge will have less influence over final scores. The same principle applies to validators in Bittensor. If their assessments consistently match what the majority of experienced validators determine to be fair, they maintain strong influence and earn higher rewards. If they attempt to manipulate results or consistently make poor judgments, their weight in the network decreases.
Previously, validator rewards were based on a formula that combined past and present performance, meaning that once a validator established a good track record, they could continue benefiting from it even if they stopped making high-quality evaluations. The new consensus-based approach replaces this with a real-time adjustment mechanism. A validator’s effectiveness is no longer based on historical data alone but on how accurately they evaluate AI models in the present. This creates a more dynamic system where validators must consistently perform well to maintain their rewards.
One issue that could arise in a system like this is validators simply copying each other’s evaluations instead of doing their own analysis. To prevent this, Bittensor uses a commit-reveal process. Validators must first submit encrypted evaluations that are hidden from others. After a set period, these evaluations are revealed. This prevents validators from waiting to see what others say and copying their responses, forcing them to make independent assessments. This mechanism keeps the evaluation process fair and ensures that rewards go to those who put in real effort rather than those who attempt to game the system.
Another important aspect of this system is how validator influence grows over time. Validators “bond” to AI models they believe are strong, and this bond increases as they continue to make correct evaluations. The strength of this bond determines how much a validator earns from the success of the AI model they support. If a validator consistently identifies high-quality AI models early, their rewards will compound over time as those models gain recognition. However, if they frequently misjudge quality, their influence weakens, and their rewards diminish.
Subnet owners control how strict or flexible this system is through a feature toggle called liquid_alpha_enabled and its associated alpha parameters. When liquid alpha is enabled, the rate at which a validator's bonds adjust varies with network consensus: at a higher adjustment rate, validators who frequently match the consensus gain influence faster, and those who deviate lose influence more quickly; at a lower rate, the system is more forgiving, allowing validators more room to develop their judgment. This gives subnet owners the ability to fine-tune the fairness and responsiveness of the system according to the needs of their specific subnet.
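A minimal sketch of the bonding behavior described in the last two paragraphs, modeling a validator's bond to a model as an exponential moving average of the weights it assigns. The fixed alpha value here is a stand-in for the adaptive rate that liquid alpha applies:

```python
# Hedged sketch: a larger alpha makes the bond track the validator's latest
# weight faster; a smaller alpha is more forgiving of occasional misses.
# Under liquid alpha the rate varies within a configured range; this sketch
# keeps it constant for clarity.

def update_bond(previous_bond: float, current_weight: float, alpha: float) -> float:
    return (1 - alpha) * previous_bond + alpha * current_weight

bond = 0.0
for epoch_weight in [0.9, 0.9, 0.9, 0.2]:   # three strong calls, then a miss
    bond = update_bond(bond, epoch_weight, alpha=0.3)
    print(round(bond, 3))                    # 0.27, 0.459, 0.591, 0.474
```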
This approach makes sure that validators who put in effort and make thoughtful evaluations are rewarded appropriately. Instead of a fixed formula that can be exploited or outdated, this real-time system ensures rewards are based on ongoing accuracy and engagement. Validators who try to manipulate outcomes or take shortcuts lose influence, while those who provide useful assessments earn more over time.