128 subnets: Bottleneck or backbone?
Bittensor has capped its number of subnets - what does this mean for the ecosystem?
This week, it finally happened: the long-rumoured subnet limit was introduced, capping the total number of subnets at 128 (at least for now).
Limiting the number of subnets on Bittensor has sparked ongoing debate - not just about protocol mechanics, but about the broader philosophy of what Bittensor should be. On the surface, hard-capping subnet registrations feels counterintuitive in a decentralized, permissionless system. So why artificially constrain growth in a network designed to scale knowledge and innovation?
Why not allow unlimited subnets?
A fixed cap introduces scarcity. By definition, some projects will be excluded. And as interest in Bittensor accelerates, so too does the number of high-quality projects vying for a place in the network. Is there something intrinsically better about a project that happened to join early? Maybe not - Google was notoriously late to the AI game but now has one of the strongest AI assistants available.
Nothing truly ties a subnet’s long-term value creation to the timing of its registration. Older projects got in earlier and likely have more experience with the protocol, but that doesn’t mean new teams lack talent. Is there any number, greater or less than 128, that is magic - the definitively best number of teams to have working on Bittensor? Will a hard cap at 128 keep the quality of Bittensor higher?
Limiting registration alone doesn’t filter out poor-quality or abandoned projects - it merely blocks newcomers. If the goal was reducing network ‘noise’, then the cap doesn’t address the existing clutter. Instead, it places a lid on evolution.
Why the limit exists
Despite all this, there are solid, structural reasons why a limit like this makes sense - especially in the absence of a deregistration mechanism.
In earlier versions of Bittensor, and in most decentralized systems, churn - where less useful or inactive entities exit the system - is not only normal, but essential. Without a way to remove subnets that are abandoned, broken, or stagnant, the index becomes bloated. Resources are wasted, participants get confused, and valuations become more difficult. A deregistration process would allow the system to recycle capacity for newer, more promising work.
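To make “recycling capacity” concrete, here is a minimal sketch in Python of what one possible pruning rule could look like. Everything in it is hypothetical: Bittensor has no deregistration mechanism today, and the names and thresholds (Subnet, trailing_emission_share, MIN_EMISSION_SHARE, MAX_IDLE_BLOCKS) are invented for illustration, not drawn from the protocol.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Subnet:
    netuid: int
    trailing_emission_share: float  # share of network emissions over a trailing window
    blocks_since_last_weights: int  # blocks since a validator last set weights here

MAX_SUBNETS = 128
MIN_EMISSION_SHARE = 0.001  # hypothetical threshold: below this, a subnet is "marginal"
MAX_IDLE_BLOCKS = 50_000    # hypothetical threshold: above this, a subnet is "idle"

def prune_candidate(subnets: list[Subnet]) -> Optional[Subnet]:
    """Pick the weakest subnet to deregister, or None if nothing is clearly dead.

    A subnet only qualifies if it is both economically marginal and
    operationally idle; among qualifiers, the lowest earner is pruned.
    """
    candidates = [
        s for s in subnets
        if s.trailing_emission_share < MIN_EMISSION_SHARE
        and s.blocks_since_last_weights > MAX_IDLE_BLOCKS
    ]
    if not candidates:
        return None
    return min(candidates, key=lambda s: s.trailing_emission_share)

def try_register(subnets: list[Subnet], new: Subnet) -> bool:
    """Register a new subnet, recycling a dead slot when the cap is hit."""
    if len(subnets) >= MAX_SUBNETS:
        victim = prune_candidate(subnets)
        if victim is None:
            return False  # registry full and nothing prunable: newcomer is blocked
        subnets.remove(victim)
    subnets.append(new)
    return True
```

The design choice worth noting is the two-part test: pruning on low emissions alone would punish young subnets that are still bootstrapping, so this sketch also requires a long stretch of operational silence before a slot is reclaimed.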
The 128 limit is likely intended only as a stopgap - a way to artificially slow growth while a more elegant pruning mechanism is developed.
Practical and human constraints
There’s also a human limit to how many subnets the average participant can track, understand, and meaningfully interact with. Think of the S&P 500: you probably don’t know every company on the list, but you trust the curation process. A subnet list capped at 128 offers a similar trust layer: a semi-curated surface area of projects that can be researched, discussed, and ranked with some degree of depth.
More subnets mean more network extrinsics: more alpha swaps, more weight-setting transactions, more miner registrations. All of this adds load to the chain. Without scaling improvements, this can lead to increased latency, bloat, and even network instability. And with each additional subnet, the operational cost of maintaining the chain grows - both computationally and economically.
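A rough back-of-envelope calculation shows why this matters: if each subnet generates a roughly constant baseline of activity, total extrinsic volume grows linearly with the subnet count. The per-subnet rates in this sketch are invented for illustration, not measured from the chain.

```python
# Illustrative only: every per-subnet rate here is an assumption,
# not a measured Bittensor figure.
EXTRINSICS_PER_SUBNET_PER_DAY = {
    "weight_sets": 3_600,        # validators periodically refreshing weights
    "alpha_swaps": 5_000,        # stake moving in and out of the subnet pool
    "miner_registrations": 200,  # churn in the miner set
}

def daily_extrinsics(n_subnets: int) -> int:
    """Total daily extrinsics, assuming per-subnet activity stays constant."""
    return n_subnets * sum(EXTRINSICS_PER_SUBNET_PER_DAY.values())

for n in (64, 128, 256):
    print(f"{n:>3} subnets -> ~{daily_extrinsics(n):,} extrinsics/day")
# 64 -> ~563,200; 128 -> ~1,126,400; 256 -> ~2,252,800
```

Even under these toy numbers, doubling the subnet count doubles the baseline load - the linear pressure the cap temporarily relieves.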
So what’s the real issue?
The core problem isn’t the existence of a limit - it’s the lack of a way to cycle projects through it. If dead or low-value subnets cannot be deregistered, then the 128 slots will stagnate. Those that perform well will continue to do so, and those that don’t will take up space and attention.
The off-chain sale mechanism for subnets is also inefficient. Sales are transacted behind closed doors, which leads to asymmetric information and makes it harder to determine what any given subnet is actually worth. Promising new projects can end up sidelined, not because they lack merit but because the system is insufficiently flexible.
If noise is the problem, we need more than a cap; we need a cleaner. Deregistration isn’t an optional extra - it’s the only mechanism that transforms the subnet registry into an evolving ecosystem.
Until then, the 128-subnet cap serves more as a patch than a principle. But it’s a step further on the path to solving the underlying problem.
Kalei Brady is a Data Scientist and Lead Engineer of Subnet 1 and Subnet 37