Fine-tuning, harmonized: Taoverse and Macrocosmos team up on SN37
Taoverse is tackling fine-tuning on Bittensor, partnering with Macrocosmos to launch SN37.
By Dr. Steffen Cruz and Will Squires, Macrocosmos
Fine-tuning plays a critical role in training machine learning models. Demanding of both time and compute, it often determines whether an AI application meets the needs of its users. Fine-tuning delivers the ‘final mile’ of AI model development, turning rambling outputs into useful intelligence.
That’s why Taoverse is launching a new fine-tuning subnet, Subnet 37, and partnering with Macrocosmos to do it. With the Taoverse team’s engineering skill, and Macrocosmos’ subnet design expertise and deep AI talent, our partnership has what it takes to set a new standard for fine-tuning on Bittensor - which could boost performance across the entire ecosystem.
The challenges to change
The technical requirements for operating a fine-tuning subnet are significant. It has to bring together the right pretrained base model, the right dataset, the right evaluation mechanism, and sufficient compute. Moreover, it has to be aligned with the Bittensor ecosystem. What works in a closed, centralized, in-house system may not be sufficient in the competitive, adversarial environment of an open-source, decentralized community.
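To make the evaluation mechanism concrete, here is a minimal sketch of how comparable Bittensor training subnets have scored competing models: miners publish fine-tuned checkpoints, and validators measure each one’s loss on held-out data, rewarding the best. The repo ids and evaluation texts below are hypothetical placeholders, not SN37’s actual incentive code.

```python
# Illustrative sketch only - not the SN37 implementation.
# MINER_MODELS and EVAL_TEXTS are hypothetical stand-ins.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MINER_MODELS = ["miner-a/model", "miner-b/model"]  # hypothetical HF repo ids
EVAL_TEXTS = ["a held-out evaluation passage ..."]  # hypothetical eval data

def eval_loss(repo_id: str) -> float:
    """Average causal-LM loss of one candidate model on the eval set."""
    tok = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(repo_id)
    model.eval()
    losses = []
    with torch.no_grad():
        for text in EVAL_TEXTS:
            batch = tok(text, return_tensors="pt")
            out = model(**batch, labels=batch["input_ids"])
            losses.append(out.loss.item())
    return sum(losses) / len(losses)

# Lowest loss wins - a winner-takes-most style of reward that
# comparable training subnets have used.
scores = {m: eval_loss(m) for m in MINER_MODELS}
best = min(scores, key=scores.get)
```

In an adversarial, open setting the real mechanism also has to guard against copying and gaming, which is part of what makes operating such a subnet hard.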
Subnet 6 and the AI research team at Nous Research have shown there’s demand for a fine-tuning subnet. That isn’t surprising; fine-tuning is a critical step in developing a full AI ecosystem on Bittensor. As we build more subnets to pursue the broader AI goals of the Bittensor community, fine-tuning models efficiently will be vital. With SN37, Taoverse will build on the work of the Nous team and the SN6 codebase, and harness synergies with Macrocosmos subnets like SN9, SN1 and SN13.
Combining Taoverse’s engineering expertise with Macrocosmos’ experience running complementary subnets and the support of the community, we’re ready to bring the benefits of fine-tuning to Bittensor. With this subnet, we’re finally able to bring the entire AI production line into Bittensor: SN9’s pre-trained models can power fine-tuning, supporting our ultimate goal of training models that can be used for inference across the ecosystem. We’re now ready to realize the potential of a supercharged ML system of systems - where data from SN13 and SN1 is used to pre-train models in SN9, which are fine-tuned in SN37 and then fed back into SN1. This virtuous cycle allows us to backpropagate on Bittensor. We will also partner with subnets outside the Macrocosmos ecosystem, including renewing SN1’s practice of using SN18’s synthetic data.
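As an illustration of that production line, here is a minimal sketch of the fine-tuning step, assuming a standard Hugging Face workflow. The checkpoint and dataset names are hypothetical stand-ins for SN9 and SN13 artifacts, not real repositories.

```python
# Illustrative sketch only: data in, pre-trained model in, fine-tuned model out.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

BASE = "sn9/pretrained-checkpoint"  # hypothetical: an SN9 base model
DATA = "sn13/scraped-corpus"        # hypothetical: an SN13 dataset

tok = AutoTokenizer.from_pretrained(BASE)
if tok.pad_token is None:
    tok.pad_token = tok.eos_token   # causal LMs often lack a pad token
model = AutoModelForCausalLM.from_pretrained(BASE)

ds = load_dataset(DATA, split="train")
ds = ds.map(lambda b: tok(b["text"], truncation=True, max_length=512),
            batched=True, remove_columns=ds.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sn37-finetune", num_train_epochs=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()  # the fine-tuned checkpoint could then serve inference via SN1
```

The point of the sketch is the shape of the loop, not the specifics: each stage consumes the previous subnet’s output and produces an artifact the next one can use.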
Yet we can only achieve this through an ongoing dialogue with our community. Intelligence is inherently complex; training it on distributed networks, even more so. It cannot be achieved in isolation, regardless of how technically accomplished or experienced your team is. We’re convinced that, with continued support, fine-tuning on Bittensor can achieve its potential - and improve performance throughout the whole ecosystem.