The AI chatbot space is getting increasingly crowded as new challengers appear with their ChatGPT competitors. Google Bard and several Chinese firms have had their chance to challenge the king, and now it’s the turn of Stability AI, the team behind Stable Diffusion, a leading AI image generator.
Known as StableLM, the model is nowhere near as comprehensive as ChatGPT, featuring just 3 billion to 7 billion parameters compared to the 175 billion in OpenAI’s model. However, Stability AI states that its model will “demonstrate how small and efficient models can deliver high performance with appropriate training.” This could make it the go-to model for lower-end machines and systems, helping to democratize AI.
StableLM is certainly no slouch, but it does lack the reinforcement learning from human feedback that fine-tunes ChatGPT. Because it is a pre-trained model, improvements will have to be made to the model directly by humans. But Stability AI has larger offerings in the works, including plans to one day release a 175 billion parameter model to match ChatGPT itself.
An alpha version of the model is currently available to try on Hugging Face, but it is an early demo and may exhibit performance issues and mixed results.
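For developers who want to experiment locally rather than through the demo, the alpha checkpoints can be loaded with the standard Hugging Face transformers API. The sketch below assumes the `stabilityai/stablelm-tuned-alpha-7b` model ID and the `<|SYSTEM|>`/`<|USER|>`/`<|ASSISTANT|>` prompt format described on the model card at the time of release; both may change as Stability AI updates the project.

```python
# Minimal sketch of querying a StableLM alpha checkpoint via the
# Hugging Face transformers library. Model ID and special prompt
# tokens are assumptions taken from the release-era model card.

def build_prompt(user_message: str) -> str:
    """Wrap a user message in the <|SYSTEM|>/<|USER|>/<|ASSISTANT|>
    chat format the tuned alpha checkpoints expect."""
    system = ("<|SYSTEM|>StableLM is a helpful and harmless "
              "open-source AI language model.")
    return f"{system}<|USER|>{user_message}<|ASSISTANT|>"


def generate(user_message: str,
             model_id: str = "stabilityai/stablelm-tuned-alpha-7b") -> str:
    """Download the weights (several GB on first call) and sample a reply."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    inputs = tokenizer(build_prompt(user_message), return_tensors="pt")
    output = model.generate(
        **inputs,
        max_new_tokens=64,
        do_sample=True,
        temperature=0.7,
    )
    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

At 3B to 7B parameters, the smaller checkpoints can run on a single consumer GPU, which is precisely the low-end accessibility the company is pitching.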
The benefits of going open-source
As an open-source release, StableLM is Stability AI’s attempt to democratize chatbot-style AI. Developers who may not have been able to afford access to the ChatGPT API or ChatGPT plugins can use StableLM to add AI to their creations.
Even if it serves only as a proof of concept or a trial run before developers graduate to ChatGPT, this looks like a vital step that will nourish the creativity of bedroom coders and indie developers.
The videogame modding scene has shown that some of the best ideas can come from outside of traditional avenues and, hopefully, StableLM will find a similar sense of community. With this open-source structure and planned improvements to the model already on the way, StableLM could one day be a true ChatGPT killer.