The age of massive AI improvements in software such as ChatGPT may be almost over. OpenAI CEO Sam Altman has suggested that the era of sweeping gains from ever-larger AI training runs is already reaching its limit.
Speaking to an audience at the Massachusetts Institute of Technology, Altman explained that AI development is approaching a wall. While scaling up models has delivered huge quality boosts to ChatGPT, Stable Diffusion and others, those gains are nearing their end.
OpenAI recently launched GPT-4, the next generation of the technology behind ChatGPT. While companies such as Microsoft have praised the technology, Altman believes the strategy of simply building ever-bigger models has run its course.
“I think we're at the end of the era where it's going to be these, like, giant, giant models,” Altman said, as reported by Wired.
Despite this, Altman believes there is still room for improvement. While massive leaps from sheer scale may be a thing of the past, current models can still be optimised.
One of the main obstacles to further large-scale improvements is not only the limit of available processing power, but also the limits of data storage. Training new AI models is likely to remain the preserve of massive conglomerates due to power and storage constraints, but even the likes of Microsoft and Google are reaching those limits.
Some hopes for future AI development rest on emerging storage technologies such as DNA storage drives. However, while such technology may eventually support massive AI products, it is not yet available for commercial or private use.
Nevertheless, AI companies still believe there is plenty of room for current-generation AI products to evolve. Through clever optimisations, new weighting techniques and new model architectures, there are still ways to improve AI technology.
OpenAI is still working on more advanced versions of its GPT models. A future GPT-5 is expected in the coming years, bringing further improvements to the artificial intelligence platform.