The start-up OpenAI has presented the next version of the technology behind its popular chatbot ChatGPT. Among other things, GPT-4 is said to deliver better results than previous versions, OpenAI announced on Wednesday night. Known problems with the technology, such as its tendency to simply invent alleged facts, persist, but are expected to occur less frequently.
ChatGPT and the Dall-E software, which generates images from text prompts, are based on the previous GPT generation. Paying OpenAI customers get access to GPT-4 for their own services; there is a waiting list.
However, some customers are already using the technology. The language-learning app Duolingo, for example, uses GPT-4 for dialogue training, which is available in a new, more expensive subscription tier. Microsoft confirmed that its Bing search engine has been running on GPT-4 for a few weeks. Microsoft has invested billions of dollars in OpenAI, and that money secured OpenAI, among other things, access to the enormous computing power required.
Risk of "hallucinating" facts
To build the GPT technologies, the software was trained on enormous amounts of text and images. On this basis, it can formulate sentences that are hard to distinguish from those written by a human. The program estimates which words are likely to come next in a sentence. One risk of this basic principle is that the software "hallucinates facts", as OpenAI calls it.
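The next-word principle can be illustrated with a toy sketch. This is a drastic simplification for illustration only, not OpenAI's actual method: GPT-4 uses a large neural network, whereas this example merely counts which word most often follows another in a tiny sample text.

```python
from collections import Counter, defaultdict

# Tiny sample corpus (hypothetical, for illustration).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each preceding word (bigram counts).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in the corpus."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this sample
```

The sketch also shows why such systems can "hallucinate": the model outputs whatever continuation is statistically plausible, with no notion of whether the resulting sentence is true.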
That can also happen with GPT-4, albeit less frequently than before, according to the blog post. The new version can also make simple logic errors and reproduce biases. GPT-4 only knows about events from before September 2021, and it does not learn from experience, OpenAI emphasized.
GPT-4 is also said to be good at analyzing images and describing them in words, but OpenAI is not initially making this function available to customers.