OpenAI has released GPT-4, a new AI model that understands both images and text.
GPT-4 is available to paying users via ChatGPT Plus, and developers can sign up for a waitlist to access the API.
GPT-4 accepts both image and text inputs, generates text, and performs at "human level" on various professional and academic benchmarks.
GPT-4 can caption and interpret complex images, for example identifying the ingredients visible in a photo of a refrigerator.
GPT-4's pricing is $0.03 per 1,000 "prompt" tokens and $0.06 per 1,000 "completion" tokens.
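Those per-token rates make cost estimation straightforward. A minimal sketch, using the rates above (the token counts in the example are hypothetical):

```python
# GPT-4 per-token rates from the article, converted to dollars per token.
PROMPT_RATE = 0.03 / 1000      # $0.03 per 1,000 prompt tokens
COMPLETION_RATE = 0.06 / 1000  # $0.06 per 1,000 completion tokens

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated charge in dollars for one API call."""
    return prompt_tokens * PROMPT_RATE + completion_tokens * COMPLETION_RATE

# Hypothetical example: a 1,500-token prompt with a 500-token completion.
cost = estimate_cost(1500, 500)
print(f"${cost:.3f}")  # $0.045 for the prompt + $0.030 for the completion = $0.075
```

Note that prompt (input) tokens are billed at half the rate of completion (output) tokens, so long generated responses dominate the bill.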
GPT-4 is being used by early adopters such as Stripe, Duolingo, Morgan Stanley, and Khan Academy.
GPT-4 was trained using publicly available data, including from public webpages, and data that OpenAI licensed.
OpenAI worked with Microsoft to develop a "supercomputer" from the ground up in the Azure cloud, which was used to train GPT-4.
GPT-4's API adds a capability called "system messages," which allows developers to prescribe the model's style and task by giving it specific directions.
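In the OpenAI chat message format, a system message is simply the first entry in the request's message list. A minimal sketch of assembling such a request body (the tutoring directions below are a hypothetical example, not from the article):

```python
def build_request(system_directions: str, user_prompt: str) -> dict:
    """Assemble a chat request body with a system message up front.

    The system message sets style and task; the user message carries
    the actual prompt. This builds the body only; it sends nothing.
    """
    return {
        "model": "gpt-4",
        "messages": [
            {"role": "system", "content": system_directions},
            {"role": "user", "content": user_prompt},
        ],
    }

# Hypothetical example: steer GPT-4 to act as a Socratic tutor.
body = build_request(
    "You are a Socratic tutor. Always respond with a guiding question.",
    "What is the sum of the interior angles of a triangle?",
)
```

Because the system message travels with every request, developers can change the model's persona or constraints without retraining anything.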
GPT-4 is not perfect: it can still "hallucinate" facts and make reasoning errors, though OpenAI says it has improved in these areas over previous models.