Unveiling OpenAI's Game-changing Updates to GPT-4 and GPT-3.5-Turbo

Jayson Gent

This year has been significant for OpenAI, marked by the launch of revolutionary language models GPT-3.5-Turbo and GPT-4. Developers across the globe are already harnessing the power of these models to create extraordinary applications.

OpenAI continues to break boundaries by introducing several fascinating enhancements to these models.

Chat Completions API’s Novel Function Calling Feature

OpenAI has announced a new function calling capability in the GPT-4-0613 and GPT-3.5-Turbo-0613 models. Developers can describe functions to the models, and the models respond with a JSON object containing the arguments needed to call them. This bridges GPT’s capabilities with external tools and APIs, and it makes extracting structured data from the models more reliable.

Developers can now build chatbots that integrate external tools and APIs by describing functions to the models. For example, a chatbot can answer queries by invoking external tools, convert natural language into API calls or database queries, and extract structured data from text. Two new API parameters, “functions” and “function_call,” let developers describe functions using JSON Schema and instruct a model to call a specific function.
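As a sketch of how these parameters fit together, the snippet below describes a weather tool with JSON Schema and executes the call the model returns. The function name, schema, and dispatch helper are illustrative examples of this pattern, not part of OpenAI's API, and the weather lookup is stubbed out:

```python
import json

# Hypothetical tool the model may decide to call; the name and signature
# are our own example, not something OpenAI provides.
def get_current_weather(location, unit="celsius"):
    # Stubbed: a real implementation would query an actual weather service.
    return {"location": location, "temperature": 22, "unit": unit}

# JSON Schema description of the tool, to be passed via the new
# `functions` parameter of the Chat Completions API.
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City name"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    }
]

def dispatch(message):
    """Execute the function named in an assistant message's `function_call`."""
    call = message["function_call"]
    args = json.loads(call["arguments"])  # arguments arrive as a JSON string
    registry = {"get_current_weather": get_current_weather}
    return registry[call["name"]](**args)

# An assistant message shaped like what the 0613 models return when they
# choose to call a function (simulated here instead of a live API call):
simulated = {
    "role": "assistant",
    "content": None,
    "function_call": {
        "name": "get_current_weather",
        "arguments": '{"location": "Boston", "unit": "celsius"}',
    },
}
result = dispatch(simulated)  # {'location': 'Boston', 'temperature': 22, 'unit': 'celsius'}
```

In a real application, the `functions` list would be sent along with the conversation to the Chat Completions API, and `simulated` would be replaced by the assistant message from the API's response.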

To help developers get started with function calling, OpenAI offers detailed developer documentation, including examples and suggestions for improving function calling performance with evals.

Enhanced and More Controllable GPT-4 and GPT-3.5-Turbo

OpenAI has upgraded GPT-4 and GPT-3.5-Turbo to be more steerable, giving developers finer-grained control over the models’ behavior and allowing more customized responses.

The refined versions open up an expanded realm of possibilities for developers to investigate and execute unique applications using GPT-4 and GPT-3.5-Turbo.

The Debut of GPT-3.5-Turbo’s 16k Context Version

In addition to the standard 4k context version, OpenAI has introduced a 16k context version of GPT-3.5-Turbo. The four-times-larger context window supports longer and more in-depth interactions, so developers can hold extended dialogues with the model and deliver a more nuanced, context-aware user experience.

The expanded context window enables applications that require a deep understanding of user inputs and dialogue history. The model can keep track of far more of the conversation, leading to more precise and relevant responses.
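One practical pattern is to fall back to the 16k variant only when a prompt would overflow the standard 4k window. Here is a minimal sketch; the roughly-4-characters-per-token ratio is a crude assumption (a tokenizer such as tiktoken gives exact counts), and the model names are the short aliases in use at the time:

```python
def pick_model(prompt: str, reserved_for_reply: int = 500) -> str:
    """Choose the 4k or 16k GPT-3.5-Turbo variant based on prompt size.

    Uses a rough ~4 characters/token heuristic (an assumption; use a real
    tokenizer for exact counts) and reserves room for the model's reply.
    """
    approx_tokens = len(prompt) // 4 + reserved_for_reply
    return "gpt-3.5-turbo" if approx_tokens <= 4096 else "gpt-3.5-turbo-16k"

# Short prompts stay on the cheaper 4k model; very long ones get 16k.
short_choice = pick_model("Summarize this paragraph.")
long_choice = pick_model("x" * 40_000)  # ~10,000 tokens of input
```

This keeps costs down on short requests while still handling long documents or dialogue histories without truncation.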

Considerable Reduction in Costs

OpenAI has announced a 75% cost reduction on its state-of-the-art embeddings model, along with a 25% reduction on input token costs for GPT-3.5-Turbo, making it more economical for developers to harness the model’s robust capabilities.
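To put the reductions in perspective, here is a back-of-the-envelope cost calculation. The per-1,000-token prices below are assumptions reflecting the rates announced at the time; always check OpenAI's pricing page for current numbers:

```python
# Assumed prices per 1,000 tokens after the announced cuts (assumptions;
# verify against OpenAI's pricing page):
EMBEDDING_PRICE = 0.0001     # embeddings model, after the 75% reduction
TURBO_INPUT_PRICE = 0.0015   # GPT-3.5-Turbo input tokens, after the 25% cut

def embedding_cost(tokens: int) -> float:
    """Dollar cost of embedding the given number of tokens."""
    return tokens / 1000 * EMBEDDING_PRICE

def turbo_input_cost(tokens: int) -> float:
    """Dollar cost of sending the given number of input tokens to GPT-3.5-Turbo."""
    return tokens / 1000 * TURBO_INPUT_PRICE

# At these assumed rates, embedding a million tokens costs about ten cents.
million_token_embedding = embedding_cost(1_000_000)
```

Arithmetic like this makes it easy to budget large-scale workloads, such as embedding an entire document corpus, before committing to them.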

By cutting the costs associated with these models, OpenAI aims to make them more accessible and cost-effective for developers. This cost reduction promotes innovation and enables developers to create influential applications without compromising on quality.

Deprecation Timeline for the GPT-3.5-Turbo-0301 and GPT-4-0314 Models

OpenAI has shared an official deprecation timeline for the GPT-3.5-Turbo-0301 and GPT-4-0314 models. Developers are encouraged to upgrade to the newer GPT-3.5-Turbo-0613 and GPT-4-0613 versions to benefit from the latest features and enhancements.

Remember that OpenAI maintains its commitment to data privacy and security. As with the previous models, customers own all outputs generated from their requests, and their API data will not be used for training purposes.