OpenAI Unveils GPT-3.5 Turbo Fine-tuning and API Updates

OpenAI has recently announced the availability of fine-tuning for GPT-3.5 Turbo, with GPT-4 fine-tuning coming this fall. This exciting update allows developers to customize models to perform better for their specific use cases and run these custom models at scale.



Early tests have shown that a fine-tuned version of GPT-3.5 Turbo can match, or even outperform, base GPT-4-level capabilities on certain narrow tasks. Importantly, data sent in and out of the fine-tuning API is owned by the customer and is not used by OpenAI, or any other organization, to train other models.

Fine-tuning has a range of use cases. Since the release of GPT-3.5 Turbo, developers and businesses have been asking for the ability to customize the model to create unique and differentiated experiences for their users. With this launch, developers can now run supervised fine-tuning to make this model perform better for their use cases.

Some of the benefits of fine-tuning include improved steerability, reliable output formatting, and custom tone. For instance, developers can use fine-tuning to ensure that the model always responds in German when prompted to use that language. Fine-tuning also improves the model’s ability to consistently format responses—a crucial aspect for applications demanding a specific response format, such as code completion or composing API calls.
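To make the formatting benefit concrete, here is a minimal sketch of what training examples for reliable output formatting might look like, using the chat-message JSONL format the fine-tuning API expects. The support-bot scenario and the JSON reply schema are invented for illustration; only the system/user/assistant message structure reflects the actual format.

```python
import json

# Hypothetical training examples: every assistant reply uses one fixed
# JSON shape, so the fine-tuned model learns a consistent response format.
examples = [
    {
        "messages": [
            {"role": "system", "content": 'Reply only with JSON of the form {"intent": ..., "reply": ...}.'},
            {"role": "user", "content": "Where is my order?"},
            {"role": "assistant", "content": '{"intent": "order_status", "reply": "Let me check that for you."}'},
        ]
    },
    {
        "messages": [
            {"role": "system", "content": 'Reply only with JSON of the form {"intent": ..., "reply": ...}.'},
            {"role": "user", "content": "Cancel my subscription."},
            {"role": "assistant", "content": '{"intent": "cancel", "reply": "I can help with that."}'},
        ]
    },
]

# Sanity-check before training: each assistant reply must parse as JSON,
# otherwise the model is being taught an inconsistent format.
for ex in examples:
    json.loads(ex["messages"][-1]["content"])  # raises ValueError on drift

# One conversation per line, as a JSONL training file.
with open("training_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

Validating the training file up front is cheap insurance: a handful of badly formatted examples can undo the consistency the fine-tune is meant to provide.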


The fine-tuning process is straightforward. Developers prepare their data, upload files, create a fine-tuning job, and once the model finishes the fine-tuning process, it is available to be used in production right away.
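The steps above can be sketched end to end. The data-preparation step below is runnable; the upload and job-creation steps, which need an API key, are shown as comments using the openai Python package's method names as they looked at the time of this announcement (the filename, example content, and exact method names may differ in newer SDK versions).

```python
import json

# Step 1: prepare the data as a JSONL file of chat-format conversations.
# The German-greeting example here is invented for illustration.
example = {
    "messages": [
        {"role": "system", "content": "Antworte immer auf Deutsch."},
        {"role": "user", "content": "Hello!"},
        {"role": "assistant", "content": "Hallo! Wie kann ich dir helfen?"},
    ]
}
with open("mydata.jsonl", "w") as f:
    f.write(json.dumps(example) + "\n")

# Steps 2-4 require an API key; with the 0.x openai SDK they looked
# roughly like this:
#
#   import openai
#   uploaded = openai.File.create(file=open("mydata.jsonl", "rb"),
#                                 purpose="fine-tune")
#   job = openai.FineTuningJob.create(training_file=uploaded.id,
#                                     model="gpt-3.5-turbo")
#   # Once the job finishes, the custom model is usable like any other:
#   # openai.ChatCompletion.create(model=job.fine_tuned_model,
#   #                              messages=[...])
```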

OpenAI is also committed to ensuring the safety of fine-tuning. To preserve the default model’s safety features through the fine-tuning process, training data is passed through OpenAI’s Moderation API and a GPT-4-powered moderation system to detect unsafe training data that conflicts with OpenAI’s safety standards.

The pricing for fine-tuning is broken down into two buckets: an initial training cost ($0.008 per 1K tokens) and an ongoing usage cost ($0.012 per 1K input tokens and $0.016 per 1K output tokens). For example, a GPT-3.5 Turbo fine-tuning job with a training file of 100,000 tokens that is trained for 3 epochs would have an expected cost of $2.40.
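The worked example follows from multiplying the file size by the number of epochs. A quick sketch of that arithmetic, assuming the $0.008 per 1K training-token rate from OpenAI's launch pricing:

```python
# Rate assumed from OpenAI's pricing at the time of this announcement:
# $0.008 per 1K training tokens.
TRAINING_RATE_PER_1K = 0.008

def training_cost(tokens_in_file: int, epochs: int) -> float:
    """Total billed training tokens = file tokens x epochs."""
    return round(tokens_in_file * epochs * TRAINING_RATE_PER_1K / 1000, 2)

print(training_cost(100_000, 3))  # -> 2.4, matching the $2.40 example
```

Note that usage of the resulting fine-tuned model is billed separately, per input and output token, on top of this one-time training cost.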

In addition to these updates, OpenAI has also made babbage-002 and davinci-002 available as replacements for the original GPT-3 base models. These models can be fine-tuned with OpenAI’s new API endpoint, /v1/fine_tuning/jobs, which replaces the legacy /v1/fine-tunes endpoint.

In conclusion, the fine-tuning feature for GPT-3.5 Turbo is a significant step forward, offering developers more control and customization over their AI models. With the upcoming fine-tuning for gpt-3.5-turbo-16k, we can expect even more exciting developments in the near future.

