
OpenAI Updates GPT-3.5 Turbo with Fine-tuning Capability

OpenAI has announced an upgrade to its GPT-3.5 Turbo model that allows customers to fine-tune it for their own use cases. The enhancement gives developers the ability to customize the model for better performance on specific tasks and to run these customized models at scale. Early testing has shown that a fine-tuned version of GPT-3.5 Turbo can match or even surpass the base GPT-4 model on certain narrow tasks.

OpenAI emphasizes that the input and output data sent through the fine-tuning API remain the exclusive property of the customer, ensuring the security and privacy of that data: neither OpenAI nor any other organization can use it to train other models.

The addition of fine-tuning capability for GPT-3.5 Turbo has been hailed as the most significant product upgrade from OpenAI since the launch of the App Store, as it gives developers and businesses the ability to create unique, differentiated user experiences by customizing the model for specific use cases.

Fine-tuning offers several benefits, including improved control over the model’s output, consistent output formats, and customized tone. Developers can use fine-tuning to ensure that the model consistently responds in a given language or provides concise answers. It also allows companies with well-established brand voices to align the model’s output with their brand identity.
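
To make this concrete, here is a minimal sketch of what a single training example might look like in the chat format OpenAI documents for GPT-3.5 Turbo fine-tuning; the brand and wording below are purely hypothetical:

```python
# One fine-tuning example in the chat format used by GPT-3.5 Turbo.
# Each line of the training JSONL file holds one conversation like this:
# the system message pins the desired tone and format, and the assistant
# message shows the exact style the model should learn to reproduce.
example = {
    "messages": [
        {"role": "system",
         "content": "You are the support assistant for Acme Co. "  # hypothetical brand voice
                    "Reply in two sentences or fewer, in a friendly, informal tone."},
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant",
         "content": "No problem! Just hit 'Forgot password' on the sign-in page "
                    "and we'll email you a reset link."},
    ]
}
```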

In addition to enhanced performance, fine-tuned GPT-3.5 Turbo models maintain that performance even with shortened prompts, and they can handle 4k tokens, twice the capacity of previous fine-tuned models. Early testers have reduced prompt sizes by up to 90% by fine-tuning instructions into the model itself, resulting in faster API calls and reduced costs. When combined with other techniques such as prompt engineering, information retrieval, and function calling, fine-tuning becomes an even more powerful tool.

OpenAI also announced that support for fine-tuning with function calling and for gpt-3.5-turbo-16k will arrive later this fall.

The fine-tuning process involves four steps: preparing the data, uploading the file, creating a fine-tuning job, and using the fine-tuned model.
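
A rough sketch of those four steps with the official OpenAI Python library might look like the following; the file name, example data, and polling loop are illustrative, and the exact client calls depend on the library version installed (this sketch assumes the v1.x interface):

```python
import json
import time

from openai import OpenAI  # official OpenAI Python library, v1.x interface

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Data preparation: write chat-format examples to a JSONL file.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a concise support assistant."},
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Click 'Forgot password' on the sign-in page and follow the emailed link."},
    ]},
    # ... more examples
]
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# 2. File upload.
training_file = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")

# 3. Create the fine-tuning job and poll until it reaches a terminal state.
job = client.fine_tuning.jobs.create(training_file=training_file.id, model="gpt-3.5-turbo")
while job.status not in ("succeeded", "failed", "cancelled"):
    time.sleep(30)
    job = client.fine_tuning.jobs.retrieve(job.id)

# 4. Use the fine-tuned model like any other chat model.
if job.status == "succeeded":
    reply = client.chat.completions.create(
        model=job.fine_tuned_model,  # e.g. "ft:gpt-3.5-turbo-0613:my-org::abc123"
        messages=[{"role": "user", "content": "How do I reset my password?"}],
    )
    print(reply.choices[0].message.content)
```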

Security is of utmost importance during the fine-tuning process. To preserve the safety features of the default model, fine-tuning training data is passed through OpenAI's Moderation API and a GPT-4-powered moderation system to detect any training data that conflicts with OpenAI's safety standards.

However, the cost of fine-tuning GPT-3.5 Turbo is relatively high. The cost consists of initial training costs and usage costs:

– Training: $0.008 per 1K tokens
– Input usage: $0.012 per 1K tokens
– Output usage: $0.016 per 1K tokens

For example, fine-tuning a training file with 100,000 tokens and training it for three epochs would cost approximately $2.40. Some have raised concerns about the price, as it is four times higher than the base GPT-3.5 model.
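
The $2.40 figure follows directly from the training rate above; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope training cost at the published rate.
training_rate_per_1k = 0.008   # USD per 1K training tokens
tokens_in_file = 100_000
epochs = 3

training_cost = training_rate_per_1k * (tokens_in_file / 1000) * epochs
print(f"${training_cost:.2f}")  # -> $2.40 (usage costs are billed separately)
```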

(Source: OpenAI Blog: GPT-3.5 Turbo Fine-tuning and API Updates)
