OpenAI begins allowing customers to fine-tune GPT-3

Posted on: Dec 15, 2021

OpenAI, the San Francisco, California-based lab developing AI technologies including large language models, today announced the ability to create custom versions of GPT-3, a model that can generate human-like text and code. Developers can use fine-tuning to create GPT-3 models tailored to the specific content in their apps and services, which the company says leads to higher-quality outputs across tasks and workloads.

“According to Gartner, 80% of technology products and services will be built by those who are not technology professionals by 2024. This trend is fueled by the accelerated AI adoption in the business community, which sometimes requires specifically tailored AI workloads,” an OpenAI spokesperson wrote in an email. “With a single line of code, customized GPT-3 [models] allow developers and business teams to run and train powerful AI models based on specific datasets, eliminating the need to create and train their own AI systems from scratch, which can be quite costly and time-intensive.”

Customized GPT-3

Built by OpenAI, GPT-3 and its fine-tuned derivatives, like Codex, can be customized to handle applications that require a deep understanding of language, from converting natural language into software code to summarizing large amounts of text and generating answers to questions. GPT-3 has been publicly available since 2020 through the OpenAI API; as of March, OpenAI said that GPT-3 was being used in more than 300 different apps by “tens of thousands” of developers and producing 4.5 billion words per day.
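For context on the baseline these fine-tunes build on, here is a minimal sketch of a vanilla GPT-3 call through the openai Python package as it existed in late 2021 (the pre-1.0 SDK); the prompt and sampling parameters are purely illustrative.

```python
import openai

openai.api_key = "sk-..."  # set your API key here

# Call the base "davinci" GPT-3 engine with an illustrative prompt.
response = openai.Completion.create(
    engine="davinci",
    prompt="Summarize the following for a customer email:\n\n...",
    max_tokens=64,
    temperature=0.2,
)
print(response["choices"][0]["text"])
```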

The new GPT-3 fine-tuning capability enables customers to train GPT-3 to recognize a specific pattern for workloads like content generation, classification, and text summarization within the confines of a particular domain. For example, one customer, Keeper Tax, is using fine-tuned GPT-3 to interpret data from bank statements to help find potentially tax-deductible expenses. The company continues to fine-tune GPT-3 with new data every week based on how its product has been performing in the real world, focusing on examples where the model fell below a certain performance threshold. Keeper Tax claims that the fine-tuning process yields about a 1% improvement week-over-week, which might not sound like much but compounds over time.
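The fine-tuning endpoint of that era consumed a JSONL file of prompt/completion pairs. Below is a minimal sketch of building such a file, loosely in the spirit of the Keeper Tax use case; the bank-statement lines, labels, and file name are invented for illustration, and the separator conventions follow OpenAI's fine-tuning documentation of the time.

```python
import json

# Hypothetical bank-statement lines labeled as deductible or not,
# invented here purely to illustrate the prompt/completion format.
examples = [
    {"prompt": "AWS CLOUD SERVICES $42.17\n\n###\n\n",
     "completion": " deductible"},
    {"prompt": "NETFLIX SUBSCRIPTION $15.99\n\n###\n\n",
     "completion": " not_deductible"},
]

# Fine-tuning data was supplied as JSON Lines: one JSON object per line.
with open("expenses_week_42.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```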

“[A thing that] we’ve been very mindful of and have been emphasizing during our development of this API is to make it accessible to developers who might not necessarily have a machine learning background,” OpenAI technical staff member Rachel Lim told VentureBeat in a phone interview. “How this manifests is that you can customize a GPT-3 model using one command line invocation. [W]e’re hoping that because of how accessible it is, we’re able to reach a more diverse set of users who can take their more diverse set of problems to technology.”
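The "one command line invocation" Lim describes was, in the CLI of the time, roughly `openai api fine_tunes.create -t <train_file> -m <base_model>`. A sketch of the equivalent calls in the pre-1.0 Python SDK follows; the file name carries over from the previous sketch and everything else is illustrative.

```python
import openai

openai.api_key = "sk-..."

# Upload the training file prepared earlier (name is illustrative).
training_file = openai.File.create(
    file=open("expenses_week_42.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tune of a base GPT-3 model on that file.
fine_tune = openai.FineTune.create(
    training_file=training_file["id"],
    model="davinci",
)
print(fine_tune["id"])  # poll this job ID until training completes
```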

Lim asserts that the GPT-3 fine-tuning capability can also lead to cost savings, because customers can count on fine-tuned models to produce high-quality outputs more consistently than a vanilla GPT-3 model. (OpenAI charges for API access based on the number of tokens, roughly word-length chunks of text, that the models process and generate.) While OpenAI levies a premium on fine-tuned models, Lim says that most fine-tuned models require shorter prompts containing fewer tokens, which can also result in savings.
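To make the token arithmetic concrete, here is a sketch of why fine-tuned prompts can be shorter: with the base model, the pattern must be demonstrated inside every prompt (few-shot examples billed on every request), whereas a fine-tuned model has absorbed the pattern and needs only the new input. The fine-tuned model ID below is hypothetical.

```python
import openai

openai.api_key = "sk-..."

# Base model: the pattern must be spelled out in the prompt itself,
# so every request pays for these few-shot tokens.
few_shot_prompt = (
    "AWS CLOUD SERVICES $42.17 -> deductible\n"
    "NETFLIX SUBSCRIPTION $15.99 -> not_deductible\n"
    "GITHUB INC $7.00 ->"
)
base = openai.Completion.create(
    engine="davinci",
    prompt=few_shot_prompt,
    max_tokens=5,
    temperature=0,
)

# Fine-tuned model (ID is hypothetical): the pattern lives in the
# weights, so the prompt shrinks to the new input plus the separator.
tuned = openai.Completion.create(
    model="davinci:ft-your-org-2021-12-15-12-00-00",  # hypothetical ID
    prompt="GITHUB INC $7.00\n\n###\n\n",
    max_tokens=5,
    temperature=0,
)
print(tuned["choices"][0]["text"])
```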