AI Experts Discuss Implications of GPT-3

Posted on: Feb 24, 2021

Last July, GPT-3 took the internet by storm. The massive 175 billion-parameter autoregressive language model, developed by OpenAI, showed a startling ability to translate languages, answer questions, and – perhaps most eerily – generate its own coherent passages, poems, and songs when given only a few examples as prompts. Experts were captivated by these abilities, too: captivated enough, in fact, that researchers from OpenAI and a number of universities met several months ago to discuss the technical and sociopolitical implications of the model.

The summit, helmed by OpenAI in partnership with Stanford's Institute for Human-Centered Artificial Intelligence, convened in October. Apart from those two institutions, the participants remain unknown to the public, as the meeting was held under the Chatham House Rule, whereby information from a meeting may be shared but the identities and affiliations of its participants may not be revealed.

On the table were two key questions about the future of large language models like GPT-3. First: what are the technical capabilities and limitations of these models? Second: what are the societal effects of their widespread use?

The summary of the summit was written by Alex Tamkin, Miles Brundage, Jack Clark, and Deep Ganguli, who characterized the conversation as “collegial and productive,” but added that “there was a sense of urgency to make progress sooner than later in answering these questions.”