
 
OPENAI’S ARTIFICIAL INTELLIGENCE STRATEGY
Posted on: Sep 26, 2020

For several years, there has been much discussion about AI’s capabilities. Many believe that AI will outperform humans in certain areas. Although the technology is still in its infancy, researchers expect human-like autonomous systems within the coming years. OpenAI holds a leading position in the artificial intelligence research space. Founded in December 2015, the organization’s goal is to advance digital intelligence in a way that benefits humanity as a whole. Since its research is free from financial obligations, OpenAI can focus more fully on positive human impact.

Deep Research and Product Development

OpenAI’s mission is to be the first to create artificial general intelligence (AGI); not to dominate the world, but to ensure that it benefits all of humanity. The company is deeply involved in research and product development. Most recently, it has attracted global media attention with the introduction of GPT-3, an AI program and the largest language model ever built. Launched in August 2020, GPT-3 is the third in a series of autocomplete tools designed by the company. The program took years of development, and it also rides a wave of recent innovation in AI text generation.

Prior to this, in June 2020, OpenAI released an API for accessing its new AI models. Unlike most AI systems, which are designed for a single use case, the API provides a general-purpose “text in, text out” interface, allowing users to try it on virtually any English-language task. In April of the same year, the company introduced Jukebox, a neural net that generates music, including rudimentary singing, as raw audio in a variety of genres and artistic styles.
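To make the “text in, text out” idea concrete, here is a minimal sketch of calling that interface with OpenAI’s Python client as it worked around the time of launch; the engine name, prompt, and parameters are illustrative assumptions rather than recommendations.

```python
# Minimal sketch of the "text in, text out" interface using the 2020-era
# OpenAI Python client (Completion endpoint). The engine name, prompt, and
# parameters below are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

response = openai.Completion.create(
    engine="davinci",  # assumed engine name for illustration
    prompt="Summarize in one sentence: OpenAI released a general-purpose text API.",
    max_tokens=32,
    temperature=0.7,
)

# The completion text comes back in the first choice.
print(response["choices"][0]["text"])
```

The same call shape covers translation, question answering, or summarization; only the prompt text changes, which is what makes the interface general purpose.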

In April 2019, OpenAI created MuseNet, a deep neural network that can generate 4-minute musical compositions with 10 different instruments and can even combine styles, from country to Mozart to the Beatles. MuseNet uses the same general-purpose unsupervised technology as GPT-2, the second model in the GPT series: a large-scale transformer trained to predict the next token in a sequence, whether audio or text. Earlier that year, in February, the company trained GPT-2, which generates coherent paragraphs of text, achieves state-of-the-art performance on many language-modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization, all without task-specific training. The model was trained simply to predict the next word in 40 GB of Internet text. GPT-2 is a large transformer-based language model with 1.5 billion parameters, trained on a dataset of 8 million web pages.
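Because the GPT-2 weights were publicly released, the next-token prediction behavior is easy to demonstrate. The sketch below uses the third-party Hugging Face transformers library rather than OpenAI’s original release code, and the prompt is made up.

```python
# Illustrative sketch of next-token text generation with the released GPT-2
# weights, via the third-party Hugging Face "transformers" library (not
# OpenAI's original release code). The prompt is a made-up example.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Artificial intelligence research has"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# The model repeatedly predicts the next token and appends it to the sequence.
output_ids = model.generate(
    input_ids,
    max_length=40,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```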

In July 2018, OpenAI trained a human-like robot hand, called Dactyl, to manipulate physical objects with unprecedented dexterity. The system was trained entirely in simulation and transferred its knowledge to reality, adapting to real-world physics using techniques the company had been developing for several years. Dactyl learns from scratch using the same general-purpose reinforcement learning algorithm and code as OpenAI Five, the company’s system for playing the multiplayer video game Dota 2.

In August 2017, the company built a bot that beat the world’s top professionals at 1v1 matches of Dota 2 under standard tournament rules. Dota 1v1 is a complex game with hidden information in which agents must learn to plan, attack, trick, and deceive their opponents. Earlier, in June of that year, OpenAI released a new class of reinforcement learning algorithms, Proximal Policy Optimization (PPO), which performs comparably to or better than state-of-the-art approaches while being much simpler to implement and tune. PPO has become the default reinforcement learning algorithm at OpenAI thanks to its ease of use and good performance.
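As a rough illustration of how PPO is typically applied, the sketch below trains an agent on a toy Gym environment using the third-party stable-baselines3 implementation; it is not OpenAI’s own baselines code or the Dota 2 setup, and the environment, timestep budget, and save path are placeholders.

```python
# Rough sketch of training an agent with Proximal Policy Optimization (PPO),
# using the third-party stable-baselines3 implementation and the toy
# CartPole-v1 environment as placeholders (not OpenAI's own code or setup).
from stable_baselines3 import PPO

# Passing the environment id lets the library construct the Gym environment.
model = PPO("MlpPolicy", "CartPole-v1", verbose=1)

# Each update collects rollouts and optimizes PPO's clipped surrogate objective.
model.learn(total_timesteps=10_000)

model.save("ppo_cartpole")  # hypothetical output path
```

The brevity of this setup reflects the point made above: compared with earlier policy-gradient methods, PPO needs little tuning to perform well, which is why it became the default at OpenAI.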