Intro and context
OpenAI is an AI research and deployment company whose mission is to ensure that artificial general intelligence benefits all of humanity. For its first few years, OpenAI was a nonprofit, but in 2019 it restructured as a “capped-profit” company that limits the returns investors can earn. Profits in excess of a 100x return are passed on to the overarching nonprofit, which distributes them as it sees fit.
As you have probably guessed by now, artificial intelligence is the absolute core of what OpenAI does. OpenAI attempts to build safe and beneficial artificial general intelligence (AGI – highly autonomous systems that outperform humans at most economically valuable work). Along the way, it releases powerful AI models that developers can use to build next-generation applications.
OpenAI’s Generative Pre-trained Transformer 3 (GPT-3) is an autoregressive language model that uses deep learning to produce human-like text, and it can perform a wide variety of natural language tasks.
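To make “autoregressive” concrete, here is a minimal, hypothetical sketch – not OpenAI’s code, and a toy bigram table standing in for GPT-3’s billions of learned parameters – of the core idea: generate text one token at a time, with each new token conditioned on everything generated so far.

```python
import random

# Toy "model": bigram continuation probabilities (hypothetical values).
# A real GPT model learns a vastly richer version of this mapping.
BIGRAMS = {
    "the": [("cat", 0.5), ("dog", 0.5)],
    "cat": [("sat", 1.0)],
    "dog": [("ran", 1.0)],
    "sat": [("down", 1.0)],
    "ran": [("away", 1.0)],
}

def generate(prompt: str, max_tokens: int = 5, seed: int = 0) -> str:
    """Autoregressive loop: sample the next token given the last one,
    append it, and repeat -- the essence of GPT-style generation."""
    rng = random.Random(seed)
    tokens = prompt.split()
    for _ in range(max_tokens):
        choices = BIGRAMS.get(tokens[-1])
        if not choices:  # no learned continuation for this token
            break
        words, probs = zip(*choices)
        tokens.append(rng.choices(words, weights=probs)[0])
    return " ".join(tokens)

print(generate("the"))
```

GPT-3 does the same thing at scale: instead of a lookup table keyed on the previous word, a deep transformer network conditions on the entire preceding context to predict the next token.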
Use cases of GPT-3
- Duolingo – Duolingo uses GPT-3 to provide French grammar corrections; after conducting an internal study, Duolingo concluded that the feature improved learners’ second-language writing skills.
- GitHub – GitHub built GPT-3 into Copilot, an AI pair programmer that helps users write code faster and with less work. Copilot does this by drawing on the context in your editor to synthesize whole lines and entire functions of code.
- Viable – Viable helps businesses understand what customers are telling them faster and more accurately, using language models such as GPT-3 to analyze customer feedback and generate summaries and insights.
In April 2022, OpenAI introduced DALL·E 2 – a system that can generate images from text. A user can edit an image however they like by describing the change they would like to see in text. Furthermore, the system accepts images as inputs and can output variations of those images in different angles and styles.
Through deep learning, DALL·E 2 not only understands individual objects but also the relationships between them – so much so that a user can input “a koala riding a motorcycle” and DALL·E 2 will output an image showing exactly that.
Challenges facing OpenAI
Many critics argue that AI will forever be imitative, and that recent innovations have simply allowed AI models to imitate better, because modern technological capabilities make it possible to accumulate extremely large training datasets.
In my opinion, there is some truth to this criticism, though in certain business applications an AI model that produces extremely accurate imitations is enough for a business to achieve its financial goals. Large language models are limited in that they lack knowledge about the world: common general knowledge is frequently relied upon when making connections between objects. Perhaps enough data can one day allow OpenAI to build models that understand extra context and common knowledge, but that is certainly one of the challenges it faces.
Furthermore, OpenAI must assess whether its AI models are sufficiently predictive in nature to achieve its stated goal. Given OpenAI’s switch to a capped-profit model, it might face pressure to keep building AI products that are imitative in nature if that attracts more investment. But perhaps imitative AI modeling has use cases in which it can contribute to advancing humanity: there will certainly be benevolent use cases where true AI prediction is required, but next-gen imitation may also prove beneficial.
Another challenge that OpenAI faces is directly related to its mission. It must ensure that its models are not used malevolently, and it must design them in ways that minimize bias. OpenAI is very careful when building models and takes great care to make sure they are ready for public release. Once released, its models have safeguards in place to ensure they are not used to harass.
Given the nature of OpenAI’s mission, it has the opportunity to fill the gaps commonly seen in AI language models. In June 2021, it published a paper offering a new technique to combat toxicity in GPT-3’s responses – in short, a process that adds a layer of human intervention to the modeling. OpenAI has the opportunity to make a real difference in the world and to help eliminate toxic biases from artificial intelligence.
OpenAI’s work focuses on tools that let its users massively scale their productivity and enhance their own output. Other companies are more likely to focus on tools that solve their own problems – tools that will likely be kept private – whereas OpenAI builds tools that address gaps across many different use cases and markets.