First, what are an API and GPT-3? We will start with the API.
An application programming interface (API) is a connection that allows computers or computer programmes to communicate with one another. It is a type of software interface that offers a service to other programmes. An API simplifies programming by abstracting the underlying functionality and exposing only the objects or actions the developer needs. Just as a graphical email client provides a button that performs all the steps of fetching and highlighting new emails, an API for file input/output can provide the programmer with a single function that copies a file from one location to another, without requiring the developer to understand the file-system operations happening behind the scenes.
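To make the file-copy example concrete, here is a minimal Python sketch using the standard library's `shutil.copy`. The file names are just placeholders for illustration; the point is that one call hides all the underlying file-system work.

```python
import os
import shutil
import tempfile

# Create a throwaway source file so the example is self-contained.
src = os.path.join(tempfile.gettempdir(), "example_source.txt")
with open(src, "w") as f:
    f.write("hello")

dst = os.path.join(tempfile.gettempdir(), "example_copy.txt")

# One call to the API copies the file; opening, buffering, and
# writing the data behind the scenes are all abstracted away.
shutil.copy(src, dst)
```

The caller never touches file descriptors or read/write loops — that is exactly the kind of abstraction the paragraph above describes.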
Okay, so what is GPT-3, or Generative Pre-trained Transformer 3? According to OpenAI, it is a deep-learning-based autoregressive language model that generates human-like text. It is the third-generation language prediction model in OpenAI's GPT-n series. The full version of GPT-3 has 175 billion machine learning parameters. GPT-3 builds on natural language processing (NLP) systems that use pre-trained language representations. Prior to GPT-3, the biggest language model was Microsoft's Turing NLG, launched in February 2020 with 17 billion parameters, less than a tenth of GPT-3's. The text GPT-3 creates is of such good quality that it can be difficult to distinguish from text written by a person, which has both advantages and disadvantages. Microsoft has invested $1 billion in OpenAI, and on September 22, 2020, it announced that it had licensed "exclusive" use of GPT-3; others can still use the public API to receive output, but only Microsoft has access to GPT-3's underlying model.
Today OpenAI launched Codex, an API that uses artificial intelligence to write code from natural language. The model shown today was a rudimentary version of what is possible. Greg Brockman and Wojciech Zaremba showed how the model can complete a function with 37% accuracy so far.
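As an illustration of the workflow — my own reconstruction, not code from OpenAI's actual demo — a developer writes a natural-language description, and the model fills in the function body. A hypothetical prompt and the kind of completion Codex aims to produce:

```python
# Prompt given to the model (plain English):
#   "Write a function that returns the n largest numbers in a list."

# A completion of the sort the model might generate:
def n_largest(numbers, n):
    """Return the n largest values, in descending order."""
    return sorted(numbers, reverse=True)[:n]

print(n_largest([3, 1, 4, 1, 5, 9, 2, 6], 3))  # [9, 6, 5]
```

The function name and prompt here are invented for the example; the point is the shape of the interaction — description in, working code out.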
I believe Codex leads the way towards what we as humans really want from computers: we say something, and the computer just does it. The demo started with them typing commands; later they used their voice, the same way you would talk to a friend. Programming languages are great and have brought us far, but we need to be able to communicate with computers without studying a new language, whether for menial tasks or for the average person who has no interest in programming. Programming languages as the lingua franca between humans and computers may soon be a relic thanks to Codex. As the model is used, the neural net improves, becoming more accurate the more it practises what we ask of it. This will take time, but soon enough I believe we will forget how to manually input information on a computer.
About halfway through the demonstration, they showed how to create a video game using natural text. If this is only the beta and you can do that, what comes next?
As more APIs are created, AI systems that write code could start creating products with little human input. For years I've been warning people about AI taking over physical jobs such as Amazon warehouse packing, and it has. But Codex is cognitive, logical technology, and it bucks that trend: cognitive labor may change industries sooner than physical labor does.
Are you a data scientist? Don't worry, you weren't left out either. Here is a demonstration of the API analysing a weather dataset for San Francisco.
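The demo itself isn't reproducible here, but a sketch of the kind of code such a prompt might yield — using made-up illustrative values, not real San Francisco weather data — looks something like this:

```python
import statistics

# Made-up illustrative readings (°F); not real San Francisco data.
daily_highs = {
    "Mon": 64, "Tue": 66, "Wed": 61, "Thu": 63, "Fri": 68,
}

# The kind of summary a prompt like "find the average high and
# the hottest day" might produce:
average = statistics.mean(daily_highs.values())
hottest = max(daily_highs, key=daily_highs.get)
print(f"average high: {average:.1f}F, hottest day: {hottest}")
```

Trivial for a data scientist to write by hand, of course — the interesting part is a model producing it from a one-sentence request.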
This will rapidly transform the tech industry: programmers and developers in the future will be far more productive and able to do more, with neural networks as partners. If you don't program, you can wait patiently for Siri to adopt this kind of API so it can actually be useful. Don't worry, we are one step closer to a working J.A.R.V.I.S. from Iron Man.