GPT-3 Is an Amazing Research Tool. But OpenAI Isn’t Sharing the Code.

Some A.I. experts warn against a lack of transparency in the buzzy new program

Dave Gershgorn · Published in OneZero · 8 min read · Aug 20, 2020


OpenAI’s company logo. Image: OpenAI

For years, A.I. research lab OpenAI has been chasing the dream of an algorithm that can write like a human.

Its latest iteration on that concept, a language-generation algorithm called GPT-3, has now been used to generate fake writing so convincing that a blog written by the algorithm fooled posters on Hacker News and became popular enough to top the site. (A telling excerpt from the post: “In order to get something done, maybe we need to think less. Seems counter-intuitive, but I believe sometimes our thoughts can get in the way of the creative process.”)

OpenAI has been able to achieve such a powerful algorithm because of its access to massive amounts of computing power and data. And the algorithm itself is bigger than any that’s come before it: The largest version of GPT-3 has 175 billion parameters, the numerical values the model adjusts during training to make more precise predictions. GPT-2, its predecessor, had 1.5 billion.
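To make that scale concrete, here is a minimal sketch in Python of how a parameter count is tallied for a toy fully connected network. The layer sizes below are illustrative, not GPT-3’s actual architecture; the point is that every weight and bias counts as one parameter, and GPT-3’s 175 billion are tallied the same way across vastly larger layers.

```python
# Counting parameters in a toy two-layer network.
# Layer sizes are hypothetical, chosen only for illustration.
layers = [(768, 3072), (3072, 768)]  # (inputs, outputs) per layer

total = 0
for n_in, n_out in layers:
    weights = n_in * n_out  # one weight per input-output connection
    biases = n_out          # one bias per output unit
    total += weights + biases

print(f"{total:,} parameters")  # 4,722,432 for this tiny example
```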

While OpenAI has released its algorithms to the public in the past, it has opted to keep GPT-3 locked away. The research firm says the model is simply too large for most people to run, and putting it behind a paywall allows OpenAI to monetize its research. In the past year, OpenAI has changed its corporate structure to make itself more appealing to investors. It dropped its nonprofit status in favor of a “capped-profit” model that allows investors to earn returns if OpenAI becomes profitable. It also entered into a $1 billion deal with Microsoft, opening a collaboration between the two companies and giving OpenAI priority access to Microsoft’s cloud computing platform.
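In practice, that means access runs through OpenAI’s hosted API rather than downloadable model weights. A minimal sketch of what a request looked like with the beta Python client of the time follows; the engine name, prompt, and parameter values here are illustrative, not drawn from the article.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # granted only to approved beta users

# Ask the hosted model to continue a prompt; the model's weights
# never leave OpenAI's servers.
response = openai.Completion.create(
    engine="davinci",  # illustrative engine name from the beta API
    prompt="In order to get something done, maybe we need to",
    max_tokens=50,
)

print(response.choices[0].text)
```

The design point critics raise is visible in the sketch itself: researchers can sample outputs through the paywalled endpoint, but they cannot inspect or rerun the model behind it.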

Researchers who spoke to OneZero questioned OpenAI’s decision not to release the algorithm, saying that it goes against basic scientific principles and makes the company’s claims harder to verify. (A representative for OpenAI declined to comment when reached for this article.)


Dave Gershgorn
Senior Writer at OneZero covering surveillance, facial recognition, DIY tech, and artificial intelligence. Previously: Qz, PopSci, and NYTimes.