with @smc90 @withfries2 Given all the recent buzz around the language model GPT-3 -- what is "it", how does it work, where does it fit into the arc of broader tech trends -- what's hype, what's real here? Are we really getting closer to artificial general intelligence? We cover all this and more, and discuss the broader implications for startups, incumbents, and the future of work too.
In this episode -- cross-posted from our 16 Minutes show feed -- we cover all the buzz around GPT-3, the pre-trained machine learning model from OpenAI that’s optimized to do a variety of natural-language processing tasks.
It’s a commercial product, built on research; so what does this mean for both startups AND incumbents… and the future of “AI as a service”? And given that we’re seeing all kinds of (cherry-picked!) examples of output from OpenAI’s beta API being shared -- how do we know how good it really is or isn’t? How do we tell the difference between something that “looks like” a toy and something that “is” a toy when it comes to new innovations?
And where are we, really, in terms of natural language processing and progress towards artificial general intelligence? Is it intelligent, does that matter, and how do we know (if not with a Turing Test)? Finally, what are the broader questions, considerations, and implications for jobs and more? Frank Chen explains what “it” actually is and isn’t, and more, in conversation with host Sonal Chokshi. The two help tease apart what’s hype and what’s real here… as is the theme of 16 Minutes.