Jordan Peterson is a well-known Canadian psychologist; ChatGPT is the name given to a large language model developed by OpenAI. The relative newness of ChatGPT has given rise to a lot of fear and excitement, sometimes both at once.

The excitement centers on the possibilities that people in various jobs, mostly freelance coders and content writers, can unlock by working directly with AI. ChatGPT's speed offers a lot when it comes to streamlining a workflow.

The fear comes from the inescapable reality that ChatGPT, or something very similar, could end up putting freelancers (and many other workers) out of a job. There are merits to both arguments; Jordan Peterson, however, takes the debate to an entirely new and terrifying level, especially if it turns out that he's right.


What is ChatGPT Capable Of?

Before getting into the crux of Jordan Peterson's opinion on AI, and ChatGPT in particular, it's important to note a couple of things. First, there's a widespread assumption that ChatGPT writes only bland, matter-of-fact content that an editor could easily read and identify as non-human.

Second, there’s also the prevailing assumption that there’s nothing more to it. Both assumptions are drastically incorrect. Not only can ChatGPT take command and convert it into a copy that’s nearly indistinguishable from human hands, but it can also code. 

Given a prompt describing what you want to code, ChatGPT can produce it in a matter of seconds, often rendering code that is logical and executes its function without a hitch. Third, there is one thing that ChatGPT cannot do, at least not yet.
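To make that concrete, here is the kind of thing a simple prompt such as "write a Python function that checks whether a string is a palindrome" typically yields in seconds. This is a hypothetical illustration, not output from Peterson's experiments:

```python
def is_palindrome(text: str) -> bool:
    """Return True if text reads the same forwards and backwards,
    ignoring case, spaces, and punctuation."""
    cleaned = [ch.lower() for ch in text if ch.isalnum()]
    return cleaned == cleaned[::-1]

print(is_palindrome("A man, a plan, a canal: Panama"))  # True
print(is_palindrome("ChatGPT"))                         # False
```

Trivial, yes, but the point is that a working, commented function appears faster than most people can type it.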

It can’t write copy based on legitimate, lived experiences. In other words, new content is generated from a human experience rather than something that’s thoroughly researched and >Will ChatGPT become capable of that in the near future? It’s hard to say. Right now, it can only assemble data from what it can cull from the internet. Of course, there are hundreds of thousands of lived experiences written down on blogs all over the place. 

But can ChatGPT properly mimic the emotional response of picking up a flower and smelling it? It's hard to say at the moment.

Jordan Peterson’s Thoughts and Warnings on AI

According to Jordan Peterson, he and a few colleagues spent some time feeding prompts into ChatGPT and observing how well it functioned and what it could produce. The results were understandably alarming to Peterson, as ChatGPT produced copy for a number of intentionally complex prompts.

Not only did it produce copy, it did so in a way that Peterson himself said he probably could not distinguish from his own writing. One of Peterson's colleagues put ChatGPT through an SAT, and it apparently performed fairly well, though certainly not top tier.

What it did next, however, was pretty amazing. Peterson asked it to write an appropriately complex essay, which ChatGPT then produced. Then he had ChatGPT grade its own copy, which it proceeded to do without a hitch, producing a profound and logical analysis of its own work.

As fascinated by and afraid of AI as Peterson seems to be, he also believes that it will grow immensely this year alone. When it comes to "Artificial Intelligence," most people assume that those two words imply consciousness.

Right now, tools like ChatGPT are constrained by their programming and are not truly self-aware. No matter how close to self-aware it may seem, it's not. While Peterson doesn't necessarily take a cut-and-dried stance on ChatGPT, it's clear that he is not fond of it.

It’s also clear that he predicts a fairly terrible future in which ChatGPT grows into something that largely replaces the need for human labor in many, different industries. He also feels like ChatGPT is already smarter than us and will be far smarter than us before the year is out. 

Is Jordan Peterson Correct in His Assertions?

Right now, it would be correct to say that ChatGPT is nothing more than a conversational tool. It's an incredibly advanced conversational tool, with nearly instantaneous access to fountains of information, but it's still a conversational tool.

It’s fairly competent and has a variety of functions within the context of chat. For instance, it can code but only if you ask it to do so and it complies by producing an accurate representation of what you asked. Just like asking someone in real life. The only difference here is speed. 

We do know that ChatGPT, as part of OpenAI, will continue to grow as the databases it has access to continue to grow. According to Peterson, at some point this year, OpenAI will be able to use patterns it observes throughout the world to create its own "constructions."

Then it will take those “constructions” and test them against the world. In other words, it will be able to effectively participate in the scientific method. As it currently stands, there is still a fundamental lack of accuracy. 

One thing is for certain: this technology is improving at an astronomical rate. GPT-4 is already being demoed, and it can reportedly take a single prompt and convert it into a 60,000-word book. That's impressive. The real question is, what does GPT-5 look like? GPT-6? GPT-100?

All Things Considered

Right now, writers can keep on writing and coders can keep on coding. But how long will that last before the entirety of these jobs can be done by AI at a tenth of the cost? It's a scary thing to consider, especially when what comes next is just around the corner.