
What Is ChatGPT? For People Who Still Don’t Get It.

Will it take our jobs? How can I use it to my advantage? Is it just a bullshit generator? Is it already changing the world? At what cost?
Photo: Getty Images

Let me start by saying that this story could’ve probably been written by ChatGPT (editor’s note: it’s not). But apart from that being unethical and probably something that wouldn’t have made it past the AI and plagiarism checks, that kinda is what ChatGPT does. Write your essays or social media posts, give you quick answers to anything you might want to know more about, write code, craft resumes and cover letters, translate content, improve grammar, heck, even give you movie and show suggestions. But as with everything that seems dazzling on the surface, it comes with huge-ass disclaimers.


ChatGPT, or the Chat Generative Pre-Trained Transformer (GPT), is a large language model created by the artificial intelligence research lab OpenAI. It is just one of the relatively intelligent chatbots out there. Soon after its release in November 2022, fears emerged that such AI platforms are downright scary: they could replace teachers, kill the student essay, and make us complacent pricks who will never be able to understand the significance of the written word.

Recently, Geoffrey Hinton, widely regarded as the godfather of AI, resigned from Google, saying he regretted his work and warning that chatbots could soon be more intelligent than humans. He told The Guardian in a recent interview that he has “huge uncertainty at present” when it comes to large language models such as ChatGPT, because “having consumed all the documents on the web, they won’t be able to go much further unless they can get access to all our private data as well. I don’t want to rule things like that out – I think people who are confident in this situation are crazy.”

Before diving into the nuances of how ChatGPT functions, it’s worth saying that the fears of generative AI platforms like ChatGPT taking over jobs are not entirely unfounded and pose a very real risk. Sam Altman, the controversial co-founder of OpenAI, underplays ChatGPT’s influence and had this to say about its “genius” in a tweet from December 10, 2022: “ChatGPT is incredibly limited but good enough at some things to create a misleading impression of greatness. It's a mistake to be relying on it for anything important right now. It’s a preview of progress; we have lots of work to do on robustness and truthfulness.”


The suggestions are hardly new, but when I told ChatGPT "I said new", it apologised and gave me more recent suggestions. Some of them, like Bridgerton, are still hardly new.

It’s also worth noting that Altman is the brains behind the controversial Worldcoin crypto project, which was hoping to make a big splash on the crypto train until all hell broke loose when multiple investigative reports revealed an operation littered with privacy and labour concerns.

Does that say something about the ethics of AI and what goes on behind the scenes in the making of such platforms? A stretch? Maybe. Maybe not.

So, how does ChatGPT work?

To put it simply, ChatGPT is a large language model (or LLM), a type of deep-learning software that can generate human-like language after being fed massive amounts of existing written material. To use the free version of ChatGPT, all you need to do is go to chat.openai.com, sign up with your email address and complete phone verification. From that point onwards, it’s a free-for-all. You can ask the bot pretty much anything and everything. You can also be sure that the English is coherent and near impeccable.
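For the more technically curious, the same family of models behind the chat window can also be reached programmatically. The snippet below is only a rough sketch, assuming OpenAI’s Python library as it existed around the time of writing (2023); the API key is a placeholder, and names and fields may well have changed since.

```python
# A minimal, illustrative sketch using OpenAI's 2023-era Python library.
# "YOUR_API_KEY" is a placeholder; this is not official documentation.
import openai

openai.api_key = "YOUR_API_KEY"

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model behind the free ChatGPT tier at the time
    messages=[
        {"role": "user", "content": "Explain ChatGPT in one sentence."}
    ],
)

print(response["choices"][0]["message"]["content"])
```

In other words, whether you type into the chat box or send a request like the one above, the same thing happens under the hood: your text goes in, and the model’s best guess at a reply comes back.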

How accurate and nuanced? You’ll find out soon. 


In the chat window, there is already a disclaimer that it “may occasionally generate false information” and even “harmful instructions or biased content” and, most importantly, has limited knowledge of events after 2021. When I asked ChatGPT to take me through exactly how it functions, it claimed that when a user shares a prompt or question, it analyses the input and uses that to generate a response.


Desi people know that you can't be having chana for lunch and rajma for dinner unless you're going for extreme flatulence, but this meal plan was not too bad.

“My responses are generated using a combination of statistical patterns and algorithms that allow me to understand and generate natural language,” it said. “My responses are not pre-programmed, but rather generated on-the-fly based on the context and input you provide. This means that the quality and accuracy of my responses depend on the quality and specificity of the input you provide.” 
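To make the “statistical patterns” idea a little more concrete, here is a deliberately tiny sketch: a word-frequency model that guesses the next word from what it has seen before. Real systems like ChatGPT use enormous neural networks trained on billions of words rather than a hand-counted table, so treat this purely as an illustration of the principle, not of ChatGPT’s actual code.

```python
import random
from collections import defaultdict

# Toy "language model": count which word tends to follow which in a tiny
# corpus, then sample the next word from those counts. This is a vastly
# simplified stand-in for the statistical patterns ChatGPT describes.
corpus = (
    "chatgpt writes essays chatgpt writes code "
    "chatgpt answers questions chatgpt makes things up"
).split()

counts = defaultdict(lambda: defaultdict(int))
for current_word, following_word in zip(corpus, corpus[1:]):
    counts[current_word][following_word] += 1

def next_word(word):
    """Pick a likely next word based on observed frequencies."""
    candidates = counts.get(word)
    if not candidates:
        return None
    options = list(candidates)
    weights = [candidates[w] for w in options]
    return random.choices(options, weights=weights)[0]

# Generate a short continuation "on the fly" from a prompt word.
word, output = "chatgpt", ["chatgpt"]
for _ in range(5):
    word = next_word(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```

The point is simply that nothing here “knows” anything; it just bets on what usually comes next, which is also why the output can sound fluent while being completely wrong.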

It is pertinent to note that because it works solely on patterns in the data already fed to it, it struggles even with math problems. This is not to say that it can’t tell you that two plus two equals four, but there is only a basic logic built into its system, and anything beyond it is not in ChatGPT’s domain. Recently, GPT-4 launched, which OpenAI claims is more powerful and accurate. It comes at a cost, with the monthly plan costing $20. However, Forefront Chat, developed by New York-based software development company Forefront AI, allows users to access GPT-4 for free.


Still, people have found plenty of uses for it, some less sinister than others.

What about accuracy?

In a detailed essay for The Atlantic titled “ChatGPT Is Dumber Than You Think”, writer Ian Bogost conducted a series of experiments, asking questions that ranged from philosophical enquiries to requests that ChatGPT write an imagist poem and even help him select a particular window shade. In almost all instances, despite clear questions, the answers were either misleading or just untrue.

Bogost had asked for an imagist poem about a hamburger, and quirkily enough, the resulting poem was anything but imagist. The writer pointed it out and ChatGPT confessed: “You are correct, the poem that I generated is still not an imagist poem. An imagist poem uses precise, vivid imagery to convey a specific idea or emotion, and focuses on a single image or moment. The poem that I generated uses a more narrative and descriptive style, and does not focus on a single, specific image.”

Bogost notes that what separates ChatGPT from a random dude on the internet deriving his intelligence from Wikipedia is the chatbot’s ability to acknowledge its mistakes and work on correcting them. However, a high-school student relying on such information as gospel truth might not always be able to spot the inaccuracies, because of the bot’s impressive command over the English language. As we can glean from Bogost’s tests above, the answers are deceptively cohesive and have no immediate markers of flaws. Unless you are an expert, you might not be able to tell that they are inaccurate.


It might all be cool if we were just asking ChatGPT to help us decide on window shades. The worrisome bit is when ChatGPT makes things up from scratch. There is an argument that we shouldn’t expect too much from platforms like ChatGPT. After all, they’re still developing. It’s good for research, others might claim. But is it? When I asked the system to share links to studies about sexual health, it literally made a study up. This was the study that it claimed existed: “Sexual Activity and Cardiovascular Risk Revisited: The Role of Aging and Cardiometabolic Risk” - American Journal of Cardiology. 

And look at its audacity when I asked for a link to this study. The link provided instead led me to a study on optic radiation, something to do with eyes and the whites of the eyes. Even the journal wasn’t the same. I tried looking for it on Google, too, and there was no such study with that title anywhere in the annals of the internet. Often, the links it provides are either broken or lead to studies that have no relevance.

As Bogost himself noted in his experiment: “In almost every case, the AI appeared to possess both knowledge and the means to express it. But when pressed — and the chat interface makes it easy — the bot almost always had to admit that it was just making things up.”

Talk about AI genius. 


People are using it in other ways, too. ChatGPT is a therapist to some, though experts have rightly flagged the risks inherent in banking on a bot to navigate your mental health. Recently, a man claimed to have won a small lottery with ChatGPT’s help. A version of ChatGPT might even help you flirt on dating apps, some claim. While there is a growing chorus about just how effective ChatGPT is at helping draft resumes and job descriptions, most of those examples have been trite and superficial, for the simple reason that the bot will just regurgitate the same thing in five different ways.

A Time investigation published in January this year reported that OpenAI, the company behind ChatGPT, paid Kenyan workers less than $2 an hour to filter through tens of thousands of lines of text to help make its chatbot safer to use.


Maybe we should start talking about the human rights violations at the heart of what powers AI, as opposed to debating how it will take over the world Terminator-style.

Not surprisingly, ChatGPT collects all your data. It is your data that fuels and trains it after all. Every conversation, regardless of how stupid, makes it slightly more aware and smarter. So, if you’re comfortable sharing your location, how messed up your previous relationship was, and your health history — that’s really on you. ChatGPT stores it all, even after you’ve closed the window. The only way to opt out of such storage is to delete your OpenAI account. Here’s how you do it:

Click Help > Click the chat icon that says ‘Send Us A Message’ > From the options provided click ‘Account Deletion’ > Click ‘Delete My Account’ to confirm. 

You will be informed that it will take up to four weeks to permanently delete your account. You won’t be able to make a new one, at least not from the email ID you used to sign in. 

Are systems like ChatGPT sentient?

Not as of now. ChatGPT itself claims that it is not, and that it can, at best, simulate human-like responses. So, what’s the point of these systems then? OpenAI claims that all of this is in the interest of free knowledge. While its co-founder says that it’s not accurate and is still getting there, no explanation or context is given about the end goal.


ChatGPT seems to have been given a conscience :-/

On the other hand, Google engineer Blake Lemoine recently told The Washington Post that Google’s LaMDA large language model was alive ― sentient, even. He claimed that LaMDA had the ability to chat with him about rights and personhood, and even change his mind about science-fiction writer Isaac Asimov’s third law of robotics: “A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.” Many technical experts in the AI field have criticised Lemoine’s statements and questioned their scientific correctness. But it’s renewed a broad ethical debate that is certainly not over yet.

At the end of it, it is crucial to note that ChatGPT and its cousins, intelligent or otherwise, are in the business of manipulating, morphing, condensing, and making bets on the information we’ve already shared with them. When they make things up, it’s definitely sinister and shows how far these systems will go to craft the illusion of accuracy. And while you can use them to really simplify some life things, it’s important to remember that they are riddled with bias, keep hitting dead ends, and are powered by underpaid workers in countries where accountability is scarce.

Follow Arman on Twitter and Instagram