DRD 56: ChatGPT Was Not the First AI. Where Did AI Come From?
- Dr. ARUN V J

Before November 2022, most of us thought Artificial Intelligence was something from a sci-fi movie—a distant future where robots like Ultron or C-3PO would either save us or destroy us. Even the best minds in tech predicted that human-level conversation was decades away.
Then ChatGPT dropped.

Suddenly, the timeline collapsed. The future wasn't decades away; it was in your browser, writing poetry, debugging code, and drafting emails. But here is the problem: three years later, most people still treat AI like a magic 8-ball. They ask a question, get an answer, and have no idea how the sausage is made.
If you want to lead in this new era—whether you are running a hospital department or managing a small team—you need to stop looking at AI as magic and start understanding it as a machine.
1. You Were Using AI Long Before ChatGPT
We often act like AI was born in late 2022.
It wasn’t.
It was just hiding in plain sight.
For the last decade, you’ve been training and using "narrow" AI every single day.
Your Phone’s Keyboard: When you type "Good," and it suggests "Morning," that’s a prediction model.
Amazon & Netflix: "Because you watched The Office, you might like Parks and Rec." That’s a behavior prediction model.
Google Search: When you start typing and it finishes your sentence? Prediction.
The Shift: What changed with ChatGPT (and later Gemini, Claude, etc.) is that we moved from predicting a recommendation to generating new content. We went from "Here is a movie you might like" to "Write a script for a movie that doesn't exist."

What Is GPT, Really?
GPT stands for Generative Pre-trained Transformer.
Let’s break that down simply.
Generative
It generates text by predicting what comes next.
Pre-trained
It is trained on massive datasets before you ever use it.
Transformer
This is the architecture that allows it to understand context—not just individual words.
GPT does not “know” things. It recognizes patterns between words, ideas, and structures.
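To make "understanding context" slightly less abstract, here is a toy sketch of the attention idea at the heart of the Transformer: each word asks how relevant every other word is to it, and weights them accordingly. The two-dimensional vectors below are made-up numbers purely for illustration; real models learn thousands of dimensions.

```python
import math

def softmax(scores):
    """Turn raw similarity scores into weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Toy "meaning" vectors for three words (illustrative numbers only).
vectors = {
    "bank":  [1.0, 0.0],
    "river": [0.9, 0.1],
    "money": [0.1, 0.9],
}

def attention_weights(query_word, context_words):
    """How much should query_word 'pay attention' to each context word?"""
    q = vectors[query_word]
    scores = [sum(a * b for a, b in zip(q, vectors[w])) for w in context_words]
    return softmax(scores)

weights = attention_weights("bank", ["river", "money"])
```

With these toy vectors, "bank" attends more to "river" than to "money", which is the mechanism that lets context pull a word toward one meaning rather than another.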
Who Built It—and How Was It Developed?
GPT models were developed at OpenAI over many years of research, building on the Transformer architecture that Google researchers introduced in 2017.
Key stages:
Collecting large, diverse text datasets
Training models to predict the next word
Reinforcing outputs using human feedback (a technique called RLHF)
Continuously refining safety and accuracy
This wasn’t a sudden breakthrough.
It was compounded progress, quietly accelerating.
Google’s Gemini and Other AI Models
ChatGPT is not alone.
Google’s Gemini represents a parallel evolution—deeply integrated with search, data, and multimodal understanding (text, images, video).
Different companies train models with:
Different datasets
Different priorities
Different guardrails
That is why the same question produces different answers across models.
2. The "Sophisticated Parrot": How It Works
I want you to strip away the jargon. Forget "neural networks" and "transformers" for a moment.
Imagine you read every book, article, blog post, and comment section on the internet up until today. Then, I give you an incomplete sentence: "The best way to prevent burnout is..."
Based on everything you’ve read, you would know that statistically, the next word is likely "to," "rest," or "setting." You wouldn't pick "elephant."
This is what Large Language Models (LLMs) do. They are essentially hyper-advanced prediction engines. They don't "know" facts the way you or I do. They don't have a database of true statements. They have a massive map of which words tend to follow other words.
The Data: They are fed terabytes of text—books, Wikipedia, Reddit threads, coding repositories.
The Training: They play "fill in the blank" billions of times until they get really, really good at sounding human.
Can they create new things? Yes and no. They can create new combinations of ideas, but they cannot invent information they haven't seen the building blocks for. They are remixing human knowledge at lightning speed.
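The "fill in the blank" training above can be sketched at toy scale. Real models use far more sophisticated statistics over terabytes of text; this three-sentence corpus and its counts are illustrative only.

```python
from collections import Counter, defaultdict

# A tiny "training set" (illustrative only).
corpus = (
    "the best way to prevent burnout is to rest "
    "the best way to learn is to practice "
    "the sky is blue"
).split()

# "Training": count which word follows which.
next_word = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word[current][following] += 1

def predict(word):
    """Return the statistically most likely next word seen in training."""
    return next_word[word].most_common(1)[0][0]

print(predict("is"))  # "to" follows "is" twice in this corpus, "blue" once
```

Notice there is no database of facts anywhere in this sketch, only counts of which words tend to follow other words. Scale that up by many orders of magnitude and you have the core of an LLM.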
3. The Hardware Myth: You Don't Need a Supercomputer
I get asked this often: "Do I need a powerful laptop to run these advanced AIs?"
The Answer: No.
When you type a prompt into ChatGPT or Gemini, your phone isn't doing the thinking. Your phone is just a walkie-talkie.
You type "Draft a strategy for my blood bank operations."
Your text is sent via the internet to a massive server farm (likely in the US or Europe).
A data center full of specialized processors (GPUs) handles your request, predicts the response, and streams it back to you.
This is why you can access the world's smartest intelligence from a budget smartphone with a decent 4G connection. The brain isn't in your pocket; it's in the cloud.
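The "walkie-talkie" point becomes obvious when you look at what your device actually sends: a small blob of JSON. The endpoint and field names below are hypothetical placeholders; each provider's real API differs in the details, but the shape is the same.

```python
import json

# Hypothetical endpoint, for illustration only.
API_URL = "https://api.example-ai-provider.com/v1/chat"

payload = {
    "model": "some-large-model",
    "messages": [
        {"role": "user",
         "content": "Draft a strategy for my blood bank operations."}
    ],
}

# All the "thinking" happens server-side; your phone only ships this
# small text payload over the network and displays the streamed reply.
request_body = json.dumps(payload)
print(len(request_body), "bytes leave your phone")
```

A few hundred bytes go out; the billions of model parameters never leave the data center.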

4. The Major Players (Who is Who?)
The landscape changes fast, but you need to know the heavy hitters.
OpenAI (ChatGPT): The first mover. Led by Sam Altman, they shocked the world with GPT-3 and GPT-4. They are currently the standard everyone tries to beat.
Google (Gemini): The sleeping giant that woke up. Google has more data than anyone (think YouTube, Search, Scholar). Gemini is their answer—it’s multimodal, meaning it understands video and images as natively as text.
Anthropic (Claude): Founded by former OpenAI employees. They focus heavily on "safety" and creating AI that feels more human, nuanced, and less robotic.
Meta (Llama): Mark Zuckerberg’s play. Unlike the others, Meta often releases their models as "open weights," allowing developers to tinker with them freely.
5. Why Do They Give Different Answers?
If I ask you "What is 2+2?", you say "4." If I ask ChatGPT, Gemini, and Claude, they might explain it differently. If I ask ChatGPT the same question twice, it might give me two different answers.
Why? Because they are probabilistic, not deterministic.
Remember the "fill in the blank" game? The AI assigns a probability to the next word.
Input: "The sky is..."
AI thinks: "Blue" (90% chance), "Grey" (8% chance), "Dark" (2% chance).
To keep things creative and human-like, the AI doesn't always pick the top choice. Sometimes it picks the #2 or #3 option to vary its sentence structure. This "temperature" setting is why AI can write poetry but sometimes fails at basic math (which requires exact, not probable, answers).
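The temperature mechanism can be sketched in a few lines: the word probabilities are re-shaped before sampling, so a low temperature almost always yields the top choice, while a high temperature gives the underdogs a real chance. The distribution below is the toy "sky" example; real models apply this to tens of thousands of candidate words.

```python
import math
import random

def sample_with_temperature(probs, temperature, rng):
    """Re-shape a probability distribution, then sample one word from it.
    Low temperature -> almost always the top choice; high -> more variety."""
    logits = [math.log(p) for p in probs.values()]
    scaled = [l / temperature for l in logits]
    exps = [math.exp(s) for s in scaled]
    total = sum(exps)
    weights = [e / total for e in exps]
    return rng.choices(list(probs), weights=weights, k=1)[0]

# The toy distribution from the example above.
next_word = {"Blue": 0.90, "Grey": 0.08, "Dark": 0.02}

rng = random.Random(0)  # fixed seed so the sketch is repeatable
cool = [sample_with_temperature(next_word, 0.2, rng) for _ in range(20)]
warm = [sample_with_temperature(next_word, 1.5, rng) for _ in range(20)]
# At temperature 0.2, nearly every sample is "Blue";
# at 1.5, "Grey" and "Dark" show up noticeably more often.
```

Run it twice at high temperature and you get different sequences, which is exactly why the same prompt can produce two different answers.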
Actionable Takeaway: How to Use This
Understanding the mechanics changes how you use the tool.
Don't Trust, Verify: Since the AI is predicting words, not retrieving facts, it can "hallucinate" (lie confidently). It generates what sounds plausible, not necessarily what is true. Always check medical or legal info.
Context is King: The model only knows what you tell it. If you want a better output, give it more "context" to predict from. Don't just say "Write an email." Say, "Write a strict but empathetic email to a junior doctor about attendance, citing hospital policy."
Use it for Drafting, Not Finalizing: Let the AI do the heavy lifting of structure and initial ideas (the remixing). You add the soul, the experience, and the final judgment.
The Bottom Line: AI isn't going to replace leaders. Leaders who understand how to command AI will replace those who don't. It’s a tool—like a stethoscope or a spreadsheet. Master it.
What’s your experience?
Have you noticed the difference in "personality" between Gemini and ChatGPT? Which one fits your workflow better?
Let me know in the comments below.
For more on productivity and leadership in the modern age, subscribe to the ThirdThinker newsletter.




