From Turing to ChatGPT: A Beginner's Journey Through AI
What AI really is, where it came from, and how it works — a friendly overview based on Introduction to Artificial Intelligence at UFPR.
Part 1 — What Even Is Artificial Intelligence?
Pop culture loves to make AI dramatic. Skynet wants to destroy humanity. HAL 9000 locks you out of the spaceship. Jarvis is basically a genius butler.
Real AI? A bit more nuanced — and honestly, more interesting.
Researchers define AI along two axes:
- Human vs. Rational — should machines think like us, or think optimally?
- Thinking vs. Acting — should they reason internally, or just produce the right output?
That gives us four approaches. The most famous: the Turing Test (1950). If a human can't tell whether they're chatting with a machine or a person, the machine passes. Simple premise. Profound implications.
The most practical approach today? Rational agents — systems that perceive their environment and take the action most likely to achieve a goal. No drama. Just results.
Key takeaway: AI isn't about making machines "conscious." It's about making them useful.
Part 2 — A History of Hype, Crashes, and Comebacks
AI has a messy history. And that's what makes it fascinating.
1956 – The Birth: Ten researchers spent a summer at Dartmouth and coined the term "Artificial Intelligence." McCarthy, Minsky, Shannon. The dream was born.
1969–1980 – First Winter: Funding dried up. Promises weren't kept. Turns out, making machines smart is really hard.
1987–1993 – Second Winter: Same story, second verse. Overhyped expert systems collapsed under their own complexity.
1990s–2000s – Quiet Renaissance: The internet arrived. Data exploded. Researchers quietly shifted from symbolic reasoning to statistics and machine learning.
2012–now – Deep Learning Era: Neural networks got big. Really big. GPT-2 had 1.5B parameters. GPT-3 had 175B. ChatGPT launched in 2022 and the world changed overnight.
Key takeaway: AI progress isn't linear — it's cyclical. Every "winter" cleared the hype and forced real breakthroughs.
Part 3 — How AI Solves Problems
Imagine you're in a maze. How do you find the exit?
You could try every path blindly (blind search). Or you could use intuition — "the exit is probably over there" — and head in that direction first (heuristic search). Or you could just look for the locally best step at each moment (local search).
These aren't just maze strategies. They're the backbone of how AI agents navigate problems:
- BFS / DFS — systematic, exhaustive, but potentially slow
- A* — the gold standard: combines the path cost so far with a heuristic estimate of the cost remaining; with an admissible heuristic (one that never overestimates), it finds the optimal path efficiently
- Hill Climbing / Simulated Annealing — great for optimization, accepts "good enough" over "perfect"
- Minimax — the chess player's algorithm: assumes your opponent plays perfectly and plans accordingly
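To make the maze metaphor concrete, here is a minimal blind (breadth-first) search over a tiny grid maze. The maze layout is invented for illustration; BFS explores level by level, so the first path it finds to the goal is guaranteed to be the shortest.

```python
from collections import deque

def bfs_shortest_path(maze, start, goal):
    """Breadth-first search: explore the maze one 'ring' at a time,
    so the first path that reaches the goal is the shortest one."""
    rows, cols = len(maze), len(maze[0])
    queue = deque([(start, [start])])
    visited = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] == 0 and (nr, nc) not in visited):
                visited.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # no path exists

# 0 = open cell, 1 = wall
maze = [
    [0, 1, 0],
    [0, 1, 0],
    [0, 0, 0],
]
print(bfs_shortest_path(maze, (0, 0), (0, 2)))
```

A* follows the same skeleton, but replaces the plain queue with a priority queue ordered by "cost so far + heuristic estimate," which is what lets it skip unpromising corridors.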
Key takeaway: Solving problems is about picking the right search strategy. Not all mazes need the same map.
Part 4 — Teaching Machines to Learn
Classical AI was hand-crafted: programmers wrote rules, machines followed them. Machine Learning flips that.
You show the machine examples. It finds the rules itself.
Three main flavors:
Supervised Learning — you label the data, the model learns to predict.
- Regression: predict a number (e.g., house price)
- Classification: predict a category (e.g., spam or not spam)
- Tools: neural networks, decision trees
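Supervised learning in miniature: a toy nearest-neighbor classifier, one of the simplest "learn from labeled examples" methods. The data points and the spam framing are invented for illustration; a real project would reach for a library such as scikit-learn.

```python
import math

def nearest_neighbor_classify(train, point):
    """Predict the label of the training example closest to `point`."""
    closest = min(train, key=lambda ex: math.dist(ex[0], point))
    return closest[1]

# Labeled examples: (features, label) -- say, (exclamation marks, links) per email
train = [
    ((1.0, 0.0), "ham"),
    ((1.5, 0.5), "ham"),
    ((8.0, 9.0), "spam"),
    ((9.0, 8.5), "spam"),
]

print(nearest_neighbor_classify(train, (8.5, 9.2)))  # lands near the spam cluster
print(nearest_neighbor_classify(train, (1.2, 0.3)))  # lands near the ham cluster
```

No rules about spam were ever written down; the "rule" is implicit in the labeled examples, which is the whole point of supervised learning.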
Unsupervised Learning — no labels. The model finds hidden structure on its own.
- Clustering: group similar things together
- Association: find patterns (e.g., "people who buy X also buy Y")
Reinforcement Learning — no examples at all. An agent learns by trial and error, collecting rewards for good actions and penalties for bad ones (e.g., an agent teaching itself to play a game).
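Clustering can be sketched with a minimal k-means loop: assign each point to its nearest centroid, move each centroid to the mean of its points, repeat. The data, k, and iteration count here are arbitrary choices for illustration.

```python
import random

def k_means(points, k, iterations=10, seed=0):
    """Alternate two steps: assign points to their nearest centroid,
    then move each centroid to the mean of its assigned points."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k),
                      key=lambda i: (p[0] - centroids[i][0]) ** 2
                                  + (p[1] - centroids[i][1]) ** 2)
            clusters[idx].append(p)
        for i, cluster in enumerate(clusters):
            if cluster:  # keep the old centroid if a cluster went empty
                centroids[i] = (sum(p[0] for p in cluster) / len(cluster),
                                sum(p[1] for p in cluster) / len(cluster))
    return centroids, clusters

# Two obvious groups, near the origin and near (9, 9) -- no labels given
points = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
centroids, clusters = k_means(points, k=2)
print(sorted(round(c[0]) for c in centroids))
```

Nobody told the algorithm there were two groups near (0, 0) and (9, 9); it discovered that structure from the coordinates alone.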
The magic of neural networks: layers of interconnected nodes that learn by adjusting weights — inspired by how our own brains wire themselves through experience.
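"Learning by adjusting weights" can be shown with a single artificial neuron, the building block of those layers. This is a one-neuron sketch trained by gradient descent on a toy task (logical AND), not a full network; the learning rate and epoch count are arbitrary.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy dataset: logical AND -- output 1 only when both inputs are 1
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, b = 0.0, 0.0, 0.0  # weights start "blank"
lr = 1.0                   # learning rate: how big each adjustment is

for _ in range(10000):
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        # Gradient of the squared error for a sigmoid unit
        grad = (out - target) * out * (1 - out)
        # Nudge each weight to reduce the error -- this IS the learning
        w1 -= lr * grad * x1
        w2 -= lr * grad * x2
        b  -= lr * grad

for (x1, x2), target in data:
    print((x1, x2), round(sigmoid(w1 * x1 + w2 * x2 + b)))
```

A real network stacks many such units in layers and propagates these nudges backward through all of them (backpropagation), but the core idea is exactly this loop.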
Key takeaway: ML doesn't replace human intelligence. It amplifies our ability to find patterns in data we'd never process manually.
Part 5 — Evolution as an Algorithm
What if instead of programming a solution, you evolved one?
That's the idea behind Genetic Algorithms — optimization methods inspired by natural selection.
Here's how it works:
- Start with a random population of candidate solutions
- Evaluate each one (fitness function)
- Select the best performers to "reproduce"
- Crossover: combine two parents to create offspring
- Mutate: randomly tweak a solution
- Repeat until you converge on something great
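The steps above can be sketched on the classic "OneMax" toy problem: evolve a bit string with as many 1s as possible. Every parameter here (population size, mutation rate, tournament size) is an arbitrary choice for illustration.

```python
import random

rng = random.Random(42)
LENGTH, POP, GENERATIONS, MUTATION = 20, 30, 60, 0.05

def fitness(bits):
    return sum(bits)  # OneMax: count the 1s

def select(pop):
    # Tournament selection: the best of 3 random candidates reproduces
    return max(rng.sample(pop, 3), key=fitness)

def crossover(a, b):
    # One-point crossover: splice two parents at a random cut
    cut = rng.randrange(1, LENGTH)
    return a[:cut] + b[cut:]

def mutate(bits):
    # Flip each bit with small probability
    return [1 - g if rng.random() < MUTATION else g for g in bits]

# Start with a random population of candidate solutions
pop = [[rng.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENERATIONS):
    pop = [mutate(crossover(select(pop), select(pop))) for _ in range(POP)]

best = max(pop, key=fitness)
print(fitness(best), "of", LENGTH)
```

After a few dozen generations the population converges on strings that are nearly all 1s, despite no step ever "knowing" what the answer looks like; selection pressure plus variation does the work.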
It's Darwin's survival of the fittest — but for software. And it works surprisingly well for problems where the solution space is too large for brute force.
Key takeaway: Sometimes the best way to find an answer is to let solutions compete, reproduce, and evolve.
This series covers the core curriculum of IAA001 – Introduction to Artificial Intelligence at UFPR. Each part is meant to be a friendly starting point, not a textbook.