
I was invited to speak at the Universidad Internacional de La Rioja (UNIR) and Grupo PROEDUCA in my hometown of Madrid on Dec. 1. Below is a transcript of my lecture, which — full disclosure — was translated into English with the help of AI. (Above: Jorge Heili, executive director of UNIR, and Eva Asensio, dean of the faculty of business and economics at UNIR, were among the leaders who attended and helped oversee my talk at UNIR's Madrid office.)
Humans and Machines in the University: What and How to Teach When Google Knows Everything and ChatGPT Explains It All Very Well
Magnus Carlsen is one of the greatest chess players in history. This Norwegian from Tønsberg holds the record for the highest Elo rating ever achieved by a human: 2,882 points. Yet today, any beginner armed with nothing more than an iPhone or Android and a free, open-source engine like Stockfish 16 (rated over 3,700 Elo) would beat him virtually every time — a gap so large that human victory against the machine is statistically impossible.
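The arithmetic behind that claim is easy to check. Under the standard Elo model, the weaker player's expected score per game is 1 / (1 + 10^((Rb − Ra)/400)). Below is a minimal sketch of that calculation (my own illustration, plugging in the ratings quoted above; the formula is the standard logistic Elo model, nothing engine-specific):

```python
# Expected score under the standard logistic Elo model.
# The ratings are the figures quoted in the text above.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score (win = 1, draw = 0.5) of player A against player B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

carlsen = 2882  # highest human Elo rating ever recorded
engine = 3700   # rough rating attributed above to a modern engine

print(f"{expected_score(carlsen, engine):.4f}")
# -> 0.0089: less than 1% of a point per game, and most of that
#    sliver would come from draws rather than wins.
```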
Until 1997, no computer could defeat the world’s top grandmasters. That year, IBM’s Deep Blue, a project that began at Carnegie Mellon University, beat world champion Garry Kasparov in a six-game match (3.5–2.5). The event sent shockwaves not only through chess but through our entire understanding of what machines can and cannot do.
Kasparov then predicted that the combination of humans and machines would prove superior to either one alone. In his view, computers are unbeatable tactically — they can calculate and evaluate millions of positions in seconds — but humans remain better at long-term planning and strategy, and at catching the subtle errors that computers themselves miss.
For a while, Kasparov appeared to be right.
In 2005, an obscure amateur team won the PAL/CSS Freestyle Chess Tournament (where absolutely anything was allowed) by convincingly defeating powerhouse teams, including one led by 14-year-old Russian grandmaster Vladimir Dobrov, who played alongside a colleague rated over 2600 Elo, both of them also using computers. Score: 2.5–1.5 for the amateurs.
The winning team consisted of two average American club players from New Hampshire — Steven Cramton (Elo 1685) and Zackary Stephen (Elo 1398) — using three ordinary PCs of the era: one AMD 3200+ and two Pentiums (2.8 GHz and 1.6 GHz). With that modest setup, they also beat standalone supercomputers such as Hydra, which had been purpose-built for chess.
Steven and Zackary were not grandmasters, but they were exceptional computer pilots (or co-pilots, as Microsoft would say). They knew exactly when to trust the machine and when to override it — when it might be “hallucinating” or had failed to grasp their long-term strategy.
For the next four years, it really did seem that Kasparov was correct and that humans still had a future — at least as guides of supercomputers. It was the era of the “centaur”: half human, half machine — combining human creativity, intuition, and strategic vision with the raw calculating power, data access, and pattern recognition of the computer.
That illusion shattered in 2009 when a simple mobile phone (an HTC Touch HD) running Pocket Fritz 4, with no human assistance, won the Copa Mercosur tournament in Argentina against a field of grandmasters. Not only did the machine beat the centaur, but it didn’t even need a NASA supercomputer to do so — just a device that fits in your pocket.
Today, humans can neither beat the engine, nor meaningfully correct it, nor outperform it strategically. If a human tries to “improve” a move suggested by Stockfish 16, there's a 99.9% chance that the human is wrong and that the position will worsen.
So, if humans are no longer even useful as co-pilots in chess, can we at least take comfort in the fact that we are the ones who programmed the machines that beat us?
I’m afraid not.
Until 2017, the answer was yes: Computers programmed by humans won through brute force, evaluating millions of positions per second using algorithms written by humans. Then Google DeepMind unveiled AlphaZero, an AI running on Google’s Tensor Processing Units (TPUs). Without knowing anything about chess except the rules, AlphaZero trained itself for four hours and proceeded to crush the best traditional chess engine at the time, Stockfish 8, which was, of course, programmed by humans.
AlphaZero gave Stockfish a resounding thrashing: 28 wins, 0 losses, and 72 draws. And it didn’t win by calculating more positions. AlphaZero evaluated only 80,000 positions per second compared to Stockfish’s 70 million. Instead, it displayed something that looked like genuine intuition, using its neural network on the TPUs to “understand” which positions were promising without needing to calculate to the end. Sometimes, its style looked human (such as sacrificing pieces to achieve a long-term strategy); at other times, it looked completely alien — playing in ways that no human (or earlier computer) could understand.
The machine now beats humans by building machines that beat both humans and the previous generation of human-programmed machines.
If I were a betting man, I wouldn’t put my money on humans in any intellectual, cognitive, or even creative task. Claims that machines “can’t drive,” “can’t write,” “can’t compose music,” or “can’t create art” have all been proven false, one after another. Every time we insist on human superiority, a new machine appears to put us in our place.
At best, humans are left with moral judgment and emotional intelligence — deciding what matters; what winning and losing mean; what is good or bad, beautiful or ugly. We remain the social, political, moral, and emotional agents who define whether chess (or investment banking, journalism, or music) is important, how one wins, and what winning is worth.
And universities, of course, will be no exception.
In higher education, we have no choice but to accept that machines already are — or very soon will be — better than humans at virtually every intellectual and cognitive task. We can resist, we can throw tantrums, we can ban AI in classrooms. It is a futile battle — and, in fact, it’s the wrong battle.
It's true that, after the Industrial Revolution, a few artisanal shoemakers remained, and beautiful Steinway pianos (which take a year to build and cost $200,000) are still made by hand. But they are exceptions — luxury niche products for nostalgics and enthusiasts. Meanwhile, Pearl River in China produces 150,000 pianos per year (400 per day) that sound excellent and cost a fraction of the price.
Etsy, the platform for handmade goods, is valued at roughly $5 billion. Amazon, selling mass-produced goods at scale, is worth 500 times more and is the fifth-largest company on Earth (around $2.5 trillion). The only companies larger than Amazon are NVIDIA, Apple, Google, and Microsoft — all protagonists of the new cognitive revolution.
If resistance is pointless, what is the alternative so we do not become relics of the past?
- Teach AI.
- Teach with AI.
- Research AI.
- Help others benefit from AI.
Here are some examples drawn from what we are doing at Georgia Tech. I am not pretending we have solved everything — far from it. We still have many experiments to run, mistakes to make, dollars to spend, and minds to convince. But we are all-in, and perhaps our experiences can spark critical reflection at UNIR and elsewhere.
Teaching AI
At Georgia Tech we don’t care whether you study computer science, music, engineering, or economics. Our starting assumption is that every single graduate needs to be a sophisticated, advanced user of AI, no matter which career path they choose. If they don’t work with AI, AI will take their money. Many of our students will start companies or rise to senior executive or government positions; they simply will not succeed unless they can make AI work for them instead of for their competitors.
There are very few geniuses like Kasparov or Carlsen. But there can be many like Steven Cramton and Zackary Stephen — smart people who know how to get the most out of machines. And I’d say that describes all of our students at Georgia Tech.
The most direct way to teach AI is to let students play with it — not just chat with Gemini or ChatGPT, but train their own models so they can understand, intuitively and deeply, how AI works and what it can do.
One distinctive feature of our campus is our “makerspaces” — workshops filled with every imaginable tool (laser cutters, 3D printers for every kind of material, lathes, mills, welders, oscilloscopes, etc.). These are temples of the industrial and digital revolutions, inviting experimentation and hands-on learning. They teach mechanics, materials, and electronics better than any lecture.
We realized we were missing the makerspace of the cognitive revolution, so last year — with generous help from our friends at NVIDIA — we built one: 20 H100-HGX servers plus 18 H200s (each with 8 GPUs), giving students 304 cutting-edge GPUs. Seventy courses now use our AI Makerspace, along with thousands of independent student projects; it serves roughly 2,000 regular users, a number that grows every week.
What are students doing? Everything. For example, a bioengineering team created PatchPals: A nurse takes a photo of a chronic wound, and the tool instantly designs and cuts the exact dressing needed for negative-pressure wound therapy. If the dressing is not the right size and shape, it can damage surrounding tissue or cause infection. This solution dramatically improves the speed and outcomes of the treatment. Other projects optimize electric vehicle efficiency, smart grid power consumption, and dozens of other real-world problems.
U.S. universities love rankings. One we particularly liked came from JLL, which named Georgia Tech the No. 1 producer of AI talent in the country. With about 56,000 students (approximately 30,000 residential and 26,000 online), we estimate that 25,000 of them are currently taking AI either as a dedicated course or as a central component of another course.
McKinsey predicts that half of today’s work will be automated in the next two or three decades. Perhaps they are underestimating. McKinsey, Goldman Sachs, and others forecast hundreds of billions of dollars in annual productivity gains. Whether or not those estimates hold, companies have already begun investing more in AI than in new hires. Since 2022, entry-level hiring in AI-sensitive fields like software engineering has fallen 10–20%. Interestingly, experienced workers are keeping their jobs — presumably because the experienced human working with AI is still seen as the winning centaur combination.
We see the effects in admissions. Georgia Tech is one of the fastest-growing U.S. universities (No. 1 or No. 2 depending on the year); applications have nearly doubled in six years, reaching 67,000 this past admission cycle. Yet, applications to computer science dropped 25% this year, mirroring a national trend.
Our strategy is not to produce more computer scientists, but to produce professionals in every field who are fluent in AI. We call it Computing + X. Study biology + computing, history + computing, civil engineering + computing, medicine + computing, and so on.
Of everything universities must do, teaching AI — to every student, not just computer science majors — is priority number one.
Teaching With AI
Personalized education is no longer a dream; it is reality.
I studied telecommunications engineering at Universidad Politécnica de Madrid. One of the classic tortures to which telecom students are subjected is the Fourier transform. Nobody understands it, but you have to memorize it for the exam. Then, as if that weren’t enough, the torture continues with the discrete Fourier transform and even the fast Fourier transform. After racking my brain for a very long time, I finally managed to understand it. I then explained it to a classmate as I had understood it, and he still remembers my explanation today.
Yesterday, I asked ChatGPT to explain it to me intuitively and conceptually. Thirty-five years later — eureka! A perfect explanation, clearer than anything I ever heard in class.
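If you want to see that intuition in code rather than prose, here is a minimal sketch (my own illustration, assuming numpy is available; it is not the ChatGPT explanation itself). The idea: a signal that looks like a jumble in time is just a recipe of frequencies, and the discrete Fourier transform reads the recipe back out.

```python
import numpy as np

fs = 1000                            # sampling rate, in Hz
t = np.arange(0, 1, 1 / fs)          # one second of samples

# Mix two "ingredient" sine waves: 50 Hz and 120 Hz.
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# The discrete Fourier transform of the (real-valued) signal.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

# The two dominant peaks land exactly on the ingredient frequencies.
peaks = freqs[np.argsort(np.abs(spectrum))[-2:]]
print(np.sort(peaks))                # -> [ 50. 120.]
```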
Every discipline has its own torture: Black-Scholes in finance, the Schrödinger equation in quantum physics, reaction mechanisms in organic chemistry… Whatever yours was, go ask Gemini or your favorite model about it. And if you are a professor who hasn’t done this yet, I’m sorry to tell you your students already have. In fact, your students also upload lecture slides, papers, and readings to their favorite AI and have fluent conversations about the hardest concepts.
While many faculty remain fixated on the idea that students use AI to cheat (which also happens), most students are using it to learn better and faster. My recommendation: Every professor should redesign their course from scratch, assuming (even requiring) AI use, and ask what the most valuable use of class time is. It’s probably not lecturing or explaining, but rather motivating, discussing, and challenging.
Most faculty are world-class in their domain but are not yet fluent in the latest AI tools the way their students are. Closing that gap will require institutional faculty development programs and genuine collaboration with students, who often understand both the power and the pitfalls better than we do. We must move from seeing AI as a “crutch” to seeing it as an amplifier of human potential, decision-making, and creativity. The goal is to produce professionals who can steer the development and use of these tools while keeping human fundamentals — safety, ethics, integrity — at the center.
Personalization is not limited to courses. More than a decade ago, my colleague Ashok Goel created Jill Watson, an AI teaching assistant for our Online Master of Science in Computer Science. This degree program, which began as an experiment, soon became the largest program we’d ever offered — and, with about 18,000 students, is also the largest program of its kind in the world. As you can imagine, we were overwhelmed with work and, in particular, were drowning in questions from our class discussion forums. So, Jill was created to solve this problem with AI.
Today’s version of Jill Watson, built on ChatGPT and other AI technologies developed at Georgia Tech, outperforms both human teaching assistants and other AI platforms in terms of accuracy and student satisfaction. Students often cannot tell whether they are talking to a human or a machine — and when they know it’s a machine, they actually ask more daring and inquisitive questions because they fear no judgment.
Jill cites her sources so students can check the accuracy of answers, and she is infinitely scalable. She also monitors forums and answers student questions, freeing our faculty for more complex tasks and demonstrating how humans and machines can collaborate in educational environments.
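For readers curious about the plumbing, here is a toy sketch of the general retrieve-then-cite pattern such an assistant can follow. To be clear: this is not Jill Watson's actual implementation; the corpus, function names, and answer step are hypothetical stand-ins, and a real system would compose the reply with an LLM rather than quoting the passage verbatim.

```python
# Toy sketch: answer a student question from course documents and
# cite the source so the answer can be verified. Hypothetical data;
# NOT the actual Jill Watson implementation.

COURSE_DOCS = {
    "syllabus.pdf": "Homework 3 is due on March 14 at 11:59 pm.",
    "lecture05.pdf": "The discrete Fourier transform maps samples to frequencies.",
}

def retrieve(question: str) -> tuple[str, str]:
    """Pick the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(COURSE_DOCS.items(),
               key=lambda kv: len(q_words & set(kv[1].lower().split())))

def answer(question: str) -> str:
    source, passage = retrieve(question)
    # A real system would have an LLM compose a reply grounded in the
    # retrieved passage; here we simply return it, citation attached.
    return f"{passage} [source: {source}]"

print(answer("When is homework 3 due"))
# -> Homework 3 is due on March 14 at 11:59 pm. [source: syllabus.pdf]
```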
The same personalization can (and will) extend to academic advising, curriculum planning, extracurricular recommendations, and career guidance.
There may be specific phases of learning — such as introductory programming, algebra, and creative writing — where we should deliberately set AI aside to train raw human skill. But that should be a deliberate pedagogical decision, not a dogmatic one. If we exclude AI from learning, students will ultimately pay the price.
Researching AI
All of these solutions proceed from research and innovation. Jill Watson grew out of Ashok Goel’s lab, now the National Science Foundation (NSF) AI Institute for Adult Learning and Online Education (AI-ALOE), one of three national AI institutes we lead. Our other NSF-funded AI research institutes focus on optimization (AI4Opt) and aging populations (AI-CARING).
Georgia Tech is one of the most research-intensive universities in the U.S. — No. 3 in federal research contracts, to be exact. We estimate that more than 1,000 faculty members, Ph.D. students, and researchers are working on or with AI.
AI is not new here. In fact, I came to Georgia Tech in the 1990s precisely because of its interdisciplinary cognitive science program. That culture of cross-disciplinary collaboration is essential for AI progress, which needs computing power, sophisticated algorithms, and data. And data comes from every discipline: physics, biomedical engineering, marketing, public policy, and more.
To accelerate collaboration, a couple of years ago we created AI@GT, an internal network that also builds ties with industry. This year, we were awarded a $20 million NSF grant to build a national supercomputer called Nexus, which will help researchers all over the U.S. run their own models.
Helping Others Benefit From AI
A few years ago, we realized that small and medium-sized manufacturing companies in our state were not prepared to incorporate AI into their processes because they lacked resources to innovate and develop their own solutions. But without AI, they will struggle to compete with large, global corporations — and might not even survive.
So, with a $65 million investment from the U.S. Department of Commerce, we created GA-AIM (Georgia Artificial Intelligence in Manufacturing), a statewide consortium.
And it’s not just companies that need help. Their employees — in particular, specialists and mid-level managers — also need to learn to use these tools at work.
Last year, we created our seventh college, dedicated entirely to continuing education. Its primary objective now is helping working professionals learn AI through short courses, certificates, and targeted micro-credentials.
Our new model, GT Infinity, is a subscription service — similar to Spotify or a gym membership. It proceeds from an understanding that, in the era of AI, knowledge has a short shelf life. In the past, we’d study for four years and then live off that knowledge for the next forty; that no longer works. With GT Infinity, our alumni and professionals do not “graduate” and disappear. They stay connected — taking learning modules and continuously retraining and updating their skills to remain current throughout their careers. The university thus becomes a lifelong service, not a one-time product.
Conclusion
Let me return to that 64-square board — to Magnus Carlsen and to those two amateurs from New Hampshire, Steven and Zackary, who, with cheap PCs, defeated grandmasters.
I told you Kasparov’s centaur (human + machine) had failed — that, by 2009, even pocket phones no longer needed humans to win at chess. And that is true. In chess, the centaur is dead.
But here is the crucial point — and the reason we are all gathered here today: Life is not a chessboard.
Chess is what mathematicians call a closed system — fixed rules, perfect information, rigid boundaries, and one clear objective (checkmate). In that closed world, AlphaZero’s brute force and silicon intuition will always win.
But the world we are preparing our students for — medicine, politics, business, art, climate change — is an open, chaotic system. Rules change, information is always imperfect and incomplete, and “winning” often does not involve defeating an opponent. Sometimes, it means collaborating, empathizing, or simply surviving.
In that open world, the machine by itself is a tremendously powerful engine — but without a steering wheel. AlphaZero can calculate the optimal path, but it cannot decide where we want to go. Stockfish can sacrifice a queen to win a game, but it cannot decide whether it’s worth sacrificing economic profits to save an ecosystem.
That is why, although the centaur lost at chess, the centaur is the only viable future for the university.
Our students are tomorrow’s Steven and Zackary. They may never be “grandmasters” who memorize entire encyclopedias like we once did — Google already knows everything and ChatGPT explains it all very well. But if we teach them right, they will be the greatest “pilots” in history — centaurs capable of harnessing AI’s raw power to cure diseases that seem incurable today, design energy systems that seem impossible today, and manage complexities that overwhelm us today.
We are not here to compete against the machine or to surrender to it. We are here to climb onto its shoulders.
Chess is lost. Let the machines have it. What remains for us is infinitely more important: the rest of the world. And that world, armed with these new tools, is waiting to be reinvented.
Thank you very much.