Feite Kraay, Author

My younger brother is a professor of philosophy at one of Toronto’s three major universities. He has taught there continuously since earning his PhD more than 20 years ago. In that time, he’s seen a lot of changes in how students and professors interact, as well as in the ways technology is used. One of the first issues he had to grapple with was undergraduate students’ increasingly sophisticated ability to plagiarize their essays, facilitated by easy access to reams of information on the internet. Why get caught copying your classmate’s paper when you could download a perfectly good one from a complete stranger halfway around the world? This turned out to be easy to counter: my brother became an early adopter of online tools that allowed the professor or teaching assistant to submit a suspicious paper and have it automatically checked against a large internet database of similar pre-existing work. This was a bit controversial at first, and naturally the student unions protested for a while, but it quickly became standard practice across virtually all of academia.

In my previous post, I discussed the evolution of artificial intelligence and its latest manifestation, the AI-driven “chatbot,” also known as generative AI. One of generative AI’s unintended consequences causing some consternation in academic circles is its ability to generate content that can circumvent plagiarism-detection tools. The reason has to do with the way generative AI works, which is unlike typical search engines. Instead of just searching and returning a list of pointers to existing material, it goes through its database, finds relevant content and then recombines that content in a new way using its underlying large language model. The result is an apparently original essay or, in some cases, even the ability to pass an exam.

Some instructors have sniffed dismissively that generative AI’s output usually merits no better than a ‘D’ grade, but remember: these are still early days in the development of the technology. Universities have already started to fight back. My son, an undergraduate student in Vancouver, reports that all students at his school will be required to write a number of supervised short essay answers to simple questions. The Dean’s office will save these essays, which will be used as a baseline to identify each student’s authentic writing voice in order, it is hoped, to distinguish it from material produced by generative AI. Call it, if you will, one more skirmish in the ongoing arms race between human and artificial intelligence.

Dawn of mind
Interestingly, while generative AI’s work might (barely) pass an undergraduate seminar, it would fail in graduate school. The difference is that undergraduates are graded on their mastery of an existing body of knowledge while graduate students are expected to contribute something new and original to their field. True originality is still, and will continue to be, beyond the capability of even the best generative AI engine. Whatever you call it—intuition, creativity, the “aha” spark of a truly new idea—this quality originates somewhere deep inside the human brain, where tens of billions of neurons and synapses connect in unpredictable ways to generate unpredictable results. AI technology—including generative—is nowhere near the scale or complexity at which the brain operates.

So, let’s think for a minute about scale and complexity as they relate to AI.

Today’s AI systems run on massive supercomputers that store enormous amounts of data and use ultra-fast processors to execute the rules engine and generate the inferences we see when we ask the system a question. One measurement I’ve seen from a leading economic journal suggests that the computational power applied to AI systems has grown by a factor of more than 10²¹ since the 1950s. (That’s a 1 followed by 21 zeros, or a thousand billion billion.) For all that, we have a chatbot that’s barely in its adolescence: clumsy and occasionally useful. We’re not much closer to passing the Turing Test than we ever were. The environmental impact and power consumption of all this computation is a definite concern that I should probably discuss in a separate post. But that, combined with the constraints on Moore’s Law from the physical limits of semiconductor manufacturing, leads me to believe that we will not achieve the additional billions-of-billions-fold increase in computing power that may be needed for the next leaps in AI capability.

Except, of course, for the promise of quantum computing: qubits, after all, radically change our approach to data storage and computation. Quantum computers will eventually break through today’s limits and deliver computational power on a scale at which AI systems may possibly accomplish something worthy of the term “revolutionary.” And I’m not talking just about harnessing exponentially faster processors and huge amounts of data storage, although that is part of it. I mean taking quantum principles like superpositioning and entanglement and using them to consider a whole new approach to human consciousness—and to building AI algorithms.

I’ve written before about superpositioning, the ability of a quantum particle (an electron or photon, for example) to exist in multiple states at once. This is fundamental to the definition of a qubit in quantum computing. Entanglement means that quantum particles can sometimes be connected in such a way that even if they are far apart (even light-years apart) they will always behave identically—what happens with one particle instantaneously happens with the other despite there not being any communication between the two. Physicists don’t fully understand how it works—Einstein called it “spooky action at a distance”—and yet it works. Engineers are already entangling qubits to build quantum networks that may one day revolutionize the speed and security of internet traffic. That will be helpful in our transition to post-quantum cryptography.
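The two ideas above can be made concrete with a toy simulation. The sketch below tracks the amplitudes of a two-qubit Bell pair classically in plain Python (it is an illustration of the mathematics, not a real quantum computation; all names and the sampling helper are my own). Each qubit on its own measures as a roughly 50/50 coin flip, yet the two outcomes always agree — the signature of entanglement.

```python
import math
import random

# Bell state (|00> + |11>) / sqrt(2): each qubit alone is in a
# superposition of 0 and 1, but the pair is entangled.
amplitudes = {"00": 1 / math.sqrt(2), "11": 1 / math.sqrt(2)}

def measure(state, rng):
    """Sample one joint measurement outcome; probability = |amplitude|^2."""
    outcomes = list(state)
    weights = [abs(a) ** 2 for a in state.values()]
    return rng.choices(outcomes, weights=weights, k=1)[0]

rng = random.Random(42)  # fixed seed so the run is repeatable
samples = [measure(amplitudes, rng) for _ in range(1000)]

# Each qubit individually looks random (roughly 50/50)...
zeros = sum(s[0] == "0" for s in samples)
print(f"qubit A measured 0 in {zeros}/1000 shots")

# ...yet the two qubits always agree: only "00" and "11" ever occur.
print("joint outcomes observed:", sorted(set(samples)))
```

Of course, a classical program can only mimic the statistics after the fact; the point of the sketch is simply to show why "random individually, perfectly correlated jointly" is such a strange combination.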

Age of dreams
OK, but what, you might ask, do superpositioning and entanglement have to do with AI? There’s a growing body of thought in neuroscience that quantum processes operating in the brain might be key to the foundation of consciousness. The idea is that our brain cells—neurons and synapses—may actually function on some level as quantum particles. Some research already suggests that entanglement is the best explanation for certain connections between the brain and other organs like the heart. Now, this is by no means settled science—there’s a strong counter-argument that quantum behaviours can only manifest themselves at the subatomic level, not at the cellular level—but quantum biology is a developing field of study with some evidence suggesting that quantum effects are at least possible at a higher level than initially thought.

A quantum explanation of the brain could have implications for our understanding and possible treatment of neurological disorders. I think it could also help explain human consciousness and creativity. If there’s superpositioning and entanglement at play among our neurons and synapses, this might shed light on the brain’s phenomenal ability to store and recall information, as well as its unique ability to generate original thought.

It’s mostly speculative for now, but consider this: today’s AI algorithms may well be inherently limited by the fact that they’re coded for classical computing. And while they do a great job solving real business problems in specific domains, they have not come close to passing the Turing Test and achieving what’s been called Artificial General Intelligence. But what if we re-architect AI algorithms for quantum? An AI system based on superpositioned and entangled qubits just might be powerful enough to replicate human memory and human creativity. What happens then? Well, we have at least five to ten years to worry about that, and probably quite a bit longer.

In the meantime, current AI technologies continue to be useful for solving a wide range of business problems. They will continue to evolve in the breadth and depth of the solutions they offer. Generative AI will take its place in this spectrum of solutions and deliver its own unique capabilities. But let’s be careful to put aside the hype and remain mindful of both its strengths and its limitations. AI, including generative, is only as good as the data we feed it and the rules we code around that data. Therefore, what we get from AI must always be tempered by human interpretation, judgement and common sense.

Come to think of it, this advice is as true of the relationships we build with each other as it is of the ones we build with technology. Technology, after all, is essentially an extension of ourselves. I wouldn’t expect any of this to change no matter where AI goes in the next decade and beyond.
