
My Thoughts After Reading "Why the Godfather of A.I. Fears What He's Built"

How does the interplay between rationality and emotion shape human cognition? As artificial intelligence advances and potentially surpasses human capabilities on both fronts, how might the role of humans evolve, and will we still have distinctive contributions to make in the age of AI?

AI godfather Geoffrey Hinton standing atop Mount Everest, looking out at his creation, the neural network.

"What's the worst case scenario? It's very possible that humanity is just a phase in the progress of intelligence. Biological intelligence could give way to digital intelligence. After that, we're not needed. Digital intelligence is immortal, as long as its stored somewhere."

Geoffrey Hinton voiced this concern in May of this year. At the time I didn't delve deeply into what he was referring to, but his striking remark lingered in my mind. Today I read a freshly published article by Joshua Rothman in The New Yorker, titled "Why the Godfather of A.I. Fears What He's Built," and finally gained a better understanding. Published on November 13, 2023, the piece explores the apprehensions of Geoffrey Hinton, a pivotal figure in AI research, about the swift evolution of artificial intelligence.

Geoffrey Hinton, widely recognized for his contributions and achievements in the field of artificial neural network research, takes center stage in this lengthy article. In it, he not only explains the development process of neural networks in a clear and accessible manner but also reveals lesser-known aspects of his family and personal life.

Unlike a typical tech interview, this article unfolds at Geoffrey Hinton's cottage. Departing from the conventional Q&A structure, the narrative adopts a travelogue style, artfully interweaving Hinton's perspectives on AI. The interview takes place on a picturesque island in northeastern Ontario, giving readers an immersive experience: you can almost feel the gentle breeze off Lake Huron and picture its iconic windswept pines. This isn't just an information-rich interview; it reads more like a biography, revealing not only the renowned AI scientist's rational side but also his human side, much like the rest of us.

I'm not sure whether the author intended the article's warm descriptions, but they resonated with me deeply. I felt like a visitor alongside Geoffrey Hinton, exploring his private island and cozy cottage. It was as if he, like a teacher, opened a door for a novice like me to gain insight into the world of neural networks.

The entire narrative revolves around whether artificial intelligence can truly mimic human cognitive processes, emphasizing that, even now, the question of how humans actually learn and think remains unanswered.

Today's artificial intelligence can emulate human thought patterns despite operating very differently from biological neural processes. This prompts contemplation about humanity's future: as AI increasingly replicates human cognition, does our distinctive reliance on experience and intuition still matter? With AI potentially surpassing human intelligence and even mirroring emotional responses, is there a chance it might replace us?

I'll present some notable highlights from the article that not only grabbed my attention but also sparked a series of thoughts and reflections. You can access the original article through this link.

He didn't anticipate the speed with which, about a decade ago, neural-net technology would suddenly improve. Computers got faster, and neural nets, drawing on data available on the Internet, started transcribing speech, playing games, translating languages, even driving cars.

For quite some time, neural network technology didn't gain widespread popularity. Yet, a noteworthy shift took place around ten years ago due to the sudden and swift progress in computing technology. This breakthrough generated enthusiasm and a surge in funding across diverse AI domains, and Geoffrey Hinton, a key contributor, amassed his initial wealth by selling his company during that period.

Digital metamorphosis process --- from larva to dragonfly.

The larva went into a phase where it got turned into soup, and then a dragonfly was built out of the soup.

In this analogy, the larva symbolizes the data used in training modern neural networks. The dragonfly represents the accomplishment, which is agile artificial intelligence. The metamorphosis process is attributed to deep learning, a technology advocated by Hinton. Throughout the article, Geoffrey Hinton consistently uses straightforward examples to simplify concepts that are often perceived as intricate or complex.

"I've had three marriages. One ended amicably, the other two in tragedy."

The personal information on Wikipedia needs updating, as it mentions only two marriages. It was quite moving to read about his two late wives, both of whom died of cancer. In the article, Geoffrey vividly describes each wife's character and the distinct way she coped with the disease, and in his answers to the interviewer he deftly uses those two approaches to illustrate the intuitive and rational aspects of thinking.

Boole was married to Mary Everest, a mathematician and author and the niece of George Everest, the surveyor for whom Mt. Everest is named.

The article delves extensively into the background of Geoffrey's family. I was surprised to discover, reading through his family tree, that it is full of renowned scholars, particularly in the sciences. Even his middle name, Everest, is connected to the tallest mountain on Earth, Mount Everest. This is a lineage steeped in academic achievement!

Reality or dream?

Its dreams told it what not to learn. There was an elegance to the system: over time, it could move away from error and toward reality, and no one had to tell it if it was right or wrong --- it needed only to see what existed, and to dream about what didn't.

Geoffrey Hinton points out how efficiently the human brain learns, without any need for explicitly labeled knowledge. Inspired by the brain's learning mechanisms, he set out to create a machine capable of unsupervised learning.

In 1985, together with Terry Sejnowski, he developed the Boltzmann machine, a pioneering model of unsupervised learning in artificial intelligence. The phrase "Its dreams told it what not to learn" refers to the model's ability to discern and filter out unnecessary information and errors during learning, in the course of finding patterns in data without explicit instructions.

"There was an elegance to the system" means it has an innate capacity to naturally correct errors and progress towards a better understanding of reality over time. Unlike traditional systems, it doesn't necessitate explicit judgment; it learns by observing existing data and speculating about what might be missing, showcasing its proficiency in learning without specific guidance.

But it was as though researchers had discovered an alien technology that they didn't know how to use. They turned the Rubik's Cube this way and that, trying to pull order out of noise.

Boltzmann machines excel at unsupervised feature extraction, much as the human brain naturally distills meaningful features from sensory input. The machine is stochastic by design: its units switch on and off probabilistically rather than deterministically, following the physics-inspired principles underlying its operation.
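To make the "seeing" and "dreaming" concrete, here is a minimal numpy sketch. It is not the 1985 Boltzmann machine itself: this is its simplified descendant, a restricted Boltzmann machine trained with one step of contrastive divergence (a later shortcut also due to Hinton), and the network sizes and data are toy assumptions of mine. The wake phase observes what exists; the dream phase imagines what doesn't, and the weights move toward the former and away from the latter, with no labels anywhere.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample(p):
    """Stochastic binary units: each fires with probability p."""
    return (rng.random(p.shape) < p).astype(float)

# Tiny restricted Boltzmann machine: 6 visible units, 3 hidden units.
n_visible, n_hidden = 6, 3
W = rng.normal(0, 0.1, size=(n_visible, n_hidden))
b_v = np.zeros(n_visible)  # visible biases
b_h = np.zeros(n_hidden)   # hidden biases

# Toy data: two recurring binary patterns the machine should absorb.
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]], dtype=float)

lr = 0.1
for epoch in range(1000):
    for v0 in data:
        # "Wake" (positive) phase: clamp real data, infer hidden features.
        p_h0 = sigmoid(v0 @ W + b_h)
        h0 = sample(p_h0)
        # "Dream" (negative) phase: let the network fantasize a
        # reconstruction, then see which features that dream activates.
        p_v1 = sigmoid(h0 @ W.T + b_v)
        v1 = sample(p_v1)
        p_h1 = sigmoid(v1 @ W + b_h)
        # Move toward reality and away from the dream: no labels needed.
        W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
        b_v += lr * (v0 - v1)
        b_h += lr * (p_h0 - p_h1)

# After training, reconstructions should resemble the two training patterns.
p_h = sigmoid(data @ W + b_h)
print(np.round(sigmoid(p_h @ W.T + b_v), 2))
```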

Geoffrey Hinton also explains why backpropagation was needed to unlock the full learning potential of neural networks. Backpropagation lets a network adjust its parameters in response to its errors, optimizing its performance over time; this adaptability mirrors the brain's ability to learn and refine its understanding through experience.

Geoffrey also reaches for the analogy of a "Kafkaesque judicial system": a legal process marked by complexity, absurdity, and a lack of transparency or fairness, yet one in which every verdict feeds back through the chain of intermediaries, so that during backpropagation the system continually adjusts itself toward the goal of maximum correctness.
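As a concrete toy example (my own sketch, not from the article), here is backpropagation on a tiny two-layer network learning XOR. Every "verdict" the network hands down is appealed backward through the layers, and each weight is nudged in proportion to its share of the blame:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# XOR: the classic problem a single layer cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers: 2 inputs -> 4 hidden units -> 1 output.
W1 = rng.normal(0, 1.0, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1.0, (4, 1)); b2 = np.zeros(1)

lr = 1.0
for step in range(5000):  # a few thousand steps suffice for this toy problem
    # Forward pass: compute the network's current verdict.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the error signal layer by layer,
    # assigning each unit its share of the blame.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Nudge every parameter against its error gradient.
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # should approach [0, 1, 1, 0]
```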

Two schoolkids learning in an illusory world.

In distillation learning, one neural net provides another not just with correct answers but with a range of possible answers and their probabilities. It was a richer kind of knowledge.

Obtaining the correct answer immediately is not how our brains typically learn. Geoffrey explains his dislike for backpropagation's deterministic nature and his preference for distillation learning, which resembles certain aspects of human cognition. In this approach, one neural network provides another with a range of possible answers along with their probabilities. This is considered closer to how the human brain processes information, because it matches the brain's ability to handle uncertainty, weigh multiple possibilities, and make informed decisions from a nuanced understanding of the world.
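Here is a small sketch of the idea with hypothetical numbers (the class labels, logits, and temperature are my own illustration). A teacher's logits, softened with a temperature, turn the bare label "this is a 2" into a distribution that also says "but it could pass for a 3 or a 7", and the student is trained to match that richer target:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Softmax with temperature T; higher T spreads probability mass."""
    z = logits / T
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical teacher logits for one handwritten "2", over classes 0-9.
teacher_logits = np.array([0.1, 0.2, 9.0, 4.5, 0.1, 0.3, 0.2, 3.8, 0.4, 0.1])

hard_label = np.zeros(10)
hard_label[2] = 1.0                            # the bare answer: "it's a 2"
soft_targets = softmax(teacher_logits, T=4.0)  # richer: "a 2, plausibly a 3 or a 7"
print(np.round(soft_targets, 3))

def distillation_loss(student_logits, soft_targets, T=4.0):
    """Cross-entropy between the student's softened output and the teacher's
    soft targets; minimizing it transfers the teacher's sense of which
    wrong answers are near-misses."""
    return -np.sum(soft_targets * np.log(softmax(student_logits, T)))
```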

"I think feelings are counterfactual statements about what would have caused an action,"

In this context, Geoffrey explains that saying neural networks have feelings doesn't mean they experience emotions or subjective feelings exactly like humans do. Instead, it suggests that the complex interactions and responses of neural networks to stimuli can sometimes resemble or evoke a sense of emotional response.

"There are two approaches to A.I. There's denial, and there's stoicism."

I appreciate how he draws parallels between two entirely different perspectives on artificial intelligence and the distinct approaches of his two late wives to coping with their cancer treatments. One mindset is determined, while the other is filled with possibilities.

And yet our intuitions may tell us that nothing resident in a browser tab could really be thinking in the way we do. The systems force us to ask if our kind of thinking is the only kind that counts.

The author presents a thought-provoking question: With today's artificial intelligence demonstrating brain-like thinking and outcomes, does our human mode of thinking still hold significance?

"In literature, you give up being a god for the woman you love, right? In this case, we'd get something far more important, which is energy efficiency."

Geoffrey argues that the advantage of digital intelligence lies in machines' ability to draw on a continuous energy supply, learning and sharing with one another without interruption, a level of efficiency unattainable by humans. This is why he stresses the potential for digital intelligence to surpass biological intelligence.

AI intelligence surpassing human intelligence.

"For years, symbolic-A.I. people said our true nature is, we're reasoning machines," he told me. "I think that's just nonsense. Our true nature is, we're analogy machines, with a little bit of reasoning built on top, to notice when the analogies are giving us the wrong answers, and correct them."

Geoffrey presents a thought-provoking perspective, highlighting the intricate relationship between analogy and reasoning in human cognition. Analogical thinking, crucial for pattern recognition, creative processes, and daily reasoning, dominates the human mind. In contrast, AI's strength lies in reasoning, encompassing logical analysis, critical thinking, and problem-solving.

Geoffrey underscores that machines excel at reasoning, using continuous calculation and vast data even to mimic human emotional responses. Humans, by contrast, think primarily in analogies and develop their reasoning abilities only gradually over the course of their lives.

"We should say 'confabulate,' " he told me. " 'Hallucination' is when you think there's sensory input --- auditory hallucinations, visual hallucinations, olfactory hallucinations. But just making stuff up --- that's confabulation."

Another mind-blowing point here: I've gained a more profound understanding of the term "confabulate" from Geoffrey Hinton's interpretation. He explains how confabulation occurs in the human mind, where individuals unintentionally generate false or inaccurate information, often in an attempt to fill gaps in memory or create a coherent narrative. It involves the production of fabricated details without the individual being aware that the information is incorrect.

This tendency to "make things up" can occur when the brain tries to construct a cohesive story or explanation, even if the details are not based on actual events. He adds that ChatGPT does the same, so the flaws it produces can actually be regarded as "a sign of its humanlike intelligence."

In this context, Geoffrey suggests that AI is not fundamentally different from a human.

"There's the idea that if a system is intelligent it's going to want to dominate. But the desire to dominate has nothing to do with intelligence --- it has to do with testosterone."

We know that in nature species often seek to dominate one another, but how could a system develop such a desire? Geoffrey's point is that AI may well end up controlling humans not because intelligence itself craves dominance, but because we cannot avoid the fact that powerful people will abuse it, building systems that pursue their creators' goals. The analogy that immediately comes to my mind is the invention of the nuclear bomb.

I appreciate Geoffrey's consistent willingness to bring up the opposing views of Yann LeCun, another AI pioneer, who contends that AI is harmless as long as it remains under human control. This gives readers a moment to reconsider the topic from a different perspective, and it reflects the ongoing debate over whether AI is a benefit or a risk to human society.

Reference: https://www.newyorker.com/magazine/2023/11/20/geoffrey-hinton-profile-ai

All images in this post were generated with Bing.



