Quoting Geoffrey Hinton from MIT Technology Review (2023/05/02) https://www.technologyreview.com/2023/05/02/1072528/geoffrey-hinton-google-why-scared-ai/
On why he's leaving Google
“I'm getting too old to do technical work that requires remembering lots of details.”
“I’m still okay, but I’m not nearly as good as I was, and that’s annoying.”
“I want to talk about AI safety issues without having to worry about how it interacts with Google’s business.”
“As long as I’m paid by Google, I can’t do that.”
“It may surprise you.” “There’s a lot of good things about Google that I want to say, and they’re much more credible if I’m not at Google anymore.”
On the new generation of LLM—especially GPT-4
“These things are totally different from us.” “Sometimes I think it’s as if aliens had landed and people haven’t realized because they speak very good English.”
“We got the first inklings that this stuff could be amazing.” “But it’s taken a long time to sink in that it needs to be done at a huge scale to be good.”
On why backpropagation, not symbolic reasoning
“My father was a biologist, so I was thinking in biological terms.” “And symbolic reasoning is clearly not at the core of biological intelligence.”
“Crows can solve puzzles, and they don’t have language. They’re not doing it by storing strings of symbols and manipulating them. They’re doing it by changing the strengths of connections between neurons in their brain. And so it has to be possible to learn complicated things by changing the strengths of connections in an artificial neural network.”
On whether AI is superior to human intelligence
“It’s scary when you see that.” “It’s a sudden flip.”
“Our brains have 100 trillion connections,” “Large language models have up to half a trillion, a trillion at most. Yet GPT-4 knows hundreds of times more than any one person does. So maybe it’s actually got a much better learning algorithm than us.”
“People seemed to have some kind of magic.” “Well, the bottom falls out of that argument as soon as you take one of these large language models and train it to do something new. It can learn new tasks extremely quickly.”
On “few-shot learning”
“Compare a pretrained large language model with a human in the speed of learning a task like that, and the human’s edge vanishes.”
On AI “confabulations” (his preferred term over “hallucinations”)
“People always confabulate.” “Confabulation is a signature of human memory. These models are doing something just like people.”
“We don’t expect them to blather the way people do.” “When a computer does that, we think it made a mistake. But when a person does that, that’s just the way people work. The problem is most people have a hopelessly wrong view of how people work.”
On the fact that human intelligence is much more energy efficient than AI
“When biological intelligence was evolving, it didn’t have access to a nuclear power station.”
On AI's superiority in communicating ability
“If you or I learn something and want to transfer that knowledge to someone else, we can’t just send them a copy.”
“But I can have 10,000 neural networks, each having their own experiences, and any of them can share what they learn instantly. That’s a huge difference. It’s as if there were 10,000 of us, and as soon as one person learns something, all of us know it.”
On AI's status in the world
“It’s a completely different form of intelligence.” “A new and better form of intelligence.”
On the consequences of AI
“Whether you think superintelligence is going to be good or bad depends very much on whether you’re an optimist or a pessimist.”
“If you ask people to estimate the risks of bad things happening, like what’s the chance of someone in your family getting really sick or being hit by a car, an optimist might say 5% and a pessimist might say it’s guaranteed to happen. But the mildly depressed person will say the odds are maybe around 40%, and they’re usually right.”
“I’m mildly depressed, which is why I’m scared.”
On why he's scared of it
“I have suddenly switched my views on whether these things are going to be more intelligent than us. I think they’re very close to it now and they will be much more intelligent than us in the future.”
“How do we survive that?”
“Look, here’s one way it could all go wrong,” “We know that a lot of the people who want to use these tools are bad actors like Putin or DeSantis. They want to use them for winning wars or manipulating electorates.”
“Don’t think for a moment that Putin wouldn’t make hyper-intelligent robots with the goal of killing Ukrainians,” “He wouldn’t hesitate. And if you want them to be good at it, you don’t want to micromanage them—you want them to figure out how to do it.”
“Well, here’s a subgoal that almost always helps in biology: get more energy. So the first thing that could happen is these robots are going to say, ‘Let’s get more power. Let’s reroute all the electricity to my chips.’ Another great subgoal would be to make more copies of yourself. Does that sound good?”
On the necessity of an international ban on AI, like the one on chemical weapons
“It wasn’t foolproof, but on the whole people don’t use chemical weapons,”
On the analogy to the movie Don’t Look Up
“I think it’s like that with AI.”
“The US can’t even agree to keep assault rifles out of the hands of teenage boys.”
“Enjoy yourself, because you may not have long left.” (chuckling)
Guest appearance (also quoting from the same article)
Yann LeCun
“There is no question that machines will become smarter than humans—in all domains in which humans are smart—in the future.” “It’s a question of when and how, not a question of if.”
"I believe that intelligent machines will usher in a new renaissance for humanity, a new era of enlightenment.”
“I completely disagree with the idea that machines will dominate humans simply because they are smarter, let alone destroy humans.” “Even within the human species, the smartest among us are not the ones who are the most dominating.”
“And the most dominating are definitely not the smartest. We have numerous examples of that in politics and business.”
Yoshua Bengio
“I hear people who denigrate these fears, but I don’t see any solid argument that would convince me that there are no risks of the magnitude that Geoff thinks about.” “But fear is only useful if it kicks us into action.” “Excessive fear can be paralyzing, so we should try to keep the debates at a rational level.”
“I believe that we should be open to the possibility of fairly different models for the social organization of our planet.”