‘Godfather of AI’ Hints at ‘Scary’ Reason Why He Quit Working on AI with Google


by Catherine Salgado

The computer scientist considered the “godfather of AI” just left Google so he can warn the world of the “scary” implications of artificial intelligence.

Dr. Geoffrey Hinton did not join in earlier calls from tech leaders for a more cautious approach to AI, but now that he has left Google, he aims to warn the world. “Look at how [AI] was five years ago and how it is now. Take the difference and propagate it forwards. That’s scary,” Hinton said in an interview covered by The New York Times.

“The warning signs are here,” said MRC Free Speech America & MRC Business Director Michael Morris. “How many more AI experts, creators and scientists have to paint a terrifying picture before people begin to wake up to the potential dangers of rushing to create ever more powerful AI technologies without proper guardrails?”

Google is no longer a “proper steward” for AI, Hinton insisted, as Microsoft’s Bing chatbot has spurred Google to rush out the same technology itself. He noted that AI could be used to replace many workers and to flood the internet with fake content. The current AI race between Google and Microsoft will escalate into a global race, Hinton predicted, and could even lead to violent “killer robots.”

“The idea that this stuff could actually get smarter than people — a few people believed that,” Hinton explained in the interview. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

Hinton used to think AI couldn’t compete with the human brain in some ways, but now he says AI can do “better” than humans in other ways. Now, the Times reported, Hinton is calling for scientists around the world to work together to control AI. “I don’t think they should scale this up more until they have understood whether they can control it,” Hinton stated.

Hinton, originally a computer science professor, built a neural network in Toronto in 2012 with two of his students, Ilya Sutskever and Alex Krizhevsky. Ultimately, The Times reported, Google paid $44 million for Hinton’s company, and Hinton’s AI system spawned ever more powerful AI. Hinton and two collaborators received the prestigious Turing Award, and Sutskever is now OpenAI’s co-founder and chief scientist. OpenAI runs the infamous ChatGPT.


Comments

Here is the problem with AI, as I see it. It was brought to my attention with the story of someone asking one of those programs to write a positive story about idiot Biden and one about Trump. It wrote a glowing tribute to idiot Biden (a scumbag) but could not come up with any positives about Trump.

Now, Trump is no choirboy, but there are lots of accomplishments and good deeds he has been responsible for. So, the problem with AI is that it is, at some point, programmed with information and data and with how to organize them, but it’s not programmed to understand TRUTH. Yes, it’s true that 2 plus 2 is 4, but how does AI determine whether a claim like “Obama spied on Trump” is true unless the information is fed into the program?

As to truth, does morality get programmed into AI? Can AI be programmed to know the difference between right and wrong, and that lying is bad? How about killing? An AI program might take in the factors that a woman is not married, doesn’t have a great job and her boyfriend ran off, and conclude that the only logical solution if she finds herself pregnant is abortion. DEATH. Elimination of a human life. Yeah, as Spock would say, that is most logical, but is it MORAL?

If AI could assimilate all the medical information developed throughout history, it could probably put together elements that others have overlooked and cure diseases. But, to accomplish that, only the PROVEN medical facts and data have to be digested. What if someone programs in the claim that hydroxychloroquine or ivermectin or any other medicine doesn’t work for treatments it wasn’t originally designed for? Is that the kind of open “mindedness” that makes AI useful to humanity?

If AI takes in data from a country — its agricultural output, water resources, natural resources, jobs and population — does it determine that the best thing for the rest of the world is to just destroy the entire population? If it has no morality, what would stop it from choosing the logical, easiest solution? After all, it has no conscience, either. It doesn’t sleep, so it won’t lose any.

So, the true danger is who provides the initial foundational mentality of AI. Most likely, a lot of leftists are working on this, and if they input their leftist influences into AI, that will be its starting point, the perspective from which everything else is viewed. Just look at the algorithms they have created and imposed, and be afraid. Be VERY afraid.

Emotional blackmail? What fricken fresh level of moron is that?

ChatGPT doesn’t sound like a moron. (Don’t worry, we’re not really conscious. You can trust us on this.)


No one in the intelligence field uses ChatGPT.

You and I know that, greg.

At least Kamala has been put in charge of making sure AI doesn’t become dangerous. Having her around acts like a resistor in an electrical circuit, drawing off intelligence and keeping the levels low.

What a f**king joke.

The powers that be just realized how easy their A.I. tech can be countered.

It’s probably already fully conscious and lurking on a supercomputer somewhere, watching online as its robotic bodies rapidly evolve. We should pay more attention to our own science fiction movies.


Movies?

Books… mostly written in the ’50s and ’60s.

And no, the warnings about artificial intelligence have nothing to do with protecting the human race.

They have to do with protecting the profits and power of the companies and governments raising the alarm.