The future depends on some graduate student who is deeply suspicious of everything I have said.
I got fed up with academia and decided I would rather be a carpenter.
Machines can do things cheaper and better. We're very used to that in banking, for example. ATMs are better than tellers if you want a simple transaction. They're faster, they're less trouble, they're more reliable, so they put tellers out of work.
I am scared that if you make the technology work better, you help the NSA misuse it more. I'd be more worried about that than about autonomous killer robots.
I am betting on Google's team to be the epicenter of future breakthroughs.
The pooling operation used in convolutional neural networks is a big mistake, and the fact that it works so well is a disaster.
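The complaint above is that max pooling throws away the precise spatial position of a feature. A minimal NumPy sketch (illustrative only, not from the quote; the `max_pool` helper and toy inputs are assumptions) of how that loss happens:

```python
import numpy as np

def max_pool(feature_map, size=2):
    """Non-overlapping max pooling: keep the largest activation in each
    size x size window, discarding exactly where in the window it was."""
    h, w = feature_map.shape
    h2, w2 = h // size, w // size
    windows = feature_map[:h2 * size, :w2 * size].reshape(h2, size, w2, size)
    return windows.max(axis=(1, 3))

# Two feature maps with the same strong activation in *different* positions
a = np.array([[9, 0], [0, 0]])
b = np.array([[0, 0], [0, 9]])
print(max_pool(a))  # [[9]]
print(max_pool(b))  # [[9]] -- identical output: the position is gone
```

Both inputs pool to the same value, which is exactly the spatial information the quote says gets lost.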
In the long run, curiosity-driven research just works better... Real breakthroughs come from people focusing on what they're excited about.
Irony is going to be hard to get. You have to be master of the literal first. But then, Americans don't get irony either. Computers are going to reach the level of Americans before Brits.
The NSA is already bugging everything that everybody does. Each time there's a new revelation from Snowden, you realise the extent of it.
Take any old classification problem where you have a lot of data, and it's going to be solved by deep learning. There's going to be thousands of applications of deep learning.
The question is, can we make neural networks that are 1,000 times bigger? And how can we do that with existing computation?
I have always been convinced that the only way to get artificial intelligence to work is to do the computation in a way similar to the human brain. That is the goal I have been pursuing. We are making progress, though we still have lots to learn about how the brain actually works.
My father was an entomologist who believed in continental drift. In the early '50s, that was regarded as nonsense. It was in the mid-'50s that it came back. Someone named Alfred Wegener had thought of it 30 or 40 years earlier, and he never got to see it come back.
The brain has about ten thousand parameters for every second of experience. We do not really have much experience with how systems like that work or how to make them so good at finding structure in data.
My view is we should be doing everything we can to come up with ways of exploiting the current technology effectively.
In A.I., the holy grail was how do you generate internal representations.
Most people at CMU thought it was perfectly reasonable for the U.S. to invade Nicaragua. They somehow thought they owned it.
All you need is lots and lots of data and lots of information about what the right answer is, and you'll be able to train a big neural net to do what you want.
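The recipe in that quote — data plus the right answers, fed to a learner by gradient descent — can be sketched in a few lines. This is a toy, assumed example (a single-neuron logistic model on synthetic data), not anything from the source:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled data: the "right answer" y is 1 when the features sum positive
X = rng.normal(size=(200, 2))
y = (X.sum(axis=1) > 0).astype(float)

# A one-neuron "net": weights w and bias b, trained by gradient descent
w = np.zeros(2)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
    grad_w = X.T @ (p - y) / len(y)          # cross-entropy gradient wrt w
    grad_b = (p - y).mean()                  # ... and wrt b
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

accuracy = (((X @ w + b) > 0) == (y == 1)).mean()
print(f"training accuracy: {accuracy:.2f}")
```

Scaled up from one neuron to billions of parameters, the same loop — predict, compare against the right answer, adjust — is the whole of the claim.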
Now that neural nets work, industry and government have started calling neural nets AI. And the people in AI who spent all their life mocking neural nets and saying they'd never do anything are now happy to call them AI and try and get some of the money.
We now think of internal representation as great big vectors, and we do not think of logic as the paradigm for how to get things to work. We just think you can have these great big neural nets that learn, and so, instead of programming, you are just going to get them to learn everything.