Neural networks don’t teach the computer rules; they are simply fed lots and lots of examples.
By the time I began my Ph.D., the field of artificial intelligence had forked into two camps: the “rule-based” approach and the “neural networks” approach. Researchers in the rule-based camp (also sometimes called “symbolic systems” or “expert systems”) attempted to teach computers to think by encoding a series of logical rules: If X, then Y. This approach worked well for simple and well-defined games (“toy problems”) but fell apart when the universe of possible choices or moves expanded. To make the software more applicable to real-world problems, the rule-based camp tried interviewing experts in the problems being tackled and then coding their wisdom into the program’s decision-making (hence the “expert systems” moniker).

The “neural networks” camp, however, took a different approach. Instead of trying to teach the computer the rules that had been mastered by a human brain, these practitioners tried to reconstruct the human brain itself. Given that the tangled webs of neurons in animal brains were the only thing capable of intelligence as we knew it, these researchers figured they’d go straight to the source. This approach mimics the brain’s underlying architecture, constructing layers of artificial neurons that can receive and transmit information in a structure akin to our networks of biological neurons. Unlike the rule-based approach, builders of neural networks generally do not give the networks rules to follow in making decisions. They simply feed lots and lots of examples of a given phenomenon—pictures, chess games, sounds—into the neural networks and let the networks themselves identify patterns within the data. In other words, the less human interference, the better.

Differences between the two approaches can be seen in how they might approach a simple problem: identifying whether there is a cat in a picture.
The rule-based approach would attempt to lay down “if-then” rules to help the program make a decision: “If there are two triangular shapes on top of a circular shape, then there is probably a cat in the picture.” The neural network approach would instead feed the program millions of sample photos labeled “cat” or “no cat,” letting the program figure out for itself what features in the millions of images were most closely correlated to the “cat” label.
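The contrast above can be sketched in a few lines of code. This is a deliberately toy illustration, not an actual neural network: the feature names (`pointy_ears`, `whiskers`, `round_face`) are hypothetical stand-ins for whatever visual features a real system would extract, and the “learning” is reduced to counting how often each feature co-occurs with the “cat” label.

```python
# Rule-based ("expert system") approach: a human encodes the decision logic.
def rule_based_is_cat(features):
    # "If there are two triangular shapes on top of a circular shape..."
    return features.get("pointy_ears", False) and features.get("round_face", False)

# Example-driven approach, reduced to its essence: no rules are given.
# The program receives labeled examples and measures, for each feature,
# how strongly its presence correlates with the "cat" label.
def learn_feature_weights(examples):
    weights = {}
    for features, label in examples:
        for name, present in features.items():
            if present:
                weights[name] = weights.get(name, 0) + (1 if label else -1)
    return weights

def learned_is_cat(features, weights, threshold=0):
    score = sum(weights.get(name, 0)
                for name, present in features.items() if present)
    return score > threshold

# Tiny labeled "dataset" (a real system would use millions of photos).
examples = [
    ({"pointy_ears": True,  "whiskers": True,  "round_face": True},  True),
    ({"pointy_ears": True,  "whiskers": True,  "round_face": False}, True),
    ({"pointy_ears": False, "whiskers": False, "round_face": True},  False),
    ({"pointy_ears": False, "whiskers": False, "round_face": False}, False),
]

weights = learn_feature_weights(examples)
test_photo = {"pointy_ears": True, "whiskers": True, "round_face": False}
print(rule_based_is_cat(test_photo))   # hand-coded rule demands ears AND a round face
print(learned_is_cat(test_photo, weights))  # learned weights decide from the data
```

Note the difference in failure modes: the hand-coded rule misses this photo because its round face is hidden, while the learned weights, having seen whiskers and pointy ears correlate with “cat” in the data, still classify it correctly. That fragility of fixed rules under real-world variation is exactly why the rule-based camp “fell apart when the universe of possible choices or moves expanded.”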
We are still the masters of our fate. Rational thinking, even assisted by any conceivable electronic computers, cannot predict the future. All it can do is to map out the probability space as it appears at the present and which will be different tomorrow when one of the infinity of possible states will have materialized. Technological and social inventions are broadening this probability space all the time; it is now incomparably larger than it was before the industrial revolution—for good or for evil.
The future cannot be predicted, but futures can be invented.
It was man’s ability to invent which has made human society what it is. The mental processes of invention are still mysterious. They are rational but not logical, that is to say, not deductive.
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion’, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.
Social media has given everyone a virtual megaphone to broadcast every thought, along with the means to filter out any contrary view [...] The result is a creeping sense of isolation and emptiness, which leads people to swipe, tap, and click all the more. Digital distraction keeps the mind occupied but does little to nurture it, much less cultivate depth of feeling, which requires the resonance of another’s voice within our very bones and psyches.
Moravec's paradox is the observation by artificial intelligence and robotics researchers that, contrary to traditional assumptions, reasoning (which is high-level in humans) requires very little computation, while sensorimotor and perception skills (which seem effortless to humans) require enormous computational resources.