Waze, the Google-owned traffic navigation app, apparently knows its users really well. It can recognize a user's routine and is even known to suggest the next location based on previous trips. Essentially, it uses a form of Bayesian probabilistic reasoning: if the user's routine includes the same few places for several days in a row, the system recognizes the pattern and offers those places as suggestions as soon as the app is started.
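To make that concrete, here is a minimal sketch of how such routine-based suggestion could work, assuming a simple count-based Bayesian model over time-of-day buckets with Laplace smoothing. The place names, bucket size, and overall structure are illustrative assumptions, not Waze's actual implementation:

```python
from collections import Counter, defaultdict

class RoutinePredictor:
    """Toy count-based Bayesian predictor: P(destination | time-of-day bucket),
    estimated from observed trips with Laplace smoothing."""

    def __init__(self, prior=1.0):
        self.prior = prior                    # smoothing pseudo-count
        self.counts = defaultdict(Counter)    # time bucket -> destination counts
        self.destinations = set()

    def observe(self, hour, destination):
        self.counts[hour // 4][destination] += 1   # 4-hour buckets: 0-3, 4-7, ...
        self.destinations.add(destination)

    def suggest(self, hour):
        seen = self.counts[hour // 4]
        total = sum(seen.values()) + self.prior * len(self.destinations)
        posterior = {d: (seen[d] + self.prior) / total for d in self.destinations}
        return max(posterior, key=posterior.get), posterior


predictor = RoutinePredictor()
for _ in range(4):                 # four weekday mornings in a row
    predictor.observe(8, "office")
predictor.observe(18, "gym")       # one evening trip

print(predictor.suggest(8))        # ('office', {'office': 0.83..., 'gym': 0.16...})
```

After only a handful of repeated morning trips, the posterior already favors the routine destination, which is all a suggestion feature needs in order to surface it at startup.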
Predictive intelligence like this isn't really that special these days, and many apps have it. They comb through your phone, computer, tablet, and any other device they're linked to, gathering data about you and your routine in order to work out what your next step or point of interest might be. They try to anticipate your actions, predict your tasks, and even sketch out your future. It's much like the suggestions a search engine offers as soon as you start typing: marketers are trying to get to know you so they know what to offer.
In a way, it's very similar to how the human brain works. Even though our brains are still largely a mystery to us, we discover something new about them every so often, and recent findings suggest that the brain works much like a Bayesian machine.
Andy Clark, professor of philosophy at the University of Edinburgh and author of Surfing Uncertainty: Prediction, Action and the Embodied Mind, illustrates how many believe the mind works using the example of his cup of coffee. On that view, perception is a series of steps: entering the office, seeing the cup, registering its shape, matching it against a mental model stored in memory, and only then recognizing what it is.
He suggests another way the mind might function: by prediction. The mind carries an entire set of coffee-and-office expectations and relies on them being true. It expects things to be a certain way simply because it holds a mental image of them that way. If the cup isn't where the mind expects it to be, that mismatch registers as an error, and the mind tries to guess better next time.
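A rough sketch of that error-driven update, assuming a single expectation tracked as a running estimate and corrected in proportion to the prediction error (the positions and the learning rate are made up for illustration):

```python
# Predictive-processing toy: the "mind" keeps an estimate of where the cup
# sits on the desk and revises it in proportion to the prediction error.
expected_position = 30.0    # cm from the desk edge (prior belief)
learning_rate = 0.5         # how strongly errors revise the expectation

observations = [30.0, 30.0, 45.0, 45.0]   # the cup gets moved on day three

for observed in observations:
    error = observed - expected_position        # prediction error
    expected_position += learning_rate * error  # update only when surprised
    print(f"saw {observed:5.1f}, error {error:+6.1f}, now expect {expected_position:5.1f}")
```

As long as the world matches the expectation, the error is zero and nothing changes; the moment the cup moves, the error spikes and the expectation is pulled toward the new reality.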
These prediction errors are what's actually interesting. They can explain why we cannot tickle ourselves, or why our own jokes aren't that funny to us: the mind expects those "events" and prepares the body for them. There are also events the mind expects so strongly that it starts the appropriate reaction even when the event never occurs, like the phantom phone vibrations you only think you felt.
This is essentially how predictive intelligence might work. The devices around us aren't checking what we're planning to do on a given day; instead, they gather everything they possibly can about us, use it to make predictions as precise as possible, and, when they're wrong, use that error to adjust future predictions. They analyze patterns and events, trying to learn which signals matter and by how much. Google has already run plenty of deep learning experiments along these lines.
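One way to read "learning which signals matter and by how much" is as weights that get nudged whenever a prediction misses. Here is a toy sketch under that assumption; the feature names and the update rule are illustrative, not any vendor's actual system:

```python
# Toy error-driven weighting: predict interest in a suggestion from a few
# signals, then shift weight toward the signals that reduce the error.
features = ["time_of_day_match", "past_visits", "recent_search"]
weights = {f: 0.3 for f in features}            # start with no strong opinion
rate = 0.1

def predict(signals):
    return sum(weights[f] * signals[f] for f in features)

# Each example: the signals observed and whether the user actually went (1/0).
history = [
    ({"time_of_day_match": 1, "past_visits": 1, "recent_search": 0}, 1),
    ({"time_of_day_match": 0, "past_visits": 0, "recent_search": 1}, 0),
    ({"time_of_day_match": 1, "past_visits": 1, "recent_search": 0}, 1),
]

for signals, outcome in history:
    error = outcome - predict(signals)          # how wrong was the guess?
    for f in features:                          # credit only the signals present
        weights[f] += rate * error * signals[f]

print(weights)   # weight drifts toward time_of_day_match and past_visits
```

The point is the same as with the coffee cup: nothing is hard-coded about what matters; the weights simply drift toward whatever has been reducing the prediction error.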
Robots can already perceive the world around them, and even anticipate where something will be, using this prediction method. Clark hints that the next generation of robots won't only be able to predict things in the world around them, but will also be able to perceive and model themselves.