Kenneth Payne
On the streets of San Francisco recently, Waymo’s ubiquitous autonomous cars hit a snag. By gently placing a traffic cone on a car’s bonnet, protestors discovered a way to confuse Waymo’s algorithm, stranding its cars in the middle of the street.
It was a beautiful illustration of a more general problem. Intelligent machines, as we know them now, are brilliant in structured worlds, where there is a clearly defined problem to solve, such as playing chess or working out the shortest route on a satnav. They are also increasingly adept at complex control problems, such as those facing robotic surgeons, or machines that speedily handle packages in warehouses. Navigating unpredictable human minds, on the other hand, is much harder.
That is a problem in all sorts of fields where enthusiasts hope that AI might bring gains – like healthcare or education. Understanding minds matters here, too, just as it does in warfare. Today, AI drones can find targets and drop bombs as though a game of Space Invaders had been transposed into the real world. These, though, are merely tactical control problems – ethically unsettling, certainly, but basically computable. The larger challenge in war is to think strategically about aims and ways – both our own and those of our adversaries. To date that has been much too difficult for AI.