Clusters of virtual nerve cells gauge spatial relationships instead of just relying on memorization
Artificial intelligence is getting some better perspective. Like a person who can read someone else’s penmanship without studying lots of handwriting samples, next-gen image recognition AI can more easily identify familiar sights in new situations.
Made from a new type of virtual building block called capsules, these programs may cut down the enormous amount of data needed to train current image-identifying AI. And that could boost such technology as machine-made medical diagnoses, where example images may be scarce, or the responsiveness of self-driving cars, where the view is constantly shifting. Researchers with Google will present this new version of an artificial neural network at the Neural Information Processing Systems conference in Long Beach, Calif., on December 5.
Neural networks are webs of individual virtual nerve cells, or neurons, that learn to pick out objects in pictures.
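To make the idea of a virtual neuron concrete, here is a minimal sketch of one: it weights its inputs, sums them, and fires in proportion to the result. This is a generic illustration of an artificial neuron, not the capsule architecture the Google researchers will present; the function name and values are hypothetical.

```python
def neuron(inputs, weights, bias):
    # Weighted sum of the inputs plus a bias term
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # ReLU activation: output the positive evidence, or nothing
    return max(0.0, total)

# Example: three pixel intensities feeding one neuron
activation = neuron([0.2, 0.8, 0.5], [1.0, -0.5, 2.0], 0.1)
print(activation)
```

An image-recognition network stacks millions of such units in layers; training adjusts the weights until the right neurons respond to the right patterns.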