Zero-shot learning (ZSL), or zero-shot classification, is the ability of a model to predict a class it never saw during training. In natural language processing, it lets a model classify examples that fall outside its training data. Typically, models are trained for a single task and rely on a pre-set list of labels to guide their classifications. After training on that set, models are considered "pre-trained" for that specific subset of data and can generally only return results within its scope. A zero-shot model is also pre-trained, but it can make calculated inferences about data it has never seen before. Cisco's own Ethosight employs this capability: it translates images into discernible semantic structures and can make inferences about things it has not previously encountered.

Have you dabbled with natural language processing? Experienced in single- or zero-shot learning? Share with us below!
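To make the idea concrete, here is a minimal sketch of one common zero-shot approach: compare an input's embedding to the embeddings of candidate label descriptions and pick the closest match, so the labels never need to appear at training time. The bag-of-words `embed` function below is a toy stand-in for a real pretrained sentence encoder, and the label names are illustrative.

```python
# Minimal sketch of zero-shot classification via embedding similarity.
# The toy bag-of-words "embedding" stands in for a real pretrained encoder.
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy embedding: lowercase word counts (stand-in for a real encoder)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def zero_shot_classify(text: str, candidate_labels: list[str]) -> str:
    """Pick the label description most similar to the input text.

    The labels are never seen during 'training'; they are compared to the
    input purely through the shared embedding space, which is what lets a
    zero-shot model handle classes it was not trained on.
    """
    text_vec = embed(text)
    return max(candidate_labels, key=lambda lbl: cosine(text_vec, embed(lbl)))

print(zero_shot_classify(
    "the striker scored a goal in the final minute",
    ["sports goal scored in a match",
     "stock market prices and trading",
     "recipe for cooking dinner"],
))  # → sports goal scored in a match
```

With a real encoder the same structure works for arbitrary label sets chosen at inference time, which is the essence of zero-shot classification.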