Supervised Dictionary Learning

Julien Mairal, INRIA-Willow project, julien.mairal@inria.fr
Francis Bach, INRIA-Willow project, francis.bach@inria.fr
Jean Ponce, Ecole Normale Supérieure, jean.ponce@ens.fr
Guillermo Sapiro, University of Minnesota, guille@ece.umn.edu
Andrew Zisserman, University of Oxford, az@robots.ox.ac.uk

Abstract

It is now well established that sparse signal models are well suited for restoration tasks and can be effectively learned from audio, image, and video data. Recent research has been aimed at learning discriminative sparse models instead of purely reconstructive ones. This paper proposes a new step in that direction, with a novel sparse representation for signals belonging to different classes in terms of a shared dictionary and discriminative class models. The linear version of the proposed model admits a simple probabilistic interpretation, while its most general variant admits an interpretation in terms of kernels. An optimization framework for learning all the components of the proposed model is presented, along with experimental results on standard handwritten digit and texture classification tasks.

1 Introduction

Sparse and overcomplete image models were first introduced in [1] for modeling the spatial receptive fields of simple cells in the human visual system. The linear decomposition of a signal using a few atoms of a learned dictionary, instead of predefined ones such as wavelets, has recently led to state-of-the-art results for numerous low-level image processing tasks such as denoising [2], showing that sparse models are well adapted to natural images. Unlike principal component analysis decompositions, these models are in general overcomplete, with a number of basis elements greater than the dimension of the data. Recent research has shown ...
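
To make the core mechanism concrete, the following is a minimal sketch (not the authors' algorithm) of sparse coding over a fixed overcomplete dictionary: given a signal x and a dictionary D with more atoms than signal dimensions, it finds a sparse coefficient vector by solving the lasso problem min_a 0.5*||x - D a||^2 + lam*||a||_1 with ISTA (iterative soft-thresholding). The dictionary, signal, and parameter values here are synthetic and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 20, 50                       # signal dimension < number of atoms: overcomplete
D = rng.standard_normal((n, k))
D /= np.linalg.norm(D, axis=0)      # unit-norm atoms

# Synthetic signal generated from 3 active atoms (ground-truth sparse code)
a_true = np.zeros(k)
a_true[[4, 17, 33]] = [1.5, -2.0, 1.0]
x = D @ a_true

def ista(D, x, lam=0.05, n_iter=500):
    """Iterative soft-thresholding for the lasso: min_a 0.5||x - Da||^2 + lam||a||_1."""
    L = np.linalg.norm(D, 2) ** 2   # Lipschitz constant of the gradient (squared spectral norm)
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        z = a - grad / L
        # Soft-thresholding step drives most coefficients exactly to zero
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return a

a = ista(D, x)
print("nonzero coefficients:", int(np.sum(np.abs(a) > 1e-3)), "of", k)
print("reconstruction error:", float(np.linalg.norm(x - D @ a)))
```

Only a handful of the 50 coefficients end up nonzero, which is the sparsity the paper relies on; learning D itself (rather than fixing it) is what dictionary learning adds on top of this inner problem.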