A scalable algorithm for learning a Mahalanobis distance metric

Conference Paper


Download full-text (Open Access)

Abstract


  • A distance metric that can accurately reflect the intrinsic
    characteristics of data is critical for visual recognition tasks. An effective
    solution to defining such a metric is to learn it from a set of training
    samples. In this work, we propose a fast and scalable algorithm to learn a
    Mahalanobis distance. By employing the principle of margin maximization
    to secure better generalization performance, this algorithm formulates
    metric learning as a convex optimization problem with a positive
    semidefinite (psd) matrix variable. Based on an important theorem that
    a psd matrix with a trace of one can always be represented as a convex
    combination of multiple rank-one matrices, our algorithm employs a
    differentiable loss function and solves the above convex optimization with
    gradient descent methods. This algorithm not only naturally maintains
    the psd requirement of the matrix variable that is essential for metric
    learning, but also significantly cuts down computational overhead,
    making it much more efficient as the dimensionality of feature vectors
    increases. Experimental study on benchmark data sets indicates that,
    compared with existing metric learning algorithms, our algorithm
    can achieve higher classification accuracy with much less computational
    load.
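The key idea in the abstract, representing a trace-one psd matrix as a convex combination of rank-one matrices and using it as a Mahalanobis metric, can be sketched in a few lines of NumPy. This is an illustrative sketch of the parameterization only, not the authors' implementation; the variable names and the choice of random components are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5  # feature dimension (illustrative)
k = 3  # number of rank-one components (illustrative)

# Parameterize a trace-one psd matrix as a convex combination of
# rank-one matrices: M = sum_i w_i * u_i u_i^T, where each u_i is a
# unit vector (so trace(u_i u_i^T) = 1) and the weights w lie on the
# probability simplex. Trace-one and psd then hold by construction.
U = rng.normal(size=(k, d))
U /= np.linalg.norm(U, axis=1, keepdims=True)  # unit-norm directions
w = rng.random(k)
w /= w.sum()                                   # convex (simplex) weights

M = sum(wi * np.outer(ui, ui) for wi, ui in zip(w, U))

def mahalanobis(x, y, M):
    """Mahalanobis distance between x and y under metric matrix M."""
    diff = x - y
    return float(np.sqrt(diff @ M @ diff))

# Sanity checks: M has trace one and no negative eigenvalues.
print(np.trace(M))                  # close to 1.0
print(np.linalg.eigvalsh(M).min())  # >= 0 up to floating-point error
```

Because any update that stays a convex combination of such rank-one terms automatically remains psd, a gradient-based method can optimize the metric without an explicit (and expensive) projection back onto the psd cone, which is the scalability advantage the abstract claims.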

Authors


  •   Kim, Junae (external author)
  •   Shen, Chunhua (external author)
  •   Wang, Lei

Publication Date


  • 2010

Citation


  • Kim, J., Shen, C. & Wang, L. (2010). A scalable algorithm for learning a Mahalanobis distance metric. Proceedings of the 9th Asian Conference on Computer Vision (ACCV) (pp. 1-12). Springer.

Scopus Eid


  • 2-s2.0-78650423534

Ro Full-text Url


  • http://ro.uow.edu.au/cgi/viewcontent.cgi?article=1541&context=eispapers

Ro Metadata Url


  • http://ro.uow.edu.au/eispapers/535

Start Page


  • 1

End Page


  • 12
