
Sample-adaptive multiple kernel learning

Conference Paper



Abstract


  • Existing multiple kernel learning (MKL) algorithms indiscriminately
    apply the same set of kernel combination weights to all samples.
    However, the utility of base kernels can vary across samples, and a
    base kernel useful for one sample could become noisy for another. In
    this case, rigidly applying the same set of kernel combination
    weights could adversely affect learning performance. To improve this
    situation, we propose a sample-adaptive MKL algorithm, in which base
    kernels can be adaptively switched on/off with respect to each
    sample. We achieve this goal by assigning a latent binary variable
    to each base kernel when it is applied to a sample. The kernel
    combination weights and the latent variables are jointly optimized
    via the margin maximization principle. As demonstrated on five
    benchmark data sets, the proposed algorithm consistently outperforms
    comparable ones in the literature.
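
To make the idea concrete, the following is a minimal illustrative
sketch (Python/NumPy) of how per-sample binary switches could gate base
kernels in a combined kernel matrix. All names are hypothetical and the
switch values here are set by hand; in the paper, the combination
weights and the latent binary variables are instead learned jointly via
margin maximization.

    import numpy as np

    def rbf_kernel(X, Y, gamma):
        # Gaussian RBF base kernel: K(x, y) = exp(-gamma * ||x - y||^2).
        sq = ((X**2).sum(axis=1)[:, None]
              + (Y**2).sum(axis=1)[None, :]
              - 2.0 * X @ Y.T)
        return np.exp(-gamma * np.maximum(sq, 0.0))

    def sample_adaptive_kernel(X, mu, switches, gammas):
        # mu:       (M,) shared kernel combination weights
        # switches: (N, M) latent binary variables; switches[i, m] = 1
        #           means base kernel m is switched on for sample i
        # gammas:   (M,) bandwidths of the M RBF base kernels
        N, M = switches.shape
        K = np.zeros((N, N))
        for m in range(M):
            Km = rbf_kernel(X, X, gammas[m])
            # Entry (i, j) receives kernel m's contribution only when
            # that kernel is active for both sample i and sample j.
            gate = np.outer(switches[:, m], switches[:, m])
            K += mu[m] * gate * Km
        return K

    # Toy usage: 6 samples in R^2, 3 RBF base kernels.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(6, 2))
    mu = np.array([0.5, 0.3, 0.2])               # combination weights
    switches = rng.integers(0, 2, size=(6, 3))   # per-sample on/off
    K = sample_adaptive_kernel(X, mu, switches,
                               gammas=np.array([0.1, 1.0, 10.0]))
    print(K.shape)  # (6, 6)

In a full implementation, the switches would be binary optimization
variables inside the margin-maximization problem rather than fixed
values as above.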

Authors


  •   Liu, Xinwang (external author)
  •   Wang, Lei
  •   Zhang, Jian (external author)
  •   Yin, Jianping (external author)

Publication Date


  • 2014

Citation


  • Liu, X., Wang, L., Zhang, J. & Yin, J. (2014). Sample-adaptive multiple kernel learning. Twenty-Eighth AAAI Conference on Artificial Intelligence (AAAI-14) (pp. 1975-1981). Palo Alto, California: AAAI Press.

Scopus EID


  • 2-s2.0-84908219454

RO Full-Text URL


  • http://ro.uow.edu.au/cgi/viewcontent.cgi?article=4607&context=eispapers

RO Metadata URL


  • http://ro.uow.edu.au/eispapers/3588

Start Page


  • 1975

End Page


  • 1981

Place Of Publication


  • Palo Alto, California
