MAXIMUM MARGINAL LIKELIHOOD ESTIMATION FOR NONNEGATIVE DICTIONARY LEARNING
Non-negative Tensor Factorization and Blind Separation
Presented by: Onur Dikmen, Author(s): Onur Dikmen, Cédric Févotte, CNRS LTCI / Télécom ParisTech, France
We describe an alternative to standard nonnegative matrix factorisation (NMF) for nonnegative dictionary learning. NMF with the Kullback-Leibler divergence can be seen as maximisation of the joint likelihood of the dictionary and the expansion coefficients under Poisson observation noise. This approach lacks statistical optimality because the number of parameters (which include the expansion coefficients) grows with the number of observations. Instead, we describe a variational EM algorithm for optimisation of the marginal likelihood, i.e., the likelihood of the dictionary where the expansion coefficients have been integrated out (given a conjugate Gamma prior). We compare the output of maximum joint likelihood estimation (i.e., standard NMF) and maximum marginal likelihood estimation (MMLE) on real and synthetic data. The MMLE approach is shown to embed automatic model order selection, similar to automatic relevance determination.
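The baseline the abstract refers to, KL-NMF as maximum joint likelihood under Poisson noise, is commonly solved with multiplicative updates. Below is a minimal sketch of those updates (the function name `kl_nmf` and all hyperparameters are illustrative, not from the talk); the MMLE variational EM algorithm itself is more involved and is not reproduced here.

```python
import numpy as np

def kl_nmf(V, K, n_iter=200, seed=0):
    """Multiplicative-update NMF minimising the KL divergence D(V || WH),
    equivalent to maximum joint likelihood of (W, H) under Poisson noise.
    V: nonnegative (F, N) data matrix; K: number of dictionary atoms."""
    rng = np.random.default_rng(seed)
    F, N = V.shape
    W = rng.random((F, K)) + 1e-3   # dictionary
    H = rng.random((K, N)) + 1e-3   # expansion coefficients
    for _ in range(n_iter):
        # Update coefficients, then dictionary; each step cannot
        # increase the KL divergence (Lee-Seung style updates).
        H *= (W.T @ (V / (W @ H))) / W.sum(axis=0, keepdims=True).T
        W *= ((V / (W @ H)) @ H.T) / H.sum(axis=1, keepdims=True).T
    return W, H
```

Note that the coefficients H are point-estimated jointly with W here; the MMLE approach of the talk instead integrates H out under a Gamma prior, which is what yields the automatic model order selection behaviour.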
Lecture Information
| Recorded | 2011-05-26, 17:35–17:55, Club B |
|---|---|
| Added | 18 June 2011, 03:35 |
| Number of views | 16 |
| Video resolution | 1024×576 px, 512×288 px |
| Video length | 0:23:08 |
| Audio track | MP3 [7.84 MB], 0:23:08 |