Neural mixture models with expectation-maximization for end-to-end deep clustering

dc.contributor.author: Tissera, D
dc.contributor.author: Vithanage, K
dc.contributor.author: Wijesinghe, R
dc.contributor.author: Xavier, A
dc.contributor.author: Jayasena, S
dc.contributor.author: Fernando, S
dc.contributor.author: Rodrigo, R
dc.date.accessioned: 2023-06-21T08:53:56Z
dc.date.available: 2023-06-21T08:53:56Z
dc.date.issued: 2022
dc.description.abstract: Any clustering algorithm must simultaneously learn to model the clusters and allocate data to those clusters in the absence of labels. Mixture model-based methods model clusters with pre-defined statistical distributions and allocate data to those clusters based on the cluster likelihoods. They iteratively refine those distribution parameters and member assignments following the Expectation-Maximization (EM) algorithm. However, the cluster representability of such hand-designed distributions, which employ a limited number of parameters, is not adequate for most real-world clustering tasks. In this paper, we realize mixture model-based clustering with a neural network where the final-layer neurons, with the aid of an additional transformation, approximate cluster distribution outputs. The network parameters serve as the parameters of those distributions. The result is a more elegant and far more general representation of clusters than a restricted mixture of hand-designed distributions. We train the network end-to-end via batch-wise EM iterations where the forward pass acts as the E-step and the backward pass acts as the M-step (a minimal sketch of this loop appears after the metadata record below). In image clustering, the mixture-based EM objective can be used as the clustering objective alongside existing representation learning methods. In particular, we show that fusing mixture-EM optimization with consistency optimization improves clustering performance over consistency optimization alone. Our trained networks outperform single-stage deep clustering methods that still depend on k-means, with unsupervised classification accuracies of 63.8% on STL10, 58% on CIFAR10, 25.9% on CIFAR100, and 98.9% on MNIST.
dc.identifier.citation: Tissera, D., Vithanage, K., Wijesinghe, R., Xavier, A., Jayasena, S., Fernando, S., & Rodrigo, R. (2022). Neural mixture models with expectation-maximization for end-to-end deep clustering. Neurocomputing, 505, 249–262. https://doi.org/10.1016/j.neucom.2022.07.017
dc.identifier.database: ScienceDirect
dc.identifier.doi: https://doi.org/10.1016/j.neucom.2022.07.017
dc.identifier.issn: 0925-2312
dc.identifier.journal: Neurocomputing
dc.identifier.pgnos: 249-262
dc.identifier.uri: http://dl.lib.uom.lk/handle/123/21139
dc.identifier.volume: 505
dc.identifier.year: 2022
dc.language.iso: en_US
dc.publisher: Elsevier
dc.subject: Deep Clustering
dc.subject: Mixture Models
dc.subject: Expectation-Maximization
dc.title: Neural mixture models with expectation-maximization for end-to-end deep clustering
dc.type: Article-Full-text
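
The abstract describes batch-wise EM iterations in which the forward pass plays the role of the E-step and the backward pass the M-step. Below is a minimal, hypothetical sketch of that idea: a small network whose final-layer outputs stand in for per-cluster log-likelihoods, soft responsibilities computed in the forward pass, and a gradient step on the expected complete-data log-likelihood as the M-step. The architecture, optimizer, hyperparameters, and the omitted normalization/"additional transformation" details are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of neural mixture-EM clustering (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class NeuralMixture(nn.Module):
    """Network whose K output neurons approximate per-cluster log-likelihoods."""

    def __init__(self, in_dim: int = 784, hidden: int = 256, n_clusters: int = 10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_clusters),  # one output per cluster
        )
        # Log mixing weights (log pi_k), learned jointly with the network.
        self.log_pi = nn.Parameter(torch.zeros(n_clusters))

    def forward(self, x):
        log_lik = self.encoder(x)                          # ~ log p(x | cluster k)
        return log_lik + F.log_softmax(self.log_pi, dim=0)  # ~ log p(x, cluster k)


def em_step(model, optimizer, x):
    log_joint = model(x)
    # E-step: responsibilities from the forward pass (no gradient through them).
    with torch.no_grad():
        resp = F.softmax(log_joint, dim=1)
    # M-step: one gradient step on the expected complete-data log-likelihood.
    loss = -(resp * log_joint).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


# Usage on random data as a stand-in for image features.
model = NeuralMixture()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(128, 784)
for _ in range(5):
    em_step(model, opt, x)
```

In a full image-clustering setup, the paper couples this EM objective with a representation-learning (consistency) objective; the sketch above shows only the mixture-EM part.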
