Browsing by Author "Vithanage, K"
Now showing 1 - 2 of 2
- item: Article, Full-text
  End-to-end data-dependent routing in multi-path neural networks (Springer, 2025)
  Tissera, D; Wijesinghe, R; Vithanage, K; Xavier, A; Fernando, S; Rodrigo, R
  Neural networks are known to perform better with increased depth owing to their ability to learn more abstract features. Although deepening networks is well established, there is still room for more efficient feature extraction within a layer, which would reduce the need for mere parameter increments. Conventionally widening a network by adding more filters to each layer incurs a quadratic increase in parameters. Using multiple parallel convolutional/dense operations in each layer avoids this, but without any context-dependent allocation of the input among these operations, the parallel computations tend to learn similar features, making the widening less effective. We therefore propose multi-path neural networks with data-dependent allocation of resources among the parallel computations within each layer, which also lets an input be routed end-to-end through these parallel paths. To do this, we first introduce a cross-prediction-based routing algorithm between parallel tensors of subsequent layers. Second, we further reduce the routing overhead by introducing feature-dependent cross-connections between parallel tensors of successive layers. On image recognition tasks, our multi-path networks outperform existing widening and adaptive feature-extraction methods, and even ensembles and deeper networks, at similar complexity.
  A minimal illustrative sketch of this routing idea appears after this list.
- item: Article, Full-text
  Neural mixture models with expectation-maximization for end-to-end deep clustering (Elsevier, 2022)
  Tissera, D; Vithanage, K; Wijesinghe, R; Xavier, A; Jayasena, S; Fernando, S; Rodrigo, R
  Any clustering algorithm must simultaneously learn to model the clusters and to allocate data to those clusters in the absence of labels. Mixture-model-based methods model clusters with pre-defined statistical distributions and allocate data to those clusters based on the cluster likelihoods. They iteratively refine the distribution parameters and member assignments following the Expectation-Maximization (EM) algorithm. However, the cluster representability of such hand-designed distributions, which use only a limited number of parameters, is inadequate for most real-world clustering tasks. In this paper, we realize mixture-model-based clustering with a neural network in which the final-layer neurons, with the aid of an additional transformation, approximate the cluster distribution outputs. The network parameters serve as the parameters of those distributions. The result is a far more general and elegant representation of clusters than a restricted mixture of hand-designed distributions. We train the network end-to-end via batch-wise EM iterations in which the forward pass acts as the E-step and the backward pass acts as the M-step. In image clustering, the mixture-based EM objective can be used as the clustering objective alongside existing representation-learning methods. In particular, we show that fusing mixture-EM optimization with consistency optimization improves clustering performance over consistency optimization alone. Our trained networks outperform single-stage deep clustering methods that still depend on k-means, with unsupervised classification accuracies of 63.8% on STL10, 58% on CIFAR10, 25.9% on CIFAR100, and 98.9% on MNIST.
  A minimal illustrative sketch of the batch-wise EM training loop appears after this list.
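
The first item above describes data-dependent routing among parallel computations via feature-dependent cross-connections. Below is a minimal PyTorch sketch of that general idea only, not the authors' implementation: the class `MultiPathBlock`, the gating network `gate`, and all sizes are illustrative assumptions.

```python
# Minimal sketch: two parallel conv paths whose outputs are re-allocated to the
# next layer's paths by feature-dependent gating weights (cross-connections).
# Names and hyperparameters are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiPathBlock(nn.Module):
    def __init__(self, in_ch, out_ch, num_paths=2):
        super().__init__()
        # One convolution per parallel path.
        self.paths = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, 3, padding=1) for _ in range(num_paths)]
        )
        # Tiny gating head: global-pooled path features -> soft routing weights
        # over the next layer's parallel paths.
        self.gate = nn.Linear(out_ch, num_paths)
        self.num_paths = num_paths

    def forward(self, xs):
        # xs: list of parallel input tensors, one per path.
        outs = [F.relu(p(x)) for p, x in zip(self.paths, xs)]
        # Each path output is distributed among the next layer's paths
        # according to its own (data-dependent) gating weights.
        routed = [torch.zeros_like(outs[0]) for _ in range(self.num_paths)]
        for o in outs:
            pooled = o.mean(dim=(2, 3))                    # (B, C)
            w = torch.softmax(self.gate(pooled), dim=-1)   # (B, num_paths)
            for j in range(self.num_paths):
                routed[j] = routed[j] + w[:, j].view(-1, 1, 1, 1) * o
        return routed

if __name__ == "__main__":
    block = MultiPathBlock(3, 16, num_paths=2)
    x = torch.randn(4, 3, 32, 32)
    ys = block([x, x])                    # start both paths from the same input
    print([tuple(y.shape) for y in ys])   # two tensors of shape (4, 16, 32, 32)
```

Stacking such blocks lets the soft routing weights carry an input end-to-end along whichever parallel paths the gates favour, which is the behaviour the abstract refers to.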
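
The second item casts clustering as a neural mixture model trained with batch-wise EM, where the forward pass plays the role of the E-step and the backward pass the M-step. The sketch below illustrates that training loop under simple assumptions; `NeuralMixture`, `em_step`, and the architecture are hypothetical names, not the paper's code.

```python
# Minimal sketch of batch-wise neural EM for clustering: the forward pass
# produces per-cluster log-likelihood-like scores (used for E-step
# responsibilities), and the backward pass updates the network parameters to
# maximize the expected log-likelihood (M-step). Illustrative only.
import torch
import torch.nn as nn

class NeuralMixture(nn.Module):
    def __init__(self, in_dim=784, num_clusters=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, num_clusters)   # final-layer neurons ~ cluster scores
        )

    def forward(self, x):
        # Log of normalized cluster likelihoods, one score per cluster.
        return torch.log_softmax(self.encoder(x), dim=1)

def em_step(model, optimizer, x):
    log_lik = model(x)                              # (B, K)
    # E-step: responsibilities from the current likelihoods (no gradient).
    with torch.no_grad():
        resp = torch.softmax(log_lik, dim=1)
    # M-step: maximize the expected complete-data log-likelihood.
    loss = -(resp * log_lik).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = NeuralMixture()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.randn(64, 784)        # stand-in batch of flattened images
    for _ in range(5):
        print(em_step(model, opt, x))
```

In the paper's setting this EM-style objective would be combined with a representation-learning term such as consistency optimization; the loop above shows only the EM part.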