Show simple item record

dc.contributor.author Dayarathna, T
dc.contributor.author Muthukumarana, T
dc.contributor.author Rathnayaka, Y
dc.contributor.author Denman, S
dc.contributor.author de Silva, C
dc.contributor.author Pemasiri, A
dc.contributor.author Ahmedt-Aristizabal, D
dc.date.accessioned 2023-11-28T04:43:53Z
dc.date.available 2023-11-28T04:43:53Z
dc.date.issued 2023
dc.identifier.citation Dayarathna, T., Muthukumarana, T., Rathnayaka, Y., Denman, S., de Silva, C., Pemasiri, A., & Ahmedt-Aristizabal, D. (2023). Privacy-Preserving in-bed pose monitoring: A fusion and reconstruction study. Expert Systems with Applications, 213, 119139. https://doi.org/10.1016/j.eswa.2022.119139 en_US
dc.identifier.uri http://dl.lib.uom.lk/handle/123/21747
dc.description.abstract Background and objectives: Recently, in-bed human pose estimation has attracted the interest of researchers due to its relevance to a wide range of healthcare applications. Compared to the general problem of human pose estimation, in-bed pose estimation has several inherent challenges, the most prominent being frequent and severe occlusions caused by bedding. In this paper, we explore the effective use of images from multiple non-visual and privacy-preserving modalities, such as depth, long-wave infrared (LWIR), and pressure maps, for the task of in-bed pose estimation in two settings. First, we explore the effective fusion of information from different imaging modalities for better pose estimation. Second, we propose a framework that can perform in-bed pose estimation when visible images are unavailable, and demonstrate the applicability of fusion methods to scenarios where only LWIR images are available. Method: We analyze and demonstrate the effect of fusing features from multiple modalities. For this purpose, we consider four different techniques with a state-of-the-art pose estimation model: (1) Addition, (2) Concatenation, (3) Fusion via learned modal weights, and (4) an End-to-end fully trainable approach. We also evaluate the effect of reconstructing a data-rich modality (i.e., the visible modality) from a privacy-preserving modality with data scarcity (i.e., long-wave infrared) for in-bed human pose estimation. For reconstruction, we use a conditional generative adversarial network. Results: We conduct experiments on a publicly available dataset for feature fusion and visible-image reconstruction, with ablative studies across different design decisions of our framework, including selecting features with different levels of granularity, using different fusion techniques, and varying model parameters.
Through extensive evaluations, we demonstrate that our method produces results on par with or better than the state-of-the-art. Conclusion: The insights from this research offer stepping stones towards robust automated privacy-preserving systems that utilize multimodal feature fusion to support the assessment and diagnosis of medical conditions. en_US
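The first three fusion strategies named in the abstract (addition, concatenation, and fusion via learned modal weights) can be illustrated with a minimal sketch. This is not the paper's implementation: the feature dimensions, the modality names, and the fixed weight logits below are illustrative assumptions; in the paper's setting the weights would be learned during training.

```python
import numpy as np

# Hypothetical per-modality feature vectors (e.g., outputs of LWIR and
# depth encoders); names and the dimension of 8 are illustrative only.
rng = np.random.default_rng(0)
f_lwir = rng.standard_normal(8)   # LWIR feature vector
f_depth = rng.standard_normal(8)  # depth feature vector

# (1) Addition: element-wise sum of modality features (same shape).
fused_add = f_lwir + f_depth

# (2) Concatenation: stack modality features along the feature axis,
# doubling the dimensionality passed to the pose estimation head.
fused_cat = np.concatenate([f_lwir, f_depth])

# (3) Fusion via learned modal weights: softmax-normalized scalar
# weights per modality (fixed here for illustration; in practice
# these logits would be trainable parameters).
logits = np.array([0.7, 0.3])
weights = np.exp(logits) / np.exp(logits).sum()
fused_weighted = weights[0] * f_lwir + weights[1] * f_depth

print(fused_add.shape, fused_cat.shape, fused_weighted.shape)
```

Addition and weighted fusion preserve the feature dimensionality, while concatenation grows it with each added modality; which trade-off is preferable depends on the downstream pose estimation model.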
dc.language.iso en en_US
dc.subject Feature fusion en_US
dc.subject Generative networks en_US
dc.subject Multimodal human pose analysis en_US
dc.title Privacy-Preserving in-bed pose monitoring en_US
dc.title.alternative a fusion and reconstruction study en_US
dc.type Article-Full-text en_US
dc.identifier.year 2023 en_US
dc.identifier.journal Expert Systems with Applications en_US
dc.identifier.volume 213 en_US
dc.identifier.database Science Direct en_US
dc.identifier.pgnos 119139 en_US
dc.identifier.doi https://doi.org/10.1016/j.eswa.2022.119139 en_US

