Leveraging deep neural networks for automatic and standardised wound image acquisition
dc.contributor.author | Sampaio, Ana Filipa | |
dc.contributor.author | Alves, Pedro | |
dc.contributor.author | Cardoso, Nuno | |
dc.contributor.author | Alves, Paulo | |
dc.contributor.author | Marques, Raquel | |
dc.contributor.author | Salgado, Pedro | |
dc.contributor.author | Vasconcelos, Maria João M. | |
dc.date.accessioned | 2023-07-10T15:48:55Z | |
dc.date.available | 2023-07-10T15:48:55Z | |
dc.date.issued | 2023 | |
dc.description.abstract | Wound monitoring is a time-consuming and error-prone activity performed daily by healthcare professionals. Capturing wound images is crucial in current clinical practice, yet inadequate images can undermine subsequent assessments. To provide sufficient information for wound analysis, the images should also contain a minimal periwound area. This work proposes an automatic wound image acquisition methodology that exploits deep learning models to guarantee compliance with these adequacy requirements, using a marker as a metric reference. A RetinaNet model detects the wound and marker regions, which are further analysed by a post-processing module that verifies that both structures are present and that a periwound radius of 4 centimetres is included. This pipeline was integrated into a mobile application that processes the camera frames and automatically acquires the image once the adequacy requirements are met. The detection model achieved mAP@.75IOU values of 0.39 for wounds and 0.95 for markers, exhibiting robust detection performance across varying acquisition conditions. Mobile tests demonstrated that the application is responsive, requiring 1.4 seconds on average to acquire an image. The robustness of this solution for real-time smartphone-based usage demonstrates its capability to standardise the acquisition of adequate wound images, providing a powerful tool for healthcare professionals. (An illustrative sketch of this adequacy check follows the metadata record.) | pt_PT |
dc.description.version | info:eu-repo/semantics/publishedVersion | pt_PT |
dc.identifier.doi | 10.5220/0012031200003476 | pt_PT |
dc.identifier.eid | 85160763571 | |
dc.identifier.isbn | 9789897586453 | |
dc.identifier.uri | http://hdl.handle.net/10400.14/41639 | |
dc.language.iso | eng | pt_PT |
dc.peerreviewed | yes | pt_PT |
dc.publisher | Science and Technology Publications, Lda | pt_PT |
dc.rights.uri | http://creativecommons.org/licenses/by-nc/4.0/ | pt_PT |
dc.subject | Deep learning | pt_PT |
dc.subject | Mobile devices | pt_PT |
dc.subject | Mobile health | pt_PT |
dc.subject | Object detection | pt_PT |
dc.subject | Skin wounds | pt_PT |
dc.title | Leveraging deep neural networks for automatic and standardised wound image acquisition | pt_PT |
dc.type | book part | |
dspace.entity.type | Publication | |
oaire.citation.endPage | 261 | pt_PT |
oaire.citation.startPage | 253 | pt_PT |
oaire.citation.title | Proceedings of the 9th International Conference on Information and Communication Technologies for Ageing Well and e-Health, ICT4AWE 2023 | pt_PT |
rcaap.rights | openAccess | pt_PT |
rcaap.type | bookPart | pt_PT |
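
The abstract describes a post-processing module that accepts a frame only when both the wound and the marker are detected and a 4 cm periwound margin fits in view, using the marker as a metric reference. The sketch below illustrates one way such a check could work; it is not the paper's implementation. The `Detection` structure, the 2 cm marker side length, and the 0.5 confidence threshold are assumptions introduced here for illustration, as the abstract does not state them.

```python
# Illustrative post-processing check for wound-image adequacy, assuming:
#  - detections arrive from an upstream detector (e.g. a RetinaNet model)
#    as class label, confidence score, and pixel bounding box;
#  - the marker is square with a known physical side length (hypothetical
#    value below; the actual marker size is not stated in the abstract).

from dataclasses import dataclass
from typing import Optional

MARKER_SIDE_CM = 2.0        # hypothetical physical side of the square marker
PERIWOUND_RADIUS_CM = 4.0   # adequacy requirement stated in the abstract
MIN_CONFIDENCE = 0.5        # hypothetical detection-score threshold


@dataclass
class Detection:
    label: str                               # "wound" or "marker"
    score: float                             # detector confidence in [0, 1]
    box: tuple[float, float, float, float]   # (x_min, y_min, x_max, y_max) in pixels


def best(detections: list[Detection], label: str) -> Optional[Detection]:
    """Return the highest-scoring detection of the given class, if any."""
    candidates = [d for d in detections
                  if d.label == label and d.score >= MIN_CONFIDENCE]
    return max(candidates, key=lambda d: d.score, default=None)


def frame_is_adequate(detections: list[Detection],
                      frame_w: int, frame_h: int) -> bool:
    """True when both structures are present and a 4 cm periwound
    margin around the wound box lies fully inside the frame."""
    wound = best(detections, "wound")
    marker = best(detections, "marker")
    if wound is None or marker is None:
        return False  # both structures must be visible in the frame

    # Marker as metric reference: pixels per centimetre from its apparent width.
    mx0, my0, mx1, my1 = marker.box
    px_per_cm = (mx1 - mx0) / MARKER_SIDE_CM
    margin_px = PERIWOUND_RADIUS_CM * px_per_cm

    # Expand the wound box by the required periwound margin and check
    # that the expanded box still fits inside the camera frame.
    x0, y0, x1, y1 = wound.box
    return (x0 - margin_px >= 0 and y0 - margin_px >= 0
            and x1 + margin_px <= frame_w and y1 + margin_px <= frame_h)
```

A mobile application could run this check on each camera frame and trigger capture on the first frame for which `frame_is_adequate` returns `True`. A production pipeline might instead measure the 4 cm margin as a circle from the wound boundary and correct for perspective distortion of the marker; the axis-aligned box expansion above is a simplification.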