Publication

Leveraging deep neural networks for automatic and standardised wound image acquisition

Name: 72208382.pdf  Size: 3.69 MB  Format: Adobe PDF

Abstract(s)

Wound monitoring is a time-consuming and error-prone activity performed daily by healthcare professionals. Capturing wound images is crucial in current clinical practice, yet inadequate images can undermine further assessments. To provide sufficient information for wound analysis, the images should also contain a minimal periwound area. This work proposes an automatic wound image acquisition methodology that exploits deep learning models to guarantee compliance with these adequacy requirements, using a marker as a metric reference. A RetinaNet model detects the wound and marker regions, which are further analysed by a post-processing module that validates that both structures are present and verifies that a periwound radius of 4 centimetres is included. This pipeline was integrated into a mobile application that processes the camera frames and automatically acquires the image once the adequacy requirements are met. The detection model achieved mAP@0.75 IoU values of 0.39 and 0.95 for wound and marker detection, respectively, exhibiting robust detection performance across varying acquisition conditions. Mobile tests demonstrated that the application is responsive, requiring 1.4 seconds on average to acquire an image. The robustness of this solution for real-time smartphone-based usage demonstrates its capability to standardise the acquisition of adequate wound images, providing a powerful tool for healthcare professionals.
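
The adequacy check outlined in the abstract can be illustrated with a minimal sketch (not the authors' implementation): the marker of known physical size gives a pixel-to-centimetre scale, and a frame is accepted only when both structures are detected and a 4 cm periwound margin around the wound box fits inside the image. The box format, marker dimensions, and function names below are illustrative assumptions.

```python
# Minimal sketch of the post-processing adequacy check, assuming axis-aligned
# bounding boxes (x_min, y_min, x_max, y_max) in pixels and a square marker of
# known side length. Values and names are assumptions, not the authors' code.

MARKER_SIZE_CM = 2.0   # assumed physical side length of the reference marker
PERIWOUND_CM = 4.0     # periwound radius required by the adequacy criteria


def is_acquisition_adequate(wound_box, marker_box, frame_w, frame_h):
    """Return True if both detections exist and the 4 cm periwound area is visible."""
    if wound_box is None or marker_box is None:
        return False  # both wound and marker must be detected

    # Estimate pixels per centimetre from the marker's longer side.
    mx0, my0, mx1, my1 = marker_box
    px_per_cm = max(mx1 - mx0, my1 - my0) / MARKER_SIZE_CM

    margin_px = PERIWOUND_CM * px_per_cm
    wx0, wy0, wx1, wy1 = wound_box

    # The wound box expanded by the periwound margin must lie within the frame.
    return (wx0 - margin_px >= 0 and wy0 - margin_px >= 0 and
            wx1 + margin_px <= frame_w and wy1 + margin_px <= frame_h)
```

In a mobile pipeline such as the one described, a check of this kind would run on each camera frame's detections, and the image would be captured automatically once it returns True.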

Keywords

Deep learning; Mobile devices; Mobile health; Object detection; Skin wounds

Publisher

Science and Technology Publications, Lda
