Medical image recognition based on multilayer neural network
Abstract
Burns and scalds are common accidental injuries, and accurate grading of such injuries is of great significance for treatment planning and prognosis assessment. Because traditional identification methods are subjective and inaccurate, this paper proposes a burn and scald grade recognition technique based on a multilayer neural network. The three-degree classification standard for burns and scalds is described, and the structure and principles of the multilayer neural network are introduced, including the network architecture composed of an input layer, hidden layers, and an output layer, as well as forward propagation, back propagation, and the training process. The implementation steps of the recognition technique are discussed in detail, covering data acquisition, image preprocessing, feature extraction, classifier design, model training, and evaluation. In experiments on a dataset of 362 burn and scald images, the model achieves over 90% accuracy on the test set, demonstrating high accuracy and reliability. Applying the multilayer neural network to burn and scald grading improves diagnostic accuracy, shortens diagnostic delay, reduces the workload of specialist physicians, and helps patients gain a preliminary understanding of the severity of their burns.
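To make the described pipeline concrete, the following is a minimal sketch of a three-class (first-, second-, third-degree) multilayer perceptron classifier with one forward-propagation and one back-propagation step. The image resolution, layer sizes, and hyperparameters here are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal sketch of a three-class burn-degree MLP classifier (PyTorch).
# All sizes and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class BurnGradeMLP(nn.Module):
    def __init__(self, in_features=64 * 64 * 3, hidden=256, num_classes=3):
        super().__init__()
        # Input layer -> hidden layer -> output layer, as the abstract describes.
        self.net = nn.Sequential(
            nn.Flatten(),                    # flatten the preprocessed image
            nn.Linear(in_features, hidden),  # input -> hidden
            nn.ReLU(),
            nn.Linear(hidden, num_classes),  # hidden -> output (3 burn degrees)
        )

    def forward(self, x):
        return self.net(x)  # forward propagation: one logit per burn degree

model = BurnGradeMLP()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on a dummy batch.
images = torch.randn(8, 3, 64, 64)   # stand-in for preprocessed burn images
labels = torch.randint(0, 3, (8,))   # stand-in for degree labels (0, 1, 2)
logits = model(images)
loss = criterion(logits, labels)
optimizer.zero_grad()
loss.backward()                      # back propagation
optimizer.step()
```

In practice the flat input would be replaced by extracted image features (as in the feature-extraction step above), and the model would be evaluated on a held-out test split.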