Title:
Intelligent feature fusion with dynamic graph convolutional recurrent network for robust object detection to assist individuals with disabilities in a smart IoT edge-cloud environment.
Authors:
Alohali MA; Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia. maalohaly@pnu.edu.sa., Alanazi F; Department of Information Systems, Faculty of Computer and Information Systems, Islamic University of Madinah, Medina, 42351, Saudi Arabia., Alsahafi YA; Department of Information Technology, College of Computers and Information Technology, University of Jeddah, Jeddah, 21493, Saudi Arabia., Yaseen I; Department of Computer and Self Development, Preparatory Year Deanship, Prince Sattam bin Abdulaziz University, AlKharj, Saudi Arabia.; King Salman Centre for Disability Research, Riyadh, 11614, Saudi Arabia.
Source:
Scientific reports [Sci Rep] 2025 Nov 21; Vol. 15 (1), pp. 41228. Date of Electronic Publication: 2025 Nov 21.
Publication Type:
Journal Article
Language:
English
Journal Info:
Publisher: Nature Publishing Group Country of Publication: England NLM ID: 101563288 Publication Model: Electronic Cited Medium: Internet ISSN: 2045-2322 (Electronic) Linking ISSN: 20452322 NLM ISO Abbreviation: Sci Rep Subsets: MEDLINE
Imprint Name(s):
Original Publication: London : Nature Publishing Group, copyright 2011-
References:
Sci Rep. 2025 Aug 14;15(1):29822. (PMID: 40813886)
Bioengineering (Basel). 2025 Aug 08;12(8):. (PMID: 40868365)
Sensors (Basel). 2024 Jun 03;24(11):. (PMID: 38894387)
Sci Rep. 2025 May 13;15(1):16514. (PMID: 40360540)
Neural Netw. 2025 Aug 6;193:107965. (PMID: 40865382)
Front Neurorobot. 2024 May 20;18:1398703. (PMID: 38831877)
Sci Rep. 2025 Aug 17;15(1):30113. (PMID: 40820088)
Contributed Indexing:
Keywords: Cloud computing; Deep learning; Feature fusion; Individuals with disabilities; Internet of things; Object detection
Entry Date(s):
Date Created: 20251121 Date Completed: 20251121 Latest Revision: 20251124
Update Code:
20251124
PubMed Central ID:
PMC12638986
DOI:
10.1038/s41598-025-25048-7
PMID:
41271840
Database:
MEDLINE

Abstract:

Smart Internet of Things (IoT) edge-cloud computing describes intelligent systems in which IoT devices generate data at the network's edge; the data are processed and analyzed on local edge devices before transmission to the cloud for deeper analysis and storage. Visual impairment, such as blindness, profoundly affects a person's psychological and cognitive functioning, so assistive models can help mitigate these adverse effects and improve the quality of life for individuals who are blind. Much current research concentrates on mobility, navigation, and object detection (OD) in smart devices and advanced technologies for visually challenged people. OD is a vital computer vision task that involves locating and categorizing objects within an image, enabling applications such as augmented reality and image retrieval. Recently, deep learning (DL) models have emerged as an excellent technique for mining feature representations from data and have driven significant advances in OD. A DL model well trained on many images of objects is highly applicable to assisting visually impaired individuals. This paper presents an intelligent Feature Fusion with Dynamic Graph Convolutional Recurrent Network for Robust Object Detection (FFDGCRN-ROD) approach to assist individuals with disabilities. The paper aims to present an intelligent OD framework for individuals with disabilities utilizing a smart IoT edge-cloud environment to enable monitoring and assistive decision-making. First, the image pre-processing phase involves resizing, normalization, and image enhancement to eliminate noise and improve image quality. For the OD process, the FFDGCRN-ROD approach employs Faster R-CNN to automatically identify and locate specific targets within the images. Furthermore, three fusion models, namely CapsNet, SqueezeNet, and InceptionV3, are used for feature extraction.
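The pre-processing stage described above (resizing, normalization, and enhancement) can be sketched as follows. The record does not specify the exact methods, so the choices below (nearest-neighbour resizing, min-max normalization, and a percentile-based contrast stretch) are illustrative assumptions only:

```python
import numpy as np

def preprocess(image, size=(224, 224)):
    """Resize, normalize to [0, 1], and contrast-stretch an image array.

    The paper names resizing, normalization, and enhancement but not the
    concrete algorithms; nearest-neighbour resizing and a 2nd/98th-percentile
    contrast stretch are assumptions for illustration.
    """
    h, w = image.shape[:2]
    # Nearest-neighbour resize via index selection
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    resized = image[rows][:, cols].astype(np.float64)
    # Min-max normalization to [0, 1]
    lo, hi = resized.min(), resized.max()
    normalized = (resized - lo) / (hi - lo) if hi > lo else np.zeros_like(resized)
    # Simple enhancement: percentile-based contrast stretch, clipped to [0, 1]
    p2, p98 = np.percentile(normalized, (2, 98))
    return np.clip((normalized - p2) / (p98 - p2 + 1e-8), 0.0, 1.0)
```

In a deployed edge pipeline this step would run on the local edge device before detection, keeping raw frames off the network.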
Finally, the FFDGCRN-ROD model implements the dynamic adaptive graph convolutional recurrent network (DA-GCRN) model to accurately detect and classify objects for visually impaired people. The experimental validation of the FFDGCRN-ROD methodology is performed on the Indoor OD dataset. The comparison analysis of the FFDGCRN-ROD methodology demonstrated a superior accuracy of 99.65% over existing techniques.
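The feature-fusion step, combining embeddings from the three backbones before classification, might look like the minimal sketch below. The three extractor functions are hypothetical stand-ins (random vectors of plausible sizes); a real system would run CapsNet, SqueezeNet, and InceptionV3 on each detected region, and concatenation is only one common fusion strategy, assumed here for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-ins for the three backbones named in the abstract;
# real feature extractors would produce embeddings from the detected region.
def capsnet_embed(region):
    return rng.standard_normal(32)

def squeezenet_embed(region):
    return rng.standard_normal(64)

def inceptionv3_embed(region):
    return rng.standard_normal(128)

def fuse_features(region):
    """Early fusion by concatenating the three embeddings into one vector."""
    parts = [capsnet_embed(region), squeezenet_embed(region), inceptionv3_embed(region)]
    fused = np.concatenate(parts)
    # L2-normalize so no single backbone dominates the fused representation
    return fused / (np.linalg.norm(fused) + 1e-12)

region = np.zeros((224, 224, 3))  # placeholder for a detected object crop
vec = fuse_features(region)       # 32 + 64 + 128 = 224-dimensional vector
```

The fused vector would then be passed to the downstream classifier (the DA-GCRN in the paper's pipeline).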
(© 2025. The Author(s).)

Declarations. Competing interests: The authors declare no competing interests.