Development of a dynamic intelligent recognition system for a real-time tracking robot

ABSTRACT


INTRODUCTION
The use of new technologies in military, security, and surveillance applications has attracted considerable attention from researchers in the domain of robotics, since techniques that automate security and control tasks save both effort and cost. Such systems have proven to perform these essential tasks with high precision, although this depends largely on image and video processing technology, the tracking of moving objects, and the system's ability to recognize people. The development of recognition algorithms in MATLAB has been an essential tool for increasing the efficiency of object detection and recognition. Furthermore, the availability of small, inexpensive hardware development boards such as the Arduino and Raspberry Pi has helped in creating surveillance systems with low power consumption and fast processing [1]. An autonomous navigation robot aims to reach waypoints, specified by coordinates along a route, starting from a start point in the environment. While navigating, the robot encounters obstacles or terrain that hinder movement toward the endpoint and must be avoided [2]. The use of robots considerably extends the possibilities of monitoring devices, which have evolved from a traditional role, in which the device only detects incidents and activates alarms, to active, complex, cooperative observation robots that can interact with their environment and collaborate with people or with other robots [3]. One example of a security robot is "MARVIN" (mobile autonomous robotic vehicle for indoor navigation), developed at the University of Waikato, Hamilton, New Zealand, to act as a security agent inside a building. To meet human needs, the robot is provided with speech recognition and speech synthesis software and can convey emotional states both verbally and nonverbally [4]. Milella et al. 
presented a procedure for tracking people indoors using a multi-sensor mobile platform [5]; the approach is applied to human-augmented mapping (HAM). Paola et al. in 2010 proposed a robot capable of independently combining general-purpose missions with surveillance tasks. The proposed robotic surveillance system effectively addressed several issues, including environment mapping and autonomous navigation, as well as monitoring duties such as scene processing to detect abandoned or removed objects and to detect and follow people [6]. Salh and Nayef [7] described a smart security robot that uses a field programmable analog array (FPAA) for collision-free movement, principal component analysis (PCA) and linear discriminant analysis (LDA) for feature extraction, a support vector machine (SVM) classifier for face recognition, and a gas sensor (MQ4) for detecting gas leakages; the robot is also used to capture audio-visual data. Bahrudin et al. reviewed several studies and models of security and surveillance systems built on numerous platforms, such as a fire alarm system based on the Raspberry Pi Model-B single-board computer. Those systems can alert the security guard immediately when any problem occurs and request consent from the guard to inform the firefighters; a webpage for displaying the warning messages was designed in the PHP programming language [8].
A human tracking capability programmed into a robot helps it identify people, and several methods have been developed. One such method, presented by Ahmad and Youssef [9], enables the robot to track the center of mass of a human skeleton using a Microsoft Kinect 3D sensor; it can detect human movement in its vicinity and avoid people and obstacles in its path. Wang et al. [10] describe an Arduino-based smart car, an approach closely related to this work, inspired by car control systems and driven in two ways: wirelessly from a smartphone via Bluetooth, or through the gravity (accelerometer) sensor built into the Android smartphone. Balogh [11] examined and evaluated a modern robot called Acrobat in several different cases. A robotics lecture course for students was provided by the automotive department; its major aim was to give them a concept of mobile robots, and over two sessions students were able to program basic movements and interactive behaviors of the robots. A human motion capture and analysis system based on a CAN bus has also been described [12], [13]. The model adopts a distributed system architecture that can collect data including plantar pressure and exoskeleton joint angles, and supplies supporting data for a subsequent motion recognition algorithm. It has a simple measurement process and minimal dependence on the measurement environment, and it meets the needs of sensing human movement when controlling a rehabilitation exoskeleton. The primary objective of the security surveillance system proposed in this paper is to provide a reliable, natural technique for an autonomous robot to navigate, detect moving objects, avoid obstacles, and detect smoke. Toderean et al. [14] proposed a model that demonstrates a process for operating a robotic device with deep learning-based target detection in a simulation environment.
The simulation environment is developed in Gazebo and runs on the robot operating system (ROS); the research describes the steps to create a robot arm model controlled by ROS that detects the target [15]. Robot-assisted prostate intervention under magnetic resonance imaging (MRI) guidance is a promising approach for achieving better clinical outcomes compared with the manual surgical process. An MRI-guided 6-DOF serial prostate-intervention robot fully actuated by ultrasonic motors has been designed, and its control scheme proposed; the mechanical layout of the robot follows the design requirements of a prostate-intervention device, microscope imaging is adopted for in-vitro needle tip position modeling, and the robotic model combined with binocular cameras is described [16]. Visual perception is a fundamental ability that intelligent mobile robots need in order to interact fully and safely with humans in the real world. One paper presents a visual perception framework for an intelligent mobile robot that merges a broad set of advanced algorithms capable of recognizing people, objects, and human gestures, as well as describing observed scenes [17]. Another article presents a novel RGB-D learning-free deformable object tracker combined with a camera pose optimization model for optimal deformable object grasping. The technique is based on estimating the target's visible area by building a supervoxel graph that allows weighting new supervoxel candidates across target states over time; once a deformation state of the object is determined, the supervoxels of its associated graph serve as input to the camera pose optimization problem [18]. In other work, modern deep learning approaches are used to build an effective and robust vehicle detection method from low-cost 2D LiDAR. 
That paper proposes a learning-based process with pseudo-images as input, named the cascade pyramid region proposal convolutional neural network (cascade pyramid RCNN); results show that the method offers superior accuracy, speed, and a lightweight model [19]. A hybrid control scheme has been suggested to realize whole-body collision avoidance in online robot manipulators. The proposal improves classical motion planning algorithms by inserting a deep reinforcement learning (DRL) policy trained specifically to perform obstacle avoidance while achieving the assigned task in the operational area; in particular, the switching mechanism activates when a condition of proximity to the obstacle is reached. The proposed system was finally tested on a real robot manipulator simulated in a V-REP environment [20]. The heterogeneous multi-robot framework is one of the most substantial research trends in the robotics field. An improved real-time path planning method has been offered for a heterogeneous multi-robot system composed of several unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs); the 3D environment is planned as a neuron topology map, based on the grid method combined with a bio-inspired neural network. The results show that the suggested method can efficiently guide the heterogeneous UAV/UGV system to the goal and outperforms conventional methods in real-time path planning tasks [21]. This paper is organized as follows: Section 2 describes the system architecture; Section 3 presents the flow chart of the designed system together with the obtained results; and Section 4 gives the conclusions.

RESEARCH METHOD
A robot is a human-made electromechanical device that can move on its own according to a set of planned commands implemented in software and updated through sensory perception. This work develops a robot that detects objects in real time. The following is a description of the parts of the proposed robot system.

Robot platform design
As shown in Figure 1, the proposed model includes all the electrical and mechanical components required to build the robot system: the motor driver, wheels, chassis, Arduino ATmega2560 microcontroller, a mini-HD Wi-Fi camera with SG-90 servo motors, and the required sensors. The robot system is controlled by an Arduino ATmega2560 board, an open-source electronics prototyping platform. It is a simple board containing a microcontroller, peripheral interfaces, and power supply circuits, and it is programmed through an existing software platform [22]. A portable mini-HD Wi-Fi camera is mounted on the robot platform; it adopts P2P technology, which allows users to easily configure the camera, mounted on an SG-90 servo motor, for object tracking [23]. An SRF-05 ultrasonic sensor is also mounted on an SG-90 servo motor for obstacle avoidance; it can sense obstacles at distances from 0.01 to 4 meters and connects easily to the Arduino board. It operates at 5 V, 30 mA, and 40 kHz, and can detect an object 3 cm in diameter at a range of more than 2 m [24]. A smoke sensor measures the smoke level and sounds a buzzer when the level exceeds a certain threshold.
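As an illustration of how the SRF-05 readings can be interpreted, the sketch below converts an echo pulse width into a distance and flags nearby obstacles. This is a minimal sketch rather than the robot's actual firmware; the 0.30 m clearance threshold and the helper names are assumptions made for illustration.

```python
def srf05_distance_m(echo_pulse_s, speed_of_sound=343.0):
    """Convert an SRF-05 echo pulse width (seconds) to a distance in meters.

    The sensor emits a 40 kHz burst and holds its echo line high for the
    round-trip time of the sound, so the one-way distance is half the
    total travel distance.
    """
    return echo_pulse_s * speed_of_sound / 2.0


def obstacle_ahead(echo_pulse_s, threshold_m=0.30):
    """Report an obstacle when it lies inside the sensor's rated 0.01-4 m
    window and closer than the (hypothetical) clearance threshold."""
    d = srf05_distance_m(echo_pulse_s)
    return 0.01 <= d <= 4.0 and d < threshold_m
```

For example, a 20 ms echo pulse corresponds to roughly 3.43 m, inside the sensor's rated window but well beyond the clearance threshold.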

Operations of the designed robot
The robot is designed to monitor a building or government department after employees leave, to provide security and to avoid the loss of life and equipment caused by fires. It is equipped with a surveillance camera to monitor the building during the roaming period and wirelessly transmits live images to the recognition system on the central computer, as shown in Figure 2. The robot keeps track of the generated path while constantly reading information from the environment using the ultrasonic sensors. If the robot detects an object blocking the path, it pauses, updates the map with the new data, and decides to turn 90 degrees to the right or left, or to turn around 180 degrees, according to its position estimate. Figure 2. Robot model.
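The turn-selection rule described above can be expressed as a short decision function. This is a simplified sketch, assuming the robot samples the free distance on both sides with the servo-mounted ultrasonic sensor; the clearance value and the sign convention (+90 for right, -90 for left) are illustrative choices, not taken from the paper.

```python
def choose_turn(left_m, right_m, clearance_m=0.5):
    """Pick a turn when the forward path is blocked.

    Returns +90 (turn right), -90 (turn left), or 180 (turn around),
    based on which side offers more free space; ties go right.
    """
    if left_m < clearance_m and right_m < clearance_m:
        return 180  # neither side is clear: turn around
    return 90 if right_m >= left_m else -90
```

With 1.0 m free on the left and only 0.2 m on the right, the function turns left; with both sides blocked it turns around.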
The recognition system determines the rotation angle and flow direction. The robot can identify anyone entering by using facial recognition technology and comparing the face to a dedicated database of facial images; this is done using a statistical technique based on principal component analysis (PCA), also known as the eigen-object technique [25], [26]. There are five main stages of facial recognition; see Figure 3. When the robot identifies a person, it starts to track him and takes pictures of his face. These images are the inputs to the face recognition system: pre-processing steps are applied in the normalization stage (image size, translation, rotation, and illumination), the background is removed, and the face image then passes to the feature extraction stage. The recognition system classifies the face image and alerts the security guard if the person is a stranger. In the PCA algorithm, each face image α in a training set of Z images of size (W × W) is represented as a vector of length W²; the average face φ is then calculated by (1):

φ = (1/Z) Σᵢ αᵢ (1)
where i = 1, 2, 3, …, Z. The normalized face Yᵢ is obtained by calculating the difference between each face and the average, as in (2):

Yᵢ = αᵢ − φ (2)
The covariance matrix C = A·Aᵀ is then formed, where A = [Y₁, Y₂, Y₃, …, Y_Z]. Because C is of size W² × W², it is simpler to work with the much smaller matrix AᵀA and its eigenvectors vᵢ, such that (3):

AᵀA vᵢ = λᵢ vᵢ (3)

where λᵢ is the eigenvalue and vᵢ is the eigenvector. Multiplying both sides of (3) by A gives (4):

A Aᵀ (A vᵢ) = λᵢ (A vᵢ) (4)

so eᵢ = A vᵢ are the eigenvectors (eigenfaces) of the covariance matrix C.
The input image α is projected into the face space to obtain the weight vector P, as in (5):

P = Eᵀ (α − φ) (5)

where E = [e₁, e₂, …] is the matrix of eigenfaces.
The distance of P to each face class is the Euclidean distance, computed by (6):

εₖ = ‖P − Pₖ‖ (6)
where Pₖ, k = 1, …, K, is the weight vector representing the kth face class. A face is assigned to class k when the minimum distance εₖ is below a chosen threshold θc; otherwise, the face is classified as unknown.
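Steps (1)-(6) can be sketched compactly with NumPy. The snippet below is an illustrative implementation of the eigenface procedure as described, not the authors' MATLAB code; the function names, the unit-length normalization of the eigenfaces, and the number of retained components are assumptions.

```python
import numpy as np

def train_eigenfaces(faces, n_components):
    """faces: (Z, W*W) array with one flattened face image per row.

    Returns the average face phi (1), the eigenfaces E obtained via the
    small-matrix trick (3)-(4), and the training weight vectors (5).
    """
    phi = faces.mean(axis=0)            # (1) average face
    A = (faces - phi).T                 # (2) normalized faces as columns (W^2 x Z)
    # (3) eigenvectors v_i of the small Z x Z matrix A^T A ...
    eigvals, V = np.linalg.eigh(A.T @ A)
    order = np.argsort(eigvals)[::-1][:n_components]
    # (4) ... yield eigenfaces e_i = A v_i of the covariance matrix A A^T
    E = A @ V[:, order]
    E /= np.linalg.norm(E, axis=0)      # unit-length eigenfaces
    weights = (faces - phi) @ E         # (5) projection of each training face
    return phi, E, weights

def classify(face, phi, E, weights, threshold):
    """(6) nearest face class by Euclidean distance; -1 means unknown."""
    p = (face - phi) @ E
    dists = np.linalg.norm(weights - p, axis=1)
    k = int(np.argmin(dists))
    return k if dists[k] < threshold else -1
```

Solving the Z × Z eigenproblem of AᵀA instead of the W² × W² covariance matrix is what makes the method tractable, since the number of training images Z is far smaller than the number of pixels.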

RESULTS AND DISCUSSION
The system was tested using the ORL database of faces, shown in Figure 5. The accuracy of the system was measured by the Euclidean distance between the test face images and the training faces. The robot is equipped with several sensors, such as an obstacle sensor used during movement and smoke detectors, and it reports any security breach or fire inside the building to the central computer. When a strange situation occurs inside the building, the security system reports it by issuing voice commands from the computer to the security guard, who takes the necessary action.
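Given projected weight vectors for the training and test faces, an accuracy figure of the kind reported here could be computed along the following lines. This helper is hypothetical: it only illustrates the nearest-neighbor Euclidean-distance measurement, as the exact ORL evaluation protocol is not specified in the text.

```python
import numpy as np

def recognition_accuracy(test_weights, test_labels, train_weights, train_labels):
    """Fraction of test faces whose nearest training face, measured by
    Euclidean distance in the face space, carries the correct identity."""
    correct = 0
    for p, y in zip(np.asarray(test_weights, dtype=float), test_labels):
        dists = np.linalg.norm(np.asarray(train_weights, dtype=float) - p, axis=1)
        correct += int(train_labels[int(np.argmin(dists))] == y)
    return correct / len(test_labels)
```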
Many practical experiments and scenarios were run to control the movement of the robot. Figure 6 shows the construction of the suggested algorithm for robot movement, matched to the nature of the proposed building. The place was designed with attention to the coordination of the wall colors and the type of furniture, which facilitates the movement of the robot.

CONCLUSIONS
This paper describes the security, safety, and autonomous navigation systems designed and implemented on a multi-sensor mobile security robot. The wireless camera is used to detect the movement of objects and fire. This is achieved by utilizing both the images and the contour model, which are quickly processed with a particle filter. Using this system, objects can be detected independently of lighting conditions. Frame-to-frame tracking in images is used to estimate a person's position; this operation also supports the camera in detecting a fire inside the building.