A simulated risk assessment of human-robot interaction in the domestic environment

ABSTRACT


INTRODUCTION
A robotic system is a software-intensive device operating in a complex and unpredictable setting with distributed, heterogeneous software components. The primary industrial robot safety standard in North America is ANSI/RIA R15.06, Industrial Robots and Robot Systems - Safety Requirements [1, 2]. This standard is explicitly written for industrial robots and does not apply to autonomous or service robots. Under it, the safety of human-robot interaction is ensured by isolating the robot from humans [3][4][5]; in effect, there is no interaction.
As robotic applications transition from isolated, structured industrial environments to interactive, unstructured human workspaces, this approach is no longer tenable [3, 6]. In unstructured environments such as domestic areas, mechanical design alone is not adequate to ensure safe and human-friendly interaction; additional safety measures, utilizing system control and planning, are necessary. To ensure a secure interaction, the robot must be able to assess the level of danger in its current environment and act to minimize that danger. Safety can be further enhanced if the robot can anticipate potential hazards in advance and plan to avoid them. This study focuses on the safety of collaborative robots (cobots), which has so far been addressed mainly in workplace environments. Cobots still need to undergo safety assessments before implementation in a real environment; this can be achieved by using sensors and by reducing the robot's motion speed along its trajectory during human-robot interaction (HRI). The objectives of the research are to design and develop the safety configuration of an arm robot in a simulated non-industrial workspace, and to investigate and evaluate the interactions between cobots and humans in domestic environments. The contributions of the research are a human monitoring system for estimating the position, orientation, and affective state of the human participant during HRI, the incorporation of this monitoring information into the planning and control system, and the implementation and testing of an exemplar HRI system. The paper is organized as follows: the research background and problems are discussed in Section 1; Section 2 gives a brief review of related works; the system design and analysis of the proposed model are discussed in Section 3; the initial results are analyzed in Section 4; and the paper is concluded in Section 5.

RELATED WORKS
Research on collaborative HRI is increasing steadily. Adding safety considerations and assessing the danger level during HRI have become a centre of attention among researchers. Still, there is very little focus on HRI in indoor environments; systematic analyses of HRI in the domestic environment have not yet been conducted. In practice, there is a lack of awareness of the real threats and the safety priorities involved during HRI. The literature on this critical issue is reviewed briefly below.
Lera et al. [7] developed a taxonomy classifying cyber-security threats targeting the protection and security of service robots. The proposed taxonomy differentiates safety threats according to the type of user; the estimated risk for each user class is defined by the physical impact level and the source of the risk, while security threats are categorized only by robot features and the type of sensor fitted to the robot. Bonaci et al. [8] addressed security risks for the Raven II, an advanced teleoperated surgical robot. The authors demonstrated that intruders could maliciously control a wide range of robot functions by performing interruption and manipulation attacks against the wireless communication link between the surgeon and the robot. The attacks follow a man-in-the-middle model and effectively compromised the safety and usability of the surgical robot, which could lead to legal and privacy violations. Olawoyin [9] researched safety and automation in a collaborative robot network in the working environment and concluded that performance needs to be improved to avoid safety constraints in automation- and robotics-related issues. Dombrowski et al. [10] emphasized the specific significance of preparing human-robot cooperation (HRC) in automated factory systems. Weitschat and Aschemann [11] developed a new approach to enhancing robot performance that still meets the international safety standards of collaborative robotics. Loukas et al. [12] argued that the restricted rule-based or lightweight machine learning techniques used in cyber-physical vehicle intrusion detection could be replaced by more sophisticated methods using cloud computational offloading.
The improved computing power is used to introduce a deep multilayer perceptron and recurrent neural network architecture that receives the cyber-physical data collected in the robotic vehicle in real time and analyzes it for intrusion detection. Batson et al. [13] carried out a study to identify risks and weaknesses in the unmanned tactical autonomous control and communication framework. Jones and Straub [14] implemented a two-stage intrusion detection system in autonomous robots for detecting network intrusions and malware; the authors used a deep neural network trained to detect commands that deviate from planned behavior. Vuong et al. [15] suggested two separate methods for identifying attacks on robotic vehicles, the first based on decision trees and the second on deep learning. Demir and Durdu [16] demonstrated how human-robot interaction could be monitored in an indoor place. The objective of their proposal was to define models of people's expectations of robot interaction in order to direct robot design and algorithm creation, making the interaction between people and robots more natural and effective.
Kumar et al. [17] proposed an optimal motion control and trajectory planning approach for robots of various degrees of freedom using soft computing techniques, with a comparative analysis of different robotic arm configurations to compensate for uncertainties in arm movement and settling time and to optimize the arm motion, i.e. its kinematic behavior. Shenawy et al. [18] explored multi-robot teamwork during the exploration phase, comparing and evaluating the success of various multi-robot exploration policies for different environments and team sizes. Using a combinatorial method, the authors of [19] analyzed several techniques for representing space, drawing on previous research and evaluating the findings against parameters such as optimality, completeness, stability, memory utilization, and computation time. A programming-by-demonstration system was presented by Amar et al. [20] at the trajectory level to reproduce hand/tool movement with a robot manipulator; this was achieved with the ARToolKit by tracking user movement and reworking the trajectories with cubic splines. Attamimi et al. [21] provided a way to estimate, in a free-play scenario, the concentration of an infant, one of the most critical mental states in child-robot interaction (CRI). First, they developed a system to sense the verbal and non-verbal multimodal signals of a child, including gaze, facial expression, and proximity, to make a careful estimate in this CRI scenario. The observed information was then used to train a support vector machine (SVM) model to determine an individual attention level. Some drawbacks of the previous research are discussed briefly below.
-Industrial robotic systems: robots are mostly caged and isolated from humans behind safety guards.
-Unstructured environments (domestic areas): mechanical design alone is not adequate to ensure safe and human-friendly interaction.
-Additional safety measures, utilizing system control and planning, are necessary.
-Safe interaction: a robot must assess the level of danger in its current environment and act to minimize that danger.
Thus, this has motivated this research study to propose a safe human-cobot collaboration method by implementing the algorithm using the Gazebo simulator in ROS.

RESEARCH METHOD
A software development methodology is a framework used to organize, schedule, and monitor a system's software development process. It involves predefining the unique deliverables and artifacts that a project team develops and completes to create or maintain an application [22]. While successful techniques for the production of safety-critical software are well established in, for example, the avionics industry, these techniques are typically designed for projects with long timescales and high staffing levels. Without adaptation, they may be unsuitable for groundbreaking robotics research, where timescales are shorter and human and financial resources are usually much lower [23]. Typically, one has to deal with a wide range of sensors and actuators with varying capability levels in the robotics domain. Adding to the difficulty of integrating heterogeneous hardware, robots have limited resources with which to deal with open-ended environments [24]. There is therefore a growing need to tailor validated software engineering approaches to the requirements of robotics; in this direction, a methodology is required that does not constrain any particular architecture. Figure 1 shows the system overview of human-robot interaction. The user issues a command to start the contact with the robot. The command translator converts the natural-language instruction into a series of target positions and actions (in the XYZ plane). This human-robot interaction model can be divided into a global controller and a local controller. The global controller begins by designing a geometric course for the robot over large task segments. Segment endpoints are specified by the locations where the robot will stop and perform a grip or release maneuver. For example, it defines one path segment from the robot's initial location to the object to be picked up.
The local planner generates the trajectory along the globally planned path based on information obtained in real time during the execution of the mission. At every control point, the local planner produces the required control signal. Since the local planner makes use of real-time information, the trajectory is produced in short segments. The user is tracked during the interaction to determine the user's approval level of the cobot's behavior.
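The global/local split described above can be sketched as follows. This is a minimal illustration in Python; the names (`global_plan`, `local_trajectory`) and the step size are chosen here for illustration and are not taken from the paper's implementation.

```python
import math

def global_plan(start, goals):
    """Global planner: a geometric path as a list of segment endpoints,
    i.e. the stop points where the robot performs a grip or release."""
    return [start] + list(goals)

def local_trajectory(p0, p1, step=0.05):
    """Local planner: interpolate a short trajectory segment between two
    endpoints; in the full system this is recomputed at every control
    point from real-time sensor information."""
    dist = math.dist(p0, p1)
    n = max(1, int(dist / step))
    return [tuple(a + (b - a) * i / n for a, b in zip(p0, p1))
            for i in range(n + 1)]

# One segment: from the robot's initial location to the object to pick up.
path = global_plan((0.0, 0.0, 0.0), [(0.3, 0.2, 0.1)])
traj = local_trajectory(path[0], path[1])
```

Because the local planner works on short segments, a safety module can modify or abandon the remainder of a segment without replanning the whole global path.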

System overview
The local controller segment uses this information to modify the robot's velocity along the expected path. At the control phase, the safety control module assesses the safety of the plan created by the trajectory planner. If an environmental change is identified that threatens the safety of the interaction, the safety control module triggers a deviation from the planned path, pushing the cobot to a more secure spot. Concurrently, the plan is re-evaluated by the evaluator module and re-planned if necessary. Figure 2 illustrates the safety process during HRI. To locate the safest configuration, the arm robot monitors its surroundings, goes through all the process steps, and finally re-plans its next step. When the arm robot is not in action, it checks the danger level; if the danger level is greater than the threshold, it activates its response. Once the cobot determines the danger level, it reduces its speed, and if it finds a barrier it analyses what kind of barrier it is; if the barrier is non-human, the robot moves backward.

-Step 3: Computation
When the barrier is human, the robot reduces its speed and checks the risk factor of the interaction.

-Step 4: Reconciliation
If the interaction is not safe, the robot reduces its speed and stops for a while; if the interaction is safe, it continues.

-Step 5: Completion
Finally, the algorithm checks the study target: if the goal has not been achieved, the process is repeated; if it has, the algorithm completes smoothly.
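The decision logic of Figure 2 (danger-level check, barrier classification, and the computation and reconciliation steps above) can be sketched as a single monitoring pass. This is a hypothetical Python rendering for illustration, not the paper's pseudocode; all names are assumptions.

```python
def safety_step(danger, threshold, barrier, interaction_safe):
    """One pass of the safety monitoring loop: returns the list of actions
    the cobot should take. Illustrative names, not the paper's API."""
    if danger <= threshold:
        return ["continue"]          # no hazard: keep following the plan
    actions = ["reduce_speed"]       # hazard detected: slow down first
    if barrier == "non-human":
        actions.append("move_backward")
    elif barrier == "human":
        # Step 3 (computation): assess the risk of the interaction
        if interaction_safe:
            actions.append("continue")
        else:
            # Step 4 (reconciliation): pause briefly, then re-plan
            actions.append("stop_and_replan")
    return actions
```

In the full system this pass would run at every control point, with `danger` supplied by the hazard criterion and `barrier` by the perception module.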

PATH PLANNING AND ANALYSIS
Safe path planning is a vital component of an overall stable policy for human-robot interaction. By applying safety criteria at the planning level, the robot can respond better to unexpected safety incidents. Planning is used to improve control outcomes by using a smooth route design [25, 26] to enhance monitoring, and a similar strategy is adopted to replicate the outcome [27, 28]. Moreover, potential risk criteria are defined and evaluated using the proposed motion planning method [29, 30]. In assessing hazards, these criteria explicitly consider the manipulator's inertia and its center of mass relative to the user. A two-stage planning strategy is intended to resolve potentially conflicting planning criteria. The proposed plan is tested in a simulation to compare the parameters and show their output in a real-time scenario.

Approach
This research applied safe planning to the cobot's motion. By choosing safer configurations at the planning level, possible hazards can be avoided, and the computational load for hazard response during real-time monitoring can be reduced. Figure 4 illustrates HRI in different positions in a simulated environment; the cobot has the same end-effector location in both panels.
Safe planning is an essential part of the overall safety strategy. For example, if the path to follow is designed using a general path planning process, the robot may spend most of its time in configurations with high inertia. If the user unexpectedly moves closer to the robot, the possible impact force of a collision would be much higher than if the robot had been in a low-inertia configuration, regardless of the real-time controller used to deal with potential impact incidents. Figure 5 illustrates the simulation of the arm robot, which acts as the agent, and Figure 6 displays all the joints of the arm robot. Depending on the environment, the agent takes the required action and receives a reward, as discussed briefly below. Here, the state of the environment is the position of the two arm joints in space. The reward is the negative of the distance from the fingertip to the goal. The actions consist of a specific movement upwards or downwards of one of the two joints. The resulting states include lifting the cup, holding the cup, and lowering the cup, which are used for safety activation.
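The agent setup just described (state = the two joint angles, reward = negative fingertip-to-goal distance, actions = moving one joint up or down) can be sketched as follows. The link lengths, goal position, and step size are illustrative assumptions, not values from the paper.

```python
import math

L1, L2 = 1.0, 1.0          # link lengths of the simulated 2-joint arm (assumed)
GOAL = (1.0, 1.0)          # target position of the fingertip (assumed)
STEP = 0.05                # joint increment per action, in radians (assumed)

def fingertip(q1, q2):
    """Planar forward kinematics for the two joints."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return x, y

def reward(q1, q2):
    """Negative of the fingertip-to-goal distance, as in the text."""
    return -math.dist(fingertip(q1, q2), GOAL)

def apply_action(state, action):
    """Each action moves one of the two joints up or down by STEP."""
    q1, q2 = state
    dq = {"j1_up": (STEP, 0.0), "j1_down": (-STEP, 0.0),
          "j2_up": (0.0, STEP), "j2_down": (0.0, -STEP)}[action]
    return q1 + dq[0], q2 + dq[1]
```

The reward is maximal (zero) exactly when the fingertip reaches the goal, so an agent maximizing it is driven toward the target.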

Kinematics model for the seven-DOF of an arm robot
The kinematics model for the seven-DOF arm robot is shown in Figure 7, with the schematic diagram in Figure 8; all seven joints of the model are revolute. Coordinate frames are assigned for this 7-DOF arm (Frame 0 to Frame 7), with each joint axis oriented perpendicular to the plane of its link. Frame 0 is fixed to the base and aligns with Frame 1 when the initial joint angle θ1 is 0; Frame E is the end-effector frame, and Frame 0 serves as the reference frame. Table 1 shows the corresponding link parameters for the 7-DOF arm. From the known homogeneous Denavit-Hartenberg transformation matrices (i-1)T(i) [31, 32], the overall transformation matrix of the 7-DOF arm model can be derived, and the position vector is then readily obtained through forward kinematics.
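The chaining of Denavit-Hartenberg transforms described above can be sketched in Python as follows. The actual link parameters come from Table 1, which is not reproduced here, so any values passed in are placeholders.

```python
import math

def dh_transform(theta, d, a, alpha):
    """Homogeneous Denavit-Hartenberg transform (i-1)T(i) for one joint,
    built from the standard link parameters (theta, d, a, alpha)."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [[ct, -st * ca,  st * sa, a * ct],
            [st,  ct * ca, -ct * sa, a * st],
            [0.0,      sa,       ca,      d],
            [0.0,     0.0,      0.0,    1.0]]

def matmul(A, B):
    """4x4 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(dh_params):
    """Chain the per-joint transforms from Frame 0 to the last frame;
    the last column of the result is the position vector."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]  # identity
    for theta, d, a, alpha in dh_params:
        T = matmul(T, dh_transform(theta, d, a, alpha))
    return T
```

With the seven rows of Table 1 supplied as `dh_params`, the same chaining yields the 0-to-E transform of the 7-DOF arm.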

Inverse Kinematics
A widely used generalized inverse of the Jacobian matrix is the pseudoinverse J^† = J^T (J J^T)^−1; its drawback is that it often leads the robot into singularities [33]. Another generalized inverse is the inertia-weighted pseudoinverse J^† = M^−1 J^T (J M^−1 J^T)^−1, a form of resolved motion rate control proposed in [34], which minimizes energy by using the inertia matrix M as the weighting matrix. Inverse kinematics determines what each joint variable must be if the hand is to be placed at a particular point with a specific orientation: given the end effector's position and orientation relative to the base frame, it computes all possible sets of joint angles, for the given link geometry, that achieve that position and orientation [35, 36].
The closed-loop inverse kinematics solution is described here. Suppose a task space trajectory (x(t), ẋ(t)) is given, and the objective is to find a feasible joint space trajectory (q(t), q̇(t)) that reproduces it. The differential kinematics equation establishes a linear mapping between joint space velocities and task space velocities, in terms of either the geometric or the analytical Jacobian, and can be used to solve for the joint velocities:

ẋ = J(q) q̇ (1)

Because the Jacobian of the seven-DOF model is non-square, the simple inverse solution to (1) is obtained using the pseudoinverse J^† of the matrix J:

q̇ = J^† ẋ (2)

where the pseudoinverse can be computed as J^† = J^T (J J^T)^−1. For the kinematic seven-DOF model, a nonempty null space exists because the joint space dimension exceeds the task space dimension (n > m), which makes it possible to set up systematic procedures for effective handling of the redundant DOFs [37, 38]. The null space is the set of joint space velocities that generate zero task space velocity in the current configuration of the robot; these velocities form the orthogonal complement of the joint velocities that produce feasible task space motion. A common method of including the null space in the solution is the formulation:

q̇ = J^† ẋ + (I − J^† J) z, with z ∈ R^n (3)

The first term is a particular solution to the inverse problem ẋ = J q̇, and the second term is the homogeneous solution to the problem J q̇ = 0. The matrix (I − J^†(q) J(q)) is a projector of the joint vector z onto N(J). Because of numerical integration, open-loop solutions at the joint level eventually drift and lead to task space errors; closed-loop schemes are used to overcome these disadvantages.
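A minimal numerical sketch of the pseudoinverse solution q̇ = J^† ẋ with J^† = J^T (J J^T)^−1 and the null-space term (I − J^† J) z, using a redundant 3-link planar arm (n = 3 joints, m = 2 task dimensions) rather than the full seven-DOF model; link lengths and test velocities are illustrative.

```python
import math

def jacobian(q, l=(1.0, 1.0, 1.0)):
    """2x3 geometric Jacobian of a redundant 3-link planar arm
    (task space m=2 < joint space n=3, so N(J) is nonempty)."""
    cum = [q[0], q[0] + q[1], q[0] + q[1] + q[2]]  # cumulative joint angles
    J = [[0.0] * 3 for _ in range(2)]
    for j in range(3):
        J[0][j] = -sum(l[k] * math.sin(cum[k]) for k in range(j, 3))
        J[1][j] = sum(l[k] * math.cos(cum[k]) for k in range(j, 3))
    return J

def pinv_2x3(J):
    """Right pseudoinverse J^+ = J^T (J J^T)^-1 for a full-rank 2x3 J."""
    G = [[sum(J[i][k] * J[j][k] for k in range(3)) for j in range(2)]
         for i in range(2)]                       # G = J J^T (2x2)
    det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
    Ginv = [[G[1][1] / det, -G[0][1] / det],
            [-G[1][0] / det, G[0][0] / det]]
    return [[sum(J[k][i] * Ginv[k][j] for k in range(2)) for j in range(2)]
            for i in range(3)]                    # J^T Ginv (3x2)

def ik_velocity(J, xdot, z=(0.0, 0.0, 0.0)):
    """qdot = J^+ xdot + (I - J^+ J) z: particular solution plus a
    homogeneous term projected onto the null space of J."""
    Jp = pinv_2x3(J)
    qdot = [sum(Jp[i][k] * xdot[k] for k in range(2)) for i in range(3)]
    for i in range(3):
        qdot[i] += z[i] - sum(
            sum(Jp[i][k] * J[k][j] for k in range(2)) * z[j] for j in range(3))
    return qdot
```

Because J J^† = I and J (I − J^† J) = 0, the null-space term redistributes motion among the joints without disturbing the task space velocity.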

Forward kinematics
The forward kinematics problem is, given all the joint information of an articulated model, to compute the position and orientation of certain parts of the model at a specific time. If the object to be animated is an arm with the shoulder remaining in a fixed position, the position of the thumb tip is determined from the angles of the shoulder, elbow, wrist, palm, and knuckle joints. Three of these joints (the shoulder, wrist, and base of the thumb) have more than one degree of freedom. If the model were a human being in its entirety, the position of the shoulder would in turn be determined from other model characteristics [39, 40]. The forward kinematics solution is given below [41].

Simulation assessment
A simulation environment was developed to evaluate various cobot architectures for the planning algorithms. The cobots are modeled in a domestic area using simulated online software. Figure 9 shows the expected movement of a 3-link planar robot using the basic algorithm with the sum-based hazard criterion; the cobot aims to reduce its speed after detecting the level of danger during the interaction. The same expected behavior under the product-based hazard criterion is shown in Figure 10, where the human is in a backward position. In both cases, only the goal and hazard-criterion cost functions are used, to isolate the effect of the hazard criterion.
Figures 9 and 10 demonstrate the differences between the two versions of the hazard criterion. The sum-based hazard criterion treats the hazard-influencing variables separately when determining hazard ratings. One benefit of the sum-based criterion is that its definition is similar to the distance-based quadratic cost functions commonly used in the potential field method. In the proposed approach, when the cobot encounters an obstacle it must determine what kind of obstacle it is: if it is human, the robot lowers its speed and power and evaluates, without complications, the risk factor of the contact. To proceed safely toward neighboring areas, it follows these steps: 1) test whether the safety of the interaction can be compromised by any harm, 2) lower its speed, and 3) stop briefly afterward if unsafe, but continue if the interaction is safe. This algorithm ultimately checks the research objective. A sample of the pseudocode is given in Figure 11. Pseudocode records the algorithm in a programming-language-like form, but it is intended for human reading rather than machine reading. The pseudocode algorithm demonstrated in Figure 11 identifies the arm robot installation in the Gazebo simulator. The Gazebo simulator has several library functions that are used for mounting the arm robot, and some Gazebo safety parameters accompany the arm robot.
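The two hazard-criterion variants compared in Figures 9 and 10 can be sketched as simple cost functions. The specific factors (distance to the human and effective inertia), the normalizations, and the weights below are illustrative assumptions, not the paper's exact formulation.

```python
def hazard_sum(distance, inertia, w_d=1.0, w_i=1.0, d_max=2.0, i_max=1.0):
    """Sum-based hazard criterion: each hazard-influencing factor
    (here normalized human distance and effective inertia) contributes
    an independent quadratic term, like a potential-field cost."""
    f_d = max(0.0, (d_max - distance) / d_max) ** 2
    f_i = (inertia / i_max) ** 2
    return w_d * f_d + w_i * f_i

def hazard_product(distance, inertia, d_max=2.0, i_max=1.0):
    """Product-based variant: the hazard is high only when *all* factors
    are unfavorable at once (close human AND high inertia)."""
    f_d = max(0.0, (d_max - distance) / d_max) ** 2
    f_i = (inertia / i_max) ** 2
    return f_d * f_i
```

The contrast shows up when only one factor is unfavorable: with the human far away but the arm in a high-inertia configuration, the sum-based cost is still positive, while the product-based cost vanishes.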

CONCLUSION
Simulation can be the first step toward study outcomes before any real-world implementation. Without considering the actual environmental implementation, the focus can now be on generating, propagating, and using sensor values and corresponding actuator events. To achieve the objective of this research study, a systematic method will be established to ensure safety in a domestic environment during HRI, based on an explicit quantification of the level of danger in the interaction, using the ROS-based Gazebo simulator. A framework will be developed for evaluating risk levels at both the planning and control stages. Trajectory fitness assessment will be applied to accomplish the desired task of moving toward the target while accounting for the likelihood of collision with the person. The novel approach will be incorporated into the components of the physical framework, checked during real-time HRI on a robot platform, and evaluated.