In battlefield reconnaissance missions, terrorist attacks, or nuclear leakage accidents, sending a mobile reconnaissance robot into dangerous areas in place of human beings for reconnaissance, detection, sampling, or emergency disposal can greatly reduce the risk to personnel and provide essential information for decision making. In this thesis, we carry out theoretical and experimental research on two aspects: Human-Robot Interaction (HRI) and local autonomous behavior.
Firstly, according to the working environment of the mobile reconnaissance robot, we design a small mobile reconnaissance robot, and, to meet the needs of semi-autonomous control, we propose an HRI-based hybrid robot architecture. The architecture combines features of the cognitive-model-based functional decomposition architecture and the behavior-based reactive architecture. In unseen scenes or in emergencies, the human operator can control the mobile robot directly through its intelligent behaviors or its basic behaviors, ensuring that the robot retains the capacity for emergency disposal.
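The operator-preemption idea in this architecture can be sketched as a simple arbiter that lets a human command at any layer override the autonomous plan. This is a minimal illustration only; all class and command names are hypothetical, not the thesis implementation.

```python
from enum import Enum

class ControlLevel(Enum):
    AUTONOMOUS = 0            # deliberative layer plans on its own
    INTELLIGENT_BEHAVIOR = 1  # operator triggers a high-level behavior
    BASIC_BEHAVIOR = 2        # operator drives primitive motions directly

class HybridArbiter:
    """Selects which layer commands the robot each control cycle."""
    def __init__(self):
        self.operator_request = None  # (level, command) or None

    def request(self, level, command):
        self.operator_request = (level, command)

    def release(self):
        self.operator_request = None

    def select(self, autonomous_command):
        # Operator input always preempts the autonomous plan, so the
        # robot keeps its emergency-disposal capability in unseen scenes.
        if self.operator_request is not None:
            return self.operator_request
        return (ControlLevel.AUTONOMOUS, autonomous_command)

arbiter = HybridArbiter()
plan = arbiter.select("patrol_waypoints")           # autonomous layer in charge
arbiter.request(ControlLevel.BASIC_BEHAVIOR, "turn_left")
override = arbiter.select("patrol_waypoints")       # operator preempts
```

The design choice here mirrors the hybrid architecture: deliberation runs by default, but reactive operator commands win immediately, without waiting for the planner.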
According to the characteristics of the battlefield environment, an egocentric-vision-based posture control system is proposed. The system consists of a hand detector and a posture recognizer. In the hand detection and segmentation part, to extract effective hand regions from images captured by the egocentric vision equipment, which contain complicated backgrounds, large egomotion, and extreme lighting transitions, we propose a novel hand detector based on contour cues and a part-based voting idea. In the posture recognition part, a deep ensemble hybrid classifier is proposed by combining a hybrid classifier with the ensemble learning technique. To reduce misjudgments during consecutive posture switches, a vote filter is proposed and applied to the sequence of recognition results; it corrects misjudged results to the expected results.
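A vote filter of the kind described above can be sketched as a sliding-window majority vote over the per-frame labels, so that an isolated misjudgment is outvoted by its neighbors. This is a minimal sketch under the assumption of a fixed window; the window size and labels here are illustrative, not the thesis parameters.

```python
from collections import Counter, deque

def vote_filter(labels, window=5):
    """Smooth a sequence of per-frame posture labels by majority vote
    over a sliding window, correcting isolated misjudgments."""
    buf = deque(maxlen=window)
    out = []
    for label in labels:
        buf.append(label)
        # most_common(1) returns the label with the highest count in the window
        out.append(Counter(buf).most_common(1)[0][0])
    return out

raw = ["fist", "fist", "palm", "fist", "fist", "palm", "palm", "palm"]
smoothed = vote_filter(raw, window=3)
# the isolated "palm" at index 2 is corrected to "fist"
```

Note the trade-off: a larger window suppresses more noise but delays the filter's response to a genuine posture switch by a few frames.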
In order to reduce the number of trial-and-error episodes in traditional reinforcement learning and apply it to the behavior learning of mobile robots in real scenes, we present an imitating reinforcement learning algorithm that simulates the human learning process. By learning from human operation experience in advance, the algorithm enables the mobile robot to learn faster with fewer trial-and-error episodes. To verify the performance of the proposed algorithm, we apply it to a corridor-following task in a real scene. The experiments show that, given prior learning of human experience, the mobile robot can grasp the control strategy of the corridor-following behavior after only a few trial-and-error runs in the corridor. This demonstrates that the proposed imitating reinforcement learning method frees traditional reinforcement learning from being limited to the simulation environment, allowing it to be transplanted to the behavior learning of mobile robots in real scenes.
In order to give the mobile robot the ability to replace human beings in nuclear radiation environments for pollution-source searching, we propose a nuclear radiation source autonomous approach algorithm for small mobile robots. The algorithm requires only a 2D laser radar and two nuclear radiation dose rate meters; it can localize a single-point nuclear radiation source, approach it autonomously, and avoid ordinary obstacles during the approach. The algorithm consists of a nucletaxis behavior module, a trap escape behavior module, and a behavior arbitration module. The nucletaxis behavior guides the mobile robot according to the readings of the two dose rate meters and avoids ordinary obstacles using the 2D laser radar. When the behavior arbitration module determines that the mobile robot is trapped, the trap escape behavior module is activated. The trap escape behavior is learned directly from human operation experience, with a fuzzy neural network used to learn that experience. To improve the robustness of the fuzzy neural network and reduce the adverse effects of poor human experience on the mobile robot, we propose a hybrid evolutionary algorithm for optimizing the rule layer of the fuzzy neural network, so that the robot is not confined to learning from robot experts' experience: even ordinary people's experience can still teach the mobile robot an effective control strategy.
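With two dose rate meters, the natural nucletaxis rule is differential steering: turn toward the meter with the higher reading, unless the laser reports an obstacle ahead. The function below is a hypothetical one-step sketch of that behavior; the gains, sector split, and thresholds are illustrative assumptions, not the thesis values.

```python
def nucletaxis_step(dose_left, dose_right, laser_ranges,
                    safe_dist=0.5, k_turn=0.8):
    """One control step of a (hypothetical) nucletaxis behavior.

    Returns (forward_speed, turn_rate); positive turn_rate = turn left.
    Steers toward the higher dose-rate reading unless the 2D laser
    reports an obstacle in the central sector, in which case obstacle
    avoidance overrides source seeking."""
    n = len(laser_ranges)
    front = laser_ranges[n // 3 : 2 * n // 3]   # central sector of the scan
    if min(front) < safe_dist:
        # turn toward the half of the scan with more free space
        left_clear = sum(laser_ranges[: n // 2])
        right_clear = sum(laser_ranges[n // 2 :])
        return 0.0, (1.0 if left_clear > right_clear else -1.0)
    total = dose_left + dose_right
    # normalized differential reading: +1 means the source is fully to the left
    diff = (dose_left - dose_right) / total if total > 0 else 0.0
    return 0.2, k_turn * diff
```

For example, with `dose_left=8.0`, `dose_right=2.0`, and a clear scan, the step yields a left turn of `0.8 * 0.6 = 0.48` rad/s while moving forward; if the central sector is blocked, the dose gradient is ignored until the path is clear.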
In order to ensure that the mobile robot can continue a target disposal task without being recalled when one or more joint sensors of its manipulator are faulty, we present an HRRC-based (Human-Robot-Robot Cooperation) uncalibrated visual servoing control system. With the aid of the surveillance video from an auxiliary robot, a virtual exoskeleton of the faulty manipulator is built online; the virtual exoskeleton can drive the manipulator, and the terminal of the virtual exoskeleton is guided by an artificial guiding point produced by the HRI devices. Considering the instantaneous large residual caused by non-uniform guiding during the guiding process, we propose a Residual Switching Algorithm, which updates the joint angle formula according to the motion characteristics of the artificial guiding point so as to ensure tracking stability. To further drive the manipulator, we propose a Multi-joint Fuzzy Driving Controller, which drives the corresponding joint of the manipulator according to an offset vector between the virtual exoskeleton and the manipulator. The experiments show that the proposed control system can assist the operator in controlling the mobile robotic manipulator intuitively, effectively, and efficiently despite faulty joint sensors.
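A fuzzy driving controller of the kind named above maps each component of the exoskeleton-manipulator offset to a joint speed through fuzzy sets and a weighted-mean defuzzification. The sketch below shows a single-joint slice with three triangular sets; the set boundaries, speed limit, and the fallback at saturation are illustrative assumptions, not the thesis design.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_joint_speed(offset, max_speed=0.5):
    """Hypothetical single-joint slice of a multi-joint fuzzy driving
    controller: fuzzify the exoskeleton-manipulator offset (rad) into
    NEG / ZERO / POS sets, then defuzzify to a joint speed (rad/s)
    by the weighted mean of the rule outputs."""
    mu = {
        -max_speed: tri(offset, -1.0, -0.5, 0.0),  # NEG  -> drive negative
        0.0:        tri(offset, -0.5, 0.0, 0.5),   # ZERO -> hold the joint
        max_speed:  tri(offset, 0.0, 0.5, 1.0),    # POS  -> drive positive
    }
    num = sum(v * m for v, m in mu.items())
    den = sum(mu.values())
    if den == 0.0:
        # offset beyond all sets: saturate toward the offset's sign
        return max_speed if offset > 0 else -max_speed
    return num / den
```

The result is a smooth speed command that grows with the offset and flattens near zero, so small tracking residuals do not jitter the joint; a full controller would run one such rule base per joint of the offset vector.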