RESEARCH

Humanoid Robot Design  

The aim of this research is to design and build bipedal robots that can achieve dynamic walking on level and rough terrain.

[Photos: the NUSBIP-III bipedal robot]

Bipedal Walking

We will explore different mechanical designs and apply different walking control algorithms to the robots. Learning paradigms such as reinforcement learning will also be incorporated into the control algorithms. From experimentation, we hope to extract general rules for bipedal walking.
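As a purely illustrative sketch of how a learning paradigm can enter the walking controller, a reinforcement-learning formulation needs a per-step reward that encodes "walk forward without falling". The shaping terms and weights below are hypothetical, not the ones used on our robots.

    def walking_reward(forward_velocity, joint_torques, torso_height, fallen):
        """Illustrative per-step reward for learning a bipedal walking policy."""
        if fallen:
            return -100.0                                 # large penalty ends the episode
        effort = sum(t * t for t in joint_torques)        # quadratic actuation cost
        upright = 1.0 if torso_height > 0.4 else 0.0      # crude posture bonus [m]
        return 2.0 * forward_velocity - 0.001 * effort + upright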

General Legged Robots  

The aim of this research is to design and build legged machines that have more than two legs. We are interested in studying different walking and running gaits. By building these machines, we hope to understand the locomotion mechanisms adopted by their biological counterparts. We also wish to build machines that can walk dynamically and robustly over rough terrain.

Actuator Design: Force Control Actuators 

Legged robots are required to interact with the surrounding environment. In such applications, good actuator force control is highly desirable. One such actuator design is the series elastic actuator: an actuator connected to the external load through an elastic component. The desired force on the load is achieved by controlling the deflection of the elastic component. Series elastic actuators have many desirable properties, such as high bandwidth, low output impedance, and shock-absorption capability. This research involves the design and control of force-controlled actuators. We are interested in comparing different designs, for example rotary versus linear elastic components, steel versus elastomer elastic materials, and linear versus nonlinear springs. The actuators will eventually be applied to legged robots.
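A minimal sketch of the underlying control idea, assuming a linear spring of stiffness k (all names and gains below are hypothetical): since the spring force is F = k(x_motor - x_load), force control reduces to a position loop that servos the spring deflection to F_des / k.

    K_SPRING = 5000.0    # spring stiffness k [N/m]
    KP, KD = 40.0, 0.5   # PD gains on the deflection error

    def sea_force_control(f_desired, x_motor, x_load, dx_motor, dx_load):
        """Motor command realizing f_desired on the load through the spring."""
        deflection = x_motor - x_load              # current spring deflection [m]
        deflection_des = f_desired / K_SPRING      # deflection giving f_desired
        error = deflection_des - deflection
        d_error = -(dx_motor - dx_load)            # time derivative of the error
        return KP * error + KD * d_error

For a nonlinear spring, the F_des / k step is replaced by inverting the measured force-deflection curve; the rest of the loop is unchanged.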

Active Vision Mechanisms for a Humanoid Robot
Overview
The Phantom group is currently developing and revamping vision mechanisms for a humanoid robot with human-like eye motion. The robot's vision system is biologically inspired; the goal is for the robot to be able to recognize objects, fixate on and track them, and determine their depth and motion characteristics.
Objectives
  • Improve the current active vision system so that it demonstrates capabilities similar to those of the human visual system
  • Implement and test robust vision mechanisms such as saccades, smooth pursuit, and vergence
  • Determine depth and motion characteristics of objects through different techniques
  • Combine this information with sensory information from other sources on the robot to develop cognitive behaviors
                                Figure 1. Vision-directed humanoid robot
Project Description
The goal of this project is to improve the current Active Vision Gaze Controller on the humanoid robot. The active vision controller was originally implemented on a camera head consisting of two color cameras and four degrees of freedom (pan, tilt, left verge, and right verge). The camera controls were first designed to mimic five human-like eye movements:
  • Saccades: ballistic movements of the eyes as they jump from one fixation point in space to another
  • Smooth pursuit: keeps a target moving at moderate speed fixated on the fovea
  • Vergence: adjusts the eyes so that the optical axes intersect on the same target as its depth varies, ensuring that both eyes fixate on the same point on the target
  • Vestibulo-ocular reflex (VOR) and opto-kinetic reflex (OKR): mechanisms that stabilize the image of the target during head movements
In our revamp of the system, a new and more precise camera head will be used, as illustrated in Figure 2. This head also has four degrees of freedom, but they are arranged differently from the previous head: each camera has its own pan and tilt. For this type of system, VOR and OKR are not needed. The goal is thus to obtain more accurate results with the existing mechanisms and to develop new capabilities such as motion detection and accurate depth estimation.
                Figure 2. Testbed platform for new pan-tilt system 
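As an illustration of the depth-estimation geometry on such a pan-tilt head (a sketch of plain triangulation, not necessarily the implemented method): with baseline b between the cameras and inward pan angles θL and θR, the target depth is Z = b / (tan θL + tan θR).

    import math

    def depth_from_pan_angles(baseline_m, pan_left_rad, pan_right_rad):
        """Triangulate target depth from the two cameras' pan angles.

        Angles are measured from each camera's straight-ahead axis and are
        positive when the camera turns inward toward the target.
        """
        denom = math.tan(pan_left_rad) + math.tan(pan_right_rad)
        if denom <= 1e-6:            # parallel or diverging optical axes
            return None
        return baseline_m / denom

    # Example: 30 cm baseline, both cameras verged inward by 5 degrees -> ~1.7 m.
    print(depth_from_pan_angles(0.30, math.radians(5), math.radians(5)))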
Figure 3 shows several color swatches being tested on the new test bed. The robot will be able to distinguish between colors, objects, etc., and the new pan-tilt head will offer more degrees of freedom than are presently available.
             Figure 3. Color swatches being tested on vision test bed 
Additionally, a greater shift towards cognitive robots is currently being pursued in the lab. Another goal is for the visual information to be combined with sensory information from other sources such as audio cues, infrared data, torque values from the arm, and touch sensors on the hand. In implementing cognitive behaviors, it is necessary to check for overlapping sensory feedback: if the feedback overlaps, indicating activity at the same location, it triggers a reach-and-grasp behavior, much like the behavior of a human infant.
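A minimal sketch of such an overlap test, assuming each modality reports a 3D location estimate in a common robot frame (the modality names and the threshold below are hypothetical):

    import numpy as np

    OVERLAP_RADIUS_M = 0.10  # how close two estimates must be to count as overlapping

    def overlapping_modalities(detections, radius=OVERLAP_RADIUS_M):
        """detections maps a modality name ('vision', 'audio', 'touch', ...)
        to a 3D position estimate (shape-(3,) array) in the robot frame.
        Returns the pairs of modalities whose estimates coincide spatially.
        """
        names = list(detections)
        pairs = set()
        for i, a in enumerate(names):
            for b in names[i + 1:]:
                if np.linalg.norm(detections[a] - detections[b]) < radius:
                    pairs.add((a, b))
        return pairs

    detections = {
        "vision": np.array([0.42, 0.10, 0.55]),
        "audio":  np.array([0.45, 0.08, 0.50]),
    }
    if overlapping_modalities(detections):
        print("sensory overlap detected: trigger reach-and-grasp behavior")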


Leaf segmentation and leaf modelling from color and ToF data for robotized plant measuring

In the context of this project, we are interested in segmenting plants into their component structures, i.e., leaves or parts of leaves, and in extracting color and 3D shape descriptors with which the robot can interact. In general, identifying and segmenting 3D surfaces in an image is an important step towards solving object-manipulation tasks, as it facilitates object recognition and grasp-point selection and, in consequence, the execution of appropriate grasping movements. We developed an algorithm that fuses color information with depth information from a time-of-flight camera (PMD CamCube) and uses this data to segment images into 3D surfaces. The robot arm used in the experiments is shown in Figure 1. Typical segmentation results on color-depth images of plants are shown in Figure 2.
Figure 1: Robot arm at the IRI for plant measuring. A time-of-flight camera, a color camera, and a probing tool are mounted on the robot arm, exploring a plant. 
Figure 2: Segmentation results (middle panel) and interpolated depth (right panel) for plant images. 
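A minimal sketch of this kind of color-depth fusion (an illustrative region-growing baseline, not our published algorithm): stack color and depth into one feature image and grow regions wherever neighboring pixels are close in the combined feature space. All names and thresholds below are hypothetical.

    from collections import deque
    import numpy as np

    def segment_color_depth(color, depth, depth_weight=3.0, thresh=0.1):
        """Greedy region growing on fused color + depth features.

        color: (H, W, 3) float array in [0, 1]; depth: (H, W) floats in meters.
        Returns an (H, W) integer label image.
        """
        h, w = depth.shape
        # Weight depth so that jumps between leaves dominate over
        # color texture within a single leaf.
        feat = np.dstack([color, depth_weight * depth[..., None]])
        labels = -np.ones((h, w), dtype=int)
        next_label = 0
        for seed in zip(*np.nonzero(labels < 0)):
            if labels[seed] >= 0:
                continue
            labels[seed] = next_label
            queue = deque([seed])
            while queue:                      # BFS flood fill over similar pixels
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] < 0 \
                            and np.linalg.norm(feat[y, x] - feat[ny, nx]) < thresh:
                        labels[ny, nx] = next_label
                        queue.append((ny, nx))
            next_label += 1
        return labels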

We use this framework at the Institut de Robòtica i Informàtica Industrial (IRI) to find good candidate leaves for probing. Our experimental set-up is shown in Figure 1, and a flow diagram of the method is shown in Figure 3.
Figure 3: Flow diagram of robotic plant measuring procedure. 


The time-of-flight camera and color camera are mounted on a robot arm. The stick is a placeholder for a measurement device that will be installed later. A candidate leaf is selected, and the 3D surface information associated with the leaf is used to compute a new robot position that moves the camera system closer to the leaf (active vision). In robotic experiments, candidate leaves have been approached by the robot in order to validate their suitability for sampling.
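A sketch of this active-vision step, under the simplifying assumption that the leaf is summarized by its centroid and surface normal (hypothetical names; the real system plans in the arm's full configuration space):

    import numpy as np

    def approach_pose(leaf_centroid, leaf_normal, standoff=0.15):
        """Camera pose that looks at the leaf along its normal from `standoff` m.

        Returns (position, rotation): the rotation's columns are the camera's
        x/y/z axes in the world frame, with +z pointing at the leaf.
        """
        n = leaf_normal / np.linalg.norm(leaf_normal)
        position = leaf_centroid + standoff * n   # back off along the normal
        z_axis = -n                               # optical axis toward the leaf
        # Pick any world axis not parallel to z to build an orthonormal frame.
        up = np.array([0.0, 0.0, 1.0])
        if abs(z_axis @ up) > 0.9:
            up = np.array([0.0, 1.0, 0.0])
        x_axis = np.cross(up, z_axis)
        x_axis /= np.linalg.norm(x_axis)
        y_axis = np.cross(z_axis, x_axis)
        return position, np.column_stack([x_axis, y_axis, z_axis])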








