Over the past few years, we have been developing a number of basic capabilities required by our goal mission.
These capabilities include:

Vision-based Stability and Position Control
The most basic capability our goal mission requires is robust autonomous flight. A truly autonomous craft cannot rely completely on external positioning devices such as GPS satellites or ground beacons for stability and guidance; it must sense and interact with its environment. We chose to experiment with on-board vision as the primary sensor for this interaction.

We have developed a "visual odometer" which can visually lock-on to ground objects and sense relative helicopter position in real time. As the tracked objects leave the field of view, the odometer selects and tracks new objects to continue sensing helicopter motion.

The odometer tracks pairs of 32x32 image templates using a custom-made TI C40-based vision system. Templates are tracked in pairs to measure image rotation and range, so that each template can be properly rotated and scaled before matching. The system supports six DSPs, each tracking one pair of templates at 60 Hz with 24 ms latency. Images are filtered by an 8x8 convolution ASIC (GEC Plessey) before matching. The odometer was tested indoors (mpeg) before outdoor flight tests. The visual odometer, using a pair of b/w video cameras aided by a set of inexpensive angular sensors (Gyration gyro-engines and a KVH digital compass), provided the only sensing employed to stabilize and maneuver (at less than 15 mph) small to mid-sized model RC helicopters (see mpegs on the right).
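To make the pair-tracking idea concrete, here is a minimal sketch in Python, assuming a simple exhaustive sum-of-squared-differences matcher over (x, y) points; the actual system ran its matching on C40 DSPs at 60 Hz, and the names and search strategy here are illustrative, not the flight code's.

    import numpy as np

    def match_template(image, template):
        """Exhaustive SSD search; returns (row, col) of the best match.
        Illustrative only -- the real matcher ran on dedicated DSPs."""
        ih, iw = image.shape
        th, tw = template.shape
        best, best_rc = np.inf, (0, 0)
        for r in range(ih - th + 1):
            for c in range(iw - tw + 1):
                d = image[r:r+th, c:c+tw].astype(float) - template
                ssd = np.sum(d * d)
                if ssd < best:
                    best, best_rc = ssd, (r, c)
        return best_rc

    def rotation_and_scale(p1_old, p2_old, p1_new, p2_new):
        """Image rotation and scale change from one tracked template pair:
        the vector between the two templates rotates with the image and
        stretches as range to the ground changes."""
        v_old = np.asarray(p2_old, float) - np.asarray(p1_old, float)
        v_new = np.asarray(p2_new, float) - np.asarray(p1_new, float)
        angle = np.arctan2(v_new[1], v_new[0]) - np.arctan2(v_old[1], v_old[0])
        scale = np.linalg.norm(v_new) / np.linalg.norm(v_old)
        return angle, scale  # de-rotate and re-scale templates before matching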


[Videos: indoor tests, outdoor flight, on-board view]


Takeoff, Trajectory Following, & Landing
Our goal mission cannot be accomplished without highly precise helicopter maneuvers. Precise helicopter maneuvering is a game of accurate force control, and vision alone inherently cannot provide the necessary feedback for this type of control.

This is because vision can only sense the motion created by applied forces, not the forces themselves. By the time a disturbance has been sensed as motion, a controller can no longer completely eliminate the undesired movement. Precise helicopter maneuvers such as takeoff, trajectory following, and landing therefore require inertial sensing.

We have developed a state estimator which fuses data from an Inertial Measurement Unit (IMU), a GPS receiver, and a compass for accurate helicopter position estimation. The IMU, a Litton LN-200, is composed of 3-axis silicon accelerometers and angular rate sensors. The GPS receiver, a NovAtel MillenRt2, is a dual-frequency carrier-phase unit capable of 2 cm positioning accuracy using nearby (less than 20 miles) ground differential correction stations. The compass, from KVH Industries, is a flux-gate sensor with a toroidal sensing element. The data from these sensors is fused by a 12th-order Kalman filter which tracks latitude, longitude, height, 3-axis velocities, roll, pitch, yaw, and accelerometer biases (modeled as a Gauss-Markov process).
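For illustration, here is a minimal single-axis sketch of the predict/update cycle in Python, assuming a linear model with a random-walk accelerometer bias; the flight filter is 12-state with a Gauss-Markov bias model, and the noise values below are illustrative, not the real filter's tuning.

    import numpy as np

    class AxisFilter:
        """One translational axis: [position, velocity, accelerometer bias]."""
        def __init__(self, dt=0.01):                  # 100 Hz IMU rate (illustrative)
            self.F = np.array([[1, dt, -0.5*dt*dt],   # bias is subtracted by the model
                               [0, 1,  -dt],
                               [0, 0,   1]])
            self.B = np.array([[0.5*dt*dt], [dt], [0]])  # accelerometer input
            self.H = np.array([[1.0, 0.0, 0.0]])         # GPS measures position
            self.Q = np.diag([1e-6, 1e-4, 1e-8])         # process noise (illustrative)
            self.R = np.array([[0.02**2]])               # 2 cm GPS accuracy
            self.x = np.zeros((3, 1))
            self.P = np.eye(3)

        def predict(self, accel):
            """Propagate the state with the measured acceleration."""
            self.x = self.F @ self.x + self.B * accel
            self.P = self.F @ self.P @ self.F.T + self.Q

        def update_gps(self, pos):
            """Correct the state with a GPS position fix."""
            y = pos - self.H @ self.x                    # innovation
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
            self.x = self.x + K @ y
            self.P = (np.eye(3) - K @ self.H) @ self.P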

An on-board classical (feed-forward PD) controller flies the helicopter along smooth (quintic spline) trajectories planned with a flight path editor. The flight path editor's console displays the planned trajectory between goal points supplied by a human operator. The helicopter can be programmed to maintain its heading tangent to the path or to always orient itself towards a point in space. The console also displays 3D range data collected by the mapping system (see below) to help in selecting goal points. The auto trajectory mpeg (right) shows the helicopter executing a circular path, planned using the flight path editor, while pointing at a parked car.
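The trajectory and control ideas can be sketched on one axis as follows, assuming zero velocity and acceleration at each goal point; the gains and function names are illustrative, not the flight code's.

    import numpy as np

    def quintic(x0, x1, T, t):
        """Reference position, velocity, and acceleration along a quintic
        with zero boundary velocity/acceleration (smooth start and stop)."""
        tau = np.clip(t / T, 0.0, 1.0)
        s   = 10*tau**3 - 15*tau**4 + 6*tau**5
        ds  = (30*tau**2 - 60*tau**3 + 30*tau**4) / T
        dds = (60*tau - 180*tau**2 + 120*tau**3) / T**2
        d = x1 - x0
        return x0 + d*s, d*ds, d*dds

    def pd_feedforward(x, v, x_ref, v_ref, a_ref, kp=4.0, kd=2.5):
        """Commanded acceleration: feed-forward reference acceleration
        plus a PD correction on the tracking error."""
        return a_ref + kp*(x_ref - x) + kd*(v_ref - v)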


[Videos: auto takeoff, auto trajectory, auto land]


Aerial Mapping
Our goal mission requires 3D range sensing for autonomous obstacle detection and aerial mapping. Aerial vehicles are particularly well suited to 3D mapping for two main reasons. First, they can quickly and efficiently scan very large areas while carrying on-board active sensors. Second, active sensors such as laser rangefinders perform better on-board aerial vehicles because, viewing the ground from above, they receive better signal returns from ground objects than ground-based systems do.

We have developed a mapping system which integrates a laser rangefinder (Riegl), custom synchronization hardware, and our inertial state estimator on-board an autonomous helicopter. The system scans the environment line by line using a spinning mirror during helicopter flight. Each line, consisting of an array of range measurements, is tagged with the helicopter state and registered in memory to incrementally build a 3D map. You can see the system in action in the aerial mapping video on the right.
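A minimal sketch of the registration step, assuming each laser return arrives with its mirror angle and the state estimator's pose, and a simplified frame convention (sensor frame aligned with the body frame, beam pointing down at zero mirror angle); the real system's geometry and calibration are more involved.

    import numpy as np

    def rotation_matrix(roll, pitch, yaw):
        """Body-to-world rotation (Z-Y-X Euler convention)."""
        cr, sr = np.cos(roll), np.sin(roll)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cy, sy = np.cos(yaw), np.sin(yaw)
        Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
        Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
        return Rz @ Ry @ Rx

    def register_scan_line(ranges, mirror_angles, position, attitude, world_map):
        """Tag one scan line with the helicopter state and drop its
        points into the incrementally built map."""
        R = rotation_matrix(*attitude)
        for rng, ang in zip(ranges, mirror_angles):
            # Beam in the sensor frame: swept across-track by the
            # spinning mirror, pointing downward from the helicopter.
            p_sensor = rng * np.array([0.0, np.sin(ang), -np.cos(ang)])
            world_map.append(position + R @ p_sensor)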


[Videos: aerial imagery, aerial mapping]


Object Recognition & Manipulation
Our goal mission requires detecting, tracking, and possibly manipulating objects during autonomous flight. We have built prototype systems which detect and track objects based on color and appearance.

We have developed a color discriminator which can detect objects based on their color. Built from high-speed, digitally controllable analog hardware, the discriminator can be configured at NTSC field rate (60 Hz) to look for as many RGB color combinations as necessary in sequential image fields. The discriminator normalizes RGB intensity levels to eliminate the effects of lighting, computes the distance between each pixel's color and the target color, and penalizes each pixel based on this distance. Most recently, the discriminator was used to pick up an orange disk from a barrel on the ground for the 1997 Unmanned Aerial Robotics Competition. Although the system did not work properly at the contest, you can see successful pickups in the videos (see mpegs to the right) shot prior to the contest. The pickup system tracked a blue magnet and aligned it with the orange disk as the helicopter descended to the estimated range to the disk. The helicopter's range to the disk was measured by triangulation.
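In software, the discriminator's per-pixel logic might look like the following sketch, which assumes a normalized-RGB (chromaticity) distance; the real system implemented this in analog hardware at field rate, and the threshold below is an illustrative choice.

    import numpy as np

    def color_mask(image_rgb, target_rgb, threshold=0.08):
        """Normalize out intensity, then penalize pixels by their distance
        from the target chromaticity; pixels under threshold are detections."""
        img = image_rgb.astype(float)
        chroma = img / (img.sum(axis=2, keepdims=True) + 1e-9)  # lighting-invariant
        t = np.asarray(target_rgb, float)
        t = t / t.sum()
        dist = np.linalg.norm(chroma - t, axis=2)               # per-pixel penalty
        return dist < threshold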

We have also developed a template-based detector to locate objects based on appearance. The detector locates image regions which resemble a picture of the object to find a potential match. Because searching every image for the object's template at every orientation requires enormous computational power, the detector exploits principal component methods to reduce its workload. Rotated templates look very similar and are typically highly correlated. The detector analyzes the template at every orientation and determines a small set of principal templates which yield accurate matches without matching every possible template angle. For example, 25 radioactive hazard symbol templates, rotated at 15-degree intervals, reduce to the 8 principal templates used for matching.
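A minimal sketch of the principal-template computation, assuming scipy's image rotation and an illustrative 95% energy cutoff; the detector's exact parameters and matching criterion may differ.

    import numpy as np
    from scipy.ndimage import rotate

    def principal_templates(template, step_deg=15, energy=0.95):
        """Stack the template at every rotation, then keep just enough
        principal components to explain most of the variance."""
        stack = [rotate(template, a, reshape=False).ravel()
                 for a in range(0, 360, step_deg)]
        X = np.array(stack, dtype=float)
        mu = X.mean(axis=0)                       # center before PCA
        U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
        k = int(np.searchsorted(np.cumsum(S**2) / np.sum(S**2), energy)) + 1
        return Vt[:k], mu                         # k principal templates + mean

    def match_score(window, basis, mu):
        """Reconstruction error of an image window in the principal subspace:
        a low error means the window resembles some rotation of the object."""
        w = window.ravel().astype(float) - mu
        proj = basis.T @ (basis @ w)
        return np.linalg.norm(w - proj)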


[Videos: object detection, indoor retrieval, outdoor retrieval]