
News & Notes

Companies Race to Provide Tools to Help Analyze a Deluge of Aerial Drone Data

With drone-captured data pouring in, aerial image analysis becomes increasingly vital

July 12, 2016

The first week in May, drones and drone parts took over one of the halls of the enormous Ernest N. Morial Convention Center in New Orleans, LA. This year, Xponential, one of the largest drone conventions of 2016, attracted 668 companies and exhibitors from around the world. Xponential offers businesses, universities, states, and even countries the chance to show off some of the newest drone technology and to learn about the latest trends in unmanned systems.

The annual convention is organized by the Association for Unmanned Vehicle Systems International (AUVSI), an unmanned systems and robotics trade organization. Virtually every variety of air, ground, and sea drone that exists today appears on the showroom floor, ranging from fully assembled drones and drone parts (such as engines or navigational systems) to sensors and high-end HD cameras.

Do you see what I see?

The imagery analysis and computer vision sector of the drone industry is becoming increasingly important as more and more drones take to the sky.

Drones offer access to the airspace without too much difficulty or cost, but interpretation of flight data can prove challenging, even for trained operators and established companies. When the U.S. military began fielding drones in ever-greater numbers, it experienced the difficulty of matching manpower to the demands of data. As the commercial drone marketplace grows, civilians will face the challenge of UAV data, too.

The term “image analytics” covers a wide variety of technical solutions and processes. At its core, however, whether it’s a military reconnaissance mission or a crop inspection, the essential question for companies working in drone imagery is the same: how to extract information valuable to decision makers from a vast amount of still and video imagery.

In many ways, the proliferation of aerial sensors produces a set of challenges that would have been familiar to those on the front line of aviation’s earliest days.
The rise of the aircraft as a tool for collecting aerial intelligence during World War I suddenly infused contemporary militaries with an unprecedented amount of data. Units dedicated entirely to interpreting and disseminating the imagery were created and analytical methods such as mosaic maps (aerial photographs laid out together to show a continuous representation of an area) enabled unparalleled insights into the enemy’s activities.

Following the war, new companies such as Brock & Weymouth and Fairchild Aerial Surveys took the technology from the conflict and adapted it to mapping and surveying American landscapes.
Something old, something new

Today, the old and incredibly important field of imagery analysis is undergoing a significant transformation as new methods of viewing and understanding imagery become available. Some of these solutions involve a process known as computer vision, which uses algorithms to isolate information from images. Others are concerned with creating tools that improve workflow and make data more accessible for interpretation. As onboard sensors become more sophisticated and drones continue to proliferate, the demand for these solutions is likely to become more acute.

“Even the cheapest video sensor gathers a tremendous amount of information,” explains Jon Damush, vice president and general manager of mission systems programs at Insitu, a maker of surveillance drones. (In April 2015, Insitu’s parent company Boeing acquired 2d3 Sensing, a provider of forensic tools for full-motion video.) “The problem was that the air vehicles and those systems were only being used as security cameras in the sky. People would watch the video live and not do anything with it.”

The U.S. military is probably the most familiar with the challenges presented by the increase of drone data. In addition to the growing number of drones deployed by the military in the past decade, sensor technology has improved dramatically, creating sophisticated cameras — like Wide-Area Motion Imagery systems — with high resolutions that show a vast amount of information. While an early sensor system onboard an MQ-1 Predator provided a single narrow field of view, a Wide-Area Motion Imagery system like the Gorgon Stare can increase that field of view by 10 or more times.
On Oct. 25, 2015, Insitu performed the first commercial beyond visual line-of-sight UAV operation in the contiguous U.S. The flight, performed in conjunction with BNSF Railway, showed how UAS can be used for railway safety and infrastructure inspection. Insitu
Data deluge

Speaking at the C4ISR annual conference in 2009, retired Lt. Gen. David A. Deptula famously predicted, “We are going to find ourselves, in the not too distant future, swimming in sensors and drowning in data.”

The Air Force employs thousands of analysts to turn the raw imagery from drones into actionable intelligence — a job known as processing, exploiting, and disseminating. These analysts are part of the Distributed Common Ground System (DCGS), a global apparatus of over 20 different sites. Intelligence analysts at these locations are the linchpin for military drone operations and can often be in direct communication with drone operators, military commanders, and troops in the field. Up to 84 personnel, including 30 full-motion video analysts, can be involved in supporting an MQ-1 Predator or MQ-9 Reaper Combat Air Patrol, each consisting of four aircraft.

As with drone operators, the intense demand for intelligence combined with heavy workloads has had a taxing effect on DCGS personnel. Although portrayed by the military as important work using exciting new technologies, the day-to-day responsibilities of an imagery analyst can often be mundane.

Analyst performance can be negatively affected by what is known as underload, an effect resulting from work that is overly tedious and monotonous, according to a 2012 RAND Corporation study titled Occupational Burnout and Retention of Air Force Distributed Common Ground System Intelligence Personnel. When perceptions of the job conflict with the reality of the work, intelligence personnel can suffer from high levels of cynicism and skepticism, reducing their effectiveness.

Computing power is seen by some as the solution to the data deluge and pressures on intelligence personnel. In 2013, another RAND study, Motion Imagery Processing and Exploitation, explored ways that algorithms and computers could potentially reduce the workload of human analysts.

Ideally, computers would automatically detect, identify, and track important objects or people, recognize flagged behaviors and patterns, and characterize certain scenes based on previous activity. By removing monotonous tasks, algorithms could let analysts focus on the valuable aspects of the imagery instead of spending time sorting through noise. Many of these algorithms are still in development.

Among the most accessible computer vision processes today is background subtraction — when a computer distinguishes between moving objects in the foreground and static objects in the background, and then discards aspects of the video that are not relevant to the analyst’s task. These algorithms currently help reduce the data consumption of CCTV surveillance cameras and track moving objects at sporting events.
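In its simplest form, background subtraction can be sketched in a few lines. The toy example below (illustrative only — not any vendor’s actual code, and the threshold and blend rate are arbitrary assumptions) keeps a running-average model of the static background and flags pixels that deviate from it as moving foreground:

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Running-average background model: slowly blend each new frame in."""
    return (1 - alpha) * background + alpha * frame

def foreground_mask(background, frame, threshold=30):
    """Flag pixels that differ from the background model by more than
    `threshold` intensity levels as moving foreground."""
    return np.abs(frame - background) > threshold

# Toy 8x8 grayscale scene: a static background with one bright moving object.
background = np.zeros((8, 8))
frame = background.copy()
frame[3:5, 3:5] = 200          # the moving object enters the scene

mask = foreground_mask(background, frame)
print(int(mask.sum()))         # 4 foreground pixels detected
background = update_background(background, frame)
```

Everything outside the mask can be dropped or compressed aggressively, which is how such algorithms cut the data consumption of surveillance feeds.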

In the high-stakes world of military drone operations, however, the Air Force has been unwilling to rely entirely on computers. Rather, the goal has been to develop technologies that can augment — not replace — the ability of the human analyst to interpret the imagery.
A sharper image

MotionDSP, a Burlingame, CA-based company that specializes in software for enhancing the quality of full-motion video streams, enables analysts to easily identify what’s important and what isn’t. When the company was founded in 2005, its original goal, according to CEO Sean Varah, was to improve the picture quality of cell phone videos for YouTube. Using a process known as computational super-resolution reconstruction, the company was able to extract data from each frame of a low-resolution video and rebuild the frames at a higher resolution.
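The core idea behind multi-frame super-resolution is that successive low-resolution frames sample the scene at slightly different offsets, so their samples can be combined on a finer grid. The 1-D sketch below is a minimal “shift-and-add” illustration under idealized assumptions (known integer offsets on the fine grid, no noise or blur); real systems like MotionDSP’s also perform sub-pixel registration and deconvolution:

```python
import numpy as np

def super_resolve(frames, offsets, factor):
    """Shift-and-add reconstruction: place each low-res frame's samples
    onto a fine grid at its known offset, then average overlaps."""
    n = len(frames[0]) * factor
    acc = np.zeros(n)
    cnt = np.zeros(n)
    for frame, off in zip(frames, offsets):
        idx = np.arange(len(frame)) * factor + off
        acc[idx] += frame
        cnt[idx] += 1
    cnt[cnt == 0] = 1          # avoid divide-by-zero on unsampled positions
    return acc / cnt

# Toy example: two half-pixel-shifted 2x-decimated exposures of the same
# signal together recover every high-resolution sample exactly.
hi = np.array([0., 1., 2., 3., 4., 5., 6., 7.])
frame_a = hi[0::2]             # samples at even positions
frame_b = hi[1::2]             # samples at odd positions (shifted capture)
rec = super_resolve([frame_a, frame_b], offsets=[0, 1], factor=2)
print(np.allclose(rec, hi))   # True
```

Neither low-resolution frame alone contains the full signal; only by fusing the differently shifted exposures does the fine detail come back.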

Drone operators can’t always wait for optimal filming conditions. Bad weather or long-distance shooting can degrade video quality, making it harder to interpret what’s happening in the footage. MotionDSP’s Ikena ISR software reduces the impact of these effects in real time, cutting haze and improving stabilization to enhance the quality of the video.

“If you wanted to improve the quality of video from the older MQ-1 drones, for example, you’d have to replace their cameras, encoders, communication systems, and all that,” Varah explains. “We can do that in software on the analyst’s computer they already have. We can basically bring new life to their older sensors.”
MotionDSP software helps analysts in many ways, including haze removal (top), revealing obscured details such as land formations and people, and improving digital fidelity (bottom), performing deinterlacing, stabilization, super resolution, multiframe denoising, and light color and contrast enhancement, simultaneously in real time. MotionDSP
From the military to the market

Today, MotionDSP builds software for customers with stakes slightly higher than those of YouTubers.

The military quickly saw the value in this software and, according to the Federal Procurement Data System, MotionDSP landed several contracts with the Air Force and Special Operations Command in 2012. Some contracts drew from the Rapid Innovation Fund, a Department of Defense program intended to quickly field new technologies.

However, Varah expects that more non-military customers will be interested in MotionDSP solutions. The company is already catering to first responders and businesses in the infrastructure and energy sectors, and has partnered with Canadian drone manufacturer Aeryon Labs to deliver commercial customers a drone analysis software package.

“The more time you have to wait to get your information, the less valuable it is,” Varah says.

In addition to video enhancement software, MotionDSP is also working on computer vision algorithms that cut down on analysts’ workload. For example, an analyst who is tasked with watching a house for any movement could place a virtual fence around the property that sounds an alarm whenever someone enters or leaves. Automating some of these tasks reduces the chance that the human sitting at the computer will become fatigued and make mistakes. In April 2015, MotionDSP announced a product called Ikena X that provides new computer vision solutions.
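A virtual fence of the kind described above reduces to a geometry test: is the tracked object inside the marked region, and did that change since the last frame? The sketch below is a generic illustration (the function names, coordinates, and alert labels are hypothetical, not Ikena’s implementation), using the classic ray-casting point-in-polygon test:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: count how many polygon edges a rightward ray
    from (x, y) crosses; an odd count means the point is inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                       # edge spans this y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def fence_alarm(prev_pos, cur_pos, fence):
    """Alert when a tracked object crosses the virtual fence in either
    direction between two consecutive frames."""
    was_in = point_in_polygon(*prev_pos, fence)
    is_in = point_in_polygon(*cur_pos, fence)
    if was_in != is_in:
        return "entered" if is_in else "left"
    return None

# Hypothetical fence around a property, in pixel coordinates.
fence = [(10, 10), (50, 10), (50, 50), (10, 50)]
print(fence_alarm((5, 30), (20, 30), fence))   # entered
print(fence_alarm((20, 30), (25, 35), fence))  # None
```

Running such a check per frame is cheap, so the analyst is only interrupted when a crossing actually occurs.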

“We would never say that we’re going to replace a human analyst,” Varah says. “What we’re trying to do is to make that human analyst as efficient as possible.”

But MotionDSP is by no means the only player — more and more companies are popping up in the increasingly crowded commercial drone data analysis marketplace. Airware, a San Francisco-based company, is positioning itself as the go-to software solution for enterprise users. The company, which advertises itself as the provider of the premier drone operating platform, began building autopilot hardware for drones in 2011, but is expanding its offerings.

Its main product, the Aerial Information Platform, allows drone users to do everything from planning and executing flights to cataloging and analyzing data. From flight data, users can automatically measure distances, stitch images together into mosaics, and create 3D models.

“What we’re taking to these enterprise customers is an entire end-to-end workflow that is seamlessly integrated,” says Alan Poole, head of product at Airware. “The idea is that this is going to help save time and give time back to the operator to do what’s important to them.”
In addition to helping catalog and analyze information, companies such as Airware offer software packages that give end users all the tools they’ll need to complete a job. Airware
Robo search-and-rescue

Today, Airware is one of the best-funded startups in the drone world — in March 2016, the company announced that it had raised over $30 million. As with the military, civilian drone users are recognizing that it isn’t enough that drones can easily and cheaply put a camera in the sky. The value comes from efficiently deriving data from the imagery that can be applied in different ways. Airware seeks to cater to the growing number of commercial drone users by creating a product that emphasizes ease of access to data and to analytical tools.

The quick turnaround from aerial data collection to analysis to information products is increasingly viewed as an integral aspect of drone operations. Nowhere is this truer in the civilian world than in search-and-rescue operations. This point was emphasized in a crowded lecture hall at Xponential where Traci Sarmiento, a Ph.D. student at Texas A&M University, explained how computer vision could be applied to drone images to help find missing persons after a natural disaster.

Drones are emerging as a critical tool for emergency responders to examine damage firsthand. Even with a drone, however, it can be difficult to comb large areas for details that look out of place, such as human clothing. Under normal circumstances, explained Sarmiento, three analysts are required to review each still image and come to a consensus as to whether first responders should be sent to the area. While this process can take as little as 10 seconds with trained personnel, a single flight can produce hundreds of images and, with multiple flights often required for each mission, the amount of time dedicated to processing the data can quickly get out of hand.

During the 2015 Memorial Day floods in Texas, the Center for Robot-Assisted Search and Rescue (CRASAR) at Texas A&M held two experiments to help reduce workload on the back end of drone operations. Students were asked to apply algorithms to images taken by drones to try to find people who may have been trapped in debris and floated downriver.

One student incorporated the RGB color spectrum into an anomaly detection algorithm in order to identify colors that appeared out of place in the landscape. This algorithm could isolate colors that indicated the presence of buildings or clothing. Another student created an algorithm to identify straight-edged objects in the images that might indicate a man-made structure.
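A color-based anomaly detector of this kind can be sketched simply: model the scene’s typical color, then flag pixels that deviate sharply from it. The example below is an illustrative stand-in (the per-channel z-score is a simplification; the threshold and the toy data are assumptions, not CRASAR’s actual algorithm):

```python
import numpy as np

def color_anomalies(pixels, threshold=3.0):
    """Flag pixels whose RGB values are statistical outliers relative to
    the scene's mean color (per-channel z-score; a full detector would
    use the color covariance, i.e. a Mahalanobis distance)."""
    mean = pixels.mean(axis=0)
    std = pixels.std(axis=0) + 1e-9   # avoid divide-by-zero
    z = np.abs(pixels - mean) / std
    return z.max(axis=1) > threshold

# Toy scene: mostly muddy browns, plus one bright-orange pixel that could
# indicate clothing or a life jacket.
rng = np.random.default_rng(0)
scene = rng.normal(loc=[90, 70, 50], scale=5, size=(999, 3))
scene = np.vstack([scene, [[255, 120, 0]]])   # the out-of-place color
flags = color_anomalies(scene)
print(bool(flags[-1]))   # True — the orange pixel is flagged
```

Note that a crude threshold like this can also flag a few ordinary pixels — the “heavy on false positives” behavior Murphy describes — but it still narrows hundreds of images down to the handful worth a human’s attention.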

“Instead of having to find three people to go look at this to validate, you’ve got the computer sorting through the images and doing a much more attentive task, giving you much more reliability,” Dr. Robin Murphy, CRASAR director and founder of Roboticists Without Borders, explains. “It’s going to spit out a bunch [of images], and it may be heavy on false positives, but now you know where to look.”

These tests showed that computer algorithms can help rule out the images least likely to contain traces of missing persons, allowing human analysts to focus on the areas that matter.
Still more to see

Whether it’s a rescue mission, a counterterrorism operation, or a building inspection, the wealth of imagery produced by drones is giving rise to new ways of processing and exploiting data. While the demand for these tools started with the military, as more drones enter U.S. airspace, the technology is transitioning to commercial and other civilian applications.

Computer vision is not perfect, and no one yet appears willing to leave potentially critical decisions entirely up to an algorithm. However, the future growth in the UAS industry could depend less on drones themselves, and more on their ability to deliver analytical insights.

Note: A version of this story appeared in the July/August 2016 issue of Drone360 magazine.
Featured image: Airware