Florent Martel, the fellow I talked to in May about the sense-and-avoid technology his company is developing for UAVs, sent an e-mail Tuesday afternoon reminding me about the live flight test he’ll be conducting today with UND. I didn’t see it for a few hours and, by then, it was too late to set up a photo assignment to illustrate the story. So I’ll be going out early tomorrow to catch the test.
As a preview, I mentioned in my story three technologies UND will be testing.
The first is Machine Visionaries’ sense-and-avoid technology. That’s the stuff Florent, a UND grad student, and his prof developed. I discussed it in May so I won’t repeat it here (click the link in the first graf if you missed it).
The second is something called PrecisionAg. Here’s how UND described it in a press release from two years ago:
The "PrecisionAg" digital imaging payload was designed by the UND Unmanned Aircraft Systems Engineering team to snap digital pictures of crops and rangeland for monitoring vegetation health, specifically for North Dakota agribusiness applications. After each payload flight, this image data can be analyzed to help farmers decide where to apply fertilizers, herbicides, and pesticides on each of their fields, in addition to assisting ranchers in monitoring their grazing operations. The PrecisionAg payload was flown successfully for the first time on Wednesday, June 27, 2007, the third and final day of the mission.
Florent said it measures vegetation health using NDVI, short for Normalized Difference Vegetation Index. The idea is that healthy plants reflect more near-infrared light than unhealthy plants; a plant would overheat if it absorbed too much NIR. It sounds like this is old technology, but making it small and light enough to mount on an unmanned aircraft is new.
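For the curious, the index itself is just a ratio of two reflectance bands. Here's a toy sketch (the band values below are made-up numbers, not anything from UND's payload):

```python
# NDVI sketch: healthy vegetation reflects strongly in near-infrared (NIR)
# and absorbs red light for photosynthesis, so this ratio separates healthy
# canopy from stressed plants or bare soil. Values range from -1 to 1.

def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

# Illustrative reflectance values only:
healthy = ndvi(nir=0.50, red=0.08)    # dense, healthy canopy -> high NDVI
stressed = ndvi(nir=0.30, red=0.20)   # sparse or stressed vegetation
bare_soil = ndvi(nir=0.25, red=0.22)  # little vegetation signal -> near zero

print(healthy, stressed, bare_soil)
```

A payload like PrecisionAg would presumably compute something like this per pixel across a whole field, which is how the imagery tells a farmer where to spray.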
He also mentioned the use of a TASE gimbal camera to snap pictures for research in image mosaics and super-resolution enhancement. I was going to ask him what that means, but his cell phone is apparently out of range. Some Googling revealed this 2001 lecture by Dr. Richard Schultz, who’s involved in Machine Visionaries with Martel:
In modern spy movies such as Patriot Games and Enemy of the State, whenever an analyst magnifies a satellite image on a computer, it appears as a perfect, highly detailed picture. In reality, enlarging a digital image by using the magnifying glass tool in Adobe Photoshop generally results in a very blocky scene. Real-world video enhancement algorithms are simply not capable of calculating the perfect results produced in Hollywood; however, additional visual information can be extracted from a digital image sequence by temporally integrating several adjacent frames to compute a super-resolution video still. Provided that people and objects move between the digital video frames, this motion can be exploited to improve definition and to actually see details where there were once blocky pixels.
The concepts of sampling and image resolution will be introduced, in the context of capturing a single digital picture using a flatbed scanner or a digital still image camera, as well as capturing a sequence of pictures using a digital video (DV) camera. The resulting digital imagery may be undersampled, in which each pixel appears blocky when viewed close-up. A $20 bill scanned at various resolutions (dots per inch, or dpi) will be presented to provide the audience with an intuitive understanding of this concept. As another example from the remote sensing scientific community, the Landsat 7 satellite provides 30-meter resolution imagery to its end users. In essence, this means that each image pixel represents a 30-meter by 30-meter square region on the Earth’s surface. Obviously, there are a large number of details contained within a single Landsat 7 pixel that cannot be observed from the data directly. Postprocessing the data using various interpolation methods can help to extract some additional details from the digital imagery.
Interpolation is the process of "connecting the dots," such that new signal points can be estimated between the known sample values. We will examine several methods of image interpolation that can be used to magnify a digital still image, and then compare these techniques to super-resolution video enhancement, in which a video still image is generated through the combination of several adjacent frames. A statistical method known as Bayesian maximum a posteriori (MAP) estimation will be utilized to compute the high-resolution image pixels from the original low-resolution data. The Bayesian estimation technique results in a highly-computational, iterative optimization problem that can be solved numerically using custom software. A number of high-resolution video stills will be presented, along with the inherent limitations of super-resolution enhancement technology.
This research is particularly useful for cleaning up surveillance and reconnaissance image sequences. For instance, after a crime takes place, it is often difficult to obtain an adequate picture of the suspects from the surveillance video. With super-resolution video enhancement, multiple video frames can be combined to extract a high-resolution image of the suspects and their distinguishing features, which in turn helps law enforcement agents identify the perpetrators. Quite obviously, there are a number of defense-related applications on the horizon.
Harry Nyquist, one of the pioneers of modern-day telecommunications technology and a graduate of the University of North Dakota (BSEE 1914; MSEE 1915), originally developed the sampling theorem, one of the most significant discoveries in signal processing. This theorem dictates the minimum sampling frequency necessary for the perfect reconstruction of a continuous-time signal from its discrete-time samples. Because of the massive increase in desktop computing power during this past decade, we are just now beginning to utilize and advance Dr. Nyquist’s theories in the digital image and video processing product development arena.
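To see what Schultz means by interpolation "connecting the dots," here's a toy one-dimensional sketch (the pixel values are made up; real methods like his Bayesian MAP approach are far more sophisticated than this):

```python
# Interpolation sketch: estimating new samples between known pixel values.
# Nearest-neighbor repetition gives the "blocky" look of naive magnification;
# linear interpolation smooths between neighbors.

def upsample_nearest(samples, factor):
    """Repeat each sample -- the blocky magnifying-glass effect."""
    return [s for s in samples for _ in range(factor)]

def upsample_linear(samples, factor):
    """Estimate in-between values on the straight line joining neighbors."""
    out = []
    for a, b in zip(samples, samples[1:]):
        out.extend(a + (b - a) * i / factor for i in range(factor))
    out.append(samples[-1])
    return out

row = [10, 20, 40, 30]              # one row of low-resolution pixel values
print(upsample_nearest(row, 2))     # blocky: each value just repeated
print(upsample_linear(row, 2))      # smoother: midpoints filled in
```

Either way, interpolation only guesses from one image; super-resolution, as the lecture describes, pulls real extra detail out of multiple frames.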
Schultz’s research focuses on enhancing low-resolution digital still images and video sequences to generate high-resolution pictures. Super-resolution enhancement technology, which is capable of increasing the level of detail and clarity of digital imagery for better viewing, has applications in law enforcement and national security.
Basically: take a ton of digital pictures and combine the details that one frame captures but another doesn’t into one big, highly detailed picture. The Pentagon seems very interested.
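The core trick can be shown with a toy one-dimensional example (this is the simple "shift-and-add" idea, not Schultz's Bayesian MAP method, and the signal values are invented):

```python
# Shift-and-add sketch: two low-resolution frames, offset from each other by
# half a pixel, each capture detail the other misses. Interleaving them onto
# a finer grid recovers the full high-resolution signal.

hi = [3, 7, 4, 9, 2, 8, 5, 6]       # the "true" high-resolution scene

frame_a = hi[0::2]                  # camera samples the even positions
frame_b = hi[1::2]                  # a half-pixel shift samples the odd ones

recovered = []
for a, b in zip(frame_a, frame_b):  # weave the two frames back together
    recovered.extend([a, b])

print(recovered == hi)              # the detail is all there, split across frames
```

Real footage doesn't come with such tidy half-pixel shifts, which is why the lecture's iterative estimation machinery is needed, but the principle is the same: motion between frames carries extra information.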
Pretty cool, eh? Even cooler, this stuff is so light a tiny unmanned aircraft like this one can haul it.