Remote sensing is defined as the technique of obtaining information about objects through the analysis of data collected by special instruments that are not in physical contact with the objects of investigation. As such, remote sensing can be regarded as “reconnaissance from a distance,” “teledetection,” or a form of the common adage “look but don’t touch.” Remote sensing thus differs from in situ sensing, where the instruments are immersed in, or physically touch, the objects of measurement. A common example of an in situ instrument is the soil thermometer.
Traditionally, the energy collected and measured in remote sensing has been electromagnetic radiation, including visible light and invisible thermal infrared (heat) energy, which is reflected or emitted in varying degrees by all natural and synthetic objects. The scope of remote sensing has recently been broadened to include acoustical or sound energy, which is propagated under water. With the inclusion of these two different forms of energy, the human eye and ear are examples of remote sensing data collection devices.
The instruments used for this special technology are known as remote sensors and include photographic cameras, mechanical scanners, and imaging radar systems. Regardless of type, they are designed to both collect and record specific types of energy that impinge upon them. Remote sensing devices can be differentiated in terms of whether they are active or passive. Active systems, such as radar and sonar, beam artificially produced energy to a target and record the reflected component. Passive systems, including the photographic camera, detect only energy emanating naturally from an object, such as reflected sunlight or thermal infrared emissions. Today, remote sensors, excluding sonar devices, are typically carried on aircraft and earth-orbiting spacecraft, which has led to the familiar phrase “eye in the sky.” Sonar systems propagate acoustical energy through water for the reconnaissance of subaqueous features.
To complete the remote sensing process, the data captured and recorded by remote sensing systems must be analyzed by interpretive and measurement techniques in order to provide useful information about the subjects of investigation. These techniques are diverse, ranging from traditional methods of visual interpretation to methods using sophisticated computer processing. It cannot be emphasized too strongly that data is not information. Accordingly, the two major components of remote sensing are data capture and data analysis.
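The distinction above between data and information can be made concrete with a standard example from remote sensing data analysis: the normalized difference vegetation index (NDVI), computed as (NIR − Red) / (NIR + Red) from reflectance measurements. The sketch below uses hypothetical per-pixel reflectance values; the surface types and thresholds are illustrative assumptions, not part of the original text.

```python
# Illustration of the data-vs-information distinction: raw reflectance
# values (data) become a surface classification (information) via the
# normalized difference vegetation index (NDVI), a standard index in
# remote sensing. All band values below are hypothetical.

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

# Hypothetical per-pixel reflectances (0.0-1.0) for three surface types.
pixels = {
    "dense vegetation": (0.50, 0.08),  # strong NIR reflection, low red
    "bare soil":        (0.30, 0.25),
    "open water":       (0.02, 0.05),  # water absorbs near-infrared
}

for surface, (nir, red) in pixels.items():
    value = ndvi(nir, red)
    # 0.3 is an illustrative cutoff; operational thresholds vary.
    label = "vegetated" if value > 0.3 else "non-vegetated"
    print(f"{surface}: NDVI = {value:+.2f} -> {label}")
```

Healthy vegetation reflects strongly in the near-infrared while absorbing red light, so it yields high positive NDVI values; water, which absorbs near-infrared, yields negative values.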
The Need For A Name
By the early 1960s, many new types of remote sensing devices were being introduced that could detect electromagnetic radiation in spectral regions far beyond the range of human vision and photographic film. This was also a time when many in the scientific community had great hopes for routine earth observations from orbiting satellites. To encompass these concepts, the term “remote sensing” was coined by Evelyn L. Pruitt, a geographer formerly with the Office of Naval Research, to replace the more limiting terms “aerial” and “photograph.” The new term, promoted through a series of symposia at the Willow Run Laboratories of the University of Michigan, gained immediate and widespread acceptance.
Although the term remote sensing has a recent origin, the technique has been used by humans since the dawn of our history. Every time we sense our surroundings with our eye-brain system, we are determining the size, shape, and color of objects from a distance by collecting and analyzing reflected visible light. In a similar manner, certain poisonous snakes use special heat sensors to perceive impressions of their surrounding environment, and bats use sound echoes to navigate and detect prey.
Introduction and History
The technology of modern remote sensing began with the invention of the camera more than 150 years ago. Although the first, rather primitive photographs were taken as “stills” on the ground, the idea and practice of looking down at the Earth’s surface emerged in the 1840s when pictures were taken from cameras secured to tethered balloons for purposes of topographic mapping. Perhaps the most novel platform at the end of the nineteenth century was the famed pigeon fleet that operated in Europe. By the First World War, cameras mounted on airplanes provided aerial views of fairly large surface areas that proved invaluable in military reconnaissance. From then until the early 1960s, the aerial photograph remained the single standard tool for depicting the surface from a vertical or oblique perspective.
Satellite remote sensing can be traced to the early days of the space age (both Russian and American programs) and actually began as a dual approach to imaging surfaces using several types of sensors from spacecraft. In 1946, V-2 rockets acquired from Germany after World War II were launched to high altitudes from White Sands, New Mexico. These rockets, while never attaining orbit, contained automated still or movie cameras that took pictures as the vehicle ascended. Then, with the emergence of the space program in the 1960s, Earth-orbiting cosmonauts and astronauts acted much like tourists by taking photos out the window of their spacecraft.
The term “remote sensing,” first used in the United States in the 1950s by Ms. Evelyn Pruitt of the U.S. Office of Naval Research, is now commonly used to describe the science—and art—of identifying, observing, and measuring an object without coming into direct contact with it. This process involves the detection and measurement of radiation of different wavelengths reflected or emitted from distant objects or materials, by which they may be identified and categorized by class/type, substance, and spatial distribution.