What is PHOTOGRAMMETRY?

December 14, 2018
Photogrammetry, as the name suggests, is a three-dimensional coordinate measurement technique that uses photographs as the fundamental medium for metrology (measurement). The fundamental principle used by photogrammetry is triangulation (in aerial mapping, applied as aerial triangulation). By photographing an object from at least two different locations, so-called "lines of sight" can be developed from each camera position to points on the object. These lines of sight (sometimes called rays because of their optical nature) are intersected mathematically to produce the three-dimensional coordinates of the points of interest.
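The intersection idea can be sketched in a few lines of code. This is only an illustration of the geometry, not any particular software's implementation: it assumes the two camera centres and the ray directions toward the object point are already known (in practice they come from camera calibration and orientation), and because real rays never cross exactly, it returns the midpoint of their closest approach.

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Intersect two lines of sight: c1, c2 are camera centres,
    d1, d2 are direction vectors of the rays toward the object point.
    Returns the midpoint of the rays' closest approach."""
    d1 = d1 / np.linalg.norm(d1)  # normalise ray directions
    d2 = d2 / np.linalg.norm(d2)
    b = c2 - c1
    a12 = d1 @ d2
    denom = 1.0 - a12 ** 2       # zero only if the rays are parallel
    # Least-squares parameters along each ray (closest approach)
    t1 = ((d1 @ b) - a12 * (d2 @ b)) / denom
    t2 = (a12 * (d1 @ b) - (d2 @ b)) / denom
    p1 = c1 + t1 * d1            # closest point on ray 1
    p2 = c2 + t2 * d2            # closest point on ray 2
    return (p1 + p2) / 2

# Two cameras 1 m apart, both sighting a point 1 m above the midpoint:
point = triangulate_midpoint(np.array([0.0, 0.0, 0.0]), np.array([0.5, 0.0, 1.0]),
                             np.array([1.0, 0.0, 0.0]), np.array([-0.5, 0.0, 1.0]))
# point is approximately (0.5, 0, 1)
```

With more than two photographs, the same least-squares idea extends to many rays, which is what bundle adjustment in photogrammetric software does at scale.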



The term photogrammetry was first used by the Prussian architect Albrecht Meydenbauer in 1867, who produced some of the first topographic maps and elevation drawings from photographs. Photogrammetric services are well established in topographic mapping, but in recent years the technique has been widely applied in architecture, industry, engineering, forensic analysis, underwater surveying, medicine, geology and many other fields for the production of accurate 3D data.






Branches of photogrammetry:


There are two broad branches of photogrammetry:
Metric photogrammetry: deals with precise measurements and computations from photographs to determine the size, shape and position of photographed features and/or to obtain other information such as relative positions (coordinates), areas and volumes. These photographs are taken with a metric camera and are used mainly in engineering fields such as surveying.
Interpretive photogrammetry: deals with the recognition and identification of features in a photograph, such as shape, size, shadow and pattern, adding value and intelligence to the information seen in the photograph (annotation).


Remote Sensing:


Remote sensing is a technology closely aligned with photogrammetry, as it also collects information from images. The term derives from the fact that information about objects and features is collected without coming into contact with them. Where remote sensing differs from photogrammetry is in the type of information collected, which tends to be based on differences in colour (spectral response), so land use and land cover are among the main products of remote sensing processing. Remote sensing was originally conceived to exploit the large number of colour bands in satellite images to create 2D data, primarily for GIS. Today, remote sensing tools are used with all types of imagery to help derive 2D data such as slope, and modern software tends to host a much wider range of imaging technologies, such as image mosaicking, 3D visualization, GIS, radar and softcopy photogrammetry.



Key concepts:


1. Spatial resolution describes the ability of a sensor to identify the smallest detail of a pattern in an image; in other words, the distance between distinguishable patterns or objects in an image that can be separated from each other, often expressed in metres.
2. Spectral resolution is the sensitivity of a sensor to a specific frequency range (mostly relevant for satellite and airborne sensors). The ranges covered often include not only visible light but also non-visible electromagnetic radiation. Objects on the ground can be identified by the different wavelengths they reflect (interpreted as different colours), but the sensor must be able to detect these wavelengths in order to see these features.
3. Radiometric resolution is often called contrast. It describes the ability of the sensor to measure the signal strength (reflectance) or brightness of objects. The more sensitive a sensor is to the reflectance of an object compared to its surroundings, the smaller the object that can be detected and identified.
4. Temporal resolution, more formally known as the revisit period, depends on several factors: how long it takes a satellite to return to (approximately) the same location, the swath of the sensor (related to its footprint), and whether or not the sensor can be pointed off-nadir.
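Spatial resolution in aerial imaging is commonly quantified as ground sample distance (GSD): the size of one image pixel projected onto the ground. As a sketch, the standard thin-lens relation below computes it from pixel pitch, focal length and flying height; the example numbers (2.4 µm pixel, 8.8 mm focal length, 100 m altitude) are typical drone-camera values chosen purely for illustration, and the formula assumes nadir imagery over flat terrain.

```python
def ground_sample_distance(pixel_size_m, focal_length_m, altitude_m):
    """Ground sample distance: the ground footprint of one pixel.
    All inputs in metres; result in metres per pixel."""
    return pixel_size_m * altitude_m / focal_length_m

# Example: 2.4 µm pixel pitch, 8.8 mm focal length, flying at 100 m
gsd = ground_sample_distance(2.4e-6, 8.8e-3, 100.0)
# gsd is roughly 0.027 m, i.e. about 2.7 cm per pixel
```

Halving the flying height halves the GSD, which is why low-altitude drone surveys resolve far finer detail than satellite imagery of the same scene.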



To Know More:
Visit Website: https://pixstrait.blogspot.com/
Email Me: contact.piubhattcharjee@gmail.com


