Variable-resolution image sampler

Date
1991
Authors
Tarcan, Rıza Can
Journal Title
Journal ISSN
Volume Title
Publisher
Fen Bilimleri Enstitüsü
Abstract
Digital image processing is gaining increasing importance in science and engineering. The term digital image processing generally refers to the processing of two-dimensional pictures by a digital computer. Fields in which digital image processing methods are used include remote sensing via satellites and other spacecraft, image storage and transmission in commercial applications, medical applications, and robotics in industry. The most important problem encountered in robot vision today is speed: an image with 256 picture elements along the vertical axis and 256 along the horizontal axis contains a total of 65,536 picture elements, which shows how much information has to be processed even for such a low-resolution image.

In this work, an image sampling circuit that reduces the amount of information to be processed has been designed, taking the human eye as a model. The distinguishing feature of the circuit is that it allows the microcomputer that processes the data to sample a desired region of the picture at a desired resolution. The sampling circuit designed in this work is in fact a subunit of a system containing more than one processor. The processors, working in parallel, each sample different regions of the same picture at different resolutions in order to carry out the tasks assigned to them. For example, while the part that processes the entire picture at the coarsest resolution detects motion and tries to determine the direction of the moving object, a second processor, sampling a much smaller region over the moving area at the finest resolution, tries to identify what the moving object is.
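As a rough software illustration of this idea (not the thesis hardware itself), the C sketch below models sampling a user-chosen window at a programmable decimation step; the frame size, function name, and buffer handling are assumptions made only for this example.

```c
#define WIDTH  256   /* full-frame resolution assumed in the abstract */
#define HEIGHT 256

/*
 * Hypothetical software model of the variable-resolution sampler:
 * copy every `step`-th pixel of a user-chosen window into a small
 * output buffer.  A coarse step over the whole frame would serve a
 * motion detector; a fine step over a small window would serve a
 * pattern recognizer.
 */
static int sample_window(const unsigned char frame[HEIGHT][WIDTH],
                         int x0, int y0, int w, int h, int step,
                         unsigned char *out, int out_size)
{
    int n = 0;
    for (int y = y0; y < y0 + h && y < HEIGHT; y += step)
        for (int x = x0; x < x0 + w && x < WIDTH; x += step) {
            if (n >= out_size)
                return n;        /* output buffer full */
            out[n++] = frame[y][x];
        }
    return n;                    /* number of samples produced */
}
```

Under these assumptions, a coarse pass (step 8 over the whole frame) yields 32 x 32 = 1024 samples and a fine pass (step 1 over a 64 x 64 window) yields 4096 samples, both far fewer than the 65,536 pixels of the full frame.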
Digital image processing is a rapidly evolving field with growing applications in science and engineering. Image processing holds the possibility of developing the ultimate machine that could perform the visual functions of all living beings. Many theoretical as well as technological breakthroughs are required before we could build such a machine. In the meantime, there is an abundance of image processing applications that can serve mankind with the available and anticipated technology of the near future.

The term digital image processing generally refers to the processing of a two-dimensional picture by a digital computer. In a broader context, it implies digital processing of any two-dimensional data. A digital image is an array of real or complex numbers represented by a finite number of bits. Digital image processing has a broad spectrum of applications, such as remote sensing via satellites and other spacecraft, image transmission and storage for business applications, medical processing, radar, sonar, and acoustic image processing, robotics, and automated inspection of industrial parts. Images acquired by satellites are useful in tracking of earth resources; geographical mapping; prediction of agricultural crops, urban growth, and weather; flood and fire control; and many other environmental applications. Space image applications include recognition and analysis of objects contained in images obtained from deep space-probe missions. Image transmission and storage applications occur in broadcast television, teleconferencing, transmission of facsimile images (printed documents and graphics) for office automation, communication over computer networks, closed-circuit television based security monitoring systems, and in military communications. In medical applications one is concerned with processing of chest X rays, cineangiograms, projection images of transaxial tomography, and other medical images that occur in radiology, nuclear magnetic resonance (NMR), and ultrasonic scanning. These images may be used for patient screening and monitoring or for detection of tumors or other disease in patients. Radar and sonar images are used for detection and recognition of various types of targets or in guidance and maneuvering of aircraft or missile systems. There are many other applications ranging from robot vision for industrial automation to image synthesis for cartoon making or fashion design. In other words, whenever a human or a machine or any other entity receives data of two or more dimensions, an image is processed.

In image representation one is concerned with characterization of the quantity that each picture element (also called pixel or pel) represents. An image could represent luminances of objects in a scene (such as pictures taken by an ordinary camera), the absorption characteristics of the body tissue (X-ray imaging), the radar cross section of a target (radar imaging), the temperature profile of a region (infrared imaging), or the gravitational field in an area (geophysical imaging). In general, any two-dimensional function that bears information can be considered an image. Image models give a logical or quantitative description of the properties of this function. An important consideration in image representation is the fidelity or intelligibility criteria for measuring the quality of an image or the performance of a processing technique. Specification of such measures requires models of perception of contrast, spatial frequencies, color, and so on, as discussed in Chapter 3.
Knowledge of a fidelity criterion helps in designing the imaging sensor, because it tells us the variables that should be measured most accurately. The fundamental requirement of digital processing is that images be sampled and quantized. The sampling rate (number of pixels per unit area) has to be large enough to preserve the useful information in an image. It is determined by the bandwidth of the image. For example, the bandwidth of a raster-scanned common television signal is about 4 MHz. From the sampling theorem, this requires a minimum sampling rate of 8 MHz. At 30 frames/s, this means each frame should contain approximately 266,000 pixels. Thus, for a 512-line raster, each image frame contains approximately 512 x 512 pixels. Image quantization is the analog-to-digital conversion of a sampled image to a finite number of gray levels.

Statistical models describe an image as a member of an ensemble, often characterized by its mean and covariance functions. This permits development of algorithms that are useful for an entire class or an ensemble of images rather than for a single image. Often the ensemble is assumed to be stationary so that the mean and covariance functions can easily be estimated. Stationary models are useful in data compression problems such as transform coding, restoration problems such as Wiener filtering, and in other applications where global properties of the ensemble are sufficient. A more effective use of these models in image processing is to consider them to be spatially varying or piecewise spatially invariant.

In image enhancement, the goal is to accentuate certain image features for subsequent analysis or for image display. Examples include contrast and edge enhancement, pseudocoloring, noise filtering, sharpening, and magnifying. Image enhancement is useful in feature extraction, image analysis, and visual information display. The enhancement process itself does not increase the inherent information content in the data. It simply emphasizes certain specified image characteristics. Enhancement algorithms are generally interactive and application-dependent. Image enhancement techniques, such as contrast stretching, map each gray level into another gray level by a predetermined transformation. An example is the histogram equalization method, where the input gray levels are mapped so that the output gray level distribution is uniform.

In presenting the output of an imaging system to a human observer, it is essential to consider how it is transformed into information by the viewer. Understanding of the visual perception process is important for developing measures of image fidelity, which aid in the design and evaluation of image processing algorithms and imaging systems. Visual image data itself represents the spatial distribution of physical quantities such as luminance and spatial frequencies of an object. The perceived information may be represented by attributes such as brightness, color, and edges. Our primary goal here is to study how the perceptual information may be represented quantitatively. Light is the electromagnetic radiation that stimulates our visual response. It is expressed as a spectral energy distribution L(λ), where λ is the wavelength that lies in the visible region, 350 nm to 680 nm, of the electromagnetic spectrum. The illumination range over which the visual system can operate is roughly 1 to 10^10, or 10 orders of magnitude.
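As a concrete illustration of the histogram equalization method mentioned above, the following minimal C sketch equalizes an 8-bit grayscale image held in a flat buffer; the function name and buffer representation are assumptions made only for this example.

```c
#include <stddef.h>

#define LEVELS 256   /* 8-bit gray levels assumed */

/*
 * Minimal histogram equalization sketch: map each input gray level
 * through the normalized cumulative histogram so that the output
 * gray level distribution is approximately uniform.
 */
static void equalize(unsigned char *pixels, size_t n)
{
    size_t hist[LEVELS] = {0};
    unsigned char map[LEVELS];
    size_t cum = 0;

    if (n == 0)
        return;

    for (size_t i = 0; i < n; i++)
        hist[pixels[i]]++;               /* count occurrences of each level */

    for (int g = 0; g < LEVELS; g++) {
        cum += hist[g];
        /* scale the cumulative count to the full output range */
        map[g] = (unsigned char)((cum * (LEVELS - 1)) / n);
    }

    for (size_t i = 0; i < n; i++)
        pixels[i] = map[pixels[i]];      /* apply the predetermined mapping */
}
```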
The retina of the human eye contains two types of photoreceptors called rods and cones. The rods, about 100 million in number, are relatively long and thin. They provide scotopic vision, which is the visual response at the lower several orders of magnitude of illumination. The cones, many fewer in number (about 6.5 million), are shorter and thicker and are less sensitive than the rods. They provide photopic vision, the visual response at the higher 5 to 6 orders of magnitude of illumination (for instance, in a well-lighted room or bright sunlight). In the intermediate region of illumination, both rods and cones are active and provide mesopic vision. We are primarily concerned with photopic vision, since electronic image displays are well lighted. The cones are also responsible for color vision. They are densely packed in the center of the retina (called the fovea) at a density of about 120 cones per degree of arc subtended in the field of vision. This corresponds to a spacing of about 0.5 min of arc, or 2 μm. The density of cones falls off rapidly outside a circle of 1° radius from the fovea. The pupil of the eye acts as an aperture. In bright light it is about 2 mm in diameter and acts as a low-pass filter (for green light) with a passband of about 60 cycles per degree. The cones are laterally connected by horizontal cells and have a forward connection with bipolar cells. The bipolar cells are connected to ganglion cells, which join to form the optic nerve that provides communication to the central nervous system.

For the human eye, V(λ) is a bell-shaped curve whose characteristics depend on whether the vision is scotopic or photopic. The luminance of an object is independent of the luminances of the surrounding objects. The brightness (also called apparent brightness) of an object is the perceived luminance and depends on the luminance of the surround. Two objects with different surroundings could have identical luminances but different brightnesses. The following visual phenomena exemplify the differences between luminance and brightness.

Image fidelity criteria are useful for measuring image quality and for rating the performance of a processing technique or a vision system. There are two types of criteria that are used for evaluation of image quality, subjective and quantitative. The subjective criteria use rating scales such as goodness scales and impairment scales. A goodness scale may be a global scale or a group scale. The overall goodness criterion rates image quality on a scale ranging from excellent to unsatisfactory. A training set of images is used to calibrate such a scale. The group goodness scale is based on comparisons within a set of images. The impairment scale rates an image on the basis of the level of degradation present in the image when compared with an ideal image. It is useful in applications such as image coding, where the encoding process introduces degradations in the output image. Sometimes a method called bubble sort is used in rating images. Two images A and B from a group are compared and their order is determined (say it is AB). Then the third image C is compared with B and the order ABC or ACB is established. If the order is ACB, then A and C are compared and the new order is established. In this way, the best image bubbles to the top if no ties are allowed. Numerical rating may be given after the images have been ranked.
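The bubble-sort rating procedure described above can be sketched in a few lines of C. The comparison callback stands in for the subjective pairwise judgement of a human observer; the names and signature below are assumptions for this illustration only.

```c
/*
 * Sketch of the "bubble sort" rating procedure: repeated pairwise
 * comparisons let the best image bubble to the top of the list.
 * `better(a, b)` is a placeholder for the subjective judgement
 * "image a looks better than image b".
 */
typedef int (*compare_fn)(int image_a, int image_b);

static void rank_images(int ids[], int count, compare_fn better)
{
    for (int pass = 0; pass < count - 1; pass++)
        for (int i = 0; i < count - 1 - pass; i++)
            if (better(ids[i + 1], ids[i])) {
                int tmp    = ids[i];     /* swap the pair into order */
                ids[i]     = ids[i + 1];
                ids[i + 1] = tmp;
            }
}
```

After all passes the array holds the images ranked from best to worst, and numerical ratings may then be assigned to the ranked list.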
In this thesis, the motivation for a multi-resolution image sampler arises from the fact that the time needed to process an ordinary image is extremely long. In reality, only a small portion of an image actually contains the useful information. For example, in a motion detection and target tracking system, the object never occupies the whole picture but resides in some part of it. Furthermore, not every portion of the picture needs to be sampled at the same rate. A motion detector does not care about the shape of the object, so a rough sampling strategy can be applied; in a pattern recognition system, however, the sampling rate should be high.

The sampler hardware designed in this work satisfies the changing demands of image processing systems. Different segments of the picture can be sampled with user-programmable sampling rates. The sampling window is determined by the user, and its coordinates can be anywhere in the picture. The functional blocks of the system are explained below.

Analog-to-digital converter and synchronisation separator block: This block uses the CA3306, a 15 MHz six-bit flash analog-to-digital converter. The high and low reference voltages needed by the converter are obtained using two operational amplifiers. Synchronisation separation is handled by the LM1881 integrated circuit. The outputs of this block are the six-bit converted pixel values, which are stored in random access memory, the vertical synchronisation, the composite synchronisation, and an EVEN/ODD signal indicating whether the current scan lines are even or odd.

Control block: This block generates the 10 MHz clock signal and controls the data flow from the converter to the memory and between the board and the computer. It also uses the window information coming from the window detector circuit. The control block waits for the START signal from the computer and then begins sampling. After sampling is completed, the computer is informed that the sampled values can be transferred from the sampler's memory to the computer's memory.

Window detector circuit: This circuit consists of programmable counters and latches, which determine the window coordinates and the sampling rate, respectively. The count values are supplied by the computer.

Memory block: The sampled pixel values are stored in 128K bytes of random access memory. The memory block consists of four 32K memory devices with 150 ns access time, whereas the 10 MHz sampling scheme requires a 100 ns access time. To overcome this timing problem, a parallel storing technique is used: the memory is organised as two banks, and while one sample is being written into the first bank, the following sample is written into the second bank. To stabilise the written data, each memory bank has a latch that holds the data. To let the computer access the memory locations, up and down counters are used for addressing.

An expansion slot of an IBM-compatible 80386 computer is used to establish communication between the sampling board and the computer. The output port addresses are user-selectable by means of DIP switches on the sampling board.

With the circuit presented in this thesis, it is possible to reduce both the amount of processed data and the processing time. Much higher processing speed can be obtained by using digital signal processors instead of an IBM-compatible computer, and much higher reliability can be obtained by realising the circuit with PAL devices or gate arrays.
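To illustrate how a host program might drive the window detector and control block described above, the following C sketch writes window coordinates and a sampling rate to the board and then issues START. It is a sketch only: the base address, register offsets, and status bit are purely hypothetical (the real board selects its port addresses with DIP switches and defines its own register map), and it assumes a DOS-era compiler that provides outp()/inp() in <conio.h>, such as Microsoft C.

```c
#include <conio.h>   /* outp()/inp() port I/O, as in DOS-era Microsoft C */

/* Illustrative register map only -- not the actual board layout. */
#define BASE        0x300          /* example DIP-switch-selected base port */
#define REG_X0      (BASE + 0)     /* window left coordinate                */
#define REG_Y0      (BASE + 1)     /* window top coordinate                 */
#define REG_WIDTH   (BASE + 2)     /* window width                          */
#define REG_HEIGHT  (BASE + 3)     /* window height                         */
#define REG_STEP    (BASE + 4)     /* sampling rate (decimation factor)     */
#define REG_CTRL    (BASE + 5)     /* writing 1 issues START                */
#define REG_STATUS  (BASE + 6)     /* bit 0 set when sampling is complete   */

static void start_sampling(int x0, int y0, int w, int h, int step)
{
    outp(REG_X0, x0);                        /* program the window detector */
    outp(REG_Y0, y0);
    outp(REG_WIDTH, w);
    outp(REG_HEIGHT, h);
    outp(REG_STEP, step);                    /* programmable sampling rate  */
    outp(REG_CTRL, 1);                       /* START                       */
    while ((inp(REG_STATUS) & 0x01) == 0)    /* wait until the board is done */
        ;
    /* the sampled pixels can now be transferred from the board's memory */
}
```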
Description
Thesis (M.Sc.) -- İstanbul Teknik Üniversitesi, Fen Bilimleri Enstitüsü, 1991
Keywords
Image sampling circuit, Digital image processing
Citation