Yapay sinir ağları kullanılarak EKG verilerinin sıkıştırılması

dc.contributor.advisorKorürek, Mehmet
dc.contributor.authorKoçyiğit, Yücel
dc.contributor.authorID55985
dc.contributor.departmentBiyomedikal Mühendisliği
dc.contributor.departmentBiomedical Engineering
dc.date1996
dc.date.accessioned2020-09-24T09:17:38Z
dc.date.available2020-09-24T09:17:38Z
dc.date.issued1996
dc.descriptionTez (Yüksek Lisans) -- İstanbul Teknik Üniversitesi, Fen Bilimleri Enstitüsü, 1996
dc.descriptionThesis (M.Sc.) -- İstanbul Technical University, Institute of Science and Technology, 1996
dc.description.abstractData reduction methods for electrocardiographic signals have been used with growing interest in recent years: without causing any loss of clinical information in the signal, they aim to reduce the data volume for storage, transmission or analysis and to increase the data transfer rate by reducing the required transmission channel bandwidth. For this reason, many algorithms for compressing the electrocardiogram (ECG) have been proposed over the last 30 years. In this thesis, compression and reconstruction of ECG data with artificial neural networks (ANN), a topical subject, were realized in software. Exploiting the working principle of the ANN, the ECG signals were applied to both the input and the output of the network during the training stage; the weight parameters obtained in this way formed the basis for future ECG data. Compression and reconstruction were performed with two separate sets of weight parameters. The first chapter of the thesis gives general information on the necessity of data compression and on the most widely used compression techniques. The second chapter presents detailed information on the structure, functions and types of the artificial neural networks used in this study. The findings and conclusions chapter explains the results obtained from the study. Finally, the software that was developed is given in the appendix.
dc.description.abstractThe continuing proliferation of computerized electrocardiogram (ECG) processing systems, along with increased feature performance requirements and the demand for lower cost medical care, has mandated reliable, accurate, and more efficient ECG data compression techniques. The importance of ECG data compression has become evident in many aspects of computerized electrocardiography, including: a) increased storage capacity of ECGs as databases for subsequent comparison or evaluation, b) feasibility of transmitting real-time ECGs over the public phone network, c) implementation of cost-effective real-time rhythm algorithms, d) economical rapid transmission of off-line ECGs over public phone lines to a remote interpretation center, and e) improved functionality of ambulatory ECG monitors and recorders. The main goal of any compression technique is to achieve maximum data volume reduction while preserving the significant signal morphology features upon reconstruction. Conceptually, data compression is the process of detecting and eliminating redundancies in a given data set. Redundancy in a digital signal exists whenever adjacent signal samples are statistically dependent and/or the quantized signal amplitudes do not occur with equal probability. However, the first step towards ECG data compression is the use of a minimum sampling rate and word length; further compression of the ECG signal can then be achieved by exploiting the known statistical properties of the signal. Data compression techniques have been utilized in a broad spectrum of areas such as speech, image, and telemetry. Data compression methods are mainly classified into three major categories: a) direct data compression, b) transformation methods, and c) parameter extraction techniques. Data compressed by the transformation or the direct data compression methods contains transformed or actual samples of the original signal, from which the original data are reconstructed by an inverse process. The direct data compressors base their detection of redundancies on direct analysis of the actual signal samples; in contrast, transformation compression methods mainly utilize spectral and energy distribution analysis for the detection of redundancies. The parameter extraction method, on the other hand, is an irreversible process with which a particular characteristic or parameter of the signal is extracted; the extracted parameters (e.g., a measurement of the probability distribution) are subsequently utilized for classification based on a priori knowledge of the signal features. Existing data compression techniques for ECG signals lie in two of the three categories described: the direct data and transformation methods. The direct data compression techniques include ECG differential pulse code modulation with entropy coding and the AZTEC, Turning-point, CORTES, Fan and SAPA algorithms; some of the transformation methods are the Fourier, Walsh and Karhunen-Loève transforms. Direct data compression techniques for ECG signals have shown more efficient performance than the transformation techniques, particularly with regard to processing speed and generally with regard to compression ratio.
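As an illustration of the direct data compression category just mentioned, the following is a minimal C sketch of the Turning Point (TP) method as it is commonly described in the ECG compression literature; the thesis only names the algorithm, so the routine, the variable names and the toy input below are assumptions, not the author's implementation.

```c
/* Illustrative sketch of the Turning Point (TP) direct data reduction
 * method: of every two incoming samples, the one that preserves a local
 * extremum ("turning point") is retained. Toy data and names are assumed. */
#include <stdio.h>

static int sgn(int x) { return (x > 0) - (x < 0); }

/* Reduce n samples from in[] into out[]; returns the number kept. */
static int turning_point(const int *in, int n, int *out)
{
    int kept = 0;
    int x0 = in[0];                      /* last retained sample */
    out[kept++] = x0;
    for (int i = 1; i + 1 < n; i += 2) {
        int x1 = in[i], x2 = in[i + 1];
        /* If the slope changes sign at x1, x1 is a turning point and is
         * kept; otherwise x1 is dropped and x2 is kept. */
        x0 = (sgn(x1 - x0) * sgn(x2 - x1) < 0) ? x1 : x2;
        out[kept++] = x0;
    }
    return kept;
}

int main(void)
{
    int ecg[] = { 0, 2, 5, 9, 14, 10, 4, 1, 0, 0, 1, 2 };   /* toy samples */
    int out[sizeof ecg / sizeof ecg[0]];
    int n = (int)(sizeof ecg / sizeof ecg[0]);
    printf("kept %d of %d samples\n", turning_point(ecg, n, out), n);
    return 0;
}
```

Because exactly one of every two samples is retained, this method gives a fixed 2:1 reduction regardless of signal content, which is why it is usually quoted as an example of a fast direct technique.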
When any comparison among ECG data compression techniques is made, the following properties are considered:
1) Signal sampling frequency (f0): in the analog-to-digital converter used to digitize the ECG signal, the sampling frequency may be chosen differently according to the purpose, but it has generally been selected as 500 Hz.
2) Number of bits per digital sample (p, "precision"): the word length, which determines the resolution of the stored ECG data, is usually 8 or 12 bits.
3) Compression ratio (CR): this is one of the important parameters of a data compression algorithm, and a large value of this ratio indicates the success of the algorithm:
CR = (number of samples before compression) / (number of samples after compression)   (1)
4) Performance index (PRD, "percent root-mean-square difference"): the PRD is the other important parameter of an algorithm, and a small value of the PRD indicates its success:
PRD = 100 · sqrt( Σ_n [x_org(n) − x_rec(n)]² / Σ_n x_org(n)² )   (2)
where x_org(n) and x_rec(n) are samples of the original and the reconstructed data.
5) Processing speed: the higher the speed of an algorithm, the quicker the process can be done and the more easily real-time studies can be carried out.
6) Database: most of the databases utilized in evaluating ECG compression algorithms are non-standard, so the results of an algorithm can differ according to the database used.
Let us now describe artificial neural networks (ANN). An ANN is an information processing system that works in a parallel and distributed way; it consists of processing elements connected by one-way signal channels (connections). Each processing element has a single output signal, which can, however, branch into many connections. An ANN can determine its weights and adjust itself to produce the desired outputs by using the inputs and desired outputs presented to the system. Recently it has been shown that neural networks have the ability to solve various complex problems. In particular, a multilayer feedforward network can learn the correspondence between input patterns and teaching values from many sample data by the error backpropagation algorithm; therefore, in this thesis, three-layer feedforward neural networks were used and taught by error backpropagation. Fig. 1 shows the general structure of such a network: an input layer, a hidden layer (units i) and an output layer (units j) connected by weights w_ji (Fig. 1. Ordinary type neural network). The output o_j of each unit j is defined by
o_j = f(net_j),   net_j = Σ_i w_ji o_i + θ_j   (3)
where o_i is the output of unit i, w_ji is the weight of the connection from unit i to unit j, θ_j is the bias of unit j, Σ_i is a summation over every unit i whose output flows into unit j, and f(x) is a monotonically increasing function; in practice the logistic activation function f(x) = 1/(1 + exp(−x)) is used. When a set of m-dimensional input patterns {I_p = (I_p1, I_p2, ..., I_pm); p ∈ P}, where P denotes the set of presented patterns, and their corresponding desired n-dimensional output patterns {t_p = (t_p1, t_p2, ..., t_pn); p ∈ P} are provided, the neural network is taught to output the ideal patterns as follows. The squared error function E_p for a pattern p is defined by
E_p = (1/2) Σ_{j ∈ output layer} (t_pj − o_pj)²   (4)
The purpose is to make E = Σ_p E_p small enough by choosing appropriate w_ji and θ_j. To realize this purpose, a pattern p ∈ P is chosen successively and randomly, and the w_ji and θ_j are changed by
Δ_p w_ji = −ε ∂E_p/∂w_ji   (5)
Δ_p θ_j = −ε ∂E_p/∂θ_j   (6)
where ε is a small positive constant. By calculating the right-hand sides of (5) and (6), it follows that
Δ_p w_ji = ε δ_pj o_pi   (7)
Δ_p θ_j = ε δ_pj   (8)
where
δ_pj = (t_pj − o_pj) o_pj (1 − o_pj)   when j belongs to the output layer,
δ_pj = o_pj (1 − o_pj) Σ_k δ_pk w_kj   otherwise,   (9)
and k above represents every unit k into which the output of unit j flows. In order to accelerate the computation, momentum terms are added to (7) and (8):
Δ_p w_ji(n + 1) = ε δ_pj o_pi + α Δ_p w_ji(n)   (10)
Δ_p θ_j(n + 1) = ε δ_pj + α Δ_p θ_j(n)   (11)
where n represents the number of learning cycles and α is a small value.
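As a concrete rendering of Eqs. (3)-(11), the following C sketch performs one backpropagation step with momentum for a small three-layer network of logistic units. The layer sizes (8:3:8), the learning constants and the toy training pattern are illustrative assumptions, not the thesis configuration; the same vector is used as input and as target, in the spirit of the training scheme described in the thesis.

```c
/* Minimal sketch of error backpropagation with momentum, Eqs. (3)-(11).
 * Layer sizes, constants and data are illustrative assumptions. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define NI 8                      /* input nodes  (illustrative) */
#define NH 3                      /* hidden nodes (illustrative) */
#define NO 8                      /* output nodes (illustrative) */

static double w1[NH][NI], b1[NH];       /* input  -> hidden weights, biases */
static double w2[NO][NH], b2[NO];       /* hidden -> output weights, biases */
static double m1[NH][NI], m2[NO][NH];   /* previous updates (momentum terms) */

static double f(double x) { return 1.0 / (1.0 + exp(-x)); }  /* logistic f(x) */

/* One training step for a single pattern; returns the error E_p of Eq. (4). */
static double train_pattern(const double *in, const double *t,
                            double eps, double alpha)
{
    double oh[NH], oo[NO], dh[NH], dout[NO], ep = 0.0;

    /* Forward pass, Eq. (3): o_j = f(sum_i w_ji o_i + theta_j). */
    for (int j = 0; j < NH; j++) {
        double net = b1[j];
        for (int i = 0; i < NI; i++) net += w1[j][i] * in[i];
        oh[j] = f(net);
    }
    for (int k = 0; k < NO; k++) {
        double net = b2[k];
        for (int j = 0; j < NH; j++) net += w2[k][j] * oh[j];
        oo[k] = f(net);
    }

    /* Deltas, Eq. (9): output layer uses (t_j - o_j); the hidden layer
     * back-propagates the output deltas through the hidden->output weights. */
    for (int k = 0; k < NO; k++) {
        dout[k] = (t[k] - oo[k]) * oo[k] * (1.0 - oo[k]);
        ep += 0.5 * (t[k] - oo[k]) * (t[k] - oo[k]);
    }
    for (int j = 0; j < NH; j++) {
        double s = 0.0;
        for (int k = 0; k < NO; k++) s += dout[k] * w2[k][j];
        dh[j] = s * oh[j] * (1.0 - oh[j]);
    }

    /* Weight updates with momentum, Eq. (10); biases follow Eq. (8)
     * (the momentum term of Eq. (11) is omitted here for brevity). */
    for (int k = 0; k < NO; k++) {
        for (int j = 0; j < NH; j++) {
            m2[k][j] = eps * dout[k] * oh[j] + alpha * m2[k][j];
            w2[k][j] += m2[k][j];
        }
        b2[k] += eps * dout[k];
    }
    for (int j = 0; j < NH; j++) {
        for (int i = 0; i < NI; i++) {
            m1[j][i] = eps * dh[j] * in[i] + alpha * m1[j][i];
            w1[j][i] += m1[j][i];
        }
        b1[j] += eps * dh[j];
    }
    return ep;
}

int main(void)
{
    /* Toy "ECG cycle": used as both input and target pattern. */
    double x[NI] = { 0.1, 0.4, 0.9, 0.7, 0.3, 0.2, 0.5, 0.8 };
    double ep = 0.0;

    srand(1);
    for (int j = 0; j < NH; j++)
        for (int i = 0; i < NI; i++)
            w1[j][i] = 0.2 * ((double)rand() / RAND_MAX - 0.5);
    for (int k = 0; k < NO; k++)
        for (int j = 0; j < NH; j++)
            w2[k][j] = 0.2 * ((double)rand() / RAND_MAX - 0.5);

    for (int n = 0; n < 5000; n++)
        ep = train_pattern(x, x, 0.5, 0.9);   /* eps and alpha are assumed */

    printf("E_p after training: %g\n", ep);
    return 0;
}
```

Scaling NI, NH and NO up to 200, 10 and 200 gives the network structure actually used in the thesis.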
In this thesis, compression and reconstruction of several ECG signals are carried out with an artificial neural network. The ECGs are taken from MIT/BIH database tapes 100, 107 and 109, which are respectively normal, paced-beat and left-bundle-branch-block records. The original data were sampled at 360 Hz, and this sampling rate is reduced to 200 Hz in this thesis. The compression process, implemented in Borland C and run on a computer with a Pentium-75 microprocessor, uses, as mentioned above, 200 input nodes, 10 or 5 hidden nodes, and 200 output nodes; these values correspond to compression ratios of 20:1 and 40:1 respectively, so the ANN structures used are 200:10:200 and 200:5:200. Generally, 20 cycles from the ECGs are used in the training process. As noted above, the learning coefficient (ε) and the momentum coefficient (α) must be selected so as to minimize the error in the training process. Reconstruction of the compressed ECG is performed with 10 input nodes (when 10 hidden nodes are used in compression) or 5 input nodes (when 5 hidden nodes are used) and 200 output nodes. The number of hidden nodes is selected, together with ε and α, to minimize the error. For the normal ECG, the original and reconstructed signals are shown in Fig. 2 (Fig. 2. Graphics of the original and reconstructed ECG from MIT/BIH tape 100 for 10 hidden nodes, and the difference between them). An advantage of this technique is that large amounts of data can be compressed: for example, 10 cycles from the ECGs are used for training and 100 cycles can then be compressed.
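To make the use of the two weight sets concrete, here is a hedged C sketch of how a trained 200:10:200 network would be applied: the input-to-hidden weights compress each 200-sample block into 10 hidden-node outputs (20:1), the hidden-to-output weights reconstruct the block, and prd() evaluates Eq. (2). The function names are assumptions and the weight arrays are zero placeholders; in practice the weights come from the training described above.

```c
/* Applying the two weight sets of a trained 200:10:200 network.
 * All weight values below are zero placeholders; in practice they are
 * obtained from backpropagation training. */
#include <stdio.h>
#include <math.h>

#define N_IN  200                 /* samples per ECG block */
#define N_HID 10                  /* hidden nodes = compressed coefficients */

static double w_ih[N_HID][N_IN], b_h[N_HID];   /* compression weight set */
static double w_ho[N_IN][N_HID], b_o[N_IN];    /* reconstruction weight set */

static double f(double x) { return 1.0 / (1.0 + exp(-x)); }

/* Compression: one 200-sample block -> 10 hidden-node outputs. */
static void compress_block(const double block[N_IN], double code[N_HID])
{
    for (int j = 0; j < N_HID; j++) {
        double net = b_h[j];
        for (int i = 0; i < N_IN; i++) net += w_ih[j][i] * block[i];
        code[j] = f(net);
    }
}

/* Reconstruction: 10 coefficients -> 200 output samples. */
static void reconstruct_block(const double code[N_HID], double block[N_IN])
{
    for (int k = 0; k < N_IN; k++) {
        double net = b_o[k];
        for (int j = 0; j < N_HID; j++) net += w_ho[k][j] * code[j];
        block[k] = f(net);
    }
}

/* Percent root-mean-square difference, Eq. (2). */
static double prd(const double *org, const double *rec, int n)
{
    double num = 0.0, den = 0.0;
    for (int i = 0; i < n; i++) {
        num += (org[i] - rec[i]) * (org[i] - rec[i]);
        den += org[i] * org[i];
    }
    return 100.0 * sqrt(num / den);
}

int main(void)
{
    static double ecg[N_IN], code[N_HID], rec[N_IN];
    for (int i = 0; i < N_IN; i++)             /* toy normalized "ECG" block */
        ecg[i] = 0.5 + 0.4 * sin(0.1 * i);
    compress_block(ecg, code);
    reconstruct_block(code, rec);
    printf("CR = %d:1, PRD = %.1f %%\n", N_IN / N_HID, prd(ecg, rec, N_IN));
    return 0;
}
```

Changing N_HID to 5 gives the 200:5:200 structure and the 40:1 ratio quoted above.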
dc.description.degreeYüksek Lisans
dc.description.degreeM.Sc.
dc.identifier.urihttp://hdl.handle.net/11527/18686
dc.languagetur
dc.publisherFen Bilimleri Enstitüsü
dc.publisherInstitute of Science and Technology
dc.rightsKurumsal arşive yüklenen tüm eserler telif hakkı ile korunmaktadır. Bunlar, bu kaynak üzerinden herhangi bir amaçla görüntülenebilir, ancak yazılı izin alınmadan herhangi bir biçimde yeniden oluşturulması veya dağıtılması yasaklanmıştır.
dc.rightsAll works uploaded to the institutional repository are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission.
dc.subjectBiyomühendislik
dc.subjectElektrokardiyografi
dc.subjectYapay sinir ağları
dc.subjectBioengineering
dc.subjectElectrocardiography
dc.subjectArtificial neural networks
dc.titleYapay sinir ağları kullanılarak EKG verilerinin sıkıştırılması
dc.typeMaster Thesis

Files

Original bundle

Name: 55985.pdf
Size: 2.27 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 3.16 KB
Format: Plain Text