FBE - Bilgisayar Mühendisliği Lisansüstü Programı (Institute of Science and Technology - Computer Engineering Graduate Program)
A graduate program under the Computer Engineering Department (Bilgisayar Mühendisliği Ana Bilim Dalı), offering education at both the master's and doctoral levels.
Areas of specialization in the graduate program:
Computer Networks,
Artificial Intelligence,
Natural Language Processing,
Parallel and Distributed Systems.
Item: An enhanced two phase commit protocol for high performance consistency management in replicated state machines (Institute of Science and Technology, 2020-07) Uyanık, Halit; Ovatman, Tolga; 637248; Computer Engineering Department

State machines represent a set of transitions, which makes them a useful tool for modelling the life-cycle of a specific set of events. By exploiting this property, a state machine can be replicated across several machines, each of which communicates with the others in order to keep track of the order of changes. This is called the replicated state machine approach, and it is widely used in replicated data services where the consistency of the system must be managed. Providing consistency requires a communication algorithm that offers both high throughput and a low number of failures in the case of conflicting operations. One of the widely known communication protocols used in this context is the two-phase commit (2PC) protocol. It provides a two-step algorithm for managing commit actions between different machines for the same resource. First, it checks whether every machine in the network is ready for the write operation; then, if a machine receives a successful response from all other machines, it proceeds to commit the operation to all of them. Finally, it applies the commit to its own resource. When there is no priority between the write actions of different machines, the algorithm gives the commit rights to the first machine that successfully receives an OK from all the others. However, when priority comes into play and it becomes necessary to cancel transitions of lower importance, the algorithm starts to cancel some incoming transitions; in most cases, if the write operations are too frequent, it cancels a write operation even after it has obtained an OK from the other machines. The disadvantage of the common 2PC algorithm when priorities are introduced is that, because its phases follow one another without any transitions in between, a failed incoming write request has to repeat all the preceding events from that point again. As the distance between the last important point of no return, such as reading a value into a cache, and the point where the 2PC protocol runs grows larger, this effect is amplified and the number of messages required for a successful transition increases. To reduce the overhead introduced by this problem, a new algorithm is implemented by enhancing the existing 2PC algorithm. The two phases of the 2PC algorithm are separated from one another and can be freely deployed at any place on the state machine, as long as their order is preserved.
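The classic two-phase commit round described in this abstract can be illustrated with a short sketch. This is only a minimal, in-memory illustration of the standard prepare/commit flow, not the enhanced protocol contributed by the thesis; the `Participant` class and its methods are hypothetical stand-ins for networked replicas.

```python
# Minimal sketch of a classic two-phase commit round (NOT the enhanced
# protocol proposed in the thesis). Participant is a hypothetical in-memory
# stand-in for a networked replica.

class Participant:
    def __init__(self, name: str):
        self.name = name
        self.log: list[str] = []

    def prepare(self, op: str) -> bool:
        # Phase 1: vote whether this replica can apply the write.
        return True  # a real replica would check locks, conflicts, priority

    def commit(self, op: str) -> None:
        # Phase 2: apply the write once every replica has voted OK.
        self.log.append(op)

    def abort(self, op: str) -> None:
        # Discard the pending write if any vote failed.
        pass

def two_phase_commit(coordinator: Participant, others: list[Participant], op: str) -> bool:
    # Phase 1: ask every machine whether it is ready for the write.
    if not all(p.prepare(op) for p in others):
        for p in others:
            p.abort(op)
        return False
    # Phase 2: commit on all other machines, then on the coordinator itself.
    for p in others:
        p.commit(op)
    coordinator.commit(op)
    return True

replicas = [Participant(f"node{i}") for i in range(3)]
print(two_phase_commit(replicas[0], replicas[1:], "x=42"))  # True
```

The thesis's enhancement, as described above, decouples these two phases so they can be placed at different points of the state machine; the sketch keeps them back to back, as in ordinary 2PC.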
Item: An integrated architecture for information extraction from documents in Turkish (Institute of Science and Technology, 2009-12-25) Adalı, Şerif; Sönmez, Coşkun A.; 504012098; Computer Engineering

In this study, ontology-based information extraction and document layout analysis techniques are integrated for extracting domain-specific events and entities. The proposed "Concept Zoning" technique provides easy definition of extraction concepts, increases the portability of the IE system, and requires only concept definitions when compared to approaches that rely on large sets of linguistic patterns. The proposed architecture works well when applied to restricted-domain applications. It also successfully detects data in tabular, list, or itemized form. In the case of an unknown event, concept similarity is calculated by comparing the concepts in the input document against the concepts in the ontology, and new attributes, key concept nodes, and concept properties are incrementally added to the knowledge base by the user. The domain ontology is enriched by adding newly discovered instances. Experimental results indicate that a high-performance document processing system has to cover a sufficient number of lexical resources, extraction concepts, and document models. In addition, document layout analysis is used for detecting unknown entity types, and the approach verifies the extracted information and the relations among it by using key values defined for each domain event.
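The concept-similarity step mentioned in this abstract can be pictured with a simple set-overlap measure. The thesis does not state its exact formula, so the Jaccard-style comparison and the example ontology below are assumptions made purely for illustration.

```python
# Illustrative concept-similarity check between an input document and ontology
# events. The Jaccard-style overlap and the example ontology are assumptions;
# the thesis's actual similarity computation may differ.

def concept_similarity(document_concepts: set[str], event_concepts: set[str]) -> float:
    """Overlap between concepts found in the document and one ontology event."""
    if not document_concepts or not event_concepts:
        return 0.0
    return len(document_concepts & event_concepts) / len(document_concepts | event_concepts)

def best_matching_event(document_concepts: set[str], ontology: dict[str, set[str]]):
    """Return the ontology event whose concept set best matches the document."""
    return max(ontology.items(),
               key=lambda item: concept_similarity(document_concepts, item[1]))

# Hypothetical ontology with two domain events.
ontology = {"CompanyAcquisition": {"buyer", "seller", "price", "date"},
            "Earthquake": {"magnitude", "epicenter", "date", "casualties"}}
print(best_matching_event({"magnitude", "date", "epicenter"}, ontology))
```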
Item: DeshuffleGAN: Self-supervised learning for generative adversarial networks (Institute of Science and Technology, 2020-07) Baykal Can, Gülçin; Ünal, Gözde; 637455; Computer Engineering Programme

Generative Adversarial Networks (GANs) attracted the attention of the research community with their performance in high-quality image generation. After the two-player game formulation, along with the multi-objective and multi-task loss ideas, was introduced with GAN models, numerous modifications to the architectures of the generator and discriminator networks and to the learning objectives were proposed. The basic intuition behind the desired improvements is to increase the quality of the generations at the output of the generator network of the GAN model. One way to improve generation performance is to enhance the discriminator network of the GAN model so that it learns expressive features of the real data and feeds that information back to the generator. Original conditional GANs support the discriminator by adding the class label as input along with the data. Class label information can be helpful as an additional training signal, or it can be used as a new task for the discriminator in order to increase its representation capacity. The capacity of the discriminator needs to be enhanced so that it learns meaningful features for distinguishing real data from fake data. As the usage of class labels improves discriminator performance, and equivalently the generation performance of the generator, this information can be beneficial in the training of GANs. However, since acquiring class labels is expensive in terms of both time and human resources, new ways of creating and incorporating additional information about the data should be considered. Self-supervised learning makes use of pseudo-labels of the data, where these labels are obtained through an automatic process that is computationally light and easy. For example, an image can be rotated by one of four angles and the rotation angle can be used as a label for the data. Alternatively, the input can be divided into pieces and the pieces shuffled; the shuffling order can then be treated as additional information about the data. In this work, we propose a new method called DeshuffleGAN that deploys the additional task of deshuffling a shuffled image to the discriminator network of the GAN in order to enrich the features learnt by the discriminator. To perform deshuffling, structural relations among image tiles must be learnt, which implies that the discriminator should learn structurally coherent features of the data. As the generator tries to trick the discriminator with synthesized images so that the discriminator treats them as real data, the image generation quality should improve until the discriminator cannot distinguish them even with the learnt structural features. Therefore, the deshuffling task also supports the generator network in synthesizing structurally coherent images. DeshuffleGAN outperforms the baseline methods demonstrated in this thesis and achieves both numerically and visually better results. We use the FID score as the numerical evaluation metric, where lower FID values imply that the generated data distribution is similar to the real data distribution, which is the desired outcome. We show that DeshuffleGAN achieves lower FID values on datasets such as LSUN-Bedroom and LSUN-Church. We also use the CelebA-HQ and CAT datasets and observe that self-supervision tasks may not always have significant effects on the generation quality of GANs. We further show the effects of the deshuffling task by employing different GAN architectures, and discuss which kind of discriminator architecture may be more appropriate to couple with a self-supervision task.
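The shuffling pseudo-task described in this abstract can be sketched as follows: an image is split into a grid of tiles, the tiles are permuted, and the permutation index serves as the label the discriminator must predict. The 3x3 grid, the fixed permutation set, and the function names are assumptions for illustration; they need not match the thesis's exact settings.

```python
# Illustrative shuffling pseudo-task: split an image into a 3x3 grid of tiles,
# permute them, and keep the permutation index as the deshuffling label.
# Grid size and the fixed permutation set are assumptions for illustration.
import numpy as np

PERMUTATIONS = [np.random.permutation(9) for _ in range(30)]  # fixed pseudo-label set

def shuffle_image(image: np.ndarray, grid: int = 3):
    """Return (shuffled image, permutation index) for an HxWxC image."""
    h, w = image.shape[0] // grid, image.shape[1] // grid
    tiles = [image[i * h:(i + 1) * h, j * w:(j + 1) * w]
             for i in range(grid) for j in range(grid)]
    label = np.random.randint(len(PERMUTATIONS))
    order = PERMUTATIONS[label]
    rows = [np.concatenate([tiles[order[r * grid + c]] for c in range(grid)], axis=1)
            for r in range(grid)]
    return np.concatenate(rows, axis=0), label  # discriminator predicts `label`

img = np.random.rand(96, 96, 3)          # stand-in for a training image
shuffled, perm_label = shuffle_image(img)
```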
Item: Self-supervised pansharpening: Guided colorization of panchromatic images using generative adversarial networks (Institute of Science and Technology, 2020-07) Özçelik, Furkan; Ünal, Gözde; 637233; Computer Engineering Programme

Satellites provide images with different properties. Multispectral images have low spatial resolution and high spectral resolution, while panchromatic images have high spatial resolution and low spectral resolution. The process of fusing these two images is called pansharpening. For decades, traditional image processing methods have been designed for this task. Following the inspirational success of Convolutional Neural Networks (CNNs) in computer vision, CNN models have also been designed for pansharpening, and CNN-based approaches have shown promising results on satellite images in recent years. However, they still exhibit limitations in producing high-quality pansharpening outputs. We identified a spatial detail disagreement problem between reduced-resolution panchromatic images and original multispectral images, which are assumed to have the same resolution. This problem causes an insufficient training process in current CNN-based pansharpening models. We propose a new self-supervised learning framework in which we treat pansharpening as a colorization problem, which brings an entirely novel perspective and solution compared to existing methods that base their solution solely on producing a super-resolution version of the multispectral image. CNN-based methods provide a reduced-resolution panchromatic image as input to their model along with reduced-resolution multispectral images, and hence learn to increase their resolution together. In the training phase of our model, the reduced-resolution panchromatic image is substituted with a grayscale-transformed multispectral image, so our model learns colorization of the grayscale input. We further address the fixed downscale-ratio assumption during training, which does not generalize well to the full-resolution scenario, by introducing noise into the training through randomly varied downsampling ratios. These two critical changes, along with the addition of adversarial training in the proposed PanColorization Generative Adversarial Network (PanColorGAN) framework, help overcome the spatial detail loss and blur problems observed in CNN-based pansharpening. The proposed approach outperforms the previous CNN-based and traditional methods, as demonstrated in our experiments.
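The two training-data changes described in this abstract, replacing the panchromatic input with a grayscale transform of the multispectral image and randomizing the downsampling ratio, can be sketched as below. The band-averaging grayscale, the ratio set {2, 3, 4}, and the naive decimation are assumptions for illustration, not the exact preprocessing used by PanColorGAN.

```python
# Illustrative preparation of a self-supervised training pair: grayscale
# substitute for the panchromatic input plus a randomly varied downsampling
# ratio. Band weights, ratio set, and decimation are assumptions.
import numpy as np

def make_training_pair(ms: np.ndarray, rng: np.random.Generator):
    """ms: HxWxB multispectral patch with values in [0, 1]."""
    # Grayscale substitute for the panchromatic image: average over bands
    # (a real model might use learned or sensor-specific band weights).
    gray = ms.mean(axis=-1, keepdims=True)
    # Randomly varied downsampling ratio instead of a fixed one.
    ratio = int(rng.choice([2, 3, 4]))
    lr_ms = ms[::ratio, ::ratio]   # naive decimation as a stand-in for
                                   # proper low-pass filtering and resizing
    return gray, lr_ms, ms         # "pan"-like input, LR MS input, MS target

rng = np.random.default_rng(0)
gray, lr_ms, target = make_training_pair(np.random.rand(64, 64, 4), rng)
```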
Item: Step length estimation using sensor fusion for indoor positioning (Institute of Science and Technology, 2020) Sevinç, Hasbi; İnce, Gökhan; 637456; Department of Computer Engineering

People use navigation applications to travel from one place to another. In particular, when people travel to a place for the first time, or when there is not enough information about the place, they can get help from navigation applications. Navigation applications detect a person's location from the Global Positioning System (GPS) or from base station signals. However, the quality of these signals is not sufficient to use navigation applications in closed areas. In closed areas such as shopping centers, map information is usually provided through kiosk devices; a person looking for a store finds it using the signs on the map at the mall. However, this is not possible for visually impaired people. Visually impaired individuals can reach their destination in open areas with navigation applications that use voice guidance, but it is not possible to use these applications indoors, since they do not work properly in such locations. This study aims to provide navigation in indoor locations by using wearable and mobile devices.

Various studies have been carried out to provide indoor navigation. These studies generally use wireless networks such as WiFi, Bluetooth, and Radio Frequency Identification (RFID). The basic principle of these systems is calculating the distance to the network devices that emit signals: if the distance to three or more devices is known, the position of the person can be obtained. However, in these studies, technical arrangements must be made inside the building for the system to function properly; therefore, indoor areas where navigation is provided with the help of such signals are not widely available. In robotics, different methods have been developed for indoor location tracking. In these studies, robots generally find their positions according to predefined objects or Quick Response (QR) codes in the environment; however, the robot needs to know the positions of these objects in order to find its own position. The method presented in this thesis aims to require no installation inside the building in order to track the position of the person.

The method presented in the thesis uses the following elements: 1) textile-based capacitive sensors, 2) a smart mobile phone, and 3) the WeWALK smart cane developed for the visually impaired. The data collection required for training the machine learning models was carried out with five different subjects walking on an established course, which consists of walking paths with different stride lengths. Data taken from the sensors while the subjects walked in these areas were recorded by the system for further processing. Textile-based capacitive sensors are placed on both knees of the subject; these sensors measure the angle changes in the subject's knee joints, so the system obtains information about the steps the subject takes while walking. Information about the characteristics of walking is obtained from the accelerometer, gyroscope, and compass sensors in the smart cane and the mobile phone, and the direction of the walking subject is obtained from the compass sensor inside the mobile phone. As the first stage of the study, the sensor data were transmitted via a Bluetooth connection.

The system includes acceleration, gyroscope, and compass signals on the x, y, and z axes for both the smart cane and the mobile phone, and it also collects the signals from the textile-based capacitive sensors; in total, the system collects nineteen signals. First, the data collected from the sensors are cleaned of noise and outliers using signal processing methods. An algorithm was developed to detect the onset and offset points of the steps in the signal received from the textile-based capacitive sensors; it calculates the local maximum and minimum points in the sensor signal and treats the interval between these points as a step. The signals from all sensors are segmented according to the determined start and finish points. In order to use the sensor signals in regression models, feature extraction is performed on these segments. To improve the performance of the system, the extracted features are reduced by different feature selection methods; the performance of the different methods was compared and the regression models were trained with the best selection method. The system was trained with 1) Linear Regression, 2) Support Vector Regression, 3) Random Forest, and 4) k-Nearest Neighbor models to find the regression model best fitted to the collected data set. The results obtained with these trained models were compared and the model with the best results was used for step detection. Sensor fusion was used to determine stride lengths more accurately: to assess the contributions of the three different sensors in the system, different fusion alternatives were tested separately and in pairs, and it was observed that fusing all three sensors together provided the highest accuracy for step detection and the lowest localization error.

An Android application using Google Maps was developed to perform localization. First, a plan prepared with the actual dimensions of the test environment was loaded onto the application's map. The initial location of the person in the test environment is defined in the application and shown on the map with a marker. The application uses the trained regression models to determine the step length: sensor data collected via Bluetooth are subjected to the same signal processing methods as in the learning phase of the model, so when the person starts walking, the application determines the person's step length with the help of the model and updates the marker on the map according to the person's direction information. To test the system performance, a track with the same origin and destination points was laid out in the test environment; when the person completed the entire track starting from the initial point, the distance between their actual final position and the projected final position was calculated. The method developed in this thesis aims to enable the visually impaired to reach their desired destinations indoors by using the smart cane, textile-based capacitive sensors, and a smartphone. With the data collected in the walking tests, the development of the regression models, and the Android application showing the position on the map, this study contributes to the literature on indoor localization and navigation.
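The step segmentation and feature extraction described in this abstract can be sketched with the fragment below. It uses local maxima and minima of the knee-sensor signal as step boundaries, as the abstract describes; the use of scipy's find_peaks, the distance threshold, and the specific features are assumptions standing in for the thesis's unpublished settings.

```python
# Illustrative step segmentation from the knee-sensor signal via local extrema,
# followed by simple per-segment features for the regression models. Peak
# detection settings and the feature set are assumptions.
import numpy as np
from scipy.signal import find_peaks

def segment_steps(knee_signal: np.ndarray, min_distance: int = 20):
    """Return (start, end) index pairs, one per detected step interval."""
    maxima, _ = find_peaks(knee_signal, distance=min_distance)
    minima, _ = find_peaks(-knee_signal, distance=min_distance)
    extrema = np.sort(np.concatenate([maxima, minima]))
    # Treat each interval between consecutive extrema as one step segment.
    return list(zip(extrema[:-1], extrema[1:]))

def extract_features(segment: np.ndarray) -> np.ndarray:
    """Simple per-segment features to feed a step-length regressor."""
    return np.array([segment.mean(), segment.std(),
                     segment.max() - segment.min(), len(segment)])

signal = np.sin(np.linspace(0, 8 * np.pi, 400))   # synthetic knee-angle signal
steps = segment_steps(signal)
features = np.stack([extract_features(signal[s:e]) for s, e in steps])
```

In the thesis's pipeline, features like these would be combined with the cane and phone accelerometer, gyroscope, and compass features before being fed to the regression models compared above.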