Browsing by Sustainable Development Goal "Goal 2: Zero Hunger"
-
Item: Biological control of Aspergillus flavus growth and its aflatoxin B1 production by antagonistic yeasts (Graduate School, 2022-06-29) Dikmetaş, Dilara Nur ; Güler Karbancıoğlu, Funda ; Özer, Hayrettin ; 506181504 ; Food Engineering
The presence of mycotoxins in food and feed poses a risk to human health and animal productivity and results in large economic losses. Aspergillus flavus infects grains in the field and during storage. Aflatoxin B1, classified as a Group 1 carcinogen with hepatotoxic, genotoxic, and teratogenic properties, is produced mainly by A. flavus and Aspergillus parasiticus, alongside grain deterioration and yield loss. Aflatoxins have been found in a variety of foods, including oilseeds, nuts, dried figs, and spices. Dried figs, pistachios, hazelnuts, and groundnuts cultivated in Turkey are high-risk products in terms of aflatoxins. The application of synthetic fungicides is the most common method for controlling decay in most crops. However, because of fungal resistance, detrimental impacts on human and animal health, and environmental concerns in general, efforts are being made to reduce their use. These concerns have led researchers to investigate more eco-friendly and healthy methods of managing fungal diseases, aiming to detect and prevent contamination by Aspergillus species and to lower aflatoxin levels in many agricultural products. To reduce synthetic fungicide use, biological control is an important strategy and a promising alternative with low environmental impact for reducing fungal infection and mycotoxin production in the field and during the postharvest period. Among microorganisms, yeast species have been extensively studied as antagonists owing to their simple nutritional requirements, their ability to colonize dry surfaces for long periods, and their ability to grow rapidly in bioreactors on inexpensive substrates. Furthermore, yeasts adapt easily to microenvironments. The crucial first step in developing a biocontrol agent is the isolation and screening of yeast isolates. Antagonistic yeasts control different moulds through several mechanisms, such as competition for space and nutrients, biofilm formation, parasitism, production of antimicrobial volatile organic compounds, and production of lytic enzymes. The antagonistic mechanism is generally explained by the synthesis of cell wall-degrading enzymes, including chitinases, β-1,3-glucanase, protease, cellulase, and pectinase; yeasts with high cell wall-degrading enzyme activity also show high biocontrol efficacy. Biocontrol of aflatoxin has generally been documented with non-aflatoxigenic Aspergillus species, whereas studies on controlling Aspergillus flavus with yeasts are limited. The yeast Metschnikowia, generally isolated from fruits and flowers, has a wide range of biotechnological applications in various industrial processes. Several Metschnikowia-based biocontrol products have been industrialized to control postharvest diseases caused by Botrytis or Monilinia spp.; however, only a few biocontrol agents have been converted into industrial products. The antifungal activity of Meyerozyma, Moesziomyces, and Metschnikowia sp. yeasts has been studied by several researchers. Among the mechanisms of action of antagonistic microorganisms, production of antimicrobial volatile organic compounds is one of the least studied.
Volatile organic compounds produced by antagonistic yeasts therefore have great potential as biocontrol agents against filamentous fungi. In this study, four yeast isolates were obtained and identified from different plant parts, including hawthorn, hoşkıran, bean, and grape leaves collected from different regions of Turkey. Four previously isolated and identified isolates from grapes and blueberries were also included. The eight antagonistic yeasts belong to Moesziomyces sp., Meyerozyma sp., and Metschnikowia sp. Yeasts secrete fungal lytic enzymes, which are typically associated with the biocontrol mechanism, and the lytic enzyme activities of the yeasts were examined with a screening method. All of the isolates have β-glucosidase and chitinase activity, which are crucial for the antifungal mechanism, but lack pectinase activity. Among the antagonistic yeasts, only Metschnikowia pulcherrima (26-BMD) was found to be protease negative. A dual culture assay was conducted to observe the antagonistic effect of the yeasts against an aflatoxin B1-producing Aspergillus flavus; all of the antagonistic yeasts formed inhibition zones against Aspergillus flavus owing to the secretion of diffusible antifungal compounds. The antifungal and antiaflatoxigenic impact of the yeasts on the aflatoxin producer was then investigated in vitro by the spot inoculation method over different incubation periods. Inhibition of Aspergillus flavus mycelial growth was 86-97% after three days. All isolated and identified yeasts were effective in controlling Aspergillus flavus as well as aflatoxin: aflatoxin B1 production was reduced from 1773 ng/g in control samples to 1.26-10.15 ng/g with the application of antagonistic yeast. Metschnikowia aff. pulcherrima (32-AMM) was the most effective yeast at inhibiting mycelial growth of Aspergillus flavus. In addition, the plant origin of the yeasts and the incubation period also affected their inhibition potential (p<0.05). All of the yeasts might be used as biocontrol agents against Aspergillus flavus growth. Additionally, the volatile organic compounds (VOCs) of all the yeasts reduced sporulation; however, only the VOCs of Moesziomyces bullatus (DN-FY), Metschnikowia aff. pulcherrima (DN-MP), and Metschnikowia aff. pulcherrima (32-AMM) reduced Aspergillus flavus mycelial growth in vitro, and only the VOCs produced by Metschnikowia aff. fructicola (1-UDM) were effective in reducing aflatoxin B1 production in vitro. This activity was associated with different volatile organic compounds. As a result, more investigation into the role of volatile organic compounds in Aspergillus flavus and aflatoxin B1 control is required. Further field experiments would indicate the biocontrol potential of the yeasts on products prone to Aspergillus flavus contamination. Isolated yeast volatile organic compounds could also be used to protect products from contamination in a way that is harmless to humans and environmentally friendly.
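A note on the inhibition figures above: mycelial-growth inhibition is conventionally computed from colony diameters, and the sketch below shows that calculation with made-up numbers (the thesis's raw diameters are not given in this abstract).

```python
# Conventional mycelial-growth inhibition from colony diameters:
# ((C - T) / C) * 100, where C = control diameter, T = treated diameter.
# Values below are illustrative, not the thesis measurements.
def inhibition_pct(control_mm: float, treated_mm: float) -> float:
    return (control_mm - treated_mm) / control_mm * 100

print(round(inhibition_pct(control_mm=62.0, treated_mm=4.3), 1))  # 93.1
```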
-
Item: Consensus algorithm for calculation and labeling of protein binding affinity using multiple models (Graduate School, 2023-01-30) Ergin, Ayşenaz Ezgi ; Altılar, Deniz Turgay ; 504191543 ; Computer Engineering
The major histocompatibility complex (MHC) molecules, which bind peptides for presentation on the cell surface, play an important role in cell-mediated immunity. In light of databases and technologies that have developed over the years, significant progress has been made in research on peptide binding affinity calculation, and several in silico techniques have been developed to predict peptide binding to MHC class I. Most of the research focuses on MHC class I, which by its nature yields better performance. Considering the use of different methods and technologies, and the behaviour of similar methods on different proteins, a classification was created according to the binding affinity of protein peptides. For this classification, MHC class I was studied using MHCflurry, NetMHCpan, NetMHC, NetMHCcons, and SMM-PMBEC. In the simulations conducted within the scope of this thesis, no overall superiority was observed among the models; each was found superior at various points. Obtaining the best results may depend on using multiple models together: the important thing is to understand the data and act with an appropriate model, although even that does not make a huge difference. Since the consensus approach depends directly on the models, the better the models, the better the consensus.
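As a rough illustration of how such a consensus can be assembled, the sketch below takes per-model IC50 predictions and majority-votes a binder/non-binder label over the common 500 nM cutoff. The abstract does not spell out the thesis's exact consensus rule, so the rule, threshold, and values here are assumptions.

```python
# Hypothetical consensus labeling over multiple MHC class I binding predictors.
from statistics import median

BINDER_IC50_NM = 500.0  # common MHC class I binder cutoff (assumption)

def consensus_label(predictions_nm: dict) -> tuple:
    """predictions_nm maps model name -> predicted IC50 in nM."""
    votes = [ic50 < BINDER_IC50_NM for ic50 in predictions_nm.values()]
    label = "binder" if sum(votes) > len(votes) / 2 else "non-binder"
    return label, median(predictions_nm.values())

peptide_scores = {  # illustrative values only
    "MHCflurry": 320.0, "NetMHCpan": 610.0, "NetMHC": 450.0,
    "NetMHCcons": 380.0, "SMM-PMBEC": 720.0,
}
print(consensus_label(peptide_scores))  # ('binder', 450.0)
```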
-
Item: Development of cellulose nanofibril-coated nanofiber membranes for the treatment and reuse of washing machine wastewater (Graduate School, 2024-06-24) Gençtürk, Sezer ; İmer, Derya Yüksel ; 501191758 ; Environmental Sciences, Engineering and Management
Washing machines, used intensively in daily life and the cleaning industry, are among the major sources of domestic wastewater pollution. The detergents, dyes, bleaches, microplastics, and nanoplastics they contain harm water resources and aquatic life. In addition, the complex constituents of washing machine effluents, particularly from domestic use, can cause operational problems in wastewater treatment plants. Developing materials and technologies for the recovery and reuse of washing machine wastewater is therefore an important research topic. Treating these wastewaters with conventional methods in centralized treatment systems can lead to high energy consumption and operating costs, and conventional methods fall short when on-site treatment is considered. In recent years, composite membrane systems have been used in treatment processes as an innovative approach. From this starting point, this thesis investigated the production, characterization, and performance of hybrid membranes made with naturally sourced, biodegradable materials such as cellulose nanofibrils (CNF) for the on-site treatment and reuse of washing machine wastewater. Within the scope of the thesis, polymer-based (polyacrylonitrile, PAN) support-layer membranes were produced by electrospinning, the CNF material was physically coated onto the membrane surface under low pressure, and a composite membrane material was ultimately developed. The experimental work covered composite membrane production, characterization, and treatment performance tests with a synthetic dye solution and real washing machine wastewater. The significance of the thesis lies in its detailed examination of sustainable materials for managing washing machine wastewater, a topic with limited coverage in the literature despite the very high wastewater volumes reached in daily life; in determining optimum operating conditions; in framing the work within an experimental methodology developed specifically for washing machine wastewater; and in reaching results that will contribute to the literature on this subject.
-
Item: Design and analysis of privacy-preserving and regulations-compliant central bank digital currency (Graduate School, 2024-07-12) Doğan, Ali ; Bıçakcı, Kemal ; 707211012 ; Cybersecurity Engineering and Cryptography
Significant advances have been made in the field of Central Bank Digital Currency (CBDC) in the last five years, not only in the academic world but also in central banks. Currently, more than 130 countries are pursuing CBDC studies at the research, pilot, and proof-of-concept levels. The increased interest in CBDC can be attributed to factors such as progress in digital payment technologies, the widespread use of cryptocurrencies in the digital money market, and the advantages this technology brings. Alongside these advantages, there are challenges and unresolved problems standing between CBDCs and maturity. One of these is the conflict between efforts to protect the privacy of digital currency users and the compliance mechanisms introduced by states to ensure financial stability and social order. States try to prevent and monitor financial crimes through regulations such as anti-money-laundering and countering the financing of terrorism; however, such regulations could lead to citizens' lives being completely monitored in the transition to digital money. In addition to this conflict, a significant portion of existing CBDCs operate on blockchain-based systems, and because of the transparent structure of the blockchain, parties in the network can track and monitor users' transactions, so transaction privacy is ignored. In the present study, solutions to these privacy problems are introduced with cryptographic techniques such as zero-knowledge proofs, threshold cryptography, and homomorphic encryption. In the proposed system, the user's balance is kept homomorphically encrypted on the blockchain. To perform a transfer, the sender encrypts the amount to be transferred with his own public key, the receiver's public key, and the regulators' public key, then creates a zero-knowledge proof that the amount is the same in all three ciphertexts. Since the transaction is processed over ciphertexts, the sender must also create a range proof that his balance is sufficient. After all the proofs are created and transmitted to the blockchain, the nodes confirm the transaction, the sender's balance is homomorphically reduced via the ciphertext, and the recipient's balance is increased. In any suspicious case, the user's transaction history can be traced back by government institutions called regulators; however, threshold encryption ensures that this control is not left to the initiative of a single institution. These institutions must reach a consensus, and only after reaching the threshold value can they see the transaction details. Additionally, techniques are suggested so that commercial banks can continue their services in this system.
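To make the encrypted-balance idea concrete, here is a minimal additively homomorphic sketch using textbook Paillier with toy parameters. The zero-knowledge proofs, range proofs, and threshold decryption described above are omitted, and the thesis's concrete scheme may differ; this only shows why a node can update a balance it cannot read.

```python
# Toy Paillier: additively homomorphic, so ciphertext multiplication adds
# plaintexts. Tiny primes, demo only -- never use in production.
from math import gcd

p, q = 65537, 65539            # toy primes (assumption, demo only)
n = p * q
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)

def L(x: int) -> int:
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)

def enc(m: int, r: int) -> int:
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

balance = enc(1000, r=12345)    # sender balance, encrypted
transfer = enc(250, r=54321)    # transfer amount, encrypted
# A node multiplies ciphertexts to update the encrypted balance without
# ever seeing the plaintext amounts.
print(dec((balance * transfer) % n2))  # 1250
```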
-
Item: Detection of common IoT authentication attacks and design of a lightweight authentication and key management protocol (Graduate School, 2023-12-18) Çetintav, Işıl ; Sandıkkaya, Mehmet Tahir ; 504182509 ; Computer Engineering
The Internet of Things (IoT) has grown rapidly over the last several years. IoT establishes connections between devices and the Internet; a "thing" can be any kind of object, and things are smart in the sense that they can connect to the Internet and make decisions automatically. IoT devices are widely used, and there are numerous IoT devices worldwide. Deployed in diverse settings such as daily life, smart homes, smart cars, and smart agriculture, they offer various benefits. However, while IoT devices are helpful, their spread raises several concerns. Management is crucial, and numerous devices cannot be managed easily. Beyond management, IoT devices are vulnerable due to their characteristics: they are resource-constrained, with limited CPU, memory, and storage. Because implementing comprehensive security mechanisms is expensive, users prefer not to use them, and authentication and key exchange protocols are generally deficient. All of these issues make IoT devices vulnerable. While a single vulnerable device can easily be captured by attackers, whole groups of devices can be captured as well: botnets, also known as robot networks, threaten IoT devices by infecting and capturing them, and thereby mount large-scale attacks on information systems. Given the numerous attacks on IoT devices, robust security mechanisms are needed. The IoT devices considered in this thesis are resource-constrained, include weak security mechanisms, maintain continuous Internet connectivity, and perform specific functions. Even if these devices hold personal data, it is assumed that a breach of this data does not constitute a problem; for instance, a weather sensor data breach is not a significant concern for users. The devices in question transmit small data such as temperature, humidity, and commands. This thesis begins with a test environment set up to monitor attacks and attackers in order to understand attacker behavior and features. The test environment is installed on the WalT platform, a reproducible and reusable computer management environment, and a honeypot mechanism is installed on it for monitoring, aiming to provide comprehensive support for proposing a suitable and effective security mechanism. The honeypot reveals the attackers who send malicious requests, the attack types, and the ease of attack. Analysis of the honeypot data makes it evident that weak authentication introduces vulnerabilities, leading to authentication attacks. Against the detected attacks, a lightweight One-Time Password (OTP) authentication and key exchange protocol is proposed that is usable from long-range or close-range areas, easy to use, and computationally low-cost. The proposed protocol includes a key exchange and presents a hierarchic management model, which provides easy management, cost-effective key exchange, and independence between devices. The protocol has another crucial feature: all session data and session keys (ephemeral keys) are updated in every session, and every session is independent of the others.
All session data and ephemeral keys are computed using only primitive cryptographic functions such as the XOR operation and hash functions; the protocol is therefore cost-effective and lightweight. The protocol begins with the registration of servers and devices. First, all servers are registered to their upper-level server; the registered servers then start the authentication phase to register devices, so device registration is completed with authenticated servers. Communication can be in two directions, device-to-server and server-to-device, and the protocol can be initiated by either participant. Devices and servers verify several values during authentication, generate ephemeral keys at the end of authentication, and encrypt messages with these ephemeral keys. The protocol is verified formally with the AVISPA model checker, and this thesis also presents an informal security analysis showing that the protocol is robust against attacks such as replay, theft, and DoS. A performance analysis of the proposed protocol is also presented: devices compute only 1 XOR operation and 11 hash computations. Every authentication protocol in the literature has specific features, requirements, and goals; the proposed protocol is a lightweight and cost-effective authentication, key exchange, and message transfer protocol. It consumes low power due to its primitive computations, needs no extra hardware (e.g., smart card or RFID tag), can authenticate devices both remotely and nearby, and lets users communicate with a single device or a group of devices. In this thesis, the goals are achieved: attackers are monitored with a honeypot, the security issues of IoT devices are revealed, and a lightweight authentication and key exchange protocol is proposed together with a well-suited management model.
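A minimal sketch in the spirit of the described primitives (hash and XOR only) is shown below. The message flow, field sizes, and update rule are illustrative assumptions, not the thesis's exact protocol.

```python
# Illustrative hash/XOR-only session derivation (assumed flow, not the
# thesis's message format).
import hashlib
import secrets

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

shared_secret = H(b"enrolled-at-registration")  # set once at registration
counter = 0

def next_session(secret: bytes, ctr: int):
    """Derive a one-time value and a fresh ephemeral key for one session."""
    nonce = secrets.token_bytes(32)
    otp = xor(H(secret, ctr.to_bytes(4, "big")), nonce)  # one-time password
    session_key = H(secret, nonce)                       # ephemeral key
    return otp, session_key, ctr + 1

otp, k_session, counter = next_session(shared_secret, counter)
# The server, holding shared_secret and the counter, recomputes
# H(secret, counter), XORs it with the received otp to recover the nonce,
# and derives the same session key; a fresh nonce per session means the
# ephemeral keys never repeat.
```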
-
Item: Determination of spatial distributions of greenhouses using satellite images and object-based image analysis approach (Graduate School, 2023-03-02) Şenel, Gizem ; Göksel, Çiğdem ; Torres Aguilar, Manuel Angel ; 501182620 ; Geomatics Engineering
In the face of the expected pressure on agricultural production systems from the increasing world population, one of the most suitable options for sustainable intensification of agricultural production is greenhouse cultivation, which allows production to increase on existing agricultural land. Greenhouse activities can, however, cause environmental problems at local and regional scales. Since the primary covering material of greenhouses is plastic, ecological problems are expected in the near future due to excessive plastic use. Greenhouses may also affect the integrity of ecosystems by converting land use and land cover (LULC) into extensive agricultural areas. On the other hand, the economies of many rural regions are supported by greenhouse activities, especially in Mediterranean countries. Moreover, because these structures are exposed to floods, especially with climate change effects, producers face economic and social problems. All of these situations make the production system unsustainable and endanger the ecology and economy of the region. Thanks to synoptic data acquisition and high temporal resolution, remote sensing images allow periodic monitoring of the agricultural sector. Considering both the positive outcomes and the adverse effects of greenhouses, determining greenhouse areas from remote sensing images is essential for better management strategies; monitoring through remote sensing images is the most suitable approach for obtaining information about the effects of greenhouses on climate and environment and for improving their economic output. Within the scope of this thesis, answers to different questions were sought using the object-based image analysis (OBIA) approach, which is reported in the literature to give better results for greenhouse detection. OBIA consists of three main stages, namely image segmentation, feature extraction, and image or object classification, and these stages form the structure of this thesis. In the image segmentation step, the first step of OBIA, answers were sought to two crucial questions for the segmentation of plastic-covered greenhouses (PCG). The first is which of the supervised segmentation quality assessment metrics performs better in evaluating PCG segmentation. An experimental design was formed in which segmentation metrics were evaluated together with interpreter evaluations. At this stage, sixteen datasets were used, spanning different spatial resolutions (medium and high), seasons (summer and winter), study areas (Almería, Spain and Antalya, Turkey), and reflection storage scales (RSS) (16Bit and Percent). Various segmentation outputs were created using the multiresolution segmentation (MRS) algorithm; six interpreters evaluated these outputs, and their evaluations were compared with eight segmentation quality metrics. The evaluations showed that Modified Euclidean Distance 2 (MED2) was the most successful metric for evaluating PCG segmentation, whereas Fitness and the F-metric failed to identify the best segmentation output compared with the other metrics investigated.
In addition, the effects of different factors on the visual interpretation results were analyzed statistically, revealing that the RSS is an essential factor in visual interpretation: when evaluating segmentation outputs created with the Percent format, the interpreters agreed more closely and interpreted this data type more efficiently. In the second part of the segmentation phase, the extent to which factors and their interactions affect greenhouse segmentation was investigated. Approximately 4,000 segmentation outputs were produced from the sixteen datasets, and MED2 values were calculated. For each shape parameter in each dataset, the values reaching the best MED2 score were determined and tested statistically by analysis of variance (ANOVA). The segmentation outputs showed that the optimal scale parameters clustered close to one another in the Percent format and ranged more widely in the 16Bit format, indicating that it is easier to determine the most appropriate segmentation outputs in the Percent format. Statistical tests also showed that segmentation accuracy calculated from different RSS formats depends directly on the shape parameter: accuracy increases with decreasing shape parameter in the Percent format, while the opposite holds in the 16Bit format. This revealed that shape parameter selection is critical depending on the RSS. In summary, the Percent format is the appropriate data format for PCG segmentation with the MRS algorithm, and low shape parameters should be preferred in the Percent format. In the second stage of the thesis, it was hypothesized that different feature space evaluation methods and feature space dimensions affect classification accuracy and computation time. Based on this hypothesis, 128 features were obtained from Sentinel-2 images of the Almería and Antalya study areas, and classification performance was evaluated with the random forest (RF) algorithm under different feature space evaluation methods. This evaluation showed that reducing the feature space has a direct effect on accuracy and, moreover, significantly reduces the time required to run the classification algorithm. Among the examined feature space evaluation algorithms, RF and Recursive Feature Elimination with RF (RFE-RF) proved more suitable in terms of classification accuracy and runtime; these algorithms are less dependent on feature space variation in terms of classification accuracy, while reducing the feature space significantly reduces computation time. In addition, among the 128 features obtained from the segments, including spectral, textural, and geometric features and spectral indices, the Plastic GreenHouse Index (PGHI) and the Normalized Difference Vegetation Index (NDVI) were the most relevant features for PCG mapping according to the RF and RFE-RF methods. The main outputs of this stage are the necessity of including indices such as PGHI and NDVI in the feature space and of applying a feature space evaluation method such as RF or RFE-RF to reduce computation time.
In the third and final stage of the thesis, the effectiveness of ensemble learning algorithms for PCG classification was tested. According to the experimental results, categorical boosting (CatBoost), RF, and support vector machine (SVM) algorithms performed well in both study areas (Almería and Antalya), but the implementation time required for CatBoost and SVM was higher than for all other algorithms studied. The k-nearest neighbor (KNN) and AdaBoost algorithms achieved lower classification performance in both study areas. In addition, the light gradient boosting machine (LightGBM) algorithm achieved an F1 score of over 90% in both study areas in a short time. In summary, considering computation time and classification accuracy, RF and LightGBM are the two leading algorithms. Overall, within the scope of this thesis, answers to the questions encountered in the three steps of OBIA were sought in order to reach the best PCG detection approach. The detection of greenhouses from satellite images was carried out in two important study areas in the Mediterranean Basin where greenhouse activities are carried out intensively. Although these outputs belong to the selected test sites, they provide an important basis for generalizing the findings on a large scale. Determining the spatial distribution of PCG, so as to minimize negative effects on the environment and increase economic returns, will make an important contribution to planners and decision-makers in achieving sustainable agriculture goals.
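For readers who want to reproduce the feature-reduction step, the following scikit-learn sketch runs RFE with a random forest over synthetic stand-ins for the 128 object features; the dataset shapes and parameter choices are assumptions, not the thesis's settings.

```python
# Sketch of the RFE-RF feature-space reduction step with scikit-learn.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 128))      # 500 image objects x 128 features (toy)
y = rng.integers(0, 2, size=500)     # 1 = plastic-covered greenhouse (toy)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
selector = RFE(rf, n_features_to_select=16, step=8).fit(X, y)
print("kept feature indices:", np.where(selector.support_)[0])
```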
-
Item: Effect of atmospheric non-thermal plasma against food-borne bacteria on food packaging film surfaces (Graduate School, 2024-01-18) Doğanöz, Dilan ; Güler Karbancıoğlu, H. Funda ; 506211507 ; Food Engineering
Over the past decade, non-thermal plasma has become established as a potential technology for microbial inactivation. As is commonly known, plasma treatment can produce highly specific surface modifications, so it has been used extensively in packaging. Methods widely used for sterilization in the packaging industry, such as heating, surface washing with hydrogen peroxide, irradiation, or their combinations, are known to have drawbacks: chemicals or preservatives may cause residue problems, and high-temperature processes may destroy the desired structure of the food or package. Sterilization by non-thermal plasma, by contrast, has several advantages: a highly energy-efficient system, an eco-friendly nature, low cost, and versatility. Vegetative and especially spore-forming bacteria exhibit strong resistance to external factors such as environmental stress, chemicals, and thermal inactivation owing to their intrinsic resistance, outer layers, and low water content; these characteristics make spores more difficult to kill than vegetative forms. In this study, the effect of atmospheric non-thermal plasma treatment on Gram-positive Staphylococcus aureus ATCC 25923, Gram-negative Pseudomonas aeruginosa ATCC 39327, and Clostridium sporogenes ATCC 3584 spores, which are among the important food pathogens, was compared on different packaging materials. Within the scope of the study, microbial inactivation at different exposure times and application types (wet or dry) was investigated using dielectric barrier discharge (DBD) plasma and corona discharge plasma devices. Low-density polyethylene (LDPE), biaxially oriented polypropylene (bOPP), and polyethylene terephthalate (PET) were used as packaging materials. Each film (3×3 cm2) was initially treated with DBD for 0.4 seconds with the electrode gap set to 3 mm; the plasma power was high, with 100% voltage and minimum frequency for all treatments. The test microorganisms were inoculated onto the center of the DBD-treated film surface before corona non-thermal plasma treatment. The corona plasma power was set at 25-27% voltage, low power, and minimum frequency. The inoculated films were exposed to cold plasma (1.5 cm electrode gap, ~17 kHz, wet or dry application) for different periods depending on the microorganism: 60, 120, and 180 s for S. aureus and P. aeruginosa, and 0, 360, 540, and 720 s for C. sporogenes. The results showed that non-thermal plasma had an antimicrobial effect on all microorganisms, and wet application, achieved by adding 10 µL of sterile distilled water before plasma exposure, enhanced microbial inactivation. There was also a direct relationship between exposure time and microbial inhibition, with a significant antimicrobial effect observed only after longer exposures. Considering the dry corona plasma results for all microorganisms, the highest D-value, 94.81±31.56 min, belongs to C. sporogenes inoculated on the PET film surface, and C. sporogenes was statistically the most resistant bacterium to corona treatment on all films (p<0.05). For S. aureus, all films showed statistical similarity after dry and wet corona plasma application, and wet application was more effective than dry (p<0.05). Dry corona application was not performed for P. aeruginosa because the cells did not remain viable on the surface after drying. Among the vegetative bacteria, the Gram-negative P. aeruginosa showed higher microbial inactivation than the Gram-positive S. aureus; the most effective D-values for P. aeruginosa after wet corona plasma application were 1.99±0.03 min and 1.96±0.02 min, with no statistical difference between LDPE and PET films (p>0.05). The results of this study provide a new perspective on the surface sterilization of food packaging materials by cold plasma with DBD or corona discharge non-thermal plasma systems. Compared with currently used packaging surface sterilization methods, the disadvantages are reduced and an environmentally friendly, affordable system emerges, with no need for complex high/low-pressure systems or gas supplies. Thanks to the atmospheric cold plasma devices used in the study, this effective and innovative plasma system design can easily be integrated into industry without damaging the film surface, using the O2 in the atmosphere and requiring neither a pressurized environment nor additional gas systems.
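For context on the D-values quoted above: the decimal reduction time is the exposure required for a 1-log (90%) drop in viable count. A minimal calculation, with made-up counts for illustration:

```python
# D-value (decimal reduction time) from initial and final viable counts.
import math

def d_value(t_min: float, n0: float, n: float) -> float:
    return t_min / (math.log10(n0) - math.log10(n))

print(d_value(t_min=3.0, n0=1e6, n=1e4))  # 1.5 min per log reduction
```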
-
Item: Environmental and economic assessment of zero waste management (Graduate School, 2023-12-08) Maçin, Kadriye Elif ; Arıkan, Osman Atilla ; Damgaard, Anders ; 501172716 ; Environmental Sciences Engineering and Management
The waste management (WM) approach for the protection of resources known as "zero waste" (ZW) has become popular in recent years. A number of measures have been taken and facilities installed to improve the management of municipal solid waste (MSW), and ZW-to-landfill targets have gained significant traction worldwide, including in Turkiye. The aim of this thesis is to assess the environmental and economic results of WM pathways within the context of the increasingly applied ZW approach. The study evaluates the sustainability of municipal solid WM against the ZW goal at multiple scales (Istanbul and the Ayazağa Campus). The thesis provides a framework that enables institutions to develop a goal-oriented WM strategy using material flow analysis (MFA) and life cycle assessment (LCA). In addition to presenting the framework, a case study was conducted at the campus scale using primary and secondary data. The framework assumes that no prior data are available, so the study began by collecting primary data on campus. To provide primary data on the waste characterization and recycling potential of the Istanbul Technical University (Turkiye) Ayazağa Campus before (2019) and after (2022) the ZW management strategy, the campus was divided into four distinct groups: (i) academic, (ii) administrative, (iii) residential/dormitory, and (iv) cafeteria. First, an initial field study was conducted (for waste characterization and recycling potential); afterwards, new containers were placed, and students and campus personnel were trained within the scope of ZW management practices through both in-person and online seminars. In the final phase of the study, the second field study was completed. The results demonstrate that the waste generation rate in the pilot areas fluctuated between 0.045-0.190 kg/cap/day in the first field study and decreased to 0.011-0.117 kg/cap/day in the second. The first field study showed a potential recycling rate of 76.3%, which dropped to 68.2% in the second study. The MFA results indicate that landfill diversion ranges between 29.8% and ~99-100%; some residuals or ash from the incineration plant will still be disposed of in landfill. Furthermore, simply diverting waste from landfill does not necessarily lead to circularity or directly address sustainable consumption and public attitudes towards ZW goals. The study framework aims to address potential challenges in campus-based, goal-oriented WM studies; future case studies from other institutions and their campuses could help validate and improve this methodology. Based on the Ayazağa waste characterization results, the efforts to establish a ZW management system also led to a reduction in waste generation and an increase in recycling performance, although further studies are still required to assess the ZW public awareness activities. In addition, a case example was developed based on the ITU Ayazağa campus, Turkiye, with 577 tonnes of separately collected food waste per year, to support a more circular and decarbonised economy. An LCA was conducted using the EASETECH software. Four scenarios were evaluated: anaerobic digestion, composting, incineration, and landfill. Of these, incineration resulted in the highest CO2-eq savings (-192 kg CO2-eq/tonne FW) but lacked decoupling and circularity of resources.
Conversely, anaerobic digestion demonstrated the highest circularity and lowest toxicity, and on the basis of these findings it was selected for further investigation. Economic transactions for the anaerobic digestion system's business models were analysed, including revenues, municipality fees, and operating costs. The new economic model is expected to align with circular economy strategies and to promote stakeholder collaboration as a significant social outcome.
-
Item: Glacier monitoring and 3D modelling based on UAV-GPR observations on Horseshoe Island, Antarctica (Graduate School, 2024-06-26) Arkalı, Mehmet ; Selbesoğlu, Mahmut Oğuz ; 501221640 ; Geomatics Engineering
Global warming occurs when greenhouse gases trap solar heat in the atmosphere and raise the Earth's average surface temperature. This increase drives global climate change and alters the distribution of climatic conditions. Human activities have raised greenhouse gas levels and warmed the atmosphere. To track climate change and its effects, various observations are made and used in modelling studies. Global warming affects key phenomena such as the cryosphere and the thermohaline circulation. The cryosphere encompasses snow- and ice-covered areas and constitutes the Earth's largest freshwater reserve; snow and ice regulate the planet's temperature by reflecting sunlight, and parts of the cryosphere have not melted for thousands of years. The thermohaline circulation is a global current system formed by the interaction of temperature and salinity in ocean waters. It acts as a "conveyor belt" in which winds drive surface currents and water density differences drive deep-water currents; it mixes ocean waters, carries heat toward the poles, and influences sea ice formation, with significant effects on climate. The polar regions are an ideal research environment for understanding climate change from the past to the present, because glaciers are important historical archives of the composition of the atmosphere. Monitoring glaciers matters for tracking the melt rate accelerating under climate change and the resulting sea level rise, for the sustainable management of freshwater resources, for the protection of local ecosystems, and for contributions to geological and geomorphological studies. Glaciers are direct indicators of global warming, and the findings of such research provide information about the trend of climate change. Assessments indicate that if glacier melting continues at its current pace, coastal ecosystems in particular will be adversely affected. Within the scope of this thesis, the aim was to determine glacier depth using ground-penetrating radar within the Turkish Antarctic Expeditions (TAE), carried out under the auspices of the Presidency of the Republic of Turkiye, under the Ministry of Industry and Technology, and coordinated by the TUBITAK Marmara Research Center (MAM) Polar Research Institute (KARE). Ground-penetrating radar (GPR) has a wide range of applications in subsurface investigation and mapping thanks to its non-destructive character. The method detects subsurface structures according to their dielectric properties using electromagnetic (EM) waves: it measures the reflection and transmission of electromagnetic waves sent into the medium by different materials, which respond differently depending on dielectric properties such as water content, density, and electrical conductivity. By analysing the return time and strength of the signals reflected from different subsurface layers, GPR maps subsurface structures and anomalies. The analysis and interpretation of GPR data is an important step in studying subsurface structures and properties; the preceding data collection process depends on suitable equipment selection and site preparation.
A preprocessing step is required before the data can be processed: the data are sorted, divided into profiles, and named, and various filters are then applied to reduce noise and errors in the raw data. There is no systematic, universally accepted processing workflow in the literature. With the applied filtering techniques, the data are analysed and digitized, and this process runs from raw data to an interpretable image; the experience of the researcher is also a factor affecting interpretation. In this way, the GPR survey is completed without any damage to the environment. The GPR method is used in many fields, above all geophysics, but also archaeology, geology, hydrology, construction, and mining. GPR is also a very important tool for glaciology, because it is used to determine the structures beneath glaciers, their thicknesses, and their internal properties. Measuring glacier thickness provides information about glacier volume and the potential water resource; layers, crevasses, and other structures related to the internal structure of glaciers can be examined with radar data, providing information about glacier dynamics and past climate events, and glacier movements can also be monitored. Considering these advantages of GPR for glaciology, terrestrial and unmanned aerial vehicle (UAV) based GPR measurements were carried out in a study area selected near the Turkish scientific base. The GPR pulse value was set to 400 nanoseconds (ns) in the TAE-7 expedition and 800 ns in TAE-8; the vertical sensitivity is 6-7 centimetres (cm) for 400 ns and 13-14 cm for 800 ns. In both campaigns, a grid network was formed with scan lines perpendicular to each other, enabling a crossover analysis that checks the consistency of the data. Taking the terrestrial GPR method as the reference, the differences from the UAV-GPR method were computed and a comparison was made; the difference values were analysed at a 99.72% confidence interval. For the TAE-7 expedition, the mean difference was calculated as 2.84 cm and the RMSE as 8.88 cm; for TAE-8, the mean difference was -0.07 cm and the RMSE 8.32 cm. Using the differences between the terrestrial and UAV-GPR methods, three-dimensional models of the study area were produced, allowing the two methods to be compared in both expeditions; these 3D models show sub-metre values. It was observed that the interpolation model selected for the modelling did not perform well at the boundary values of the grid network. Terrestrial and UAV-based GPR proved to be a promising approach for determining and modelling glacier depth: UAV-GPR was found to scan up to 16 times faster than terrestrial GPR per unit time, and the UAV-GPR method appears usable in hard-to-reach areas.
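The mean-difference and RMSE figures above come from comparing the two methods at grid crossover points; a minimal version of that computation, with illustrative depths rather than the expedition data, is:

```python
# Terrestrial-vs-UAV GPR comparison: differences, mean difference, and RMSE.
import numpy as np

ground = np.array([412.0, 380.5, 455.2, 391.8])  # ice depth, cm (terrestrial)
uav    = np.array([409.3, 384.1, 449.8, 396.0])  # ice depth, cm (UAV)

diff = uav - ground
print("mean difference (cm):", diff.mean())
print("RMSE (cm):", np.sqrt((diff ** 2).mean()))
```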
-
Item: Improvement of electrical and photocatalytic properties of boron-doped ZnO nanorods and synthesis design optimization by Taguchi approach (Graduate School, 2024-01-29) Tabak, Eray ; Benli, Birgül ; 513201031 ; Nanoscience and Nanoengineering
Today, increasing environmental pollution affects the whole world, and one of its most important forms is water pollution, which seriously threatens living organisms. With developing technology, clean water resources are shrinking, and water resources are polluted with industrial and domestic wastes every day. Traditional methods are used for water treatment today; however, these methods cannot respond adequately because of their low efficiency against small-sized pollutants and because of secondary pollution. Photocatalysis therefore offers an effective opportunity among advanced oxidative methods, and in this study innovative photocatalyst structures were developed. Boron-doped ZnO nanorods were successfully synthesised by the hydrothermal method, and boron doping enhanced their photocatalytic and electrical properties. The boron-doped ZnO nanorods were grown in two stages. In the first stage, since dipole forces are effective for growth, a thin film called the seed layer was coated onto the glass surface by spin coating. Then, with zinc nitrate and hexamethylenetetramine added in equal molar amounts, nanorods were grown on the seeded glass at 90 °C for 3 hours. Characterization was performed by X-ray diffraction (XRD), UV-Vis spectrometry, FT-IR, DC electrical analysis (I-V), AC electrical analysis, and scanning electron microscopy (SEM), and photocatalytic analysis was performed under UV-A 366 nm light. The Taguchi experimental design method was then used to determine the best conditions for selected parameters such as pH, pollutant concentration, time, and dopant amount, and to calculate their interactions. The XRD results indicate that boron was successfully incorporated into the structure; both pure and boron-doped ZnO nanorods grew in the wurtzite structure, and the crystal sizes were confirmed by SEM images. FT-IR analysis showed that the peak belonging to B-O and B-O-B bonds increased with increasing doping, and the band gaps calculated from UV-Vis spectrometry decreased with boron doping. Electrical analyses showed that electrical conductivity increased with increasing boron doping: the measured current rose from 0.03 μA for pure ZnO to 1.9 μA for 10% boron-doped ZnO. With the increase in surface defects, the band gap narrowed as conductivity increased; boron doping was shown to increase the number of electrons, suggesting that photocatalytic activity should also increase. Photocatalytic tests confirmed that efficiency increased with boron doping. Comparing the numerical values of the rate constants under the same conditions, the 1% B-doped sample has a 94% higher rate than pure ZnO at pH 4, the 3% B-doped sample a 36% higher rate at pH 7, and the 7% B-doped sample a 194% higher rate constant at pH 10. The effect of pH was also discussed, and the boron-doped ZnO nanorods showed better photocatalytic efficiency at every pH and concentration studied.
According to the calculated rate constants, 3% boron doping at pH 7 and a concentration of 2×10⁻⁶ M showed the best result, with a rate constant of 0.00856 min⁻¹. Finally, the optimized parameters of pH 4, concentration 2×10⁻⁶ M, time 90 min, and dopant amount 7% were determined by the Taguchi method. ANOVA analysis showed the model to be an 85% fit.
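The rate constants reported above are consistent with the usual pseudo-first-order photocatalysis model, ln(C0/C) = kt, where k is the slope of ln(C0/C) against time. A minimal fit, with illustrative concentrations rather than the thesis data:

```python
# Pseudo-first-order rate constant from a photocatalytic degradation run.
import numpy as np

t = np.array([0, 15, 30, 60, 90], dtype=float)   # irradiation time, min
c = np.array([1.00, 0.88, 0.77, 0.60, 0.46])     # C/C0 (illustrative)
k, _ = np.polyfit(t, np.log(1.0 / c), 1)         # slope = k, in 1/min
print(f"k = {k:.5f} min^-1")
```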
-
Item: Innovative computational techniques for accurate internal defect detection in trees: A stress wave tomography approach enhanced by machine learning (Graduate School, 2024-06-10) Yıldızcan, Ecem Nur ; Tunga, Burcu ; 509211206 ; Mathematics Engineering
The detection of internal defects in trees is critically important for the health of forest ecosystems and the industrial significance of wood products. Identifying these internal defects without damaging the wood is a significant factor in the forestry industry and in the production of wood products. While traditional methods often require cutting or processing the wood, non-invasive techniques such as stress wave tomography offer the possibility of identifying internal defects without disrupting the wood's structure, contributing both to the sustainable management of forest resources and to the improvement of wood product quality. Machine learning algorithms, a branch of artificial intelligence, allow computer systems to analyze data, recognize patterns, make decisions, and solve problems; they are critical tools for analyzing the large datasets obtained from non-invasive techniques like stress wave tomography and for accurately detecting and classifying internal defects. In this thesis, an algorithm capable of generating stress wave tomography based on ray segmentation and machine learning was designed for detecting internal defects in trees. A two-stage algorithm is proposed based on data obtained from stress waves produced by sensors mounted on trees and on the segmented propagation rays generated from these data. In the first step, a ray segmentation method maps the velocity of the stress waves to create segmented rays. In the second step, data obtained from these segmented rays are processed using the K-Nearest Neighbors (KNN) and Gaussian Process Classifier (GPC) algorithms to create a tomographic image of defects within the tree. The algorithm has the potential to detect internal defects in wood without causing damage and provides more precise results than traditional methods. Implemented in the Python programming language, the algorithm equips researchers to understand and analyze the internal structure of trees, and it stands out as a practical tool for contributing to forest health assessment and conservation through stress wave tomography. During the experiments, data from four real trees were collected via sensors, and an algorithm was developed to generate four sets of synthetic defective-tree data in the sensors' data format; the real tree data were provided by the Istanbul University-Cerrahpaşa Faculty of Forestry. Each tree dataset was used individually to feed the proposed defect detection algorithm, and the outputs were transformed into tomographic images. Success rates above 90% were achieved for all evaluation metrics, an improvement of 7% to 22% over related studies in the literature. This thesis aims to contribute to the development of a sustainable wood industry by offering a new approach to detecting internal tree defects. Although the results are already strong compared with those in the scientific literature, even better results are expected from optimizing the algorithm's parameters or varying the machine learning algorithms integrated into the method.
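A minimal sketch of the second, classification stage with scikit-learn's KNN and Gaussian Process Classifier is shown below; the feature layout and labels are synthetic stand-ins, not the thesis's segmented-ray inputs.

```python
# Classifying per-cell ray features with KNN and GPC (scikit-learn).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.gaussian_process import GaussianProcessClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 8))                # e.g. velocity statistics per cell
y = (X[:, 0] + X[:, 1] < -0.5).astype(int)   # 1 = defect (toy labeling rule)

for model in (KNeighborsClassifier(n_neighbors=5), GaussianProcessClassifier()):
    model.fit(X[:200], y[:200])
    print(type(model).__name__, "accuracy:", model.score(X[200:], y[200:]))
```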
-
Item: Preparation of single-chain polymer nanoparticles via the light-triggered inverse electron demand Diels-Alder reaction (Graduate School, 2024-05-27) Altunkaya, Adalet Nur ; Kahveci, Muhammet Übeydullah ; 509201243 ; Chemistry
Inspiration drawn from nature plays a major role in the development of chemistry. Single-chain polymer nanoparticles (SCNPs), which have come to prominence in recent years, were likewise inspired by the folding and organization of proteins. Single-chain polymer nanoparticles are nanoscale structures formed when a single polymer chain folds and crosslinks in a specific way. This folding resembles the hydrogen-bond-driven folding of proteins, but many covalent, dynamic covalent, and non-covalent crosslinking methods are used to fold polymers, such as radical coupling, metal ligation, click chemistry, and supramolecular interactions. Among these reactions, click chemistry stands out in particular because it forms covalent bonds with high efficiency under mild conditions. Click chemistry covers cycloadditions, nucleophilic ring-opening, additions to carbon-carbon multiple bonds, and non-aldol carbonyl reactions. The inverse electron demand Diels-Alder (IEDDA) reaction, an example of a cycloaddition, is an important [4+2] addition reaction between an electron-rich dienophile and an electron-poor conjugated diene. The IEDDA reaction, generally carried out between 1,2,4,5-tetrazines and olefins, has the advantages of being bioorthogonal and of having fast kinetics and high efficiency. In light-triggered inverse electron demand Diels-Alder (photo-IEDDA) reactions, conventional IEDDA reactions can be controlled with light: dihydrotetrazine (dHTz) molecules are inactive toward the IEDDA reaction, but when excited with light in the presence of a photooxidant they are oxidized to the tetrazine (Tz) and become reactive toward IEDDA. In this thesis, a new strategy was developed for preparing intramolecularly crosslinked nanoparticles using an in situ photo-IEDDA reaction between 6-(6-(pyridin-2-yl)-1,4-dihydro-1,2,4,5-tetrazin-3-yl)pyridin-3-amine (PPA-dHTz) and norbornene (Nb) groups on the polymer. First, 4-oxo-4-((6-(6-(pyridin-2-yl)-1,2,4,5-tetrazin-3-yl)pyridin-3-yl)amino)butanoic acid (PPA-dHTz-COOH) was synthesized; then poly(methyl methacrylate-co-hydroxyethyl methacrylate), P(MMA-co-HEMA), was synthesized by reversible addition-fragmentation chain-transfer (RAFT) polymerization. Next, PPA-dHTz-COOH and bicyclo[2.2.1]hept-5-ene-2-carboxylic acid (Nb-COOH) were attached to the side chains of the polymer by esterification. Finally, to prepare the SCNPs, a solution of the precursor polymer, P(MMA-co-HEMA)-g-PPA-dHTz/Nb, was prepared in ethanol at a concentration of 0.5 mg/mL in the presence of methylene blue (8 μM). The polymer was irradiated with a red laser (λ=680 nm, 0.3 W/cm2), and the oxidation of the dihydrotetrazines to tetrazine was followed by UV-Vis spectrophotometry. The compounds synthesized in this thesis were characterized by nuclear magnetic resonance (NMR) spectroscopy, gel permeation chromatography (GPC), and infrared spectroscopy (FT-IR).
Differential scanning calorimetry (DSC) was used to examine the thermal behavior of the prepared SCNPs, dynamic light scattering (DLS) for size analyses, and scanning electron microscopy (SEM) to examine their morphology. According to the results obtained, the SCNPs were prepared successfully.
-
Item: Investigation of process parameters affecting the free fatty acid level of cocoa butter (Graduate School, 2023-02-28) Çiçek, Büşra ; Yeşilçubuk Şahin, Neşe ; 506181517 ; Food Engineering
The high level of free fatty acids (FFA) found in crude cocoa butter is reduced to a certain level by deodorizing the fat. A higher FFA content is thought to lead to a decrease in the hardness of cocoa butter and is a factor that reduces the commercial value of raw cocoa for both producers and chocolate manufacturers. FFA must be kept under control through various processes, both to increase the quality of the fat and to bring it within legal limits. This study aimed to statistically examine the effects on FFA of the variables in the debacterization, roasting, and deodorization processes of cocoa butter production: debacterization temperature and time, lower and upper roasting temperatures, and deodorization temperature, time, steam pressure, and vacuum. According to the data analysis, the significant variables were deodorization time, deodorization vacuum value, and the upper roasting temperature analyzed as the median of numerous data points (p ≤ 0.05). Deodorization temperatures varied between 150-174 °C, did not affect FFA, and were not observed to be a significant variable. Likewise, the steam pressure/amount used in deodorization was not an effective variable for the FFA change (p > 0.05). Multiple batches of cocoa beans are roasted and combined to produce cocoa butter; since the roasting temperature distributions of the different batches in the data analysis spanned a wide scale, the single value obtained by averaging was not statistically significant (p > 0.05). In the roasting data obtained using the median, only the upper temperature was significant (p ≤ 0.05). Because the upper roasting temperatures are used to remove moisture while the lower roasting temperatures serve aroma and color development, the FFA change at the lower roasting temperatures was minimal, and the lower roasting temperature was not significant in affecting FFA. Since the temperatures applied in the debacterization stage are below 250 °C and the treatment time is on the order of seconds, FFA removal there is minimal, and with such a low level of effect, debacterization was not found significant in affecting FFA (p > 0.05). According to the results of the study, increasing the debacterization time could improve FFA stabilization downstream in the process, and raising the deodorization temperatures to 200 °C and above could increase the FFA removed and thereby improve process efficiency.
-
Item: Investigation of the antifungal activity of postbiotics obtained from lactic acid bacteria (Graduate School, 2024-07-09) Erdem Akinan, Zeynep ; 506211520 ; Food Engineering
Lactic acid bacteria produce many metabolic by-products, called postbiotics, during fermentation. The number of studies in the literature on the antifungal effects of postbiotics is limited. Therefore, this study investigated the antimicrobial effect of the postbiotics of three different lactic acid bacteria (Lactobacillus acidophilus, Lactobacillus rhamnosus, Lactobacillus plantarum) and examined whether these postbiotics are effective in extending the shelf life of cakes. Within the scope of the thesis, mould was isolated and identified from a mouldy cake; the isolate was identified as Penicillium brevicompactum ZAEDND23 with a similarity ratio of 99.43%. For characterization of the postbiotics, organic acids were determined by high-performance liquid chromatography, and the presence of lactic acid, tartaric acid, and propionic acid was detected. According to the antibacterial findings, the postbiotics obtained from Lb. acidophilus and Lb. rhamnosus showed the highest activity against Salmonella Typhimurium, Staphylococcus aureus, and Pseudomonas aeruginosa. The postbiotics obtained from all three bacteria were effective against Penicillium spp. but showed no inhibitory effect against Aspergillus niger. The most effective antifungal activity was found in the Lb. plantarum postbiotic, with 48.07 ± 5.65% inhibition against P. brevicompactum ZAEDND23, and this postbiotic was used in the product trial. To determine the right concentration of the selected postbiotic in cake products, 0.25%, 0.5%, and 1% postbiotic were added to the cake batter in preliminary trials. Among these concentrations, the cakes with 1% addition were found unsuitable in texture and color, while the cakes with 0.25% and 0.5% postbiotic showed no textural difference from the cakes with chemical preservative or without preservative. However, among the cakes followed in accelerated shelf life testing, those with 0.25% postbiotic moulded after 12 days, while those with 0.5% and 1% postbiotic showed no moulding; comparative accelerated shelf life studies therefore continued with the 0.5% postbiotic concentration. In the accelerated shelf life study, physicochemical and microbiological tests were performed during storage on the preservative-free cakes, the cakes with chemical preservative, and the cakes containing 0.5% postbiotic. In general, no statistically significant difference was determined among the cakes in terms of the examined properties, while in sensory evaluation the cakes containing postbiotic or chemical preservative scored higher than the preservative-free ones. The accelerated shelf life test examined whether the postbiotic extended the shelf life of the cakes over 60 days. The preservative-free cakes stored at 37°C and 75% relative humidity moulded on day 60, but no moulding was seen in the postbiotic or chemical-preservative cakes throughout storage. Of the cakes followed at 28°C, four of each type, all four preservative-free cakes moulded after 15 days, while only one of the postbiotic cakes moulded and none of the chemical-preservative cakes did. Furthermore, no mould growth was observed up to day 30 in three of the postbiotic cakes followed at 28°C.
Çalışmalar sonucunda postbiyotikli kek örneklerinin duyusal özellikleri açısından kabul edilebilir olması, koruyucusuz ve kimyasal koruyuculu çeşitlerle kalite özellikleri açısından aralarında önemli bir fark tespit edilmemesi ve mikrobiyolojik açıdan koruyucu özellik göstermesi endüstride doğal koruyucu alternatifi olarak kullanılması açısından umut vadetmektedir.
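For reference, the percent inhibition quoted above is conventionally computed from paired growth measurements; a standard form (the abstract does not state the exact formula used) is:

    \[ \text{Inhibition (\%)} = \frac{G_{\text{control}} - G_{\text{treated}}}{G_{\text{control}}} \times 100 \]

where G_control and G_treated are the fungal growth measures (e.g., colony diameters or growth areas) without and with the postbiotic, respectively.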
-
ÖgeNetwork digital twins: Tackling challenges and enhancing wireless network management(Graduate School, 2024-09-02) Ak, Elif ; Canberk, Berk ; 504182507 ; Computer EngineeringIn the age of rapid digital transformation, wireless networks are increasingly critical, connecting billions of devices and supporting bandwidth-heavy applications like virtual reality and HD video streaming. However, the surge in network traffic and the dynamic nature of modern networks have exposed limitations in traditional network management. These systems, which often rely on outdated data and lack predictive capabilities, struggle to manage the complexity of emerging technologies like 6G and the Internet of Things (IoT), leading to suboptimal performance, security vulnerabilities, and inefficiencies. This thesis explores Network Digital Twins (NDTs) as a solution to these challenges. NDTs are virtual replicas of physical networks that allow real-time monitoring, simulation, and optimization. By mirroring real network behavior, NDTs enable proactive management through predictive analytics and scenario simulations. These capabilities are vital for optimizing network performance, anticipating problems, and integrating new technologies seamlessly. The research introduces a new digital twin networking framework called T6CONF, designed for IPv6 infrastructures. This framework tackles communication and synchronization challenges in NDT ecosystems, ensuring robust, real-time network management. It incorporates a What-if Analysis module powered by AI, which generates synthetic data to simulate various network conditions and predict outcomes, improving decision-making across different network environments. Through various case studies, the thesis demonstrates how NDTs can enhance key performance metrics like throughput, latency, packet loss, and coverage. In WiFi networks, the proposed Digital Twin WiFi Network (DTWN) uses AI-based techniques to improve interference management and throughput. In wireless ad-hoc networks, NDTs optimize network selection and packet delivery, while in IoT networks, NDTs support context-aware data management, contributing to smart city initiatives and sustainability goals like net-zero carbon emissions. In conclusion, the thesis provides a comprehensive framework for implementing and evaluating NDTs in wireless network management. It highlights the potential of NDTs to improve network performance and scalability, paving the way for the future integration of emerging technologies. Through this research, NDTs are positioned as essential tools for managing the growing complexity of modern wireless networks.
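As an illustration of the what-if idea, here is a minimal, self-contained loop with invented state variables and an invented scoring rule; the actual T6CONF interfaces and AI models are not described in this abstract:

    from dataclasses import dataclass
    import random

    # Invented twin state; a real NDT mirrors far richer network telemetry.
    @dataclass
    class TwinState:
        throughput_mbps: float
        latency_ms: float

    def synthesize_scenarios(base: TwinState, n: int) -> list[TwinState]:
        # Generate synthetic variations around the mirrored network state,
        # standing in for the AI-driven synthetic data generation.
        return [TwinState(base.throughput_mbps * random.uniform(0.7, 1.3),
                          base.latency_ms * random.uniform(0.8, 1.5))
                for _ in range(n)]

    def what_if(base: TwinState) -> TwinState:
        # Score candidate states and return the predicted best outcome
        # under an invented throughput-minus-latency objective.
        return max(synthesize_scenarios(base, 100),
                   key=lambda s: s.throughput_mbps - s.latency_ms)

    print(what_if(TwinState(throughput_mbps=120.0, latency_ms=25.0)))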
-
ÖgeInteraction mechanisms of some nucleoside-analog cytotoxic antineoplastic drugs with dsDNA and their electrochemical analyses(Graduate School, 2023-10-16) Şenel, Pelin ; Gölcü, Ayşegül ; 509172007 ; ChemistryMany anticancer drugs in clinical use exert their antitumor effect by damaging DNA through covalent or non-covalent binding, or by inhibiting DNA synthesis. Moreover, many of them, when targeting DNA, also cause cell death by altering DNA activity as a consequence of their cytotoxic effect. Molecules that target DNA directly or indirectly are the most effective anticancer drugs for clinical use. However, the anticancer drugs currently used in the clinic suffer from problems such as lack of selectivity and limited ability to control metastasis. For this reason, new approaches such as immunotherapy and modulation of the cell cycle have been developed and continue to be developed. On the other hand, drugs that can bind to DNA and induce DNA damage are also known to show positive effects when used in combination with these newly developed treatment methods. Considering the strong therapeutic properties of molecules that can bind to DNA, induce DNA damage, and exhibit cytotoxicity, despite some adverse side effects in cancer treatment, it is anticipated that such molecules will be needed for many years to come. With all this in mind, fully understanding the interaction mechanisms of clinically used anticancer drugs with DNA is extremely important for using them more safely and for developing more effective and, at the same time, more selective drugs. The helical structure of DNA and the base pairs it contains enable it to bind small molecules (active pharmaceutical ingredients, metal ions, metal complexes, proteins, and some organic and inorganic species) or to take such species into the double helix. Molecules can interact with DNA in two ways: covalent binding and non-covalent binding (weak intermolecular interactions). Covalent binding between DNA and small molecules is an irreversible process that causes cell death. It involves direct or cross-linked binding of molecules to the N7 nitrogen atom of the electron-rich guanine (G) or adenine (A) bases in the major groove of the DNA structure. Non-covalent binding between DNA and small molecules, in turn, is divided into three types: electrostatic (external) binding, intercalation, and groove binding. Within the scope of this thesis, the interaction mechanisms with double-stranded DNA (dsDNA) of six different active pharmaceutical ingredients used in cancer treatment and having serious side effects (azacitidine, cladribine, clofarabine, eltrombopag, fludarabine, and pemetrexed) were examined. In all investigations, spectrophotometric techniques (ultraviolet-visible and fluorescence), voltammetric methods (cyclic, differential pulse, and square wave), thermal denaturation, and viscosity measurements were used. Based on the results obtained from each method, the interaction mechanisms of the active ingredients with dsDNA were determined. In addition, the binding constant (Kb) values, which indicate the strength of each molecule's binding to dsDNA, were calculated together with intra-day and inter-day repeatability parameters.
Using ultraviolet absorption spectrophotometry experiments performed at different temperatures, thermodynamic parameters of the dsDNA-drug interactions, such as the Gibbs free energy change (∆G), enthalpy change (∆H), and entropy change (∆S), were also calculated. As a result of all the experimental dsDNA-drug interaction studies carried out within the thesis, it was demonstrated that the active ingredients azacitidine and eltrombopag bind to dsDNA by intercalation, whereas cladribine, clofarabine, fludarabine, and pemetrexed bind to dsDNA by groove binding, settling into the minor groove of the dsDNA structure. In addition, molecular docking and molecular dynamics simulation studies of these active ingredients were carried out by a different, theoretically oriented research group. The in silico studies also confirmed these binding modes of the drugs. In the second part of the thesis, quantification methods based on the dsDNA-drug interaction were developed. Two methods, based on the interaction of dsDNA with the azacitidine and cladribine active ingredients only, were developed and applied to pharmaceutical dosage forms. Cyclic, differential pulse, and square wave voltammetry techniques were used in method development. In these techniques, a pH scan was performed in different buffer media (H2SO4, acetate, phosphate) to determine in which pH and buffer medium the oxidation current signal exists or takes a higher value. After the working buffer medium and pH value were determined, the voltammetric behavior of the drugs was examined at different scan rates (0.005-1 V/s range). From the scan-rate experiments, it was determined whether the oxidation of the active ingredients was diffusion- or adsorption-controlled and whether the reaction was reversible or irreversible. Following this, fast, easy, accurate, sensitive, precise, and selective voltammetric techniques requiring no separation step were developed for the determination of azacitidine and cladribine, and their applicability to pharmaceutical dosage forms containing these active ingredients was demonstrated statistically. All necessary validation studies of the developed method were carried out. The results obtained from the validation parameters showed a limit of detection (LOD) of 0.30 µM for cladribine. The determination of the active ingredients was also compared with a second method developed from the dsDNA-drug interaction. In this method, the calibration curve was obtained by monitoring the changes in the oxidation current signal of the guanine base of dsDNA at different drug concentrations. With this method, the limit of detection was found to be 0.92 µM for cladribine and 0.62 µM for azacitidine.
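The abstract does not reproduce the working equations; for UV-Vis titration data of this kind, Kb is commonly obtained from the Wolfe-Shimer relation and the thermodynamic parameters from a van't Hoff analysis, sketched here in their standard forms:

    \[ \frac{[\mathrm{DNA}]}{\varepsilon_a - \varepsilon_f} = \frac{[\mathrm{DNA}]}{\varepsilon_b - \varepsilon_f} + \frac{1}{K_b\,(\varepsilon_b - \varepsilon_f)} \]
    \[ \Delta G = -RT\ln K_b, \qquad \ln K_b = -\frac{\Delta H}{RT} + \frac{\Delta S}{R}, \qquad \Delta G = \Delta H - T\Delta S \]

where ε_a, ε_f, and ε_b are the apparent, free, and bound extinction coefficients of the drug, and the slope and intercept of ln Kb versus 1/T yield ∆H and ∆S.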
-
ÖgeOptimum finance-based scheduling(Graduate School, 2024-12-24) Akın, Furat Doğu ; Damcı, Atilla ; 501192007 ; Structure EngineeringFinancial issues are among the most common causes of business failure in the construction industry. Hence, construction contractors must account for financial parameters in a project's planning phase. However, schedulers mostly neglected integrating work scheduling and finance management until finance-based scheduling methods were introduced. Although several researchers developed models to solve finance-based scheduling problems, they neglected essential issues that must be addressed. Some of these researchers used the activities' total floats to shift the activities' start times to minimize the contractor's financing cost, yet they never considered that there could be more than one work schedule providing the minimum financing cost. Also, except for a few, they never indicated that consuming the activities' total floats may make work schedules more critical. Moreover, studies in the finance-based scheduling literature incorporated resource-leveling parameters and techniques, yet none considered that there are nine different resource-leveling objective functions widely used to obtain smoother resource histograms. Lastly, most researchers in the finance-based scheduling literature used a line of credit as the sole financing alternative. A few considered other financing alternatives, such as short-term and long-term loans, but neglected the use of a wider set of alternatives to minimize the contractor's financing cost. In this study, three different models are proposed to fill the gaps mentioned above. The first model ensures the selection of one schedule among the multiple schedules with the minimum financing cost by calculating the float consumption cost and selecting the schedule with the minimum float consumption cost. In this first model, a line of credit and short-term and long-term loans are used as the financing alternatives. A line of credit, short-term loans, and long-term loans are likewise the financing alternatives used to minimize the contractor's financing cost in the second proposed model, which is similar to the first. In addition, the second model enables contractors to select the optimal work schedule by considering their priorities. If the contractor gives more importance to minimizing the project's financing cost than to minimizing resource fluctuations, the contractor can obtain the work schedule with the minimum resource fluctuations among the schedules with the minimum financing cost by using the second proposed model. Conversely, if the contractor prioritizes the minimization of resource fluctuations over the minimization of financing cost, the reverse of the first process becomes available. Nine different resource-leveling objective functions are considered within the scope of the second model to ensure that the selected work schedule provides the smoothest resource histogram possible. In the last proposed model, balloon loans are considered in addition to the other financing alternatives used in the first and second models. Also, contractors can front-load their bids along with using the different financing alternatives mentioned earlier to minimize the financing cost further.
The users of all three models can obtain financing schedules that specify when funds are borrowed and repaid and when interest payments fall due; the amounts involved are likewise produced as outputs of the three proposed models. However, the proposed models have several limitations that can be addressed in future studies. First, none of the three models considers the likelihood of the contractor having many projects in its portfolio. Contractors in the construction sector typically manage multiple projects simultaneously, so improvements to the proposed models are needed to accommodate this aspect. Second, none of the models uses a stochastic method to capture the uncertain environment of the construction industry; additional research is needed to close this gap. Third, the presented models do not incorporate any time-cost trade-off analysis, which might assist contractors in reducing financing costs even further; future research should take such analyses into account to increase the models' performance. Fourth, all three models could be extended with a broader range of financing alternatives employed by contractors in the construction industry that have not yet been addressed in the finance-based scheduling literature.
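As an illustration of the financing-cost comparison such models perform, here is a minimal sketch under a line of credit only, with invented rates and cash flows (the thesis models also handle short-term, long-term, and balloon loans):

    # Hedged sketch: interest cost of one schedule when every cash deficit
    # is covered by a line of credit, with interest accruing on the draw.
    def financing_cost(cash_flows: list[float], period_rate: float) -> float:
        balance, interest = 0.0, 0.0
        for net in cash_flows:      # net cash flow per period (inflow - outflow)
            balance += net
            if balance < 0:         # a negative balance is borrowed on the credit line
                charge = -balance * period_rate
                interest += charge
                balance -= charge   # interest compounds onto the borrowed balance
        return interest

    # Choosing among candidate schedules of equal duration (illustrative numbers):
    schedules = {
        "early-start": [-50.0, -30.0, 40.0, 60.0],
        "shifted":     [-50.0, 10.0, -20.0, 80.0],
    }
    best = min(schedules, key=lambda k: financing_cost(schedules[k], 0.01))
    print(best, financing_cost(schedules[best], 0.01))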
-
ÖgeOrthogonality based feature selection for ai applications(Graduate School, 2024-08-19) Şentop, Mehmet Selahaddin ; Üstündağ, Burak Berk ; 504221523 ; Computer EngineeringFeature selection is a significant aspect of AI models that directly influences their accuracy and efficiency. A common problem in this process is redundancy among features, where multiple features provide overlapping information. Besides being inefficient, this redundancy can cause overfitting, where a model becomes too tailored to the specific data it was trained on and fails to generalize to new data. To tackle these challenges, this thesis introduces an orthogonality-based approach to feature selection. By ensuring that the selected features are independent and non-redundant, this approach improves the model's performance across various tasks. Two example applications, data imputation and short-term forecasting, are explored to demonstrate the effectiveness of this approach. Missing, distorted, or inaccurate data is a serious problem in many fields, including agriculture, healthcare, and environmental monitoring. Such gaps make it hard to trust the results of any analysis or decisions based on the data. Problems like sensor breakdowns, transmission errors, or incomplete data collection can make entire datasets unreliable, which can lead to biased conclusions and poor decisions. This issue is especially serious where decisions must be made quickly and accurately, as in real-time systems. For example, missing data in an agricultural monitoring system could lead to wrong decisions about watering crops, which could harm yields. To solve this problem, this study introduces a new orthogonality-based feature selection method realized in the Predictive Error Compensated Neural Network (PECNET) model. PECNET focuses on selecting data features that are independent of each other and on correcting errors in predictions to improve the accuracy of missing-data imputation and short-term forecasting. The study is based on two main ideas. First, it suggests that advanced machine learning models like PECNET can do a better job than traditional methods at finding and using patterns in complex data. Second, it holds that by ensuring the features the model uses are independent, PECNET can avoid overfitting, which happens when a model is too closely tailored to the specific data it was trained on and does not work well with new data. PECNET's approach to selecting which data to focus on is a key part of its success. The model begins by looking at how different data points relate to each other and to the target being predicted. It first picks the data feature that has the biggest impact on the target. Then, instead of simply adding more similar features, PECNET focuses on predicting and correcting errors from earlier predictions. This way, it finds new patterns in the data that were not considered before, avoids redundancy, and handles new data better. The study tested PECNET using data from the Agricultural and Environmental Informatics Research and Application Center (TARBIL), a system that collects agricultural and environmental information from across Türkiye. PECNET was tested in two types of experiments for missing-data imputation: one using data from just one station, and another combining data from several nearby stations.
In both types of experiments, PECNET, especially when combined with the Discrete Wavelet Transform (DWT), showed better accuracy than traditional methods. Numerically, PECNET + DWT achieved more than 50% lower Root Mean Squared Error (RMSE) in single-station experiments and up to 80% lower RMSE in multi-station experiments. The model's ability to use data from multiple stations led to substantial improvements in predicting challenging variables like wind speed and humidity. Besides filling in missing data, PECNET was also tested on predicting short-term rainfall, which is very important for farming. Accurate rainfall predictions help farmers make better decisions about when to water crops, manage land, and estimate yields. In these tests, PECNET performed better than traditional models like Long Short-Term Memory (LSTM) and Prophet, achieving 50% lower Mean Absolute Percentage Error (MAPE) and roughly three times lower RMSE and Mean Absolute Error (MAE). PECNET's ability to combine different types of independent data helped it make more accurate and reliable short-term rainfall forecasts. In summary, the orthogonality-based feature selection method, whose impact is demonstrated through PECNET, offers a new and effective way to deal with the challenges of missing data and short-term forecasting. By focusing on selecting independent data features, the method not only improves accuracy but also avoids common pitfalls like overfitting. The study's results support the initial hypotheses, showing that orthogonality-based feature selection can effectively overcome the limitations of traditional methods. Its successful application to the TARBIL dataset suggests that it could be a valuable tool in many fields where accurate data and forecasts are crucial. This research is an important step forward in improving how data is analyzed and decisions are made.
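A minimal sketch of the selection-and-compensation loop described above, on synthetic data; the actual PECNET pipeline adds wavelet bands, windowing, and adaptive normalization:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 6))                 # candidate features (toy data)
    y = 2 * X[:, 0] + np.sin(X[:, 3]) + 0.1 * rng.normal(size=500)

    residual, selected = y.copy(), []
    for _ in range(2):                            # two cascaded stages
        # Pick the feature most correlated with what is still unexplained,
        # so each stage adds non-redundant (near-orthogonal) information.
        corr = [abs(np.corrcoef(X[:, j], residual)[0, 1]) for j in range(X.shape[1])]
        j = int(np.argmax(corr))
        selected.append(j)
        net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        net.fit(X[:, [j]], residual)
        residual = residual - net.predict(X[:, [j]])  # next stage targets the error

    print("selected features:", selected,
          "residual RMSE:", float(np.sqrt((residual ** 2).mean())))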
-
ÖgePotential of lactic acid bacteria fermentation as a strategy for valorisation and biotransformation of mushrooms(Graduate School, 2024-08-15) Sümer Ayar, Eda Nur ; Özçelik, Beraat ; 506172502 ; Food EngineeringConsumers increasingly recognise the importance of healthy eating and the potential benefits of incorporating mushrooms into their diets. Known for their rich nutritional profile, mushrooms provide essential vitamins, minerals, fibre, and antioxidants, making them valuable plant-based food sources. They are also rich in bioactive compounds with anti-inflammatory, antioxidant, antitumor, antiviral, and antimicrobial properties, promoting health and reducing disease risks in humans. Among the various bioactive components in mushrooms, phenolic compounds are particularly noteworthy. These compounds are considered significant secondary metabolites in mushrooms and are found in free and bound forms within food matrices. However, the bioaccessibility and bioavailability of bound phenolic compounds are lower than those of free phenolic compounds because their covalent bonds to cell wall matrices prevent absorption in the small intestine. Additionally, mushroom production generates various by-products, which pose environmental and financial challenges due to their disposal. Innovative processing techniques are required to enhance the bioavailability of phenolic compounds and add value to mushroom by-products. Fermentation with lactic acid bacteria (LAB) is effective in this context. LAB fermentation not only extends shelf life and improves sensory properties but also breaks down macronutrients such as carbohydrates and proteins. This alters the nutritional composition of the food and facilitates the transformation of bound phenolic compounds into more bioavailable forms, similar to free phenolic compounds. Through LAB fermentation, macronutrients are transformed, antioxidative peptides are released, and phenolic compounds are modified. This process enhances the health benefits of mushrooms by increasing the bioavailability of mushroom phenolics, making them more accessible for absorption and use by the body. Therefore, fermentation techniques can significantly improve the use, health benefits, and by-products of mushrooms. Given this background, the research framework of this doctoral thesis explores modifying industrial mushroom wastes and specific extracted components, as well as mushrooms like L. edodes and La. deliciosus that may become waste due to their short shelf life. The research plan is based on processing mushroom waste and mushrooms through LAB fermentation. The objectives of this doctoral thesis are: (i) to valorise the mushroom waste generated from bioactive substance extraction, modifying its structure and nutritional composition through fermentation with lactic acid bacteria; (ii) to ferment the mushrooms with lactic acid bacteria to facilitate the transition of phenolic compounds from bound to free form, altering the structure of L. edodes and La. deliciosus; (iii) to determine the biotransformation of these phenolic compounds using analytical identification; and (iv) to investigate the effects of LAB fermentation on the bioaccessibility and intestinal transport of mushroom phenolics using an in vitro gastrointestinal digestion model. To achieve these objectives, three experimental studies (Chapters 3-5) were conducted within the scope of this thesis. The first study focused on the fermentation of L. edodes mushroom waste, the L. edodes residue (LER), with lactic acid bacteria compared with L. edodes itself (LE), examining changes in its structure and nutritional composition for functional properties (Chapter 3). The second study addressed the changes in phenolic components, their interactions with other metabolites, and the profiling of phenolic substances in fermented L. edodes and La. deliciosus (Chapter 4). Building on those findings, the third study examined the bioaccessibility and antioxidant activity trends of the phenolic components in an in vitro gastrointestinal digestion model.
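For reference, bioaccessibility in such in vitro digestion studies is conventionally expressed as the fraction of a compound recovered in the digested (bioaccessible) phase relative to the undigested sample; a standard form (the abstract does not state the exact formulation used) is:

    \[ \text{Bioaccessibility (\%)} = \frac{C_{\text{bioaccessible fraction}}}{C_{\text{undigested sample}}} \times 100 \]

where C denotes the phenolic (or other analyte) concentration in each fraction.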
-
ÖgePredictive error compensated wavelet neural networks framework for time series prediction(Graduate School, 2024-07-22) Macit, Serkan ; Üstündağ, Burak Berk ; 504221532 ; Computer EngineeringMachine learning algorithms have received considerable attention and recognition in the context of time series prediction problems; however, constructing an accurate machine learning model with an optimal architecture and hyperparameters becomes highly challenging if the data is non-linear, encompasses multivariable characteristics with chaotic or stochastic properties, and is sensitive to environmental factors. In this context, common issues encountered in time series prediction models and frameworks include overfitting, a machine learning problem arising when labeled data is limited but input data is highly varied; generalization issues due to insufficient or inadequate input data; the need for extensive feature engineering to properly set internal weights in artificial neural networks; dependency on network parameters and limited adaptability to different problems; and high computational and time costs. Predictive Error Compensated Wavelet Neural Networks (PECNET) is an innovative artificial neural network architecture for time series prediction. It avoids overfitting by training cascaded networks separately on different frequency bands of the data and using the remaining error of each network as the target data for the next. In the PECNET architecture, data is first fed into a network with a low-frequency band over a wide time window, and each subsequent network is trained on higher-frequency data over narrower time windows, using the error of the previous, lower-frequency network as its target data. This method improves the orthogonality of data features across time windows and allows orthogonal features to be chosen in data fusion applications. It also makes predictions more accurate as more networks are added, which lowers the risk of overfitting. Additionally, by applying the wavelet transform as a feature extraction method to the various frequency components of each network, the variety of patterns present in the data can be distinguished and extracted. PECNET also overcomes the traditional normalization problems of non-stationary time series data by using adaptive normalization techniques. In conclusion, PECNET is a strong alternative for solving time series prediction problems due to its high prediction accuracy without overfitting, its unique structure that allows adaptation to different problems independently of network parameters, and its low computational and time cost, requiring only a two-layer MLP structure. The PECNET model, owing to its composition of cascaded networks, modular feature extraction, and fusion networks, presents challenges in implementation at the coding level. PECNET contains many sequential cascaded neural networks that are trained on each other's errors. When training the sequential networks with input data sequences, it is necessary to shift the input values to the previous time window; otherwise, data from the future would be used, which is not feasible in practice.
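A minimal sketch of the frequency-band decomposition step behind this cascade, assuming PyWavelets and a toy signal (windowing and adaptive normalization are omitted):

    import numpy as np
    import pywt

    t = np.arange(1024)
    series = np.sin(0.02 * t) + 0.3 * np.sin(0.3 * t)   # toy signal

    # Multilevel DWT: cA3 is the low-frequency band, cD3..cD1 are higher bands.
    coeffs = pywt.wavedec(series, "db4", level=3)
    names = ["cA3 (lowest band)", "cD3", "cD2", "cD1 (highest band)"]
    for name, c in zip(names, coeffs):
        print(f"{name}: {len(c)} coefficients")

    # In the cascade, network 1 sees the low band over a wide window; each later
    # network sees a higher band over a narrower window and is trained on the
    # previous network's prediction error rather than on the raw target.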
The continuous use of error data sequences as both label and input data in training, and the use of the final error in the fusion network, as both input and target data, for predicting the remaining error value of the time series, increase computational complexity for solutions involving numerous data sources and networks. This can lead to execution errors in time synchronization management. To overcome these challenges, PECNET framework software has been developed within the scope of this thesis, ensuring efficient use of memory and processor resources along with a modular design. In developing the PECNET framework software, the C compiler and the Python interpreter were utilized. The NumPy, Pandas, PyWavelets, and Matplotlib libraries were used for data processing tasks. The PyTorch library was chosen for constructing the artificial neural network model due to its extensive modification features and options for interacting with graphics processing units (GPUs). The design adhered strictly to object-oriented programming principles, a syntax similar to Keras was used, and the machine learning application cycle followed the sklearn flow (fit-predict-eval). The PECNET framework is made up of various modules that work together. In the "models" module, the BasicNN class forms the core of the neural network architecture; it manages key tasks like initializing the network, fitting data, computing loss, and adjusting the model during training. In the "network" module, specialized classes like ErrorNetwork, FinalNetwork, and VariableNetwork handle specific stages of the prediction process. ErrorNetwork focuses on correcting prediction errors; FinalNetwork integrates predictions from previous networks into a final output; and VariableNetwork manages training input data across different frequencies and integrates a data fusion mechanism for multivariate data. In the same module, PECNET's functionality is encapsulated in the Pecnet class, which coordinates the workflow among the different networks: it manages the data flow between cascaded networks, error compensation, and final prediction generation. The PecnetBuilder class provides a fluent interface for constructing the Pecnet object, sequentially adding the various network components and ensuring a streamlined building process for the PECNET model. In the "preprocessing" module, the DataPreprocessor class plays a crucial role in data preparation. It lets users configure how data is processed into the frequency bands, sampling periods, and sequence sizes appropriate for each of the cascaded networks in the model, and it performs scaling, adaptive normalization, denormalization, and wavelet transforms with the help of other classes in the same module, ensuring that the input data is optimally prepared for the prediction pipeline. In the "utils" module, the Utility class facilitates hyperparameter optimization and offers tools for loading datasets and plotting results. Overall, PECNET's code-level functionality revolves around these classes, each contributing to the framework's ability to process and predict time series data efficiently. PECNET had already demonstrated successful outcomes on various datasets as standalone code, showing promising results against existing machine learning models such as ARIMA, MLP, CNN, and LSTM.
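Before turning to the results, here is a self-contained sketch of the fluent builder flow described above; the class names mirror the abstract, but the method names and bodies are hypothetical placeholders, not the actual PECNET implementation:

    # Hypothetical sketch of the PecnetBuilder fluent interface; the real
    # framework's method signatures are not published in this abstract.
    class Pecnet:
        def __init__(self, parts): self.parts = parts
        def fit(self, X, y):
            print("training cascade:", " -> ".join(self.parts)); return self
        def predict(self, X): return [0.0] * len(X)   # placeholder output

    class PecnetBuilder:
        def __init__(self): self._parts = []
        def add_variable_network(self):   # per-frequency cascaded networks
            self._parts.append("VariableNetwork"); return self
        def add_error_network(self):      # trained on the previous net's error
            self._parts.append("ErrorNetwork"); return self
        def add_final_network(self):      # fuses predictions into final output
            self._parts.append("FinalNetwork"); return self
        def build(self): return Pecnet(self._parts)

    model = (PecnetBuilder().add_variable_network()
                            .add_error_network()
                            .add_final_network()
                            .build())
    model.fit([[1.0]], [1.0])             # sklearn-style fit-predict-eval flow
    print(model.predict([[1.0]]))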
In this study, the framework implementation was first tested on a historical time series dataset of daily adjusted closing prices of Apple stock and compared with LSTM, which is known for its strong memory and sequence comprehension capabilities. In terms of the RMSE metric, the LSTM model had an error of $2.55, while PECNET had an error of $1.24. In terms of the R2 metric, the LSTM model achieved a value of 0.94, whereas PECNET reached 0.98. The framework was then comparatively tested against LSTM for seismic energy estimation on real-time chaotic Electric Field Data (EFD) collected within the scope of an earthquake prediction research project conducted at Istanbul Technical University (ITU). In this experiment, the LSTM model showed RMSE errors between 300 J and 400 J, while PECNET's errors ranged between 130 J and 150 J. In terms of the R2 metric, the LSTM results fluctuated between 0.2 and 0.3, while PECNET achieved values between 0.5 and 0.6. In both scenarios, PECNET outperforms LSTM, and the developed framework is being integrated into the portal software for real-time earthquake prediction. In conclusion, the developed modular and customizable framework facilitates the use of PECNET, which is highly performant and robust against overfitting, for various types of time series prediction by other developers in real-time machine learning systems, without requiring specific coding knowledge of PECNET.