Browsing by Sustainable Development Goal "Goal 8: Decent Work and Economic Growth"
-
A Dutch disease approach into the premature deindustrialization (Graduate School, 2022-08-18) Çakır, Muhammet Sait ; Aydemir, Resul ; 412142006 ; Economics

We explore the main causes and consequences of the premature deindustrialization phenomenon. We argue that local currency overvaluations, mainly associated with a surge in capital inflows into emerging market economies following the deregulation of their capital accounts, severely hurt the output share of the manufacturing industry. First, we empirically establish a causal link from capital flows to local overvaluations. According to the two-way error component model, which controls for the full set of country and time fixed effects, a surge in capital flows by one standard deviation is associated with an overvaluation of 1.67 percent. To address the possible endogeneity between capital flows and the real exchange rate, we estimate a bivariate first-order panel vector autoregressive model, since feedback effects from overvaluation to net financial inflows might introduce a bias into the fixed effects estimation. When we isolate the effect of a positive capital inflow shock of one standard deviation by the Cholesky decomposition, we find that it is associated with an immediate overvaluation in real terms at the 95 percent confidence level. Then we construct our baseline regression model. Applying second-generation estimators that allow for cross-sectional dependence (Augmented Mean Group and Common Correlated Effects Mean Group), we run a panel data regression on a sample of 39 developing countries in Latin America, Sub-Saharan Africa, East Asia, North America, and Europe from 1960 to 2017. We find that an overvaluation of 50 percent, which corresponds approximately to one and a half standard deviations, is associated with a contraction of the manufacturing output share as high as 1.25 percent over a five-year period.
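The two-way error component idea described above can be sketched numerically: with additive country and time fixed effects, double-demeaning (the within transformation) removes both effects exactly, and pooled OLS on the demeaned data recovers the slope. The panel below is synthetic, not the thesis's sample; the 1.67 coefficient is reused only to make the sketch concrete.

```python
import numpy as np

# Synthetic balanced panel: overvaluation = beta * inflows + country FE + year FE
rng = np.random.default_rng(0)
n_countries, n_years = 10, 20
country_fe = rng.normal(size=(n_countries, 1))   # unobserved country effects
year_fe = rng.normal(size=(1, n_years))          # unobserved time effects
inflows = rng.normal(size=(n_countries, n_years))
true_beta = 1.67                                 # effect size reported above
overval = true_beta * inflows + country_fe + year_fe

def within_transform(x):
    """Subtract country means and year means, then add back the grand mean."""
    return (x - x.mean(axis=1, keepdims=True)
              - x.mean(axis=0, keepdims=True) + x.mean())

# Pooled OLS slope on the double-demeaned data (no intercept needed)
y, x = within_transform(overval), within_transform(inflows)
beta_hat = (x * y).sum() / (x * x).sum()
```

Because the fixed effects enter additively, the within transformation eliminates them exactly here, so the estimated slope matches the true coefficient up to rounding.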
With the turn of the new century, the developing countries also experienced massive deindustrialization, shedding manufacturing value added as large as 1.24% of national income. Moreover, the evidence suggests that the relationship between real exchange rate misalignments and the manufacturing share in output might be nonlinear, so that manufacturing competencies eroded by local currency overvaluations in real terms cannot simply be brought back during undervaluation periods. We also show that the baseline regression results are robust to different data sets, alternative real exchange rate/deindustrialization measurements, and dynamic model specifications that allow us to treat the real exchange rate as an endogenous variable, addressing any potential concern regarding simultaneity bias. As a further robustness check on our findings, we empirically examine the effects of supply chain disruptions, inequality shocks, and institutional innovations on the path of industrialization in developing countries by running a panel vector autoregressive model. We find that deterioration in income distribution unequivocally harms developing countries' bid for industrialization, while better institutions, proxied by an improvement in regulatory quality, invariably foster it. On the other hand, the effects of supply chain disruptions on the pace of industrialization follow a nonlinear path, showing the great resilience of local industries in absorbing imported input bottlenecks through intermediate input import substitution. We also provide evidence that backward participation in GVCs and regulatory quality do not mutually Granger-cause each other, and suggest that the well-established link from better governance to GVCs may be missing in the developing-country case.
Based on these empirical findings, the need for a comprehensive industrial policy, along with a firm use of capital controls and macroprudential measures within a robust institutional framework, emerges as the main policy implication of our work and is discussed in light of recent developments in the literature.
-
A new approach in studying the engineering behavior and mechanical properties of artificial bonded soils in the laboratory (Graduate School, 2022-01-31) Ricardo, Richard Vall Ngangu ; Lav, Musaffa Ayşen ; 501142303 ; Soil Mechanics and Geotechnical Engineering

The construction of structures on structured soils, or the exploitation of such materials for construction purposes, such as in road pavement projects, has gained importance over time, and in some parts of the world their study has become a necessity. Such soils, like residual soils, are widely encountered in tropical and subtropical regions. Even though their names may vary according to local culture or morphology, they all have bond structures in common, and this property is a key parameter of these soils. To better study their behavior, the use of artificial bonded samples prepared in the laboratory has been adopted, offering an effective simulation. In the present study, the behavior of residual-soil-like material was investigated under undrained conditions in triaxial equipment, using a large number of artificial samples made in the laboratory. The artificial bonded and unbonded samples were made from a mixture of sand, kaolin, and water. A thermal process was applied to the bonded specimens, whereas the unbonded samples were not fired. A preliminary investigation was carried out on four different particle size distribution curves. In these gradation curves, the dry kaolin/sand ratio and the kaolin particle size distribution were kept the same; only the sand grain size distribution was varied. The study was conducted on the chosen best-fitted sand-kaolin gradation curve. Besides the triaxial tests, a direct shear box apparatus was also used for comparative purposes. For every type of tested material, three different initial effective confining pressures or normal stresses were applied. Throughout this process, five different bonding levels were used.
Several properties of such soils were examined, among them the stress-strain response, the pore water pressure evolution, the stress ratio, and other strength parameters. The equivalent artificial bonded specimens, but in an unbonded state, were used to gain a better understanding of their mechanical characteristics. A novel approach was investigated and established, based on a new parameter called the bonding index (B_i). This parameter was derived from the bounding surface, which is one of the most important features of bonded soils studied under triaxial tests. The proposed method proved effective and practical. The strength parameters of the bonded soils, such as the cohesion intercept, the angle of internal friction, the peak strength, and the stress ratio, were found to be directly related to B_i, which captured the enhancement of bonding well. Furthermore, B_i can be used to characterize the confining stress level: a B_i close to zero implies the highest stress level for the artificial bonded soils, whereas, independent of the stress level, all unbonded soils display a B_i equal to zero. The coupled effect of B_i and the confining pressure was grouped into three main stages: the first, at lower confining stresses, where a remarkably high value of B_i is recorded; the second, at moderate stresses; and the third, where the smallest B_i value was observed. Each stage was associated with a particular behavior of these soils according to the bonding level present. It is worth pointing out that a soil sample with a higher B_i was found to be less ductile. The suggested method was observed to be an appropriate alternative means for the geotechnical evaluation and analysis of the behavior of structured soil materials.
Comparison of the results of the CIU tests and the DST revealed good agreement for weakly bonded and unbonded samples, particularly for the strength parameters, the cohesion intercept, and the angle of internal friction. For highly bonded materials, however, an important divergence was observed, with the DST results overestimating the strength. A study of the debonding process was carried out through a new approach constructed from the deviatoric stress increment (∆q) versus axial strain (ε_a) curves, drawn on a natural scale. Six important feature points were found to be typical of bonded soils, while only two of them were observed for unbonded samples. The first yield was identified at the initial point, after which the slope of ∆q decreased significantly, coupled with the maximum pore water pressure increment d∆u_max; this point marks the start of the debonding process. The second point is at ∆q_max, at the second yield, a point of major loss of strength. The third and fourth points were at d∆u = 0 and ∆q = 0 (q_max), respectively, while the fifth point was identified as where ∆q reaches its minimum, ∆q_min. The last point was at the critical state or the equivalent state. Each point represents a particular behavioral state of bonded soils. Throughout the study, it was observed that confining pressure considerably influences the response of bonded soils. For example, the aforementioned six features specific to bonded soils were found to be reduced to only two points, particularly for weakly and moderately bonded materials, as σ_3 increased from 30 kPa to 700 kPa. Furthermore, a larger value of the bonding index was achieved at lower confining stress. Therefore, for a better understanding of the behavior of bonded soil materials, it is recommended to conduct such investigations at lower initial effective stress, especially for the analysis of the debonding process.
-
Akıllı çok ölçütlü yasal takip avukatlık ofisi performans yönetimi sistemi [Intelligent multi-criteria legal follow-up law office performance management system] (Institute of Science and Technology, 2020-06) Uruç, Erdinç ; Onar, Sezi Çevik ; 642735 ; Department of Industrial Engineering

Within the scope of this thesis, a model was developed to measure the performance of the law offices that carry out the legal follow-up processes firms rely on for debts not paid on time. The model performs its calculations with two different methods, one based on the analytic hierarchy process (AHP) and the other on the fuzzy analytic hierarchy process (FAHP). A software application was developed in Java to perform these calculations; it computes performance under both AHP and FAHP and ranks the law offices by their resulting performance scores. Today, the volume of unpaid debt grows day by day. Firms deliver products and services to their customers but do not always receive payment on time. This situation seriously affects firms' cash flow, turnover, credit scores, and even brand value. For this reason, firms attach great importance to the collection of unpaid debts.
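As a minimal illustration of the AHP step described above (the 3x3 pairwise comparison matrix here is hypothetical, not taken from the thesis), the criterion weights are obtained from the principal eigenvector of the comparison matrix, and the judgments are checked with Saaty's consistency ratio:

```python
import numpy as np

# Hypothetical reciprocal pairwise comparison matrix for three criteria
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)            # index of the principal eigenvalue
weights = eigvecs[:, k].real
weights = weights / weights.sum()      # normalise so the weights sum to 1

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)   # consistency index
cr = ci / 0.58                         # random index RI = 0.58 for n = 3
```

A consistency ratio below 0.1 is conventionally taken to mean the pairwise judgments are acceptably consistent; the fuzzy variant replaces the crisp entries of A with fuzzy numbers but follows the same prioritisation logic.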
-
An operating costs estimation model for the tanker ships (Institute of Science and Technology, 2020) Aksoy, Mehmet Sabri ; Çiçek, Kadir ; 637698 ; Department of Maritime Transportation Engineering

With its advantages, maritime transportation has been the most widely used transportation mode throughout recorded history. So much so that, according to the United Nations Conference on Trade and Development (UNCTAD) Review of Maritime Transport 2019 report, 11 billion tons of cargo, some 80% of world transport, is carried by sea. The specialization of ships by type and by size has been the most important trigger for the development of maritime transportation, and with the growth in specialization in shipping, the need for economic information on shipping has increased considerably. In other words, the need for economic analysis studies in international maritime transportation grows day by day with the increasing volume of maritime transport. In the literature, interest in economic analysis studies for maritime cargo vessels has increased, and reports published by maritime consultancy agencies have grown continuously. At the same time, global political fluctuations, environmentally driven regulations, and energy efficiency concerns force shipowners and ship management companies toward better planning and flexible, forward-looking decision-making processes in order to maintain their competitiveness in such a dynamic environment. In particular, enhancing economic performance through an appropriate financial management system is the most critical point for shipping companies in an increasingly competitive environment. For these reasons, forecasting the changes in operational cost items offers shipping companies a reasonable contribution to the enhancement of their economic performance.
That is why, nowadays, economic forecasting of critical operational cost items has become an important part of financial planning for shipping companies. From this viewpoint, this thesis focuses on the estimation of operational cost items for tanker ships based on previous years' data. By estimating operational cost items such as crew costs, spares and stores costs, repair and maintenance costs, insurance costs, administration costs, and technical management fees, the study is expected to improve the effectiveness of shipping companies' financial plans and management systems under dynamic market conditions. The general financial approach applied in ship management companies mainly consists of revising the previous year's costs in light of financial expectations such as inflation, price stability, and growth forecasts.
-
Applications of multi-agent systems in transportation (Graduate School, 2023-03-21) Tunç, İlhan ; Söylemez, Mehmet Turan ; 518152012 ; Mechatronics Engineering

Traffic density is a growing drawback of the crowding of cities in contemporary societies. As living standards improve with financial and technological progress, the number of cars, and with it traffic density, increases accordingly, reducing the quality of life in metropolitan areas in particular. Due to growing populations and infrastructure that cannot accommodate this density, traffic problems are on the rise: passengers waste more time in traffic, and emissions, and hence air pollution, also increase. Traffic congestion is a significant concern for numerous metropolitan areas across the globe, as it causes delays, increases commuting time, and contributes to air pollution. Controlling the flow of traffic is difficult because of its many complexities and uncertainties; nevertheless, the problem needs to be solved, as congestion reduces productivity and living standards. Modern traffic control methods offer a more effective solution than traditional ones, and as congestion continues to increase rapidly around the world, the need to research and apply such methods is growing, since congestion causes chaos in metropolitan areas, especially during heavy traffic hours.
Traditional methods that remain in use have proven inadequate, and developing technology has affected the solutions to the traffic control problem as it has every other area. With the emergence of Intelligent Transportation Systems (ITS), which utilize artificial intelligence and communication technologies, a more effective and efficient solution to traffic congestion is possible. ITS provide advanced services such as high-tech traffic controllers and various transportation modes, reducing the burden on drivers and supporting the complex decision-making they face on the road. Intelligent transportation solutions have enabled an unprecedented level of data collection within the industry, leading to significant advancements in the management and operation of transportation systems; with the increasing demand for and rate of data collection, ITS continue to advance. ITS can be described as systems consisting of technologies such as electronics, data processing, and wireless networks that provide security and efficiency in the transportation network. They provide communication and information exchange between transport units: centres that provide information to pedestrians, vehicles, infrastructure, and peripherals such as traffic lights and other communication and control units. The application of multi-agent system (MAS) techniques, as a new development in information technology, can help address traffic problems and promote energy-efficient transportation. ITS-based multi-agent technology is an important approach to solving complex traffic problems, since the complexity of traffic elements makes them well suited to multi-agent structures.
ITS-based multi-agent technology provides safer controllers and makes daily life more comfortable, increasing quality of life by decreasing the time spent in traffic and lowering the amount of emission gases released by vehicles. The structurally dispersed nature of components in heterogeneous environments causes application difficulties, such as interoperability between agents, creating a demand for a unified software platform as an underlying infrastructure; therefore, centralized solutions are preferable for relatively simple problems such as the one considered in this thesis. For both transport decision-makers and drivers, ITS have great potential for efficient and intelligent traffic management, threat identification, and driving comfort and safety. ITS can also provide a flexible approach to the effective management of complex networked transportation systems, letting traffic management decision-makers control signal changes, regulate route flows, and broadcast real-time traffic information. In addition to providing route scheduling, weather forecasting, and emergency services for drivers, ITS can help reduce driving loads and improve safety, and their implementation can generate positive outcomes across a range of areas, from environmental and national security issues to emergency management and transportation. ITS applications can reduce time spent on the road; shorter travel times provide economic savings for both individual and commercial vehicles and usually mean less environmental pollution. Intelligent Intersection Management (IIM) technology has started to develop at traffic intersections as part of Traffic Light Control (TLC) systems. Intersections are among the busiest parts of roads, so the control of traffic lights plays an important role in decreasing density.
In this thesis, particular attention is given to the control of intersections in order to find solutions that decrease traffic density and thereby increase the quality of life in big cities. Intelligent traffic control methods, whose use is increasing as new methods are developed, are applied especially at intersections with high traffic density in order to provide efficient solutions. Control of a single intersection with traffic lights is considered first. Various methods, including Fuzzy Logic Control (FLC), Proportional Integral (PI) control, and state-space model control techniques, are proposed and compared for a better traffic light controller architecture, so as to increase traffic flow and reduce both the overall waiting time of the cars and the emissions they release. It is demonstrated that the proposed architectures give better results than the traditional fixed-time traffic light control method. Different types of traffic intersections are considered in the study. Initially, a simple single-lane intersection with no left or right turns is taken into consideration. Later on, intersections where three-lane (or four-lane) roads meet, with vehicles turning left and right, are considered. Finally, a realistic case study of the Altunizade region of Istanbul is examined to demonstrate the efficiency of some of the proposed methods. The simulation results indicate that the FLC, in which the positions of the vehicles are used as the state variables, gives superior results in comparison to the other classical methods. In order to increase the efficiency of the FLC further, a built-in learning algorithm is proposed for use alongside the FLC. A deep Q-learning method is employed for this purpose as part of the agent-based traffic light controller; the resulting intelligent traffic light controller is named DQ-FLSI.
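A toy sketch of the fuzzy-logic idea behind such a controller (the membership functions, rule base, and output durations below are illustrative assumptions, not the thesis's tuned controller): the queue length on one intersection arm is fuzzified with triangular membership functions, and a Sugeno-style weighted average of the rule outputs gives the green-phase duration.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def green_duration(queue):
    """Map a queue length (vehicles) to a green-phase duration (seconds)."""
    # Fuzzify: degree of membership in "short", "medium", "long" queues (assumed shapes)
    short = tri(queue, -1, 0, 15)
    medium = tri(queue, 5, 15, 25)
    long_ = tri(queue, 15, 30, 61)
    # Rule consequents in seconds of green (assumed values)
    rules = [(short, 10.0), (medium, 25.0), (long_, 45.0)]
    # Sugeno-style defuzzification: membership-weighted average of the outputs
    num = sum(mu * g for mu, g in rules)
    den = sum(mu for mu, g in rules)
    return num / den if den > 0 else 10.0
```

An empty arm gets the minimum green time and a saturated arm the maximum, while intermediate queue lengths blend the rules smoothly, which is what makes such controllers more responsive than fixed-time plans.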
In this method, a state matrix that divides the arms of the intersection into cells is used. The traffic light durations are determined using fuzzy logic, and the traffic light actions are determined with the help of deep Q-learning. A stability analysis is also carried out for this newly proposed method. Another important traffic problem is route planning, which is particularly important in large cities with complex traffic networks. To address this problem, an agent-based traffic route planning method is also proposed as part of this thesis, with the motivation of vehicles choosing the fastest route. In this method, route planning proceeds through decisions made at intersection points: vehicle agents make a decision when they reach an intersection, making dynamic route planning possible. Another solution for the intersection problem is multi-agent reservation-based intersection control, in which all vehicles (called agents) can pass the intersection without the need for a traffic light, thanks to a traffic intersection agent. A platoon method that works in harmony with reservation-based intersection management is proposed as an improvement in this part of the study. The proposed method aims to reduce the slowdowns that occur when approaching the intersection by properly lining up the approaching vehicles. Simulations show that the proposed platoon method reduces energy consumption and gas emissions while increasing the average speed of the vehicles, especially as traffic density increases. Work environments for all of the studied traffic problems are designed and simulated using the SUMO program.
Simulation of Urban MObility (SUMO) is an open-source simulation package that works on networks imported from maps, supports microscopic-level modeling, allows pedestrian simulation, and provides a rich set of tools that makes it broadly accessible.
-
Artificial intelligence based and digital twin enabled aeronautical AD-HOC network management (Graduate School, 2022-12-20) Bilen, Tuğçe ; Canberk, Berk ; 504172508 ; Computer Engineering

The number of passengers using aircraft has been increasing steadily over the years, and with it their needs have changed significantly. In-flight connectivity (IFC) has become a crucial necessity for passengers with evolving aeronautical technology: passengers want to connect to the Internet without interruption, regardless of their location and the time. For these reasons, aeronautical networks attract the attention of both industry and academia. Currently, satellite connectivity and air-to-ground (A2G) networks dominate existing IFC solutions. However, the high installation/equipment cost and latency of satellites reduce their efficiency, and the terrestrial deployment of A2G stations limits the coverage area, especially for remote flights over the ocean. One novel solution is Aeronautical Ad-hoc Networks (AANETs), which satisfy IFC's huge demand while also addressing the shortcomings of satellite and A2G connectivity. AANETs are based on creating air-to-air (A2A) links between airplanes and transmitting packets over these connections to enable IFC. AANETs dramatically increase the Internet access rates of airplanes by widening the coverage area through these established A2A links. However, mobility and atmospheric effects increase A2A link breakages, leading to frequent aircraft replacement and reduced link quality. Accordingly, mobility and atmospheric effects give AANETs their specific characteristics: the ultra-dynamic link characteristics of high-density airplanes create an unstructured and unstable topology in three-dimensional space.
To handle these specific characteristics, we first form a more stable, organized, and structured AANET topology. Then, we must continuously ensure the sustainability and mapping of this created topology in the face of broken A2A links. Finally, we can route packets over this formed, sustained, and mapped topology. However, the AANET-specific characteristics described above restrict the applicability of conventional topology and routing management algorithms by increasing their complexity. More specifically, these characteristics make AANET management challenging, reducing packet delivery success and increasing transfer delay. At this point, artificial intelligence (AI)-based solutions have been adapted to AANETs to cope with the high management complexity by providing intelligent frameworks and architectures. Although AI-based management approaches are widely used in terrestrial networks, there is a lack of a comprehensive study that develops AI-based solutions for AANETs. An AI-based AANET can take topology formation, sustainability, and routing management decisions in an automated fashion, considering the network's specific characteristics through learning operations. Therefore, AI-based methodologies have an essential role in handling the management complexity of this hard-to-follow AANET environment, supporting intelligent management architectures while overcoming the drawbacks of conventional methodologies. On the other hand, these methodologies can increase the computational complexity of AANETs. At this point, we propose utilizing Digital Twin (DT) technology to handle the computational complexity issues of AI-based methodologies. Based on these considerations, in this thesis we propose an AI-based and DT-enabled management system for AANETs.
This system consists of four main models: AANET Topology Formation Management, AANET Topology Sustainability Management, AANET Topology Mapping Management, and AANET Routing Management. Our first aim is to form a stable, organized, and structured AANET topology. Then, we enable the sustainability of this formed topology; we also continuously map the formed and sustained topology to the airplanes. Finally, the packets of the airplanes are routed over this formed, sustained, and mapped topology. We create these four models with different AI-based methodologies and, in the final step, combine all of them under DT technology. In Topology Formation Management, we propose a three-phased topology formation model for AANETs based on unsupervised learning. The main reason for proposing an unsupervised learning-based algorithm is that, before the topology is formed, the independently located airplanes with unstructured characteristics can be considered unlabeled training data. This management model utilizes the spatio-temporal locations of aircraft to create a more stable, organized, and structured AANET topology in the form of clusters. More specifically, the first phase performs aircraft cluster formation, where we aim to increase AANET stability by creating spatially correlated clusters. The second phase consists of A2A link determination to reduce packet transfer delay. Finally, cluster head selection increases the packet delivery ratio. In Topology Sustainability Management, we propose a learning vector quantization (LVQ)-based topology sustainability model for AANETs based on supervised learning. The main reason for proposing a supervised learning-based algorithm is that we already have an AANET topology before an A2A link breakage occurs, and we can use it as training data.
Accordingly, we can consider the clusters in the AANET topology as a pattern; then, instead of continuously re-creating the topology, we can find the best matching cluster for an aircraft observing A2A link breakages through pattern classification. This management model works in three phases: winning cluster selection, intra-cluster link determination, and attribute update, increasing the packet delivery ratio with reduced end-to-end latency. In Topology Mapping Management, we propose a gated recurrent unit (GRU)-based topology mapping model for AANETs. In topology formation, we create the AANET topology in the form of clusters by collecting airplanes with similar features into the same set; in topology sustainability, we sustain the formed clustered topology with supervised learning. However, these formed and sustained topologies must be continuously mapped to the clustered airplanes to notify them of the current situation, a procedure that can be considered part of sustainability management. Here, we continuously notify the airplanes of topological changes with a GRU at each timestamp. This management model works in two main parts, the forget and update gates. In Routing Management, we propose a Q-learning (QLR)-based routing management model for AANETs, mapping the AANET environment to reinforcement learning. The QLR-based model lets the airplanes find their routing path through exploration and exploitation, so the routing algorithm can adapt to the dynamic conditions of AANETs. In this management model, we adapt the Bellman equation to the AANET environment by proposing different methodologies for its related QLR components. Accordingly, this model consists of two main parts: current state and maximum state-action determination, and dynamic reward determination.
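The exploration/exploitation idea can be illustrated with a tabular sketch (the tiny three-node network, rewards, and hyperparameters are illustrative assumptions, not the thesis's QLR model): at each node the agent picks the next hop epsilon-greedily and updates its Q-value with the Bellman equation.

```python
import random

random.seed(1)

# States: nodes 0..2, terminal node 3; actions: candidate next hops
actions = {0: [1, 2], 1: [3], 2: [3]}
# Rewards are negative link traversal costs: the route via node 1 is cheaper
reward = {(0, 1): -2.0, (0, 2): -5.0, (1, 3): -1.0, (2, 3): -1.0}

Q = {(s, a): 0.0 for s in actions for a in actions[s]}
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration rate

for episode in range(200):
    s = 0
    while s != 3:
        acts = actions[s]
        # Epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(acts)
        else:
            a = max(acts, key=lambda act: Q[(s, act)])
        r, s_next = reward[(s, a)], a
        best_next = max((Q[(s_next, an)] for an in actions.get(s_next, [])),
                        default=0.0)
        # Bellman update
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

best_route = max(actions[0], key=lambda act: Q[(0, act)])  # preferred next hop
```

After enough episodes the learned Q-values rank the cheaper route higher, and because the table is updated online, changed link costs would be absorbed the same way, which is the adaptivity argument made above.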
Therefore, we execute the topology formation, sustainability, and routing management modules through unsupervised, supervised, and reinforcement learning-based algorithms, respectively, and take advantage of neural networks in topology mapping management. After managing the topology and routing of AANETs with AI-based models, we support them with DT technology in the DT-enabled AANET management. The DT can virtually replicate the physical AANET components through closed-loop feedback in real time to solve the computational challenges of AI-based methodologies. Therefore, we introduce the utilization of DT technology for AANET orchestration and propose a DT-enabled AANET (DT-AANET) management model. This model consists of the Physical AANET Twin and a Controller that includes the Digital AANET Twin with an Operational Module. The Digital AANET Twin virtually represents the physical environment, and the Operational Module executes the implemented AI-based models. In summary, this thesis proposes AI-based methodologies for AANET topology formation, topology sustainability, topology mapping, and routing, and then supports these methodologies with DT technology. The proposed complete management model increases the packet delivery success of AANETs with reduced end-to-end latency.
-
ÖgeAssessing the impact of promotions on sales: A quantitative approach for a large-scale retailer's sales performance(Graduate School, 2022-07-05) Zeybek, Ömer ; Ülengin, Kemal Burç ; Kaya, Tolga ; 403162009 ; ManagementThe American Marketing Association defines marketing as "a system and operational process which includes various interconnected subsystems, harmonising with co-contributor elements to reach maximum efficiency." This thesis seeks to understand and explain the role of the promotional mix, one of the four Ps of the marketing mix. Although sales promotions were regarded merely as a stock-clearing activity in the sales management literature until the late 1960s, with increasing competition in the retail sector they have become a vital instrument for stimulating sales in both the long and short term. Within three decades, especially after the introduction of television and online media (suitable mediums for transmitting instant offers), sales promotion activities became more sophisticated, and their scope outgrew a single book chapter to fill stand-alone books. In the meantime, technological advances in data storage and computation enabled scholars to create complex, multi-channel promotion analyses, and the literature recognised promotions as an essential element of the marketing system. As a result, the sales promotion domain has become one of the significant divisions of the marketing literature. In a modern sense, as a critical element of the marketing mix, promotion has several main objectives: providing information to consumers and others, increasing demand, differentiating a product or category, stabilising sales, and accentuating a product's value. Accordingly, most of the tools retailers use from the sales promotion toolbox can be classified under the price promotions domain. 
Price promotions' concrete impulse-and-response mechanism eases the effort of tracking and understanding the effects created in sales trends. However, to accomplish a price promotion task, a company needs a clear understanding of which kinds of promotions work and why. Although current business intelligence applications can reflect a company's flow of cash and stocks, promotions have complex relationships with various key performance indicators (KPIs) of a business system. This makes distinguishing the net value added by a promotion a complicated job. This thesis aims to present a reliable quantitative model to decompose price promotions' effect on sales trends. Most studies on price promotions have focused only on single response mechanisms such as the category effect, cannibalisation, cross-category effects or brand switching. This study aims to capture all of these response mechanisms in a single, inclusive system of equations. Formulating a complete model system for the pilot category would allow practitioners to replicate results across the whole product inventory and greatly ease the decision-making process. Regarding the outcomes of such studies, managerial relevance can be classified by the timing and nature of the impact created. In modern retailers' daily operations, promotions are critical to sustaining an increase in demand in the long term. On the other hand, retailers expect promotions to act as a balancer, lowering excess stocks in the short term. These insights inform current actions and future strategies for sustaining increased demand in the long term. Therefore, researchers in this field, especially in the business intelligence domain, should be aware that they need to provide actionable, essential and meaningful insights for practitioners. 
To achieve these kinds of outcomes, academic scholars should suggest a decision-making process in which retailers can observe the response of sales trends to their strategic moves, which are mainly classified as exogenous effects. Developing a reliable basis for assessing promotions' impact on sales is strongly tied to formulating a correctly specified model. Therefore, market response models are beneficial, especially for researchers working on a company's secondary data compiled from the enterprise database. A response model exhibits how one variable depends on other variables. For example, the dependent variable could be sales or another KPI of interest to marketing practitioners, while the independent variables are assumed to affect the dependent variable. Together these variables constitute a market response model; a response model defined on time series or cross-sectional data is called an empirical response model. Accordingly, if a researcher prefers to study the direct and secondary impacts of price promotions at the retail level, market response models extended with exogenous promotion policy variables are a fruitful alternative. Moreover, to uncover the true nature of promotions (creating academic knowledge) and to generate insightful managerial relevance (prescriptive analytics), quantitative methods, including econometric response models, are arguably the most efficient way. In this dissertation, I aimed to provide this kind of dual knowledge discovery for academic research and the retail industry. The research is based on three hypotheses, which led me to construct two category/multi-brand promotional effectiveness models. While the first section of the empirical study formulates the variables used in the next part of the study, the inclusion of volatility modelling in promotion effectiveness analysis constitutes an alternative approach. 
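A market response model of the kind described can be sketched as a simple log-linear regression of sales on promotion policy variables. The variable names and synthetic data below are illustrative assumptions for the sketch, not the thesis's actual specification.

```python
import numpy as np

# Illustrative market response model: log sales regressed on an own-promotion
# dummy, a competitor-promotion dummy, and a trend term, via ordinary least squares.
rng = np.random.default_rng(0)
n = 120                                   # e.g. weekly observations
own_promo = rng.integers(0, 2, n)         # 1 when the focal brand is promoted
comp_promo = rng.integers(0, 2, n)        # 1 when a competitor is promoted
trend = np.arange(n) / n

# Synthetic ground truth: own promotion lifts sales, competitor promotion hurts.
log_sales = (2.0 + 0.5 * own_promo - 0.2 * comp_promo + 0.1 * trend
             + rng.normal(0, 0.05, n))

X = np.column_stack([np.ones(n), own_promo, comp_promo, trend])
beta, *_ = np.linalg.lstsq(X, log_sales, rcond=None)
print([round(b, 2) for b in beta])  # estimates near [2.0, 0.5, -0.2, 0.1]
```

In a full system of equations, one such regression per category/brand pair, with cross-promotion terms, would capture cannibalisation and cross-category effects jointly.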
The second part of my empirical work decomposes and analyses the net sales effects created by price promotions. The findings show that promotional strategy significantly affects the sales quantity of category/brand pairs in both the long and short term. As expected, I found that most of the sales bump created through promotions stems from the product's own promotional activity. On the other hand, the results show that promotions applied to competitor brands and categories also cause significant changes in sales trends. This research provides a timely and necessary step toward a pioneering, central promotion effectiveness assessment system for practitioners. By utilising the models produced in this study, retail executives can organise promotional strategies from a wide angle.
-
ÖgeAn examination of mergers and acquisitions from a change management perspective: A bank case(Graduate School, 2022-06-29) Pekcan, Burak ; Akdoğan Küskü, Fatma ; 507112001 ; Management EngineeringOrganizations are systems made up of physical and human elements, operating within an environment and founded fundamentally to generate profit. For organizations, whose underlying aim is survival, the environment they operate in both supplies the resources they need and constitutes a source of uncertainty. While organizations strive to grow stronger and gain competitive advantage in their environment, they also try to reduce the uncertainties and threats that environment brings. Mergers and acquisitions are an important strategic choice for organizations to access critical resources, gain competitive advantage in the external environment, and sustain their existence. In an external environment where resources are distributed heterogeneously, competition is intensifying, and large-scale organizations increasingly dominate the market, merger and acquisition moves provide organizations with competitive advantage. In mergers and acquisitions, both the acquiring and the acquired organizations undergo various changes, and these changes directly affect organizational structures. This study examines a firm operating in the banking sector that was acquired by a foreign-based group. Grounded in organization theory and the mergers and acquisitions literature, the study uses the case study method to examine the work carried out after the acquisition to achieve organizational alignment along structural, operational, and cultural dimensions. 
In this study, which uses an explanatory research design, data were collected from primary sources (interviews with, and observations of, people who had a say in the management of the organization and experienced the post-acquisition period) and secondary sources (reviews of printed documents). The research findings were evaluated within the framework of Resource Dependence Theory and the mergers and acquisitions literature. Mergers and acquisitions bring with them the necessity of alignment. They lead to various changes in organizations' structures, operations, and culture (human resources). However, the intensity of this change is not the same across all units of the organization, nor does it necessarily occur within the same time frame. The extent of change in the structural, operational, and cultural elements of the organization is inversely proportional to how structured and strong the organization is in those elements, and directly proportional to the acquiring organization's desire to change them. For the post-merger integration process to succeed, it is important that tasks and cultures be harmonized to a similar degree. In the case studied, task integration and cultural integration methods were used together to harmonize the tasks and cultures of the two organizations. As a result, the organizational structures and modes of operation converged, and human resource management practices also became similar. The ways in which alignment mechanisms are applied to achieve organizational change and alignment can also vary. The alignment mechanisms applied to achieve structural alignment were mostly carried out using methods also described in the literature. 
However, the ways in which the human resources and task integration mechanisms applied to achieve organizational alignment were implemented in this case differed somewhat from how they are described in the literature.
-
ÖgeClaim management and dispute resolution under FIDIC contracts 2017 edition(Institute of Science and Technology, 2020-07) Çoban, Gökhan ; Artan, Deniz ; 637212 ; Construction Management ProgrammeRecent developments in the construction industry, such as growing project scales, innovative designs, challenging technological advances, global competition, diminishing profit margins and multi-cultural working environments, require advanced project management to ensure the success of projects. Contract administration, claim management and dispute resolution are crucial project management activities and require detailed knowledge of, and expertise in, the procedures adopted in standard forms of contract. FIDIC (International Federation of Consulting Engineers - Fédération Internationale des Ingénieurs Conseils) contracts are among the most widely used contracts in international construction projects. The FIDIC 1999 editions have been adopted in many international projects and in local projects financed internationally, and sector professionals are quite knowledgeable about the different types of FIDIC 1999 standard forms. Although standard forms of contract have been used in the industry for a long time, they are revised regularly for necessary updates. In 2017, a new suite of FIDIC contracts was published, bringing significant changes; claim management and dispute resolution are two of the topics where visible changes took place. FIDIC standard forms of contract are also well known for their detailed claim management and dispute resolution clauses. Therefore, sector professionals are now challenged to understand and adapt to these changes in contract administration procedures under the new suite of FIDIC contracts. However, the resources to support this adaptation are still very limited. 
The aim of this thesis is to identify the changes in claim management and dispute resolution processes between the 1999 and 2017 editions of FIDIC contracts and to provide visual guidance for sector professionals to help them understand the new procedures required. To achieve that goal, background information about FIDIC contracts, claim management and dispute resolution is provided, and a literature review was performed on the limited resources tackling the FIDIC 2017 contracts, including articles, legal notes, and sector reports.
-
ÖgeComparative evaluation of nutrient, land, water and energy requirements of hydroponic vs. conventional agricultural methods: Case study for lettuce, basil, and arugula(Graduate School, 2023-02-07) Aktuğ, İlayda ; Sözen, Seval ; Kutman, Ümit Barış ; 501181809 ; Environmental BiotechnologyThe rapidly growing world population demands ever more environmental resources, mainly water and food, pushing them toward depletion and rendering traditional solutions inadequate. Available water resources are decreasing day by day, falling below the roughly 3% share previously reported in research. The most powerful driver is the growing carbon footprint created by industrialization: global warming and climate change leave water and food resources insufficient for the existing population. Annual per capita water availability in Turkey is around 1,500 m3, and this amount is projected to decrease to 1,100 m3 by 2030. Accordingly, work has been initiated on watershed protection action plans, including long-term conservation programs and measures to protect water resources for all types of use, prevent pollution, and improve the quality of contaminated water resources, as well as on projects to use community water resources effectively by reducing losses and leaks in the water supply system. According to irrigation data from the General Directorate of State Hydraulic Works and other water-use data from TurkStat, as of 2016, 71.3% of water in Turkey was used for agricultural irrigation, 18.4% in industry, and 10.3% for drinking and domestic use. Given the share of agricultural water use, these data indicate that agricultural irrigation should be more tightly controlled. In agricultural irrigation, surface (70%), sprinkler (17%) and drip (13%) irrigation methods are used. 
Hydroponic farming is an alternative to conventional agricultural food production and to other modern greenhouse production: it uses water up to 95% more efficiently, reduces production time and the carbon footprint in line with climate commitments, and, since facilities can be installed within cities, eliminates transportation problems. Dissolved nitrogen (N) and phosphorus (P) are the two main elements that trigger eutrophication; above their limit concentrations, they cause water pollution and threaten aquatic life. As a result of uncontrolled fertilization in traditional agriculture, these pollutants are released from the soil through irrigation water and then into groundwater, threatening available water resources and the aquatic ecosystem. In the hydroponic vertical farming method, on the other hand, the amount of water used is reduced and only as much fertilizer as the plant needs is applied, so there is no uncontrolled release into natural water resources. Comparative studies of plants grown in a controlled environment have shown that the plants retain more nitrogen and phosphorus. Plants grown hydroponically are healthy and nutritious for human consumption, while at the same time removing larger amounts of nitrogen and phosphate from the water. Hydroponic farming systems are agricultural production methods that use only water, without soil; plants obtain the minerals they need from the water in a usable form. The influence of technology on agriculture has progressed from mechanization, to the development of sensor technologies, and finally to automated indoor soilless vertical farming systems, where lighting and air-conditioning technologies replicate nature. 
Vertical farming products, which include almost all leafy greens and some fruits, are nutritious in content and can be grown in a shorter time. Because plants grown in these systems need far fewer nutrient inputs and can be cultivated indoors with automation systems, such production aligns with the growing food requirement and the principle of food safety and sustainability. Since indoor cultivation is independent of external influences, the chemicals used against pests are not required in these systems. With the development of lighting technologies, the sunlight that drives plants' photosynthesis mechanism can also be imitated. The light spectra plants require vary with the plant species; for the most efficient lighting, plants can be tested continuously, and the highest yield can be obtained at any time of the year. With advanced technology, automation systems monitor air conditioning, lighting, dosing, circulation and disinfection processes via sensors. In addition, the taste and appearance of hydroponically grown fruits and vegetables are often of higher quality, since products grown in traditional agriculture are generally exposed both to chemicals and to stress factors such as wind, irregular nutrient distribution and rain. Moreover, the literature shows that the nutrient and oil content of plants can be altered, without affecting their naturalness, by changing the ambient conditions. 
Based on studies in the literature, this thesis, prepared under the title of Environmental Biotechnology, examines a hydroponic system that consumes 95% less water than traditional agriculture under the principle of sustainability; it investigates nitrogen, phosphorus and energy consumption, as well as yield, given that plants grow faster and with higher yield than under the climate and arable-land constraints encountered in traditional agriculture. The aim of the thesis is to evaluate wastewater reuse, nitrogen and phosphorus consumption, energy consumption and area usage in the hydroponic system at the Gebze Technical University (GTU) Institute of Biotechnology, in collaboration with Plant Factory Inc. In the thesis, trials were carried out on the prototype installed by Plant Factory Bitki ve Gıda Sistemleri A.Ş. at the GTU Institute of Biotechnology, in which automation keeps plants growing under suitable conditions for healthy, higher-yield crops. The literature generally contains hydroponic studies on lettuce, basil and arugula. The contribution of this study is a more comprehensive examination: five parameters, in five different experiments, in four different experimental areas, with three different leafy greens, in a single study. Energy, area, nitrogen, phosphorus and water consumption results were obtained from three soil experiments and two hydroponic experiments (nutrient film technique and deep water culture) in open field (OF), greenhouse (GH), growth chamber (GC) and container (C) experimental areas, carried out simultaneously with lettuce, basil and arugula. According to the data obtained under these growing conditions, the nitrogen and phosphorus consumption rates in the hydroponic Nutrient Film Technique (NFT) and Deep Water Culture (DWC) experiments are higher than in the soil-based agricultural studies. 
As plants grow, the growing medium merely acts as a carrier for nutrients. In traditional agriculture the plants' medium is soil, whereas in hydroponic systems it is water, so plants take up nutrients and transport them to their tissues faster. Because homogeneity is achieved faster in water, the plants also grow more homogeneously than in soil agriculture. Hydroponic systems are thus a supportive alternative to traditional agriculture for the efficient use of water, in addition to efficient nitrogen and phosphorus consumption. Water consumption was lowest in the NFT experiment, followed by DWC. High area-use efficiency can likewise be achieved with the NFT hydroponic system, followed by DWC. Another reason for the different responses of the plants to different environmental conditions is the positive effect of lighting technology on plant growth; the climatic conditions are as valuable to the plant as the lighting technology. Temperature and humidity were adjusted so that the plant does not experience undue stress, balancing the root and upper parts of the plant under the effects of transpiration and photosynthesis. Energy consumption, another parameter obtained from the data and calculated per gram of dry leaf weight, ranks from low to high as OF, GH, GC among the soil-based experimental areas, and NFT, DWC among the hydroponic systems. NFT consumes less energy than the DWC hydroponic system but more than greenhouse production. Under today's conditions, energy is supplied from fossil sources; therefore, although carbon emissions from transportation are much lower than in traditional agriculture thanks to establishment in city centers, the energy used during production, especially for lighting technologies, is quite high. 
Renewable technologies should be used to prevent energy-related carbon emissions; solar, geothermal, wave, wind, biomass, hydroelectric and hydrogen energy are among the renewable sources that can be used. Considering the advantages and disadvantages, indoor hydroponic systems for green leafy plant cultivation are considered an alternative method to support soil-based agriculture in terms of water, area and nitrogen-phosphorus use efficiency, as well as yield per square meter.
-
ÖgeComparison of stock selection methods: An empirical research on the Borsa İstanbul(Graduate School, 2023-04-12) Özdemir, Ali Sezin ; Tokmakçıoğlu, Kaya ; 403142012 ; Business AdministrationVarious investment instruments, or index-linked financial instruments in various markets created with reference to stock indices, produce negative returns, i.e., losses, for investors in periods when the index is declining. In some cases, while the indices track the country's inflation, the funds or investment instruments linked to them may fall short of generating above-market or above-inflation returns. Drawing on the literature, investment companies have developed stock selection models for the portfolios underlying the funds and investment instruments they create, to protect themselves from negative index movements. Yet portfolio analysis methods developed to obtain positive returns from financial instruments can produce negative returns even when the market is stable or stagnant, owing to adverse economic conditions and increased risks. In addition, investments in financial instruments that reference only indices, or derivatives of various indices, may yield negative returns as the index is affected by economic developments in the relevant country. Stock selection is important not only for large investors but also for individual investors. Moreover, some funds (such as pension funds) tied to the indices of different markets depend on the movements of the stocks in those markets. For these reasons, stock selection has been one of the most important issues in finance for the last hundred years. In the literature, a wide range of stock selection models with diverse theoretical underpinnings have been developed, particularly over the past seventy years, and numerous empirical and theoretical studies have compared the performance of these models. 
In this thesis, three models that have not yet been empirically compared with each other in the literature were identified, and an empirical study was carried out on the stocks of Borsa Istanbul indices. The models compared are: (1) Markowitz model stock selection (determining the percentage distribution of stocks in the portfolio), (2) stock selection with the second-order stochastic dominance method, and (3) stock selection with an artificial neural network. All three can be considered quantitative analysis, while the use of financial ratios within the ANN model represents a fundamental approach to stock selection. The first section of this thesis reviews the relevant literature on stock selection, with particular emphasis on the rationale for selecting the Turkish stock market, specifically Borsa Istanbul. The subsequent section focuses on the literature pertaining to the relevant models. The third section expounds the theoretical foundations underpinning the Markowitz model, second-order stochastic dominance, and artificial neural networks, all of which are utilized in this research. The fourth section provides a detailed account of the 18-year dataset, alongside an explication of the technical structure of the stock selection models; specifically, the artificial neural network model was constructed in the MATLAB programming language, while Microsoft Excel was used to conduct the Markowitz and stochastic dominance tests. The fifth section presents a comparative analysis of the aforementioned models, with return values tabulated and compared across models. 
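As an illustration of the Markowitz idea in model (1), a minimum-variance portfolio has a closed-form solution. The three-stock covariance matrix below is made up for the sketch (the thesis itself runs its Markowitz tests in Excel on Borsa Istanbul data).

```python
import numpy as np

# Closed-form minimum-variance weights: w = S^-1 1 / (1' S^-1 1),
# for three hypothetical stocks with covariance matrix S.
S = np.array([[0.040, 0.006, 0.004],
              [0.006, 0.090, 0.010],
              [0.004, 0.010, 0.060]])   # illustrative return covariances

ones = np.ones(3)
w = np.linalg.solve(S, ones)
w /= w @ ones                 # normalize so the weights sum to 1
port_var = w @ S @ w          # variance of the resulting portfolio

print(np.round(w, 3), round(float(port_var), 4))
```

By construction, `port_var` cannot exceed the variance of the least risky single stock (0.040 here), which is what diversification buys.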
Based on the analysis, the stock selection model utilizing artificial neural networks demonstrates a relatively higher return potential than the other models. Furthermore, all three models were found capable of generating portfolios with returns between 8 and 20 times higher than the BIST-100 index. This thesis aims to achieve several objectives: (1) to conduct a comparative analysis of the return performance of three stock selection models whose relative performance has not yet been evaluated in the literature, (2) to undertake a quantitative analysis of the selected models, (3) to compare the alpha returns (i.e., portfolio return minus index return) within a market such as Turkey, where the stock market is consistently influenced by political and economic events, and (4) to contribute to the literature by introducing models that demonstrate the potential to generate portfolios with returns surpassing the market or index return.
-
ÖgeHybrid recommendation system in cross-market e-commerce(Graduate School, 2023-08-04) Köse, Emre ; Yaslan, Yusuf ; 504181559 ; Computer EngineeringRecommendation systems aim to recommend products suited to users' needs on film, music, e-commerce and various other platforms, using a variety of algorithms. These algorithms generally make recommendations by learning user-item representations. Early work proceeded via matrix factorization; later, different memory- or model-based approaches were developed, and continue to be developed, for both collaborative and content-based recommendation. The cross-market recommendation problem arose in social media, e-commerce applications and other online platforms; it can be described as a new line of research that uses data from one or more source markets to make recommendations to users in a data-scarce target market. Some points require care when learning from the data. If models learned and optimized on source-market data are applied without considering the target market's behavior, problematic results can emerge. Consider, for example, a country where the clothing category is used much more intensively than other categories. If that country's average temperature is much higher than the target market's, recommending a t-shirt to a customer who bought standard trousers may make sense in the source market but be irrelevant in the target market. Learning from data must therefore account for the distributions and biases in both markets. Although cross-market recommendation is a relatively new topic that has emerged in recent years, the methods mentioned above can be applied to it in different forms. In the literature, the FOREC algorithm stands out as an important study in this area, both for its proposed solution and for the open-source dataset it provides. 
Building on the concepts of market adaptation and meta-learning, the multi-network algorithm developed in the 2021 Cross-Market Product Recommendation study also comes with the XMarket dataset, comprising user-item pairs and ratings from 18 local markets (countries) across 16 categories. The algorithm first performs market-agnostic training, using source- and target-market data together, with GMF, MLP and NMF models; in this step it additionally applies few-shot learning via the MAML framework. In the second, market-specific stage, extra MLP layers are trained only on target-market data, completing the training of the FOREC system. Although neural networks with millions of parameters, fed with user-item pairs, can produce representations whose similarities we can understand and compare, at the starting point each data sample (e.g., each user or item) is not represented in a structure reflecting its proximity to the others in a physical sense. At this point, representing the data as a graph, given that users and items interact, brings a different architecture and learning method into the picture. Graph convolutional networks, using a simplified form of neighbor aggregation, can exploit neighbor-node relationships at different depths, which deep neural networks or few-shot learning cannot capture architecturally, and on many market datasets they achieve strong results, outperforming other approaches on their own. In this study, the recommendation system developed for cross markets uses such a graph structure. A Light Graph Convolutional Network (LGCN) is trained in market-agnostic and market-specific steps, as in the FOREC study. By transferring representations between these two stages, our system has a simpler training flow. 
In the first training step, the graph built from the pairs in the source- and target-market data is trained with the data of these two markets. The user and item representations saved after this stage are used in the second step as the initialization of half of the new representations when the new graph is constructed; the other half of each representation is randomly initialized from a given distribution, so that it can focus on market-specific learning. Before the test phase, our study uses the trained graph network to examine various metrics that may correlate with the validation data, in order to explore the relationships among the different market datasets and potential points for improvement. These metrics are listed below.
- The user's average rating of the items in the training data
- The number of first-degree items the user interacts with in the target-market training set
- The number of second-degree pairs the user has in the source and target training sets
- Degree Centrality
- Closeness Centrality
- Node Redundancy Coefficient
- Clustering Coefficient
As can be seen, these values include both basic statistics extractable from the raw data and metrics extractable after constructing the bipartite graph. Our takeaway from the results at this stage is that users' individual nDCG scores correlate more strongly with the Node Redundancy Coefficient and the Clustering Coefficient obtained from the bipartite graph than with the other metrics. The detailed part of our study includes ideas on how these correlation values could be used in future work. The experimental results cover seven different models. Five of them are the results we obtained by similarly implementing the models reported in FOREC, which we regard as our reference study. 
Of the other two models, one is the result of the market-agnostic first step of the system we developed for this problem, and the other is the final hybrid LGCN model obtained after the two-stage training. These results come from experiments that train and evaluate markets in pairs. That is, the FOREC study reports results over seven target markets, using, for each target market, each of the remaining six markets individually as the source market and training accordingly. Following the same training setup as FOREC, our reference point, we selected four of these target markets: the Germany, Japan, Mexico and United Kingdom market data. In addition, the United States market data took part in the experiments only as source data. Our two-stage approach outperformed all of FOREC's results by between 5% and 8% across the different target markets. Moreover, the market-specific training applied after the first step was found to contribute a 1% to 2% improvement to the results. In conclusion, this work proposes a model learned with a two-stage graph neural network for cross markets and compares its performance with the FOREC algorithm, which has been observed to achieve strong results in this field. Using the nDCG@10 evaluation metric, the proposed model outperforms the FOREC algorithm across the different target markets.
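The nDCG@10 metric used for the comparison above can be computed per user as follows. This is a standard binary-relevance formulation, a sketch rather than the exact evaluation code of the thesis; the item identifiers are hypothetical.

```python
import math

def ndcg_at_k(ranked_items, relevant, k=10):
    """nDCG@k for one user: DCG of the top-k ranking divided by the DCG of
    an ideal ranking, with binary relevance (item is relevant or not)."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(ranked_items[:k]) if item in relevant)
    ideal_hits = min(len(relevant), k)
    idcg = sum(1.0 / math.log2(i + 2) for i in range(ideal_hits))
    return dcg / idcg if idcg > 0 else 0.0

# hypothetical example: the single relevant item is ranked first
score = ndcg_at_k(["a", "b", "c"], {"a"}, k=10)  # 1.0
```

Averaging this score over all test users of a target market gives the per-market figure that the two systems are compared on.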
-
ÖgeSafety-based intelligent ship inspection analytics in maritime transportation(Graduate School, 2023-03-06) Demirci, Seyid Mahmud Esad ; Çiçek, Kadir ; 512172011 ; Maritime Transportation EngineeringThanks to its low cost and high capacity, maritime transportation keeps increasing its share of global commercial cargo flows and continues to form the backbone of the global supply chain. However, growing ship volumes and numbers of ships have also increased the number of marine accidents. The loss of life and property and the environmental pollution caused by marine accidents have required strict legal regulations in maritime transportation. Through ships operated to defined standards within this regulatory framework, the aim is to ensure sustainability in maritime transportation and safety at sea. Toward this goal, flag states and the classification societies they authorize are the parties primarily responsible for the implementation of standards on board, while compliance with these standards and the establishment of a ship-specific safety management system are the responsibility of ship management companies. Within this framework, flag states, classification societies and ship management companies form the first safety barrier for ensuring standards on ships. However, marine accidents caused by substandard ships necessitated a second safety barrier, and port state control regimes were established with the encouragement of the International Maritime Organization. In addition to the ship inspections carried out by the stakeholders in these safety barriers, further inspection mechanisms have been developed with the participation and encouragement of other key industry stakeholders such as cargo owners, shipowners and insurance organizations. All of these inspection mechanisms conduct safety-based ship inspections in order to ensure that ships meet the required standards, to minimize accidents in maritime transportation, and to prevent environmental pollution.
However, there are a number of challenges that reduce the effectiveness and efficiency of ship inspections and need to be overcome. One of the most important challenges to inspection efficiency is the divergence in inspection practice caused by the large number of independent inspection mechanisms. To allow a harmonized inspection process and increase inspection efficiency in port state control regimes, regarded as the last safety barrier, a ship risk profile calculation method was developed, first put into practice in the Paris MoU regime. The ship risk profile, calculated on the basis of ship characteristics, determines the scope and period of ship inspections. Following the Paris MoU regime, similar ship risk profile calculation methods were adopted by the other inspection regimes. However, classifying each ship characteristic with a detention-index-oriented approach in the risk profile calculation, together with the fact that ship detention depends on the professional judgment of the inspector, causes divergences in the inspection process and impedes a harmonized inspection structure. Deficiencies detected on ships, on the other hand, are independent of inspector judgment: they are cases of non-compliance with, or deviation from, the standards a ship must meet. In this context, a common treatment of ship characteristics and a minimization of the inconsistencies arising in the inspection process require an approach that is independent of the detention index. The most important obstacle to inspection effectiveness, in turn, is that inspections carried out under severe time constraints and pressure, with few inspectors, must cover many areas and hundreds of items.
Port state control regimes recommend, as guidance for inspectors, the areas and deficiency items to be checked for specific ship and inspection types. However, considering possible errors in the ship risk profile calculation and deficiencies arising from ship characteristics other than ship type, a systematic approach is needed in the inspection process to predict the areas and deficiencies to be checked. This study, driven by the needs of industry stakeholders, proposes an intelligent ship inspection analytics model to overcome the challenges to ship inspection efficiency and effectiveness. In the design of the proposed model, the five basic steps of the knowledge discovery in databases process are adapted to the well-structured inspection process applied in port state control regimes, regarded as the last safety barrier. However, since the inspection scope in the port state control inspection process is limited to the ship risk profile, the proposed model shapes the inspection directly around ship characteristics and aims to predict the areas to focus on and the deficiency items to check. Fuzzy clustering and apriori algorithms are employed to solve the challenges that must be overcome in order to increase efficiency and effectiveness in ship inspections. With the fuzzy clustering algorithm, the critical ship characteristics evaluated in the inspection process (ship age, ship type, gross tonnage, flag state, classification society and management company performance) are clustered with a deficiency-index approach, unlike the detention-index approach, yielding a new classification. In this way, unbiased clusters, independent of inspectors' professional judgment, are obtained for each ship characteristic.
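The clustering step above can be illustrated with a minimal one-dimensional fuzzy c-means, here applied to hypothetical ship-age values; this is a sketch of the general algorithm, not the thesis's actual deficiency-index implementation or data.

```python
import random

def fuzzy_c_means(data, c=2, m=2.0, iters=100, seed=0):
    """1-D fuzzy c-means: returns cluster centers and the membership
    matrix u[i][j] of point i in cluster j (each row sums to 1)."""
    rnd = random.Random(seed)
    # random row-stochastic initial memberships
    u = []
    for _ in data:
        row = [rnd.random() for _ in range(c)]
        s = sum(row)
        u.append([v / s for v in row])
    centers = [0.0] * c
    for _ in range(iters):
        # centers: membership-weighted means of the data
        for j in range(c):
            num = sum((u[i][j] ** m) * x for i, x in enumerate(data))
            den = sum(u[i][j] ** m for i in range(len(data)))
            centers[j] = num / den
        # memberships: inverse-distance update
        for i, x in enumerate(data):
            for j in range(c):
                dj = abs(x - centers[j]) or 1e-12
                u[i][j] = 1.0 / sum(
                    (dj / (abs(x - ck) or 1e-12)) ** (2 / (m - 1))
                    for ck in centers)
    return centers, u

# hypothetical ship ages: a young group and an old group
ages = [2, 3, 4, 20, 22, 25]
centers, u = fuzzy_c_means(ages, c=2)
```

Because memberships are graded rather than crisp, a ship near a cluster boundary (e.g. a mid-aged vessel) contributes partially to both age classes instead of being forced into one, which is the property the deficiency-index clustering relies on.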
By using the resulting clusters in the model, the proposed model is expected to be suitable for use by all industry stakeholders, to lay the groundwork for a harmonized inspection system, and to increase inspection efficiency thanks to the deficiency-oriented clustering approach. To increase effectiveness in ship inspections, association rules between inspection areas and deficiency items are derived with the apriori algorithm and patterns are discovered. Within the discovered patterns, a deductive approach with two linked cycles first predicts the inspection areas to focus on and then the deficiency items to check within those areas. Because the proposed model can be driven by the ship's previous inspection records or by the deficiencies detected during the inspection itself, a dynamic approach to ship inspection is achieved. As a result, the model, built on a knowledge discovery in databases process using ship inspection data obtained from the Paris MoU database, provides inspectors with a decision support system for the areas to focus on in a ship inspection and the deficiency items to check in those areas. Because every ship characteristic used in the decision support system is clustered on an unbiased scale with a deficiency-oriented approach, the model design focuses on detecting deficiencies in the inspection rather than on ship risk. The result is a model that all industry stakeholders can benefit from in ship inspections. However, the use of data obtained only from the Paris MoU database constitutes the limitation of the study. Future work could therefore strengthen the model by making inspection data obtained from other inspection mechanisms usable in, and integrating them into, the proposed model.
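The pattern-mining step can be sketched with a small level-wise apriori plus rule extraction. The inspection-area names and transactions below are hypothetical toy data, not Paris MoU records; this is a generic illustration of the algorithm, not the thesis's implementation.

```python
from itertools import combinations

def apriori(transactions, min_support=0.5):
    """Frequent itemsets via level-wise apriori; support is the fraction
    of transactions containing the itemset."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    freq, level = {}, [frozenset([i]) for i in items]
    while level:
        counts = {s: sum(1 for t in transactions if s <= t) for s in level}
        kept = {s: c / n for s, c in counts.items() if c / n >= min_support}
        freq.update(kept)
        # next candidates: unions of surviving sets that are one item larger
        level = {a | b for a in kept for b in kept if len(a | b) == len(a) + 1}
    return freq

def rules(freq, min_conf=0.7):
    """Association rules A -> B with confidence support(A|B) / support(A)."""
    out = []
    for s in freq:
        if len(s) < 2:
            continue
        for r in range(1, len(s)):
            for a in map(frozenset, combinations(s, r)):
                conf = freq[s] / freq[a]
                if conf >= min_conf:
                    out.append((set(a), set(s - a), conf))
    return out

# hypothetical inspections: sets of areas where deficiencies were found
insp = [{"fire safety", "life saving"}, {"fire safety", "life saving"},
        {"fire safety"}, {"navigation"}]
freq = apriori(insp, min_support=0.5)
found = rules(freq, min_conf=0.7)
```

On this toy data the rule "life saving deficiencies imply fire safety deficiencies" survives with confidence 1.0, while the reverse direction falls below the confidence threshold, which is exactly the kind of directional pattern the two prediction cycles consume.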
-
ÖgeForecasting maritime trade indices using time series models(Graduate School, 2022-08-25) Koyuncu, Kaan ; Tavacıoğlu, Leyla ; 512172001 ; Maritime Transportation EngineeringToday, companies' ability to shape their future, to achieve meaningful growth momentum by increasing profitability, to maximize profit by keeping the income-expense balance at an optimal level, to survive adverse conditions and to make the right decisions at the right time depends on their ability to read, understand, correctly interpret and respond to the fast-moving, market-shaping components of their sectors. The "data-driven decision making" approach has become essential for organizations and individuals to exist in today's world. Forecasting certain variables from available data means anticipating likely future situations and scenarios using appropriate data and analysis techniques and taking precautions accordingly. In the maritime sector, business intelligence and analytics, reporting, and business and process development departments and managers regularly produce and interpret forecasts of freight rates, product sales, inventory requirements, voyage calculations and shipment rates. Based on these forecast values, short- and medium-term operational and strategic decisions are taken. Maritime industries operate in complex global markets, and businesses are highly sensitive to external inputs and changing environmental conditions. In particular, the new world economy and global developments that came with the pandemic are observed to have deeply affected maritime markets as well. The measures countries took against the virus triggered a domino (butterfly) effect across the maritime industry, which is interconnected like a spider's web. This break in the supply chain clearly created ripples along the chain, affecting ports across the US, Europe and Asia.
In the container market in particular, freight rates first fell with shrinking demand, and immediately afterwards the need for empty containers and a steadily rising demand for transport pushed freight rates up aggressively. For this reason, forecasting the trend of freight rates in the maritime sector and developing strategies to create positive value for the future has become far more important. Studies using historical and current data (data warehouses) stand out as a powerful tool for foreseeing the future in digitalizing markets. Their importance grows further at turning points in global markets such as Covid-19, digital currencies, the Fed's interest rate policy, trade wars, energy crises, inflation, Web 3.0 and wars. The importance of forecasting is easy to see not only in times of crisis but even when all conditions are favorable. Among the forecasting models available in this context, time series models are among the most preferred and most frequently used methods. Many time series techniques for forecasting have been studied in recent years. For data structures with single or complex variables, ARIMA, Support Vector Regression (SVR), fuzzy sets, artificial neural networks and machine learning stand out. An early time series study in this field was by Klein and Verbeke (1987), who modeled steel traffic flow at the port of Antwerp using univariate time series with monthly data. Pino et al., in their work on rapidly forecasting any time series consisting of large amounts of data, computed hourly forecasts of energy prices for Spain's electricity generation market. Zeng et al. developed a method based on empirical mode decomposition (EMD) and artificial neural networks (ANN) for forecasting the Baltic Dry Index (BDI).
Angelopoulos investigated the dynamic spectral content of the Baltic Dry Index (BDI), analyzing it through the Zhao-Atlas-Marks bilinear time-frequency representation to reveal its possible variations over time. In another forecasting study, Fahran used SARIMA models, taking seasonal variations into account, to forecast container handling at several international container ports. Sezer conducted a comprehensive literature review of financial time series forecasting studies between 2005 and 2019, categorizing the works by forecast application areas such as index, forex and commodity prediction. Many models and methods have been developed in various industries using time series analysis. Although time series are not a new topic in the literature, studies in the maritime industry are limited and mostly outdated. This study aims to produce forecasts based on monthly data for the Institute of Shipping Economics and Logistics (ISL) and Leibniz-Institut für Wirtschaftsforschung (RWI) Container Throughput Index, the Shanghai Containerized Freight Index (SCFI), and the Baltic Exchange Dry Index. Since no prior study of the RWI/ISL container throughput index was found, this will be the first study on the topic. Given the scarcity of recent publications on forecasting during and after the pandemic, our work on the SCFI and BDI is expected to make a significant contribution to the literature, and the maritime sector can also benefit from these studies. The introduction discusses the importance of data-driven decision making and forecasting, its assessment for the maritime sector, and the stages of the study. Conducted with three different indices in total, the study consists of four stages.
The first chapter emphasizes the aim and importance of our study within the framework of the literature, and reviews forecasting studies on time series in the maritime and other sectors. The second chapter explains time series and basic concepts. The third chapter covers the study methodology and model selection, the selected maritime indices and application processes, and the execution, evaluation and interpretation of the analyses. The fourth chapter concludes the study. The study focuses on ARIMA and SARIMA from the Box-Jenkins family to model the monthly data of the RWI/ISL Container Throughput Index, the Shanghai Containerized Freight Index (SCFI) and the Baltic Exchange Dry Index. The R programming language was used to model the RWI/ISL and BDI data, and Python for the SCFI. The first study examines short-term forecasting of the RWI/ISL Container Index using monthly data from January 2007 to December 2019. The model was estimated using SARIMA and ETS models. The forecast results show that while the original RWI/ISL series increases after April 2020, the seasonally and working-day adjusted RWI/ISL series decreases after March 2020. The SARIMA model performs better than the ETS model. As the first study in the literature using the RWI/ISL index, the successful performance of our forecasting model is expected to contribute to the maritime sector and the literature. This study also aims to contribute to improving the forecast accuracy of the SCFI by using two different time series modeling approaches and appropriate criteria to satisfy their assumptions. To that end, the SCFI data were examined with the Holt-Winters and SARIMA methods. Compared with Holt-Winters, the SARIMA model has the minimum MAPE and RMSE values.
As a result of the time series analysis, since its forecast accuracy is acceptable, the selected SARIMA(0,2,3)(1,0,0)12 model can be used to forecast future values. The results show that the SARIMA model is the more precise and accurate model. Based on the identified SARIMA model, forecast values were computed for the period August 2021 to February 2022. In the Baltic Dry Index study, monthly data covering January 2011 to June 2021 were used to produce a 12-period (monthly) forecast for the BDI with the univariate Autoregressive Integrated Moving Average (ARIMA) method. To identify the ARIMA model, seasonality and unit root tests were first applied and the exogenous components of the dataset were investigated. The specification tests for the identified model satisfied their assumptions, and the model was judged valid and reliable. In conclusion, maritime market analysts can benefit from the performance of the satisfactory forecasting models we propose and integrate them into their own analysis tools.
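The MAPE and RMSE comparison used above to pick SARIMA over Holt-Winters and ETS can be sketched as follows; the hold-out values and the two forecast series are hypothetical numbers, not the actual index data.

```python
import math

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - f) / a)
                       for a, f in zip(actual, forecast)) / len(actual)

def rmse(actual, forecast):
    """Root mean squared error."""
    return math.sqrt(sum((a - f) ** 2
                         for a, f in zip(actual, forecast)) / len(actual))

# hypothetical hold-out index values and two competing forecasts
actual = [100, 110, 120, 130]
sarima = [102, 108, 123, 128]
holt_w = [95, 115, 112, 140]
better = "SARIMA" if mape(actual, sarima) < mape(actual, holt_w) else "Holt-Winters"
```

MAPE is scale-free, so it allows comparison across indices of very different magnitudes (e.g. SCFI versus BDI), while RMSE penalizes large misses more heavily; reporting both is why the model choice is robust.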
-
ÖgeA model proposal for intellectual capital valuation of maritime businesses(Graduate School, 2023-03-03) Çevik, Gizem ; Arslan, Özcan ; 512152014 ; Maritime Transportation EngineeringIntellectual capital in a business encompasses the wealth of ideas and the capacity for innovation that largely determine the firm's future. As competition in existing markets intensifies day by day, intellectual capital is one of the most critical factors providing development and competitiveness in today's business environment. The survival of many businesses depends on their willingness and ability to adapt to change. Through intellectual capital, firms can adapt quickly to changes and remain competitive in their markets. Given their fields of activity, maritime businesses are considered service-intensive enterprises and, moreover, operate in a sector where all national and international social, economic and political variables play an active role. Wherever they are located geographically, these firms fight for a share of the world market due to their international nature. In doing so, many variables affect them positively or negatively: not only the number and tonnage of the ships they operate, but also knowledge, experience, organizational culture, organizational structure, relations with national and international stakeholders, the education level of their employees, the safety level of their operations, and their audit and monitoring procedures. Shipping companies, fighting for survival since the 2008 crisis, have seen steadily rising rates of bankruptcy and change of ownership, with the BDI (Baltic Dry Index) trading well below charter rates in 2016. In addition, beyond the seasonal cycle, the fluctuation created by the Covid-19 pandemic strained decision makers, since its magnitude and duration could not be estimated in advance.
Industry news shows that, alongside ill-considered investments, poorly planned or entirely neglected intellectual capital, such as experience, patents, brand recognition, sector-specific software, employee training and organizational culture, all of which would raise company value, has brought about the end of these businesses. Accordingly, ship management companies need to perform intellectual-capital-based assessments, make value-enhancing decisions accordingly, analyze these factors well and audit their processes. In the past, people believed that organizational performance depended on financial and expense items. This view has shifted: the success of organizations is now seen as largely dependent on the intellectual capital elements that contribute to organizational performance. The performance of ship management companies also matters to their customers in the industry (charterers/shippers). Inspection and monitoring systems for companies and the ships operated under them are increasing day by day. The aim of this study is to support strategic decision-making processes by building a maritime-specific model that measures the intellectual capital of a ship management company, providing decision makers in these businesses with data through which they can observe the success areas where they can increase their competitive advantage. The study takes ship management companies as the maritime businesses in scope; through a systematic literature review, the criteria used to measure intellectual capital were compared with and integrated into maritime-specific key performance indicators, and a taxonomy study was carried out. The framework was kept broad so that it can fit companies operating different ship types and trading areas, but indicators outside the company's area of responsibility were excluded so that these constraints would not negatively affect the measurements.
To measure the intellectual capital elements accepted in the literature, namely human capital, structural capital and relational capital, a final set of 99 key performance indicators was identified through an evaluation by 21 experts. The key performance indicators were grouped into 15 group performance indicators forming the sub-dimensions of the 3 elements. Using the fuzzy Analytic Hierarchy Process (AHP) method, 13 experts weighted the intellectual capital elements and the group performance indicators. A total of 158 performance indicators were used to measure the key performance indicators. Measurement periods were set quarterly or annually, consistent with the literature. Most indicators are computed with objective measurement methods, some drawn from the literature and some developed by the author. For the limited number of indicators that had to be measured with subjective methods, survey instruments from the literature were used. Minimum requirements and sectoral targets were defined for each key performance indicator, making it possible to determine success rates. The methodology yielded the "Intellectual Capital Self-Assessment Model for Ship Management Companies (ICSA_SMC)". To check the measurements, the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) was applied at the level of key performance indicators. The first output of the study is the weighting obtained with the fuzzy AHP method. The human capital element ranks first (0.537), and within its sub-dimensions, Human Resources Operational Management (0.197), Training and Development (0.189) and Employee Competence (0.151) take the top three places.
Structural capital (0.292) emerges as the second most important element of a ship management company's intellectual capital. However, because ten group performance indicators are examined under this element, their individual importance weights are relatively low: 1. Health and Safety Performance (0.056); 2. Navigational Safety Performance (0.056); 3. Technical Performance (0.032); 4. Environmental Performance (0.028); 5. Operational Performance (0.026); 6. Security Performance (0.025); 7. Control Performance (0.022); 8. Information Technology Performance (0.018); 9. Legal Performance (0.015); and 10. Developmental Performance (0.014). Relational capital (0.171), the last-ranked element, consists of only two group performance indicators: Stakeholder Relations Performance (0.127) ranks 4th overall, while Community Relations Performance (0.044) ranks 7th. At this stage it is clear that, to increase its intellectual capital, a ship management company should focus first on managing its human capital well and prioritize investments in this area. Although structural capital is the second-priority element, stakeholder relations, examined under the relational capital sub-groups, are also significant enough to influence these firms' strategic decisions. From a holistic perspective, strong structural capital is a precondition for a ship management firm to survive and hold its ground in the market. But when it comes to adding value, strategic decisions concerning the competence and management of human resources and long-term, reliable relations with external stakeholders can multiply firm value. To test the ICSA_SMC model, 2021 data were collected from five ship management companies and analyzed.
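A common way to obtain crisp weights like the ones listed above from fuzzy pairwise judgments is Buckley's geometric-mean fuzzy AHP. The sketch below, with a hypothetical 2x2 comparison matrix, shows the mechanics only; it is not the thesis's expert data or its exact fuzzy AHP variant.

```python
def buckley_fuzzy_ahp(matrix):
    """Buckley's fuzzy AHP: pairwise comparisons are triangular fuzzy
    numbers (l, m, u). Fuzzy weights come from the fuzzy geometric mean
    of each row; crisp weights are centroid-defuzzified and normalized."""
    n = len(matrix)
    geo = []
    for row in matrix:
        l = m = u = 1.0
        for (a, b, c) in row:
            l *= a; m *= b; u *= c
        geo.append((l ** (1 / n), m ** (1 / n), u ** (1 / n)))
    total_l = sum(g[0] for g in geo)
    total_m = sum(g[1] for g in geo)
    total_u = sum(g[2] for g in geo)
    # fuzzy weight: row geometric mean divided by the (reversed) totals
    fuzzy_w = [(g[0] / total_u, g[1] / total_m, g[2] / total_l) for g in geo]
    crisp = [(l + m + u) / 3 for (l, m, u) in fuzzy_w]
    s = sum(crisp)
    return [w / s for w in crisp]

# hypothetical judgment: criterion A is "about 3 times" criterion B
tfn = lambda l, m, u: (l, m, u)
M = [[tfn(1, 1, 1), tfn(2, 3, 4)],
     [tfn(1 / 4, 1 / 3, 1 / 2), tfn(1, 1, 1)]]
weights = buckley_fuzzy_ahp(M)
```

The triangular numbers let each expert express "about 3" rather than exactly 3; after defuzzification the weights still sum to 1, so they can feed a scorecard directly.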
In the analysis, a scorecard principle was adopted and the firms' intellectual capital was scored out of 100. Although the valuation was made independently of these parameters, Company 3 (81.65%), which operates both tankers and dry cargo ships and has more ships in its fleet than the others, took first place. Company 2 (74.56%), a tanker operator, came second, followed by Company 5 (69.77%), also a tanker operator. While it is notable that Company 2 has more ships in its fleet than Company 3, this raises the question of whether fleet size has an effect on intellectual capital management. After these three, Company 1 (62.31%), a dry cargo operator, ranked 4th. In last place is Company 4 (27.42%), which operates coaster-type ships. Using the TOPSIS method to assess closeness to the ideal solution at the level of key performance indicators, the ranking of the firms remained the same, validating the ICSA_SMC model. The ICSA_SMC model proposed in this thesis takes a hybrid approach for ship management companies, integrating, for the first time, fuzzy-logic weighting (fuzzy AHP) with a scorecard (SC) and adding TOPSIS as a control method. The proposed model can be applied to other service-based sectors to develop self-assessment guides. This study presents a hybrid model for performance-based intellectual capital valuation of ship management companies. With minimum requirements and targets held fixed, businesses that keep their values up to date in a database built with this model will be able to compare their intellectual capital values. In addition to the study's aim, the ICSA_SMC model enables businesses to perform self-assessments.
At this stage, firms can set their own targets, observe the effects of the strategic decisions taken within the firm, and build capability in intellectual capital management. Although the model imposes an additional data-tracking burden on companies, it also provides a holistic monitoring system, since it was prepared in line with BIMCO standards and the TMSA. Care will first be needed in planning any follow-up study to keep the work applicable. A further line of research is to assess how tools can help managers understand the links between performance measures and strategic goals, as a way of easing the adoption of such tracking. Scenario-based investment and strategic decision modeling could also be studied in a way that delivers value gains to ship management businesses.
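The TOPSIS control step that confirmed the company ranking can be sketched in a few lines. The KPI matrix and weights below are hypothetical toy values, not the thesis's 2021 company data; the formulation is the standard vector-normalized TOPSIS.

```python
import math

def topsis(matrix, weights, benefit):
    """TOPSIS: vector-normalize the decision matrix, weight it, measure
    each alternative's distance to the ideal and anti-ideal solutions,
    and score by relative closeness (higher is better)."""
    n_alt, n_crit = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(n_alt)))
             for j in range(n_crit)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n_crit)]
         for i in range(n_alt)]
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col)
            for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - p) ** 2 for x, p in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - q) ** 2 for x, q in zip(row, anti)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# hypothetical KPI matrix: rows = companies, columns = two benefit KPIs
m = [[0.8, 0.9], [0.6, 0.7], [0.3, 0.2]]
scores = topsis(m, weights=[0.6, 0.4], benefit=[True, True])
```

Because TOPSIS ranks by distance geometry while the scorecard ranks by weighted sums, agreement between the two orderings is a meaningful cross-check rather than a tautology.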
-
ÖgeDesigning business model framework for public bus transportation authorities: A fuzzy approach(Graduate School, 2023-02-22) Buran, Büşra ; Erçek, Mehmet ; 507162010 ; Management EngineeringCovid-19, which took hold across the world, has also deeply affected the public transportation sector. Public transportation is sustained by subsidies, and the gap between expenses and income grew under the effect of Covid-19. The sector earns the majority of its revenue from tickets, and this revenue fell by ninety percent on curfew days during the pandemic period. Expenses grew with the addition of new items such as disinfection and masks. Despite all the difficulties, public transportation continued to serve without stopping during the pandemic period. For this reason, public transportation is seen as a backbone of cities. Especially in developed and developing countries, it plays a critical role in overcoming traffic congestion. To achieve this, the service quality of public transportation is seen as a key point. The service quality of public transportation depends on different factors such as operation, repair and maintenance, audit, and management. From the management side, the business model provides a holistic perspective because it takes into account activity, value, and financial status. Business models represent a critical tool for strategic management. They provide managers with an integrated perspective to shape business operations along the activity, value, and finance dimensions. While the activity block consists of key partners, key resources, and key activities, the value block includes the value proposition, customer segments, channels, and customer relationships. Lastly, the finance block comprises revenue streams and costs from the Business Model Canvas (BMC) perspective. The canvas serves to understand, communicate, share, change, measure, simulate, and learn more about the different aspects of firms.
Depending on the type of organization, the business model can vary, for example profit-based or social-based. This thesis presents a business model canvas framework for public transportation organizations that includes impact factors and the external environment. Impact factors include social and environmental issues for public bus transportation such as elderly people, disabled people, electric buses, and green transportation. From an external view, a PESTEL analysis is taken into account, covering political, economic, social, technological, legal, and environmental factors. Taking impact factors and the external environment into account provides managers and policymakers with a holistic perspective for effective management. The main and sub-criteria of the model are designed according to the literature under three hierarchical levels. While the first level of the model contains the main criteria, namely the internal and external environment, the second level comprises their sub-criteria, namely the business model canvas and the PESTEL analysis. Finally, the third level is related to the sub-criteria of the business model and the PESTEL analysis. In addition, this thesis aims to query the viability of a new strategic action tool specifically geared to the interests of public bus transportation authorities (PBTAs) around the globe and to explore the degree of homogeneity in their responses as well as the possible drivers of those responses. To answer its research question, the study first offers a generic business model design for a PBTA, which integrates an extended version of the business model canvas with external environmental factors in order to enhance its sustainability. Subsequently, the importance attributions of international transportation experts to different model components are evaluated using the Spherical and Intuitionistic Fuzzy AHP methods. The proposed model is evaluated by experts from four continents: America, Asia, Australia, and Europe.
International experts are selected according to their experience: they come from different departments (planning, operation, innovation, strategy development, and finance) and have more than ten years of experience. There are many methods in the literature, such as Multi-Attribute Utility Theory, Analytic Hierarchy Process (AHP), Analytic Network Process (ANP), Fuzzy Set Theory, Case-Based Reasoning, Data Envelopment Analysis, Simple Multi-Attribute Rating Technique, Goal Programming, Elimination and Choice Translating Reality (ELECTRE), Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE), Simple Additive Weighting, Multicriteria Optimization and Compromise Solution (VIseKriterijumska Optimizacija I Kompromisno Resenje, VIKOR), Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), and the Weighted Aggregated Sum Product Assessment (WASPAS). AHP is the most widely used method due to its ease of application, its flexibility, and its applicability to many subjects. In circumstances involving vagueness and complexity, however, fuzzy logic is preferred to classical AHP. In fuzzy logic there are different fuzzy sets, including Ordinary, Interval-Valued, Intuitionistic, Neutrosophic, Nonstationary, Hesitant, and Spherical sets. In this problem, fuzzy logic is applied to the model with two extensions, Intuitionistic Fuzzy Sets (IFS) and Spherical Fuzzy Sets (SFS), to evaluate the proposed model. A solution set is also provided with traditional AHP in order to check the robustness of the former methods. According to the results, the internal environment is ranked as the most important criterion at the first level for all methods. Whereas the activity element is ranked first at the second level, key partners are ranked first at the third level for all methods. 
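Whatever the fuzzy extension, AHP variants ultimately derive priority weights from a pairwise comparison matrix. As a minimal illustration of that underlying step, the sketch below uses the row geometric mean method on a hypothetical 3x3 matrix; the judgment values are invented for illustration and are not the experts' actual comparisons.

```python
import math

def ahp_weights(matrix):
    """Derive AHP priority weights via the row geometric mean method:
    take the geometric mean of each row, then normalize to sum to 1."""
    n = len(matrix)
    geo_means = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(geo_means)
    return [g / total for g in geo_means]

# Hypothetical pairwise comparisons for three criteria on Saaty's 1-9
# scale; matrix[i][j] states how much more important criterion i is
# than criterion j (reciprocals below the diagonal).
pairwise = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]

weights = ahp_weights(pairwise)  # normalized priorities, sum to 1
```

In the fuzzy variants the crisp matrix entries are replaced by intuitionistic or spherical fuzzy numbers, but the aggregate-then-normalize structure stays the same.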
The relative similarity of the results obtained with the traditional and Intuitionistic Fuzzy AHP models suggests that the Spherical Fuzzy AHP model may have better potential to handle the vagueness of business modeling problems. Sensitivity analyses show that the model is sensitive to expert judgments. From the convergence-divergence perspective, the expert opinions tend to converge more on the internal components of the model and diverge on the external components, especially regarding economic and technological factors. A strategic response action set is also designed to facilitate the adoption of the model by PBTAs. The strategic responses include short- and long-term actions separately: while unexpected conditions such as pandemics trigger short-run responses, long-run actions are mobilized through the planning process. Innovating the business model is as important as designing it. As political, economic, social, technological, and legal conditions change, business models may need to be redesigned to survive in the business ecosystem. The development of the proposed business model can be tracked over time using defined performance indicators. This study contributes to the work of both academicians and practitioners in designing and evaluating public transportation business models. The study not only extends the research on strategic management in the public bus transportation domain but also contributes to the convergence and divergence debate by offering a reconciliatory duality perspective.
-
ÖgeDetermination of spatial distributions of greenhouses using satellite images and object-based image analysis approach(Graduate School, 2023-03-02) Şenel, Gizem ; Göksel, Çiğdem ; Torres Aguilar, Manuel Angel ; 501182620 ; Geomatics EngineeringIn the face of the expected pressure on agricultural production systems from the increasing world population, one of the most suitable options for the sustainable intensification of agricultural production is greenhouse activity, which allows an increase in production on existing agricultural lands. Greenhouse activities can, however, cause environmental problems at local and regional scales. Since the primary material used to cover greenhouses is plastic, ecological problems are expected in the near future due to its excessive use. Besides, greenhouses may affect the integrity of ecosystems by converting land use and land cover (LULC) into extensive agricultural areas. On the other hand, the economy of many rural regions is supported by greenhouse activities, especially in Mediterranean countries. Moreover, because these structures are exposed to floods, especially under climate change effects, producers face economic and social problems. While all these situations make the production system unsustainable, they also endanger the ecology and economy of the region. Thanks to synoptic data acquisition and high temporal resolution, remote sensing images allow periodic monitoring of the agricultural sector. Considering both the positive outcomes and the adverse effects of greenhouses, determining greenhouse areas from remote sensing images is essential for better management strategies; monitoring through remote sensing images is therefore the most suitable approach to obtain information about the effects of greenhouses on climate and environment and to improve their economic output. 
Within the scope of this thesis, answers to different questions were sought using the object-based image analysis (OBIA) approach, which is reported in the literature to give better results for determining greenhouses. The OBIA approach consists of three main stages, image segmentation, feature extraction, and image or object classification, and these stages formed the structure of this thesis. In the image segmentation step, the first step of OBIA, answers were sought for two crucial questions regarding the segmentation of plastic-covered greenhouses (PCG). The first question is which of the supervised segmentation quality assessment metrics performs better in evaluating PCG segmentation. An experimental design was formed in which segmentation metrics were evaluated together with interpreter evaluations. At this stage, sixteen datasets with different spatial resolutions (medium and high), seasons (summer and winter), study areas (Almería (Spain) and Antalya (Turkey)), and reflection storage scales (RSS) (16Bit and Percent) were used. Various segmentation outputs were created using the Multiresolution segmentation (MRS) algorithm. Six interpreters evaluated these outputs, and their assessments were compared with eight segmentation quality metrics. As a result of the evaluations, Modified Euclidean Distance 2 (MED2) was found to be the most successful metric for evaluating PCG segmentation, whereas Fitness and the F-metric failed to identify the best segmentation output compared to the other metrics investigated. In addition, the effects of different factors on the visual interpretation results were analyzed statistically, revealing that the RSS is an essential factor in visual interpretation. 
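Supervised metrics such as those compared above score a candidate segment against a manually delineated reference object. As a simplified illustration of the idea (the classic area-based over-/under-segmentation formulation, not the exact MED2 definition), the sketch below scores one segment against one reference using pixel sets; the toy geometry is invented.

```python
import math

def over_under_segmentation(reference, segment):
    """Area-based over- and under-segmentation errors for one
    reference object and one candidate segment, given as sets of
    pixel coordinates, combined into a Euclidean distance score."""
    overlap = len(reference & segment)
    over = 1 - overlap / len(reference)   # reference area the segment misses
    under = 1 - overlap / len(segment)    # segment area spilling outside
    ed = math.sqrt((over ** 2 + under ** 2) / 2)  # 0 = perfect match
    return over, under, ed

# Toy example: pixels as (row, col) tuples on a tiny grid
reference = {(r, c) for r in range(4) for c in range(4)}   # 4x4 greenhouse
segment = {(r, c) for r in range(4) for c in range(1, 5)}  # shifted one column
over, under, ed = over_under_segmentation(reference, segment)
```

Metrics such as MED2 extend this scheme by weighting and aggregating the two error components over all reference objects in a scene.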
In detail, it was concluded that when evaluating the segmentation outputs created using the Percent format, the interpreters agreed more with each other and interpreted this data type more efficiently. In the second part of the segmentation phase, the extent to which these factors and their interactions affect greenhouse segmentation was investigated. Approximately 4,000 segmentation outputs were produced from the sixteen datasets, and MED2 values were calculated. For each shape parameter in each dataset, the values reaching the best MED2 were determined and tested statistically by analysis of variance (ANOVA). The segmentation outputs showed that the optimal scale parameters clustered close to each other in Percent format and spread over a broader range in 16Bit format, indicating that it is easier to determine the most appropriate segmentation outputs in Percent format. In addition, the statistical tests showed that the segmentation accuracy calculated from different RSS formats depends directly on the shape parameter: accuracy increases with decreasing shape parameter in Percent format, while the opposite holds in 16Bit format. This revealed that shape parameter selection is critical depending on the RSS. In summary, the Percent format is the appropriate data format for PCG segmentation with the MRS algorithm, and low shape parameters should be preferred in the Percent format. In the second stage of the thesis, it was hypothesized that different feature space evaluation methods and feature space dimensions affect classification in terms of accuracy and time. Based on this hypothesis, 128 features were obtained from Sentinel-2 images of the Almería and Antalya study areas, and classification performance was evaluated with the random forest (RF) algorithm under different feature space evaluation methods. 
This evaluation showed that reducing the feature space has a direct effect on accuracy; moreover, it significantly reduces the time required to run the classification algorithm. Among the examined feature space evaluation algorithms, RF and Recursive Feature Elimination with RF (RFE-RF) therefore proved the most suitable in terms of classification accuracy and runtime. These algorithms are less dependent on feature space variation in terms of classification accuracy, while reducing the feature space significantly reduces computation time. In addition, among the 128 features obtained from the segments, including spectral, textural, and geometric features and spectral indices, the Plastic GreenHouse Index (PGHI) and the Normalized Difference Vegetation Index (NDVI) were the most relevant features for PCG mapping according to the RF and RFE-RF methods. The main outputs of this stage are therefore the necessity of including indices such as PGHI and NDVI in the feature space and of applying a feature space evaluation method such as RF or RFE-RF to reduce computation time. In the third and final stage of the thesis, the effectiveness of ensemble learning algorithms for PCG classification was tested. According to the experimental results, the Categorical Boosting (CatBoost), RF, and support vector machine (SVM) algorithms performed well in both study areas (Almería and Antalya), but the implementation time required for CatBoost and SVM is higher than for all other algorithms studied. The k-nearest neighbor (KNN) and AdaBoost algorithms achieved lower classification performance in both study areas. In addition to these algorithms, the light gradient boosting machine (LightGBM) algorithm achieved an F1 score of over 90% in both study areas in a short time. 
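The recursive feature elimination strategy evaluated in this stage can be sketched without any machine learning library: repeatedly drop the feature with the lowest importance until the target dimension is reached. The variance-based importance below is only a stand-in for the RF importances used in the thesis, and the feature columns are invented for illustration.

```python
from statistics import pvariance

def rfe(features, importance, n_keep):
    """Recursive feature elimination: repeatedly remove the feature
    with the lowest importance score until n_keep features remain."""
    kept = dict(features)
    while len(kept) > n_keep:
        worst = min(kept, key=lambda name: importance(kept[name]))
        del kept[worst]
    return list(kept)

# Toy feature table: name -> per-segment values. In the thesis the 128
# features include spectral bands, textures, and indices such as PGHI
# and NDVI; these columns are hypothetical.
features = {
    "ndvi":  [0.1, 0.8, 0.2, 0.9],
    "pghi":  [0.3, 0.7, 0.2, 0.8],
    "noise": [0.5, 0.5, 0.5, 0.5],  # constant column -> zero importance
}
selected = rfe(features, pvariance, 2)  # eliminates "noise" first
```

In the actual RFE-RF procedure the importance function is the random forest's own feature importance, recomputed after each elimination round.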
In summary, considering computation time and classification accuracy, RF and LightGBM are the two front-runner algorithms. In general, within the scope of this thesis, answers to the questions encountered in the three steps of OBIA were sought to reach the best PCG determination approach. The determination of greenhouses from satellite images was carried out in two essential study areas in the Mediterranean Basin, where greenhouse activities are carried out intensively. Although these results belong to selected test sites, they provide an important basis for generalizing the findings on a large scale. Determining the spatial distribution of PCG in order to minimize their negative effects on the environment and increase their economic returns will make an important contribution to planners and decision-makers in achieving sustainable agriculture goals.
-
ÖgeDevelopment of application specific transport triggered processors for post-quantum cryptography algorithms(Graduate School, 2022-10-18) Akçay, Latif ; Yalçın Örs, Sıddıka Berna ; 504152210 ; Electronics EngineeringAlthough initially only at the level of theoretical studies, many quantum computer development projects have been carried out in recent years. The promising results so far and the competition among companies indicate that the number of such studies will increase even more. Quantum computers are unlikely to become part of our daily lives in the near future; however, they will most likely be used much more widely in certain areas. In particular, search, optimization, and factorization problems can be solved by quantum computers much faster than by classical computers. Thus, operations such as big data analysis, machine learning, or multivariate simulations can be performed in reasonable time, a valuable step for the advancement of science and technology. On the other hand, public key cryptography is under serious threat from quantum computer attacks, because most commonly used algorithms are based on the hardness of the factorization problem, and this hardness may not hold against quantum computers. Therefore, NIST initiated the Post-Quantum Cryptography Standardization Process to develop quantum-resistant algorithms. Currently, this process has reached the final stage, with four key encapsulation mechanisms and three digital signature methods. Just as important as the security of an algorithm is whether it can be implemented and run efficiently. Especially in embedded systems, low power consumption and a small chip area are fundamental requirements that must be met at a sufficient performance level. Application-specific processor designs are often needed to meet such demands. 
This study proposes suitable processor architectures for the quantum-resistant lattice-based cryptography algorithms in the final stage of the NIST standardization process. For this purpose, it compares the widely used Reduced Instruction Set Computing methodology with the Transport-Triggered Architecture. The strengths and weaknesses of both techniques are analyzed through test results of open-source sample designs. This work also suggests application-specific cores with various custom operations. In addition, the difficulties in the processor development process and possible solutions are evaluated. In the introduction, the mathematical background of the lattice-based algorithms and the principal computation approaches of both architectures are presented. Several comparisons of various cores are shared in the following sections. After that, the design methodology of the custom operations and the obtained FPGA and ASIC results are given. Finally, possible future improvements are evaluated.
-
ÖgeDevelopment of secure e-commerce protocol(Institute of Science and Technology, 2022) Cebeci, Sena Efsun ; Özdemir, Enver ; 744926 ; Department of Applied InformaticsIncreased security breaches in e-commerce over the previous decade have prompted e-commerce enterprises to take more precautions. The inability to securely retain personal data has resulted in violations primarily involving credit card fraud and the theft of user accounts. In this context, e-commerce companies invest in database security to keep user data more securely. Furthermore, these security flaws demonstrate the critical necessity of implementing security protocols and solutions. Existing approaches are insufficient and place undue computational and communication burdens on e-commerce businesses. Similar database flaws exist in mobile payment systems, where numerous safeguards should likewise be included. In general, user data is maintained both encrypted and unencrypted in database systems, and it can be viewed when the database is compromised. In this thesis, we propose an e-commerce protocol that takes security to the next level by employing a novel technique to address existing security flaws in e-commerce and mobile payment systems. Furthermore, even if the database is stolen, this protocol prevents access to personal data, removing users' privacy concerns. Users will have the right to decide how their data is used and will be able to regulate it. To accomplish these enhanced capabilities, users' data is transformed using mathematical procedures and stored in the database in this modified form. Finally, we implemented the proposed protocol, designed attack scenarios to validate it, and compared it with other known protocols and algorithms. The analysis revealed that the suggested protocol outperforms the compared approaches in terms of execution time.
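The thesis's actual mathematical transformation is not specified in this abstract. As a generic illustration of the storage principle only (the database holds a transformed form, so a stolen dump exposes nothing directly), the sketch below salts and hashes a record with PBKDF2; the data values are hypothetical and this stand-in is not the protocol's own construction.

```python
import hashlib
import os

def transform_record(plaintext, salt=None):
    """Store only a salted, slow hash of sensitive data, so a stolen
    database dump yields neither plaintext nor reusable tokens."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", plaintext.encode(), salt, 100_000)
    return salt, digest

def verify_record(plaintext, salt, digest):
    """Recompute the transformation and compare with the stored form."""
    return hashlib.pbkdf2_hmac("sha256", plaintext.encode(), salt, 100_000) == digest

# Hypothetical record; data that must stay queryable or chargeable
# would instead need a keyed or structure-preserving transformation.
salt, stored = transform_record("user-account-credential")
```

A one-way transformation like this suits credentials; payment data that the merchant must still use typically calls for reversible but keyed schemes such as tokenization.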
-
ÖgeDisposition bias for different investor categories in Borsa Istanbul(Graduate School, 2022-10-18) Kahya, Evrim Hilal ; Ekinci, Cumhur Enis ; 507102007 ; Management EngineeringFinancial theory has based many of its models on the rationality of investors, an assumption that behavioral finance has challenged for two decades. Disposition bias is among the many biases investors exhibit when behaving against the rationality assumption. Building on the prospect theory of Tversky and Kahneman (1979), in which individuals are assumed to be loss averse, Shefrin and Statman (1985) coined the term "disposition bias" for the tendency of investors to sell investments held at a loss at a slower rate than investments held at a gain. Relevant to our study, it is important to explain human psychology regarding positive and negative outcomes. It is found that we approach positive occurrences differently from negative ones: one way or another, we overweigh positive circumstances, underweigh negative ones, and shape our daily choices based on this subjective evaluation. This discrepancy leads to irrational choices and behaviors, a central issue in human coping mechanisms, and this irrational behavior finds its projection in finance through the disposition bias. If an investor is exposed to the disposition bias, she behaves differently when faced with a losing portfolio than with a winning portfolio. The psychology of these two different choices was analyzed by Shefrin and Statman (1985) with the prospect theory of Kahneman and Tversky (1979). This theory suggests that investors are risk takers when a loss has occurred and refrain from risk when a gain is certain. Weber and Camerer (1998), Anaert et al. (2008), and Lee et al. 
(2008) tested it through experiments, simulations, and other statistical analyses and found statistically significant results favoring prospect theory, whereas Odean (1998), Jiao (2017), and Barberis and Xiong (2009) found mixed results regarding the theory, and many others, such as Kaustia (2010), Hens and Vlcek (2011), Ben-David and Hirshleifer (2012), and Kubinska et al. (2012), found no statistically significant results favoring prospect theory. Apart from prospect theory, attempts have been made to explain the disposition bias through mental accounting and regret aversion by Brown et al. (2006), Dhar and Zhu (2006), Kaustia (2010), Goo et al. (2010), and Rau (2015); through self-attribution bias by Barber et al. (2007); and through a direct causal effect of emotions by Summers and Duxbury (2012), Aspara and Hoffmann (2015), Garling et al. (2016), and Chang et al. (2016); but again the results from all these studies are mixed. The main finding of the disposition bias analyses is that investors are generally less willing to realize losses than gains. The disposition bias has been analyzed not only for general investor groups but also from many aspects, from different investor types and cultures to its effects on stock prices, investor wealth, and other behavioral biases. In studying the disposition bias (DB) across investor groups, the hypothesis is that if one can find differences among groups, one can develop a better explanation of the reasoning behind the DB. Grinblatt and Keloharju (2001), Shu et al. (2005), Lehenkari and Perttunen (2005), Barber et al. (2007), Chen et al. (2007), Boolell-Gunesh (2009), Goo et al. (2010), Frino et al. (2014), and many others analyzed the effect of gender and/or age on the DB, while Shapira and Venezia (2001), Shu et al. 
(2005), Lehenkari and Perttunen (2004), Dhar and Zhu (2006), Weber and Welfens (2008), Boolell-Gunesh (2009), Choe and Eom (2010), and others analyzed the effect of sophistication on the DB. However, even though there are many studies on the DB, the results are mixed, and the literature has reached a definite conclusion neither on the reasoning behind the disposition bias nor on its effects on different groups. Seeing this gap in the literature, in our paper we construct a methodology for analyzing the disposition bias through subgroups of investors with newly defined proxies. Our aim is to understand the reasoning behind differences in the size of the disposition bias. For example, we know from the literature that not all women or men are exposed to the disposition bias, but we do not know the determinant of this difference. We know that institutional investors are less exposed to the DB, but we cannot generalize this to all institutional investors. If we can understand the reasoning behind this difference, we can make important policy recommendations to reduce this bias and thereby reduce its wealth-reducing effect on investors. Motivated by the above arguments, we perform an analysis of the existence and equivalence of the disposition bias across investors. Our main research questions are as follows. Do investors have a disposition bias when grouped in terms of their type, i.e., their gender or status (male, female, and legal person); size (small, medium-sized, and large); and trading frequency (infrequently, occasionally, and frequently trading)? Is the disposition effect the same across different groups as well as subgroups, i.e., when their different features are jointly considered? To answer these questions, we use an improved methodology for classifying the subgroups, addressing an important gap in the disposition bias literature, since different proxies for trading size and trading frequency have been used. 
For instance, when classifying by size, many researchers, such as Grinblatt and Keloharju (2001), Brown et al. (2006), Dhar and Zhu (2006), and Weber and Welfens (2008), refer to investors' overall portfolio value or asset value (i.e., the value of their portfolio at an instant or an average portfolio value over a time period). Yet these overall or average values have some limitations in capturing the true size of investors. Similarly, trading frequency is usually measured by the average number of trades over a time period, as in Lehenkari and Perttunen (2005), Dhar and Zhu (2006), Chen et al. (2007), and others. Indeed, if an investor trades actively in some periods but less in others, he/she always retains the potential to trade actively; an average number, therefore, does not necessarily reflect his/her attitude. Based on this idea, we developed new proxies for both trading size and trading frequency, benefiting from intraday investor-base data. We calculated the disposition effect based on both numbers and values, as in Barber et al. (2007), whereas many other studies based their analysis only on numbers. Last but not least, ours is the first paper to make a comparative analysis of investor groups through ANOVA and the Tukey HSD test. Our study contributes to the literature in the following ways. Most studies consider a single characteristic of investors in terms of gender, size, or trading frequency (e.g., female, small, or frequently trading), neglecting joint features such as the "frequently trading small female" investor. To fill this gap, we run base-, two-, and three-level analyses, i.e., by combining all the investor features (type, size, and trading frequency). To the best of our knowledge, such an analysis is unique in the literature and helps shed more light on the disposition bias in investor subcategories. 
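Number- and value-based calculations of the disposition effect typically follow Odean's (1998) proportions: the proportion of gains realized (PGR) against the proportion of losses realized (PLR), with a positive difference signaling the bias. A minimal sketch with invented counts, assuming this standard formulation rather than the paper's exact implementation:

```python
def disposition_measure(realized_gains, paper_gains,
                        realized_losses, paper_losses):
    """Odean's (1998) measure: PGR - PLR > 0 indicates a disposition
    bias. The count arguments may be replaced by monetary values for
    the value-based variant of the measure."""
    pgr = realized_gains / (realized_gains + paper_gains)
    plr = realized_losses / (realized_losses + paper_losses)
    return pgr, plr, pgr - plr

# Hypothetical counts aggregated over days on which the investor sold:
# winners are realized more readily than losers.
pgr, plr, de = disposition_measure(
    realized_gains=60, paper_gains=40,
    realized_losses=30, paper_losses=70,
)
```

Computing this per investor yields the group-level samples that can then be compared across type, size, and trading-frequency subgroups.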
Moreover, we propose better proxies for investor size and trading frequency that seek to capture investor sophistication in an intraday setting (detailed in the Methodology section). Furthermore, our calculation of the disposition effect is based on both the "value" and the "number" of paper gains and losses. Last but not least, although behavioral biases, and in particular the disposition bias, have been widely studied worldwide, a detailed investigation of the Turkish market is still missing, presumably because of the lack of investor-level data for research. One exception is Tekçe et al. (2016), which examines the determinants of various biases of individual investors, such as age, gender, experience, wealth, and location. Our dataset encompasses the whole investor base in the country (we start with a sample of 462,488 investors and end up with 283,913 after extracting noisy data); hence, we can capture a large portion of investor activity. In addition, the descriptive statistics obtained on this large dataset reflect the distribution and general characteristics of investors in Borsa Istanbul in terms of gender or status, size, and trading frequency.
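The ANOVA comparison of investor groups mentioned above can be sketched in a few lines; the per-group disposition-effect values below are invented for illustration, and the Tukey HSD post-hoc step (which additionally requires the studentized range distribution) is omitted.

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group mean square over
    within-group mean square, as used to test whether mean
    disposition-effect levels differ across investor groups."""
    all_obs = [x for g in groups for x in g]
    grand_mean = sum(all_obs) / len(all_obs)
    k, n = len(groups), len(all_obs)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2
                    for g in groups for x in g)
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical per-investor disposition-effect values for three
# size groups; smaller investors show a larger bias in this toy data.
small = [0.30, 0.28, 0.35, 0.33]
medium = [0.20, 0.22, 0.18, 0.21]
large = [0.05, 0.08, 0.04, 0.07]
f_stat = one_way_anova_f([small, medium, large])  # large F: means differ
```

A significant F only says that some group means differ; the Tukey HSD test then identifies which specific pairs of groups drive the difference.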