LEE - Oyun ve Etkileşim Teknolojileri Lisansüstü Programı (Graduate Program in Game and Interaction Technologies)
Argent: A web based augmented reality framework for dynamic content generation (Institute of Science and Technology, 2020-07)
Kurt, Gökhan; İnce, Gökhan; 636478; Game and Interaction Technologies Programme

In the modern world, people are increasingly interested in interactive technologies. Education, research, and business habits are affected by this change, and people can work more efficiently using interactive technologies. Augmented reality (AR), a novel addition to these interactive technologies, is especially effective in this regard. Through augmented reality, people can immerse themselves more deeply in the subject matter and interact with it in richer ways. Despite the usefulness of augmented reality, developing an AR application is not always cost-efficient. AR development still requires knowledge of and experience with certain tools and frameworks. Such tools are usually programming and game development tools, and they demand programming and technical skills that are gained through long-term education and training. People experienced in design and content creation may therefore be unable to create and maintain AR applications.

Nowadays, tools like Unity, Vuforia, ARKit and ARCore provide ways to develop AR applications without the low-level computation and programming knowledge that AR technology otherwise requires. Normally, developing an AR application would have taken years of research and development by large teams, but thanks to the SDKs and APIs provided by these tools, AR applications can be developed by small development teams easily and quickly. However, AR is still not easily accessible to all the tech-savvy people who may be interested in developing such applications. The majority of AR applications are developed using Unity. There are visual programming solutions in Unity, but they are not suitable for use in AR applications. A Unity-based tool that allows people without programming skills to create AR applications would therefore be extremely useful. Such a tool would require features such as creating an application without programming, optional support for programming and scripting, real-time updates and the ability to ship without any build and packaging step, support for 3D objects, images and video, the ability to modify objects and preview them in real time, and the ability to create user interfaces. The tool should also offer a user-friendly interface and experience, and it should introduce these innovative features without changing conventional workflows. Existing tools do not provide these features, which are crucial for an ordinary person to create AR applications.
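To make "dynamic content generation" more concrete, the sketch below shows one way a no-code tool could describe an AR scene declaratively and publish it as data rather than as a compiled build. The schema, field names, and URL here are illustrative assumptions, not the format actually used by the framework described in this thesis.

```python
# Illustrative sketch only: a guessed declarative AR scene description that a
# no-code editor could publish as JSON; not the actual Argent format.
import json
from dataclasses import dataclass, field, asdict
from typing import List

@dataclass
class ARAsset:
    asset_id: str                 # name the editor UI would show
    kind: str                     # "model", "image" or "video"
    source_url: str               # where the web client fetches the asset from
    position: List[float] = field(default_factory=lambda: [0.0, 0.0, 0.0])
    scale: float = 1.0

@dataclass
class ARScene:
    marker_image: str             # tracking target handed to the AR library
    assets: List[ARAsset] = field(default_factory=list)

scene = ARScene(
    marker_image="poster.png",
    assets=[ARAsset("mascot", "model", "https://example.com/mascot.glb")],
)

# Publishing the scene is just writing JSON; a running web client can re-fetch
# it at any time, which is one way "no build or packaging step" could be realized.
print(json.dumps(asdict(scene), indent=2))
```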
Drone wars 3D: an interactive simulator for drone swarms (Graduate School, 2023)
Karadeniz, Gökhan; İnce, Gökhan; 779468; Oyun ve Etkileşim Teknolojileri Ana Bilim Dalı

The utilization of drone swarms, characterized by their formidable destructive capabilities and broad range of potential applications, has created a pressing need for effective countermeasures to mitigate the threats they pose. The relatively low cost of drones has further amplified the need for robust defense strategies. Although consumer-type drones can be neutralized by electronic countermeasures or microwave weapons, military-grade drones are built to withstand such attacks, leaving physical destruction as the best option. Within the scope of this study, a simulation environment was developed in the Unity3D game engine in order to measure the effectiveness of defense systems against drone swarms and to find effective defense tactics and swarm formations. Unless otherwise stated, swarms with the same default values were used in the tests. The actor types in the simulation were 1) drone, 2) drone swarm, 3) machine gun, 4) laser weapon, 5) anti-aircraft gun, and 6) air defense missile launcher. In the tests of defensive drone swarms against offensive drone swarms, two enemy swarms, identical in everything but their formations, were created. All possible formation combinations were then applied, and the number of attack drones destroyed was recorded. The results show that defending drone swarms have an average success rate of over 90% in destroying enemy drone swarms. However, to achieve this success rate, the defending swarm must be located on the attacking swarm's target approach path. Another noteworthy finding was that each formation was most effective in defending against an enemy swarm using the same formation as the attacking party. A strategy of copying the offensive swarm's shape when possible therefore seems viable, given that identical swarms are more effective against each other. This finding was also supported by the results of the drone spacing tests: when two swarms of the same size and formation were used against each other with different drone spacings, the most effective defense was obtained when the drone spacings of the two swarms were equal.
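The formation-versus-formation sweep described above amounts to a loop over every attacker/defender formation pair with the kill ratio averaged over repeated runs. The snippet below is only an outline of that experimental protocol; the formation names and the run_engagement() placeholder are assumptions standing in for the actual Unity3D simulation runs.

```python
# Minimal sketch of the formation sweep; run_engagement() is a toy stand-in
# for one Unity3D battle, and the formation labels are assumed, not from the thesis.
import itertools
import random

FORMATIONS = ["line", "column", "wedge", "sphere"]   # assumed labels

def run_engagement(attacker_formation, defender_formation, drones_per_swarm=10):
    """Placeholder for one simulated engagement; returns attackers destroyed."""
    bonus = 2 if attacker_formation == defender_formation else 0   # toy model
    return min(drones_per_swarm, random.randint(6, 9) + bonus)

results = {}
for atk, dfn in itertools.product(FORMATIONS, repeat=2):
    destroyed = [run_engagement(atk, dfn) for _ in range(100)]
    results[(atk, dfn)] = sum(destroyed) / (100 * 10)   # mean kill ratio

for (atk, dfn), rate in sorted(results.items(), key=lambda kv: -kv[1]):
    print(f"attacker={atk:6s} defender={dfn:6s} destroyed={rate:.0%}")
```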
Generative models for game character generation (Graduate School, 2023-06-13)
Emekligil Aydın, Ferda Gül; Öksüz, İlkay; 529191006; Game and Interaction Technologies

Generating visual content and character designs for games is generally a time-consuming process carried out by designers. The design process can be both costly and time-consuming for small businesses and independent developers. Working in this field requires a detailed understanding of visual aesthetics, creativity, and technical skills. The characters and visual content used in games must be compatible with the game's story, atmosphere, and gameplay, and designers and artists work to create original visual content and characters that align with the game's objectives and target audience while considering these requirements. For these reasons, content creation for games is a challenging process, and automating the design process helps to save time and budget. Many game companies and developers use procedural methods to automate the design process. Procedural content generation involves automatically generating game content using algorithms and rules. This approach offers significant advantages in generating repetitive content and enables developers and designers to create content faster. However, the visual content generated by these algorithms may be limited in terms of diversity. With the advancement of technology and the progress of deep learning, approaches incorporating deep learning models have started to be used instead of procedural methods. Examples of such methods include Generative Adversarial Networks (GANs) and Latent Diffusion models. In the studies presented in this thesis, the transfer learning method has additionally been used in conjunction with generative models, and its success has been evaluated against these methods.

Machine learning typically requires a large amount of labeled data. However, a large labeled dataset is not always available, and obtaining and labeling data can be costly and time-consuming. The transfer learning method has been proposed to reduce or eliminate this requirement. When applying transfer learning, a pre-trained machine learning model is selected that has been trained on a significant amount of labeled data. This model is a deep neural network that has learned general features from that data; for example, a pre-trained classifier trained on a popular dataset like ImageNet can be used. The initial layers of the selected model contain useful information about learned general features, while the top layers are not applicable to the target task. Therefore, some or all of the layers of the pre-trained model can be frozen, and only specific layers (usually the classification layers) are retrained on the target dataset. This way, a much more successful model that is tailored to the target task's dataset is obtained. Transfer learning can be used in situations where the dataset is small, as is the case in this thesis. Since a pre-trained model is used, the training process is much faster than training from scratch. Pre-trained models have learned generalized features from datasets that contain a wide variety and a large number of examples, which gives them more generalizability and makes them applicable to a wider range of domains. In summary, the transfer learning method involves transferring knowledge gained from previous experience to a new task.
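The layer-freezing workflow just described might look roughly like the following. This is only a minimal sketch using a torchvision classifier as an illustration; the generative models actually trained in the thesis (e.g. WGAN-GP, BigGAN, SNGAN) are not reproduced here, and the class labels are assumptions.

```python
# Minimal transfer-learning sketch: freeze a pre-trained backbone, retrain the head.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the early layers, which hold general visual features learned on ImageNet.
for param in model.parameters():
    param.requires_grad = False

# Replace and retrain only the task-specific head on the small target dataset.
num_classes = 2                      # e.g. "RPG character" vs "DND character" (assumed)
model.fc = nn.Linear(model.fc.in_features, num_classes)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```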
Transfer learning thus provides benefits in terms of speed, reduced need for labeled data, and improved model performance; pre-trained models trained on diverse and large datasets are used to apply this method effectively.

Generative Adversarial Networks (GANs) can generate highly successful results for image generation and are also used in game character generation. GANs are composed of two distinct deep learning models: the Generator and the Discriminator. The primary role of the Generator network is to generate synthetic images, while the Discriminator network determines whether the generated images are real or fake. These two neural networks compete with each other during the training phase: the Generator tries to deceive the Discriminator by generating images that are close to reality, while the Discriminator tries to accurately identify the images produced by the Generator. The feedback obtained at each iteration is used for training.

The Latent Diffusion modeling method is a deep learning approach that involves generating synthetic data, denoising, and noise estimation, and it is based on capturing the temporal evolution of data points. It learns a latent-space distribution of the training data and, through this distribution, iteratively performs noise estimation and noise removal, allowing synthetic, high-resolution, and impressive images to be generated. The U-Net architecture is used as the denoising model, and word embeddings are fed as input to all layers of the U-Net. The complexity of the U-Net model increases with the size of the input image, necessitating dimensionality reduction; Variational Autoencoders (VAEs) are used to reduce the dimensionality of the input image. By iteratively generating the latent vector, high-resolution images can be obtained. Latent Diffusion models can capture more complex data distributions and achieve more realistic and successful results that align with the real world. However, compared to other generative models, the training process of Latent Diffusion is much more time-consuming and challenging, and it also has higher computational costs, so its implementation can be more demanding and resource-intensive.

In this thesis, visual content generation for games is addressed in two different studies. In the first study, six different GAN models were trained on two image datasets of RPG and DND characters. In 3 out of 18 experiments, transfer learning methods were used due to the small size of the datasets. The Frechet Inception Distance (FID) metric was used to compare the models. The results showed that SNGAN was the most successful on both datasets. Additionally, it was concluded that the transfer learning methods (WGAN-GP, BigGAN) outperformed the training-from-scratch approach. In the second study presented in the thesis, a different dataset containing images from two domains, animals and fruits, was used, and StyleGAN and Latent Diffusion methods were employed. In the training of StyleGAN, eight types of fruit images and three types of animal images were used as conditioning inputs, and conditional learning was applied. In the Latent Diffusion method, the datasets were labeled with descriptive sentences about the images and fed into the model. FID scores were calculated for the generated outputs, and these outputs were turned into a web game played by 164 players.
The results showed that, according to the FID score, the Latent Diffusion model performed well on the animal dataset, while StyleGAN performed well on the fruit dataset. In the overall evaluation, the Latent Diffusion method yielded better results. According to the scores obtained from the players, the Latent Diffusion method also achieved better overall rankings, which indicates consistency between the results obtained from the FID score and the player evaluation. Both studies demonstrate the feasibility of generating game characters or synthetic artistic visuals using deep neural networks and have produced consistent and continuous results.
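For reference, the adversarial training scheme described above, in which a generator and a discriminator are updated against each other, can be sketched as follows. The tiny fully-connected networks and hyperparameters are placeholders for illustration only; they are not the SNGAN, StyleGAN, or Latent Diffusion configurations evaluated in the thesis.

```python
# Toy GAN training step illustrating the Generator/Discriminator competition;
# architectures and hyperparameters are stand-ins, not the thesis models.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28

generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                          nn.Linear(256, image_dim), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
                              nn.Linear(256, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_images):                       # shape: (batch, image_dim)
    batch = real_images.size(0)
    noise = torch.randn(batch, latent_dim)

    # Discriminator update: label real samples 1 and generated samples 0.
    d_opt.zero_grad()
    fake = generator(noise).detach()
    d_loss = bce(discriminator(real_images), torch.ones(batch, 1)) + \
             bce(discriminator(fake), torch.zeros(batch, 1))
    d_loss.backward()
    d_opt.step()

    # Generator update: try to make the discriminator call its output real.
    g_opt.zero_grad()
    g_loss = bce(discriminator(generator(noise)), torch.ones(batch, 1))
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```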
Exploring the role of game mechanics in generating spatial compositions: Snaris case (Graduate School, 2023-08-02)
Özvatan, Ozan Can; Alaçam, Sema; 529201018; Game and Interaction Technologies

As the world becomes increasingly digitized, the study of virtual environments and of user agency within them has emerged as an important field of research. Video games, in particular, serve as an influential subset of these digital spaces, presenting an array of dynamic, complex, and interactive worlds. These game spaces are experienced by users regularly and raise interesting questions about human interaction, cognition, and experience within virtual realities. In a game, the player acts as both the subject, engaging with the gaming system, and the object, receiving responses based on their input. Their actions and decisions are interwoven into a complex system of cause and effect, where each decision changes the state of the virtual environment and thereby influences their subsequent actions. This cyclical process organizes all the decisions made during the gaming session, and the game space emerges as a result. This research concerns user interaction with virtual spaces, exploring the impact of game mechanics, specifically risk and reward mechanics, on the player's agency in shaping the virtual environment, thereby establishing a discussion on exploration at the intersection of game studies, architecture, and human-computer interaction. More specifically, the goal of this research is to answer the following questions: How does the involvement of risk and reward mechanics impact the spatial outcomes generated by participants using a 3D puzzle game? In which ways do the risk and reward mechanics affect the player's role in shaping virtual spaces? What is the impact of more challenging situations on the spatial composition in a virtual environment? How do software limitations influence the generation of spatial elements?

To investigate the impact of risk and reward mechanics on spatial outcomes, we designed and developed a two-state digital application called Snaris, a 3D puzzle game in which various mechanics can be isolated in two distinct modes. We compared the spatial compositions created under these conditions by comparing scenes created in Play Mode, which harbors the risk and reward mechanics, with scenes created in Build Mode, which lacks them. We proposed a method for qualitatively assessing the spatial features of scenes created in Snaris; this method employs 12 criteria for evaluating the unique 3D spatial compositions generated by the application. We collected data by conducting playtests with 20 participants, each submitting a scene from both modes of the application. We observed that where the risk and reward mechanics exist, participants were usually more preoccupied with simply being able to shape the spatial outcome deliberately, and thus created more random and incoherent structures. On the other hand, the absence of risk and reward mechanics and a clear, unobstructed game environment allowed players to engage in unconventional actions and to create familiar topologies, in a setting that encouraged exploration and experimentation throughout the session.
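As a rough illustration of how the 12-criterion comparison between the two modes could be tabulated, the snippet below aggregates per-criterion judgements per mode. The criterion names, the 0/1 scoring, and the toy data are assumptions; the actual assessment in the thesis is qualitative and is not reproduced here.

```python
# Sketch only: tally assumed 0/1 criterion judgements per mode (Play vs Build).
from statistics import mean

CRITERIA = [f"criterion_{i}" for i in range(1, 13)]     # placeholder names

# Each submission: which mode it came from plus a judgement per criterion (toy data).
submissions = [
    {"mode": "play",  "scores": {c: 1 if i % 3 else 0 for i, c in enumerate(CRITERIA)}},
    {"mode": "build", "scores": {c: 1 if i % 2 else 0 for i, c in enumerate(CRITERIA)}},
]

for mode in ("play", "build"):
    scenes = [s["scores"] for s in submissions if s["mode"] == mode]
    per_criterion = {c: mean(scene[c] for scene in scenes) for c in CRITERIA}
    print(mode, {c: round(v, 2) for c, v in per_criterion.items()})
```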
Generalized game-testing using reinforcement learning (Graduate School, 2023-10-17)
Önal, Uğur; Sarıel Uzer, Sanem; Tinç, Kutay Hüseyin; 529201019; Game and Interaction Technologies

The gaming industry has experienced significant growth and evolution, becoming a prominent sector in entertainment and technology. This growth has led to increased consumer expectations regarding the quality and complexity of games, prompting developers to explore innovative solutions to meet these demands. One of the pivotal processes in meeting these demands is game testing. Game testing is an incredibly resource-intensive procedure, demanding comprehensive evaluation of all aspects of a game through actual gameplay. To address this challenge and alleviate the associated workload, this thesis proposes an innovative approach to game testing. The method integrates a generic environment framework with reinforcement learning (RL) models, facilitating seamless communication between any game and an RL model under specific conditions. The framework optimizes the game testing process by capitalizing on the efforts of game developers: it relies on developers to compile and transmit essential information, such as state and reward data, to the generic environment. This data is then processed and harnessed within the RL model, allowing it to learn and play the game in accordance with the developers' intentions while simultaneously generating valuable data for game testing purposes. The method also capitalizes on the fact that game-playing AI agents try out various actions in different states as they learn to play. Game testing entails the creation of diverse scenarios by taking different actions in various in-game situations; these scenarios are observed, and, when necessary, actions are taken in the game development process based on these observations. Therefore, since the way game-playing agents experience various scenarios closely resembles game testing, not only the actions performed by agents during testing but also their behaviors during training can be utilized as part of the game-testing content.

The experimental phase of the study involved the deployment of six distinct builds of the same game, each serving as a means to test the functionalities of the generic environment and to observe their impact on the behavioral patterns of RL models. These builds were crafted to uncover various aspects of RL model behavior and the diverse methods of representing game states. They can be summarized as follows:

- Basic side-scroller: This build's purpose is to test the seamless communication between the generic environment framework, the game build, and the RL model. It features a simple reward system designed to guide the player to a target point and an action space consisting of three actions, and it employs a state image as the state information.
- Exploration-oriented side-scroller: Designed to encourage the player to explore the entire game area, this build incorporates a comprehensive reward system. It has an action space comprising four actions and utilizes a state image as the state information.
- Exploration-oriented side-scroller with colored textures: A variant of the exploration-oriented side-scroller build, with the only alteration being the modification of game textures. Its purpose is to investigate the impact of texture changes on the training of RL models.
- Goal-oriented side-scroller: Sharing the same action space and state information as the exploration-oriented side-scroller build, this build primarily aims to observe the effects of reward system modifications. It employs a detailed reward system to guide the player toward specific objectives and a goal.
- Exploration-oriented side-scroller using no image: With an action space and reward system structure identical to the exploration-oriented side-scroller build, this build examines how using a state array as the state information influences the RL model's behavior.
- Exploration-oriented side-scroller using image and array: Similar to the exploration-oriented side-scroller build in action space and reward system structure, this build aims to maximize the impact on the RL model's behavior by employing both a detailed state array and a state image as state information.
- Arcade: This build demonstrates how the generic environment framework performs in a completely different game. It has both exploratory and goal-oriented structures, features a moderately complex reward system and an action space consisting of five actions, and uses both arrays and images as state information.

The investigation into the communication system between the RL agent and the game build yielded valuable insights. It became evident that the generic environment framework played a crucial role in achieving positive and efficient outcomes. Nevertheless, the research also pinpointed areas ripe for enhancement, particularly the reduction of the workload on game developers and the resolution of issues stemming from external factors. The logging system integrated into the generic environment has proven to be a valuable asset for game testing: it leverages the total reward accrued in each episode to efficiently guide the selection of episodes meriting closer scrutiny, and the supplementary information it provides greatly enhances our comprehension of the actions taken in various gaming scenarios. The proposed approach holds significant potential for game testing. It enables AI agents to adjust their behaviors, by utilizing dynamic rewards and extensive state information from arrays and images, to meet specific criteria. Moreover, successful game-testing outcomes were consistently observed throughout both the training and testing phases, where agents adeptly exploited game vulnerabilities and uncovered unforeseen features and bugs. Despite these successful outcomes, the implementation involving both a state image and a state array exhibited a notable reduction in training speed and a substantial system load attributable to hardware constraints during training. When evaluated against the objectives of the thesis, it can be concluded that, overall, the proposed method has achieved successful outcomes in the game testing process and holds promise for future development. Further efforts to enhance system performance may yield positive results for the broader applicability of game testing.
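To make the idea of a "generic environment" concrete, the sketch below shows one plausible shape for a bridge that receives state/reward messages from a running game build, sends actions back, and logs per-episode reward totals for test review. The JSON message format, socket transport, and method names are illustrative assumptions, not the actual protocol of the framework developed in the thesis.

```python
# Assumption-laden sketch of a generic game <-> RL bridge with per-episode
# reward logging; not the thesis framework's actual protocol or API.
import json
import socket

class GenericGameEnv:
    """Exchanges newline-delimited JSON messages with a running game build."""

    def __init__(self, host="127.0.0.1", port=9000):
        self.sock = socket.create_connection((host, port))
        self.reader = self.sock.makefile("r")
        self.episode_rewards = []        # one total per finished episode
        self._running_total = 0.0

    def step(self, action: int):
        # Send the chosen action; the game replies with state, reward, and done flag.
        self.sock.sendall((json.dumps({"action": action}) + "\n").encode())
        msg = json.loads(self.reader.readline())
        self._running_total += msg["reward"]
        if msg["done"]:
            # Log the episode total so testers can pick out runs worth replaying.
            self.episode_rewards.append(self._running_total)
            self._running_total = 0.0
        return msg["state"], msg["reward"], msg["done"]

    def flagged_episodes(self, threshold: float):
        """Indices of episodes whose total reward suggests unusual behaviour."""
        return [i for i, total in enumerate(self.episode_rewards) if total >= threshold]
```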