LEE - Game and Interaction Technologies - Master's
Browsing LEE - Game and Interaction Technologies - Master's by subject "computer games"
-
Item: Dynamic difficulty adjustment by changing enemy behavior using reinforcement learning (Graduate School, 2024-07-25) Akşahin, Burak Furkan ; Sarıel, Sanem ; 529201003 ; Game and Interaction Technologies

Dynamic difficulty adjustment (DDA) systems are essential in modern gaming to cater to the diverse skill levels of players. These systems ensure that games remain challenging yet enjoyable by automatically adjusting the difficulty based on the player's performance. Traditional fixed difficulty settings often fail to provide an optimal experience for all players, leading to frustration for less skilled players and boredom for more skilled ones. Implementing DDA systems aims to enhance player engagement and satisfaction by maintaining an appropriate level of challenge throughout the game.

Various techniques have been explored to implement DDA systems. One common approach is dynamic scripting, which adjusts the game's rules and parameters in real time based on the player's actions, allowing for a more responsive and adaptable gaming experience. Other methods include player modeling, which uses data from the player's performance to predict their future behavior and adjust the difficulty accordingly, and machine learning algorithms that continuously learn and adapt to the player's skill level over time.

Reinforcement learning (RL) has emerged as a powerful tool for developing DDA systems. In this approach, Artificial Intelligence (AI) agents are trained to play the game and learn optimal strategies to maximize their rewards. These agents can then dynamically adjust the game's difficulty by modifying the behavior of non-player characters (NPCs) or the game's mechanics based on the player's performance. This method allows for a more nuanced and effective DDA system that adapts to the player's skill level in real time.

In this thesis, a DDA framework was created for use in various gaming environments. Three different game scenarios were developed to demonstrate its effectiveness: a basic shooter, a basic action game, and a complex action game. Each of these scenarios provided a unique set of challenges and complexities, allowing for a thorough evaluation of the framework's adaptability and performance. The developed framework is capable of analyzing the performance of AI agents against human players and suggesting new difficulty levels accordingly. All parameters for these difficulty adjustments can be modified in the editor, providing game developers and designers with the flexibility to tweak the system to suit their specific needs. This capability ensures that the DDA system remains effective and relevant across different games and player demographics.

The BrainBox plugin, developed for Unreal Engine, is a versatile tool designed to facilitate the creation of environments for DDA systems. It communicates with a Python backend, managing the complex interplay between game environments and AI training processes. The plugin handles the creation and management of game environments, executes player and agent actions, calculates rewards, and implements difficulty change procedures. This integration ensures that game developers can easily implement and tweak DDA systems, enhancing the gaming experience by maintaining an optimal level of challenge. A Python backend was created for training and evaluating the RL models.
This backend communicates with the game environments created in Unreal Engine using the Transmission Control Protocol (TCP), facilitating seamless integration between the training process and the game. The backend is responsible for managing the training data, running simulations, and updating the models based on the results, ensuring a robust and efficient training process.

In the complex action game scenario, models were trained and evaluated to determine their effectiveness. The models were ordered by their median rewards across 20 episodes and mapped into difficulty levels. This process allowed for a detailed analysis of the models' performance and provided insights into their learning capabilities and adaptability to different levels of game complexity.

A test case was conducted with 20 participants of varying game experience and skill levels, in which the DDA system was benchmarked with multiple testers. All sessions were logged, and a comprehensive analysis was performed on the collected data. This analysis provided valuable feedback on the system's performance and effectiveness in real-world scenarios, highlighting areas for improvement and potential future developments.

In conclusion, the DDA system demonstrated a robust capability in tailoring game difficulty to individual player needs. Its ability to adapt in real time, guided by both player performance and feedback, highlights its potential to enhance gaming experiences significantly. The findings suggest that the DDA system not only improves player engagement and satisfaction but also offers a scalable solution for balancing difficulty in a wide array of games. Future implementations could benefit from refining this system to further optimize player retention and enjoyment, ensuring the game remains accessible and rewarding for all players regardless of their initial skill level.
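As a concrete illustration of the difficulty-mapping step described above, the sketch below ranks trained agent checkpoints by their median evaluation reward and assigns each a difficulty level. It is a minimal Python sketch under the assumption that a higher median reward corresponds to a stronger, and therefore harder, opponent; the function and checkpoint names are illustrative and do not come from the thesis or the BrainBox plugin.

```python
import statistics

def rank_models_by_difficulty(episode_rewards: dict[str, list[float]]) -> dict[int, str]:
    """Order trained agent checkpoints by median episode reward and map them to difficulty levels.

    episode_rewards maps a checkpoint name to its rewards over evaluation episodes
    (20 episodes in the thesis experiments). A higher median reward is assumed to
    mean a stronger agent, i.e. a harder opponent for the human player.
    """
    medians = {name: statistics.median(rewards) for name, rewards in episode_rewards.items()}
    ordered = sorted(medians, key=medians.get)  # weakest agent first
    # Difficulty level 1 = easiest opponent, N = hardest.
    return {level: name for level, name in enumerate(ordered, start=1)}

# Example usage with made-up reward logs for three checkpoints.
rewards = {
    "agent_a": [12.0, 15.5, 11.2, 14.8],
    "agent_b": [30.1, 28.7, 33.4, 29.9],
    "agent_c": [21.3, 19.8, 22.5, 20.1],
}
print(rank_models_by_difficulty(rewards))  # {1: 'agent_a', 2: 'agent_c', 3: 'agent_b'}
```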
-
Item: Generalized game-testing using reinforcement learning (Graduate School, 2023-10-17) Önal, Uğur ; Sarıel Uzer, Sanem ; Tinç, Kutay Hüseyin ; 529201019 ; Game and Interaction Technologies

The gaming industry has experienced significant growth and evolution, becoming a prominent sector in entertainment and technology. This growth has led to increased consumer expectations regarding the quality and complexity of games, prompting developers to explore innovative solutions to meet these demands. One of the pivotal approaches adopted by game developers is the game testing process. Game testing is an incredibly resource-intensive procedure, demanding comprehensive evaluation of all aspects of a game through actual gameplay. To address this challenge and alleviate the associated workload, this thesis proposes an innovative approach to game testing. This method integrates a generic environment framework with reinforcement learning (RL) models, facilitating seamless communication between any game and an RL model under specific conditions.

The framework optimizes the game testing process by capitalizing on the efforts of game developers. It relies on developers to compile and transmit essential information, such as state and reward data, to the generic environment. Subsequently, this data is processed and harnessed within the RL model, allowing it to learn and play the game in accordance with developers' intentions, while simultaneously generating valuable data for game testing purposes. This method also capitalizes on the fact that game-playing AI agents try out various actions in different states as they learn to play. Game testing entails the creation of diverse scenarios by implementing different actions in various in-game situations. These scenarios are observed, and, when necessary, actions are taken in the game development process based on these observations. Therefore, because the situations that game-playing agents experience closely resemble game testing, we can utilize not only the actions performed by agents during testing but also their behaviors during training as part of the game-testing content.

The experimental phase of the study involved the deployment of six distinct builds of the same game, each serving as a means to test the functionalities of the generic environment and observe their impact on the behavioral patterns of RL models. These builds were crafted to uncover various aspects of RL model behavior and the diverse methods of representing game states. The builds can be summarized as follows:

- Basic side-scroller: This build's purpose is to test the seamless communication between the generic environment framework, the game build, and the RL model. It features a simple reward system designed to guide the player to a target point, an action space consisting of three actions, and employs a state image as the state information.

- Exploration-oriented side-scroller: Designed to encourage the player to explore the entire game area, this build incorporates a comprehensive reward system. It has an action space comprising four actions and utilizes a state image as the state information.

- Exploration-oriented side-scroller with colored textures: This build serves as a variant of the exploration-oriented side-scroller build, with the only alteration being the modification of game textures. Its purpose is to investigate the impact of texture changes on the training of RL models.
- Goal-oriented side-scroller: Sharing the same action space and state information as the exploration-oriented side-scroller build, this build primarily aims to observe the effects of reward system modifications. It employs a detailed reward system to guide the player toward specific objectives and a goal.

- Exploration-oriented side-scroller using no image: With an identical action space and reward system structure as the exploration-oriented side-scroller build, this build examines how using a state array as the state information influences the RL model's behavior.

- Exploration-oriented side-scroller using image and array: Similar to the exploration-oriented side-scroller build in action space and reward system structure, this build aims to maximize the state information's impact on the RL model's behavior by employing both a detailed state array and a state image.

- Arcade: This build demonstrates how the generic environment framework performs in a completely different game. It has both exploratory and goal-oriented structures, features a moderately complex reward system and an action space consisting of five actions, and uses both arrays and images as state information.

The investigation into the communication system between the RL agent and the game build yielded valuable insights. It became evident that the generic environment framework played a crucial role in achieving positive and efficient outcomes. Nevertheless, the research also pinpointed areas ripe for enhancement, particularly concerning the reduction of the workload on game developers and the resolution of issues stemming from external factors.

The logging system integrated into the generic environment has proven to be a valuable asset for game testing. It leverages the total reward accrued in each episode, efficiently guiding the selection of episodes meriting closer scrutiny. Furthermore, the supplementary information provided by this system offers exceptionally insightful data, greatly enhancing our comprehension of the actions taken in various gaming scenarios.

Our proposed approach holds significant potential for game testing. It enables AI agents to adjust their behaviors by utilizing dynamic rewards and extensive state information from arrays and images to meet specific criteria. Moreover, successful game-testing outcomes have been consistently observed throughout both the training and testing phases, where agents adeptly exploit game vulnerabilities and uncover unforeseen features and bugs. Despite these successful outcomes, the implementation involving both a state image and a state array exhibited a notable reduction in training speed and placed a substantial load on the system, attributable to hardware constraints during the training process.

When evaluated against the objectives of the thesis, it can be concluded that, overall, the proposed method has achieved successful outcomes in the game testing process and holds promise for future development. Further efforts aimed at enhancing system performance may yield positive results concerning the broader applicability of game testing.
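The episode-selection idea behind the logging system can be illustrated with a short sketch. The code below is an assumption-laden Python illustration, not the thesis's framework: it accumulates the total reward of each episode and flags episodes whose total deviates strongly from the mean (a simple z-score rule chosen here for illustration) as candidates for closer review, for example episodes in which an agent stumbles onto a reward-farming exploit.

```python
import random

class EpisodeLogger:
    """Accumulates per-episode rewards and flags outlier episodes for review.

    Mirrors the idea of using total episode reward to pick episodes worth
    closer scrutiny; the flagging rule (z-score threshold) is an assumption,
    not the criterion used in the thesis.
    """

    def __init__(self, flag_threshold: float = 2.0):
        self.flag_threshold = flag_threshold
        self.episode_totals: list[float] = []

    def log_episode(self, rewards: list[float]) -> None:
        self.episode_totals.append(sum(rewards))

    def episodes_to_review(self) -> list[int]:
        n = len(self.episode_totals)
        if n < 2:
            return []
        mean = sum(self.episode_totals) / n
        var = sum((t - mean) ** 2 for t in self.episode_totals) / n
        std = var ** 0.5 or 1.0  # avoid division by zero when all totals are equal
        return [i for i, total in enumerate(self.episode_totals)
                if abs(total - mean) / std > self.flag_threshold]

# Example: 50 ordinary episodes plus one where the agent finds an unusually rewarding exploit.
logger = EpisodeLogger()
for _ in range(50):
    logger.log_episode([random.uniform(0.0, 1.0) for _ in range(100)])
logger.log_episode([5.0] * 100)  # abnormally high total reward
print(logger.episodes_to_review())  # most likely [50]
```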
-
Item: Generative models for game character generation (Graduate School, 2023-06-13) Emekligil Aydın, Ferda Gül ; Öksüz, İlkay ; 529191006 ; Game and Interaction Technologies

Generating visual content and character designs for games is generally a time-consuming process carried out by designers. The design process can be both costly and slow for small businesses and independent developers, and working in this field requires a detailed understanding of visual aesthetics, creativity, and technical skills. It is important for the characters and visual content used in games to be compatible with the game's story, atmosphere, and gameplay. Designers and artists work to create original visual content and characters that align with the game's objectives and target audience while considering these requirements. For these reasons, content creation for games is a challenging process.

Automating the design process helps to save time and budget. Many game companies and developers use procedural methods to automate it. Procedural content generation involves automatically generating game content using algorithms and rules. This approach offers significant advantages in generating repetitive content and enables developers and designers to create content faster; however, the visual content generated by these algorithms can be limited in terms of diversity. With the advancement of technology and the progress of deep learning, approaches incorporating deep learning models have started to be used instead of procedural methods. Examples of such methods include Generative Adversarial Networks (GANs) and Latent Diffusion models. In addition, in the studies presented in this thesis, transfer learning has been used in conjunction with generative models, and its success has been evaluated in comparison with these generative models trained from scratch.

To perform machine learning, a large amount of labeled data is typically required. However, a large labeled dataset is not always available, and obtaining and labeling data can be costly and time-consuming. Transfer learning has been proposed to reduce or eliminate this requirement. When applying transfer learning, a pre-trained machine learning model is selected that has been trained on a significant amount of labeled data. This model is a deep neural network that has learned general features from that data; for example, a classifier pre-trained on a popular dataset like ImageNet can be used. The initial layers of the selected model contain useful information about learned general features, while the top layers are not applicable to the target task. Therefore, some or all of the layers of the pre-trained model can be frozen, and only specific layers (usually the classification layers) are retrained on the target dataset. In this way, a much more successful model tailored to the target task's dataset is obtained. Transfer learning can be used in situations where the dataset is small, as in the studies presented here. Since a pre-trained model is used, training is much faster than training from scratch. Pre-trained models have learned generalized features from datasets that contain a wide variety and a large number of examples, which gives them more generalizability and makes them applicable to a wider range of domains. In summary, transfer learning transfers knowledge gained from previous experiences to a new task.
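The freeze-and-retrain recipe described above can be sketched in a few lines. The example below is a generic illustration using a classifier pre-trained on ImageNet, in line with the abstract's own example; the class count and hyperparameters are illustrative, and this is not the thesis's actual setup, where the same idea is applied to GAN training.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a backbone pre-trained on ImageNet (torchvision >= 0.13 weights API).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained layers so their general features are kept as-is.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the classification head with one sized for the target task,
# e.g. distinguishing a handful of character classes in a small dataset.
num_classes = 4  # illustrative value, not from the thesis
backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)

# Only the new head's parameters are updated during fine-tuning.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))
loss = criterion(backbone(images), labels)
loss.backward()
optimizer.step()
```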
Transfer learning provides benefits in terms of speed, reduced need for labeled data, and improved model performance, and pre-trained models trained on diverse and large datasets are used to apply it effectively.

Generative Adversarial Networks (GANs) can generate highly successful results for image generation and are also used in game character generation. GANs are composed of two distinct deep learning models: the Generator and the Discriminator. The primary role of the Generator network is to generate synthetic images, while the Discriminator network determines whether the generated images are real or fake. These two neural networks compete with each other during the training phase: the Generator tries to deceive the Discriminator by generating images that are close to reality, while the Discriminator tries to accurately identify the images generated by the Generator. The feedback obtained at each iteration is used for training.

The Latent Diffusion modeling method is a deep learning approach that involves generating synthetic data, denoising, and noise estimation, and it is based on capturing the temporal evolution of data points. It learns a latent-space distribution of the training data, and through this distribution it iteratively performs noise estimation and noise removal, allowing for the generation of synthetic, high-resolution, and impressive images. The U-Net architecture is used as the denoising model, and text conditioning is provided through word embeddings fed as input to the layers of the U-Net. The complexity of the U-Net model increases with the size of the input image, necessitating dimensionality reduction; Variational Autoencoders (VAEs) are used to reduce the dimensionality of the input image. By iteratively generating the latent vector, high-resolution images can be obtained. Latent Diffusion models can capture more complex data distributions and achieve more realistic results that align with the real world. However, compared to other generative models, the training process of Latent Diffusion is much more time-consuming and challenging, and it also has higher computational costs, so its implementation can be more demanding and resource-intensive.

In this thesis, visual content generation for games is addressed in two studies. In the first study, six different GAN models were trained using image datasets of RPG and DND characters. In 3 out of 18 experiments, transfer learning methods were used due to the small size of the datasets. The Fréchet Inception Distance (FID) metric was used to compare the models. The results showed that SNGAN was the most successful on both datasets. Additionally, it was concluded that the transfer learning methods (WGAN-GP, BigGAN) outperformed the training-from-scratch approach.

In the second study presented in the thesis, a different dataset containing images of animals and fruits was used, and StyleGAN and Latent Diffusion methods were employed. In the training of StyleGAN, eight types of fruit images and three types of animal images were used as conditioning inputs, and conditional learning was applied. In the Latent Diffusion method, the datasets were labeled with descriptive sentences about the images and fed into the model. FID scores were calculated for the generated outputs, and these outputs were turned into a web game and played by 164 players.
The results showed that the Latent Diffusion model performed better on the animal dataset according to the FID score, while StyleGAN performed better on the fruit dataset. In the overall evaluation, the Latent Diffusion method yielded better results. According to the scores obtained from the players, the Latent Diffusion method also achieved better overall rankings, indicating consistency between the results obtained from the FID score and the player evaluation. Both studies demonstrate the feasibility of generating game characters or synthetic artistic visuals using deep neural networks and have produced consistent results.
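Since both studies rely on the Fréchet Inception Distance for comparison, a short sketch of the metric may help. The code below is an illustrative NumPy/SciPy implementation of the standard FID formula given precomputed feature statistics; it is not the evaluation code used in the thesis, and the feature extraction step (running images through an Inception network) is omitted.

```python
import numpy as np
from scipy import linalg

def frechet_inception_distance(mu_r, cov_r, mu_g, cov_g):
    """Compute FID from the mean and covariance of Inception features.

    mu_r, cov_r describe real images; mu_g, cov_g describe generated images.
    FID = ||mu_r - mu_g||^2 + Tr(cov_r + cov_g - 2 * sqrtm(cov_r @ cov_g)).
    """
    diff = mu_r - mu_g
    covmean, _ = linalg.sqrtm(cov_r @ cov_g, disp=False)
    if np.iscomplexobj(covmean):  # numerical noise can leave tiny imaginary parts
        covmean = covmean.real
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))

# Example with random feature statistics (feature dimension kept small for illustration).
rng = np.random.default_rng(0)
feats_real = rng.normal(size=(500, 16))
feats_fake = rng.normal(loc=0.3, size=(500, 16))
fid = frechet_inception_distance(
    feats_real.mean(axis=0), np.cov(feats_real, rowvar=False),
    feats_fake.mean(axis=0), np.cov(feats_fake, rowvar=False),
)
print(f"FID: {fid:.3f}")  # lower is better; identical distributions give values near zero
```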