LEE - Game and Interaction Technologies - Master's
Konu "artificial intelligence" ile LEE- Oyun ve Etkileşim Teknolojileri-Yüksek Lisans'a göz atma
-
Item: Dynamic difficulty adjustment by changing enemy behavior using reinforcement learning (Graduate School, 2024-07-25) Akşahin, Burak Furkan ; Sarıel, Sanem ; 529201003 ; Game and Interaction Technologies

Dynamic difficulty adjustment (DDA) systems are essential in modern gaming to cater to the diverse skill levels of players. These systems ensure that games remain challenging yet enjoyable by automatically adjusting the difficulty based on the player's performance. Traditional fixed difficulty settings often fail to provide an optimal experience for all players, leading to frustration for less skilled players and boredom for more skilled ones. Implementing DDA systems aims to enhance player engagement and satisfaction by maintaining an appropriate level of challenge throughout the game.

Various techniques have been explored to implement DDA systems. One common approach is dynamic scripting, which adjusts the game's rules and parameters in real time based on the player's actions, allowing for a more responsive and adaptable gaming experience. Other methods include player modeling, which uses data from the player's performance to predict their future behavior and adjust the difficulty accordingly, and machine learning algorithms that continuously learn and adapt to the player's skill level over time.

Reinforcement learning (RL) has emerged as a powerful tool for developing DDA systems. In this approach, artificial intelligence (AI) agents are trained to play the game and learn optimal strategies to maximize their rewards. These agents can then dynamically adjust the game's difficulty by modifying the behavior of non-player characters (NPCs) or the game's mechanics based on the player's performance. This allows for a more nuanced and effective DDA system that can adapt to the player's skill level in real time.

In this thesis, a DDA framework was created for use in various gaming environments. Three different game scenarios were developed to demonstrate its effectiveness: a basic shooter, a basic action game, and a complex action game. Each of these scenarios provided a unique set of challenges and complexities, allowing for a thorough evaluation of the framework's adaptability and performance.

The developed framework is capable of analyzing the performance of AI agents against human players and suggesting new difficulty levels accordingly. All parameters for these difficulty adjustments can be modified in the editor, giving game developers and designers the flexibility to tweak the system to suit their specific needs. This capability ensures that the DDA system remains effective and relevant across different games and player demographics.

The BrainBox plugin, developed for Unreal Engine, is a versatile tool designed to facilitate the creation of environments for DDA systems. It communicates seamlessly with a Python backend, managing the complex interplay between game environments and AI training processes. The plugin handles the creation and management of game environments, executes player and agent actions, calculates rewards, and implements difficulty change procedures. This integration ensures that game developers can easily implement and tweak DDA systems, enhancing the gaming experience by maintaining an optimal level of challenge.
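Because the framework's difficulty suggestions are driven by comparing a human player's performance against the reference performance of trained agents, a minimal sketch of what such a suggestion rule could look like is given below. The function name, score inputs, and the tolerance value are illustrative assumptions rather than the thesis's actual implementation; in the thesis, the corresponding thresholds are exposed as editable parameters in the editor.

```python
def suggest_difficulty(current_level, player_score, agent_score,
                       max_level=10, tolerance=0.15):
    """Propose the next difficulty level by comparing a human player's score
    against the reference score of the RL agent at the current level.

    All names and the tolerance value are illustrative assumptions; the
    thesis exposes such thresholds as editable parameters in the editor.
    """
    if agent_score <= 0:
        return current_level
    ratio = player_score / agent_score
    if ratio > 1 + tolerance and current_level < max_level:
        return current_level + 1   # player outperforms the reference agent: harder
    if ratio < 1 - tolerance and current_level > 1:
        return current_level - 1   # player falls behind the reference agent: easier
    return current_level
```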
A Python backend was created for training and evaluating the RL models. This backend communicates with the game environments created in Unreal Engine over the Transmission Control Protocol (TCP), facilitating seamless integration between the training process and the game. The backend is responsible for managing the training data, running simulations, and updating the models based on the results, ensuring a robust and efficient training process.

In the complex action game scenario, models were trained and evaluated to determine their effectiveness. The models were ordered by their median rewards across 20 episodes and mapped into difficulty levels. This process allowed for a detailed analysis of the models' performance and provided insights into their learning capabilities and adaptability to different levels of game complexity.

A test case was conducted with 20 participants of varying game experience and skill levels, in which the DDA system was benchmarked. All sessions were logged, and a comprehensive analysis was performed on the collected data. This analysis provided valuable feedback on the system's performance and effectiveness in real-world scenarios, highlighting areas for improvement and potential future developments.

In conclusion, the DDA system demonstrated a robust capability in tailoring game difficulty to individual player needs. Its ability to adapt in real time, guided by both player performance and feedback, highlights its potential to enhance gaming experiences significantly. The findings suggest that the DDA system not only improves player engagement and satisfaction but also offers a scalable solution for balancing difficulty in a wide array of games. Future implementations could benefit from refining this system to further optimize player retention and enjoyment, ensuring the game remains accessible and rewarding for all players regardless of their initial skill level.
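The evaluation described above orders trained models by their median reward across 20 evaluation episodes and maps them into difficulty levels. A minimal sketch of such a mapping is shown below; the even-spread rule and all names are assumptions for illustration, since the abstract does not specify the exact mapping procedure.

```python
import statistics


def map_models_to_difficulty(episode_rewards, num_levels):
    """Order trained models by median episode reward and spread them evenly
    across the requested number of difficulty levels.

    episode_rewards: dict mapping a model identifier to its list of rewards
    over the evaluation episodes (20 in the thesis). The even-spread rule is
    an assumption; the thesis states only that models were ordered by median
    reward and mapped into difficulty levels.
    """
    medians = {name: statistics.median(rewards)
               for name, rewards in episode_rewards.items()}
    ordered = sorted(medians, key=medians.get)  # weakest model first
    return {name: 1 + (i * num_levels) // len(ordered)
            for i, name in enumerate(ordered)}
```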
-
Item: Generalized game-testing using reinforcement learning (Graduate School, 2023-10-17) Önal, Uğur ; Sarıel Uzer, Sanem ; Tinç, Kutay Hüseyin ; 529201019 ; Game and Interaction Technologies

The gaming industry has experienced significant growth and evolution, becoming a prominent sector in entertainment and technology. This growth has led to increased consumer expectations regarding the quality and complexity of games, prompting developers to explore innovative solutions. One of the pivotal approaches adopted by game developers to meet these demands is the game testing process. Game testing is an incredibly resource-intensive procedure, demanding comprehensive evaluation of all aspects of a game through actual gameplay.

To address this challenge and alleviate the associated workload, this thesis proposes an innovative approach to game testing. The method integrates a generic environment framework with reinforcement learning (RL) models, facilitating seamless communication between any game and an RL model under specific conditions. The framework optimizes the game testing process by capitalizing on the efforts of game developers: it relies on developers to compile and transmit essential information, such as state and reward data, to the generic environment. This data is then processed and harnessed within the RL model, allowing it to learn and play the game in accordance with developers' intentions while simultaneously generating valuable data for game testing purposes.

The method also capitalizes on the fact that game-playing AI agents try out various actions in different states as they learn to play games. Game testing entails the creation of diverse scenarios by implementing different actions in various in-game situations. These scenarios are observed, and, when necessary, actions are taken in the game development process based on these observations. Therefore, because the way game-playing agents experience various scenarios closely resembles game testing, not only the actions performed by agents during testing but also their behaviors during training can be utilized as part of the game-testing content.
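The abstract does not publish the framework's communication protocol, but the idea of a generic environment that receives state and reward data from any game build and forwards the RL model's actions can be sketched as follows. This is a minimal illustration assuming a newline-delimited JSON message format over TCP; the field names, port, and reset/step commands are all assumptions.

```python
import json
import socket


class GenericGameEnv:
    """Minimal sketch of the generic-environment idea on the Python side.

    Assumes the game build sends newline-delimited JSON messages of the form
    {"state": [...], "reward": float, "done": bool} and accepts an action
    index in return. The actual wire format used by the thesis is not
    published, so every field name here is an assumption.
    """

    def __init__(self, host="127.0.0.1", port=9000):
        self.sock = socket.create_connection((host, port))
        self._buffer = b""

    def _recv(self):
        # Read until a full newline-terminated JSON message has arrived.
        while b"\n" not in self._buffer:
            chunk = self.sock.recv(4096)
            if not chunk:
                raise ConnectionError("game build closed the connection")
            self._buffer += chunk
        line, self._buffer = self._buffer.split(b"\n", 1)
        return json.loads(line)

    def _send(self, payload):
        self.sock.sendall(json.dumps(payload).encode("utf-8") + b"\n")

    def reset(self):
        self._send({"command": "reset"})
        return self._recv()["state"]

    def step(self, action):
        self._send({"command": "step", "action": int(action)})
        msg = self._recv()
        return msg["state"], msg["reward"], msg["done"], {}
```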
The experimental phase of the study involved the deployment of six distinct builds of the same game, together with a separate arcade game, each serving as a means to test the functionalities of the generic environment and observe their impact on the behavioral patterns of RL models. These builds were crafted to uncover various aspects of RL model behavior and the diverse methods of representing game states. They can be summarized as follows:

- Basic side-scroller: This build's purpose is to test the seamless communication between the generic environment framework, the game build, and the RL model. It features a simple reward system designed to guide the player to a target point, an action space consisting of three actions, and a state image as the state information.
- Exploration-oriented side-scroller: Designed to encourage the player to explore the entire game area, this build incorporates a comprehensive reward system. It has an action space comprising four actions and utilizes a state image as the state information.
- Exploration-oriented side-scroller with colored textures: This build serves as a variant of the exploration-oriented side-scroller build, with the only alteration being the modification of game textures. Its purpose is to investigate the impact of texture changes on the training of RL models.
- Goal-oriented side-scroller: Sharing the same action space and state information as the exploration-oriented side-scroller build, this build primarily aims to observe the effects of reward system modifications. It employs a detailed reward system to guide the player toward specific objectives and a goal.
- Exploration-oriented side-scroller using no image: With an action space and reward system structure identical to the exploration-oriented side-scroller build, this build examines how using a state array as the state information influences the RL model's behavior.
- Exploration-oriented side-scroller using image and array: Similar to the exploration-oriented side-scroller build in action space and reward system structure, this build aims to maximize the influence of state information on the RL model's behavior by employing both a detailed state array and a state image.
- Arcade: This build demonstrates how the generic environment framework performs in a completely different game. It has both exploratory and goal-oriented structures, features a moderately complex reward system and an action space consisting of five actions, and uses both arrays and images as state information.

The investigation into the communication system between the RL agent and the game build yielded valuable insights. It became evident that the generic environment framework played a crucial role in achieving positive and efficient outcomes. Nevertheless, the research also pinpointed areas ripe for enhancement, particularly the reduction of the workload on game developers and the resolution of issues stemming from external factors.

The logging system integrated into the generic environment has proven to be a valuable asset in game testing. It leverages the total reward accrued in each episode to guide the selection of episodes meriting closer scrutiny. Furthermore, the supplementary information provided by this system offers exceptionally insightful data, greatly enhancing comprehension of the actions taken in various gaming scenarios.

Our proposed approach holds significant potential in the realm of game testing. It enables AI agents to adjust their behaviors by utilizing dynamic rewards and extensive state information from arrays and images to meet specific criteria. Moreover, successful game-testing outcomes have been consistently observed throughout both the training and testing phases, where agents adeptly exploit game vulnerabilities and uncover unforeseen features and bugs. Despite these successful outcomes, the implementation involving both a state image and a state array exhibited a notable reduction in training speed and placed a substantial load on the system, attributable to hardware constraints during training.

When evaluated against the objectives of the thesis, it can be concluded that the proposed method has, overall, achieved successful outcomes in the game testing process and holds promise for future development. Further efforts aimed at enhancing system performance may yield positive results for the broader applicability of game testing.
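Since the logging system described above uses the total reward accrued in each episode to select episodes worth closer scrutiny, a minimal sketch of such a selection rule is given below. The log field name and the deviation threshold are illustrative assumptions; the thesis does not publish its log schema.

```python
import statistics


def flag_episodes_for_review(episode_logs, z_threshold=2.0):
    """Select episodes whose total reward deviates strongly from the mean.

    episode_logs: a list of per-episode log records, each assumed to carry a
    "total_reward" field (an illustrative field name). Episodes with unusually
    high or low total reward are the ones most likely to contain exploits,
    bugs, or unintended behavior, so they are flagged for manual review.
    """
    rewards = [log["total_reward"] for log in episode_logs]
    mean = statistics.fmean(rewards)
    spread = statistics.pstdev(rewards) or 1.0  # avoid division by zero
    return [log for log in episode_logs
            if abs(log["total_reward"] - mean) / spread >= z_threshold]
```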