Graduation Year

2024

Document Type

Thesis

Degree

M.S.C.S.

Degree Name

MS in Computer Science (M.S.C.S.)

Degree Granting Department

Engineering

Major Professor

John Licato, Ph.D.

Committee Member

Gene Kim, Ph.D.

Committee Member

Seungbae Kim, Ph.D.

Committee Member

Tempestt Neal, Ph.D.

Keywords

player modeling, action prediction, NLP, prompt engineering, GPT

Abstract

As artificial intelligence (AI) surpasses human performance in more tasks, interest in leveraging and collaborating with this technology for greater productivity continues to grow. However, the black-box nature of current AI can make it difficult to interpret and unsuitable for more complex tasks that require human intuition. This has led to the pursuit of AI systems that can model individual behavior. Chess offers an ideal environment for exploring this task due to its complexity, its structure, and the abundance of data containing unique examples of human decision-making. Given that a chess game can be fully represented as text, and considering the prominence of state-of-the-art natural language processing techniques in reasoning tasks, this work investigates how well large language models can predict human chess moves compared to chess-specific approaches. Additionally, I determine the extent to which prompt engineering affects the performance of large language models in predicting human chess moves. The results show that, while prompt engineering improves the accuracy of large language models, this technique falls short of matching the performance of chess-specific approaches.
