Elevating Chess Strategy with Fine-Tuned Language Models
Explore the docs »
View Demo
· Report Bug
· Request Feature
Table of Contents
StockLLM is an initiative focused on refining chess instruction and language modeling through the fine-tuning of a Large Language Model. The project comprises two pivotal components, each engineered to enhance and streamline the comprehension and dissemination of chess-related knowledge:
StockLLM is an ongoing effort to develop a highly specialized Large Language Model tailored explicitly to the domain of chess instruction. It aims to distill and encode intricate chess-related concepts, strategies, and instructional nuances into a language-based model.
Key Features of StockLLM (WIP):
- Fine-tuned Specialization: Through meticulous fine-tuning on curated chess instructional datasets, StockLLM seeks to encapsulate the inherent complexities and strategic depth of chess gameplay within its language-based representations.
- Advanced Contextual Understanding: StockLLM aims to grasp the subtleties of chess moves, positions, tactics, and strategic principles, fostering an enriched understanding for instructional purposes.
- Adaptive Learning Capabilities: The model aspires to adapt dynamically to diverse skill levels, providing tailored guidance, analyses, and instructional content catering to beginner, intermediate, and advanced players alike.
The ChessInstruct Dataset serves as the foundation for training and fine-tuning Large Language Models (LLMs) specifically in the realm of chess instruction. Derived from the laion/strategic_game_chess dataset, this meticulously curated dataset encompasses a wide array of annotated instructional chess content.
Features of the ChessInstruct Dataset:
- Rich and Diverse Content: Curated with a broad spectrum of instructional resources including annotated games, strategic analyses (forthcoming), and positional evaluations, the dataset facilitates comprehensive learning and modeling.
- Customizable Training Resource: The ChessInstruct Dataset allows for the tailored fine-tuning of any Language Model, enabling researchers and practitioners to adapt and optimize LLMs for chess-specific instructional contexts.
- Annotated Instructional Insights: Detailed annotations and instructional cues within the dataset provide valuable guidance for language model training, emphasizing strategic moves, tactics, and decision-making processes.
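As an illustration, an instruction-style record in such a dataset might look like the following. The field names and task wording here are hypothetical, chosen only to sketch the idea; they are not the dataset's actual schema.

```python
import json

# Hypothetical ChessInstruct-style record. The schema below is an
# assumption for illustration, not the dataset's actual format.
sample = {
    "task": "Given a sequence of moves, state the best next move for White.",
    "input": {"moves": ["e4", "e5", "Nf3", "Nc6", "Bb5"]},
    "expected_output": "a6",
}

# Records like this serialize naturally to JSON lines for training.
serialized = json.dumps(sample)
restored = json.loads(serialized)
print(restored["expected_output"])
```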
StockLLM, in conjunction with the ChessInstruct Dataset, aims to propel the boundaries of language modeling in the domain of chess instruction. Through nuanced linguistic representations and tailored instructional datasets, this project envisions revolutionizing the efficacy and depth of chess education by harnessing the power of advanced Natural Language Processing.
TODO
To replicate the experiments, there are two primary methods:
Utilize DVC to rerun the entire pipeline or modify certain parameters:
Run the entire pipeline:
dvc repro
Play with the parameters used:
dvc exp run -S "param.to.overwrite=new-value"
Execute a single stage by running the following command:
python src/entrypoint.py <name-of-stage-to-run>
For instance:
python src/entrypoint.py train-instruct-model
Upon executing the DVC pipeline, multiple outputs will be generated and stored in the outputs/ folder:
- Intermediate version of Mistral-7b: This model incorporates each possible chess move as a distinct token in its vocabulary.
- ChessInstruct Dataset: An instruction dataset focused explicitly on chess.
- StockLLM: An instruction-tuned version of Mistral-7b specialized for chess.
These outputs encapsulate the refined components developed during the experiment replication process, providing a comprehensive suite of resources for further exploration and utilization within the realm of chess instruction and language modeling. Feel free to adapt and utilize these outputs in alignment with your specific requirements and research goals.
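To make the first output concrete, a vocabulary of "each possible chess move" could be enumerated roughly as below, using UCI move strings (from-square, to-square, optional promotion piece). This is only a sketch; the actual token set used for the intermediate Mistral-7b may differ.

```python
# Sketch: enumerate UCI-style move strings that could each become a
# distinct vocabulary token. This is an assumption about the scheme,
# not the project's confirmed tokenization.
FILES = "abcdefgh"
RANKS = "12345678"
squares = [f + r for r in RANKS for f in FILES]

# Every distinct from-square/to-square pair: 64 * 63 = 4032 strings.
moves = [src + dst for src in squares for dst in squares if src != dst]

# Promotion variants: a pawn reaching the last rank, moving straight or
# capturing one file to either side, promoting to q/r/b/n.
for src_rank, dst_rank in (("7", "8"), ("2", "1")):  # white, then black
    for i, f in enumerate(FILES):
        targets = {FILES[j] for j in (i - 1, i, i + 1) if 0 <= j < 8}
        for t in targets:
            for piece in "qrbn":
                moves.append(f + src_rank + t + dst_rank + piece)

print(len(moves))  # 4032 plain moves + 176 promotion tokens = 4208
```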
- Convert intermediate output format to parquet
- Generate an intermediate dataset independent of any specific model's prompt format
- Facilitate local move evaluation by running a Stockfish server
- Introduce a new task: "detect illegal moves"
- Introduce a new task: "calculate WIN/DRAW/LOSE statistics"
- Generate strategic game analyses
- Implement the use of `LABEL_PROMPT` to enhance the model's output format
- Investigate additional datasets to bolster ChessInstruct
- Capture and incorporate strategies employed by various players to enhance the strategic analysis section.
- Introduce Time Control Variation: Include datasets or variations that encompass different time controls (blitz, rapid, classical) to diversify the model's exposure to varied game styles.
- Conduct an empirical analysis to assess how the ELO ratings of the games used during pre-training influence the model's performance and its adaptability to different skill levels.
- Provide training as a dvc step
- Conduct pre-training utilizing a blend of laion/strategic_game_chess and chess-related literature
- Perform instruction fine-tuning using a combination of ChessInstruct and tatsu-lab/alpaca
- Supply a trained version of StockLLM
- Integrate an evaluation step involving matches against Stockfish (across different ELOs)
- Determine StockLLM's ELO rating
- Experiment with reinforcement learning approaches to allow the model to learn and adapt during gameplay, potentially enhancing its performance.
- Focus on improving the model's interpretability by incorporating methods to explain its decisions, especially in strategic analyses or move recommendations.
See the open issues for a full list of proposed features and known issues.
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated. If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement".
Don't forget to give the project a star! 🌟 Thanks again!
Please try to follow Conventional Commits.
I extend my heartfelt thanks to LAION for graciously providing the laion/strategic_game_chess dataset that served as the backbone of this project.
Valentin De Matos - @ThytuVDM - [email protected]