
Reinforcement Learning

Paying homage to AlphaGo, we've launched our own AI Go project at ToshiStats!

Reinforcement learning has become a hot topic since the release of OpenAI's o1-preview. Looking back, it was Google DeepMind's AlphaGo that truly brought reinforcement learning into the public eye in March 2016. Go, with its vast search space, had long been a formidable challenge for computers, and amateur high-dan level was roughly the limit at the time. However, AlphaGo, combining reinforcement learning and Monte Carlo Tree Search (MCTS), exceeded expert expectations and became the first AI Go player to defeat a top professional. Inspired by this, we've launched our own AI Go project, the "ToshiStats-Go project," to research reinforcement learning. We're excited to see what we can achieve.

 

1. Creating a Go Game Environment

We've decided to build our own Go game environment from scratch. Given the exceptional coding capabilities of o1-preview, we're using it as a coding assistant for this project. We develop iteratively: ask o1-preview to generate code for the Go environment, execute it in Google Colab, request further refinements based on the results, and repeat. Within a few iterations, we had a basic framework and a functional environment. While we can't perfectly implement a game as complex as Go, we've created something akin to "simple-go," which should be sufficient for implementing reinforcement learning and improving its accuracy. Below is an example of o1-preview's explanation of a code modification; as you can see, it's quite detailed.

[Figure: o1-preview's explanation of a code modification]
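To make the discussion concrete, here is a minimal sketch of the kind of environment we mean by "simple-go." This is an illustrative example written for this post, not the o1-preview-generated code from the notebook, and names such as `SimpleGo`, `play`, and `legal_moves` are my own. It covers stone placement, captures, and turn-taking, which is the core a reinforcement-learning loop needs:

```python
import numpy as np

class SimpleGo:
    """A pared-down Go-like environment: placement, captures, and turn-taking.
    Full rules (ko, seki, scoring subtleties) are deliberately omitted."""
    EMPTY, BLACK, WHITE = 0, 1, 2

    def __init__(self, size=5):
        self.size = size
        self.board = np.zeros((size, size), dtype=int)
        self.to_play = self.BLACK

    def neighbors(self, r, c):
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            if 0 <= r + dr < self.size and 0 <= c + dc < self.size:
                yield r + dr, c + dc

    def group_and_liberties(self, r, c):
        """Flood-fill the group containing (r, c) and collect its liberties."""
        color, stack, group, libs = self.board[r, c], [(r, c)], set(), set()
        while stack:
            p = stack.pop()
            if p in group:
                continue
            group.add(p)
            for n in self.neighbors(*p):
                if self.board[n] == self.EMPTY:
                    libs.add(n)
                elif self.board[n] == color:
                    stack.append(n)
        return group, libs

    def play(self, r, c):
        """Place a stone for the side to move, capturing opponent groups left
        without liberties. Returns False for an illegal (occupied/suicide) move."""
        if self.board[r, c] != self.EMPTY:
            return False
        me, opp = self.to_play, self.BLACK + self.WHITE - self.to_play
        self.board[r, c] = me
        for n in list(self.neighbors(r, c)):
            if self.board[n] == opp:
                group, libs = self.group_and_liberties(*n)
                if not libs:
                    for g in group:          # capture the dead group
                        self.board[g] = self.EMPTY
        if not self.group_and_liberties(r, c)[1]:  # suicide: revert
            self.board[r, c] = self.EMPTY
            return False
        self.to_play = opp
        return True

    def legal_moves(self):
        return [(r, c) for r in range(self.size) for c in range(self.size)
                if self.board[r, c] == self.EMPTY]
```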

 

2. Trying a Game of Go

Let's give it a try! The current AI model plays random moves, so it's not very strong; as the example below shows, a human can win with careful play. A 9x9 board is available, but its calculations can be time-consuming, so we'll stick with a 5x5 board for now. It's enjoyable enough, and if you'd like to try it yourself, please download the Colab notebook from our GitHub repository (1). No GPU is required.

[Figure: trial run of ToshiStats-Go]
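If you just want a feel for the flow without the notebook, a few half-moves of random self-play on the sketch environment above look like this (again, `SimpleGo` is my illustrative class, not the repository code):

```python
import random

env = SimpleGo(size=5)
for _ in range(10):                  # ten half-moves of random self-play
    moves = env.legal_moves()
    if not moves:
        break
    env.play(*random.choice(moves))  # illegal (suicide) picks simply return False
print(env.board)
```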

 

3. Perfect Go Rules Are Difficult

Go has some very complex rules. In particular, determining the life and death of stones, especially in the endgame, has proved challenging, and implementing "ko" and "seki" also looks difficult. Connecting to an external Go system might solve these issues, but for now we'll continue with a lightweight environment that completes its calculations within the notebook, to make reinforcement-learning experimentation easy; one possible workaround for ko is sketched below. We'll strive to make this series engaging and easy to follow, comparing our progress with simpler games such as Gomoku (five in a row). We appreciate your continued interest.
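For ko in particular, a common lightweight fix is positional superko: remember every whole-board position seen and reject any move that recreates one. The helper below shows the idea against the `SimpleGo` sketch from earlier; it's an assumption about how we might do it, not what the notebook currently implements:

```python
def play_with_superko(env, history, r, c):
    """Play (r, c) on the SimpleGo sketch, rejecting any move that recreates
    a previously seen whole-board position (positional superko).
    `history` is a set of (board-bytes, side-to-move) keys."""
    saved_board, saved_player = env.board.copy(), env.to_play
    if not env.play(r, c):
        return False
    key = (env.board.tobytes(), env.to_play)
    if key in history:               # covers simple ko and longer repetition cycles
        env.board, env.to_play = saved_board, saved_player
        return False
    history.add(key)
    return True

# Seed the history with the starting position before the first move:
# history = {(env.board.tobytes(), env.to_play)}
```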

 

So, there you have it! We've successfully implemented a Go-playing environment in Colab. From here, we'll dive into reinforcement learning and begin training our AI Go player. Stay tuned!

1) ToshiStatsGo-project


The combination of Monte Carlo Tree Search (MCTS) and generative AI could be a real game-changer in the future!

"Monte Carlo Tree Search," a search technique, gained fame in March 2016 when AlphaGo became the first AI to defeat a top professional Go player. Its effectiveness increases significantly when combined with reinforcement learning, making it a powerful tool. However, implementing it can be quite challenging. With the recent release of ChatGPT canvas (1) on October 3rd, I want to explore implementing Monte Carlo Tree Search in a simple game. Let's begin!

 

1. AlphaGo and Monte Carlo Tree Search

AlphaGo, which decisively defeated 18-time Go world champion Lee Sedol in March 2016, owed its strength to the combination of reinforcement learning and Monte Carlo Tree Search (MCTS), as discussed previously. A research paper (2) compares the performance of various Go AI programs.

[Figure: performance comparison of various Go AI programs, from (2)]

The leftmost "Raw network" doesn't utilize MCTS during inference, resulting in lower performance compared to AlphaGo Zero next to it. This highlights the significant contribution of MCTS. In AlphaGo Zero, MCTS is executed as shown in the diagram below. The action probability 'p' is trained to approach the probability 'π' of the next move selected by MCTS, gradually improving accuracy. For details, please refer to (2).

AlphaGo Zero "Monte Carlo Tree Search"


2. Implementing MCTS in a Simple Game

Witnessing MCTS's success in AlphaGo makes you want to try it yourself. The recent release of ChatGPT canvas (1) from OpenAI provides the perfect opportunity. As OpenAI's message "A new way of working with ChatGPT to write and code" suggests, it offers a new user experience. I promptly asked ChatGPT canvas, "Could you make code of Tic Tac Toe by using python and MCTS?"

Unlike regular ChatGPT, canvas opens a separate window and generates Python code as shown below.
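Below is a condensed, runnable sketch of the kind of code such a request yields; I rewrote it for this post, so treat it as illustrative of the approach rather than canvas's exact output. It is the classic UCT recipe: select a promising node with UCB1, expand one child, play a random rollout, and backpropagate the result:

```python
import math
import random

# Tic-tac-toe board: a list of 9 cells holding 'X', 'O', or None.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell is None]

def other(player):
    return 'O' if player == 'X' else 'X'

class Node:
    def __init__(self, board, player, parent=None, move=None):
        self.board, self.player = board, player   # `player` = side to move here
        self.parent, self.move = parent, move
        self.children = []
        self.untried = legal_moves(board) if winner(board) is None else []
        self.wins, self.visits = 0.0, 0

    def ucb1(self, c=1.4):
        # Exploitation plus exploration; only called on visited children.
        return (self.wins / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def rollout(board, player):
    # Play random moves to the end; return 'X', 'O', or None for a draw.
    board = board[:]
    while winner(board) is None and legal_moves(board):
        board[random.choice(legal_moves(board))] = player
        player = other(player)
    return winner(board)

def mcts(root_board, root_player, iterations=5000):
    root = Node(root_board[:], root_player)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded nodes by UCB1.
        while not node.untried and node.children:
            node = max(node.children, key=Node.ucb1)
        # 2. Expansion: add one untried child unless the node is terminal.
        if node.untried:
            move = node.untried.pop()
            board = node.board[:]
            board[move] = node.player
            node.children.append(Node(board, other(node.player), node, move))
            node = node.children[-1]
        # 3. Simulation: random rollout from the new position.
        result = rollout(node.board, node.player)
        # 4. Backpropagation: credit nodes from the view of the player who just moved.
        while node is not None:
            node.visits += 1
            if result == other(node.player):
                node.wins += 1
            elif result is None:
                node.wins += 0.5                  # draws count half
            node = node.parent
    return max(root.children, key=lambda n: n.visits).move
```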

I also wanted an explanation, so I added a prompt asking for one in English. Since the generated code cannot be executed within the canvas, I copied and pasted it into Google Colab to run it.

I was able to enjoy the game as shown below. Fantastic!
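A minimal console loop like the following (again my illustrative wrapper around the sketch above, with no input validation) is enough to play against it in Colab:

```python
# Human plays 'X'; the MCTS player responds as 'O'.
board, human, ai = [None] * 9, 'X', 'O'
while winner(board) is None and legal_moves(board):
    board[int(input("Your move (0-8): "))] = human
    if winner(board) is None and legal_moves(board):
        board[mcts(board, ai, iterations=3000)] = ai
    print([c if c else '.' for c in board])
print("Result:", winner(board) or "draw")
```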

GPT-4o, the generative AI model powering ChatGPT canvas, appears to have improved coding abilities, likely thanks to post-training on data distilled from the recently released, logically strong o1-preview. I encountered occasional errors, but copying and pasting them into a prompt for correction quickly resolved the issues. It felt like a significant upgrade to a full-fledged code assistant, and I'm eager to use it more. The generated code can be found at (3).

 

3. Promising Combination of Generative AI and MCTS

Research on incorporating the AlphaGo mechanism into generative AI is actively underway. From AlphaGo Zero (released in 2017) onward, these systems require no human input at all, in this case game records. That freedom from data constraints makes the approach promising for addressing training-data scarcity. The combination of reinforcement learning and MCTS offers flexible design possibilities, making it highly intriguing for developers, and from the perspective of test-time computing, highlighted by OpenAI's o1-preview, it's a technology worth focusing on. In the next post, I plan to delve deeper into MCTS by examining published research papers. Stay tuned!

 

What do you think? The concept of MCTS is relatively simple, which broadens its applicability, and it works well with ChatGPT canvas; I'm excited to continue experimenting. Canvas is currently available only to paid subscribers, but it's expected to reach free users at general release. I'm looking forward to it. That's all for today. Stay tuned!

 

1) Introducing canvas, OpenAI, Oct 3, 2024
2) Mastering the game of Go without human knowledge, David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, Yutian Chen, Timothy Lillicrap, Fan Hui, Laurent Sifre, George van den Driessche, Thore Graepel & Demis Hassabis, Google DeepMind, Nature, Vol. 550, p. 355, Oct 19, 2017
3) Monte-Carlo-Tree-Search-with-ChatGPT-canvas, Oct 6, 2024

 


Looking at OpenAI's o1-preview, I thought, "Reinforcement learning might take the leading role in AI development!"

It's been three weeks since OpenAI's o1-preview unveiled a new paradigm for generative AI. Its accuracy on logical tasks during inference is remarkable. Unfortunately, its mechanism isn't public, but it would be fascinating to know the state of the art in related technologies. Luckily, a helpful research paper (1) from the University of California, Berkeley and Google DeepMind has been released; I'd like to introduce it here and use it to speculate on the mechanisms behind o1-preview. Let's begin!

 
1. What We Learned from OpenAI's o1-preview and the Latest Research Papers

According to the OpenAI website (2), we've learned two key things. First, o1-preview leverages reinforcement learning for enhanced performance. Second, it emphasizes "chain of thought" and prioritizes test-time computing. This information alone isn't enough for a fruitful technical discussion, however, so let's examine recent research papers on natural language processing with reinforcement learning. From several papers, I've selected one on hierarchical reinforcement learning, an algorithm reported to be effective for "multi-turn" conversations that extend over multiple exchanges. As you may have experienced, when using ChatGPT or similar models to obtain information, you rarely get the desired result in a single attempt; several interactions with the generative AI are usually required. In such cases, the number of generated tokens steadily grows, which makes efficient training of the generative AI difficult. This new algorithm aims to address that challenge; one possible application is maximizing customer satisfaction at the end of a multi-turn conversation with a generative AI assistant.

 

2. Hierarchical Reinforcement Learning

The algorithm presented in this paper (1) is called "hierarchical reinforcement learning" and is characterized by a two-level hierarchical structure, illustrated in the paper's diagram.

The most notable aspect is the two-tiered structure consisting of the utterance level and the token level. Separating utterance-level language processing from the individual minimal units of action at the token level is highly effective for efficient training. Typically, generative AI operates by "next-token prediction," diligently predicting the next best word from the prompt's instructions, and its accuracy is remarkable, often producing more polished language than I can. In "multi-turn" scenarios with continuous utterances, however, the number of tokens grows and training becomes more challenging. This is where reinforcement learning at the utterance level comes into play, with rewards also defined at that level. For example, a reward could be devised that awards "+1" for successfully retrieving the necessary information by searching a website and "0" for failure; this facilitates efficient training. Based on this reward, an action-value function is calculated and used for reinforcement learning at the token level, which reportedly makes training significantly more efficient; a toy sketch of the idea follows below. For further details, please refer to (1).
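The sketch below is my own toy rendering of that two-level idea, not the authors' code: the actual method in (1) trains the utterance-level critic with temporal-difference learning and derives per-token advantages from it, whereas here I fit the critic to the sparse reward directly and reuse its value as a shared return in a REINFORCE-style token update. All names (`critic`, `policy`, `train_step`) are assumptions made for this example:

```python
import torch
import torch.nn as nn

# Toy dimensions: plain vectors stand in for the encoded dialogue state
# and the encoded candidate utterance.
STATE_DIM, UTT_DIM, VOCAB = 16, 16, 50

# Utterance-level critic: Q(state, utterance) -> scalar, trained on the sparse
# per-utterance reward (e.g. +1 for a successful retrieval, 0 for failure).
critic = nn.Sequential(nn.Linear(STATE_DIM + UTT_DIM, 64), nn.ReLU(), nn.Linear(64, 1))

# Token-level policy: next-token logits from a per-token context vector.
policy = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, VOCAB))

critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
policy_opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def train_step(state, utterance, token_contexts, token_ids, reward):
    """One update: fit the utterance-level critic to the observed reward, then
    reuse its value as the return for every token of that utterance."""
    # Utterance level: regress Q(state, utterance) onto the sparse reward.
    q = critic(torch.cat([state, utterance]))
    critic_loss = (q - reward).pow(2).sum()
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # Token level: REINFORCE with the critic's value as a shared return,
    # so no hand-crafted per-token reward is needed.
    with torch.no_grad():
        value = critic(torch.cat([state, utterance]))
    log_probs = torch.log_softmax(policy(token_contexts), dim=-1)
    chosen = log_probs[torch.arange(len(token_ids)), token_ids]
    policy_loss = -(value * chosen).mean()
    policy_opt.zero_grad()
    policy_loss.backward()
    policy_opt.step()
    return critic_loss.item(), policy_loss.item()

# Dummy data: one 5-token utterance that earned reward 1.0.
print(train_step(torch.randn(STATE_DIM), torch.randn(UTT_DIM),
                 torch.randn(5, STATE_DIM), torch.randint(0, VOCAB, (5,)), 1.0))
```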

 

3. Flexibility in Reinforcement Learning Design

As we've seen, hierarchical reinforcement learning offers flexibility and a high degree of design freedom. While it's used here to separate utterance-level and token-level analysis, it appears to be employed for other enhancements as well. For example, a research paper (3) from Google DeepMind uses hierarchical reinforcement learning to improve self-correction capabilities:

"Self-correction is a highly desirable capability of large language models (LLMs), yet it has consistently been found to be largely ineffective in modern LLMs. Existing approaches for training self-correction either require multiple models or rely on a more capable model or other forms of supervision. To this end, we develop a multi-turn online reinforcement learning (RL) approach, SCoRe, that significantly improves an LLM's self-correction ability using entirely self-generated data."

It's exciting to anticipate the various use cases that will likely emerge in the future. For more details, please refer to (3).

 

What do you think? The acclaim for o1-preview seems to grow daily. While the details of its mechanism are unlikely to be revealed soon, speculating about it from the outside is crucial for understanding AGI. Next time, I'd like to look at application examples of o1-preview. That's all for today. Stay tuned!

 

1) ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL, Yifei Zhou, Andrea Zanette, Jiayi Pan, Sergey Levine, Aviral Kumar, University of California, Berkeley and Google DeepMind, Feb 29, 2024
2) Introducing OpenAI o1, OpenAI, Sep 12, 2024
3) Training Language Models to Self-Correct via Reinforcement Learning, Aviral Kumar, Vincent Zhuang, Rishabh Agarwal, Yi Su, JD Co-Reyes, Avi Singh, Kate Baumli, Shariq Iqbal, Colton Bishop, Rebecca Roelofs, Lei M Zhang, Kay McKinney, Disha Shrivastava, Cosmin Paduraru, George Tucker, Doina Precup, Feryal Behbahani, Aleksandra Faust, Google DeepMind, Sep 19, 2024

 

Notice: ToshiStats Co., Ltd. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithms or ideas contained herein, or acting or refraining from acting as a result of such use. ToshiStats Co., Ltd. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on ToshiStats Co., Ltd. and me to correct any errors or defects in the codes and the software.