Google DeepMind

Reflections on the Future of AI Inspired by the 2024 Nobel Prizes in Physics and Chemistry

Last week was truly astonishing. Two prominent figures in AI, Geoffrey Hinton and Demis Hassabis, were awarded the Nobel Prizes in Physics and Chemistry, respectively. To my knowledge, no one had predicted either of them as a Nobel laureate, and the rest of the world must have been just as surprised. I'd like to take this opportunity to reflect on their achievements and speculate on the future of AI.

 

1. The Nobel Prize in Physics

Let's start with Geoffrey Hinton, a professor at the University of Toronto, who has been researching AI since the 1970s. In 2018, he shared the Turing Award, a prestigious prize for computer scientists, with two other researchers. He's often called the "Godfather of AI." Now 76, he's still actively working. I actually took a massive open online course (MOOC) he offered back in 2013. It was a valuable lecture that led me into the world of AI. Over a decade ago, courses teaching Neural Networks were scarce, so I was fortunate to stumble upon his lectures. Back then, my knowledge was limited to logistic regression models, so much of what he taught seemed incredibly complex and I remember thinking, "This seems amazing, but probably won't be immediately useful." I never imagined he'd win the Nobel Prize in Physics ten years later. Fortunately, his lectures from that time appear to be accessible on the University of Toronto website (1). I highly recommend checking them out. (The Nobel Prize in Physics was awarded jointly to John Hopfield and Geoffrey Hinton.)

 


2. The Nobel Prize in Chemistry

The Nobel Prize in Chemistry recipient is considerably younger: Demis Hassabis, currently 48. He is a co-founder of one of the world's leading AI companies, Google DeepMind. His award specifically cites AlphaFold2, a groundbreaking AI model for predicting the 3D structure of proteins that is said to have made significant contributions to drug discovery and other fields. He is not only a brilliant AI researcher but also a business leader at Google DeepMind; when presenting to a general audience, he mostly talks about the achievements of Google DeepMind rather than his personal accomplishments. There's no doubt that the catalyst that propelled this company to the top tier of AI companies was AlphaGo, which appeared in March 2016, about four years before AlphaFold2. The reinforcement learning used in that model is still being actively researched as a way to give large language models true logic and reasoning capabilities. AlphaGo inspired me to seriously study reinforcement learning, and I wrote about it on my blog in April 2016. It's a fond memory. (The Nobel Prize in Chemistry was awarded jointly to David Baker, John M. Jumper, and Demis Hassabis.)

AlphaGo

 

3. Scientific and Technological Development and AI

I completely agree that the two individuals discussed here have pioneered new paradigms in AI. However, their being awarded the Nobel Prizes in Physics and Chemistry is a landmark event, demonstrating that AI has transcended its own boundaries and become an indispensable tool for scientific advancement as a whole. Going forward, we need to discuss how to leverage AI and integrate it into all aspects of human intellectual activity. Further development might even lead to the kind of intelligence explosion described by Leopold Aschenbrenner's "SITUATIONAL AWARENESS" that I previously mentioned on my blog, potentially surpassing human intelligence. The implications of these Nobel Prizes are profound.

 

What are your thoughts? I'm a business person, but I believe the same applies to the business world. With the incredibly rapid pace of AI development, I hope to offer new insights based on a clear understanding of these trends. That's all for today. Stay tuned!

 


(1) X post by Geoffrey Hinton, Jan 16, 2019

Notice: ToshiStats Co., Ltd. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithms or ideas contained herein, or acting or refraining from acting as a result of such use. ToshiStats Co., Ltd. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on ToshiStats Co., Ltd. and me to correct any errors or defects in the codes and the software.




Looking at OpenAI's o1-preview, I thought, "Reinforcement learning might take center stage in AI development!"

It's been three weeks since OpenAI's o1-preview unveiled a new paradigm for generative AI. Its accuracy on logical tasks during inference is remarkable. Unfortunately, the mechanism isn't public, but it would be fascinating to know the state of the art in related technologies. Luckily, a helpful research paper (1) has been released by the University of California, Berkeley and Google DeepMind, which I'd like to introduce here and use to speculate on the mechanisms behind o1-preview. Let's begin!

 
1. What We Learned from OpenAI's o1-preview and the Latest Research Papers

According to the OpenAI website (2), we've learned two key things. First, o1-preview leverages reinforcement learning for enhanced performance. Second, it emphasizes "chain of thought" and prioritizes test-time compute. However, this information alone isn't enough for a fruitful technical discussion. Therefore, let's examine recent research papers on natural language processing using reinforcement learning. From several papers, I've selected one related to hierarchical reinforcement learning. This algorithm is reportedly effective for "multi-turn" conversations that extend over multiple exchanges. As you may have experienced, when using ChatGPT or similar models to obtain information, you rarely get the desired result in a single attempt; often, several interactions with the generative AI are required. In such cases, the number of generated tokens steadily increases, creating a challenging situation for efficient training of the generative AI. This new algorithm aims to address this challenge. A possible application is the task of "maximizing customer satisfaction at the end of a multi-turn conversation with a generative AI assistant."

 

2. Hierarchical Reinforcement Learning

The algorithm presented in this paper (1) is called "hierarchical reinforcement learning" and is characterized by the following hierarchical structure:

The most notable aspect here is the two-tiered structure consisting of the utterance level and the token level. Separating utterance-level language processing from the processing of individual minimal units of action at the token level is highly effective for efficient training. Typically, generative AI operates on "next token prediction," where it diligently predicts the next best word based on the prompt's instructions. Its accuracy is remarkable, often generating more polished language than I can. However, in "multi-turn" scenarios with continuous utterances, the number of tokens increases, making training more challenging. This is where reinforcement learning at the utterance level comes into play, with rewards also being considered at this level. For example, a reward could be devised where "+1" is awarded for successfully retrieving necessary information by searching a website and "0" for failure. This facilitates efficient training. Based on this reward, an action-value function is calculated and used for reinforcement learning at the token level. This reportedly enables significantly more efficient training. For further details, please refer to (1).
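To make the two-tier idea concrete, here is a toy sketch in Python. It is my own illustration, not the paper's actual algorithm: the reward rule, function names, and discounting scheme are all invented. The point is simply that a single utterance-level reward can be broadcast down to that utterance's tokens as their training signal.

```python
# Toy illustration of hierarchical credit assignment (not ArCHer itself):
# the utterance level earns one scalar reward, and the token level
# inherits (a discounted copy of) that reward as its learning signal.

def utterance_reward(utterance: str) -> float:
    """Hypothetical reward rule: +1 if the web search succeeded, else 0."""
    return 1.0 if "FOUND:" in utterance else 0.0

def token_level_targets(utterance: str, gamma: float = 1.0):
    """Broadcast the utterance-level reward down to the tokens.

    Credit assignment happens once per utterance instead of once per
    token, which is what keeps long multi-turn episodes tractable.
    """
    tokens = utterance.split()
    r = utterance_reward(utterance)
    # Earlier tokens are further from the reward, so discount them more.
    return [(tok, r * gamma ** (len(tokens) - 1 - i))
            for i, tok in enumerate(tokens)]

targets = token_level_targets("FOUND: opening hours are 9-17", gamma=0.9)
print(targets[-1])  # -> ('9-17', 1.0)
```

In the real algorithm, this utterance-level signal trains an action-value function, which in turn drives the token-level policy updates.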

 

3. Flexibility in Reinforcement Learning Design

As we've seen, hierarchical reinforcement learning offers flexibility and a high degree of design freedom. While it's used here to separate utterance-level and token-level analysis, it appears to be employed for other enhancements as well. For example, a research paper (3) from Google DeepMind uses hierarchical reinforcement learning to improve self-correction capabilities:

“Self-correction is a highly desirable capability of large language models (LLMs), yet it has consistently been found to be largely ineffective in modern LLMs. Existing approaches for training self-correction either require multiple models or rely on a more capable model or other forms of supervision. To this end, we develop a multi-turn online reinforcement learning (RL) approach, SCoRe, that significantly improves an LLM’s self-correction ability using entirely self-generated data.”

It's exciting to anticipate the various use cases that will likely emerge in the future. For more details, please refer to (3).

 

What do you think? The acclaim for o1-preview seems to be growing daily. While it's unlikely that the details of its mechanism will be revealed soon, speculating about it from the outside is crucial for understanding AGI. Next time, I'd like to consider the application examples of o1-preview. That's all for today. Stay tuned!

 

1) ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL, Yifei Zhou, Andrea Zanette, Jiayi Pan, Sergey Levine, Aviral Kumar, University of California, Berkeley and Google DeepMind, Feb 29, 2024
2) Introducing OpenAI o1, OpenAI, Sep 12, 2024
3) Training Language Models to Self-Correct via Reinforcement Learning, Aviral Kumar, Vincent Zhuang, Rishabh Agarwal, Yi Su, JD Co-Reyes, Avi Singh, Kate Baumli, Shariq Iqbal, Colton Bishop, Rebecca Roelofs, Lei M Zhang, Kay McKinney, Disha Shrivastava, Cosmin Paduraru, George Tucker, Doina Precup, Feryal Behbahani, Aleksandra Faust, Google DeepMind, Sep 19, 2024

 


The Future of Generative AI: Predicting the Next Generation Based on Google DeepMind's Math Olympiad Breakthrough

Generative AI has a reputation for struggling with math, often making mistakes even with simple elementary-level arithmetic. However, Google DeepMind recently announced that their AI achieved a score equivalent to a silver medal in the International Mathematical Olympiad (IMO)(1). Based on this article, let's delve into predicting the future of next-generation generative AI.

 

1. How Did AI Solve Complex Math Problems?

The achievement is impressive:

“Today, we present AlphaProof, a new reinforcement-learning based system for formal math reasoning, and AlphaGeometry 2, an improved version of our geometry-solving system. Together, these systems solved four out of six problems from this year’s International Mathematical Olympiad (IMO), achieving the same level as a silver medalist in the competition for the first time.”

                                                                          

This is an amazing score, just shy of a gold medal. Of the two systems, we'll focus on AlphaProof, the reasoning engine.

AlphaProof is explained as follows:

“AlphaProof is a system that trains itself to prove mathematical statements in the formal language Lean. It couples a pre-trained language model with the AlphaZero reinforcement learning algorithm, which previously taught itself how to master the games of chess, shogi and Go.”

In simple terms, while there is abundant data available for math problems written in natural language, generative AI tends to make plausible yet incorrect statements (hallucinations), making it difficult to utilize effectively. Therefore, Google utilized its generative AI, Gemini, to translate math problems into the formal language Lean. This formal representation was then fed into AlphaZero, known for its long-term planning and reasoning capabilities, for computation. The chart below provides a clear illustration.

                                                                          AlphaProof's Structure

AlphaZero has already proven its reasoning prowess in board games like Go. This achievement demonstrates the successful application of its capabilities to the realm of mathematics. Remarkable!
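For readers who haven't seen Lean before, here is a toy example of what a formally stated, machine-checkable theorem looks like in Lean 4 with Mathlib. It is my own trivial example, far simpler than any IMO problem, but AlphaProof searches for proofs of statements written in exactly this kind of formal language:

```lean
import Mathlib  -- brings in ℝ and the `positivity` tactic

-- Toy statement: the sum of two real squares is nonnegative.
-- Lean accepts the theorem only if the proof actually checks.
theorem sum_sq_nonneg (a b : ℝ) : 0 ≤ a ^ 2 + b ^ 2 := by
  positivity
```

Because every proof is mechanically verified, a system that trains on Lean statements gets an unambiguous reward signal, with no risk of a plausible-but-wrong "hallucinated" proof slipping through.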

 

2. Implications from AlphaZero

Let's briefly revisit AlphaZero, which made a reappearance here. It is a groundbreaking AI that combines RL (Reinforcement Learning) and MCTS (Monte Carlo Tree Search). Its predecessor, AlphaGo, gained fame in March 2016 as the first AI to defeat a top professional Go player. It's important to emphasize that AlphaZero achieved superhuman ability without relying on human-created data; it trained itself using self-generated data. Upon hearing this for the first time, many might wonder, "How is that even possible?" AlphaZero accomplishes this through self-play, generating massive amounts of training data by playing against itself. Refer to the research paper (2) for more details. For context, you can think of AlphaGo as the initial version of AlphaZero.
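The self-play idea can be illustrated with a deliberately tiny game. The sketch below is a toy of my own, omitting the neural network and MCTS that AlphaZero adds on top: a random policy plays one-pile Nim against itself, and every move is labeled with whether the player who made it eventually won, yielding training data with no human games involved.

```python
import random

def play_one_game(n_stones: int = 7):
    """1-pile Nim: players alternately take 1 or 2 stones; taking the
    last stone wins. Returns (trajectory, winner)."""
    trajectory, player = [], 0
    while True:
        action = random.choice([1, 2]) if n_stones > 1 else 1
        trajectory.append((player, n_stones, action))
        n_stones -= action
        if n_stones == 0:
            return trajectory, player  # this player took the last stone
        player = 1 - player

def self_play_dataset(n_games: int = 1000):
    """Label each (state, action) with +1 if the acting player went on
    to win that game, -1 otherwise: entirely self-generated data."""
    data = []
    for _ in range(n_games):
        trajectory, winner = play_one_game()
        for player, state, action in trajectory:
            data.append((state, action, 1 if player == winner else -1))
    return data

data = self_play_dataset(100)
print(len(data), data[0])
```

AlphaZero replaces the random policy with a neural network guided by MCTS and retrains the network on the freshly generated games, repeating the loop so the data quality rises with the policy.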

 

3. The Fusion of Current Generative AI and AlphaGo

Interestingly, Demis Hassabis, CEO of Google DeepMind, recently hinted at the future of their generative AI(3). The key takeaways are:

  • “Gemini” is a natively multimodal model.

  • It can understand various aspects of the world, including language, images, videos, and audio.

  • Current models are incapable of long-term planning and problem-solving.

  • DeepMind possesses expertise in this field through AlphaGo.

  • The next-generation model will be an agent that fuses Gemini and AlphaGo.

 

It's plausible to view the project that secured a silver medal in the Math Olympiad as a step towards overcoming the limitations of generative AI in "long-term planning." However, one might question, "How exactly will this fusion work?" A prominent long-form paper (4) in June of this year provides clues.

“A look back at AlphaGo—the first AI system that beat the world champions at Go, decades before it was thought possible—is useful here.

• In step 1, AlphaGo was trained by imitation learning on expert human Go games. This gave it a foundation.

• In step 2, AlphaGo played millions of games against itself. This let it become superhuman at Go: remember the famous move 37 in the game against Lee Sedol, an extremely unusual but brilliant move a human would never have played.

Developing the equivalent of step 2 for LLMs is a key research problem for overcoming the data wall (and, moreover, will ultimately be the key to surpassing human-level intelligence).”

AlphaGo eventually transitioned to self-play, generating its own training data and eliminating the need for human input, a remarkable feat achieved through the combination of reinforcement learning and MCTS. The future of next-generation AI hinges on how generative AI can be trained using this mechanism.

 

Conclusion:

The ability to execute long-term plans opens up a plethora of possibilities. Imagine AI formulating long-term investment strategies or serving as legal advisors in court, excelling in tasks that demand prolonged reasoning and debate. The world is undoubtedly on the verge of transformation, and the future is incredibly exciting.

That's all for today. Stay tuned!

 





1) AI achieves silver-medal standard solving International Mathematical Olympiad problems, Google DeepMind, 25 July 2024
2) Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm, Google DeepMind, 5 Dec 2017
3) Unreasonably Effective AI with Demis Hassabis, Google DeepMind, 14 Aug 2024 (around 18:00)
4) SITUATIONAL AWARENESS: The Decade Ahead, p. 28, Leopold Aschenbrenner, June 2024














Gemma2-2B: A Small Yet Powerful Generative AI - A Hands-On Review

Today, we'll be diving into Google DeepMind's recently announced compact generative AI model, "Gemma2-2B" (1), and running a simple demo. Gemma is a family of open models. While mid-sized models with 27B and 9B parameters are already available, this latest release boasts a significantly smaller 2B-parameter model. It promises remarkable performance despite its size, generating considerable excitement. Let's take a closer look.

 

1. Remarkable Fundamental Performance

Despite its compact size, the Gemma model exhibits impressive performance, as detailed below. Surpassing GPT-3.5 is a feat that would have been unimaginable just a year ago. The rapid advancement of open models continues to amaze.

Google's website describes it as follows (1):

“This lightweight model produces outsized results by learning from larger models through distillation. In fact, Gemma 2 2B surpasses all GPT-3.5 models on the Chatbot Arena, demonstrating its exceptional conversational AI abilities.”

The "distillation" technique mentioned here is key to enhancing the performance of smaller models. It's employed not only in Gemma but also in Llama3 and various other small models, making it a concept worth remembering. With the performance of a 2B parameter model reaching such heights, it's tempting to explore its capabilities. Let's move on to the demo.

 

2. Performance Check with a News Article Classification Task

For this demo, we'll tackle the task of classifying Japanese articles from the publicly available Livedoor-news dataset (2) into five genres. We'll fine-tune the Gemma2-2B model and evaluate its classification accuracy. Since we're using Japanese articles, this will also assess its multilingual capabilities. Let's get started!

The following article is an example from the validation data. The model's task is to identify this article as belonging to the sports category.

                Example of validation data

Specifically, each article is categorized into one of the following categories. The goal of today's demo is to improve the accuracy of this classification.

  • 'kaden-channel' (Electronics)

  • 'topic-news' (General News)

  • 'sports-watch' (Sports)

  • 'it-life-hack' (IT/Life Hacks)

  • 'movie-enter' (Movies/Entertainment)

We prepared 100 samples for training data and 1,000 samples for validation data. We'll apply fine-tuning using Unsloth, an impressive library for efficient, quantized fine-tuning, with the data in the Alpaca format. For details, please refer to this link (3).
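As an illustration of the data side, here is roughly how an article-label pair can be rendered in the Alpaca format. The template wording below is the standard Alpaca convention and the instruction text is my own; the exact prompt used in the actual notebook may differ.

```python
# Sketch: casting the 5-way news classification task into the Alpaca
# instruction format before fine-tuning. Labels follow the demo above.

LABELS = ["kaden-channel", "topic-news", "sports-watch",
          "it-life-hack", "movie-enter"]

ALPACA_TEMPLATE = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Input:
{input}

### Response:
{output}"""

def to_alpaca(article_text: str, label: str) -> str:
    """Render one training example as a single Alpaca-formatted string."""
    assert label in LABELS
    return ALPACA_TEMPLATE.format(
        instruction="Classify the following Japanese news article into one of: "
                    + ", ".join(LABELS),
        input=article_text,
        output=label,
    )

record = to_alpaca("昨夜の試合でイチローが決勝打を放った。", "sports-watch")
print(record.splitlines()[-1])  # -> sports-watch
```

At inference time the same template is used with the `{output}` slot left empty, and the model's completion is read off as the predicted genre.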

Without extensive tuning, we achieved an accuracy of 81.5%, as shown below. Considering the small training dataset of only 100 samples, this is an impressive result. With further optimization, the accuracy could likely be improved. It's hard to believe this performance comes from a model with only 2B parameters. Its ability to handle Japanese text is also commendable. The notebook used for the demo can be found here.

 

3. Limitless Potential Applications

With such high performance in a small model, the possibility of implementation on devices like smartphones, previously deemed impractical, becomes a reality. It also opens doors for applications where cost and computational speed were prohibitive. It seems particularly well-suited for customer service applications requiring real-time responses. Additionally, it could be deployed in developing countries where the cost of using frontier models like GPT4 has been a barrier. The future possibilities are truly exciting.

 



So, what did you think? The Gemma2-2B model can run on Google Colab's free T4 GPU, making it a valuable asset for startups like ours. It's truly remarkable. The small yet powerful Gemma2-2B model is poised for widespread adoption. At ToshiStats, we're committed to developing tuning techniques to maximize the benefits of open-source libraries. We'll be sharing more on this blog in the future. That's all for today. Stay tuned!

 
 


Google introduces new open-weight generative AI "Gemma2". The competition with Llama3 has finally begun!

Google has finally introduced a new type of open-weight generative AI, "Gemma2" (1). Although it had been previously announced, it came out sooner than expected. As shown below, the 27B model boasts an impressive 12th place on the leaderboard, closely rivaling larger models. A technical report (2) is also available, so let's take a look at what kind of evolution has occurred.

LMSYS Chatbot Arena Leaderboard

 

1. Model Architecture

Gemma2 adopts the familiar decoder-only transformer architecture, the same general design believed to underlie models such as GPT-4. The context window, which indicates the amount of information that can be input and output at once, is 8192 tokens. The model structure is largely the same as Gemma1, but according to the technical report, the following points have been updated:

“We alternate between a local sliding window attention (Beltagy et al., 2020) and global attention (Luong et al., 2015) in every other layer. The sliding window size of local attention layers is set to 4096 tokens, while the span of the global attention layers is set to 8192 tokens.”

Global attentional model (3)

Comparison of full self-attention pattern and other attention patterns (4)
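The alternation quoted above is easy to picture as attention masks. The sketch below is my own toy, using a window of 4 and a span of 8 instead of Gemma 2's 4096 and 8192 so the masks fit on screen; it builds the two causal mask types and alternates them across layers.

```python
import numpy as np

# mask[i, j] == True means query position i may attend to key position j.
# Both patterns are causal; the local one additionally restricts each
# query to the most recent `window` positions.

def causal_global_mask(seq_len: int) -> np.ndarray:
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return j <= i  # full causal attention

def causal_sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return (j <= i) & (j > i - window)  # only the last `window` positions

seq_len, window = 8, 4
local = causal_sliding_window_mask(seq_len, window)
glob = causal_global_mask(seq_len)

# Gemma 2 alternates the two patterns in every other layer:
layer_masks = [local if k % 2 == 0 else glob for k in range(4)]
print(local.astype(int))
```

The local layers keep the cost of attention linear in sequence length, while the interleaved global layers preserve the ability to look all the way back across the 8192-token context.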

 

2. Pre-training

Gemma2's training data is as follows:

  • 27B model: 13 trillion tokens, primarily English data

  • 9B model: 8 trillion tokens

  • 2.6B model: 2 trillion tokens

“These tokens come from a variety of data sources, including web documents, code, and science articles. Our models are not multimodal and are not trained for state-of-the-art multilingual capabilities.”

“[We use the] same tokenizer as Gemma 1 and Gemini: a SentencePiece tokenizer with split digits, preserved whitespace, and byte-level encodings. The resulting vocabulary has 256k entries.”

Knowledge distillation was also adopted for the 9B and 2.6B models. In my opinion, this might be the most evolved point of Gemma2. It's a Google-specific strategy to leverage the advantages of their existing large-scale generative AI to improve the performance of smaller models. The technical report explains in detail: "Given a large model used as a teacher, we learn smaller 9B and 2.6B models by distilling from the probability given by the teacher of each token 𝑥 given its context 𝑥𝑐, i.e., 𝑃𝑇(𝑥 | 𝑥𝑐). More precisely, we minimize the negative log-likelihood between the probabilities from the teacher and the student:

min over 𝑃𝑆 of Σ𝑥 −𝑃𝑇(𝑥 | 𝑥𝑐) log 𝑃𝑆(𝑥 | 𝑥𝑐)

where 𝑃𝑆 is the parameterized probability of the student. In practice, we run inference on the teacher once and store the probabilities. Since the vocabulary has 256k entries, we only store a sampled subset of the teacher probabilities."
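Numerically, the distillation objective quoted above is just a cross-entropy between the teacher's stored next-token probabilities and the student's. Here is a toy sketch with a 5-word vocabulary (instead of 256k) and made-up probabilities:

```python
import numpy as np

def softmax(logits):
    """Convert raw logits into a probability distribution."""
    z = np.exp(logits - logits.max())
    return z / z.sum()

def distillation_loss(teacher_probs, student_logits):
    """Cross-entropy H(P_T, P_S) = -sum_x P_T(x) log P_S(x)."""
    student_probs = softmax(student_logits)
    return -np.sum(teacher_probs * np.log(student_probs))

teacher = np.array([0.7, 0.1, 0.1, 0.05, 0.05])  # stored teacher probs
perfect = np.log(teacher)   # student logits that match the teacher exactly
worse = np.zeros(5)         # uniform student

# The loss is minimized when the student reproduces the teacher:
print(distillation_loss(teacher, perfect) < distillation_loss(teacher, worse))
```

A matching student drives the loss down to the teacher distribution's entropy, which is why the paper can train the student against a single stored teacher pass rather than live teacher inference.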

 

3. Post-training

This part uses techniques commonly seen in other generative AIs. According to the technical report, it is implemented in the following process:

“For post-training, we fine-tune our pre-trained models into instruction-tuned models. First, we apply supervised fine-tuning (SFT) on a mix of text-only, English-only synthetic and human-generated prompt-response pairs. We then apply RLHF on top of these models with the reward model trained on labelled English-only preference data and the policy based on the same prompts as the SFT phase. Finally, we average the models obtained after each phase to improve their overall performance.”

It's noteworthy that knowledge distillation is adopted again. "We run behavioral cloning on synthetic and real prompts, and responses predominantly synthetically generated by the teacher, that is a larger model. We also run distillation from the teacher on the student’s distribution." In the future, knowledge distillation from large models to small models may become common practice. It's exciting to see.

 

What do you think? Gemma2 seems to be a model with high potential even at small sizes, which is promising. The 2.6B model is also expected to be released soon. By the way, Google, which created Gemma2, and Meta, which created the Llama3 we covered last time, have been rivals in the open-source world for more than eight years with "TensorFlow vs. PyTorch." A similar battle now seems to have begun in generative AI. Next time, I'd like to try various things with the Gemma2 model. Stay tuned!

 
 

1) Gemma 2 is now available to researchers and developers, Google, 27 June 2024
2) Gemma 2 technical report, Google DeepMind, 27 June 2024
3) Effective Approaches to Attention-based Neural Machine Translation, Minh-Thang Luong, Hieu Pham, Christopher D. Manning, Computer Science Department, Stanford University, 20 Sep 2015
4) Longformer: The Long-Document Transformer, Iz Beltagy, Matthew E. Peters, Arman Cohan, Allen Institute for Artificial Intelligence, 2 Dec 2020
5) On-Policy Distillation of Language Models: Learning from Self-Generated Mistakes, Rishabh Agarwal, Nino Vieillard, Yongchao Zhou, Piotr Stanczyk, Sabela Ramos, Matthieu Geist, Olivier Bachem, Google DeepMind, Mila, University of Toronto, 17 Jan 2024

 


The new generative AI "Google Gemini 1.5 Pro" is as amazing as expected!

Last month, I informed you that Google had released a new generative AI called "Gemini 1.5 Pro" (1). Today, Gemini 1.5 Pro finally arrived at ToshiStats, and I would like to experiment with it right away.



1. Can the 1 million token long context window really work?

Gemini 1.5 Pro boasts an incredibly long context window of 1 million tokens, far beyond previous LLMs. It is so extraordinary that anyone would wonder, "Can this really work?" Today, I would like to explore its capabilities. I have prepared two experiments: the first is to extract detailed information, including numbers, from relatively short material, and the second is to see whether it can answer comprehensive questions well from material over 200,000 tokens long. Let's begin.



2. Information extraction from Toyota Motor Corporation's financial results  

First, I will check if it can accurately extract numerical information from Toyota Motor Corporation's financial results for the fiscal year ended March 2023. The number of pages is 28, and the number of tokens is about 27,000, which is not a long material, but it is a task often seen in practice. This time, I have prepared 13 questions. Let's upload the material to Google AI Studio and ask the questions one by one.

Google AI Studio 




Here are the results. The correct answer rate is about 88%.

Questions and Results & Computation Time

For question 8, the financial results refer to ROE as "Return on equity attributable to owners of the parent company." This long phrase in Japanese (kanji) may have been difficult for the American-born Gemini 1.5 Pro to understand. However, it derived the value itself by calculating from other related numbers, so I gave it a △ evaluation worth 0.5 points. It seems to handle Japanese quite well. By the way, the average computation time was around 30 seconds, so it seems the computational resources have also been optimized. Just to note, this material was published on May 10, 2023, so it's possible that it has been incorporated into the training data.
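For transparency, here is the grading scheme written out as a small function: a correct answer ("o") scores 1, the partial-credit "△" scores 0.5, and a miss ("x") scores 0. The result list below is illustrative, arranged to reproduce the roughly 88% figure, not the actual per-question outcomes.

```python
# Grading: o = 1 point, △ = 0.5 points (partial credit), x = 0 points.

def answer_rate(results):
    score = {"o": 1.0, "△": 0.5, "x": 0.0}
    return sum(score[r] for r in results) / len(results)

results = ["o"] * 11 + ["△"] + ["x"]   # 13 questions, illustrative
print(f"{answer_rate(results):.1%}")    # -> 88.5%
```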





3. Information extraction from the Financial Inspection Manual 

Next, I would like to analyze a lengthy financial administrative document, the "Financial Inspection Manual," with Gemini 1.5 Pro. This is an unforgettable document for anyone who has worked in Japan's financial industry since around 2000, as it sparked discussions on establishing risk-management and governance frameworks. It runs to 366 pages and 220,526 tokens, exceeding 200,000 tokens. Gemini 1.5 Pro is the world's first generative AI that can take in a document of this volume all at once. I was able to upload it smoothly, as shown below, and it answers properly when asked questions.

Uploading the Financial Inspection Manual

I actually asked, "What should be done to verify the appropriateness of the credit rating assignment process?" and received the following response (showing only items 1 and 2 here, as it is long). To verify the appropriateness of the credit rating assignment process, the following items need to be examined:

* 1. Validity of rating criteria

    * Do the rating criteria appropriately reflect the credit risk of the debtors?

    * Are the rating criteria defined in an objective and verifiable manner?  

    * Are the rating criteria consistent with the debtor classifications?

    * Are the rating criteria periodically reviewed?

* 2. Appropriateness of rating assignment procedures      

    * Are the rating assignment procedures carried out appropriately in accordance with the rating criteria?

    * Are the rating assignment procedures implemented in an accurate and verifiable objective manner?

    * Can the rating assignment procedures timely reflect changes in the debtor's situation? 

    * Are the rating assignment procedures performed by an independent department not influenced by sales departments, etc.?

It continues with 3: Model Utilization, 4: Data Maintenance, 5: Internal Audit, and so on. It is a comprehensive and holistic answer that follows the principles of risk management and is also consistent with the descriptions around page 142 of the Financial Inspection Manual. Furthermore, the descriptions related to credit risk management are in the middle of this manual, and there were past comments that generative AIs tend to have lower accuracy in the middle parts of long data. However, Gemini 1.5 Pro does not seem to have any issues. Despite the specialized content, it provided a very good answer. The computation time was also around 90 seconds, which is sufficiently practical. It will surely make a good risk management assistant.  





How was that? It seems it can analyze materials of over 200,000 tokens quite accurately, even in Japanese. It might also be useful for internal document-search tasks at work. Next time, I would like to challenge even more difficult tasks in English. Stay tuned!

 

1) Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context, Gemini Team, Google

Copyright © 2024 Toshifumi Kuga. All rights reserved.


"REST MEETS REACT" is a new prompt-engineering method using synthetic data. It holds immense potential for enhancing AI without relying on human-generated data

Happy New Year! Thank you for your continued support. Right at the start of the year, Google DeepMind has announced a new, advanced prompt-engineering method. It is a paper titled "REST MEETS REACT: SELF-IMPROVEMENT FOR MULTI-STEP REASONING LLM AGENT" (1). It incorporates fine-tuning with synthetic data, which looks promising! Let's get started.

 

1. Prompt Structure

This prompt is designed with a web Q&A system in mind, one that answers complex questions. The structure is as follows:

The blue part in the figure above represents the flow of the agent described in the prompt, which aims to answer complex questions using web search. In the latter half, "Relevance self-check" and "Grounding self-check" are steps where the agent checks its own answers. For a detailed explanation of the entire flow, please refer to the paper.
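To make that flow concrete, here is a minimal Python sketch of the agent loop. The paper's agent is driven by an LLM and live web search; below, both are replaced with stubs, and the function names and the trivially simple check logic are my own illustration, not the paper's implementation. Only the control flow survives: search, draft an answer, then run the two self-checks.

```python
# Minimal sketch of the multi-step search agent flow described above.
# The LLM and the web search are stubbed; only the control flow is real.

def web_search(query):
    # Stub: a real implementation would call a search API.
    return [f"snippet about {query}"]

def draft_answer(question, evidence):
    # Stub: a real implementation would prompt an LLM with the evidence.
    return f"Answer to '{question}' based on {len(evidence)} snippets."

def relevance_self_check(question, answer):
    # Does the answer actually address the question?
    return question in answer

def grounding_self_check(answer, evidence):
    # Is the answer supported by collected evidence at all?
    return len(evidence) > 0

def search_agent(question, max_steps=3):
    evidence = []
    answer = ""
    for _ in range(max_steps):
        evidence += web_search(question)
        answer = draft_answer(question, evidence)
        if relevance_self_check(question, answer) and grounding_self_check(answer, evidence):
            return answer  # both self-checks passed
    return answer  # give up after max_steps

print(search_agent("Who proposed ReAct?"))
```

In the real agent, each self-check is itself an LLM call with its own prompt; the point of the sketch is only that a failed check sends the agent around the loop again.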

 

2. "Reward Model" - The Key to Success

Now, let's explain the core part of self-improvement. In a nutshell, it's about "creating new high-quality data and fine-tuning the model with it." This function consists of three parts:

  • Grow: Start with a model capable of running the Search Agent (the Google PaLM 2-L model is used for this purpose) and collect trajectories for a selected set of 2,000 public questions. "Trajectory," though perhaps an unfamiliar term, refers to the reasoning process and is commonly used in reinforcement learning.

  • Improve: Convert the trajectories into fine-tuning data, using the Reward model to select only the high-quality ones. No external data, such as labels, are used.

  • Fine-tuning: Fine-tune a new model of the same size on this new data, so that it performs better than the original.
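As a rough illustration, the three steps above might be sketched like this in Python. Everything here is a hypothetical stand-in rather than the paper's code: the model and fine-tuning are stubs, the scores are random, and the `threshold` value is invented. The sketch only shows how Grow, Improve, and Fine-tuning feed into each other.

```python
# Sketch of the Grow -> Improve -> Fine-tune cycle with everything stubbed.
import random

def grow(model, questions):
    # Collect one trajectory (reasoning trace) per question with the current model.
    return [{"question": q, "trajectory": model(q), "score": random.random()}
            for q in questions]

def improve(trajectories, reward_model, threshold=0.5):
    # Keep only the trajectories the reward model rates highly.
    return [t for t in trajectories if reward_model(t) >= threshold]

def fine_tune(base_model, data):
    # Stub: return a "new" model of the same size, trained on the filtered data.
    def new_model(q):
        return f"better answer to {q} (trained on {len(data)} examples)"
    return new_model

def rest_iteration(model, questions, reward_model):
    trajectories = grow(model, questions)
    good_data = improve(trajectories, reward_model)
    return fine_tune(model, good_data)

model = lambda q: f"answer to {q}"
reward_model = lambda t: t["score"]
for _ in range(2):  # repeat the cycle, each time starting from the improved model
    model = rest_iteration(model, ["q1", "q2", "q3"], reward_model)
```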

This process is then repeated, starting from the better model and its new data. As a result, accuracy improves using only self-generated data, without adding any external data, which makes the accuracy of the Reward model's ranking crucial. In this paper, the Reward model is constructed as a set of prompts. Let's look at these prompts more closely, showing only the initial part.

  • The goal of this rating is to filter out bad actions so that they'll be excluded from the fine-tuning dataset.

  • Overall, we want the agent to produce relevant and grounded answers with minimal steps. Anything deviating from this goal is considered bad.

  • If any element (thoughts, comments, etc.) is empty, then it's automatically bad.

"Filter out" indicates a method of discarding items that don't meet the standards and adopting only the high-quality data that remains. Please see the paper (p19) for details.

 




3. Improve Accuracy with Synthetic Data

Several papers published in late 2023, including this one, focus on using a Reward model to create high-quality synthetic data for fine-tuning and improving model accuracy. Vigorous research is expected to continue in 2024, yielding various results. Especially in the LLM field, collecting high-quality training data is becoming increasingly difficult, and fine-tuning with synthetic data is a promising solution.


 


How was it? Improving model accuracy with synthetic data is expected to be a very effective development method for startups like us, which cannot collect vast amounts of data on their own. Our blog will continue to follow synthetic data and other technological innovations, so stay tuned. Wishing you a great year!






1) "ReST meets ReAct: Self-Improvement for Multi-Step Reasoning LLM Agent", Renat Aksitov, Sobhan Miryoosefi, Zonglin Li, Daliang Li, Sheila Babayan, Kavya Kopparapu, Zachary Fisher, Ruiqi Guo, Sushant Prakash, Pranesh Srinivasan, Manzil Zaheer, Felix Yu, and Sanjiv Kumar; Google Research, Google DeepMind, Google; 15 Dec 2023, https://arxiv.org/abs/2312.10003





Copyright © 2023 Toshifumi Kuga. All rights reserved.




