
Fine-tuning GPT-3.5 with synthetic text generated by GPT-4. The accuracy improved! In the future, we might not even need real training text?

Hello! Even though we are in the latter half of September, it is still quite hot in Japan. The photos may feel mismatched, but I'm deliberately sticking to the autumn theme in the hope that it will cool down soon. It might, however, stay hot for the rest of the month.

Now, about the fine-tuning of GPT-3.5 that I introduced the other day: it is certainly a hot topic, and I think companies have a strong need to specialize the model's performance for specific tasks. For this reason, we conducted an experiment assuming a case where you want to proceed even without data at hand: we generated synthetic text with GPT-4 and then fine-tuned on it.

 
  1. Experiment Details

Just like the previous experiment, the task was to determine which financial product a given English-language complaint is about. The complaints come from the banking industry, so the task involves distinguishing between six types of financial products, such as mortgages and bank accounts. As last time, the data used for fine-tuning was minimal, with 100 samples for validation. The training data, however, is different this time: we generated customer complaint emails with GPT-4, and at a glance they are indistinguishable from real ones. GPT-4's performance is indeed impressive. We generated 15 such customer complaints for training and then proceeded with fine-tuning.

synthetic text generated by GPT-4
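As a rough illustration, synthetic complaints like these can be formatted into the chat-style JSONL that the fine-tuning API expects. The two sample complaints, the label names, and the system prompt below are my own invented stand-ins, not the actual GPT-4 outputs used in the experiment:

```python
import json

# Hypothetical stand-ins for GPT-4-generated complaints; the real
# synthetic emails were much longer.
synthetic_samples = [
    ("I was charged a late fee on my mortgage even though I paid on time.",
     "Mortgage"),
    ("My checking account was closed without any notice or explanation.",
     "Checking or savings account"),
]

SYSTEM = "Classify the customer complaint into one of six financial products."

def to_chat_example(text, label):
    """Convert one (complaint, product) pair into the chat fine-tuning format."""
    return {"messages": [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": text},
        {"role": "assistant", "content": label},
    ]}

# Write one JSON object per line, as the fine-tuning API requires.
with open("train_synthetic.jsonl", "w") as f:
    for text, label in synthetic_samples:
        f.write(json.dumps(to_chat_example(text, label)) + "\n")
```

The resulting file can then be uploaded and referenced when creating the fine-tuning job.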


2. Experiment Results

Since this was our first time using synthetic text, we were unsure about the outcome, but we were able to confirm the effectiveness of fine-tuning, as shown below. Though the improvement isn't dramatic with just 15 samples, the accuracy on this task improved compared with the base GPT-3.5, which scored between 0.5 and 0.55.

For more details on the experiment, please refer to this notebook.

 

3. Discussion

Fine-tuning with synthetic text was a method we had not even considered before, but with the arrival of GPT-4 it is becoming realistic. There are several points to work out, such as the number of samples and how to write the prompts, but the advantage of being able to start even without data is significant. Currently, GPT-4 is effectively the only option for the generation model, but new models such as Google's Gemini are expected to become available next year. Technology is advancing rapidly, so we can expect a lot more in the future.

So, what did you think? We will continue to conduct various experiments and share our findings here. See you again soon!




Copyright © 2023 Toshifumi Kuga. All rights reserved.

Notice: ToshiStats Co., Ltd. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithms or ideas contained herein, or acting or refraining from acting as a result of such use. ToshiStats Co., Ltd. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on ToshiStats Co., Ltd. and me to correct any errors or defects in the codes and the software.

The "Graph of Thoughts" might pave the way for new avenues in "human and LLM (Large Language Model) collaboration"!

Last week, I came across an intriguing paper on Large Language Models (LLMs). It appears to further develop the "Tree of Thoughts" (ToT) reasoning method I mentioned before, introducing a new technique called the "Graph of Thoughts" (GoT). Let's take a closer look.

 
  1. Features of GoT

First, let's compare various methods using the chart provided in the paper.

The far right shows the newly introduced GoT. The key differences from ToT appear to be that GoT allows thoughts to be merged, and that users can define the shape of the graph themselves. Incidentally, this merging is referred to as "aggregation" in the paper. While GoT may look similar to ToT, these differences could prove significant. Let's explore them in more detail.

 

2. Four Key Modules

GoT (Graph of Thoughts) has the following four crucial modules. Understanding these will clarify the differences between it and ToT (Tree of Thoughts).

  • Prompter

  • Parser

  • Scoring & Validation

  • Controller

Let's look at each one in detail. The Prompter, as the name suggests, is the module responsible for creating prompts. The Parser extracts the required information, or "thoughts," from the LLM's responses. You might think of the Prompter as handling input and the Parser as handling output. Scoring & Validation is the module that evaluates the gathered thoughts; this evaluation lets us select which thoughts are worth keeping. Finally, let's elaborate on the Controller. It is responsible for adding new thoughts or merging multiple thoughts, a process referred to as a "transformation." The Controller decides which transformations should be applied to which thoughts and passes this information to the Prompter, making it the critical module for executing problem-solving strategies. It relies on two structures: the Graph of Operations (GoO), a user-defined execution plan for the operations, and the Graph Reasoning State (GRS), which maintains the state of the ongoing LLM reasoning process.
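To make the division of labor concrete, here is a toy sketch of the four modules in Python. The string-based "LLM", the scoring rule, and all names here are invented stand-ins to show the control flow, not the paper's implementation:

```python
def prompter(thought, operation):
    """Prompter: build a prompt asking the LLM to apply `operation` to `thought`."""
    return f"{operation}: {thought}"

def fake_llm(prompt):
    """Stand-in for a real LLM call: just uppercases the prompt."""
    return prompt.upper()

def parser(response):
    """Parser: extract the 'thought' from the raw LLM response."""
    return response.split(": ", 1)[-1]

def score(thought):
    """Scoring & Validation (toy rule): prefer shorter thoughts."""
    return -len(thought)

def controller(initial_thoughts, operations):
    """Controller: apply each operation to every thought, keep the best one."""
    thoughts = list(initial_thoughts)
    for op in operations:          # the GoO would define this sequence
        candidates = [parser(fake_llm(prompter(t, op))) for t in thoughts]
        thoughts = [max(candidates, key=score)]   # GRS after this step
    return thoughts[0]
```

In a real system, `fake_llm` would be an API call and `score` would itself be an LLM evaluation, but the round trip Prompter → LLM → Parser → Scoring → Controller is the same.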


3. Considering the Number Sorting Problem

Since talking only in the abstract may not advance understanding, let's consider an actual task: sorting a list of 64 numbers in ascending order. Here we'll see how the Graph of Operations (GoO) comes into play. In the chart below, each thought is tagged with operations such as G (Generate), S (Sort), K (Keep the best), and A (Aggregate, i.e., merge). Initially, the list of 64 numbers is divided into four lists of 16 numbers each. Each of these lists is then sorted and evaluated, and only the most accurate sorting attempt is kept. The kept lists are then merged pairwise into new lists of 32 numbers. You'll see the various operations at work as the process progresses.

For those who want to delve deeper, detailed explanations are available here, particularly in the green part of the chart above.
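The flow just described can be mimicked in plain Python. Here the LLM's imperfect sort attempts are simulated with a deliberate swap, so this is only a toy model of the GoO, not the paper's code:

```python
import heapq

def split(nums, k=4):
    """G (Generate): divide the list into k equal sublists."""
    size = len(nums) // k
    return [nums[i * size:(i + 1) * size] for i in range(k)]

def imperfect_sort(sub):
    """Stand-in for an LLM sort attempt that swaps one adjacent pair."""
    out = sorted(sub)
    out[0], out[1] = out[1], out[0]
    return out

def keep_best(candidates):
    """K (Keep best): score each candidate by how sorted it is."""
    def score(c):
        return sum(a <= b for a, b in zip(c, c[1:]))
    return max(candidates, key=score)

def aggregate(a, b):
    """A (Aggregate): merge two sorted lists into one."""
    return list(heapq.merge(a, b))

nums = list(range(63, -1, -1))                 # 64 numbers, reversed
subs = split(nums)                             # four lists of 16
best = [keep_best([imperfect_sort(s), sorted(s)]) for s in subs]   # S + K
result = aggregate(aggregate(best[0], best[1]),
                   aggregate(best[2], best[3]))                    # A twice
assert result == sorted(nums)
```

In GoT, each of these functions would be a prompted LLM call, but the user-defined graph of G, S, K, and A operations is exactly this shape.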

It might feel complex at a glance, but it's user-controllable, allowing you to incorporate your domain knowledge. I am excited to conduct various experiments in the future.

Thank you for your attention! I will keep you updated on the progress of GoT and hope to share more with you soon. Stay tuned!









1) "Graph of Thoughts: Solving Elaborate Problems with Large Language Models",  Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Michal Podstawski, Hubert Niewiadomski, Piotr Nyczyk, Torsten Hoefler, 21 Aug 2023, https://arxiv.org/abs/2308.09687v2








Fine-tuning has come to ChatGPT. Its effects are outstanding, and if applied appropriately to the task, we can perhaps expect significant improvements in accuracy!

Hello everyone, how are you doing? Although the illustration is autumn-like, it seems that summer will stick around for a while in Japan.

Meanwhile, I suddenly received a message from OpenAI saying, "The fine-tuning feature has been implemented." I have always fine-tuned open-source models, so I was a little disappointed that ChatGPT lacked this feature. But it has finally made its appearance. I guess OpenAI got a little serious. Let's get started right away.

 
  1. Is fine-tuning effective for ChatGPT?

I'm sure you all want to know: "Does fine-tuning work well with ChatGPT?" So I created a small dataset and ran a simple experiment. To put it briefly, something amazing is happening! The table below shows the results.

Accuracy for 100 samples

I had GPT-3.5 perform a six-class classification task and expected some fine-tuning benefit, but exceeding an accuracy of 0.8 was unexpected. Plain GPT-3.5 only barely surpassed 0.5, so I initially thought the model's potential was limited. The first fine-tuning run, however, produced an accuracy of 0.88, which was hard to believe. After changing the seed and regenerating the data, it still yielded an accuracy near 0.8, completely different from the baseline. Fine-tuning and ChatGPT seem to be an outstanding match.

 

2. Experiment Details

In this experiment, the task was to identify what type of financial product a given English complaint was about. This is a six-class classification over financial products such as home loans and bank accounts, and the data used for fine-tuning consisted of 100 samples each for training and validation, a minimal configuration. The training loss decreased steadily and appeared to approach zero (in fact it continued to fall). In short, training went well, and using this fine-tuned model yielded the results described in section 1.
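For reference, launching such a job boils down to a handful of parameters. The helper below just assembles them; the commented-out API call is an untested sketch based on the openai Python library as of 2023, and the file IDs are placeholders:

```python
def build_job_request(training_file_id, validation_file_id,
                      model="gpt-3.5-turbo"):
    """Assemble the keyword arguments for a fine-tuning job request."""
    return {
        "model": model,
        "training_file": training_file_id,
        "validation_file": validation_file_id,
    }

# With the openai package installed and an API key configured, the job
# could then be launched roughly like this (sketch; the file IDs come
# from a prior file-upload step):
#   import openai
#   job = openai.FineTuningJob.create(
#       **build_job_request("file-train", "file-val"))
```

Once the job finishes, the returned model name is used in place of `gpt-3.5-turbo` for inference.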

 

3. Discussion

From this experiment alone, we can't conclude that fine-tuning always succeeds. Various cases will emerge in the future, and it will be important to judge comprehensively based on those results. In particular, minimal prompt engineering was done this time; combining prompt engineering with fine-tuning to achieve the best performance remains a challenge for the future. There are also many practical points to consider, such as cost and computation time, so trial and error will be required. GPT-4 does perform well on this task, with an accuracy around 0.8, but it is expensive, and deploying it isn't always straightforward. Even in such cases, fine-tuning gives us a new weapon, increasing our options and potentially moving us a step forward in problem-solving.

How was it? I would like to introduce more experiments and their results here in the future. Stay tuned!





"Llama2" is a great LLM as it is Open source and for commercial use. I want to try many applications with this language model.

Hi friends, I would like to introduce a new LLM released by Meta on July 18, 2023, called "Llama2". I have run some experiments with this model. Let's start!

 

1. What is Llama2?

"Llama2" is a language model from Meta AI. Many researchers are excited because it is open source and available for commercial use. Its specs are shown in the table below.

 
 

2. Let us extract information from the article in English

I want to perform a small experiment to extract the following information from text:

  • sentiment

  • root cause of the sentiment

  • name of product

  • name of the maker of the product

I wrote my prompt and a fictional story in the form of an email, then ran Llama2 13B chat. Here are the results.

Wow, this looks good! I can obtain the information I need from the text. Unfortunately, the model cannot produce its output in Japanese.
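A prompt along these lines could be assembled as follows. This is a minimal sketch: the exact wording is my own, not the prompt used in the experiment, and it assumes the Llama2-chat `[INST]` instruction format:

```python
FIELDS = [
    "sentiment",
    "root cause of the sentiment",
    "name of the product",
    "name of the maker of the product",
]

def build_extraction_prompt(text):
    """Wrap the email text in a Llama2-chat style extraction instruction."""
    bullets = "\n".join(f"- {f}" for f in FIELDS)
    return (f"[INST] Extract the following information from the text below:\n"
            f"{bullets}\n\nText:\n{text} [/INST]")
```

The returned string would then be fed to the Llama2 13B chat model for generation.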

 

3. Let us see how it works against Japanese sentences

Next, I would like to apply the same prompt to the Japanese sentences here.

Wow, this looks good too! Although, again, the model cannot produce its output in Japanese.

 

4. Llama2 has a great potential for AI applications in the future!

Today I confirmed that Llama2 works very well in English. When we want to minimize the running costs of AI applications or keep secret or confidential data within our organization, this model can be a good candidate for the AI model in our applications. It is great to have many choices of LLMs in addition to proprietary models such as ChatGPT.

 
 

I also want to mention a great repo on GitHub. It makes it easy to compare many open-source LLMs, and I strongly recommend it to everyone interested in LLMs. Thanks, camenduru!

Thanks for your attention! I would like to follow the progress of Llama2 and share more with you soon. Stay tuned!



"Tree of Thoughts" can go mainstream in prompt engineering!

Today I found a very interesting paper called "Tree of Thoughts (ToT)" (1). With ToT, we can solve tasks that we could not solve before. So I want to share it with you and consider together how it works. Let's start now!

1. Chain of Thought (CoT)

The paper compares four kinds of prompting, as the chart below shows. The leftmost, called "IO prompting," is relatively simple; the rightmost is the most complex, called "Tree of Thoughts (ToT)".

Among the four kinds of prompting, I focus on Chain of Thought (CoT) first, because it gives us the fundamental space to explore. The paper says: "The key idea is to introduce a chain of thoughts z1, · · · , zn to bridge x and y, where each zi is a coherent language sequence that serves as a meaningful intermediate step toward problem solving". With CoT, we have a prompting method that improves the reasoning abilities of LLMs and solves complex tasks effectively. Once we understand how CoT works, let's move on to ToT.

 

2. Tree of Thoughts (ToT)

Let's expand CoT with tree search so that we can apply it to more complex tasks. The paper says: "we introduce a new framework for language model inference, Tree of Thoughts (ToT), which generalizes over the popular Chain of Thought approach to prompting language models, and enables exploration over coherent units of text (thoughts) that serve as intermediate steps toward problem solving." Sounds great! OK, let's consider how it works.

ToT is implemented in four steps. I would like to explain them one by one.

  • decompose the process into thoughts

    • each thought should be small enough so that LLMs can generate promising and diverse samples

  • generate states

    • generate potential thoughts from each state. There are two kinds of methods to do this according to this paper.

  • evaluate each state

    • LLMs evaluate each state to decide how a tree should grow

  • search for the best state

    • If the current state is not good enough, we should search other branches. There are several search algorithms for doing that.
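The four steps above can be condensed into a toy breadth-first (beam) search. The generator and evaluator below are trivial stand-ins for LLM calls, used only to show the control flow of ToT, not the paper's implementation:

```python
def generate(state, k=3):
    """Step 2 stand-in: propose k candidate next thoughts (append a digit)."""
    return [state + str(i) for i in range(k)]

def evaluate(state, target):
    """Step 3 stand-in: score a state by how many leading characters match."""
    return sum(a == b for a, b in zip(state, target))

def tot_search(target, depth, beam=2):
    """Steps 1 and 4: decompose into `depth` small thoughts, keep the
    best `beam` states at each level, and return the best final state."""
    frontier = [""]
    for _ in range(depth):
        candidates = [c for s in frontier for c in generate(s)]
        frontier = sorted(candidates, key=lambda s: evaluate(s, target),
                          reverse=True)[:beam]
    return frontier[0]
```

In a real ToT system, `generate` and `evaluate` are both LLM prompts, and the beam search could be replaced by depth-first search or other algorithms.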


3. ToT can be solved by MCTS

Although ToT can be implemented with relatively simple tree-search algorithms, we can also use more advanced ones, such as Monte Carlo Tree Search (MCTS). MCTS has been famous since AlphaGo defeated a professional human Go player in March 2016. In AlphaGo, MCTS is combined with a neural network; this is sometimes called "model-guided tree search," and it means we no longer need to search the entire state space. In the picture, Demis Hassabis, Google DeepMind CEO, explains how it works (2).

It will be exciting when ToT can be searched with MCTS in the near future, as wider and deeper states can be explored, which should give us even better results.

 

Thanks for your attention! I would like to follow the progress of ToT and share it with you soon. Stay tuned!

 

1) “Tree of Thoughts: Deliberate Problem Solving with Large Language Models” Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, Karthik Narasimhan, 17 May 2023, https://arxiv.org/abs/2305.10601

2) Using AI to Accelerate Scientific Discovery | Campus Lecture with Demis Hassabis, https://www.youtube.com/watch?v=Ds132TzmLRQ&t=1381s

 




“Function calling” is a game changer, as GPT can access the outside world and be turned into an agent easily!

Today, I want to create a website with descriptions of a Japanese sweets collection, just like the “Dorayaki” in the picture above. So I ordered my AI agent to create an awesome website. But is that really possible? I am sure it is! As you know, OpenAI created GPT, a very intelligent large language model (LLM). On 13 June 2023, OpenAI introduced “Function calling”. It can bridge GPT to other systems, APIs, and functions outside. Let me explain step by step!

 

1. What is the advantage of “Function calling”?

Function calling makes it easy for GPT to access outside functions. For example, when you want to create a website where Japanese sweets are explained to customers, you need to connect GPT to a function that can write the website's HTML/CSS code. With “Function calling”, GPT can call this function and pass parameters to it, such as the explanations of the Japanese sweets. The official documentation says: “The latest models (gpt-3.5-turbo-0613 and gpt-4-0613) have been fine-tuned to both detect when a function should be called (depending on the input) and to respond with JSON that adheres to the function signature.”

 

2. The list of “functions” is the key to setting up “function calling”

“Function calling” looks great! But how can we implement it in our code? It is quite simple: just prepare a list of functions. Each entry should have

  • "name"

  • "description"

  • "parameters" : "type" , "properties", "required"

In ChatCompletion.create, we should add “functions=functions” because we want the model to call the function. The rest of the code barely changes. The code below shows an example of a functions list, taken from the official documentation. Please consult the docs for details if needed.
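For the sweets-website agent, such a list might look roughly like this. The function name and properties are my own illustrative choices, not the official example:

```python
# An illustrative `functions` list in the shape the Chat Completions API
# expects; the name, description, and properties are hypothetical.
functions = [
    {
        "name": "create_sweets_page",
        "description": "Generate an HTML page introducing Japanese sweets",
        "parameters": {
            "type": "object",
            "properties": {
                "title": {
                    "type": "string",
                    "description": "Page title in Japanese",
                },
                "explanation": {
                    "type": "string",
                    "description": "Description of the sweets in Japanese",
                },
            },
            "required": ["title", "explanation"],
        },
    }
]

# Passed to the API roughly as (sketch, 2023-era openai library):
#   openai.ChatCompletion.create(model="gpt-3.5-turbo-0613",
#                                messages=messages, functions=functions)
```

When the model decides to call the function, it returns a JSON arguments object matching this schema, which we then pass to our own HTML-generating code.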

 

3. Let us see how the generated web looks like

OK, it is time to see the result from our agent. I instructed it to "Create a website for a pretty Japanese sweets collection". The “title” and “explanation” text was generated by GPT-3.5-turbo and sent to the function that builds the web page. Here is the result. Everything is written in Japanese; the title means “a pretty Japanese sweets collection”. The sentences of the explanation are pretty good! I don't think they need any fixing or modification at all.

If you want to know more details with the code, you can see it here.

https://github.com/TOSHISTATS/Wagashi-Collection-Web-Generation-agent-by-GPT3.5#readme

 

I hope you can now see how AI agents work. I think the potential use cases of “Function calling” are limitless. I tried several of them and found that it can be a game changer for developing LLM application systems. I would like to update my article about AI agents built on OpenAI GPT soon. Stay tuned!

 
 
 


"Large Language Models as Tool Makers" is a good idea to enhance the accuracy of the model while keeping the cost of computation low!

Since GPT-4, one of the most intelligent large language models (LLMs) in the world, was released on 14 March 2023, many people have been surprised by how intelligent it is. This is great, but there is one problem for users: it is not a free service. Users pay for GPT-4 based on how many tokens they use, so running it all day long can be very expensive. Of course we prefer more intelligence, but we should also consider its cost; there is a trade-off between the two. What can we do about it? Last week, I found a good research paper called "Large Language Models as Tool Makers" (1). All the charts below come from this excellent paper. The idea is simple and looks promising for tackling this problem, so let me explain it in more detail.

 

1. Tool user and Tool maker

The basic idea is as follows. We have two LLMs: one called the "tool maker" and another called the "tool user". When a new task arrives, the tool maker creates "tools" for it. Once the tools are ready, they are passed to the tool user for inference. These tools are reusable for solving similar tasks in the future. So GPT-4 can be used only as the tool maker, since it is the more intelligent model, while lightweight models such as GPT-3.5 can serve as the tool user. This way we reduce the computational cost of inference. It sounds great! The chart below explains how it works.

 

2. How can we create tools for our task?

Since we want to maintain the accuracy of the results, the tool maker should create good tools. There are three steps to doing that.

• Tool Proposing: The tool maker generates a Python function to solve the given task. If the proposed tool produces errors, the tool maker proposes another tool.

• Tool Verification: The tool maker generates unit tests using validation samples (three are prepared here) and then executes them. If the tool fails any of these tests, the tool maker attempts to fix the issues. The paper explains: "This stage fulfills two key roles: 1) it provides examples that demonstrate how to convert natural language questions into function calls, and 2) it verifies the tool's reliability, enabling the entire process to be fully automated."

• Tool Wrapping: If execution or verification passes the preset threshold, the tool maker prepares the wrapped tool for the tool user. This step involves wrapping up the function code and providing demonstrations of how to convert a task into a function call. This final product is then ready for use by the tool user.

The chart below shows us how it works.

Once a tool is ready, it is passed to the tool user, who solves various instances of the task using the tools made by the tool maker. The prompt at this stage is the wrapped tool, which contains the function for solving the task and demonstrations of how to convert a task query into a function call. With these demonstrations, the tool user can generate the required function call in an in-context-learning fashion. The function calls are then executed to solve the task. The chart below shows how the process flows from tool maker to tool user.
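The propose → verify → wrap pipeline can be simulated in a few lines of Python. Here the "proposed" tool is hard-coded rather than generated by an LLM, and the task (finding a maximum) is my own toy example, not one from the paper:

```python
# The tool maker's "proposal" as source code (Tool Proposing stand-in).
PROPOSED_TOOL = '''
def solve(numbers):
    """Return the largest number in the list."""
    return max(numbers)
'''

# Three validation samples: (input, expected output).
VALIDATION = [([1, 5, 3], 5), ([7], 7), ([-2, -9], -2)]

def verify(source, samples):
    """Tool Verification: execute the tool against the validation samples."""
    scope = {}
    exec(source, scope)
    return all(scope["solve"](x) == y for x, y in samples)

def wrap(source, samples):
    """Tool Wrapping: bundle the code with call demonstrations."""
    demos = "\n".join(f"# solve({x!r}) -> {y!r}" for x, y in samples)
    return source + "\n" + demos

wrapped_tool = (wrap(PROPOSED_TOOL, VALIDATION)
                if verify(PROPOSED_TOOL, VALIDATION) else None)
```

The wrapped tool, code plus demonstrations, is exactly what the tool user receives as its prompt.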

 

3. Can we confirm if tools that fit our tasks are available or not?

Here, a third LLM called the "dispatcher" comes in. Because the dispatcher maintains a record of the existing tools produced by the tool maker, it can check whether a suitable tool is available when a task is received. If no appropriate tool is found, the dispatcher identifies the instance as a new task and solves it with a powerful model, such as GPT-4. The dispatcher's workflow is shown here.
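A minimal sketch of that workflow, with a dictionary standing in for the tool record and a sorting function standing in for the powerful model (all names here are illustrative):

```python
# Registry of wrapped tools keyed by task name.
tool_cache = {}

def powerful_model(instance):
    """Stand-in for GPT-4 solving a new task directly (here: sorting)."""
    return sorted(instance)

def dispatch(task_name, instance):
    """Route an instance to an existing tool, or fall back to the
    powerful model and cache a tool for future reuse."""
    tool = tool_cache.get(task_name)
    if tool is not None:                 # a matching tool already exists
        return tool(instance)
    # New task: solve with the powerful model and cache it for reuse
    # (a real dispatcher would trigger the tool maker here instead).
    tool_cache[task_name] = powerful_model
    return powerful_model(instance)
```

After the first call for a task, subsequent instances are handled by the cached tool, which is where the cost savings come from.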

 

That's it! This is the main idea of "Large Language Models as Tool Makers", or "LATM" for short. With LATM, we can reduce the computational cost of relying on heavy models such as GPT-4. It is amazing! I hope you enjoyed today's article. I will keep covering new technologies around LLMs. Stay tuned!

 

1) “Large Language Models as Tool Makers” Tianle Cai, Xuezhi Wang, Tengyu Ma, Xinyun Chen, Denny Zhou, 26 May 2023, https://arxiv.org/abs/2305.17126


