Llama3-8B has shown impressive performance even when fine-tuned on Japanese data. Its high base performance likely plays a significant role in this.

In the previous post, we introduced the strong performance of Llama3-70B. But Llama3 also comes in a smaller 8B variant, and I have been wanting to fine-tune it for my own tasks. Because it is small, it is cheap and fast to run, so if you have a well-defined task in mind, the 8B model is a serious option. This time, therefore, we fine-tune Llama3-8B to classify the publicly available Livedoor-news Japanese articles (3) into genres, and check its accuracy. Let's get started!

 
1. Creating an Alpaca-style dataset

Livedoor-news Japanese articles are divided into the following 9 genres. The distribution of each genre is shown in the following chart.

  • kaden-channel
  • livedoor-homme
  • topic-news
  • sports-watch
  • peachy
  • dokujo-tsushin
  • it-life-hack
  • movie-enter
  • smax

Distribution and sample size of each genre

This time, we randomly extract 1,000 samples each for the training and validation sets, and classify each article into the 9 genres above to verify whether high accuracy can be achieved. We adopted the Alpaca format for the data: as shown below, each sample consists of an instruction, an input, and an output, where the instruction is common to all samples.
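As a concrete illustration, a Livedoor article and its gold genre label can be wrapped into an Alpaca-style record like this. Note that the instruction wording and the helper name `to_alpaca_record` below are my own illustrative choices, not the exact text used in the actual dataset:

```python
# Sketch: converting one Livedoor news article into an Alpaca-style record.
# The instruction text here is illustrative, not the exact wording we used.

GENRES = [
    "kaden-channel", "livedoor-homme", "topic-news", "sports-watch",
    "peachy", "dokujo-tsushin", "it-life-hack", "movie-enter", "smax",
]

# The instruction is shared by all samples.
INSTRUCTION = (
    "Classify the following Japanese news article into one of these genres: "
    + ", ".join(GENRES) + "."
)

def to_alpaca_record(article_text: str, genre: str) -> dict:
    """Wrap one article and its gold genre label in the Alpaca format."""
    assert genre in GENRES, f"unknown genre: {genre}"
    return {
        "instruction": INSTRUCTION,  # common to all samples
        "input": article_text,       # the article body
        "output": genre,             # the genre label the model should produce
    }

record = to_alpaca_record("iPhoneの新機能を紹介する記事…", "it-life-hack")
```

A list of such records can then be saved as JSON and loaded with the usual dataset tooling for training.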

Example of Livedoor news

 

2. Fine-tuning with Hugging Face TRL + "unsloth"

This time, we used Hugging Face's TRL (1), a library for fine-tuning LLMs, together with "unsloth", a library that accelerates training, to fine-tune efficiently. The development environment was Google Colab with a paid L4 GPU instance. Training took about 100 minutes for 4 epochs. The L4 has 22.5GB of GPU RAM, which is ample for this training. Also, "unsloth" provides a 4-bit quantized model ready for fine-tuning, so you can conveniently download it directly from the Hugging Face Hub. This training process was based on the "unsloth" notebook (2); if you are interested in speeding up training, please check it out.
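As a rough sanity check on why 22.5GB is ample, here is a back-of-envelope VRAM estimate for 4-bit (QLoRA-style) fine-tuning of an 8B model. Every number below is an assumption for illustration (adapter size, activation allowance), not a measured value from this run:

```python
# Back-of-envelope VRAM estimate for 4-bit LoRA fine-tuning of an 8B model.
# All numbers are rough assumptions, not measurements from the actual run.

params = 8e9                      # ~8 billion weights in Llama3-8B
weight_gb = params * 0.5 / 1e9    # 4-bit quantized weights: 0.5 bytes/param -> ~4 GB

lora_params = 50e6                # assumed LoRA adapter size (tens of millions)
# fp16 adapter weights (2 B) + Adam moment states (8 B) + fp32 grads (4 B) per param
lora_gb = lora_params * (2 + 8 + 4) / 1e9

activation_gb = 6.0               # generous allowance for activations at modest seq lengths

total_gb = weight_gb + lora_gb + activation_gb
print(f"~{total_gb:.1f} GB")      # comfortably under the L4's 22.5 GB
```

The base weights dominate but shrink to roughly 4 GB at 4-bit, which is why a single L4 handles this comfortably; only the small LoRA adapter carries optimizer state.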

"Unsloth" model

 

3. Verifying model accuracy

First, I simply asked: "The skill to score a penalty kick from this impossible angle is amazing." The answer was "sports-watch". Since it's a soccer story, that seems like a reasonable answer.

Next, I asked, "Which is better, iPhone or Android?" The answer was "it-life-hack". This is also a good answer.

Typing queries in one by one is tedious, and the real articles are longer and more complex, so this time I evaluated on the 1,000 prepared validation samples. The result was a very good accuracy of 94.5%. Since the input is Japanese, I expected Llama3 to struggle, and was surprised that it easily exceeded 90%. This is likely the effect of pre-training on a huge corpus of 15 trillion tokens. Even the 8B model seems practical for Japanese once fine-tuned.
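Accuracy here is simply exact-match between the genre string the model generates and the gold label. A minimal sketch of that computation (the toy prediction lists below are made up for illustration; in the real evaluation the predictions come from the fine-tuned model):

```python
# Sketch: exact-match accuracy over (prediction, gold label) pairs.

def accuracy(predictions: list[str], gold_labels: list[str]) -> float:
    """Fraction of predictions that exactly match the gold genre string."""
    assert len(predictions) == len(gold_labels)
    hits = sum(p.strip() == g for p, g in zip(predictions, gold_labels))
    return hits / len(gold_labels)

# Toy illustration with made-up labels (3 of 4 correct):
preds = ["sports-watch", "it-life-hack", "peachy", "smax"]
gold  = ["sports-watch", "it-life-hack", "topic-news", "smax"]
print(accuracy(preds, gold))  # 0.75
```

Stripping whitespace from the generated text before comparing helps avoid penalizing the model for trailing newlines in its output.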

 

How was it? Although Llama3-8B is small, it has high potential and should prove useful in many settings. Fine-tuning is needed for each task, but "unsloth" can speed it up; if you want to shorten training time, please try it. This time, we obtained sufficient accuracy in about 2 hours on a general-purpose single GPU. It's a reliable ally for small startups like us! If you want to try it yourself, you can use my notebook here.

We will update you as we gain new insights. Stay tuned!

 

(1) TRL - Transformer Reinforcement Learning https://huggingface.co/docs/trl/en/index

(2) Alpaca + Llama-3 8b full example.ipynb https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing#scrollTo=iHjt_SMYsd3P

(3) Livedoor-news Japanese articles https://www.rondhuit.com/download.html

 

Notice: ToshiStats Co., Ltd. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithms or ideas contained herein, or acting or refraining from acting as a result of such use. ToshiStats Co., Ltd. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on ToshiStats Co., Ltd. and me to correct any errors or defects in the codes and the software.