Running Google's Generative AI 'Gemma 3' on a MacBook Air M4 is Impressive!
Gemma 3 (1) has been released by Google. While open-source generative AI had seemed to be lagging somewhat behind its Chinese competitors, a model capable of competing with them has finally arrived. Its performance is excellent, of course, but its efficiency is also a key appeal: it can run on a single GPU. So this time, we got our hands on a MacBook Air 13 with the latest M4 chip (10-core GPU, 24GB unified memory, 512GB storage) to run it locally and check its accuracy and computation speed. Let's get started.
1. Data Used in the Experiment
Customer complaints submitted to US banks are publicly available (2). We prepared 10,000 of these complaints and had Gemma 3 predict which specific financial product each one is about. Concretely, this is a 6-class classification task: the model chooses one of the six financial products shown in the figure, and the numbers in the figure are used as the class labels.
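As a rough illustration of the preparation step, here is a minimal sketch of sampling labeled complaints from the dataset's CSV export. The column names ("Product", "Consumer complaint narrative") follow the public CFPB export format but should be checked against your own download, and the function names are hypothetical.

```python
import csv
import random

def load_complaints(path):
    """Stream rows from the CFPB CSV export as dictionaries."""
    with open(path, newline="", encoding="utf-8") as f:
        yield from csv.DictReader(f)

def sample_complaints(rows, products, n, seed=42):
    """Keep rows whose product is one of the target classes and whose
    narrative is non-empty, then draw a reproducible sample of n."""
    filtered = [
        (r["Consumer complaint narrative"], r["Product"])
        for r in rows
        if r["Product"] in products and r["Consumer complaint narrative"]
    ]
    random.Random(seed).shuffle(filtered)
    return filtered[:n]
```

With the six target product names in a set, `sample_complaints(load_complaints("complaints.csv"), products, 10_000)` would yield the (narrative, label) pairs for the experiment.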
2. Hardware and Software Used
We prepared the latest MacBook Air 13. To run Gemma 3 locally, we used Ollama (3). This software is widely used for running generative AI on PCs; it has no GUI, but it is correspondingly lightweight and easy to use. In addition, so that the model can easily be swapped for a different one later, we built the classification pipeline with LangChain (4). The model used this time was gemma-3-12b-it, downloaded via Ollama.
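To make the classification step concrete, here is a minimal sketch of one call per complaint. The article's pipeline uses LangChain; to keep this example dependency-free, it talks to Ollama's local REST endpoint (`/api/generate`) directly, which is what a LangChain Ollama wrapper calls under the hood. The prompt wording, label handling, and function names are assumptions, not the author's exact setup.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_prompt(narrative, labels):
    """Ask the model to answer with the number of one product class."""
    menu = "\n".join(f"{i}. {name}" for i, name in enumerate(labels, 1))
    return (
        "Which financial product is this complaint about?\n"
        f"{menu}\n"
        "Answer with the number only.\n\n"
        f"Complaint: {narrative}"
    )

def parse_label(reply, n_labels):
    """Return the first in-range digit in the model's reply, else None."""
    for ch in reply:
        if ch.isdigit() and 1 <= int(ch) <= n_labels:
            return int(ch)
    return None

def classify(narrative, labels, model="gemma3:12b"):
    """One blocking request to the local Ollama server (must be running)."""
    payload = json.dumps({
        "model": model,
        "prompt": build_prompt(narrative, labels),
        "stream": False,
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.loads(resp.read())["response"]
    return parse_label(reply, len(labels))
```

Looping `classify` over the 10,000 sampled narratives and collecting the returned label numbers reproduces the shape of the experiment.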
3. Confusion Matrix Showing Results
We ran the classification on all 10,000 samples. Although the model was used out of the box, with no fine-tuning, it achieved a solid accuracy of 0.7558. Despite the considerable sample size, the computation took about 14 hours, roughly 5 seconds per complaint, and finished within a day. The latest M4 chip really is powerful. Looking at the confusion matrix, distinguishing "Bank account or service" from "Checking or savings account" appears to have been the hardest part.
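The evaluation itself is straightforward once the predictions are collected. Below is a small self-contained sketch (the function name is illustrative) that builds the confusion matrix and the overall accuracy from paired true and predicted labels, which is how a figure like the one above can be produced.

```python
def confusion_and_accuracy(y_true, y_pred, labels):
    """Build a len(labels) x len(labels) confusion matrix, rows = true
    class, columns = predicted class, plus overall accuracy."""
    idx = {lab: i for i, lab in enumerate(labels)}
    matrix = [[0] * len(labels) for _ in labels]
    for t, p in zip(y_true, y_pred):
        matrix[idx[t]][idx[p]] += 1
    correct = sum(matrix[i][i] for i in range(len(labels)))
    return matrix, correct / len(y_true)
```

The diagonal of the matrix holds the correct predictions; off-diagonal cells such as the "Bank account or service" vs. "Checking or savings account" pair show where the model confuses two classes.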
4. Conclusion
So, what did you think? I have tried various generative AIs in the past, but this was my first experiment with 10,000 samples. The classification accuracy was good, and above all, not having to worry about API costs is one of the big advantages of running generative AI locally. Also, while the data analyzed this time is public, some tasks involve confidential information that cannot be uploaded to the cloud; in such cases, the approach presented here becomes a valid option. I highly encourage everyone to give it a try. We plan to run more experiments with various generative AIs, so stay tuned!
1) gemma3 https://blog.google/technology/developers/gemma-3/
2) Consumer Complaint Database https://www.consumerfinance.gov/data-research/consumer-complaints/
3) Ollama https://ollama.com/
4) LangChain https://www.langchain.com/
Notice: ToshiStats Co., Ltd. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithms or ideas contained herein, or acting or refraining from acting as a result of such use. ToshiStats Co., Ltd. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on ToshiStats Co., Ltd. and me to correct any errors or defects in the codes and the software.