agent — Blog — TOSHI STATS Co.,Ltd.


Marketing AI agents for customer targeting in telemarketing can also be easily implemented using the new library "smolagents." This looks promising!

1. Marketing AI Agent

To reach potential customers efficiently, you need to target those who are likely to purchase your products or services. Marketing aimed at customers with no need for the product is usually wasted effort. Identifying in advance which customers to focus on from a large list, however, is a challenging task. To make such targeting possible without complex analysis, provided you have customer data at hand, we have implemented a marketing AI agent. Anyone with basic Python knowledge should be able to build it without much difficulty. The secret lies in "smolagents" (1), the new framework we introduced previously. Please refer to the official documentation for details.

 

2. Agent Predicting Potential Customers for Deposit-Taking Telemarketing

Let's actually build an AI agent. The theme is "predicting potential customers for deposit-taking telemarketing with an AI agent built on smolagents." As before, we provide the data and let the AI agent write Python code internally and automatically display "the top 10 customers most likely to respond to the telemarketing campaign."

The coding details can be found in the official documentation; here we focus on what kind of prompt to write so that the AI agent predicts potential customers for deposit-taking telemarketing. The key point, as before, is to instruct it to "use sklearn's HistGradientBoostingClassifier for data analysis." This is a gradient boosting estimator that is highly regarded for both accuracy and ease of use.

Furthermore, as the question (instruction), we specifically ask it to calculate the purchase probabilities and return "the 10 customers most likely to respond." The input to the AI agent takes the form "prompt + question."

The AI agent then automatically generates Python code like the following, doing work that a human analyst would otherwise do, and presents "the top 10 customers most likely to respond to marketing." Some customers have a purchase probability close to 100%. Amazing!

         "Top 10 customers most likely to be successfully marketed to"

In this way, the user only needs to ask "tell me the top 10 customers most likely to respond," and the AI agent writes the code that calculates a purchase probability for every customer. The same method can be applied to many other prediction tasks, and I'm looking forward to future developments.

 

3. Future Expectations for Marketing AI Agents

As before, we implemented this with "smolagents." It is easy to implement, and although its behavior isn't perfect, it is reasonably stable, so we plan to use it actively in 2025 to develop various AI agents. The code has been published as a notebook (2). The data used this time is relatively simple demo data with over 40,000 samples; given the opportunity, I would like to see how the AI agent behaves on larger and more complex datasets. More data means more possibilities, so there is much to look forward to. Please look forward to the next AI agent article. Stay tuned!

 
 
 

Notice: ToshiStats Co., Ltd. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithms or ideas contained herein, or acting or refraining from acting as a result of such use. ToshiStats Co., Ltd. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on ToshiStats Co., Ltd. and me to correct any errors or defects in the codes and the software.








I tried using the new AI agent framework "smolagents". The code is simple and easy to use, and I recommend it for AI agent beginners!

At the end of last year, Hugging Face released a new AI agent framework called "smolagents" (1). The code is simple and easy to use, and it even supports multi-agent setups. This time, I built a data-analysis AI agent and tried various things. I hope you find it helpful.

 

1. Features of "smolagents"
The newly released "smolagents" has features that existing frameworks lack. 1) First, it has a simple structure: you can run an AI agent with just 3 to 5 lines of code, which makes it perfect for those getting started with AI agents. 2) Second, because it is released by Hugging Face, a huge number of open-source models are already available on the Hub and can easily be called from the framework. It also supports proprietary models such as GPT-4o, so you can use both open and closed models. 3) Finally, when you execute an agent, it generates and runs Python code, which means you can draw on the assets of the vast Python ecosystem. For those of us who specialize in data analysis, being able to use Python libraries such as sklearn makes it a perfect framework.
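To make point 3 concrete, here is a toy sketch of the generate-then-execute loop that a code agent performs. This is not smolagents itself: `fake_llm` is a hard-coded stand-in for a real model call, and the three-step loop only illustrates the idea.

```python
# Minimal sketch of the "code agent" idea: the LLM emits Python source,
# the framework executes it, and the result is returned to the user.
def fake_llm(task: str) -> str:
    # A real agent would call an LLM here; we hard-code a reply.
    return "result = sum(range(1, 11))"

def run_code_agent(task: str) -> object:
    code = fake_llm(task)           # 1. the model writes Python
    namespace: dict = {}
    exec(code, namespace)           # 2. the framework executes it
    return namespace["result"]      # 3. the result comes back

print(run_code_agent("Add the numbers 1 through 10"))
```

Because the agent's "actions" are ordinary Python, anything installed in the environment (sklearn, pandas, and so on) is automatically available to it.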

 

2. An Agent for Predicting Credit Card Defaults

Now, let's actually build an AI agent. The theme is "an AI agent built on smolagents predicts credit card defaults." Normally, when building a default-prediction model, you would write the code yourself using machine learning libraries such as sklearn. This time, I instead give the agent the data and have it write the Python code internally and automatically display the default probabilities of the first 10 customers.

For how to write the code, please refer to the official documentation; here I present the prompts I actually wrote to make the AI agent predict defaults. The point is to instruct it specifically to "use sklearn's HistGradientBoostingClassifier for data analysis." This estimator is highly regarded for producing accurate machine learning models with little tuning. This is data-analysis domain knowledge, and by including it in the prompt we expect to obtain higher accuracy.

Furthermore, as the question, I add an instruction to specifically calculate "the default probability of the first 10 customers." The input to the AI agent takes the form "prompt + question."
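The "prompt + question" composition can be sketched in a few lines. The wording below is an assumed paraphrase for illustration, not the exact prompt from the article.

```python
# Illustrative prompt construction; the exact wording is an assumption.
PROMPT = (
    "You are a data analyst. Load the credit-card dataset and "
    "use sklearn's HistGradientBoostingClassifier for data analysis."
)
QUESTION = "Calculate the default probability of the first 10 customers."

# The agent receives the concatenation: "prompt + question".
task = f"{PROMPT}\n\n{QUESTION}"
print(task)
```

Keeping the domain-knowledge part (the choice of estimator) in the fixed prompt and the per-run request in the question makes the same prompt reusable across analyses.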

The AI agent then automatically generated the following Python code. Normally this is code I would write myself, but the AI agent did it for me, and as a result the default probabilities for the 10 customers are shown. Amazing!

In this way, the user only needs to say "use sklearn to calculate the default probability," and the AI agent writes the code that computes a default probability for each customer. I tried default prediction this time, but the same approach should cover probability estimation in almost any business setting, such as marketing, customer churn, and human resources. I'm looking forward to future developments.

 

3. Impressions after using "smolagents" for the first time

Until now, I used LangGraph to implement AI agents. I liked it because it allows fine-grained control, but you need to code the state, tools, nodes, edges, and so on separately, and I felt the hurdle was high for beginners. Implementing with "smolagents" this time, I found that if you follow the template, your agent runs after just a few lines of code, so anyone can get started. At the same time, it fully meets the needs of AI developers, so I plan to use it actively in 2025 to develop various AI agents. I have published the code in a notebook (2). Please look forward to the next AI agent article. Stay tuned!

 

(1) Introducing smolagents, a simple library to build agents, Aymeric Roucher, Merve Noyan, Thomas Wolf, Hugging Face, Dec 31, 2024
(2) AI-agent-to-predict-default-of-credit-card-with-smolagent_20250121







Can you believe it? A prediction that "AGI will appear before us in 2027" has been announced by a former OpenAI researcher!

A surprising prediction has been announced. Leopold Aschenbrenner, a former researcher at OpenAI, claims that AGI, matching the capabilities of human experts in various fields, will emerge in 2027, only a few years from now. It’s hard to believe, but let’s take a look at his argument.

 

1. From the Past to the Present, and into the Future

To predict the future, it is important to understand how generative AI has developed from the past to the present. Here, the author introduces the concept of OOM (Orders of Magnitude). Simply put, on a graph where each unit increase on an axis represents a tenfold increase, exponential growth becomes a straight line; OOM=4 means 10,000 times. The vertical axis of the graph represents computational power (physical compute and algorithmic efficiency) in OOM units, with the current pinnacle of AI, GPT-4, as the benchmark.

GPT-2 appeared in 2019, and four years later, in 2023, GPT-4 was introduced. The performance improvement over this period is roughly OOM=5 (100,000 times). If GPT-2 is like a preschooler, GPT-4 is like a smart high schooler. If we extend this straight line from 2023 to 2027, we can predict an AI roughly OOM=5 (100,000 times) beyond GPT-4, a level expected to reach AGI. If connecting the points with a straight line is reasonable, this prediction is not entirely far-fetched.
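The arithmetic behind this extrapolation can be checked in a few lines. The figures below are the rounded OOM numbers quoted above, so the per-year growth factor is an approximation, not a number from the original paper.

```python
# OOM = orders of magnitude, i.e. powers of ten of effective compute.
gpt2_to_gpt4_ooms = 5              # ~100,000x from 2019 to 2023
years = 2023 - 2019

# Implied growth factor per year on the straight-line (log-scale) trend.
per_year = 10 ** (gpt2_to_gpt4_ooms / years)
print(f"~{per_year:.0f}x effective compute per year")

# Extending the same line another four years, 2023 -> 2027:
projected = per_year ** (2027 - 2023)
print(f"~{projected:,.0f}x over GPT-4 by 2027")
```

Four more years at the same rate reproduces another five OOMs, which is exactly the jump the author projects from GPT-4 to AGI-level systems.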

 

2. From Which Fields Will AI Growth Emerge?

We have explained that AI will significantly improve its performance by 2027, but what technological innovations will make this possible? The author points to the following three drivers. This graph also has a vertical axis in OOM, so a one-unit increase means tenfold.

First, the blue part represents improvements in computational resource efficiency. This is achieved through the development of new GPUs and the construction of ultra-large GPU clusters.

The second green part is due to improvements in training and inference algorithms and training data. There are concerns that training data may become scarce in the near future, but it is expected that this can be overcome by generating synthetic data.

The third red part refers to advancements in technology that allow us to extract the necessary information from the raw AI, give precise instructions, and have the AI execute what we want. Even now, research on how to give instructions to AI, such as Chain of Thought, is actively being conducted. In the future, it is expected that AI will function as an agent and further develop, leading to significant performance improvements in AI.

Wow, that sounds amazing.

 

3. What Happens After AGI is Achieved?

Once AGI is achieved, the evolution of AI will move to the next phase. Here, the main players are no longer humans but countless AIs. These AIs are trained for AI development and can work continuously 24/7. Therefore, by taking over AI development from humans, productivity will dramatically increase, and as a result, it is predicted that Super Intelligence, which completely surpasses humans, will be born by 2030. The graph below illustrates this.

It was already challenging to understand the prediction of AGI appearing in 2027, but by this point, it’s honestly beyond imagination to think about what our society will look like. Work, education, taxation, healthcare, and even national security will likely look completely different. We can only hope that AI will be a bright star of hope for all humanity.

 

Let’s conclude with the author’s words. It will be exciting to see if AGI is truly realized by 2027. The original paper is a massive 160 pages, but it’s worth reading. You can access it from the link below, so please give it a try.

 

Again, critically, don’t just imagine an incredibly smart ChatGPT: unhobbling gains should mean that this looks more like a drop-in remote worker, an incredibly smart agent that can reason and plan and error-correct and knows everything about you and your company and can work on a problem independently for weeks. We are on course for AGI by 2027. These AI systems will basically be able to automate basically all cognitive jobs (think: all jobs that could be done remotely).

 

1) SITUATIONAL AWARENESS: The Decade Ahead, Leopold Aschenbrenner, June 2024, situational-awareness.ai

 



I tried the new generative AI model "Claude 3 Haiku". Fast, smart, and low-priced. I want to use it as an AI agent!

On March 14th, "Claude 3 Haiku" (1), the lightest model in the Claude 3 family, was released and became available in the web application and via API. I'm usually drawn to the highest-performing models, but this time I'd like to focus on the lightest one. Recently, algorithms that run repeated model calls, such as AI agents, have become more common. I'd like to use high-end models like GPT-4, but they are very costly to run, so I was looking for a low-cost, high-performance model. "Claude 3 Haiku" is perfect: it costs about 1/60th as much as the high-end "Claude 3 Opus" while still delivering excellent performance. Let's try it out right away. The details of each model are as follows.




1. First, let's test the text

I checked if "Claude3 Haiku" knows about Hiroshima-style okonomiyaki, a hyper-local Japanese food. I used to live in Hiroshima, so I know it well, and I think this answer is generally good. The Japanese is clean, so it passes for now.




Next, I asked about transportation from Tokyo to Osaka. Unfortunately, there was one clear mistake. The travel time by bus is stated as "about 4 hours and 30 minutes," but in reality, it takes around 8 hours. This is a hallucination.



Then I asked about the "Five Forces," a framework for analyzing market competitiveness. It analyzed the automotive industry, and the analysis incorporates the latest examples, such as the threat of electric vehicles as substitutes, making it a sufficient quality starting point for discussion. However, the fact that it's not in a table format is a drawback.





2. Next, let's analyze images.

First, I asked about the number of smartphones, but unfortunately, it got it wrong. It may not be good at counting.




This is a photo of the Atomic Bomb Dome in Hiroshima. It answered this perfectly. It seems to understand famous Japanese buildings.





This is a photo of a streetcar running in Hiroshima City. I think it captures it pretty well overall. However, the streetcars don't run solely for tourists, so the explanation may be somewhat incomplete.




This is a flight information board at Haneda Airport. It perfectly understands the detailed information. Excellent.





Counting the number of cars in a parking lot is a difficult task for generative AI. This time it answered 60 cars, but there are actually 48. It's a bit disappointing; with slightly better accuracy it would reach a practical level.






3. Impressions of using "Claude 3 Haiku"

Honestly, the performance was remarkable for such a lightweight model. The Japanese is natural and clean. The fact that it can take in and analyze images at all is groundbreaking; multimodality has arrived in lightweight models. Inference is also fast, so it should suit applications that require real-time responses. And the cost is low, which allows for plenty of interesting experiments. It's a savior for startups with tight cost constraints! I want to keep running interesting experiments with "Claude 3 Haiku". Stay tuned!

(1) Claude 3 Haiku: our fastest model yet, Anthropic, March 14, 2024

Copyright © 2024 Toshifumi Kuga. All rights reserved.


The era of "agent-style applications" has arrived earlier than expected, and it seems to be accelerating even further

On November 6, OpenAI held DevDay, its first annual developer conference. The technological developments since GPT-4's debut in March 2023 were introduced all at once. There's too much to cover comprehensively, so I'll leave that to OpenAI CEO Sam Altman; here I want to raise three key points I've been considering and explore them further.




1. Price is Key

The anticipated price reduction has been realized: GPT-4 Turbo is roughly 65% cheaper, though the exact reduction depends on usage. I've already tried the new GPT-4 Turbo for half a day, and it cost about $5 where it would previously have easily exceeded $10. This makes it more viable for proof-of-concept (PoC) use, and the time seems to have come to tap GPT-4's still-unseen potential in various areas. A wallet-friendly change is welcome for everyone.



2. Building AI Apps Without Being a Programmer

At this developer conference, I noticed many features that operate with no code. GPTs, which let you create a customized ChatGPT through dialogue, are a prime example. The developer-oriented Assistants API also doesn't require coding when used from the Playground: with the code interpreter tool enabled, you write a prompt to invoke it and the rest is automated. This is impressive.

I implemented a model to calculate default probabilities using a step-by-step prompt (steps 1 to 5) with the code interpreter turned on, without writing any code myself. When executed, the model was built successfully, and it performed tasks like calculating AUC and generating histograms as instructed.





3. Easy Construction of "Agent-Style Applications"

Listening to OpenAI CEO Sam Altman's presentation, I sensed a strong emphasis on agents. The Playground tools include function calling, which seems to make it much easier to create agents that decide their next actions based on the situation. While open-source agent implementations have been multiplying, I didn't expect them to arrive this quickly on the OpenAI platform. Paired with GPTs, 2024 feels like it could be the first year of "agent-style applications." This is truly exciting.

How about these new services? Following the announcements at DevDay, developers worldwide seem to be thinking about various AI applications. I'm also eager to start creating an agent-style application. Stay tuned!




Copyright © 2023 Toshifumi Kuga. All rights reserved.


LLMs can be "reasoning engines" for creating our agents. It must be a game changer!

Recently, Large Language Models (LLMs) have been attracting more and more attention around the world. Google released its new LLM, "PaLM 2", on 10 May 2023. It competes against "ChatGPT", which was released in November 2022 and attracted over 100 million users in just two months. LLMs are expected to become far more intelligent in a short period as competition between the big IT companies intensifies. What does this mean for us? Let us consider it step by step!


1. How can we create our own agent?

In my article in February 2023, I said AI can act as our agent and understand our language. Let us consider step by step how this is possible. When I want to eat lunch, I simply tell my agent, "I would like to have lunch." The LLM understands what I say and tries to order my favorite hamburger at the restaurant. For the LLM to act on the outside world (for example, to call the restaurant), it needs tools, which can be created with libraries such as "LangChain". The LLM can then place the order, and I get my lunch. It sounds good. Let us go deeper.
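The lunch scenario can be sketched as a tiny tool-calling loop. Everything here is a hypothetical stand-in: `order_lunch` fakes the outside-world action, and `fake_llm_decide` hard-codes the decision a real LLM (for example via a library such as LangChain) would make.

```python
def order_lunch(dish: str) -> str:
    # Stand-in for a real action against the outside world.
    return f"Ordered {dish} from the restaurant."

TOOLS = {"order_lunch": order_lunch}

def fake_llm_decide(request: str) -> tuple[str, str]:
    # A real LLM would reason over the request; we hard-code the choice.
    if "lunch" in request:
        return ("order_lunch", "hamburger")
    return ("none", "")

def agent(request: str) -> str:
    tool_name, arg = fake_llm_decide(request)  # 1. the LLM picks a tool
    tool = TOOLS.get(tool_name)
    return tool(arg) if tool else "Sorry, I can't help with that."

print(agent("I would like to have lunch"))
```

The design point is the separation: the LLM only decides which tool to call and with what argument, while the tools themselves handle the actual interaction with the world.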


2. LLM is not just an “interface” to us with natural languages.

As I said in February this year, the first time I used ChatGPT I felt it could understand what I said. But now I no longer think it is just an "interface". Because LLMs are trained on massive amounts of text from the web, books, and other sources, they absorb a great deal of human knowledge from the past to the present. Since ChatGPT appeared last year, I have run many experiments and found that LLMs have an ability to make decisions. Although it is not perfect, they sometimes perform at the same level as human beings. It is amazing! On top of that, LLMs are still at an early stage and evolving daily!



3. LLM will be more and more intelligent as a “reasoning engine”!

Sam Altman, OpenAI's CEO, says on YouTube that "ChatGPT may be a reasoning engine" (1). I completely agree. When we create our agents, the LLM works as a "reasoning engine" that makes decisions to solve complex tasks. Around the LLM sit many systems that act on the outside world, such as web search or e-commerce shopping. All we have to do is think about how to enable the LLM to make the right decisions. Because LLMs are so new, no one knows the right answer yet. Fortunately, since LLMs understand our language, programming may no longer be needed, which matters a great deal for us. So let us consider it step by step!


I would like to update the progress of AI agents. Stay tuned!



1) Sam Altman: OpenAI CEO on GPT-4, ChatGPT, and the Future of AI | Lex Fridman Podcast #367 https://www.youtube.com/watch?v=L_Guz73e6fw&t=867s (around 14:25)


Copyright © 2023 Toshifumi Kuga. All rights reserved.

