Mr. Makoto Murai, COO of Adacotech Incorporated
Deep learning technology enables complex, sophisticated learning and output by loading large amounts of data into a computer. Although there is currently a strong, active movement to use AI for product inspection in the manufacturing industry, the accountability of AI is not yet guaranteed, and its judgment criteria tend to be a black box. Adacotech Incorporated takes an approach different from deep learning, aiming to resolve these issues with technology that people can use with confidence and trust. We spoke with Mr. Makoto Murai, the COO of the company.
―Your company is a startup from the National Institute of Advanced Industrial Science and Technology (AIST), one of the largest public research institutes in Japan. What kind of business are you doing?
We are working to automate product inspection operations using our unique AI image analysis technology, which is different from deep learning. In data analysis, the term “feature” refers to a variable that serves as a clue for prediction or as an index into the relevant part of the data. We are applying this feature-based technology to business. Specifically, we use a technology that converts correlations among image, video, and sensor data into numerical values so that they can be computed.
―What kind of products do you provide, exactly?
We have two main products. One is an image inspection solution. It’s an application that learns without supervision, collecting only images of the normal state in order to detect anomalies. It uses a method called HLAC (Higher-order Local Auto-Correlation) feature extraction (*1), which makes shapes within images computable by converting them into multidimensional feature vectors. This product is used in infrastructure system applications, such as product line inspection at component manufacturers and completion inspection of tunnels. We are also working with customers in the electronic device industry.
―What is the other product?
It’s a video anomaly detection solution. This software detects anomalies and alerts users when there is unusual movement in a location where a camera is installed. It’s mainly used to detect anomalies on production lines and to monitor production facilities. Some unusual applications include monitoring fuel supply lines in biomass power plants and molten slag in garbage incinerators. In one particularly unique application, it’s used to monitor anomalous behavior in the use of boxing machines in game arcades.
This product uses the CHLAC (Cubic Higher-order Local Auto-Correlation) feature extraction (*2) method, which extends HLAC feature extraction through time, converting movements in videos into 251-dimension feature vectors that can be computed. Since the technology can quantify even the state of viscous fluids, it can also be applied to process control and environmental monitoring, and we have started joint research with a university in this area.
―What is the difference between your original image analysis technology and conventional AI-based deep learning?
Our technology guarantees accountability for its output. While deep learning can produce sophisticated and complex outputs, the background and criteria are often unclear. When it comes to infrastructure or crisis management, for example, where a minor flaw can cause fatal damage, or where quality assurance and safety are of utmost importance, it’s critical for humans to understand the background behind why AI makes the decisions it does.
―I see. Based on that, what are the features and strengths of your company’s products?
Our products don’t require graphics processing units (GPUs) because they detect anomalies through simple linear processing. They can be used on general-purpose PCs thanks to their simple processing and algorithms, and they are easy to interpret and control. Our video anomaly detection solution also doesn’t require an object recognition model, so it can be used even where no general-purpose model exists, such as with real manufacturing equipment or products. This is a major difference from anomaly detection systems based on deep learning.
In addition, the processing of our products is so lightweight that they can learn sequentially. In other words, they can relearn every few minutes to generate a new model. They can even be used outdoors, where environments change rapidly, or in factories where external light shifts over time, absorbing and responding to changes in sunlight conditions.
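As a technical aside, the “simple linear processing” described above can be illustrated with the subspace method that is commonly paired with HLAC-style features in the literature: fit a low-dimensional linear subspace to the feature vectors of normal samples, then score a new sample by its distance from that subspace. The sketch below is a generic illustration under that assumption, not Adacotech’s actual algorithm; the function names are our own.

```python
import numpy as np

def fit_normal_subspace(features, k):
    """Fit a k-dimensional linear subspace to feature vectors of normal samples.

    Normal data is summarized by its top-k principal directions; a new
    sample is later scored by how far it falls outside that subspace.
    """
    X = np.asarray(features, dtype=float)
    mean = X.mean(axis=0)
    # Top-k right singular vectors of the centered data span the normal subspace.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k]

def anomaly_score(x, mean, basis):
    """Squared distance from x to the normal subspace (reconstruction error)."""
    d = np.asarray(x, dtype=float) - mean
    proj = basis.T @ (basis @ d)   # projection of d onto the normal subspace
    return float(np.dot(d - proj, d - proj))
```

Because both fitting and scoring are linear algebra on small vectors, this kind of model is cheap enough to refit frequently, which is consistent with the sequential relearning described above.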
―It’s very useful that the products automatically respond to environmental changes.
There are many deep learning-based image inspection solutions out there, but they generally lack guidance on which images to use for relearning, and this has been a repeated problem for users. Our solution comes with software that semi-automatically sets up the data to be relearned. I think it’s worth noting that we have a mechanism that can follow the fluctuation of data during mass production and can thus support the entire inspection life cycle.
―Could you tell us about your current organizational structure?
We are currently a company of about 20 members. Twenty percent of our employees are doctoral graduates, and all of them, except the CEO and the corporate director, are engineers. That’s one of the reasons we can work hard and do hands-on, down-to-earth work directly. Adacotech was founded in 2012 as the successor to an AIST-certified startup founded in 2006, and two of the original founding engineers are still with us and continue to be engaged in development.
―Are you also from a technical background?
Yes, I am. I completed a master’s program at a graduate school of engineering, spent three years in a research laboratory at Sanyo Electric, and then 12 years at Sony. Throughout those years, I was consistently involved in semiconductor R&D, commercialization, and the establishment of production bases. I also worked on the mass production of the world’s first 2.5D semiconductors for gaming large-scale integrations (LSIs), the commercialization and quality control of in-vehicle image sensors, and the establishment of overseas production bases.
―How did you join Adacotech?
I moved to the consulting industry to study business in addition to technology. There, I worked with engineers to support high-tech manufacturing companies and to build and globally expand productivity solutions using machine learning. However, I felt frustrated that I couldn’t do it on my own. In addition, while working on many data science projects, I sometimes felt keenly the weaknesses and limitations of deep learning. I decided to do something that could contribute to society using a different approach.
The world’s attention is focused almost exclusively on deep learning and generative AI right now, so few companies try to break into the market with other technologies. I was attracted to Adacotech, and joined it, largely because it’s one of those few. In 2022, we raised 1.54 billion yen, and we are now in the phase of expanding the company.
―Your company’s technology seems very specialized and unique, but how much does it cost for your customer to implement it?
We receive images from the customer and deliver an inspection model in two to three weeks for approximately 200,000 yen. To use it on a production line, the customer loads our inspection model into our partner’s inspection equipment or into the customer’s own software via a library we provide; the minimum price is 50,000 yen per month, including the model. We also offer SaaS on Amazon Web Services that allows customers to create their own inspection models, starting at 200,000 yen per month. It took us a long time to turn the technology into usable solutions, but we can now offer an environment that lets customers try them immediately and easily.
―What do your future prospects look like?
Up until now, we have mainly targeted the manufacturing industry, but I think there are other industries where we can make use of our technology. For example, our image classification feature, currently available in beta, is a unique technology that can automatically classify images in line with the human senses. In operations like infrastructure maintenance, humans have always used their senses to judge whether something needs repair or whether something is cracked. Our new technology converts this tacit judgment into formal knowledge. In other words, it incorporates human judgment into an algorithm. I think this will be a pillar of our next business expansion.
We intend to continue to work on our mission of solving problems through technology that people can use with confidence and trust, as a deep tech company that uses scientific discoveries and technology to bring significant impacts and problem-solving power to society.
*1 HLAC (Higher-order Local Auto-Correlation) feature extraction: A method that produces features by multiplying image brightness information according to mask patterns. This method converts image shape information into 25-dimension feature vectors.
*2 CHLAC (Cubic Higher-order Local Auto-Correlation) feature extraction: A method that extends HLAC through time. This method converts video information into 251-dimension feature vectors based on the time variation of differential images between frames.
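To make note *1 more concrete, the sketch below enumerates the standard mask patterns for binary HLAC of order up to two in a 3×3 window and computes the resulting 25-dimension feature vector. It is a generic reconstruction of the published technique for binary images, not Adacotech’s implementation; the function names (`hlac_masks`, `hlac_features`) are our own.

```python
from itertools import combinations

def hlac_masks():
    """Enumerate the binary HLAC masks of order <= 2 in a 3x3 window.

    A mask is a set of pixel offsets; translation-equivalent sets are
    merged by shifting each set so its minimum row/col is zero. For
    binary images this yields the classic 25 masks (1 + 4 + 20).
    """
    nbrs = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
    raw = [{(0, 0)}]                                          # order 0
    raw += [{(0, 0), a} for a in nbrs]                        # order 1
    raw += [{(0, 0), a, b} for a, b in combinations(nbrs, 2)] # order 2
    canon = set()
    for pts in raw:
        r0 = min(r for r, _ in pts)
        c0 = min(c for _, c in pts)
        canon.add(frozenset((r - r0, c - c0) for r, c in pts))
    return sorted(canon, key=lambda m: (len(m), sorted(m)))

def hlac_features(img):
    """25-dim HLAC feature vector of a binary image (list of 0/1 rows).

    For each mask, sums the product of the pixel values it covers as a
    3x3 window slides over the image (borders ignored for simplicity).
    """
    h, w = len(img), len(img[0])
    feats = []
    for mask in hlac_masks():
        total = 0
        for r in range(h - 2):
            for c in range(w - 2):
                prod = 1
                for dr, dc in mask:
                    prod *= img[r + dr][c + dc]
                total += prod
        feats.append(total)
    return feats
```

Because each feature is a sum over window positions, the vector is shift-invariant for objects away from the image border, which is part of what makes these features suitable for inspection tasks.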
Company name: Adacotech Incorporated |
Founded: March 2012 |
Number of employees: 22 |
Main business: Developing and selling anomaly detection software powered by AIST’s patented technology |
URL: https://adacotech.co.jp/en
This article is part of a series of articles introducing venture companies working together as ICF members to resolve societal issues.