KAIST School of Computing professor Kim Min-soo (left) and Dr. Han Donghyoung, developers of FuseME, a matrix computation-fused engine that can significantly improve machine learning systems (Courtesy of KAIST)

Korea Advanced Institute of Science & Technology (KAIST), a prestigious South Korean university, unveiled an artificial intelligence computation engine which it said outperforms comparable systems from Google and IBM.

A research team at the KAIST School of Computing led by Professor Kim Min-soo said on Monday it has developed FuseME, a matrix computation-fused engine that can significantly improve machine learning systems such as deep learning.

“The result of comparative evaluation with Google’s TensorFlow and IBM’s SystemDS showed FuseME increased the processing speed of deep learning models by up to 8.8 times and succeeded in processing much larger data than the other two could handle,” the team said.

The team also said FuseME performs computational fusion up to 238 times faster than those competitors, while cutting network communication costs to as little as one sixty-fourth.

FuseME is an upgrade of DistME, a fast and elastic matrix computation engine the team unveiled in 2019.

“The new technology is expected to have a significant ripple effect in industries as it can drastically enhance the scale and performance of machine learning model processing,” Kim said.

The technology was unveiled at the ACM Special Interest Group on Management of Data (SIGMOD) conference, a prestigious international academic forum in the database field, held on June 16 in Philadelphia.

TO FUSE MATRIX MULTIPLICATION

To process large-scale matrix computation in a fast and scalable way, a number of distributed matrix computation systems have been proposed on top of frameworks based on MapReduce, a programming model for massive data applications proposed by Google, according to the KAIST team.

These systems generate and execute a query plan in the form of a directed acyclic graph (DAG) of basic matrix operators for each matrix query, the team said.

But they often fail to process the data, or take too long to do so, as the model grows in size, because intermediate results of the matrix computation are materialized in memory or transferred to other machines over the network before they are fully processed.

“Existing machine learning systems could not improve performance because they excluded matrix multiplication, the most complex computation, and fused only the remaining operators. They also executed the entire DAG query plan as one simple operation,” a KAIST official said.

FuseME fuses all of the operators, including matrix multiplication (a simplified illustration of this kind of fusion appears at the end of this article).

The team devised a method of determining which operators should be fused to improve performance and of grouping them based on cost. FuseME then generates the best-performing execution plan by considering the properties of each group, network communication speed and input data size.

By Hae-Sung Lee ihs@hankyung.com
Jongwoo Cheon edited this article.
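
Illustration: the article does not include code, so the following is a minimal NumPy sketch, not FuseME's actual implementation or API, of the idea it describes. It contrasts executing a small DAG of matrix operators one at a time, which materializes a full intermediate matrix after every step, with a fused version that performs the matrix multiplication, the elementwise add and the sigmoid together over row tiles, so only tile-sized intermediates ever exist. The function names and the tiling scheme are assumptions made for illustration only.

import numpy as np

def unfused(X, W, b):
    # Each operator runs to completion and writes a full intermediate matrix;
    # in a distributed engine this means extra memory use and network shuffles.
    T1 = X @ W                          # matrix multiplication
    T2 = T1 + b                         # elementwise add (full intermediate)
    return 1.0 / (1.0 + np.exp(-T2))    # elementwise sigmoid (another intermediate)

def fused(X, W, b, tile=256):
    # One pass over row tiles of X: multiply, add and sigmoid are applied
    # together per tile, so no full-size intermediate is ever materialized.
    out = np.empty((X.shape[0], W.shape[1]))
    for i in range(0, X.shape[0], tile):
        t = X[i:i + tile] @ W + b
        out[i:i + tile] = 1.0 / (1.0 + np.exp(-t))
    return out

rng = np.random.default_rng(0)
X = rng.normal(size=(1024, 512))
W = rng.normal(size=(512, 64))
b = rng.normal(size=64)
assert np.allclose(unfused(X, W, b), fused(X, W, b))

Both functions compute the same result; the point of the sketch is only that fusing the chain, matrix multiplication included, avoids storing or shipping large intermediate matrices, which is the bottleneck the KAIST team says FuseME removes.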