This is how it works.
Park, the designer behind the Greedilous brand, feeds questions like “What would flowers on Venus look like?” to Exaone, which processes the language for Tilda. Based on the processed text, Tilda generates images, and Park then begins her design process from them.

“In the past, I had to work with dozens of designers for months to get inspiration and prepare a collection,” Park said. “With Tilda, I could finish the work in a month and a half.”

The South Korean-born designer said the collection is the result of Tilda’s creativity combined with human emotion.

The Greedilous New York collection marks the first time a hyperscale AI has succeeded in the visual design field. Until now, AI was mostly used to create text-based content using natural language processing (NLP). LG explained that Tilda uses “multi-modal” AI, which allows it to comprehend not just text but also its context, which in turn translates into the ability to create completely new images. Exaone has some 300 billion parameters and is equipped with multi-modal capabilities to acquire and process information related to nearly all aspects of human communication, not just written and spoken language.

The research institute trained Exaone on 600 billion pieces of text and 250 million images simultaneously. Parameters are where a model stores what it learns through deep learning, and their number is often used as a measure of how capable a model is. In human physiology, they are somewhat comparable to synapses.

“The latest collaboration has shown that AI can work with humans in areas that require creativity and imagination,” said Bae Kyung-hoon, chief of LG AI Research. “Later this year, fashion items designed by Tilda will be available for sale offline and at metaverse events.”

By Su-Bin Lee (lsb@hankyung.com)
Jee Abbey Lee edited this article.
2024-09-20 22:18:53