Artificial intelligence

LG unveils multimodal ‘supergiant AI’ with unprecedented parameters

EXAONE works beyond the limits of language, and its developer has already partnered with Google

By Jee Abbey Lee, Dec 14, 2021 (GMT+09:00)

2 Min read

▲ LG AI Talk Concert
LG Corp.'s artificial intelligence research institute unveiled what it dubs a “supergiant AI” on Tuesday.

The big reveal was EXAONE, which the scientists aim to develop into an artificial intelligence system that can think, learn, and make judgments like a human by learning from large-scale data on its own, and that can be put to use across various fields for multiple purposes.

‘SUPERGIANT’ PARAMETERS

The new model has some 300 billion parameters and is equipped with multimodal capabilities, allowing it to acquire and process information related to nearly all aspects of human communication, not just written and spoken language.

The research institute trained EXAONE on a 600 billion-item text corpus and 250 million images simultaneously.

Parameters are the values in which a deep-learning model stores what it has learned, and their number is often used as a measure of a model's capacity and performance. In human physiology, they are somewhat comparable to synapses.
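To make the idea concrete, here is a minimal sketch, in PyTorch, of how a parameter count is tallied. The toy two-layer network below is purely hypothetical and bears no relation to EXAONE's actual architecture; it only illustrates what counting parameters means.

```python
# Illustrative only: how parameters are counted in a deep-learning model.
# This toy two-layer network is hypothetical, not EXAONE's design.
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(512, 2048),  # weights: 512 x 2048, plus 2048 biases
    nn.ReLU(),             # activation; holds no parameters
    nn.Linear(2048, 512),  # weights: 2048 x 512, plus 512 biases
)

# Every learned weight and bias counts as one parameter. Models on
# EXAONE's scale stack far larger layers until the total reaches
# hundreds of billions.
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,} parameters")  # prints: 2,099,712 parameters
```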

EXAONE stands for EXpert AI for everyONE. In his keynote speech, Bae Kyunghoon, head of LG AI Research, noted that the prefix exa- denotes one quintillion (10^18), highlighting the vast amount of data the system can process.

Back in May, Korea’s fourth-largest chaebol announced plans to invest $100 million over the next three years to develop a supergiant AI. On Tuesday, it marked the first anniversary of LG AI Research and livestreamed a web seminar titled LG AI Talk Concert on YouTube.

MULTIMODALITY: BEYOND LANGUAGE

What sets EXAONE apart from other artificial intelligence systems, beyond its massive parameter count, is its multimodality and the collaborative initiatives LG has formed across the globe.

Earlier Korean AI models were also successful at understanding and writing text, but they were mostly limited to use as chatbots.

Thanks to its multimodal capabilities, EXAONE can work across different modes of communication: it can interpret and convert still and moving imagery, in addition to written and spoken language.

▲ An example of a completed task by EXAONE

For instance, existing AIs can find an image of a pumpkin by analyzing text. LG’s EXAONE can go a few steps further: it can take on an assignment to create a pumpkin-shaped hat and actually complete the task using its learned data.
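As a rough illustration of how one model can bridge text and images, the sketch below embeds both modalities into a shared vector space so that either can be used to query the other, the generic pattern behind text-to-image retrieval. Every detail here, from the module sizes to the class name, is hypothetical; this is not a description of EXAONE's internals.

```python
# A minimal sketch of multimodal joint embedding. Hypothetical sizes;
# not EXAONE's architecture.
import torch
import torch.nn as nn

class ToyMultimodalEncoder(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=256):
        super().__init__()
        # Text branch: token embeddings mean-pooled into one vector.
        self.token_embed = nn.Embedding(vocab_size, embed_dim)
        # Image branch: one conv + pooling stands in for a real vision encoder.
        self.vision = nn.Sequential(
            nn.Conv2d(3, embed_dim, kernel_size=16, stride=16),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )

    def encode_text(self, token_ids):   # token_ids: (batch, seq_len)
        return self.token_embed(token_ids).mean(dim=1)

    def encode_image(self, pixels):     # pixels: (batch, 3, H, W)
        return self.vision(pixels)

model = ToyMultimodalEncoder()
caption = torch.randint(0, 10000, (1, 8))   # a fake 8-token caption
image = torch.randn(1, 3, 224, 224)         # a fake RGB image

# Both modalities land in the same 256-dim space, so cosine similarity
# can score how well a caption matches an image -- the basis for
# text-to-image search and, with a decoder attached, generation.
score = torch.cosine_similarity(model.encode_text(caption), model.encode_image(image))
print(score.item())
```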

COLLABORATION WITHIN LG GROUP AND BEYOND

EXAONE learned from data compiled by LG Corp.’s diverse affiliates, namely LG Electronics Inc., LG Chem Ltd., and LG Uplus Corp., and its target industries reflect that dataset. LG AI Research, the conglomerate's newest addition, hopes to deepen its ties with affiliates to tailor EXAONE's learning to the needs of specific industries.


Beyond the peninsula, the research institute has embarked on a strategic partnership with Google LLC. The US tech giant provided the institute with its yet-to-be-released AI chip, the TPU v4, and Google's AI research team, Google Brain, created a compatible software framework designed specifically for EXAONE.

The company believes Google needs LG AI Research’s success to challenge its competitor NVIDIA. According to analyst firm Omdia, the US chipmaker holds an 80% share of the AI processor market.

Write to Jee Abbey Lee at jal@hankyung.com
Jee Abbey Lee edited this article.
