LG Electronics & Korea's AI Safety Institute Partner for Global AI Compliance

Strategic Collaboration for AI Safety
A Memorandum of Understanding (MoU) has been signed between LG Electronics and the Korea AI Safety Institute (AISI) to align with international regulations, as reported by The Korea Times. The agreement was formalized at LG Twin Towers in Seoul and covers collaboration on AI risk assessment, safety technology development, and compliance with the European Union's AI Act.
Objectives of the Partnership
The MoU outlines several key objectives:
- Joint Research Initiatives: Conducting collaborative studies to ensure AI development is reliable, safe, and ethically sound.
- Global Regulatory Alignment: Navigating and complying with international AI regulations to meet global safety standards.
- Better AI Governance: Strengthening LG's internal AI governance frameworks, focusing on risk management and ethical considerations.
LG's Commitment to Responsible AI
LG Electronics has shown a strong commitment to responsible AI development. The company reportedly established an internal AI control tower in late 2024 to govern its AI activities and oversee compliance of AI development with its Responsible AI policy. That policy is said to rest on five principles: respect for humanity, fairness, safety, responsibility, and transparency. By embedding these principles into product and software development, LG aims to deliver AI experiences that align with global standards for responsible AI.
AISI's Role in AI Safety
The Korea AI Safety Institute (AISI) was founded in November 2024 under the Electronics and Telecommunications Research Institute (ETRI) and is regarded as one of South Korea's leading centers for AI safety research. AISI's work includes assessing the risks that AI systems may pose and developing response strategies, while encouraging collaboration among industry, academia, research institutions, and other stakeholders. As a member of the International Network of AI Safety Institutes, AISI aims to advance AI safety through global collaboration.