The Need for AI Ethics and Corporate Efforts
2023/11/17 | Written By: Sungmin Park (Content Manager)
Artificial Intelligence (AI) unquestionably offers abundant benefits to our lives and society. It greatly contributes to improving convenience across a wide range of sectors, including finance, healthcare, education, and logistics. However, it is not only convenience that the advancement of AI brings; it also raises concerns and anxieties. These concerns stem from worries that AI may infringe on human rights and freedom, weaken responsibility and creativity, and exacerbate social inequality.
Therefore, there is a growing call for AI ethics and reliability, emphasizing that AI must be designed and operated in a way that respects and upholds human dignity and the common good. On November 1st, the first-ever AI Safety Summit was held in the United Kingdom, marking a global effort to ensure reliable AI. Let's look at how we can safeguard our well-being in a world with AI, and how businesses are working together on solutions to these challenges.
What is AI Ethics?
AI ethics refers to the set of principles, values, and guidelines that govern the ethical and responsible design, use, and outcomes of artificial intelligence. A government publication titled "Artificial Intelligence Technology Outlook and Innovation Policy Directions" defines AI ethics as "universal social norms and relevant technologies that stakeholders in AI should abide by."
There are many ethical considerations in the use and development of AI, and particular importance is placed on recognizing and responding to the cognitive biases inherent in training data. AI learns from a wide range of data sources, and these data often reflect biases present in human society. It is therefore crucial to design datasets and algorithms that minimize bias and enhance fairness and diversity.
Sandra Wachter, professor at the Oxford Internet Institute of the University of Oxford and a renowned scholar in AI ethics, emphasized in a 2021 interview with JoongAng Ilbo that AI systems are likely to be biased because they learn from past data, which may reflect inequality and injustice. Hence, there is a growing need for a strong and consistently applied mechanism for AI ethics enforcement and accountability. Trustworthiness is a vital component of AI ethics, encompassing predictability, verifiability, safety, fairness, transparency, and the capacity for human supervision and intervention.
The 5 Major Issues of AI Ethics
As of now, the five major ethical issues in artificial intelligence (AI) that we are facing are as follows:
Bias
As mentioned earlier, humans have various cognitive biases, such as recency bias and confirmation bias, and these biases influence human behavior and the data it produces. Since data forms the basis for machine learning algorithms, AI trained on such data may reproduce social inequality or prejudice. It is therefore essential to develop algorithms that minimize these biases to ensure fairness and diversity.
Errors and Safety
AI errors can have a significant impact on human lives and society. In the medical field, for instance, AI applications can save lives through accurate diagnosis and treatment, but errors or incorrect judgments can pose unexpected and potentially life-threatening risks.
Misuse
There are concerns about the unethical or illegal use of AI in activities such as phishing, deepfakes, and other cybercrimes.
Privacy Protection
AI relies on vast amounts of data for algorithmic learning, making it crucial to prevent violations of individuals' privacy and data breaches.
Killer Robots
Killer robots are AI-based autonomous military robots capable of attacking humans without human intervention, also known as "lethal autonomous weapons." They raise concerns about human safety and life, international security, and the ambiguity of accountability when harm occurs.
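The bias issue above can be made concrete with a simple fairness check. One common measure is demographic parity: comparing a model's positive-decision rate across groups. The sketch below uses entirely hypothetical group labels and decisions, purely to illustrate the idea, and is not tied to any specific system or toolkit:

```python
# Demographic parity check: compare positive-decision rates across groups.
# Toy data: (group, model_decision) pairs; the values are purely illustrative.
predictions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

def positive_rate(records, group):
    """Fraction of records in `group` that received a positive decision."""
    decisions = [d for g, d in records if g == group]
    return sum(decisions) / len(decisions)

rate_a = positive_rate(predictions, "group_a")  # 3 of 4 -> 0.75
rate_b = positive_rate(predictions, "group_b")  # 1 of 4 -> 0.25
# A gap of 0 means both groups receive positive decisions at the same rate;
# a large gap is a signal that the model may be treating groups unequally.
parity_gap = abs(rate_a - rate_b)
print(f"positive rates: A={rate_a:.2f}, B={rate_b:.2f}, gap={parity_gap:.2f}")
```

A metric like this is only a first screen; a small gap does not prove fairness, and which metric is appropriate depends on the application.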
Moreover, as we enter the era of generative AI and large language models (LLMs), issues such as the potential for generating false information, intellectual property rights, job displacement, security, privacy, and unethical expressions are becoming prominent AI ethics concerns.
To address these ethical issues effectively, it is crucial to establish AI ethics standards and develop trustworthy AI. The national AI ethics guidelines, prepared to suggest a desirable direction for the development and utilization of artificial intelligence (AI), emphasize three fundamental principles: "human dignity, public benefit, and technology neutrality." These principles are intended to be voluntarily adhered to by everyone in all fields and continuously improved.
The guidelines outline ten key requirements that should be met throughout the entire process of AI development and utilization, ensuring the practical application of the three fundamental principles. These requirements include:
Protection of Human Rights
Privacy Protection
Respect for Diversity
Prohibition of Harm
Public Benefit
Promotion of Solidarity
Data Management
Responsibility
Safety
Transparency
These guidelines serve as a framework for fostering AI development and utilization that aligns with ethical standards and ensures the responsible and reliable use of AI technology.
Ways to Achieve AI Ethics and Trustworthiness
To achieve AI ethics and trustworthiness, it is essential for various stakeholders, ranging from academia and industry to civil society, to collaborate and communicate effectively.
[AI Developers and Researchers]
They should comply with AI-related regulations and ethical guidelines to develop algorithms that minimize bias and ensure safe operation through continuous monitoring.
Data security should be a priority to avoid privacy issues. When necessary, they should obtain user consent for data usage.
Clear documentation of AI system operation and limitations is also helpful in maintaining transparency.
[AI Users and Consumers]
Users and consumers need to understand the advantages and risks associated with AI systems, cultivate trust, and foster critical thinking.
They should provide feedback and requirements to contribute to continuous improvement.
[AI Regulators and Policy Makers]
Regulators and policy makers should provide guidance for AI development and usage, establish standards for trustworthy AI, and develop regulations and standards that consider ethical, legal, and social aspects. Ongoing monitoring and supervision are essential.
[AI Educators]
AI educators should disseminate knowledge about the fundamentals and principles of AI systems, enhance awareness and education about AI ethics, and promote diverse opinions and debates related to AI.
International Discussions on AI Ethics and Trustworthiness
International discussions on ensuring safe and trustworthy artificial intelligence (AI) have become more active as AI receives global attention.
On November 1st, the UK hosted the world's first AI Safety Summit, spurring a surge in international discussions. The "Bletchley Declaration" announced during the summit outlines an agreement between governments and AI companies to test new AI models for a range of risks, including national security and societal harm, before their release. Prominent tech companies such as Google, OpenAI, Microsoft, and Meta are reported to be part of this initiative.
Furthermore, discussions related to AI ethics have been ongoing for the past few years. UNESCO adopted the "Recommendation on the Ethics of Artificial Intelligence," defining common values and principles to guide the establishment of legal infrastructure ensuring the responsible development of AI, with unanimous support from its 193 member states during its 41st General Conference in 2021. In the same year, the European Union (EU) introduced a proposal for AI regulation, categorizing AI systems into four levels of regulation based on their risk levels. The United States also unveiled a national strategy for responsible AI technology usage.
In South Korea, the "AI Ethics Charter" was proclaimed in 2020, which laid out five principles and ten concrete action plans for the human-centered development and utilization of AI. Recently, the Seoul Digital Foundation introduced the "Seoul City AI Ethics Guidelines," emphasizing the importance of ethical compliance for users, operators, and developers, placing the user at the center of AI ethics considerations.
Efforts of Global AI Companies for AI Ethics and Trustworthiness
For international discussions and collaborations on AI ethics and trustworthiness to have a real impact on the development and operation of AI systems, active participation from AI companies is crucial. What efforts are global AI companies currently making in this regard? AI companies that rely on extensive data for model training are addressing various issues, not only data bias but also copyright compliance, personal data protection, and transparency.
Looking at major overseas companies, Microsoft introduced its "AI Ethics Guidelines" in 2019, which include principles such as "AI should be human-centered," "AI should be fair and non-discriminatory," "AI should protect personal information and security," and "AI should be transparent and explainable." In line with these guidelines, Microsoft released "Fairlearn," an AI fairness assessment toolkit, to put the principles into practice. Google, for its part, published its "AI Principles" in 2018 and has taken steps to realize them, developing the open-source tool "TensorFlow Model Analysis" to analyze the performance and bias of AI models. Google also supports research and education on the ethical issues surrounding AI systems.
In South Korea, major corporations such as Naver, LG, and SKT are actively working to comply with AI ethics principles. Notably, the AI startup Upstage has played a prominent role. Upstage has established its own set of five AI ethics principles and goes beyond merely adhering to them in product development and operations. It has built an ecosystem in which data providers and model-building companies collaborate through a data collection and sharing platform called the "1T Club." Partners in the 1T Club receive benefits such as API usage fee discounts and a share of revenue from the LLM API business. The initiative aims to gather transparent, ethical, high-quality data, potentially enhancing the independence and global competitiveness of domestic LLM technology.
[Upstage's AI Ethics Principles]
Human-Centric: Developing AI that is community-based and human-centered, providing tangible help and benefits to people.
Trustworthiness: Creating reliable and trustworthy AI that people can have faith in.
Fairness: Developing AI that is based on responsibility and provides fair benefits.
Safety: Designing services with a strong focus on security and safety while protecting privacy.
Universality: Building AI for everyone, respecting diversity.
In Conclusion
As technology advances, discussions on how it can lead our lives and society in a better direction must continue. In modern society, as the influence of AI continues to grow, responsibility and trustworthiness have become essential elements, no longer optional. We hope for ongoing interest and participation not only from companies but also from individuals who benefit from AI, as this will contribute to the development of more trustworthy technology and a better future.