The technological landscape is changing rapidly. At times, it can be challenging to keep up with technological innovations, decipher their risks and opportunities, and mitigate any related adverse effects.
Yet allowing technology to race ahead while our values hang in the balance is more worrisome.
This rings especially true for the United Nations, where the adoption of technologies can directly impact the UN system and its work on development, human rights, peace, and international security.
To address the need for technological oversight within the UN, OICT’s Policy, Strategy and Governance Division (PSGD) is engaged in several governance efforts.
Leveraging an interdisciplinary team’s expertise, we aim to democratize the use of emerging technologies within the Secretariat while developing ethical guidelines, practices, and methodologies that can mitigate associated adoption risks.
To this end, we prioritize understanding internal needs and challenges so as to develop normative outputs that are context-driven and fit for purpose.
Ultimately, OICT aims to ensure the reliability, safety, and security of frontier technologies for critical applications when they are deployed within the Organization.
Our governance efforts ensure that technological projects are developed in accordance with UN values (e.g., respect for diversity), obligations (e.g., protecting and promoting human rights), core competencies (e.g., creativity and technology awareness), the UN reform benefits framework, and the Organization's principles and strategies on data and new technologies, and that they help achieve the Sustainable Development Goals.
To achieve this goal, we pursue both soft and hard governance initiatives to address the challenges and opportunities presented by artificial intelligence (AI). Soft governance initiatives, such as best practices and standards, serve as guiding principles and practical resources for ethical and responsible AI development and deployment. These initiatives promote transparency, accountability, and the protection of human rights in the AI ecosystem.
Hard governance measures, including AI and data policies, and an operational AI governance framework, establish a robust regulatory and operational infrastructure that ensures compliance, risk management, and effective decision-making processes throughout the Secretariat.
Crucially, we aim to foster a collaborative approach by engaging the entire Secretariat in the governance process. Consensus-building is facilitated through the active involvement of a Working Group dedicated to AI governance. Moreover, workshops with internal and external partners provide valuable insights and perspectives, enabling a holistic and inclusive approach to the complex challenges posed by AI.