Artificial Intelligence (AI) is transforming societies and institutions across the globe. AI operates without regard for borders, and human rights are universal; global coordination is therefore essential so that the development and deployment of AI respect these rights everywhere. Ensuring that AI advances human well-being requires frameworks that translate ethical principles into clear guidance for governments, organizations, and practitioners. Several international initiatives provide such guidance.
For instance, the United Nations' Principles for the Ethical Use of Artificial Intelligence in the UN System establish a rights-based foundation, emphasizing fairness, transparency, human oversight, and sustainability. These principles guide AI systems at all stages of their lifecycle within UN entities, offering a universal benchmark for ethical practice. The European Union complements this approach with the AI Act, which entered into force in 2024. This risk-based legal framework prohibits practices deemed to pose unacceptable risk, sets obligations for transparency and accountability, and mandates human oversight for high-risk AI systems. In the United States, AI governance relies on federal strategies, sector-specific guidance, and voluntary frameworks that encourage innovation while promoting responsible use.
Frameworks and regulations provide essential guidance, but they are not sufficient on their own. AI also needs operational ethics: practical systems and tools that make it possible to verify, trace, and understand how AI decisions are made. Such tooling helps ensure that ethical principles actually shape AI behavior.
One example of this operational layer is Luri. Its tools, including BriefCase and Complainer, provide citation traceability, transparent reasoning, and audit logs for AI-generated briefs. These features allow users to follow how conclusions are reached, making AI outputs verifiable and accountable. Regulation is important, but lasting trust in AI comes from systems that can demonstrate how they reach their conclusions. Tools like these show how ethical principles can move from guidance into actionable, measurable workflows.
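To make the idea of operational ethics more tangible, the sketch below shows one way a citation-traceable audit record could be structured. It is a minimal, hypothetical illustration in Python, not Luri's actual implementation or API; the names AuditRecord, Citation, and write_audit_log are assumptions introduced only for this example.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List
import json


@dataclass
class Citation:
    """A source that a generated claim points back to (hypothetical structure)."""
    source_id: str
    excerpt: str


@dataclass
class AuditRecord:
    """One traceable entry in an audit log for an AI-generated brief."""
    claim: str
    citations: List[Citation]
    reasoning: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def is_supported(self) -> bool:
        # A claim counts as verifiable only if at least one citation backs it.
        return len(self.citations) > 0


def write_audit_log(records: List[AuditRecord], path: str) -> None:
    """Persist the full trace so a reviewer can reconstruct each conclusion."""
    with open(path, "w", encoding="utf-8") as fh:
        for record in records:
            fh.write(json.dumps({
                "timestamp": record.timestamp,
                "claim": record.claim,
                "supported": record.is_supported(),
                "reasoning": record.reasoning,
                "citations": [c.__dict__ for c in record.citations],
            }) + "\n")


if __name__ == "__main__":
    record = AuditRecord(
        claim="The complaint falls under the cited provision.",
        citations=[Citation(source_id="case-2024-017",
                            excerpt="The provision applies to complaints of this type.")],
        reasoning="Matched the complaint category against the cited provision.",
    )
    write_audit_log([record], "audit_log.jsonl")
    print("supported:", record.is_supported())
```

Writing each record as a separate JSON line keeps the log append-only and lets a reviewer check, claim by claim, whether every conclusion is backed by a cited source.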
Combining global frameworks with practical, operational tools makes it possible to guide AI toward upholding human rights and values in practice. This demonstrates how ethics, policy, and technology can work together to support responsible and trustworthy AI.
Written by Ana Matilde N.