Responsible AI for a Sustainable AI Future: Governance, Ethics, and the Reality Behind the Promise
DOI: https://doi.org/10.70715/jitcai.2025.v2.i2.012
Keywords: Ethics, Governance, Responsible AI, Sustainability
Abstract
Artificial intelligence has emerged as a powerful force shaping global development, offering promising solutions across health, education, climate change, and governance. However, its rapid integration into critical sectors raises urgent questions about ethics, governance, and sustainability. This systematic review explores the promise and practice of responsible AI through the lens of three core objectives: the governance mechanisms guiding AI implementation, the ethical frameworks shaping its design, and the practical realities influencing its deployment across contexts. Drawing from sixty peer-reviewed articles published between 2017 and 2024, the review identifies strong global consensus on foundational principles such as fairness, accountability, and transparency. Nonetheless, a significant implementation gap persists, particularly in low-resource settings, where enforcement mechanisms and institutional readiness are often lacking. The findings also reveal that ethical commitments are frequently undermined by organizational constraints and commercial interests, leading to surface-level adherence without substantive change. Environmental sustainability, a critical dimension of responsible AI, remains underrepresented in current governance discussions despite mounting evidence of AI’s carbon footprint. This review contributes to the growing body of scholarship advocating for inclusive, enforceable, and context-sensitive approaches to responsible AI. It underscores the need for deeper engagement with the political, social, and environmental realities that shape AI’s impact on sustainable development. Ultimately, bridging the gap between AI’s ethical aspirations and real-world outcomes requires not only technical innovation but also strong institutional leadership, interdisciplinary collaboration, and meaningful stakeholder participation.
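To make the carbon-footprint point concrete, the sketch below shows how epoch-level emissions tracking is typically wired into a training loop, following the usage pattern described for the Carbontracker tool cited in the references (Anthony et al., 2020). This is an illustrative sketch only: the epoch count and the training step are hypothetical placeholders and are not part of the reviewed study or its methodology.

# Illustrative sketch (not from the reviewed study): epoch-level carbon tracking
# with the carbontracker package (Anthony, Kanding, & Selvan, 2020).
from carbontracker.tracker import CarbonTracker

EPOCHS = 3  # hypothetical placeholder; real training runs use many more epochs

# After the first epoch, the tracker also predicts the footprint of the full run.
tracker = CarbonTracker(epochs=EPOCHS)

for epoch in range(EPOCHS):
    tracker.epoch_start()
    # ... one epoch of model training would run here ...
    tracker.epoch_end()

tracker.stop()  # reports measured energy use and CO2-equivalent emissions

Measurements of this kind are the sort of evidence of AI's carbon footprint that the review points to (see also Strubell et al., 2019).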
References
Anthony, L., Kanding, B., & Selvan, R. (2020). Carbontracker: Tracking and predicting the carbon footprint of training deep learning models. arXiv. https://doi.org/10.48550/arXiv.2007.03051
Babic, B., Cohen, I. G., & Evgeniou, T. (2021). AI in healthcare: The hopes, the hype, the promise, the peril. The American Journal of Bioethics, 21(5), 4–11. https://doi.org/10.1080/15265161.2021.1906616
Binns, R. (2018). Fairness in machine learning: Lessons from political philosophy. In Proceedings of the 2018 Conference on Fairness, Accountability, and Transparency (pp. 149–159). https://doi.org/10.1145/3287560.3287598
Binns, R., Veale, M., Van Kleek, M., & Shadbolt, N. (2018). “It’s reducing a human being to a percentage”: Perceptions of justice in algorithmic decisions. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1–14). https://doi.org/10.1145/3173574.3173951
Birhane, A. (2021). Algorithmic injustice: A relational ethics approach. Patterns, 2(2), 100205. https://doi.org/10.1016/j.patter.2021.100205
Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency (pp. 77–91).
Butcher, J., & Beridze, I. (2019). What is the state of artificial intelligence governance globally? The RUSI Journal, 164(5–6), 88–96. https://doi.org/10.1080/03071847.2019.1694260
Cath, C. (2018). Governing artificial intelligence: Ethical, legal and technical opportunities and challenges. Philosophical Transactions of the Royal Society A, 376(2133), 20180080. https://doi.org/10.1098/rsta.2018.0080
Cihon, P. (2019). Standards for AI governance: International standards to enable global coordination in AI research and development. Futures, 117, 102493. https://doi.org/10.1016/j.futures.2019.102493
Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press. https://doi.org/10.12987/9780300252392
Cowls, J., & Floridi, L. (2018). Prolegomena to a white paper on an ethical framework for a good AI society. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5
Dignum, V. (2019). Responsible Artificial Intelligence: How to develop and use AI in a responsible way. Springer. https://doi.org/10.1007/978-3-030-30371-6
Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press.
Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M. (2020). Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI. Berkman Klein Center Research Publication No. 2020-1. https://doi.org/10.2139/ssrn.3518482
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., ... & Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28, 689–707. https://doi.org/10.1007/s11023-018-9482-5
Friedman, B., & Hendry, D. G. (2019). Value Sensitive Design: Shaping technology with moral imagination. MIT Press. https://doi.org/10.7551/mitpress/7585.001.0001
Green, B. (2021). The contestation of tech ethics: A sociotechnical approach to technology ethics in practice. Journal of Social Computing, 2(3), 209–225. https://doi.org/10.23919/JSC.2021.0018
Green, B. (2022). Data science as political action: Grounding data science in a politics of justice. Patterns, 3(5), 100497. https://doi.org/10.1016/j.patter.2022.100497
Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 30(1), 99–120. https://doi.org/10.1007/s11023-020-09517-8
Holstein, K., Wortman Vaughan, J., Daumé, H., Dudik, M., & Wallach, H. (2019). Improving fairness in machine learning systems: What do industry practitioners need? In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1–16). https://doi.org/10.1145/3290605.3300830
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1, 389–399. https://doi.org/10.1038/s42256-019-0088-2
Kaack, L. H., Donti, P. L., Strubell, E., Kamiya, G., Creutzig, F., & Rolnick, D. (2022). Aligning artificial intelligence with climate change mitigation. Nature Climate Change, 12(6), 518–527. https://doi.org/10.1038/s41558-022-01377-7
Latonero, M. (2018). Governing artificial intelligence: Upholding human rights & dignity. Data & Society Research Institute. https://doi.org/10.2139/ssrn.3288990
Madaio, M., Stark, L., Wortman Vaughan, J., & Wallach, H. (2020). Co-designing checklists to understand organizational challenges and opportunities around fairness in AI. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1–14). https://doi.org/10.1145/3313831.3376445
Mittelstadt, B. D. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1, 501–507. https://doi.org/10.1038/s42256-019-0114-4
Mökander, J., Axente, M., Casalicchio, E., & Floridi, L. (2021). Ethics-based auditing to develop trustworthy AI. Minds and Machines, 31(2), 263–278. https://doi.org/10.1007/s11023-021-09557-8
Molnar, P., & Gill, L. (2018). Bots at the gate: A human rights analysis of automated decision-making in Canada’s immigration and refugee system. Citizen Lab. https://doi.org/10.2139/ssrn.3290602
Morley, J., Floridi, L., Kinsey, L., & Elhalal, A. (2020). From what to how: An initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Science and Engineering Ethics, 26(4), 2141–2168. https://doi.org/10.1007/s11948-019-00165-5
Munoko, I., Brown-Liburd, H., & Vasarhelyi, M. A. (2020). The ethical implications of using artificial intelligence in auditing. Journal of Business Ethics, 167, 209–231. https://doi.org/10.1007/s10551-019-04407-1
Nemitz, P. (2018). Constitutional democracy and technology in the age of artificial intelligence. Philosophical Transactions of the Royal Society A, 376(2133), 20180089. https://doi.org/10.1098/rsta.2018.0089
Rahwan, I. (2018). Society-in-the-loop: Programming the algorithmic social contract. Ethics and Information Technology, 20(1), 5–14. https://doi.org/10.1007/s10676-017-9430-8
Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and policy considerations for deep learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (pp. 3645–3650). https://doi.org/10.18653/v1/P19-1355
Taddeo, M., & Floridi, L. (2018). How AI can be a force for good. Science, 361(6404), 751–752. https://doi.org/10.1126/science.aat5991
Taylor, L. (2021). Algorithmic impact assessments and the politics of AI accountability. Data & Policy, 3, e8. https://doi.org/10.1017/dap.2021.8
Veale, M., & Edwards, L. (2018). Clarity, surprises, and further questions in the Article 29 Working Party draft guidance on automated decision-making and profiling. Computer Law & Security Review, 34(2), 398–404. https://doi.org/10.1016/j.clsr.2017.12.002
Vinuesa, R., Azizpour, H., Leite, I., Balaam, M., Dignum, V., Domisch, S., ... & Fuso Nerini, F. (2020). The role of artificial intelligence in achieving the Sustainable Development Goals. Nature Communications, 11(1), 1–10. https://doi.org/10.1038/s41467-019-14108-y
Winfield, A. F. T., & Jirotka, M. (2018). Ethical governance is essential to building trust in robotics and artificial intelligence systems. Philosophical Transactions of the Royal Society A, 376(2133), 20180085. https://doi.org/10.1098/rsta.2018.0085
Zeng, Y., Lu, E., & Huangfu, C. (2018). Linking artificial intelligence principles. arXiv. https://doi.org/10.48550/arXiv.1812.04814
Data Availability Statement
This study is based on a systematic review of peer-reviewed literature, and no primary data were collected. All sources analyzed are publicly available through academic databases such as Scopus, IEEE Xplore, and Web of Science. A complete list of reviewed articles is included in the references. No additional datasets were generated or analyzed during the study.
License
Copyright (c) 2025 Miracle Atianashie, Mark K. Kuffour, Bernard Kyiewu, Philipa Serwaa (Author)

This work is licensed under a Creative Commons Attribution 4.0 International License.





