MACHINE LEARNING (ML) TO EVALUATE GOVERNANCE, RISK, AND COMPLIANCE (GRC) RISKS ASSOCIATED WITH LARGE LANGUAGE MODELS (LLMs)

Authors

Bhatta, U.
DOI:

https://doi.org/10.70715/jitcai.2025.v2.i2.022

Keywords:

Artificial Intelligence, Machine learning, Large Language Model, Governance, Risk, Compliance

Abstract

In today’s AI-driven digital world, Governance, Risk, and Compliance (GRC) has become vital for organizations as they leverage AI technologies to drive business success and resilience. GRC represents a strategic approach that helps organizations use Large Language Models (LLMs) to automate tasks and enhance customer service while navigating regulatory complexity across industries and regions. This paper explores a machine learning approach to evaluating the GRC risks associated with LLMs. It utilizes Azure OpenAI Service logs to construct a representative dataset whose key features include response_time_ms, model_type, temperature, tokens_used, is_logged, data_sensitivity, compliance_flag, bias_score, and toxicity_score. These features are used to train a model that predicts the GRC risk level of LLM interactions, enabling organizations to improve efficiency, foster innovation, and deliver customer value while maintaining compliance with regulatory requirements.
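To make the approach concrete, the sketch below trains a scikit-learn classifier on the features named in the abstract. The synthetic log records, the three-level risk_level label, and the choice of a random forest are illustrative assumptions for demonstration, not the paper's actual Azure OpenAI dataset or method.

# Minimal sketch: predicting a GRC risk level from LLM interaction logs.
# All records below are synthetic; risk_level is an assumed target label.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Hypothetical interactions shaped like the features named in the abstract.
data = pd.DataFrame({
    "response_time_ms": [120, 850, 430, 2100, 95, 1500],
    "model_type": ["gpt-4", "gpt-35-turbo", "gpt-4", "gpt-4",
                   "gpt-35-turbo", "gpt-4"],
    "temperature": [0.2, 0.9, 0.7, 1.0, 0.0, 0.8],
    "tokens_used": [310, 1200, 640, 3900, 150, 2800],
    "is_logged": [1, 0, 1, 0, 1, 1],
    "data_sensitivity": [0, 2, 1, 2, 0, 1],  # 0=public, 1=internal, 2=restricted
    "compliance_flag": [0, 1, 0, 1, 0, 0],
    "bias_score": [0.05, 0.40, 0.15, 0.55, 0.02, 0.30],
    "toxicity_score": [0.01, 0.35, 0.10, 0.60, 0.00, 0.20],
    "risk_level": ["low", "high", "medium", "high", "low", "medium"],
})

X = data.drop(columns=["risk_level"])
y = data["risk_level"]

# One-hot encode the categorical model_type; numeric features pass through.
pipeline = Pipeline([
    ("prep", ColumnTransformer(
        [("model", OneHotEncoder(handle_unknown="ignore"), ["model_type"])],
        remainder="passthrough",
    )),
    ("clf", RandomForestClassifier(n_estimators=100, random_state=42)),
])
pipeline.fit(X, y)

# Score a new, unseen interaction.
new_interaction = pd.DataFrame([{
    "response_time_ms": 700, "model_type": "gpt-4", "temperature": 0.6,
    "tokens_used": 900, "is_logged": 1, "data_sensitivity": 1,
    "compliance_flag": 0, "bias_score": 0.12, "toxicity_score": 0.05,
}])
print(pipeline.predict(new_interaction))  # e.g. ['medium']

On a real log export, one would hold out a test set and evaluate precision and recall per risk class before relying on such predictions for GRC triage.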

Published

10/29/2025

How to Cite

Bhatta, U. (2025). MACHINE LEARNING (ML) TO EVALUATE GOVERNANCE, RISK, AND COMPLIANCE (GRC) RISKS ASSOCIATED WITH LARGE LANGUAGE MODELS (LLMs). Journal of Information Technology, Cybersecurity, and Artificial Intelligence, 2(2), 107–118. https://doi.org/10.70715/jitcai.2025.v2.i2.022