Degree Granting Department

Business Administration

Major Professor

Alan R. Hevner, Ph.D.

Co-Major Professor

Gert-Jan de Vreede, Ph.D.

Committee Member

T. Grandon Gill, D.B.A.

Committee Member

Loran Jarrett, D.B.A.

Committee Member

Uday Murthy, Ph.D.


AI Ethical Practices, AI Ethical Principles, AI Ethics, Artificial Intelligence, ERM, Ethics


The effective use of Artificial Intelligence (AI) has immediate business benefits for an organization and its stakeholders through efficiency and quality gains, and through the potential to explore and implement new business models. However, there are risks of unintended ethical consequences. Enterprise Risk Management (ERM) focuses on managing risk while maximizing the business value gained from exploiting opportunities. Using applied ethics as a basis, and the perspective that ethics includes both enabling human flourishing and not violating accepted norms, I argue that greater business value is achieved when an organization simultaneously targets the maximization of benefits and the minimization of harms for the organization and its stakeholders. Further, AI system solutions (AISS) are complex socio-technical solutions that present increased business risks because of their rapid and unexpected emergent behaviors and their tight integration with the human user and the environment. Therefore, with the goal of creating AISS in an ethical way, this study proposes a novel enhancement to existing ERM frameworks to maximize business value through the effective management of the dynamic AI ethical risks presented by advanced AISS. In this research, I first argued that creating AISS in an ethical way requires ensuring that the organization’s AI Ethical Principles are operationalized as daily AI Ethical Practices that are incorporated into an organization’s business and ERM processes. Second, I argued that managing AI-related ethical risk is best achieved by building onto and enriching an organization’s existing enterprise-wide risk management approaches. To this end, I proposed an enhanced and dynamic ERM approach, which I called the e-ERM framework.
The e-ERM framework is built upon the theories and constructs of risks and risk management, ethics and ethical risks, information technology system development, and the impact of AI on these theories and constructs, all framed within the context of ERM. Because of the breadth of each of these domains, this study is bounded and considers only applied ethics, cognitive AI, AI ethical risks, ERM, and parts of the AISS system development lifecycle. Its focus is primarily on the ethical impact of AISS on Human Resources-related processes. I followed a sequential, multi-phased, mixed-method research approach based on the elaborated Action Design Research (eADR) process model to co-create the e-ERM framework with a variety of stakeholders and subject matter experts. This process included refining the problem by drawing on my experience, performing a literature review, and engaging focus groups of subject matter experts. I then iterated the design of the e-ERM framework using extensive surveys of organizations developing AISS, one-to-one interviews, and the analysis of publicly available AI incident databases. Based on a survey with 2,047 participants, I found that AI-enabled solutions were being developed by 28.2 percent of businesses in the United States, and by more than 50 percent of larger businesses (those with more than 20,000 employees). In a more detailed survey with 206 responses, I found that AI was seen as more than “moderately important” by those developing AISS, with concomitant levels of spending. These findings confirmed the need for my research. Through the focus groups, I found that using an enhanced ERM framework was one way of maximizing business and stakeholder benefit while minimizing harm.
I confirmed through the survey and interviews that the principles-to-practice gap remains, and with the interviewees I identified several pragmatic AI Ethical Practices that make the AI Ethical Principles tangible and thereby help close the gap. Further, I found that enhancing an organization’s ERM approach to include managing the dynamic ethical risks resulting from the creation and use of AISS was a valid approach. The alternative, leaving it to the AISS creators and their processes to address the risk, was seen as ineffective based on the focus group, survey, and interview analyses. I also confirmed with the focus groups and interviewees the need for a dynamic approach to ERM, rather than only a routine-based approach (e.g., quarterly risk reviews). Based on these findings, and working in collaboration with subject matter experts in focus groups, through surveys, and in interviews, we together designed the e-ERM framework. This pro-ethical framework can be applied in organizations wishing to maximize the business and stakeholder benefits, and minimize the harms, of the use of AISS. My research makes several contributions. First, I validated the need for a dynamic and enhanced ERM approach. Second, I provided a corroborated design of an e-ERM framework for the benefit of both research and practice. Third, I cataloged the use of AI by more than 200 organizations in terms of the AI Ethical Principles and AI Ethical Practices in use, along with insights into their risk management approaches. Finally, applying the eADR process model to focus groups, survey participants, and several interviews across multiple organizations, rather than working within a single organization to solve an existing problem, was a novel application of eADR that provided an additional contribution to the research body of knowledge.
In conclusion, I propose the e-ERM framework developed in my research as an approach for businesses that desire to make the most of the opportunities afforded by AI-enabled solutions, enabling them to manage the potential risks of unintended negative consequences to both the business and its stakeholders in a beneficial and pro-ethical way.