Humans rely on machines to accomplish missions, while machines need humans to make them more intelligent and more capable. Neither side can operate without the other, especially in complex environments where autonomous mode is engaged. Matters become more complicated still when law and ethical principles must be applied in these environments. One solution is human-machine teaming, which combines the best that humans can offer with the best that machines can provide. This article explores ways of implementing law and ethical principles in artificial intelligence (AI) systems through human-machine teaming. It examines existing approaches, reveals their limitations, and calls for the establishment of accountability and a checks-and-balances framework in AI systems. It also discusses the legal and ethical implications of this solution.



