
Publications

Research and Articles

My research addresses current AI and analytics ethics questions faced by organizations. I use data analytics, machine learning, and qualitative interview-based research methodologies to study AI ethics from multiple angles.


Tailoring Explainable Artificial Intelligence to Reduce Algorithm Aversion and Improve Profitability (Stephanie Kelley, Anton Ovchinnikov, Adrienne Heinrich, Sook Yee Chong)

2025 WIP

We conduct a series of lab-in-the-field and lab experiments to investigate the preferences of real AI users concerning the tailoring of explainable artificial intelligence (XAI) and its impact on reducing algorithm aversion and improving profitability. Study #1 involves a choice-based conjoint (CBC) survey, analyzed using the Hierarchical Bayes method, to study the effect of tailoring the 1) XAI approach, 2) type of AI model, 3) number of features in the visualization, 4) aggregation level, and 5) lending outcome on AI user preferences. Study #2 integrates the preference results from Study #1 into a series of lab-in-the-field and lab experiments to determine whether tailoring the XAI visualizations reduces AI user algorithm aversion. These findings are integrated into Study #3 to measure the impact of tailoring XAI visualizations, and the resulting changes in algorithm aversion behaviour, on firm profitability. In Study #1, we find that (i) the XAI approach is the most important factor driving preferences, (ii) AI users prefer certain combinations of XAI approaches and models to be used together, (iii) users prefer to see more features in the XAI visualizations, (iv) users do not have a preference between individual and group aggregation levels, and (v) their preferences do not change across favourable or unfavourable model outcomes. Study #2 shows that (vi) tailoring XAI visualizations to AI user preferences leads to increased AI model recommendation adherence (reduced algorithm aversion); however, this gain is restricted to favourable model outcomes. Lastly, we find that (vii) by tailoring XAI visualizations, firms can capture more profit than without tailoring. All studies are conducted in a lending setting, using real loan data and models developed in collaboration with data scientists from a large institutional lender in Asia.
The lab-in-the-field experiments involve real AI users—the firm’s loan officers—and the results are supported by a student lab experiment. The paper integrates guidance from industry practitioners on organizational implementation of the research findings.
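As a rough illustration of how finding (i) can be derived from a CBC analysis, the sketch below computes relative attribute importance from part-worth utilities (an attribute's utility range divided by the sum of all ranges). The attribute names and utility values are hypothetical stand-ins, not figures from the paper; in the study, part-worths would come from Hierarchical Bayes estimation on the survey responses.

```python
# Hedged sketch: relative attribute importance from conjoint part-worth
# utilities. All names and numbers below are hypothetical illustrations.

def attribute_importance(part_worths):
    """Importance of an attribute = its utility range / sum of all ranges."""
    ranges = {attr: max(levels.values()) - min(levels.values())
              for attr, levels in part_worths.items()}
    total = sum(ranges.values())
    return {attr: r / total for attr, r in ranges.items()}

# Hypothetical part-worths for three of the five tailoring factors
part_worths = {
    "xai_approach": {"SHAP": 1.2, "LIME": 0.4, "counterfactual": -1.6},
    "num_features": {"5": -0.3, "10": 0.3},
    "aggregation":  {"individual": 0.05, "group": -0.05},
}

importance = attribute_importance(part_worths)
# The attribute with the widest utility range dominates preferences,
# mirroring the finding that the XAI approach is the most important factor.
```

Because importance is a share of the total utility range, the values sum to one and can be compared directly across attributes.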


Developing an Artificial Intelligence Ethics Governance Checklist for the Legal Community (Stephanie Kelley)

2025 WIP

This study develops a stakeholder-informed artificial intelligence (AI) ethics governance checklist tailored for Canadian law firms to help them harness the productivity and economic advantages of AI while minimizing the risks of unethical outcomes. Recognizing the limitations of existing AI principles (AIPs) in preventing unethical outcomes, this research uses semi-structured interviews, qualitative content analysis, and expert stakeholder engagement to design an eight-page AI ethics governance checklist. In addition to the output of a practical governance checklist, the study reports findings about the development of stakeholder-informed governance checklists. The findings reveal that Canadian lawyers share global concerns surrounding AI risks, including privacy, accountability, safety and security, transparency and explainability, human oversight, professional responsibility, and the promotion of human values. In addition, many law firms interact with AI primarily through third-party vendors, making a principle-based checklist the most practical approach. The research also highlights the importance of question format, suggesting that balancing clarity (using Yes/No options) with flexibility (allowing for open-ended comments) is essential, given the complex ethical considerations involved. The study also finds there is a need to integrate the checklist with existing policies, such as privacy impact assessments and IT risk evaluations, alongside relevant regulatory frameworks. Additionally, tailoring language and definitions to reflect the specific needs of stakeholders (in this case, lawyers) enhances usability and effectiveness. The resulting eight-page, stakeholder-informed AI ethics governance checklist has been adopted by several Canadian law firms and Barristers' Societies, offering a practical tool to guide the responsible adoption of AI in the legal sector.

The emergence of artificial intelligence ethics auditing (Daniel S. Schiff, Stephanie Kelley, Javier Camacho Ibanez)

Big Data & Society, 2024

The emerging ecosystem of artificial intelligence (AI) ethics and governance auditing has grown rapidly in recent years in anticipation of impending regulatory efforts that encourage both internal and external auditing. Yet, there is limited understanding of this evolving landscape. We conduct an interview-based study of 34 individuals in the AI ethics auditing ecosystem across seven countries to examine the motivations, key auditing activities, and challenges associated with AI ethics auditing in the private sector. We find that AI ethics audits follow financial auditing stages, but tend to lack robust stakeholder involvement, measurement of success, and external reporting. Audits are hyper-focused on technically oriented AI ethics principles of bias, privacy, and explainability, to the exclusion of other principles and socio-technical approaches, reflecting a regulatory emphasis on technical risk management. Auditors face challenges, including competing demands across interdisciplinary functions, firm resource and staffing constraints, lack of technical and data infrastructure to enable auditing, and significant ambiguity in interpreting regulations and standards given limited (or absent) best practices and tractable regulatory guidance. Despite these roadblocks, AI ethics and governance auditors are playing a critical role in the early ecosystem: building auditing frameworks, interpreting regulations, curating practices, and sharing learnings with auditees, regulators, and other stakeholders.

Employee Perceptions of Effective AI Principle Adoption (Stephanie Kelley)

Journal of Business Ethics, 2022

This study examines employee perceptions of the adoption of artificial intelligence (AI) principles in their organizations. Forty-nine interviews were conducted with employees of 24 organizations across 11 countries. Participants worked directly with AI across a range of positions, from junior data scientist to Chief Analytics Officer. The study found that there are eleven components that could impact the effective adoption of AI principles in organizations: communication, management support, training, an ethics office(r), a reporting mechanism, enforcement, measurement, accompanying technical processes, a sufficient technical infrastructure, organizational structure, and an interdisciplinary approach. The components are discussed in the context of business code adoption theory. The findings offer a first step in understanding potential methods for effective AI principle adoption in organizations.

Anti-discrimination Laws, AI, and Gender Bias: A Case Study in Non-mortgage Fintech Lending (Stephanie Kelley, Anton Ovchinnikov, David R. Hardoon, Adrienne Heinrich)

Manufacturing & Service Operations Management, 2022

*Selected by IRCAI & UNESCO as one of the Global Top 100 AI Solutions for the UN SDGs

Problem definition: We use a realistically large, publicly available data set from a global fintech lender to simulate the impact of different antidiscrimination laws and their corresponding data management and model-building regimes on gender-based discrimination in the nonmortgage fintech lending setting. Academic/practical relevance: Our paper extends the conceptual understanding of model-based discrimination from computer science to a realistic context that simulates the situations faced by fintech lenders in practice, where advanced machine learning (ML) techniques are used with high-dimensional, feature-rich, highly multicollinear data. We provide technically and legally permissible approaches for firms to reduce discrimination across different antidiscrimination regimes whilst managing profitability. Methodology: We train statistical and ML models on a large and realistically rich publicly available data set to simulate different antidiscrimination regimes and measure their impact on model quality and firm profitability. We use ML explainability techniques to understand the drivers of ML discrimination. Results: We find that regimes that prohibit the use of gender (like those in the United States) substantially increase discrimination and slightly decrease firm profitability. We observe that ML models are less discriminatory, of better predictive quality, and more profitable compared with traditional statistical models like logistic regression. Unlike omitted variable bias—which drives discrimination in statistical models—ML discrimination is driven by changes in the model training procedure, including feature engineering and feature selection, when gender is excluded. We observe that downsampling the training data to rebalance gender, gender-aware hyperparameter selection, and upsampling the training data to rebalance gender all reduce discrimination, with varying trade-offs in predictive quality and firm profitability.
Probabilistic gender proxy modeling (imputing applicant gender) further reduces discrimination with negligible impact on predictive quality and a slight increase in firm profitability. Managerial implications: A rethink of antidiscrimination laws is required, specifically with respect to the collection and use of protected attributes for ML models. Firms should be able to collect protected attributes to, at a minimum, measure discrimination and, ideally, take steps to reduce it. Increased data access should come with greater accountability for firms.
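One of the mitigation approaches compared in the paper, downsampling the training data to rebalance gender, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the records and field names below are hypothetical stand-ins for the lender's loan-application data.

```python
# Hedged sketch: rebalancing training data by randomly downsampling the
# majority gender group to the size of the minority group.
import random

def downsample_majority(records, group_key, seed=0):
    """Randomly drop majority-group records until all groups are equal size."""
    groups = {}
    for r in records:
        groups.setdefault(r[group_key], []).append(r)
    n_min = min(len(g) for g in groups.values())  # minority group size
    rng = random.Random(seed)  # fixed seed for reproducibility
    balanced = []
    for g in groups.values():
        balanced.extend(rng.sample(g, n_min))  # keep n_min records per group
    return balanced

# Hypothetical imbalanced training set: 30 female vs. 70 male applicants
data = ([{"gender": "F", "default": 0}] * 30
        + [{"gender": "M", "default": 1}] * 70)
balanced = downsample_majority(data, "gender")
```

The rebalanced set would then be used to retrain the lending model; as the abstract notes, the approaches trade off discrimination reduction against predictive quality and profitability in different ways.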

Dr. Stephanie Kelley

AI & Analytics Ethics Researcher


©2022 by Stephanie Kelley - AI & Analytics Ethics Researcher.
