Graduation Year

2023

Document Type

Dissertation

Degree

Ph.D.

Degree Name

Doctor of Philosophy (Ph.D.)

Degree Granting Department

School of Accountancy

Major Professor

Thomas Smith, Ph.D.

Committee Member

Uday Murthy, Ph.D.

Committee Member

Jong Chool Park, Ph.D.

Committee Member

Eun Sook Kim, Ph.D.

Keywords

Machine Learning, Natural Language Processing, Risk Disclosure, Management Discussion and Analysis, Going Concern Opinion, Type I Error, Type II Error

Abstract

Auditors are required to issue modified audit opinions if they have sufficient doubt about the client's ability to continue as a going concern. These going concern opinions are an important information resource for financial statement users evaluating client performance and are associated with a number of negative capital market outcomes (e.g., negative returns and an increased cost of capital). Despite their use by capital market participants, going concern opinions are commonly plagued by Type I errors (false positives) and Type II errors (false negatives), making them a particularly noisy measure. The purpose of this study is to determine whether machine learning can be leveraged to reduce this noise by (1) identifying disclosure patterns where going concern accuracy is likely lower (higher) and (2) developing measures from these disclosures that help predict variation in going concern accuracy. Specifically, I use a machine learning technique (Top2Vec) to identify differences in disclosure topics among financially distressed clients' risk disclosures (Item 1A) and Management Discussion and Analysis disclosures (Item 7), conditioned on the accuracy of the going concern opinion (accurate, Type I error, or Type II error). I find significant differences in the topics discussed by Type I error and Type II error clients relative to clients receiving accurate going concern opinions or evaluations. An accurate going concern opinion is one in which the client receives a going concern opinion in the current year and files for bankruptcy protection in the subsequent year. Accurate going concern evaluations include not only accurate going concern opinions but also cases in which the client does not receive a going concern opinion in the current year and does not file for bankruptcy protection in the subsequent year (i.e., an accurate omission of a going concern opinion).
In the Type I error setting (ignoring Type II errors in this analysis), the probability of an accurate going concern opinion is higher if clients disclose human capital and supply chain risks, or if clients disclose tax-related factors. The probability of an accurate going concern evaluation is higher (lower) if clients disclose human capital, dispersion, legal, and macro-economic risks (funding, financial condition, debt, operational, attestation, and stock market risks), and the probability is lower if clients disclose factors regarding growth potential, stocks, and political contributions. In the Type II error setting (ignoring Type I errors in this analysis), the probability of an accurate going concern opinion is higher (lower) if clients disclose bankruptcy and operational risks (development, supply chain, and environmental risks), or if clients disclose factors regarding bankruptcy, performance changes, and costs (operational performance and tax). The probability of an accurate going concern evaluation is higher (lower) if clients disclose macro-economic, intellectual property, and investment risks (development and oil/gas risks), or if clients disclose factors regarding human capital (loan and operational performance). After providing evidence on which disclosure topics are associated with going concern accuracy, I then examine whether machine learning can be used to create measures, based on the textual information disclosed in Item 1A and Item 7, that improve models attempting to determine whether an observed going concern opinion is accurate. My findings support the validity and effectiveness of these machine-learning-developed proxies in predicting accurate going concern opinions, identifying Type I errors, and identifying Type II errors. I further demonstrate their superiority over common text-based measures that do not utilize machine learning. The findings of this study have important implications for auditors, regulators, and academia.
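The topic-comparison step described above can be sketched roughly as follows. This is an illustrative stand-in only: it uses simple relative word frequencies rather than the Top2Vec embedding model the study actually employs, and the toy disclosure texts and group labels are hypothetical, not drawn from the study's sample.

```python
from collections import Counter

def token_freqs(docs):
    """Relative token frequencies across a group of disclosure texts."""
    counts = Counter()
    for doc in docs:
        counts.update(doc.lower().split())
    total = sum(counts.values())
    return {tok: n / total for tok, n in counts.items()}

def distinguishing_tokens(group_a, group_b, top_n=3):
    """Tokens over-represented in group_a relative to group_b."""
    fa, fb = token_freqs(group_a), token_freqs(group_b)
    diffs = {tok: fa[tok] - fb.get(tok, 0.0) for tok in fa}
    return sorted(diffs, key=diffs.get, reverse=True)[:top_n]

# Hypothetical toy disclosures (accurate-opinion clients vs. Type I error clients)
accurate = [
    "bankruptcy risk and debt covenant pressure",
    "bankruptcy filing risk from operational losses",
]
type1 = [
    "supply chain disruption and development costs",
    "environmental compliance and development risk",
]

print(distinguishing_tokens(accurate, type1))
```

In the actual study, the groups would be Item 1A / Item 7 texts partitioned by going concern accuracy, and Top2Vec would supply semantically clustered topics rather than raw token counts.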

Included in

Accounting Commons
