ABSTRACT

The audit profession is facing a major transition toward a tech-savvy environment, i.e., one that extensively employs technologies such as data analytics and continuous auditing in daily work. One of the biggest challenges in this transition is the lack of skilled and experienced auditors who can use such technologies effectively and efficiently. To address this problem, this editorial proposes a new architecture, named Continuous Audit Intelligence as a Service (CAIaaS), that helps auditors fully exploit technologies even with limited experience and knowledge. In the CAIaaS, auditors could capture and transmit their client data to a cloud and then generate intelligent apps to accomplish specific tasks. Moreover, a recommender system could suggest the most appropriate apps to use in a particular engagement. The CAIaaS platform and the recommender system, together with other intelligent audit aids, compose a CAI-based audit paradigm that enables semi-automatic app development, app recommendation, and result analysis.

I. INTRODUCTION

Recent advances in technology have accelerated the digitalization of the auditing profession and its acquisition of intelligent capabilities. Auditors are starting to explore IT applications designed for auditing purposes, such as continuous auditing (CA), audit data analytics (ADA), and robotic process automation (RPA). The purpose is to enable risk identification and evidence collection from the entire population of company data as well as exogenous information, to automate repetitive audit procedures, and to change the frequency of performing audits from annually or quarterly to close to real time. Moreover, the recent vision of the next generation of auditing, named Audit 4.0, largely increases the degree of technology-based assurance (Alles and Gray 2019) by applying the various emerging technologies that compose Industry 4.0 to audit processes (Dai and Vasarhelyi 2016). In Audit 4.0, auditors will “piggyback on technology promoted by Industry 4.0, especially the Internet of Things (IoT), Internet of Service (IoS), Cyber-Physical Systems (CPSs), and smart factories, to collect financial and operational information, as well as other audit-related data from an organization and its associated parties” (Dai and Vasarhelyi 2016). Audit 4.0 links the physical world to a “mirror” world by continuously transmitting the conditions, locations, surrounding environment, etc. of each object in an organization, as well as its business partners, to a virtual model of the value chain. Auditors could then rely on the information collected in the mirror world to build analytic models for anomaly identification and to automate a variety of audit processes, such as remote inventory observation, cash balance evaluation, and real-time fault and irregularity detection.

Although auditors are increasingly aware of the value of intelligent technologies, surveys indicate that the adoption and use of technology are substantially below expectations (EY 2014; KPMG 2015; Li, Dai, Gershberg, and Vasarhelyi 2018). One reason is that the lack of skilled and experienced auditors impedes the transformation (Dai and Vasarhelyi 2016), especially for small audit firms or those in less-developed countries. Human IT skills, along with IT infrastructure and IT reconfigurability, are unique IT resources of a firm (Haseeb, Hussain, Ślusarczyk, and Jermsittiparsert 2019); they are necessary to employ new technologies in audit tasks yet difficult for small audit firms to acquire. To use technologies appropriately, auditors must exercise professional judgment on issues such as which tasks can be supported by those technologies, what data should be collected to build models, and which algorithms/tools are suitable to accomplish the tasks. Due to limited experience, auditors may analyze only portions of the data and concentrate on issues they are familiar with or have experienced while ignoring other critical ones (O'Leary 2015). Also, auditors who are new to data-driven technologies may not be able to create effective models, which could lead to failures in misstatement detection or overwhelming numbers of false alerts. Such an outcome may discourage auditors from exploring the potential of those intelligent applications. Another reason is that the large investments required in IT infrastructure could hinder small audit firms from adopting new technologies (Munoko, Brown-Liburd, and Vasarhelyi 2020).

Solutions for the aforementioned challenges may be inspired by other industries. For example, an IT company offers customized image recognition models to video surveillance system developers by automatically training models on their images in a comprehensive cloud using built-in algorithms, which allows the developers to operate the models on local devices. As a result, developers without expertise in modeling can simply download the intelligent modules trained in the cloud and add them to their video streams, while protecting data privacy and saving bandwidth costs because the incoming video streams are analyzed at their own sites. A similar mechanism could be created to help less-experienced auditors use intelligent applications. In this mechanism, a cloud could serve as the main host for the development and operation of a large number of intelligent apps and charge based on usage. Auditors could upload their clients' data to the cloud, use built-in functions to create intelligent apps, and deploy those apps at clients' sites to perform CA/CM activities. Apps created in the cloud could later be reused to help auditors who deal with similar clients.

As the number and variety of apps increase, it would be challenging to choose the right apps for each particular auditor and client, while choosing appropriate tools is crucial to perform effective analyses (Brown-Liburd, Issa, and Lombardi 2015). A potential solution is to rely on the recommender system technique to select the right apps. Recommender systems have been broadly used in ecommerce to predict user preferences on products. Such a technique could be employed to generate customized app suggestions based on client industry, auditor background knowledge, previous experience, familiarity with technology, etc.

This editorial explores the potential of a new architecture, named Continuous Audit Intelligence as a Service (CAIaaS), which could help auditors fully use emerging technologies even with limited experience and knowledge. In the CAIaaS, IoT is combined with the cloud, as well as artificial intelligence, data mining, and machine learning, to provide a comprehensive platform that allows auditors to capture and transmit their client data, automatically formulate CA/CM models and generate intelligent apps, and deploy them at their own or clients' sites. If an auditor cannot perform model formulation or would like to explore new apps, a recommender system could further suggest the most appropriate apps to deploy in a particular engagement. Finally, a CAI-based audit paradigm is presented, which is composed of the CAIaaS platform, the recommender system, and other intelligent audit-aid systems.

The remainder of this editorial proceeds as follows: Section II provides the background of the technologies that enable the CAIaaS. Section III presents the key elements and structure of the CAIaaS architecture. Section IV improves the architecture by suggesting the most appropriate apps to auditors based on the recommender system technique. A CAI-based audit paradigm is demonstrated and discussed in Section V. The challenges facing the adoption and implementation of the CAI-based audit paradigm are discussed in Section VI. Section VII concludes and discusses potential research directions.

II. BACKGROUND

Both academia and practice are exploring the potential use of emerging technologies in the field of auditing. Those technologies include, among others, the Internet of Things (IoT), cloud computing, and recommender systems. Also, several auditing-oriented IT applications, such as continuous auditing and monitoring, are gradually being developed and used by audit firms and clients. This section summarizes the technologies and IT applications that are necessary to enable CAIaaS.

Big Data and the Internet of Things

In recent years, the term “Big Data” has received increasing attention from the auditing profession because more data have been collected by organizations in the recent two years than in the previous 2,000 years (Syed, Gillela, and Venugopal 2013). One main source contributing to Big Data is the information captured or generated by the Internet of Things (IoT). IoT refers to millions of objects that are interconnected with each other via the internet (Xia, Yang, Wang, and Vinel 2012). The objects include sensors (collecting data from the physical world), actuators (receiving commands to take actions that impact the physical world), smartphones, home/work appliances, cars, and any other device or object that can be connected, monitored, or actuated (Biswas and Giaffreda 2014). The rapid development of IoT opens tremendous opportunities in a variety of fields such as living assistance, healthcare, transportation, city development, environmental monitoring, agriculture, and manufacturing (Wikipedia 2020).

Audit 4.0, the new auditing schema proposed by Dai and Vasarhelyi (2016), explores various potential uses of IoT to assist auditors. By equipping goods with IoT devices, auditors could examine the locations and conditions of inventory items on their computers or smartphones and receive alerts when the sales data in financial statements do not match the actual movement of physical products. IoT could enhance the quality of inventory or asset examination, especially when the inventory or assets are less accessible using traditional audit methods. For example, a Chinese seafood company explained that the repeated dramatic changes in its financial statements occurred because a large number of scallops fled from its farms; the explanation was later proved to be part of a major fraud by the company (Xinyue and Wei 2019). The examination of seafood inventories underwater is difficult. Such fraud could have been instantly detected if IoT devices had been placed in the farms to monitor the quantity and condition of the scallops. Another example is using IoT air quality monitors to continuously audit government officials' performance on air protection (Dai, He, and Yu 2019). Citizens, as well as other interested parties, can collect real data about their surrounding air quality using IoT monitors, report air pollution cases in real time, and alert government auditors and officials to take appropriate actions.

Cloud Computing

The recent trend of combining IoT and artificial intelligence (AI), data mining (DM), and machine learning (ML) with cloud computing technology opens a new horizon for real-time data collection and analysis (Bacciu, Chessa, Gallicchio, and Micheli 2017; Biswas and Giaffreda 2014). The National Institute of Standards and Technology defines cloud computing as “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction” (Mell and Grance 2011). Cloud computing can provide elastic and scalable computing resources, especially data storage and computing power, with minimal management effort by users, and can be accessed from anywhere at any time (Biswas and Giaffreda 2014). Audit firms can outsource the management of IT infrastructure to the cloud to save IT costs and labor. The cloud serves as a remote workstation with a large data center and a variety of auditing software, and it provides scalable IT solutions that allocate more computing and storage resources to auditors during peak times. The cloud plays an even more important role if auditors are armed with AI/DM/ML and IoT. For example, a recent dynamic audit solution (DAS) project launched by the AICPA and CPA.com encourages audit firms to develop data analytics and machine learning applications on a cloud. Operating AI/DM/ML algorithms on an enormous amount of data collected from IoT devices and other sources, and providing fast discernment of risky or erroneous transactions from normal ones, require substantial and specialized IT resources. The cloud can offer specialized hardware for AI/DM/ML operations, such as GPUs for intensive workloads, state-of-the-art analytical software, and a large space to store and analyze very large amounts of data.

Recommender Systems

With the growth of the ecommerce market, companies seek a tool to accurately present customers with those products they are most likely to purchase. Recommender systems fulfill this task by predicting the preference a user would give to a certain item based on the features of the product or the user's social environment (Ricci and Shapira 2011). Recommender systems collect customers' preferences in the past, demographic information of users, or the attributes of items, and make suggestions based on those data. Such information can be obtained explicitly (by collecting user ratings, comments, etc.) or implicitly (by monitoring user behavior, such as websites visited and goods purchased) (Choi, Yoo, Kim, and Suh 2012; Lee, Cho, and Kim 2010).

Based on the type of underlying filtering algorithms, recommender systems generally can be divided into four categories: (1) demographic, (2) content-based, (3) collaborative, and (4) hybrid (Adomavicius and Tuzhilin 2005; Bobadilla, Ortega, Hernando, and Gutiérrez 2013). Demographic-filtering recommender systems (Krulwich 1997; Pazzani 1999) predict customers' preferences from the opinions of people who have similar demographic characteristics (such as gender, age, educational level, region, etc.). Content-based-filtering (CBF) recommender systems choose products for customers that are similar to those they preferred in the past (Lang 1995; Mooney and Roy 2000). They usually generate recommendations using the contents of the items, such as category, production date, or even more complex information like textual descriptions. Collaborative-filtering (CF) recommender systems analyze customer ratings and suggest the items that are preferred by people with similar tastes. For example, the GroupLens recommender system (Resnick, Iacovou, Suchak, Bergstrom, and Riedl 1994) uses customer ratings to estimate their preferences and clusters users with similar preferences into groups. Hybrid recommender systems combine the aforementioned methods to provide optimal performance (Burke 1999).
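To illustrate the collaborative-filtering idea behind systems such as GroupLens, the following minimal sketch predicts a missing rating as a similarity-weighted average of other users' ratings for the same item. The rating matrix and function names are invented for illustration; this is not the algorithm of any cited system.

```python
import numpy as np

def predict_rating(ratings, user, item):
    """Predict a missing rating with user-based collaborative filtering.

    ratings: 2-D array of user-by-item ratings, where 0 marks "not rated".
    The prediction is the similarity-weighted average of other users'
    ratings for the item, using cosine similarity between rating rows.
    """
    target = ratings[user]
    num, den = 0.0, 0.0
    for other in range(ratings.shape[0]):
        if other == user or ratings[other, item] == 0:
            continue  # skip the target user and users who never rated the item
        # cosine similarity over the items both users have rated
        both = (target > 0) & (ratings[other] > 0)
        if not both.any():
            continue
        a, b = target[both], ratings[other][both]
        sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        num += sim * ratings[other, item]
        den += abs(sim)
    return num / den if den else 0.0

# Three users, four items; user 0 has not rated item 3.
R = np.array([[5, 3, 4, 0],
              [5, 3, 4, 2],
              [1, 5, 2, 5]], dtype=float)
print(round(predict_rating(R, user=0, item=3), 2))
```

Because user 1's ratings closely track user 0's, user 1's opinion of item 3 dominates the prediction, which is the core intuition of the "similar tastes" heuristic.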

Continuous Auditing and Monitoring

The concept of CA was first proposed in academia, and its earliest application was developed for a corporate billing system (Vasarhelyi and Halper 1991) in 1991. Later, CA evolved into a much broader concept named “continuous assurance” (Vasarhelyi, Alles, and Williams 2010), which consists of three main components: continuous data assurance (CDA) (Kogan, Alles, Vasarhelyi, and Wu 2014), continuous controls monitoring (CCM) (Alles, Brennan, Kogan, and Vasarhelyi 2006), and continuous risk monitoring and assessment (CRMA) (Moon and Krahel 2020), providing assurance close to real time. CDA continuously and automatically executes transaction-level verification to provide timely assurance (Kogan et al. 2014). CCM monitors internal control activities for violations (Alles et al. 2006; Chan and Vasarhelyi 2011). CRMA focuses on identifying emerging and material risks and prioritizing audit and risk management control procedures (Moon and Krahel 2020). These components provide comprehensive, timely, and accurate assurance and preemptively address significant risks.

The use of continuous auditing (CA) and continuous monitoring (CM) in organizations is increasing with the recent advancements in technologies. For example, Curtis, Chui, and Pavur (2020) investigate the factors that influence management accountants' intentions to champion the adoption of CM in their organizations from the individual, organizational, and innovation-specific perspectives. Acar, Gal, Öztürk, and Usul (2020) demonstrate the use of a company's flowcharts to identify control points in business processes and therefore improve its continuous monitoring and auditing activities. Moon and Krahel (2020) propose a novel methodology that continuously monitors and assesses an organization's business risks using internal and external real-time data sources. O'Leary (2020) explores the use of signal theory to understand the characteristics of signals and their impacts on CA/CM systems. Codesso, Machado de Freitas, Wang, de Carvalho, and da Silva Filho (2020) illustrate a real-life implementation of a CA system for the tax processes at a large manufacturing and retail company. Eulerich, Georgi, and Schmidt (2020) provide empirical evidence showing factors that have significant impacts on the use of CA outcomes when internal auditors perform rule-based audit planning.

III. CONTINUOUS AUDIT INTELLIGENCE AS A SERVICE (CAIAAS) ARCHITECTURE

One of the main challenges in technology adoption by the auditing profession is the need for IT that supports very large data storage and fast analyses, as well as for experts who can provide professional help on how to obtain optimal solutions. Auditors, especially those with less experience and knowledge in the use of technologies, will likely need assistance to build intelligent models to accomplish audit tasks. Audit firms, on the other hand, may hesitate to adopt because of the nature of audit standards, costly hardware and software, and the required investment in maintenance and IT labor.

The “as-a-service” business model could be a potential solution for part of the aforementioned challenge. “As-a-Service,” also known as XaaS, is an emerging trend in the digital economy where “no one wants to sell you anything anymore, they just want to rent it to you” (Barker 2017, ¶1). XaaS allows consumers to pay only for the amount of computing and storage resources, as well as knowledge and labor, that they have used, which provides a cost-effective option for companies to operate their business (Perera, Zaslavsky, Christen, and Georgakopoulos 2014). Audit 4.0 also imagines a similar business model to provide flexible, low-cost, and high-quality audit services remotely using a large cloud (Dai and Vasarhelyi 2016). The cloud collects the requests from clients, matches them with the digitalized services offered by audit firms, and makes recommendations to the clients based on timing and quality. Once an agreement is reached, the audit firm then offers the service remotely by analyzing the client's data in the cloud. The models, software, and hardware are paid for per use instead of being purchased, and auditors can obtain instant technical help from the cloud vendor. Such a subscription business model can largely reduce the upfront IT costs and later maintenance expenses for both audit firms and clients by outsourcing them to a professional vendor. It can also improve market fairness, as smaller firms can afford advanced IT resources to provide high-quality audit services with limited initial investments.

To extend the service model in Audit 4.0, a Continuous Audit Intelligence as a Service (CAIaaS) architecture is proposed, which outsources the management of technologies and the development of intelligent apps to professional providers. Continuous audit intelligence (CAI) refers to the use of AI/DM/ML algorithms on data from a multiplicity of sources (such as IoT, ERPs, and websites) to identify risks and detect financial errors by promptly discerning problematic data elements and tendencies from normal ones. In the CAIaaS architecture, data collected from IoT devices at the audit clients' sites, as well as from ERP systems and other sources, are uploaded to a large, secure cloud platform. Next, those data are prepared and standardized for formulating CA/CM models in the cloud. The models are then refined by auditors with audit or business logic and used to build intelligent apps. The apps are further deployed on auditors' computers to collect evidence or at clients' sites to monitor transactions in real time. Those apps are also stored in a cloud marketplace for future use. As a result, burdens are largely lifted from auditors, allowing them to focus on making audit judgments based on the results from the intelligent apps.

Figure 1 demonstrates the CAIaaS architecture. This architecture mainly contains three stages: (1) data input, (2) app development, and (3) app deployment. In the first stage, both financial and nonfinancial data are collected from audit clients, not only through traditional approaches (bookkeeping) but also from IoT devices (e.g., cameras, sensors, GPS trackers), operational databases, and outside sources such as social media and news networks, and uploaded to a cloud platform where data analyses and modeling are further processed. For example, GPS trackers and weighing sensors embedded in trucks could capture the real routes and weights of shipments, which could be further used to investigate the revenue of a logistics company. Cameras next to billboards could show solid evidence about whether an advertising company claims fraudulent income from empty billboards. By using IoT devices, auditors can collect evidence, mainly from the physical world, anywhere and anytime.

FIGURE 1

The CAIaaS Architecture


The second stage is to develop apps that will be used for CA/CM purposes on a secure cloud-based platform. This platform includes three main processes: (1) data preparation and standardization, (2) model formulation, and (3) developing apps based on the formulated and refined models. In the data preparation and standardization process, data are cleansed, integrated, and transformed to facilitate further analysis (Tan, Steinbach, and Kumar 2016). Moreover, data need to be standardized to a format that can be recognized by the AI/DM/ML algorithms provided by the platform. This could be a time-consuming process because organizations usually have their own data models that need to be mapped to the one used in the platform. A potential solution could be that both companies and the platform provider follow industry standards (e.g., the Audit Data Standards issued by the AICPA) to build their data models, which would make the transmission of data from companies' IoT devices or ERP systems to the platform seamless (Dai 2017). Next, the prepared data are stored in a large data repository. The data repository also stores data that were previously uploaded from other clients. Auditors could choose to integrate anonymized and relevant data from the data repository with the current dataset, which could enrich the dataset used in the model formulation process and enhance the accuracy of the data-driven models.
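The field-mapping part of the standardization process can be pictured with a minimal sketch. The source column names and the target schema below are hypothetical placeholders, not the actual AICPA Audit Data Standards element names.

```python
# Hypothetical mapping from one client's export columns to a shared
# platform schema (field names are illustrative only).
FIELD_MAP = {
    "inv_no":    "invoice_id",
    "cust":      "customer_id",
    "amt_local": "amount",
    "doc_date":  "transaction_date",
}

def standardize(record, field_map=FIELD_MAP):
    """Rename known fields and flag anything the map does not cover."""
    out, unmapped = {}, []
    for field, value in record.items():
        if field in field_map:
            out[field_map[field]] = value
        else:
            unmapped.append(field)  # left for a human to map once, then reused
    return out, unmapped

row = {"inv_no": "S-1042", "cust": "C-77", "amt_local": 1250.0, "memo": "rush"}
std, todo = standardize(row)
print(std)   # standardized record
print(todo)  # fields still needing a mapping decision
```

If client and platform both followed a common standard, the mapping table would be trivial or unnecessary, which is exactly the argument for shared data standards made above.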

In the model formulation process, the data prepared in the prior step are used to formulate CA/CM models that will help auditors detect misstatements and fraud in a time-efficient manner. The platform is equipped with state-of-the-art software that can operate AI/DM/ML algorithms such as deep learning, Support Vector Machines (SVMs), clustering, decision trees, regression, etc., which could be automatically or semi-automatically applied to those data. For example, a regression could be applied to predict future goods returns by analyzing their relationship with the conditions of inventory items and the surrounding environment (e.g., humidity and pressure, which can be obtained from IoT sensors). Such a prediction could be further used to monitor and detect abnormal return activities. Auditors may select appropriate parameters for models if they have sufficient knowledge and experience, but may also let the algorithms choose optimal ones if they are new to the technologies. Auditors then review the models and bring them into compliance with audit standards or business policies. For example, auditors may disable certain data variables that provide redundant information and distort the results, assign a higher penalty cost to reduce a large number of false-positive cases, or set up a “materiality threshold” on transaction amounts to avoid overwhelming alerts. However, the refinement process could sometimes be limited, especially when AI algorithms with low interpretability (such as neural networks) are used to generate the models, as the internal logic of such models is difficult to understand. As a result, explainable AI algorithms (Samek, Wiegand, and Müller 2017) could be preferable in the development of models for audit purposes. Finally, the models are iteratively reformulated until they achieve optimal performance.
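The regression-plus-threshold idea above can be sketched end to end. The humidity readings, goods-return counts, and the 10-unit materiality threshold below are all invented for illustration; the point is only that residuals from a fitted model, filtered by a threshold, yield a manageable set of alerts.

```python
import numpy as np

# Toy monthly data: warehouse humidity readings (x) and goods-return
# counts (y); the final month is deliberately anomalous.
humidity = np.array([40, 45, 50, 55, 60, 65, 70, 75], dtype=float)
returns_ = np.array([10, 12, 13, 15, 16, 18, 19, 40], dtype=float)

# Ordinary least squares fit of returns on humidity.
X = np.column_stack([np.ones_like(humidity), humidity])
beta, *_ = np.linalg.lstsq(X, returns_, rcond=None)
residual = returns_ - X @ beta

# "Materiality threshold": flag only months whose deviation from the
# prediction exceeds 10 units, to avoid overwhelming alerts.
THRESHOLD = 10.0
flagged = np.where(np.abs(residual) > THRESHOLD)[0]
print(flagged)
```

Only the final month exceeds the threshold; lowering the threshold would surface more months and illustrates the false-alert trade-off auditors must tune.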

In the third process, the refined models are built into intelligent apps by adding a user interface, creating user manuals that define data inputs and outputs, and adding interfaces that allow CA/CM activities to be performed on auditors' and clients' computers or on their IoT devices. The platform then adds the newly developed apps to its app marketplace, which could benefit other auditors in the future. As a result, besides formulating models using their own data, auditors could also find apps that match their demands by searching keywords in the semantic descriptions of the apps in the marketplace.

The third stage begins with auditors receiving apps and implementing them at clients' sites to monitor and detect risky or abnormal transactions in a continuous manner. RPA (Huang and Vasarhelyi 2019) could be used to routinize the data collection, monitoring, and alerting processes by linking data resources, apps, and email applications. Apps may also be deployed on auditors' computers to perform the end-of-period audit. Auditors and clients could operate the apps in the cloud if they have limited computing and storage resources. Apps may also be deployed in the IoT devices at clients' sites so that the actuators can take instant actions to prevent fraudulent activities. For example, a smart locker could lock a warehouse when it detects that products without corresponding sales or shipping information are leaving the warehouse. Attached lights could turn red and flash if slow-moving inventory items are identified. After using apps in an engagement, auditors may provide performance feedback and ratings for the apps to support future improvement and recommendation.
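In its simplest form, the smart-locker rule described above is a match between items scanned at the gate and recorded sales; the sketch below makes that explicit. All identifiers and the decision logic are illustrative assumptions.

```python
def locker_action(shipment_ids, recorded_sale_ids):
    """Lock the warehouse door when goods leave without a matching
    sales/shipping record; otherwise allow the shipment (illustrative)."""
    unmatched = [s for s in shipment_ids if s not in recorded_sale_ids]
    return ("LOCK", unmatched) if unmatched else ("ALLOW", [])

# Two pallets scanned at the gate; only one has a sales record.
action, items = locker_action(["P-1", "P-2"], {"P-1"})
print(action, items)  # the unmatched pallet triggers the lock
```

A production actuator would of course add overrides, logging, and alert routing, but the audit logic reduces to this comparison.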

IV. AUDIT APP RECOMMENDATION

The CAIaaS architecture provides auditors opportunities not only to generate intelligent apps using their clients' data, but also to choose apps from the marketplace. However, with the growth of the app marketplace, the increasing volume of apps is likely to diminish auditors' ability to manually seek the most appropriate ones. Therefore, the demand for a tool that can effectively choose apps, and even inspire auditors to explore new CA/CM models, will dramatically increase.

A recommender system could serve as a very valuable tool to identify the most appropriate apps to be used in a specific engagement and filter out irrelevant ones. This technology is superior to other information-filtering applications because of its ability to provide customized and meaningful recommendations (Zhou, Xu, Li, Josang, and Cox 2012). Unlike standard search engines that provide the same results for the same queries even though they are from different users, recommender systems can use personal characteristics and behaviors to provide personalized, relevant results to each user. Because of this advantage, recommender systems can suggest the right apps by analyzing the audit standards, audit clients, and auditors' historical preferences.

To help auditors select appropriate apps for a particular engagement, an app recommender system (ARS) is designed in this section. The framework of the ARS is shown in Figure 2. The proposed system makes app recommendations via three components: audit standards, audit clients, and auditors' preferences regarding apps. Recommendations based on audit standards are generated by creating a structure that categorizes apps by industry, business cycle, account, audit assertion, and audit objective. These recommendations create a narrowed initial selection of apps that is then refined based on audit clients and auditors' preferences. Recommendations based on audit clients estimate the suitability of an app for a particular client. Recommendations based on auditors' preferences predict the rating that a particular auditor would give to an app. The system creates a final score for each app by combining the results from these two filtrations and recommends apps with high scores to the auditor.

FIGURE 2

Design of the ARS

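The final scoring step of the ARS could be as simple as a weighted combination of the client-based and preference-based scores. The equal default weighting and the sample scores below are assumptions for illustration, not part of the proposed design.

```python
def final_score(client_score, auditor_score, w_client=0.5):
    """Combine the two filtration results into one ranking score.
    The equal default weighting is an illustrative assumption."""
    return w_client * client_score + (1 - w_client) * auditor_score

# Hypothetical (client_score, auditor_score) pairs for two apps.
apps = {"app_A": (0.9, 0.4), "app_B": (0.6, 0.8)}
ranked = sorted(apps, key=lambda a: final_score(*apps[a]), reverse=True)
print(ranked)
```

With equal weights, app_B's balanced scores beat app_A's lopsided ones; shifting the weight toward client suitability would reverse the order, which is the kind of tuning the system designer would calibrate.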

Recommendations Based on Audit Standards

In an engagement, auditors often follow five key steps to develop audit objectives: understanding objectives and responsibilities, dividing financial statements into cycles, knowing management assertions, knowing general audit objectives, and knowing specific audit objectives (Arens, Elder, and Mark 2012). The app selection process must follow this procedure while additionally controlling for the client's industry. The client industry is an important factor that drives the choice of audit apps to use in particular engagements. Each industry has special business processes and risks for which auditors may need different models and data to collect evidence. For example, finance and insurance companies do not purchase or produce physical products, and therefore inventory-testing apps should be filtered out for those client types. Finance and insurance industry-specific apps should likewise be filtered out when dealing with retail clients. Similarly, water pollution is likely to be considered a significant risk for beverage companies but may have a moderate impact on other industries. The AICPA (2014) has provided guidance and delivered “how-to” advice for handling auditing issues in different industries. Within a specific industry, auditors must also identify business cycles (e.g., sales and collection, procurement and payment), individual accounts within those cycles, and the associated management assertions. Auditors then use apps to test a specific audit objective based on such an assertion.

A proper app recommender system must fit into this process. The proposed system is shown in Figure 3. Industry selection generates a list of industries that covers all possible industry categories. Business cycle selection links each industry to all possible business cycles for clients in that industry. Account selection associates each business cycle with all possible related accounts. Assertion selection links assertions with the corresponding accounts. Objectives are linked to the corresponding assertions during objective selection. Finally, the system links all available audit apps with the audit objectives they can test. Each audit app may investigate one or more audit objectives, while each audit objective may also be linked to many audit apps, since those apps cooperate to accomplish that audit objective. Using the system, an auditor could choose the client's industry and the relevant business cycle, account, assertion, and audit objective, and finally obtain a narrowed initial array of objective-appropriate audit apps. For very large multinationals, dynamically changing industries, or heterogeneous consolidated entities, much narrower recommender schemata may, and arguably should, be used.

FIGURE 3

Recommendations Based on Audit Standards

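The drill-down structure of Figure 3 can be sketched as a nested lookup from industry down to objective-appropriate apps. All industry, cycle, account, assertion, objective, and app names below are invented for illustration.

```python
# Toy slice of the selection hierarchy in Figure 3 (all names hypothetical):
# industry -> business cycle -> account -> assertion -> objective -> apps.
CATALOG = {
    "retail": {
        "sales_and_collection": {
            "accounts_receivable": {
                "existence": {
                    "recorded_AR_exists": ["ar_confirmation_app",
                                           "shipment_matching_app"],
                },
            },
        },
    },
}

def candidate_apps(industry, cycle, account, assertion, objective):
    """Drill down the hierarchy; an empty list means no app covers the path."""
    try:
        return CATALOG[industry][cycle][account][assertion][objective]
    except KeyError:
        return []

print(candidate_apps("retail", "sales_and_collection", "accounts_receivable",
                     "existence", "recorded_AR_exists"))
```

The same objective could appear under several accounts, and the same app under several objectives, so a production catalog would be a many-to-many index rather than a strict tree; the nested dictionary is only the simplest shape that conveys the drill-down.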

Recommendations Based on Audit Clients

It is possible that a narrowed initial selection still contains dozens of apps, since new CA/CM models could be formulated and associated apps could be added to the marketplace every day. Therefore, the results of the standards-based filtration should be further refined. Since audit problems usually have a large number of solutions and it is difficult to choose the best one, they are often solved using heuristic (rule-based) approaches (O'Leary and Watkins 1989). The ARS uses such a heuristic approach. Specifically, if an app has been frequently used by auditors for similar clients (e.g., Adidas AG), that app is likely to be appropriate for the next such client (e.g., Nike, Inc.). A mechanism to identify such apps further refines and prioritizes the recommendation results.

To generate client-based recommendations, a CF recommendation approach (derived from Zhang, Dai, P. Li, Q. Li, and Luo [2011]) is used to predict the suitability of an app to be used for an audit client. The underlying assumption is that the more the app has been used for similar audit clients in the past, the more suitable that app is for being used in a given instance. As shown in Figure 4, the approach has two clustering-based phases: (1) preparation, which groups audit clients based on an app usage matrix, and (2) recommendation, which makes predictions based on the nearest cluster to a given client.

FIGURE 4

Two-Phase Clustering-Based Recommendation


In the preparation phase, an app-usage matrix (shown in Table 1) is first created to record the usage frequency of each app for each audit client. This matrix is used as the basic data source to generate recommendations. Each row represents an audit client, and each column represents an app. Each cell, at the intersection of a row and a column, represents how many times a specific app has been used for a specific client in the past year. The reason to choose a short one-year window for calculating audit app usage is that in a dynamic environment, audit apps commonly gain or lose popularity due to updates or competition from other apps. One potential problem of the matrix is that it may be too sparse to provide accurate recommendations. Apps, especially newly launched ones, could be known by only a few auditors, and their use in audit engagements would be even rarer. To solve this problem, the app-usage matrix could be extended by adding clients' information such as firm size, risk profile, etc. Such information could reduce the sparseness of the matrix and thereby improve the recommendation accuracy.
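A minimal sketch of constructing such a matrix from a usage log, and extending it with client attributes, might look as follows (client names, app names, and attribute values are hypothetical):

```python
from collections import Counter

# Hypothetical (client, app) usage events observed over the past year.
usage_log = [
    ("ClientA", "app1"), ("ClientA", "app1"), ("ClientA", "app2"),
    ("ClientB", "app2"), ("ClientB", "app3"),
]
clients = ["ClientA", "ClientB"]
apps = ["app1", "app2", "app3"]

# Each row is a client; each column counts how often an app was used.
counts = Counter(usage_log)
matrix = {c: [counts[(c, a)] for a in apps] for c in clients}

# Optional extension against sparseness: append normalized client
# attributes (e.g., scaled firm size and risk score) as extra columns
# before clustering. The attribute values here are illustrative.
attributes = {"ClientA": [0.8, 0.3], "ClientB": [0.5, 0.6]}
extended = {c: matrix[c] + attributes[c] for c in clients}
```

The extended rows give the clustering step additional signal even for clients with few recorded app uses.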

TABLE 1

App-Usage Matrix

Based on the app-usage matrix, audit clients are then clustered into groups using classic clustering methods such as k-medoids (Han, Kamber, and Pei 2006). The main objective of the clustering is to accelerate the recommendation phase. Another benefit of the client clusters is to facilitate further mitigation of the sparseness problem. The ARS estimates the missing values in the app-usage matrix based on information from the clients in the same cluster. This method rests on the assumption that the usage frequency of a certain app should be similar for audit clients in the same cluster because those clients are similar to each other. Thus, it is reasonable to use the usage frequency of an app for some clients to estimate the missing usage of that app for other clients in the same cluster. For audit client i in cluster k, the missing usage for app j can be smoothed as:
\begin{equation}\tag{1}{U_{i,j}} = \delta \left( {\bar U_{k,j}} \right)\end{equation}
where \({\bar U_{k,j}}\) is the average instances of the use of app j for all audit clients in cluster k, and δ is a coefficient that allows adjustment in the contribution of the data smoothing.
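The smoothing step can be sketched as follows (a minimal illustration of Formula (1); missing cells are represented as `None`, and the cluster assignment and δ value are assumed inputs):

```python
def smooth_missing(matrix, clusters, delta=0.9):
    """Fill missing app usage (None) with delta times the cluster-average
    usage of that app, per Formula (1). `matrix` maps client -> usage row;
    `clusters` maps cluster id -> list of member client ids."""
    smoothed = {c: row[:] for c, row in matrix.items()}
    n_apps = len(next(iter(matrix.values())))
    for members in clusters.values():
        for j in range(n_apps):
            known = [matrix[c][j] for c in members if matrix[c][j] is not None]
            avg = sum(known) / len(known) if known else 0.0
            for c in members:
                if matrix[c][j] is None:
                    smoothed[c][j] = delta * avg
    return smoothed

matrix = {"A": [4, None], "B": [2, 6]}
filled = smooth_missing(matrix, {"k1": ["A", "B"]}, delta=0.5)
# Client A's missing usage of the second app becomes 0.5 * 6 = 3.0
```

After smoothing, the re-clustering described next operates on a fully populated matrix.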

Next, the ARS re-clusters the clients using the smoothed app-usage matrix. After obtaining new audit client groups, the preparation phase ends. This step could be performed continuously without any human intervention.

When an auditor requests an app recommendation for a particular client, the recommendation phase begins. In this phase, the ARS predicts the usage frequency of an app for the target client using the average usage frequency for similar audit clients in the past. To speed up the selection of similar audit clients, the ARS first finds the top N similar client clusters for the target client and chooses the top M similar clients from those similar clusters.

In order to find the top N similar client clusters, the similarity between the target client and the centroid of each client cluster should be measured. The similarity could be calculated using the Pearson Correlation Coefficient (Sarwar, Karypis, Konstan, and Riedl 2001):
\begin{equation}\tag{2}s\left( {x,y} \right) = {{\sum\nolimits_{j = 1}^{\left| {x \cap y} \right|} {\left( {{U_{x,j}} - {{\bar U}_x}} \right)\left( {{U_{y,j}} - {{\bar U}_y}} \right)} } \over {\sqrt {\sum\nolimits_{j = 1}^{\left| {x \cap y} \right|} {{{\left( {{U_{x,j}} - {{\bar U}_x}} \right)}^2}} } \sqrt {\sum\nolimits_{j = 1}^{\left| {x \cap y} \right|} {{{\left( {{U_{y,j}} - {{\bar U}_y}} \right)}^2}} } }} \end{equation}
where \(s\left( {x,y} \right)\) denotes the similarity between clients x and y (in this case, x is the target client, and y is the central client in a cluster); \(\left| {x\!\cap\! y} \right|\) is the number of apps that have been used for both clients; \({\bar U_x}\) and \({\bar U_y}\) are the average app usage frequency for clients x and y; and Ux,j and Uy,j denote the usage frequency of app j for clients x and y.
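A direct transcription of Formula (2) follows (a sketch only; clients are represented as dicts mapping app id to usage frequency, and the means are taken over each client's own usage, as defined above):

```python
from math import sqrt

def pearson_similarity(ux, uy):
    """Formula (2): similarity between clients x and y, computed over
    the apps used for both clients. `ux` and `uy` map app id -> usage."""
    common = set(ux) & set(uy)  # apps used for both clients, |x ∩ y|
    if not common:
        return 0.0
    mean_x = sum(ux.values()) / len(ux)  # average usage for client x
    mean_y = sum(uy.values()) / len(uy)  # average usage for client y
    num = sum((ux[j] - mean_x) * (uy[j] - mean_y) for j in common)
    den = (sqrt(sum((ux[j] - mean_x) ** 2 for j in common))
           * sqrt(sum((uy[j] - mean_y) ** 2 for j in common)))
    return num / den if den else 0.0
```

For two clients whose usage profiles move together, the similarity approaches 1; for opposed profiles it approaches -1.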
Using the same formula, the similarity between the target client and each client in the top N similar clusters is calculated and then used to select the top M similar clients. With the top M similar audit clients, the usage frequency of an app for the target client is predicted as the similarity-weighted average of the usage frequencies of the app for those similar clients. The weighted sum (Sarwar et al. 2001) could be used to predict the usage frequency of audit app j for audit client i:
\begin{equation}\tag{3}{P_{i,j}} = {{\sum\nolimits_{k = 1}^m {s\left( {i,k} \right) \times {U_{k,j}}} } \over {\sum\nolimits_{k = 1}^m {\left| {s\left( {i,k} \right)} \right|} }} \end{equation}
where Pi,j represents the predicted usage frequency of app j for client i; m denotes the number of top M similar clients of the target client i; \(s\left( {i,k} \right)\) measures the similarity between client i and each similar client k; and Uk,j represents the usage frequency of app j for client k (one of the similar clients). Using this formula, the potential usage frequency of each app for the target client is predicted by capturing how similar clients use the app.
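Formula (3) can be sketched directly (similarity scores and usage counts below are hypothetical inputs):

```python
def predict_usage(similarities, usages):
    """Formula (3): predicted usage of an app for the target client, as a
    similarity-weighted average over the top-M similar clients.
    `similarities[k]` is s(i, k); `usages[k]` is U_{k,j}."""
    num = sum(similarities[k] * usages[k] for k in usages)
    den = sum(abs(s) for s in similarities.values())
    return num / den if den else 0.0

sims = {"C1": 0.9, "C2": 0.3}   # similarity of target client to C1, C2
u = {"C1": 10, "C2": 2}          # usage of app j for C1, C2
p = predict_usage(sims, u)       # (0.9*10 + 0.3*2) / (0.9 + 0.3) = 8.0
```

The more similar a client is, the more its usage pulls the prediction toward its own value.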

Recommendations Based on Auditors' Preferences on Apps

Auditors' familiarity with the technique could also drive the choice of technique(s) to use in particular environments (Murthy and Groomer 2003); therefore, auditors' preferences can be used to further refine app recommendations. Auditors may have specific preferences regarding developers, versions, underlying CA/CM models, user interfaces, etc. Some auditors like apps developed by large firms more than those developed by small firms; some auditors prefer older, stable versions of apps, while others prefer the latest versions; some favor apps with a sophisticated user interface such as those allowing hand gestures, while others prefer more conventional operation. An effective recommendation system should incorporate these preferences to enhance result accuracy.

The ARS incorporates auditors' preferences into recommendations using a similar approach as used for the client-based recommendation. This approach is based on the assumption that auditors often choose apps that are consistent with their historical preferences, as well as the experiences and knowledge gained from a relatively large group of colleagues. Two auditors that have chosen the same apps in the past are likely to have similar preferences on apps in the future. The ratings of the first should influence the recommendations for the second. The preference-based approach also has two phases: preparation and recommendation. In the preparation phase, auditors are clustered based on preference similarity; in the recommendation phase, the system generates a list of apps for a specific auditor based on the app ratings from similar auditors.

In the preparation phase, an auditor-rating matrix (shown in Table 2) is created. Each row and column represents an auditor and an app, respectively. Each cell represents the rating that the auditor in the row has given to an app in the past. This matrix may have the same data sparseness problem as the app-usage matrix, as one auditor is likely to use and rate only a few apps. To mitigate this problem, the matrix could also be extended by adding demographic information about auditors, such as their position levels, the accounting firms they work for, etc. Such information could facilitate clustering auditors and identifying those that have similar preferences on apps.

TABLE 2

Auditor-Rating Matrix


Using the auditor-rating matrix, the ARS clusters similar auditors into groups, and smooths missing ratings using Formula (1). Then, auditors are re-clustered, and the preparation phase ends.

In the recommendation phase, the ARS predicts the rating of an audit app using the average of ratings that similar auditors have given to the app in the past. Specifically, the ARS first identifies the top N auditor clusters most similar to the target auditor and then selects the top M similar auditors within those clusters. The similarity between the target auditor and the centroid of each auditor cluster could also be measured using Formula (2). After obtaining the M most similar auditors, the ARS predicts the rating that the target auditor would give an app by using the weighted sum (Formula (3)) of the ratings that the similar auditors have given to the app.

Scores of Apps and Final Recommendation

The two predictions from the client-based and preference-based approaches are combined to generate a final, client- and auditor-specific recommendation score for an app using a weighted linear model. The final recommendation score for the app is calculated as:
\begin{equation}Score = \delta {P_u} + \left( {1 - \delta } \right){P_r}\end{equation}
where Pu represents the predicted usage frequency of the audit app for the target client, Pr represents the predicted rating that the target auditor will give to the app, and δ is a weighting coefficient (distinct from the smoothing coefficient in Formula (1)) that adjusts the contribution of each component. Finally, apps with high scores will be recommended to the auditor to perform CA/CM activities for the client.
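The final combination is a one-line weighted linear model; a sketch with illustrative, pre-normalized inputs:

```python
def final_score(p_usage, p_rating, delta=0.5):
    """Combine the client-based prediction (p_usage) and the
    preference-based prediction (p_rating) per the weighted linear model
    above. Inputs are assumed to be scaled comparably, e.g., to [0, 1]."""
    return delta * p_usage + (1 - delta) * p_rating

score = final_score(0.8, 0.6, delta=0.5)  # 0.5*0.8 + 0.5*0.6 = 0.7
```

Apps are then ranked by this score, and the highest-scoring ones are surfaced to the auditor.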

V. TOWARD A CAI-BASED AUDIT PARADIGM10

Kozlowski (2016) describes an audit ecosystem in which various agents automatically execute functions such as importing client data into a standardized form, selecting appropriate audit apps to execute, and routing unresolved results through a feedback loop. This section enriches the ecosystem by imagining a CAI-based paradigm that enables semi-automatic app development, recommendation, and result analysis with the architecture and system proposed in this editorial, as well as several other intelligent mechanisms.

The proposed CAI-based audit paradigm is shown in Figure 5. The paradigm is composed of a risk assessment module, a CAIaaS platform, an ARS, and a result analysis system with several intelligent modules, as well as the process of generating internal and external audit reports. The risk assessment module assists in locating the business cycles, accounts, and processes with high inherent risks. By eliciting experienced auditors' knowledge about evaluating information and making judgments concerning risks (Brown-Liburd, Mock, Rozario, and Vasarhelyi 2016), and integrating it into an expert system, automatic risk assessment could be realized. Moreover, by incorporating the CRMA methodology (Moon and Krahel 2020), the module could identify emerging risks in a rapidly changing environment in addition to those that auditors have investigated in the past. Based on assessed business risks and auditors' judgment, auditors would gather relevant data, formalized by the Audit Data Standards, from IoT sensors, ERP systems, and other databases, as well as public websites. Next, auditors use the CAIaaS platform to formulate customized apps using the collected data. The ARS also suggests appropriate apps to use in the engagement according to auditors' preferences and the client's attributes. Outcomes from the apps are then further investigated by the result interpretation module, the exception prioritization module, and the exception investigation module. The result interpretation module explains the underlying mathematics or statistics of the results to facilitate auditors' decision making. Byrnes (2015) created a "super-app" to provide such knowledge for basic statistical models. The exception prioritization module ranks risky transactions by severity to avoid the extremely heavy information load caused by Big Data.
Methodologies proposed by Li, Chan, and Kogan (2016) and No, Lee, Huang, and Li (2019) for outlier prioritization could inform the design of this module. The exception investigation module integrates auditors' knowledge and generates a list of exceptional exceptions that require further investigation. Issa (2013) contributed to this area by designing a weighting system that utilizes experts' knowledge to identify irregularities. Finally, auditors use their professional judgment to determine whether sufficient evidence has been collected, and then either proceed to generate final reports or return to prior stages and perform more tests to collect useful evidence.

FIGURE 5

The CAI-Based Audit Paradigm


VI. CHALLENGES

The corporate reporting environment is based on a wide set of statutory rules and regulations not designed for frequent reporting and unable to properly represent the dynamic environment of the current organization. The balance sheet and income statement, for example, carry a set of measurement adjustments, such as owners' equity, depreciation, goodwill, etc., that do not agree with real-life day-to-day operations. Articulating real-time databases with dashboards that represent the firm and call attention to discrepancies does not necessarily connect easily with traditional financial statements or with standard setters' rules. Large economic actions inordinately affect monitoring schemata, potentially creating false positives or hiding false negatives. These factors may lead to overreactions by management and misleading concerns for auditors. Analytics and AI models, under current analytic technology, represent historical behavior and are better able to represent the linear part of operations than extremes or fluctuations. For example, corporate behavior during a pandemic does not conform to normal standards, and exception models would naturally explode with alerts.

The idea of sharing intelligent apps could greatly benefit auditors with little background and experience in using technologies. However, the capability of creating effective apps and using them appropriately to enhance the quality, efficiency, and value of an audit is a unique competitive advantage of auditors or audit firms; consequently, they may hesitate to share apps with others. Therefore, the app marketplace should establish solid policies to protect this intellectual property and create a new business model to facilitate the trading of intelligent apps and the provision of CA/CM services in a cloud.

The third concern regards security and data ownership. The PCAOB requires auditors to maintain the confidentiality of client information (AU Section 339.11). With a very large amount of data captured and sent to the cloud for analysis, protecting the confidentiality of those data is a critical issue. Besides data encryption, a hybrid analysis and storage mechanism could be adopted. For example, auditors could send archived data to the cloud for model formulation purposes, then download models to local servers and examine their effectiveness using clients' recent data. Furthermore, using one company's data to help build models for another organization may result in a potential breach of confidentiality. For example, rules learned from a client's past sales may reveal the sales patterns of the company. More regulations and detailed guidance regarding confidentiality protection and testing of apps for potential leakage are needed. Also, applying a comprehensive data-anonymization mechanism, such as the one proposed by Kogan and Yin (2017), may facilitate learning empirical patterns while keeping the underlying data secret. Another issue is whether a cloud provider should use data provided by audit firms, and the intelligent models formulated using those data, to improve the apps used for other firms. The ownership of the data and models may be debatable. Some cloud providers treat them as their assets, while others only consider the cloud a container. Regulations are necessary to stipulate the rights and obligations of cloud providers in the use of client data.

The quality control of apps is a critical and challenging task that may require the active engagement of standard-setting bodies. The PCAOB, for example, could accredit apps to ensure their compliance with existing standards. A "trial stage" may be set up for each newly created app that allows experienced auditors to evaluate its performance. Alternatively, regulators and professional agencies could move toward standardizing a collection of apps for each specific audit scenario. However, such standardization may reduce the flexibility to use state-of-the-art techniques/models to improve audit quality. Therefore, regulators need to find a balance between allowing auditors to explore emerging technologies and standardizing their use.

VII. CONCLUSION

The inherent complexity of emerging technologies such as IoT and the cloud, and the anachronistic nature of accounting and audit standards, hinder technology adoption and full use by the auditing profession. Auditors, whatever their level of knowledge and experience, need assistance in applying technologies effectively in their engagements. This editorial proposes a novel architecture, named CAIaaS, to fulfill this need. An app recommender system is further designed to provide customized suggestions for a particular auditor and client, enabling the selection of appropriate apps. Armed with the right tools, auditors would be able to efficiently perform audit activities and provide timely opinions. The CAIaaS, the app recommender system, and other intelligent mechanisms would together comprise a semiautomatic audit paradigm, which could facilitate the progressive transformation toward audit automation.

Future research should explore methodologies and issues related to using information from one auditee in the analysis and modeling of other auditees, and methods to protect confidentiality (Kogan and Yin 2017) while facilitating audit modeling and exception reporting.

The cloud computing model brings a new set of capabilities to auditing, as well as a new or changed set of security concerns for the assurance function. Furthermore, the real-time reporting facilitated by this technology requires a substantial rethinking of corporate financial, predictive, and analytic reports into a modern form of informing stakeholders of the status of an organization.

Recommender models have taken a very important role in e-commerce marketing and advertising, but they have not been studied in the accounting/auditing literature. They bring together the formalization of many analytic technologies and can function in an evolving human-machine system, substantially improving assurance. Research is badly needed in this area.

A central question that arises with the continuous assurance of organizational systems is the actual role of this process. It could be argued that close monitoring and detection of exceptions is a meta-control, not an assurance function. Research should reconceptualize the roles of measurement, control, and assurance in the emerging systems that manage data in real time and act on prediction models.

REFERENCES

Acar, D., G. Gal, M. Öztürk, and H. Usul. 2020. A case study in the implementation of a continuous monitoring system. Journal of Emerging Technologies in Accounting 17 (2).
Adomavicius, G., and A. Tuzhilin. 2005. Toward the next generation of recommender systems: A survey of the state-of-the-art and possible extensions. IEEE Transactions on Knowledge and Data Engineering 17 (6): 734-749.
Alles, M., and G. L. Gray. 2019. Will the medium become the message? A framework for understanding the coming automation of the audit process. Journal of Information Systems.
Alles, M. G., G. Brennan, A. Kogan, and M. A. Vasarhelyi. 2006. Continuous monitoring of business process controls: A pilot implementation of a continuous auditing system at Siemens. International Journal of Accounting Information Systems 7 (2): 137-161.
American Institute of Certified Public Accountants (AICPA). 2014. AICPA Guides.
Arens, A. A., R. J. Elder, and B. Mark. 2012. Auditing and Assurance Services: An Integrated Approach. Boston, MA: Prentice Hall.
Bacciu, D., S. Chessa, C. Gallicchio, and A. Micheli. 2017. On the need of machine learning as a service for the internet of things.
Barker, D. 2017. An unofficial guide to Whatever-as-a-Service.
Biswas, A. R., and R. Giaffreda. 2014. IoT and cloud convergence: Opportunities and challenges.
Bobadilla, J., F. Ortega, A. Hernando, and A. Gutiérrez. 2013. Recommender systems survey. Knowledge-Based Systems 46: 109-132.
Brown-Liburd, H., H. Issa, and D. Lombardi. 2015. Behavioral implications of Big Data's impact on audit judgment and decision making and future research directions. Accounting Horizons 29 (2): 451-468.
Brown-Liburd, H., T. Mock, A. Rozario, and M. Vasarhelyi. 2016. Examination of audit planning risk assessments using verbal protocol analysis: An exploratory study.
Burke, R. 1999. Integrating knowledge-based and collaborative-filtering recommender systems.
Byrnes, P. E. 2015. Developing automated applications for clustering and outlier detection: Data mining implications for auditing practice. Doctoral dissertation, Rutgers, The State University of New Jersey, Newark.
Chan, D. Y., and M. A. Vasarhelyi. 2011. Innovation and practice of continuous auditing. International Journal of Accounting Information Systems 12 (2): 152-160.
Choi, K., D. Yoo, G. Kim, and Y. Suh. 2012. A hybrid online-product recommendation system: Combining implicit rating-based collaborative filtering and sequential pattern analysis. Electronic Commerce Research and Applications 11 (4): 309-317.
Codesso, M., M. Machado de Freitas, X. Wang, A. de Carvalho, and A. A. da Silva Filho. 2020. Continuous audit implementation at Cia Hering SA in Brazil. Journal of Emerging Technologies in Accounting 17 (2).
Curtis, M., L. Chui, and R. Pavur. 2020. Intention to champion continuous monitoring: A study of intrapreneurial innovation in organizations. Journal of Emerging Technologies in Accounting 17 (2).
Dai, J. 2017. Three essays on audit technology: Audit 4.0, blockchain, and audit app. Doctoral dissertation, Rutgers, The State University of New Jersey, Newark.
Dai, J., and M. A. Vasarhelyi. 2016. Imagineering Audit 4.0. Journal of Emerging Technologies in Accounting 13 (1): 1-15.
Dai, J., N. He, and H. Yu. 2019. Utilizing blockchain and smart contracts to enable Audit 4.0: From the perspective of accountability audit of air pollution control in China. Journal of Emerging Technologies in Accounting 16 (2): 23-41.
Eulerich, M., C. Georgi, and A. Schmidt. 2020. Continuous auditing and risk-based audit planning—An empirical analysis. Journal of Emerging Technologies in Accounting 17 (2).
EY. 2014. Big risks require big data thinking: Global forensic data analytics survey 2014.
Han, J., M. Kamber, and J. Pei. 2006. Data Mining: Concepts and Techniques. Burlington, MA: Morgan Kaufmann.
Haseeb, M., H. I. Hussain, B. Ślusarczyk, and K. Jermsittiparsert. 2019. Industry 4.0: A solution towards technology challenges of sustainable business performance. Social Sciences 8 (5): 154.
Huang, F., and M. A. Vasarhelyi. 2019. Applying robotic process automation (RPA) in auditing: A framework. International Journal of Accounting Information Systems 35: 100433.
Issa, H. 2013. Exceptional exceptions. Doctoral dissertation, Rutgers, The State University of New Jersey, Newark.
Kogan, A., and C. Yin. 2017. Privacy-preserving information sharing within an audit firm.
Kogan, A., M. G. Alles, M. A. Vasarhelyi, and J. Wu. 2014. Design and evaluation of a continuous data level auditing system. Auditing: A Journal of Practice & Theory 33 (4): 221-245.
Kozlowski, S. 2016. A Vision of an ENHanced ANalytic Constituent Environment: ENHANCE. Doctoral dissertation, Rutgers, The State University of New Jersey.
KPMG. 2015. 2015 data & analytics-enabled internal audit survey.
Krulwich, B. 1997. Lifestyle finder: Intelligent user profiling using large-scale demographic data. AI Magazine 18 (2): 37.
Lang, K. 1995. NewsWeeder: Learning to filter netnews.
Lee, S. K., Y. H. Cho, and S. H. Kim. 2010. Collaborative filtering with ordinal scale-based implicit ratings for mobile music recommendations. Information Sciences 180 (11): 2142-2155.
Li, H., J. Dai, T. Gershberg, and M. A. Vasarhelyi. 2018. Understanding usage and value of audit analytics for internal auditors: An organizational approach. International Journal of Accounting Information Systems 28: 59-76.
Li, P., D. Y. Chan, and A. Kogan. 2016. Exception prioritization in the continuous auditing environment: A framework and experimental evaluation. Journal of Information Systems 30 (2): 135-157.
Mell, P., and T. Grance. 2011. The NIST definition of cloud computing.
Moon, D., and J. P. Krahel. 2020. Continuous risk monitoring and assessment: New component of continuous assurance. Journal of Emerging Technologies in Accounting 17 (2).
Mooney, R. J., and L. Roy. 2000. Content-based book recommending using learning for text categorization.
Muhuri, P. K., A. K. Shukla, and A. Abraham. 2019. Industry 4.0: A bibliometric analysis and detailed overview. Engineering Applications of Artificial Intelligence 78: 218-235.
Munoko, I., H. L. Brown-Liburd, and M. A. Vasarhelyi. 2020. The ethical implications of using artificial intelligence in auditing. Journal of Business Ethics 2020: 1-26.
Murthy, U. S., and S. M. Groomer. 2003. Accounting Information Systems: A Database Approach. Bloomington, IN: Cybertext Publishing.
No, W. G., K. Lee, F. Huang, and Q. Li. 2019. Multidimensional audit data selection (MADS): A framework for using data analytics in the audit data selection process. Accounting Horizons 33 (3): 127-140.
O'Leary, D. E. 2015. Armchair auditors: Crowdsourcing analysis of government expenditures. Journal of Emerging Technologies in Accounting 12 (1): 71-91.
O'Leary, D. E. 2020. A signal theory model for continuous monitoring and continuous intelligence systems. Journal of Emerging Technologies in Accounting 17 (2).
O'Leary, D. E., and P. R. Watkins. 1989. Review of expert systems in auditing. Expert Systems Review 2 (1): 3-22.
Pazzani, M. J. 1999. A framework for collaborative, content-based, and demographic filtering. Artificial Intelligence Review 13 (5/6): 393-408.
Perera, C., A. Zaslavsky, P. Christen, and D. Georgakopoulos. 2014. Sensing as a service model for smart cities supported by the Internet of Things. Transactions on Emerging Telecommunications Technologies 25 (1): 81-93.
Quach, K. 2018. Google goes bilingual, Facebook fleshes out translation and TensorFlow is dope, and Microsoft is assisting fish farmers in Japan.
Resnick, P., N. Iacovou, M. Suchak, P. Bergstrom, and J. Riedl. 1994. GroupLens: An open architecture for collaborative filtering of netnews.
Ricci, F., and B. Shapira. 2011. Recommender Systems Handbook. New York, NY: Springer.
Samek, W., T. Wiegand, and K. R. Müller. 2017. Explainable artificial intelligence: Understanding, visualizing, and interpreting deep learning models.
Sarwar, B., G. Karypis, J. Konstan, and J. Riedl. 2001. Item-based collaborative filtering recommendation algorithms.
Syed, A., K. Gillela, and C. Venugopal. 2013. The future revolution on Big Data. International Journal of Advanced Research in Computer and Communication Engineering 2 (6): 2446-2451.
Tan, P. N., M. Steinbach, and V. Kumar. 2016. Introduction to Data Mining. Chennai, Tamil Nadu, India: Pearson Education India.
Vasarhelyi, M. A., and F. B. Halper. 1991. The continuous audit of online systems. Auditing: A Journal of Practice & Theory 10 (1): 110-125.
Vasarhelyi, M. A., M. G. Alles, and K. T. Williams. 2010. Continuous Assurance for the Now Economy. A Thought Leadership Paper for the Institute of Chartered Accountants in Australia. Queensland, Australia: Institute of Chartered Accountants.
Wikipedia. 2020. Internet of Things.
Xia, F., L. T. Yang, L. Wang, and A. Vinel. 2012. Internet of Things. International Journal of Communication Systems 25 (9): 1101-1102.
Xinyue, S., and H. Wei. 2019. Company haunted by missing scallops fined for fraud.
Zhang, C., J. Dai, P. Li, Q. Li, and X. Luo. 2011. Two-phase clustering-based collaborative filtering algorithm.
Zhou, X., Y. Xu, Y. Li, A. Josang, and C. Cox. 2012. The state-of-the-art in personalized recommender systems for social networking. Artificial Intelligence Review 37 (2): 119-132.
1. Industry 4.0 is the fourth industrial revolution that enables "overall transformation of using digital integration and intelligent engineering" in the manufacturing industry (Muhuri, Shukla, and Abraham 2019).
3. Companies have developed solutions that count seafood under water using IoT (Quach 2018).
5. This section is based on "Three essays on audit technology: Audit 4.0, blockchain, and audit app" (Dai 2017, Chapter 1, Section 1.7).
7. An actuator is an important component in IoT that receives commands from the internet to take actions that impact the physical world.
8. This section is based on "Three essays on audit technology: Audit 4.0, blockchain, and audit app" (Dai 2017, Chapter 4).
9. In addition to apps that can be found in the marketplace, many free routines (e.g., in Python) exist and may be considered.
10. This section is based on "Three essays on audit technology: Audit 4.0, blockchain, and audit app" (Dai 2017, Chapter 4).

Author notes

The ideas, comments, and suggestions of Michael Alles, Soo Hyun Cho, Ivy Munoko, Qing Huang, and Chanyuan Zhang are very much appreciated.

Part of this editorial is based on “Three essays on audit technology: Audit 4.0, blockchain, and audit app” (Dai 2017).

Jun Dai, Michigan Technological University, College of Business, Department of Accounting, Houghton, MI, USA; Miklos A. Vasarhelyi, Rutgers, The State University of New Jersey, Rutgers Business School, Department of Accounting and Information Systems, Newark, NJ, USA.