AI was a major theme at the January 2024 World Economic Forum in Davos, Switzerland. World leaders wrestled with its potential impact on society and global economies. As OpenAI CEO Sam Altman said, “AI has been somewhat demystified because people really use it now. And that’s always the best way to pull the world forward with a new technology.” The expansion and evolution of AI are so rapid that organizations can’t afford to blink. So as companies increasingly turn to AI, staying on top of the technology helps ensure they reap the benefits—and can navigate the latest challenges. Because AI-driven management accounting applications can create enormous value for companies, management accountants are uniquely positioned to support their organization’s AI initiatives.


So, let’s explore nine best practices that management accountants can support to provide their organizations with a competitive advantage from AI.


1. Choose Strategy


AI deployment requires a strategy that aligns with an organization’s business or mission. A Deloitte report recommends that a company should develop an enterprise strategy and make dynamic adjustments to accommodate changes in market and technology conditions. Organizations following that approach are more likely to achieve AI transformation—and that’s helping companies turbo-charge their wider digital transformation.


Another must-have factor? AI support from top leadership that fosters proactive decision making. Deloitte points out that in 2010 Amazon founder and CEO Jeff Bezos directed company leaders to plan how they would adopt AI and machine learning to achieve a competitive advantage. And the market showed it was a winning strategy. Management accountants can play a key role in developing performance management systems and key performance indicators (KPIs) to measure and track performance against their organization’s AI strategy.
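As one illustration, tracking performance against an AI strategy can start with simple variance analysis of actual KPI results versus targets. The sketch below is hypothetical—the KPI names and figures are invented for illustration, not drawn from any cited framework:

```python
# Hypothetical sketch: KPI variance tracking for an AI strategy.
# KPI names and target/actual values are illustrative assumptions.

def kpi_variance(targets, actuals):
    """Return the percent variance of each actual KPI vs. its target."""
    report = {}
    for name, target in targets.items():
        actual = actuals.get(name)
        if actual is None or target == 0:
            report[name] = None  # no data, or variance undefined
        else:
            report[name] = round((actual - target) / target * 100, 1)
    return report

targets = {"model_adoption_rate": 0.60, "forecast_error_reduction": 0.15}
actuals = {"model_adoption_rate": 0.48, "forecast_error_reduction": 0.18}

print(kpi_variance(targets, actuals))
# model_adoption_rate runs 20% below target; forecast_error_reduction 20% above
```

A report like this can feed the periodic leadership updates described above, flagging where the AI program is ahead of or behind plan.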


2. Assess Risk


AI risk levels are rising along with the technology’s underlying complexity. Organizations must continuously monitor, assess, and mitigate AI risks—starting with understanding them. Table 1 explains these risks and the actions that can be taken to address them.


Table 1: AI Risks and Mitigation Recommendations


  • AI and machine learning (ML) complexity. AI and ML are software algorithms that mimic the human brain. Mitigation: Monitor, document, and control AI and ML algorithm development.

  • Lack of transparency. Complex mathematics and autonomous ML can handicap human understanding of algorithms. Mitigation: Ensure the purpose and application of AI programs are fully disclosed, understandable to humans, and audited.

  • Trust issues. Teams need confidence that the underlying AI technology is accurate. Mitigation: Document that applications are supported with clear instructions, remove any bias, and choose the right data sets.

  • Algorithmic bias. AI engineers might unintentionally program bias into algorithms. Mitigation: Implement rigorous design reviews and testing controls to detect and eliminate bias.

  • Algorithmic drift and hallucination. AI might reach imperfect conclusions by detecting spurious or nonexistent patterns. Mitigation: Audit the algorithms and their output.

  • Automation job loss. AI-driven automation is predicted to result in job loss. Mitigation: Develop an upskilling/reskilling program (see “Manage Talent” below).

  • Hacker vulnerability. The risk of cyberattacks and data theft is high because hackers will try to use AI to penetrate organizations. Mitigation: Defend data with AI-powered detection and invest in cybersecurity defenses.

  • Rise of a techno elite. AI complexity might put disproportionate power in the hands of tech-savvy workers who could misuse their knowledge. Mitigation: Create a culture of ethical AI values while strictly monitoring AI development.
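As a concrete illustration of the bias-testing controls recommended in Table 1, one simple check compares a model’s decision rates across groups. The sketch below assumes binary approve/deny decisions and uses the common four-fifths (80%) rule as its threshold—illustrative choices, not controls prescribed by this article:

```python
# Illustrative bias-detection check: demographic parity of approval rates.
# The data and the 80% ("four-fifths rule") threshold are assumptions.

def approval_rate(decisions):
    """Share of decisions that were approvals (1 = approved, 0 = denied)."""
    return sum(decisions) / len(decisions)

def parity_check(group_a, group_b, threshold=0.80):
    """Flag possible bias if one group's approval rate falls below
    `threshold` times the other group's rate."""
    rate_a, rate_b = approval_rate(group_a), approval_rate(group_b)
    ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
    return ratio >= threshold, round(ratio, 2)

group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 75.0% approved
group_b = [1, 0, 0, 1, 0, 1, 0, 0]   # 37.5% approved

ok, ratio = parity_check(group_a, group_b)
print(ok, ratio)  # False 0.5 -> flag the model for bias review
```

Checks like this belong in the design review and testing controls noted in the table; they detect a symptom of bias, not its cause, so flagged models still need human investigation.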


AI isn’t perfect, so it can introduce inaccurate information. That’s why management accountants need to adopt a healthy skepticism when evaluating results. There’s also a growing risk of misuse, abuse, and data theft, calling for greater diligence to safeguard data. And perhaps the biggest risk of all is captured by the sign on Sam Altman’s desk: “No-one knows what happens next.” Defending against unknown risks requires companies to monitor AI developments and be willing to change course if necessary.


3. Follow Regulations


Emerging regulations will also impact how companies deploy AI. Even if your country hasn’t passed legislation, it’s important to follow global trends and be ready when and if local regulations take effect. The European Union (EU) took the global lead in March 2024 when its parliament passed the Artificial Intelligence Act to regulate AI across the EU.


The rules are designed to protect EU citizens and democracy, uphold the rule of law and environmental sustainability against harmful AI, promote innovation, and ensure AI remains under human control. The guidelines define banned applications that threaten citizens’ rights, such as biometric categorization, untargeted scraping of facial images to create facial recognition databases, and using AI to manipulate human behavior. High-risk systems—those that could cause great harm in essential public services such as banking and healthcare—must assess and mitigate threats, maintain usage logs, be transparent, and provide human oversight.


In the United States, President Biden recently issued two executive orders on AI. The first—Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, announced in October 2023—addresses AI safety standards, privacy protections for Americans, and equity and civil rights guidelines, and promotes innovation and competition. The second—Policy to Advance Governance, Innovation, and Risk Management in Federal Agencies’ Use of Artificial Intelligence, unveiled in March 2024—outlines what the U.S. Office of Management and Budget can and can’t do with AI. It focuses on ensuring the safe and responsible use of AI and encouraging innovation. Management accountants must follow AI regulations like these that might impact their organizations.


4. Establish Governance


The technical complexity and inherent risks of AI make organizational governance imperative. It must start with strong oversight from the board of directors to establish delegation of AI authority and define lines of responsibility. Ideally, at least one board member will possess the technical expertise to understand emerging AI technology and how it can complement the organization’s strategic objectives. Also, the board should ensure that a chief AI officer and a chief AI ethics officer are members of the leadership team and required to submit periodic reports. The board should make sure its organization instills core values to do no harm through AI—a critical step that can hold the organization accountable for following responsible AI development and application deployment.


In addition to AI organizational governance, data governance must be addressed. Establishing policies and internal controls will protect data privacy, ensure ethical data use, safeguard and secure data, and promote data transparency, quality, and trust. Management accountants can participate in developing internal controls and AI policies and procedures for organizational and data governance, and audit the organization’s compliance with them.


5. Create Technology Infrastructure


Hardware, software, and engineers drive AI development, and leveraging AI’s power requires major investment in AI infrastructure. For many companies, the immense digital horsepower required for AI processing means that unless they have deep financial pockets, they must consider outsourcing AI infrastructure to the cloud. Whether a company has robust resources or is a small or medium-sized enterprise, management accounting’s capital budgeting tools will support decision making for AI resource allocation. AI project use cases must be developed to measure costs and benefits, and return on investment must be computed to assess the value added.


AI projects then need to be racked and stacked for priority. Because of the complexity of AI projects, project managers must measure project progress, control costs, and evaluate outcomes. And the project cost system must be integrated into the management accounting cost system so results are visible to organization managers. Planning for capital budgeting requires cross-functional collaboration among engineering, finance and accounting, and marketing teams. An integrated approach is necessary for accurate development of AI project tasks, deliverables, and budgets.
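The capital budgeting and prioritization steps described above can be sketched with a simple net-present-value ranking. The project names, cash flows, and 10% discount rate below are hypothetical, chosen only to illustrate the technique:

```python
# Minimal sketch: rank ("rack and stack") candidate AI projects by NPV.
# Project names, cash flows, and the 10% discount rate are hypothetical.

def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the initial outlay (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

projects = {
    "demand_forecasting": [-500_000, 180_000, 220_000, 260_000],
    "invoice_automation": [-200_000, 90_000, 95_000, 100_000],
    "chatbot_support":    [-350_000, 100_000, 120_000, 140_000],
}

# Highest NPV first; negative-NPV projects fall to the bottom of the stack.
ranked = sorted(projects, key=lambda p: npv(0.10, projects[p]), reverse=True)
for name in ranked:
    print(f"{name}: NPV = {npv(0.10, projects[name]):,.0f}")
```

In practice the same analysis would also weigh strategic fit and risk, but a ranking like this gives the cross-functional planning team a common starting point.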


Another option is to engage AI consultants, who can support technical due diligence and help confirm that technical requirements and specifications are accurate. Finally, reviews throughout the project life cycle can help flag problems and adjust budgets and resources. For example, cloud computing time for AI is often expensive and can quickly exceed budget. Management accountants can help run capital budgeting analyses, support project management, and develop the AI project cost system.


6. Monitor Application Development


It’s imperative to implement, document, and monitor controls over each step of the AI development process. Figure 1 is an example of an AI business process development model proposed by Andrew Ng, Landing AI founder and CEO and adjunct professor at Stanford University’s computer science department. Based on the capital budgeting analysis proposed above, AI projects should follow the development process explained in Table 2.


Figure 1: Example of AI Business Process Development Model





Source: A Step By Step Guide To AI Model Development



Table 2: Explanation of an AI Development Model Process


1. Identify the business problem. Describe the process goal and how AI will facilitate the process, and define detailed specifications and KPI metrics to track process progress.
Management accountant’s role: Ensure the business problem is aligned with the company’s AI strategy and requirements and addresses the business process goal.

2. Collect data. Define the required data, necessary data sources, and data types.
Management accountant’s role: Participate in identifying data and ensure it supports the business problem in process 1.

3. Prepare data. Evaluate the data for accuracy and completeness. This can include cleaning and correcting the data, supplementing it for missing elements, and ensuring it’s formatted correctly. Failure to perform data due diligence will jeopardize the quality of the model.
Management accountant’s role: Actively evaluate the data cleansing process for completeness and accuracy, and look at the data through the business problem’s lens to ensure it’s relevant.

4. Build the model. AI engineers will build the model to address the business problem defined in process 1.
Management accountant’s role: Participate in application reviews to ensure the model satisfies the business problem requirements.

5. Test the model. Verify that the model built in process 4 solves the business problem defined in process 1.
Management accountant’s role: Participate in the evaluation of the test model.

6. Deploy the model. Launch the model to solve the business problem.
Management accountant’s role: Evaluate the results to support decision making.

7. Govern the model. Monitor the model.
Management accountant’s role: Monitor the model to verify it achieves the intended results.
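As an illustration of the data-preparation controls in process 3, a management accountant might review an exception report produced by simple completeness and validity checks. This is a minimal sketch; the field names and validation rules are assumptions for illustration only:

```python
# Hedged sketch of process 3 ("prepare data"): flag records that are
# incomplete or obviously invalid before model building.
# Field names ("customer", "amount", "date") and rules are illustrative.

def audit_records(records, required_fields):
    """Return (index, missing_fields, bad_amount) for each failing record."""
    exceptions = []
    for i, rec in enumerate(records):
        missing = [f for f in required_fields if rec.get(f) in (None, "")]
        amount = rec.get("amount")
        bad_amount = not isinstance(amount, (int, float)) or amount < 0
        if missing or bad_amount:
            exceptions.append((i, missing, bad_amount))
    return exceptions

records = [
    {"customer": "A101", "amount": 2500.0, "date": "2024-03-01"},
    {"customer": "",     "amount": 1200.0, "date": "2024-03-02"},  # missing customer
    {"customer": "B202", "amount": -50.0,  "date": "2024-03-03"},  # negative amount
]

print(audit_records(records, ["customer", "amount", "date"]))
# [(1, ['customer'], False), (2, [], True)]
```

Working exception reports like this gives the management accountant documented evidence, for process 7’s governance step, that data due diligence was actually performed.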



7. Manage Talent


Talent management plans that align with AI strategy and applications will ensure organizations have adequate AI talent today—and tomorrow. Detailed plans must keep up with evolving technology so companies can proactively upskill and reskill the workforce. Forbes recommends classifying AI talent management into three buckets: frontline workers, technical professionals, and executives—with distinct management strategies for each. 

  • Frontline workers. Although representing the largest group at 70%, frontline workers receive only 14% of the training. Upskilling this group should focus on developing their AI literacy and foundational technology knowledge while raising awareness about how AI fits into their jobs. Frontline workers are a potential pipeline for expanded AI reskilling.
  • Technical professionals. This group should already have fundamental AI skills, so training should focus on continued development to understand how they can help build and expand the company’s AI capabilities for business processes and workflow. Although many members of this group are full-time employees, the nature of AI means that more complex and advanced AI skills might require companies to hire specialized consultants.
  • Business executives. High-level training for C-suite members should help them understand and develop business use cases that apply across their organizations. Training can also help them learn how AI can be leveraged effectively—and where it doesn’t fit.

Any structured AI upskilling/reskilling program must provide continuous training. This includes in-house programs as well as partnerships with universities, vendors, and consultants. Remember that finding, securing, and retaining AI talent will become a war for the best and brightest. Incentivize your workforce to go all in; doing so will strengthen your efforts and enhance your organization’s competitive edge. And if an organization decides to hire AI talent, get ready to pay: million-dollar compensation packages aren’t unusual for sought-after software engineers with generative AI skills.


8. Safeguard Cybersecurity


AI tools give hackers a new weapon with which to attack computer systems and AI applications, requiring companies to invest in far stronger cybersecurity defenses. According to the World Economic Forum, highly sophisticated system infiltration is already occurring. One example is deepfakes—content hackers build to emulate top executives’ voices and writing styles. Such capabilities could be used to issue illicit orders and authorizations that would be very difficult to detect. So it’s no surprise that the major targets are money and data. In February 2024, CNN reported a case in which a finance professional at a multinational company was tricked into attending a videoconference populated with deepfake colleagues who sounded authentic enough to fool the worker. Despite initial concerns, the worker ended up remitting around $25 million to the fraudsters.


Organizations must be proactive and embed rigorous cybersecurity procedures, internal controls, and detection systems into their AI systems to defend against breaches like the one described above. Management accountants can participate in cyberattack defense by staying aware of the schemes used to trick employees, complying with all security protocols, and conducting cybersecurity procedure compliance audits.


Talent is a major issue here, too. The (ISC)2 Cybersecurity Workforce Study estimates that there is a global shortfall of 3.4 million cybersecurity professionals needed to protect organizations. So, while consultants can supplement required skills, management accountants can also develop cybersecurity skills to help bridge the talent gap. And as Tom Burt, corporate vice president, customer security and trust at Microsoft, said, “Artificial Intelligence will be a critical component of successful defense. In the coming years, innovation in AI-powered cyber defense will help reverse the current rising tide of cyberattacks.”


9. Promote Ethics and Privacy


AI ethics and privacy are a critical issue for organizations. “In no other field is the ethical compass more relevant than in artificial intelligence,” said Gabriela Ramos, assistant director-general for social and human sciences at UNESCO. “AI technology brings major benefits in many areas, but without the ethical guardrails, it risks reproducing real world biases and discrimination.”


Let’s look at how the United Kingdom’s Association of Chartered Certified Accountants (ACCA) promotes AI ethical accountability and privacy for accounting professionals. Table 3 summarizes these ethical risks and the controls that safeguard organizations.


TABLE 3: AI Ethical Accountability Framework


1. Explainability and transparency. AI system algorithms are often extremely complex, making it difficult for many to trust and understand these systems. Ethical control: IBM recommends implementing explainable AI (XAI) to translate AI algorithms into more understandable terms.

2. Bias and discrimination. AI system designs might advance societal biases and prejudices. Ethical control: Require development of unbiased algorithms and ensure AI training sets are diverse and unbiased.

3. Privacy concerns and security risks. Because AI consumes vast amounts of data, privacy and security are at risk, especially from cyber hackers. Ethical control: Implement ironclad data protection and data handling controls.

4. Legal and regulatory challenges. Countries and jurisdictions are enacting mandates that include ethical requirements to monitor and regulate AI. Ethical control: Continuously monitor the development of AI regulatory mandates and ensure organizational compliance.

5. Inaccuracy and misinformation. AI algorithms and applications can generate erroneous information, leading to incorrect conclusions. Ethical control: Establish rigorous policies and procedures for AI development and monitor results for inaccuracy.

6. Magnification effects. Inherent biases in training data sets, left undetected, might cause a minor error to snowball into a major problem, such as amplified gender-biased results. Ethical control: Implement test procedures, monitoring, and controls to detect data magnification errors.

7. Unintended consequences. As AI develops rapidly, unexpected issues might emerge. Ethical control: Ensure there’s human oversight to monitor and question AI results.


Source: Adapted from ACCA



Management accountants should educate themselves about these risks and ensure their organizations are taking steps to promote an ethical AI culture.   


As AI evolves, organizations can capitalize on its unprecedented opportunities to gain a competitive advantage. However, the promise of a pot of gold at the end of the AI rainbow must be tempered by the reality of potential risks and great harm. Management accountants must be active participants in the deployment and governance of AI models, serving as their company’s ethical conscience to ensure values and principles are protected. As European Commission president Ursula von der Leyen said: “AI is a very significant opportunity—if used in a responsible way.”



The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of the Air Force, the Space Force, the Department of Defense, or the U.S. Government. Distribution A: Approved for Public Release, Distribution Unlimited: PA#: USAFA-DF-2024-345.

About the Authors