As AI expands its potential, what are the risks for Finance?

The huge number of mundane tasks that accounting teams still perform throughout the year makes financial software an obvious target for automation. We can now go even further. Capacity and technology have grown to the point that we can move beyond simple automation to using artificial intelligence to make decisions on our behalf. But what does this mean for finance teams?

As the power of AI has expanded, its decision-making capability has become the center of debate: how should it be conducted, and what criteria should govern it? The UK, for example, has recently taken up serious political debate on how to ensure AI safeguards people and remains explicable. I'll address these and other topics in this article.

Technology and capacity

AI has moved beyond the simplicity of validation rules and macros with the advent of machine learning: the ability of a system to get smarter as it recognizes patterns, making better decisions over time. The obstacle in the past has been the inability to expose the system to enough data (we're talking millions of instances) to identify patterns accurately. Google's face recognition project is an example: the research team had to feed a massive number of images into the system to teach it what to look for. In accounting, we already collect massive amounts of data through the nature of our standard processes. But what about the capacity to handle all that data?

Thanks to cloud computing, we can now run processes like big data reporting and machine learning at scale. We can use billions of transactions to teach the system to find things like expense exceptions and flag them as fraud, errors, or items needing further review. Accountants will tell you that much of what they do is managing exceptions. At Sage Intacct, we are building AI with the capacity to learn to identify, and possibly rectify, exceptions, leaving the accounting team free to work on the strategic tasks involved in growing the business. By maintaining high elasticity in our cloud servers (more than twice the average load), we can run AI operations across large server arrays and large data sets to get results.
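To make the idea of exception flagging concrete, here is a minimal, purely illustrative sketch: it marks expense amounts that deviate sharply from their category's historical mean. A real system would learn from far richer signals (vendors, dates, approval patterns) across billions of records; the data shape, field names, and threshold here are assumptions, not Sage Intacct's implementation.

```python
from statistics import mean, stdev

def flag_exceptions(expenses, threshold=2.0):
    """Flag expenses whose amount deviates sharply from the historical
    mean of their category. Returns a list of (expense, z_score) pairs
    for items needing review. Simple statistical thresholding stands in
    here for what would be a learned model in production."""
    # Group historical amounts by expense category.
    by_category = {}
    for e in expenses:
        by_category.setdefault(e["category"], []).append(e["amount"])

    flagged = []
    for e in expenses:
        amounts = by_category[e["category"]]
        if len(amounts) < 3:
            continue  # not enough history in this category to judge
        mu, sigma = mean(amounts), stdev(amounts)
        if sigma == 0:
            continue  # all amounts identical, nothing stands out
        z = (e["amount"] - mu) / sigma
        if abs(z) > threshold:
            flagged.append((e, round(z, 2)))
    return flagged
```

Once flagged, each item can be routed onward: corrected automatically, rejected as fraud, or queued for a human reviewer.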


As we enhance AI and machine learning, one of the big concerns is avoiding a dystopian sci-fi society where human life takes a back seat to the aims of robotic overlords. This means we have to keep people in charge of systems by building in controls. These controls can be thought of as levers, much like the levers we use to direct computers in other automated tasks. Computers currently do much of the heavy lifting in mapping stars, but the sections they map are defined and overseen by people. Likewise, the use of computer diagnostics in medicine is controlled and adjusted at every level by flesh-and-blood physicians and medical technicians.

In finance, we can create levers that apply boundaries to AI, keeping it from moving beyond discrete tasks like searching for expense anomalies. The biggest safeguard we are working on at Sage Intacct is teaching the AI to report on itself in a human-readable way.


In the debates underway in Great Britain, this reporting back is called making the AI explicable; the technology industry calls it explainable AI. By giving the AI a mandate to explain its decisions, recommendations, and actions, we keep people in control and safeguard the AI from going off the rails. The AI may have found a couple thousand anomalies in the billion records it examined, and rather than inspecting those thousands of records ourselves, we want the AI to tell us why it chose them, or at least sort them into reviewable buckets, so we can run further automations to correct errors or reject fraud. Sage Intacct is working to ensure that our AI processes always return human-readable justifications for every action. We will also need ways to audit these explanations to make sure a developing AI is explaining all the reasons or criteria behind a decision.
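As a sketch of what such a human-readable justification might look like, the hypothetical helper below turns a flagged expense and its anomaly score into a plain-English reason plus a review bucket that downstream automation could route on. The bucket thresholds, field names, and function are illustrative assumptions, not Sage Intacct's actual API.

```python
def explain_flag(expense, category_mean, z_score):
    """Produce a human-readable justification for a flagged expense,
    along with a review bucket for downstream routing. The thresholds
    below are arbitrary placeholders for illustration only."""
    direction = "above" if z_score > 0 else "below"
    # Bucket by severity so automations can handle each group differently.
    if abs(z_score) > 4:
        bucket = "suspected fraud"
    elif abs(z_score) > 2:
        bucket = "needs review"
    else:
        bucket = "likely error"
    reason = (
        f"Amount ${expense['amount']:,.2f} is {abs(z_score):.1f} standard "
        f"deviations {direction} the category average of ${category_mean:,.2f}."
    )
    return {"bucket": bucket, "reason": reason}
```

The point is that every flag carries its reasoning with it, so a reviewer (or an auditor of the AI itself) never has to take a decision on faith.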

Impacts moving forward

What does all this advancement mean for finance teams? Moving forward, we foresee changes to the accounting function as people move to more strategic activity and hand the mundane over to AI. We also see the need for specialists able to understand and direct the actions of AI. The main prerequisite for getting a handle on AI is a strong understanding of statistics, probably beyond what the typical finance professional currently needs. This new role is akin to the reporting specialists we saw in the 1990s and early 2000s, who needed a stronger understanding of relational databases as querying data became more advanced and companies looked for ways to reduce their reliance on spreadsheets. And as with every transition to new technology, we will see disciplines arise that help us guide and improve the role of AI as it revolutionizes the way we do business.

