Quality financial data isn’t just about accuracy. It is structured, compliant and accessible. And, yes, it should be accurate.
While financial data is integral to high-quality decision-making, it can also raise serious concerns. A recent IFA Magazine article (evocatively titled ‘What’s Paralysing UK Finance Decision-Makers With Fear?’) explores how decision-makers’ paranoia over preserving historical data leaves them hostage to legacy accounting software. Their concern for the sanctity of historical records prevents them from changing systems, despite staggering advancements in financial data technology.
Irina Staneva, former auditor at PwC, explains the intricacies of uploading data to new systems: “Some companies approach migrating all historical data as a distinct phase of the project. For them, it is vital to have all historically available data accurately loaded into the new software.
“In contrast, for others, it is enough to ensure that accurate beginning balances are properly loaded into their new accounting software and keep the detailed historical data separately. How companies would like to handle the transfer and storage of historical data may affect the innovative product they may want to use.”
However, maintaining high-quality data across past, present and future records becomes far easier once you adopt a pro-governance, pro-technology and anti-manual-manipulation stance. Let’s dive into each of these.
Data governance and data quality are sometimes used interchangeably, but they mean slightly different things. Data quality is defined by how accurate and accessible data is. Data governance, on the other hand, refers to how an organisation maintains its data’s security, privacy and accessibility. A robust data governance framework – where issues are tracked, all data is auditable and data standards are upheld – improves data quality automatically. To assess your company’s data governance, consider asking yourself the following questions:
Each company will have a different approach to data governance. But if you can answer each of these questions positively, you’ll have a firm foundation to try different approaches to improving data quality.
Many companies struggle to deliver their financial data on time. Despite many calls for wider adoption of automated real-time technology, relatively few high-profile companies have announced moving to cloud-based systems that supply a pipeline of real-time insights. Implementation takes commitment and resources that some companies are unwilling to invest.
The unique benefit of receiving a real-time flow of financial data is that it puts you one step ahead of potential issues that traditional accounting software does not immediately recognise. Let’s use a practical example.
Suppose you’re a retailer and you notice, in real time, a decline in inventory levels for certain key products. You can then proactively identify and resolve the cause before you experience a supply chain disruption. Congratulations, you’ve caught the issue before it moves further downstream and becomes more costly.
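As a rough sketch of that kind of real-time check, here is a minimal Python example. The event fields and the reorder threshold are hypothetical, not drawn from any particular platform:

```python
# A minimal sketch of a real-time inventory check over a stream of
# stock-level events. Field names and the threshold are illustrative.
from dataclasses import dataclass

@dataclass
class StockEvent:
    sku: str
    units_on_hand: int

REORDER_THRESHOLD = 50  # hypothetical level below which supply is at risk

def flag_low_stock(event: StockEvent) -> bool:
    """Return True when a product's stock falls below the reorder threshold."""
    return event.units_on_hand < REORDER_THRESHOLD

# Each incoming event is checked the moment it arrives,
# rather than waiting for a month-end report.
for event in [StockEvent("SKU-123", 210), StockEvent("SKU-456", 12)]:
    if flag_low_stock(event):
        print(f"Investigate {event.sku}: only {event.units_on_hand} units left")
```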
Even more concretely, Dun & Bradstreet used state-of-the-art AI to extract data from 200,000 pages a day in real time. As a data provider, they must supply accurate shareholder information for nearly six million UK companies. Having previously relied on manual methods to handle the many document variations, they tested AI solutions until they found one delivering ‘perfect accuracy with minimal latency’. If you’re interested in achieving a real-time data flow, it’s worth taking a practical approach. Consider the following questions:
Achieving high-quality data in real time is one example of what automation makes possible. Many people think of automation as a tool for speeding up data processes. While that’s true, automation can improve data quality at the same time. Here’s how.
Machine learning, or AI-powered automation, excels at flagging outliers – even with limited training data. Moreover, if an AI-powered system produces a false positive, it can learn from the error (an example of continuous learning).
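As an illustration, here is a minimal outlier-flagging sketch using scikit-learn’s IsolationForest. The invoice amounts are made up; a real system would train on historical ledger data:

```python
# A minimal sketch of ML-based outlier flagging on transaction amounts.
import numpy as np
from sklearn.ensemble import IsolationForest

# Mostly routine invoice amounts, plus one suspicious entry.
amounts = np.array([[120.0], [95.5], [110.0], [102.3], [98.7], [9800.0]])

model = IsolationForest(contamination=0.1, random_state=0)
labels = model.fit_predict(amounts)  # -1 marks an outlier

for amount, label in zip(amounts.ravel(), labels):
    if label == -1:
        print(f"Flag for review: {amount:.2f}")
```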
Let’s explore what else AI-based automation can offer for financial data quality management.
Various AI financial data systems can improve data quality by automating particular manual touchpoints. Examples of these financial data tools include:
Too many companies use junior analysts or dedicated full-time employees (FTEs) to extract financial data from documents. In contrast, AI-powered tools can automate data capture, resulting in fewer manual errors.
Analysis capabilities might be baked into some data extraction tools. For example, some software can combine extracted data fields (e.g. starting inventory + purchases – ending inventory) to derive figures such as Cost of Goods Sold (COGS).
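As a minimal sketch, that calculation looks like this in Python; the field names and figures are purely illustrative:

```python
# Derive Cost of Goods Sold (COGS) from extracted inventory fields.
def cost_of_goods_sold(starting_inventory: float,
                       purchases: float,
                       ending_inventory: float) -> float:
    """COGS = starting inventory + purchases - ending inventory."""
    return starting_inventory + purchases - ending_inventory

# Example: £40,000 opening stock, £25,000 purchased, £32,000 left at period end.
print(cost_of_goods_sold(40_000, 25_000, 32_000))  # 33000.0
```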
Robust validation algorithms can ingest data and cleanse it, ensuring it is accurate and correctly formatted. AI can handle and process vast datasets, and automating validation and cleansing tasks (once the exclusive domain of data engineers) is transforming how we work with data.
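To illustrate, here is a minimal validation-and-cleansing sketch. The field names, currency format and rounding tolerance are assumptions, not any specific tool’s schema:

```python
# A minimal sketch of automated validation and cleansing for extracted
# balance-sheet records. Field names and formats are illustrative.
def clean_amount(raw: str) -> float:
    """Normalise a currency string such as '£1,250.00' to a float."""
    return float(raw.replace("£", "").replace(",", "").strip())

def validate_balance_sheet(record: dict) -> list[str]:
    """Return a list of validation errors (an empty list means the record passes)."""
    try:
        assets = clean_amount(record["total_assets"])
        liabilities = clean_amount(record["total_liabilities"])
        equity = clean_amount(record["total_equity"])
    except (KeyError, ValueError) as exc:
        return [f"Malformed record: {exc}"]
    errors = []
    # The accounting equation must hold, allowing for rounding.
    if abs(assets - (liabilities + equity)) > 0.01:
        errors.append("Assets do not equal liabilities plus equity")
    return errors

record = {"total_assets": "£1,250,000.00",
          "total_liabilities": "£700,000.00",
          "total_equity": "£550,000.00"}
print(validate_balance_sheet(record))  # [] means the record passes
```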
Migration tools prevent potential data loss when you move to a new system. Working alongside tools that automate financial data processes, decision-makers maintain autonomy over their company’s financial data while achieving measurable improvements in its quality.
Benjamin Franklin (didn’t) once say that ‘a penny saved is a penny earned’. The same principle applies here – cutting out costly errors can reclaim a surprising amount of money. Let’s take a closer look.
The cost savings from improved data quality can be substantial. The 1:10:100 rule, developed by George Labovitz and Yu Sang Chang in 1992, suggests that it costs $1 to verify a record as it is entered, $10 to cleanse and correct it later, and $100 per record if nothing is done and the error is left to cause failures downstream.
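Put into numbers, here is what the rule implies for an illustrative batch of 1,000 flawed records (the pound figures are purely for illustration):

```python
# A worked example of the 1:10:100 rule's arithmetic for 1,000 flawed records.
records = 1_000
prevention = records * 1     # verify each record at the point of entry
remediation = records * 10   # cleanse and correct the records later
failure = records * 100      # do nothing and absorb the downstream cost

print(f"Prevention:  £{prevention:,}")   # £1,000
print(f"Remediation: £{remediation:,}")  # £10,000
print(f"Failure:     £{failure:,}")      # £100,000
```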
Labovitz and Chang’s rule presents a compelling case for proactive data management. As mentioned above, technology offers exactly that by dealing with poor-quality data automatically and in real time. However, saving pennies on poor data quality only to spend them immediately on expensive AI tools does not capitalise on the cost-saving properties of AI-powered technologies.
Firstly, you’ll discover that AI-powered data management tools come with different pricing models (e.g. subscription, per page, freemium). Finding the one that suits your data volume can save costs immediately. You could also consider using a reputable vendor rather than building an in-house solution, which can cost anywhere from £5,000 to £500,000 or more, depending on the project’s complexity and integration requirements.
Ultimately, the right type of financial data management technology will produce the right data, naturally providing a high Return On Investment (ROI). It really is that straightforward. As one of our clients says, “Let the data speak for itself.” We couldn’t agree more.
Technology on its own cannot improve financial data quality. That’s a fact.
Moreover, introducing a new, high-powered financial data platform won’t instantly eliminate errors in real time. Rather, the collaboration between skilled finance professionals and sophisticated AI tools creates a synergy that shows up tangibly as improved data quality and cost savings. Therefore, if better financial data quality sounds appealing, you might be interested in Financial Statements AI.
If you would like to improve the accuracy and speed of your financial statement data management, Financial Statements AI is our new tool designed for just that. Financial Statements AI extracts data from financial statements by classifying the balance sheet and income statement. The extracted data is then processed to show key financial metrics such as EBITDA, Gross Profit and Depreciation. Both the extracted and processed data are then available for instant download to your device.
We’re offering free access to Financial Statements AI – book a demo or email hello@evolution.ai for more information.
Not interested in financial statement extraction? Get in touch to discover more about how we can establish a real-time feed of information from other financial documents.