Sébastien Boivin, CPA, CMA
Vice President, Operations, DECIMAL
With today’s improvements to, and democratization of, business intelligence tools, organizations face a new reality that carries a number of significant risks and challenges. The newly available management information is clearly valuable but, as with any decentralized process, the ability to generate and share it must be governed. Otherwise, organizations may find themselves exposed to risks that, once detected, can be difficult to recover from.
In light of this, here are five best practices that organizations should follow to avoid the most significant risks associated with sharing management information.
1. Organizing and communicating clear data governance that provides a framework for access to authorized data sources within the organization
Organizations have a lot to gain from democratizing access to data, as this enables every kind of stakeholder to analyze the content and extract valuable information. As with any organizational asset, it’s important to establish rules and policies for accessing, communicating and monitoring data. These rules should cover such points as:
- Locating and communicating the source of original data
- All data stored in a data warehouse has a source, and it’s important for people to understand the source system of record (SSoR) if they want to correctly interpret the results and make corrections to the source systems if need be.
- Knowing the limitations of the data we have and use
- Limitations concerning data use must be communicated to ensure that the information remains relevant and truthful. For example, aggregating data may involve applying an assumption that may make the data less reliable in some respects.
- According to the “garbage-in, garbage-out” principle, some systems collect low-quality information that can only be improved by modifying certain processes. In such cases, managers must inform users that some data may be less reliable, and explain the processes being used so that data quality can be improved.
- Ensuring that data usage complies with the organization’s rules and regulations
- Access to sensitive information such as salaries, contracts and personal data may require a higher security level, or may be denied altogether.
- Establishing an approval process that must be followed before information is released
- It’s important to have an approval process that users must follow before releasing management information within the organization, to prevent managers from making misguided decisions. This process should identify the data custodians responsible for ensuring that the information to be released is accurate. The data custodian may vary depending on the data being analyzed (e.g., Finance for financial data, HR for human resources data, Production for operational data).
- Ensuring that users experiencing data issues know the support options available to them
- We must not underestimate the additional support workload created when a new tool is introduced in a decentralized process. To support the organization’s various stakeholders effectively and promptly, a support procedure must be set up so users know who to contact when they run into problems; the right contact may vary depending on whether the issue concerns system access or data quality (processing errors, security errors, etc.). In other words, decentralized users must know who to contact for their specific issue, and a corresponding support ticket process must be in place.
- Having employees sign a yearly contract that reiterates the rules governing access to and use of the organization’s data
- Since making the organization’s data more accessible involves a certain amount of risk – such as possible data leaks – users must sign a yearly contract that reminds them of their responsibilities in terms of data use.
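Governance rules like those above are easiest to enforce when they are recorded in a machine-readable form. The following is a minimal sketch only; the domain names, security levels, source systems and the `can_access`/`custodian` helpers are illustrative assumptions, not part of the article. It shows how each data domain could record its source system of record, its custodian for release approvals, and a minimum access level:

```python
# Hypothetical governance registry: each data domain records its source
# system of record (SSoR), the custodian who approves releases, and the
# minimum security level required for access.
GOVERNANCE = {
    "financial":   {"ssor": "ERP",  "custodian": "Finance",    "min_level": 2},
    "hr":          {"ssor": "HRIS", "custodian": "HR",         "min_level": 3},
    "operational": {"ssor": "MES",  "custodian": "Production", "min_level": 1},
}

def can_access(domain: str, user_level: int) -> bool:
    """Check a user's clearance level against the domain's minimum level."""
    rule = GOVERNANCE.get(domain)
    if rule is None:
        return False  # unknown domains are denied by default
    return user_level >= rule["min_level"]

def custodian(domain: str) -> str:
    """Who must approve a release of information from this domain."""
    return GOVERNANCE[domain]["custodian"]
```

For example, a level-1 user asking for HR data would be refused (`can_access("hr", 1)` is false), and a release of financial figures would be routed to Finance for approval. The point of centralizing the registry is that access rules, custodians and source systems are declared once rather than re-decided case by case.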
2. Transforming data into information and sharing it in a ready-to-use format that requires no additional manipulation
- When information is released, it should enable users to analyze the intended subject with no additional processing, apart from the level of drilling and analysis permitted in the published report. If the information can be analyzed from several angles, it’s best to produce a separate report for each angle rather than leaving end-users to manipulate the data themselves to achieve the desired outcome.
- If analyzing the data repeatedly means exporting it to Excel and processing it with formulas, that processing could instead be integrated into a metric made directly available in the data cubes. Remember that whenever end-users manipulate information, there is always a risk that they will get discouraged and abandon their analysis, or make a mistake that results in misleading information.
- When data is processed outside of its official source, several versions of the same analysis or the same figure may exist, because the people who processed it had different perspectives on the information. This can lead to unproductive discussions about which figure is correct, and to misguided business decisions.
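To illustrate moving an Excel formula into a shared metric, here is a minimal sketch; the column names, the sample rows and the gross-margin formula are illustrative assumptions. Defining the calculation once, next to the data, means every published report drills into the same figure instead of each user re-deriving it in a spreadsheet:

```python
# Define the metric once, in the central layer, instead of letting each
# user rebuild the formula in an exported spreadsheet.
def gross_margin_pct(revenue: float, cost_of_sales: float) -> float:
    """Illustrative metric: gross margin as a percentage of revenue."""
    if revenue == 0:
        return 0.0
    return round((revenue - cost_of_sales) / revenue * 100, 1)

# Rows as they might come out of the warehouse (illustrative data).
rows = [
    {"unit": "East", "revenue": 1200.0, "cost_of_sales": 900.0},
    {"unit": "West", "revenue": 800.0,  "cost_of_sales": 560.0},
]

# Every published report reuses the same metric, so the figures agree.
report = {r["unit"]: gross_margin_pct(r["revenue"], r["cost_of_sales"])
          for r in rows}
```

Because the formula lives in one place, a correction (say, a change in how cost of sales is defined) propagates to every report at once, avoiding the “which figure is correct?” debates described above.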
3. Providing a self-service access platform enabling users to access information and drill down for the information they need
- Although most users are familiar with Excel, it should be used only sparingly to distribute official corporate information, as it provides far too many opportunities for data manipulation. Instead, managers should consider acquiring simple, easy-to-use tools that are nonetheless powerful and easily deployed via a web browser.
- Once data governance principles have been implemented, data generation tools should be rolled out via a self-service platform. This approach is safer, more efficient, enables users to view relevant reports, and means they can easily drill down to the data they’re looking for.
4. Training people so that they can independently analyze and search for information
- Organizations can’t wait for their IT experts to get around to addressing the diverse needs of their various sectors, so self-sufficiency is key. With proper training, a user with a question can access the information needed to answer it almost instantaneously.
5. Being open to emerging technologies that make it easier to combine internal information with external complementary information, and that enable the organization’s past and present results to be projected into the future through assisted forecasting
- AI and machine learning represent the way of the future, and many sectors will have to be open to these advances if they want to take full advantage of them. One advantage, among many, is their ability to aggregate a variety of internal and external data sources, confirming (or challenging) the organization’s sometimes instinctive approaches and thereby informing its decisions and actions.
- It’s interesting to think that proven and increasingly sophisticated statistical models could take the wealth of historical data organizations possess and use it to extrapolate the conditions they will face in the near future.
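As a toy illustration of the assisted-forecasting idea, historical results can be extrapolated a few periods forward. The monthly figures and the simple linear-trend model below are assumptions for illustration only; real deployments would rely on far richer statistical or machine-learning models:

```python
# Fit a simple linear trend to historical values and project it forward.
# This is a toy stand-in for "assisted forecasting"; production systems
# would use far more sophisticated models.
def linear_forecast(history: list[float], periods_ahead: int) -> list[float]:
    n = len(history)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(history) / n
    # Ordinary least-squares slope and intercept.
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history))
             / sum((x - x_mean) ** 2 for x in xs))
    intercept = y_mean - slope * x_mean
    return [intercept + slope * (n + k) for k in range(periods_ahead)]

# Illustrative monthly sales figures trending upward.
sales = [100.0, 110.0, 120.0, 130.0]
forecast = linear_forecast(sales, 2)  # project the trend two months out
```

Even this simple sketch shows the principle: the organization’s own historical figures become the raw material for a forward-looking view, rather than being used only for retrospective reporting.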