Geordie Consulting fully endorses the Centre of Excellence Model for Analytics Platforms, but we are often asked why. This article explains what sets the Centre of Excellence model apart from more traditional ways of working and looks at the actual costs.

The concept

The heart of the model is a cross-functional group that forms the Centre of Excellence (CoE). Who sits in that group is critical to its success: stakeholders must be represented, as must those expected to manage the Data Platform (often IT). Stakeholder representatives should also be committed to, and supported in, learning more about the data platform and how to get the most out of the data within it, now and in the future. The result is a technically competent body with deep ties into the business, able to understand the pressures their colleagues face.

This must be a technical function, because that is how you prevent the erosion of your Data Platform by “Shadow IT”. In a traditional Data Platform model, when a request is deemed not possible, or even just expected to take “too long”, an interim or temporary report is generated. In practice, these temporary solutions are rarely temporary; they grow and evolve in isolation from the core data platform. Consider how often you have heard of an Access Database or Excel Spreadsheet built by someone who is no longer with your organisation, yet it is still in use and no one knows how to fix or update it.

Who should be in our Centre of Excellence?

The CoE should comprise those who already support the Data Platform and the Analysts and Citizen Developers who create the “Shadow Analytics” solutions. The reason is two-fold. Firstly, the saying about it being better to have them on the inside looking out rather than on the outside looking in is apt: if they are within the CoE and benefiting from it to further their career, they are incentivised to support the Data Platform. Secondly, these people are often excellent analysts who understand the crux of a business request rather than just the words of the request. That, combined with their typically strong problem-solving ability, means solutions can be built quickly that meet, and frequently exceed, customer expectations.

What you have established is a group made up of those with significant process discipline alongside those who excel at solving problems regardless of the wider impact. That combination is the source of the excellence: the left-field ideas that solve problems can be wrapped into a properly governed solution by the more disciplined team members, all while both sides learn from each other.

The benefits

The result of this structure is that, as requests come in from across the organisation, the inevitable conflicts can be better resolved and fed back to the requestors. For example, if Finance wants to get more data out of the warehouse dispatch tool, but that tool is being replaced by an entirely new platform in three months, the appropriate thing is to discuss the request with the project team to see whether the new data is natively available in the new tool and, if not, how quickly it can be added (remembering this may be too late to change a project deliverable). Understanding that, and being able to have those conversations with the Finance and Warehouse departments because trusted people represent them within the CoE, prevents the Data Platform from being blamed for something beyond its control. Money is also saved, because changes are not made to a platform that is about to be retired.

This interaction between the business and a core infrastructure function is transformative. Suddenly, the person who would traditionally undermine the data platform by producing unsupported solutions to point problems is working within the CoE, using their skills to deliver an interim solution quickly while supporting the triage that decides when permanent changes can be made.

The ability to fine-tune rather than just triage means that if you still rely on third-party resources for the more significant elements of work, you will use them less frequently and for higher-value, proven tasks, reducing the outlay on those services. Allowing many more people to build report content within the guard rails of your existing Semantic Models also means you will spend less time building ad-hoc content. Secondary (thin) Reports and connected Excel spreadsheets do not just provide better security than the traditional alternatives; they are also simple enough for most people to use with only basic training.
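To make that “guard rails” point concrete, here is a minimal sketch of how ad-hoc content can reuse a published Semantic Model instead of rebuilding logic: a short Python script that runs a DAX query through the Power BI REST API “Execute Queries” endpoint. The dataset ID, access token, and the table and measure names in the query are illustrative placeholders, not part of any particular implementation.

```python
# A minimal sketch of reusing a published Power BI semantic model for ad-hoc
# content via the REST "Execute Queries" endpoint. The dataset ID, token, and
# the table/measure names in the DAX query are placeholders -- substitute the
# names from your own semantic model.
import requests

DATASET_ID = "<your-semantic-model-id>"      # hypothetical placeholder
ACCESS_TOKEN = "<azure-ad-access-token>"     # e.g. acquired via MSAL

# The DAX query runs against the shared semantic model, so every thin report
# or connected spreadsheet sees the same measure definitions.
dax_query = """
EVALUATE
SUMMARIZECOLUMNS(
    'Date'[Year],
    "Dispatched Orders", [Total Dispatched Orders]
)
"""

response = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/datasets/{DATASET_ID}/executeQueries",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"queries": [{"query": dax_query}],
          "serializerSettings": {"includeNulls": True}},
    timeout=30,
)
response.raise_for_status()

# Rows come back as a list of dicts keyed by column name.
for row in response.json()["results"][0]["tables"][0]["rows"]:
    print(row)
```

The same model can be reached from a thin report or from Excel, which is why the measure definitions only need to exist, and be maintained, in one place.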

Sounds too good to be true

Modern solutions have a significant problem that is rarely discussed and is certainly not understood in the boardroom: they perform at their worst at the moment they are launched. This is not an issue with development or a project failure; it is a reality of the process.

For a CoE that is started when you have a single solution coming in, the scope of the data model is limited by what is available, so many new requests cannot be accommodated immediately and must be scheduled as soon as possible, typically resulting in Data Model updates. Fast forward one year and the amount of data and breadth covered by the model(s) is such that when a new request comes in, the response will often be, “Hmmm, we don’t have that today. However, there are these other metrics that sound like they should support your request; we can produce a report for you on those today.” Faced with that, and upon receiving the result, the typical response will be, “Oh, I didn’t realise X, maybe we should be looking at Y instead, is that available?” This can lead to a repeat of the cycle – we have most of what is asked for, or an analogue of it – while the request is fine-tuned, until either there is no need to update the model at all, or there is still a need but it is far less urgent because the immediate need has been satisfied. Remember as well that, because of the CoE, many of these requests can be worked on by someone within the department or team making the request, removing a significant amount of the risk of misunderstanding.

NOTE: There is a similar issue with Machine Learning. On Day 1, a model has only been trained on “test” data, and that sample can differ from real-world data. A Machine Learning model should typically be retrained periodically to continue to fine-tune its performance, which means that on Day 1, by definition, it has had the least training.
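As an illustration of that retraining cycle, here is a minimal sketch using scikit-learn on entirely synthetic data; the model choice, batch sizes, and drift are arbitrary, and the only point being made is that the Day 1 model has seen the least, and least representative, data.

```python
# A minimal sketch of periodic retraining using scikit-learn.
# The data is synthetic and the model choice is illustrative; the point is
# simply that the Day 1 model has seen the least (and least representative) data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_batch(n, drift=0.0):
    """Synthetic 'real-world' data whose distribution drifts over time."""
    X = rng.normal(loc=drift, scale=1.0, size=(n, 4))
    y = (X.sum(axis=1) + rng.normal(scale=0.5, size=n) > drift * 4).astype(int)
    return X, y

# Day 1: trained only on the initial "test" sample.
X_train, y_train = make_batch(200)
model = LogisticRegression().fit(X_train, y_train)

# Each period: score on the newest data, then fold it into the training set
# and retrain so the model keeps tracking the real-world distribution.
for month in range(1, 7):
    X_new, y_new = make_batch(200, drift=0.2 * month)
    score = accuracy_score(y_new, model.predict(X_new))
    print(f"month {month}: accuracy before retraining = {score:.2f}")
    X_train = np.vstack([X_train, X_new])
    y_train = np.concatenate([y_train, y_new])
    model = LogisticRegression().fit(X_train, y_train)
```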

These are cautionary tales to remind you that there must be top-level buy-in to the CoE process. The first quarter will be challenging for all involved, as sticking to the CoE structure can seem to slow development down.

Resourcing your CoE is a significant concern for many organisations. They will have people who can do some of the tasks a CoE requires, but often multiple skills gaps will need to be plugged, and there will be some outright hostility to the idea of working within a CoE. There is no one-size-fits-all answer to this; contract resources can plug the gap and allow internal resources to catch up, but that is an expense and will add to the initial cost of the CoE. Ultimately, however, this is a vision that must be embraced, with teams supported to get on board through training and development programmes.

Conclusions

Committing to a CoE is something many businesses struggle with; after all, the traditional “we just have support” model seems so enticing. The challenge with that model is that it is self-destructive. If that seems dramatic, remember why your previous Data Platform(s) failed; chances are a lack of agility and the growth of shadow reports will be pretty high on the list. The only way to address those challenges is with a CoE. It is also vital to understand that your CoE will get better with age. Your Day 1 CoE will be at its worst. Give it a year, however, and you will have something that gives your organisation agility and capabilities far beyond those being considered today.

Before he started Geordie Consulting, Ross was working for a company in 2020; in February of that year the global situation seemed to be getting worse as a virus was taking hold in China. Within 24 hours, he folded the Johns Hopkins data into the corporate IT model and produced reports on the spread of the disease relative to staff locations globally. That was only possible because the global IT model had gradually expanded over the previous three years. Starting from scratch when that report was needed would have taken at least a week, and while certain figures were approximations, the business value of having an approximation now far outweighed a definite number at an unknown date in the future.
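Purely as an illustration of the kind of join involved (not the actual report Ross built), the sketch below pulls the Johns Hopkins CSSE confirmed-cases time series and merges the latest counts with a hypothetical staff head-count table; in practice, the staff data would come from the corporate IT model, and the CSV layout shown is an assumption about that public dataset.

```python
# A rough sketch of the kind of join described above -- not the actual report.
# The JHU CSSE time-series CSV layout and the staff table are assumptions; the
# staff head counts would come from your own HR or IT model.
import pandas as pd

JHU_CONFIRMED = (
    "https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/"
    "csse_covid_19_data/csse_covid_19_time_series/"
    "time_series_covid19_confirmed_global.csv"
)

cases = pd.read_csv(JHU_CONFIRMED)
latest_col = cases.columns[-1]  # most recent date column in that layout
cases_by_country = (
    cases.groupby("Country/Region", as_index=False)[latest_col]
    .sum()
    .rename(columns={latest_col: "confirmed_cases"})
)

# Hypothetical staff head-count by country, standing in for the corporate model.
staff = pd.DataFrame(
    {"Country/Region": ["United Kingdom", "US", "Germany"],
     "staff_count": [250, 120, 60]}
)

report = staff.merge(cases_by_country, on="Country/Region", how="left")
print(report.sort_values("confirmed_cases", ascending=False))
```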

It typically takes between one and two years for an Excel Power User to become an expert in the Power Platform, although they can reach an intermediate level within three months. Technical resources will often need a similar period to get up to speed, although the extent of training, and for whom, is something that should be worked through on a case-by-case basis. Undertaking development and training plans will ensure you better understand the additional resource costs needed. Once the initial period is complete and your people are able to run the day-to-day, you will find that your CoE is cost-neutral while delivering continually increasing value. The cost flex should be closely monitored so that third-party spending is reduced and, as far as possible, removed.

Geordie Consulting uses the Microsoft Power Platform to provide businesses with agile, cost-effective data solutions. We specialise in integrating data from across business applications into a single, manageable platform. Our services include enabling business users to develop custom reports using Power BI and Excel, leveraging Semantic Models for consistent data analysis, and enhancing user self-sufficiency through Copilot. We aim to give you the data agility to support a data-driven business, and we provide comprehensive documentation and support to maximise the business value of your data. Let our insight unlock your insights.