Top 11 Enterprise Mobile Center of Excellence (MCOE) Metrics
Let’s start with a question: “Do you have any Mobile Center of Excellence (MCOE) metrics?” And I don’t mean this to be didactic or facetious. I know you invested in mobile. You built an MCOE (with actual funding and headcount!). Of course you have metrics.
Except that many companies we talk to don’t. They all have individual app metrics, but when it comes to looking at whether their underlying investments in mobile are moving the needle…not so much. Perhaps they think it’s difficult to quantify the success of their MCOE, since an MCOE offers so many widespread benefits. So here’s a list of MCOE metrics to consider:
- Number of app ideas. One of the key roles of an MCOE is to act as an evangelist. Can we educate the company on how mobile impacts the business? Can we help create and shape the demand for mobile projects? IT is always talking to the business about possible initiatives, and most of them could and should have a mobile component. But they don’t always. If you have a project budgeting and approval tool, make sure all projects with a mobile component are tagged as such so you can track them. If projects lack a mobile tag, ask the owning team why. Keep track of mobile demand and make sure it’s trending in the right direction. This also helps ensure we’re not building the same thing twice for different groups, which reduces duplicate spend.
- The number of apps built. This builds on #1 and is a good metric of execution on demand: how many of those ideas actually get actioned. Build a funnel of ideas in and apps out. Try to figure out where ideas fall out of the funnel and understand why it happens.
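The idea-in/apps-out funnel above can be sketched as a simple stage-by-stage conversion report. The stage names and counts below are hypothetical; substitute whatever stages your own intake process uses.

```python
# Minimal sketch of an idea-funnel report: for each stage, show the count
# and the conversion rate from the previous stage, so drop-off points are
# easy to spot. All stage names and numbers are illustrative.

def funnel_report(stages):
    """stages: list of (stage_name, count) tuples, ordered from intake to release.

    Returns a list of (stage_name, count, conversion-from-previous-stage).
    """
    report = []
    prev_count = None
    for name, count in stages:
        conversion = count / prev_count if prev_count else 1.0
        report.append((name, count, round(conversion, 2)))
        prev_count = count
    return report

pipeline = [
    ("ideas submitted", 40),
    ("ideas approved", 18),
    ("apps in development", 12),
    ("apps released", 9),
]

for name, count, conv in funnel_report(pipeline):
    print(f"{name}: {count} ({conv:.0%} of previous stage)")
```

A steep drop at one stage (here, submitted → approved) tells you where to dig in and ask why ideas are stalling.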
- Number of apps built internally v. built by partners. This may or may not be important to you depending on the objective of your MCOE or the historical use of your partners. For those of you trying to seed mobile as a core competency within the company via the MCOE, track how many projects you’re “winning” versus how many go to third-party partners. How many projects are moving from being built by the MCOE itself to being built by teams across the wider IT organization as the capability diffuses? Think about what you’d like those numbers to be and then track progress against them. This will impact how you evolve the MCOE’s role through its lifecycle.
- ROI of our App Portfolio. While App ROI is of course important to evaluate on an individual basis, it’s also important to track ROI across your entire app portfolio. Evaluating the ROI of the portfolio maps back to—and informs—your process for evaluating, prioritizing and selecting new mobile projects. Are we valuing the right things? Do we need to adjust our selection criteria so we pick “winners” more consistently?
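One common way to roll portfolio ROI up (a sketch, not the only method) is to aggregate benefits and costs across apps rather than averaging per-app ROI, so large projects aren’t drowned out by small ones. The app names and figures below are invented for illustration.

```python
# Hedged sketch: portfolio-level ROI as (total benefit - total cost) / total cost,
# computed over the whole app portfolio. All data here is made up.

def portfolio_roi(apps):
    """apps: list of dicts with 'benefit' and 'cost' in the same currency and period."""
    total_benefit = sum(a["benefit"] for a in apps)
    total_cost = sum(a["cost"] for a in apps)
    return (total_benefit - total_cost) / total_cost

apps = [
    {"name": "field-sales", "benefit": 500_000, "cost": 200_000},
    {"name": "warehouse-scan", "benefit": 150_000, "cost": 120_000},
    {"name": "exec-dashboard", "benefit": 40_000, "cost": 90_000},
]

print(f"Portfolio ROI: {portfolio_roi(apps):.0%}")
```

Note that the one money-losing app (exec-dashboard) still shows up in the aggregate, which is exactly the feedback loop you want into your selection criteria.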
- Budget estimates v. actual. Everybody loves this one. The more we do something, the better we get at it. And app estimation (mobile or otherwise) is an area in which every organization is lacking. Check to see that the gap between budget estimates and actuals continues to narrow over time—in terms of both time and money. Where you can, try to understand what is driving the variance—did we fundamentally misunderstand what was being built? How much of it was caused by changing requirements (or something else)? Propelics built a calculator to help you determine the cost of your next Enterprise app or confirm the validity of a previous app’s price tag.
- Customer satisfaction surveys. And by customers, I mean your MCOE’s internal customers. Run surveys or at least capture feedback during the project retrospective (you’re doing those, right?) to get a sense for how the MCOE is perceived. Easy to work with? Perception of expertise? Design capability (if that’s in your purview)? Create an improvement plan for any areas your MCOE receives low scores in.
- Release velocity. I’m not just talking in terms of developer productivity, but also in terms of how many apps are ‘one and done’ projects versus how many are on a regular release cycle. Mobile apps should be updated on a consistent basis—at the very least for bug fixes, though updates are usually also used to add features that weren’t originally budgeted for. Add to that the fact that (surprise!) things generally don’t work as well in the wild as they do on the whiteboard. Without regular updates, declining usage and increased dissatisfaction are inevitable.
- Use of reusable components. This is a biggie. To improve developer productivity you need a reusable components library. Invariably, when we build an app there are modules or snippets of code that are ripe for reuse in other projects—commonly used features like barcode scanning, signature capture, cascading menus, etc. These can even be third-party solutions your mobile architect has blessed. Reusing code saves your developers (both in-house and third-party) time and budget, while at the same time ensuring product quality and consistency by avoiding having to craft a solution from scratch each time.
- Number of third-party apps selected and approved by the MCOE. While building apps is of course important, many quick wins are to be had by deploying off-the-shelf apps. Often there are a bunch of apps employees want to use. But allowing the Wild West of apps to prevail at your business creates inconsistencies, conflicts, and security risks. So the MCOE should always act as the central authority that vets and approves all third-party apps.
- Adherence to Standards. MCOEs create and institute mobile standards—development and testing tools, visualization processes, coding standards, and a whole lot more. But all that work is for naught if they’re not put into practice. So establish and track adherence to standards metrics by internal teams as well as by third-party development partners to measure the effectiveness of your MCOE’s mobile governance.
- Extent of test automation. This is a bit of a trailing indicator but a good one to gauge the maturity of your mobile activities. Even in traditional development, testing always gets short shrift. But at least it’s well understood. Many corporate QA teams don’t have the first idea what’s unique about mobile testing and how it impacts the tools and processes they should use. Nor do they appreciate its level of complexity. And of all the things they underestimate, the impact of fragmentation—especially in the Android ecosystem—has the greatest repercussions. As we build more complex apps and increase the velocity of our releases, test automation can become a real force multiplier.
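The "extent of test automation" metric above can be expressed as the share of regression test cases that run automatically, broken out per platform (which also surfaces the Android fragmentation cost). The suite names and counts below are invented for illustration.

```python
# Hypothetical sketch: automation coverage = automated cases / total cases,
# per platform. Numbers are made up; Android often carries more device-
# specific cases due to ecosystem fragmentation.

suites = [  # (platform, automated cases, total cases)
    ("iOS", 320, 400),
    ("Android", 280, 500),
]

for platform, automated, total in suites:
    coverage = automated / total
    print(f"{platform}: {coverage:.0%} automated ({total} total cases)")
```

Watching this ratio climb (or stall) over successive releases is a straightforward way to gauge the maturity trend the metric is meant to capture.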
So there you have it. If you found this list helpful, let me know. Even better, do you have any other metrics your MCOE is using that I didn’t cover here? Please let me know on Twitter at @ggruber66 or firstname.lastname@example.org.