What Every Leader Should Be Thinking About Right Now in Order to Quickly Reinvent Critical Business Processes At Scale
Gartner reports that by 2024 organizations will lower operational costs by 30% by combining hyper-automation technologies with redesigned operational processes. Still, at present, most large organizations are nowhere near full maturity (which is to be expected), so there will likely be a steep learning curve over the next few years, especially for non-technical leaders and teams who are still trying to find their way.
This resource is intended to get to the crux of what leaders should be thinking about right now in order to create a more process-efficient organization. It’s called a “quick-start” guide because it outlines the most important activities and conversations every enterprise leader should be prioritizing in order to adapt on the fly, create a more data-driven organization, and reinvent critical business processes at scale.
Shy away from the impulse to automate everything simply for the sake of automating (or because there’s organizational pressure to do so). Instead, be selective. Be measured in your approach and build momentum by delivering quick wins that demonstrate what’s possible. Doing so will enable you to keep your focus on process reinvention, not just mildly incremental process improvement.
— Brian Hughes, Chief Financial Officer, RevUnit
A May 2020 report published by Forrester Research suggests that the COVID-19 crisis will only accelerate enterprise automation plans, making “automation a boardroom imperative.”
Your organization is likely already making use of automation technologies—whether that be business process automation software (BPA), robotic process automation (RPA), or ML/AI tools—to simplify, expedite, or reinvent existing processes. In fact, a report published last year by UiPath found that nine in ten large organizations are doing exactly that. Drastic reductions in the cost, complexity, and “approachability” of these technologies have resulted in greater adoption among the Fortune 1000.
Still, most organizations are only scratching the surface of what’s possible (in this case, starting small is a smart decision). What’s more, many teams—especially non-technical ones—are likely less familiar with these types of tools and technologies, as adoption within the enterprise has largely been driven first by more technical teams. This isn’t the case in every organization, but it’s a pattern that seems to be repeating itself as C-level and senior leaders continue to lean on demonstrable results to build a more quantifiable business case for continued investment in these tools.
That said, while the move toward automation at scale is happening, it’s still important to recognize that automation isn’t the end game for everything, nor has it advanced to the point of full maturity. While it’s now estimated that roughly 40% of large organizations are already using RPA, a much smaller percentage has actually managed to scale it more broadly. The reasons vary, but they usually come back to one thing: implementing and training robotic components—not to mention the procedural and change management requirements necessary to support them—typically takes longer and costs more than most organizations expect. So, resist the urge to rush toward automation as a “cure-all” for longstanding inefficiency. Instead, understand that a more measured approach is needed—one that invests heavily in the people and processes required to manage the move toward automation at scale.
By 2030, decision support/augmentation will surpass all other types of AI initiatives to account for 44% of the global AI-derived business value, according to Gartner.
Thus, investing in the people and processes that will effectively manage large-scale automation is critical — perhaps even more critical in the immediate future than the automation itself. That said, the rallying cry of augmentation and automation isn’t new, nor is it particularly unique. Still, it’s more relevant than ever.
So, too, is the need for organizations and their teams to truly figure out what it means to support the massive tactical change that’s required to power automation at scale. For most, these conversations need to start shifting toward the practical (“how do we actually do this”), instead of the theoretical (“this is why we should do this”). Admittedly, that’s a lot to think about, which is part of the reason why many teams have difficulty: (1) Making sense of all of the overlapping components, and (2) Doing so in a way that creates clear paths of action for everyone involved.
What’s more, the directive and accompanying vision coming from the C-level can often feel like a far cry from the reality you and your team operate in today. This gap between what’s desired and what exists tends to create pent-up demand to accelerate and democratize process automation as quickly as possible, which, in some cases, can perpetuate the misconception that automation is the go-to answer for most operational or procedural problems. In such a scenario, the tendency becomes to funnel resources toward tactical automation without fully understanding the accompanying change management components necessary to support it.
Thus, any tactical automation-related conversation should start with four core components: people, process, data, and technology. It’s this complementary, symbiotic relationship between people, process, data, and machine that will be the distinguishing factor between organizations that are truly successful at generating maximum value from business process automation at scale and those that end up struggling to get out of their own way.
The overlap or “sweet spot” between people, process, data, and technology is the point at which you can change the conversation from mere process improvement to process reinvention at scale.
While it’s true that incremental improvements can produce legitimate gains for your team, business unit, or organization, the real power comes when you’re able to fundamentally alter the very way(s) in which the work gets done.
In most enterprise environments, historically speaking, you’d typically encounter a number of different business units or functions working intently to solve a set of specific problems. Year after year, the solutions to those problems get “delivered” back to the business and then it’s on to the next problem. Over time, these types of project-specific workstreams have created what can feel like a patchwork quilt of disparate, legacy systems that are often redundant, inflexible, or incapable of communicating effectively with critical operational systems. As a result, IT has often spent the bulk of its time building makeshift bridges between systems instead of building new capabilities that can then be consumed in a “self-serve” fashion by various groups inside the organization.
Many Fortune 1000 organizations have made significant progress toward creating more modular capabilities (in the form of technical products) that are built not simply to be delivered and maintained, but to be improved upon, re-delivered, and reinvented as the needs of their consumers (i.e., the business and its various constituents) evolve over time.
For most, the next step is to push even more aggressively to make these capabilities more reusable and interoperable; to turn them into “micro-service” building blocks (as flexible APIs, typically) that can be consumed individually, combined with other such building blocks, or accessed on-demand in a self-serve fashion that allows teams to either circumvent, re-create, or invent new ways of working—and even new services, products, or revenue streams—that weren’t previously accessible.
As an example, Tableau has championed the advance of self-service business intelligence (SSBI), which “empowers teams like product developers, sales, finance, marketing, operations, and more to answer data questions, with governance supported by IT and business intelligence (BI) analysts. SSBI focuses on supporting the end user, allowing business users and analysts to be more involved in their own data analysis (instead of relying on IT to process all requests).”
Perhaps the most powerful benefit of these types of micro-services lies in their ability to democratize value creation, allowing both semi- and non-technical teams and individuals to pick and choose the services and data they need in a secure, compliant, self-service fashion. In doing so, you remove IT as the singular gatekeeper and/or creator of value, instead enabling teams and individuals to freely build what the business needs.
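To make the “building block” idea concrete, here is a minimal sketch of how small, single-purpose capabilities can be composed on demand by the teams that need them. The functions below stand in for hypothetical internal services; the names, data, and interfaces are illustrative assumptions, not any specific organization’s API.

```python
# Illustrative sketch: small, single-purpose "building blocks" that teams
# can combine in a self-serve fashion, instead of routing every request
# through IT. These functions stand in for hypothetical internal services.

def fetch_sales(region):
    """Building block 1: data access (stand-in for an internal data API)."""
    return [1200, 950, 1800] if region == "west" else [700, 400]

def cleanse(records):
    """Building block 2: data quality (drop invalid entries)."""
    return [r for r in records if r > 0]

def summarize(records):
    """Building block 3: analytics (roll records up into a simple report)."""
    return {"count": len(records), "total": sum(records)}

# A business team "self-serves" by composing only the blocks it needs:
report = summarize(cleanse(fetch_sales("west")))
```

The design point is composition: because each block does one thing behind a simple interface, the same pieces can be recombined to support entirely new workflows without a new end-to-end project.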
Don’t be scared by the thought of what you don’t know. Break things. Experiment often. Figure out what isn’t working, define some tangible goals, and start there. Typically, if your goals are clear and there’s legitimate business value to be gained, you can likely test multiple ideas without making a major investment in time, people, or dollars.
— Mason McClelland, Director of Strategy, RevUnit
According to the International Data Corporation, a leading global market research firm, organizations lose an estimated 20-30 percent in revenue every year as a direct result of process inefficiencies.
There’s often an attempt to correct or eliminate much of this inefficiency with one (or a couple) broad strokes, which can usually mean a rushed implementation of yet another enterprise system or application designed to solve for a specific set of pain points. These types of enterprise-wide deployments often provide immediate value, sure, but there’s almost always some sort of future technical and procedural debt that goes along with it.
Instead, opt for a more laser-focused approach, especially if you’d consider yourself a non-technical leader unsure of where to begin. It’s of utmost importance that you guide your team toward areas where you believe you can deliver significant improvement with minimal resource investment. Typically, that means identifying existing friction points that are creating: (1) an unnecessary and continued increase in operating costs, (2) an unnecessary redundancy or duplication of efforts, or (3) an unnecessary delay in decision-making ability. Each of these three friction points—whether directly or indirectly—adds expense at a time when you can least afford it. Clearly, other friction points exist, but those that either increase costs or introduce redundancies are often easier to identify and pinpoint.
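One simple way to operationalize this guidance is to rate each candidate process against the three friction points above and weigh the result against estimated effort. The sketch below is a hypothetical scoring heuristic, not a formal methodology; the process names, ratings, and weighting are assumptions for illustration only.

```python
# A hedged sketch: rank candidate processes by the three friction points
# described above (severity 0-5 each) relative to estimated effort (1-5).
# Higher scores suggest better targets for reinvention. All names and
# numbers here are hypothetical.

def friction_score(added_cost, redundancy, decision_delay, effort):
    """Total friction severity divided by estimated investment."""
    return (added_cost + redundancy + decision_delay) / effort

candidates = {
    "invoice approval": friction_score(4, 3, 5, 2),      # high friction, low effort
    "quarterly reporting": friction_score(2, 1, 2, 4),   # low friction, high effort
}

best_target = max(candidates, key=candidates.get)
```

Even a rough scoring pass like this forces the conversation toward specifics (which friction, how severe, at what cost to fix) instead of intuition alone.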
Additionally, pay close attention to how individuals and groups talk about existing processes. Listen for phrases like “this could be better,” or “I wish we did this instead.” How your team talks about specific processes often provides the most useful and accurate feedback. Data can tell you a lot, but it can’t explain everything.
One of the most common issues large organizations face, especially when it comes to process inefficiency, is manufactured delays in decision-making — for individuals, teams, and business units alike.
These delays are typical for large teams and organizations left operating within a bureaucratic process that was established by a previous team, or governed by standards that are antiquated and possibly no longer relevant.
It can be easy then to approach any sort of process improvement through the lens of what’s possible, not what’s best. That is, instead of designing what could be the best possible experience (no matter how “out there” it may seem), many limit themselves only to what’s possible or realistic at that particular moment. While completely natural, this sort of self-limited thinking perpetuates incremental process improvement instead of more radical process reinvention. So, before you even begin to more closely examine specific processes—and especially before you allow yourself to begin to jump to possible solutions—adjust your mindset.
Then, once you’ve pinpointed specific processes where you believe there’s room for improvement, look at each of those processes more closely. Start with minimum requirements (in terms of people, process, data, and technology), and ask yourself a few questions:
Are there obvious gaps that create “stop signs” within this process? Is this a process or decision point that must be made on a regular basis? If so, by whom? Who’s affected? Can I reduce the complexity of this process by removing inputs, people, or both? Finally, if this is indeed a decision that must be made on a recurring basis, is it possible to automate all or parts of this process using systems or tools that are already in place? How might we do that? RPA? Something else?
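When a recurring decision point follows clear rules, parts of it can often be automated with simple logic using tools already in place, before reaching for RPA or ML. The sketch below shows the shape of that idea for a hypothetical expense-approval decision; the rules, thresholds, and category names are invented for illustration.

```python
# A hedged sketch: encode the routine cases of a recurring decision as
# explicit rules, auto-deciding the predictable ones and escalating only
# the exceptions to a human. The expense rules below are hypothetical.

def route_expense(amount, category, has_receipt):
    """Auto-decide routine expense requests; escalate the exceptions."""
    if not has_receipt:
        return "escalate"        # missing input: a human must follow up
    if category == "travel" and amount <= 500:
        return "auto-approve"    # low-risk, recurring case
    if amount <= 100:
        return "auto-approve"    # trivial amounts in any category
    return "escalate"            # everything else keeps a human in the loop

decisions = [
    route_expense(80, "supplies", True),
    route_expense(450, "travel", True),
    route_expense(2000, "travel", True),
]
```

Note what this buys you even before any automation platform is involved: the exercise of writing the rules down reveals which inputs and people the process actually requires, and which can be removed.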
Lastly, if you are heading down a path to introduce any sort of new technical component (automation or otherwise), make sure that its use is specific, clear, and additive — additive in that it creates legitimate, procedural efficiency that frees up time so that your people can focus instead on higher-value tasks. The key is to seek out low-risk, high-reward opportunities to either reduce friction in existing-yet-critical processes, improve the quality of the existing output, or build a test case for rapid experimentation of an entirely new, redesigned process.
Use “if/then” statements as you begin to redesign key processes. Say, for example, you’ve identified data visualization as a key bottleneck or area of opportunity, you might say something like:
“If I/we need to create more accurate, real-time dashboards in order to [SOME BENEFIT], then we must find a better way to cleanse and consolidate data into a single source of truth.”
These statements allow you to communicate and set guideposts in a simple, clear fashion.
McKinsey has reported that the top 10 percent of companies in terms of revenue growth are more than 50 percent more effective than peers in testing, measuring, and executing based on what they’ve learned.
You typically have more to gain from targeted experimentation than waiting for permission to take action. So test and test often. At this stage, you’ve likely identified problematic pain points, and you’ve explored how you might either: (1) Significantly reduce inefficiency, or (2) Eliminate the problem altogether.
So, it's imperative you both show and tell. Your focus should be to either illustrate the potential of a certain solution or to potentially disprove its validity, which is just as useful an outcome in this scenario. Keep in mind that the name of the game here is speed-to-validity, not necessarily speed-to-perfection. It’s okay if the solution you’re testing doesn’t scale right off the bat; you’re simply trying to establish valid data points that signal that your test is producing the results you desire.
Your job, then, is to develop a specific test case: outlining what you hope to accomplish, in what time frame, at what cost (to you, your team, and the business), and what’s required in order to effectively test your hypotheses (user stories, business requirements, acceptance criteria). After you deploy the test, monitor what you’re observing, compare against what you had expected, then use that information as inputs to determine efficacy (both in an experimental setting and projecting longer-term potential, if such potential exists).
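The test-case structure described above can be sketched in code: state the hypothesis, timeframe, cost, and expected results up front, then score observed results against expectations. This is a minimal illustration; the field names, metrics, and numbers are hypothetical assumptions, not a prescribed template.

```python
# A minimal sketch of framing a process-automation test case: hypothesis,
# timeframe, cost, and expected metrics declared up front, with observed
# results scored against expectations. All values here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    hypothesis: str
    timeframe_weeks: int
    est_cost_usd: float
    expected: dict              # metric name -> target value
    observed: dict = field(default_factory=dict)

    def efficacy(self):
        """Fraction of expected metrics the observed results met or beat."""
        if not self.expected:
            return 0.0
        met = sum(1 for k, target in self.expected.items()
                  if self.observed.get(k, 0) >= target)
        return met / len(self.expected)

tc = TestCase(
    hypothesis="Automating data consolidation cuts report prep time in half",
    timeframe_weeks=6,
    est_cost_usd=15000,
    expected={"hours_saved_per_week": 10, "error_reduction_pct": 25},
)
tc.observed = {"hours_saved_per_week": 12, "error_reduction_pct": 20}
```

Forcing expectations into explicit metrics before the test runs is what makes “disproving validity” a legitimate outcome: a miss against a stated target is a data point, not a failure.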
There’s often a tendency to lean too heavily on results (and results alone) when making a push for continued investment. Yes, results are good, but they’re not enough. You’ll need to show, tell, and teach. Again and again and again.
— Joey Grillo, Principal Designer, RevUnit
This is an obvious step, yet one that must be accounted for. In the vast majority of organizations, support for business process automation is typically strong.
Again, this isn’t a recent development. Still, like most initiatives—no matter how big or small—you’re competing for finite resources. It’s critical that you show and tell; you must deliver demonstrable results that advance the organization's strategic priorities.
In most cases, you don’t need to have every little detail ironed out at this point (you’ll ultimately know your audience best here, so proceed accordingly). Yet, there’s usually an appetite for continued investment when there’s both a tangible plan and supporting, quantifiable data to suggest that the plan is returning legitimate value.
Nevertheless, you’re likely to run up against dollars that have either already been allocated and handed out to other teams for similar purposes, or others who are gunning for the same pool of available budget. So, do your homework. Be prepared and seek out collaboration. Find out who’s championing similar initiatives (if anyone). Understand where you and your team fit—whether upstream or downstream—from any affected processes. You know the drill here; you’ll need to rely on your own ability to effectively navigate your own organization in order to earn the opportunity to bring about the change you desire.
Delivering results is only half the battle; you’ll also need to take every opportunity to educate senior management, especially non-technical leaders, along the way.
Not just to gain access to resources, but to equip leadership with the information they need in order to: (1) Make more well-informed decisions, and (2) Defend those decisions—and their subsequent investments—to their respective boards. Keep in mind that those leaders need to be able to effectively answer many of the same questions you’ve already answered (expectation of results, timeline, costs, estimated ROI, etc).
It’s largely up to you to determine how best to bring people along; each leader and organization is different here. You could embark on a largely solo mission (not advised), or you can work with your peers and co-champions to lead and educate with results. The best educational tool in the enterprise is often a compelling story, specifically one that’s backed by strong hypotheses, real-life data, and multiple, successful implementations that bring that story to life.
Most organizations have yet to truly solve the puzzle of process automation at scale, which is to be expected given the inherent complexity of drastic procedural and operational change (not to mention the learning curve associated with more advanced automation technologies like BPA, RPA, and ML/AI). Still, by 2024, organizations are expected to lower operational costs by 30% by combining hyper-automation technologies with redesigned operational processes, so the need for practical experimentation and show-and-tell style education is imperative.
The key to business process efficiency at scale lies at the intersection of people, process, data, and technology; this is the “sweet spot” at which you can turn the conversation from process improvement toward process reinvention.
This sort of mindset shift is important in order to begin thinking of your team, business unit, and organization as an unencumbered value creator, free to design portable, modular micro-services that enable your organization to work in new ways, rapidly explore new opportunities, and realize a not-too-distant future where both highly-skilled individuals and intelligent machines work together harmoniously.