The Enterprise Leader's Quick-start Guide to
Three key focus areas every leader should be thinking about right now in order to rebound and build momentum heading into summer of 2022.
It sounds hyperbolic, but it isn’t: the survival of every business—whether big or small—now depends almost entirely on its ability to adapt to whatever comes next. This has always been true, but it feels more urgent than ever.
This resource gets to the crux of how to become a more adaptive organization, focusing on the areas of the business where you can act right now (or the conversations you can move forward) in order to take the most meaningful steps toward a more nimble, resilient organization — from HQ to the field.
How Refining Your Data Capabilities Helps You Become More Adaptable:
Data and insights are useless if they don’t provide the business any value. So, your first step is to ensure that the outcomes you’ve established for your data initiatives map directly to your organization’s business objectives. Defined objectives ensure you are measuring and monitoring the right things; they also help you better respond to unforeseen bumps along the way.
In this scenario, the OKR framework (Objectives and Key Results) should be your guiding light. Defining OKRs clarifies what data is needed, how it might need to be acquired, and what format makes the most sense for presenting it. Once OKRs are defined, revisit the core of your data strategy. Determine how your strategy will drive the narrative of your broader data initiative. Doing so will inform the process and intended output, as well as shed light on how the data should ultimately be displayed.
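As a concrete sketch of this mapping — the objective, key result, and system names below are hypothetical illustrations, not taken from any specific framework — an OKR can be expressed as a simple structure that ties each key result to the data it requires and how that data should be displayed:

```python
from dataclasses import dataclass, field

@dataclass
class KeyResult:
    # A measurable result tied to a data source and a display format.
    description: str
    data_source: str   # where the data comes from (hypothetical system name)
    display: str       # how the data should be presented

@dataclass
class Objective:
    name: str
    key_results: list = field(default_factory=list)

# Hypothetical example: the objective drives what data is needed and how it's shown.
objective = Objective(
    name="Reduce decision-making time for the executive team",
    key_results=[
        KeyResult(
            description="Cut average report turnaround from hours to minutes",
            data_source="consolidated_reporting_db",
            display="executive dashboard",
        ),
    ],
)

print(objective.name)
print(len(objective.key_results))
```

Writing OKRs down in a structure like this makes the gaps obvious: any key result without a named data source or display format is a signal that the data strategy work isn’t finished.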
The global market for data analytics and business intelligence services is expected to generate revenues north of $200 billion by the end of 2020. Additionally, more than 150 zettabytes (150 trillion gigabytes) of data will need regular analysis by 2025. It’s of paramount importance, then, that larger organizations work quickly and diligently to control data sources and architect a data structure that accommodates accuracy, speed, and scalability (the two-speed data architecture approach is one example). This type of approach can be useful when data needs to be both sourced and structured for future use.
If you’re just getting started, or you’re re-examining your data strategy entirely, start with the basics: for instance, are there organizational standards in place that govern the labeling and storage of critical data? Are there methods of normalizing the data so that the right people, systems, and tools can easily access it? Answering these questions (and others like them) should help you identify common data uses, patterns, and queries.
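To make the labeling question concrete, here is a minimal sketch of one common normalization standard — lower-case, snake_case field names, so that every system and tool queries the same labels. The column names are hypothetical, and a real standard would cover storage and access as well:

```python
import re

def normalize_label(label: str) -> str:
    """Normalize a data label to lower-case snake_case."""
    label = label.strip()
    # Split camelCase boundaries, e.g. "NetRevenue" -> "Net Revenue".
    label = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", " ", label)
    # Collapse any run of non-alphanumeric characters into one underscore.
    label = re.sub(r"[^A-Za-z0-9]+", "_", label).strip("_")
    return label.lower()

# Hypothetical raw column names arriving from disparate systems.
raw = ["Order Date", "customer-ID", "NetRevenue ($)"]
print([normalize_label(c) for c in raw])
# -> ['order_date', 'customer_id', 'net_revenue']
```

Even a small convention like this pays off immediately: once labels are predictable, the common uses, patterns, and queries mentioned above become much easier to identify.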
With those answers in hand, you can begin to more confidently codify data structures, which will immediately increase both the quality and actionability of the data you’re generating. It’s at this stage, too, that you’ll likely want to assess the rest of your data governance structure. Don’t be afraid to make recommendations for future data needs, or to vocalize ideas on how the business might shift operations to maximize progress toward your stated outcomes and objectives. Equip the business to shift objectives as data influences existing knowledge.
The list of people, systems, and tools that now need to be able to view, interpret, and analyze organizational data points continues to grow exponentially. Each of these actors must also be able to independently make mission-critical decisions based on these data sets, which means that most organizations—in addition to optimizing their data structure—are likely due for a significant upgrade in the area of data visualization.
It’s more important than ever that teams have the processes, standards, and tooling in place to quickly convert raw data into easily understandable, actionable formats for various audiences. Yet, one of the most pressing challenges for data teams right now is a gap in data visualization expertise. For instance, analysts may know their way around some of the more popular data visualization and enterprise BI tools, but lack deeper expertise in visualization itself. Others report feeling a wave of demand for better data visualization and storytelling, but don’t feel they have the team in place to adequately respond to the rest of the business.
Five quick tips for effective data presentation:
One of our Fortune 500 partners was looking for a faster, more efficient way to surface key data points to their executive team so that each could make more well-informed decisions faster. They’d previously needed to rely on data generated from multiple, disparate systems, which was often inaccurate, poorly structured, and hard to interpret.
In a relatively short period of time, we cleansed and migrated their necessary data sets into a single system, then designed custom dashboards that could be easily configured for individual needs. In doing so, we were able to decrease their decision-making time from four hours to less than five minutes.
How Quick-Win Optimization Helps You Become More Adaptable:
Gartner predicts that by 2023, approximately 40% of Infrastructure and Operations (I&O) teams will use AI-augmented automation to power large-scale process modernization. Still, most teams and organizations aren’t there yet. What’s more, you likely can’t afford to wait until then to start making substantive process improvements.
So, especially right now, it’s of utmost importance to find areas where you can make an immediate impact. Typically, that means identifying existing friction points that create: (1) an unnecessary and continued increase in operating costs, (2) an unnecessary redundancy or duplication of effort, or (3) an unnecessary delay in decision-making. Each of these three friction points—whether directly or indirectly—adds expenditures to your bottom line at a time when you can least afford it. Other friction points certainly exist, but those that either increase costs or introduce redundancies are often the easiest to pinpoint.
Once you’ve pinpointed specific processes where you believe there’s room for improvement, look at each of those processes through a decision-making lens. Can you identify inefficiencies that delay the decision-making process? One of the most common issues large organizations face, especially when it comes to process inefficiency, is manufactured delays in decision-making — for individuals, teams, and business units alike. These delays are typical of large teams and organizations operating within a bureaucratic process that was established by a previous team, or under governance standards that are antiquated and possibly no longer relevant. A good first step, then, once you’ve identified friction points that delay decision making, is to determine which inputs and parties are absolutely necessary to make the decision in question.
Start with minimum requirements (in terms of data, process, and people), and ask yourself a few questions: Is this a process or decision point that recurs on a regular basis? If so, can I reduce its complexity by removing inputs, people, or both? Finally, if this is indeed a decision that must be made on a recurring basis, is it possible to automate all or part of this process using systems or tools that are already in place?
If the answer to the latter is no, then (and only then) should you seriously consider introducing new systems or tools. An all-too-common mistake is to assume that an inefficient process can be fixed simply by introducing a new tool or custom piece of technology. While doing so may eliminate steps in the process, it rarely treats the root cause of the problem. It’s at this point that you should determine whether all parties involved are equipped with the right inputs, access, and tools needed to increase both the accuracy and speed of the decision-making process.
It’s people who are most directly affected by any significant changes or modifications to an existing process. Thus, make sure you seek out each individual, team, or business unit that will be affected—whether upstream or downstream—by any change to an existing process (even more so if you’re attempting a wholesale redesign).
The reason for doing so is obvious, yet there’s another, often overlooked benefit to making sure that all affected parties have an opportunity to participate in a significant procedural change — doing so allows each individual or team to bring different perspectives and learnings to the table about what didn’t work previously. The importance of this step cannot be overstated. In fact, even before the COVID-19 crisis, it was reported that the top 10 percent of companies in terms of revenue growth are more than 50 percent more effective than peers in testing, measuring, and executing based on what they’ve learned.
So, for instance, if you’re a Business Analyst intent on modifying the ways in which your business unit reports progress or key metrics (like making significant changes to reporting systems and/or dashboards), involve everyone who may be affected by such a change: those who manage the data inputs themselves, the peers responsible for interpreting and presenting the data, and the cross-functional leaders making planning or budgeting decisions based on that data. Intentional inclusion in this part of the process is a great way to build camaraderie and solidify effective ways of working. At minimum, make sure these parties are represented as you fine-tune any procedural changes.
If you are introducing technology, ensure that its use is specific, clear, and additive — additive in that it creates legitimate efficiency and frees up time so that your people can focus instead on higher-value tasks. It’s important to understand that automation isn’t the end game for everything, nor is it the answer right now for every enterprise company. In fact, despite the buzz you’ve likely heard about RPA, only three percent of organizations have managed to scale RPA to a level of 50 or more robots. The reasons vary, but it usually comes back to one thing: implementing and training machine learning models or any sort of robotic components typically takes longer and winds up more costly than most organizations predict at the outset. Additionally, many organizations simply aren’t yet ready to support a full-scale RPA type effort. But some, like Walmart, AT&T, and Walgreens, have begun rolling out more advanced, full-scale RPA programs.
Thus, it’s often a smarter move to pick and choose your spots carefully, specifically looking for low-risk, high-reward opportunities to either reduce friction in existing processes or increase the quality of the output altogether. What’s more, your quickest path to results — those that you can actually use to generate ROI and advance your cause — will likely come via smaller process improvement opportunities, especially those that don’t require a ton of input from various business units inside of your organization.
We recently had a partner come to us looking for a solution to a common problem; they were losing millions of dollars annually due to lost inventory resulting from the mislabeling of products. After working through the steps above, we worked with their teams to design a machine learning model that identifies products with 90% accuracy, significantly reducing product inventory loss.
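The partner’s actual model isn’t described here, but the underlying idea — flagging product labels that don’t match a known catalog — can be illustrated with a much simpler fuzzy-matching sketch. The catalog entries, label strings, and similarity threshold below are all hypothetical:

```python
import difflib

# Hypothetical product catalog of known-good labels.
CATALOG = ["blue widget 12oz", "red widget 12oz", "green gadget 8oz"]

def looks_mislabeled(label: str, catalog=CATALOG, threshold=0.8) -> bool:
    """Flag a label that doesn't closely match any known catalog entry."""
    matches = difflib.get_close_matches(label.lower(), catalog,
                                        n=1, cutoff=threshold)
    return not matches

print(looks_mislabeled("blue widget 12oz"))  # exact match -> False
print(looks_mislabeled("mystery item 999"))  # no close match -> True
```

A production system would replace the string-similarity heuristic with a trained model, as the partner did, but even this sketch shows the shape of the solution: compare what’s on the shelf against a source of truth and surface the outliers.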
How On-the-Job Training Helps You Become More Adaptable:
Your first priority, now more than ever, is to ensure that all existing training content is as easily accessible as possible (regardless of format). The training delivery pipeline you’re using doesn’t need to be perfect right now; it simply needs to be hyper-efficient to ensure quick delivery of time-sensitive content to the right groups of people. Don’t worry about whether that content is delivered via video, some piece of software, or a three-ring binder (seriously). Priority number one is to make sure that you’re able to get mission-critical updates and training content to your frontline workforce in a format that allows them to take decisive, necessary action at a moment’s notice. Doing so also means reducing barriers to comprehension and providing alternative accessibility methods to ensure deliverability to different groups.
The Research Institute of America concludes that methods like on-demand training boost information retention by 25-60 percent, compared to more traditional methods. Yet, many organizations haven’t yet implemented the content, tools, and governance required in order to deliver an on-demand training program at scale.
Still, now is the time to push for small yet significant training improvements — no matter how imperfect the solution. For instance, if your goal is to improve your organization’s ability to quickly deliver personalized content at scale, don’t overthink the solution; you don’t necessarily need a full-scale content delivery platform. Tools like Arist make it possible to quickly deliver on-demand training to your frontline workforce via text message. Arist works because it’s simple and effective, doesn’t require a significant behavioral change, and presents content in an easily digestible format. A number of similar tools (Skill Pill is another) let you quickly optimize your content delivery pipeline at a time when deliverability and comprehension are critically important. Remember: it all comes down to the timeliness and presentation of content.
Ensuring the deliverability and accessibility of content is only half the battle: it’s equally important to implement short, natural feedback loops that will allow you and your team(s) to monitor feedback from the frontlines. You’ll want to look for signals that prompt necessary adjustments on the fly; the signals you’ll typically want to monitor include usefulness of the content (does what you’re training make sense in a real-world environment?), actionability of the content, usefulness of the tool itself, specific gaps where bottlenecks exist, and so on. Any sign of frustration, confusion, or avoidance is a strong signal that there’s room for rapid iteration and improvement. Be especially mindful of wide gaps between the skills you’re training and the skills your frontline workforce tells you are most critical right now.
Take the feedback you’re receiving, prioritize the items that are lowest-effort and highest-reward, and do what’s necessary to ship those improvements as quickly and efficiently as possible. Again, the name of the game here is speed-to-improvement, not speed-to-perfection. Your goal should be to adjust process and/or tooling in order to get critical information and enhanced capabilities into the hands of your frontline workforce as quickly as possible.
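That prioritization step can be sketched in a few lines. The feedback items and their effort/reward scores are hypothetical — in practice you’d score them with the affected teams:

```python
# Hypothetical feedback items scored by estimated effort (lower is better)
# and expected reward (higher is better).
feedback = [
    {"item": "clarify step 3 of onboarding video", "effort": 1, "reward": 4},
    {"item": "rebuild LMS integration",            "effort": 9, "reward": 6},
    {"item": "fix broken quiz link",               "effort": 1, "reward": 5},
]

# Rank lowest-effort, highest-reward first; ship from the top of the queue.
queue = sorted(feedback, key=lambda f: (f["effort"], -f["reward"]))
for f in queue:
    print(f["item"])
```

The ordering is the whole point: the big rebuild lands at the bottom of the queue, while the quick, high-impact fixes ship first — speed-to-improvement over speed-to-perfection.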
Understand that the highest-value shippable improvement might not be digital; for instance, many essential businesses (especially big-box or warehouse retailers) are currently struggling to keep employees equipped with up-to-date stock information for the most in-demand household products (even digital inventory systems are struggling to process real-time stock information). In such an event, perhaps the quickest and most effective solution could be to equip all store associates with small, physical placards that could be updated each day to include inventory, expected shipments, and location information for those products that are most in-demand during COVID-19.
Lastly, realize that the highest-value shippable improvement might be procedural; that is, it might have nothing to do with training methods, content, or tools. So, if you find that your frontline workforce regularly pinpoints existing processes as critical bottlenecks, ask yourself a simple question: Is the feedback I’m hearing the result of procedural flaws, unnecessary complexity, or perhaps an over-compensation for those two things? If so, you’ve just found a high-value pain point that needs to be solved quickly.
The COVID-19 pandemic has likely forced you and your team(s) to abandon any previously planned or longer-range learning and development plans you had expected to roll out in 2020. What’s more, it’s just as likely that you may now be questioning whether some of those longer-range plans are even relevant. That’s an entirely fair question. Yet, there’s one, pivotal priority that won’t change: the continued need for significant advancements in on-the-job training.
According to Gartner, on-the-job training is the primary method used right now to develop frontline employees’ digital skills. Still, 47% of on-the-job learning opportunities are at risk of being automated and eliminated by artificial intelligence in the coming years. So, something’s got to give — and very few organizations were making significant progress to bridge this gap even before the COVID-19 pandemic.
The fastest way to make progress here is to join forces with HR and operations leaders (those making key decisions regarding the future of the business), field training directors (or similar), and frontline employees to accurately identify the skills that will be most needed in both the immediate and near-term future (even if that’s only an educated guess at this point). Your goal here is not to train all the things but to identify those skill-sets (whether hard skills, soft skills, or a hybrid of both) that you believe will have the greatest likelihood to significantly impact the future of your organization. Use this method to identify a tangible starting point around which you can work to shift your existing plans to prepare for what comes next.
One of our partners was looking for a way to more efficiently deliver necessary training to their frontline employees who work each day in one of the most uniquely challenging training environments — quick-serve restaurants. There was nothing inherently wrong with the way things were working, but it was obvious that there was some glaring inefficiency in both process and tooling. In fact, more than one team felt particularly constrained by the limitations of the existing learning management system (LMS).
In less than nine weeks, we turned an idea into a working, functional prototype that was then rolled out to 15 different restaurants nationwide. The pilot training tool not only delivered more accurate, on-the-job training materials, but also exposed several glaring holes and areas of immediate opportunity that otherwise would have gone unnoticed.