4 Tips for Successful Adoption of AI at Scale

Utilizing artificial intelligence on a small scale is relatively simple. At the enterprise level, however, it isn’t always so straightforward. This may be a contributing factor to recent survey results from Gartner indicating that while 46% of CIOs have plans for implementing AI, only 4% have actually done so. Those early adopters have no doubt faced and overcome many challenges along the way. Here are four lessons from their experience that will make adopting AI at scale easier.

Start small.

Desmond Tutu once said that the best way to eat an elephant is “one bite at a time.” Just because your end goal is enterprise-wide adoption of AI doesn’t mean you have to aim for an outcome that large right off the bat.

Most often, the best way to initiate a larger AI project is to start with a smaller scope and aim for “soft” rather than “hard” outcomes. In other words, rather than primarily seeking direct financial gains, focus instead on things like process improvement and customer satisfaction. Over time, the benefits gained by achieving these smaller “soft” goals will lead you to your larger objectives anyway.

If decision-makers in your organization require a financial target in order to start an AI initiative, Gartner VP and distinguished analyst Whit Andrews recommends setting the target as low as possible. He suggests the following: “Think of targets in the thousands or tens of thousands of dollars, understand what you’re trying to accomplish on a small scale, and only then pursue more-dramatic benefits.”

Focus on augmentation vs. replacement.

Historically, significant tech advances have often been associated with a reduction in staff. While cutting labor costs may be an attractive benefit for executives, it’s likely to generate resistance amongst staffers who view AI as a threat to their livelihood. A lack of buy-in from front line employees may hinder progress and result in a less favorable outcome.

To avoid this, shift your approach to one that focuses on augmenting human workers as opposed to replacing them. Ultimately, communicate that the most transformational benefits of AI lie in the technology’s ability to free employees to pursue higher-value, more meaningful work. For instance, Gartner predicts that by 2020, 20% of organizations will have workers dedicated to overseeing neural networks.

Make an effort to engage employees and get them excited about the fact that an AI-powered environment will enhance and elevate the work they do.

Prepare for knowledge transfer.

The majority of organizations are not adequately prepared for AI implementation. In particular, most lack the appropriate internal skills in data science and, as a result, plan on relying heavily on external service providers to help bridge the gap. Furthermore, Gartner predicts that 85% of AI projects initiated between now and 2022 will deliver erroneous outcomes due to inaccurate or insufficient data and/or lack of team knowledge/ability.

In order for an AI project to work at scale, there must be a robust knowledge-base fueled by accurate information and there must be adequately trained staff to manage it. Simply put, relying on external suppliers for these things isn’t a feasible long-term solution. Instead, IT leaders should prepare in advance by gathering, storing and managing data now and investing in the reskilling of existing personnel. Building up your in-house capabilities is essential before taking on large-scale AI projects.

Seek transparent solutions.

Most AI projects will inevitably involve some type of software, system, application or platform from an external service provider. When evaluating these providers, it’s important that decision-makers take into account not only whether the solution will produce the appropriate results, but also why and how it will be most effective.

While explaining the in-depth details of something as complex as a deep neural network may not always be possible, it’s imperative that the service provider be able to, at the very least, provide some type of visualization as to the various choices available. At the end of the day, the more transparency that is present, the better – especially when it comes to long-term projects.

For more information on how to incorporate artificial intelligence into your strategic planning for digital transformation, check out this resource from Gartner. And when you’re ready to move forward with your AI initiative, give Ayehu NG a try free for 30 days. Click here to start your complimentary trial.

IT Incidents: From Alert to Remediation in 15 seconds [Webinar Recap]

Author: Guy Nadivi

Remediating IT incidents in just seconds after receiving an alert isn’t just a good performance goal to strive for. Rapid remediation can also be critical to reducing, or even preventing, downtime. That’s important, because the cost of downtime to an enterprise can be scary. Even scarier, though, is what can happen to people’s jobs if they’re found responsible for failing to prevent the incidents that caused those downtimes.

So let’s talk a bit about how automation can help you avoid situations that imperil your organization, and possibly your career.

Mean Time to Resolution (MTTR) is a foundational KPI for just about every organization. If someone asked you “On average, how long does it take your organization to remediate IT Incidents after an alert?” what would your answer be from the choices below?

  • Less than 5 minutes
  • 5 – 15 minutes
  • As much as an hour
  • More than an hour

In an informal poll during a webinar, here’s how our audience responded:

More than half said that, on average, it takes them more than an hour to remediate IT incidents after an alert. That’s in line with research by MetricNet, a provider of benchmarks, performance metrics, scorecards and business data to Information Technology and Call Center Professionals.

Their global benchmarking database shows that the average incident MTTR is 8.40 business hours, but it ranges widely, from a high of 33.67 hours to a low of 0.67 hours. This wide variation is driven by several factors, including ticket backlog, user population density, and the complexity of tickets handled.
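
For clarity, MTTR is simply total resolution time divided by the number of incidents. Here’s a minimal Python sketch of that calculation using made-up incident timestamps; note it counts wall-clock hours rather than the business hours MetricNet uses.

```python
from datetime import datetime

# Hypothetical incident records: (time the alert was received, time the incident was resolved)
incidents = [
    (datetime(2019, 7, 1, 9, 0),  datetime(2019, 7, 1, 17, 30)),
    (datetime(2019, 7, 2, 8, 15), datetime(2019, 7, 2, 9, 0)),
    (datetime(2019, 7, 3, 14, 0), datetime(2019, 7, 4, 10, 0)),
]

# MTTR = total resolution time / number of incidents
total_hours = sum((resolved - alerted).total_seconds() / 3600
                  for alerted, resolved in incidents)
mttr_hours = total_hours / len(incidents)
print(f"MTTR: {mttr_hours:.2f} hours")
```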

Your mileage may vary, but obviously, it’s taking most organizations far longer than 15 seconds to remediate their incidents.

If that incident needing remediation involves a server outage, then the longer it takes to bring the server back up, the more it’s going to cost the organization.

Statista recently calculated the cost of enterprise server downtime, and what they found makes the phrase “time is money” seem like an understatement. According to Statista’s research, 60% of organizations worldwide reported that the average cost PER HOUR of enterprise server downtime was anywhere from $301,000 to $2 million!

With server downtime being so expensive, Gartner has some interesting data points to share on that issue (ID G00377088 – April 9, 2019).

First off, they report receiving over 650 client inquiries between 2017 and 2019 on this topic, and we’re still not done with 2019. So clearly this is a topic that’s top-of-mind with C-suite executives.

Secondly, they state that through 2021, just 2 years from now, 65% of Infrastructure and Operations leaders will underinvest in their availability and recovery needs because they use estimated cost-of-downtime metrics.

As it turns out, Ayehu can help you get a more accurate estimate of your downtime costs so they’re not underestimated.

In our eBook titled “How to Measure IT Process Automation ROI”, there’s a specific formula for calculating the cost of downtime. The eBook is free to download on our website, and also includes access to all of our ROI formulas, which are fairly straightforward to calculate.
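
We won’t reproduce the eBook’s formula here, but as an illustration of the kind of calculation involved, here’s a simple, hypothetical downtime-cost estimate in Python. The factors and figures below are placeholders, not Ayehu’s actual formula.

```python
def downtime_cost(hours_down, revenue_per_hour, employees_affected,
                  avg_hourly_cost, productivity_loss=1.0):
    """Illustrative downtime-cost estimate (placeholder model, not the eBook formula)."""
    lost_revenue = hours_down * revenue_per_hour
    lost_productivity = (hours_down * employees_affected
                         * avg_hourly_cost * productivity_loss)
    return lost_revenue + lost_productivity

# Example: a 2-hour outage affecting 500 employees at 50% productivity
print(downtime_cost(hours_down=2, revenue_per_hour=150_000,
                    employees_affected=500, avg_hourly_cost=60,
                    productivity_loss=0.5))   # -> 330000.0
```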

Let’s look at another data point about outages, this one from the Uptime Institute’s 2019 Annual Data Center Survey Results. They report that “Outages continue to cause significant problems for operators. Just over a third (34%) of all respondents had an outage or severe IT service degradation in the past year, while half (50%) had an outage or severe IT service degradation in the past three years.”

So if you were thinking painful outages only happen at your organization, think again. They’re happening everywhere. And as the research from Statista emphasized, when outages hit, it’s usually very expensive.

The Uptime Institute has an even more alarming statistic they’ve published.

They’ve found that more than 70% of all data center outages are caused by human error and not by a fault in the infrastructure design!

Let’s pause for a moment to ponder that. In 70% of cases, all it took to bring today’s most powerful high-tech to its knees was a person making an honest mistake.

That’s actually not too surprising though, is it? All of us have mistyped a keyboard stroke here or made an erroneous mouse click there. How many times has it happened that someone absent-mindedly pressed “Reply All” to an email meant for one person, then realized with horror that their message just went out to the entire organization?

So mistakes happen to everyone, and that includes data center operators. And unfortunately, when they make a mistake that leads to an outage, the consequences can be catastrophic.

One well-known example of an honest human mistake that led to a spectacular outage occurred back in late February of 2017. Someone on Amazon’s S3 team input a command incorrectly that led to the entire Amazon Simple Storage Service being taken down, which impacted 150,000 organizations and led to many millions of dollars in losses.

If infrastructure design usually isn’t the issue, and 70% of the time outages are a direct result of human error, then logic suggests that the key would be to eliminate the potential for human error. And just to emphasize the nuance of this point, we’re NOT advocating eliminating humans, but eliminating the potential for human error while keeping humans very much involved. How do we do that?

Well, you won’t be too surprised to learn we do it through automation.

Let’s start by taking a look at the typical infrastructure and operations troubleshooting process.

This process should look pretty familiar to you.

In general, many organizations (including large ones) do most of these phases manually. The problem with that is that it makes every phase of this process vulnerable to human error.

There’s a better way, however. It involves automating much of this process, which can reduce the time it takes to remediate an IT incident down to seconds. And automation isn’t just faster; it also eliminates the potential for human error, which should radically reduce the likelihood that your environment will experience an outage.

Here’s how that would work. It involves using the Ayehu platform as an integration hub in your environment. Ayehu would then connect to every system that needs to be interacted with when remediating an incident.

For example, if your environment has a monitoring system like SolarWinds, Big Panda, or Microsoft System Center, that’s where an incident will be detected first. The monitoring system (now integrated with Ayehu) will generate an alert which Ayehu will instantaneously intercept. (BTW – if there’s a monitoring system or any kind of platform in your environment that we don’t have an off-the-shelf integration for, it’s usually still pretty easy to connect to it via a REST API call.)
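
As a rough illustration of what that kind of REST integration looks like, here’s a hypothetical Python snippet that forwards a monitoring alert to an automation platform’s webhook. The endpoint URL, authentication scheme, and payload fields are placeholders; consult the actual API documentation for your platform.

```python
import requests

# Hypothetical webhook endpoint and payload; the real URL, auth scheme,
# and field names depend on the platform you are integrating with.
ALERT_ENDPOINT = "https://automation.example.com/api/alerts"

alert = {
    "source": "SolarWinds",
    "host": "web-srv-01",
    "severity": "critical",
    "message": "Free disk space below 10% on volume C:",
}

resp = requests.post(ALERT_ENDPOINT, json=alert,
                     headers={"Authorization": "Bearer <api-token>"},
                     timeout=10)
resp.raise_for_status()
print("Alert forwarded, status:", resp.status_code)
```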

Ayehu will then parse that alert to determine what the underlying incident is, and launch an automated workflow to remediate it.

As a first step in our workflow we’re going to automatically create a ticket in ServiceNow, BMC Remedy, JIRA, or any ITSM platform you prefer. Here again is where automation really shines over taking the manual approach, because letting the workflow handle the documentation will ensure that it gets done in a timely manner (in fact, in real-time) and that it gets done thoroughly. This brings relief to service desk staff who often don’t have the time or the patience to document every aspect of a resolution properly because they’re under such a heavy workload.
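
In Ayehu this step is handled by the out-of-the-box ServiceNow integration, but under the hood a “create ticket” step is roughly equivalent to a ServiceNow Table API request like the sketch below. The instance name, credentials, and field values are placeholders.

```python
import requests

# Roughly what an automated "create incident" call looks like against
# the ServiceNow Table API; instance and credentials are placeholders.
instance = "your-instance"
url = f"https://{instance}.service-now.com/api/now/table/incident"

payload = {
    "short_description": "Low disk space alert on web-srv-01",
    "description": "SolarWinds reported <10% free disk space; automated remediation started.",
    "urgency": "2",
    "impact": "2",
}

resp = requests.post(url, json=payload, auth=("api_user", "api_password"),
                     headers={"Accept": "application/json"}, timeout=10)
resp.raise_for_status()
print("Created incident:", resp.json()["result"]["number"])
```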

The next step, which can actually occur at any point within the workflow, is to pause execution, notify a human, and seek approval before continuing. To illustrate why you might do this, let’s say a workflow got triggered because SolarWinds generated an alert that a server dropped below 10% free disk space. The workflow could then delete a bunch of temp files, compress a bunch of log files and move them somewhere else, and do all sorts of other things to free up space. Before it does any of that, though, the workflow can be configured to require human approval for any of those steps.

The human can either grant or deny approval so the workflow can continue on, and that decision can be delivered via laptop, smartphone, email, instant messenger, or even regular telephone. However, please note that this notification/approval phase is entirely optional. You can also choose to put the workflow on autopilot and proceed without any human intervention. It’s all up to you, and either option is easy to implement.
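
Here’s a minimal, hypothetical Python sketch of that pattern: a remediation workflow with an optional approval gate in front of each step. The notification channel (a console prompt here) and the cleanup helpers are stand-ins for whatever your platform actually uses.

```python
# Stand-in remediation steps; in practice these would call real systems.
def delete_temp_files(host):
    print(f"Deleting temp files on {host}")

def compress_old_logs(host):
    print(f"Compressing and archiving old log files on {host}")

# Hypothetical approval gate; a real workflow would notify via email,
# Slack, SMS, etc. and wait for the response.
def request_approval(action):
    answer = input(f"Approve '{action}'? [y/n] ")
    return answer.strip().lower() == "y"

REQUIRE_APPROVAL = True  # set to False to run the workflow on full autopilot

def remediate_low_disk_space(host):
    steps = [("delete temp files", delete_temp_files),
             ("compress old logs", compress_old_logs)]
    for action, step in steps:
        if REQUIRE_APPROVAL and not request_approval(action):
            print(f"Skipped: {action}")
            continue
        step(host)

remediate_low_disk_space("web-srv-01")
```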

Then the workflow can begin remediating the incident which triggered the alert.

As the remediation is taking place, Ayehu can update the service desk ticket in real-time by documenting every step of the incident remediation process.

Once the incident remediation is completed, Ayehu can automatically close the ticket.

Finally, Ayehu can go back into the monitoring system and automatically dismiss the alert that triggered the entire process.

This, by the way, illustrates why we think of Ayehu as a virtual operator which we sometimes refer to as “Level 0 Tech Support”. A lot of incidents can be resolved automatically by Ayehu without any human intervention, and thus without the need for attention from a Level 1 technician.

This then is how you can go from alert to remediation in 15 seconds, while simultaneously eliminating the potential for human error that can lead to outages in your environment.

Gartner concurs with this approach.

In a recently refreshed paper they published (ID G00336149 – April 11, 2019) one of their Vice-Presidents wrote that “The intricacy of access layer network decisions and the aggravation of end-user downtime are more than IT organizations can handle. Infrastructure and operations leaders must implement automation and artificial intelligence solutions to reduce mundane tasks and lost productivity.”

No ambiguity there.

Gartner’s advice is a good opportunity for me to segue into one last topic – artificial intelligence.

The Ayehu platform has AI built-in, and it’s one of the reasons you’ll be able to not only quickly remediate your IT incidents, but also quickly build the workflows that will do that remediation.

Ayehu is partnered with SRI International (SRI), formerly known as the Stanford Research Institute. In case you’re not familiar with them, SRI does high-level research for government agencies, commercial organizations, and private foundations. They also license their technologies, form strategic partnerships (like the one they have with us), and create spin-off companies. They’ve received more than 4,000 patents and patent applications worldwide to date. SRI is our design partner, and they’ve designed the algorithms and other elements of our AI/ML functionality. What they’ve done so far is pretty cool, but what we’re working on going forward is what’s really exciting.

One of the ways Ayehu implements AI is through VSAs, which is shorthand for “Virtual Support Agents”.

VSAs differ from chatbots in that they’re not only conversational, but more importantly, they’re also actionable. That makes them the next logical step or evolution up from a chatbot. However, in order for a VSA to execute actionable tasks and be functionally useful, it has to be plugged into an enterprise-grade automation platform that can carry out a user’s request intelligently.

We deliver a lot of our VSA functionality through Slack, and we also have integrations with Alexa and IBM Watson. We’re also incorporating an MS-Teams interface, and looking into others as well.

How is this relevant to remediating incidents?

Well, if a service desk can offload a larger portion of its tickets to VSAs, and provide its users with more of a self-service modality, then that frees up the service desk staff to automate more of the kinds of data center tasks that are tedious, repetitive, and prone to human error. And as I’ve previously stated, eliminating the potential for human error is key to reducing the likelihood of outages.

Speaking of tickets, another informal webinar poll we conducted asked:

On average, how many support tickets per month does your IT organization deal with?

  • Less than 100
  • 101 – 250
  • 251 – 1,000
  • More than 1,000

Here’s how our audience responded:

Nearly 90% receive 251 or more tickets per month. Over half get more than 1,000!

For comparison, the Zendesk Benchmark reports that among their customers, the average is 777 tickets per month.

Given the volume of tickets received per month, the current average duration it takes to remediate an incident, and most importantly the onerous cost of downtime, automation can go a long way towards helping service desks maximize their efficiency by being a force multiplier for existing staff.

Q:          What types of notifications can the VSA send at the time of incident?

A:           Notifications can be delivered either as text or speech.

Q:          How does the Ayehu tool differ from other leading RPA tools available on the market?

A:           RPA tools typically perform screen automation using an agent. Ayehu is an agentless platform that primarily interfaces with backend APIs.

Q:          Do we have to do API programming or other scripting as a part of implementation?

A:           No. Ayehu’s out-of-the-box integrations typically only require a few configuration parameters.

Q:          Do we have an option to create custom activities? If so, which programming language should be used?

A:           In our roadmap, we will be offering the ability to create custom activity content out-of-the-box.

Q:          Do out-of-the-box workflows work on all types of operating systems?

A:           Yes. You just define the type of operating system within the workflow.

Q:          How does Ayehu connect and authenticate with various endpoint devices (e.g. Windows, UNIX, network devices, etc.)? Is it password-less, connection through a password vault, etc?

A:           That depends on what type of authentication the organization requires internally. Ayehu’s integration with the CyberArk password vault can be leveraged when privileged account credentials are involved. Any type of user credential information that is manually input into a workflow or device is encrypted within Ayehu’s database. Also, certificates for SSH commands, Windows authentication, and localized authentication are all supported out-of-the-box. Please contact us for questions about security scenarios specific to your environment.

Q:          What are all the possible modes that VSAs can interact with End Users?

A:           Text, Text-to-Speech, and Buttons.

Q:          Can we create role-based access for Ayehu?

A:           Yes. That’s a standard function which can also be controlled by and synchronized with Active Directory groups out-of-the-box.

Q:          Apart from incident tickets, does Ayehu operate on request tickets (e.g. on-demand access management, software requests from end-users, etc.)?

A:           Yes. The integration packs we offer for ServiceNow, JIRA, BMC Remedy, etc. all provide this capability for both standard and custom forms.

Q:          Does Ayehu provide APIs for an integration that’s not available out of the box?

A:           Yes. There are two options. You can either forward an event to Ayehu using our webservice, which is based on a RESTful API, or from within the workflow you can send messages outbound that are either scheduled or event-driven. This allows you to do things such as make a database call, send an SNMP trap, handle SYSLOG messages, etc.

Q:          Does Ayehu provide any learning portal for developers to learn how to use the tool?

A:           Yes. The Ayehu Automation Academy is an online Learning Management System we recently launched. It includes exams that give you an opportunity to bolster your professional credentials by earning a certification. If you’re looking to advance your organization’s move to an automated future, as well as your career prospects, be sure to check out the Academy.

Q:          Does Ayehu identify issues like a monitoring tool does?

A:           Ayehu is not a monitoring tool like Solarwinds, Big Panda, etc. Once Ayehu receives an alert from one of those monitoring systems, it can trigger a workflow that remediates the underlying incident which generated that alert.

Q:          We have 7 different monitoring systems in our environment. Can Ayehu accept alerts from all of them simultaneously?

A:           Yes. Ayehu’s integrations are independent of one another, and it can also accept alerts from webservices. We have numerous deployments where thousands of alerts are received from a variety of sources and Ayehu can scale up to handle them all.

Q:          What does the AI in Ayehu do?

A:           AI is used in several areas, ranging from understanding intent in chatbot conversations, to workflow design recommendations, to suggesting workflows that remediate events through the Ayehu Brain service. Please contact an account executive to learn more.


5 Mistakes to Avoid with Self-Service Automation

Self-service automation is becoming more the norm than the exception. In fact, a recent survey by SDI found that 61% of businesses were focusing on some type of self-service initiative (up from 47% in 2015). And it’s not only for making your customers’ lives easier. Many organizations are realizing the benefits of providing self-service options to employees in order to eliminate many of the common issues plaguing the help desk, such as password resets and system refreshes. If you’re thinking about jumping on the bandwagon, here are a few common mistakes you should actively avoid.

Inadequate Communication – If you want your employees to adopt and embrace self-service technology, you have to ensure that they understand its many benefits. This is particularly important for your IT team, some of whom may feel uneasy or even threatened by the thought of automated technology handling some of their tasks. Gain acceptance and buy-in by communicating how self-service options will actually make the lives and jobs of everyone easier and more efficient.

Lack of Knowledge – What types of activities can you – and more importantly – should you be transitioning over to self-service? Many otherwise savvy IT decision makers rush into self-service implementation before they truly understand which tasks are most beneficial to automate. Take time to learn what your IT team is bogged down by, as well as which areas end-users would not only benefit from, but actually appreciate being able to handle on their own.

Not Choosing a Tool Carefully – Not all self-service automation platforms are created equal and if you don’t carefully and thoroughly do your homework, you could end up with a less-than-ideal result. Not only does implementing a faulty tool mean more headaches for your IT department, but the frustration of everyone who has to use it will ultimately lead to disengagement, resistance and/or complete lack of adoption. Make sure the platform you choose is robust, user-friendly and versatile enough to handle both full and semi-automation needs.

Setting and Forgetting It – Like anything else in technology, self-service automation isn’t something that you can simply put in place and never think about again. Not only is it important to keep up to date from a tech standpoint, but it’s equally important to ensure that the system you have in place remains as effective as possible. Conducting regular audits of both the IT department and the end-users can help you determine whether new tasks could be automated or if existing ones could use some tweaking.

Forgetting the Intangibles – Last but not least, maintaining an environment in which self-service automation is embraced and celebrated involves regular assessment and selling of the many benefits this technology provides. When calculating ROI, don’t forget to also consider the intangible ways self-service is good for your organization, particularly how it allows IT to improve its meaningful contribution to the organization. That is a value that can and should be recognized across the board.

What could self-service automation do for your company? Why not find out today by starting your free 30 day trial of Ayehu. No obligation, just enhanced efficiency and better overall operations. Get your free trial now by clicking here!

Understanding the Stages of Automation

Every organization reaches a point (or several points) at which auditing and updating applications and business systems becomes necessary. Whether your company is currently at that junction or it will be coming down the road, it’s imperative that you use this opportunity to explore the power of intelligent process automation. From streamlining repetitive tasks to deploying AI-powered virtual support agents to perform complex end-to-end IT workflows, automation can skyrocket productivity and efficiency.

All this being said, there are a significant number of automation solutions available on the market today, and not all of them are created equal. Decision-makers must educate themselves on the various levels and capabilities of automation. Let’s take a closer look at these levels below.

Simple Automated Tasks

The most basic level of process automation involves simple tasks triggered by basic “if, then” rules. This type of entry-level automation has been around for decades and can (and should) be applied to all business functions.

IT teams specifically can use basic automation to create simple triggers, such as activating network changes whenever traffic thresholds are met or exceeded. Simple process automation is also capable of being integrated with other tools to create a more robust system. For instance, a monitoring system like Solarwinds can be integrated with an ITPA platform so that alerts trigger end-to-end automated workflows.
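
A minimal sketch of that entry-level “if, then” pattern might look like the following Python loop; the traffic reading and the network-change action are hypothetical stand-ins for your monitoring source and remediation step.

```python
# Minimal sketch of entry-level "if, then" automation: when a metric crosses
# a threshold, fire a predefined action. Metric source and action are stand-ins.
import random
import time

TRAFFIC_THRESHOLD_MBPS = 800

def read_network_traffic_mbps():
    # Stand-in for polling a monitoring system
    return random.uniform(500, 1000)

def apply_network_change():
    print("Threshold exceeded: rerouting traffic / adjusting QoS policy")

for _ in range(5):                          # poll a few times for illustration
    traffic = read_network_traffic_mbps()
    if traffic >= TRAFFIC_THRESHOLD_MBPS:   # the "if"
        apply_network_change()              # the "then"
    time.sleep(1)
```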

Chatbots and Self-Service Automation

The next phase in the process automation lifecycle is that of self-service automation, most of which is handled by chatbots. With this type of automation, an end-user can initiate a support ticket for basic needs, such as password resets. The automated bot is pre-programmed to respond to certain triggers in the interface and perform the requested action without the need for intervention from a human agent.

Self-service automation is valuable because it saves time, both for the end-user as well as the IT support team, thus facilitating a much higher degree of productivity across the board. It is, however, lacking when it comes to context. In other words, chatbots are only capable of understanding basic commands and following pre-determined decision pathways. They cannot interpret meaning or context.
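
As a simple illustration of that limitation, here’s a hypothetical Python sketch of a pre-programmed self-service bot: fixed triggers mapped to fixed actions, with anything outside the script escalated to a human.

```python
# Pre-programmed self-service automation: fixed triggers mapped to fixed
# actions, with no understanding of meaning or context.
def reset_password(user):
    return f"Password reset link sent to {user}"

def refresh_system(user):
    return f"Workstation refresh scheduled for {user}"

INTENT_ACTIONS = {
    "reset my password": reset_password,
    "refresh my system": refresh_system,
}

def handle_request(user, message):
    action = INTENT_ACTIONS.get(message.strip().lower())
    if action is None:
        return "Sorry, I don't understand. Escalating to a human agent."
    return action(user)

print(handle_request("jdoe", "Reset my password"))
print(handle_request("jdoe", "I can't log in"))   # no context -> escalates
```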

Cognitive Intelligent Process Automation

This phase of automation takes the concept of chatbots to the next level by introducing advanced technologies, like artificial intelligence, machine learning and natural language processing into the mix. Unlike basic self-service automation, virtual support agents (VSAs) are capable of understanding context in communication with an end-user and delivering intelligent responses based on automated research. They are also capable of learning, making their own decisions and executing complex tasks.

It’s important to note that in order to be effective, intelligent process automation must be built upon the foundation of a strong and accurate knowledge-base, as it is this data from which the tool will pull its answers.

The Next Wave of Intelligent Process Automation

What does the future hold for automation? Tomorrow’s (and to some degree already, today’s) business environment will feature on-demand, shared services that facilitate end-to-end operations and enable automation of even the most complex business processes. Thanks to ever-improving artificial intelligence technology, platforms will not only be able to learn and evolve from user interactions and human input, but eventually identify and create newer and better processes for even more streamlined operations.

The good news is, you don’t have to settle for basic automation, nor do you have to wait for the future to witness the power of cognitive intelligent process automation. In fact, you can experience it firsthand today by downloading your free 30-day trial of Ayehu NG.


Episode #21: Powering Up Energy & Utilities Providers’ Digital Transformation with Intelligent Automation & AI – Capgemini’s Philippe Vié

July 15, 2019


In today’s episode of Ayehu’s podcast we interview Philippe Vié – Group Leader Energy, Utilities and Chemicals at Capgemini.

The Energy, Utilities and Chemicals industries are vital to our everyday lives today, and to the digital world of tomorrow. Futurists envision amazing new technological capabilities that will rely heavily on these sectors. Yet these industries currently face so many challenges that there’s growing concern about their ability to keep pace with expectations. Enter Philippe Vié, Capgemini’s Group Leader for Energy, Utilities and Chemicals.

As an industry thought leader, Philippe advises many of the biggest Energy, Utilities and Chemicals players on how intelligent automation can accelerate their digital transformation. Recently, his Capgemini team published a report which found that the energy and utilities sector could realize $237 to $813 billion of cost savings if it were to implement intelligent automation in its target processes at scale. Philippe shares with us a number of insights from this report, along with the revelation that intelligent automation can not only cut costs for organizations, but also generate new revenue streams.



Guy Nadivi: Welcome everyone. My name is Guy Nadivi, and I’m the host of Intelligent Automation Radio. Our guest on today’s episode is Philippe Vié, Group Leader Energy, Utilities and Chemicals at Capgemini, the world’s 2nd largest consulting firm by revenue based out of Paris, France. We’ve never had an expert on energy, utilities, and chemicals on our show before, but there’s a lot of interesting things going on in this space with regards to automation, AI, and machine learning. So we reached out to Philippe and asked him to join us, and he graciously accepted.

Philippe, welcome to Intelligent Automation Radio!

Philippe Vié: Thank you, thank you so much for interviewing me, Guy. This is the appropriate moment, since we have collected answers from 500 energy & utilities executives, and on May 20th we published our point of view on intelligent automation for our industry. Thank you.

Guy Nadivi: So let’s talk about some of those findings, Philippe. What are some of the biggest ways automation, AI, and machine learning are impacting the energy, utilities, and chemical industries today?

Philippe Vié: First of all, energy & utilities are considering a lot of use cases for core business processes and super-functions too. It seems thanks to our studies in this sector but also in other industries, that 38% of energy & utilities players report at least one use case which has been deployed at scale, and 15% [report] multiple use cases at scale. But these figures show also that for the moment only a minority of players have been able to scale up their intelligent automation initiatives. For which benefits?

On average, 30-35% of executives report an operations boost, compared to 15-30% in other sectors. 35-50% of executives (there is a range because we have multiple KPIs around these benefits) report top-line growth. And 70-80% report an increase in customer satisfaction, which is also ahead of other industries. And our calculations, which you will find in the report we have published today, show a savings range from $200 billion to $800 billion, depending on how you consider automation and intelligent automation.

More than 30 use cases were reported in core functions & 50 for super-functions. Some examples – forecasting…..weather forecasting, load forecasting, the typical example in core functions. Create behavior interface. Energy storage. Energy trading, in which you have a lot of possible automations. Vegetation management, meaning intelligent ticketing for transmission & distribution operators. Complaints management on the retail side of the business. Customer chatbots on top of costs of the classical and well-known predictive maintenance that is coded in any energy & utilities player.

All should be starting with quick wins, low complexity but tangible results. This is the landscape of potential benefits of AI & automation.

Guy Nadivi: So let’s talk a little bit about that greater landscape. Between June 2014 & January 2016, oil suffered a drastic decline in price, dropping by about 2/3, which led to many layoffs in the energy business. And in addition to that, it was estimated in 2015 that over the next 5 – 7 years, 50% of the workforce would be retiring, leaving behind a huge talent shortage. How has this massive personnel turnover affected adoption of automation, AI, & machine learning in the utilities, energy, and chemicals businesses?

Philippe Vié: 2 questions in fact, Guy, here. The first one on oil price’s drop, which was artificial. It was artificial because OPEC & Russia wanted to kill US shale oil producers 3 years ago.

It obliged US shale oil producers to make efficiency progress, and they used automation for that. But this war is over today. Oil prices are more comfortable for all the players, floating between $60 and $80 per barrel, and should remain at that level, if we trust the analysts. With demand growth, it depends on the economic health of the planet. We are not {unintelligible}. And international political tensions, President Trump against Iran, and US-Iran waivers.

On the last part of your questions, in our views, the adoption triggers for intelligent automation, for automation, resides more in technology adoption and performance improvement targets than in dealing with aging workforce & retiring personnel consequences.

Energy & utilities players are focusing on quick wins, as I already mentioned, rather than automating to replace [an] aging workforce, and the answer, and this is a good question, that the talent-related challenge remains, even with a high level of automation. Of course, not the same skills shortage, but skills shortage again.

Guy Nadivi: Philippe, You are one of the co-authors of a Capgemini report called “The Digital Utility Plant” which found that only 8% of utility companies have operations which could be described as digitally mature. Perhaps this is at least partially because so many utilities have natural monopolies structurally shielding them from competition, & insulating them from the need to automate or to innovate. What do you tell utilities executives to persuade them that now is the time to become what Capgemini calls Digital Masters, or innovation leaders by implementing AI, automation, and machine learning?

Philippe Vié: The publication of this “Digital Utility Plant” you mentioned was a long time ago, in 2017. 2 years is a very long time for digital transformation. And we have observed since that, growing appetite for digital operations to save costs up to 20-30% savings {unintelligible}. Digital operations for centralized generation assets, decentralized generations assets – renewables, and also transmission and distribution networks.

So this potential for digital operations is today’s top priorities of the players when they go digital, and they are all going digital. They have all started by the customer experience.

When you see for example EDF, the French leading utility is the 2nd-largest utility in the world, they are going full blast in nuclear engineering digital transformation, and they are trying to create a digital twin for each reactor, new reactor, or to-be-retrofitted reactor, to expand their lifetime. This is a huge project and a huge investment, but very profitable. When you see smart grids deployment at scale after years of experimentation in several world-leading distributors, in EDF for example in Europe and many in the US also. This is a clear signal that energy & utilities are really moving forward on the digital transformation route.

So our arguments are to push concrete histories related to techno-leverage, not only AI & RPA, but also IoT, Cloud, and other digital nuggets, to help decision-makers to move forward with “of use, use cases”, low complexity, or big profit potential, easy to develop & deploy on the 3 pillars of the customer experience, digital operations, and new business models around digital transformation, but also on worker enablement in any section. The selection of key use cases based on their value is very, very useful to select the appropriate initiatives with which you should start.

Guy Nadivi: Now Philippe you mentioned costs there, and automation, AI, and machine learning are often applied as ways of optimizing resources and cutting costs. In the utilities industry though, I understand these technologies are also being touted as the basis for entirely new services. So can you tell us a bit about some of the more interesting use cases Capgemini is involved with where Automation, AI, and machine learning are creating new revenue streams for utilities?

Philippe Vié: Yes, let’s start with some use cases and with the profit that can come out of their implementation and their deployment. From a long list of possible use cases, I will bring up 6-8:

  • Online self-services & self-sales
  • Smart charging / smart discharging for electric vehicles
  • Energy management solutions software for building microgrids
  • Automated demand response for getting access to flexibility to better manage load & demand
  • Smart lighting
  • Transactive energy solutions, offers personalization, and far more

What kind of KPIs are the energy & utilities companies considering when they pursue these new business models and new revenue streams?

First of all, 47% of the executives answer that they can get quicker access to customer data and more reliable customer data.

41% of them insist on the faster time to market.

45% of them report an increase in inbound customer leads.

And 40% of them report a quicker break-even for these new business models.

Very tangible results in which the utilities are really engaged and they have demonstrated this value.

Guy Nadivi: According to Capgemini’s March 2018 Automation Advantage report, 46% of firms are refraining from innovation due to concerns about cybersecurity. Philippe, what impact are concerns about cybersecurity having on utilities executives’ decision to move forward with the kinds of digital transformations that automation, AI, and machine learning can produce?

Philippe Vié: First of all, electricity, gas, water, [and] oil are critical assets in any country, and it’s not because of digital transformation. They are critical assets, meaning that energy & utilities players are used to deal[ing] with cyber security threats. With threats in general, and cyber security threats, which is particularly true for exploration [and] production, generation, transmission, and distribution. This is less true for retail and energy services.

So they start by selecting cyber-proofed or certified platforms, with a national agency certifying the cyber security of some products, and they also work mainly with serious players who are well-known for managing cyber security threats. System integrators, for example, really take these cyber security threats into account. It seems that these threats don’t prevent them from moving forward, which is good, but they can make an automation project longer and more expensive. But that’s it. They have no choice, as they want to move forward. They have to move forward to get the value out of the intelligent automation projects, and on the other hand they have to manage cyber security.

Guy Nadivi: You were quoted in La Tribune as stating that of the forty plus energy suppliers in France, ultimately, only 3 to 4 major players will survive. You then encouraged them to accelerate their digital transformation, in particular by making better use of AI. What are the top 3 ways Philippe, that you would recommend that AI, machine learning, and automation be used by utility companies to survive into the future?

Philippe Vié: In European markets, which have all been open to competition for more than 20 years now. This is {unintelligible} and you see 40 competitors in France, 70 in the UK, far more in Germany & in Austria, and only 3 to 5 will get significant market share, and some of them are dying every day in the smaller countries.

Considering AI, machine learning, automation – our recommendations are to move forward with quick wins first, then evaluate & choose carefully pragmatically intelligent automation use cases which can be the more profitable or the more interesting in terms of competitiveness in the market. To integrate & optimize the right processes for deployment, and deploy at scale as soon as they have demonstrated the value of their use cases. Quick wins are the most profitable ones, but more complex. And finally, to involve their workforce to invest in their capabilities, to put their money on the table to be successful, and to drive dedicated change management program around these new processes which change the life of their workforce, and also the way they interact and sell to their clients and the new value they can bring to the market.

Guy Nadivi: Aside from doing those things because you need to, to survive, is there a single metric other than ROI that best captures the effectiveness of automating IT operations in the utilities, energy, and chemicals businesses?

Philippe Vié: In fact, as already mentioned, we have 3 dimensions – customer satisfaction, operations boost, and top-line growth. And for each of them in our paper, you will find about 10 KPIs and probably more in some energy & utilities players, on which you can make real measurement of your successes. Depending on the chosen use cases which can be divided in this circle of KPIs, let me give you an example for each pillar – customer satisfaction, operations boost, and top-line growth.

Customer satisfaction, you can reduce the number of steps in customer interactions. You can improve your customer experience through faster response. You can be more customized to your customer needs and bring the appropriate answer.

On boosting operations, you can definitely improve your workforce efficiency & agility, and you can measure that with related KPIs.

On top-line growth, I have just mentioned before the typical KPIs, quicker access to customer data, faster time to market, increase in inbound customer leads, quicker break even, and so on and so forth.

Guy Nadivi: Chatbots or Virtual Support Agents are playing an increasingly important role in the automation of IT operations by enabling end user self-service. What do you envision Philippe, will be the role of Virtual Support Agents for companies in the utilities, energy, and chemical sectors?

Philippe Vié: Virtual Support Agents can bring various advantages to a company in IT, but also in many other domains of operations & core business support functions. Let me choose an example – applications diagnostics, customer credit check, real-time conversation with your customer analysis, payroll management, employee data management, finally a lot of use cases you will find and a lot of Virtual Support Agents you will find in our paper too.

Guy Nadivi: Philippe, for the CIO’s, CTO’s, & other IT executives from utilities, energy, and chemical companies listening in, what is the one big must-have piece of advice you’d like them to take away from our discussion with regards to implementing automation & AI for their operations?

Philippe Vié: Difficult for me Guy, to answer with only one. Generally these profiles, meaning CIOs, CTOs, IT Executives, they don’t need explanations or particular focus on intelligent automation potential advantages, because automation started very, very early in the 80’s, 90’s to automate their information systems.

They all, when you interview them, they all have one or two compelling stories to tell about the gains or savings they’ve recorded through these technologies. On data quality, on application diagnostics, on monitoring protocol compliance. If I were just to mention one or two benefits, I would say first – reducing complexity, the number of applications.

We have seen a lot of energy & utilities players moving from thousands of applications to hundreds of applications, and this has realized the ability to simplify the portfolio of their applications.

And the second one I would mention would be cost saving[s] on the run side, which is very important today. Servers, infrastructures, the run [side] is very important & shrinks their ability to make more developments. So this is cost saving on the run to enable more developments.

Guy Nadivi: One last thing Philippe, please tell our audience once more about the report Capgemini just issued & how they can get a hold of it for themselves.

Philippe Vié: So you go to Capgemini.com, you have industry-specific reports, and you will find this report under energy & utilities/chemicals, published May 28th. You can download it. You can also download an infographic of this report with its key figures. Again, we have interviewed 540 executives from energy & utilities specifically on this topic, intelligent automation. And you will find a ton of figures in this 40-page report. And we will be very pleased to engage in a conversation with you in any country around this topic, around our great experiences and also our failures in bringing intelligent automation to life in many utilities, and the advantages we can report to you.

Guy Nadivi: We’ll be sure to include a link to that report on our website, along with this episode, so people can download it directly from there.

Alright! That’s all the time we have for this episode of Intelligent Automation Radio.

Philippe, merci beaucoup for coming on to the show and giving us great new insights on the state of automation & AI in the energy, utilities, and chemicals sectors. We’ve really enjoyed having you.

Philippe Vié: Thank you, Guy. Thank you, and enjoy reading this report and going through intelligent automation for the benefit of your companies.

Guy Nadivi: Philippe Vié, Group Leader Energy, Utilities and Chemicals at Capgemini. Thank you for listening everyone, and remember – Don’t Hesitate, Automate!



Philippe Vié

Group Leader Energy, Utilities and Chemicals at Capgemini

Philippe joined Capgemini in 1997, after a career in software, where he founded an early 80’s startup. He is now Vice President, Capgemini Group and Energy Utilities and Chemicals sector leader, based in Paris.

Philippe has over 25 years of Energy and Utilities industry experience and dedication, with a strong focus on Utilities Transformation projects, digital or not.  His tenure has also notably covered the deregulation and market opening period.  His many roles within the Energy, Utilities and Chemicals sector include:

  • Thought leader, managing strategic studies and shaping Capgemini group EUC offers portfolio
  • Leading the annual World Energy Markets Observatory by Capgemini (www.capgemini.com/wemo)
  • Performance benchmarks (DNO – Distribution Network Operator – and Retail)
  • Writing multiple POVs and press articles
  • Creating Capgemini offers: Digital Utilities Transformation, Utilities to Energy Services, Utility in a Box, Digital edge, and training Capgemini representatives to sell and deliver the related services
  • Delivering keynotes on industry trends

Philippe Vié can be found at:

Office:                      +33 (0)1 57 99 19 83

Mobile:                    +33 (0)6 12 72 82 67

Email:                       philippe.vie@capgemini.com

LinkedIn:                  https://www.linkedin.com/in/philippevie/

Quotes

“…in our views, the adoption triggers for intelligent automation, for automation, resides more in technology adoption and performance improvement targets than in dealing with aging workforce & retiring personnel consequences.”

"When you see smart grids deployment at scale after years of experimentation in several world-leading distributors, in EDF for example in Europe and many in the US also. This is a clear signal that energy & utilities are really moving forward on the digital transformation route.”

“Virtual Support Agents can bring various advantages to a company in IT, but also in many other domains of operations & core business support functions.”

About Ayehu

Ayehu’s IT automation and orchestration platform powered by AI is a force multiplier for IT and security operations, helping enterprises save time on manual and repetitive tasks, accelerate mean time to resolution, and maintain greater control over IT infrastructure. Trusted by hundreds of major enterprises and leading technology solution and service partners, Ayehu supports thousands of automated processes across the globe.


News

Ayehu NG Trial is Now Available
SRI International and Ayehu Team Up on Artificial Intelligence Innovation to Deliver Enterprise Intelligent Process Automation
Ayehu Launches Global Partner Program to Support Increasing Demand for Intelligent Automation
Ayehu Wins Stevie Award in 2018 International Business Awards
Ayehu Automation Academy is Now Available

Links

Episode #1: Automation and the Future of Work
Episode #2: Applying Agility to an Entire Enterprise
Episode #3: Enabling Positive Disruption with AI, Automation and the Future of Work
Episode #4: How to Manage the Increasingly Complicated Nature of IT Operations
Episode #5: Why your organization should aim to become a Digital Master (DTI) report
Episode #6: Insights from IBM: Digital Workforce and a Software-Based Labor Model
Episode #7: Developments Influencing the Automation Standards of the Future
Episode #8: A Critical Analysis of AI’s Future Potential & Current Breakthroughs
Episode #9: How Automation and AI are Disrupting Healthcare Information Technology
Episode #10: Key Findings From Researching the AI Market & How They Impact IT
Episode #11: Key Metrics that Justify Automation Projects & Win Budget Approvals
Episode #12: How Cognitive Digital Twins May Soon Impact Everything
Episode #13: The Gold Rush Being Created By Conversational AI
Episode #14: How Automation Can Reduce the Risks of Cyber Security Threats
Episode #15: Leveraging Predictive Analytics to Transform IT from Reactive to Proactive
Episode #16: How the Coming Tsunami of AI & Automation Will Impact Every Aspect of Enterprise Operations
Episode #17: Back to the Future of AI & Machine Learning – SRI International’s Manish Kothari
Episode #18: Implementing Automation From A Small Company Perspective – IVM’s Andy Dalton
Episode #19: Why Embracing Consumerization is Key To Delivering Enterprise-Scale Automation – Broadcom’s Andy Nallappan
Episode #20: Applying Ancient Greek Wisdom to 21st Century Emerging Technologies

Follow us on social media

Twitter: twitter.com/ayehu_eyeshare

LinkedIn: linkedin.com/company/ayehu-software-technologies-ltd-/

Facebook: facebook.com/ayehu

YouTube: https://www.youtube.com/user/ayehusoftware

Disclaimer Note

Neither the Intelligent Automation Radio Podcast, Ayehu, nor the guest interviewed on the podcast are making any recommendations as to investing in this or any other automation technology. The information in this podcast is for informational and entertainment purposes only. Please do your own due diligence and consult with a professional adviser before making any investment.

3 Ways Intelligent Automation is Outpacing Outsourcing

The subject of IT outsourcing has been one of heated debate for the past few decades. It seemed that with the increasing use of cloud computing, having IT operations managed elsewhere would become the norm rather than the exception. Yet there’s one thing that is now giving outsourcing a run for its money and making the need for it nearly obsolete. That thing is intelligent automation, and here’s why it has tipped the world of outsourcing on its head.

While the concept of outsourcing may not be disappearing completely, the need for it – particularly in the IT industry – seems to be shrinking. The reason is simple, really. Intelligent automation has provided a solution to each of the main reasons organizations turned to outsourcing in the first place.

Cost – For small to medium sized businesses, the costs associated with managing IT operations in-house were simply too high to justify. Instead, they turned to outside sources to provide these services, reducing expenditure. Automation makes managing IT operations internally much more affordable, so there’s really no need to rely on a third party provider.

Scalability – Another common reason businesses choose to outsource is the ability of these services to evolve along with the changing needs and demands of the business – something managing IT in-house couldn’t easily accomplish in years past. With intelligent automation, however, businesses can scale up or down at the click of a button – far faster than any third-party outsourced solution could ever do.

Resources – Finally, there is the topic of logistics. Housing IT internally once meant the need for bulky and complex equipment, something that many smaller to mid-sized organizations simply could not accommodate. Intelligent automation, combined with cloud technology, eliminates this need, streamlining IT and making it simple to manage in an office of any size.

Essentially, intelligent automation is enabling enterprises of any size to do more with less. No longer is it necessary to employ a huge group of professionals to manage IT operations. Rather, the day to day IT tasks needed to keep the business running smoothly can be handled by just a handful of people. Now, what once made sense both financially and logistically – outsourcing –  is becoming more of a hassle than it’s worth.

As we’ve pointed out in previous articles, there are many benefits to keeping IT operations in-house. These advantages include, but are not limited to:

  • Increased control over procedures and processes
  • Enhanced security due to limiting access to only internal employees
  • Improved flexibility and customization (unique to each organization’s specific needs)
  • Time savings – no more relying on a third party
  • More cost-effective option than outsourcing to an MSP

For those businesses that choose to continue outsourcing their IT needs, automation will likely still play a significant role, as IT service providers leverage automation to deliver timely, precise and efficient results to their clients. So, either way automation will impact your business in some way, even if that impact is indirect.

In any event, as technology continues to evolve, it’s becoming more and more evident that the decision to outsource IT operations will be an increasingly complicated one. Overall, the trends indicate that automation will ultimately win the battle and provide businesses of all shapes, sizes and industries the ability to run efficient, effective internal IT departments.

Want to start harnessing the power of IT process automation for your own organization? Begin your free 30 day trial today by clicking here.

Still need a reason to automate your IT processes? Here are six.

These days more and more businesses are adopting intelligent automation to help streamline operations, improve efficiency, boost service levels, cut costs and more. And while the overall goals and objectives of each organization may differ, sometimes to a great degree, there are a number of universal reasons that key decision makers cite for why they ultimately opted to automate their IT processes. If you’re still on the fence, here are 6 key advantages to consider.

Automation of Repetitive Maintenance Procedures – Every IT department has its own fair share of routine processes and procedures that must be performed to ensure that operations continue to run as smoothly as possible for everyone within the organization. Unfortunately, many of these IT processes are highly repetitive, such as checking disk space, restarting systems, monitoring log files, resetting passwords and managing user profiles. All of these tasks can and should be shifted to automation.
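
As a rough illustration (a minimal sketch in plain Python, not a depiction of Ayehu’s platform), here is the kind of routine check that lends itself to this sort of automation; the 80% threshold and the webhook URL are hypothetical placeholders:

```python
# Sketch of a routine maintenance check that can be scheduled (e.g., via cron)
# instead of performed by hand. The threshold and webhook URL below are
# illustrative placeholders, not real product settings.
import json
import shutil
import urllib.request

THRESHOLD_PERCENT = 80                          # alert when a volume is more than 80% full
ALERT_WEBHOOK = "https://example.com/alerts"    # hypothetical alerting endpoint

def check_disk(path: str = "/") -> dict:
    """Return usage stats for the given mount point."""
    usage = shutil.disk_usage(path)
    percent_used = usage.used / usage.total * 100
    return {"path": path, "percent_used": round(percent_used, 1)}

def alert_if_needed(stats: dict) -> None:
    """Post an alert to the (illustrative) webhook when the threshold is crossed."""
    if stats["percent_used"] >= THRESHOLD_PERCENT:
        req = urllib.request.Request(
            ALERT_WEBHOOK,
            data=json.dumps(stats).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)

if __name__ == "__main__":
    alert_if_needed(check_disk("/"))
```

Scheduled every few minutes, a check like this replaces a manual ritual and feeds its result into whatever alerting or ticketing workflow is already in place.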

Enhanced Incident Management – Incident management is one of the most frequently automated and subsequently optimized IT processes. Businesses are under constant threat and it’s become increasingly clear that the human workforce simply cannot keep up. By automating the incident monitoring, response and remediation process, the entire operation maintains a greater degree of accuracy and security.
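
To make the pattern concrete, here is a minimal sketch (generic Python, not any particular product’s API) of automated triage: classify an incoming alert, dispatch a predefined remediation, and escalate anything unrecognized to a human. The alert fields and handler actions are hypothetical.

```python
# Sketch of automated incident triage: classify an alert, run the matching
# remediation from a playbook, and escalate anything unrecognized.
# Alert fields and remediation actions are illustrative placeholders.
from typing import Callable, Dict

def restart_service(alert: dict) -> str:
    return f"restarted {alert['service']}"

def block_ip(alert: dict) -> str:
    return f"blocked {alert['source_ip']} at the firewall"

def escalate(alert: dict) -> str:
    return f"escalated '{alert['type']}' to the on-call engineer"

# Playbook mapping known alert types to remediation handlers.
PLAYBOOK: Dict[str, Callable[[dict], str]] = {
    "service_down": restart_service,
    "brute_force_login": block_ip,
}

def handle_alert(alert: dict) -> str:
    handler = PLAYBOOK.get(alert["type"], escalate)
    return handler(alert)

if __name__ == "__main__":
    print(handle_alert({"type": "service_down", "service": "nginx"}))
    print(handle_alert({"type": "brute_force_login", "source_ip": "203.0.113.7"}))
    print(handle_alert({"type": "unknown_anomaly"}))
```

The important design choice here is the explicit playbook: every automated response is a named, reviewable function rather than an undocumented manual step.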

Reduction of Errors and False Positives – IT personnel are constantly being inundated with incoming requests and, as a result, are often bogged down putting out fires and chasing their tails. This heavy volume of work, coupled with increasing demands, can dramatically increase the number of costly errors committed. Incorporating automation as a central part of critical IT processes can dramatically reduce errors and also eliminate time-consuming false positives.

Empower Skilled Employees – Automating basic, routine and repetitive IT processes is something everyone in the department can benefit from. IT leaders can focus their valuable skills and experience on more complex, mission-critical business initiatives and front-line IT workers are empowered to resolve issues without the need to escalate to management.

Integrate Disparate Systems, Programs and Applications – Maintaining a plethora of different systems, apps and programs is a very inefficient and ineffective way to do business. In many cases, these silos actually work against, rather than with, each other, further hindering operational efficiency. The right automation tool can effectively integrate with these legacy platforms to create a more connected, cohesive and collaborative interdepartmental environment.
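
As an illustration of what that integration “glue” can look like in practice, the sketch below mirrors open tickets from one system’s REST API into another. Both endpoints and field names are hypothetical, and a real connector would add authentication and error handling.

```python
# Sketch of integration "glue" between two systems that don't talk to each
# other natively: read open tickets from one REST API and mirror them into
# another. Endpoints and field names are hypothetical placeholders.
import json
import urllib.request

TICKETS_API = "https://legacy-helpdesk.example.com/api/tickets?status=open"
CMDB_API = "https://cmdb.example.com/api/incidents"

def fetch_open_tickets() -> list:
    """Assumes the ticketing API returns a JSON array of ticket objects."""
    with urllib.request.urlopen(TICKETS_API) as resp:
        return json.load(resp)

def push_incident(ticket: dict) -> None:
    """Create a matching incident record in the second system."""
    payload = json.dumps({
        "external_id": ticket["id"],
        "summary": ticket["title"],
        "priority": ticket.get("priority", "medium"),
    }).encode("utf-8")
    req = urllib.request.Request(
        CMDB_API, data=payload, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req)

if __name__ == "__main__":
    for ticket in fetch_open_tickets():
        push_incident(ticket)
```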

Establishment of Documented Best Practices – The very nature of IT automation is that it creates and maintains a series of consistent, repeatable (and therefore often predictable) patterns and processes. It also provides visibility, insight and the ability to identify, establish, document and hone best practices for improved operations moving forward.

Could your organization benefit from any of these basic advantages of automation? Find out today by starting your free 30-day trial of Ayehu NG.


Your Top Artificial Intelligence Adoption Questions, Answered

According to Gartner, the number of organizations implementing some type of artificial intelligence (e.g., machine learning, deep learning and automation) has grown by 270% over the past four years. One big reason for this boost is the fact that executives and decision makers are beginning to recognize the value that these innovative technologies present.

That’s not to say they’re all on board. Are CEOs getting savvier about AI? Yes. Do they still have questions? Also yes – particularly as it relates to the adoption/deployment process. Let’s take a look at a few of the top questions and answers surrounding the topic of artificial intelligence below, along with some practical advice for getting started.

Is a business case necessary for AI?

Most AI projects are viewed as a success when they further an overarching, predefined goal, when they support the existing culture, when they produce something that the competition hasn’t and when they are rolled out in increments. At the end of the day, it’s really all about perspective. For some, AI is viewed as disruptive and innovative. For others, it might represent the culmination of previous efforts that have laid a foundation.

To answer this question, examine other strategic projects within the company. Did they require business cases? If so, determine whether your AI initiative should follow suit or whether it should be standalone. Likewise, if business cases are typically necessary in order to justify capital expenditure, one may be necessary for AI. Ultimately, you should determine exactly what will happen in the absence of a business case. Will there be a delay in funding? Will there be certain sacrifices?

Should we adopt an external solution or should we code from scratch?

For some companies, artificial intelligence adoption came at the hands of dedicated developers and engineers tirelessly writing custom code. These days, such an effort isn’t really necessary. The problem is, many executives romanticize the process, conveniently forgetting that working from scratch also involves other time-intensive activities, like market research, development planning, data knowledge and training (just to name a few). All of these things can actually delay AI delivery.

Utilizing a pre-packaged solution, on the other hand, can shave weeks or even months off the development timeline, accelerating productivity and boosting time-to-value. To determine which option is right for your organization, start by defining budget and success metrics. You should also carefully assess the current skill level of your IT staff. If human resources are scarce or if time is of the essence, opting for a ready-made solution probably makes the most sense (as it does in most cases).

What kind of reporting structure are we looking at for the AI team?

Organizational issues are another thing that’s always top of mind with executives, specifically as they relate to driving growth and maximizing efficiencies. But while this question may not be new, the answer just might be. Some managers may advocate for a formal data science team while others may expect AI to fall under the umbrella of the existing data center of excellence (COE).

The truth is, the positioning of AI will ultimately depend on current practices as well as overarching needs and goals. For example, one company might designate a small group of customer service agents to spearhead a chatbot project while another organization might consider AI more of an enterprise service and, as such, designate machine learning developers and statisticians into a separate team that reports directly to the CIO. It all comes down to what works for your business.

To determine the answer to this question, first figure out how competitively differentiating the expected outcome should be. In other words, if the AI effort is viewed as strategic, it might make sense to form a team of developers and subject matter experts with its own headcount and budget. On a lesser scale, siphoning resources from existing teams and projects might suffice. You should also ask what internal skills are currently available and whether it would be wiser to hire externally.

Practical advice for organizations just getting started with AI:

Being successful with AI requires a bit of a balancing act. On one hand, if you are new to artificial intelligence, you want to be cautious about deviating from the status quo. On the other hand, positioning the technology as evolutionary and disruptive (which it certainly is) can be a true game-changer.

In either case, the most critical measures for AI success include setting appropriate and accurate expectations, communicating them continuously and addressing questions and concerns with swiftness and transparency.

A few more considerations:

  • Develop a high-level delivery schedule and do your best to adhere to it.
  • Execution matters, so be sure you’re actually building something and be clear about your plan of delivery.
  • Help others envision the benefits. Does AI promise significant cost reductions? Competitive advantage? Greater brand awareness? Figure out those hot buttons and press them. Often.
  • Explain enough to illustrate the goal. Avoid vagueness and ambiguity.

Today’s organizations are getting serious about AI in a way we’ve never seen before. The better your team of decision makers understands how and why it will be rolled out and leveraged, the better your chances of successfully delivering on that value, both now and in the future.

Episode #20: Applying Ancient Greek Wisdom to 21st Century Emerging Technologies – Trace3’s Mark Campbell

July 1, 2019

In today’s episode of Ayehu’s podcast we interview Mark Campbell – Chief Innovation Officer of Trace3

Over two millennia ago, the famous ancient Greek historian Thucydides wrote about how best to dispense with Spartans. He could not have known that 2,400 years later his writings would enter the pantheon of organizational thinking about innovation.  Yet as unlikely as it sounds, that’s exactly what Mark Campbell believes has happened.  As Chief Innovation Officer of Trace3, Mark uses the age-old reflections of Thucydides to help advise IT executives today.  Navigating the labyrinth of emerging technologies is a Herculean task, and Mark has lots of sage advice on what innovations to take advantage of, as well as which ones to avoid.

In this episode, we chat with Mark about a number of emerging technologies from automation, AI, and machine learning to quantum computing.  Along the way we’ll learn what questions a vendor should answer to ascertain if their product’s AI capabilities are based on engineering or marketing hype, the potential pitfalls awaiting any enterprise that decides to handle the “people problem” later, and the one biggest fear customers have about emerging technologies.



Guy Nadivi: Welcome, everyone. My name is Guy Nadivi, and I’m the host of Intelligent Automation Radio. Our guest on today’s episode is Mark Campbell, Chief Innovation Officer of Trace3, an emerging technology consulting firm based out of Irvine, California. As Mark’s LinkedIn profile states, he specializes in “squeezing the hype out of emerging tech,” thanks to his work in the venture and startup ecosystems to identify emerging enterprise IT technologies and innovation trends, and introduce these to customers, partners and the industry. Mark and his team review over a thousand tech startups each year, and he’s also a frequent speaker and presenter on innovation, so we’ve invited him to come on our show to help us squeeze the hype out of such technologies as automation, artificial intelligence, and machine learning. Mark, welcome to Intelligent Automation Radio.

Mark Campbell: Well, thanks for having me, Guy. Greatly appreciate it.

Guy Nadivi: Mark, as a Chief Innovation Officer, you have an interesting definition of innovation, involving something called “positive deviance.” Can you elaborate on that a bit and how you incorporate it into your definition of innovation?

Mark Campbell: Sure. We get that question quite a bit from customers, especially customers starting up their own innovation group, or tackling an innovation project, or trying to inject emerging technology into an existing business model. The term positive deviance, I’d love to claim credit for it. It is so smart, but it was actually invented by a guy by the name of Dr. Jeff Degraff out of the University of Michigan, and then all of the very wordy and academic definitions that I’ve encountered over the years, I do think that Dr. Degraff has distilled this down into two words, the real essence of what it means to innovate – positive deviance, both sides of that. Kind of the idea there being that for every innovation, regardless of industry, technology, outcome, or pitfalls you’re trying to avoid, it means that you have to deviate from the status quo.

Mark Campbell: You have to kind of overcome your own internal business inertia, your own internal processes, your own internal way of doing business, your own internal technical skills. That deviation, of course, can take many different forms. Not all of them are beneficial. Sometimes deviation for the sake of deviation has gotten companies into trouble. Certainly, New Coke is a terrific example of that, but positive deviation where you’re kind of taking a look at this perfect future that you’re aiming for, and what sort of deviations are going to be the ones that give you the greatest probability of gain and the greatest probability of avoiding the pitfalls that comes from deviating from your current business models.

Mark Campbell: I think smart automation is certainly one of those examples that we are seeing in the market today, where automation is a terrific and wonderful thing that is helping us, if you will, streamline the status quo. When we talk about smart automation, we are really talking about deviating from that, and so I think that’s a very good example of how positive deviance, an example of it anyway, how we’re seeing it applied in our customer base.

Guy Nadivi: You’ve talked about there being three key drivers of innovation – fear, honor and interest. How should IT professionals factor those key drivers into their decision-making when considering moving forward with a potential enterprise innovation?

Mark Campbell: Well, I think before even jumping into an innovation project, or looking for innovation targets, or forming an innovation team, I do think the executive sponsor of the innovative initiative needs to take a look at these three core values, this fear, honor, and interest. If you prefer a little bit more modern terms, you can say fear, pride, and greed. This was a pattern discovered by a Greek researcher, oh, about 2,400 years ago, by the name of Thucydides. Now, at the time, of course, he was trying to figure out a more effective way of killing Spartans, but nonetheless, it actually applies very apropos today. When we take a look at this deviance, we’re going to deviate.

Mark Campbell: It does require us to evaluate the fear, the fear of staying where we are and having our competition beat us, versus the fear of changing, or potentially the pride and honor of damaging our brand with a failed innovation initiative, or increasing our leadership, our industry prowess by creating a new, innovative and disruptive product that changes our whole marketplace. I think like when we get looking at the interest or the greed, or what’s in it for us side, certainly there is a danger in releasing new innovation projects by displacing existing revenue streams, or existing products, or existing customer bases, and that has to be balanced against the potential upside of a new line of business, a new market, a new expansion, a new threat to bring against your competitors. I think innovation leaders, right at the very get-go, when they’re starting to contemplate using innovation as a weapon need to balance that, right? “What do we fear more? Where’s the greater honor, and what’s in our best interest? Doing this or not doing this?”

Guy Nadivi: Mark, let’s talk about AI and machine learning. Please tell our listeners where you would squeeze the hype out of these emerging technologies.

Mark Campbell: Well, certainly AI has had a long and roller coaster-ed history going back to the 40’s and 50’s, and in that period, we’ve seen kind of this ebb and flow of various techniques and technologies that are enabling AI to tackle harder and harder problems. However, when we take a look at a lot of products hitting the market, we trifurcate these into three buckets. We call them the simple, the savvy and the smart. Simple being just regular, procedural type algorithms, whether that’s an Excel spreadsheet or an autopilot on an airplane. Then, there are savvy products.

Mark Campbell: These are those that have embedded knowledge bases or access to expert systems. Of course, these were very popular in the late 80’s and 90’s, but really, they embody the knowledge of whoever created the product. Then, we get into the last of the three buckets, the smart bucket, and this is where solutions really learn. They take and ingest data, discover patterns, discover behaviors, maybe they discover baselines and report on anomalies from that baseline. Nonetheless, they are smart, they do learn and they do adapt.

Mark Campbell: When we look at this AI space, we are seeing hype being introduced by simple and savvy products out there, that are having their marketing department inject terms like deep learning, or machine learning, or AI, or convolutional networks, or reinforcement learning, or what have you, and to where their AI actually exists in their marketing department, not in their engineering department. That, that disconnect there can be very confusing for our customers who see a great story, they hear a great speech, they may even see a good canned demo, but looking under the hood a little bit, there really isn’t AI there. This is just AI-washing, a savvy or even worse, simple product. There are some techniques that we discuss with our customers on how to make that differentiation. One of the core truths about AI as we mentioned is that it learns.

Mark Campbell: It’s smart, and that learning is based upon a lot of data. A technique that we talk to our customers about is dig into that. When you’re evaluating a product and you want to really make sure that it is a smart product, not just the savvy product, talk about the learning. “How was this trained? How does it learn? Is it delivered in a pre-trained fashion, or does it continue to learn after I install it in my environment?”

Mark Campbell: “What data is being used for the learning? How much data is required? Is it canned data, is it publicly available data, or is it my proprietary data?” Digging into that layer of it when you’re confronting a potential smart solution. If the product out there does not truly incorporate any learning or any AI, you’re going to get very evasive answers, “Well, that’s a trade secret,” or, “Well, that’s a lot of smarts that our guys in the back room have injected into the product,” or, “Well, I’m not really able to go into that. I’d have to shoot you.”

Mark Campbell: If you keep pressing on that and get these evasive answers, you should kind of flag that as something really to be concerned about. On the flip side, if you talk to a true AI company, and you ask them, “Well, how does your product learn? What kind of data? Can I use my data? What happens after I install it? How do I re-baseline?”, you’re going to see them light up.

Mark Campbell: The analogy I use is like sitting next to a grandmother on the airplane, and you ask, “Do you have any grandkids?” If they don’t, they’re going to tell you, “Shut up and don’t be that guy,” and you’re going to get a silent plane trip for the rest of the way. If in fact they are a grandmother, you’re soon going to see Josh’s kindergarten play, you’re going to see a photo album, you’re going to see the birthday card that they got them last year, and you’re going to have a very, very conversation-filled journey. The very same is true with an AI product. If it truly has AI digging into that, it’s going to just open up a whole world, to the point that you almost don’t care what the answer is, but that enthusiasm and that passion that you see coming from the vendor, you can make a safe bet that you’re on the right path.

Guy Nadivi: What about automation? Where does the hype need to be squeezed out there?

Mark Campbell: Well, right now, we’re seeing a ton of automation products come to market. Certainly, what we talked about before about separating the savvy from the smart, equally true on automation products, especially those touting to be smart automation, so those all hold true. The other point of hype that we do see quite a bit in automation is the promise that this is going to be a single click and your problems are solved. Certainly, from a technology point of view, there are some great advancements out there. Certainly, there are a lot of techniques and products that truly will, in a smart way automate your business processes or your internal development life cycle or what have you.

Mark Campbell: The one thing however that is very often glossed over is the technology part’s the easy part. The cultural part is where things get a bit difficult, and these kind of happen on three levels. On a personal level, you do have people that maybe fear that automation is going to displace them, or at least displace some of the skills they’ve garnered over the years. At an organizational level, when you start talking about automating techniques within your group, there also is going to be a little bit of dissonance. Typically, organizations have well-worn processes.

Mark Campbell: They have rules of thumb, and certainly, if the automation is truly smart, it may suggest ways of doing things that are not part of the playbook so to speak, and that causes some organizational tension. The other thing that tends to happen at a corporate level is sometimes automation isn’t isolated into one particular team. Typically, when you’re automating, especially business processes, these start to leak over into other business units, other parts of the organization, and the cultural, political and personal ramifications of that, unless they’re addressed right upfront in a project. Even if the technology is perfect and flawless out of the box, these are some potential pitfalls that await any enterprise that decides to handle the people problem later.

Guy Nadivi: What are you seeing as some of the most interesting innovations right now around automation, AI, and machine learning?

Mark Campbell: Well, we have a distinct advantage in that we’re partnered up with a few dozen of the world’s top-tier venture capital firms, so we do get an opportunity to see a lot of products when they are at the proverbial “two guys & a PowerPoint stage”, and watch them mature. Now, by the way, there’s a ton of infant mortality, and a lot of these companies don’t ever see the light of day.

Mark Campbell: Nonetheless, when we start watching these, as you mentioned earlier, we do have the opportunity to take a look at thousands of startups a year, you do start to see patterns forming, and certainly, when we look at areas that the venture community right now is spending a lot of attention on, certainly the AIOps, applying AI to IT operations. Smart SecOps, this is applying AI into security operations. Those two are huge right now.

Mark Campbell: There is such a large market out there, and there is such a dearth of products to satisfy that market that there are some very good products coming to market right now that solve a myriad of problems, but one other area is robotic process automation. We are seeing … As you’ve probably noticed, there are several products on the market that have IPO’d and their IPOs are enjoying a terrific ride right now, but that’s echoed in our customer base. When we go and talk about robotic process automation, whether it’s actually workflow automation, whether it’s screen automation, smart chatbots, call center interactions, across the board, we are seeing a big interest right now from our customers to bring those processes under automation and if you’re going to go through all of that smart automation. It’s not just about business process management or business process automation anymore, we are kind of seeing this, let’s say the maturation of the use cases that are being solved with AI, now allowing all three of those AIOps, smart SecOps, and robotic process automation to baseline and report on anomalies.

Mark Campbell: Sometimes this is called behavior analytics, to actually correlate, especially in the security space where you have thousands of alarms to correlate those down into clumps, and then for each clump, determine a root cause. We’re also seeing smart automation being used to not just react or control existing situations, but to actually make predictive alerting on things that could be going wrong, or bottlenecks that may be appearing, or issues that may manifest themselves further on down the line. At the very hairy edge in the automation space, and this is a little bit controversial right now, certainly in the security space, is automated remediation. If we are being attacked or we do have a storage array that goes offline, or we do have a workflow that all of a sudden halts, do we want automation to jump in and automatically remediate that? Of course, the answer is it depends.

Mark Campbell: Certainly, if it’s a low-level, we have someone from accounting that can’t get in because their password is jammed up, certainly stepping in automatically and remediating that, resetting their password, probably not that big of a deal, but taking an auto manufacturer’s assembly line offline, that’s a fairly financially onerous decision to make. I think over time, that’ll move, but that’s certainly the areas we’re seeing investment being made in today.

Guy Nadivi: I understand you’re doing a lot of exploratory research on quantum computing right now. How do you think quantum computing will disrupt automation, AI, machine learning for IT in the future?

Mark Campbell: Well, it’s still a little bit nascent, but we do have customers that are spending time and money evaluating quantum. Right now, the two hot areas are quantum computing, which includes quantum computing as a service, so instead of buying a quantum computer, just renting time on an existing one. That’s one big area. The other area is quantum encryption, and so that certainly leaks over into the security side of the house, but these are still in development. There are some great systems out there.

Mark Campbell: There are real products that you can buy today. There are open source projects that can be implemented today, and the main targets that these are approaching, one is optimization problems. These are your typical traveling salesman, flow dynamic, scheduling and network optimization. Not necessarily physical networks, but even human and social network optimization. Quantum is quite effective at solving optimization problems, even the primitive machines we have available to us today.

Mark Campbell: Certainly, when we take a look at automation and we’re talking about automating a workflow, today, what we’re doing is we’re actually automating heuristics. In a general sense on large scale processes and flows, it isn’t mathematically possible to come up on a digital computer with all of the combinations and select the best. However, with a quantum computer, that does appear to be a very solvable problem. As I mentioned, we do have small and primitive systems today, but even on medium-sized problems, that is becoming a little bit more of a reality today. This idea that quantum computers in the optimization space, at least, will be able to replace the heuristics being used in automation.

Mark Campbell: That definitely is a fairly likely outcome. The other area is AI training. You certainly can look at the training of an AI system as a non-deterministic and even probabilistic activity that once an AI system is trained, you’re not truly guaranteed that that was the optimal training. It just works with the training data that we’ve presented it with so far. There are…the term being bandied around right now is quantum intelligence, to where you can actually use a quantum system to take, again today relatively small AI networks and come up with the optimal training that is out there with a fairly high confidence. As these quantum computing systems mature and incorporate more and more qubits, the sample space of data’s going to increase. The amount of solution space that you’re able to address is also going to increase. I think that’s going to have a direct impact on smart automation both on the automation side and the smart side of them.

Guy Nadivi: Is there a single metric other than ROI perhaps that will cause you to recommend a particular innovation to your customers and partners over others?

Mark Campbell: Well, I think when we take a look at our customers, the one thing, especially in the emerging technology space, that’s a big fear is, “Is this going to be around tomorrow? If we implement this really cutting edge solution from a bunch of smart folks that have their own little startup, what’s the story going to be in six months? Are they able to keep up that trajectory? Are they still going to be around?” It really breaks down into this, “What is that product’s sales pipeline?”

Mark Campbell: Sometimes that’s a bit hard to measure, and, “How innovative is that solution? Is it the right type of innovation for the right time for the right problem, and how is the market responding to it?” Now, I know that I cheated a little bit and gave you three answers to that, but if you roll all of those up, it’s what we call momentum. When we take a look at a startup, certainly there’s a ton of other ancillary attributes that a startup has to have, like smart and experienced leadership, a great product suite, some good early results from their Alphas and Betas, but if you want to boil down one thing, it’s very easy to go look if a top-tier VC has already funded them. Now, if they’re onto their B or C round funding, that typically means that they’ve convinced at least two or three top VCs to do their funding, and one of the key attributes VCs look at before they write the big checks is exactly this momentum area.

Mark Campbell: If you don’t have access to funding data from VCs, there are a handful of emerging tech research companies out there that attempt at least to combine these. One example would be CB Insights. They put together something called the Mosaic Score, which is composed of market, the market strength that they’re targeting, the momentum they’re seeing in that, and how much money they’ve garnered, and how far have they burnt through it. There are metrics out there, but it all hinges around this momentum idea.

Guy Nadivi: Mark, what can CIOs, CTOs and other IT executives start doing right now to prepare for the innovations you think will be the biggest disruptors to IT in the next three to five years?

Mark Campbell: Well, I think that’s a very good question, and certainly one that we get brought in to deal with, and I think every customer realizes their market, their business, their culture, their skills, their budget are all very unique and shape that, but if I was to condense those down, I would actually put things into two buckets. The first bucket is what I would call defensive IT. This is using emerging technology to shore up your IT assets. Said another way, from a business point of view, this is to do cost reductions, efficiencies, to where the business isn’t worried necessarily about the money they’re pumping into their IT’s infrastructure, and the return that they’re getting from this. Typically, defensive IT helps buoy up those “ilities”, availability, scalability, agility, portability, maintainability, a lot of those non-functional type requirements.

Mark Campbell: These are what we kind of call defensive IT. We’ve seen a ton of great innovations come on the defensive side, certainly things like containers, or cloud, AI, where it’s allowing us to do more with the budgets we have, or in some cases, even less budget. Being that defensive side, making sure that you’re doing the tried and true, as best and efficiently as possible, and increasing those ilities, I think that’s job one. However, if you’ve gotten to the point where your business, and I do mean the entire enterprise, has progressed from viewing IT as just a cost center, and therefore, the least cost, the better, now we start getting into offensive IT, and this is where IT starts getting a seat at the table for business decisions. We have an expression at Trace3 where we say, “All possibilities lay in technology,” and we truly believe that.

Mark Campbell: There are a ton of business problems out there, a ton of competition problems, ton of market problems, some regulatory problems, some skillsets problems, some budgetary problems that companies face, and we believe that there is a solution in technology for each one of those. The IT organization that consistently finds those, fields them, and brings in business benefit from those, is going to be asked for a seat at the table. They’re going to be part of the thought leadership at the company, especially as it relates to lines of business, so not just the goodies that sit inside of the data center, but the actual lines of business and revenue streams and P&Ls of the company. That internal thought leadership, of course is much easier said than done, and there’s an awful lot of trust that has to be built. There’s a little bit of risk-taking that needs to be built, and just like we said before, a great evaluation of fear, honor and interest need to go into that.

Guy Nadivi: Offensive versus defensive IT, that’s a phrase I think is really going to resonate with the sports-minded CIOs out there.

Mark Campbell: Well, there’s a bunch of them. I totally agree.

Guy Nadivi: All right. Looks like that’s all the time we have for this episode of Intelligent Automation Radio. Mark, it’s been great having you on the show to squeeze some hype out of the emerging technologies we hear so much about these days. Thanks for coming on.

Mark Campbell: Well, thanks for having me, Guy, and I’m certainly looking forward to your upcoming podcast topics. I think this is a terrific and fertile ground to plow.

Guy Nadivi: Mark Campbell, Chief Innovation Officer of Trace3, an emerging technology consulting firm based out of Irvine, California. Thank you for listening, everyone, and remember, don’t hesitate, automate.



Mark Campbell

Chief Innovation Officer of Trace3

Mark Campbell is the Chief Innovation Officer at Trace3 where he combines insights from leading venture firms and more than 25 years of real-world IT experience to help enterprises discover, vet, and adopt emerging technologies. His ‘from the trenches’ perspective gives Mark the material for his frequent articles and speaking engagements.

Mark Campbell can be found at:

Office:     (303) 575-2144

Mobile:     (719) 338-7772

LinkedIn:   https://www.linkedin.com/in/mark-campbell-256748/

Twitter:     https://twitter.com/HypeSnyper

Quotes

“I think smart automation is certainly one of those examples that we are seeing in the market today, where automation is a terrific and wonderful thing that is helping us, if you will, streamline the status quo. When we talk about smart automation, we are really talking about deviating from that, and so I think that's a very good example of how positive deviance, an example of it anyway, how we're seeing it applied in our customer base.”

"It does require us to evaluate the fear, the fear of staying where we are and having our competition beat us, versus the fear of changing, or potentially the pride and honor of damaging our brand with a failed innovation initiative, or increasing our leadership, our industry prowess by creating a new, innovative and disruptive product that changes our whole marketplace.”

“When we look at this AI space, we are seeing hype being introduced by simple and savvy products out there, that are having their marketing department inject terms like deep learning, or machine learning, or AI, or convolutional networks, or reinforcement learning, or what have you, and to where their AI actually exists in their marketing department, not in their engineering department.”

“At an organizational level, when you start talking about automating techniques within your group, there also is going to be a little bit of dissonance. Typically, organizations have well-worn processes. They have rules of thumb, and certainly, if the automation is truly smart, it may suggest ways of doing things that are not part of the playbook so to speak, and that causes some organizational tension.”

“We're also seeing smart automation being used to not just react or control existing situations, but to actually make predictive alerting on things that could be going wrong, or bottlenecks that may be appearing, or issues that may manifest themselves further on down the line.”

“We have an expression at Trace3 where we say, "All possibilities lay in technology," and we truly believe that. There are a ton of business problems out there, a ton of competition problems, ton of market problems, some regulatory problems, some skillsets problems, some budgetary problems that companies face, and we believe that there is a solution in technology for each one of those.”

About Ayehu

Ayehu’s IT automation and orchestration platform powered by AI is a force multiplier for IT and security operations, helping enterprises save time on manual and repetitive tasks, accelerate mean time to resolution, and maintain greater control over IT infrastructure. Trusted by hundreds of major enterprises and leading technology solution and service partners, Ayehu supports thousands of automated processes across the globe.


News

Ayehu NG Trial is Now Available
SRI International and Ayehu Team Up on Artificial Intelligence Innovation to Deliver Enterprise Intelligent Process Automation
Ayehu Launches Global Partner Program to Support Increasing Demand for Intelligent Automation
Ayehu Wins Stevie Award in 2018 International Business Awards
Ayehu Automation Academy is Now Available

Links

Episode #1: Automation and the Future of Work
Episode #2: Applying Agility to an Entire Enterprise
Episode #3: Enabling Positive Disruption with AI, Automation and the Future of Work
Episode #4: How to Manage the Increasingly Complicated Nature of IT Operations
Episode #5: Why your organization should aim to become a Digital Master (DTI) report
Episode #6: Insights from IBM: Digital Workforce and a Software-Based Labor Model
Episode #7: Developments Influencing the Automation Standards of the Future
Episode #8: A Critical Analysis of AI’s Future Potential & Current Breakthroughs
Episode #9: How Automation and AI are Disrupting Healthcare Information Technology
Episode #10: Key Findings From Researching the AI Market & How They Impact IT
Episode #11: Key Metrics that Justify Automation Projects & Win Budget Approvals
Episode #12: How Cognitive Digital Twins May Soon Impact Everything
Episode #13: The Gold Rush Being Created By Conversational AI
Episode #14: How Automation Can Reduce the Risks of Cyber Security Threats
Episode #15: Leveraging Predictive Analytics to Transform IT from Reactive to Proactive
Episode #16: How the Coming Tsunami of AI & Automation Will Impact Every Aspect of Enterprise Operations
Episode #17: Back to the Future of AI & Machine Learning – SRI International’s Manish Kothari
Episode #18: Implementing Automation From A Small Company Perspective – IVM’s Andy Dalton
Episode #19: Why Embracing Consumerization is Key To Delivering Enterprise-Scale Automation – Broadcom’s Andy Nallappan

Follow us on social media

Twitter: twitter.com/ayehu_eyeshare

LinkedIn: linkedin.com/company/ayehu-software-technologies-ltd-/

Facebook: facebook.com/ayehu

YouTube: https://www.youtube.com/user/ayehusoftware

Disclaimer Note

Neither the Intelligent Automation Radio Podcast, Ayehu, nor the guest interviewed on the podcast is making any recommendations as to investing in this or any other automation technology. The information in this podcast is for informational and entertainment purposes only. Please do your own due diligence and consult with a professional adviser before making any investment.