Episode #59: Why 2021 Is The Year Organizations Will Start Widely Trusting AI – Mist Systems’ Bob Friday

February 16, 2021    Episodes

Episode #59:  Why 2021 Is The Year Organizations Will Start Widely Trusting AI

In today’s episode of Ayehu’s podcast, we interview Bob Friday, Vice President, CTO, and Co-Founder of Mist Systems, a Juniper Company. 

If a parallel could be drawn between the history of Artificial Intelligence and a professional star athlete who rehabilitated his career, it might go something like this.  As a rookie, AI flashed occasional signs of brilliance that enthralled a million minds with the promise of greater possibilities.  Then it floundered, getting sent down to the minor leagues to retool.  Eventually it worked its way back up to the big leagues and began fulfilling the expectations of greatness many had predicted.  Now that AI is delivering consistent superstar results, organizations seeking their own operational victories want to sign it to a long-term contract.  But has AI finally redeemed itself enough to earn everyone's trust? 

That’s a topic of particular interest to Bob Friday, Vice President, CTO, and Co-Founder of Mist Systems, a Juniper Company.  As a pioneer in smart wireless networking, Bob has seen a lot in his storied Silicon Valley career.  He stops by to share with us why 2014 was a watershed year for AI, why adoption of AI is accelerating for enterprises with complex networks, and the risks for companies who don’t develop an AI for IT strategy in the coming year. 



Guy Nadivi: Welcome everyone. My name is Guy Nadivi and I'm the host of Intelligent Automation Radio. Our guest on today's episode is Bob Friday, Vice President, CTO and Co-Founder of Mist Systems, a Juniper Company. Now for those not familiar, Mist Systems is in the business of providing wireless networks with AI built in, delivering automated incident remediation. We haven't spoken much on this podcast about that kind of intelligence being baked into an enterprise's network, so we decided to bring Bob onto the show to gain further insights on how AI-powered networks might be part of an AIOps-driven future. Bob, welcome to Intelligent Automation Radio.

Bob Friday: Guy, thank you for having me. This is a topic dear to my heart, so happy to be here.

Guy Nadivi: Bob, let’s start then with asking you if you could just share with us a bit about what path you took that led you to co-found Mist Systems and the AI work that you do.

Bob Friday: For me personally, this path to Mist and AI really started, for those who remember, back in the eighties, when the FCC first came out with its unlicensed spectrum rules. That's when I really got into the wireless space. Back then I was at Metricom working on Ricochet, building a nationwide packet radio network, and that's where I got my first taste of wireless. From there I went off and co-founded a company called Airespace in the early 2000s. For those who remember, that's when Wi-Fi was going from a nice-to-have to a must-have, and that's when I started working with enterprise customers and helping them figure out how to manage the wireless networks coming into the enterprise space. We sold that company to Cisco, and it was at Cisco, where I became the CTO of mobility, that I started working with some very large enterprise customers. There we saw Wi-Fi and wireless go from a nice-to-have to a must-have to truly business critical. And that's where I saw the paradigm shift from enterprise customers wanting help managing network elements to wanting help managing the end-to-end user experience. It wasn't good enough to tell them that the AP or the switch was up and running; they wanted to know that if they put an app on a consumer device, that consumer would have great internet connectivity. And that's how I got into AI; it happened to coincide with AI becoming practical. When I started Mist back in 2014, that's really when AI kind of went from a marketing thing to a reality thing. We really had the compute and storage that we could actually use to solve interesting problems. So that's how I got to Mist, that's how I got to AI, and that's how I got here today.

Guy Nadivi: 2020 was a challenging year for so many organizations. Bob, when it comes to AI for IT, what are the biggest lessons that enterprises learned in 2020?

Bob Friday: Like I said, I think one of the big lessons they learned in 2020 is that AI is becoming more than just marketing hype. For a lot of IT departments and enterprise businesses, AI had been a marketing thing and not a reality thing. And when we look back on why AI is becoming real now, I think it really started, like I said, back in 2014. Twenty years ago when I did my master's, I actually worked on neural networks, but back then the problem was we really couldn't build neural networks big enough to do interesting things. Somewhere around 2014 we got this perfect storm of compute and storage costs getting low enough, datasets getting big enough, and open source tools maturing, where we were able to start building AI that can actually solve real problems. I think that's one of the things they learned in 2020. The other thing they've learned is the difference between AI and ML. We've been using machine learning to solve problems throughout most of my engineering career. Really in 2020, people started to learn that AI is about doing something on par with a human, whether it's learning to drive a car or interpreting a medical MRI or x-ray. In networking, it's really about whether we can build something that can perform on par with a network domain expert. That was one of the inspirations for Mist. For those who remember Watson playing Jeopardy, when I saw Watson playing Jeopardy I thought, if they can build something that can play championship-level Jeopardy, we really should be able to build something that can play networking Jeopardy and do something on par with real network domain experts.

Guy Nadivi: Okay. So with 2020 thankfully behind us, can you talk about the role of AI and AIOps in the future of work?

Bob Friday: Yeah. I think what we're going to see in the future of enterprise IT work is these AI assistants actually starting to become part of the IT team, right? IT administrators and businesses are going to start freeing up their IT teams to do more strategic things for the business. Case in point: right now we've got to the point where we can actually build these systems that can actually detect bad ethernet cables. That's a very hard thing for a person to go detect. That's an easy thing for an AI assistant with machine learning to find. So now you don't have your IT team busy trying to track down that bad cable; you can have AI assistants join the team. So I think what we're going to start to see going forward is IT adopting these AI assistants onto their team as a trusted member, and training those AI assistants like an employee coming onto the team.
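To make the bad-cable example concrete, here is a minimal sketch of how a machine learning-flavored assistant might flag suspect cables. It is purely illustrative, not Mist's actual algorithm, and the switch names, ports and counters are hypothetical; the idea is simply that an interface whose frame-error rate is a large outlier versus the rest of the fleet is a good candidate for a cable check.

```python
# Illustrative sketch only; not Mist's actual algorithm.
# Flag interfaces whose frame-error rate is an outlier versus the fleet,
# a common symptom of a bad or marginal ethernet cable.
from dataclasses import dataclass
from statistics import median

@dataclass
class InterfaceSample:
    switch: str
    port: str
    frames: int        # frames seen in the sample window
    fcs_errors: int    # frame-check-sequence (CRC) errors in the window

def error_rate(s: InterfaceSample) -> float:
    return s.fcs_errors / s.frames if s.frames else 0.0

def suspect_cables(samples: list[InterfaceSample], factor: float = 10.0) -> list[InterfaceSample]:
    """Return interfaces whose error rate is far above the fleet median."""
    baseline = median(error_rate(s) for s in samples) or 1e-6  # avoid a zero baseline
    return [s for s in samples if error_rate(s) > factor * baseline]

if __name__ == "__main__":
    fleet = [
        InterfaceSample("sw1", "ge-0/0/1", 1_000_000, 3),
        InterfaceSample("sw1", "ge-0/0/2", 1_000_000, 5),
        InterfaceSample("sw2", "ge-0/0/7", 1_000_000, 4_200),  # likely bad cable
    ]
    for s in suspect_cables(fleet):
        print(f"Check the cable on {s.switch} {s.port}: error rate {error_rate(s):.2%}")
```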

Guy Nadivi: There are different shades of AIOps, but Bob, can you explain what the difference is between domain agnostic and domain specific AIOps?

Bob Friday: Guy, you always hear people talk about narrow AI or general AI, and usually when they use those terms they're talking about "Terminator AI". I think most people agree that most AI today is narrow; we're teaching AI to drive cars or interpret medical images. When we talk about domain agnostic versus domain specific, we're talking specifically about the IT space and about platforms that were designed to solve a specific IT problem. AI really starts with a question: what questions do you want your AI assistant to answer? When I started Mist, the questions we wanted to answer were really around why a user is having a poor internet experience, why a user is having a poor connectivity experience. That in turn drives what data you need. So when you think about domain specific, it really starts with that specific question or that specific problem. Domain agnostic is more about taking a generic platform and trying to train it to solve a problem. And right now, if you look at Gartner, who kind of came up with these terms, the general consensus is that most enterprise businesses will get to an ROI quicker if they start with a domain specific platform versus a domain agnostic platform. When you look at solving these AI problems, there's a lot more than just putting the data into the platform. It turns out, after doing this for the last six years, a lot of work goes into the feature engineering. Even after you know the question you want to answer, you may spend weeks or months making sure you have the right features to solve the problem. That's one reason why, when I started Mist, I built my own access point. It wasn't because I thought the world needed another access point; it's because I wanted to make sure I could actually get the data I needed to answer that specific question of why the user experience is poor, why they're having a poor internet experience. So when you look at domain agnostic versus domain specific right now, I think most enterprise businesses will find they get to a quicker ROI with a domain specific platform where all the feature engineering has been done for them, where the data has been cleaned up so they don't have to worry about preprocessing it, and where a specific solution is already up and running in the cloud to answer a very specific question, as opposed to doing it yourself. So you can break it down this way: domain agnostic is the do-it-yourself approach, while domain specific means you're getting a solution that solves a specific problem that needs to be solved.

Guy Nadivi: What market transitions, if any, have you seen Bob that are driving business and IT to adopt AIOps?

Bob Friday: As we started discussing earlier, when I started Mist I was the Mobility CTO at Cisco, working with some very large retail customers and some very large enterprise customers. Back then I was building controllers with these embedded software architectures, and what I heard from those customers was, you know Bob, before I put any of your stuff on the network, I need to make sure your controllers don't crash, and I need to make sure you can keep up with my mobile development cycle. They were developing mobile code every week, right? The mobile developers had these very agile development environments, whereas we were releasing code maybe once or twice a year. And then, as I mentioned before, they really wanted end-to-end visibility. So I think one of the big market transitions is networking becoming business critical. They were putting critical applications on the network, whether it's a robot in a distribution center or a consumer app on a consumer device. So that was one of the key market transitions: businesses going from the paradigm of managing network elements to really managing the end-to-end user experience.

Guy Nadivi: What are the biggest motivators and barriers that you’re seeing for AI adoption?

Bob Friday: I think, as we mentioned, the biggest motivator is really that these networks are becoming much more complex. The other big transition we're seeing in networking right now is watching workloads move from on-prem, behind the firewall, to public clouds. Look at Salesforce and Microsoft Office 365; we have people working from home now in addition to working from the office, and we have workloads scattered all the way from the private data center inside the enterprise to AWS, Google and Azure out there. So one of the biggest motivators is really how you handle that complexity. When we transitioned from CLIs to dashboards, dashboards were the way to deal with complexity. Now there are just too many dashboards. We're getting to the point where a network IT person cannot deal with the amount of information and log files that need to be dealt with. And that is why we're starting to see them move to these AI assistants, because AI assistants and these conversational interfaces are what's really helping IT departments get the data quicker. Instead of having to remember the hundred different dashboards you need to find something, you can now go to your AI assistant and simply ask, "Please tell me, why do I have unhappy users right now?" And the AI assistant can do the work of aggregating the data necessary to answer that question. So that's the motivator. The barrier is really around, as we mentioned before, the adoption of AI. If you head down the domain agnostic path, there are barriers there, and that's why it's hard to get to the ROI quickly. If you do it yourself, you find there's a lot of feature engineering; getting the data you need to answer the question is a barrier in itself. And that's why we're seeing more enterprises move toward domain specific platforms, where the domain expert is helping them bring a solution to the table that can solve an immediate problem. So from a motivator point of view, it's complexity that's driving people to adopt these AI solutions. From a barrier point of view, it's basically the knowledge base. We're asking a lot of the enterprise IT department nowadays. First we asked them to move from the CLI paradigm to these dashboard paradigms. Then we asked them to become Python programmers; we built all these cloud APIs to help them automate and get data out of their networks quicker and easier. And now we're asking enterprise IT departments to start to wrap their heads around data science and AI, to become enough of a data science expert that they can evaluate all the different options out there. So that's probably the biggest barrier right now: this knowledge gap, bringing our enterprise IT departments along and educating them about the different data science options out there to help them solve their problems.
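The conversational-interface idea Bob describes can be sketched in a few lines. The data-source functions below are hypothetical placeholders (a real assistant would call the vendor's cloud APIs); the point is that the assistant, not the administrator, does the fan-out across what used to be separate dashboards.

```python
# Hypothetical sketch of an assistant answering "why do I have unhappy users?"
# The fetch_* functions are placeholders standing in for real vendor/cloud API calls.
def fetch_wifi_issues(site: str) -> list[str]:
    return ["client 7f:2a failing 802.1X authentication", "AP lobby-3 rebooted twice"]

def fetch_wan_issues(site: str) -> list[str]:
    return ["latency to Office 365 above 150 ms for the last 12 minutes"]

def fetch_switch_issues(site: str) -> list[str]:
    return []  # nothing wrong at the access layer in this example

SOURCES = [fetch_wifi_issues, fetch_wan_issues, fetch_switch_issues]

def why_unhappy_users(site: str) -> str:
    """Aggregate findings from every source into one plain-language answer."""
    findings = [item for source in SOURCES for item in source(site)]
    if not findings:
        return f"No obvious problems found at {site}."
    return f"Likely causes at {site}:\n- " + "\n- ".join(findings)

print(why_unhappy_users("HQ"))
```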

Guy Nadivi: Bob, with 2020 behind us, what do you expect AI for networking adoption will look like in 2021?

Bob Friday: I think the one word I would use here is acceleration. If anything, what we've learned over the last hundred years is the adoption of technology seems to be accelerating faster and faster. And I think this is going to be totally true with what we're seeing with the adoption of AI. It's becoming very clear that AI is going to be valuable in helping enterprise businesses deal with these complex networks going forward. I think we're going to see the adoption accelerate, and we're going to see the technology definitely accelerate. We're in that exponential, hockey-stick part of the AI adoption curve that started back in 2014. Every year we're starting to see more and more AI solutions show up in the marketplace. We're starting to see more and more open source AI solutions on top of which we can build. So this is becoming easier and easier for startups like Mist to actually add value, because we're building on the shoulders of giants right now. The mountain of AI open source code is just growing faster and faster every year, making it easier for us to bring value to customers. So acceleration is the word I would stick with for 2021.

Guy Nadivi: So with adoption of AI accelerating, as you’re seeing, what’s at risk for companies who don’t develop an AI for IT strategy in the coming year?

Bob Friday: Interestingly, I think there are two risks. The first is the end customer experience. This is what we're seeing with our big B2C customers, hospitality, retail, anywhere there's a business-to-consumer experience. The risk is, hey, if you're in that business and you're providing experiences to your consumers or your employees, you are going to need AI to manage this end-to-end, client-to-cloud connectivity experience. The second, more implicit risk is subtle unless you've been living it; it's what I call vendor-to-customer support. The interesting thing as we move to a cloud AI paradigm is that your big networking vendors now actually have the data to be much more proactive, even in your networking support models. And this is one thing I learned at Mist, interestingly, on the organizational side. There were the technical, architectural issues of building real-time pipelines for AI, but there's also an organizational component, really around combining customer support with the engineering team at Mist. That was the key to success, to bringing a new support model into the enterprise, where we as a vendor can now proactively replace broken hardware. If there's a broken piece of hardware or software in a network, the customer doesn't have to send us an RMA ticket; we know it. So we can be very proactive and say, "Hey, we know there's a broken AP out there, a broken switch out there. There's one in the mail for you." That's a total paradigm shift from what they've had to deal with in the past, arguing with their vendors about networking problems, which usually turns into multi-day, multi-week discussions about sending log files back and forth. So that's probably the other big risk for companies who don't start to wrap their arms, and their heads, around where AI can help their businesses.

Guy Nadivi: Interesting. Staying with 2021, what are your biggest 2021 AI for IT predictions that you’re most excited about?

Bob Friday: Yeah, for me personally, my big prediction for 2021 is trust. I think people are starting to become aware of AI/ML. They're starting to understand that AI/ML is more than just marketing hype; they're starting to see it actually solve real problems that are relevant to their businesses. I think 2021 is the year of trust: how does an AI assistant earn the trust of the IT department to become a trusted member of that team? So when I look at 2021 and AI, where we are in the journey right now, it's about conversational interfaces and bringing the AI assistant into the IT team as a trusted member.

Guy Nadivi: Bob, for the CEOs, CTOs and other IT executives listening in, what is the one big must-have piece of advice you'd like them to take away from our discussion with regards to implementing AIOps at their organization?

Bob Friday: As the saying goes, every journey starts with the first step. So my words of wisdom for a CIO or CTO who hasn't started the journey is to take that first step. It seems daunting sometimes, and that first step really starts with the question and the data. If you're just starting this journey, first start with the question you want answered. What do you want to leverage AI to do? What human task do you really want AI to take on? That's back to the point of what the difference is between AI and ML. I try to highlight to people that AI is really about building solutions and software that do something on par with a human. So the first step is really thinking about what you are asking AI to do on par with a human. For me personally at Mist, it was about building a solution that was on par with networking IT domain experts: can we build something that can answer questions and manage networks on par with network domain experts? So that's the first step I would recommend to CIOs and CTOs: look at the question, ask what human behavior you're trying to mimic in your business that you think AI can help with, and then move on to the data. Once you've got that figured out, start working on the data, making sure you have the data to answer that question.

Guy Nadivi: Interesting. A new way of thinking for IT executives. All right. Looks like that's all the time we have for this episode of Intelligent Automation Radio. Bob, it's very interesting to hear about AI-powered networks when we mostly only hear about AI as an add-on to an existing network. So I'm very much looking forward to seeing how your approach to AIOps plays out in the near future. Thank you so much for coming onto the podcast.

Bob Friday: Guy, thank you for having me and it’s been an honor.

Guy Nadivi: Bob Friday, Vice President, CTO and Co-Founder of Mist Systems, a Juniper Company. Thank you for listening everyone. And remember, don’t hesitate, automate.



Bob Friday

Vice President, CTO, and Co-Founder of Mist Systems, a Juniper Company. 

Bob Friday is VP/CTO of the AI-Driven Enterprise at Juniper Networks and co-founder of Mist, a Juniper Company. Bob started his career in wireless at Metricom (Ricochet wireless network) developing and deploying wireless mesh networks across the country to connect the first generation of Internet browsers. Following Metricom, Bob co-founded Airespace, a start-up focused on helping enterprises manage the flood of employees bringing unlicensed Wi-Fi technology into their businesses. Following Cisco’s acquisition of Airespace in 2005, Bob became the VP/CTO of Cisco enterprise mobility and drove mobility strategy and investments in the wireless business (e.g. Navini, Cognio, ThinkSmart, Wilocity, Meraki). He also drove industry standards such as Hot Spot 2.0 and market efforts such as Cisco’s Connected Mobile Experience. He holds more than 15 patents. 

Bob can be reached at: 

LinkedIn: https://www.linkedin.com/in/bobfriday/ 

Email: bfriday@juniper.net 

Twitter: https://twitter.com/WirelessBob 

Quotes

“When I started Mist back in 2014, that's really when AI kind of went from a marketing thing to a reality thing. We really had the compute and storage that we could actually use to solve interesting problems.” 

“…right now we've got to the point where we can actually build these systems that can actually detect bad ethernet cables. That's a very hard thing for a person to go detect. That's an easy thing for an AI assistant with machine learning to find…” 

“…the general consensus is that most enterprise businesses will get to an ROI quicker if they start with a domain specific platform versus a domain agnostic platform.” 

“If anything, what we've learned over the last hundred years is the adoption of technology seems to be accelerating faster and faster. And I think this is going to be totally true with what we're seeing with the adoption of AI.” 

“Every year we're starting to see more and more AI solutions show up in the marketplace. We're starting to see more and more open source AI solutions on top of which we can build. So this is becoming easier and easier for startups like Mist to actually add value, because we're building on the shoulders of giants right now.” 

About Ayehu

Ayehu’s IT automation and orchestration platform powered by AI is a force multiplier for IT and security operations, helping enterprises save time on manual and repetitive tasks, accelerate mean time to resolution, and maintain greater control over IT infrastructure. Trusted by hundreds of major enterprises and leading technology solution and service partners, Ayehu supports thousands of automated processes across the globe.


Disclaimer Note

Neither the Intelligent Automation Radio Podcast, Ayehu, nor the guest interviewed on the podcast are making any recommendations as to investing in this or any other automation technology. The information in this podcast is for informational and entertainment purposes only. Please do your own due diligence and consult with a professional adviser before making any investment.

3 Ways AIOps is Revolutionizing Enterprise IT

With digital transformation initiatives topping the priority list, increasing pressure is being placed on IT teams to continue to innovate and improve digital services at a breakneck pace. To meet these demands and keep their organizations competitive, IT teams are increasingly turning to artificial intelligence for IT operations (AIOps). This technology enables IT to manage performance issues in real-time and leverages predictive insight to prevent problems before they occur. In addition to creating this type of self-healing environment, here are a few other ways AIOps will revolutionize enterprise IT in the coming years.

Overcoming the Skills Gap

Traditional IT work focused on the manual production and maintenance of reliable infrastructure. This was heavily dependent on skilled human agents. With the introduction of AIOps, IT enjoys faster incident remediation, a reduction in MTTR and a decrease in overall costs. A more autonomous environment such as this reduces the need for hands-on, manual oversight by IT personnel. As a result, IT departments can run effectively and efficiently, even on limited staff. Essentially, AIOps enables IT teams to do more with less.

Furthermore, with the increased prevalence of low-code and no-code solutions, the level of expertise required to keep IT functioning at optimal performance drops exponentially. Managing the AIOps platform can be accomplished through upskilling existing employees, eliminating the need to seek external talent from a pool that is rapidly drying up.

Achieving Synergy

Historically, problems with data would often go undetected or require a frustratingly long time to resolve. This contributed to increased MTTR, ultimately impacting satisfaction rates and hindering the ability to innovate. Further complicating matters, the widespread permeation of cloud technology has resulted in the adoption of micro-services, each of which has its own set of rules and tools. This made the gathering, interpreting and leveraging of data exceedingly more challenging for IT.

With the implementation of AIOps, enterprise IT teams can finally lift the veil and enjoy full-stack visibility over all elements of the IT environment. Individual silos are broken down and a seamless integration of systems is made possible. This makes data infinitely more powerful and tremendously more useful to the entire organization.

Enhancing and Fortifying Cybersecurity

Cybersecurity has been a top organizational priority for decades. Rapidly evolving technology has opened the doors to bad actors, enabling unprecedented opportunity for malicious action. Couple this with the rise in online activity over the past year, and it's no surprise that cyberattacks have become one of the biggest threats to enterprise growth. As cyber threats become increasingly complex, IT decision-makers are faced with the daunting task of establishing countermeasures that will allow them to manage threats proactively.

AIOps can help to establish and strengthen sufficient fortifications to protect the enterprise from even the most sophisticated attack, enabling IT to mitigate the damages of cyber threats, or better yet – avoid them altogether. For IT teams facing increasingly complex security challenges, AIOps can facilitate the detection, evaluation and autonomous elimination of potential vulnerabilities before cyber criminals have the opportunity to exploit them.

For these reasons (and many more), AIOps is poised to fundamentally change enterprise IT over the following months and years. Best of all, mature AIOps technology is readily available for rapid implementation. All you have to do to get started is download your free trial of Ayehu NG today. What are you waiting for?

The Impact of AIOps on the Future of Work

If there’s anything we’ve learned from the past several months, it’s that flexibility and the ability to adapt are the key to success. With the sudden and rather unexpected shift to remote work, many organizations have quickly discovered the need for a new approach to IT management. AI for IT operations (AIOps) has the potential to become the golden ticket for improving efficiency and creating a collaborative, supportive and secure environment for distributed workforces.

Bigger companies that have opted to spread their workforces across multiple satellite locations stand to benefit greatly from AIOps. In fact, with intelligent tooling, organizations with 50, 75 or even 100 remote offices are capable of operating cohesively. As the number of offices scales up, AIOps becomes even more critical. One area where it is of particular value is automated remediation. Ideally, the goal is to have technology do the heavy lifting, with the ability to pinpoint when and where something has gone awry and preemptively correct it.

From a productivity standpoint, AIOps helps, both in terms of IT management, as well as helping remote employees stay on top of monitoring activity and environmental changes. With machine learning and artificial intelligence at the helm, human effort is reduced tremendously. Given the recent – and likely permanent – shift to satellite and remote operations, it’s becoming abundantly clear that AIOps is the approach of the future.

This isn't to say AIOps is infallible. There is still a margin of error to account for. This is good news for humans, as this is where a hybrid approach that has people and robots working together comes into play. Where AIOps can really stand out is in its ability to identify subtle transient issues that might not otherwise trigger a ticket or catch the attention of the support desk.

A good example of this would be changes in latency that only occur for mere seconds. Independently, these subtle problems may otherwise go undetected, or may not seem significant enough to warrant attention. But, when viewed collectively as a trend, AIOps could potentially identify the changes as something that could eventually cause more significant and widespread issues.
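As a toy illustration of that trend-spotting idea (not any particular product's logic), the sketch below counts brief latency spikes inside a rolling window; no single spike is worth a ticket, but once several accumulate, the pattern gets surfaced for investigation. The thresholds are arbitrary examples.

```python
# Toy illustration: individually brief latency spikes are ignored,
# but a cluster of them inside a rolling window is flagged as a trend.
from collections import deque

class TransientSpikeDetector:
    def __init__(self, threshold_ms: float = 100.0, window: int = 60, max_spikes: int = 3):
        self.threshold_ms = threshold_ms
        self.samples = deque(maxlen=window)  # keep only the most recent samples
        self.max_spikes = max_spikes

    def observe(self, latency_ms: float) -> bool:
        """Record one sample; return True once spikes in the window exceed the limit."""
        self.samples.append(latency_ms)
        spikes = sum(1 for s in self.samples if s > self.threshold_ms)
        return spikes > self.max_spikes

detector = TransientSpikeDetector()
stream = [20, 25, 180, 22, 21, 190, 23, 175, 24, 185, 22, 200, 21]
for i, latency in enumerate(stream):
    if detector.observe(latency):
        print(f"Sample {i}: transient latency spikes are trending, worth investigating")
```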

Another area where AIOps can help is by prequalifying remote employees for the applications they run and the quality of their network connection. Workloads can then be automatically shifted and optimally distributed based on these pre-qualifiers. Furthermore, AIOps technology can limit event volumes, predict future outages, and leverage intelligent automation to reduce downtime and alleviate staff workload.

The most exciting part of all of this is that, for all intents and purposes, AIOps is still really only in its infancy. For those wishing to jump on the AIOps bandwagon, there’s still plenty of room. And we’ve got a quick and easy way for you to get started. Simply click here to launch a free, 30-day trial of Ayehu NG and start putting the power of AIOps to work for your organization.

3 Ways AIOps Can Change the Game for CIOs

The role of CIO has evolved rapidly over the past few decades. Perhaps the biggest change has been the shift from being primarily tech-focused to playing a pivotal role in driving business strategy. In fact, most organizations now recognize the unique value that the CIO brings to the table, combining technology know-how with in-depth business acumen to provide unparalleled insight and perspective to the enterprise.

Of course, along with great responsibility comes even greater challenges. Because of their unique positioning, CIOs are expected to deal with everything from infrastructure and operations to innovation. On any given day, they’re expected to balance putting out fires in the trenches and handling escalations with things like managing budgets and developing growth strategies. They wear dozens of hats, many of which must be switched in a matter of seconds.

As the CIO’s role becomes even more multifaceted, the demands and expectations they face continue to grow while their time and energy remain finite. At some point, something’s got to give.

AIOps to the rescue!

In the face of increasing complexity, growing demands and ever-changing requirements, AIOps can be a secret weapon for CIOs, freeing up their time and enabling them to focus their efforts on more mission-critical projects. Here are three specific ways AIOps can become an absolute game-changer.

Maximum Visibility

AIOps facilitates end-to-end visibility, offering oversight of the IT infrastructure in its entirety, including on-premises, cloud and end-user environments. It also bridges the gap between the infrastructure and the services being delivered, enabling prioritization of issues based on their business impact. This helps CIOs identify which issues require their attention most. It also provides invaluable, data-driven insight executives need to make more informed business decisions.

Greater Simplicity

Today’s IT environments are becoming increasingly complex by the day. AIOps allows CIOs to not only keep the pace, but actually gain a few extra steps in the process. Rather than wasting countless hours trying to connect thousands of separate events from disparate monitoring tools, IT operations teams can view relevant, actionable alerts and impacted services on one single, central console.

This facilitates faster detection of service issues, eliminates false positives and prevents important issues from potentially being missed. As a result, CIOs receive fewer escalations and are able to offer more timely answers when escalations do occur, saving both themselves and their teams time and energy.

Rapid Resolution

The third most impactful way AIOps can help CIOs is by enabling IT operations to resolve service issues faster. This is accomplished through a strategic combination of root cause analysis, historical and real-time context and automated remediation. Not only does this significantly reduce MTTR and mitigate the impact of major incidents, but it also lessens the amount of time and effort CIOs must commit to escalation management.

At the end of the day, AIOps has the potential to deliver a momentous win-win, improving service quality while also freeing up the CIO to be able to focus his or her precious time on strategic work and innovation.

The great news is, getting started with AIOps is easier than ever. With Ayehu, you can be up and running with AI-driven, intelligent automation in just a few minutes! Click here to download your trial and try Ayehu free for 30 full days.

Is Your Organization AIOps-Ready? Here’s How to Get There in 3 Steps

Digital transformation can be boiled down to three things: simplicity, innovation and intelligence. One of the most effective tools for achieving these things is AIOps. But adopting a new approach in any department can be daunting. Adding in the complexities of IT makes it even more challenging. When it comes to significant initiatives like this, taking it step-by-step can make things much more manageable. Let’s break down three phases that will make your AIOps implementation go much more smoothly.

Step 1: Define your goals.

You can’t expect to hit a target if you don’t have one in front of you. The first step in getting your organization ready to roll with AIOps is to determine exactly what you’d like to accomplish as the end-result. Take some time to identify areas of specific need where AIOps could provide resolution. Some examples of this might be supporting your ITSM team with alert escalation or improving service availability through incident management. You have to define your objectives first before you can develop a strategy for achieving them.

Step 2: Establish parameters for success.

Next, you must determine which parameters you will use to monitor and measure your success. Common AIOps success parameters include things such as mean time to resolution (MTTR), outage prevention, improved productivity and cost reduction as a result of automation. By developing and setting these benchmarks, you’ll be better able to assess your implementation success rate. You’ll also be better prepared to determine when to pivot and change course if necessary.
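As a small example of turning one of those parameters into a number you can track, the snippet below computes mean time to resolution from a handful of incident records. The field names are illustrative rather than tied to any specific ticketing system.

```python
# Illustrative MTTR calculation; the "opened"/"resolved" field names are examples.
from datetime import datetime, timedelta

incidents = [
    {"opened": datetime(2021, 2, 1, 9, 0),  "resolved": datetime(2021, 2, 1, 9, 45)},
    {"opened": datetime(2021, 2, 3, 14, 0), "resolved": datetime(2021, 2, 3, 16, 30)},
    {"opened": datetime(2021, 2, 5, 8, 15), "resolved": datetime(2021, 2, 5, 8, 35)},
]

def mttr(records: list[dict]) -> timedelta:
    """Mean time to resolution across a set of incident records."""
    durations = [r["resolved"] - r["opened"] for r in records]
    return sum(durations, timedelta()) / len(durations)

print(f"MTTR this period: {mttr(incidents)}")
# Re-run the same calculation after automation goes live and compare the two
# numbers to quantify improvement against your benchmark.
```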

Step 3: Focus on the data.

Artificial intelligence is only as good as the data it's being fed. The term “garbage in, garbage out” comes to mind. In order for your AIOps adoption to be successful, you must prioritize making sure the data you will use is plentiful, accessible, relevant and, most importantly, of superior quality. Sourcing this information as early as possible can give your project a jumpstart, save you a massive amount of time and aggravation, and improve your chances of reaching your goals with fewer roadblocks along the way.
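One low-effort way to act on the “garbage in, garbage out” point is a quick pre-flight check on the data before it reaches the AIOps pipeline. The sketch below is a hypothetical example with illustrative field names, checking only completeness, freshness and duplicates; real pipelines would add many more rules.

```python
# Hypothetical data-quality pre-flight check; field names are illustrative.
from datetime import datetime, timedelta

REQUIRED_FIELDS = {"timestamp", "source", "metric", "value"}

def data_quality_report(events: list[dict], max_age: timedelta = timedelta(hours=24)) -> dict:
    now = datetime.utcnow()
    # Records missing any required field
    incomplete = [e for e in events if not REQUIRED_FIELDS <= e.keys()]
    # Records older than the freshness window
    stale = [e for e in events if "timestamp" in e and now - e["timestamp"] > max_age]
    # Records that repeat the same (timestamp, source, metric) key
    seen, duplicates = set(), 0
    for e in events:
        key = (e.get("timestamp"), e.get("source"), e.get("metric"))
        duplicates += key in seen
        seen.add(key)
    return {"total": len(events), "incomplete": len(incomplete),
            "stale": len(stale), "duplicates": duplicates}

sample = [
    {"timestamp": datetime.utcnow(), "source": "sw1", "metric": "cpu", "value": 71},
    {"timestamp": datetime.utcnow(), "source": "sw1", "metric": "cpu"},                 # missing value
    {"timestamp": datetime(2020, 1, 1), "source": "sw2", "metric": "mem", "value": 3},  # stale
]
print(data_quality_report(sample))
```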

As you work through these three fundamental steps, one important point to keep in mind is that AIOps is a journey, not a destination. Therefore, it should never be viewed as a one-off project, or a “set it and forget it” initiative. Organizations that have been successful with AIOps recognize that it’s something dynamic and ever-changing. Go into it with the proper mindset and digital transformation will be your reward.

Episode #45: Why Focusing On Trust Is Key To Delivering Successful AI – CognitiveScale’s Matt Sanchez

August 24, 2020    Episodes

Episode #45: Why Focusing On Trust Is Key To Delivering Successful AI

In today’s episode of Ayehu’s podcast, we interview Matt Sanchez – Founder & CTO at CognitiveScale, and Former Leader of IBM Watson Labs.

“First is the worst, second is the best, third is the one with the treasure chest.” Some of you may recognize this old children’s poem, variations of which can be found on the internet that replace the last two words “treasure chest” with….well, I’ll leave it up to you to find out. Though quaint, this rhyme is actually quite germane as shorthand for the many iterations AI projects must cycle through before they start delivering trusted data, trusted decisions, & trusted outcomes.  The worst results are at the beginning, but as time goes on and the AI continues learning & improving, the results can be quite good, and for organizations who stick with it, very successful.

When it comes to AI projects, significant time often passes between inception and dividends, due to the many steps which must be taken to get things right.  Our guest Matt Sanchez argues that in order to protect customers' trust in your brand, there should be no shortcuts taken along this route.  In fact, as Founder & CTO of CognitiveScale, a company focused on helping clients pair humans with machines, he advocates for “responsible AI” as a framework to ensure that AI never breaches customer trust.  Matt joins us for a broad-based discussion that taps into his wide-ranging insights on AI dating back to his days as a leader of IBM's Watson Labs.  In this episode, we'll learn about the 6 key components that make up responsible AI, why data needs to be “nutritious, digestible, and delicious”, and the bottom-line proof that leaves him so optimistic about AI's future.



Guy Nadivi: Welcome, everyone. My name is Guy Nadivi and I’m the host of Intelligent Automation Radio. Our guest on today’s episode is Matt Sanchez, Founder and Chief Technology Officer of CognitiveScale, an enterprise AI software company. CognitiveScale is number one in AI patents among privately held companies and number four overall since 2013, with a focus on helping clients to pair human and machine. Prior to CognitiveScale, Matt was one of the leaders at IBM’s Watson Labs, which is of course, part of IBM Research, one of the largest industrial research organizations in the world. So with such an accomplished track record in the field of artificial intelligence, Matt is someone we absolutely had to have on our show. He’s been kind enough to carve some time out from his understandably busy schedule to join us and share his considerable insights with our audience. Matt, welcome to Intelligent Automation Radio.

Matt Sanchez: Well thanks, Guy. I'm glad to be here and appreciate you taking the time to discuss some of these topics today. It's certainly an interesting time for us in the field of artificial intelligence, so glad to be here.

Guy Nadivi: So let’s start by talking about something very interesting that you’re an advocate for which is something called responsible AI. Can you please define that, Matt, and explain what the components of responsible AI entail?

Matt Sanchez: Sure. So responsible AI at its core, for us, is about wanting our clients to have a better understanding of how to maximize the value of AI while minimizing the risk, and the risk could be to their business or to society. We believe that you need to have tools to really handle this. It's not something many businesses are equipped with today, and these tools need to be able to automatically detect, score and mitigate risks that come from using AI and related technologies to automate decisions, or to help augment decisions that are happening in the enterprise. We want to make AI transparent, trustworthy, and secure by providing these tools. And responsible AI is really about leveraging those sorts of things to make sure that the systems we're creating are not just opaque learning machines, but are actually trusted, controlled, intelligent systems that can actually improve individuals, organizations, and society. We think there are really six key components to responsible AI. We call them trust factors, and we really talk about it in terms of a trusted AI framework. These trust factors are things like effectiveness: making sure that the AI systems and the models we build in these systems are continually generating optimal business value. A number of studies were done recently that talked about the difficulty of making sure that AI models deliver ongoing business value, and so that continues to be a challenge. But beyond business value, there are risks that also come with it. So these trust factors go beyond just understanding business value and look at things like explainability: how explainable are the decisions that these AI systems make to human users? Can we give a very simple explanation for how an automated decision was made? What about bias and fairness issues? How do we know if there's bias in these systems? How do we test for it? How do we measure it if it's somehow hidden, learned or inferred by these AI models? Can we test for that? Robustness: making sure that these systems are secure, that adversarial attacks on these systems can be understood, and that we can actually test for these weaknesses in these AI systems. Data risks: data really is the fuel for these systems, and if it's tainted with bad information, it's a garbage in, garbage out problem. We need to be able to detect that, and data is constantly changing, so this isn't a one-time thing; it has to be monitored. And then finally compliance: there are legal and ethical considerations when we use automated decisioning. Many countries are actually starting to define very specific laws around automated decisions and the use of algorithms. So compliance is going to be a continually changing landscape, but one that's increasingly important for customers using AI.

Guy Nadivi: Matt, you’ve spoken about the need for data to be “nutritious, digestible, and delicious,” end quote, which by the way, is how I like to describe my wife’s cooking. What did you mean by nutritious, digestible and delicious data?

Matt Sanchez: Yeah, so somebody you had on your podcast in the past, Lee Coulter, is a good friend of ours, and together we kind of came up with this set of things as an analogy for what it really means to power an artificial intelligence system, or to power machine learning. And we were really focusing on the data. So delicious really means the right variety. I need to make sure the data I have has the right inputs, the right conditions, and the right outputs. If it doesn't have that, I know I don't have complete information, so whatever I'm learning from it is not going to be very good; my results aren't going to be very good. Digestible means I have to be able to actually consume the data. A lot of data that was created in enterprises is not digestible by machine learning algorithms. So the structures that are created have to be both useful and usable by the model we're creating with it, and it has to be free from any sort of contaminants that could cause the system to reject that information. And finally nutritious really means data that is sustenance for the main purpose of that model. It contributes either positively or negatively to the inferences we're making, but it's not just noise, it's not just filler; it actually needs to be the right stuff. Nutritious also means our confidence in the predictions these systems are making is growing over time. We call that trusted decisions; there's transparency and trust in those decisions. And finally, nutritious means the data itself is not poisonous: there's no unintended leakage of private data or biased information. So we talk about this as a high level framework to really think about your data, because without the right data, AI really cannot succeed.

Guy Nadivi: Matt, as I’m sure you’re aware, AI projects have an unacceptably high failure rate, as much as 85% according to one report. In your experience, what are the biggest reasons AI projects fail and what can be done to reverse poor outcomes?

Matt Sanchez: Well I think there are three key problems that cause failure in these projects. One, of course, as I just discussed, is data quality. If I can't really get a handle on data quality, then everything else downstream from that fails. You can almost think of AI as a supply chain problem, where the upstream work is really around data, so data quality becomes really key. The second part, which is one level downstream from data, is modeling and model validation. We've heard from clients that it can take upwards of a year to build one machine learning model and get it into production. Maybe half of that time is actually the technical work; the other half is validating that the model is actually trustworthy, that it's compliant, that it actually delivers the right business value, and that we can prove it. That can trip up these projects, essentially stalling them out in the lab. And then finally business outcome: making sure, on a continual basis, that my AI systems are measurable against the business KPIs they're designed to solve for. That's the only way to make sure the investments in those AI systems are actually paying off. So these three problems trip up a lot of projects. And what you really need to solve for this is, first, trusted data: data that is free from bias, that has the right nutritional value to solve the problem, and is ready for machine learning. You need trusted decisions: we need to make sure that the decisions these systems are making have a level of transparency and explainability built into them. And then finally we need trusted outcomes: we need to know, with full transparency from the business side, that the AI systems are actually generating value.

Guy Nadivi: Matt, there are, as I’m sure you know, some concerns cropping up about the misuse of AI and machine learning, deep fakes being just one example. Do you see any economic, legal or political headwinds that could slow adoption of these advanced technologies, or is the genie out of the bottle at this point to such an extent that they just can’t be stopped and perhaps not even effectively regulated?

Matt Sanchez: Yeah. So, I think this is a continual challenge in the field of artificial intelligence and in a lot of potentially related fields. Certainly, there are a lot of opportunities for misuse of these technologies, and I think we will continue to see that from bad actors. Now, that being said, I think most corporations and governments are actually incented, if you will, to use AI in a responsible manner. The reason for that is that it's a brand trust issue, and it's a public safety issue if you're a government agency. Brand trust has been shown to be a very costly thing to lose in terms of dollars and cents. At the end of the day, if the consumer doesn't trust the brand, they stop using that brand. That results in literally trillions of dollars in lost revenue globally every year because of trust issues, and AI can exacerbate those trust issues. If you're using AI in a way that is not trusted, your brand will erode very, very quickly, and consumers are keenly aware of this. So that being said, I think organizations can address the ethics question around AI: what are your principles as an organization that you're going to adhere to? Publish those principles and then have a way of actually showing that you're following them. I think that's one way organizations can get around the fear, if you will, that they're somehow using AI for evil in the back office. But also from a regulatory standpoint, we're seeing more and more examples of consumer data protections. That's usually where it starts, with things like GDPR in Europe, but now also in California. On January 1st this year, the California Consumer Privacy Act went into effect, and now consumers are gaining more control over their own data. That's really where it starts, with regulation and laws that are being passed. And there are many other legal ramifications for organizations that try to use AI in the wrong way, and particularly try to use data in an illegal way. Then finally, I would say on the public safety side of things, I do think there are genuine public safety concerns with things like autonomous vehicles and other sorts of autonomous technologies that will be regulated at some point. We will have to expand the regulations that exist to protect the public from these technologies, just like any new technology that surfaces in the marketplace. And then of course, there are always the bad actors who try to use technology for their own criminal purposes, and that's going to happen whether business uses AI or not. In that sense the genie is already out of the bottle. The technology is out there.

Guy Nadivi: Your company CognitiveScale describes its product as, “the world’s first automated scanner for black box AI models that detects and scores vulnerabilities in most types of machine learning and statistical models.” There are some who say that AI algorithms should be audited much like a publicly traded company’s financial statements. Could CognitiveScale be a virtual operator for auditing AI algorithms?

Matt Sanchez: Yeah, that’s a great question and, in fact, one that we talk to our customers about quite often. As it turns out, auditing is already starting to occur. There are large organizations that have had various forms of, I’ll call them, AI audits, with auditors in particular starting to look at this from a business risk standpoint: how is the use of AI potentially introducing new risks into the organization, and are they managing those risks appropriately? You can think of banks, for example, needing to answer these questions from a regulatory standpoint. And so AI auditing, if you will, is becoming an increasingly important topic. Now how do you do it? First of all, I think you have to understand the ethical principles, regulations, laws, et cetera, that are applicable to your business and to your jurisdictions of interest. Regulations are specific to jurisdictions. In the United States, for example, if you’re a healthcare insurance company and you operate in 50 states, you probably have 50 different sets of insurance codes that talk about discrimination, and they don’t all talk about discrimination in the same way. And so now you have to understand what that means for your business. And a lot of organizations, what they’re doing to get ahead of that is to define and publish their own AI principles. You can see large tech companies like Google, Facebook, and others who have published some of these principles, but now you can actually start to see banks and healthcare insurance companies and other types of companies starting to publish their AI principles: what are their values as a company and how are they using AI responsibly? So, first, understand the applicable regulations and laws. Second, define your principles. And third, have the measurements and controls in place to prove that you’re being compliant with those rules and regulations. And on that last point, this is where our product, Cortex Certifai, can really help. Because one of the things we discovered is that within an organization, the technical people, the data scientists and the engineers, speak a very different language than the compliance officers and the business owners when it comes to AI. And so we needed a common language so that they could all talk. And this is something we call the AI trust index. Think of it as almost a single scoring mechanism for measuring algorithmic risk, breaking it down into those six trust factors that I discussed earlier, where we can now get a very simple score, almost like a credit score for an AI, that tells me how trustworthy it is. So instead of just looking at the technical attributes, the statistical attributes of these systems, I can now look at the trust attributes or the ethical attributes of these systems. And this is enabling a common language to facilitate measurement and ultimately controls and audit in these organizations.
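
As an illustration of the idea of a single, credit-score-like number, here is a minimal sketch of how a composite trust index could be assembled from per-factor scores; the factor names and weights below are illustrative assumptions, not CognitiveScale’s published formula.

    from typing import Dict

    # Illustrative factors and weights only; the actual Cortex Certifai trust
    # factors and weighting scheme are not reproduced here.
    FACTOR_WEIGHTS: Dict[str, float] = {
        "explainability": 0.20,
        "robustness": 0.20,
        "fairness": 0.20,
        "compliance": 0.15,
        "data_quality": 0.15,
        "performance": 0.10,
    }

    def trust_index(factor_scores: Dict[str, float]) -> float:
        """Collapse per-factor scores (0-100) into one composite number."""
        total_weight = sum(FACTOR_WEIGHTS.values())
        weighted = sum(FACTOR_WEIGHTS[f] * factor_scores.get(f, 0.0) for f in FACTOR_WEIGHTS)
        return weighted / total_weight

    # Example: strong performance but weak explainability pulls the composite down.
    print(trust_index({"explainability": 40, "robustness": 80, "fairness": 75,
                       "compliance": 90, "data_quality": 70, "performance": 95}))

The value of a composite like this is less the arithmetic than the shared vocabulary: data scientists, compliance officers, and business owners can all discuss the same number.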

Guy Nadivi: Interesting. Earlier this year, Matt, there was an article in MIT Technology Review about artificial general intelligence, or AGI. In that piece, the author, Karen Hao, who’s been on our podcast, wrote, quote, “There are two prevailing technical theories about what it will take to reach AGI. In one, all the necessary techniques already exist. It’s just a matter of figuring out how to scale and assemble them. In the other, there needs to be an entirely new paradigm. Deep learning, the current dominant technique in AI, won’t be enough.” Matt, what do you and your team at CognitiveScale think it will take to achieve AGI?

Matt Sanchez: Yeah, so I’ll preface this by saying that for our team at CognitiveScale, AGI is interesting and it’s a topic that’s worth debating. However, my view is this is not where the current opportunity in the market is. And so while it’s interesting, we don’t spend a whole lot of time in the AGI world. But that being said, I do have some opinions that I can share. And I think there’s a couple of different ways of looking at it. One is, if AGI is supposed to be creating a system, creating technology that can really work like the human brain, meaning think and learn the way that the human brain does, then we have a long way to go, like maybe 50 plus years of more work to do before we even get close. And I’d just point to two things that humans do that machines don’t do today at all, and that we don’t even know how to do at the scale that the human brain can. The first is just common sense understanding, or common sense reasoning. This was something that I learned about when I was at IBM, because we were certainly trying to figure out how to teach Watson to have a little bit more common sense when it was answering questions, and it’s challenging. Some of the things we learn as human beings, the inflections in our voice, the subtleties of body language, things that are just intuition to humans, are very difficult for machines to understand, and encoding that information, encoding that data in a way that the machines can understand, is really challenging. So, in that sense, I would agree with the view that we need new technologies to solve for this, because the encoding of that is still a big challenge, even with deep neural networks. And then things like emotion are also very challenging, and they factor very deeply into how the human brain works. So if that’s the definition of AGI, I think we’ve got a long way to go. If AGI really at an algorithmic level is supposed to be about generalization, so generalizing, showing that one algorithm can solve for multiple tasks, different types of tasks without having to be explicitly retrained, if you will, or rebuilt to solve those tasks, then I think we’re actually on our way. I think there have been some great advances along this dimension with reinforcement learning and some other technologies. And so there are certainly a lot of interesting advances in this space. But my view is that the definition of AGI I’ve always looked at really talks about it being more of this learning, really simulating understanding and learning in a way that’s similar to how the human brain can reason and learn and generalize. And I think we’ve got a long way to go before we are even close to that.

Guy Nadivi: Okay. So perhaps that’s a good segue into my next question. Overall, Matt, given your vantage point, what makes you most optimistic about AI and machine learning?

Matt Sanchez: So the number one thing that I get excited about with AI is when I see real business outcomes. When I see that by using AI we can save a lot of money for our customers, or we can help them improve the customer experience that they want to deliver, and when I can see that in terms of dollars and cents, for me it shows that AI is working, that it’s worth the investment, and that it’s something worth pursuing as a core capability in the business. So I think that’s, at the end of the day, what I get excited about, and we see that with a lot of our efforts and with our customers. In fact, we make it a core part of our methodology for how we work with clients to really focus on the outcome first, and to really challenge our clients and ourselves to define what that outcome is and how we achieve it. What does good look like? Why is it better than what you’re doing today? And I think if you start from there, and when you see the result and you can calculate the value, it’s really exciting. I’ve seen so many examples of that now over the years that I’m really excited about the future. I think we can continue to improve and to apply that. And it’s really this iterative process where the first time you turn the crank and see this result, it’s very exciting and it makes you want to do it again and again. And as you do that, your data gets better, your techniques get better, your infrastructure gets better, and you start to see things go faster and faster. We’re seeing examples of that today with a lot of our clients, and that’s something I’m really excited about. The second thing I’m really optimistic about is that ethical AI and the concerns around it are really top of mind, both for consumers and for governments. Perhaps not as much the US government, although recently there’s been more movement there, but in other countries, Canada, Europe, even the Middle East, there’s a lot of proactive effort from the government side to really define the principles, again at a societal level, around AI. And I think that’s really encouraging, because to me it means that people are starting to understand how to define this and that it’s important. I’m also seeing it at the level of CEOs and boards within very large corporations because, again, they’re worried about risk and they’re worried about brand trust. They know that AI has both the potential and the power to be very valuable, but also very, very risky if they aren’t managing these things. So I’ve seen an increase in that dimension. And I think you can point to a few very public events that have occurred that explain why these issues are top of mind: things like data breaches, data misuse by certain organizations and social media outlets, and people being put in front of Congress to explain themselves. I mean, these are things that no CEO or board wants to be a part of. So a lot of these challenges have now been recognized, and organizations are starting to invest in making sure that they can get the right outcomes from these systems and do it in a safe way. So that’s why I’m excited about it. I’m seeing both of those trends increase.

Guy Nadivi: That is all encouraging. Nevertheless, as a corollary to my last question, I’ve also got to ask what leaves you most concerned about AI and machine learning?

Matt Sanchez: Yeah. So a couple of things. The first one is inflated expectations. AI is not a magic wand to solve all your past data sins is what I like to tell people. So back to my comment on garbage in, garbage out, if your data is not nutritious and digestible, AI is not going to solve that for you magically. And it often comes down to whether the information you believe you have is subjective. A simple example of this would be if I’m trying to solve an image classification problem, where I want the machine to tell me if the image I’m looking at is one thing or another. If I put two images in front of two different human experts and those two experts disagree on the classification, then effectively what we have is highly subjective information, and it’s likely that AI is not going to provide a whole lot of value there. It might. AI could provide some additional information that could help those human experts, but it probably isn’t a situation where it can automate that decision in place of human intervention. So we always have to remember that AI is not a magic wand, but it can certainly help. It can potentially uncover some of those ambiguities and actually improve upon them, and make your data better and your processes better. But inflated expectations is the one thing that always has me worried. The second thing is over-hyped fears. As we said, I think we do have the ability to put the right guardrails around AI. We do have the ability to measure things like explainability and robustness and bias. And I think corporations will do this because it is a brand trust issue and it is a legal compliance issue. As for the fears that corporations are going to start using AI to somehow abuse people’s information, there are real situations where that happens, and a lot of the time it’s an unintended side effect. And I think that’s the challenge. That’s what we need to really focus on: it’s not necessarily ill intent, although that does happen, of course. There are always bad actors, and I think we all hope they are the exception and not the rule. But given that there is a way to measure these things, I think we have the ability to actually put the guardrails around these systems. And I think it’s important for us to work with leaders, leaders in government and business leaders, to really make sure that those practices, those controls, are put in place. But those are the things that worry me the most, that expectations are inflated and that fears are somewhat over-hyped, sometimes in a science fiction type of way. There are some real fears, though. There are some real issues that have occurred, bias-related issues, where it’s almost what we like to think of as this fairness through unawareness fallacy, which basically says, “Well, of course I’m making a fair decision, because the system doesn’t even understand a concept like gender or age or ethnicity.” But what we actually can prove is that sometimes those systems, even though you’re not explicitly giving them that information, because of the way machine learning works, they sometimes learn those patterns and can develop biases that you don’t want. And so the idea that you can just be unaware of these things, and that makes you fair, is actually false. And I think it’s those types of understandings that can overcome some of those fears.
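
The “fairness through unawareness” fallacy is easy to demonstrate with synthetic data: below is a minimal Python sketch, using scikit-learn and made-up numbers, in which a model is never shown the protected attribute yet still produces very different outcomes by group because a correlated proxy feature carries the signal.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000

    # Protected attribute (e.g., a demographic group); never given to the model.
    group = rng.integers(0, 2, n)
    # A seemingly neutral feature that happens to correlate with the group (a proxy).
    proxy = group + rng.normal(0, 0.5, n)
    income = rng.normal(50, 10, n)
    # Historical outcomes already reflect the proxy, so bias is baked into the labels.
    label = ((0.05 * income - 1.5 * proxy + rng.normal(0, 1, n)) > 0).astype(int)

    X = np.column_stack([proxy, income])          # 'group' is deliberately excluded
    model = LogisticRegression().fit(X, label)
    pred = model.predict(X)

    # Despite the "unawareness", predicted approval rates differ sharply by group.
    for g in (0, 1):
        print(f"group {g}: predicted approval rate {pred[group == g].mean():.2f}")

This is why measuring outcomes by group, rather than simply removing sensitive columns, is the kind of guardrail being described here.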

Guy Nadivi: Matt, for the CIOs, CTOs and other IT executives listening in, what is the one big must-have piece of advice you’d like them to take away from our discussion with regard to deploying AI & machine learning at their organization?

Matt Sanchez: Yeah. So really I would say it like this. Start with the business outcome in mind. Set realistic expectations with the business around what that outcome is going to look like. Make sure you add explainability and measurement as first-class requirements of these systems. So with the business outcome in mind, also ask: how are we going to measure it? Do we have the right feedback loop in the system to really measure this? Make that a requirement of the system, not an afterthought. And be prepared to iterate, to deliver incremental value, meaning you’re not going to get it right the first time. You’re going to have to iterate. You’re going to learn a tremendous amount every time you turn the crank on these systems, and they do improve over time. We like to say that with AI the first day is the worst day, meaning the very first version of your system is probably the worst it’s ever going to be. And this is somewhat unique about AI systems: they improve with time, they improve with that feedback loop being put into operation. That’s very unusual in the IT world, because most of the IT systems we build realize their maximum value on day one, and it then declines over time. AI works the opposite way, or it should. So a big part of this is to iterate. Think of it as an iterative process. Start small, and stair-step your way towards incremental business value.
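
A minimal sketch of what treating measurement as a first-class requirement might look like in code, assuming a hypothetical KPI name and a target agreed with the business up front; the classes and numbers are illustrative, not part of any specific product.

    from dataclasses import dataclass, field
    from datetime import date
    from typing import List

    @dataclass
    class IterationRecord:
        """One turn of the crank: a model version and the business KPI it delivered."""
        version: str
        deployed: date
        kpi_name: str
        kpi_value: float

    @dataclass
    class OutcomeTracker:
        target: float                               # the outcome agreed with the business up front
        history: List[IterationRecord] = field(default_factory=list)

        def record(self, rec: IterationRecord) -> None:
            self.history.append(rec)

        def improving(self) -> bool:
            """True if the KPI moved in the right direction on the last iteration."""
            return len(self.history) >= 2 and self.history[-1].kpi_value > self.history[-2].kpi_value

    # Hypothetical usage: tracking reduction in mean time to resolution across releases.
    tracker = OutcomeTracker(target=0.30)
    tracker.record(IterationRecord("v1", date(2020, 1, 15), "mttr_reduction", 0.08))
    tracker.record(IterationRecord("v2", date(2020, 4, 1), "mttr_reduction", 0.17))
    print(tracker.improving(), tracker.history[-1].kpi_value >= tracker.target)

The point of wiring this in from day one is exactly the feedback loop described above: each iteration gets judged against the business outcome, not just a validation-set metric.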

Guy Nadivi: All right. Looks like that’s all the time we have for this episode of Intelligent Automation Radio. Matt, it’s been a real treat having a marquee name in the field of AI on the podcast today. I think you’ve really shed some light for our listeners on the black box that is artificial intelligence. And I suspect you’ve provided many of them with new data points to factor into their thinking for their own AI projects. Thank you very much for being on the show today.

Matt Sanchez: Well, thank you, Guy, and appreciate the time and look forward to hearing the podcast when it goes live and following other topics in the space that you’re interested in.

Guy Nadivi: Matt Sanchez, Founder and Chief Technology Officer of CognitiveScale, an Austin, Texas company. Thank you for listening, everyone. And remember, don’t hesitate, automate.



Matt Sanchez

Founder & CTO at CognitiveScale, and Former Leader of IBM Watson Labs.

Matt Sanchez is the Founder and Chief Technology Officer at CognitiveScale where he leads products and technology including the award-winning Cortex platform. As the technology visionary at CognitiveScale, Matt has led the development of the world's 4th largest AI patent portfolio and has helped clients realize the business value of Trusted AI across financial services, healthcare, and digital commerce industries. Before starting CognitiveScale, Matt was the leader of IBM Watson Labs and was the first to apply IBM Watson to the financial services and healthcare industries. Before joining IBM, Matt was Chief Architect and employee number three at Webify, which was acquired by IBM in 2006. Matt earned his BS degree in Computer Science from the University of Texas at Austin in 2000.

Matt can be reached at:

LinkedIn:  https://www.linkedin.com/in/mbsanchez/

CognitiveScale’s Cortex Certifai:  https://vimeo.com/410902149

Build Trusted AI with Cortex Certifai: https://info.cognitivescale.com/build-trusted-ai-with-cortex-certifai

Try Cortex Certifai: https://www.cognitivescale.com/try-certifai/

Cortex Certifai Toolkit: https://www.cognitivescale.com/download-certifai/

Quotes

“You can almost think of AI as a supply chain problem where the upstream work is really around data. And so data quality becomes really key.”

“…if AGI (Artificial General Intelligence) is supposed to be creating a system, creating technology that can really work like the human brain, meaning think and learn the way that the human brain does, then we have a long way to go, like maybe 50 plus years of more work to do before we even get close.”

“If AGI really at an algorithmic level is supposed to be about generalization, so generalizing, showing that one algorithm can solve for multiple tasks, different types of tasks without having to be explicitly retrained, if you will, or rebuilt to solve those tasks, then I think we’re actually on our way.”

“AI is not a magic wand to solve all your past data sins is what I like to tell people. So back to my comment on garbage in, garbage out, if your data is not nutritious and digestible, AI is not going to solve that for you magically.”

About Ayehu

Ayehu’s IT automation and orchestration platform powered by AI is a force multiplier for IT and security operations, helping enterprises save time on manual and repetitive tasks, accelerate mean time to resolution, and maintain greater control over IT infrastructure. Trusted by hundreds of major enterprises and leading technology solution and service partners, Ayehu supports thousands of automated processes across the globe.

GET STARTED WITH AYEHU INTELLIGENT AUTOMATION & ORCHESTRATION PLATFORM:

News

Ayehu NG Trial is Now Available
SRI International and Ayehu Team Up on Artificial Intelligence Innovation to Deliver Enterprise Intelligent Process Automation
Ayehu Launches Global Partner Program to Support Increasing Demand for Intelligent Automation
Ayehu Wins Stevie Award in 2018 International Business Awards
Ayehu Automation Academy is Now Available

Links

Episode #1: Automation and the Future of Work
Episode #2: Applying Agility to an Entire Enterprise
Episode #3: Enabling Positive Disruption with AI, Automation and the Future of Work
Episode #4: How to Manage the Increasingly Complicated Nature of IT Operations
Episode #5: Why your organization should aim to become a Digital Master (DTI) report
Episode #6: Insights from IBM: Digital Workforce and a Software-Based Labor Model
Episode #7: Developments Influencing the Automation Standards of the Future
Episode #8: A Critical Analysis of AI’s Future Potential & Current Breakthroughs
Episode #9: How Automation and AI are Disrupting Healthcare Information Technology
Episode #10: Key Findings From Researching the AI Market & How They Impact IT
Episode #11: Key Metrics that Justify Automation Projects & Win Budget Approvals
Episode #12: How Cognitive Digital Twins May Soon Impact Everything
Episode #13: The Gold Rush Being Created By Conversational AI
Episode #14: How Automation Can Reduce the Risks of Cyber Security Threats
Episode #15: Leveraging Predictive Analytics to Transform IT from Reactive to Proactive
Episode #16: How the Coming Tsunami of AI & Automation Will Impact Every Aspect of Enterprise Operations
Episode #17: Back to the Future of AI & Machine Learning
Episode #18: Implementing Automation From A Small Company Perspective
Episode #19: Why Embracing Consumerization is Key To Delivering Enterprise-Scale Automation
Episode #20: Applying Ancient Greek Wisdom to 21st Century Emerging Technologies
Episode #21: Powering Up Energy & Utilities Providers’ Digital Transformation with Intelligent Automation & AI
Episode #22: A Prominent VC’s Advice for AI & Automation Entrepreneurs
Episode #23: How Automation Digitally Transformed British Law Enforcement
Episode #24: Should Enterprises Use AI & Machine Learning Just Because They Can?
Episode #25: Why Being A Better Human Is The Best Skill to Have in the Age of AI & Automation
Episode #26: How To Run A Successful Digital Transformation
Episode #27: Why Enterprises Should Have A Chief Automation Officer
Episode #28: How AIOps Tames Systems Complexity & Overcomes Talent Shortages
Episode #29: How Applying Darwin’s Theories To AI Could Give Enterprises The Ultimate Competitive Advantage
Episode #30: How AIOps Will Hasten The Digital Transformation Of Data Centers
Episode #31: Could Implementing New Learning Models Be Key To Sustaining Competitive Advantages Generated By Digital Transformation?
Episode #32: How To Upscale Automation, And Leave Your Competition Behind
Episode #33: How To Upscale Automation, And Leave Your Competition Behind
Episode #34: What Large Enterprises Can Learn From Automation In SMB’s
Episode #35: The Critical Steps You Must Take To Avoid The High Failure Rates Endemic To Digital Transformation
Episode #36: Why Baking Ethics Into An AI Project Isn't Just Good Practice, It's Good Business
Episode #37: From Witnessing Poland’s Transformation After Communism’s Collapse To Leading Digital Transformation For Global Enterprises
Episode #38: Why Mastering Automation Will Determine Which MSPs Succeed Or Disappear
Episode #39: Accelerating Enterprise Digital Transformation Could Be IT’s Best Response To The Coronavirus Pandemic
Episode #40: Key Insights Gained From Overseeing 1,200 Automation Projects That Saved Over $250 Million
Episode #41: How A Healthcare Organization Confronted COVID-19 With Automation & AI
Episode #42: Why Chatbot Conversation Architects Might Be The Unheralded Heroes Of Digital Transformation
Episode #43: How Automation, AI, & Other Technologies Are Advancing Post-Modern Enterprises In The Lands Of The Midnight Sun
Episode #44: Sifting Facts From Hype About Actual AIOps Capabilities Today & Future Potential Tomorrow

Follow us on social media

Twitter: twitter.com/ayehu_eyeshare

LinkedIn: linkedin.com/company/ayehu-software-technologies-ltd-/

Facebook: facebook.com/ayehu

YouTube: https://www.youtube.com/user/ayehusoftware

Disclaimer Note

Neither the Intelligent Automation Radio Podcast, Ayehu, nor the guest interviewed on the podcast are making any recommendations as to investing in this or any other automation technology. The information in this podcast is for informational and entertainment purposes only. Please do your own due diligence and consult with a professional adviser before making any investment.