Intelligent Automation Radio is the #1 podcast for IT executives seeking insights on the impact and opportunities for innovation that automation is delivering to businesses around the world. Featuring thought leaders in AI, Machine Learning, Orchestration, Security Automation, and the Future of Work.

September 1, 2019    Episodes

Episode #24: Should Enterprises Use AI & Machine Learning Just Because They Can?

In today’s episode of Ayehu’s podcast we interview Karen Hao – Artificial Intelligence Reporter for MIT Technology Review.

The legendary Yogi Berra once said “The future ain’t what it used to be.” We’re living in that future Yogi alluded to, & his quote accurately reflects the surprising ways advances in AI & Machine Learning are unfolding.  For instance, nobody envisioned that one day Pavlovian techniques used to condition animal behavior might be the model for training a computer algorithm.  Yet that approach powered the breakthrough victory by AI firm DeepMind’s AlphaGo program in 2016 when it defeated world champion Lee Sedol in the ancient game of Go.

What specific impact will these technological advances have on us as individuals, and on the enterprises seeking competitive advantages in the marketplace?  To explore this and other questions we speak to Karen Hao, AI Reporter for MIT Technology Review, a publication claiming to be “the oldest technology magazine in the world.”  Karen’s data-driven reporting focuses on demystifying the recondite world of AI & Machine Learning.  We tap into her extensive insights to learn why, despite the proliferation of “deep fakes” & AI bias, she’s so optimistic about the field, and what commonly overlooked aspect of AI & Machine Learning IT management should focus on before deploying it.



Guy Nadivi: Welcome, everyone. My name is Guy Nadivi and I’m the host of Intelligent Automation Radio. Our guest on today’s episode is Karen Hao, Artificial Intelligence Reporter for MIT Technology Review. Karen’s focus is on the ethics and social impact of AI, as well as its applications for social good, and she also writes an AI newsletter called The Algorithm, which was a 2019 Webby Award nominee. Previously, Karen was a reporter and data scientist at Quartz, a digital news site, and before that she worked as an application engineer at the first startup to spin out of Google X. Karen is a real expert on the current state of affairs in artificial intelligence, something that we’re very interested in, and we’re thrilled to have her on our show. Karen, welcome to Intelligent Automation Radio.

Karen Hao: Thank you so much for having me, Guy.

Guy Nadivi: Karen, earlier this year you published a piece about a study you conducted which covered 25 years of AI research, encompassing nearly 17,000 articles to determine where AI is going next, and you predicted that deep learning, the predominant class of machine learning over the last decade, may actually soon be on the way out, while something called reinforcement learning was gaining momentum in its stead. Can you please tell us a bit about reinforcement learning and its applications in the enterprise?

Karen Hao: Yeah, so I wanted to clarify your point, which is that reinforcement learning is actually a sub-category of deep learning. When I sort of broadly claimed that deep learning is on its way out, I was referring very much to the research world, and when I conducted the analysis, you can see that there are sort of waves of interest in different types of techniques. Deep learning is just one category of AI research, but there are other categories that have sort of risen and fallen in interest over time.

Karen Hao: The prediction comes from the idea that deep learning is really good at one thing: it’s a class or a category of statistical methods for finding patterns in data, specifically correlations in data. There’s a rumbling in the AI research community that this can only get us so far because correlations aren’t really, ultimately, the only element of human intelligence that we need to be able to replicate. So the claim that I made was sort of that deep learning is probably not going to be the star of the show in a decade, and there will be other techniques and methods that will start to look at replicating other aspects of human intelligence and take it from there.

Karen Hao: But reinforcement learning is this interesting thing in that it is still a sub-category of deep learning, but it started gaining popularity more recently, when DeepMind’s program AlphaGo successfully defeated the greatest human Go player, and that was sort of a really big milestone achievement, because up until then the idea of reinforcement learning was kind of cast aside as this silly thought experiment. But the theory behind it is that when you train animals, like in Pavlov’s dog scenario for example, you incentivize behaviors that you want by giving rewards, and you disincentivize behaviors that you don’t want by giving punishments. Reinforcement learning is like the software equivalent of that, where you give points to an algorithm when it starts moving towards a goal that you want it to achieve, and you take away points when it moves away from that goal.

Karen Hao: Before AlphaGo had this milestone achievement of beating the human Go player, people didn’t really understand how to make reinforcement learning work. But since that milestone, there has been a rush of interest in trying to use reinforcement learning for various applications, including self-driving cars. In the self-driving car scenario, you would simulate a car, essentially, through trial and error, figuring out how to avoid crashes, and then ideally whatever algorithm you have at the end of that trial and error process, you can then deploy to a real car on the road, and that car would be able to navigate the road safely. Those are the two overarching themes of that question.
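To make the reward-and-punishment mechanics Karen describes concrete, here is a minimal sketch of tabular Q-learning, the textbook reinforcement learning algorithm, on a toy one-dimensional corridor. Everything in it (the environment, the reward values, the hyperparameters) is an illustrative assumption, not anything from DeepMind’s actual systems.

```python
# A minimal sketch of reinforcement learning (tabular Q-learning) on a toy
# corridor: the agent starts in the middle, earns +1 for reaching the goal
# on the right and -1 for falling off the left edge. All names and values
# here are illustrative assumptions.
import random

N_STATES = 5          # positions 0..4; 0 = "punishment", 4 = "reward"
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q[state][action_index] estimates the long-term value of each move.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(500):
    state = 2                                   # start in the middle
    while state not in (0, N_STATES - 1):
        # Epsilon-greedy: mostly exploit the best-known move, sometimes explore.
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] > Q[state][1] else 1
        next_state = state + ACTIONS[a]
        # "Points" for reaching the goal, "punishment" for the bad edge.
        if next_state == N_STATES - 1:
            reward = 1.0
        elif next_state == 0:
            reward = -1.0
        else:
            reward = 0.0
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        best_next = max(Q[next_state])
        Q[state][a] += ALPHA * (reward + GAMMA * best_next - Q[state][a])
        state = next_state

print(Q)  # after training, right-moving actions carry the higher values
```

In larger systems the lookup table is replaced by a deep neural network that estimates those values, which is why reinforcement learning is generally treated as a sub-category of deep learning, as Karen notes above.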

Guy Nadivi: Now, you mentioned a rush of interest lately in reinforcement learning and there’s been a rush of interest in AI in general. You’ve proposed a checklist of five questions one can use to cut through AI hype, which I think is a very valuable reference tool for evaluators at corporate enterprises and other organizations. I won’t recite all five, but I am particularly curious about your fifth checklist question, which is, “Should the company be using machine learning to solve this problem?” The very first thing you wrote about answering this question is that it’s “more of a judgment call. Even if a problem can be solved with machine learning, it’s important to question whether it should be.” So Karen, in your opinion, what are the criteria that a corporate enterprise should use to make its own judgment call on whether or not to use machine learning to solve a particular problem?

Karen Hao: Yeah, that’s a good question. I guess I’ll start with an example of where this has become particularly relevant. Face recognition is obviously becoming quite controversial, and initially, when face recognition was first deployed as a technology, I think a lot of companies that were developing face recognition platforms simply asked the question, how can we build the best face recognition platform ever? How do we improve accuracy? A lot of human rights activists and civil rights activists have pointed out that even if you make a highly, highly accurate face recognition platform, that doesn’t necessarily mean that it won’t infringe on people’s civil liberties. In fact, having a highly accurate face recognition platform can do a lot to constrain people’s civil liberties, in ways we’re already seeing both in the US and particularly in China.

Karen Hao: So I think when I wrote that fifth tip, or fifth question, in my Cutting Through AI Hype article, I was really thinking about how, sometimes, if you have a technology hammer and you’re just looking for nails to hit, you don’t take a step back to ask: wait, is this actually even a challenge that we should be tackling with technology, or should we consider another approach that is perhaps more ethical?

Guy Nadivi: So speaking of social impact, given your level of immersion in the fields of AI and machine learning and your focus on their societal effects, I’m very curious: do you think these technologies will ultimately augment more people or replace more people?

Karen Hao: This question is interesting, because when talking about the future of work and the impact that AI will have on it, I think we often reduce the complexity a bit, into asking how many jobs will be replaced versus how many people will just be better at their jobs because they have an AI assistant or something like that. But recently at Tech Review we actually hosted a conference dedicated to the future of work, and a lot of the researchers who were speaking at the conference said that AI is fundamentally just going to change the nature of work. So it’s not necessarily that it’s replacing jobs per se. There will be jobs that go away, but there will also be many, many jobs created because of AI, and work will just look different.

Karen Hao: For example, manufacturing jobs might go away, but data labeling jobs have become a huge thing because AI algorithms need lots and lots of data and you often need to label that data or clean that data or do some kind of pre-processing before you feed it into the algorithm. Now there are whole industries that have bloomed because of that.
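As a rough illustration of the labeling and pre-processing work Karen describes, here is a minimal sketch of raw records being cleaned and assigned labels before training; the field names, the records, and the keyword rule standing in for a human labeler are all hypothetical.

```python
# A hypothetical sketch of the data labeling/cleaning step Karen describes:
# raw records are filtered, normalized, and given labels before an algorithm
# ever sees them. Field names and label values are illustrative only.
raw_records = [
    {"text": "  Great product, works as advertised! ", "label": None},
    {"text": "", "label": None},                    # empty record: drop it
    {"text": "Terrible, broke after one day", "label": None},
]

def clean(record):
    record["text"] = record["text"].strip().lower()
    return record

# Step 1: clean the text and discard unusable rows.
cleaned = [clean(r) for r in raw_records if r["text"].strip()]

# Step 2: a human labeler (or labeling service) assigns the target value;
# here a trivial keyword rule stands in for that human judgment.
for r in cleaned:
    r["label"] = "positive" if "great" in r["text"] else "negative"

print(cleaned)  # only now is the data ready to feed into a model
```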

Karen Hao: In thinking about the social impact, I think it’s kind of tough to answer whether it will augment or replace. But I will say that I think it is within the technologists’ hands and within the consumer’s hands to make sure that AI will be more beneficial to people. I think people should feel that they have agency to partake in this revolution and make sure that AI will be augmenting people and not replacing them.

Guy Nadivi: Let’s switch gears a little bit, Karen. You’ve written recently about the use of AI to manipulate media and create what are called deep fakes. And that brought to mind something Vladimir Putin said in 2017, “Artificial intelligence is the future not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict, and whoever becomes the leader in this sphere will become the ruler of the world.” Given the advances in AI you’ve been reporting on, do you agree with his statement?

Karen Hao: I hesitate to, because it kind of feeds this narrative, which is pretty prevalent right now, of an AI arms race that all these countries are engaged in, where whoever reaches better AI capabilities faster will be capable of obliterating essentially everyone else, or perhaps just ensuring superior dominance, or global hegemony, or whatever term you want to use. I don’t think that’s a very productive narrative. I think it’s a self-fulfilling prophecy. If countries do believe that it’s happening, then of course the arms race will start and accelerate, but it doesn’t have to be that way.

Karen Hao: The research community in AI is extremely global and extremely open. The community has always been founded on the idea of open collaboration, open sourcing, and a lot of the advancements happening right now, or whose products we already see from prior work, came from significant collaboration across borders, particularly among American scientists and other scientists who immigrated to the US and work at American universities, or Chinese scientists and American scientists at some of the biggest tech giants in the world. So I hesitate to agree, in the sense that it is a self-fulfilling prophecy, and it would be quite detrimental, I think, to the research and to our world if that prophecy were bought into.

Guy Nadivi: So with some of these concerns that are cropping up about the misuse of AI and machine learning, do you see any economic, legal, or political headwinds that could slow adoption of these advanced technologies? Or is the genie out of the bottle at this point to an extent that they just can’t be stopped and perhaps not even effectively regulated?

Karen Hao: I think there are pathways forward for effective regulation at many different levels of government: the local, the national, the international. In the US particularly, there is a small contingent of Congress members who are currently looking very carefully at this issue and trying to understand the best way they can regulate the technology to make sure that it is a force for good, beneficial for everyone, an equalizer for everyone, while also not hindering innovation. It’s a particularly tricky problem because AI is what they call a migratory technology, so it is a tool that can be used in so many different ways across so many different industries. You have to slice the regulatory knife very finely to make sure you’re cutting it in the right direction, if that makes sense.

Karen Hao: One of the recent bills that came out of this, called the Algorithmic Accountability Act, I think is a really great example of an initial effort to start regulating this effectively. Essentially, what they proposed is that all companies engaged in creating automated decision-making systems should be evaluating them to make sure that they don’t have bias or other negative unintended consequences. I think that’s an interesting and nuanced approach, in that they aren’t necessarily saying that in this particular industry, this particular type of algorithm should be used in XYZ way, but rather that, across all of these industries, there are many, many different types of automated decision-making systems, and you need to audit them to make sure that they are not unintentionally harming people, particularly vulnerable populations.
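The kind of audit the bill envisions can start very simply. Below is a minimal sketch, with made-up numbers, of one common fairness check: comparing an automated decision-making system’s favorable-outcome rates across demographic groups. The audit log, group names, and the four-fifths threshold (borrowed from US employment practice) are illustrative assumptions, not anything prescribed by the Algorithmic Accountability Act itself.

```python
# A minimal sketch of one bias audit an automated decision-making system
# might undergo: compare favorable-outcome rates across groups (a
# "demographic parity" check). The records and the 0.8 threshold are
# illustrative assumptions.
from collections import defaultdict

decisions = [                      # made-up audit log: (group, approved?)
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved

rates = {g: approvals[g] / totals[g] for g in totals}
print(rates)                       # e.g. {'group_a': 0.75, 'group_b': 0.25}

# Flag the system if any group's approval rate falls below 80% of the
# best-treated group's rate; a signal to investigate, not a verdict.
best = max(rates.values())
flagged = [g for g, r in rates.items() if r < 0.8 * best]
print("needs review:", flagged)
```

A real audit would go further (checking error rates per group, not just approval rates, and testing for proxies of protected attributes), but even a check this simple can surface the unintended harms Karen mentions.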

Karen Hao: So yes, I do think that there are regulatory pathways forward and it’s an ongoing discussion. I think that there are a lot of policymakers that are thinking about it in the right way.

Guy Nadivi: You’ve argued in one of your articles earlier this year that, “There is no such thing as a tech person in the age of AI.” You also revealed that part of your driving mission with MIT Technology Review’s AI newsletter, The Algorithm, was to dismantle our outdated notions that technology is for the tech people and social problems are for the humanities people. In other words, engineers could use a good dose of liberal arts education and the non-techies need to brush up on their technical skills.

Guy Nadivi: Now, not to perpetuate any stereotypes, but most, though not all, of the talented technology people I know are only interested in technology, and most of the non-techies I know are too intimidated by, or uninterested in, the nuts and bolts of technology to dive into it and get their hands dirty. How do you bridge this divide?

Karen Hao: I actually had a really great conversation with a researcher today, just about 30 minutes ago, about the work they’re currently doing to educate kids, specifically middle school students, about AI ethics. They are holding hands-on workshops with 10-year-olds about algorithmic bias and about the fact that algorithms are opinions that you can actually change. I think that’s one of the ways we can really bridge that divide. There’s this false dichotomy, not just in our society but in many societies, that we’re ingrained with from a very early age: you can either be a tech person or you can be a humanities person. I’m Chinese American and I also speak Mandarin, and in Mandarin there’s a phrase for exactly this, that you are either a tech person or you’re a humanities person, and people will ask you, when you start demonstrating particular interests in certain subjects, “Oh, are you going to take the humanities track or the tech track in the future?”

Karen Hao: I think early-stage education, such as what that researcher is doing with middle school students to dispel that notion, is a great way to make sure that we are holistically educating kids to have interests in both. But at an older level, I think there are a lot of opportunities for universities to integrate ethics curriculum and humanities curriculum directly into their engineering requirements, and vice versa, to integrate intro-to-coding courses into the humanities curriculum.

Karen Hao: I think there are two main efforts right now that I see as pretty exciting in that regard, which are the new College of Computing at MIT and the Institute for Human-Centered AI at Stanford. Both are trying to be as interdisciplinary as possible in their approach to teaching the engineering/AI curriculum, and they’re being really intentional about developing the curriculum so that every class has ethics imbued into it. It’s not just a tacked-on module at the end. I don’t know the exact details of how that would actually work, but just in general, I think educating people and breaking down the boundaries between STEM and humanities is step one, and then really educating people to understand the interplay between the two is the next step.

Guy Nadivi: Overall, Karen, given everything you see and report on, what makes you most optimistic about AI & machine learning?

Karen Hao: I think the thing I’m optimistic about is that the ambition is pure, and the people working towards it are genuinely good and really care. What I mean by that is that the ambition for AI really is to recreate intelligence that can help us solve problems we aren’t able to solve by ourselves. That’s the grand ambition of the field. Part of that is about tackling problems like climate change, hunger, and poverty; if we could alleviate those issues, it would bring so much good to the world and improve the quality of life for so many people.

Karen Hao: So I’m optimistic about the fact that when I talk to researchers in this space, they do have that in mind. That is the driving motivation for many of them. Now that we’ve seen that there are challenges in the way AI is currently being developed, there’s been a pretty strong reaction toward correcting the path and making sure that they get it right. I’m optimistic that the spirit of the field is in the right place and that people are actively working to get us back in alignment with it.

Guy Nadivi: Of course, I’m obligated to ask you, as a corollary to my last question, what makes you most concerned about AI and machine learning?

Karen Hao: Just all of the things that we currently see in the news constantly: AI bias, abuses of deep fakes, other things that we just have not… I think in the early stages of the field, people were just so excited about the pace of innovation that it was hard to really step back and understand the gravity of some of the technology being developed. I hope that we slow down a bit, and as I said, I am optimistic that that will happen.

Guy Nadivi: Karen, for the CIOs, CTOs, and other IT executives listening in, what is the one big must-have piece of advice you’d like them to take away from our discussion with regard to deploying AI and machine learning at their organizations?

Karen Hao: That’s a great question. This is the most obvious answer, but I have to say it anyway: really, really think about the ethics. Make sure that whatever you’re doing, ethics is there from the very first meeting. You should be saying: okay, this is a problem we want to tackle through machine learning; what are some of the potential consequences that could arise if we didn’t do it right, and how do we make sure to avoid those and do it right? I think a lot of organizations are starting to do this, and I hope it will just become second nature, so that in the future it will no longer even be a question that that’s how you approach the conversation about machine learning.

Guy Nadivi: I think that would be a very positive trend. All right, it looks like that’s all the time we have for this episode of Intelligent Automation Radio. Karen, you’re the first AI reporter we’ve ever had on the podcast, and I really appreciate you coming on the show to share your insights with us on some of the more intriguing issues the fields of AI and machine learning are currently facing. It’s been great having you on the show.

Karen Hao: Thank you so much, Guy.

Guy Nadivi: Karen Hao, Artificial Intelligence Reporter for MIT Technology Review. Thank you for listening, everyone, and remember, don’t hesitate, automate.



Karen Hao

Artificial Intelligence Reporter for MIT Technology Review.

Karen Hao is the Artificial Intelligence Reporter for MIT Technology Review, where she covers the latest research developments, ethics, and social impact of the technology. She also writes the AI newsletter, The Algorithm, which was nominated for a Webby in 2019. Prior to joining the publication, she was a reporter and data scientist at Quartz and an application engineer at the first startup to spin out of Google X.

Karen can be found at:

Email: Karen.Hao@TechnologyReview.com

Twitter: https://twitter.com/_KarenHao

Website: https://www.karendhao.com/

LinkedIn: https://www.linkedin.com/in/karendhao/

Quotes

“There's a rumbling in the AI research community that this can only get us so far because correlations aren't really, ultimately, the only element of human intelligence that we need to be able to replicate.”

"A lot of human rights activists and civil rights activists have pointed out that even if you make a highly, highly accurate face recognition platform, that doesn't necessarily mean that it won't infringe on people's civil liberties. Actually, in fact, having a highly accurate face recognition platform can do a lot to constrain people's civil liberties in ways that are being done both in the US and particularly in China.”

“…I think it is within the technologists' hands and within the consumer's hands to make sure that AI will be more beneficial to people. I think people should feel that they have agency to partake in this revolution and make sure that AI will be augmenting people and not replacing them.”

“The research community in AI is extremely global and extremely open. The community has always been founded on the idea of open collaboration, open sourcing, and a lot of the advancements happening right now, or whose products we already see from prior work, came from significant collaboration across borders, particularly among American scientists and other scientists who immigrated to the US and work at American universities, or Chinese scientists and American scientists at some of the biggest tech giants in the world.”

“Now that we’ve seen that there are challenges in the way AI is currently being developed, there’s been a pretty strong reaction toward correcting the path and making sure that they get it right.”

About Ayehu

Ayehu’s IT automation and orchestration platform powered by AI is a force multiplier for IT and security operations, helping enterprises save time on manual and repetitive tasks, accelerate mean time to resolution, and maintain greater control over IT infrastructure. Trusted by hundreds of major enterprises and leading technology solution and service partners, Ayehu supports thousands of automated processes across the globe.

GET STARTED WITH AYEHU INTELLIGENT AUTOMATION & ORCHESTRATION PLATFORM:

News

Ayehu NG Trial is Now Available
SRI International and Ayehu Team Up on Artificial Intelligence Innovation to Deliver Enterprise Intelligent Process Automation
Ayehu Launches Global Partner Program to Support Increasing Demand for Intelligent Automation
Ayehu Wins Stevie Award in 2018 International Business Awards
Ayehu Automation Academy is Now Available

Links

Episode #1: Automation and the Future of Work
Episode #2: Applying Agility to an Entire Enterprise
Episode #3: Enabling Positive Disruption with AI, Automation and the Future of Work
Episode #4: How to Manage the Increasingly Complicated Nature of IT Operations
Episode #5: Why your organization should aim to become a Digital Master (DTI) report
Episode #6: Insights from IBM: Digital Workforce and a Software-Based Labor Model
Episode #7: Developments Influencing the Automation Standards of the Future
Episode #8: A Critical Analysis of AI’s Future Potential & Current Breakthroughs
Episode #9: How Automation and AI are Disrupting Healthcare Information Technology
Episode #10: Key Findings From Researching the AI Market & How They Impact IT
Episode #11: Key Metrics that Justify Automation Projects & Win Budget Approvals
Episode #12: How Cognitive Digital Twins May Soon Impact Everything
Episode #13: The Gold Rush Being Created By Conversational AI
Episode #14: How Automation Can Reduce the Risks of Cyber Security Threats
Episode #15: Leveraging Predictive Analytics to Transform IT from Reactive to Proactive
Episode #16: How the Coming Tsunami of AI & Automation Will Impact Every Aspect of Enterprise Operations
Episode #17: Back to the Future of AI & Machine Learning – SRI International’s Manish Kothari
Episode #18: Implementing Automation From A Small Company Perspective – IVM’s Andy Dalton
Episode #19: Why Embracing Consumerization is Key To Delivering Enterprise-Scale Automation – Broadcom’s Andy Nallappan
Episode #20: Applying Ancient Greek Wisdom to 21st Century Emerging Technologies
Episode #21: Powering Up Energy & Utilities Providers’ Digital Transformation with Intelligent Automation & AI
Episode #22: A Prominent VC’s Advice for AI & Automation Entrepreneurs
Episode #23: How Automation Digitally Transformed British Law Enforcement – Crown Prosecution Service’s Mark Gray

Follow us on social media

Twitter: twitter.com/ayehu_eyeshare

LinkedIn: linkedin.com/company/ayehu-software-technologies-ltd-/

Facebook: facebook.com/ayehu

YouTube: https://www.youtube.com/user/ayehusoftware

Disclaimer Note

Neither the Intelligent Automation Radio Podcast, Ayehu, nor the guest interviewed on the podcast are making any recommendations as to investing in this or any other automation technology. The information in this podcast is for informational and entertainment purposes only. Please do your own due diligence and consult with a professional adviser before making any investment.