
Intelligent Automation Radio is the #1 podcast for IT executives seeking insights on the impact and opportunities for innovation that automation is delivering to businesses around the world. Featuring thought leaders in AI, Machine Learning, Orchestration, Security Automation, and the Future of Work.

July 15, 2021

Episode #69:  Why AI & ML Engineers Should Incorporate Value Sensitive Design Into Their Models

In today’s podcast, we interview Steven Umbrello, Managing Director at the Institute for Ethics and Emerging Technologies. 

Had the great modernist writer D. H. Lawrence lived in our time, perhaps he would have rephrased his famous observation that "ethics and equity and the principles of justice do not change with the calendar" as follows: "Ethics and equity and the principles of justice do not change with shifting technological paradigms." Today, many people realize that the shifting paradigms of AI, automation, and digital transformation will disrupt numerous human-involved processes, but few ponder how those disruptions will affect ethics and equity and principles of justice. Fewer still contemplate how to address these technoethical challenges, and what framework should be applied in doing so.

Enter Steven Umbrello, Managing Director at the Institute for Ethics and Emerging Technologies.  Steven emphatically advocates for Value Sensitive Design (VSD) as a systemic approach to ensuring human values are accounted for throughout a design process. Steven joins us to discuss his research, how to incorporate VSD into AI systems, and the negative ramifications of excluding VSD from the innovation process. 



Guy Nadivi: Welcome everyone. My name is Guy Nadivi, and I’m the host of Intelligent Automation Radio. Our guest on today’s episode is Steven Umbrello, Managing Director at the Institute for Ethics and Emerging Technologies. Steven is also Co-Editor-in-Chief of the International Journal of Technoethics, and Associate Editor of the quarterly peer-reviewed scientific journal Science and Engineering Ethics. As the pace of technology accelerates in automation, artificial intelligence, and other areas, it can be easy to overlook the accompanying ethical predicaments we face due to these advances. Nevertheless, digital transformation’s broad social impact demands that these issues be examined carefully and addressed properly. The high tech industry has an amazing track record when it comes to problem solving, and there’s no reason it can’t rise to this challenge. With that in mind, we’ve invited Steven onto the podcast to discuss some of the work he’s doing on these ethical challenges, and gain new insights on the intersection between high tech and humanity. Steven, welcome to Intelligent Automation Radio.

Steven Umbrello: Thank you for inviting me, Guy.

Guy Nadivi: Steven, what drew you to the field of technoethics?

Steven Umbrello: Well, I guess you can say that as a millennial I grew up on the cusp of technology being an integral part of my everyday life. I can still remember my first flip phone, and we’re pretty far removed from those days now, but we can still see how technologies incrementally build on their predecessor technologies like a scaffold on a high rise. I actually studied both history and philosophy simultaneously in university, and I think that having that historical perspective is quite useful in the philosophical realm, because it helps to give perspective. You’ve probably heard the saying that nobody sleepwalks into war, and something similar can be said of technologies: they don’t just stumble into existence either. They don’t come out of nothing. Each technology that we see today came about because of the foundations of the technologies that came before it. Like the ancient philosophers asking themselves the why of the world, I’m interested in doing the same. In our case, our world is markedly technological in comparison to other historical periods, or better still, as us philosophers of technology like to call it, our world is sociotechnical. Since we can’t really separate anymore the technological from the social, they’re kind of one and the same thing. So as someone who studies ethics, I like to explore questions like how can we design technologies to support rather than hinder human values? And all of these types of questions fall into the purview of a subfield in which I specialize, which is called engineering ethics.

Guy Nadivi: Okay. What is engineering ethics?

Steven Umbrello: Well, there’s a bunch of sub-fields in philosophy, with ethics being a relatively large umbrella of sub-disciplines that come underneath it. When most people think about ethics they think about morality and what people ought to do, meaning what they should do in certain circumstances, and this is generally a pretty good way of understanding ethics more broadly. So like I said, regarding our sociotechnical world, we can see how technologies have an inextricable impact on our daily lives, on what we find important in our lives, and on how we relate to each other. And all of this is based on how technology is designed. Engineering ethics, then, fundamentally looks at the practices, the day-to-day work of engineers, the nitty gritty, and shows how these activities shape our technologies and therefore our lives. What engineering ethics fundamentally explores, I guess, is how we can be responsible for our innovations, how to design technologies for the things that we find to be important in our lives, to mitigate the potential bad things that may emerge from their design and therefore of their use in the world.

Guy Nadivi: Now that’s a good segue into my next question, which is about your research, which primarily focuses on something called Value Sensitive Design. What is VSD?

Steven Umbrello: Sure. So I’m glad you used the acronym, VSD. It’s shorthand, instead of always saying Value Sensitive Design every time. Value Sensitive Design is an approach within engineering ethics, and there are many approaches, but Value Sensitive Design, or VSD, was developed in the early 90s, and it’s often described as a principled approach to technology design because at its core it centers human values rather than relegating them to afterthoughts, things like sustainability and human autonomy, safety, usability, accessibility, and so on. I won’t really go into the nitty gritty of what VSD is methodologically, but I will say that it is itself designed in such a way that it rejects value trade-offs, and these value trade-offs are quite popular in mainstream culture today. So think about the popular debate between security and privacy. This is something that a lot of people get passionate about, and I think they’re right to do so. This also goes for technologies as well. Executives may be tempted to scoff at important values like safety and sustainability and privacy, and the list goes on. And this is because these values are often construed as coming into conflict with, or, they would probably say, at the opportunity cost of economic values like profit. But VSD rejects this binary thinking and instead frames values as in tension with one another, or at least potentially in tension with one another, and not as mutually exclusive. This means that through creativity and salient design, many of these tensions can not only be resolved, but resolved in such a way that the values maximize each other. So long gone are the days where environmental sustainability comes at the cost of profit; rather, designing for the former may actually augment the latter, and perhaps not doing so will decrease profits in the long run, thus risking the firm’s overall long-term sustainability. So VSD is attractive because it comes at no cost to firms.
There are many different methods within the VSD toolkit that can be easily adopted, and these can range from more time consuming stakeholder interviews to even simple three-minute brainstorming sessions between designers using a tool called Envisioning Cards. And design teams can pick up VSD tools today and begin using them immediately with immediate results and at no cost.

Guy Nadivi: What does it mean for technologies to embody values?

Steven Umbrello: Well, it means that technologies, simply by the nature of the question, are never neutral. They’re always value-laden, meaning that they’re carriers of values. Many scholars in the past have realized that technologies inherit the values of their makers, their designers. One of the most famous examples was brought to light by a scholar named Langdon Winner in the early 80s. He argued that the low-hanging overpasses across Long Island, New York, built at the beginning of the 20th century, had been designed intentionally low. The urban planner at the time, Robert Moses, designed these low overpasses over the parkways of Long Island so that the buses from New York City couldn’t access the beaches, a place that he loved dearly. And it was because of these design decisions that the urban poor, who were primarily African-American, couldn’t reach the shore, because they were dependent on the buses for transportation. The beaches were therefore only accessible to the white upper and middle classes. So this is a clear, albeit low-tech, example of how values are deliberately designed into technological artifacts. And in this case, Moses’s racist values were the ones that were imbued into those bridges.

Guy Nadivi: What about artificial intelligence? Can VSD be used to design AI for human values?

Steven Umbrello: That’s a tricky question because AI systems are very different from other technologies. First, there’s a lot of haphazard use of the term AI that doesn’t actually refer to the AI we imagine. We see this a lot in news headlines, and then you actually read the content and it’s not exactly what we imagined. But a good way to think about AI is that it’s a class of technologies that are autonomous, interactive, adaptive, and capable of carrying out human-like tasks. In particular, AI technologies based on machine learning, another buzzword, which allows systems to learn on the basis of interaction with and feedback from the environment, are what often comes to mind when we think about AI. And the nature of these learning capabilities poses some pretty hard challenges for AI design, because AI systems are more likely than not to acquire features that were neither foreseen nor even intended by their designers, and that maybe were even unforeseeable. And these features, as well as the ways AI technologies are learning and evolving, may be opaque to humans. Which means that we may not be able to peer into the system and actually understand why it does what it does. But VSD is still very self-reflective and engages directly with stakeholders and their values, so I would argue that VSD, with some modifications, is uniquely suited among the existing design traditions to actually meet the challenges posed by artificial intelligence.

Guy Nadivi: Many AI projects, as they enter operational status, begin producing unexpected results, which can be positive or negative depending on stakeholder perspectives. But I’m curious, Steven, how can a Value Sensitive Design approach to AI be reconciled with an AI when it generates results that conflict with desired values?

Steven Umbrello: So one thing that’s fundamental to VSD is the philosophy of progress, not perfection. Trying to ensure absolutely perfect functionality after a system enters operational status is a non-starter. We have to allow for the potential for recalcitrance to occur, for things to go wrong. That, of course, doesn’t mean not doing our best to minimize it on the backend. Of course we want to do that and we should, but we have to also be prepared for unwanted emergent behavior. One way to make VSD work is to not only ensure a system’s design is aligned with the important human values that we hold dear, personally and at societal and global levels, but also to manage those emergent behaviors that we may feel are deleterious. So I would argue that we need to include full life cycle monitoring of AI systems as a foundational starting point in design. We need an internal mechanism by which the designers of a system, given that they are obviously the ones most familiar with its inner workings, can pull the system out of its context of use and begin another iteration of design, which I would call redesign. And this is actually a familiar process to many people within the business world, particularly those who use waterfall or agile approaches to design, given that they’re used to these short iterative sprints in their work. Think of this as short-term sprints with long-term envisioning, where designers are always ready and prepared to extract the system and fix those errors as they appear. But this means that we can’t think of AI systems like normal products that we design and throw out into the world and wash our hands like Pontius Pilate when things go wrong. Meaningful human control of these systems can only be attained when we take responsibility for these dynamic changing systems over the course of their entire lifespans.
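The full life cycle monitoring described above could take many forms; as one deliberately minimal sketch, a deployed system's recent outputs can be compared against behavior validated at design time, and the system flagged for a redesign iteration when they drift apart. The statistic, tolerance, and numbers below are invented for illustration and are not from the interview.

```python
# Minimal illustrative sketch of lifecycle monitoring: flag a deployed
# system for a "redesign" iteration when its outputs drift from the
# behavior validated at design time. The drift statistic (difference of
# means) and the tolerance are assumptions chosen for the example.

def mean(xs):
    """Arithmetic mean of a list of numbers."""
    return sum(xs) / len(xs)

def needs_redesign(baseline_outputs, recent_outputs, tolerance=0.15):
    """Return True when the mean of recent outputs drifts beyond
    `tolerance` from the design-time baseline."""
    drift = abs(mean(recent_outputs) - mean(baseline_outputs))
    return drift > tolerance

# Hypothetical scores: design-time validation vs. production observation
baseline = [0.52, 0.48, 0.50, 0.49, 0.51]   # mean 0.50
recent = [0.71, 0.69, 0.74, 0.70, 0.72]     # mean 0.712, drift 0.212

if needs_redesign(baseline, recent):
    print("drift detected: pull system for a redesign iteration")
```

In practice the check would run continuously and use richer statistics than a difference of means, but the design point is the same: the trigger for redesign is built in from the start, not bolted on after failure.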

Guy Nadivi: Harvard Business School published an article not long ago calling for the auditing of algorithms the same way companies are required to issue audited financial statements. Given that AI developers can incorporate their own biases into algorithms, even unintentionally or unconsciously, what do you think, Steven, about the need for algorithm auditing?

Steven Umbrello: Well, I think it’s a promising first step. Firstly, because it acknowledges something that we talked about, and that’s the value-ladenness of technology, that technologies inherit the values of their designers. Here we’re talking about the negative value of unwanted bias. There is a difficulty in the actual logistics of carrying this out at scale, I’m talking about algorithm auditing, but part of the appeal of approaches like Value Sensitive Design is that many of these biases can be weeded out early on and throughout the design program of the systems, which would reduce the overall operational costs of firms who would need to engage in bias auditing, whether that’s on the back end or on the front end. So remember, what we’re trying to do here is marry both moral values and economic values, to make them complementary with one another rather than at the opportunity cost of the other.
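As a concrete, hypothetical illustration of the kind of check an algorithm audit might run, the sketch below computes per-group selection rates for a model's decisions and a disparate-impact ratio between them. The predictions, group labels, and the "four-fifths" threshold are assumptions made for this example, not anything described in the interview.

```python
# Illustrative bias-audit check on hypothetical model predictions.
# A real audit would use the deployed model's actual decisions; the
# data and the four-fifths threshold here are assumptions for the example.

def selection_rates(predictions, groups):
    """Fraction of positive (1) predictions for each demographic group."""
    rates = {}
    for g in set(groups):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    return rates

def disparate_impact_ratio(rates):
    """Lowest group selection rate divided by the highest. Values below
    0.8 are often treated as a red flag (the 'four-fifths' rule of thumb)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical binary decisions (1 = approved) and group membership
preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(preds, groups)       # A: 0.6, B: 0.4
ratio = disparate_impact_ratio(rates)        # about 0.67, below 0.8
if ratio < 0.8:
    print("audit flag: selection rates differ materially across groups")
```

The point of the sketch is Steven's: a check like this is cheap to run during design iterations, which is exactly where VSD argues bias should be weeded out, rather than discovered in a post-deployment audit.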

Guy Nadivi: It’s a strange thing to ask perhaps with regards to ethics, but is there a single metric like ROI that best captures the impact of incorporating Value Sensitive Design into a technology project?

Steven Umbrello: Well, if you remember, we’re not really looking for perfection in our designs, but progress. So VSD doesn’t have quantitative benchmarks or standards that need to be met to determine whether success has been achieved or not. VSD is meant to be integrated into the day-to-day practices of technology designers regardless of the domain, and therefore it’s not meant to overhaul or replace what they already do. If it did, that would be a barrier to entry, given the cost of retraining employees for a new type of design method. So VSD can easily assimilate existing ROI metrics to determine the success of a design. For example, I created a cool 15-minute field manual for designers who use the agile methodology for incorporating VSD tools into their day-to-day work, and I offer some lessons-learned questions that can help them gauge their own success in their own context of use. There’s no one size fits all. So there are various flexible ways that VSD can be used to achieve progressing results. That flexibility is a primary driving factor behind VSD’s 20-year success and its continued adoption.

Guy Nadivi: Overall, Steven, given everything you know and have seen on the ethics front with AI, are you more optimistic or more pessimistic about the future?

Steven Umbrello: I would say I’m more white-pilled, so optimistic, I guess. Although we’ve seen a lot of pushback and ethics-washing by companies, it’s mostly because they’re still thinking in binary terms and framing important values as coming at the opportunity cost of profit, but the veil is being stripped away and people can see that the emperor has no clothes. By acting in this way, these larger firms, which have shown their true colors despite their opportunistic waving of the rainbow flag once a year, may ultimately prove to be unsustainable, undermining the very thing they sacrificed those important moral values for, in this case profit. In my experience, designers and executives are actually aligned with these human values. They’re hungry for ways to make them work, to change the world. Unfortunately, the language of values is abstract and lacks concrete forms that engineers and designers can pick up and actualize in their day-to-day practices in the world. So part of my work is providing them with a Rosetta Stone for translating values, the language of philosophers, into design requirements, the language of engineers.

Guy Nadivi: Steven, for the CIOs, CTOs and other IT executives listening in, what is the one big must-have piece of advice you’d like them to take away from our discussion with regards to implementing Value Sensitive Design?

Steven Umbrello: Moral values are not at odds with economic values. In fact, sacrificing the former for the latter is a good way to long-term unsustainability. There are existing cost-free and easy-to-adopt approaches to technology design, like Value Sensitive Design, that allow us to marry and complement these often at-odds sets of values. So I would encourage these executives, if they’re concerned with the long-term viability of their firms, particularly in the wake of this growing push towards conscious capitalism, to take moral values seriously and to design for them rather than waiting for something to go wrong first.

Guy Nadivi: All right. Well, it looks like that’s all the time we have for this episode of Intelligent Automation Radio. Steven, we cover a lot of advanced technology issues on this podcast, but I never want our audience to lose sight of the ethical issues emerging as a result of digital transformation. As automation and AI expand the boundaries of what machines can do, I think it’s important we continue listening to people like yourself who remind us what machines should do. Thank you for coming onto the show today.

Steven Umbrello: Thank you for having me.

Guy Nadivi: Steven Umbrello, Managing Director at the Institute for Ethics and Emerging Technologies. Thank you for listening everyone, and remember, don’t hesitate, automate.



Steven Umbrello

Managing Director at the Institute for Ethics and Emerging Technologies

Steven Umbrello currently serves as the Managing Director at the Institute for Ethics and Emerging Technologies. His main area of research revolves around Value Sensitive Design (VSD), its philosophical foundations, and its potential application to emerging technologies such as artificial intelligence and Industry 4.0.

Steven can be reached at: 

Website: https://stevenumbrello.com/  

Twitter: (@stevenumbro) https://twitter.com/StevenUmbro 

LinkedIn: https://www.linkedin.com/in/stevenumbrello/ 

Quotes

“…we can't really separate anymore the technological from the social, they’re kind of one and the same thing. So as someone who studies ethics, I like to explore questions like how can we design technologies to support rather than hinder human values?” 

“What engineering ethics fundamentally explores, I guess, is how we can be responsible for our innovations, how to design technologies for the things that we find to be important in our lives, to mitigate the potential bad things that may emerge from their design and therefore of their use in the world." 

“…we can't think of AI systems like normal products that we design and throw out into the world and wash our hands like Pontius Pilate when things go wrong.  Meaningful human control of these systems can only be attained when we take responsibility for these dynamic changing systems over the course of their entire lifespans.” 

“Although we've seen a lot of pushback and ethics-washing by companies, it's mostly because they're still thinking in binary terms and framing important values as coming at the opportunity cost of profit, but the veil is being stripped away and people can see that the emperor has no clothes.” 

“Moral values are not at odds with economic values. In fact, sacrificing the former for the latter is a good way to long-term unsustainability.” 


Disclaimer Note

Neither the Intelligent Automation Radio Podcast, Ayehu, nor the guest interviewed on the podcast are making any recommendations as to investing in this or any other automation technology. The information in this podcast is for informational and entertainment purposes only. Please do your own due diligence and consult with a professional adviser before making any investment.