As a professional engineering community, what are we doing?
Courtenay, British Columbia – Ever since one of our ancient ancestors picked up a stick to knock a piece of fruit from a tree that was slightly out of reach, technology has been, and will always be, a means to an end. It wasn’t long afterward that our ancestors figured out the stick could also be used as a spear. Whether a technology serves good or bad purposes therefore depends a great deal on intent, and on the intended and unintended consequences of its use.
In the next ten years and beyond, we will witness a massive acceleration in the integration of data around the world. Artificial intelligence in its various forms is being designed into our machines at an unprecedented rate. These machines can be used for good or for ill, and sometimes the outcome of a well-intended use turns out badly. This should cause us some degree of dual-use discomfort.
I was at DVCon 2018 this year, and while I heard a number of people say, “Just because we can doesn’t mean we should,” I did not hear the conversation elevated to the intersection of AI and ethics. Instead, one panel that started off well, and could have gone there, devolved into a tool spat between vendors. That is a real shame and a very costly waste.
When I think of the salary bill plus the travel and living costs of the people at the event, I cannot help feeling sorry for the participants, who had a very real opportunity to hear the wisdom of the community and its leaders. Regrettably, we did not establish a non-partisan environment and agreed-upon community ground rules before the panels started. I recommend that the organizers take the necessary steps to do so. Now that my rant is done, I’ll get back to my topic.
Why should it matter to us?
Some people might think that an ethics track at a technical conference such as DVCon doesn’t make sense, but I would vehemently disagree. When we design our machines, whether we know it or not, we design our values into them. This will be particularly true in the case of AI. A classic example used at DVCon was self-driving automobiles that cannot avoid an accident and must decide in a split second who is to die. One of my favorite examples of an ethics dilemma involving automobiles comes from the movie I, Robot. Will Smith’s character is involved in an accident in which the car plunges into water and sinks quickly. Inside the car is a 12-year-old girl. A robot witnessing the accident sees that humans are in danger and jumps into the water immediately. It assesses the situation and determines that only one person can be saved. The robot, following its utility function, decides to save the Will Smith character because his chances of survival are greater. Later in the movie, when recounting this event, the Will Smith character mournfully says, “I would not have made that decision.”
In my opinion, the movie anecdote above foreshadows our potential to create another Challenger disaster with our technology. Our challenges are manifold. How do we connect the knowledgeable engineer to the decision maker who will “launch” the product into the marketplace? What rules are we going to embed in our machines and how will they function? Whose rules are we going to use? Who will make the decision about the ethics of what we are doing? How comprehensive will the use cases regarding ethics be in our verification and validation environments? What will be our engineering community’s process for how to address this very difficult problem? I am very passionate about these challenges.
We can look to the medical community for a good analogy. It has struggled with the question of ethics for a long time. As I know from my daughter, who recently graduated as an MD, ethics education is a key part of the curriculum, as are rules such as “first, do no harm.”
What is our obligation?
We must elevate the conversation within our professional community; it is our obligation to society to do so. But talk and text are cheap, and we must move to action. For our part, XtremeEDA will sponsor panels and activities at trade shows and other venues to get this conversation going. These conversations can only strengthen the work done within our professional associations such as the IEEE. We don’t have much time! I invite other industry CEOs, thought leaders, and other interested stakeholders to join me in promoting this conversation and helping to supply the necessary resources for our community to have a vibrant and effective dialogue, so that we take the right actions to produce systems that are safe and secure. Please feel free to contact me about this important topic. Thank you!
Claude J. Cloutier, Ph.D.