Exploring Artificial Intelligence with Melanie Mitchell


We are currently in a podcast series on the complexity of artificial intelligence, and we recently shared our interview with Melanie Mitchell, an AI researcher, leading complex systems scientist, Professor of Computer Science at Portland State University, and External Professor at the Santa Fe Institute.

Mitchell’s background is originally in mathematics and physics. She became curious about how intelligence emerges from a complex system after reading Douglas Hofstadter’s book, Gödel, Escher, Bach: An Eternal Golden Braid. Hofstadter later became her PhD advisor after she sought him out at the University of Michigan. Mitchell attributes her professional journey to his guidance, as well as to that of Professor John Holland, a pioneer in the field of genetic algorithms. Holland was also an early founder of the Santa Fe Institute and encouraged Mitchell’s involvement there.

Since her journey began, Mitchell has contributed to over 80 scholarly papers in cognitive science, complex systems, and artificial intelligence. She is also the author of An Introduction to Genetic Algorithms, a widely known introductory book published by MIT Press, and Complexity: A Guided Tour, winner of the 2010 Phi Beta Kappa Book Award in Science. We were very excited for the opportunity to talk with Mitchell, as she is more than qualified to discuss artificial intelligence as a complex system. Her history, career, and research have given her a unique understanding of the dynamics of systems, especially technological systems like AI and machine learning.

During our podcast interview, we learned that Mitchell is writing a new book about artificial intelligence, which will address in depth questions like: “How close is AI to human abilities in many different areas, and what are the ethical, moral, and social issues that we really should be thinking about?” It was a pleasure to learn about her new book, for most of our proposed questions led to a deeper discussion of these very topics.

For example, we asked Mitchell whether unintended consequences might arise from increased use of AI, and she replied with specific examples of how this technology can, in fact, amplify existing biases. “AI systems can be biased by the data that they learn from,” she said, offering a few examples. Her first was gender bias in natural language processing: AI recommendation systems, such as those that serve job ads, often pick up on gender bias inherent in the data they learn from, so that men and women may receive different job recommendations based on their gender.

She continued with another example about bias in facial recognition software, explaining that these systems are much worse at identifying the facial features of non-white individuals because they were trained on few ethnic groups other than Caucasians. This bias in AI systems is indicative of a larger-scale racial bias, and it brings attention to the inherent biases we face in society.

We asked Mitchell how we can combat bias in AI systems when we still haven’t figured out how to address it in society.

She explained that she doesn’t think anyone really has an answer to this question yet, but that the starting point is understanding what the biases in AI systems are. Understanding why a machine behaves a certain way is not easy, however, because these systems do not explain themselves. Mitchell believes creating AI systems that can explain themselves is essential to further understanding their behavior, biases, and vulnerabilities.

Mitchell’s advice for navigating our relationship with AI as it becomes more widespread was “to be very careful knowing when we can trust these systems.” She believes we tend to trust algorithms more than we should, especially when we don’t understand how they work. It seems we need to continually question the AI systems we build and interact with in order to better understand how they operate. Artificial intelligence is a powerful new tool that requires diligence and acute awareness of what is happening while it is happening. There is hope in building a better future with AI: the further we travel down this path, the more we realize the importance of deepening our understanding of the dynamics of complex systems and of practicing self-awareness and conscious behavior.

Sometimes it seems as though each new step towards AI, rather than producing something which everyone agrees is real intelligence, merely reveals what real intelligence is not.
— Douglas R. Hofstadter

You can listen to the HumanCurrent podcast here, and don’t forget to subscribe on iTunes. Listen to our recent interview with Melanie Mitchell!
