The Impact and Ethics of Artificial Intelligence: A conversation with Dr. Anna Farzindar

A conversation with Dr. Anna Farzindar of the University of Southern California about the impact and ethics of Artificial Intelligence.

 

AI gets a lot of buzz but many of us are confused about what “AI” really means. Briefly, can you explain what it is? How does AI differ from machine learning?

The ideas behind Artificial Intelligence (AI) date back to the Second World War era and to the 1940s and 1950s theory of control and communication in both animals and machines. Basically, early AI consisted of mathematical and computer models developed to simulate biological neurons.

Today we use the term AI when a computer system executes complex tasks intelligently, such as communication between machines and humans in spoken or written language, like talking to Siri or Amazon Echo, or when you search Google for something specific by typing, by voice, or with an image. To build an AI system, we can use various machine learning techniques to solve real-world problems. Currently, deep learning is a popular machine learning technique for dealing with large amounts of data. For example, deep learning models can be trained on millions of images for the purpose of classification.
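
To make that concrete, here is a minimal illustrative sketch (not from the interview) of training a small deep network to classify images, using TensorFlow's Keras API. It uses the standard MNIST set of 60,000 labeled digit images rather than millions, but the idea is the same: the network learns to classify purely by finding patterns in labeled examples.

```python
# A minimal sketch of deep-learning image classification with Keras.
import tensorflow as tf

# Load a standard benchmark of labeled images (handwritten digits 0-9).
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

# Stack layers of artificial neurons; the depth is what makes it "deep".
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# The network learns only from the patterns in the training examples.
model.fit(x_train, y_train, epochs=3)
print(model.evaluate(x_test, y_test))  # [loss, accuracy] on unseen images
```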

What are its limitations? Thinking of its current level of development, can you give us some examples of things AI cannot yet do?

Computers are great for repetitive work, mathematical calculations, and large-scale computing. But for some tasks AI systems are far more limited. Detecting human emotions, for example, is very hard for a computer. It is more complex than the sentiment analysis used to detect positive or negative opinions in product reviews. Understanding the depth of human emotion is very challenging for machines.
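
For contrast, the kind of sentiment analysis Dr. Farzindar refers to can be sketched in a few lines of scikit-learn. The tiny review dataset below is invented purely for illustration; real systems train on far more text.

```python
# A minimal sketch of sentiment analysis on product reviews.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = ["great product, works perfectly",
           "terrible quality, broke in a day",
           "love it, highly recommend",
           "waste of money, very disappointed"]
labels = [1, 0, 1, 0]  # 1 = positive opinion, 0 = negative opinion

# Turn text into word-frequency features, then fit a linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reviews, labels)

print(model.predict(["works great, recommend it"]))        # likely [1]
print(model.predict(["broke immediately, disappointed"]))  # likely [0]
```

Note that this only separates positive wording from negative wording; recognizing grief, sarcasm, or mixed emotions is a much harder problem.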

Another limiting factor of AI is creativity. Machines are far from the true creation of visual art or poems. As I've mentioned, repetitive jobs or finding patterns in data are suitable tasks for machines, but not creativity. There are some AI systems that attempt to create artwork, such as DALL·E 2 (https://openai.com/dall-e-2/), which is designed to generate images from text descriptions. The system is trained on a large database of images and texts. It can recreate the style of a painting, like an impressionist painting. But there is no innovation in AI art. AI can only find patterns in data and combine them to produce new images. There are also AI systems that can write poetry. But once again, the machine uses a large data set of poems to generate new text using natural language processing and text generation. So, AI cannot yet actually create.
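
A toy illustration of that recombination idea (invented for this article, and far simpler than any real system): a text generator that "writes" only by re-stitching word patterns it has already seen.

```python
# A minimal sketch of generation by recombination: a first-order Markov
# chain over words. The training lines are invented; real systems use
# vastly larger corpora and far more sophisticated models.
import random
from collections import defaultdict

corpus = ("the rose is red the violet is blue "
          "the night is dark the moon is bright").split()

# Learn which words follow which (the "patterns in the data").
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

random.seed(1)
word, poem = "the", ["the"]
for _ in range(10):
    word = random.choice(follows[word]) if follows[word] else "the"
    poem.append(word)

print(" ".join(poem))  # plausible-looking, but only recombined patterns
```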

Another limitation of AI systems is related to the limitations of data. If our data are inconsistent or inaccurate, it will be hard to develop a system that produces trustworthy outcomes. Current AI systems are heavily dependent on data. That is one reason why companies gather data from users in massive quantities to understand their customers and markets. This is a big danger for our society because we have no consensus on the safety, ethics, and privacy of data collection, or on how to handle possible leaks.

Some people feel AI may be dangerous. Is this a legitimate concern in your view or mainly hype?

I can assure you that this is a serious concern. Have you ever feared losing your internet connection or being without your smartphone? That fear is called nomophobia, and it often goes hand in hand with smartphone addiction. Many of us use our phones excessively. Some apps on smartphones use AI systems in the background, and one of their goals is to keep users active and increase their engagement in order to collect even more data from them.

Computers look for patterns in our data to identify associations between user characteristics and to build a user profile. These characteristics could be age, sexual orientation, race, geolocation, education, marital status, number of children, and so on. Many of us provide this information voluntarily to platforms such as Facebook and other social media.

User profiles are valuable information. For example, they can help companies identify communities and their interests, which can be used to increase sales of specific products. Knowing a user's or community's profile makes it possible to predict their behavior.
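
A minimal sketch of that grouping step, with features and numbers invented purely for illustration: clustering users by a few profile characteristics so that a new user can be assigned to a "community" and targeted accordingly.

```python
# A minimal sketch of profile-based grouping with k-means clustering.
import numpy as np
from sklearn.cluster import KMeans

# Each row is one (invented) user: [age, education_years, number_of_children]
users = np.array([
    [22, 16, 0],
    [24, 16, 0],
    [45, 12, 3],
    [47, 14, 2],
    [35, 18, 1],
])

# Group users into two communities based on similarity of their profiles.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(users)
print(kmeans.labels_)  # cluster assignment for each user

# A new user is assigned to the nearest community, and decisions (ads,
# recommendations, pricing) can then be targeted at that segment.
print(kmeans.predict([[23, 16, 0]]))
```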

But there are concerns when the purposes of this data collection and pattern identification are not clear. Companies also sell data to third parties, which can link additional user characteristics and behaviors to them and sell the enriched data on to other companies. In the end, we don't know who gets access to our data or how they will use them.

Currently, AI systems are deployed extensively in decision-making. For example, financial institutions use AI to make decisions about applications for credit cards or loans. But the results of AI models and decision-making are based on data. If data are inaccurate or biased, the results will be unreliable. It is important for AI developers to understand the quality of their data.

However, some machine learning techniques, such as deep learning, are black boxes. They use a large number of neurons and layers to form a neural network, so it may not be clear how the system produces its results. Many researchers have raised concerns about the transparency of AI, but it is very hard to track the flow of information among the neurons and understand the processes that lead to the results produced by the black box.
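
Researchers therefore often probe black-box models from the outside. One common technique, sketched below with scikit-learn on a public dataset, is permutation importance: scramble one input feature at a time and measure how much accuracy drops. This reveals which inputs matter, but it still does not explain the internal flow of information.

```python
# A minimal sketch of probing a black-box model with permutation importance.
from sklearn.datasets import load_breast_cancer
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A network with thousands of weights; no single weight "explains" a decision.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0),
).fit(X_tr, y_tr)

# Shuffle each feature and record the resulting drop in accuracy.
result = permutation_importance(model, X_te, y_te, n_repeats=10,
                                random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {i}: accuracy drop {result.importances_mean[i]:.3f}")
```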

What are some of the ways AI is potentially harmful?

Currently, AI systems are widely deployed on a large scale in our homes (e.g., smart TVs, smartphones, smart-home IoT devices), in public places (e.g., security cameras, traffic control, autonomous security drones), and in workplaces (e.g., collaborative platforms, video conferencing, and AI-powered talent search and recruiting). Yet the societal impact of these systems is not fully understood. Even the impact on the next generation of children, who depend heavily on tablets and electronic devices, is not clear. These changes in lifestyle and workstyle could lead to unintended consequences for society.

Additionally, AI systems have the potential to automate many simple tasks performed by humans, such as food delivery, restaurant services, or online help, and this could lead to job losses for many vulnerable and low-income individuals. Furthermore, AI systems are becoming more powerful and able to perform complicated expert tasks such as reading medical images and producing reports. This could lead to job displacement and economic disruption.

I sometimes hear it alleged that AI and machine learning are biased. What does this mean? Is it something we really need to worry about?

Many AI systems are developed rapidly and trained on inaccurate or biased data. As a result, these systems may produce unfair or discriminatory outcomes. Furthermore, algorithms can be deployed in biased settings. For example, a video on YouTube or Instagram is promoted based on its number of views: many views imply the content is important and should be shown to larger audiences. Such algorithms can, among other things, help hate speech spread.
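
A toy simulation (with invented numbers) of that promotion loop: when exposure is driven purely by view counts, whatever is already popular gets shown more, and its lead compounds regardless of what the content actually is.

```python
# A minimal sketch of a "rich get richer" engagement loop.
import random

random.seed(0)
views = {"video_a": 100, "video_b": 90, "video_c": 10}

for step in range(1000):
    # Promote proportionally to current views, as a view-count-driven
    # recommender effectively does.
    videos = list(views)
    chosen = random.choices(videos, weights=[views[v] for v in videos])[0]
    views[chosen] += 1  # the promoted video gains another view

print(views)  # the early leader typically ends up far ahead
```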

In AI systems, unconscious bias can be introduced in various ways: through bias in the training data or annotated reference data, through the design of the algorithms, or because of limited awareness of human biases during the development of the system. It is important to have accurate tests for detecting potential bias and unfairness in an algorithm's results. In addition, these systems can generate new data that feed and train future systems, and in those cases the impact of the biases will be magnified.
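
One of the simplest such tests can be sketched in a few lines: compare the model's rate of favorable decisions across two groups, a check known as demographic parity. The predictions and group labels below are invented for illustration.

```python
# A minimal sketch of one simple bias test: demographic parity.
import numpy as np

# 1 = favorable decision (e.g., loan approved), one entry per applicant
predictions = np.array([1, 1, 0, 1, 0, 1, 0, 0, 0, 0])
group       = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rate_a = predictions[group == "A"].mean()
rate_b = predictions[group == "B"].mean()
print(f"approval rate, group A: {rate_a:.2f}")  # 0.60
print(f"approval rate, group B: {rate_b:.2f}")  # 0.20

# A large gap is a red flag that the data or model may be biased; it is
# not proof on its own, but it tells developers where to look.
print(f"demographic parity difference: {abs(rate_a - rate_b):.2f}")
```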

Another criticism of AI and machine learning is that they aren’t transparent. Could you explain what this means and its implications?

Lack of explainability and transparency in AI systems can create problems. Many machine learning algorithms used in AI systems are difficult to understand or explain, as noted, which undermines trust in their results. In fields such as healthcare, understanding the results and trusting them are crucial. For example, when AI classifies the stage of a cancer from a combination of different types of data, can a doctor trust the classification produced by the system and recommend the right treatment in a timely fashion?

Can AI be completely automated, or will some human supervision always be needed?

It is important to keep humans in the loop. Autonomous AI systems are able to make decisions on their own, but if this is not properly controlled and supervised, it could lead to disaster. Many of these risks are related to the speed of decision-making by AI systems. Examples include target selection by autonomous weapons, automated tasks performed by systems that may be biased, and the safety of self-driving vehicles.

Are there books or articles you can recommend to laypersons who’d like to learn more about AI and machine learning?

There are many online tutorials, courses and videos that can help you learn more about AI and machine learning, as well as books and articles. Here are some sources I can recommend:

 

  1. Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig. This is a widely used textbook that provides a comprehensive introduction to AI, covering the key concepts and techniques used in the field.
  2. Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow by Aurélien Géron. This book provides a hands-on introduction to machine learning using Python and popular open-source libraries such as scikit-learn, Keras, and TensorFlow.
  3. Deep Learning by Ian Goodfellow, Yoshua Bengio, and Aaron Courville. This book is an in-depth introduction to deep learning, a powerful subset of machine learning that has been behind many recent breakthroughs in AI.
  4. The AI Revolution: The Road to Superintelligence by Tim Urban. This article is a great read for anyone looking to understand the basics of AI, its history, current state and future potential.

 

Thank you, Anna!

 

________________________________________________________________________

 

Kevin Gray is President of Cannon Gray, a marketing science and analytics consultancy.

 

Anna Farzindar, Ph.D. is a faculty member of the Department of Computer Science, Viterbi School of Engineering, University of Southern California.

 
