From New York to San Jose, Italy to Israel, attendees of the live session engaged passionately with the conversation, sharing resources and life experiences. We have compiled these for your exploration and enjoyment.
Diversity and Inclusion in AI
Dr. Jamika D. Burge is the Head of AI Design Insights at Capital One and the Co-Founder of blackcomputeHER, an organization positioned to be an influential think tank for Black women and girls in computing and technology. Dr. Burge advocates for the importance of representation and diversity in data collection and analysis as it pertains to AI.
Dr. Jamika Burge on Responsible AI:
How do we have AI machine-learning without understanding the impact of that work, or the impact of not including as many people as possible, and creating experiences that matter to them?
What does representation in data mean? As humans are multidimensional, the creators of technology and the builders of algorithms must understand the entire context and experiences of the end-user.
Panelists urge the importance of representation in the teams building AI tools and data sets: representation is a requisite for arriving at more equitable innovations. The long and important journey of anti-bias, anti-racist work is crucial to identifying, and ultimately correcting, the biases built into the technology we use every day. For those with a seat at the table, pulling up a chair for others is crucial to ensuring values and bias are addressed for future generations.
Biases exist. From an equity perspective, the way we build algorithms needs to be explored: what problems are we solving, and who is creating the steps to solve those problems?
Reducing bias in technology starts with analyzing the data set. Dr. Molly Wright Steenson argues that the reduction of bias must begin at the source:
The crux of data is that it is in the past. The issue with data sets is the reinforcement of existing biases rather than finding new ways to do things and solve problems.
If the objectivity of data is based on something in the past that we’re reinforcing into the present, then these biases continue to be perpetuated. The Cognitive Bias Codex, an infographic that visually captures the biases that unknowingly affect our experiences and decision-making, helps illustrate this issue.
We need to talk about our assumptions and how we want to deal with them.
– Ruth Kikin-Gil, Responsible AI Design and Strategy at Microsoft
Ethical and Responsible AI
Understanding both human interaction and the logic behind the algorithm is a necessity. At the center of this is how humans interact with each other, and how humanity can responsibly and ethically create and interact with these systems.
Accountability is a core principle of AI. Ruth Kikin-Gil argues that artificial intelligence cannot be considered algorithms alone; there is much more to the story:
We need to acknowledge that the humans creating the systems and the products that use the systems are all part of the equation.
The human values behind each and every application need to be analyzed, and the way the system behaves needs to align with these values. Not doing so communicates a lack of accountability.
When the builders of technology hold different values, biases and experiences, what does it mean for those using these technologies?
Designers put people first. They empathize, observe, and listen. They find problems to solve not because they are technically difficult, but because they are hard human issues. How to use AI is one of these challenges — and humanity-centered design could be the solution.
Microsoft has created the Office of Responsible AI, which governs and shepherds the development of AI products at Microsoft, ensuring AI is developed ethically and responsibly. The team has created frameworks that are practical and digestible. Microsoft’s guidelines for responsible AI can be found here.
To improve the human outcomes of AI, those who build these systems must recognize the innate biases that exist in their lived experiences and, as a result, in the data. An attendee expressed this as “GIGO – garbage in, garbage out”: what comes out (the technology) is only as good as what goes in (the data and the process used to collect it). We are responsible and accountable for understanding this connection. Innovators and designers must be deliberate and bring new experiences to the table in order to design the things that shape access and participation in the world.
As AI is democratized, we must recognize how our data is being used and who benefits from it. Dr. Jamika D. Burge urges each of us to move from understanding AI to understanding how our data is being used, who it benefits, and whether or not it is being used for good. The crux of this is to encourage conversations across disciplines, making certain everyone has a role in ensuring all of humanity is considered equitably.
By tuning in and paying attention in the world of AI, we can acknowledge the diversity of lived experience and ensure the design of these systems is equitable.