Exploring the Impact of AI on Culture: Insights from Ewa Luger

To highlight the achievements of women academics and professionals in AI, TechCrunch has been running a series of interviews with remarkable women who have made significant contributions to the AI revolution. The profiles are being published throughout the year to spotlight important work that often goes unnoticed.

One of the featured individuals is Ewa Luger, co-director of the Institute of Design Informatics and co-director of the Bridging Responsible AI Divides (BRAID) program, which is backed by the Arts and Humanities Research Council (AHRC). She works closely with policymakers and industry and is a member of the U.K. Department for Culture, Media and Sport (DCMS) college of experts.

Luger's research explores social, ethical, and interactional issues in data-driven systems, including AI systems, with particular interests in design, the distribution of power, spheres of exclusion, and user consent. Before taking up her current roles, she was a fellow at the Alan Turing Institute, a researcher at Microsoft, and a fellow at Corpus Christi College, University of Cambridge.

Q&A

How did you start your journey in AI, and what drew you to the field?

After my PhD, I joined Microsoft Research, where I worked in the user experience and design group in the Cambridge lab, which had a strong focus on AI. That work drew me deeper into human-centered AI, including intelligent voice assistants.

When I moved to the University of Edinburgh, my interests expanded to algorithmic intelligibility, which was a niche area at the time. From there I became immersed in responsible AI, and I now co-lead a national program on the subject, funded by the AHRC.

What work in the field of AI are you most proud of?

While my most cited work is a paper on the user experience of voice assistants, the work I am personally most proud of is the ongoing BRAID program that I co-lead. This collaborative effort aims to establish a responsible AI ecosystem in the U.K. by integrating arts and humanities knowledge with policy, regulation, industry, and the voluntary sector.

Through partnerships with organizations such as the Ada Lovelace Institute and the BBC, BRAID seeks to amplify the voices of artists and designers, which are often overlooked in the AI field. The program has funded 27 projects to date and is committed to promoting AI literacy and dispelling misconceptions about AI.

What are some of the key challenges faced by women in the tech and AI industries?

Gender equality issues are prevalent in academia as well as in the tech industry. While the Institute of Design Informatics, which I co-direct, has a better gender balance, my past experience of male-dominated environments made clear the higher standards and expectations placed on women. Women often hesitate to pursue opportunities with full confidence and can find themselves pushed into stereotypical roles in the workplace.

Navigating these challenges requires setting boundaries and advocating for oneself, but visibility and active efforts to address the underlying structural and cultural issues are crucial steps toward a more inclusive tech industry.

What advice do you have for women aspiring to enter the field of AI?

I urge women to seize opportunities for growth and advancement even when they feel they don't meet every requirement. Research indicates that women are more hesitant than men to pursue roles for which they feel underqualified. The industry is becoming more gender-aware, but true gender representation in AI leadership remains a critical goal.

What are some of the most pressing challenges facing the evolution of AI?

The most pressing challenges concern the ethical and societal implications of AI systems: above all, addressing the immediate and downstream harms that result from inadequate design, governance, and use.

Environmental impact, regulatory alignment, democratization of AI, bias mitigation, and technological literacy are among the key issues to address as AI continues to advance.

What are some important considerations for AI users?

Users should approach AI technologies with vigilance. Given the limitations of current models, it pays to be cautious about relying on AI-generated content, particularly in high-stakes scenarios, and to verify its veracity rather than taking its authenticity on trust.

How can responsible AI be effectively developed?

Building responsible AI requires a multi-faceted approach: diverse representation in design teams, ethical data practices, comprehensive training in socio-technical issues, stakeholder engagement, rigorous testing, and governance mechanisms. A culture of accountability and proactive decision-making underpins all of these.

What role do investors play in promoting responsible AI?

Investors play a crucial role in incentivizing responsible AI by aligning financial priorities with ethical considerations. Prioritizing responsible development over speed to market helps mitigate potential harms and builds trust among users; balancing economic gains with ethical responsibility is key to a sustainable and equitable AI ecosystem.