
Diffusion launches All Tech Voices: Increasing diversity in Tech


Diffusion is launching All Tech Voices, a new forum to advance the debate on how greater representation of diverse talent, and a deeper understanding of underrepresented audiences, can strengthen the technology sector. As we set out in our essay below exploring these issues in the field of AI, the need for change has never been greater.

 

The rise of Automated Injustice: Will Tech listen to the warnings?

 

Technology’s capacity to be a force for good or evil has always hinged on the human hands that design and wield it. Today the debate over the identity of the people behind those hands has (finally) reached a new salience. We see a growing acceptance, at least in theory, that products and solutions developed by people with a diverse set of perspectives, cultural backgrounds and lived experiences make for better technology that serves us all more fairly. If there is still any doubt about what the exclusion of diverse voices and talent can mean for technology, the comparatively short history of machine learning and AI should act as a wake-up call.

It’s surprising how often solutions built on exciting new AI technology are presented as impartial and autonomous. The reality is that they run on rails laid by teams that are rarely immune from prejudice. Bias is woven into the code and into the very outcomes these systems are asked to predict, which in turn can reflect the prejudice of the institutions that commissioned them and of the individuals who then operate them.

 

A question of data:

Peeling back the code further, we know the accuracy of these artificial intelligence and machine learning systems is heavily reliant on the quality of the data that feeds them. Here the old computing adage of Garbage In, Garbage Out too often applies. We frequently see systems built on data that either underrepresents minority groups across race, gender, sexuality and beyond, or, worse still, misrepresents these groups with data that has been erroneously collected or classified on the basis of racist assumptions.

Depressingly, instead of AI’s alluring promise of sophistication and objectivity, what we are too often left with is simply Discrimination In, Discrimination Out. Again and again, we see these flawed outputs profoundly impact the lives of minorities, from the allocation of health services to housing and beyond. With the rollout of AI and machine learning accelerating across society, from government to commerce, the future frequency and scale of this ‘Automated Injustice’ will be staggering if minorities continue to be excluded from setting the parameters and monitoring the decisions of these solutions.
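To make the mechanism concrete, the sketch below is a minimal, entirely hypothetical illustration of Discrimination In, Discrimination Out: a single model is trained on data dominated by a majority group whose outcomes follow a different pattern from an underrepresented group. The group names, sample sizes and thresholds are invented for illustration only (it uses the scikit-learn library); it is not drawn from any of the systems discussed in this article.

```python
# Hypothetical sketch: skewed training data producing skewed error rates.
# Groups, sizes and thresholds are invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample_group(n, threshold):
    """One feature per person; the 'true' outcome depends on a threshold
    that differs between groups, standing in for different lived contexts."""
    x = rng.normal(size=(n, 1))
    y = (x[:, 0] > threshold).astype(int)
    return x, y

# Majority group A dominates the training data; group B is underrepresented.
xa_train, ya_train = sample_group(9_000, threshold=0.0)
xb_train, yb_train = sample_group(1_000, threshold=1.5)

# One model is fitted to the combined, imbalanced dataset.
model = LogisticRegression().fit(
    np.vstack([xa_train, xb_train]),
    np.concatenate([ya_train, yb_train]),
)

# Evaluate on fresh samples, group by group.
for name, threshold in [("group A", 0.0), ("group B", 1.5)]:
    x_test, y_test = sample_group(5_000, threshold)
    pred = model.predict(x_test)
    accuracy = (pred == y_test).mean()
    false_positive_rate = pred[y_test == 0].mean()
    print(f"{name}: accuracy {accuracy:.0%}, "
          f"false positive rate {false_positive_rate:.0%}")

# Typical output: group A scores far better than group B, whose members are
# much more likely to be wrongly flagged, because the model has largely
# learned the majority group's pattern.
```

Nothing in this toy model is malicious; the disparity falls out of the composition of the training data alone, which is precisely why who collects, audits and questions that data matters so much.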

 

Lessons from history:

While AI as an industry may still be relatively nascent, the problems described above are not sudden realisations. As far back as 2015, Google was criticised when the algorithm for its new Photos app automatically classified photos of a black couple as ‘Gorillas.’ At the time Google promised further research into facial-recognition and linguistics to prevent a repeat. Campaigners rightly asked about the diversity of the teams that had created the software.

Fast-forward five years and Google had indeed built a formidable and respected team of Ethical AI researchers. It was co-led by Timnit Gebru, an influential computer scientist and one of the pioneers in the field, who had previously conducted ground-breaking research into bias in facial-recognition systems. It appeared that Big Tech finally “got it”, but what happened next exposed the yawning chasm between rhetoric and action, between representation and respect, and arguably between ethical behaviour and profitability at the company.

 

Excluding diverse expertise:

At the start of December 2020, against the backdrop of Black Lives Matter, Gebru, an Ethiopian American woman, was effectively forced out of Google. She had refused a request from the company to retract a research paper she co-authored examining AI language models, including their capacity to encode structural bias against women and people from ethnic minorities. Google claimed the paper lacked balance and had not considered recent improvements in the technology.

The controversy, which prompted over 2,700 Google employees and 4,300 academic and industry supporters to sign an open letter challenging Google’s account and behaviour, was significant for a number of overlapping reasons: firstly, for what it said about the independence of research conducted and funded by big technology companies; secondly, for the apparent willingness of Big Tech to ignore or downplay flaws in AI in pursuit of profit; and lastly, for the value placed on representation and diverse perspectives in furthering this field.

Speaking at the time to the BBC, Gebru was clear in her belief that “Google and all of the other tech companies are institutionally racist”. In 2020, just 1.8% of Google employees were black women. She continued, “Unless there is some sort of shift of power, where people who are most affected by these technologies are allowed to shape them… I am really worried that these tools are going to be used more for harm than good.”

One year on from this regressive episode there is thankfully at least one ray of light in the gloom, with Gebru marking the anniversary of her departure by striking out on her own and launching a new research institute, DAIR. It will aim to be an independent, community-rooted institute to counter Big Tech’s pervasive influence on the research, development and deployment of AI.

While DAIR and Gebru will no doubt illuminate bad actors and continue to shape the contours of ethics in AI, the question remains: what will need to change for diverse talent to be heard, to lead and to be respected inside Big Tech, where its impact for good will be greatest?

We have used AI as just one powerful illustration of how a lack of diversity is holding back tech, but we can be in no doubt that this is a historic and sector-wide problem. Research conducted earlier in 2021 into tech, energy and industrial companies by an inclusion alliance led by Intel, Dell, Nasdaq, NTT DATA and Snap Inc. showed that a huge lack of diversity persists, with 84% of technology teams being male, and 87% white or Asian. Therefore, we continue to see unintentional discrimination baked into technology created by teams whose privilege leaves them blind to its consequences.

 

Why All Tech Voices?

At Diffusion we believe dialogue is necessary to promote understanding of any issue, and understanding is the bedrock of meaningful action. There is thankfully a growing conversation on promoting diversity in tech and the complexities of turning rhetoric into reality. We take the view that there is no such thing as ‘too much’ discussion on this vital issue. In launching All Tech Voices, we want to help accelerate the debate. We hope to create a platform for diverse industry voices to share their experiences and to offer their road maps for embedding greater diversity, inclusion and influence for underrepresented groups in tech.

We are privileged as an agency to be working with clients that are on their own missions for change, including supporting Cognassist to increase support for neurodiverse people in education and employment. Diffusion is also proud to represent Advancing Women in Tech (AWIT), as it seeks to address the gender and diversity gaps seen in tech management roles.

We thank Nancy Wang, CEO and Founder of AWIT & GM of Data Protection Services at AWS, for opening the All Tech Voices discussion with this powerful and practical piece on bringing actionable DEI into the tech industry’s workplaces. In the coming months we will be hearing from across the technology ecosystem, including the media who cover our industry.

We hope to inspire you to join and further the debate within your organisations and accelerate the path to full and meaningful representation of all tech voices. If you would like to contribute, we would love to hear from you at [email protected]