Sharon Moore’s Blog on the FDM-sponsored event “Change Agents creating diversity in AI”

“We must stop telling young girls they are princesses who need a prince to save them.”

That’s how Sarah Burnett, our BCSWomen Chair, kicked off the panel’s response to the question “How do we get diverse teams?”. Her point was that we need to make sure we’re attracting girls at school and telling them they are just as capable as the boys.

This was at our FDM-sponsored event “Change Agents creating diversity in AI”, and our Change Agents included Sarah (as above), VP at Everest Group; Sheila Flavell, FDM COO; Abeth Go, Head of AI at BP; and Nadia Abouayoub, a Strategist in Finance, BCSWomen Committee member, and BCS SGAI Committee member. The panel host was Claus Munmann of Standard Chartered Bank.

Sheila made it clear that diversity is not a challenge but a business imperative, and she spoke of the FDM returners programme that helped address the gender pay gap (which is zero at FDM). She also believes that FDM would not have been so successful if it had not been such a diverse company.

Abeth stressed that we need to create the environment necessary to retain the women who have chosen technology, and AI, as a career. If we are not inclusive, as individuals and as organisations, we risk losing that talent, and we risk women (and men) having to be someone they are not in order to fit prejudiced criteria for a place on the leadership team.

When talking about the risks of a lack of diversity in AI, it was agreed that this applies both to the people who create the AI and to the data used to train and test it. To explain the importance of how algorithms are trained and the challenges involved, Nadia recommended reading West, S.M., Whittaker, M. and Crawford, K. (2019), Discriminating Systems: Gender, Race and Power in AI, AI Now Institute. Training algorithms on biased data can have a negative economic impact. Sarah noted that AI can exaggerate and amplify unconscious biases, and gave examples many will be familiar with, such as the language translator that would take “she is a doctor, he is a nurse” in English, translate it to Turkish, and, on translating it back to English, produce “he is a doctor, she is a nurse”. Turkish’s third-person pronoun is gender-neutral, so the gender is lost on the way in and the translator must guess it on the way back; trained on biased text, it guesses along stereotyped lines (the toy sketch below shows the mechanism).

Claus mentioned that Standard Chartered Bank has written standards for the use of AI, and other companies are also taking a stance, such as IBM, which has developed and shared principles for trust and transparency. Of course, regulators are still getting up to speed in this area.
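To make that round trip concrete, here is a toy sketch in Python. It is not the code of any real translator; the tiny phrase tables and the “learned” gender associations are invented purely to show the mechanism: the gender disappears on the way into Turkish, so it has to be guessed on the way back out.

```python
# Toy sketch of the round-trip translation bias described above.
# The phrase tables and stereotyped priors are invented for illustration;
# no real translation system works from lookup tables like these.

TO_TURKISH = {
    "she is a doctor": "o bir doktor",   # Turkish "o" is gender-neutral
    "he is a doctor": "o bir doktor",
    "she is a nurse": "o bir hemşire",
    "he is a nurse": "o bir hemşire",
}

# Hypothetical gender associations a model might absorb from biased text.
STEREOTYPED_PRIOR = {"doktor": "he", "hemşire": "she"}
ROLE = {"doktor": "doctor", "hemşire": "nurse"}

def back_to_english(turkish: str) -> str:
    noun = turkish.split()[-1]
    # The original gender is gone, so the "model" falls back on its prior.
    return f"{STEREOTYPED_PRIOR[noun]} is a {ROLE[noun]}"

for sentence in ("she is a doctor", "he is a nurse"):
    turkish = TO_TURKISH[sentence]
    print(f"{sentence!r} -> {turkish!r} -> {back_to_english(turkish)!r}")
# 'she is a doctor' -> 'o bir doktor' -> 'he is a doctor'
# 'he is a nurse' -> 'o bir hemşire' -> 'she is a nurse'
```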

The panel moved on to discuss bias in more detail. Sheila was clear that we must discuss it openly: we all carry many biases, conscious and unconscious. (In fact, more than 180 human biases have been catalogued, and Wikipedia’s list of cognitive biases is a helpful reference.)

Abeth mentioned that there are capabilities that can help with this, offering some traceability to expose why an AI made a particular recommendation, for example. But she also emphasised that this does not mean we can leave it to the computer: a framework or a tool may highlight a bias, and we must then decide what we are going to do with that knowledge.
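As a minimal sketch of the kind of check such a framework might run, assuming a simple demographic-parity test, the snippet below compares a model’s positive-outcome rates across two groups. The decisions, group labels and data are invented for illustration; the 80% threshold echoes the “four-fifths rule” used in US employment guidance.

```python
# Minimal sketch of a demographic-parity check over hypothetical model
# output. A real fairness tool would run something like this at scale.

def selection_rate(decisions, groups, target):
    """Share of positive outcomes (1s) for one group."""
    picks = [d for d, g in zip(decisions, groups) if g == target]
    return sum(picks) / len(picks)

# Hypothetical loan approvals (1 = approved) tagged with applicant group.
decisions = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
groups    = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

rate_a = selection_rate(decisions, groups, "a")  # 0.8
rate_b = selection_rate(decisions, groups, "b")  # 0.2

print(f"approval rate: group a {rate_a:.0%}, group b {rate_b:.0%}")

# Four-fifths rule: flag if one group's rate falls below 80% of the other's.
if min(rate_a, rate_b) / max(rate_a, rate_b) < 0.8:
    print("flag: possible disparate impact; a human must decide what to do")
```

Note that the script only raises a flag; deciding what to do with that knowledge remains, as Abeth said, a human job.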

When it comes to what we can do about it, Nadia referred to a 2018 Accenture report that recommended designing for inclusion, and also considering diversity in the product itself (it is not necessarily a good thing that Alexa, Siri and Google Home all have female voices by default).

There was general agreement that innovation would be better with more diversity (and there are plenty of studies that already demonstrate this). Abeth sees a much better world when we have diversity and inclusion, with equal opportunity for everyone, and where we are not typecast in what we can or cannot do. How good will it be to just be ourselves?

Sarah gave a great example of AI being applied to job adverts, with the result being adverts that appeal to a broader range of people.

Sheila finished the discussion with this marvellous statement: “When different minds come together the results can be monumental”.

Contributed by

Sharon Moore MBE
CTO Public Sector UK
IBM