Humans – the masters not the servants of AI?

How we remain ethical and unbiased and avoid discrimination in the workplace.

Artificial Intelligence is a brilliant business tool that holds the potential to save countless hours of repetitive labor, freeing people to carry out work of greater value, greater interest, and higher creative worth. It’s clearly here to stay.

IT trade body CompTIA1 calculates that:

  • 91% of leading businesses are investing in AI
  • 97% of mobile users are using AI-powered voice assistants
  • More than 4 billion devices already run AI-powered voice assistants
  • 40% of people use an AI-powered voice search function every day

McKinsey reports that AI may contribute an extra $13 trillion to global GDP by 20302, as automation increases productivity and fuels innovation in products and services.

It’s about to get really personal too.

 

AI changes personal and professional lives

It seems likely that AI will feature much more in the way people organize their private lives as well as their work. Bill Gates posits that individualized digital assistants – known as agents – will learn enough about our lives, personalities, and preferences to provide a service that’s unique for each of us.

“An agent will be able to help you with all your activities if you want it to,” he writes.3

“With permission to follow your online interactions and real-world locations, it will develop a powerful understanding of the people, places, and activities you engage in. It will get your personal and work relationships, hobbies, preferences, and schedule. You’ll choose how and when it steps in to help with something or ask you to make a decision.”

So far so good – yet that’s not the whole story.

 

Real-world impacts of AI

AI is still limited in what it can do. It is also limited in what it should do.

Crucially, AI can only work inside the data sets to which it has access. What might seem like original work pouring out of an app is nothing more than a distillation of information that already exists in the world’s digital brain bank.

Deeply impressive, yes. But also deeply flawed.

It reflects the biases in its training data, and those biases can be amplified with every new iteration.

AI is morally neutral. It has no ethical sense of its own. It plagiarizes. It invents. It hallucinates, proposing sequences of facts and events that have no relation to reality. If it can’t find the real-world example you ask for, it may offer a hypothetical one – with worrying consequences for the careless or lazy user who passes it off as genuine and sends it out into the world.

 

Five areas of potential bias in AI

These flaws have real-world impacts, bringing a threat of bias and discrimination into an organization’s activity. This is already clear in five areas where AI is used:

1. Facial recognition

Some systems have been found to be less accurate at identifying the faces of women and people with darker skin than those of men and lighter-skinned people.

A study by Joy Buolamwini, from MIT Media Lab, and Timnit Gebru, formerly a researcher at Microsoft4, exposed big differences in the accuracy of facial recognition systems based on gender and skin type.
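
The core of the audit behind that finding is simple to sketch: run the same system against each demographic subgroup separately and compare the results, rather than relying on a single headline accuracy figure. Here is a minimal illustration in Python, using invented numbers rather than the study’s actual data:

    # Per-subgroup accuracy audit. The counts below are purely illustrative,
    # not the Gender Shades results.
    import pandas as pd

    results = pd.DataFrame({
        "subgroup": ["lighter-skinned male", "lighter-skinned female",
                     "darker-skinned male", "darker-skinned female"],
        "correct":  [985, 940, 930, 790],      # hypothetical correct classifications
        "total":    [1000, 1000, 1000, 1000],  # hypothetical test images per subgroup
    })

    results["accuracy"] = results["correct"] / results["total"]
    print(results[["subgroup", "accuracy"]])

    # A wide gap between the best- and worst-served subgroups signals disparate performance.
    gap = results["accuracy"].max() - results["accuracy"].min()
    print(f"Accuracy gap across subgroups: {gap:.1%}")

A system that only reports one overall accuracy number can hide exactly this kind of gap.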

2. Predictive policing

Predictive policing has been criticized for perpetuating and even amplifying existing bias in law enforcement data. If historical arrest data is biased, the AI model may inadvertently target specific communities, leading to over-policing and reinforcing stereotypes.

3. Job recruitment

AI-driven hiring tools have faced scrutiny for gender and racial bias. If historical hiring data reflects a skewed workforce, AI may exacerbate bias by favoring certain groups, leading to discrimination. Even Amazon5, one of the great pioneers and advocates of AI, has admitted its own processes have been affected. In 2018 it was reported that Amazon had developed a machine-learning tool to assess CVs. It was trained on CVs submitted over a 10-year period, when most applicants were male. As a result, the system reportedly favored CVs that included male-centric language and penalized those containing terms more commonly found in CVs submitted by women. Amazon abandoned the tool after discovering the bias.
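
The mechanism is easy to reproduce in miniature: train a text model on skewed historical hiring decisions, then inspect which words it has learned to penalize. The sketch below uses a handful of synthetic CVs purely for illustration – it is not Amazon’s tool or data:

    # How skewed historical data leaks into a model, and how to surface it.
    # The CVs and hiring labels below are synthetic and purely illustrative.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    cvs = [
        "chess club captain, led engineering projects",        # historically hired
        "competitive programming, managed technical team",     # historically hired
        "women's society president, collaborative projects",   # historically rejected
        "women's football captain, mentoring experience",      # historically rejected
    ]
    hired = [1, 1, 0, 0]

    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(cvs)
    model = LogisticRegression().fit(X, hired)

    # The words with the most negative weights are the ones the model penalizes.
    weights = sorted(zip(vectorizer.get_feature_names_out(), model.coef_[0]),
                     key=lambda pair: pair[1])
    for word, weight in weights[:5]:
        print(f"{word:15s} {weight:+.3f}")

At scale the same idea applies: audit what the model has actually learned before letting it near real candidates.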

4. Credit scores

If historical lending data is biased, AI algorithms may result in discriminatory lending practices, affecting certain racial or socioeconomic groups. The National Bureau of Economic Research published the 2018 study ‘Consumer-Lending Discrimination in the FinTech Era’6, based on research by economists at the University of California, Berkeley. It found that minority applicants, particularly Black and Latinx customers, were more likely to be charged higher interest rates than other customers.

5. Chatbots and virtual assistants

AI-driven bots and virtual assistants may reflect gender bias in their responses. Some have been criticized for responding inappropriately or reinforcing stereotypes. In 2016 Microsoft launched a chatbot, Tay, to interact on social media. It lasted less than a day before it had to be shut down7, as it absorbed and repeated racist and abusive language from users.

None of this means that we should thank AI for its services and pull the plug on a powerful innovation that is already firmly embedded in working practices and processes. We don’t need to consign machine learning to the scrap heap of unwanted or unnecessary inventions – along with Google Glass, the Segway, or Bill Gates’s affectionately maligned proto-digital assistant, Clippy.

It has long since gone beyond the tipping point of acceptance into mainstream life. Amazon Web Services defines generative AI – the creation of new images, media, and graphics – as the fastest-growing trend in AI. ChatGPT – probably the best-known AI platform – reached one million users in just five days. That compares to 75 days for Instagram and 150 days for Spotify.

So even if it were possible to cram the genie back into the bottle, that’s not where we are.

 

Attensi’s use of AI

Attensi and partners are already gaining real benefits from the application of AI in five key areas:

1. Translations

Adapting our training quickly and easily for more languages around the world.

2. Generating AI characters

Creating super-realistic avatar-style figures that bring a new level of believability to digital scenarios. A great example is Makayla, a character Attensi has introduced to the world. She is a recognizable personality, created with new animation tools, voice recognition, and synthetic voice generation. You can have a real conversation with Makayla, and she is an exciting prototype that we’re building on as we develop future solutions.

3. Faster content

For example: input a PDF and the AI can generate 20 multiple-choice questions based on its content.
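
As a rough sketch of that ‘document in, questions out’ pattern – not Attensi’s own pipeline, and with the libraries, model name, and prompt all assumed for illustration – the workflow can be as simple as extracting the PDF text and handing it to a language model:

    # Minimal "PDF in, multiple-choice quiz out" sketch.
    # Assumes the pypdf and openai packages and an OPENAI_API_KEY in the environment.
    from pypdf import PdfReader
    from openai import OpenAI

    def quiz_from_pdf(path: str, n_questions: int = 20) -> str:
        # Pull the raw text out of every page of the document.
        text = "\n".join(page.extract_text() or "" for page in PdfReader(path).pages)

        client = OpenAI()
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name; any capable model would do
            messages=[{
                "role": "user",
                "content": (
                    f"Based only on the text below, write {n_questions} multiple-choice "
                    f"questions, each with four options and the correct answer marked.\n\n"
                    f"{text}"
                ),
            }],
        )
        return response.choices[0].message.content

    # Example usage (hypothetical file name):
    # print(quiz_from_pdf("training_material.pdf"))

The usual caveat applies: a human still needs to check the generated questions before they reach learners.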

4. Create dialogues

Generating realistic interactions that make simulations credible and compelling.

5. AI voicing

Making digital characters sound precisely and convincingly human, just like Makayla.

 

Using AI with care

What it does mean is that businesses – and individuals, for that matter – must use AI with care. People must ensure that the human element remains in the driving seat, and bring their ethical sense and intuition to every application of AI. Critically, they must remain the masters and not the servants of this technology.

It all speaks to the importance of careful design and vigilant scrutiny to mitigate bias in AI systems. It requires the promotion of a culture where ethics always comes before speed and convenience. That’s a moral issue, yes. But it is also a business issue – any organization with aspirations for long-term sustainability must guard its reputation zealously, so that people still respect it, want to trade with it, and want to work for it.

Tackling the potential danger of discrimination is essential to ensure fair outcomes, and to establish AI as a lasting and valuable benefit for business and for humanity.

Interested in learning more about AI? Listen to our CEO, Trond Aas, and Muhammad Sajid, Senior Solution Architect at Amazon Web Services, discuss the future of AI and how it’s becoming part of our lives.



Sources

    1. 30+ Artificial Intelligence Statistics and Facts for 2023, connect.comptia.org/blog/artificial-intelligence-statistics-facts#:~:text=97%25%20of%20mobile%20users%20are,at%20least%20once%20every%20day
    2. Economic impacts of artificial intelligence (AI), europarl.europa.eu/RegData/etudes/BRIE/2019/637967/EPRS_BRI(2019)637967_EN.pdf
    3. AI is about to completely change how you use computers (and upend the software industry), linkedin.com/pulse/ai-completely-change-how-you-use-computers-upend-software-bill-gates-brvsc/
    4. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf
    5. Amazon scraps secret AI recruiting tool that showed bias against women, reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G/
    6. Consumer-Lending Discrimination in the FinTech Era, nber.org/papers/w25943
    7. Tay, Microsoft’s AI chatbot, gets a crash course in racism from Twitter, theguardian.com/technology/2016/mar/24/tay-microsofts-ai-chatbot-gets-a-crash-course-in-racism-from-twitter