Happy International Women’s Day.
It seems to me that, today, we need to unite for women’s equality as never before.
In today’s blog post, I want to spark your curiosity and provoke thought about the nuanced visibility of women in the AI world.
The world of artificial intelligence is evolving rapidly, influencing every aspect of our lives, from the way we work to how we conduct our daily activities. But as these technological advancements progress, one critical question remains: How does AI perceive and treat different genders, particularly women? Let me give you an example: in the numerous times I have used ChatGPT to translate from English into other languages, it has never considered that I might be a woman.
AI predictive models are tools designed to forecast future events based on historical data. While they offer numerous benefits, such as efficiency and personalized experiences, they also harbor risks, especially when they replicate societal biases. These biases can inadvertently perpetuate and amplify existing gender inequalities, affecting women’s opportunities and representation in various sectors.
Woman Persona
In Canada, the narrative of the average woman presents a complex and multifaceted picture. On one hand, she represents the strides that have been made toward gender equality: she is likely well-educated, with a significant proportion holding higher education degrees, often surpassing her male counterparts in this arena. Active in the workforce, she contributes to various industries, showcasing the versatility and capability of women in the professional sphere.
However, this image stands in stark contrast to a reality filled with ongoing challenges and deep-rooted biases. Despite her education and ability, the average Canadian woman faces a gender wage gap. Not only do women earn less money, but they also face higher costs and tend to invest less.
The barriers to professional advancement extend beyond economic factors. Women encounter the ‘glass ceiling’, an invisible barrier that prevents them from rising to the upper rungs of the corporate ladder, irrespective of their qualifications or achievements. This phenomenon reflects deeper issues of power dynamics, leadership stereotypes, and organizational cultures that favor male leadership styles. In 2024, we still haven’t bridged persistent representation and compensation gaps for women in management and leadership positions in corporate Canada, according to the Canadian Chamber of Commerce.
Furthermore, societal biases exacerbate the professional challenges women face.
- Gender Role Stereotypes: The belief that women should adhere to traditional roles such as caregiving, cooking, and domestic tasks, while men are seen as breadwinners and leaders. Women are often expected to be nurturing, emotional, and submissive, while aggression, ambition, and leadership are seen as masculine traits.
- Career and Professional Biases: In the workplace, women often face biases that can impact their hiring, promotion, and evaluation. This includes assumptions that women are less committed to their careers, less capable in STEM fields (science, technology, engineering, and mathematics), or that they should not occupy high-level positions.
- Appearance Bias: Women are often judged harshly on their appearance. This includes expectations to dress a certain way, maintain a particular body shape, or wear makeup. Professionalism for women can be unfairly linked with physical appearance.
- Maternal Bias: This is the assumption that mothers, or women of childbearing age, are less dedicated or competent at work. Women without children may also face bias, such as assumptions that they will eventually leave their jobs to have children.
- Age Bias: Women often face more discrimination than men as they age, being perceived as less attractive, less capable, or less relevant.
- Intellectual Bias: This involves underestimating women’s intelligence and capabilities. Women are often assumed to be less logical or less competent in fields considered to be male-dominated, such as math, science, or engineering.
- Ambition Bias: Women who display ambition, assertiveness, or leadership qualities can be negatively labeled as “bossy,” “aggressive,” or “unfeminine,” whereas the same traits in men might be viewed positively.
- Double-Bind Bias: Women often face a double-bind where they are penalized for not adhering to feminine stereotypes, but also penalized when they do. For example, a woman in a leadership role may be criticized for being too soft if she is compassionate or too harsh if she is assertive.
In summary, as the famous speech goes: it’s literally impossible to be a woman.
Would you say this represents every single one of us? Absolutely not, but this is how women are perceived, and that is dangerous. Why? Read on below.
Predictable Society
With the advancement of AI, we can all sense that our society is moving toward a future where AI predictive tools help us make decisions. The problem is that AI fluency is often mistaken for intelligence, and fewer and fewer people scrutinize the output of very convincing machines.
The biggest danger comes when we accept a prediction or suggestion as truth without questioning it at all.
Below is a high-level overview of how artificial intelligence (AI) models are trained. As you go through this list, please keep the “woman visibility” context in mind. (Professionals, please forgive me for this simplification; I am doing it exclusively to showcase potential challenges and opportunities.)
1. Data Collection:
The first step is collecting a large and diverse set of data relevant to the task at hand. This could include images, text, audio, or structured data, depending on the application. For example, for a model designed to recognize objects in images, you would collect thousands or even millions of labeled images where each image is tagged with the object it contains.
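To make this concrete, here is a minimal sketch of a first visibility check on freshly collected data. The dataset, column names, and values below are entirely hypothetical; the point is simply that representation can be measured before a model ever sees the data.

```python
import pandas as pd

# Hypothetical collected dataset (column names and values are made up):
# each row is one labeled example, with a self-reported gender field.
df = pd.DataFrame({
    "text":   ["resume A", "resume B", "resume C", "resume D", "resume E"],
    "label":  [1, 0, 1, 1, 0],
    "gender": ["woman", "man", "man", "man", "woman"],
})

# A basic visibility check: what share of the raw data represents each group?
print(df["gender"].value_counts(normalize=True))
```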
2. Data Preparation:
Once the data is collected, it must be cleaned and formatted properly. This process, known as data preprocessing, may involve the following (a short code sketch follows the list):
- Removing or correcting errors in the data.
- Normalizing data to ensure consistency.
- Augmenting the dataset to improve diversity and reduce overfitting.
- Splitting the data into training, validation, and test sets to evaluate the model’s performance.
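As a small illustration of the splitting step, here is a sketch in Python with hypothetical column names and toy values. It stratifies the split on gender, so that an already under-represented group is not accidentally squeezed out of the training or test set.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical prepared dataset; columns and values are invented for illustration.
df = pd.DataFrame({
    "years_experience": [1, 3, 5, 7, 2, 4, 6, 8, 3, 5, 7, 9],
    "hired":            [0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1],
    "gender":           ["woman", "woman", "woman", "woman",
                         "man", "man", "man", "man",
                         "man", "man", "man", "man"],
})

# Stratifying on gender keeps the (already small) share of women roughly
# constant across the splits, instead of letting a random split erase them
# from one of the sets entirely.
train_df, test_df = train_test_split(
    df, test_size=0.25, stratify=df["gender"], random_state=42
)
print(train_df["gender"].value_counts())
print(test_df["gender"].value_counts())
```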
3. Model Selection:
Next, an appropriate model architecture is selected based on the task. There are many types of models, from simple linear regressions to complex neural networks.
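As a sketch of what “selecting a model” can look like in code, here are two common candidates for a tabular classification task, using scikit-learn. The choice between them would normally depend on the data, the need for interpretability, and the evaluation results described below.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Two candidate model architectures for a tabular classification task:
# a simple, interpretable linear model and a small neural network.
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "small_neural_net": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000),
}
print(list(candidates))
```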
4. Model Training:
Training the model involves using the training data to adjust the model’s parameters (e.g., weights and biases in neural networks) so that it can accurately perform the desired task. This is typically done using an algorithm called backpropagation combined with an optimization technique such as gradient descent. The process involves the following steps (a minimal training-loop sketch follows the list):
- Feeding the training data into the model.
- Comparing the model’s predictions against the actual outcomes.
- Calculating the error (or loss) based on the differences.
- Updating the model’s parameters to reduce the error.
- Repeating this process until the model performs satisfactorily.
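To make the loop above tangible, here is a minimal, self-contained sketch of gradient descent on toy data: a tiny logistic-regression model that predicts, measures its error, and updates its weights, over and over. It is illustrative only, not how production systems are trained.

```python
import numpy as np

# Toy training data: 200 examples with 3 features and a binary label.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = (X @ true_w + rng.normal(scale=0.5, size=200) > 0).astype(float)

w = np.zeros(3)   # model parameters (weights)
b = 0.0           # bias term
lr = 0.1          # learning rate (a hyperparameter, see step 6)

for epoch in range(500):
    # 1. Feed the training data into the model.
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    # 2./3. Compare predictions with the actual outcomes and compute the loss.
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    # 4. Update the parameters in the direction that reduces the error.
    w -= lr * (X.T @ (p - y) / len(y))
    b -= lr * np.mean(p - y)
    # 5. Repeat until the model performs satisfactorily.

print("final training loss:", round(float(loss), 4))
```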
5. Evaluation:
After training, the model is evaluated using the validation and test sets to ensure it generalizes well to unseen data. If the model does not perform well, adjustments may be made to the model architecture, training procedure, or data preprocessing.
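In the context of women’s visibility, evaluation is exactly where a single aggregate number can hide a problem. Below is a small, hypothetical sketch of breaking accuracy down by gender; the labels and predictions are invented purely for illustration.

```python
import pandas as pd
from sklearn.metrics import accuracy_score

# Hypothetical held-out test results (all values are made up).
results = pd.DataFrame({
    "gender":     ["woman", "woman", "woman", "man", "man", "man", "man", "man"],
    "true_label": [1, 0, 1, 1, 0, 1, 0, 1],
    "predicted":  [0, 0, 1, 1, 0, 1, 0, 1],
})

# Overall accuracy can look acceptable while hiding a gap between groups,
# so the evaluation should also be broken down by gender.
print("overall:", accuracy_score(results["true_label"], results["predicted"]))
for gender, group in results.groupby("gender"):
    print(gender, accuracy_score(group["true_label"], group["predicted"]))
```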
6. Hyperparameter Tuning:
Hyperparameters are settings that are not learned from the data but rather set before the training process. They can significantly impact model performance. Examples include the learning rate, batch size, and number of epochs. Hyperparameter tuning involves searching for the set of hyperparameters that yields the best performance on the validation set.
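Here is a compact sketch of hyperparameter tuning with a grid search on toy data, using scikit-learn; the grid, the model, and the data are all placeholders chosen just to show the mechanics.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Toy data standing in for the prepared training set.
X, y = make_classification(n_samples=300, n_features=5, random_state=0)

# Each candidate value of the regularization strength C is scored on
# validation folds, not on the data it was trained on.
search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
    cv=5,
)
search.fit(X, y)
print("best hyperparameters:", search.best_params_)
print("best cross-validated score:", round(search.best_score_, 3))
```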
7. Prediction:
Once the model is trained and fine-tuned, it can be used to make predictions on new, unseen data. This is the ultimate goal of the training process. For example, a trained image recognition model can classify new images, or a trained language model can generate text or translate languages.
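And the prediction step itself, sketched on the same kind of toy data: a trained model applied to examples it has never seen. Again, everything here is synthetic and for illustration only.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy data; the last three rows are held back to play the role of
# new, unseen examples arriving after training.
X, y = make_classification(n_samples=300, n_features=5, random_state=0)
X_train, y_train, X_new = X[:-3], y[:-3], X[-3:]

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("predicted classes:      ", model.predict(X_new))
print("predicted probabilities:", model.predict_proba(X_new).round(3))
```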
8. Monitoring and Updating:
In real-world applications, AI models may degrade over time as the world changes (a phenomenon known as concept drift). Therefore, it’s important to monitor the model’s performance and update it with new data or adjust its parameters as needed.
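A simple, hypothetical example of such monitoring, tied back to the visibility theme: compare the share of women in the data the model was trained on with the share in recent requests, and flag a large gap as possible drift. The numbers and the threshold below are invented.

```python
import numpy as np

# Hypothetical gender composition of the training data vs. recent traffic.
train_gender  = np.array(["woman"] * 30 + ["man"] * 70)
recent_gender = np.array(["woman"] * 12 + ["man"] * 88)

train_share = np.mean(train_gender == "woman")
recent_share = np.mean(recent_gender == "woman")

# A large gap suggests the population the model serves has shifted away
# from the population it was trained on; retraining or re-weighting may help.
if abs(train_share - recent_share) > 0.10:
    print(f"Possible drift: women were {train_share:.0%} of the training data "
          f"but only {recent_share:.0%} of recent requests.")
```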
Now, after reviewing the process, and with women’s visibility in mind, consider these questions:
- What types of data are currently being collected for AI applications, and how accurately do they represent the diverse experiences and identities of women from various backgrounds?
- Who is responsible for analyzing the datasets used in AI, and what measures are they taking to identify and mitigate common biases against women?
- How are current AI models tested for gender biases, and what standards are in place to ensure these models perform equally well for data representing women?
- In decision-making processes influenced by AI predictions, whose samples and experiences are predominantly used, and how does this impact women’s representation and outcomes?
- What initiatives are currently underway to integrate feedback from a broad spectrum of women into the ongoing development, evaluation, and updating of AI systems to ensure their fairness and inclusivity?
Future
As the mother of a teenage girl, I want to be excited in this part of my blog post, but to be honest, I am concerned. The future for women in a society influenced by biased humans managing and training AI could be grim.
Professional opportunities could narrow, and societal roles might become more rigid, limiting women’s potential and reinforcing stereotypes.
In a predictable society governed by algorithms, decisions about what is supposedly best for you are made without offering much flexibility for you to express your own preferences and desires.
I highly recommend reading “The Algorithm: How AI Decides Who Gets Hired” by Hilke Schellmann. This book provides an insightful examination into the role of AI in employment practices and its implications for gender equality.
We have already lived in a world in which your role was predefined because you were a woman. If we don’t pay attention now, we might return to it, just in a different dimension and reality.
Therefore, now more than ever, we should strengthen our mutual call for an equitable world for all, the AI world included.
Awareness is the first step towards change.
Thank you for reading! Never stop learning.
Happy International Women’s Day.