Unmasking the Machine: How AI Fuels Uncertainty
- dilaaydogmus02
- Jun 3, 2025
- 4 min read

When AI makes mistakes, where do ethics stand, and what happens to our culture?
May 13, 2025
By DEMARTE Enzo, AYDOGMUS Meryem Dila, GUNGEN Zehra
In a world where technology connects us more than ever, it’s also creating new barriers that threaten to divide us. From AI bias to the spread of misinformation, these challenges are shaping how we interact, trust, and understand one another across cultures. While AI promises a future of innovation and convenience, its biases raise questions about fairness and accountability. Meanwhile, the digital age has made misinformation a powerful tool for reinforcing stereotypes and fueling polarization, leaving us struggling to find common ground and connect.
The Creation of Uncertainty due to AI Bias
AI bias occurs when algorithms produce results influenced by human prejudices, distorting outcomes and potentially causing harm (Holdsworth, 2023). For many students, like Arzum U., a linguistics major who uses AI in her studies, the bias comes from society itself. "It’s mainly because of society. We have prejudices, and it reflects in AI since it’s a product created by humanity," she explains. So, when does AI bias appear, and how does it lead to uncertainty? According to AI researchers, bias often arises when the data used to train AI systems reflects existing societal problems like racism or sexism, or when the data isn’t diverse enough (e.g., over-representing one group). The AI then replicates these biases. Real-world examples are evident in areas like credit scoring, where algorithms can disadvantage certain racial or socioeconomic groups, or job screening systems that reinforce workplace biases (SAP, 2024). This makes outcomes less fair, less trustworthy, and harder to predict. Moreover, it is often unclear who is responsible when a biased AI causes harm, which adds further uncertainty for developers, companies, and users alike.
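The researchers' point about unrepresentative data can be made concrete with a toy model. The Python sketch below uses entirely made-up numbers and a deliberately naive scoring rule (learning each group's historical approval rate, a hypothetical stand-in for a real credit-scoring system): a model trained on skewed past decisions simply reproduces the skew.

```python
from collections import defaultdict

# Hypothetical training data: (group, approved) pairs drawn from past
# decisions. Group "A" is over-represented and historically favoured.
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 3 + [("B", 0)] * 7

def train(records):
    """Learn the historical approval rate for each group."""
    totals = defaultdict(lambda: [0, 0])  # group -> [approvals, applications]
    for group, approved in records:
        totals[group][0] += approved
        totals[group][1] += 1
    return {g: ok / seen for g, (ok, seen) in totals.items()}

model = train(history)
# The model inherits the disparity in its training data: applicants from
# group B get a far lower score through no fault of their own.
print(model["A"])  # 0.8
print(model["B"])  # 0.3
```

Nothing in this code is malicious; the unfairness comes entirely from the data it was handed, which is exactly why diverse, representative data collection matters.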

However, that doesn’t mean we should give up on AI. We can mitigate AI bias through diverse data collection, ensuring that AI systems learn from a wide range of scenarios and demographics so they make fairer decisions (DigitalOcean, 2024). Transparency is equally crucial. Informing users about how AI systems function and making their processes clear can reduce errors and lower the risk of reinforcing harmful stereotypes.
The Cultural Cost of Misinformation
In today’s fast-moving digital world, the spread of misinformation has become a silent barrier to cross-cultural understanding. On the internet and social media, misinformation can reinforce harmful stereotypes, spread fear, and even encourage hate (Hughes et al., 2021). Different cultures have different ways of understanding and reacting to information—some may trust online sources more, while others rely on personal or cultural values to judge what is true or false. In intercultural communication, these differences can lead to misunderstanding or conflict. For example, during the COVID-19 pandemic, misinformation and fake news fueled stigmatization and xenophobia against certain communities (Hughes et al., 2021).

Studies have shown that individuals with lower media and information literacy are more susceptible to believing and sharing misinformation (Yamaguchi & Tanihara, 2025). As Lucas S., an exchange student, says, “Sometimes it’s hard to know what’s true, especially when public figures and famous people say things that sound convincing and generate false narratives. This makes media literacy more important than ever.” Teaching people critical thinking and how to verify facts is crucial if we want to build media literacy and respectful cross-cultural communication.
The Challenge of Ethical Integration
Integrating AI into society is not just a technical challenge – it’s a moral one. How we design, deploy, and regulate AI will define its impact. As machines create more content and make more decisions, questions arise about who is accountable for their outputs and whether machines can truly “own” their creations. Many AI systems operate as black boxes (Rudin, 2019), making it difficult to understand how they work or to ensure fairness in their decisions; this undermines trust and hinders accountability. AI must be developed with clear ethical principles in mind. As Maxime H., an exchange student, puts it: “Facing these new issues, justice is very often late.”

In conclusion, as AI and digital tools become part of our everyday lives, it’s clear that we need to think more about how they shape the way we see the world and each other. From biased algorithms to the spread of misinformation, the challenges are real. But so are the solutions. By building media literacy, learning to recognise false information, and supporting more ethical, inclusive technology, we can make sure these tools connect us instead of separating us. The future of technology isn’t just about machines; it’s about the people behind them and the values we choose to build into the systems we use every day.
Bibliography
Dazeley, P. (2024). Computer keyboard [Photograph]. Getty Images. https://www.gettyimages.co.jp/detail/写真/computer-keyboard-ロイヤリティフリーイメージ/2184983726
DigitalOcean. (2024, July 29). Addressing AI bias: Real-world challenges and how to solve them. https://www.digitalocean.com/resources/articles/ai-bias
European Commission. (2019, April 8). Ethics guidelines for trustworthy AI. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
Holdsworth, J. (2023, December 22). What is AI bias? IBM. https://www.ibm.com/think/topics/ai-bias
Hughes, B., McCauley, C., Jenkins, B. M., & Gamson, W. A. (2021). Cultural variance in reception and interpretation of social media COVID-19 disinformation in French-speaking regions. International Journal of Environmental Research and Public Health, 18(23), 12624. https://doi.org/10.3390/ijerph182312624
Rudin, C. (2019, May 13). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence. https://www.nature.com/articles/s42256-019-0048-x
SAP. (2024, October 29). What is AI bias? Causes, effects, and mitigation strategies. https://www.sap.com/resources/what-is-ai-bias
Shutterstock. (2020). Online corona fake news on mobile [Photograph]. https://www.shutterstock.com/ja/image-photo/online-corona-fake-news-on-mobile-1716020299
Shutterstock. (2024). AI law concept codes ethics compliance business [Photograph]. https://www.shutterstock.com/ja/image-photo/ai-law-concept-codes-ethics-compliancebusiness-2511146589
Smith, J. (2024). AI artificial intelligence man uses laptop [Photograph]. Shutterstock. https://www.shutterstock.com/ja/image-photo/ai-artificial-intelligence-man-uses-laptop-2550221353
Yamaguchi, S., & Tanihara, T. (2025). Relationship between misinformation spreading behaviour and true/false judgments and literacy: An empirical analysis of COVID-19 vaccine and political misinformation in Japan. Global Knowledge, Memory and Communication, 74(1/2), 35–53. https://doi.org/10.1108/GKMC-12-2022-0287


