In the futuristic films that shaped twentieth-century childhoods, the world was full of flying cars, robots with emotions, and intelligent machines serving humans. Today we may not be flying to work, but we do live in a reality where algorithms increasingly make decisions for us: silently, subtly, and often without any oversight.
Artificial intelligence (AI) is no longer just a technology of the future. It now personalizes the news we read, generates images and text that can be hard to distinguish from the real thing, recommends products, and influences who gets hired or approved for a loan. Yet one question is being raised more and more often: can we really say this intelligence is impartial?
When an AI system makes decisions based on previously collected data, it can easily replicate and amplify existing social injustices. As Dr. Miroslava Jordović Pavlović points out, “If the data is inaccurate, the model becomes biased because it makes decisions based on what it learned.” This is most evident in visual tools. If a user enters a simple prompt—such as “a happy girl dancing”—there’s a high chance the AI will generate an image of a white-skinned girl, even if no race or ethnicity was specified. What seems like a harmless request reveals how deeply AI models are shaped by dominant patterns and implicit biases. If the data used to train the model largely represents a certain demographic group, there’s no guarantee the system will reflect the diversity we expect.
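To make this mechanism concrete, here is a minimal, deliberately oversimplified sketch in Python. The prompts and labels are invented for illustration: a toy "model" that merely memorizes the most frequent outcome in its training data will reproduce that skew in everything it generates.

```python
from collections import Counter

# Hypothetical training set: prompts paired with the demographic the
# generator happened to depict. The labels are invented for illustration.
training_data = [
    ("a happy girl dancing", "light-skinned"),
    ("a happy girl dancing", "light-skinned"),
    ("a happy girl dancing", "light-skinned"),
    ("a happy girl dancing", "dark-skinned"),
]

def train_majority_model(examples):
    """A toy 'model' that memorizes the most frequent outcome per prompt."""
    by_prompt = {}
    for prompt, label in examples:
        by_prompt.setdefault(prompt, Counter())[label] += 1
    return {prompt: counts.most_common(1)[0][0]
            for prompt, counts in by_prompt.items()}

model = train_majority_model(training_data)
# The prompt never mentions skin tone, yet the output commits to one:
print(model["a happy girl dancing"])  # -> "light-skinned"
```

Real generative models are vastly more complex, but the underlying dynamic is the same: whatever pattern dominates the training data tends to dominate the output.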
This phenomenon is especially concerning given that most users have little to no understanding of how these systems function. AI tools are increasingly accessible and simple to use, but their inner workings remain highly complex. In this gap between everyday use and lack of understanding lies a serious risk—people may accept what AI delivers as objective truth, unaware of the invisible biases behind it.
One of the most visible moments when this issue entered public debate happened in 2023, when German artist Boris Eldagsen won a prize at the Sony World Photography Awards with a photo generated by AI. Eldagsen later declined the award, stating that he had submitted the image to provoke a discussion about the boundary between human creativity and machine-generated art.
Yet the dilemmas don't stop at art. UNESCO warns about the risks of integrating AI into judicial systems. While algorithms can help process large caseloads efficiently, there is a real danger that hidden biases in the code could influence verdicts, without citizens having any means of understanding or challenging how those decisions were made. That is why UNESCO adopted the first global document on the ethics of artificial intelligence in 2021, emphasizing transparency, the inclusion of diverse social groups in technology development, and the accountability of AI system creators. A simple example illustrates the point: ask an AI to generate a list of "the greatest leaders in history," and the result will very likely be dominated by male names.
In this context, ethics is not just about following rules—it’s about critically examining who is designing the technology, in whose interest, and with what consequences. Can people even realize they’ve been discriminated against if they never learn how the algorithm made its decision? Can judges, doctors, or educators rely on AI assistance if they don’t know how it “thinks”?
Serbia’s national AI development strategy through 2025 identifies ethics as one of its five key pillars. The plan includes mechanisms to ensure transparency, accountability, and compliance with safety standards. The strategy also prioritizes education, so that citizens can understand, identify, and responsibly use AI tools. It calls for verifying machine learning systems and ensuring they align with ethical and security norms.
Artificial intelligence now shapes our digital environment—moderating information, offering recommendations, and generating new content. As a result, digital literacy is no longer just about knowing how to use technology. It must also include an understanding of how algorithms collect and process data, how content is personalized, and how to recognize AI-generated material such as deepfake videos, synthetic images, and automated texts. In a world where automation is part of everyday life, users need to understand how systems operate within smart devices, virtual assistants, and content recommendation platforms.
Adding to the complexity is the fact that most people don’t know what kinds of data algorithms collect about them or how that data is used. In a daily routine where AI tools are so easy to use and require virtually no technical knowledge, few people stop to ask what’s happening “behind the screen.” This leads to passive acceptance of technology that isn’t necessarily neutral. While we often assume algorithms make “objective” decisions, they are actually just reflecting the data they were trained on—and that data, as we know, comes from a world full of stereotypes and inequalities.
Our trust in technology must be earned. Technology itself does not make ethical choices; those choices come from the people who design and apply it. The problem is not the algorithm per se, but the goals and assumptions humans build into it. Maintaining an "audit trail", a record of who trained the algorithm, when, and under what parameters, could improve oversight and make errors easier to trace and correct.
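What might such a record look like in practice? A minimal sketch follows; the field names and values are illustrative assumptions rather than any established standard, but they show the kind of provenance an auditor would need.

```python
import json
from datetime import datetime, timezone

# Hypothetical audit record for one training run. The field names are
# illustrative assumptions, not an established standard.
audit_entry = {
    "model_name": "loan-approval-classifier",       # invented example system
    "trained_by": "data-team@example.org",          # who ran the training
    "trained_at": datetime.now(timezone.utc).isoformat(),
    "training_data": "applications_2019_2023.csv",  # provenance of the data
    "hyperparameters": {"learning_rate": 0.01, "epochs": 20},
    "known_limitations": "Historical approvals under-represent some groups.",
}

# Append-only log: each training run adds one line auditors can inspect later.
with open("training_audit.log", "a", encoding="utf-8") as log:
    log.write(json.dumps(audit_entry) + "\n")
```

An append-only log like this does not make a system fair by itself, but it makes it possible to ask, after the fact, who trained the model, on what data, and under what assumptions.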
As automated systems continue to spread, digital literacy must expand to include a solid understanding of how those processes work, from smart assistants to AI in medicine and law. Education—particularly education that fosters critical thinking and ethical awareness—is emerging as the most important response to these challenges. Because the question is not only how to make AI efficient, but how to ensure it is fair.
In an age where artificial intelligence makes decisions that directly affect people’s lives, being a passive user is no longer an option. Every generation—especially young people—must be equipped to identify potential manipulations, understand the consequences of automated decisions, and ask the right questions. Only an informed society can ensure that technology works for everyone—not just for those who create it.
Sources:
- https://www.unesco.org/en/artificial-intelligence/recommendation-ethics/cases
- https://www.ai.gov.rs/tekst/sr/238/etika-u-vestackoj-inteligenciji.php
- https://ue.akademijazs.edu.rs/blog-etikapravicnost-i-privatnost-u-vestackoj-inteligenciji/
This text was written as part of the activities of the MEDEA project – Developing Media Literacy to Debunk Gender-Related Media Manipulation and Fake News, No. 2024-1-LV01-KA210-ADU-