It’s undeniable that AI (Artificial Intelligence) is becoming more present in our daily lives—on social media, in news articles, memes, and more. AI-generated content, especially images and videos, is spreading rapidly, and Malaysian netizens are not immune to its effects. The technology has become so advanced that sometimes, it’s hard to tell what’s real and what’s generated. And that’s where the problem begins.
A few days ago, I came across an ad on YouTube with a fake thumbnail showing political figure Tun Mahathir being “arrested” by police officers, paired with clickbait titles like “You wouldn’t believe what he said” and “Everyone was shocked by Mahathir’s words.” This was clearly fake news, but it looked real enough to fool someone scrolling by. That’s the danger of AI-generated content: it can blur the line between truth and fiction, especially when it’s used for political propaganda.
Content like this likely runs afoul of several Malaysian laws governing speech and expression.
Take the Sedition Act 1948: the advertisement uses an AI-generated image of a prominent political figure to spread fabricated content that could incite hatred or unrest. Politically motivated misinformation like this can be seen as stirring public discontent and undermining the government’s credibility.
Besides that, the Communications and Multimedia Act 1998 (CMA) also applies, as the case involves the spreading of fake news. The CMA bans content that is false, offensive, or indecent, especially when posted online to harass or deceive, so fake AI content could technically be prosecuted under this act. But when AI creates harmful content, it’s unclear who is responsible: the person who posted it, the developer of the tool that generated it, or the platform hosting it. The waters are still muddy when it comes to AI, and these laws don’t clearly address how AI-generated content fits in.
Let’s not forget defamation law (libel), which covers AI-generated images or deepfakes that are posted and spread online, damaging a person’s reputation even though they are not real. Under Section 499 of the Penal Code, creating or sharing false content that harms a person’s reputation can lead to legal action.
Just look at what happened in Johor recently: a male student allegedly used AI to generate explicit images of female classmates. Thirty-eight victims were identified, the youngest reportedly only 12 or 13 years old. Their faces were attached to pornographic content and shared online. The school responded, police got involved, and the Deputy Communications Minister called for stronger digital safety laws. But it’s clear the damage had already been done, leaving the victims’ reputations permanently tarnished.
It’s heartbreaking, and it shows how damaging AI misuse can be—not just to reputations, but to people’s mental health and safety. Once this kind of content is out there, it’s nearly impossible to fully erase it. Malaysia did introduce the National Guidelines on AI Governance and Ethics (AIGE) in 2024, designed to ensure the responsible, safe, and ethical integration of AI. But these recent events show that current protections aren’t enough, and there is still no specific law regulating the use of AI in Malaysia.
So, what needs to change?
- New laws specifically targeting AI-generated content
- Stricter safeguards from AI platforms to prevent sexual, racist, or misleading content
- Clearer accountability for when harm is caused
At the end of the day, AI should be here to inspire people, not ruin lives or spread lies. If we want to use it responsibly, the authorities and government need to step up. AI isn't going away anytime soon, but the way we deal with it has to change.
Sources
https://mastic.mosti.gov.my/publication/the-national-guidelines-on-ai-governance-ethics/
Written by Nicole