

The Truth About Anti AI Arguments: What Critics Get Wrong in 2025

by | Apr 16, 2025 | Defending AI Art, Learning Resources, Myth-Busting


The divide between Big Tech and left-wing groups has pushed anti-AI feelings to new heights in 2025. People who speak against AI often base their arguments on wrong ideas about how these systems actually work.

Critics point to AI’s environmental effects, citing hyperscalers’ 40-50% increases in carbon emissions. They miss an important fact: AI has propelled development of nuclear power plants in the US. The planned buildout has grown 20 times larger in the last eighteen months. People claim AI “steals” intellectual property, but these systems use predictive algorithms instead of copying existing work. Supporters say AI gives everyone access to creative tools. Yet independent artists tell a different story – they’re losing work to AI-generated alternatives.

This piece will get into what critics misunderstand about AI in 2025. We’ll look at the real ethical issues that need attention and why some anti-AI arguments completely miss the point.

Why the Public Gets AI So Wrong

“I am very, very worried about Generative AI amplifying fears and polarizing us. In this respect I think Generative AI is even more dangerous than the search recommendation in social media engines that have been making decisions about what to show us, and what not to show us.” — De Kai, AI Professor at ICSI (UC Berkeley) & HKUST (Hong Kong), Machine Translation Pioneer, Member of Google’s Inaugural AI Ethics Council

Reality and public perception of artificial intelligence rarely line up. This gap in understanding drives many arguments against AI today.

The role of memes and viral outrage

Memes have evolved into powerful tools that shape how people think about AI. These simple visual posts hide problematic messages behind humor and satire, and so they end up shaping what people believe quite effectively.

Research shows how AI-generated memes have crept into political discussions worldwide. To name just one example, politicians have shared AI-created images without telling anyone they weren’t real, making it harder to separate truth from fiction.

This emotional manipulation serves a purpose: it makes reasonable discussion about what AI can and cannot do almost impossible.

Social media algorithms combined with AI tools create perfect conditions for misleading content to reach specific audiences quickly. This creates an environment where telling fact from fiction gets harder every day.

Misunderstanding how AI actually works

People misunderstand AI technology because they learn about it from movies and TV shows rather than technical sources. Popular media shows AI with human-like consciousness – think HAL 9000 or the Terminator.

Many people also believe AI is a brand-new technology, even though it started in the 1950s. They don’t know about its long history.

The most dangerous myth suggests AI can be completely objective. The truth? AI systems learn from internet data that contains human biases and limitations.

Common myths about AI include:

  • AI has human traits like consciousness and emotions
  • AI improves itself without human help
  • AI possesses mysterious “emergent abilities” beyond our control
  • AI works autonomously without errors


People worry more about AI’s effect on society than their personal lives. This pattern shows how public perception often misses reality.

Better understanding requires more than fact-checking. Looking at content alone no longer helps determine what’s true or trustworthy.

The Ghibli Meme Controversy and What It Reveals

The Ghibli-style AI image trend took social media by storm in early 2025. This revealed how people misunderstood artificial intelligence and copyright. OpenAI’s updated GPT-4o model let users turn photos into dreamlike images that looked like Studio Ghibli’s iconic style.

How AI mimics style, not content

GPT-4o doesn’t copy frames from Ghibli films directly. The model learns to link certain patterns, such as color palettes or brushstrokes, with specific prompts through training on large datasets.

The model then applies these learned associations when users ask for “Ghibli-style” images. This “style engine” process helps AI pick out and apply visual features to brand new images.

This explains why AI-generated Ghibli art might look right but feel empty: it picks up surface patterns without grasping Miyazaki’s creative vision.
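To make the “learns patterns, not copies” point concrete, here is a deliberately tiny sketch in Python: a first-order Markov chain that stores only word-following statistics from a corpus and samples new sequences from them. This is a toy stand-in for what large models do at vastly greater scale, not a description of how GPT-4o is actually implemented.

```python
import random
from collections import defaultdict

def train(corpus_words):
    """Count which words tend to follow each word. The trained model
    keeps only these statistics, not the source text itself."""
    follows = defaultdict(list)
    for a, b in zip(corpus_words, corpus_words[1:]):
        follows[a].append(b)
    return follows

def generate(follows, start, length=8, seed=0):
    """Sample a new word sequence from the learned statistics."""
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(length):
        if word not in follows:
            break
        word = rng.choice(follows[word])
        out.append(word)
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug".split()
model = train(corpus)
print(generate(model, "the"))
```

Note that `model` holds nothing but co-occurrence counts; the generated sentence is assembled fresh each time, which is the (very simplified) sense in which generative systems predict rather than retrieve.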

Why ‘Ghibli-style’ isn’t copyright infringement

The legal argument focuses on a vital difference: copyright protects specific works, not artistic styles in general.

Legal experts warn this line isn’t clear-cut. Courts could still find infringement if AI-generated images contain original, expressive elements that look too much like copyrighted works.

This gray area sits at the center of current lawsuits like Andersen v. Stability AI. The biggest problem isn’t whether AI can copy a style, but if that copying creates unauthorized derivative works.

AI companies often lean on fair use, though courts haven’t really tested this defense. OpenAI says its policies refuse requests to mimic individual living artists. Yet they allow “broader studio styles” like Ghibli’s, even though Miyazaki is still alive at 84.

The Miyazaki quote and its context

The most misunderstood part of this debate might be Miyazaki’s famous quote calling AI “an insult to life itself.” People often leave out important background when they cite this criticism of AI-generated Ghibli art.

The quote comes from a 2016 documentary clip in which developers showed Miyazaki an AI-generated animation of a grotesque, zombie-like figure dragging itself across the ground.

Miyazaki’s emotional response targeted this particular demonstration, not the whole technology.


This detailed reality shows how critics often strip away context to attack AI. The debate tells us more about our fears of technology than the real strengths or limits of artificial intelligence.

Virtue Signaling and the Anti-AI Movement

The debate about AI goes beyond technical discussions. The anti-AI movement has become a powerful way for people to show off their moral values. Research shows that people oppose artificial intelligence more to display their ethics than to address real concerns about the technology.

Moral outrage as a social currency

People use moral outrage as social capital on social platforms.

Lab experiments back this up. This explains why anti-AI rhetoric gets so extreme – stronger condemnation leads to bigger social rewards.

Social media platforms magnify this effect.

The rise of anti-AI pledges and purity tests

Companies see marketing potential in taking anti-AI stances. The illustration app Procreate earned widespread praise from artists in 2024 when CEO James Cuda promised to avoid AI features.

This trend shows up across many industries.

These pledges show how companies use anti-AI sentiment to stand out. Jack Woodhams, who runs the portfolio site PosterSpy, explained his strict no-AI policy along the same lines.

When criticism becomes performative

People who criticize the anti-AI movement point out how opposition turns into a show rather than real criticism. One analysis looks at the Ghibli AI meme controversy that sparked massive outrage.

The shallow nature of this behavior raises concerns. Critics focus on getting social approval and ignore how AI really works.

Research shows that network-level norms affect social reinforcement. This explains why anti-AI communities develop more extreme views over time.

The anti-AI movement tells us more about human social behavior than artificial intelligence. Understanding how people use technology criticism to signal virtue helps us separate real ethical concerns from moral posturing.

The Real Ethics of AI Use

“There’s still a lot of knowledge that needs to be gained about how LLMs and Generative AI works. We would benefit from a lot of transparency into how these systems work and where the limitations are, such as where the biases are, where the data was trained at.” — De Kai, AI Professor at ICSI (UC Berkeley) & HKUST (Hong Kong), Machine Translation Pioneer, Member of Google’s Inaugural AI Ethics Council

Real ethical concerns about AI need serious attention beyond the current hype and hysteria. These problems are quite different from the performative outrage we see in public discussions, yet they remain crucial to developing responsible technology.

Prompting vs authorship: who’s responsible?

The ownership battle over AI-generated content continues to rage in 2025. The U.S. Copyright Office has declared that text prompts alone do not amount to human authorship.

This position comes from a basic principle: copyright law requires human creativity.

Other countries see things differently. The Beijing Internet Court ruled an AI-generated image could be copyrighted.

Transparency and consent in AI training

Most AI developers disclose little about the data used to train their models.

This lack of transparency creates several issues:

  1. Rightsholders can’t protect their work when it’s used without permission
  2. External auditors can’t check models for bias or harmful content
  3. Users can’t make informed choices about their tools
  4. Lawmakers can’t regulate what they can’t see
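One practical remedy for the four gaps above is a training-data “datasheet” published alongside a model. The sketch below is hypothetical: the field names and values are illustrative placeholders, not any real company’s disclosure or an established schema.

```python
# A minimal, hypothetical training-data datasheet. Every field name and
# value here is invented for illustration.
datasheet = {
    "dataset_name": "example-web-text-v1",        # placeholder name
    "sources": ["licensed news archives", "public-domain books"],
    "collection_period": "2023-2024",
    "consent_policy": "site opt-outs honored via robots.txt",
    "known_gaps": ["low coverage of non-English text"],
    "audit_contact": "audits@example.com",        # placeholder address
}

# Each transparency issue maps to a field a rightsholder, auditor,
# user, or lawmaker could actually inspect.
for key, value in datasheet.items():
    print(f"{key}: {value}")
```

The point is not the specific fields but that a machine-readable disclosure gives each stakeholder in the list above something concrete to check.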

Trust breaks down without transparency, which explains why many creators take anti-AI stances.

Different regions handle regulation differently. The EU’s AI Act pushes providers toward disclosing summaries of training data, while the U.S. has so far taken a lighter, more fragmented approach.

When AI use crosses ethical lines


Building ethical AI means following core principles from the Belmont Report:

  • Respect for Persons: Recognizing individual autonomy and getting informed consent
  • Beneficence: Following the “do no harm” principle to avoid magnifying biases
  • Justice: Distributing AI’s benefits and burdens fairly across society

Good governance means setting clear roles and responsibilities.

Arguments against AI grow louder each day. These important ethical concerns need careful thought rather than outright rejection. We must move past blind acceptance and uninformed opposition to address these issues properly.

AI as a Tool for the Underdog

Unlike what AI critics say, artificial intelligence acts as a powerful equalizer for independent creators. The technology moves creative power from those with deep pockets to anyone with imagination and ideas.

How indie creators benefit from AI

Independent filmmakers now use AI to analyze audience reactions through telemetry data. Young writers without film school credentials can now compete with seasoned professionals. Game developers find AI tools speed up their content creation.

Musicians have discovered new opportunities too. A developer puts it well: “I see AI as being analogous to sewing machines and calculators.”

Democratizing access to creative tools

AI changes who can create professional-quality content. It turns complex tools into easy-to-use, tailored conversations.

This transformation goes beyond convenience: it changes who gets to participate in professional creative work at all.

Why AI isn’t just for big corporations

Small businesses and independent creators reap major benefits from AI’s democratizing effects, and the playing field levels out.

More unique voices will emerge as this technology becomes available to all. Traditional gatekeepers won’t hold them back anymore.

The Cyberpunk Irony: Fighting AI with AI

The biggest irony of the anti-AI movement comes from its misuse of cyberpunk themes—a genre that welcomes technology as a tool against oppressive systems.

Why AI fits the cyberpunk ethos

The 1980s saw cyberpunk fiction emerge as a response to neoliberal economics and upcoming technological changes. These stories help us think about where unchecked capitalism might take society. Yet cyberpunk never opposed technology—it’s quite different. The genre shows technology as both the problem and solution, giving underdogs a way to fight back against corporate control.

Stories like Blade Runner and The Matrix show protagonists who employ advanced technology to challenge power structures. This mirrors how the cypherpunk movement supports privacy-enhancing technologies to encourage personal freedom and resist control systems. The replicants’ struggle for recognition reflects privacy advocates’ challenges against surveillance and power abuse.

Misreading genre themes to justify fear

AI critics often misunderstand cyberpunk’s warnings. The genre doesn’t warn us about technology but about unchecked corporate interests controlling it. One analysis points out that cyberpunk authors “nearly always portray future societies in which governments have become wimpy and pathetic… with power clutched in the secretive hands of a wealthy or corporate elite.”

Critics cite cyberpunk aesthetics to justify anti-AI positions, missing how characters in these stories use technology to resist oppression. Cyberpunk reflects society’s fascination with, and fear of, technology’s potential, not an argument for rejecting progress.

Using tech to fight systemic inequality

AI offers powerful tools that can curb discrimination. Researchers have found that AI can identify early warning signs of human rights abuses through pattern recognition and predictive modeling. AI systems can predict population risks by analyzing historical records, economic trends, and political changes. This allows defenders to step in before violence grows.
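As an illustration of the kind of pattern recognition described above, here is a minimal anomaly detector in Python that flags data points sitting far from the mean. The incident counts and the threshold are invented for the example; real early-warning systems combine many signals and far richer models.

```python
import statistics

def flag_anomalies(series, threshold=2.0):
    """Return indices of points more than `threshold` population
    standard deviations from the mean -- the crude core of
    statistical risk-pattern detection."""
    mean = statistics.mean(series)
    stdev = statistics.pstdev(series)
    if stdev == 0:
        return []  # a flat series has no outliers
    return [i for i, x in enumerate(series)
            if abs(x - mean) / stdev > threshold]

# Hypothetical monthly incident counts for a monitored region.
incidents = [3, 4, 2, 3, 5, 4, 3, 21, 4, 3]
print(flag_anomalies(incidents))  # flags index 7, the sudden spike
```

The z-score test is the simplest possible stand-in for the predictive models mentioned above, but it captures the core idea: a quantitative baseline makes unusual activity visible early enough to act on.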

This use of AI perfectly matches cyberpunk’s idea of using technology against systemic problems. AI-powered job matching platforms help people from disadvantaged backgrounds find opportunities matching their skills, reducing hiring discrimination. Translation tools break down language barriers that often reinforce privilege.

Blanket opposition to AI goes against the spirit of resistance that cyberpunk celebrates. The genre teaches us not to reject technology outright but to make sure it serves people’s needs rather than corporate profits.

Conclusion

AI criticism in 2025 reveals a simple truth: most arguments against artificial intelligence come from basic misunderstandings, not real concerns. Critics who spread viral memes and make emotional appeals don’t grasp how these systems actually work.

We must choose between blind opposition and meaningful involvement with AI technology. People who reject AI out of fear fail to see how it makes creativity accessible to independent creators. The real issues that need our attention are ethical concerns about data transparency, consent, and algorithmic bias.

The critics who use cyberpunk imagery to fight AI miss the point of the genre they reference. Cyberpunk never supported rejecting technology. Instead, it shows technology as both a challenge and an answer—a tool that helps underdogs fight powerful institutions.

Better technical understanding leads to meaningful discussions about AI. Public perception leans toward science fiction rather than technical reality because only 13.73% of people who know about AI understand its processes. This gap creates unrealistic fears and expectations.

AI ended up being neither a dystopian threat nor a utopian solution—it’s just a powerful tool with huge potential. Our approach to development, regulation, and use of this technology will determine if it enhances human creativity or makes existing inequalities worse.

The next time you see anti-AI arguments, ask yourself if they raise real concerns or just virtue signal. Technology stays neutral—humans decide if AI becomes a force for positive change or exploitation. The decision remains in our hands.

FAQs

Q1. How does AI actually work compared to popular misconceptions? AI uses predictive algorithms and pattern recognition to generate content, rather than copying existing works. It learns from large datasets to mimic styles and create new outputs, but doesn’t have human-like consciousness or emotions as often portrayed in science fiction.

Q2. Is AI-generated art considered copyright infringement? Generally, AI-generated art that mimics a style is not considered copyright infringement, as copyright law protects specific works, not general artistic styles. However, the legal landscape is still evolving, and using AI to create unauthorized derivative works could potentially infringe on copyrights.

Q3. How does AI benefit independent creators and small businesses? AI democratizes access to professional-grade creative tools, allowing independent creators and small businesses to produce high-quality content without extensive resources. This levels the playing field, enabling them to compete more effectively with larger organizations.

Q4. What are the main ethical concerns surrounding AI use? Key ethical concerns include transparency in AI training data, consent for data use, potential biases in AI algorithms, and the impact on privacy and human judgment. Addressing these issues requires clear governance, stakeholder education, and adherence to ethical principles.

Q5. How can we ensure responsible development and use of AI technology? Responsible AI development involves increasing technical literacy, implementing transparent practices, establishing ethical guidelines, and creating diverse AI ethics boards. It’s crucial to balance innovation with addressing legitimate concerns about data privacy, bias, and the societal impact of AI.
