The divide between Big Tech and left-wing groups pushed anti-AI sentiment to new heights in 2025. Critics of AI often base their arguments on misconceptions about how these systems actually work.
Critics point to AI’s environmental effects, citing hyperscalers’ 40-50% increases in carbon emissions. They miss an important counterpoint: AI has propelled nuclear power development in the US, with the planned buildout growing twentyfold in the last eighteen months. People claim AI “steals” intellectual property, but these systems generate output through predictive algorithms rather than by copying existing work. Supporters say AI gives everyone access to creative tools. Yet independent artists tell a different story: they’re losing work to AI-generated alternatives.
This piece examines what critics misunderstand about AI in 2025, which ethical issues genuinely need attention, and why some anti-AI arguments miss the point entirely.
Why the Public Gets AI So Wrong
“I am very, very worried about Generative AI amplifying fears and polarizing us. In this respect I think Generative AI is even more dangerous than the search recommendation in social media engines that have been making decisions about what to show us, and what not to show us.” — De Kai, AI Professor at ICSI (UC Berkeley) & HKUST (Hong Kong), Machine Translation Pioneer, Member of Google’s Inaugural AI Ethics Council
Reality and public perception of artificial intelligence rarely line up. Recent surveys paint a concerning picture: only 13.73% of people who know about AI truly understand how it works, while 50.08% say they’re “somewhat familiar” with it [31]. This gap in understanding drives many arguments against AI today.
The role of memes and viral outrage
Memes have evolved into powerful tools that shape how people think about AI. These simple visual posts hide problematic messages behind humor and satire. They seem harmless but play on viewers’ emotions [32], which makes them remarkably effective at shaping belief.
Research shows how AI-generated memes have crept into political discussions worldwide. Their creators don’t want to change minds; they just want to “make their preferred candidate look patriotic or noble [or] to make their opposing candidate look evil” [33]. In one example, politicians have shared AI-created images without disclosing they weren’t real, making it harder to separate truth from fiction.
This emotional manipulation serves a purpose. Studies show memes deliberately engage in “rage-baiting”: content designed to provoke strong negative reactions from users [32]. These conditions make it almost impossible to have a reasonable discussion about what AI can and cannot do.
Social media algorithms combined with AI tools create perfect conditions for misleading content to reach specific audiences quickly. One researcher explained it well: “I don’t think the images were designed to be clearly deceptive, but they were designed to push a narrative, and propaganda works” [33].
Studies make this even more worrying: about 80% of fact-checked misinformation claims contain images and videos [3], and users are virtually certain to encounter AI-generated fake content on social media [4]. This creates an environment where telling fact from fiction gets harder every day.
Misunderstanding how AI actually works
People misunderstand AI technology because they learn about it from movies and TV shows rather than technical sources. Popular media shows AI with human-like consciousness – think HAL 9000 or the Terminator. This leads to “overly expectant or unwarranted pessimistic narratives” [34].
Science fiction’s influence explains why 55.71% of Americans wrongly think AI is a brand-new technology [31], even though the field dates to the 1950s. Most people associate AI only with recent tools like ChatGPT (69.18%) and chatbots (70.44%) [31], unaware of its long history.
The most dangerous myth suggests AI can be completely objective. Surveys reveal 41.66% of Americans believe AI achieves 100% objectivity [31]. The truth? AI systems learn from internet data that contains human biases and limitations.
Common myths about AI include:
- AI has human traits like consciousness and emotions
- AI improves itself without human help
- AI possesses mysterious “emergent abilities” beyond our control
- AI works autonomously without errors
AI remains a “black box” for many people – they can’t properly evaluate its benefits or risks [34]. This lack of knowledge explains why Western countries view AI more negatively than positively, with 49% of Americans thinking risks outweigh benefits [35].
People worry more about AI’s effect on society than on their personal lives. More individuals fear AI’s impact on jobs overall than on their own work [35]. This pattern shows how public perception often misses reality.
Better understanding requires more than fact-checking. One expert put it well: “To reliably distinguish misinformation as AI tools grow more sophisticated… people will have to learn to ask about the content’s source or distributor rather than the visuals themselves” [3]. Looking at content alone no longer helps determine what’s true or trustworthy.
The Ghibli Meme Controversy and What It Reveals
The Ghibli-style AI image trend took social media by storm in early 2025, revealing how badly people misunderstand artificial intelligence and copyright. OpenAI’s updated GPT-4o model let users turn photos into dreamlike images in Studio Ghibli’s iconic style. Demand was so intense that OpenAI’s servers reportedly started “melting” [7].
How AI mimics style, not content
GPT-4o doesn’t copy frames from Ghibli films directly. The system uses an autoregressive algorithm that breaks images into visual “tokens”—just like language models predict word sequences in sentences [8]. The model learns to link certain patterns, such as color palettes or brushstrokes, with specific prompts through training on large datasets.
When users ask for “Ghibli-style” images, the AI creates a mathematical version of the studio’s look, what you might call “Ghibli-ness”, rather than pulling actual film frames [8]. This “style engine” process lets the model identify visual features and apply them to brand-new images.
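To make the token-prediction idea concrete, here is a minimal sketch in Python. It is not OpenAI’s architecture; the vocabulary, weights, and prompt handling are invented for illustration. The point is the loop: the model samples the next visual token from a learned distribution conditioned on the prompt and everything generated so far, the same way a language model extends a sentence.

```python
import random

# A toy "visual vocabulary" -- real models learn thousands of abstract tokens.
VOCAB = ["sky_blue", "soft_cloud", "green_hill", "hand_drawn_line", "warm_light"]

def next_token_probs(prompt, generated):
    # Stand-in for a trained network: a real system learns these statistics
    # from data, associating a prompt like "Ghibli-style" with pattern
    # regularities (palettes, brushstrokes), not with stored film frames.
    # `generated` is passed because a real model conditions on prior tokens;
    # this toy ignores it.
    weights = [1.0] * len(VOCAB)
    if "ghibli" in prompt.lower():
        weights[VOCAB.index("soft_cloud")] += 2.0       # pastel, painterly skies
        weights[VOCAB.index("hand_drawn_line")] += 2.0  # hand-drawn texture
    total = sum(weights)
    return [w / total for w in weights]

def generate(prompt, length=8):
    tokens = []
    for _ in range(length):
        probs = next_token_probs(prompt, tokens)
        # Sample the next token given the prompt and the sequence so far.
        tokens.append(random.choices(VOCAB, weights=probs)[0])
    return tokens

print(generate("A lake house, Ghibli-style"))
```

Nothing in this loop retrieves or pastes source material; the “style” lives entirely in the learned probabilities.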
A Harvard expert pointed out that “AI is a superb mimic and quick learner” that creates recognizable styles but “will lack true insight and experience” [9]. This explains why AI-generated Ghibli art might look right but feel empty—it picks up surface patterns without grasping Miyazaki’s creative vision.
Why ‘Ghibli-style’ isn’t copyright infringement
The legal argument focuses on a vital difference: copyright protects specific works, not artistic styles in general. U.S. copyright law only covers “original works of authorship fixed in a tangible medium of expression,” not abstract elements like style or technique [10].
Legal experts warn this line isn’t clear-cut. The general public’s idea of “style” might include protected elements, even though style itself isn’t copyrightable [8]. Courts could still find infringement if AI-generated images contain original, expressive elements that look too much like copyrighted works.
This gray area sits at the center of current lawsuits like Andersen v. Stability AI, in which artists claim AI companies used their copyrighted works without permission to train models that copy their unique styles [8]. The central question isn’t whether AI can copy a style, but whether that copying creates unauthorized derivative works.
OpenAI says training its models falls under fair use [8], though courts haven’t seriously tested this defense. The company claims to have built in protections that kick in “when a user attempts to generate an image in the style of a living artist” [7]. Yet it allows “broader studio styles” like Ghibli’s, even though Miyazaki is still alive at 84.
The Miyazaki quote and its context
The most misunderstood part of this debate might be Miyazaki’s famous quote calling AI “an insult to life itself.” People who cite it against AI-generated Ghibli art often leave out important background.
The quote comes from a 2016 documentary in which Miyazaki watched a demo of an AI system built to generate creepy movements for horror content [11]. The team showed him a disturbing figure that moved unnaturally, dragging itself along by its head—a model created specifically for horror settings [11].
Miyazaki’s emotional response targeted this particular demonstration, not the whole technology. He spoke about a disabled friend he talks with every morning. “I can’t ***** this stuff and find it entertaining when thinking of my friend’s situation,” he said, seeing the unnatural movements as disrespectful to human experience [12]. Even a simple handshake was challenging for his friend [11].
The 2016 demonstration bears little resemblance to today’s image generation technology [11]. His comment about “humanity losing confidence in itself” hints at broader AI concerns but doesn’t amount to a rejection of all AI uses [11].
This fuller context shows how critics often strip away nuance to attack AI. The debate tells us more about our fears of technology than about the real strengths and limits of artificial intelligence.
Virtue Signaling and the Anti-AI Movement
The debate about AI goes beyond technical disagreement. The anti-AI movement has become a powerful way for people to display their moral values, and research suggests that much of the opposition to artificial intelligence serves to signal ethics rather than to address real concerns about the technology.
Moral outrage as a social currency
People use moral outrage as social capital on social platforms. Studies have found a clear pattern: expressing outrage advertises personal views and allegiances to potential allies [13]. Outrage is also easier to express when the cause is popular, which draws even more people in [14].
Lab experiments back this up. People who call out perceived wrongdoers in public earn social trust from others, even at a personal cost [14]. This explains why anti-AI rhetoric gets so extreme: stronger condemnation leads to bigger social rewards.
Social media platforms magnify this effect. They provide instant feedback through likes and shares that reinforce outrage through reinforcement learning [13]. But research points to a concerning gap: posts with moral outrage spread substantially faster yet rarely lead to real action [15].
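The reinforcement dynamic the researchers describe can be sketched as a simple simulation. This is a toy model with invented numbers, not the study’s actual methodology: each time an outraged post earns likes, the user’s propensity to post outrage again nudges upward.

```python
import random

def simulate_user(days=100, learning_rate=0.05):
    p_outrage = 0.2  # starting probability of posting outraged content
    for _ in range(days):
        if random.random() < p_outrage:
            likes = random.randint(5, 50)  # outrage tends to draw engagement
            reward = likes / 50            # normalize the feedback signal
            # Positive social feedback nudges the habit upward, capped at 1.0.
            p_outrage += learning_rate * reward * (1 - p_outrage)
    return p_outrage

print(f"Outrage propensity after 100 days: {simulate_user():.2f}")
```

Run it a few times and the propensity drifts steadily upward, which is exactly the loop the likes-and-shares economy rewards.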
The rise of anti-AI pledges and purity tests
Companies see marketing potential in taking anti-AI stances. The illustration app Procreate earned widespread praise from artists in 2024 when CEO James Cuda promised to avoid AI features. He stated, “I don’t like what’s happening to the industry, and I don’t like what it’s doing to artists” [16].
This trend shows up in many industries:
- Cara, a social media platform for artists, prohibits AI-generated artwork uploads [17]
- Dove pledged never to use AI-generated content to represent women in advertisements [17]
- Discover advertises that its call centers are staffed by humans, not AI [16]
These pledges show how companies use anti-AI sentiment to stand out. Jack Woodhams, who runs the portfolio site PosterSpy, explained his strict no-AI policy: he believes comparing AI art to human artwork is “insulting to the real artists out there who have trained for years” [17].
When criticism becomes performative
Critics of the anti-AI movement point out how opposition can become performance rather than substantive critique. One analysis of the Ghibli AI meme controversy noted commenters shouting “AI IS THEFT!” in all caps, a classic way to signal opposition without understanding the technology or the law [5].
The shallowness of this behavior is the real concern: critics chase social approval while ignoring how AI actually works. One observer noted the hypocrisy: “A person will decry another’s AI usage as environmentally damaging on social media while waiting for their flight to Greenland to catch one last glimpse of the glaciers before they melt” [18].
Research shows that network-level norms shape social reinforcement: users become less sensitive to feedback in networks where outrage is already the norm [13]. This explains why anti-AI communities develop more extreme views over time.
The anti-AI movement tells us more about human social behavior than about artificial intelligence. Understanding how people use technology criticism to signal virtue helps us separate real ethical concerns from moral posturing.
The Real Ethics of AI Use
“There’s still a lot of knowledge that needs to be gained about how LLMs and Generative AI works. We would benefit from a lot of transparency into how these systems work and where the limitations are, such as where the biases are, where the data was trained at.” — De Kai, AI Professor at ICSI (UC Berkeley) & HKUST (Hong Kong), Machine Translation Pioneer, Member of Google’s Inaugural AI Ethics Council
Real ethical concerns about AI need serious attention beyond the current hype and hysteria. These problems are quite different from the performative outrage we see in public discussions, yet they remain crucial to developing responsible technology.
Prompting vs authorship: who’s responsible?
The ownership battle over AI-generated content continues to rage in 2025. The U.S. Copyright Office maintains that AI prompts alone “do not constitute sufficient human authorship for copyright protection” [1]. Users can’t claim ownership through detailed text prompts because they “had no control over how the artificial intelligence tool analyzed, interpreted or responded to these prompts” [19].
This position rests on a basic principle: copyright law requires human creativity. Courts have clearly stated that copyright protections “have never been granted to any nonhuman” [20]. The Copyright Office turns down registration requests when “human authorship cannot be distinguished or separated from the final work produced by the computer program” [20].
Other countries see things differently. The Beijing Internet Court ruled that an AI-generated image could be copyrighted, based on a user’s 30 prompts, 120 negative prompts, and parameter tweaks that showed “esthetic choices and personal judgment” [20]. Indian and Canadian intellectual property offices have also started recognizing AI as a co-author with humans [20].
Transparency and consent in AI training
Leading AI companies won’t tell us what data they used to train their models [2]. The Stanford Foundation Model Transparency Index showed that “transparency regarding the data used was very low compared to other aspects of transparency” [2]. OpenAI claims this secrecy stems from “the competitive landscape and safety implications” [21].
This lack of transparency creates several issues:
- Rightsholders can’t protect their work when it’s used without permission
- External auditors can’t check models for bias or harmful content
- Users can’t make informed choices about their tools
- Lawmakers can’t regulate what they can’t see
A Washington Post investigation found that “many companies do not document the contents of their training data – even internally – for fear of finding personal information about identifiable individuals, copyrighted material and other data grabbed without consent” [2]. Trust breaks down without transparency, which explains why many creators take anti-AI stances.
Regulators are responding differently by region. The EU AI Act now requires detailed summaries of training data content [2]. The U.S. Federal Trade Commission told OpenAI to document all of its training data sources [2]. Japan’s draft AI principles call for “data collection method transparency and data source traceability” [2].
When AI use crosses ethical lines
AI ethics focuses on three main areas: “privacy and surveillance, bias and discrimination, and perhaps the deepest, most difficult philosophical question of the era, the role of human judgment” [22]. Algorithms tend to “replicate and embed the biases that already exist in our society” [22].
Political philosopher Michael Sandel points out that “AI not only replicates human biases, it confers on these biases a kind of scientific credibility” [22]. Banking algorithms might copy the industry’s past discrimination against marginalized consumers [22].
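A minimal sketch of that mechanism, using a fabricated lending dataset: a “model” fit to historically biased decisions reproduces the disparity exactly, now dressed up as an objective-looking statistic.

```python
# Fabricated history: group_a was approved 80% of the time, group_b only 40%.
historical_decisions = (
    [("group_a", True)] * 80 + [("group_a", False)] * 20
    + [("group_b", True)] * 40 + [("group_b", False)] * 60
)

def fit_approval_rates(records):
    # The simplest possible "learner": the per-group historical approval rate.
    rates = {}
    for group in {g for g, _ in records}:
        outcomes = [approved for g, approved in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

model = fit_approval_rates(historical_decisions)
print(model)  # {'group_a': 0.8, 'group_b': 0.4} -- yesterday's disparity, verbatim
```

A production credit model is far more complex, but the failure mode is the same: if the training labels encode discrimination, the fitted model faithfully reproduces it.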
Building ethical AI means following core principles from the Belmont Report:
- Respect for Persons: Recognizing individual autonomy and getting informed consent
- Beneficence: Following the “do no harm” principle to avoid magnifying biases
- Justice: Making sure benefits and burdens are shared fairly across society [6]
Good governance means setting clear roles and responsibilities. Organizations must educate stakeholders, monitor processes, and use tools that build trust [6]. Many companies now have AI Ethics Boards with diverse leaders to guide decisions and governance [6].
As arguments against AI grow louder, these genuine ethical concerns deserve careful thought rather than outright rejection. Addressing them properly means moving past both blind acceptance and uninformed opposition.
AI as a Tool for the Underdog
Unlike what AI critics say, artificial intelligence acts as a powerful equalizer for independent creators. The technology moves creative power from those with deep pockets to anyone with imagination and ideas.
How indie creators benefit from AI
Independent filmmakers now use AI to analyze audience reactions through telemetry data, helping them improve scripts without expensive focus groups [23]. Young writers without film school credentials can compete with seasoned professionals. Game developers find that AI tools speed up content creation, reducing resource needs and making high-quality production available to more people [24].
Musicians have discovered new opportunities too. AI tools help solo independent artists create full songs, albums, and music videos with virtual bands and production crews [25]. A developer puts it well: “I see AI as being analogous to sewing machines and calculators. The invention of sewing machines didn’t put tailors out of business—instead they got more efficient” [26].
Democratizing access to creative tools
AI changes who can create professional-quality content by turning complex tools into simple, conversational interactions. Anyone with an idea can build software, write lyrics, or create videos through voice commands or text prompts [23]. More creators from diverse backgrounds can now share their unique views [27].
This transformation goes beyond convenience. AI empowers people from varied backgrounds to express creativity without technical knowledge or formal training [28]. Tools like Canva and AI-driven platforms like Grammarly help anyone create professional-grade content quickly [29].
Why AI isn’t just for big corporations
Small businesses and independent creators reap major benefits from AI’s democratizing effects. Advanced technologies help smaller entities produce high-quality creative work that rivals larger organizations [27]. The playing field levels out: small and medium enterprises no longer face such steep disadvantages from limited strategy-development resources [30].
AI-powered creativity gives every aspiring creator a fair chance at success, regardless of access to formal training [23]. As the technology becomes available to all, more unique voices will emerge, no longer held back by traditional gatekeepers.
The Cyberpunk Irony: Fighting AI with AI
The biggest irony of the anti-AI movement is its misreading of cyberpunk, a genre that embraces technology as a tool against oppressive systems.
Why AI fits the cyberpunk ethos
Cyberpunk fiction emerged in the 1980s as a response to neoliberal economics and accelerating technological change. These stories help us think about where unchecked capitalism might take society. Yet cyberpunk never opposed technology; quite the opposite. The genre presents technology as both the problem and the solution, giving underdogs a way to fight back against corporate control.
Stories like Blade Runner and The Matrix show protagonists who employ advanced technology to challenge power structures. This mirrors how the cypherpunk movement promotes privacy-enhancing technologies to protect personal freedom and resist systems of control. The replicants’ struggle for recognition echoes privacy advocates’ fight against surveillance and abuses of power.
Misreading genre themes to justify fear
AI critics often misunderstand cyberpunk’s warnings. The genre doesn’t warn us about technology but about unchecked corporate interests controlling it. One analysis points out that cyberpunk authors “nearly always portray future societies in which governments have become wimpy and pathetic… with power clutched in the secretive hands of a wealthy or corporate elite.”
Critics cite cyberpunk esthetics to justify anti-AI positions, missing how characters in these stories use technology to resist oppression. Cyberpunk reflects society’s fascination with, and fear of, technology’s potential; it is not an argument for rejecting progress.
Using tech to fight systemic inequality
AI offers powerful tools that can curb discrimination. Researchers have found that AI can identify early warning signs of human rights abuses through pattern recognition and predictive modeling. By analyzing historical records, economic trends, and political changes, AI systems can flag populations at risk, allowing defenders to step in before violence escalates.
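A hedged sketch of the early-warning idea: combine a few risk indicators into a single score and flag regions above a threshold. The indicators, weights, and data below are invented; real systems learn their weights from historical records rather than having them set by hand.

```python
# Hypothetical indicators and hand-set weights, for illustration only.
WEIGHTS = {
    "past_violence_events": 0.5,   # per recent documented incident
    "economic_contraction": 3.0,   # percentage-point GDP decline
    "political_instability": 2.0,  # 0-10 index from expert assessments
}

def risk_score(indicators):
    return sum(WEIGHTS[key] * value for key, value in indicators.items())

regions = {
    "region_a": {"past_violence_events": 2, "economic_contraction": 1.5,
                 "political_instability": 3},
    "region_b": {"past_violence_events": 9, "economic_contraction": 4.0,
                 "political_instability": 8},
}

ALERT_THRESHOLD = 20.0
for name, indicators in regions.items():
    score = risk_score(indicators)
    status = "ALERT" if score > ALERT_THRESHOLD else "monitor"
    print(f"{name}: score={score:.1f} -> {status}")
```

The value is lead time: a score that trends upward months before violence gives human rights defenders a window in which to act.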
This use of AI matches cyberpunk’s idea of turning technology against systemic problems. AI-powered job-matching platforms help people from disadvantaged backgrounds find opportunities suited to their skills, reducing hiring discrimination. Translation tools break down language barriers that often reinforce privilege.
Blanket opposition to AI goes against the spirit of resistance that cyberpunk celebrates. The genre teaches us not to reject technology outright but to make sure it serves people’s needs rather than corporate profits.
Conclusion
AI criticism in 2025 reveals a simple truth: most arguments against artificial intelligence come from basic misunderstandings, not real concerns. Critics who spread viral memes and make emotional appeals don’t grasp how these systems actually work.
We must choose between blind opposition and meaningful engagement with AI technology. People who reject AI out of fear fail to see how it makes creativity accessible to independent creators. The issues that genuinely need our attention are ethical: data transparency, consent, and algorithmic bias.
The critics who use cyberpunk imagery to fight AI miss the point of the genre they reference. Cyberpunk never supported rejecting technology. Instead, it shows technology as both a challenge and an answer—a tool that helps underdogs fight powerful institutions.
Better technical understanding leads to more meaningful discussions about AI. With only 13.73% of AI-aware people understanding how the technology actually works, public perception leans toward science fiction rather than technical reality. This gap creates unrealistic fears and expectations.
AI turns out to be neither a dystopian threat nor a utopian solution; it’s simply a powerful tool with huge potential. How we develop, regulate, and use this technology will determine whether it enhances human creativity or deepens existing inequalities.
The next time you see anti-AI arguments, ask yourself whether they raise real concerns or merely signal virtue. Technology itself stays neutral; humans decide whether AI becomes a force for positive change or for exploitation. The decision remains in our hands.
FAQs
Q1. How does AI actually work compared to popular misconceptions? AI uses predictive algorithms and pattern recognition to generate content, rather than copying existing works. It learns from large datasets to mimic styles and create new outputs, but doesn’t have human-like consciousness or emotions as often portrayed in science fiction.
Q2. Is AI-generated art considered copyright infringement? Generally, AI-generated art that mimics a style is not considered copyright infringement, as copyright law protects specific works, not general artistic styles. However, the legal landscape is still evolving, and using AI to create unauthorized derivative works could potentially infringe on copyrights.
Q3. How does AI benefit independent creators and small businesses? AI democratizes access to professional-grade creative tools, allowing independent creators and small businesses to produce high-quality content without extensive resources. This levels the playing field, enabling them to compete more effectively with larger organizations.
Q4. What are the main ethical concerns surrounding AI use? Key ethical concerns include transparency in AI training data, consent for data use, potential biases in AI algorithms, and the impact on privacy and human judgment. Addressing these issues requires clear governance, stakeholder education, and adherence to ethical principles.
Q5. How can we ensure responsible development and use of AI technology? Responsible AI development involves increasing technical literacy, implementing transparent practices, establishing ethical guidelines, and creating diverse AI ethics boards. It’s crucial to balance innovation with addressing legitimate concerns about data privacy, bias, and the societal impact of AI.
References
[1] – https://www.finnegan.com/en/firm/news/ai-prompts-alone-are-not-human-authorship-long-awaited-us-copyright-office-report-declares.html
[2] – https://theodi.org/news-and-events/blog/policy-intervention-1-increase-transparency-around-the-data-used-to-train-ai-models/
[3] – https://www.nbcnews.com/tech/tech-news/ai-image-misinformation-surged-google-research-finds-rcna154333
[4] – https://today.umd.edu/ai-generated-misinformation-is-everywhere-iding-it-may-be-harder-than-you-think
[5] – https://medium.com/jigsaw-puzzles/anti-ai-imagery-rhetoric-is-the-new-virtue-signaling-a-look-at-last-weeks-studio-ghibli-meme-9a0ac0660dce
[6] – https://www.ibm.com/think/topics/ai-ethics
[7] – https://www.abc.net.au/news/2025-04-03/the-controversial-chatgpt-studio-ghibli-trend-explained/105125570
[8] – https://wjlta.com/2025/04/17/studio-ghibli-style-ai-images-and-the-legal-questions-they-raise/
[9] – https://news.harvard.edu/gazette/story/2023/08/is-art-generated-by-artificial-intelligence-real-art/
[10] – https://www.cullenllp.com/blog/ai-and-artistic-style-imitation-emerging-copyright-implications/
[11] – https://the-decoder.com/studio-ghibli-founder-hayao-miyazakis-viral-ai-criticism-lacks-crucial-context/
[12] – https://www.ndtv.com/world-news/quot-i-would-never-incorporate-this-quot-what-studio-ghibli-039-s-hayao-miyazaki-once-said-about-ai-animation-8021037
[13] – https://www.science.org/doi/10.1126/sciadv.abe5641
[14] – https://www.bloomberg.com/opinion/articles/2016-04-21/displaying-outrage-on-social-media-earns-a-kind-of-currency
[15] – https://phys.org/news/2025-05-relationship-moral-outrage-social-media.html
[16] – https://www.forbes.com/sites/rashishrivastava/2024/08/20/the-prompt-anti-ai-pledges-gain-popularity/
[17] – https://www.rollingstone.com/culture/culture-features/ai-image-brand-backlash-1235040371/
[18] – https://marcwatkins.substack.com/p/ai-is-unavoidable-not-inevitable
[19] – https://www.dbllawyers.com/are-text-prompt-commands-to-ai-enough-to-show-creative-human-authorship-part-1/
[20] – https://perkinscoie.com/insights/article/human-authorship-requirement-continues-pose-difficulties-ai-generated-works
[21] – https://academic.oup.com/jiplp/article/20/3/182/7922541
[22] – https://news.harvard.edu/gazette/story/2020/10/ethical-concerns-mount-as-ai-takes-bigger-decision-making-role/
[23] – https://www.forbes.com/councils/forbestechcouncil/2024/04/09/democratized-creativity-the-evolution-and-impact-of-ai/
[24] – https://readyplayer.me/blog/ai-is-just-getting-started-boosting-indie-studios-to-the-big-leagues
[25] – https://aimm.edu/blog/how-ai-could-empower-independent-artists
[26] – https://www.ign.com/articles/the-indie-developers-who-are-using-ai-to-make-living-cities-tactics-machines-and-more
[27] – https://www.linkedin.com/pulse/democratization-creativity-how-ai-empowers-custom-gary-ramah-n4dfc
[28] – https://www.datacenters.com/news/democratized-generative-ai-empowering-creativity
[29] – https://executive.berkeley.edu/thought-leadership/blog/unleashing-creativity-ai
[30] – https://hbr.org/2024/06/genai-is-leveling-the-playing-field-for-smaller-businesses
[31] – https://www.hostinger.com/blog/ai-myths-survey
[32] – https://www.brookings.edu/articles/ai-memes-election-disinformation-manifested-through-satire/
[33] – https://www.npr.org/2024/12/21/nx-s1-5220301/deepfakes-memes-artificial-intelligence-elections
[34] – https://www.frontiersin.org/journals/computer-science/articles/10.3389/fcomp.2023.1113903/full
[35] – https://www.brookings.edu/articles/what-the-public-thinks-about-ai-and-the-implications-for-governance/