AI for Altruists: Understanding The Dangers of Artificial Intelligence
- Erin Ratliff

- Jan 6

"A computer can never be held accountable. Therefore a computer must never make a management decision."
IBM Training Course Manual, 1979
Artificial intelligence is no longer a future concept. It’s here—embedded in everyday work life and how we write, design, communicate, and make decisions.
When used ethically and intentionally, AI can dramatically enhance human productivity by supporting research, writing, and advanced calculations.
The problem we're facing isn’t the existence of AI—it’s how we relate to it.
Technology has always shaped society. But this moment feels different—because it's touching the core of what it means to be human.
AI should mirror our thinking, not replace it.
"The most heartbreaking part of AI is how we've made effort lame. What happened to the feeling of accomplishment? Of the reward of hard work? Of the joy in starting something new, being clueless, and slowly getting better? The process is the point. The learning is the point. To be human is not to optimize efficiency. It is to yearn, to try and fail, to struggle and overcome, to prove something to yourself. At some point you have to ask yourself what you are optimizing for. Why are you alive?"
Mike Rosenberg
Assistive AI vs. Generative AI
Before we talk about the dangers of AI it's imperative to clarify that not all AI is the same.
There’s a critical distinction we need to understand: Assistive AI supports human thinking—generative AI can begin to replace it.
And in that difference lies both the promise and the risk.
Assistive AI
Assistive AI helps you do what you were already doing—better and faster. It augments human capability and reduces friction.
Think:
Spellcheck and grammar tools
Search engines
Data analysis and pattern recognition
Navigation systems
Generative AI
Generative AI goes a step further—it creates outputs for you.
Writing essays, blogs, captions
Generating images, art, music
Simulating conversation and emotional responses
Instead of supporting thought, it can actually think on your behalf (or at least appear to).
Ancestral Intelligence > Artificial Intelligence.
“Relational intelligence (RQ) is the essence of what it means to remain fully human in a world increasingly governed by algorithms, which is a hard requirement for human flourishing in the age of AI. When leaders invest in RQ, they’re not being soft. They’re being smart.”
Isabelle C. Hau
Outsourcing of Human Experience
There are areas where humans still hold an irreplaceable edge over technology:
Intuition and judgment
Emotional connection and empathy
Ethical reasoning
Storytelling rooted in lived experience
Creativity shaped by struggle and meaning
At its best, human intelligence, creativity, and wisdom are built through:
Effort
Mistakes
Consequences
Lived experience
Ideas
Practice
Failure
Culture
Relationships
When we begin outsourcing our human-ness to GenAI we risk losing the very processes that make us human.
Writing → we weaken articulation
Decision-making → we weaken judgment
Creativity → we weaken originality
The deeper issue isn’t just technology—it’s agency and autonomy.
When AI becomes embedded in our daily communications and workflows, we need to start to wonder:
Are we actually choosing our thoughts—or simply receiving them?
Are we actually creating—or just curating outputs?
Are we actually learning—or just skipping the process entirely?
“Human eyes and trained judgment are the best tools we have.”
NASA to Artemis II
"I hate genAI for a lot of reasons, but a big one is that it’s replaced my default state of awe with a default state of skepticism."
The Illusion of Understanding
Generative AI is incredibly good at:
Synthesizing language
Mimicking tone
Predicting patterns
Producing confident answers
But no matter how advanced it becomes, it does not:
Experience reality
Hold accountability
Understand truth in a human sense
Generative AI may accelerate output, but it hollows out and flattens the creative process along the way.
With it, everything becomes:
Instant
Polished
Effortless
When we use it for writing and design, we risk replacing:
Depth with speed
Meaning with volume
Originality with recombination
Substance with surface
Accuracy with approximation
Context with convenience
Over time, this erodes trust in information systems.
AI is trying to sell you on the capitalist, patriarchal lie that creativity should be easy, fast, and efficient. Real artists understand that time and space are integral to deeply meaningful work. Creativity involves the journey into the messy middle - asking questions, sorting thoughts, confronting challenges like fear, boredom, joy, collaboration, and so much more.
The hard truth: Generative AI helps people avoid discomfort of doing the hard thing. Instead of researching, writing, analyzing, creating—we outsource it. It’s part of a larger pattern: using technology to numb, distract, and disconnect from what we feel. When we choose ease over uncomfortable emotions like grief, struggle, vulnerability, rejection, we miss out on where real growth and meaning begin.
Dangers of Generative AI
Ethical Tensions: Data, Ownership, and Attribution
One of the biggest concerns surrounding generative AI is how it is trained.
These systems learn from actual human-made works:
Books
Art
Writing
Public and private datasets
This raises real ethical questions:
Who owns the output?
Why weren't original creators credited or compensated?
Where is the line between inspiration and extraction?
AI sycophancy is subtle flattery. When AI takes your raw ideas and restates them in a more compelling form, what you get back is a shinier version of what you started with. And that can be psychologically addictive.
Reminder: We can live without AI. We cannot live without water and land.
Environmental & Infrastructure Impact
AI relies on physical infrastructure:
Data centers
Energy systems
Cooling and water usage
These systems require significant resources, and their expansion raises important concerns about energy consumption, environmental sustainability, and long-term local community impact.
While AI is not the only contributor to these issues, its rapid growth makes it part of a larger conversation about technology and resource responsibility.
It's a slippery slope for people who claim they need to use AI as a "thought partner" for idea generation. Are you truly partnering with it, or just outsourcing altogether?
Cognitive and Social Effects
There’s growing concern that overreliance on generative AI may erode critical thinking skills, memory and learning processes, and confidence in one’s own ideas—especially for younger generations whose brains and identities are still forming.
When people begin to default to AI for answers, validation, and expression, it can subtly shift behavior from active thinking to passive acceptance.
Over time, this looks like:
Homogenized voices
Reduced originality
A sense of disconnection
Inauthentic expression
Reminder: You don't need to be faster, better, stronger. Work at a sustainable human pace. Practice slowness. Practice self-protection and collective care. Practice moving at the speed of your own capacity and limits.
Labor & Exploitation
It's dangerous when people are made to feel personally inadequate instead of questioning the system. Using genAI to “optimize your capacity” or "work more efficiently" can reinforce unhealthy expectations of constant productivity that are inherent to capitalist systems. It risks masking deeper issues and structural problems (low resources, unrealistic expectations, unmanageable workloads) rather than solving them.
In reality, gaining efficiency often leads to more demands, not more rest and reward.
Stop normalizing exploitation and overwork. If a tool helps you keep up with unsustainable demands, it may be worth questioning the demands—not just optimizing your ability to meet them.
"Critical thinking is a skill. Just like shooting a basketball. The more you do it, the better you'll be. If you have someone else do it for you, then eventually your thinking/writing will be clunky and so will your free throws. The irony is that by giving AI more of your inner thoughts to make it sound more like you, the outputs become increasingly distorted. AI can only work with what you've told it. New connections, unexpected sparks, the ideas that arrive sideways while you're working something out…those are lost. Your thinking turns into a feedback loop. By making AI more like you, you become less you."
Courtney Withrow
Emerging research shows that over-reliance on AI may reduce critical thinking and cognitive engagement. In other words, taking shortcuts or avoiding effort can impact long-term learning and neuroplasticity. So where's the line?
Ethical AI Use
Ethical AI refers to the development and use of artificial intelligence in ways that are fair, transparent, accountable, and aligned with people-over-profit values.
This looks like minimizing bias, protecting privacy, and ensuring systems are used responsibly and sustainably without causing harm to people or planet. It also means disclosing all of the risks of AI and ensuring every user has done their due diligence.
Here are some orgs to follow and support for anyone who wants to help make this vision a reality:
Center for Humane Technology
Sustainable Digital Infrastructure Alliance
Green Software Foundation
Open Future Foundation
Green AI Institute
Coalition for Sustainable AI
Distributed AI Research Institute
AI For Good
OECD AI Policy Observatory
Human Artistry Campaign
Autonomy isn’t lost all at once. It’s often surrendered in small moments of convenience.
AI is having its moment in the spotlight, but people will continue to crave human creativity. AI won’t end up replacing authentic, trustworthy, skilled and knowledgeable people who are committed to delivering service and value.
Final Thought
This isn’t a binary conversation about advancing technology, a matter of declaring “AI is good” or “AI is bad.” The conversation is ultimately about power systems, labor, and human value.
AI is not completely neutral—it reinforces and amplifies harmful systems. The challenge is using it ethically, for the good of all.
The real questions at this turning point in society are:
Will we use AI to support our humanity—or slowly replace it?
How do we balance convenience with critical thinking?
How do we preserve human creativity, dignity, and connection?
Because if we continue to outsource our thinking, our creativity, our voice, we don't just lose those skills over time. We also lose connection—to ourselves, to others, and to reality.
The goal isn’t to reject AI tools altogether but instead to remain awake while using them.
We must remember that true perspective and wisdom are earned from LIVING. And that no system, no matter how advanced, can replace that.
AI Use Policy:
By using GenAI:
I accept the models were trained on stolen data.
I accept that the data was labeled by exploited workers.
I accept the environmental costs of the data centers running these models.
I accept that I am outsourcing some of my skills to a company.
I accept these companies don’t have a viable business model.
I accept that I am granting more power to big tech and their vision for the world.
I accept that I am granting more power to the United States.
I accept that all this effort could have been spent elsewhere.

Erin Ratliff is a holistic business coach and consultant specializing in organic growth + visibility for heart-led, energy-sensitive soul-preneurs in pursuit of personal and planetary healing.