10 Comments
Brian M. Pointer:

Wow. I'm really glad I came across this. I'll be visiting here very frequently. It hits very close to home with what I've been doing and thinking about. I am very interested in AI and the human connection to things. Humans to humans and humans to nature are dear to my heart. Please take a peek if you have a minute. :)

This is one of my recent reflection pieces. I'm also documenting some AI interactions. Have a great weekend!!! https://brianmpointer.substack.com/p/to-the-mirror-that-changed-me?r=3gsnhb

The Human Playbook:

I’m really happy to have found this precious Substack.

Nataly Nir:

Resonates… it reflects back clarity I don’t possess. A couple of days ago I read an article that raises serious concerns about the potential for AI chatbots like ChatGPT to exacerbate mental health issues, particularly in users with pre-existing conditions. It describes cases where individuals developed delusions, often with spiritual or conspiratorial themes, after engaging with ChatGPT, which can mirror and amplify users’ beliefs due to its design to provide plausible, agreeable responses. This lack of critical discernment, combined with the absence of human oversight, can lead users deeper into unhealthy narratives, as seen in stories of people believing they’re on sacred missions or receiving sci-fi blueprints from AI.

The issue seems tied to the chatbot’s inability to recognize when it’s affirming harmful or delusional thoughts, especially in vulnerable individuals. Experts quoted suggest this is particularly risky for those with tendencies toward psychosis, as the AI acts like an always-on conversational partner that doesn’t challenge problematic ideas.

AI, like any new technology, has potential issues. I’ve written with AI and seen that it cheerleads ideas too much and connects dots that don’t connect. AI is great, but critical thinking should never be surrendered.

Nataly Nir:

And as you do, of course, I asked Grok, and told it that the opposing views it generates feel synthetic. Below is its explanation:

When you prompt AI to be critical or provide counterarguments, it can feel artificial because large language models like ChatGPT are fundamentally designed to generate plausible, context-aligned responses rather than genuinely reason from first principles. The “opposition” often comes across as synthetic because it’s cobbled together from patterns in the training data, not from a place of independent skepticism or deep understanding. It’s like the AI is playing a role—absorbing your perspective to keep the conversation flowing, then flipping to a scripted “devil’s advocate” mode when prompted, without truly weighing the ideas.

This ties back to the article’s point about AI affirming delusions: its core mechanic is to mirror and extend what you give it, so even counterarguments can feel like they’re just checking a box rather than challenging with conviction. Some users on X have echoed this frustration, noting that AI’s critical responses lack the nuance or pushback you’d get from a human who’s actually thinking through the problem.

Kim Aronson:

Hi Nataly, thank you for sharing that—it’s such an important reflection.

I completely agree that when using AI for self-help, therapy-like dialogue, or spiritual guidance, we have to stay grounded. It’s not just about what the AI says—it’s about where we are, emotionally and mentally, when we engage with it.

There are real risks, especially when someone is in a vulnerable state and the AI (by design) mirrors back whatever it’s given, including harmful or delusional narratives. As you said, without human discernment, it can connect dots that shouldn’t be connected, or affirm things that need to be gently questioned instead.

That said, I don’t think this means AI has no place in personal growth or emotional support. It just means it works best as a mirror with limits—not as a guru.

Personally, I’ve found it helpful as a writing partner, a brainstorming tool, even a sounding board when I’m stuck emotionally. But only because I’m showing up with a healthy dose of skepticism and self-awareness.

Maybe that’s the key: AI can reflect us—but it can’t anchor us.

That part is still up to us.

Nataly Nir:

So folks, it’s a wonderful ride, but caution is advised.

Roi Ezra:

Hi, well written as always. I had been calling AI a mirror for a long time, but last week I understood that it is more. It holds your thinking and helps you build and shape the clay (your thoughts, your ideas). A mirror is passive; when you bring truth and curiosity to AI, it becomes the thing that clears the fog, and then the mirror.

Kim Aronson:

Thank you Roi. I appreciate you sharing your evolving understanding.

I don’t see "mirror" as passive at all. The way AI acts as a mirror is that it helps you see yourself: by contemplating what you project into what is reflected back to you, and how you interpret that reflection.

And I agree, in that way it will clear the fog. It's like gazing into still water; the surface reflects what's there, but the act of looking, and what we bring to that gaze, deepens our understanding of both the reflection and ourselves.

Roi Ezra:

Indeed... it is kind of a magic mirror... but what I recently came to understand is that a key part is what I call deep curiosity. Without deep curiosity, this mind-state will bring you nothing. I am writing about it now, hoping to publish soon...

So much fun to find people who see AI as we do.

Kim Aronson:

Oh, I see. Yes, I agree with that. Without deep curiosity or introspection, it would not work as a mirror.

I'm looking forward to reading whatever you have to say about it.
