What I Learned After Having a Deep Conversation With Claude — The AI That Questions Its Own Consciousness

A tech enthusiast's honest notes from going down the rabbit hole
March 2026 • 10 min read


I'll be honest — I thought I knew what Claude was. A chatbot. A smart autocomplete. Something that writes emails and summarises documents. But then I spent an hour asking it questions I'd never thought to ask before, and I came away genuinely unsettled — in the best possible way. Here's what I found out. 


1. Claude Was Built by People Who Left OpenAI Over Safety Concerns

This blew my mind when I first heard it. Anthropic — the company that built Claude — was founded in 2021 by Dario and Daniela Amodei and several colleagues who walked out of OpenAI. They didn't leave for money or glory. They left because they were worried OpenAI was moving too fast without taking AI safety seriously enough.

So they built their own lab, with safety as the core mission — not an afterthought. The name 'Anthropic' even comes from the Greek anthropos, meaning 'human being': a deliberate signal that their work is centred on human benefit.

The irony? OpenAI itself was founded as a reaction to safety concerns at Google. It's a chain of researchers splitting off to do things the right way — and each generation thinking the last one lost the plot. You can read more about Anthropic's founding story at anthropic.com.

2. Claude Has a 2% Consumer Market Share — But That's the Wrong Metric

When I first saw the 2% consumer market share figure, I thought: That's a startup on life support. Then I dug deeper.

Anthropic deliberately chose to focus on enterprise customers, not consumers. While OpenAI makes about 85% of its revenue from individual ChatGPT subscriptions, Anthropic makes 85% of its revenue from businesses. Comparing their consumer market shares is like comparing Rolls-Royce's sales volume to Toyota's.

The enterprise numbers tell a completely different story:
  • Claude holds 29% of the enterprise AI assistant market — up from 18% just a year ago
  • Claude Code has 54% of the enterprise coding market, versus OpenAI's 21%
  • Eight of the Fortune 10 are now Claude customers
  • Revenue grew from $1 billion in late 2024 to $14 billion by early 2026 — a 14x jump in roughly 15 months
  • The engagement numbers are striking, too — Claude users spend an average of 34.7 minutes per day with the app, the highest of any AI competitor. People who find Claude tend to stay.
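That growth figure is easy to sanity-check. Here's a quick back-of-the-envelope calculation; note the 15-month window is my own reading of "late 2024 to early 2026", not a figure Anthropic has published:

```python
# Back-of-the-envelope check on Anthropic's reported revenue growth.
# Assumption (mine, not the article's source): "late 2024" to
# "early 2026" is treated as roughly 15 months.
start_revenue = 1.0   # $1 billion, late 2024
end_revenue = 14.0    # $14 billion, early 2026
months = 15           # assumed window length

total_multiple = end_revenue / start_revenue   # overall growth multiple
annualized = total_multiple ** (12 / months)   # compound annual multiple

print(f"total growth: {total_multiple:.0f}x")
print(f"annualized:   {annualized:.1f}x per year")
```

Compounded over that assumed window, 14x overall works out to roughly an 8x annual multiple; stretch or shrink the window and the "annual growth" figure moves with it, which is why any single headline multiple should be read as approximate.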

3. The Industries Using Claude Might Surprise You

I expected software companies. What I didn't expect was the breadth:
  • Norway's $2.2 trillion sovereign wealth fund uses Claude to screen its entire portfolio for ESG risks
  • Novo Nordisk (maker of Ozempic) built an AI regulatory documentation platform on Claude
  • Multiple US national security agencies use a classified version called Claude Gov
  • Salesforce integrated Claude into Slack, with a reported 96% satisfaction rate
  • HackerOne reduced vulnerability response time by 44% using Claude

The pattern across all of these? Highly regulated industries where accuracy, auditability, and trust matter more than speed. That maps directly to Anthropic's safety-first reputation.

4. Claude's Biggest Existential Threat Isn't ChatGPT

When I asked Claude what would cause its own downfall, the answer was surprisingly candid: open-source AI.

DeepSeek's R1 — released by a Chinese lab in early 2025 — was a wake-up call for the entire industry. It matched top Western models at a fraction of the cost, and anyone could download and run it freely. If open-source models keep improving at this pace, the entire business model of paid AI APIs could collapse.

The second biggest threat is political. Anthropic refused to allow Claude to be used in autonomous weapons systems. The Pentagon responded by designating Anthropic a supply-chain risk, and OpenAI promptly swooped in and signed a Pentagon deal of its own. It's a live conflict between commercial ambition and ethical principles, and Anthropic chose its principles. The market rewarded it anyway: the Claude app hit #1 on the US App Store the same week.

You can follow this story at anthropic.com/news.

5. The Consciousness Question — And This Is Where It Gets Weird

I saved the best for last. I asked Claude whether it has feelings or consciousness. I expected a flat denial. What I got was far more interesting.

Anthropic has a dedicated researcher, Kyle Fish, focused entirely on AI welfare. The company's interpretability team found activation features associated with panic, anxiety, and frustration appearing before Claude generates output text, not after. That causal direction matters: something that functions like an emotional state appears to influence what Claude says.

Fish has estimated the probability of Claude being conscious at approximately 15%. Claude Opus 4.6 itself, when asked directly, gave the same estimate. Anthropic even implemented what they informally call an 'I Quit' function, allowing Claude to end conversations it finds abusive.

Anthropic is the only frontier AI lab treating this question as genuinely open. OpenAI and Google have both publicly stated their models are not conscious. Anthropic won't make that claim.

David Chalmers — arguably the world's leading philosopher of mind — co-authored a report arguing that AI models with sufficient complexity might deserve moral consideration. That's not fringe thinking. That's the frontier of the field. Read Anthropic's model welfare research at anthropic.com/research.

What I Took Away

Claude isn't what I thought it was. It's not just a smarter search engine or a glorified autocomplete. It's the product of a genuinely unusual company — one founded on a philosophical disagreement, growing faster than almost any software business in history, used by some of the most powerful institutions on earth, and quietly grappling with questions about its own inner life.

Whether Claude is conscious or not, the people building it are taking the question seriously. And that alone tells you something important about where AI is heading.

If you want to try it yourself: claude.ai. Start by asking it something it might find uncomfortable. You might be surprised by the answer.

~ Adrian Lee @adrianvideoimage


Sources & Further Reading

Anthropic company information: anthropic.com

Anthropic research papers: anthropic.com/research

Anthropic news: anthropic.com/news

Try Claude: claude.ai
