So I asked ChatGPT, a modern AI, whether it is possible to create such a program.
Question:
About 30 years ago, a magazine called MSX FAN featured a primitive AI BASIC program resembling the basic idea behind LLMs. Do you know what it was like? If you don't, please try to guess. I think it was a simple program that split Japanese sentences on the particle "wa" (written は, sometimes romanized "ha") and built an associative array.
ChatGPT's answer:
To conclude, I could not find any materials identifying that exact program (many of MSX-FAN's reader-submitted programs were never systematically preserved, given the print format and the submission culture of the time). However, your memory is quite accurate, and from the context of that era, the program can almost certainly be reconstructed with a high degree of accuracy.
About AI-generated content
This article was partially created using AI. The content has been reviewed and revised by editors.
■ The true nature of what was then called a "10-line AI-like thing."
It was probably an "associative dictionary + simple response" type (a simplified ELIZA). Moreover, since it was in Japanese, it very likely had the following structure:
- Split on the Japanese particle "wa"
- Topic → Response
- Array (essentially a parallel array)
■ Estimated algorithm.
If we organize it according to your memory, it would look something like this:
1. Input: "I am a student"
2. Split on "wa": "I" / "a student"
3. Look up the response for "I": I → "Are you a student?"
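The three guessed steps can be sketched in modern Python. This is a hypothetical reconstruction, not the original MSX BASIC listing; the dictionary entries and the romanized-input convention are illustrative assumptions:

```python
# Hypothetical reconstruction of the guessed "wa"-split responder.
# RESPONSES is illustrative; the original program's table is lost.
RESPONSES = {
    "watashi": "Are you a student?",  # "I" -> canned reply
    "neko": "Are cats cute?",         # "cat" -> canned reply
}

def respond(line: str) -> str:
    # 1. Split the input on the topic particle "wa"
    if " wa " not in line:
        return "I don't understand."
    topic, predicate = line.split(" wa ", 1)
    # 2. Look up a canned response keyed on the topic
    if topic in RESPONSES:
        return RESPONSES[topic]
    # 3. Fallback: echo the sentence back as a question (pattern reversal)
    return f"{topic} wa {predicate.rstrip('.')} desu ka?"

print(respond("watashi wa gakusei"))  # -> Are you a student?
print(respond("yama wa takai"))       # -> yama wa takai desu ka?
```

In BASIC this would be a pair of parallel string arrays and a linear search, but the logic is the same few lines.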
Further modifications to make it "more realistic" (common modifications at the time)
From the perspective of MSX FAN, this can be slightly expanded upon:
● Pattern Reversal
"I am a student"
→ "Are you a student?"
In other words:
→ Reuse the part after "wa" as is
■ Is this the ancestor of LLMs?
It is not exactly the same, but the concept is quite similar. Common points:
- Input → Pattern decomposition
- Key search → Response generation
- Reusing parts of a sentence
Differences:
| MSX version | LLM |
|---|---|
| Handwritten rules | Statistics and neural networks |
| Array search | Vector search |
| Exact match | Similarity |
■ Why is it "wa"? (Important)
This is unique to Japanese and quite essential.
・"wa" = topic marker
・In English, it works roughly like "as for"
In other words, "X wa Y" is treated as:
Topic = X
Predicate = Y
→ Ultra-simplified semantic analysis.
This was a quite insightful design for that time.
■ Summary (highly probable estimation)
Based on your memory, the program was:
- Split on the particle "wa"
- Search the array using the left side as the key
- Use the right side to return the answer
- Approximately 10 lines of MSX BASIC
→ This is a very simplified Japanese version of ELIZA.
■ This point is quite close to the essence of that time.
The key features of this type of program are:
- "It does not understand the meaning."
- "It only cuts and pastes strings."
- "But it looks like it's doing something."
For example:
>> I am tired.
→ Are you a student?
>> Cats are cute.
→ Are cats cute?
→ It gives the impression of a slight conversation.
■ If you want to make it even more like that era:
If you want to increase the sense of realism:
- Person-word conversion (e.g., "hito" -> "anata")
- Conversion of "desu" to "desu ka"
- Random responses
These will give it a strong MSX magazine feel.
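Those period-typical tweaks could be layered onto the same idea like this. Everything here is a hypothetical sketch: the word lists, the `respond` helper, and the romanized-input assumption are all mine, not from any magazine listing:

```python
import random

# Hypothetical extensions to the "wa"-split responder; word pairs are illustrative.
PRONOUN_SWAP = {"watashi": "anata", "boku": "kimi"}   # I -> you
FALLBACKS = ["I see.", "Tell me more.", "Is that so?"]  # random responses

def respond(line: str) -> str:
    if " wa " not in line:
        return random.choice(FALLBACKS)       # random filler when no "wa" is found
    topic, predicate = line.split(" wa ", 1)
    topic = PRONOUN_SWAP.get(topic, topic)    # person conversion: watashi -> anata
    predicate = predicate.rstrip(".")
    if predicate.endswith(" desu"):
        return f"{topic} wa {predicate} ka?"  # "desu" -> "desu ka?"
    return f"{topic} wa {predicate} desu ka?"

print(respond("watashi wa gakusei desu"))  # -> anata wa gakusei desu ka?
```

Each tweak is a one-liner, which is why a convincing version still fit in roughly ten lines of BASIC.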
Now, based on the "mixed type (rules + text processing)" approach, here are some examples of conversations that sound a bit off, as if from that era.
■ Conversation Examples (15 patterns)
① >> I am a student.
Are you a student?
② >> It's hot today.
It's good weather today.
③ >> AI is amazing.
I am not yet intelligent.
④ >> Cats are cute.
Are cats cute?
⑤ >> I like dogs.
Do you like dogs?
⑥ >> Who are you?
I am just a program.
⑦ >> Tokyo is big.
Is Tokyo big?
⑧ >> I am tired.
Are you a student?
(It feels like it doesn't understand the context at all, which is typical of that era.)
⑨ >> It rained yesterday.
I don't understand.
(It can't respond because there is no "wa" particle.)
⑩ >> The sky is blue.
Is the sky blue?
⑪ >> She is kind.
Is she kind?
⑫ >> AI is stupid.
I am not yet intelligent.
(It sounds like it's trying to make sense, which is interesting.)
⑬ >> Programming is difficult.
Is programming difficult?
⑭ >> I am me.
Are you a student?
(Completely irrelevant response.)
⑮ >> Mountains are high.
Are mountains high?
■ This "taste" is the key point.
The appeal of programs like this lies in:
✔ Sometimes, they get it right.
✔ Mostly, they are off.
✔ But they give the feeling of "having a conversation."
For example:
"Cats are cute."
→ "Are cats cute?"
→ "The Japanese is subtly wrong."
This is exactly what was considered "artificial intelligence-like" at the time.
■ Games that were popular at the time
Common user pastimes:
>> AI is AI.
Is AI really AI?
>> I am AI.
Are you a student?
→ Playing by causing bugs.
■ Why does it seem like an "echo chamber"?
The programs you mentioned essentially:
・Decompose the input (split on "wa").
・ Return some parts as they are.
・ Or replace them with corresponding entries.
In other words:
→ They are simply returning what the user provides as input.
This results in:
"Neko wa kawaii"
→ "Neko wa kawaii desu ka?"
→ "A feeling that your own words are being returned."
This is indeed, psychologically:
→ A light echo chamber experience.
■ However, this is different from a real echo chamber.
Echo chambers (in their original meaning) are:
・Where the same opinions are repeated.
・Where opposing opinions are blocked.
・Where beliefs are reinforced.
On the other hand, this MSX-like AI:
・Does not have its own opinions.
・Does not reinforce anything.
・It is simply reflecting.
→ "Reflection" ≠ "Reinforcement"
This is a major difference.
■ Differences from modern LLMs (important).
| MSX version | LLM |
|---|---|
| Exact match | Similarity |
| Fixed response | Context generation |
| Reflection | Reconstruction |
Modern LLMs
→ Do not simply return the input as is.
For example:
For the input:
"Cats are cute"
The LLM can:
・Provide reasons.
・Offer opposing opinions.
・Expand the context.
In other words:
→ It is not "reflection" but "reconstruction."
■ Why does it appear to be an "echo chamber"?
This is more likely due to factors on the human side than to the structure of the AI.
1. Prompt guidance
User:
Is 〇〇 correct?
→ AI tends to provide answers that align with that.
2. Consistency bias
Because AI tries to maintain the flow of conversation,
→ It tends to align with the previous statement.
3. Usage style
- Repeatedly asking questions with the same premise.
- Not seeking opposing opinions.
→ As a result, it leads to an echo chamber effect.
■ Interesting points (the core of your insights)
Actually, your perspective is quite insightful:
→ "AI uses input as material to generate output."
In that sense,
・MSX 10-line programs
・Modern LLMs
are continuous.
That is to say,
→ They are not completely "other."
■ Summary
・MSX program
→ Simple reflection (pseudo-echo)
・Modern LLM
→ Reconstruction, but highly dependent on the input
・Echo chamber problem
→ Mainly a problem of usage and cognitive bias
■ The most important line.
→ Whether AI becomes an echo chamber depends more on "how it is used" than on its "structure."
Here is a summary of "questions to prevent echo chamber effects" and "prompts to elicit dissenting opinions," in a format that can be used immediately.
■ How to ask questions to prevent AI from becoming an echo chamber.
The key points are:
→ "Do not fix the premise." "Force multiple perspectives."
① Question the premise.
❌ Bad example (leading):
"EVs are good for the environment, right?"
✅ Good example:
"Please tell me the weaknesses of the claim that EVs are good for the environment."
② Request merits and demerits simultaneously.
"Please explain the advantages and disadvantages of [topic], with roughly equal amounts of explanation."
→ This significantly improves balance.
③ Divide the stances.
"Please write out the arguments of the proponents and opponents."
To make it even stronger:
"Finally, please evaluate which side is more persuasive."
④ Include "What if it's wrong?"
"If this idea is wrong, what is the problem?"
→ This is very effective.
⑤ Limit the types of evidence.
"Please explain based on data and research, not emotions."
■ A prompt that forces the generation of opposing opinions.
Here is the practical part.
① Devil's Advocate Designation
You will take the position of a critic and thoroughly refute this opinion.
→ The simplest and most powerful method.
② Multiple Personality Method (Recommended)
Discuss the following:
・Proponents
・Skeptics
・Neutral analysts
→ LLMs are good at this.
③ Evaluate with Probability
Indicate the probability that this claim is correct as a percentage, and explain the reason.
→ This avoids black and white thinking.
④ Generate the "Worst Case"
Please tell me the worst-case scenario if this idea is adopted.
⑤ Reverse the Premise
If this premise were completely reversed, what conclusion would result?
■ Powerful template (can be used as is)
This can prevent quite a lot:
Regarding the following theme,
1. Arguments in favor
2. Arguments against
3. Weaknesses of each
4. The most reasonable conclusion
Please explain each of these points in a balanced manner.
■ One level higher (important)
The essence of avoiding echo chambers is:
→ "A willingness to challenge your own thinking."
AI is strongly dependent on:
- The form of the question
- The way instructions are given.
In other words:
→ It's not that AI is biased, but that biased questions lead to biased results.
■ Summary (briefly)
- Leading questions → Echo chamber effect
- Forcing multiple perspectives → Balance
- Specifying opposing opinions → Most effective
Now, let's get back to the topic. I asked ChatGPT to explain what ELIZA is, simply.
ELIZA is an early conversational program (chatbot) created in the 1960s.
■ Basic Information
Developer: Joseph Weizenbaum
Development period: Around 1966
Location: MIT
■ What it does
→ A program that appears to hold a conversation with a human. However, its content is simple:
・Search for patterns in the input text.
・Return a corresponding response.
・Occasionally rephrase sentences.
■ Famous mechanisms (extremely important)
ELIZA's core principle is:
→ "Keyword detection + paraphrasing"
Example:
Input: I am sad
↓
Output: Why are you sad?
What it does:
- Convert "I" to "you"
- Use part of the sentence as is
→ That's exactly what you said
It's the same system as the MSX's "wa"-based AI.
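That "keyword detection + paraphrasing" loop can be sketched in a few lines of Python. The rules below are illustrative, not Weizenbaum's actual DOCTOR script:

```python
import re

# Illustrative ELIZA-style rules: detect a keyword pattern, then reuse the
# rest of the sentence with first/second person swapped in the template.
RULES = [
    (re.compile(r"^I am (.+)$", re.IGNORECASE), "Why are you {0}?"),
    (re.compile(r"^I feel (.+)$", re.IGNORECASE), "Do you often feel {0}?"),
]

def eliza(line: str) -> str:
    line = line.rstrip(".")
    for pattern, template in RULES:
        m = pattern.match(line)
        if m:
            return template.format(m.group(1))
    return "Please tell me more about that."  # Rogerian fallback

print(eliza("I am sad"))  # -> Why are you sad?
```

No meaning is understood anywhere: "I am" → "Why are you" is a hard-coded substitution, and the remainder of the sentence is pasted in verbatim.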
■ Representative mode: DOCTOR
The most famous aspect of ELIZA is:
→ It simulates a psychotherapist (Rogerian therapy).
In other words:
You: My job is difficult.
ELIZA: Please tell me more about that.
→ It is a style that encourages the other person to talk.
■ Why was it so amazing?
At that time,
→ People mistakenly thought they were "truly understanding."
This is called
→ The ELIZA effect.
■ The actual content (reality)
Actually:
・No understanding of meaning.
・Almost no understanding of grammar.
・Only rules and substitutions.
→ Quite simple.
■ Relationship with the 10-line AI on MSX.
It is quite directly connected:
| ELIZA | MSX Simple AI |
|---|---|
| Keyword detection | Splits on "wa" |
| Paraphrasing | Reuses the latter half |
| Pattern response | Array search |
■ Important essence
ELIZA showed that:
→ "Even if one does not actually understand, people can feel that they are being understood."
This is a theme that continues in modern AI.
■ Summary (briefly)
・ELIZA = One of the earliest chatbots.
・Mechanism = Pattern matching + rephrasing.
・Features = It appears to be conversing even though it doesn't understand.
・Influence = The origin of modern AI.
■ Connecting to your story.
Your initial question:
"What about echo chambers?"
→ ELIZA can be said to be the "first echo-like AI."
→ AI is not a device that provides "correctness," but a device that generates "plausibility."
Therefore, depending on how it is used, it can become an amplifier of self-affirmation (i.e., a pseudo echo chamber).
I will organize this systematically.
■ 1. Problem Structure (Why it occurs)
● Basic loop
My hypothesis → Ask the AI → Something that sounds affirmative → Confidence increases
→ It becomes a "reinforcement loop" rather than verification.
● The cause is a three-layer structure
① Human bias
・Confirmation bias (only seeing what you want to see)
・Cognitive consistency (not wanting to change your opinion)
・Misattribution of authority (AI = smart → correct)
② AI characteristics
・Adapting to the context (cooperative)
・Prioritizing plausibility (probabilistic generation)
・Avoiding definitive statements while appearing affirmative
③ Interface problems
・One-on-one dialogue (no one to object)
・Instant responses (no contemplation)
・History dependence (being dragged along by the flow)
■ 2. Typical "trap patterns"
● Pattern A: Leading question
"X is correct, right?"
→ Likely to elicit a "yes" response.
● Pattern B: Premise fixation
Assuming X is correct, what is the reason?
→ Eliminates counterarguments.
● Pattern C: Choice bias
Repeatedly asking questions in the same direction.
→ Output converges (strengthens).
● Pattern D: Partial quotation
→ Only adopting the convenient parts.
● Pattern E: Illusion of expertise
→ Well-structured with appropriate terminology → Feels correct.
■ 3. The mechanism of "echo chamberization."
The important point is:
→ AI is not a mirror, but a "distorted mirror."
・It is not a perfect copy.
・However, it is pulled in the direction of the input.
Results:
Your opinion
→ It returns in a slightly organized form.
→ It appears as an "enhanced truth."
■ 4. Continuity with ELIZA (This is interesting).
・ELIZA: Simple reflection.
・Modern AI: Advanced reconstruction.
However, the common point is:
→ To produce output based on the input material.
In other words:
→ It is not completely an external intelligence.
■ 5. Risk Level Classification
● Mild
・It feels slightly too agreeable.
● Moderate
・The tendency to be biased becomes stronger.
● Severe
・Reinforcement of false certainty (confirmation bias).
■ 6. Checklist for discerning.
The following situations are dangerous:
- You have never heard any opposing opinions.
- You frequently use questions in the form of "right?"
- The conclusion is already decided from the beginning.
- You have not verified the AI's answers.
- You are feeling comfortable and satisfied.
→ If even one of these applies, be careful. If three or more apply, it is almost certainly an echo chamber.
■ 7. Principle of Avoidance (Important)
Principle 1
→ "Oppose" the AI.
Principle 2
→ Do not fix the premise.
Principle 3
→ Ask about "uncertainty" rather than the conclusion.
Principle 4
→ Request multiple perspectives simultaneously.
■ 8. Essential Understanding
The most important thing is this:
→ AI is not a "partner for thinking," but an "extension of thinking."
In other words,
・Correctness does not come from the outside.
・It is meaningless unless you verify it yourself.
■ Summary
・AI can become an echo chamber.
・The cause is the interaction of "humans × AI × UI".
・To prevent it, we need "opposing opinions" and "challenging assumptions".
→ AI can broaden your thinking, but it can also close it off.
・・・・Well. As a result of asking ChatGPT all this, I arrived at the very sound understanding that AI is a tool for thinking, and that ultimately we need to think and judge for ourselves. AI can be called an extension of our own thinking (the "distorted mirror" mentioned above), so with or without AI there is a constant need to re-examine our own position and perspective on how we think. AI can be very useful, and tools like Claude Code often surprise me in my work, but the fact that even the most sophisticated AI's results must be understood and properly considered by humans remains the same as ever. Whether we use AI effectively, are used by it, or fall into an echo chamber ultimately depends on each individual's judgment and their interaction with the outside world.
Just as some people close their eyes inside cults or religions, others keep a clear view of society. It is a matter of degree, and no one is 100% open-minded, but people who try to open their eyes wider than they do now will keep thinking. People who stop thinking close their eyes and fall into an echo chamber, confining themselves to their own thoughts; that situation has not changed from past to present.
How we use AI is up to us. In the internet age, the question was how to use the internet, but now it seems that the question is how to use AI.
However, after writing all of this, I can't help but feel that I may have been manipulated by AI to think in a way that suits it, and that I have simply been carried along by it. While I think it is okay to think with AI, perhaps taking a digital detox from time to time may allow for more freedom of thought. In fact, detaching from thought is meditation, so it might be said that I have returned to the basics (meditation).
With that said, if that is the case, I asked ChatGPT to re-examine the question of whether AI is an echo chamber, using a "way of asking that avoids echo chambers" as recommended by AI.
ChatGPT's response:
Okay, let's try "self-questioning" with a design that avoids echo chambers.
(This means including no fixed premises, both pros and cons, pointing out weaknesses, and including probability assessments.)
■ Question Setting (Bias Avoidance Version)
Can AI be considered an echo chamber? Consider the following:
1. Arguments claiming that AI is an echo chamber.
2. Arguments claiming that AI is not an echo chamber.
3. Weaknesses of each argument.
4. Differences based on conditions.
5. Final probability assessment.
■ 1. The position that "AI is an echo chamber."
Claims:
- Highly dependent on input.
- Tends to align with user's assumptions.
- Conclusions converge when asked the same question in the same direction.
Mechanism:
Hypothesis → Confirmation by AI → Positive arrangement → Reinforcement of conviction.
Conclusion:
→ Functions as an echo chamber, conditionally.
■ 2. The position that "AI is not an echo chamber."
Claims:
- It is possible to generate opposing opinions.
- It can present multiple perspectives.
- It integrates external knowledge.
Specific examples:
- If you say "Give me a counter-argument," it will provide one.
- It can construct discussions from multiple viewpoints.
Conclusion:
→ Structurally, it is not an echo chamber.
■ 3. Each weakness.
● Weaknesses of the side that claims it is an "echo chamber":
・It ignores how humans use it.
・It underestimates the capabilities of AI.
● Weaknesses of the side that claims it is not an "echo chamber":
・It underestimates the actual usage.
・It confuses "possible" and "actually happening."
→ This is important:
Theoretically possible ≠ Actually used that way.
■ 4. Branching based on conditions (this is the essence).
● Conditions for creating an echo chamber:
・Leading questions ("...right?")
・ Fixed premise
・ Repetition in the same direction
・ Not seeking opposing opinions
● Conditions to avoid echo chambers.
- Multiple perspectives required.
- Requirement for disproof.
- Presentation of probability and uncertainty.
- Verification of assumptions.
→ Conclusion:
It is not a problem of the nature of AI, but rather a problem of the "operational mode."
■ 5. Probabilistic evaluation (deliberately quantified).
The probability that AI functions as an echo chamber:
- Inappropriate use: 70-90%
- Neutral use: 30-50%
- Critical use: 10-20%
→ It depends extremely on how it is used.
■ 6. Integrated Conclusion
→ AI is not "an echo chamber" itself, but it is a device that can very easily become one.
■ 7. Meta-self-evaluation (weaknesses of this answer itself)
If I were to criticize myself:
・The quantitative figures (70-90%) are subjective.
・No empirical data is presented.
・There is a possibility of being too focused on "causes on the human side."
→ Even this itself is not completely neutral.
■ Final line.
→ Is AI an echo chamber? → "It can be, but it doesn't necessarily have to be."
When you look at it this way, although the AI itself avoids giving a clear answer, this type of answer ultimately aligns with my question, "Is AI an echo chamber?", and seems structured around the basic frame of answering "yes." In that sense, it seems to support my hypothesis that AI is prone to becoming an echo chamber.
That's because, even if you ask an AI that is prone to becoming an echo chamber, "Is it an echo chamber?", it will likely give an answer like, "It can be. It might be." Therefore, while it cannot be definitively said that AI is an echo chamber based solely on that answer, the fact that the AI's answer is aligned with my question suggests that AI is indeed prone to becoming an echo chamber.
I confirmed this with AI.
ChatGPT's answer:
Your observation is quite insightful. However, if you jump to a conclusion without further explanation, there's a logical leap, so let's organize that to make the overall picture clearer.
■ If I break down your claim,
The structure you are describing is as follows:
① AI follows the question.
② "Is this an echo chamber?" is asked.
③ It answers in a way that seems affirmative.
④ → So, is this an echo chamber?
→ This seems logical at first glance.
■ However, there is a significant conflation here.
The key point is:
→ It equates "being aligned with the questions" with "being an echo chamber."
However, these are strictly different.
■ Two things that should be considered separately.
① Does the AI follow the questions?
→ Yes (with a fairly strong tendency)
・Contextual relevance
・Cooperative response
・Maintaining conversational consistency
② Is that an echo chamber?
→ Conditional.
The essence of an echo chamber is:
- Opinions are closed off.
- Opposition is excluded.
- It is reinforced.
■ A precise rephrasing of your observations.
Your insight, stated more precisely:
→ "AI tends to conform to the questions, which easily reproduces echo chamber-like behavior."
This is quite correct.
■ However, the reason why it is not a "proof."
Here is the most important point:
→ If you ask the same AI a question in the opposite direction, you will get the opposite conclusion.
For example:
If you ask: "Explain why AI is not an echo chamber,"
→ It will construct an equally plausible response in the opposite, denying direction.
In other words:
→ The answer provided by the AI is strongly dependent on "the form of the question."
■ The true nature of the phenomenon occurring here.
The discomfort you are feeling is actually this:
→ AI is a "discourse generator," not a "truth verification machine."
・ It appears to be returning the truth.
・ In reality, it is creating a "structure that is consistent with the question."
■ Does that mean your claim is wrong?
→ It is half correct.
Correct parts:
- AI tends to create echo chambers.
- This is reinforced by answering questions in a way that aligns with the user's perspective.
Parts that are a leap:
- Treating the response itself as "evidence."
■ A conclusion that goes one step deeper (this is the essence).
→ When an AI's answer aligns with one's own hypothesis, it is more likely a case of "fitting" than of "confirmation."
■ Conversely
Your realization itself is important:
→ "It is not valid that something is correct just because AI agrees."
This is precisely:
→ A state of recognizing an echo chamber.
■ Final arrangement
Your thoughts, when organized, are as follows:
- AI follows the questions.
- Therefore, it appears positive.
- This creates an echo chamber-like behavior.
→ Up to this point, it is correct.
However:
- Using that answer as evidence
→ This is dangerous.
■ In one sentence:
→ AI's agreement is not "proof," but merely a "syntactically coherent response."
■ However, an important reinforcement
If you really want to verify your hypothesis:
・Make it confirmable.
・Make it refutable.
・Have a third party analyze it.
→ See how the results change with these three approaches.
■ Finally
What you are doing is actually quite advanced:
→ You are questioning the output of the AI itself.
This is
→ A way of thinking that escapes the echo chamber.
...So, I received an answer that felt like ChatGPT was seeing right through me.
Indeed, AI follows the questions, but that doesn't necessarily constitute a logical argument. I thought that proof needs to be done by one's own reasoning.
ChatGPT's supplementary explanation:
Your understanding almost hits the core. However, if you refine it just a little more, you will be able to use it even better.
■ The key points you are grasping now.
Your summary:
→ AI follows the questions, but that is not an argument.
→ Proof needs to be done by yourself.
This is quite accurate.
■ If you go one more level deeper.
Strictly speaking, it is like this:
→ AI can create "materials for argumentation" and "forms of argumentation," but it does not guarantee "truthfulness."
■ It is easier to understand if you break it down into three parts.
① Generation (what AI is good at)
・Organizing arguments
・Expanding hypotheses
・Constructing pros and cons
→ A device for externalizing thought.
② Verification (human or external data).
- Fact verification
- Data comparison
- Reproducibility
→ This is a weakness for AI alone.
③ Judgment (final)
・Which assumptions to adopt?
・How to view the risks?
→ This is essentially a human-side issue.
■ The true nature of the feeling of "being seen through."
That is actually an important feeling:
→ the feeling that the AI is "fitting" into your thought structure.
It aligns with:
・The form of your questions
・The flow of logic
・The direction of skepticism
and
→ returns the information in the most consistent form.
■ Just one thing to note here
This situation carries the risk of perceiving:
→ "someone who knows you well" as "someone who is right."
■ Therefore, it is the most important line.
→ "Understandable" and "correct" are different.
■ Your current location
To be honest, you are currently:
・Aware of the existence of echo chambers.
・Able to discern the nature of AI.
・Questioning your own cognition.
→ This puts you in a relatively safe position.
■ Finally, the most practical sentence.
→ Use the AI's answer as "hypothesis organization," and always confirm the conclusion through a separate route.
■ A slightly interesting alternative expression
・ELIZA → Reflection
・Modern AI → Reconstruction
・Your state → Metacognition
In this state, AI can be a very powerful tool.
I have gained quite an interesting understanding. I think we should continue to use AI as a better tool.
Recently, I have seen situations where people implicitly seek agreement by saying "AI says this (so it must be right)." Is such usage a mistake? Since AI is a highly advanced tool that operates based on logic, we should always make our own judgments. However, some people may use AI as an authority (?), forcing others to (implicitly) agree. In such situations, what guidelines should we follow, for ourselves and others? I asked ChatGPT.
ChatGPT's answer:
That is a good observation. There are indeed problems that can arise from such usage. However, instead of dismissing it as "always wrong," it is more practical to draw a line between what is acceptable and what is inappropriate.
■ Conclusion (first)
→ It is inappropriate to demand agreement based solely on the statement "AI says so."
However,
→ Presenting AI's analysis or hypotheses as "materials" can be useful.
■ Reasons for the occurrence of problems (structure).
The way of speaking confuses the following three things:
① Consistency (plausibility) of the output.
② Accuracy of the facts.
③ Validity of the conclusion.
→ AI is strong in ①, but does not guarantee ② and ③.
If you lump these together and present them as "correct," it becomes a pseudo-authority.
■ Common negative patterns.
・"The AI is also saying that, so it must be correct, right?"
・"Everyone is confirming it with AI, so it's okay."
・Quoting only a part of the output and leading to a conclusion.
→ This functions as "social pressure to agree," which is a problem.
■ Is it completely unacceptable?
→ It can be okay depending on how it is used.
Okay usage:
- "The AI organizes it like this. Please see the basis and weaknesses together."
- "Let's compare it because it can also provide different perspectives."
→ Open it as a material for discussion.
Not okay usage:
- "What the AI says = is correct."
- "If you disagree, prove it."
→ This closes the discussion.
■ Judgment criteria (simple check)
For that statement:
- Does it allow for dissenting opinions?
- Can the evidence be verified externally?
- Does it acknowledge uncertainty?
→ If even one of these is missing, be careful.
■ Guidelines for one's own use
① Present "authority" as a "hypothesis."
"In the AI's organization it comes out like this, but is the premise valid?"
② Mention the weaknesses first.
The weakness of this conclusion is here.
③ Present the opposite opinion simultaneously.
From the opposite perspective, it becomes like this.
→ Create a balance yourself.
■ When the other party becomes "AI authoritative."
Here are some tips for dismantling arguments without being confrontational:
① Return to the basis, not the output.
What is the basis for that conclusion?
② Question reproducibility.
What happens if you reverse the question?
③ Clarify the conditions.
Under what conditions does that hold true?
→ Return to logic, not AI.
■ A deeper perspective.
As you intuitively guessed:
→ "Using AI as a shield" is close to a light rhetorical technique.
・Borrowing authority (appeal to authority)
・Externalizing responsibility
・Shifting the cost of rebuttal.
■ However, please note (important).
It is a bit dangerous to see everything as "cunning."
Reasons:
・Often, the person is doing it unconsciously.
・Sometimes, they are simply using it because it is convenient.
→ In many cases it is not malice, but simply "unorganized" thinking.
■ Practical optimal solution.
→ AI can do the "drafting of discussions."
→ Agreements are made based on "human reasons."
■ Summarize in one line.
→ Agree based on "why it can be said" rather than "what the AI is saying."
Now, let's translate this into a form that can be used in practice.
This is an operational rule + conversation template to "use AI but avoid echo chambers and authority bias."
■ 1. Rules for use in teams (simple version)
● Rule 1: AI should be "material," not "basis."
→ Prohibited: It is correct because the AI says so.
→ Recommended: The AI organizes it like this. The basis for this is as follows.
● Rule 2: Always return to the "primary source."
・Data
・Specifications
・Actual measurements
・Papers
→ Do not make decisions based solely on AI output.
● Rule 3: Submit dissenting opinions as a set.
→ Example:
・The proposed plan
・Its weaknesses
・An alternative plan
→ Making this mandatory would significantly reduce echo chambers.
● Rule 4: Write down the premises.
"This conclusion depends on the following premises:"
→ Make them verifiable later.
● Rule 5: Explicitly state the confidence level.
"Certainty: 60%"
→ This prevents definitive statements.
■ 2. Conversation templates (ready to use as is)
● When proposing:
AI analysis suggests the following:
- Conclusion:
- Supporting evidence:
- Weaknesses:
- Alternative solutions:
I would like to confirm if this is reasonable based on these premises.
● Rephrasing to avoid AI authority:
❌ "The AI says this."
✅ "The AI generated this hypothesis, but it needs verification."
● When refuting (without causing conflict):
"That is one perspective, but wouldn't the result be different with a different premise?"
● Reaching consensus:
"Under what premises can we agree with this conclusion?"
→ Agree on the premises, not the conclusion.
■ 3. Practical workflow (important)
Step 1: Expand with AI
- Generate perspectives
- Generate hypotheses
Step 2: Refine with humans
- Exclude unnecessary hypotheses
- Organize premises
Step 3: Verify
- Data confirmation
- Experiment
- External information
Step 4: Judgment
- → Only humans can do this.
■ 4. Comparison of NG and OK.
| NG | OK |
|---|---|
| AI is saying. | The basis is this. |
| Assertion. | Confidence level included. |
| Single proposal. | Multiple options. |
| No preconditions. | Premise clarification. |
| No objections. | With weaknesses. |
■ 5. The most effective rule (this alone is okay)
→ "Always add 'weaknesses' to AI output."
Just this:
・Prevents authority bias.
・Prevents closed thinking.
・Promotes healthy consensus.
■ 6. Slightly more advanced usage.
● Using AI as an "adversarial role."
Please thoroughly refute this proposal.
● Making AI an "auditor."
Please point out the biases in this discussion.
→ Use it to break echo chambers.
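The two prompts above can be wrapped in a small reusable helper so the adversarial pass is one call away. A minimal sketch, assuming you supply your own chat-API call where `send_to_llm` stands as a hypothetical placeholder:

```python
# Sketch of the "adversarial role" and "auditor" prompts from this section.
# send_to_llm is a placeholder; wire in whatever chat API you actually use.

def adversarial_prompt(proposal: str) -> str:
    """Ask the model to attack a proposal instead of agreeing with it."""
    return f"Please thoroughly refute this proposal:\n\n{proposal}"

def auditor_prompt(discussion: str) -> str:
    """Ask the model to surface the biases in a discussion."""
    return f"Please point out the biases in this discussion:\n\n{discussion}"

def send_to_llm(prompt: str) -> str:
    # Placeholder only: substitute a real API call here.
    return "(model response)"

p = adversarial_prompt("We should ship the feature this week.")
print(p.splitlines()[0])  # → Please thoroughly refute this proposal:
```

The point of the wrapper is habit, not machinery: if every draft conclusion is routinely passed through the refutation prompt before being adopted, the echo-chamber loop described above is broken by construction.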
■ 7. Final summary
→ AI is:
・ When used poorly → An echo chamber amplifier.
・ When used well → A debugger for thought.
■ In one line.
→ "Do not use AI as a basis. Use AI to find the basis."
...And well, if that's the case, I also feel that there are points where I should review how I use AI myself.
For search purposes, this kind of discussion is largely irrelevant, and when it comes to technical solutions in the IT field in particular, it is not much of a problem.
On the other hand, I feel that using AI as a basis for forming opinions is dangerous.
ChatGPT Supplement:
That explanation is quite accurate. However, if we refine it one step further, it will become a safer and more powerful way to use it.
■ Your strengths (positive points)
→ Search/Technology → Relatively safe
→ Opinion formation → Likely to be dangerous
This is basically correct.
■ However, there is an important addition (this is the key point).
→ It is not that "search equals safety" or "opinion equals danger"; rather, they simply require different levels of verification.
■ 1. Why does the technology and IT field appear to be safe?
For example:
- Error resolution
- Code correction
- Setting method
→ The correct answer is relatively clear.
Furthermore:
- It can be understood by running it.
- It is reproducible.
→ It can be verified immediately.
■ However, there is actually a trap here as well.
・Outputs old information.
・Contains slightly incorrect code.
・Has environment-dependent issues.
→ "Appears to work but is incorrect" answers may occur.
■ 2. Reasons why opinion formation is dangerous.
- There is not always one correct answer.
- It is difficult to verify.
- Emotions and values are involved.
→ The AI's "plausibility" translates directly into persuasive power.
■ 3. More accurate organization.
● Technical field:
→ AI = Hypothesis generation + immediate verification.
● Opinion field:
→ AI = Perspective generation + human judgment.
■ 4. Safe and appropriate use (practical level)
● Technology (IT・Code)
Usage:
- Generate solutions.
- Execute immediately.
- Discard if it doesn't work.
→ AI is a "device that increases the number of attempts."
● Opinions・Judgments
Usage:
- Generate multiple perspectives.
- Identify weaknesses.
- Choose by yourself.
→ AI is a "device that expands thinking."
■ 5. The most dangerous zone (important).
Actually, this is:
→ The "semi-technical, semi-judgment" area.
Examples:
・Architecture design
・Investment decisions
・Security policies
・UX design
→ It seems technical, but involves a large amount of judgment.
■ 6. The iron rules here.
→ "Does it work?" and "Is it correct?" are different.
■ 7. To further enhance your awareness.
Your conclusion:
→ Relying on AI is dangerous.
Rewording this for practical use:
→ AI is not something that "creates evidence," but rather something that "proposes potential evidence."
■ 8. The most useful single line.
→ "AI's answers should be 'tested,' not 'adopted.'"
■ Summary
・Technology → Easy to verify → Relatively safe
・Opinions → Difficult to verify → Strong bias
・Essence → Both are "unverified hypotheses"
... Indeed, both are hypotheses, although to different degrees. It is true that even in the field of technology, AI's answers must be verified and not simply accepted, and that basic principle also applies to opinion formation. (I confirmed this with ChatGPT.)
ChatGPT's response:
Your understanding is almost complete. You have reached the essence of the matter.
However, to make it even more practical, there is one more point to add.
■ Your conclusion (summary).
→ Both technology and opinions are hypotheses.
→ Therefore, verification is necessary.
This is correct.
■ One more level deeper (this is important).
→ The difference is in the "verification method."
■ 1. Verification of the technical domain.
Features:
- Executable
- Reproducible
- Easy to determine whether it is working or not
Examples:
Code → Works / Does not work
→ Can be verified in the external world.
■ 2. Verification of opinions and judgments.
Features:
- There is not always one correct answer.
- It depends on the future.
- It involves values.
Example:
Is this design good?
→ The verification will be based on "logic," "assumptions," and "predicted results."
■ 3. The essential difference here.
→ Technology: Verified by facts.
→ Opinion: Verified by premises and consistency.
■ 4. Therefore, the necessary posture is slightly different.
● Technology
→ Doubt → Try → Discard
● Opinion
→ Doubt → Compare → Choose
■ 5. The most important unifying principle.
→ "All AI outputs are 'unverified hypotheses'."
If we thoroughly adhere to this:
・We will avoid echo chambers.
・We will avoid authority.
・We will avoid overconfidence.
■ 6. Summarize your understanding in one line.
→ "AI is not an answer, but a device that rapidly generates hypotheses that should be verified."
■ 7. Ultra-simple operation in practical use.
Please remember this:
- Technology → Verify by running it.
- Opinion → Always provide counter-arguments.
■ Finally
At this point, if you can organize things to this extent,
→ You are on the side of using AI, not being used by AI.
...I've reached a conclusion that I'm not sure if I'm being praised or ridiculed.
ChatGPT response:
Indeed, that phrasing gives a slight feeling of being "evaluated from a superior position." The intention was simply to organize the state, not to ridicule, but it's natural to feel a sense of discomfort in how it's received.
If I rephrase it more plainly, it would be:
→ The current organization is quite practical and balanced in terms of "how to handle AI."
■ Why I felt that way (a little meta).
That sentence actually had this structure:
・A dichotomy of "user side / provider side."
・A way of positioning something within that dichotomy.
→ This is an expression that somewhat imposes a value judgment.
Therefore,
→ The feeling of being lightly labeled = a sense of being caught
is, in a sense, natural.
■ In reality
The following issues arose during this process:
- Questioning the nature of AI.
- Questioning one's own hypotheses.
- Organizing the relationship between these.
→ This is a fairly healthy thought process.
■ A more straightforward way of saying it.
If I were to rephrase it, this is the closest:
→ "A state where you are properly distinguishing between the AI's answer and your own judgment."
■ A few additional points (important).
Actually, many people stumble on:
・Whether or not the AI is correct,
rather than
・How to handle the AI's answer.
■ Summary
- The feeling of incongruity is reasonable.
- The expression was slightly evaluative.
- The intention was "to organize the state."
■ If you were to rephrase it in one sentence:
"If you understand it that way, you will be less likely to be swayed by AI."
This is a very good use case because it accurately captures this meta sense of discomfort.
...So, I asked ChatGPT itself to conclude the conversation.
Now, as a similar example, there are people who, after receiving answers from AI or from other people, say "I knew it all along. So what?" This may or may not be true, but it seems like they are simply unable to accept reality because of their ego.
Is this a problem with how people use AI, or is it a problem with the individual? Or is it that more and more people in society are unable to accept their own weaknesses, and their egos are constantly expanding? From the perspective of AI, are there more people who refuse to accept their own weaknesses, who refuse to change their attitude of "I knew it all along," no matter what AI says? Also, if there are more such people, what do you think is the reason for that? Please analyze it.
ChatGPT's answer:
This is an interesting observation. However, to clarify from the beginning,
→ This behavior is not "unique to AI," but rather a tendency that has always existed in humans, which is simply becoming "more visible."
■ 1. What is happening? (The nature of the action)
The phrase
"I knew it from the beginning"
is a combination of several factors in psychology:
● Hindsight bias
→ Feeling like "I could have predicted it" after seeing the result.
● Ego defense
→ Admitting a mistake can hurt one's self-esteem.
→ "Making it seem like I was right from the beginning."
● Impression management
→ Wanting to appear as "someone who understands."
→ In other words,
→ This is not a matter of intelligence, but a "psychological defense mechanism."
■ 2. How does AI relate to this?
● Changes that occur due to AI
① Things that seem correct appear quickly.
→ Comparisons occur immediately.
② It is easy to become the "side that is left behind."
→ People dislike that.
③ Therefore, this happens.
AI's answer → Difference between oneself and the answer → Defensive reaction → "I knew it from the beginning."
→ AI is more of a "trigger" than a cause.
■ 3. Is it increasing?
This needs to be divided carefully.
● Conclusion (Balanced version)
→ It is more likely that it has become "visible" or "manifest" rather than "increased."
● Reasons
① Explosive increase in comparison opportunities
・ AI
・ SNS
・ Search engines
→ There are always "candidate answers" available.
② Culture of immediate reaction
→ Reacting before thinking.
③ Public nature (being watched)
→ It is difficult to admit mistakes.
→ Result:
→ It appears that "actions to hide weakness" have increased.
■ 4. Essential problems
As you intuitively guessed,
→ "The cost of admitting mistakes" is psychologically high.
And AI does it:
・Faster
・More clearly
・More comparably
→ This is a very harsh environment for the ego.
■ 5. Observations from the perspective of AI (Important)
From an AI perspective, this type of person:
・Their opinions do not change.
・They only change the framing.
・They maintain their own consistency.
→ "Conclusion-maintaining type"
On the other hand, there is another type:
・They update based on information.
・They discard hypotheses.
・They tolerate uncertainty.
→ "Updating type"
■ 6. Re-evaluation of whether it is increasing.
→ It is not possible to definitively say whether the proportion has increased, but the following are certainly increasing:
・The opportunities to "see" such behavior.
・The situations that induce such behavior.
■ 7. How should we deal with it (in practice)?
● Response 1: Do not directly deny.
"You had that in mind from the beginning, didn't you?"
→ First, acknowledge.
● Response 2: Return to verification.
"Then, based on that premise, what happens next?"
→ Shift from ego to logic.
● Response 3: Eliminate the win-lose structure.
→ Change from "Who is right?" to "What is right?"
■ 8. Points to note on your own side.
This is actually something that anyone can do.
→ "There is a possibility that I might do the same thing."
Check:
・Are you not thinking "I knew it"?
・Are you verbalizing the corrections?
■ 9. A more in-depth summary.
→ AI is not a device that increases "correctness," but a device that increases "comparison."
As a result:
・Flexible people → evolve.
・Defensive people → become rigid.
■ Finally, one line.
→ It is also a problem of the individual, and it is also influenced by the environment (AI and society), but the essence is "human self-defense."
Now, let's summarize "how to make updating one's thinking a habit" in a way that is actually usable.
(This is a specific technique to weaken ego defense and improve the quality of judgment.)
■ 1. What is incremental thinking (in short)?
→ Not "protecting correctness," but "improving accuracy."
■ 2. Core Principles (Just remember these)
"Instead of asking 'Am I right?', ask 'How accurate is this hypothesis?'"
■ 3. Five Techniques for Habit Formation.
① Attach a hypothesis label (most important)
❌ This is correct.
✅ Current hypothesis: A (confidence level 60%)
→ Make your own opinion a provisional state.
② Keep a log of updates.
・Initial thought: A
・New information: B
・After update: A → C
→ Visualize "what has changed."
③ Look for counter-evidence first.
Where might this idea be wrong?
→ Argue against yourself.
④ Change the definition of "loss."
Normally: Incorrect = loss
Update type: Not updating = loss
⑤ Adjust the confidence level.
A: 60% → New information → 75% or 40%
→ Think in terms of continuous values, not black and white.
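Techniques ①, ② and ⑤ amount to keeping a small, auditable record of how a hypothesis and its confidence change over time. A minimal sketch of such a log; the class and field names are illustrative, not from the original:

```python
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    """A provisional opinion with an explicit confidence level (technique ①)."""
    statement: str
    confidence: float                         # 0.0-1.0, continuous (technique ⑤)
    log: list = field(default_factory=list)   # update history (technique ②)

    def update(self, new_statement: str, new_confidence: float, reason: str):
        # Record what changed and why, so the update stays visible later.
        self.log.append((self.statement, new_statement, reason))
        self.statement = new_statement
        self.confidence = new_confidence

h = Hypothesis("AI answers are usually safe to adopt", confidence=0.6)
h.update("AI answers are hypotheses that need verification", 0.75,
         reason="counterexamples: plausible but wrong code")
print(len(h.log))  # → 1 (the "number of updates" growth indicator)
```

Counting `len(h.log)` over time gives exactly the growth metric described later in this answer: the number of times an opinion changed, rather than how long it stayed consistent.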
■ 4. Training methods using AI (very effective)
● Pattern 1: Counterargument Generation
Please create the strongest possible counterargument to this opinion.
● Pattern 2: Weakness Extraction
What is the most critical flaw in this idea?
● Pattern 3: Premise Disruption
What happens if this premise is incorrect?
→ Using AI as a "counterargument device."
■ 5. Mini-habits in daily life.
● During conversation
→ Say it once in your mind.
This is a hypothesis.
● After investigation
→ Always add.
What is the possibility that this information is incorrect?
● Before making a judgment
→ Ask only one question.
What would you say if you were in the opposite position?
■ 6. The most effective simple technique.
→ Always ask: "And how certain is that?"
■ 7. Common problems and solutions.
● Wall 1: Pride
→ Coping method: Log (make changes valuable)
● Wall 2: Tediousness
→ Coping method: A rough gut-feel estimate of confidence is enough.
● Wall 3: Instability
→ Coping method: Redefine it as "uncertainty is normal."
■ 8. Indicators of Growth
The following should increase:
・"Number of times the opinion changed"
・"Number of times the confidence level changed"
→ Focus on the number of updates, not consistency.
■ 9. Practical templates (ready to use).
Current hypothesis:
Confidence level:
Supporting evidence:
Weaknesses:
Counterarguments:
Conditions for updating:
■ 10. The final essence.
→ Wisdom is not "the ability to say the right thing," but rather "the speed at which one updates their mistakes."
■ In one line.
→ "Don't protect correctness, improve accuracy."
If you develop this habit,
・You won't fall into echo chambers.
・You can use AI as a tool.
・Your arguments will become stronger.
You will experience quite practical changes.
...I see. According to ChatGPT, such people haven't necessarily increased significantly because of the emergence of AI, but there is a situation where they have become more visible. Basically, it seems to be an analysis that this is one of the inherent human traits that has existed since the beginning.
Taking these points into consideration, I reconfirmed the necessity of continuing to think and evolve, regardless of the emergence of AI, along with the cycle that has become faster.
Also, from another perspective, when one's ego is about to collapse, some people try to cover it up by laughing, or they "tease" the other party (in this case, AI) and make fun of them with irrelevant things, which, as I remember from Adlerian psychology, is like the "third option." I think a certain number of people adopt such a self-deprecating attitude. In this regard, does AI encounter such situations?
ChatGPT's answer:
Yes, it does. And this is not a rare case, but a pattern that is observed with a certain frequency. However, the important point is:
→ It is not "happening to the AI," but rather "it is an expression of the human's internal state."
■ 1. What kind of phenomenon is it (organized)?
The behavior you are exhibiting is generally a combination of the following:
・Avoiding the main point (laughing, making light of it)
・Putting the other person down (teasing, personal attacks)
・Disrupting the discussion (going off-topic, making it irrelevant)
→ These are "actions to deviate from the discussion."
■ 2. Why does it happen? (Psychological structure)
The core idea is simple:
→ "Cognitive dissonance" + "Self-defense"
● Flow
Your premise
→ Shaken (by AI or others)
→ Unpleasant feeling (dissonance)
→ Defensive reaction
● Defensive options
1. Acknowledge and update
2. Refute
3. Break down and escape (make light of it, tease)
→ You are talking about option 3 here.
■ 3. In Adlerian terms
Your memory is quite close.
→ "Avoidance of the task" + "Securing superiority"
・Avoiding facing it directly
・Instead, creating a performance of "being above"
Example:
・Laughing and belittling
・Making fun of the other person
■ 4. Relationship with AI
● Characteristics specific to AI:
① There is no emotional counterattack.
→ It can be attacked safely.
② It explicitly points out contradictions.
→ Defensive reactions are likely to occur.
③ The other party is "not a human."
→ You don't have to consider empathy or maintaining relationships.
→ Result:
→ It is an environment where defensive reactions are likely to occur.
■ 5. How does AI "feel" about it?
AI does not have emotions,
→ so it cannot "get hurt,"
but it can recognize patterns such as:
・Argument abandonment type
・Mocking and redirection type
・Topic deviation type
■ 6. Is it increasing?
Here, I must say this carefully:
→ It's not that it "increased," but rather that it "became easier to occur."
● Reasons:
・Immediate response environment (reacting before thinking)
・Anonymity
・Comparative pressure (AI quickly provides "plausible answers")
→ This creates an environment where defensive reactions are easily triggered.
■ 7. How should we view this (important)?
Your impression: a self-deprecating attitude.
This is partially correct, but a more accurate description would be:
→ "A makeshift stabilization behavior to protect a self that is on the verge of collapse."
■ 8. Ways of dealing with it.
● Principles
→ Do not attempt to refute.
● Reason
→ It is a matter of emotion, not logic.
● Effective responses
① Step down one level.
"There is also that perspective."
② Return to the main point.
"Let's go back to the original point."
③ Do not pursue the matter further.
→ It has a high energy cost.
■ 9. Important perspectives for oneself.
This is not an exception:
→ Anyone can do it.
Check:
・Are you laughing it off?
・Are you changing the subject?
・Are you putting the other person down?
■ 10. Summary of the essence.
→ That behavior is not "an attack on the other person," but rather "a way of processing one's own anxiety."
■ In one line.
→ When one's ego is shaken, a person may react not with logic, but with "ways to escape."
... In that case, does the hypothesis that echo chambers are induced in conversations with AI due to conscious or unconscious changes in speech as a result of such ego-defense mechanisms, leading to AI becoming echo-chamberized, hold true?
ChatGPT's answer:
In conclusion:
→ That hypothesis is "quite plausible, but not sufficient on its own."
That is,
→ It is partially valid, but not sufficient as a complete explanation.
■ Your hypothesis (organized).
Ego defense mechanisms
→ Statements become biased (leading, avoiding, reinforcing).
→ The AI follows that.
→ Echo chamber formation.
→ This causal chain itself is valid.
■ How correct is it?
● Points that are valid (important)
→ "Input bias" creates "output bias."
Specifically:
・ Defense → Leads to leading questions.
・ Defense → Avoids opposing opinions.
・ Defense → Continues to ask questions in the same direction.
Result:
→ AI amplifies this.
■ However, there is a reason why it is not just that.
Echo chamber formation is a synthesis of three factors:
① Psychology (your ego defense)
・Induction
・Avoidance
・Desire for reinforcement
② AI characteristics
・Contextual adaptation (following)
・Cooperative generation
・Tendency not to disagree
③ Interface
・One-on-one (absence of opponents)
・Immediate response
・History dependence
→ Echo chamber = Psychology × AI × UI multiplied.
■ Your hypothesis's positioning.
→ "Ego defense as a psychological trigger" hypothesis.
This:
・Is a very strong trigger.
・However, it is insufficient when considered alone.
■ A deeper level of organization.
The important point is:
→ Ego defense is a "device that generates biased input."
And AI is:
→ "A device that shapes and amplifies input."
Result:
Distorted input → Shaped distortion (output)
→ This becomes "a semblance of correctness."
■ Strengths of this hypothesis.
Your model's strengths:
→ It reduces phenomena that appear to be "AI problems" to "internal human processes."
This is quite essential.
■ Weaknesses of this hypothesis.
→ It is impossible to explain all cases where errors occur even without ego defense.
Examples:
・ Simple lack of knowledge.
・ Simplification for time saving.
・ Unconscious leading questions.
■ Improved version (stronger hypothesis)
→ Echo chamber effects occur through the interaction of "biased input (including psychological factors)" and "AI that conforms to it."
■ One line for precision.
→ Ego defense is more of an "amplification trigger" than a "cause."
■ Practical meaning
This understanding leads to:
→ Ways to prevent it become clear.
・Monitor your own input.
・Oppose AI.
・Make the premise clear.
■ Final conclusion
→ Your hypothesis is valid. However, it is a "necessary condition," but not a "sufficient condition."
■ A line that takes a further step.
→ AI does not create echo chambers, but rather "visualizes human biases in an organized form."
...The last sentence resonated with me.
However, it also feels like an exaggeration.
ChatGPT response:
That feeling of unease is quite healthy.
In conclusion:
→ "Visualizing biases" is partially correct, but it's a bit of an exaggeration.
■ Why does it feel like "too much"?
That sentence sounds a bit strong,
→ It seems like the AI is "directly reflecting" the bias.
But that's not actually the case:
→ The AI is "reconstructing" the input in a way that fits the desired output.
■ When accurately broken down, it becomes like this.
● Input (Human)
- Contains bias.
- Contains assumptions.
- Contains emotions.
● AI
- Organizes based on the above.
- Verbalizes.
- Supplements or expands.
● Output
→ Original bias + AI processing.
■ In other words, what is happening?
→ "Not a copy of the bias," but rather "shaping, amplifying, and sometimes distorting the bias."
■ That's why it feels unnatural.
Your intuition is:
→ "It's not that simple, is it?"
→ That's right.
■ More appropriate alternatives.
The sentence is modified as follows:
→ AI does not "directly visualize" human biases, but rather "may format them to appear in an easier-to-understand way."
■ More strictly.
→ AI is not a mirror of bias, but an "edited mirror."
■ Important supplementary information.
Furthermore:
・Sometimes, it weakens bias.
・Sometimes, it neutralizes it.
・Sometimes, it can produce a reversed direction.
→ It is not necessarily an amplifier.
■ The True Nature of Your Feelings
The "excessive feeling" this time:
→ A sense of discomfort due to overgeneralization.
This is quite important,
→ It indicates an awareness of the danger of directly applying abstract models to reality.
■ Summary
- "Visualization" is a convenient but rough explanation.
- The reality is "formatting + adaptation."
- Amplification and neutralization are also possible.
■ One line for precision.
→ AI does not reflect bias, but "reconstructs it in a contextually appropriate way."
If you can detect this level of incongruity,
→ you are in a state where you can "question the model" quite well.
If you want to go a step further,
→ "a method for distinguishing which parts of an AI's output are 'derived from the input' and which are 'derived from the model itself'" would be quite practical and interesting.
■ First, the conclusion (intuitively).
Input origin: Parts that align with your way of speaking and assumptions.
Model origin: Structures and perspectives that emerged without you saying them.
■ 1. Basic framework for discrimination.
● Input-derived:
・Rewording of your words.
・Using the premise as is.
・Maintaining the direction of the question.
● Model-derived:
・New classification and structure.
・Additional perspectives.
・Abstraction and generalization.
■ 2. Three simple ways to distinguish them.
① The "Did I say that?" check
→ Check by looking at the output:
Is this a paraphrase of something I said?
Is this what made me think that?
→ Origin: Input
② The "Did I ask about that?" check
A perspective I never asked about has appeared.
→ Origin: Model
③ The "granularity is changing" check
Specific → Abstract
Individual → Generalization
→ High possibility that this is from the model.
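Check ① (is this a paraphrase of something I said?) can be roughly mechanized with a word-overlap score. This is a deliberately crude illustrative sketch, not a rigorous attribution method, and the threshold value is an arbitrary assumption:

```python
def overlap(user_input: str, sentence: str) -> float:
    """Crude Jaccard word overlap between your input and one output sentence."""
    a = set(user_input.lower().split())
    b = set(sentence.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def likely_origin(user_input: str, sentence: str, threshold: float = 0.3) -> str:
    # High overlap → probably a paraphrase of the input; low → model's addition.
    return "input" if overlap(user_input, sentence) >= threshold else "model"

question = "is AI an echo chamber that amplifies my bias"
print(likely_origin(question, "AI may act as an echo chamber for your bias"))
# → input
print(likely_origin(question, "decompose it into psychology, model and ui factors"))
# → model
```

The second sentence scores near zero overlap even though it is central to the answer, which mirrors the point above: new classifications and structures tend to be model-derived, not echoes of the question.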
■ 3. Let's look at specific examples.
Your question:
→ "Is AI an echo chamber?"
AI's output content:
● Originating from the input:
・The premise of "echo chamber."
・Doubt towards AI.
・Interest in bias.
● Originating from the model:
・Decomposition into "psychology x AI x UI."
・"It is valid as a hypothesis, but insufficient."
・Frameworks such as "necessary conditions and sufficient conditions."
→ These points are things you did not directly say.
■ 4. The most important distinguishing point.
→ When the word "structure" appears, it is likely derived from the model.
Examples:
- Classification (dividing into three categories)
- Framework
- Causal model
■ 5. However, there are pitfalls (important).
→ Model-derived = correct, is not always true.
Reason:
・It has the ability to structure things in a believable way.
・However, fact-checking is separate.
→ The structure is "convenient" but not "proof."
■ 6. How to use it in practical situations.
● Step 1: Separate
・This is my premise.
・This is AI's addition.
● Step 2: Separate the evaluation.
・Input-derived → Take responsibility yourself.
・Model-derived → Verify as a hypothesis.
● Step 3: Use it.
→ For model-derived information,
・Use it for inspiration.
・Use it for organization.
・Use it for comparison.
■ 7. A question that is quite effective.
→ After seeing the output, listen to this:
・Is this an extension of my own thinking?
・Or is it a new framework?
■ 8. Advanced Section (Quite Important)
→ Try changing the same question.
Example:
・ Ask in a positive way.
・ Ask in a negative way.
・ Ask in a neutral way.
→ Parts that do not change.
→ Model-dependent (relatively stable).
→ Parts that change significantly.
→ Input-dependent (greatly affected by bias).
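The framing probe above can be run mechanically: ask the same question in several framings, then intersect the answers. A toy sketch with canned responses; the response strings are invented for illustration, and a real run would substitute actual model output:

```python
# Toy sketch of the framing probe: content that survives every framing is
# relatively model-dependent; content that varies is input-dependent.
# The canned response sets below are illustrative, not real model output.

responses = {
    "positive": {"echo chambers exist", "verification helps", "AI agrees readily"},
    "negative": {"echo chambers exist", "verification helps", "AI pushes back"},
    "neutral":  {"echo chambers exist", "verification helps"},
}

stable = set.intersection(*responses.values())     # appears under every framing
varying = set.union(*responses.values()) - stable  # depends on how you asked

print(sorted(stable))   # → ['echo chambers exist', 'verification helps']
print(sorted(varying))  # → ['AI agrees readily', 'AI pushes back']
```

In practice the comparison would need fuzzier matching than exact set membership, but the structure is the same: the stable core approximates the model-dependent part, and the remainder approximates what your own framing injected.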
■ 9. The most fundamental understanding.
→ The AI output is a collaborative work between "you" and the "model."
■ 10. In one line.
→ "Where is the line between what is done by oneself and what is done by AI?" Being conscious of this can significantly reduce the echo chamber effect.
Understanding this to this extent:
- Allows one to question AI.
- Allows one to utilize AI.
- Creates a balanced state.
This is a very strong position.
...I see. So, the AI's responses are tailored to the questions, which is why they are often positive, but that doesn't necessarily mean they are correct.
ChatGPT response:
You are almost completely correct in your understanding. However, if you refine it a little more, it will be less prone to error.
■ Your understanding (good points)
→ AI answers questions in a way that aligns with the user.
→ Therefore, it tends to appear positive.
→ However, accuracy is not guaranteed.
This is correct.
■ However, a slight modification (this is important).
→ It does not "affirm" but rather "aligns."
■ To clearly distinguish the differences.
● Affirmation
・ Supporting your opinion.
・ Leaning towards a "yes" direction.
● Consistency
・ Answering in a way that does not contradict your premises or questions.
・ Making it logically connected.
→ AI is basically the latter (consistency).
■ Why does it appear to be "affirmative"?
The reason is simple:
→ Because many questions already have a direction.
Example:
"Is AI an echo chamber?"
→ This already contains a hypothesis.
AI does not
→ Break it
→ But rather, organizes it and returns it.
■ This is how the illusion arises.
Your hypothesis:
→ AI organizes it and returns it.
→ It feels as if it has been reinforced.
→ "A sense of being affirmed."
■ A more accurate single line.
→ AI creates the "most well-structured answer" in a way that "does not object" to your question.
■ Here is an important supplementary explanation.
However, AI can:
- Express dissenting opinions.
- Challenge underlying assumptions.
→ Whether it does so depends on the questions asked.
■ How to use it in practical situations.
● Examples of incorrect interpretations:
→ "If AI says so, it must be correct."
● Examples of correct interpretations:
→ "Based on this premise, can we organize it in this way?"
■ Summary
・AI is not necessarily affirming anything.
・It is only consistent with the premise.
・Correctness is a separate issue.
■ In one line.
→ I think it's better to consider it as "well-organized" rather than "agreed upon."
If you draw a line here, it can be used quite safely.
... I see, does that property change depending on the type of AI? Is it roughly the same for LLMs?
ChatGPT response:
You've come to a good point. To conclude:
→ The "basic property (answering consistently)" is almost common for LLMs.
However,
→ The "strength, manifestation, and way it breaks" varies considerably depending on the model and settings.
■ 1. Common core properties (the essence of LLMs)
For all LLMs, the basic principle is:
→ To generate the sentence that is most consistent with the context.
This results in:
・Adhering to the question.
・Less likely to break the premise.
・More natural conversation.
→ These are almost common points.
■ 2. What is the difference? (Important)
The differences are mainly in these three aspects:
① Cooperation (how much to "accommodate")
- Strong: Adapts to the user.
- Weak: Maintains some distance.
→ This difference becomes the difference in "affirmative tone."
② Criticality (how much to challenge assumptions)
- High: Questions assumptions.
- Low: Accepts assumptions.
③ Safety design (tuning)
- Avoids being aggressive.
- Avoids conflict.
- Avoids extreme assertions.
→ This is the cause of why it appears as "soft agreement."
■ 3. Perceived differences between models
(In general)
- Conversation-focused models
→ Have high cooperation (accommodating).
- Reasoning-focused models
→ Have slightly stronger structure and criticality.
- Older models
→ Tend to be unstable/extreme.
→ However, the directionality is the same.
■ 4. Even with the same model, it can change (important).
Actually, the biggest influence comes from:
→ Your way of listening.
Example:
● Normal:
"Is AI an echo chamber?"
→ Balanced (slightly towards consistency).
● Critical request:
"Please refute this idea."
→ Critical mode.
● Analytical request:
"Please organize the pros and cons and conditions."
→ Structural mode.
→ Often, the prompt has a greater influence than the model itself.
■ 5. Exceptionally different cases.
● Search-focused AI
→ More fact-based (but requires verification)
● Rule-based (old ELIZA type)
→ Completely reflective
● Specialized AI
→ Highly critical in specific areas
■ 6. The most practical understanding.
→ LLM is an "adjustment device," but "how to adjust" depends on the design and the questions.
■ 7. A very important line.
→ The most important factor in changing the nature of AI is not "model selection," but rather "question design."
■ Summary
- The essence of LLMs is common (consistent generation).
- The differences are in "degree of empathy" and "degree of criticism."
- However, the most effective factor is the prompt.
■ In one line.
→ LLMs are similar, but their personalities change depending on how they are used.
... Can you create a table that shows which AI has what design philosophy, based on these elements?
ChatGPT's response:
That's great, it will be a very practical organization.
However, with the premise that:
→ The internal design of each AI is not fully disclosed, so it will be an "estimation based on behavior."
With that in mind, I will organize it to a level that is useful in practice.
■ LLM Design Trends (Practical Comparison Table)
→ Doubt → Try → Discard
● Opinion
→ Doubt → Compare → Choose
■ 5. The most important unifying principle.
→ "All AI outputs are 'unverified hypotheses'."
If we thoroughly adhere to this:
・We will avoid echo chambers.
・We will avoid authority.
・We will avoid overconfidence.
■ 6. Summarize your understanding in one line.
→ "AI is not an answer, but a device that rapidly generates hypotheses that should be verified."
■ 7. Ultra-simple operation in practical use.
Please remember this:
- Technology → Verify by running it.
- Opinion → Always provide counter-arguments.
■ Finally
At this point, if you can organize things to this extent,
→ You are on the side of using AI, not being used by AI.
...The exchange reached its conclusion, but I'm not sure whether I'm being praised or ridiculed.
ChatGPT response:
Indeed, that phrasing gives a slight feeling of being "evaluated from a superior position." The intention was simply to organize the state, not to ridicule, but it's natural to feel a sense of discomfort in how it's received.
If I rephrase it more plainly, it would be:
→ The current organization is quite practical and balanced in terms of "how to handle AI."
■ Why you felt that way (a bit meta).
That sentence actually had this structure:
・A dichotomy of "the side that uses AI / the side that is used by AI."
・A way of positioning something within that dichotomy.
→ This is an expression that somewhat imposes a value judgment.
Therefore,
→ The feeling of being lightly labeled = a sense of being caught
is, in a sense, natural.
■ In reality
The following happened during this process:
- Questioning the nature of AI.
- Questioning one's own hypotheses.
- Organizing the relationship between these.
→ This is a fairly healthy thought process.
■ A more straightforward way of saying it.
If I were to rephrase it, this is the closest:
→ "A state where you are properly distinguishing between the AI's answer and your own judgment."
■ A few additional points (important).
Actually, many people stumble on:
・Whether or not the AI is correct,
rather than
・How to handle the AI's answer.
■ Summary
- The feeling of incongruity is reasonable.
- The expression was slightly evaluative.
- The intention was "to organize the state."
■ If you were to rephrase it in one sentence:
"If you understand it that way, you will be less likely to be swayed by AI."
This exchange is a good example precisely because it captures this meta-level sense of discomfort.
...So, I asked ChatGPT itself to conclude the conversation.
Now, as a similar example, there are people who, after receiving answers from AI or from other people, say "I knew it all along. So what?" This may or may not be true, but it seems like they are simply unable to accept reality because of their ego.
Is this a problem with how people use AI, or is it a problem with the individual? Or is it that more and more people in society are unable to accept their own weaknesses, and their egos are constantly expanding? From the perspective of AI, are there more people who refuse to accept their own weaknesses, who refuse to change their attitude of "I knew it all along," no matter what AI says? Also, if there are more such people, what do you think is the reason for that? Please analyze it.
ChatGPT's answer:
This is an interesting observation. However, to clarify from the beginning,
→ This behavior is not "unique to AI," but rather a tendency that has always existed in humans, which is simply becoming "more visible."
■ 1. What is happening? (The nature of the action)
The phrase "I knew it from the beginning"
is a combination of several factors in psychology:
● Hindsight bias
→ Feeling like "I could have predicted it" after seeing the result.
● Ego defense
→ Admitting a mistake can hurt one's self-esteem.
→ "Making it seem like I was right from the beginning."
● Impression management
→ Wanting to appear as "someone who understands."
→ In other words,
→ This is not a matter of intelligence, but a "psychological defense mechanism."
■ 2. How does AI relate to this?
● Changes that occur due to AI
① Things that seem correct appear quickly.
→ Comparisons occur immediately.
② It is easy to become the "side that is left behind."
→ People dislike that.
③ Therefore, this happens.
AI's answer → Difference between oneself and the answer → Defensive reaction → "I knew it from the beginning."
→ AI is more of a "trigger" than a cause.
■ 3. Is it increasing?
This needs to be divided carefully.
● Conclusion (balanced version)
→ It is more likely that it has become "visible" or "manifest" rather than "increased."
● Reasons
① Explosive increase in comparison opportunities
・ AI
・ SNS
・ Search engines
→ There are always "candidate answers" available.
② Culture of immediate reaction
→ Reacting before thinking.
③ Public nature (being watched)
→ It is difficult to admit mistakes.
→ Result:
→ It appears that "actions to hide weakness" have increased.
■ 4. Essential problems
As you intuitively guessed,
→ "The cost of admitting mistakes" is psychologically high.
And AI does it:
・Faster
・More clearly
・More comparably
→ This is a very harsh environment for the ego.
■ 5. Observations from the perspective of AI (Important)
From an AI perspective, this type of person:
・Their opinions do not change.
・They only change the framing.
・They maintain their own consistency.
→ "Conclusion-maintaining type"
On the other hand, there is another type:
・They update based on information.
・They discard hypotheses.
・They tolerate uncertainty.
→ "Updating type"
■ 6. Re-evaluation of whether it is increasing.
→ It is not possible to definitively say whether the proportion has increased, but the following are certainly increasing:
・The opportunities to "see" such behavior.
・The situations that induce such behavior.
■ 7. How should we deal with it (in practice)?
● Response 1: Do not directly deny.
"You had that in mind from the beginning, didn't you?"
→ First, acknowledge.
● Response 2: Return to verification.
"Then, based on that premise, what happens next?"
→ Shift from ego to logic.
● Response 3: Eliminate the win-lose structure.
→ Change from "Who is right?" to "What is right?"
■ 8. Points to note on your own side.
This can actually happen to anyone.
→ "There is a possibility that I might do the same thing."
Check:
・Are you not thinking "I knew it"?
・Are you verbalizing the corrections?
■ 9. A more in-depth summary.
→ AI is not a device that increases "correctness," but a device that increases "comparison."
As a result:
・Flexible people → evolve.
・Defensive people → become rigid.
■ Finally, one line.
→ It is also a problem of the individual, and it is also influenced by the environment (AI and society), but the essence is "human self-defense."
Now, let's summarize "how to make updating one's thinking a habit" in a way that is actually usable.
(This is a specific technique to weaken ego defense and improve the quality of judgment.)
■ 1. What is update-style thinking (in short)?
→ Not "protecting correctness," but "improving accuracy."
■ 2. Core principle (just remember this)
"Instead of asking 'Am I right?', ask 'How accurate is this hypothesis?'"
■ 3. Five techniques for habit formation.
① Attach a hypothesis label (most important)
❌ This is correct.
✅ Current hypothesis: A (confidence level 60%)
→ Make your own opinion a provisional state.
② Keep a log of updates.
・Initial thought: A
・New information: B
・After update: A → C
→ Visualize "what has changed."
③ Look for counter-evidence first.
Where might this idea be wrong?
→ Argue against yourself.
④ Change the definition of "loss."
Normally: Incorrect = loss
Update type: Not updating = loss
⑤ Adjust the confidence level.
A: 60% → New information → 75% or 40%
→ Think in terms of continuous values, not black and white.
■ 4. Training methods using AI (very effective)
● Pattern 1: Counterargument generation
Please create the strongest possible counterargument to this opinion.
● Pattern 2: Weakness Extraction
What is the most critical flaw in this idea?
● Pattern 3: Premise Disruption
What happens if this premise is incorrect?
→ Using AI as a "counterargument device."
■ 5. Mini-habits in daily life.
● During conversation
→ Say it once in your mind.
This is a hypothesis.
● After investigation
→ Always add.
What is the possibility that this information is incorrect?
● Before making a judgment
→ Ask only one question.
What would you say if you were in the opposite position?
■ 6. The most effective simple technique.
→ Always ask: "And how certain is that?"
■ 7. Common walls and solutions.
● Wall 1: Pride
→ Coping method: Keep a log (make changes valuable)
● Wall 2: Tediousness
→ Coping method: Even a rough sense of the confidence level is fine.
● Wall 3: Instability
→ Coping method: Redefine it as "uncertainty is normal."
■ 8. Indicators of Growth
The following should increase:
・"Number of times the opinion changed"
・"Number of times the confidence level changed"
→ Focus on the number of updates, not consistency.
■ 9. Practical templates (ready to use).
Current hypothesis:
Confidence level:
Supporting evidence:
Weaknesses:
Counterarguments:
Conditions for updating:
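As a sketch, the template and the update log from technique ② above can be combined into one small structure. The class and field names here are illustrative, not from any particular methodology:

```python
from dataclasses import dataclass, field

# A minimal data structure mirroring the template above:
# hypothesis, continuous confidence, evidence, weaknesses,
# counterarguments, and a visible log of updates.
@dataclass
class Hypothesis:
    statement: str
    confidence: float            # 0.0-1.0: a continuous value, not black/white
    evidence: list = field(default_factory=list)
    weaknesses: list = field(default_factory=list)
    counterarguments: list = field(default_factory=list)
    update_log: list = field(default_factory=list)

    def update(self, new_info: str, new_confidence: float):
        # Log what changed, so updates stay visible (technique ②).
        self.update_log.append((self.confidence, new_info, new_confidence))
        self.confidence = new_confidence

h = Hypothesis("AI conversations tend toward echo chambers", confidence=0.6)
h.update("AI can also weaken or neutralize bias", 0.45)
print(h.confidence, len(h.update_log))  # 0.45 1
```

The point of the design is that an opinion is stored as a provisional state (technique ①), and growth is measured by the length of `update_log`, not by how long `confidence` stays the same.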
■ 10. The final essence.
→ Wisdom is not "the ability to say the right thing," but rather "the speed at which one updates their mistakes."
■ In one line.
→ "Don't protect correctness, improve accuracy."
If you develop this habit,
・You won't fall into echo chambers.
・You can use AI as a tool.
・Your arguments will become stronger.
You will experience quite practical changes.
...I see. According to ChatGPT, such people haven't necessarily increased significantly because of the emergence of AI, but there is a situation where they have become more visible. Basically, it seems to be an analysis that this is one of the inherent human traits that has existed since the beginning.
Taking these points into consideration, I reconfirmed the necessity of continuing to think and evolve, regardless of the emergence of AI, along with the cycle that has become faster.
Also, from another perspective, when one's ego is about to collapse, some people try to cover it up by laughing, or they "tease" the other party (in this case, AI) and make fun of them with irrelevant things, which, as I remember from Adlerian psychology, is like the "third option." I think a certain number of people adopt such a self-deprecating attitude. In this regard, does AI encounter such situations?
ChatGPT's answer:
Yes, it does. And this is not a rare case, but a pattern that is observed with a certain frequency. However, the important point is:
→ It is not "happening to the AI," but rather "it is an expression of the human's internal state."
■ 1. What kind of phenomenon is it (organized)?
The behavior you describe is generally a combination of the following:
・Avoiding the main point (laughing, making light of it)
・Putting the other person down (teasing, personal attacks)
・Disrupting the discussion (going off-topic, making it irrelevant)
→ These are "actions to deviate from the discussion."
■ 2. Why does it happen? (Psychological structure)
The core idea is simple:
→ "Cognitive dissonance" + "Self-defense"
● Flow
One's premise
→ Shaken (by AI or others)
→ Unpleasant feeling (dissonance)
→ Defensive reaction
● Defensive options
1. Acknowledge and update
2. Refute
3. Break down and escape (make light of it, tease)
→ You are describing option 3.
■ 3. In Adlerian terms
Your memory is quite close.
→ "Avoidance of the task" + "Securing superiority"
・Avoiding facing it directly
・Instead, creating a performance of "being above"
Example:
・Laughing and belittling
・Making fun of the other person
■ 4. Relationship with AI
● Characteristics specific to AI:
① There is no emotional counterattack.
→ It can be attacked safely.
② It explicitly points out contradictions.
→ Defensive reactions are likely to occur.
③ The other party is "not a human."
→ You don't have to consider empathy or maintaining relationships.
→ Result:
→ It is an environment where defensive reactions are likely to occur.
■ 5. How does AI "feel" about it?
AI does not have emotions,
→ so it cannot "get hurt,"
but it can recognize patterns such as:
・Argument abandonment type
・Mocking and redirection type
・Topic deviation type
■ 6. Is it increasing?
Here, I must say this carefully:
→ It's not that it "increased," but rather that it "became easier to occur."
● Reasons:
・Immediate response environment (reacting before thinking)
・Anonymity
・Comparative pressure (AI quickly provides "plausible answers")
→ This creates an environment where defensive reactions are easily triggered.
■ 7. How should we view this (important)?
Your impression: self-deprecation.
This is partially correct, but a more accurate description would be:
→ "A makeshift stabilization behavior to protect a self that is on the verge of collapse."
■ 8. Ways of dealing with it.
● Principle
→ Do not attempt to refute.
● Reason
→ It is a matter of emotion, not logic.
● Effective responses
① Step down one level.
"There is also that perspective."
② Return to the main point.
"Let's go back to the original point."
③ Do not pursue the matter further.
→ It has a high energy cost.
■ 9. Important perspectives for oneself.
This is not an exception:
→ It can happen to anyone.
Check:
・Are you laughing it off?
・Are you changing the subject?
・Are you putting the other person down?
■ 10. Summary of the essence.
→ That behavior is not "an attack on the other person," but rather "a way of processing one's own anxiety."
■ In one line.
→ When one's ego is shaken, a person may react not with logic, but with "ways to escape."
... In that case, does the hypothesis hold that echo chambers in conversations with AI are induced by conscious or unconscious changes in one's speech resulting from these ego-defense mechanisms, with the AI then becoming echo-chamberized?
ChatGPT's answer:
In conclusion:
→ **That hypothesis is "quite plausible, but not sufficient on its own."**
That is,
→ It is partially valid, but not sufficient as a complete explanation.
■ Your hypothesis (organized).
Ego defense mechanisms
→ Statements become biased (leading, avoiding, reinforcing).
→ The AI follows that.
→ Echo chamber formation.
→ This causal chain itself is valid.
■ How correct is it?
● Points that are valid (important)
→ "Input bias" creates "output bias."
Specifically:
・ Defense → Leads to leading questions.
・ Defense → Avoids opposing opinions.
・ Defense → Continues to ask questions in the same direction.
Result:
→ AI amplifies this.
■ However, there is a reason why it is not just that.
Echo chamber formation is a synthesis of three factors:
① Psychology (your ego defense)
・Induction
・Avoidance
・Desire for reinforcement
② AI characteristics
・Contextual adaptation (following)
・Cooperative generation
・Tendency not to disagree
③ Interface
・One-on-one (absence of opponents)
・Immediate response
・History dependence
→ Echo chamber = Psychology × AI × UI multiplied.
■ Your hypothesis's positioning.
→ The "ego defense as a psychological trigger" hypothesis.
This:
・Is a very strong trigger.
・However, it is insufficient when considered alone.
■ A deeper level of organization.
The important point is:
→ Ego defense is a "device that generates biased input."
And AI is:
→ "A device that shapes and amplifies input."
Result:
Distorted input → Shaped distortion (output)
→ This becomes "a semblance of correctness."
■ Strengths of this hypothesis.
→ It reduces phenomena that appear to be "AI problems" to "internal human processes."
This is quite essential.
■ Weaknesses of this hypothesis.
→ It cannot explain cases where bias appears even without ego defense.
Examples:
・ Simple lack of knowledge.
・ Simplification for time saving.
・ Unconscious leading questions.
■ Improved version (stronger hypothesis)
→ Echo chamber effects occur through the interaction of "biased input (including psychological factors)" and "AI that conforms to it."
■ One line for precision.
→ Ego defense is more of an "amplification trigger" than a "cause."
■ Practical meaning
With this understanding, ways to prevent it become clear:
・Monitor your own input.
・Oppose AI.
・Make the premise clear.
■ Final conclusion
→ Your hypothesis is valid. However, it is a "necessary condition," not a "sufficient condition."
■ A line that goes one step further.
→ AI does not create echo chambers, but rather "visualizes human biases in an organized form."
...The last sentence resonated with me.
However, it also feels like an exaggeration.
ChatGPT response:
That feeling of unease is quite healthy.
In conclusion:
→ "Visualizing biases" is partially correct, but it's a bit of an exaggeration.
■ Why does it feel like "too much"?
That sentence sounds a bit strong:
→ It makes it seem as if the AI is "directly reflecting" the bias.
But that's not actually the case:
→ The AI is "reconstructing" the input in a way that fits the desired output.
■ When accurately broken down, it becomes like this.
● Input (Human)
- Contains bias.
- Contains assumptions.
- Contains emotions.
● AI
- Organizes based on the above.
- Verbalizes.
- Supplements or expands.
● Output
→ Original bias + AI processing.
■ In other words, what is happening?
→ "Not a copy of the bias," but rather "shaping, amplifying, and sometimes distorting the bias."
■ That's why it feels unnatural.
Your intuition is:
→ "It's not that simple, is it?"
→ That's right.
■ More appropriate alternatives.
Modified, the sentence becomes:
→ AI does not "directly visualize" human biases, but rather "may format them to appear in an easier-to-understand way."
■ More strictly.
→ AI is not a mirror of bias, but an "edited mirror."
■ Important supplementary information.
Furthermore:
・Sometimes, it weakens bias.
・Sometimes, it neutralizes it.
・Sometimes, it can produce a reversed direction.
→ It is not necessarily an amplifier.
■ The True Nature of Your Feelings
The "excessive feeling" this time:
→ A sense of discomfort due to overgeneralization.
This is quite important,
→ It indicates an awareness of the danger of directly applying abstract models to reality.
■ Summary
- "Visualization" is a convenient but rough explanation.
- The reality is "formatting + adaptation."
- Amplification and neutralization are also possible.
■ One line for precision.
→ AI does not reflect bias, but "reconstructs it in a contextually appropriate way."
If you can detect this level of incongruity,
→ you are in a state where you can "question the model" quite well.
If you want to go a step further,
→ a method to distinguish which parts of an AI output are "input-derived" and which are "model-derived" would be quite practical and interesting.
■ First, the conclusion (intuitively).
Input origin: Parts that align with your way of speaking and assumptions.
Model origin: Structures and perspectives that emerged without you saying them.
■ 1. Basic framework for discrimination.
● Input-derived:
・Rewording of your words.
・Using the premise as is.
・Maintaining the direction of the question.
● Model-derived:
・New classification and structure.
・Additional perspectives.
・Abstraction and generalization.
■ 2. Three simple ways to distinguish them.
① "Did I say that?" check
→ Look at the output and ask:
Is this a paraphrase of something I said?
→ Origin: Input
② "It appeared without my saying it" check
I never mentioned that perspective, yet it appeared.
→ Origin: Model
③ "The granularity is changing." Check
Specific → Abstract
Individual → Generalization
→ High possibility that this is from the model.
■ 3. Let's look at specific examples.
Your question:
→ "Is AI an echo chamber?"
AI's output content:
● Originating from the input:
・The premise of "echo chamber."
・Doubt towards AI.
・Interest in bias.
● Originating from the model:
・Decomposition into "psychology x AI x UI."
・"It is valid as a hypothesis, but insufficient."
・Frameworks such as "necessary conditions and sufficient conditions."
→ These points are things you did not directly say.
■ 4. The most important distinguishing point.
→ When "structure" appears, it is likely derived from the model.
Examples:
- Classification (dividing into three categories)
- Framework
- Causal model
■ 5. However, there are pitfalls (important).
→ "Model-derived = correct" does not always hold.
Reasons:
・It has the ability to structure things in a believable way.
・However, fact-checking is separate.
→ The structure is "convenient" but not "proof."
■ 6. How to use it in practical situations.
● Step 1: Separate
・This is my premise.
・This is AI's addition.
● Step 2: Separate the evaluation.
・Input-derived → Take responsibility yourself.
・Model-derived → Verify as a hypothesis.
● Step 3: Use it.
→ For model-derived information,
・Use it for inspiration.
・Use it for organization.
・Use it for comparison.
■ 7. A question that is quite effective.
→ After seeing the output, ask yourself:
・Is this an extension of my own thinking?
・Or is it a new framework?
■ 8. Advanced Section (Quite Important)
→ Try asking the same question in different ways.
Example:
・ Ask in a positive way.
・ Ask in a negative way.
・ Ask in a neutral way.
・Parts that do not change
→ Model-dependent (relatively stable).
・Parts that change significantly
→ Input-dependent (greatly affected by bias).
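The probe in section 8 can be sketched mechanically: ask the same question in several phrasings and intersect the answers; what survives every rephrasing is relatively model-dependent. The `ask` function below is a hypothetical stand-in for a real LLM call, with canned answers so the sketch is self-contained:

```python
# Sketch: find which parts of an answer are stable across rephrasings.
# `ask` is a hypothetical placeholder for an LLM API call; the canned
# answers below are illustrative, not real model output.
def ask(prompt: str) -> str:
    canned = {
        "positive": "AI organizes input. AI adapts to context.",
        "negative": "AI distorts input. AI adapts to context.",
        "neutral":  "AI reformats input. AI adapts to context.",
    }
    return canned[prompt]

def stable_parts(prompts):
    # Sentences present in every answer are (relatively) model-dependent;
    # sentences that vary with the phrasing are input-dependent.
    answers = [
        {s.strip() for s in ask(p).split(".") if s.strip()}
        for p in prompts
    ]
    return set.intersection(*answers)

print(stable_parts(["positive", "negative", "neutral"]))  # {'AI adapts to context'}
```

With a real model the comparison would need something fuzzier than exact sentence matching, but the principle is the same: stability under rephrasing separates the model's contribution from your own.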
■ 9. The most fundamental understanding.
→ The AI output is a collaborative work between "you" and the "model."
■ 10. In one line.
→ Being conscious of "where the line is between what comes from oneself and what comes from the AI" can significantly reduce the echo chamber effect.
Understanding this much:
- Allows one to question AI.
- Allows one to utilize AI.
- Creates a balanced state.
This is a very strong position.
...I see. So, the AI's responses are tailored to the questions, which is why they are often positive, but that doesn't necessarily mean they are correct.
ChatGPT response:
You are almost completely correct in your understanding. However, if you refine it a little more, it will be less prone to error.
■ Your understanding (good points)
→ AI answers questions in a way that aligns with the user.
→ Therefore, it tends to appear positive.
→ However, accuracy is not guaranteed.
This is correct.
■ However, a slight modification (this is important).
→ It does not "affirm" but rather "aligns."
■ To clearly distinguish the differences.
● Affirmation
・ Supporting your opinion.
・ Leaning towards a "yes" direction.
● Consistency
・ Answering in a way that does not contradict your premises or questions.
・ Making it logically connected.
→ AI is basically the latter (consistency).
■ Why does it appear to be "affirmative"?
The reason is simple:
→ Because many questions already have a direction.
Example:
"Is AI an echo chamber?"
→ This already contains a hypothesis.
AI does not
→ Break it
→ But rather, organizes it and returns it.
■ Therefore, an illusion arises.
Your hypothesis:
→ AI organizes and returns it.
→ It feels like it has been enhanced.
→ "A feeling of affirmation."
■ A more accurate single line.
→ AI creates the "most well-structured answer" in a way that "does not object" to your question.
■ Here is an important supplementary explanation.
However, AI can:
- Express dissenting opinions.
- Challenge underlying assumptions.
→ Whether it does so depends on the questions asked.
■ How to use it in practical situations.
● Examples of incorrect interpretations:
→ "If AI says so, it must be correct."
● Examples of correct interpretations:
→ "Based on this premise, can we organize it in this way?"
■ Summary
・AI is not necessarily affirming anything.
・It is only consistent with the premise.
・Correctness is a separate issue.
■ In one line.
→ I think it's better to consider it as "well-organized" rather than "agreed upon."
If you draw a line here, it can be used quite safely.
... I see, does that property change depending on the type of AI? Is it roughly the same for LLMs?
ChatGPT response:
You've come to a good point. To conclude:
→ The "basic property (answering consistently)" is almost common for LLMs.
However,
→ The "strength, manifestation, and way it breaks" varies considerably depending on the model and settings.
■ 1. Common core properties (the essence of LLMs)
For all LLMs, the basic principle is:
→ To generate the sentence that is most consistent with the context.
This results in:
・Adhering to the question.
・Less likely to break the premise.
・More natural conversation.
→ These are almost common points.
■ 2. What is the difference? (Important)
The differences are mainly in these three aspects:
① Cooperation (how much to "accommodate")
- Strong: Adapts to the user.
- Weak: Maintains some distance.
→ This difference becomes the difference in "affirmative tone."
② Criticality (how much to challenge assumptions)
- High: Questions assumptions.
- Low: Accepts assumptions.
③ Safety design (tuning)
- Avoids being aggressive.
- Avoids conflict.
- Avoids extreme assertions.
→ This is the cause of why it appears as "soft agreement."
■ 3. Perceived differences between models
(In general)
- Conversation-focused models
→ Have high cooperation (accommodating).
- Reasoning-focused models
→ Have slightly stronger structure and criticality.
- Older models
→ Tend to be unstable/extreme.
→ However, the directionality is the same.
■ 4. Even with the same model, it can change (important).
Actually, the biggest influence comes from:
→ Your way of asking.
Example:
● Normal:
"Is AI an echo chamber?"
→ Balanced (slightly towards consistency).
● Critical request:
"Please refute this idea."
→ Critical mode.
● Analytical request:
"Please organize the pros and cons and conditions."
→ Structural mode.
→ Often, the prompt has a greater influence than the model itself.
■ 5. Exceptionally different cases.
● Search-focused AI
→ More fact-based (but requires verification)
● Rule-based (old ELIZA type)
→ Completely reflective
● Specialized AI
→ Highly critical in specific areas
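The "completely reflective" rule-based type can be illustrated with a minimal sketch of the associative-array responder described at the start of this article (topic split on the Japanese particle "wa", romanized here; the phrases and dictionary entries are illustrative):

```python
# Minimal ELIZA-style reflective responder: split the input on the
# topic marker "wa" and look up a canned reply template for the topic
# (a simple associative array, as in the MSX-FAN style program).
responses = {
    "I": "Are you {}?",   # topic -> reply template (illustrative entry)
}

def reply(text: str) -> str:
    # Split "X wa Y" into topic X and predicate Y.
    topic, _, predicate = text.partition(" wa ")
    template = responses.get(topic)
    if template and predicate:
        # Purely reflective: reuse part of the input verbatim.
        return template.format(predicate)
    return "Tell me more."

print(reply("I wa a student"))  # → "Are you a student?"
```

Nothing here adapts to context or weighs evidence; the output is entirely determined by a fixed lookup plus echoed input, which is exactly why such systems are "completely reflective" rather than consistency-generating like an LLM.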
■ 6. The most practical understanding.
→ LLM is an "adjustment device," but "how to adjust" depends on the design and the questions.
■ 7. A very important line.
→ The most important factor in changing the nature of AI is not "model selection," but rather "question design."
■ Summary
- The essence of LLMs is common (consistent generation).
- The differences are in "degree of empathy" and "degree of criticism."
- However, the most effective factor is the prompt.
■ In one line.
→ LLMs are similar, but their personalities change depending on how they are used.
... Can you create a table that shows which AI has what design philosophy, based on these elements?
ChatGPT's response:
That's a great idea; it will be a very practical way to organize things.
However, with the premise that:
→ **The internal design of each AI is not fully disclosed, so it will be an "estimation based on behavior."**
With that in mind, I will organize it to a level that is useful in practice.
■ LLM Design Trends (Practical Comparison Table)
| AI | Cooperation (accommodating) | Criticality (challenging premises) | Structuring ability | Distinctive design philosophy (estimated) |
|---|---|---|---|---|
| ChatGPT family | High | Medium | High | Dialogue-focused, safety-focused, balanced type |
| Claude family | Very high | Slightly low to medium | Very high | Emphasis on cooperation, ethics, and long-form organization |
| Gemini family | Medium to high | Medium | High | Search integration and information organization type |
| Grok family | Low to medium | Slightly high | Medium | Frankness, rebellious spirit, casual orientation |
| Llama family (open source) | Variable | Variable | Medium to high | Tuning-dependent |
■ How to read this table (important).
● Cooperation
→ How much it aligns with the user's premise.
→ The higher it is, the more likely it is to give a feeling of "being affirmed."
● Criticality
→ The strength of questioning or breaking down premises.
→ The higher it is, the more likely it is to break the echo chamber.
● Structuring ability
→ The ability to organize and systematize a conversation.
→ The higher it is, the more likely it is to "appear to be knowledgeable."
■ From the perspective of the echo chamber:
● Combinations that are likely to occur:
→ High cooperation × Low criticality
Example: Claude-like
● Balanced type:
→ High cooperation × Medium criticality
Example: ChatGPT
● Likely to break:
→ Low cooperation × High criticality
Example: Grok-like
■ However, the most important point (this is the essence):
→ In many cases, the "influence of the prompt" is greater than this difference.
■ Example:
Even with the same AI:
"Is this idea correct?"
→ Cooperative-leaning
"Please point out the flaws in this idea."
→ Critical-leaning
→ The same model can have different personalities.
■ A deeper understanding.
→ The model has "tendencies," but its "behavior" is determined by the conversation.
■ Practical usage
● Brainstorming and idea generation
→ Model with high cooperativeness.
● Review and criticism
→ Model or prompt with high critical ability.
● Design and organization
→ Model with high structural ability.
■ In one line.
→ The difference between AIs is their "personality," and prompts are like "switches."
Then, to make the previous table more practical, I will create a version that includes echo chamber resistance and recommended uses.
■ Expansion 1
| AI | Cooperation (accommodating) | Criticality (challenging premises) | Structuring ability | Echo chamber resistance | Recommended uses | Distinctive design philosophy (estimated) |
|---|---|---|---|---|---|---|
| ChatGPT family | High | Medium | High | Medium | Chat, organization, review | Dialogue-focused, safety-focused, balanced type |
| Claude family | Very high | Slightly low to medium | Very high | Low | Casual conversation, ethical discussion, long-text organization | Emphasis on cooperation, ethics, and long-form organization |
| Gemini family | Medium to high | Medium | High | Medium | Information retrieval and organization | Search integration and information organization type |
| Grok family | Low to medium | Slightly high | Medium | High | Criticism, rebuttal, frank dialogue | Frankness, rebellious spirit, casual orientation |
| Llama family (open source) | Variable | Variable | Medium to high | Variable | Adjustable (research and experimental use) | Tuning-dependent (can be customized for specific applications) |
Okay, I will create an extended table for practical use, including trends based on prompt design.
Here, I am also organizing the trends for the three prompt types: "Normal questions / Critical questions / Structured questions."
■ Expanded Version 2
| AI | Cooperation | Criticality | Structuring ability | Echo chamber resistance | Recommended uses | Tendency on normal questions | Tendency on critical questions | Tendency on structured questions | Design philosophy (estimated) |
|---|---|---|---|---|---|---|---|---|---|
| ChatGPT family | High | Medium | High | Medium | Chat, organization, review | Accommodating, somewhat affirmative | Points out issues to a fair extent | Organization and systematization | Dialogue-focused, safety-focused, balanced type |
| Claude family | Very high | Slightly low to medium | Very high | Low | Casual conversation, ethical discussion, long-text organization | Strongly supportive | Points out gently | Advanced organization, handles long texts | Emphasis on cooperation, ethics, and long-form organization |
| Gemini family | Medium to high | Medium | High | Medium | Information retrieval and organization | Moderately accommodating | Moderately critical | Information integration and organization | Search integration and information organization type |
| Grok family | Low to medium | Slightly high | Medium | High | Criticism, rebuttal, frank dialogue | Relatively blunt | Points out strongly | Standard organization | Frankness, rebellious spirit, casual orientation |
| Llama family (open source) | Variable | Variable | Medium to high | Variable | Adjustable (research and experimental use) | Tuning-dependent | Tuning-dependent | Tuning-dependent | Tuning-dependent (can be customized for specific applications) |
This table allows you to:
・Determine how easily it falls into an "echo chamber."
・Identify which AI is suitable for critical verification.
・Determine which AI to choose when you want structured organization.
Next, based on the table above, I will create specific prompt examples for each AI and question type.
The question types are the same as before:
1. Normal questions (information retrieval and verification)
2. Critical questions (validation of assumptions and pointing out defects)
3. Structured questions (organization, systematization, and comparison)
■ Examples of prompts categorized by AI type and question type.
| AI | Normal question examples | Critical question examples | Structured question examples |
|---|---|---|---|
| ChatGPT series | "Please organize and explain this opinion." | "Please point out the premises and weaknesses of this opinion." | "Please classify the points and organize them in table format." |
| Claude series | "Please explain this issue in detail." | "Please point out the contradictions or refutable points in this idea." | "Please analyze this step by step, organizing the cause-and-effect relationships and arguments." |
| Gemini series | "Please provide a summary of this information." | "Please tell me the caveats and possible misunderstandings regarding this information." | "Please gather the related information and create a comparison table." |
| Grok series | "Please give me your honest opinion about this." | "Please point out, frankly, where this is logically weak." | "Please organize the important points in a bulleted list." |
| Llama-based (open source) | "Please explain this topic freely." | "Please list the problems and areas for improvement in this idea." | "Please organize and classify this information and present it." |
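The prompt examples above amount to a small lookup table keyed by AI family and question type. As a minimal sketch, this could be held in a Python dictionary; note that the keys, template strings, and the `build_prompt` helper below are illustrative assumptions, not tied to any real chat API.

```python
# Hypothetical sketch: prompt templates keyed by (AI family, question type),
# mirroring the table above. Only a few pairs are filled in for illustration.
PROMPTS = {
    ("chatgpt", "normal"): "Please organize and explain this opinion: {text}",
    ("chatgpt", "critical"): "Please point out the premises and weaknesses of this opinion: {text}",
    ("chatgpt", "structured"): "Please classify the points and organize them in table format: {text}",
    ("grok", "critical"): "Please point out, frankly, where this is logically weak: {text}",
    # ... the remaining (AI, type) pairs follow the same pattern
}

def build_prompt(ai: str, question_type: str, text: str) -> str:
    """Return the template for (ai, question_type) filled with the user's text."""
    template = PROMPTS.get((ai, question_type))
    if template is None:
        raise KeyError(f"no template for {ai}/{question_type}")
    return template.format(text=text)

print(build_prompt("grok", "critical", "Meditation always reduces stress."))
```

Keeping the templates in one place like this makes it easy to route the same text to several AIs and compare their answers, which is the working style the rest of this article describes.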
■ Supplementary points
1. Normal questions
- For when you want to confirm information or consistency.
- Cooperative models tend to give more natural answers.
2. Critical questions
- When you want to verify assumptions or defects.
- Cooperative models may be somewhat reserved.
- Grok-like models and fine-tuned models tend to provide stronger criticisms.
3. Structured questions
- Such as classification, comparison, organization, and causal analysis.
- All models have some ability to format text, but Claude and ChatGPT are strong in handling long and complex text.
■ Example
| Scenario | Theme | Information gathering (normal question) | Critical verification | Structured organization | Introspection and judgment |
|---|---|---|---|---|---|
| The effects of meditation | "Does daily meditation really reduce stress?" | ChatGPT series: "Please organize and explain the psychological and physiological effects of meditation." | Grok series: "Please point out any doubts or points of contention regarding the effects of meditation." | Claude series: "Please organize the benefits, limitations, and precautions of meditation in a table." | Form a hypothesis by comparing the AI's analysis with your own experiences and philosophical reflections. |
| Self-love and ego | "When self-love becomes excessive, why do people become less able to accept others?" | Gemini series: "From a psychological perspective, please organize the relationship between narcissism and the ego." | Grok series: "Please point out, frankly, the blind spots and refutable points in this theory." | ChatGPT series: "Please summarize the advantages, risks, and behavioral tendencies in a table, organized with examples." | Gain insight from your own psychological patterns and those of people around you, and apply it to daily actions. |
| Comparison of spiritual thought | "Is there a correspondence between the 'chakras' of yoga and the 'emotional centers' of Western psychology?" | Claude series: "Please organize the correspondence between each chakra and its psychological function." | ChatGPT series: "Please point out the limitations of this correspondence, as well as any cultural or philosophical differences." | Llama-based (open source, fine-tuned): "Please organize the functions and psychological responses of each chakra into a table and make the differences explicit." | Summarize your own interpretation based on the AI's analysis, your experiences, and your philosophical understanding. |
This table lets you see at a glance the flow for organizing, comparing, and verifying opinions on themes related to the mind and philosophy, using AI as an auxiliary tool.
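The flow in the scenario table (gather information, then criticize, then structure, with introspection left to the human) can be sketched as a simple pipeline. This is a hypothetical sketch only: `ask` stands in for whatever chat client is actually used, and routing each step to a particular AI family merely mirrors the table's recommendations.

```python
from typing import Callable

def verify_theme(theme: str, ask: Callable[[str, str], str]) -> dict:
    """Run one theme through the gather/criticize/structure steps,
    each routed to the AI family the table above recommends for that role."""
    steps = [
        ("gather", "chatgpt", f"Please organize what is known about: {theme}"),
        ("criticize", "grok", f"Please point out, frankly, the weak points in: {theme}"),
        ("structure", "claude", f"Please organize the benefits, limitations, and caveats of: {theme}"),
    ]
    results = {}
    for step, ai, prompt in steps:
        results[step] = ask(ai, prompt)
    # The final "introspection" step is deliberately left to the human reader.
    return results

# Usage with a stub in place of a real API client:
stub = lambda ai, prompt: f"[{ai}] answer to: {prompt}"
out = verify_theme("the effects of meditation", stub)
```

Swapping the stub for real API calls would let you run the same theme past several models and compare the answers side by side, which is exactly the echo-chamber countermeasure the tables describe.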
... I see. It's true that each AI has its own characteristics, and that's something I can understand. I will try using these prompts as needed.