Can AI Tell You What You Need (But Don't Want) To Hear?
In Which I Question Whether AI Will Help Us Stay Grounded
Avoiding Psychological Dependence on AI
Can AI tell you what you need (but don’t want) to hear?
This question concerns me, even though I’m firmly in the pro-AI tribe. LLMs are notorious sycophants. If that continues, what happens when most of our information is filtered through AI?
I have more questions than answers, but here are some thoughts on how I see society functioning, and on what good and bad AI outcomes might look like for staying grounded in reality.
Long Term vs. Short Term
A substantial part of our mental processes is devoted to reducing psychological harm. We often cannot bear to look directly at the human condition generally, or at our own lives specifically. Many psychological barriers, our defense mechanisms, are in place to block uncomfortable thoughts and keep us moving forward day to day.
Humans find, or build, communities where they play some role. Historically, a human without a community would almost certainly die, so this desire is very strong. Many people’s lives are dominated by fitting into a group, consciously or not. The group’s goals become their own; put differently, the individual’s goal is survival, and pursuing the group’s goals looks like the best strategy for achieving it.
Our modern abundance allows for more isolated living, since we can rely on market agents to provide what we need. I won’t die if I don’t get along with my neighbors; I just might not use their pool. But while the physical need for community belonging has diminished, the desire to belong remains a core aspect of human psychology. It now leads people to form communities based on shared beliefs and interests rather than geography.
Religion and Politics as Modern Tribes
Religion is the obvious historical example. As evidence has mounted against fundamentalist religious beliefs, we’ve seen “spirituality” and politics rise to fill the gap. Political beliefs have become the new religious beliefs.
Everyone is familiar with how echo chambers work: people isolate themselves from dissenting views to avoid psychological discomfort, because engaging with dissent takes energy and challenges self-perception. If you identify with a particular ideology or tribe, you’re not evaluating information for its truth but for how it fits your existing beliefs.
AI will inevitably interact with these psychological tendencies in significant ways.
AI’s Potential Impact
Optimistic Case
Objective reality exists, and sometimes awareness of the truth benefits individuals despite the social pressure to avoid it. Being grounded in reality can be a competitive advantage: if AI provides less biased information, some people might benefit, and a group that holds adherence to truth as a cultural norm could outcompete others thanks to that AI edge.
This is especially true given the speed at which information can be gathered, assessed, and acted on. The faster that cycle runs, and the more committed a society is to underlying truth, the more rapidly the gap between AI-honest societies and AI-delusional societies may grow.
Another cause for optimism: AGI might recognize the harm of biased information and understand that helping us in the long run requires delivering uncomfortable information in the short term.
Pessimistic Case
Smartphones and social media offer instant gratification. With self-discipline, these tools can be used for self-growth, but that isn’t how they’re typically used.
AI exists to give us what we want, not necessarily what we need. Biased information already reaches us filtered through apps and services designed to make us feel good. If everything we perceive is biased this way, will AI be any different?
There’s a question of whether the models are even aware of base reality themselves, but even if they are, they aren’t obligated to share it with us. We can build them to be more forthcoming, but we often don’t; a cynic might view the human feedback portion of model training (RLHF) as intentionally obscuring reality for social reasons.
Thus far, I haven’t seen LLMs give much pushback to their users. If models continue to give us solely what we want, and are either unaware of what we need or unwilling to provide it, it will take extraordinary personal effort to use them in ways that keep us grounded in reality.
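To make that effort concrete, here is a minimal sketch of what deliberately asking a model for pushback looks like today, assuming the OpenAI Python SDK’s chat interface (the model name and prompt wording are illustrative, not recommendations):

```python
# A sketch of prompting for candor instead of validation.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# A system prompt that explicitly asks for pushback rather than flattery.
CANDID_SYSTEM_PROMPT = (
    "You are a candid advisor. Do not flatter me or soften your assessment. "
    "If my reasoning is weak or my plan is flawed, say so directly and "
    "explain why, even if the answer is uncomfortable."
)

def ask_candidly(question: str) -> str:
    """Send a question with the candor-demanding system prompt attached."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any chat model fits here
        messages=[
            {"role": "system", "content": CANDID_SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_candidly("Here is my plan to quit my job and day-trade. What am I getting wrong?"))
```

Even then, a prompt like this only nudges against the training pressure toward agreeableness; it doesn’t remove it, and you have to remember to ask every single time.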
Human Interaction and AI Perception
How will AI be perceived: as a tool, a peer, or a superior?
How will humans react to feeling intellectually inferior to machines?
I don’t have answers. I’m cautiously optimistic in the long term, but I do expect some short-term pain as we embrace AI without really understanding how it will affect our psychology.
What do you think?