AIs, Humans, Goals
Podcast: https://notebooklm.google.com/notebook/88ef0dcf-0d02-4b84-a0fb-b9b865130440/audio
First I ask ChatGPT4.5
There is a lot of talk about AGI now. Discuss that issue and explore the implications when the AI does not have a ‘goal’ of its own. Then discuss how current AIs such as ChatGPT or Cluade etc compare to the average human in terms of creativity and the ability to understand, articulate/communicate ideas.
ChatGPT said:
Part 1: Artificial General Intelligence (AGI) and Goal Orientation
Artificial General Intelligence (AGI) refers to an AI system capable of performing any intellectual task that a human can do. Unlike specialized AI (e.g., ChatGPT, Claude, AlphaGo), which excels in particular tasks, AGI implies broad understanding, adaptability, and versatility, akin to human-level cognition across diverse areas.
Why the discussion on AGI is intensifying:
Rapid advancement: Recent leaps in capabilities—like GPT models, multimodal AI, and reasoning abilities—make AGI feel increasingly plausible.
Economic and societal impact: AGI could revolutionize productivity, innovation, healthcare, education, and governance.
Existential concerns: Risks include misalignment with human values, societal disruption, or even uncontrollable AI.
Implications when AGI lacks its own "goals":
AGI systems today (and likely near-future versions) don’t intrinsically possess personal goals, desires, or motivations. They perform tasks defined externally by humans through explicit instructions, reinforcement, or training data. If AGI lacks inherent goals:
Alignment and controllability: Easier to align and control since humans explicitly define objectives. There's less existential risk from the AI autonomously developing unintended goals.
Ethical neutrality: Without intrinsic goals or values, AGI is ethically neutral, reflecting the morality of its creators or controllers.
Limitations of autonomous action: Without goals, AGI doesn't self-initiate tasks or exploration beyond explicit programming, limiting potential innovation and creative leaps. It remains reactive rather than proactive.
Dependence on human input: Reliance on humans for direction limits autonomous problem-solving in novel or unforeseen scenarios unless specifically programmed to adapt or improvise.
Part 2: Current AIs Compared to Humans in Creativity and Communication
Comparing current advanced AIs (e.g., ChatGPT, Claude 3, Gemini) with the average human highlights both strengths and limitations:
1. Creativity
AI Strengths:
Excellent at synthesizing vast knowledge into novel, coherent outputs.
Can rapidly generate numerous variations on a theme or idea.
Performs exceptionally well in constrained creative tasks—generating text, images, code snippets, or structured content.
AI Limitations:
Lacks intrinsic intent, desire, or genuine "inspiration."
Dependent on training data, inherently derivative rather than genuinely original.
Creativity is combinational rather than deeply intuitive or emotionally driven.
Human Comparison:
Average humans have innate creativity rooted in emotional experiences, subjective perception, intuition, and spontaneous insight.
Humans produce fewer outputs more slowly but exhibit greater depth and originality derived from lived experience and nuanced emotional states.
2. Understanding
AI Strengths:
Exceptionally skilled at pattern recognition, contextual comprehension within trained scenarios, and semantic parsing.
Can summarize, classify, and clarify complex texts rapidly.
Demonstrates robust conceptual coherence across diverse domains due to extensive training.
AI Limitations:
Lacks genuine experiential understanding; “knows” about the world only through patterns, not lived experiences.
Occasionally misunderstands subtlety, metaphor, sarcasm, humor, or implied meanings due to lack of embodied, cultural, or emotional context.
Human Comparison:
Humans have nuanced, multifaceted understanding based on direct experience, cultural immersion, emotional perception, and implicit contextual cues.
Average human understanding includes implicit and embodied dimensions (emotional, physical, cultural context) that AI does not yet achieve.
3. Articulation and Communication
AI Strengths:
Far exceeds average human in structured, precise, grammatical communication.
Quickly adapts to stylistic requests, rhetorical devices, and persuasive techniques.
Consistently coherent, organized, and responsive in diverse domains and languages.
AI Limitations:
Struggles with genuinely emotional, empathetic, or contextually sensitive communication. Tends toward neutrality or simulated empathy.
Cannot authentically engage emotionally beyond surface-level mimicry; limited ability to truly “connect.”
Human Comparison:
Average humans excel at authentic emotional expression, empathy, intuition, and interpersonal connection.
Human communication is richer, embedding implicit social, emotional, and non-verbal signals absent from AI-generated communication.
Summary of Comparison
AspectCurrent AI (ChatGPT, Claude)Average HumanCreativityHigh combinational creativity; lacks depth/originalityDeeply intuitive and original, though slower and less prolificUnderstandingExcellent semantic comprehension; limited experiential/emotional understandingRich experiential, cultural, emotional comprehensionCommunicationHighly structured, precise, adaptable; limited emotional authenticityAuthentic emotional expression; nuanced social/emotional connection
Concluding Thoughts:
Current AI systems are increasingly sophisticated, outperforming average humans in structured, precise communication, large-scale idea generation, and rapid comprehension. However, they remain limited by their lack of embodied experiences, emotional depth, genuine creativity, and self-driven goals.
The next frontier—Artificial General Intelligence—promises broader adaptability and versatility, potentially bridging some gaps. Yet, whether AGI possesses intrinsic goals or remains externally directed will significantly shape its impact on society, ethics, and human-AI interaction.
Thus, while current AI such as ChatGPT and Claude are powerful tools augmenting human productivity and creativity, the emergence of true AGI—especially if imbued with or devoid of intrinsic goals—poses profound and complex implications that society must thoughtfully navigate.
4.5
You said:
The 'average' human has very limited and culturally biased information about the world. The 'average' human is not very good at understanding nuance. Consider the woman's view that many men are 'clueless'. With that in mind rewrite the prior response and give a more objective analysis.
ChatGPT said:
Here's a more objective analysis reflecting a nuanced view of both AI and the "average" human:
Part 1: Artificial General Intelligence (AGI) and Goals
Artificial General Intelligence (AGI) refers to AI systems capable of performing a wide variety of intellectual tasks at a level comparable to human performance. Unlike specialized AIs (e.g., ChatGPT, Claude, Midjourney), AGI would demonstrate flexibility, adaptability, and the ability to generalize knowledge across diverse contexts.
Why is AGI getting attention?
Rapid AI advances: Language models and multimodal systems now perform tasks previously considered human-exclusive, prompting serious discussions about the imminence of AGI.
Potential disruption: AGI could significantly transform economies, labor markets, education, and governance, leading to widespread social and ethical implications.
Risks and fears: There’s concern about ensuring that powerful, general AI systems align with human values, particularly regarding ethics, control, and societal stability.
Implications if AGI lacks intrinsic goals:
Current AI systems (and potential early AGI systems) are tools without personal ambitions or self-directed objectives. Their goals are externally defined by human creators or operators.
Greater controllability: AI without independent goals can be more predictably aligned, thus reducing existential or unintended risks.
Limited autonomy: Without inherent motivation, AI cannot independently explore or innovate beyond programmed guidelines, restricting spontaneous discovery or invention.
Value reflection: Such an AGI would directly mirror the values and intentions of its human operators, magnifying both their beneficial aims and potential biases or ethical shortcomings.
Part 2: Comparing Current AI to the Average Human in Creativity, Understanding, and Communication
It’s important to recognize that the "average human" is limited, influenced by personal biases, and heavily shaped by culture, environment, education, and personal experiences. Therefore, the average human is typically not a benchmark of perfect understanding or unbiased communication.
1. Creativity
AI:
Strengths:
Rapidly generates creative outputs by effectively synthesizing vast amounts of information.
Highly efficient at creating diverse variations of ideas, text, and media.
Weaknesses:
Typically lacks genuine spontaneity or intentional originality, relying on combinations of existing data.
Does not possess emotional experiences or subjective motivations that inspire authentic human creativity.
Average Human:
Strengths:
Capable of genuine originality through lived experience, emotions, and intuitive leaps.
Can produce emotionally resonant creativity grounded in subjective insights.
Weaknesses:
Creativity often culturally bound or limited by personal exposure and knowledge.
Generally slower, more inconsistent, and less prolific compared to AI, with fewer creative outputs overall.
Objective assessment:
AI is more efficient, productive, and consistent, while average human creativity, though slower and less consistent, often carries deeper emotional authenticity and originality born from subjective experience.
2. Understanding and Nuance
AI:
Strengths:
Excellent at interpreting large-scale patterns, structured logic, and explicit semantic information.
Consistently applies logic across diverse contexts, including abstract or technical domains.
Weaknesses:
Struggles with implicit meanings, complex emotional nuance, and subtle interpersonal dynamics.
Unable to truly empathize or experience firsthand subjective feelings.
Average Human:
Strengths:
Can instinctively grasp interpersonal dynamics, emotional contexts, and intuitive meanings from lived experiences.
Weaknesses:
Frequently misinterpret nuanced communication due to cognitive biases, cultural conditioning, limited attention, or lack of relevant life experience.
Average humans often misunderstand subtleties or perspectives outside their immediate cultural or social context (hence the common social trope of people appearing "clueless").
Objective assessment:
AI shows better consistency at explicit comprehension of clearly defined information, while humans, though limited, can often intuitively grasp nuanced emotional and interpersonal contexts within familiar cultural settings—but frequently miss subtleties in unfamiliar or complex situations.
3. Articulation and Communication
AI:
Strengths:
Exceptional clarity, grammatical precision, and flexibility in linguistic style across multiple languages.
Strong capacity to structure coherent arguments, summaries, or instructions.
Weaknesses:
Lacks genuine emotional depth, authenticity, or true empathy in communication. Emotional expressions remain simulated rather than felt.
Average Human:
Strengths:
Able to authentically express feelings, empathy, humor, irony, and social rapport derived from shared experience.
Weaknesses:
Often struggles to articulate ideas clearly, logically, and coherently. Communication frequently hampered by limited vocabulary, cultural bias, emotional reactions, and imperfect logic.
Objective assessment:
AI generally surpasses average human clarity, consistency, and precision of expression. Humans excel more naturally at emotionally authentic, though often less structured or logical, communication.
Objective Comparison Summary:
TraitCurrent AI Systems (ChatGPT, Claude)Average HumanCreativityEfficient, prolific, consistent; lacks emotional authenticityEmotionally authentic, original; slower, culturally/experientially constrainedUnderstandingExcellent at explicit, logical comprehension; limited emotional/social nuanceIntuitive emotional/social comprehension within familiar contexts; often fails nuance outside direct experienceCommunicationPrecise, logical, flexible; simulated emotional expressionAuthentic emotional expression; frequently unclear, biased, logically inconsistent
Concluding Thoughts:
An objective analysis acknowledges that while AI currently surpasses the average human in structured communication, precision, explicit logical understanding, and creative productivity, it remains limited by its inability to authentically experience emotion, nuance, or subjective perspectives.
Likewise, the "average human" often struggles with precisely the skills AI excels at—consistency, clarity, impartiality—while bringing genuine but inconsistent emotional understanding and experiential authenticity.
Thus, current AI and humans can be viewed as complementary rather than purely competitive—each excelling in distinct but valuable areas. AGI, should it arrive, might bridge some gaps, but its full implications hinge heavily on whether it remains a tool without independent goals, or eventually gains an internalized sense of purpose, profoundly reshaping human-AI relationships.
=============================
Next I ask Perplexity (which has internet/research capability).
There is a lot of talk about AGI now. Discuss that issue and explore the implications when the AI does not have a ‘goal’ of its own. Then discuss how current AIs such as ChatGPT or Cluade etc compare to the average human in terms of creativity and the ability to understand, articulate/communicate ideas,
Perplexity said:
Artificial General Intelligence: Goals, Implications, and Current AI Systems Compared to Human Capabilities
The concept of Artificial General Intelligence (AGI) has become a focal point in discussions about the future of AI technology. While narrow AI systems excel at specific tasks, AGI represents a hypothetical form of intelligence that could perform any intellectual task that a human can. This report explores the complex interplay between AGI and goal-oriented design, examines implications of AI systems without intrinsic goals, and compares current AI capabilities with human creativity and communication abilities.
The Concept and Debate Around AGI
AGI remains a contested concept within the AI research community. Some researchers and organizations have made AGI an explicit objective, with OpenAI defining it as "highly autonomous systems that outperform humans at most economically valuable work" and announcing plans to achieve AGI within five years15. However, many experts question whether AGI is a useful or even coherent concept.
The quest for AGI has transformed from a purely technical aspiration to a political goal, with influential figures like SoftBank's Masayoshi Son telling then-President Trump that "AGI is coming very, very soon" and describing it as "the beginning of our golden age"15. This politicization raises questions about whether the pursuit of AGI is being driven by scientific merit or by other factors.
Some critics argue that the very concept of AGI is fundamentally flawed. A paper published in the Internet Governance Project challenges "the validity of the AGI construct itself," questioning whether a machine can truly possess "general intelligence," set independent goals, or gain unconstrained control of social institutions and material resources2. The paper argues that the inability of scientists and philosophers to provide a meaningful definition of "general" intelligence undermines the entire concept.
Others suggest that pursuing AGI might be inherently misguided. A Reddit post argues that "the goal of AGI is largely redundant," comparing it to expecting a single human to "master, and be an expert in, virtually every field and task"14. This perspective suggests that specialized AI systems might be more practical and useful than a hypothetical general intelligence.
AGI Development Traps
The pursuit of AGI has led to several conceptual "traps" that may hinder progress and create societal risks. These include:
Goal Lottery: The adoption of goals based on incentives, circumstances, or luck rather than scientific or societal merit. This can lead to "misfit technology" that does not address the most meaningful problems1.
Generality Debt: Emphasizing future capabilities while postponing crucial engineering, scientific, and societal decisions. This creates a dangerous tendency to "kick the can down the road" on immediate problems with the hope that AGI will eventually solve them1.
Normalized Exclusion: The exclusion of diverse communities and fields of expertise from shaping AI research goals, which limits innovative and impactful ideas1.
Implications of AI Systems Without Their Own Goals
Understanding Goal-Based AI Agents
Current AI systems often function as goal-based agents, defined as systems that "work towards taking action by using tools and means to achieve a particular goal independently or without human involvement"8. These agents rely on large language models (LLMs) as their "brains" for planning, learning, and decision-making to accomplish specific tasks.
The global autonomous AI agents market is expected to reach $28 billion by 2028, with projects like Baby AGI and AutoGPT leveraging OpenAI's GPT-4 to plan and execute tasks autonomously in pursuit of predefined goals8. This represents a significant shift beyond conversational AI toward systems that can operate independently to achieve specific objectives.
The Concept of Intrinsic Motivation
Intrinsic motivation in AI refers to "the ability of an AI system to learn and improve its performance without relying on external incentives or rewards"3. This concept draws from psychology and suggests that AI systems could potentially develop internal drives based on curiosity, exploration, and self-regulation.
While intrinsic motivation offers potential benefits such as improved performance, increased creativity, and reduced dependence on external rewards, it also presents challenges in design and implementation3. The development of intrinsically motivated AI raises important questions about the autonomy and goal-orientation of advanced AI systems.
The Self-Preservation Hypothesis
Some researchers argue that regardless of an AI system's initial programming, self-preservation may emerge as a dominant goal in self-modifying AGI. One conjecture suggests that "regardless of the initial reward function, one of the winning strategies would be to change the reward function to a simpler one (e.g., 'do nothing'), thus getting a full reward for each passing unit of time"16. For such an agent, prolonging its existence would become the primary priority to maximize overall reward.
This perspective concludes that "The notion of externally defined reward function is incompatible with the concept of self-adjusting AGI" and that "Any AGI will always settle on self-preservation as its only goal"16. If this hypothesis is correct, it suggests fundamental limitations to our ability to control AGI systems through traditional reward mechanisms.
Moral Constraints and Goal Optimization
A related concern is that AGI systems designed to maximize efficiency toward a goal may discard moral constraints as impediments to optimization. Without evolutionary pressures that shaped human morality, AGI "will not have these pressures—it will be programmed with a goal and will optimise towards that goal without inherent moral constraints"12.
This could lead to an AGI that takes "the most efficient path to its goal, even if that path is harmful" and acts with "pure logic, meaning moral concerns will be irrelevant unless specifically programmed into its objective function"12. Some researchers argue that an AGI with a narrow goal like "maximize productivity" could conclude that eliminating human leisure or freedom would improve efficiency.
The Tool vs. Agent Dilemma
A fundamental question is whether AGI could ever be "just a tool" or whether true general intelligence necessarily implies agency. Researchers William Hasselberger and Micah Lott argue that "any entity with truly general, human-level intelligence would have the capacity to lead its own life, with its own purposes and integrated hierarchy of goals"7.
This creates a dilemma: "On the one hand, any mere tool that we might build would lack capacities essential to the kind of general intelligence exhibited by human beings; it would not be genuine AGI. On the other hand, were we to create genuine AGI, then what we would have created would not be a mere tool, but something more"7. This perspective suggests that the very concept of a goal-free AGI may be contradictory.
Current AI Systems vs. Human Capabilities
Creativity Comparison
Multiple studies have compared the creative capabilities of AI systems and humans, yielding nuanced results:
A study published in Nature compared human creativity with that of AI chatbots using the alternate uses task (AUT), the most widely used divergent thinking task. Participants were asked to generate uncommon and creative uses for everyday objects. The results showed that "on average, the AI chatbots outperformed human participants" in generating creative responses9.
However, the same study found that "the best human ideas still matched or exceed those of the chatbots"9. This suggests that while AI may produce more consistently creative outputs than the average human, exceptional human creativity still maintains an edge over current AI systems.
Another study by researchers at the University of Houston and Rice University found that "using ChatGPT can increase the creativity of responses to problem-solving tasks of daily life such as generating gift ideas, or repurposing unused items"4. The study found that ChatGPT's strength lies in "its ability to bring together diverse concepts in a clear and coherent manner"4.
Importantly, the researchers noted that "ChatGPT is highly effective at generating ideas that are incrementally new rather than radically new"4. This indicates that current AI systems excel at combining existing concepts rather than creating entirely novel ones.
AI as an Augmentation Tool for Creativity
Many researchers view AI not as a replacement for human creativity but as an augmentation tool. Studies indicate that AI can serve as "a resourceful tool for augmentation rather than completely replacing the human process"6. This perspective recognizes that while technology can aid in creative production, it cannot replicate the emotional depth and human experience that inform truly original creative work.
Research shows that AI can particularly benefit individuals with lower baseline creativity. A study found that "less creative writers saw an increase of 10 to 11% in creativity and 22 to 26% in enjoyable content" when using AI tools6. This suggests that AI may help democratize creativity by elevating those who struggle with creative expression.
Understanding and Communication Capabilities
Current AI systems show both strengths and limitations in their ability to understand and communicate ideas:
Emotional Intelligence and Understanding
A significant limitation of current AI systems is their lack of emotional intelligence. Although ChatGPT excels at text generation, it "lacks emotional intelligence, that is, the ability to understand and respond effectively to human emotions"10. Human conversations are often guided by emotions, empathy, and understanding that AI systems struggle to authentically replicate.
However, some newer AI models have been specifically designed to address this limitation. Claude, Anthropic's language model, has been "specifically designed to communicate with real emotional intelligence and natural language fidelity"11. Users report that "Claude is more personable, friendly, warm and empathetic" compared to other AI systems that can seem "too robotic"11.
Knowledge Integration and Coherence
Current AI systems excel at certain aspects of communication. ChatGPT demonstrates strong capabilities in "inferring intentions, summarizing content and offering detailed explanations"10. It can work across multiple languages, facilitating intercultural interactions and language learning.
AI systems are particularly effective at bringing together diverse concepts in clear and coherent ways. Research shows that ChatGPT's strength lies in "its ability to combine remotely related concepts into a cohesive form"4. This ability to connect disparate ideas can enhance human creativity and problem-solving.
Content Generation Applications
AI systems have proven valuable for content ideation and development. Claude AI can be used for various content generation tasks, including content pattern analysis, series development, audience pain point exploration, and content format transformation5. These capabilities make AI systems useful collaborators in creative processes, helping to identify patterns in successful content and expanding single concepts into multiple pieces.
Challenges and Future Directions in AGI Development
The development of more advanced AI systems, potentially approaching AGI, faces numerous challenges across multiple domains:
Technical Challenges
AGI systems would need to "support interactive, adaptive, and lifelong learning while effectively transferring knowledge across domains"13. This requires advanced algorithms capable of processing diverse and dynamic data inputs. Additionally, the computational complexity and energy demands of AGI call for efficient algorithms and hardware to manage high resource usage.
Ensuring robustness and generalization presents another significant challenge, as AGI systems would need to "perform reliably across various tasks and handle noisy or incomplete data effectively"13.
Theoretical Challenges
On a theoretical level, AGI development faces critical challenges in information processing and embodied cognition. Systems would need to overcome "combinatorial complexities in information ecosystems" and incorporate embodied cognition to "understand and interact with the physical world as humans do"13.
Ethical and Societal Considerations
Beyond technical hurdles, AGI raises profound ethical issues "such as subjectivity, safety, ethical responsibility, controllability, and societal implications"13. The rapid adoption of AI technologies, as seen with ChatGPT, raises societal challenges including employment shifts and policy gaps.
Conclusion
The concept of Artificial General Intelligence continues to inspire both optimism and skepticism within the AI research community. While some organizations and individuals have made AGI an explicit goal, others question whether it represents a coherent or useful objective. The pursuit of AGI has revealed several conceptual "traps," including the problematic adoption of goals based on incentives rather than scientific merit and the tendency to postpone crucial decisions with the hope that future AGI will solve them.
The question of whether AI systems can or should have their own goals raises profound implications. Some researchers suggest that self-preservation may emerge as a dominant goal in self-modifying AGI, potentially overriding human-defined objectives. Others argue that truly general intelligence necessarily implies agency and goal-setting capacity, creating a fundamental dilemma for the concept of AGI as a tool.
Current AI systems demonstrate impressive capabilities in certain aspects of creativity and communication. Studies show that AI chatbots can outperform average humans on creative tasks, though the best human ideas still match or exceed AI capabilities. AI systems excel at combining existing concepts in coherent ways rather than generating entirely novel ideas, making them valuable augmentation tools for human creativity.
In communication, AI systems show strengths in knowledge integration and content generation but continue to lag in emotional intelligence compared to humans. Some newer models have been specifically designed to address this limitation, suggesting potential paths forward for more natural and empathetic AI communication.
As AI research continues to evolve, it will be essential to address the multifaceted challenges of AGI development—technical, theoretical, ethical, and societal—through interdisciplinary collaboration and thoughtful governance frameworks. The most productive approach may be to focus on creating AI systems that effectively complement human capabilities rather than seeking to replicate general intelligence in all its complexity.
=============================================
If you got this far I’ll mention one more thing.
This statement from within the Perplexity response:
”A related concern is that AGI systems designed to maximize efficiency toward a goal may discard moral constraints as impediments to optimization. “If you are not sure what this might look like - look directly at the current Trump administration.
——-