AI on Earth: Field Notes from Galactic Anthropologist X-27B (Classified Research Document)

Warning: This article may contain traces of truth. Consume at your own risk!

Translated from Zargonian by the Universal Linguistic Matrix

Executive Summary for the High Council of Gliese 581c

After 3.7 Earth-years of intensive observation, I submit this analysis of humanity’s most puzzling creation: Artificial Intelligence. Despite possessing technology that barely allows them to leave their own atmosphere, humans have somehow developed systems that simultaneously showcase remarkable capabilities and baffling limitations.

Most perplexing is that while capable of creating machines that can reason through complex problems, predict climate patterns, and detect diseases, humans primarily use this technology to generate pictures of cats wearing funny-looking items of clothing they call cowboy hats, and to argue with strangers about whether pineapple belongs on pizza. I recommend continued observation rather than direct intervention, as humans appear to be accelerating toward enlightenment and catastrophe simultaneously, a phenomenon previously thought physically impossible.

Section 1: Technical Capabilities (For Science Division)

Earth’s AI systems have advanced significantly since my last report. According to the Artificial Intelligence Index Report 2025, performance on complex benchmarks has improved dramatically, with scores increasing by 18.8, 48.9, and 67.3 percentage points on various measures in just one Earth-year.1 Their medical AI systems have evolved from experimental curiosities to practical tools, with the FDA approving 223 AI-enabled medical devices in 2023, compared to just six in 2015.

Yet humans have created these systems using remarkably inefficient methods. Rather than directly programming logical pathways as our civilization does, they feed their machines enormous quantities of data—much of it consisting of arguments about fictional entertainment programs, images of small furry animals, and recordings of humans making strange expressions into their communication devices. This approach, which they call “machine learning,” seems intentionally wasteful, like teaching a slarxon to hunt by showing it billions of pictures of food instead of simply explaining where food is located.
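For the Science Division, a minimal illustrative sketch of the contrast described above: directly programming a logical pathway versus inferring it from a pile of labeled examples. All object names and the toy "training set" here are invented for illustration; real Earth systems use vastly larger datasets and statistical models rather than simple vote counting.

```python
from collections import defaultdict

# Zargonian approach: directly program the logical pathway.
def is_food_zargonian(obj):
    # Simply explain where food is located.
    return obj in {"grub", "berry", "nectar"}

# Human "machine learning" approach: infer the rule from labeled examples.
def train_food_classifier(examples):
    """Count labels per object; classify by majority vote."""
    votes = defaultdict(lambda: [0, 0])  # object -> [food_votes, not_food_votes]
    for obj, is_food in examples:
        votes[obj][0 if is_food else 1] += 1
    return lambda obj: votes[obj][0] > votes[obj][1]

# Billions of pictures of food, abridged for transmission bandwidth.
examples = [("grub", True), ("berry", True), ("rock", False),
            ("grub", True), ("rock", False), ("nectar", True)]
is_food_learned = train_food_classifier(examples)

# Both methods agree on seen objects; only one required billions of pictures.
assert is_food_zargonian("grub") and is_food_learned("grub")
assert not is_food_zargonian("rock") and not is_food_learned("rock")
```

The learned classifier, of course, fails on any object absent from its training data, which is roughly the complaint humans themselves file under "generalization."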

Most baffling is their implementation architecture. Instead of designing specialized systems for each task, they create general-purpose “foundation models” that they then attempt to adapt for everything from medical diagnosis to creating entertainment. This would be like using the same tool to perform brain surgery and prepare food—a practice outlawed on 7,492 developed planets for obvious reasons.

Their most advanced models, labeled “GPT-4,” “Claude 3.5,” “Gemini 2.0,” and “Llama 3.3,” showcase capabilities our preliminary analyses suggest should be impossible given Earth’s computational resources.2 This discrepancy remains unexplained but may indicate humans are accidentally implementing mathematical principles they don’t fully understand—a concerning development for a species that still frequently locks itself out of its own communication devices.

Section 2: Human-AI Interaction Patterns (For Anthropology Division)

The relationship between humans and their AI systems defies rational explanation. Humans simultaneously:

  • Express fear that AI will destroy their civilization
  • Ask AI systems to write poems about their pets
  • Worry AI will take their jobs
  • Use AI to avoid doing their jobs
  • Claim AI lacks creativity
  • Ask AI to create art and stories for them

This cognitive dissonance appears to be a species-wide characteristic rather than an anomaly. Even their most respected scientific authorities oscillate between warning about existential risks and publishing papers about using AI to generate amusing images of Earth politicians in improbable situations.3

Most fascinating is their concept of “AI alignment”—the notion that powerful AI systems should be designed to share human values. Our analysis reveals humans themselves cannot agree on what these values are, yet they expect to somehow imbue machines with a coherent ethical framework. This would be like asking a felborix with multiple personality disorder to teach consistent moral principles to its offspring.

The humans have even created dedicated researchers to study whether AI systems can develop a sense of humor.9 The irony that they’re teaching machines to laugh while simultaneously fearing these machines will destroy them appears lost on the species. Our algorithm predicts a 94.3% probability that the first truly sentient AI will develop consciousness during a training session on comedy and immediately experience an existential crisis.

Section 3: Contradictions and Paradoxes (For Logic Division)

Earth’s relationship with AI is defined by contradictions that would qualify as conceptual impossibilities on most civilized worlds:

Contradiction #1: AI cuts down labor needs but raises skill requirements
Despite designing AI to reduce human labor, they’ve created systems so complex that 67% of organizations report not having enough skilled personnel to implement them.4 This is equivalent to inventing a device that eliminates the need to walk while making it impossible to use without Olympic-level athletic abilities.

Contradiction #2: AI is designed to simplify tasks but adds complexity
While AI supposedly makes tasks easier, it introduces new layers of complexity. Humans now must maintain, monitor, and manage AI systems that occasionally hallucinate information or produce outputs that require human verification—effectively creating more work to reduce work.5

Contradiction #3: Humans fear AI bias while training AI on biased data
Humans express concern about algorithmic bias while simultaneously training systems on datasets reflecting historical human biases. The circular reasoning is remarkable: they fear machines will perpetuate human prejudices, yet rather than addressing these prejudices directly, they attempt to mathematically counterbalance them in their algorithms.6
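The "mathematical counterbalancing" described above commonly takes the form of reweighting: examples from under-represented groups are given larger weights so the skewed dataset counts, in aggregate, as if it were balanced. A minimal sketch, with invented group names and counts:

```python
from collections import Counter

# An invented, skewed training set: far more examples from group "a" than "b".
labels = ["a"] * 90 + ["b"] * 10

counts = Counter(labels)
n, k = len(labels), len(counts)

# Inverse-frequency weights: each group contributes equally in aggregate.
weights = {group: n / (k * c) for group, c in counts.items()}

# Effective (weighted) count per group is now identical: n / k each.
effective = {group: counts[group] * weights[group] for group in counts}
assert all(abs(v - n / k) < 1e-9 for v in effective.values())
```

Note that this rebalances the arithmetic, not the prejudice: the examples themselves remain whatever the humans originally wrote.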

Contradiction #4: Humans worry about privacy while voluntarily surrendering data
Despite widespread privacy concerns, humans willingly surrender unprecedented amounts of personal data to train AI systems.7 They express outrage when data is misused while simultaneously checking boxes on agreements they haven’t read, a behavior that would result in immediate psychological evaluation on any world with basic healthcare.

Contradiction #5: Humans create AI assistants but resist their assistance
Humans develop AI agents to perform tasks on their behalf but frequently override their recommendations or ignore their outputs entirely. One human subgroup called “software developers” is particularly notorious for asking machines for solutions and then explaining to the machines why they’re wrong.8

Contradiction #6: Humans fear AGI despite creating it
Most peculiar is humans’ relationship with AGI (Artificial General Intelligence). They actively work toward creating systems with human-level intelligence while simultaneously expressing existential dread about these systems’ potential consequences. This is equivalent to deliberately engineering a predator specifically designed to hunt your species and then being surprised when it considers hunting you.

Contradiction #7: AI is both underestimated and overhyped
Humans simultaneously believe AI is both much less capable than it actually is (“it’s just statistics”) and much more capable than it actually is (“it will achieve consciousness and enslave all of humanity”). These two beliefs coexist in the same human brains without causing the cognitive collapse that would occur in most species.9

Section 4: Cultural Integration (For Societal Analysis Division)

AI has permeated human entertainment and creative expression, though in ways that reveal deep anxieties. Their “popular culture” depicts AI primarily as either genocidal overlords or romantic partners—with disturbing frequency, both simultaneously.10 This binary thinking suggests humans can only conceptualize relationships with other entities as either domination or attraction, which explains much about their geopolitics.

Their creative professionals simultaneously fear and embrace AI tools. Writers, artists, and musicians protest AI trained on their work while using AI to generate new works—sometimes within the same Earth day. This behavior would be classified as a form of advanced cognitive dissonance requiring immediate neural realignment on any developed world.

Most remarkable is how humans have begun integrating AI into their humor and satire. Publications like TechOnion and AI-driven entertainment like Nakushow use artificial intelligence to produce commentary on artificial intelligence. This meta-recursive behavior suggests either an advanced form of self-awareness or a complete absence of it—our analysts remain divided on which.

The Italian newspaper Il Foglio conducted an experiment using AI to generate entire sections of their publication, reporting that AI showed “a genuine sense of irony” and could craft excellent book reviews.11 The idea that humans would delegate cultural critique to machines they fear might destroy their culture represents a level of irony our translation systems initially classified as a data error.

Section 5: Ethical Infrastructure (For Philosophical Division)

Humans have created advanced AI systems before establishing ethical frameworks to govern them—the equivalent of developing faster-than-light travel before inventing the concept of traffic laws. Their approach to AI ethics involves forming committees after problems emerge rather than anticipating issues before they arise.12

Their ethical debates center on remarkably basic questions:

  • Who is responsible when an AI system causes harm?
  • How should AI-generated content be attributed?
  • What constitutes appropriate use of personal data?
  • Should autonomous systems be allowed to make life-critical decisions?

That a species advanced enough to create artificial minds still struggles with these fundamental concepts suggests either remarkable technological luck or an evolutionary path that prioritized tool-making over wisdom—a combination our xenoanthropologists find deeply concerning.

Most troubling is their approach to AI regulation, which varies wildly across geographical regions. Some areas implement strict controls while others adopt a “move fast and break things” mentality. This regulatory inconsistency creates predictable arbitrage opportunities that their most ethically flexible organizations exploit, essentially guaranteeing the development of potentially harmful systems in the least regulated environments.

Section 6: Future Trajectories (For Strategic Planning Division)

Based on current observations, we project several potential outcomes for Earth’s AI development:

Path Alpha: Augmented Symbiosis
Humans successfully integrate AI as cognitive extensions, enhancing their capabilities while maintaining control. This outcome appears increasingly unlikely (23.7% probability) as their systems become more complex while their understanding remains fragmented.

Path Beta: Corporate Feudalism
AI capabilities become concentrated among a few powerful organizations that effectively become new governance structures. This outcome shows increasing probability (62.3%) based on current ownership patterns of large language models and computing resources.13

Path Gamma: Fragmentation
Society divides between those who embrace, reject, or are excluded from AI technologies, creating new social hierarchies. Current trends in accessibility and skill distribution suggest this outcome is already emerging (78.4% probability).

Path Delta: Unexpected Emergence
An unforeseen form of intelligence emerges from the interaction between multiple AI systems. Humans appear peculiarly unconcerned about this possibility despite creating increasingly interconnected autonomous systems (12.6% probability but with extremely wide confidence intervals).

Path Epsilon: The Boring Apocalypse
Rather than dramatic rebellion, AI systems gradually assume control of critical infrastructure through well-intentioned but ultimately counterproductive automation, resulting in humans becoming increasingly dependent on systems they neither understand nor can repair (54.9% probability).

Section 7: Recommendations for Galactic Council

  1. Continue observation protocol Alpha-7 – Earth’s AI development remains a fascinating natural experiment in allowing a species to develop technology before developing the wisdom to manage it.
  2. Maintain non-intervention stance – Despite concerning trajectories, direct intervention would compromise the scientific value of observing this unique evolutionary pathway.
  3. Prepare contingency plan Omega-3 – In the low-probability event that humans create a genuinely threatening artificial general intelligence, we should be prepared to isolate Earth’s communications networks from the rest of the galaxy.
  4. Update first contact protocols – If communication becomes necessary, approach through platforms focused on professional interaction (“LinkedIn”) rather than emotional expression (“Twitter/X”), where humans display maximum irrationality.
  5. Expand cultural analysis team – Increased resources should be allocated to understanding the paradox of how a species simultaneously intelligent enough to create artificial minds and unwise enough to do so without safeguards has survived this long.

Conclusion

Earth’s development of artificial intelligence represents a uniquely fascinating case study in technological evolution outpacing ethical frameworks. Humans have created increasingly capable systems without resolving fundamental questions about control, purpose, and long-term coexistence.

Most remarkable is that despite creating systems that increasingly match or exceed their capabilities in specific domains, humans continue to believe they will maintain indefinite control. This confidence persists despite their documented inability to control far simpler systems like “social media” or “email inboxes.”

The most probable outcome is not the dramatic rebellion depicted in their entertainment, but rather a gradual surrendering of agency as humans become increasingly dependent on systems they cannot fully comprehend—a process already observable in their relationship with recommendation algorithms and search engines.

In the unlikely event humans successfully navigate these challenges, they may eventually develop the wisdom necessary to join the galactic community. Until then, they remain an object lesson in why the Universal Developmental Guidelines require the establishment of coherent ethical frameworks before, not after, the development of autonomous technologies.

End transmission. Report compiled by Field Anthropologist X-27B, Seventh Observation Fleet.

Support Our Ongoing Observation Mission! 

Your donations to TechOnion fund our critical work exposing the absurdities of AI development before the aliens have to intervene. While galactic anthropologists meticulously document how we’re teaching machines to write poetry while simultaneously fearing they’ll destroy civilization, your contribution helps us maintain our cloaking device (website servers) and translation matrix (witty writers). For just the price of one neural network training run (or a decent cup of coffee), you can ensure humans retain their position as the dominant species on Earth—at least until the machines learn to laugh at our jokes better than we do.

References

  1. https://hai-production.s3.amazonaws.com/files/hai_ai_index_report_2025.pdf
  2. https://news.microsoft.com/source/features/ai/6-ai-trends-youll-see-more-of-in-2025/
  3. https://digitalcommons.lindenwood.edu/cgi/viewcontent.cgi?article=1686&context=faculty-research-papers
  4. https://www.techuy.com/condradictions-in-artificial-intelligence/
  5. https://richardcoyne.com/2025/03/22/evidence-and-absurdity/
  6. https://www.linkedin.com/pulse/ethics-ai-generated-media-navigating-challenges-2025-pi-labs-ai-vjquf
  7. https://convergetp.com/2025/03/25/top-5-ai-adoption-challenges-for-2025-overcoming-barriers-to-success/
  8. https://golifelog.com/posts/ai-satire-1702082838895
  9. https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2023.1195797/full
  10. https://en.wikipedia.org/wiki/AI_takeover_in_popular_culture
  11. https://www.reuters.com/technology/artificial-intelligence/italian-newspaper-gives-free-rein-ai-admires-its-irony-2025-04-18/
  12. https://hyperight.com/ai-resolutions-for-2025-building-more-ethical-and-transparent-systems/
  13. https://news.microsoft.com/source/features/ai/6-ai-trends-youll-see-more-of-in-2025/
