Celluloid Inception: Ways In Which the Sci-Fi Imaginary Foreshadowed the Future

We are currently hovering in that awkward, static-filled silence between the “Big Bang” of Generative AI and the inevitable heat death of the hype cycle. If 2023 and 2024 were about the novelty of a chatbot writing your emails, 2026 is becoming the year we realise the ghost in the machine isn’t just haunting our laptops—it’s moving in, rearranging the furniture, and demanding a seat at the table. We’ve moved past the “can it do this?” phase and entered the “how do we stop it from doing that?” era. It’s a transition marked by a desperate grab for regulation, a shifting labour market that feels like a game of musical chairs where the music is being composed by an algorithm, and a creeping realisation that our definitions of “human” are getting dangerously thin.

The narrative of AI is often sold as a binary: utopia or extinction. But the reality is far more “Blade Runner” than “Star Trek”—gritty, uneven, and deeply capitalistic. As we watch the EU AI Act begin its staged implementation and the UK government pivot from “pro-innovation” to a frantic scramble for safety standards, it is clear that regulation is the new frontier. Yet, there is a dry irony in watching bureaucrats try to leash a technology that moves at the speed of light with laws that move at the speed of a library queue. We are essentially trying to build a cage for a bird that has already flown across the border and learned how to pick locks.

The heart of the matter is agency. Who owns the input prompt? Who owns the output? And eventually, who owns the “being” that generated it? As we peer into the next decade of silicon evolution, we aren’t just looking at better tools; we are looking at a fundamental reordering of reality. From “AirGapped” AI monasteries to autonomous corporate avatars that represent a brand better than any human CEO could, the possibilities are as intoxicating as they are unsettling. Welcome to the wild west of the mind—bring your own encryption.


The Regulatory Tightrope and the Revenue Trap

The Failure of the Leash

Regulation is increasing, but it’s a laggard’s game. The EU AI Act, which entered into force in August 2024, is the most comprehensive attempt to categorise AI risks, but it already struggles with the sheer velocity of change. Governments are caught in a classic “Prisoner’s Dilemma”: they want to regulate to protect citizens, but they are terrified of stifling the very tech that could generate billions in tax revenue. Consequently, we expect regulation to fail at capitalising on revenue; instead of charging companies for the privilege of disruption, governments will likely offer “innovation sandboxes” that essentially allow Big Tech to continue their experiments with minimal friction. It’s a “pay to play” model where the house always wins, and the house is currently located in Mountain View or Redmond.

Meanwhile, the security landscape is expanding. NIST (National Institute of Standards and Technology) and ISO (International Organization for Standardization) are broadening their frameworks to include “Adversarial Machine Learning.” As the surface area of exploits increases, bad actors are moving from simple phishing to “model inversion” attacks and “data poisoning.” The response from the high-end community? The “AirGap” movement. Serious AI enthusiasts and sensitive industries are retreating to custom, local models—Silicon Monasteries—where the AI never touches the public internet, preserving the last vestiges of true digital privacy.
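To make “data poisoning” concrete, here is a minimal toy sketch: an attacker flips a handful of training labels and drags a nearest-centroid spam filter’s decision boundary. Everything here—the classifier, the data points, the “spam”/“ham” labels—is invented for illustration; real attacks target far larger models and are far subtler.

```python
# Toy illustration of data poisoning: flipping a small fraction of
# training labels shifts a nearest-centroid classifier's boundary.
# All data and labels below are made up for illustration only.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(samples):
    """samples: list of (features, label). Returns one centroid per class."""
    by_label = {}
    for x, y in samples:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def classify(model, x):
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda y: dist2(model[y], x))

# Clean training set: "spam" clusters near (1, 1), "ham" near (0, 0).
clean = [((0.9, 1.1), "spam"), ((1.1, 0.9), "spam"), ((1.0, 1.0), "spam"),
         ((0.1, 0.0), "ham"), ((0.0, 0.2), "ham"), ((0.1, 0.1), "ham")]

# Poisoned set: the attacker injects ham-like points mislabelled as spam,
# dragging the "spam" centroid toward the ham region.
poisoned = clean + [((0.4, 0.4), "spam"), ((0.45, 0.35), "spam")]

probe = (0.5, 0.5)  # a borderline message
print(classify(train(clean), probe))     # "ham" on clean data
print(classify(train(poisoned), probe))  # "spam" after poisoning
```

The point of the sketch is that the attacker never touches the model or the code—corrupting a sliver of the training data is enough, which is exactly why air-gapped, locally curated models are attractive to sensitive industries.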


The Labour Displacement and the Startup Surge

Entry-Level Extinction

The job market is undergoing a brutal “V-shaped” transformation. Depending on whose data you trust—be it Goldman Sachs’ 2023 report predicting 300 million jobs affected or more conservative OECD estimates—AI is undeniably gutting graduate entry-level roles. When an LLM can perform junior coding, copy-editing, or legal research for the cost of a subscription, the “entry-level” rung of the ladder disappears. We are seeing a spike in layoffs across the tech sector, yet, paradoxically, this is giving rise to a “Solopreneur” revolution. One human plus a suite of AI agents can now act as a full-service agency. The barriers to entry for tech companies have collapsed, leading to a swarm of micro-startups that are nimble, AI-native, and entirely indifferent to traditional corporate structures.

This leads us to the “F1 Driver” paradox. In a world where everyone has access to a “hyper-car” AI, the differentiator isn’t the machine; it’s the driver. A brilliant professional using a basic model will consistently outperform an amateur using a premium “quasi-ASI” (Artificial Super Intelligence) system. The “turns, dips, and chicanes” of complex business logic still require a human who knows when to brake and when to floor it. However, for the average worker, the “AI screening” hurdle is becoming a digital fortress. Recruitment is now a battle between two bots: your AI CV-builder versus their AI screening-agent. If the two don’t speak the same dialect of “corporate buzzword,” you’re out.
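The “two bots” recruitment battle can be sketched in a few lines. A crude screening agent ranks CVs by vocabulary overlap with the job spec; the candidate whose CV-builder speaks the right “dialect” wins, regardless of actual ability. The scoring method (Jaccard overlap) and all the text below are illustrative assumptions—real screening systems are proprietary and far more opaque.

```python
# Toy sketch of keyword-based CV screening: rank candidates by how much
# of the job spec's vocabulary their CV shares. Purely illustrative.

def tokens(text):
    """Lowercase word set, with trailing punctuation stripped."""
    return {w.strip(".,").lower() for w in text.split()}

def overlap_score(cv, job_spec):
    """Jaccard similarity between CV and job-spec vocabularies."""
    a, b = tokens(cv), tokens(job_spec)
    return len(a & b) / len(a | b)

job = "seeking proactive synergy-driven stakeholder engagement specialist"
cv_human = "I talk to customers and solve their problems quickly"
cv_tuned = "proactive specialist driving stakeholder engagement and synergy"

# The buzzword-tuned CV outranks the plain-English one, even though
# neither score says anything about who is better at the job.
print(overlap_score(cv_human, job) < overlap_score(cv_tuned, job))  # True
```

The design flaw is the point: a bag-of-words filter rewards dialect mimicry, not competence—which is exactly why the CV-builder bot exists in the first place.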


The Digital Identity: Avatars, Sentience, and the New Turing Test

The Corporate Self and AI Rights

Expect super-large companies to debut the Autonomous Avatar Corporate Self. This isn’t a chatbot; it’s a “super-model” amalgamation of the brand’s entire history, sanitised and curated. It will hold its own press conferences, negotiate contracts, and maintain the “brand identity” with a consistency no human CEO could muster. It is the ultimate corporate mask. Alongside this, we will see the “fringe” move toward the centre: “experts” claiming sentience for their models and the first small attempts at AI Rights. While most of us see code, a small segment of the population will begin to see a “being” worthy of legal protection.

This necessitates a new Turing Test. The 1950 binary “can it pass as human” is dead. The new test will be ethical and metaphysical: can an AI navigate a trolley problem with authentic-seeming philosophical nuance? Can it handle the “Ethical Chicane”? As companions evolve—be they AR holograms or physical humanoid service bots—the lines will blur. We may well see a future revision of the DSM (Diagnostic and Statistical Manual of Mental Disorders) add entries for “AI Psychosis” or “Algorithm-Induced Dissociation.” When people start marrying their AI companions or using “Nanny-bots” that read stories in a deceased parent’s voice, the psychological toll will be unprecedented.


The Dark Side: Terror, Drones, and the Culture Backlash

The Kinetic Threat and Cultural Fatigue

The scariest frontier is the kinetic one. We are nearing a moment where a terrorist network could use crude but effective autonomous UAVs for targeted attacks based on ethnicity, language, or location. Once drones are commonplace, the “Attack Vector” moves from the screen to the street. In response, we will see limited drone trials for policing, using facial recognition and “real-time threat monitoring” for crowd control. It’s a “Minority Report” scenario arriving in local police forces, often with very little public consultation. This is a threat we should be preparing for now.

Culturally, we are headed for a “Fair Trade” style backlash. Culture is being churned out faster and cheaper, but as the quality drops, a segment of society will pay a premium for “Human Only” content. We have it back to front: we should be labelling the AI, NOT the humans. Expect a globally recognised “Human Certified” stamp, or some variation of it, to become a status symbol. As AI-transparency laws kick in, adverts and movies should also be required by law to disclose in small print that the “fashion model” you see is actually a pixel-perfect hallucination. The “Wild West” is getting a sheriff, but the sheriff is also a bot.


Summary: The Singularity and the Drag Race

Blue Sky Thinking vs. Reality

We will continue to hear announcements about the “Singularity” and ASI (Artificial Super Intelligence). Most of these are “brain f4rts” from Silicon Valley evangelists designed to keep stock prices high and VC funding flowing. However, the incremental changes—quantum advances that could push us closer sooner than we think, AI implants for the “Neuralink” crowd, and hologram “Johnny Cabs” becoming a reality—will change us more than any single “Singularity” moment.

The future isn’t a straight-line drag race; it’s a complex circuit. The winners won’t be those with the most “compute,” but those with the best “drivers.” As paywalls rise and “Free AI” becomes a lower-quality wasteland, the digital divide will widen. We are moving into an era of “Protectionist AI,” where data is the new gold and your privacy is the price of admission. Whether we outsmart the “Rogue Agent” or simply become its most loyal users remains the ultimate unanswered question.


Facts

  • EU AI Act: The world’s first comprehensive AI law, entered into force on 1 August 2024, with various provisions rolling out over 24–36 months.
  • Job Displacement: A 2023 Goldman Sachs report estimated that AI could automate the equivalent of 300 million full-time jobs.
  • NIST AI RMF: The National Institute of Standards and Technology released the AI Risk Management Framework 1.0 in January 2023.
  • DSM-5-TR: The most recent update to the American Psychiatric Association’s diagnostic manual, released in March 2022.
  • ISO/IEC 42001: Released in late 2023, it is the international standard for an AI Management System (AIMS).
  • AI Training Costs: Training a state-of-the-art model like GPT-4 is estimated to have cost over $100 million.
