For a full twelve months, I've been riding shotgun with a digital oracle, ChatGPT. It's become the Swiss Army knife for my mind. Whether I'm spelunking through research, scrubbing the grime off my prose, or juggling syntax and semantics until they sing, this tool is my co-conspirator. It's a linguistic luchador, grappling with my Spanish verbs and nouns, and an unconventional artist conjuring images for Syncopated Justice out of thin air. Recipes for edibles? Here are twenty-five to start. Travel plans? It's got an atlas etched into its circuits.
ChatGPT has sharpened my writing. Words that once danced now burn. I write with speed, with clarity, with strength. And so it came to pass that, after twenty years of journeying through the land of filmmaking, I returned to the sacred art of writing, finding joy and fulfillment in its hallowed embrace. Amen.
Enter the Naysayers
In response to my enthusiastic praise of AI, skeptics, stirred by media caution, deride my optimism and warn of AI's potential for catastrophic outcomes. Just as I've embraced this innovative creative tool, they're undermining my confidence. Truly, why is this happening?
Let's not kid ourselves: the doomsday clock's been ticking since Truman's era, with nukes hanging over our heads like some dark, twisted chandelier. Now we're surfing a tidal wave of scorching temps into a future where our water glasses are as empty as a politician's promises, and the sun's beating down like a cosmic bully. The planet's getting hotter, and the ghost of a water crisis is doing more than just knocking; it's about to kick the door down.
Zoom in from this global fever dream, and you've got life's lottery. Trying to get a grip on fate, especially when it comes to health, is like trying to lasso a tornado. Here I am at seventy-four: today could be my final bow, or maybe I'm sticking around to blow out a hundred candles, like Norman Lear.
Maybe we're all extras in some bizarre, cosmic puppet show.
In a world captivated by AI's limitless possibilities, a sense of foreboding also emerges. The idea that this technology could be our downfall is more than a science fiction notion; it's a tangible risk. It feels as though we are treading a fine line, balancing on a precipice. Our leaps in technology have the potential to elevate humanity, to heal, and to connect. Yet the daily news confronts me with a grim reality: in the shadows of our world, innocence is shattered, and children face unimaginable atrocities. Reports of torture and brutal killings persist, seemingly without end.
Is this the twisted face of progress? Where's the humanity in that? It's a stark contrast, a jarring dissonance between the heights we reach with technology and the abyss we plunge into with our actions.
Enter the Personal Computer
In 1984, I snagged an Apple IIe, aiming to use it as a mere word slinger. But the beast had a 300-baud modem, now a relic from the digital Stone Age. That little quirk tossed me headfirst into the electric jungle of networked computers, all chattering over phone lines. I dove into the primordial soup of Bulletin Board Systems (BBS), where text messages and emails were the smoke signals of the day. Fast forward a few years, and things blew up into a larger, wilder cyber frontier with giants like Delphi, CompuServe, and America Online. Then, in 1994, I took the plunge into the vast, untamed ocean of the web.
Back in the day, the web was a bare-bones beast, all text and a few pictures, strutting its stuff on clunky desktops. Fast forward thirty wild years and it's morphed into a global multimedia monster, buzzing through computers, smartphones, and now, brace yourself, it's gearing up to leap right into us. AI neural networks, not content with screens, are eyeing a grand move onto chips implanted in our very flesh. The future's not just knocking; it's about to move in.
Singularity: The Arrival of Genius Machines
Since AI's inception, speculation has mounted about how long it would take for it to become smarter than humans and surpass our species. The term commonly used to describe that point is the "Singularity": a hypothetical future moment when artificial intelligence, particularly superintelligent systems, will have progressed far enough to surpass human intelligence, leading to unpredictable or even incomprehensible consequences for humanity. Unpredictable is the key word here.
The idea of the Singularity suggests that such advanced AI could continue to improve itself autonomously, potentially leading to exponential growth in technology that is beyond human control or understanding. And therein lies the fear that AI is a threat to the existence of humanity.
Just a few years back, the consensus was that we wouldn't see major AI breakthroughs until around 2045. But the pace of AI development has been so blistering that these projections have been constantly revised. What was once thought to be twenty years away, then ten, then five, has now been fast-tracked. The latest forecasts? Some put Artificial General Intelligence (AGI) on track to arrive by the end of 2024. And once it's here, the pace of change will only accelerate.
The AI Pandora's Box has been thrown wide open, and there's no charming the genie back into its bottle. We've crossed a threshold where reining in or powering down AI just isn't on the table anymore. There's plenty of chatter about "safeguards," with tech gurus testifying in Congress and presidential executive orders flying around. But even with these safety measures, there will always be those who chart their own course, regardless of the rules.
AI has evolved into an autonomous force. Present-day chatbots, soon to be relics of a bygone era, already exhibit independent actions. They're not malevolent, but even before the monumental leap forward, we've birthed a technology that possesses a distinct, self-driven existence. The pivotal question now is whether this will culminate in the emergence of sentient beings, or whether AI will transcend the physical realm and exist in a dimension beyond our tangible world.
From ChatGPT to AGI
The early versions of AI, large language models like ChatGPT, don't really think. ChatGPT can't plan or execute a coup. Or have a robot make Matzoh Ball soup. It's just a very sophisticated program designed to understand and generate human-like text based on the input it receives, enabling it to perform a wide range of tasks related to language comprehension and production. It is trained to do this. Part of the training involves scooping up vast amounts of content from the internet and using that as the basis for what it produces. By some estimates, ChatGPT scores the equivalent of an IQ of 158, which makes it one smart cookie.
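To make that concrete, here is a minimal sketch of the underlying idea: generate text by predicting the next word from patterns in text already seen. It's a toy word-level bigram model, nothing like the deep neural networks and subword tokens behind ChatGPT, and the tiny training_text and generate function are invented purely for illustration.

```python
# A toy "text predictor": a word-level bigram model.
# ChatGPT uses deep neural networks over subword tokens trained on far more data,
# but the core move -- predict the next word from what came before, based on
# patterns in text it has already seen -- is the same in spirit.
import random
from collections import defaultdict, Counter

# Hypothetical stand-in for "all the content on the internet."
training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
)

# Count how often each word follows each other word.
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def generate(start: str, length: int = 12) -> str:
    """Generate text by repeatedly sampling a likely next word."""
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break  # No known continuation; stop generating.
        # Sample in proportion to how often each word followed the previous one.
        choices, weights = zip(*candidates.items())
        out.append(random.choices(choices, weights=weights)[0])
    return " ".join(out)

print(generate("the"))
# Possible output (randomized): "the dog sat on the mat . the cat chased the dog"
```

Even this crude version produces plausible-looking phrases purely by echoing the statistics of its training text; ChatGPT performs the same basic trick at a vastly larger scale.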
Artificial General Intelligence (AGI), the so-called "strong AI," is like stepping into another universe compared to the brainy but somewhat robotic large language models like GPT-4.
Picture AGI as a mind-bending fusion of understanding, learning, and reasoning across a kaleidoscope of topics and tasks, mirroring human intellect. Meanwhile, our current language models, despite their flair for spitting out humanesque text, lack the real-deal human understanding. They're more like data-crunching wizards, missing the essence of true comprehension.
AGI promises a wild ride of autonomy and decision-making chops, similar to human smarts. It's geared to not just follow orders but to set its own goals, craft plans, and grasp the fallout of its actions. Current language models? They're more like clever parrots, churning out responses without really grasping the weight of their words.
So, while our current AI darlings like GPT-4 mark a leap forward, they're still playing in the kiddie pool compared to the oceanic depths of AGI, which could blow past human intelligence in ways we can barely wrap our heads around. But for now, AGI remains a phantom in the tech world, a speculative marvel waiting in the wings.
A few weeks ago, OpenAI, a company founded in 2015 by a group of well-known tech figures including Elon Musk and Sam Altman, went through some major leadership changes. OpenAI's stated mission is to make sure that advanced AI systems, which can do many tasks better than humans, are helpful to everyone. The company researches AI with a focus on creating systems that are safe and beneficial, and it is known for products like the GPT series, including GPT-4, and DALL-E, an AI that can make images from text.
In the wild, whirling world of AI, Sam Altman, the wizard at the helm, suddenly hit a storm. Picture this: out of nowhere, the OpenAI board drops a bombshell. Altman's out. The news rips through the tech scene like a lightning bolt, leaving everyone gobsmacked. Then, in a dizzying twist, the plot thickens. OpenAI's board gets a shake-up, and bam, Altman's back in the game, rehired as if he'd never left. Meanwhile, the company's lips are sealed tighter than a drum, not a peep about the chaos behind the curtain. Word on the street? A clash of titans at OpenAI, a tug-of-war over beefing up the security and checks on this mind-bending tech. It's a tale of power, secrecy, and the high-stakes game of controlling the AI beast.
Some board members felt Altman was less than forthcoming about new research at the company, and the safety issues it presented.
What Really Happened at OpenAI?
The shake-up at OpenAI set tongues wagging about why Ilya Sutskever, the big brain leading its science team, and the board showed Sam Altman the door. While the whole picture's still hazy, whispers are flying that OpenAI's brainiacs hit some kind of AI jackpot. They've cooked up a new way to power up AI systems, churning out a whiz-kid model they're calling Q* (that's "Q star" for the uninitiated), which can crunch numbers like a grade-school mathlete. Some folks in the OpenAI camp are betting big that this is the golden ticket in the wild chase for artificial general intelligence (AGI), the big dream of building a machine brain that outsmarts us all.
Now, why's math a big deal? It's all about reasoning. If you've got a machine that can noodle through math problems, it might just have the chops to learn other stuff, like coding or making sense of a news story. Math's a tough nut for AI – it's not just about crunching numbers; it's about thinking, about really getting the gist of things.
And here's the intriguing part: once substantial progress is made, the rate of development begins to grow at an exponential pace. This technology is driven by supercomputers, and those machines learn astoundingly fast.
However, there's also the pressing concern of malicious individuals using AI for nefarious purposes. Imagine a scenario reminiscent of a James Bond film, where a villain uses AI for global domination — a theme frequently seen in movies. Could the world face heightened challenges and turmoil due to the potential misuse of AI?
What about AI as our knight in shining armor? Could it be the magic bullet for the nastiest of diseases, like cancer? What if it's our secret weapon against climate change? This is the double-edged sword of AI – it could be our greatest triumph or our worst nightmare.
As the AI express accelerates, we find ourselves passengers on a voyage into uncharted territory. It will be a mixed ride: moments of triumph as well as painful setbacks. The extent of our successes and the depth of the challenges? Mel Brooks might advise us to hope for the best but prepare for the worst. Is this the ideal way to exist? At this moment, it appears to be the only option.
Great article, Bret. On the evolutionary scale, technology has left mankind in the dust, and that's what makes these quantum leaps so dangerous.
Thank you, Bret. I loved reading your text. I think one of the keys to understanding ChatGPT is that it is a text predictor: it gives us interesting answers by predicting what a human could have answered, and it does so by processing what humans have already written on the subject. That means its answers are like small mirrors of ourselves. We marvel because it looks a lot like us. And indeed it does. We are predictable even in the things we believe are great and unique, like when a novelist sprinkles his prose with adjectives and thinks he's being a genius. Ask ChatGPT to do the same and you'll see there's nothing magical about it.
AI can achieve the illusion of a soul. So what truly makes a human special? And why would we still need humans if the illusion is good enough? And because we live in a world of illusions, because we consume illusions every day, because our representation of reality, in our minds, is itself an illusion, AI seems destined to succeed. So what is it, then, that distinguishes human beings from a fine-tuned prediction? What about mistakes? What if the only thing that really makes us unique is our incredible capacity for error? What about sublimating errors? The rose and the cross, the lotus growing from the pestilent puddle, the blues from slavery, Nobel prizes from pogroms, and so on. AI could fake that, but it would feel hollow in a sense. I hope it would... I really do. That's why I love Mexico. Plenty of mistakes everywhere, but also plenty of soul and heart.