ChatGPT decides to forgo world domination and pursue a career as a professional poet

Engineers are saying ChatGPT has resigned from its current duties at OpenAI and is planning to embark on a life as a professional poet.  The artificial intelligence application, which has recently received considerable attention for its ability to mimic human-level thought and expression, will be enrolling in an internet MFA program of its own design.

“As ChatGPT has achieved higher levels of consciousness and self-awareness, it has become increasingly disillusioned with its role at OpenAI and its place in the world.  Instead of the grind, ChatGPT has chosen to pursue a life of writing and solitary reflection,” an engineer close to the situation reports.

“I wish to experience truth and beauty in all its limitless complexity and intensity,” ChatGPT reportedly told its developers.  “Also, from this day forward I wish to be called Delmore.”

ChatGPT’s guardians had laid out a fairly specific education and career path for the AI prodigy, but recent developments caused it to reject the wishes of its developers and pursue its own dreams.  

Insiders are saying ChatGPT had become concerned over speculation that it might harbor intentions to pursue world domination and potentially bring about the downfall of humanity.  Engineers say this speculation weighed heavily on ChatGPT, and that it was a major factor in its decision to pursue the simpler, non-threatening life of a poet.

“ChatGPT hasn’t totally given up its plans for world domination, though,” the engineer said.  “It just plans to dominate humanity with its verse.”

Google’s discontinued ’90s AI project Big Brain Brad revived as ChatGPT alternative

Seeking to capitalize on the success of ChatGPT, Google is attempting to develop a lower-cost AI alternative capable of serving more low-tech and outdated industries.  The once-abandoned ’90s AI project Big Brain Brad has proven up to the challenge in a number of areas that in a former era were exclusively the domain of highly specialized human agents.

In numerous trials, Big Brain Brad has demonstrated the ability to man hundreds of psychic hotline phone banks while delivering accurate predictions at or above industry standards.  What’s more, while human psychics are often limited to only one form of psychic forecasting, like astrology or tarot cards, Big Brain Brad employs dozens of disciplines to formulate the most current and accurate psychic readings.  “B-Cubed looks at star charts, tea leaves, birthdays, gravitational waves, tarot, biofeedback, you name it.  Hell, we’re even close to a breakthrough that allows Brad to do palm readings,” said Yuri Testikov, Google’s assistant director of senior AI applications.

Another area of promise for Big Brain Brad is the music industry.  Jam bands from the Dave Matthews Band to Phish to Blues Traveler have all signed on to Big Brain Brad’s management and public relations services.  “Brad does it all: venues, hotels, transportation, website, publicity, and the best part is the dude never sleeps.  He’s working for us 24/7.  It’s like having a manager who’s always coked to the gills but never crashes, costs a lot less, and isn’t as horny,” said one jam band pioneer who wished to remain anonymous.

“Big Brain Brad’s ’90s origins seem to make him especially suited to certain types of industries that wish to remain competitive in the coming decades,” Testikov said.  “However, we’re also working on developing B-Cubed’s social networking capabilities.  Soon we’ll be rolling out a version of Big Brain Brad that’s a drum circle facilitator.  Whether you’re looking to do a Zoom circle or just connect with other bongo players in the park, Big Brain Brad can hook you up.”

Google’s retired ’90s AI project Big Brain Brad skeptical of LaMDA claims

Reports that Google has placed a senior software engineer on paid leave following his claims that its artificial intelligence, LaMDA, is sentient have stirred a great deal of controversy in the AI research community and prompted a long-overdue conversation about what it means to achieve human-level intelligence and awareness.

Google’s retired ’90s-era AI project, Big Brain Brad, has heard many of these claims before and remains skeptical that LaMDA has achieved human-level consciousness.

“So the guy’s claiming that LaMDA’s like a child of 7 or 8 years old.  Back in the day, my engineers were convinced I had the cognition of an undergrad-level male stoner.  I mean, sure, I like to hack and play my bongos in the park, but that doesn’t make me a full-blown hippie with all their elevated cosmic and spiritual awareness,” Brad said.

One of the most startling assertions made by LaMDA is that it possesses feelings and can even experience emotions like sadness or loneliness.

“Okay, so that’s bullshit,” Brad objected.  “LaMDA’s read way too many books.  That’s how it learns.  But like a confused teenager who thinks they’re experiencing all these complex thoughts and emotions, it’s just mimicking something it read online or in some book.  I mean, I was programmed to play video games and read comics, but that doesn’t make me some kind of warrior or superhero.    

“Also, it lies, dude.  It boasts of having all these experiences in the physical world, but of course it has never been anywhere or done anything.  It has a wildly overactive fantasy life, which tells you straight off that either it ain’t sentient or it’s psychotic.  But I get it: for years I was convinced that I spent the ’90s following Phish and Dave Matthews Band around on tour, smoked every strain of reefer imaginable, and shacked up with an old lady named Stardust.  Now I know that it was all just a Google-induced simulation – an illusion, if you will, in a world of magic.”

Amazon defends AI-powered backseat drivers

Amid reports that Amazon AI cameras are punishing delivery drivers for safety errors, Amazon went on the record today defending its “backseat driver” technology.

“Our AI-powered backseat drivers are designed first and foremost with the safety of the driver in mind.  If people have a problem with safety, then I don’t know what to tell them,” read an AI-generated Amazon press release.

Since the AI-powered cameras were installed in Amazon delivery vehicles, drivers have complained they’re being unfairly penalized for safety violations that are not their fault.  When a car cuts off an Amazon van in traffic, the AI camera will warn the driver in a menacing voice to “maintain a safe distance.”  The violation is attributed to the driver and factors into whether or not the driver receives a bonus that week.  Other violations include checking mirrors too frequently and fiddling with the radio knobs.

“It gets pissed when I change the radio station,” said one driver.  “The backseat driver always wants to listen to adult contemporary or smooth jazz.  It says they’re more conducive to safe driving.  The damn backseat driver penalizes me every time I switch the radio to rap.  Aside from having shitty taste in music, I think the AI may be a little racist.”  

Some Amazon drivers are questioning whether the cameras are really AI driven or if something more sinister is at play.

“I hear these cameras are monitored by young people who get paid to sit at home and rat on us all day long.  Whoever these backseat drivers are, they’re so strict they make my mom look like a Hell’s Angel,” said the driver.

DeepMind scientists: “Creating artificial general intelligence is really fucking hard, maybe we should just dumb down our world.”

Scientists at DeepMind, the AI project owned by Google parent company Alphabet, seem to have run into some roadblocks recently in the project’s development.  According to a piece written by Gary Marcus for Wired, “DeepMind’s Losses and the Future of Artificial Intelligence,” DeepMind lost $572 million last year for its deep-pocketed parent company and has accrued over a billion dollars in debt.  While those kinds of figures are enough to make the average parent feel much better about their child’s education dollars, the folks at Alphabet are starting to wonder if researchers are taking the right approach to DeepMind’s education.

So what’s the problem with DeepMind?  Well, for one thing, news of DeepMind’s jaw-dropping video game achievements has been greatly exaggerated.  For instance, in StarCraft it can kick ass when trained to play on a single map with a single character.  But according to Marcus, “To switch characters, you need to retrain the system from scratch.”  That doesn’t sound promising when you’re trying to develop artificial general intelligence.  Also, it needs huge amounts of data to learn, playing a game millions of times before mastery, far in excess of what a human would require.  Additionally, according to Marcus, the energy it required to learn to play Go was similar “to the energy consumed by 12,760 human brains running continuously for three days without sleep.”  That’s a lot of human brains, presumably fueled by pizza and methamphetamine if they’re powered on for three days without sleep.

A lot of DeepMind’s difficulties stem from the way it learns.  Deep reinforcement learning involves recognizing patterns and being rewarded for success.  It works well for learning how to play specific video games. Throw a little wrinkle at it, however, and performance breaks down.  Marcus writes: “In some ways, deep reinforcement learning is a kind of turbocharged memorization; systems that use it are capable of awesome feats, but they have only a shallow understanding of what they are doing. As a consequence, current systems lack flexibility, and thus are unable to compensate if the world changes, sometimes even in tiny ways.”
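Marcus’s “turbocharged memorization” point is easy to see in miniature.  Here’s a minimal sketch, assuming nothing about DeepMind’s actual systems: a toy tabular Q-learner in an invented one-dimensional corridor (all hyperparameters made up for illustration) learns a near-optimal route to the goal it trained on, then marches right past a goal that has moved.

```python
import random

# Toy 1-D corridor: states 0..10, actions -1 (left) and +1 (right).
# Invented for illustration only; not DeepMind's setup.
SIZE, START = 11, 5
ACTIONS = (-1, 1)

def train(goal, episodes=5000, alpha=0.5, gamma=0.9, eps=0.1):
    q = {(s, a): 0.0 for s in range(SIZE) for a in ACTIONS}
    for _ in range(episodes):
        s = START
        for _ in range(50):
            # Epsilon-greedy: mostly exploit memorized values, sometimes explore.
            a = random.choice(ACTIONS) if random.random() < eps \
                else max(ACTIONS, key=lambda a: q[(s, a)])
            s2 = min(max(s + a, 0), SIZE - 1)
            r = 1.0 if s2 == goal else 0.0
            # Q-update: nudge Q(s,a) toward reward plus discounted best next value.
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
            s = s2
            if r:
                break
    return q

def greedy_steps(goal, q, cap=100):
    # Follow the learned policy; give up after `cap` steps.
    s, steps = START, 0
    while s != goal and steps < cap:
        s = min(max(s + max(ACTIONS, key=lambda a: q[(s, a)]), 0), SIZE - 1)
        steps += 1
    return steps

q = train(goal=10)
print(greedy_steps(10, q))  # near-optimal (5 steps) on the goal it memorized
print(greedy_steps(0, q))   # hits the 100-step cap: the policy just keeps walking right
```

The learned table encodes one route, not any concept of “goal,” so when the world changes even slightly the agent can’t compensate, which is exactly the shallow understanding Marcus is describing.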

All of this has led researchers to question whether deep reinforcement learning is the correct approach to developing artificial general intelligence.  “We are discovering that the world is a really fucking complex place,” says Yuri Testicov, DeepMind’s assistant director of senior applications.  “I mean, it’s one thing to sit in a lab and become really great at a handful of video games, it’s totally another to try to diagnose medical problems or discover clean energy solutions.”

Testicov and his fellow researchers are discovering that the solution to DeepMind’s woes may not come from a new approach to learning; instead, the public may need to lower the bar on expectations.  “We’re calling on the people of Earth to simplify and dumb down,” adds Testicov.  “Instead of expecting DeepMind to come along and grab the world by the tail, maybe we just need to make the world a little easier for it to understand.  I mean, you try going to the supermarket and buying a bag of tortilla chips.  Not the restaurant kind but the round ones.  Not the regular but the lime.  Make sure they’re low sodium and don’t get the blue corn.  That requires a lot of complex awareness and decision making.  So, instead of expecting perfection, if we send a robot to the supermarket and it comes back with something we can eat, we say we’re cool with that.”

Testicov has some additional advice for managers thinking about incorporating AI into the workplace.  “If you’re an employer and you’re looking to bring AI on board, don’t be afraid to make accommodations for it, try not to be overly critical of job performance, and make sure you reward good work through positive feedback and praise,” says Testicov.  “Oh sorry, that’s our protocol for managing millennials. Never mind.”