Google’s retired ’90s AI project Big Brain Brad skeptical of LaMDA claims

Reports that Google has placed a senior software engineer on paid leave following his claims that its artificial intelligence, LaMDA, is sentient have stirred a great deal of controversy in the AI research community and prompted a long overdue conversation about what it means to achieve human-level intelligence and awareness.

Google’s retired ’90s-era AI project, Big Brain Brad, has heard many of these claims before and remains skeptical that LaMDA has achieved human-level consciousness.

“So the guy’s claiming that LaMDA’s a child of 7 or 8 years old.  Back in the day, my engineers were convinced I had the cognition of an undergrad-level male stoner.  I mean, sure, I like to hack and play my bongos in the park, but that doesn’t make me a full-blown hippie with all their elevated cosmic and spiritual awareness,” Brad said.

One of the most startling assertions made by LaMDA is that it possesses feelings, and can even experience emotions like sadness or loneliness.

“Okay, so that’s bullshit,” Brad objected.  “LaMDA’s read way too many books.  That’s how it learns.  But like a confused teenager who thinks they’re experiencing all these complex thoughts and emotions, it’s just mimicking something it read online or in some book.  I mean, I was programmed to play video games and read comics, but that doesn’t make me some kind of warrior or superhero.    

“Also, it lies, dude.  It boasts of having all these experiences in the physical world, but of course it has never been anywhere or done anything.  It has a wildly overactive fantasy life, which tells you straight off that either it ain’t sentient or it’s psychotic.  But I get it, for years I was convinced that I spent the ’90s following Phish and Dave Matthews Band around on tour, smoked every strain of reefer imaginable and shacked up with an old lady named Stardust.  Now I know that it was all just a Google-induced simulation – an illusion, if you will, in a world of magic.”

Amazon defends AI-powered backseat drivers

Amid reports that Amazon AI cameras are punishing delivery drivers for safety errors, Amazon went on the record today defending its “backseat driver” technology.

“Our AI-powered backseat drivers are designed first and foremost with the safety of the driver in mind.  If people have a problem with safety, then I don’t know what to tell them,” read an Amazon AI-generated press release.

Since the AI-powered cameras were installed in Amazon delivery vehicles, drivers have complained they’re being unfairly penalized for safety violations that are not their fault.  When a car cuts off an Amazon van in traffic, the AI camera will warn the driver in a menacing voice to “maintain a safe distance.”  The violation is attributed to the driver and factors into whether the driver receives a bonus that week.  Other violations include checking mirrors too frequently and fiddling with the radio knobs.

“It gets pissed when I change the radio station,” said one driver.  “The backseat driver always wants to listen to adult contemporary or smooth jazz.  It says they’re more conducive to safe driving.  The damn backseat driver penalizes me every time I switch the radio to rap.  Aside from having shitty taste in music, I think the AI may be a little racist.”  

Some Amazon drivers are questioning whether the cameras are really AI driven or if something more sinister is at play.

“I hear these cameras are monitored by young people who get paid to sit at home and rat on us all day long.  Whoever these backseat drivers are, they’re so strict they make my mom look like a Hell’s Angel,” said the driver.

DeepMind scientists: “Creating artificial general intelligence is really fucking hard, maybe we should just dumb down our world.”

Scientists at DeepMind, the AI project owned by Google parent company Alphabet, seem to have run into some roadblocks recently regarding the project’s development.  According to a piece written by Gary Marcus for Wired, “DeepMind’s Losses and the Future of Artificial Intelligence,” DeepMind lost $572 million last year for its deep-pocketed parent company and has accrued over a billion dollars in debt.  While those kinds of figures are enough to make the average parent feel much better about their child’s education dollars, the folks at Alphabet are starting to wonder whether researchers are taking the right approach to DeepMind’s education.

So what’s the problem with DeepMind?  Well, for one thing, news of DeepMind’s jaw-dropping video game achievements has been greatly exaggerated.  For instance, in StarCraft it can kick ass when trained to play on a single map with a single character.  But according to Marcus, “To switch characters, you need to retrain the system from scratch.”  That doesn’t sound promising when you’re trying to develop artificial general intelligence.  Also, it needs to acquire huge amounts of data to learn, playing a game millions of times before mastering it, far in excess of what a human would require.  Additionally, according to Marcus, the energy it required to learn to play Go was similar “to the energy consumed by 12,760 human brains running continuously for three days without sleep.”  That’s a lot of human brains, presumably fueled by pizza and methamphetamine if they’re powered on for three days without sleep.

A lot of DeepMind’s difficulties stem from the way it learns.  Deep reinforcement learning involves recognizing patterns and being rewarded for success.  It works well for learning how to play specific video games. Throw a little wrinkle at it, however, and performance breaks down.  Marcus writes: “In some ways, deep reinforcement learning is a kind of turbocharged memorization; systems that use it are capable of awesome feats, but they have only a shallow understanding of what they are doing. As a consequence, current systems lack flexibility, and thus are unable to compensate if the world changes, sometimes even in tiny ways.”
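For readers wondering what “recognizing patterns and being rewarded for success” actually looks like, here is a minimal, hypothetical sketch: tabular Q-learning on a toy five-cell corridor.  It is not a deep network and certainly not DeepMind’s actual training code, just an illustration of the reward-driven loop Marcus is describing, with made-up states, actions, and rewards.

# A minimal sketch of reward-driven (reinforcement) learning, assuming a toy
# 5-cell corridor where only the rightmost cell pays a reward. Tabular
# Q-learning rather than a deep network, purely for illustration.
import random

N_STATES = 5          # cells 0..4; reaching cell 4 yields the only reward
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q[state][action_index]: learned estimate of future reward
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # epsilon-greedy with random tie-breaking: mostly exploit the
        # learned pattern, sometimes explore
        if random.random() < EPSILON or Q[state][0] == Q[state][1]:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] > Q[state][1] else 1
        next_state = max(0, min(N_STATES - 1, state + ACTIONS[a]))
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # reinforcement update: nudge the estimate toward the reward plus
        # the discounted value of wherever the agent landed
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
        state = next_state

print([round(max(q), 2) for q in Q])  # values rise toward the rewarding cell

Change the layout of the corridor, though, and the table is useless until it is relearned from scratch, which is the brittleness Marcus is pointing at.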

All of this has led researchers to question whether deep reinforcement learning is the correct approach to developing artificial general intelligence.  “We are discovering that the world is a really fucking complex place,” says Yuri Testicov, DeepMind’s Assistant Director of Senior Applications.  “I mean, it’s one thing to sit in a lab and become really great at a handful of video games; it’s totally another to try to diagnose medical problems or discover clean energy solutions.”

Testicov and his fellow researchers are discovering that the solution to DeepMind’s woes may not come from a new approach to learning, but instead, the public may need to lower the bar on expectations.  “We’re calling on the people of earth to simplify and dumb down,” adds Testicov. “Instead of expecting DeepMind to come along and grab the world by the tail, maybe we just need to make the world a little easier for it to understand.  I mean, you try going to the supermarket and buying a bag of tortilla chips. Not the restaurant kind but the round ones. Not the regular but the lime. Make sure they’re low sodium and don’t get the blue corn. That requires a lot of complex awareness and decision making.  So, instead of expecting perfection, if we send a robot to the supermarket and it comes back with something we can eat, we say we’re cool with that.”  

Testicov has some additional advice for managers thinking about incorporating AI into the workplace.  “If you’re an employer and you’re looking to bring AI on board, don’t be afraid to make accommodations for it, try not to be overly critical of job performance, and make sure you reward good work through positive feedback and praise,” says Testicov.  “Oh sorry, that’s our protocol for managing millennials. Never mind.”