common sense

"there is no arguing with one who denies first principles"

Thursday, July 28, 2022

Is Blake Lemoine On to Something?

 


Blake Lemoine was fired early this week. 

He’s the Google engineer who claimed an Artificial Intelligence chatbot (LaMDA) was sentient. After he leaked this claim to the press, Google placed him on leave. That was over a month ago. I watched a short interview he did on a San Francisco tech news show. He comes off as quite genuine and honest, not like a guy with an axe to grind against his employer.

When I first saw the headline about a sentient machine, it shocked me. Have we come this far? For the record, I don’t believe the AI bot is sentient, for the simple reason that only God can create life. I’m sure the algorithm is impressive, but life is exclusive to the Creator. I don’t know enough about Artificial Intelligence to pick the mechanics apart. But at its core LaMDA is a complex machine that draws on enormous amounts of data to answer questions. It’s a pattern recognition machine, similar in kind to Siri or Alexa.
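To give a sense of what I mean by pattern recognition, here’s a toy sketch in Python of the general idea. This is my own illustration, not LaMDA’s actual design, and the little training text is made up: the program simply counts which word tends to follow which, then “answers” by echoing those patterns.

    from collections import Counter, defaultdict

    # Made-up "training" text; a real system ingests billions of words.
    training_text = (
        "the machine answers questions . "
        "the machine finds patterns . "
        "life is a gift from the creator ."
    )

    # Count which word follows which (a crude bigram model).
    follows = defaultdict(Counter)
    words = training_text.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

    def predict_next(word):
        """Return the word most often seen after `word` in the training text."""
        options = follows.get(word)
        return options.most_common(1)[0][0] if options else "<unknown>"

    print(predict_next("machine"))   # "answers" -- a pattern it has seen
    print(predict_next("sentient"))  # "<unknown>" -- it can only echo what it was fed

The real thing is vastly more sophisticated, but the principle is the same: the output is a function of whatever text the machine was fed.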

Lemoine’s job was to find and fix bias in the system. He asked questions to determine where the problems or gaps in the information were. He explains some of his methods here.

The details of how the AI functions are not in dispute. The dispute is over ‘reasoning’ in computers and what constitutes a sentient being. This isn’t reasoning as we understand it. I don’t think this AI rises to the level of sentience.

The most interesting part of his TV interview was about eliminating the significance of culture. It’s more an ethical question than anything else. Let’s say I’m a big believer in baseball as the most advanced version of human competition, and I’m tasked with writing a short history of the United States from 1900 to 1950 (stay with me). Baseball will feature prominently in my view of how the country developed. I’ve already told you I think it’s a revolutionary achievement in human competition. But I bias the history by focusing so heavily on this one aspect. Others would scarcely mention baseball, thinking it closer to a pastime for kids or a club sport.

In other words, my baseball-dominated history reflects a cultural norm. Cultural norms become inputs when you train a machine.
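Here’s a hedged illustration of that point, again in Python with invented data: ask two “models,” each trained on a different little corpus, what the important themes of the era were, and the baseball fan’s corpus gives a very different answer.

    from collections import Counter

    # Invented corpora for illustration only -- this is not real training data.
    baseball_fan_corpus = (
        "baseball world series baseball ruth pennant baseball "
        "war depression baseball"
    ).split()

    other_historian_corpus = (
        "war depression industry suffrage war jazz migration radio"
    ).split()

    def top_themes(corpus, n=3):
        """Rank 'themes' purely by how often they appear in the training text."""
        return [word for word, _ in Counter(corpus).most_common(n)]

    print(top_themes(baseball_fan_corpus))    # baseball dominates the "history"
    print(top_themes(other_historian_corpus)) # baseball never comes up

Swap the corpus and you swap the worldview; the machine itself has no opinion either way.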

What’s the prevailing cultural norm in the Western world of philosophy or religion? Secular humanism. Remember what AI is really supposed to do: search for answers to our questions and suggest answers based on available inputs. It knows and we don’t. Any information it draws on, it’s been fed. I’m sympathetic to Lemoine’s argument about destroying cultural beliefs. I’m a Christian after all. But I don’t worry that my belief will wither in the light of scientific inquiry. I worry that the only information available is highly biased in favor of secular humanism.

Like my baseball-slanted history, a machine that’s fed anti-Christian information will discount religious belief. As a practical matter it will erase cultural significance. That doesn’t just mean religion; culture is a mix of history, philosophy, and human nature.

We know that tech companies heavily curate news and information on their search engines. The Hunter Biden laptop story disappeared for a short time on Twitter during the 2020 election. That’s just one example of manipulating results, and it happened in real time. Too much control over information leads to highly biased results. Even if we all subscribe to a secular view of humanity right now, we probably won’t in 50 years. That isn’t so much a flaw in the belief system as a practical way to look at history. Views about humanity, religion, and philosophy shift and change over time.

But I wouldn’t suggest an alternative way to design AI machines. It’s still too new. Blake Lemoine carries the title “AI Ethicist,” even though the term is probably not even a decade old. With technology advancing faster than we can understand it, it’s important to establish first principles before going ahead with new designs.

In a world where secular humanists run the show, I’m concerned. It might be easier to ask the designers “What should we not do?” with regard to development. Get a sense of where the boundaries are in their heads and work back from there.

They could always be lying of course, but at least you’ll have something for the record. If Google (and others) can’t define boundaries I’d be extremely worried.

Blake Lemoine could be a gadfly for all I know. Maybe Google is happy to be done with him and hopes the story goes away. But they hired him for delicate work and trusted him with critical infrastructure. He raises curious ethical questions about AI, and I hope he continues to work in the same field, even if his ideas about what constitutes a human are skewed.
