Alphabet and Google CEO Sundar Pichai takes his time getting to the point in a new Financial Times editorial, but when he does, he leaves little room for debate: “…there is no question in my mind that artificial intelligence needs to be regulated. It’s too important not to.”
After describing his personal connection to technology and offering several examples of innovation producing unintended negative consequences, Pichai makes the case that AI, by contrast, is both powerful and useful, and that we should balance its “potential harms… with social opportunities.” That call for “balance” leaves open the question of how strict the regulation Pichai has in mind would be. He does not explicitly endorse the White House’s recent calls for a light touch, nor does he suggest the EU’s more extensive proposals go too far.
Instead, he stresses that international agreement on regulatory principles is vital, and he appears to suggest that Alphabet’s own internal handling of AI could serve as a guideline. He insists that the rules and systems the company has put in place help it avoid bias and prioritize people’s safety and privacy — though how successful Alphabet has actually been on those fronts is debatable. He also says the company will not deploy AI “to support mass surveillance or violate human rights.”