Bill Gates speaks at unveiling of new human-centered artificial intelligence institute
Former director of the Stanford Artificial Intelligence (AI) Lab Fei-Fei Li and Stanford humanities professor and former Provost John Etchemendy are on a mission to “bring the humanities and social thinking into tech.”
They conceptualized the Stanford Institute for Human-Centered Artificial Intelligence (HAI) in 2016 in the hopes of involving communities from all corners of Stanford to tackle global challenges.
“We can’t do this alone,” Li said. “We need people from all seven schools, especially from the humanities.”
Monday marked the launch of this new institute, a day-long affair bringing together panelists, speakers and attendees from all across the globe and bridging all disciplines. The day’s talks centered around human-inspired intelligence, the societal impact of AI and human augmentation using AI. Panelists included LinkedIn co-founder Reid Hoffman, DeepMind co-founder Demis Hassabis, senior vice president of Google AI Jeff Dean and director of Microsoft Research Labs Eric Horvitz.
While most AI research focuses on technical development, HAI's express goal is to ensure that the progress of AI remains diverse, humanistic and fundamentally human-centered.
“In order to train AI to benefit humanity,” Li said, “the creators of AI need to represent humanity.”
While Gates spoke of AI positively, he also recognized its potential consequences if misused, comparing it in both constructive and destructive power to the development of nuclear fission.
“The world hasn’t had that many technologies that are both promising and dangerous,” Gates said. “We had nuclear weapons and nuclear energy, and so far so good.”
Gates also called for “creativity” in navigating privacy concerns about access to datasets.
“In some domains like education, I am more worried that the privacy concerns, which are appropriate — but if you don’t put much creativity in how you have longitudinal data access while not violating privacy, you are going to default to the datasets not being there,” Gates said.
Unlike nuclear reactors and weapons, which were developed almost entirely by governments, the forefront of AI research occurs in university labs and private companies. As a result, Gates observed, “the government just doesn’t see it in the same way they did with previous technologies.”
Despite the potential dangers of AI, however, Gates’ outlook was optimistic. He spoke at length about the potential uses of AI in interpreting large data sets related to health and education — two areas of major focus for the Bill & Melinda Gates Foundation.
Specifically in Africa, the Gates Foundation has been experimenting with the use of AI to help identify the factors that cause high rates of premature birth and malnutrition.
“With AI we are able to take all that data and find some really low cost intervention,” Gates said.
As an example, Gates spoke of applying machine learning algorithms to the connection between health and the gut biome. The complex, non-linear relationship makes it difficult to deduce causes and effects using traditional data analysis methods.
“If you give kids in some countries an antibiotic once a year that costs two cents called azithromycin, it saves a hundred thousand lives,” Gates said. “I do not believe without machine learning techniques we [would have ever been] able to take the dimensionality of this problem to find the solution.”
While health care interventions and drugs are piloted in the United States, Gates said, it is in Africa that these interventions yield the highest return in reduced human suffering per intervention.
“If everybody in the United States lived to 100, it would not match what we can do in the developing world in terms of net change to human benefit,” Gates said.
In addition to health, Gates described education as a promising area for machine learning algorithms to identify the factors that contribute to a successful teacher or an engaged student. In regard to the current state of machine learning applied to education research, Gates said, “it’s a desert.”
With a good training set, Gates theorized, it might be possible for AI to identify certain consistent causes of a good education.
“With everything we have learned about education, you could still say that the best teacher ever had lived 100 years ago,” Gates said. “You could not say that about doctors.”
More generally, AI offers a “chance to supercharge the social sciences,” he said.