
Sunday, May 07, 2017

Meet the 69-year-old professor who left retirement to help lead one of Google's most crucial projects | CNBC - Tech

  • Professor David Patterson retired from U.C. Berkeley last summer after a 40-year academic career in computer architecture.
  • He's now a key part of the team behind a critical chip that Google uses for artificial intelligence processing.
  • Without this chip, Google's top executives estimated, the company would have had to double its data centers to support even a limited amount of voice processing.

Photo: Ari Levy
"Google brought on board a legend in computer architecture from the University of California to help develop its tensor processing unit," reports Ari Levy, CNBC's senior technology reporter in San Francisco.

David Patterson, former professor at UC Berkeley

A year ago the University of California at Berkeley hosted a retirement celebration for David Patterson, who was hanging it up after a 40-year academic career in computer architecture.

Patterson capped the event last May with a 16-minute personal history, chronicling his days as a high school and college wrestler and a math major at UCLA, followed by a job at Hughes Aircraft and four decades at Berkeley.

From writing two books with Stanford University's John Hennessy to chairing the Computing Research Association, Patterson told the audience that a key to his success was doing "one big thing at a time." 

His next big thing could be enormous.

Rather than hitting the beach after retirement, Patterson joined Google in July to work on an ambitious new chip that's designed to run at least 10 times faster than today's processors and is sophisticated enough to handle the intensive computations required for artificial intelligence.

It's called the Tensor Processing Unit (TPU), and Patterson has emerged as one of the principal evangelists. He spoke to about 100 students and faculty members at the Berkeley campus on Wednesday, a few days shy of the anniversary of his retirement celebration. 

"Four years ago they had this worry, and it went to the top of the corporation," said Patterson, 69, who was sporting a T-shirt for Google Brain, the company's research group. The fear was that if every Android user had three minutes of conversation a day translated using Google's machine learning technology, "we'd have to double our data centers," he said.

Google’s tensor processing unit, or TPU.
Photo: Google
Google parent Alphabet already spends $10 billion a year on capital expenses, largely tied to data center costs. And now it's addressing what it calls a "renaissance in machine learning." Deep neural networks, or computers that are modeled to learn and get smarter over time as data sets get bigger and more complicated, require big breakthroughs in hardware efficiency. 

Patterson, who gave the same talk at Stanford on Thursday, was among the lead authors of a report Google released last month on the TPU's performance. The report concluded that the TPU runs 15 to 30 times faster and 30 to 80 times more efficiently than contemporary processors from Intel and Nvidia.
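The "more efficient" figure refers to performance per watt, i.e., how much computation a chip delivers for each watt it draws. A minimal sketch of that arithmetic, using hypothetical throughput and power numbers chosen for illustration (not the actual measurements from Google's report):

```python
def perf_per_watt(ops_per_sec: float, watts: float) -> float:
    """Efficiency metric: operations per second delivered per watt consumed."""
    return ops_per_sec / watts

# Hypothetical figures for illustration only.
tpu_efficiency = perf_per_watt(90e12, 75)   # e.g., 90 trillion ops/s at 75 W
gpu_efficiency = perf_per_watt(6e12, 150)   # e.g., 6 trillion ops/s at 150 W

ratio = tpu_efficiency / gpu_efficiency
print(ratio)  # 30.0 — a 30x efficiency advantage under these assumed numbers
```

Under these made-up numbers, a chip can be "30 times more efficient" even though its raw speed advantage is smaller, which is why the report quotes the two multiples as separate ranges.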

The paper, written by 75 engineers, will be delivered next month at the International Symposium on Computer Architecture in Toronto.

Source: CNBC