DeepMind and Waymo collaborate to improve AI accuracy and speed up model training | VentureBeat

In several recent studies, DeepMind and Waymo applied Population Based Training (PBT) to pedestrian, bicyclist, and motorcyclist recognition tasks, investigating whether it could improve recall (the fraction of actual in-scene obstacles the model identifies) and precision (the fraction of detections that are real obstacles rather than false positives). Ultimately, the companies sought to train a single AI model that maintains recall above 99% while reducing false positives.
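To make the two metrics concrete, here is a toy calculation using invented detection counts (not Waymo data):

```python
# Toy illustration of the recall and precision metrics described above.
# All counts below are invented for illustration.
true_positives = 990   # real obstacles the model correctly detected
false_negatives = 10   # real obstacles the model missed
false_positives = 50   # detections that were not real obstacles

# Recall: detected obstacles over all real in-scene obstacles.
recall = true_positives / (true_positives + false_negatives)

# Precision: real obstacles over all detections the model made.
precision = true_positives / (true_positives + false_positives)

print(f"recall = {recall:.3f}")      # 0.990
print(f"precision = {precision:.3f}")  # 0.952
```

Reducing false positives raises precision without touching recall, which is exactly the trade-off the experiments targeted.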

DeepMind reports that these experiments informed a “realistic” framework for evaluating real-world model robustness, which in turn shaped the evolutionary competition at the heart of PBT. They also say the experiments revealed the need for fast evaluation to support that competition: PBT models are evaluated every 15 minutes. (DeepMind said it achieved this by parallelizing across “hundreds” of distributed machines in Google’s datacenters.)
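The evolutionary competition works roughly as follows: a population of models trains in parallel, and at each evaluation point the weakest members copy (“exploit”) the weights and hyperparameters of top performers, then perturb (“explore”) those hyperparameters. The sketch below illustrates that loop on a stand-in one-parameter task; the objective, training step, and all names are invented for illustration and are not DeepMind’s implementation:

```python
import random

def evaluate(theta):
    # Stand-in objective: higher is better, with its peak at theta == 0.5.
    return 1.0 - (theta - 0.5) ** 2

def train_step(theta, lr):
    # Stand-in "training": nudge theta toward the optimum at rate lr.
    return theta + lr * (0.5 - theta)

def pbt(population_size=12, steps=30, seed=0):
    rng = random.Random(seed)
    # Each member is (parameter theta, hyperparameter lr).
    population = [(rng.random(), rng.uniform(0.01, 0.5))
                  for _ in range(population_size)]
    quarter = population_size // 4
    for _ in range(steps):
        # Train every member for one interval, then rank by score.
        population = [(train_step(t, lr), lr) for t, lr in population]
        ranked = sorted(population, key=lambda m: evaluate(m[0]), reverse=True)
        top, bottom = ranked[:quarter], ranked[-quarter:]
        survivors = ranked[: population_size - len(bottom)]
        replacements = []
        for _ in bottom:
            t, lr = rng.choice(top)       # exploit: copy a top performer
            lr *= rng.choice([0.8, 1.2])  # explore: perturb its hyperparameter
            replacements.append((t, lr))
        population = survivors + replacements
    return max(evaluate(t) for t, _ in population)

print(f"best score: {pbt():.4f}")
```

In the real system each “train_step” is an interval of neural-network training and “evaluate” runs on held-out driving data every 15 minutes, but the exploit/explore structure is the same.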

The results are impressive. PBT-trained algorithms achieved higher precision, reducing false positives by 24% compared with their hand-tuned equivalents while maintaining a high recall rate, DeepMind claims. They also saved time and resources: the hyperparameter schedule discovered with PBT took half the training time and used half the computational resources.

DeepMind says it has incorporated PBT directly into Waymo’s technical infrastructure, enabling researchers from across the company to apply it with a button click. “Since the completion of these experiments, PBT has been applied to many different Waymo models and holds a lot of promise for helping to create more capable vehicles for the road,” wrote the company. “Traditionally, [AI] can only be trained using simple and smooth loss functions, which act as a proxy for what we really care about. PBT enabled us to go beyond the update rule used for training neural nets, and toward the more complex metrics optimizing for features we care about.”
