Nvidia AI researchers today introduced SimOpt, an AI system that transfers behavior learned in simulation to real-world robots so they can complete tasks like putting a peg in a hole and opening and closing a drawer. The SimOpt model, which relies on reinforcement learning, was created using Nvidia’s FleX physics simulation engine and more than 9,600 simulations, each of which takes up to two hours to complete.
When a policy trained on synthetic data captured in FleX doesn’t work in the real world, the approach adjusts the simulator’s parameters in hopes that the algorithm will make fewer mistakes in the next round of training.
“The point of doing that is actually creating a faithful copy of the real world in the simulator. Now, with those parameters, you retrain the reinforcement learning,” researcher Ankur Handa told VentureBeat in a phone interview. “Then you go back to the real world and check whether that parameter range worked or not.”
“If not, you go back again, find out which trajectories match with the real world and find out another range. Then you go train an RL on that range, you iterate this process over time until you cannot distinguish … whether this is from the real world or the simulator, and it just works.”
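The article doesn’t include Nvidia’s code, but the loop Handa describes can be sketched in a few lines. The sketch below is illustrative only: the Gaussian parameter distribution, the helper callables train_policy, rollout_sim, and rollout_real, and the elite-selection update are all assumptions, not Nvidia’s actual implementation.

```python
import numpy as np

def trajectory_discrepancy(sim_traj, real_traj):
    """Simple L2 distance between simulated and real state trajectories."""
    return np.linalg.norm(np.asarray(sim_traj) - np.asarray(real_traj))

def sim_to_real_loop(train_policy, rollout_sim, rollout_real,
                     mean, cov, iterations=10, samples=100, elite_frac=0.1):
    """Iteratively narrow a simulation-parameter distribution so that
    simulated rollouts match what the real robot actually does."""
    for _ in range(iterations):
        # 1. Train a policy with RL under the current parameter distribution.
        policy = train_policy(mean, cov)

        # 2. Run the policy on the real robot and record its trajectory.
        real_traj = rollout_real(policy)

        # 3. Sample candidate simulator parameters and score each by how
        #    closely its simulated rollout matches the real one.
        candidates = np.random.multivariate_normal(mean, cov, size=samples)
        scores = [trajectory_discrepancy(rollout_sim(policy, p), real_traj)
                  for p in candidates]

        # 4. Keep the best-matching parameters and refit the distribution,
        #    moving the simulator toward a "faithful copy of the real world".
        elite = candidates[np.argsort(scores)[:max(1, int(elite_frac * samples))]]
        mean, cov = elite.mean(axis=0), np.cov(elite, rowvar=False)
    return mean, cov
```

In this reading, each pass of the loop corresponds to one round of “go back to the real world and check whether that parameter range worked or not.”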
The iterative process of fine-tuning parameters and comparing trajectories from the simulator and real-world tasks relies on depth sensors that track objects in 3D in the real world and produce the estimates used to judge performance.
“What we’re saying is ‘Don’t throw that negative data away.’ It’s still very useful data because it allows you to create a faithful copy of the real world in the simulator and adjust the ranges of the parameters, and you keep doing that until it works,” Handa said.
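The article doesn’t spell out how simulated and real trajectories are compared. One plausible choice is a weighted per-step L1 plus Euclidean error over the 3D object positions the depth sensors track; the sketch below uses that form, with weights and example values that are purely illustrative assumptions.

```python
import numpy as np

def pose_trajectory_discrepancy(sim_positions, real_positions, w_l1=1.0, w_l2=1.0):
    """Compare two sequences of tracked 3D positions, each of shape (T, 3).

    A low score means the chosen simulation parameters reproduce the
    real-world trajectory well; a high score means they should be adjusted.
    """
    diff = np.asarray(sim_positions) - np.asarray(real_positions)
    l1 = np.abs(diff).sum(axis=1).mean()      # mean per-step L1 error
    l2 = np.linalg.norm(diff, axis=1).mean()  # mean per-step Euclidean error
    return w_l1 * l1 + w_l2 * l2

# Hypothetical example: a drawer handle tracked over five time steps.
sim = [[0.00, 0, 0], [0.02, 0, 0], [0.05, 0, 0], [0.09, 0, 0], [0.12, 0, 0]]
real = [[0.00, 0, 0], [0.01, 0, 0], [0.03, 0, 0], [0.06, 0, 0], [0.10, 0, 0]]
print(pose_trajectory_discrepancy(sim, real))
```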
The research introduced today follows the launch of Nvidia’s Isaac robotics line last year and the January launch of the Nvidia robotics lab in Seattle. The lab is used to train robots to work in practical environments, like an Ikea kitchen, and taps talent from the nearby University of Washington.
Dieter Fox was named leader of Nvidia’s robotics lab in 2017. Since then, researchers have produced systems that can learn from watching human activity in a lab environment, as well as learn how to pick up objects from synthetic data.
The work, titled “Closing the Sim-to-Real Loop: Adapting Simulation Randomization with Real World Experience,” will be presented this week at the International Conference on Robotics and Automation in Montreal. Other work submitted to the conference includes papers from Facebook researchers using reinforcement learning to teach a six-legged robot how to walk.
In the months ahead, the researchers plan to test SimOpt in more conditions meant to mimic real-world environments.