MIT debuts a large language model-inspired method for teaching robots new skills


MIT this week showcased a new model for training robots. Rather than the standard focused datasets used to teach robots individual tasks, the method goes big, mimicking the massive troves of information used to train large language models (LLMs).

The researchers note that imitation learning — in which the agent learns by watching an individual perform a task — can fail when small challenges are introduced. These could be changes in lighting, a different setting, or new obstacles. In those scenarios, the robots simply don’t have enough data to draw upon in order to adapt.

The team looked to models like GPT-4 for a kind of brute force data approach to problem solving.

“In the language domain, the data are all just sentences,” says Lirui Wang, the new paper’s lead author. “In robotics, given all the heterogeneity in the data, if you want to pretrain in a similar manner, we need a different architecture.”

The team introduced a new architecture called Heterogeneous Pretrained Transformers (HPT), which pulls together information from different sensors and different environments. A transformer then consolidates that heterogeneous data into a single training model, and the larger the transformer, the better the output.

Users then input the robot design, configuration, and the job they want done.
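Conceptually, an architecture like this maps each sensor stream through its own input encoder ("stem") into a shared token space, runs everything through one shared trunk, and decodes actions with a robot-specific output head. The sketch below illustrates that stem/trunk/head split in plain NumPy; all dimensions, names, and the mean-pooling stand-in for the transformer trunk are hypothetical simplifications, not MIT's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 64  # hypothetical shared token width

# Modality-specific "stems": project each sensor stream into the shared token space.
stems = {
    "vision": rng.normal(size=(512, D)) * 0.01,   # image features -> tokens
    "proprio": rng.normal(size=(7, D)) * 0.01,    # 7-DoF joint state -> tokens
}

# Robot-specific "head": map the trunk's output to that robot's action space.
head = rng.normal(size=(D, 7)) * 0.01  # 7-dim action vector (hypothetical)

def trunk(tokens):
    """Stand-in for the shared transformer trunk: here, just mean-pool the tokens."""
    return tokens.mean(axis=0)

def policy(observations):
    # 1) Each modality passes through its own stem, yielding tokens of width D.
    tokens = [obs @ stems[name] for name, obs in observations.items()]
    # 2) Tokens from all modalities are concatenated and fed to one shared trunk.
    pooled = trunk(np.concatenate(tokens, axis=0))
    # 3) A robot-specific head decodes the shared representation into actions.
    return pooled @ head

obs = {
    "vision": rng.normal(size=(16, 512)),  # 16 image-feature tokens
    "proprio": rng.normal(size=(1, 7)),    # one joint-state reading
}
action = policy(obs)
print(action.shape)  # (7,)
```

The point of the split is that new sensors or new robots only require a new stem or head; the trunk — the part that benefits from scale — is shared across all of them.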

“Our dream is to have a universal robot brain that you could download and use for your robot without any training at all,” CMU associate professor David Held said of the research. “While we are just in the early stages, we are going to keep pushing hard and hope scaling leads to a breakthrough in robotic policies, like it did with large language models.”

The research was funded, in part, by Toyota Research Institute. Last year at TechCrunch Disrupt, TRI debuted a method for training robots overnight. More recently, it struck a watershed partnership that will unite its robot learning research with Boston Dynamics hardware.

Brian Heater is the Hardware Editor at TechCrunch. He worked for a number of leading tech publications, including Engadget, PCMag, Laptop, and Tech Times, where he served as the Managing Editor. His writing has appeared in Spin, Wired, Playboy, Entertainment Weekly, The Onion, Boing Boing, Publishers Weekly, The Daily Beast and various other publications. He hosts the weekly Boing Boing interview podcast RiYL, has appeared as a regular NPR contributor and shares his Queens apartment with a rabbit named Juniper.
