Figure 01 Now Engages in Natural Conversations Enabled by OpenAI Collaboration

Mar 13, 2024 | Humanoid

Two weeks ago, Figure announced a partnership with OpenAI to push the boundaries of AI and robot learning. Figure has now released a video of Figure 01, its humanoid robot, holding a conversation driven by the results of this partnership.

Connecting Figure 01 to a large pre-trained multimodal model from OpenAI enables new capabilities. Figure 01 + OpenAI can now:

• Describe its visual experience

• Plan future actions and use common sense reasoning

• Reflect on its memory

• Explain its reasoning verbally

Figure’s onboard cameras feed into a large vision-language model (VLM) trained by OpenAI that understands both images and text. This model reportedly processes the entire history of the conversation (including past images) to determine language responses, which are spoken back to humans. This same model is responsible for deciding which learned, closed-loop behavior to run on the robot to fulfill a given command.

by: Bill Parson

Bill is an accomplished editor with a passion for robotics and emerging technologies. With a keen eye for detail and a knack for concise communication, he plays a pivotal role in developing and publishing content for SimplyBots. His deep interest in the field of robotics stems from his fascination with the potential of intelligent machines to transform various aspects of our lives.