Grasp-Anything++

Grasp-Anything++ [1] extends Grasp-Anything with 10 million grasping instructions and their associated ground-truth grasp annotations.

Our dataset supports the language-driven grasping task, allowing robots to grasp specific objects based on natural-language commands.
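To make the task concrete, the sketch below shows what one language-driven grasping sample might look like: an image, a text instruction, and a ground-truth grasp. The field names and the rectangle parameterization `(x, y, w, h, theta)` are illustrative assumptions, not the dataset's actual schema.

```python
from dataclasses import dataclass
from typing import Tuple

# Hypothetical sample layout for illustration only; field names and the
# grasp-rectangle parameterization are assumptions, not the real format.
@dataclass
class GraspSample:
    image_path: str    # scene image
    instruction: str   # free-form language command
    grasp: Tuple[float, float, float, float, float]  # (x, y, w, h, theta)

sample = GraspSample(
    image_path="scene_000001.jpg",
    instruction="grasp the mug by its handle",
    grasp=(120.0, 85.5, 40.0, 20.0, 0.3),
)
print(sample.instruction)  # → grasp the mug by its handle
```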

Samples

samples

Language-driven Grasping

method

We introduce a new language-driven grasp detection method built on conditionally guided diffusion models [2], trained with a new contrastive objective.
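A contrastive objective of the kind mentioned above typically pulls matched text/grasp feature pairs together and pushes mismatched pairs apart. The sketch below is a generic InfoNCE-style loss over a precomputed similarity matrix; it illustrates the idea only and is not the paper's exact formulation.

```python
import math

def info_nce(sim_matrix, temperature=0.07):
    """Generic InfoNCE-style contrastive loss over a similarity matrix.

    sim_matrix[i][j] is the similarity between text embedding i and
    grasp feature j; matched pairs lie on the diagonal. Plain-Python
    sketch of a standard contrastive objective, not the paper's loss.
    """
    n = len(sim_matrix)
    loss = 0.0
    for i in range(n):
        logits = [s / temperature for s in sim_matrix[i]]
        m = max(logits)  # subtract the max for numerical stability
        denom = sum(math.exp(l - m) for l in logits)
        # negative log-probability of the matched (diagonal) pair
        loss += -(logits[i] - m - math.log(denom))
    return loss / n

# Matched pairs score highest, so the loss is near zero.
sims = [[1.0, 0.1, 0.1],
        [0.1, 1.0, 0.1],
        [0.1, 0.1, 1.0]]
print(info_nce(sims))
```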

References

  • [1] An Dinh Vuong, Minh Nhat Vu, Baoru Huang, Nghia Nguyen, Hieu Le, Thieu Vo, Anh Nguyen. Language-driven Grasp Detection. In CVPR, 2024.

  • [2] An Dinh Vuong, Minh Nhat Vu, Toan Tien Nguyen, Baoru Huang, Dzung Nguyen, Thieu Vo, Anh Nguyen. Language-driven Scene Synthesis using Multi-conditional Diffusion Model. In NeurIPS, 2023.
