Introduction

Welcome to the Grasp-Anything project! We tackle grasp detection by leveraging foundation models, taking a data-centric approach to the problem.

Dataset Comparison

[Figure: comparison of Grasp-Anything with existing grasp datasets]

Grasp-Anything offers universality, featuring a wide range of everyday objects in natural arrangements, unlike other benchmarks limited by object selection and controlled settings.

Statistics

[Figures: number of samples and number of objects per dataset]

Grasp-Anything substantially surpasses other datasets in both the number of samples and the number of categories.

[Figures: POS-tag distribution of scene descriptions and number of categories per dataset]

The POS tags in our dataset are visualized in the figure above, highlighting the diverse vocabulary of our scene descriptions. A comparison of object shape distributions shows that Grasp-Anything covers a wider area than the Jacquard dataset, indicating greater shape diversity.
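As a rough illustration, a POS-tag distribution like the one in the figure can be reproduced with an off-the-shelf tagger. The sketch below uses spaCy; the file path `scene_descriptions.txt` is a hypothetical stand-in for wherever the scene descriptions are stored.

```python
from collections import Counter

import spacy

# Requires the small English model: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

pos_counts = Counter()
with open("scene_descriptions.txt") as f:  # hypothetical path
    for line in f:
        doc = nlp(line.strip())
        pos_counts.update(token.pos_ for token in doc)

print(pos_counts.most_common(10))  # e.g. [('NOUN', ...), ('ADP', ...), ...]
```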

[Figure: object shape distribution of Grasp-Anything vs. Jacquard]
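The "wider area" refers to the spread of shape embeddings in the plot above. Below is a minimal sketch of one way such a comparison could be built; the Hu-moment descriptor and the shared PCA projection are illustrative assumptions, not necessarily the pipeline behind the figure.

```python
import cv2
import numpy as np
from scipy.spatial import ConvexHull
from sklearn.decomposition import PCA

def hu_descriptor(mask: np.ndarray) -> np.ndarray:
    """Seven log-scaled Hu moments of a binary object mask."""
    hu = cv2.HuMoments(cv2.moments(mask.astype(np.uint8))).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)

def shape_coverage(masks_a, masks_b):
    """Project both datasets into one shared 2-D PCA space and compare hull areas."""
    feats_a = np.stack([hu_descriptor(m) for m in masks_a])
    feats_b = np.stack([hu_descriptor(m) for m in masks_b])
    pca = PCA(n_components=2).fit(np.vstack([feats_a, feats_b]))
    emb_a, emb_b = pca.transform(feats_a), pca.transform(feats_b)
    # ConvexHull.volume is the enclosed area for 2-D points.
    return ConvexHull(emb_a).volume, ConvexHull(emb_b).volume
```

Under this setup, a larger hull area for one dataset in the shared embedding space would support the claim of broader shape coverage.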

Stay Updated

This website will be continuously updated with the latest papers, datasets, and code. Please check back regularly for updates.
