
Provision for training custom data #131

Open
saikishor opened this issue Feb 7, 2021 · 9 comments

@saikishor

Hello @aditya2592 ,

By any chance, do you have a provision to train PERCH 2.0 with a custom dataset? If so, how? Please let me know.

@aditya2592

aditya2592 commented Feb 8, 2021

Hi @saikishor, for PERCH 2.0 you only need to train an instance segmentation model. You can use any off-the-shelf model of your choice, though we have integrated with MaskRCNN in our work. The pose estimation part itself doesn't require training.
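
For illustration only, here is a minimal sketch of what "any off-the-shelf model" could look like in practice, using torchvision's pre-trained Mask R-CNN. This is an assumption, not the exact model, weights, or thresholds used for PERCH 2.0, and the image path is a placeholder.

```python
# Sketch: get instance masks from an off-the-shelf Mask R-CNN (torchvision).
# Model choice, score threshold, and file name are assumptions.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
model.eval()

image = Image.open("rgb_frame.png").convert("RGB")  # placeholder input frame
with torch.no_grad():
    prediction = model([to_tensor(image)])[0]

# Keep confident detections and binarize the soft masks.
keep = prediction["scores"] > 0.7
masks = (prediction["masks"][keep, 0] > 0.5).cpu().numpy()  # (N, H, W) booleans
labels = prediction["labels"][keep].cpu().numpy()
```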

@saikishor

That's great. Is there any documentation on the training part? If so, could you share a link to it?

I would also like to know how accurate the pose estimation is. For instance, does it work at distances greater than 1-1.5 m, or at what distance do you recommend using it for precise localization?

@aditya2592

I used this to train a segmentation model for the YCB Video dataset. The accuracy results for this dataset are published here. The accuracy at a given distance purely depends on the quality of the point cloud at that distance. If the point cloud has sufficient points, PERCH 2.0 will be able to find a matching pose.
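
As a rough, hypothetical sanity check of that point, the snippet below counts how many valid depth readings fall inside an object's segmentation mask. It is not part of PERCH 2.0, and the file names, millimetre depth units, and 500-point threshold are arbitrary assumptions.

```python
# Sketch: estimate whether an object's segment still yields enough depth points.
import numpy as np
import cv2

depth = cv2.imread("depth.png", cv2.IMREAD_UNCHANGED).astype(np.float32)  # assumed depth in mm
mask = cv2.imread("object_mask.png", cv2.IMREAD_GRAYSCALE) > 0

valid = (depth > 0) & mask        # pixels with a depth reading inside the mask
num_points = int(valid.sum())
mean_range_m = float(depth[valid].mean()) / 1000.0 if num_points else float("nan")

print(f"{num_points} depth points on the object at ~{mean_range_m:.2f} m")
if num_points < 500:              # arbitrary threshold for illustration
    print("Point cloud is likely too sparse for a reliable pose match.")
```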

@saikishor

@aditya2592 Sorry to bother you so much, but could you please point me to the pose estimation part? I want to see if I can use it directly, since I already have the segmentation done, or at least I could purely use colors in this case, so I am more interested in the pose estimation part.

@aditya2592

Apologies for the late reply. Did you try to set up the code by following the steps here? After that, I would recommend:

  • Convert your dataset with segmentations to COCO format (see the conversion sketch after this list)
  • Get CAD models for the objects in your dataset in PLY format
  • Follow this wiki to run your segmented dataset with PERCH. You need to edit the code in sbpl_perception/src/scripts/tools/fat_dataset/fat_pose_image.py
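
A bare-bones, hypothetical sketch of the first step above: turning per-image binary masks into a COCO-style annotations file with pycocotools. The directory layout, file naming, and single "my_object" category are assumptions about how your masks might be stored.

```python
# Sketch: write a minimal COCO-format annotations file from binary mask PNGs.
import glob
import json
import os
import numpy as np
from PIL import Image
from pycocotools import mask as mask_utils

images, annotations = [], []
categories = [{"id": 1, "name": "my_object"}]  # hypothetical single category
ann_id = 1

for img_id, mask_path in enumerate(sorted(glob.glob("masks/*.png")), start=1):
    m = np.asarray(Image.open(mask_path)) > 0
    h, w = m.shape
    rle = mask_utils.encode(np.asfortranarray(m.astype(np.uint8)))
    rle["counts"] = rle["counts"].decode("ascii")  # make it JSON-serializable
    images.append({"id": img_id,
                   "file_name": os.path.basename(mask_path),  # map to your RGB file name
                   "height": h, "width": w})
    annotations.append({"id": ann_id, "image_id": img_id, "category_id": 1,
                        "segmentation": rle, "area": float(m.sum()),
                        "bbox": list(map(float, mask_utils.toBbox(rle))),
                        "iscrowd": 0})
    ann_id += 1

with open("annotations_coco.json", "w") as f:
    json.dump({"images": images, "annotations": annotations,
               "categories": categories}, f)
```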

@saikishor

@aditya2592 Thanks a lot for the detailed explanation.

@ErinZhang1998

I was wondering if you could expand on "edit the code in fat_pose_image.py", since the file seems to contain a lot of code and I am not sure where to start looking.

Thanks a lot!

@ErinZhang1998

How do we write a config file like the ones in sbpl_perception/config for running the custom dataset?

@aditya2592

Hi @ErinZhang1998, I think a better idea, instead of editing fat_pose_image.py, would be to write your own ROS service/client architecture that communicates with the C++ code.
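
To illustrate that suggestion, here is a hedged sketch of the Python client side, assuming a hypothetical EstimatePose.srv and package name; nothing in it comes from the PERCH repository.

```python
# Sketch: Python client for a hypothetical pose estimation ROS service.
# It assumes an EstimatePose.srv that you would define yourself, e.g.:
#
#   # EstimatePose.srv (hypothetical)
#   sensor_msgs/Image rgb
#   sensor_msgs/Image depth
#   sensor_msgs/Image segmentation_mask
#   string object_name
#   ---
#   geometry_msgs/PoseStamped pose
#
# The C++ PERCH node would implement the server side; this is only the client.
import rospy
from my_perch_msgs.srv import EstimatePose  # hypothetical package and srv

def request_pose(rgb_msg, depth_msg, mask_msg, object_name):
    rospy.wait_for_service("estimate_pose")
    try:
        estimate_pose = rospy.ServiceProxy("estimate_pose", EstimatePose)
        response = estimate_pose(rgb=rgb_msg, depth=depth_msg,
                                 segmentation_mask=mask_msg,
                                 object_name=object_name)
        return response.pose
    except rospy.ServiceException as exc:
        rospy.logerr("Pose estimation service call failed: %s", exc)
        return None

if __name__ == "__main__":
    rospy.init_node("perch_pose_client")
    # Fill rgb_msg / depth_msg / mask_msg from your own segmentation pipeline,
    # then call request_pose(...) to get the estimated object pose.
```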
