CPU Code Documentation

EnvObjectRecognition::GetCost: Computes the cost of every successor state (used when use_lazy is false).

EnvObjectRecognition::GetDepthImage: Gets the depth image of a single object in a given pose.

A GraphState is a collection of ObjectStates; an ObjectState is an object in a ContPose. The search goes to a graph state and expands all of its successors. If it is the root state, all of its children (single-object states) are expanded. Expanding a state means computing the cost of all of its child states. With two objects, the algorithm should first expand all single-object states (power drill and box) and compute a cost for each; it should then go to one of these and expand it, and ideally at that point there are two objects in the scene.

For YCB objects, the object axis appears to be at the object center, so a z offset equal to half the object height needs to be added when rendering poses.
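To make the relationships concrete, here is a minimal sketch of the state hierarchy, assuming illustrative field names (the real classes in the codebase carry more bookkeeping):

```cpp
// Illustrative sketch of the state hierarchy described above.
// Field names are assumptions for exposition, not the actual class layout.
#include <vector>

struct ContPose {
  double x, y, z;          // translation; z is lifted by half the object
                           // height for YCB models (axis at object center)
  double roll, pitch, yaw;
};

struct ObjectState {
  int object_id;           // which model this is (e.g., power drill, box)
  ContPose pose;           // the continuous pose the object is placed in
};

// A GraphState is simply a collection of ObjectStates: the set of
// objects (with poses) assigned to the scene so far.
struct GraphState {
  std::vector<ObjectState> object_states;
};

int main() {
  GraphState root;                   // root state: empty scene
  GraphState succ = root;            // successor: one object added
  const double drill_height = 0.18;  // meters, illustrative value
  succ.object_states.push_back(
      {0, {0.4, 0.1, drill_height / 2.0, 0.0, 0.0, 1.57}});
  return 0;
}
```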

LocalizeObjects() - in object_recognizer.cpp.

SetInput() - in search_env.cpp:

LoadObjFiles() -> Reads the model files as polygon meshes. Object details are stored in the ObjectModel class, and the PreprocessModel() function from object_model.cpp is applied when the ObjectModel instance is created.

SetObservation() -> Sets the observed point cloud details.
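As a rough illustration of the LoadObjFiles() step above, assuming PCL's polygon-mesh I/O (the real code additionally builds ObjectModel instances and runs PreprocessModel()):

```cpp
// Minimal sketch of the mesh-loading step using PCL polygon-mesh I/O.
#include <iostream>
#include <string>
#include <vector>

#include <pcl/PolygonMesh.h>
#include <pcl/io/vtk_lib_io.h>

int main() {
  // Hypothetical model files for illustration.
  const std::vector<std::string> model_files = {"power_drill.obj", "box.obj"};
  std::vector<pcl::PolygonMesh> meshes;
  for (const auto &file : model_files) {
    pcl::PolygonMesh mesh;
    if (pcl::io::loadPolygonFile(file, mesh) == 0) {  // 0 => nothing loaded
      std::cerr << "Failed to load " << file << "\n";
      continue;
    }
    // In the real code, an ObjectModel is constructed here and
    // PreprocessModel() is applied to normalize the mesh.
    meshes.push_back(mesh);
  }
  std::cout << "Loaded " << meshes.size() << " models\n";
  return 0;
}
```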

GetLazySuccs: When a successor is added, its cost is computed lazily instead of computing the full cost of the state after adding the successor.

GetSuccs(): First- and last-level states are always expanded. Once costs have been obtained for all states, this function prints them and visualizes the point clouds.

GenerateSuccessorStates(): Generates multiple successor states for a given state (a graph state is a collection of objects in various poses) by adding objects to the state in different poses; it should return a vector of graph states. Generated poses are filtered by requiring a minimum number of observed points around the object (the point where the object axis is located is used as the search center). The cloud searched here, however, is a shifted version of the observed point cloud: every point's height is set to the table height given in the config, i.e. a projection of the observed cloud onto the table plane. The search point therefore also has to be placed at z equal to the table height.
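A minimal sketch of that filtering test, assuming a PCL k-d tree built over the projected cloud; the radius and minimum-neighbor threshold are illustrative values, not the ones used in the code:

```cpp
// Sketch of the pose-support filter: count observed points within a
// radius of the candidate object position, using the table-height
// projection of the observed cloud.
#include <vector>

#include <pcl/kdtree/kdtree_flann.h>
#include <pcl/point_types.h>

bool PoseHasEnoughSupport(
    const pcl::KdTreeFLANN<pcl::PointXYZ> &projected_cloud_tree,
    double x, double y, double table_height,
    double search_radius = 0.05,  // meters, assumed
    int min_neighbors = 50) {     // assumed threshold
  // Every point of the projected cloud sits at z == table_height, so the
  // query point must be placed at table height as well.
  pcl::PointXYZ query(x, y, table_height);
  std::vector<int> indices;
  std::vector<float> sq_dists;
  const int found =
      projected_cloud_tree.radiusSearch(query, search_radius, indices, sq_dists);
  return found >= min_neighbors;
}
```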

GetDepthImage(): Gets the depth image of the source state by adding all of its objects, in their poses, to the scene. If the state is the first (empty) scene, this does nothing.
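Composing a scene from per-object renderings amounts to a per-pixel z-buffer over the individual depth images; a sketch, with the image layout and no-depth sentinel as assumptions:

```cpp
// Sketch of depth-image composition (the idea behind GetComposedDepthImage):
// combine per-object depth images with a per-pixel minimum, i.e. a z-buffer.
#include <algorithm>
#include <limits>
#include <vector>

using DepthImage = std::vector<float>;  // row-major, depths in meters
constexpr float kNoDepth = std::numeric_limits<float>::infinity();

DepthImage ComposeDepthImages(const std::vector<DepthImage> &object_images,
                              std::size_t num_pixels) {
  DepthImage composed(num_pixels, kNoDepth);
  for (const auto &img : object_images) {
    for (std::size_t i = 0; i < num_pixels; ++i) {
      composed[i] = std::min(composed[i], img[i]);  // nearest surface wins
    }
  }
  // For the root (empty) state there are no object images, so the
  // composed image stays all-kNoDepth.
  return composed;
}
```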

ComputeCostsInParallel: First, a cost-computation input is created for every successor state; these inputs are then used to produce a cost-computation output containing the computed cost for each successor state.

GetCost: The non-lazy version of cost computation. It is fired for every successor state and gets distributed over processors depending on the number of processors allowed. Internally it calls GetDepthImage() to render the depth image again after the new object is added, gets the depth image of the state (GetComposedDepthImage), and performs an ICP adjustment of the newly added object if it is occluded.
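A sketch of the scatter/gather pattern behind the two functions above: one input per successor, costs evaluated concurrently, one output per successor. std::async stands in here for whatever mechanism the code actually uses to distribute work across processors:

```cpp
// Sketch of ComputeCostsInParallel's input/output pattern. The struct
// fields and the dummy cost are placeholders for exposition.
#include <future>
#include <vector>

struct CostComputationInput  { int succ_id; /* state, rendering data, ... */ };
struct CostComputationOutput { int succ_id; int cost; };

// Placeholder for the non-lazy GetCost work (render, ICP, compare).
CostComputationOutput ComputeCost(const CostComputationInput &input) {
  return {input.succ_id, /*cost=*/input.succ_id * 10};  // dummy cost
}

std::vector<CostComputationOutput>
ComputeCostsInParallel(const std::vector<CostComputationInput> &inputs) {
  std::vector<std::future<CostComputationOutput>> futures;
  futures.reserve(inputs.size());
  for (const auto &input : inputs) {
    // One task per successor state; the runtime spreads them over cores.
    futures.push_back(std::async(std::launch::async, ComputeCost, input));
  }
  std::vector<CostComputationOutput> outputs;
  for (auto &f : futures) {
    outputs.push_back(f.get());  // gather one output per successor
  }
  return outputs;
}
```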

GetLazyCost: Every object has a fixed set of depth images that are generated and cached (they are generated for all objects when the number of objects in the scene is 0, by calling GetCost, which stores them in a cache). When an object is added, its cached image is used. If there is occlusion, the image is adjusted to remove the occluded points; ICP is done only in the occlusion case. The image corresponding to the new object's pixels is used for cost computation.
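A sketch of that caching pattern, with the cache key and the rendering stub as assumptions:

```cpp
// Sketch of the per-object depth-image cache: images are rendered once
// per (object, pose) when the scene is empty, then reused when the
// object is added later.
#include <cstdint>
#include <functional>
#include <unordered_map>
#include <vector>

using DepthImage = std::vector<float>;

// Hypothetical cache key: object id plus a discretized pose id.
struct PoseKey {
  int object_id;
  int pose_id;
  bool operator==(const PoseKey &o) const {
    return object_id == o.object_id && pose_id == o.pose_id;
  }
};
struct PoseKeyHash {
  std::size_t operator()(const PoseKey &k) const {
    return std::hash<int64_t>()(
        (static_cast<int64_t>(k.object_id) << 32) |
        static_cast<uint32_t>(k.pose_id));
  }
};

// Stand-in for the expensive depth rendering of one object in one pose.
DepthImage RenderObject(const PoseKey & /*key*/) {
  return DepthImage(640 * 480, 0.0f);
}

std::unordered_map<PoseKey, DepthImage, PoseKeyHash> depth_cache;

const DepthImage &GetCachedDepthImage(const PoseKey &key) {
  auto it = depth_cache.find(key);
  if (it == depth_cache.end()) {
    it = depth_cache.emplace(key, RenderObject(key)).first;  // fill on miss
  }
  // If the object is occluded in the current scene, the cached image
  // would be adjusted here to drop occluded pixels before use.
  return it->second;
}
```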

GetDepthImage(): Gets the depth image after the ICP adjustment. A k-NN search over the observed point cloud is used for the cost computation.

GetTargetCost(): Finds points in the newly rendered point cloud that are unexplained by the observed point cloud.

GetSourceCost(): The counterpart of GetTargetCost(): finds points in the observed point cloud that are unexplained by the rendered point cloud.
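Both terms can be seen as one primitive applied in opposite directions: count the points of one cloud that have no nearby point in the other. A sketch assuming PCL nearest-neighbor search, with an illustrative distance threshold:

```cpp
// Sketch of the unexplained-point count underlying both cost terms.
#include <vector>

#include <pcl/kdtree/kdtree_flann.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;

int CountUnexplained(const Cloud &query_cloud,
                     const pcl::KdTreeFLANN<pcl::PointXYZ> &explainer_tree,
                     double max_dist = 0.005) {  // meters, assumed
  int unexplained = 0;
  std::vector<int> idx(1);
  std::vector<float> sq_dist(1);
  for (const auto &pt : query_cloud.points) {
    // A point is "explained" if the other cloud has a neighbor within
    // max_dist of it; otherwise it contributes to the cost.
    if (explainer_tree.nearestKSearch(pt, 1, idx, sq_dist) == 0 ||
        sq_dist[0] > max_dist * max_dist) {
      ++unexplained;
    }
  }
  return unexplained;
}

// Target cost: CountUnexplained(rendered_cloud, tree_over_observed_cloud)
// Source cost: CountUnexplained(observed_cloud, tree_over_rendered_cloud)
```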

GetLazyCost: Gets the depth image of the state (GetComposedDepthImage), makes a point cloud from it, ICP-adjusts the point cloud, and gets a depth image from the new point cloud. Once a pose is obtained, it is sent to the requesting client after applying the preprocessing transform to it.
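For the ICP-adjustment step, a sketch using PCL's IterativeClosestPoint; the parameters are illustrative, not the values used in search_env.cpp:

```cpp
// Sketch of ICP refinement of a rendered object cloud against the
// observed cloud.
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/registration/icp.h>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;

Eigen::Matrix4f AdjustPoseWithICP(const Cloud::Ptr &rendered_cloud,
                                  const Cloud::Ptr &observed_cloud,
                                  Cloud &aligned_cloud) {
  pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
  icp.setInputSource(rendered_cloud);      // cloud to be moved
  icp.setInputTarget(observed_cloud);      // cloud to align against
  icp.setMaxCorrespondenceDistance(0.03);  // meters, assumed
  icp.setMaximumIterations(20);            // assumed
  icp.align(aligned_cloud);
  // The resulting transform is composed with the object's pose; a new
  // depth image is then rendered from the adjusted cloud.
  return icp.getFinalTransformation();
}
```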
