Some facts about the runtime speed. #15

@SUZhaoyu

Description

Hi, I tried to reimplement a similar operation to yours in TensorFlow and found two facts about the runtime speed:

  1. The GPU kernel that accumulates the filter gradients from one batch element to the next has almost no influence on speed; in fact, the original MXNet implementation uses the same idea.

  2. Splitting the backpropagation for the different input variables into separate TF ops does accelerate the runtime, but only about a 30% boost was observed compared with wrapping them into one TF op.
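For point 1, the accumulation scheme can be sketched in NumPy. This is only an illustrative sketch, not the actual CUDA kernel; the function and argument names are hypothetical:

```python
import numpy as np

def filter_grad(cols_batch, grad_out_batch):
    """Accumulate dL/dW one batch element at a time, mirroring a kernel
    that adds each sample's contribution into the same gradient buffer.
    cols_batch:     (N, C*kh*kw, L) im2col output per sample
    grad_out_batch: (N, out_c, L)   upstream gradient per sample
    """
    n, k, _ = cols_batch.shape
    out_c = grad_out_batch.shape[1]
    grad_w = np.zeros((out_c, k), dtype=cols_batch.dtype)
    for b in range(n):  # serial accumulation over the batch
        grad_w += grad_out_batch[b] @ cols_batch[b].T
    return grad_w
```

The loop is mathematically just one batched contraction (`np.einsum('nol,nkl->ok', ...)`), which is consistent with the accumulation itself costing almost nothing relative to the rest of the op.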

I think the straggler is most likely the im2col/col2im operation, which is implemented in plain CUDA with little optimization (compared with cuDNN). The author of Deformable Convolution also admitted that the main downside of their implementation is that they did not use cuDNN for optimization (sorry, I cannot find the original source).
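For reference, here is a minimal NumPy sketch of what the im2col step computes (stride 1, no padding, and no deformable offsets; names are illustrative, and the real kernels do this on the GPU):

```python
import numpy as np

def im2col(x, kh, kw):
    """Unfold a (C, H, W) input into a (C*kh*kw, out_h*out_w) matrix,
    stride 1, no padding. Each column holds one receptive field, so the
    convolution itself reduces to a single matrix multiply."""
    c, h, w = x.shape
    out_h, out_w = h - kh + 1, w - kw + 1
    cols = np.empty((c * kh * kw, out_h * out_w), dtype=x.dtype)
    row = 0
    for ci in range(c):
        for i in range(kh):
            for j in range(kw):
                # all output positions that read kernel tap (ci, i, j)
                cols[row] = x[ci, i:i + out_h, j:j + out_w].reshape(-1)
                row += 1
    return cols

# Forward pass: flattened weights (out_c, C*kh*kw) times the column matrix.
x = np.arange(9.0).reshape(1, 3, 3)
w = np.ones((1, 4))       # one 2x2 filter, flattened
y = w @ im2col(x, 2, 2)   # -> [[8., 12., 20., 24.]]
```

In the deformable case each kernel tap additionally samples at a learned, bilinearly interpolated offset, which is exactly the part that a hand-written CUDA im2col handles and cuDNN cannot fuse for you.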

Hopefully these results are helpful for those who are also interested in a Deformable Convolution implementation in TensorFlow, especially now that the Deformable ConvNets v2 paper has been released.

Any comments or further discussion are welcome, and Merry Christmas!
