
WIP: Adding TRT options/task #435

Draft · wants to merge 7 commits into main

Conversation

pranavm-nvidia
Collaborator

No description provided.

Comment on lines 38 to 50
// TODO (pranavm): Figure out a better way to reuse TRT translation options -
// maybe move to options providers?
struct TensorRTOptions
    : public mlirtrt::compiler::OptionsProvider<TensorRTOptions> {
  mlir::tensorrt::TensorRTTranslationOptions options;

  void addToOptions(mlir::OptionsContext &context) {
    options.addToOptions(context);
  }
};
Collaborator Author

I can move TensorRTTranslationOptions to make them an options provider if that makes sense to do.

Comment on lines +56 to +60
// TODO (pranavm): Check if this needs to be conditional - the TRT passes
// above are not.
#ifdef MLIR_TRT_TARGET_TENSORRT
mlirtrt::compiler::registerTensorRTToExecutableTask();
#endif
Collaborator Author

I'm not sure what exactly needs to be guarded. In a lot of places we guard TRT things with MLIR_TRT_TARGET_TENSORRT but that doesn't seem to be the case for the TRT passes registered above.

Collaborator

The TensorRT dialect and MLIR passes are always built; they don't depend on actually having TensorRT binaries or headers to build against (or at least, that was the idea; we haven't been enforcing it). MLIR_TRT_TARGET_TENSORRT is basically a guard for anything that relies on having actual TRT headers or libraries to link against, for example the translation from MLIR to TensorRT, hence the TARGET_TENSORRT in the name.

Comment on lines 30 to 42
TensorRTToExecutableOptions::TensorRTToExecutableOptions(
    TaskExtensionRegistry extensions) {
  // TODO (pranavm): Do we need to support extensions?
}
Collaborator Author

Not sure if we want to require extensions for all options types, or if we need to handle both cases in the options registry. If it's the former, I can just assert here that the extensions are empty (or maybe even add support). If it's the latter, we could add a setExtensions method so extensions become optional instead of being part of the constructor.

//===----------------------------------------------------------------------===//
// OutlineTensorRTOpPass
//===----------------------------------------------------------------------===//
// TODO: what are the dependent dialects? what are the options?
Collaborator

What are the required dependent dialects/options?

Collaborator

Also, I got an error during compilation:

error: ‘createOutlineTensorRTOpPass’ was not declared in this scope

Did I miss anything in the tablegen config?

Comment on lines +117 to +120
/// Helper function to call the `makeRegionIsolatedFromAbove` to capture all
/// required arguments into the InlineGroupOp region.
// static LogicalResult
// makeIsolatedFromAboveImpl(RewriterBase &rewriter, plan::InlineGroupOp regionOp,
Collaborator

Is this function still useful?

SymbolTableCollection symbolTable;
IRRewriter rewriter(&getContext());
// what are these? are they needed?
// DataFlowSolver solver;
Collaborator

We just decide whether an op is in the tensorrt dialect or not. Is the solver still needed, then?

emitError(module.getLoc()) << "failed to create clustering options";
return signalPassFailure();
}
// What do they do here?
Collaborator

I didn't figure out this part; regionOps is not used either.
