fix: use NLA to prevent externalized memory from being dropped #9012
This commit fixes a corner case where a memory module is not itself effectful, but is connected to an effectful external (`_ext`) memory module. If we simply mark all memory modules as pure, the ext memory module gets optimised out, causing inconsistencies in the SiFive `-repl-seq-mem` flow: the preceding FIRRTL transforms and hierarchies are generated correctly, yet the final `_ext` SRAM is omitted completely.
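For illustration, a minimal sketch of the kind of hierarchy involved (module, port, and signal names here are invented, not taken from #8994): after `-repl-seq-mem`, the sequential memory becomes a wrapper module instantiating a blackbox `_ext` module, so the wrapper itself contains no side-effecting statements even though the blackbox it instantiates is effectful.

```
; hypothetical FIRRTL after -repl-seq-mem (names invented for illustration)
circuit Top :
  extmodule mem_ext :  ; effectful blackbox: the real SRAM macro
    input W0_addr : UInt<4>
    input W0_en : UInt<1>
    input W0_clk : Clock
    input W0_data : UInt<32>

  module mem :  ; wrapper: no side effects of its own, but must not be
                ; treated as pure, or the mem_ext instance and the wires
                ; feeding it can be optimised away
    input W0_addr : UInt<4>
    input W0_en : UInt<1>
    input W0_clk : Clock
    input W0_data : UInt<32>

    inst mem_ext of mem_ext
    mem_ext.W0_addr <= W0_addr
    mem_ext.W0_en <= W0_en
    mem_ext.W0_clk <= W0_clk
    mem_ext.W0_data <= W0_data
```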
This fixes #8994, where the offending Chisel looks like:
Before this patch, this pass would have optimised out all of the wires under `rob_debug_inst_mem`:
Open to suggestions on how to mitigate this issue in a potentially cleaner manner :)