
CPU-based simulation of x-ray projections and reconstruction #22

Merged — 9 commits merged into TomographicImaging:main on Jan 7, 2025

Conversation

JSCallaghan
Contributor

@JSCallaghan JSCallaghan commented Nov 19, 2024

Describe your contribution

Checklist when you are ready to request a review

  • I have performed a self-review of my code
  • I have created a new folder, containing my contributions which are in the form of jupyter notebooks, with any necessary supporting python files, and a LICENSE file.
  • I have added a description of my contribution(s) to the top of my file(s)
  • If publicly available, I have added a link to the dataset used near the top of my file(s)
  • I have added the CIL version I ran with near the top of my file(s)
  • The content of this Pull Request (the Contribution) is intentionally submitted for inclusion in CIL-User-Showcase.
  • I confirm that the contribution does not violate any intellectual property rights of third parties.
  • I confirm that I have added license headers to all of the files I am contributing (with a license of my choice)
  • Change pull request label to 'Waiting for review'

Note: for an example of a contribution, where a license header, description, data link and CIL version has been added, please
see: example_contribution


Signed-off-by: JSCallaghan <jcc23grt@bangor.ac.uk>
@MargaretDuff
Member

Hi @JSCallaghan - many thanks for your contribution from the CIL user meeting! We will try and review over the next couple of weeks :)

@effepivi
Contributor

Hi @JSCallaghan,

Nice to see you took the time to create a CIL/gVXR User contribution.

Below are a few suggestions:

  • The simulation and reconstruction are performed on the GPU, not CPU.
  • You could mention that you use the XML file from a CT scan of an actual sample performed using the Diondo scanner.
  • If you want, you can install SpekPy or xpecgen to simulate your actual spectrum:
from gvxrPython3.utils import loadSpectrumSpekpy

# gvxr.addEnergyBinToSpectrum(scaVol, "keV", 1)
loadSpectrumSpekpy(scaVol)
  • You can improve the simulation speed by using the built-in function:
gvxr.computeCTAcquisition("", # Where to save the projections
                          "", # Where to save the screenshots
                          numPro, # Total number of projections
                          0, # First angle
                          False, # Include the last angle
                          360, # Last angle
                          0, # Number of flat images
                          0, 0, 0, "mm", # Centre of rotation
                          *gvxr.getDetectorUpVector()) # Rotation axis

projSim = np.array(gvxr.getLastProjectionSet())

# # Define the number of projections, along with the angle step
# angSte = endAng / numPro

# # Pre-create our results array with the size of our detector
# projSim = np.ndarray((numPro, numPixY, numPixX))

# # Rotate our object by angSte for every projection, saving it to the results array.
# for i in range(0, numPro):
#     # Save current angular projection
#     projSim[i] = gvxr.computeXRayImage()
#     # Rotate the models
#     gvxr.rotateScene(angSte, 0, 0, 1)

# Don't forget, we need to use flatfield normalisation on the radiographs to get the correct attenuation values!
# Because our flatfield and darkfield are perfect, all we need to do is divide by the total energy
projSim /= gvxr.getTotalEnergyWithDetectorResponse()
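For context, the general flat-field correction that this divide-by-total-energy shortcut replaces can be sketched with NumPy (the array names and numbers here are made up for illustration, not gVXR API):

```python
import numpy as np

def flatfield_correct(proj, flat, dark):
    # Standard flat-field and dark-field normalisation:
    #     corrected = (proj - dark) / (flat - dark)
    # With perfect (uniform) flat and dark fields, as in the simulation
    # above, this reduces to dividing by the total detected energy.
    return (proj - dark) / (flat - dark)

# Toy example with made-up numbers
proj = np.array([[60.0, 30.0], [110.0, 10.0]])  # raw radiograph
flat = np.full((2, 2), 110.0)                   # bright field (no sample)
dark = np.full((2, 2), 10.0)                    # dark field (no beam)

corrected = flatfield_correct(proj, flat, dark)
print(corrected)  # transmitted fraction of the beam, in [0, 1]
```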

@JSCallaghan
Contributor Author

Hi @effepivi, thanks for the suggestions.

  • Please correct me if I am wrong, but I think I am using the CPU-based CIL environment (my PC does not have a dedicated graphics card), which I created with `conda create --name cilCPU -c conda-forge -c https://software.repos.intel.com/python/conda -c ccpi cil=24.2.0 ipp=2021.12 astra-toolbox==py tigre ccpi-regulariser tomophantom ipykernel ipywidgets scikit-image`. For the simulation I used the OpenGL backend; this might also be running on the CPU's integrated graphics?
  • I'll implement your other suggestions 👍

@MargaretDuff
Member

Hi @JSCallaghan - I heard that you spoke to @effepivi, who confirmed that the gVXR simulation was being done on the GPU but the CIL part on the CPU. I have changed the text to reflect this.

Indeed, we triple-checked: although ASTRA has CPU forward and backward projectors for a 2D fan-beam geometry, it doesn't have a CPU FBP implementation. Thus we can run iterative methods but not an FBP reconstruction. I also clarified this in the text.
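For readers wondering what the iterative route looks like mathematically, here is a toy, matrix-based sketch of the classical SIRT update that the CIL algorithm is based on (this is a NumPy illustration, not CIL's implementation; the 3×3 system is made up):

```python
import numpy as np

# Toy consistent system A x = b standing in for projection data.
# Classic SIRT update:
#     x_{k+1} = x_k + C A^T R (b - A x_k)
# with R = diag(1 / row sums of A), C = diag(1 / column sums of A).
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
b = A @ x_true

R = 1.0 / A.sum(axis=1)   # inverse row sums
C = 1.0 / A.sum(axis=0)   # inverse column sums

x = np.zeros(3)
for _ in range(500):
    x = x + C * (A.T @ (R * (b - A @ x)))

print(x)  # converges towards [1, 2, 3]
```

For this small invertible system the iteration converges to the exact solution; on real (noisy, inconsistent) data it converges to a weighted least-squares solution instead.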

@jakobsj
Copy link

jakobsj commented Nov 26, 2024

Thank you very much @JSCallaghan for this very nice notebook and @MargaretDuff and @effepivi for commenting and processing, I've taken a quick look and the notebook looks really great. As I am not so familiar with gVXR, I will leave the formal reviewing/approval to someone else (@effepivi?) but can provide a few minor comments:

  1. Found a few typos "This notebook is used to simulate an X-ray CT scan projections" should omit "an" and "A snall amount of noise has been added:" should be "small".
  2. Could it be stated explicitly in the intro how the topics covered by this gVXR+CIL notebook differ from the ones in the existing gVXR+CIL showcase notebook: https://github.com/JSCallaghan/CIL-User-Showcase/blob/main/006_gVXR2CIL/CBCT-acquisition-with-noise-and-reconstruction.ipynb
  3. The k3d plots are not saved in the notebook, but probably not much can be done to fix this. Unless there are any CIL (or matplotlib or similar) plots that could convey the same or part of the same message?
  4. The islicer output is not saved, could one perhaps replace that by one or more show2D calls that would appear in the notebook outputs?
  5. The notebook finishes a bit abruptly - could a brief conclusion/list of key take-away points (especially relating to gVXR usage in combination with CIL) be added at the end?

Thank you again for this excellent addition to the showcase. Next step - a tiny code contribution to CIL itself? :)

@effepivi
Contributor

@jakobsj & @JSCallaghan
The workaround for K3D is much the same as for islicer. What I tend to do is programmatically take a screenshot and display it. I tend to display the screenshot at the top of the notebook, as it is "nice".

import base64

plot.fetch_screenshot()
k3d_screenshot = plot.screenshot
data = base64.b64decode(k3d_screenshot)
with open("k3d_screenshot.png", "wb") as fp:
    fp.write(data)

However, due to some asynchronous behaviour, the code above may need to be called twice (I often get an empty file on the first try).
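That retry caveat can be wrapped in a small helper. A minimal sketch, assuming only that the screenshot fetch can be wrapped in a callable returning base64 data (the helper name, attempt count, and delay are illustrative, not k3d API):

```python
import base64
import time

def save_screenshot(get_b64, path, attempts=5, delay=0.5):
    # get_b64 is any callable returning base64-encoded screenshot data
    # (e.g. one that calls plot.fetch_screenshot() and returns
    # plot.screenshot). Retry until the decoded payload is non-empty,
    # since the first fetch often comes back empty asynchronously.
    for _ in range(attempts):
        data = base64.b64decode(get_b64() or b"")
        if data:
            with open(path, "wb") as fp:
                fp.write(data)
            return True
        time.sleep(delay)
    return False
```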

BTW, you can also use k3D to do some volume rendering in the notebook:

import k3d
import numpy as np

bounds = [0, ig.voxel_num_x * ig.voxel_size_x,
          0, ig.voxel_num_y * ig.voxel_size_y,
          0, ig.voxel_num_z * ig.voxel_size_z]

np_recons = recons.as_array()
k3d_volume = k3d.volume(np_recons, bounds=bounds)

plot = k3d.plot()
plot += k3d_volume

value_range = [np.min(np_recons), np.max(np_recons)]
k3d_volume.alpha_coef = 5
k3d_volume.color_range = [value_range[0] + (value_range[1] - value_range[0]) / 3, value_range[1]]

plot.display()

@MargaretDuff
Member

Thanks @effepivi and @jakobsj, I have:

  • Added screenshots for online viewing
  • Fixed those two spelling mistakes
  • Drafted something which sets out the difference from the previous showcase
  • Drafted some concluding remarks

@effepivi - I didn't understand your comment on volume rendering the reconstruction - the reconstruction is just a 2D slice?

@effepivi
Contributor

effepivi commented Jan 7, 2025

Text:

  • use a small 'g' in gVXR.
  • in the lists, be consistent in the use of full stops. For example in:
    • imaging a single material object created from an stl file.
    • reading in and using scan parameters for on existing scan xml file
    • saving and then importing and processing raw projection data
    • reconstructing using CIL SIRT and CGLS iterative algorithms using CPU projectors.

If you end one line with a full stop, the next one must start with a capital letter. The use of full stops (or semicolons, for that matter) in bullet-point lists is controversial; they're not strictly "needed". However, they can be used for inclusivity, as they help people see where one item ends and the next begins. When I was a disability officer, we were told to recommend something like this to colleagues to make lists clear for everyone:

  • imaging a single material object created from an stl file;
  • reading in and using scan parameters for on existing scan xml file;
  • saving and then importing and processing raw projection data; and
  • reconstructing using CIL SIRT and CGLS iterative algorithms using CPU projectors.

In any case, being consistent is what matters the most.

Code:

  • Cell 13 needs to be amended. With this colour map, it makes the assumption that the error is always positive. A "diverging" colour lookup table should be used with a range of minus N to plus N (i.e. the centre is zero).
  • Even better, divide it by sim and multiply by 100 to compute a relative error in percentage. It will be more meaningful than an absolute error (e.g. an error of 0.05% is easier to interpret than 0.75*10^-5). See below how I compute `fix_range` for the difference. The visualisation will carry more information:
projDiff = 100 * (projSim[0] - projRead[0]) / projSim[0]
limit = max(abs(projDiff.min()), abs(projDiff.max()))

show2D([projSim[0], projRead[0], projDiff],
       title=["sim", "sim_read_write_" + dataWriteMethod, "difference (in %)"],
       cmap='seismic', num_cols=3, fix_range=[(0, 1), (0, 1), (-limit, limit)], size=(9, 3))
  • Again, use a relative error instead of an absolute error for the histogram.
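The relative-error and symmetric-range idea above generalises; here is a minimal NumPy sketch (the small arrays are made-up stand-ins for `projSim[0]` and `projRead[0]`):

```python
import numpy as np

sim = np.array([[1.00, 0.50], [0.25, 0.80]])   # "ground truth" projection
read = np.array([[1.01, 0.49], [0.25, 0.84]])  # same data after write/read

# Relative error in percent, and a symmetric range so that zero sits at
# the centre of a diverging colour map such as 'seismic'.
rel_err = 100.0 * (sim - read) / sim
limit = np.max(np.abs(rel_err))

print(rel_err)
print(limit)  # use fix_range=(-limit, limit) when displaying
```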

Other than that, it looks great, I like it!

@MargaretDuff MargaretDuff self-requested a review January 7, 2025 15:08
Member

@MargaretDuff MargaretDuff left a comment


Approving after a review from Jakob and 2 reviews from Franck

@MargaretDuff MargaretDuff merged commit 51361cb into TomographicImaging:main Jan 7, 2025