Update channel_combination.py #48

Open · wants to merge 1 commit into base: master
129 changes: 128 additions & 1 deletion suspect/processing/channel_combination.py
@@ -3,14 +3,19 @@

def svd_weighting(data):
p, _, v = numpy.linalg.svd(data, full_matrices=False)

channel_weights = p[:, 0].conjugate()
# normalize so they sum to 1
channel_weights = channel_weights/numpy.sum(numpy.abs(channel_weights))

# try some basic phase correction
# in our truncation of the SVD to rank 1, we know that v[0] is our FID
# use the first point of it to phase the signal
phase_shift = numpy.angle(v[0, 0])

channel_weights_phase_corrected = channel_weights * numpy.exp(-1j * phase_shift)

return channel_weights * numpy.exp(-1j * phase_shift) / numpy.sum(numpy.abs(channel_weights))
return channel_weights_phase_corrected, channel_weights, phase_shift
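As a quick sanity check of the weighting scheme above, here is a self-contained sketch (`svd_weighting_demo` and the synthetic data are illustrative, not part of the library) showing that the SVD weights sum to 1 in magnitude and that the rank-1 phase estimate roughly zeroes the phase of the first combined point:

```python
import numpy

def svd_weighting_demo(data):
    # Rank-1 SVD: p[:, 0] spans the channel dimension, v[0] approximates the FID
    p, _, v = numpy.linalg.svd(data, full_matrices=False)
    channel_weights = p[:, 0].conjugate()
    # normalize so the weight magnitudes sum to 1
    channel_weights = channel_weights / numpy.sum(numpy.abs(channel_weights))
    # zeroth-order phase taken from the first point of the rank-1 FID estimate
    phase_shift = numpy.angle(v[0, 0])
    return channel_weights * numpy.exp(-1j * phase_shift)

# Synthetic 4-channel FID: one underlying signal seen through per-channel complex gains
rng = numpy.random.default_rng(0)
t = numpy.arange(256) / 2000.0
fid = numpy.exp(2j * numpy.pi * 50 * t - t / 0.05)
gains = rng.normal(size=4) + 1j * rng.normal(size=4)
noise = 0.01 * (rng.normal(size=(4, 256)) + 1j * rng.normal(size=(4, 256)))
data = gains[:, None] * fid[None, :] + noise

weights = svd_weighting_demo(data)
combined = (weights[:, None] * data).sum(axis=0)
```

Here `combined[0]` ends up approximately real and positive, which is all the rough phasing aims for.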
Member:

I think that this is unnecessary - the phase shift here is completely insignificant as there is going to be some proper phase correction happening later on anyway, this is just a very rough and ready attempt to put the signal in phase to make it easier to see what is going on when plotting e.g. to check for frequency drift. The point is that the channel weights are only defined up to an arbitrary phase, so we have to choose something and this makes the most sense - it is actually better than doing nothing in all cases.

Contributor Author:

OK, I'm fine with changing that. I realize that there is additional phase correction coming down the line. The only issue is that I like to pull my channel weights off of the unsuppressed H2O signal since the SNR is so much higher, and then apply them to the H2O suppressed signal. The phase correction for one doesn't really work for the other, so it ends up making the H2O suppressed signal's phase a bit worse in some cases. This will be easily corrected later on since it's only 0th order anyway, so it's not that big of a deal. I can change it back if that would be easier.

Member:

I am surprised that the unsuppressed water phase is not correct for the suppressed signal, they should have an exactly matched phase, except for the suppressed water peak, and who cares about that anyway. It should also be extremely simple to modify the phase of the suppressed signal after using the wrongly phased weights, simply by making the first point of the FID real, which is all this does anyway. I would really prefer to keep this as simple as possible as a function, which means returning just one thing.
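The "make the first point of the FID real" correction described in this comment is essentially a one-liner; a minimal sketch (the function name is assumed for illustration):

```python
import numpy

def phase_to_first_point(fid):
    # Zeroth-order phase correction: rotate the whole FID so that its
    # first sample becomes real and positive
    return fid * numpy.exp(-1j * numpy.angle(fid[0]))

# A decaying FID with a constant 0.7 rad phase offset
fid = numpy.exp(1j * 0.7) * numpy.exp(-numpy.arange(128) / 40.0)
corrected = phase_to_first_point(fid)
```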

Member:

Also there is no reason that the "unphased" weights from the wref are any better as an option for the phase of the suppressed signal.



@@ -19,3 +24,125 @@
def combine_channels(data, weights=None):
weighted_data = weights.reshape((len(weights), 1)) * data
combined_data = weighted_data.sum(axis=0)
return combined_data
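For reference, the broadcasting inside `combine_channels` reduces to the following minimal sketch (toy data and equal weights, chosen so the combination is just a channel average):

```python
import numpy

# Hypothetical 3-channel, 8-sample dataset with equal weights
data = numpy.ones((3, 8), dtype=complex)
weights = numpy.full(3, 1.0 / 3.0)

weighted_data = weights.reshape((len(weights), 1)) * data  # weight broadcast over samples
combined = weighted_data.sum(axis=0)                       # sum over the channel axis
```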


def RAMRS_SVD_weighting(data, phase_correct=True, whiten=False, W=None, noise_range=None):
Member:
Whitening the channel noise is something that I have been meaning to do for a long time, so very pleased to see that has made an appearance. However, I am not sure this is the best way to set up the interface. My general approach throughout has been to provide small, modular functions to perform each step of the processing, rather than to give a few monolithic functions with lots of options in the parameter string, like whiten, phase_correct etc. This is easier to understand, easier to test and easier to customise. I would much prefer code like

whitened_wref = whiten(wref_data)
channel_weights = svd_weighting(whitened_wref)
combined_data = np.average(wsup_data, axis=1, weights=channel_weights)

rather than

combined_data = RAMRS_SVD_weighting(wsup_data, phase_correct=True, whiten=True, W = channel_weights)

"""
This channel combination approach operates on raw MRS (spectral) data,
prior to averaging repetitions from multiple acquisitions. It is useful in
cases where there is instability in frequency and phase shifts across
Member:

This is actually the same method of calculating the channel weights, as you call the existing function. Essentially the major difference is that this function expects a 3d array instead of 2d. I think a far better solution would be to modify the existing svd_weighting() function so that it accepts an axis parameter to define which axis the channels run over, then internally it reshapes all other axes into a single line and calculates the channel weights that way. That would then be a more seamless improvement of the existing code.
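The axis-parameter refactor suggested here might look something like this sketch (`svd_weighting_axis` is a hypothetical name, not an existing Suspect function):

```python
import numpy

def svd_weighting_axis(data, axis=0):
    # Move the channel axis to the front, flatten every other axis into
    # one long row of samples, then compute the usual rank-1 SVD weights
    moved = numpy.moveaxis(data, axis, 0)
    flat = moved.reshape(moved.shape[0], -1)
    p, _, v = numpy.linalg.svd(flat, full_matrices=False)
    weights = p[:, 0].conjugate()
    weights = weights / numpy.sum(numpy.abs(weights))
    return weights * numpy.exp(-1j * numpy.angle(v[0, 0]))

# A 3D (reps, channels, samples) array reduces to one weight per channel
data = numpy.random.default_rng(1).normal(size=(5, 4, 64)) + 0j
w = svd_weighting_axis(data, axis=1)
```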

Contributor Author:

My original idea was to do it exactly like you described - but I thought instead it might be a good idea to actually create a function that specifically references this paper, and makes it explicit that it is replicating the method described therein. It becomes sort of self-explanatory in that way, and people can follow along with the paper and see exactly what is going on. So often I'll see an algorithm implemented in a paper, try to find some code that replicates it somewhere, end up cobbling something together from bits and pieces of other libraries, but I'm never quite sure if it's done exactly right. I've worked out the kinks on this approach and tested it so extensively that I thought it would be good to present it in this kind of monolithic way (which it definitely is!) and use the other functions from Suspect internally as needed. An alternative would be to put all of this into an example script somewhere, but then people will have to slog through it, if they find it at all. Anyway, that was my thought process on the whole thing. I'm glad to change it to the method you described, though. I already have a version of that somewhere in my code base.

Member:

Interesting that we both thought about this in the same way and came to opposite conclusions. I am trying quite hard to maintain an easy to understand set of example notebooks on readthedocs, there is already a notebook on channel combination that looks at different ways of weighting the channels including the SVD method. My plan was to either add another notebook later in the series to cover noise whitening, or to go back and incorporate that into the existing notebook. The great thing about the notebooks is that they can combine explanation and plots inline with the code, so they help with understanding the concepts much better than simple comments. Although your docstring is excellent, most end users aren't going to dig through the codebase to understand how the algorithm works, they just want some functions they can call.
I think that a chunk of the code inside this function could do with being extracted into a whiten() function which can be called separately. Ultimately I think that the implementation of this function should be essentially just calling about 3 other functions from the module, and that may make it superfluous. If you don't have the time to work on that, then I am happy to take it on.
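An extracted `whiten()` step along the lines proposed here could be sketched as follows (the `whiten` name and the Cholesky-based construction of W are assumptions for illustration; the PR itself derives W from an eigendecomposition):

```python
import numpy

def whiten(data, W):
    # Apply a precomputed whitening matrix W across the channel axis
    # of a (channels, samples) array
    return numpy.dot(W, data)

# With W the inverse Cholesky factor of the noise covariance, the
# whitened channels have (approximately) identity covariance
rng = numpy.random.default_rng(2)
mix = numpy.array([[1.0, 0.5], [0.0, 1.0]])
noise = mix @ rng.normal(size=(2, 20000))

C = numpy.cov(noise)
W = numpy.linalg.inv(numpy.linalg.cholesky(C))
whitened = whiten(noise, W)
```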

averages. Coil weights are determined from the entire 3D array, and applied
to each average. Frequency and phase correction can then be performed on
the channel-combined acquisitions, prior to final averaging.

This channel combination method is described in the 'Multiple Acquisitions'
section of:

Rodgers, Christopher T., and Matthew D. Robson. "Receive array magnetic
resonance spectroscopy: whitened singular value decomposition (WSVD) gives
optimal Bayesian solution." Magnetic resonance in medicine 63.4
(2010): 881-891.

Performing signal whitening, prior to channel combination, is optional. You
can also (optionally) pass in a whitening matrix, W, if it has already been
computed, or derive one from the data, as described in the reference.

Parameters
----------
data : MRSSpectrum
Input spectrum matrix for channel combination. The data should be a 3D
array, with dimensions [num_acquisitions, n_channels, n_samples].

phase_correct : bool
If True, perform a 0th order phase correction along with the channel
weighting

whiten : bool
(Optional) If True, whiten the array prior to channel combination

W : array
(Optional) Whitening array to use. If W = None and whiten = True, the
array will be computed from the data

noise_range : tuple
(Optional) (lower bound, upper bound) of the frequency range to use for
deriving the whitening matrix. Specified in Hz.

Returns
-------
combined_data : MRSSpectrum
Output spectrum, after coil combination.

chann_weights_phase_corrected : array
Channel weights derived from the data, including the 0th order phase
correction term. If phase_correct = True, these are the weights that
were applied to the data

channel_weights : array
Weights applied to each channel, normalized so they sum to 1. If
phase_correct = False, these are the weights that were applied to the
averages.

phase_shift : float
0th order phase correction term derived from the SVD.

W : array
Whitening matrix used, or None if pre-whitening was not done.

"""

n_reps, n_chans, n_samps = data.shape

# Re-structure the data to get it in the right format:
# channels on the last axis, then stack repetitions and samples into rows
R = numpy.transpose(data, (0, 2, 1))
d = R.shape
R = numpy.reshape(R, (d[0] * d[1], d[2]))

if whiten:
if W is None:
# Derive the whitening matrix from this data
# Isolate a subset of the spectrum that is in the noise region
noise_idx = numpy.logical_and(data.frequency_axis() >= noise_range[0],
                              data.frequency_axis() <= noise_range[1])
N = data[:, :, noise_idx]
# Reshape the noise-region data into a 2D (observations, channels) array
N = numpy.transpose(N, (0, 2, 1))
d = N.shape
N = numpy.reshape(N, (d[0] * d[1], d[2]))
W = get_whitening_matrix(N)

# Apply whitening matrix to the ensemble
S = numpy.dot(W,R.T)

else:
# Do not pre-whiten, run SVD on the raw data
S = R.T

chann_weights_phase_corrected, channel_weights, phase_shift = svd_weighting(S)

# Apply the weights
if phase_correct:
Q = combine_channels(S, weights=chann_weights_phase_corrected)
else:
Q = combine_channels(S, weights=channel_weights)

combined_spectrum = numpy.reshape(Q, (n_reps, n_samps))
combined_data = data.inherit(combined_spectrum)

return combined_data, chann_weights_phase_corrected, channel_weights, phase_shift, W



def get_whitening_matrix(data):

    # Channel covariance of the noise samples (rows are observations)
    C = numpy.cov(data, rowvar=False)

    w, v = numpy.linalg.eigh(C)  # Eigenvalue decomp of the (Hermitian) covariance matrix
    # Scaling matrix
    D = numpy.diag(2 * (w ** (-0.5)))

    # Whitening matrix: project onto the eigenbasis first, then rescale, so
    # that numpy.dot(W, x) decorrelates the channels. (The previous v-then-D
    # ordering does not diagonalize the covariance when W is applied on the
    # left, as it is in RAMRS_SVD_weighting.)
    W = numpy.dot(D, v.conj().T)
    return W
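A self-contained numerical check of eigendecomposition-based whitening (synthetic numbers; note the ordering: the scaling is applied after projecting onto the eigenbasis, i.e. `W = D @ v.T`, which is what makes the whitened covariance diagonal, and the factor of 2 in D makes it 4 times the identity):

```python
import numpy

rng = numpy.random.default_rng(3)
A = rng.normal(size=(3, 3))
C = A @ A.T + 3 * numpy.eye(3)   # a symmetric positive-definite "covariance"

w, v = numpy.linalg.eigh(C)      # C == v @ diag(w) @ v.T
D = numpy.diag(2 * w ** -0.5)
W = D @ v.T                      # rescale after projecting onto the eigenbasis

whitened_cov = W @ C @ W.T       # comes out as 4 * identity
```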