docs/src/index.md (1 addition & 1 deletion)
@@ -3,7 +3,7 @@
*Collection of linear operators for multi-dimensional signal and imaging tasks*

-## Purpose
+## Introduction

This package contains a collection of linear operators that are particularly useful for multi-dimensional signal and image processing tasks. Linear operators, or linear maps, behave like matrices in a matrix-vector product but are not necessarily matrices themselves. They can use more efficient algorithms and can defer their computation until they are multiplied with a vector.
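Since the paragraph above stays abstract, a small sketch of what "behaving like a matrix" means in practice may help; it uses the `FFTOp` that appears later in these docs, and the problem size and test image are made up for illustration:

```julia
using LinearOperatorCollection

N = 8                                  # hypothetical size, for illustration only
image = rand(ComplexF64, N, N)         # hypothetical test image

# The operator acts like an N^2 x N^2 matrix in products, but no dense
# matrix is ever formed; the product below simply dispatches to an FFT.
fop = FFTOp(ComplexF64, shape = (N, N))
freqs = fop * vec(image)

# Adjoints are equally lazy and cheap to form.
back = adjoint(fop) * freqs
```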
# There are two different ways one can implement a custom LinearOperator. The first one is to directly implement an operator as a LinearOperator from LinearOperators.jl:

# GPU kernels generally require all of their arguments to exist on the GPU. This is not necessarily the case for matrix-free operators as provided by LinearOperators or LinearOperatorCollection.

# In the case that a matrix-free operator is solely a function call and contains no internal array state, the operator is GPU compatible as long as the method has a GPU-compatible implementation.

# If the operator has internal fields required for its computation, such as temporary arrays for intermediate values or indices, then it needs to move those to the GPU.

# Furthermore, if the operator needs to create a new array during its execution, e.g. when it is used in a non-inplace matrix-vector multiplication or when it is combined with other operators, then the operator needs to specify a storage type. LinearOperatorCollection has several GPU-compatible operators, where the storage type is given by setting the `S` parameter:

# ```julia
# using CUDA # or AMDGPU, Metal, ...
# image_gpu = cu(image)
# ```
using LinearOperatorCollection.LinearOperators
image_gpu = image #hide
storage = Complex.(similar(image_gpu, 0))
fop = FFTOp(eltype(image_gpu), shape = (N, N), S = typeof(storage))
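A brief usage sketch may help here; it assumes a CUDA-capable device (so that `image_gpu = cu(image)` from the commented snippet above is actually used) and that `N` and `image` are defined as in the rest of the docs:

```julia
# Because `S` was set to a GPU-compatible storage type, buffers the operator
# allocates internally live on the same device as the input.
freqs_gpu = fop * vec(image_gpu)

# The adjoint is formed lazily and applied on the device as well.
back_gpu = adjoint(fop) * freqs_gpu
```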
docs/src/literate/tutorials/product.jl (4 additions & 3 deletions)
@@ -3,16 +3,17 @@ include("../../util.jl") #hide
# This operator describes the product, or composition, of two operators:
weights = collect(range(0, 1, length = N*N))
wop = WeightingOp(weights)
-fop = FFTOp(ComplexF64, shape = (N, N))
+fop = FFTOp(ComplexF64, shape = (N, N));
# A feature of LinearOperators.jl is that operators can be cheaply transposed, conjugated and multiplied, and only in the case of a matrix-vector product is the combined operation evaluated.
tmp_op = wop * fop
tmp_freqs = tmp_op * vec(image)

# As with the WeightingOp, the main difference of the product operator provided by LinearOperatorCollection is its dedicated type, which allows for code specialisation.
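As a quick plausibility check (not part of the tutorial itself), the lazily composed operator should give the same result as applying the two factors by hand, since `wop` simply scales each entry by `weights`:

```julia
# Apply the FFT first and the weighting second, mirroring what `wop * fop` does lazily.
manual = weights .* (fop * vec(image))
tmp_freqs ≈ manual    # expected to hold up to floating-point round-off
```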