NOTE: For DMA Engine usage in async_tx please see:

Below is a guide for device driver writers on how to use the Slave-DMA API of the
DMA Engine. This is applicable only for slave DMA usage.

The slave DMA usage consists of the following steps:
1. Allocate a DMA slave channel
2. Set slave and controller specific parameters
3. Get a descriptor for transaction
4. Submit the transaction
5. Issue pending requests and wait for callback notification

1. Allocate a DMA slave channel

   Channel allocation is slightly different in the slave DMA context:
   client drivers typically need a channel from a particular DMA
   controller only, and in some cases a specific channel is desired.
   To request a channel, the dma_request_channel() API is used.

   Interface:
        struct dma_chan *dma_request_channel(dma_cap_mask_t mask,
                        dma_filter_fn filter_fn,
                        void *filter_param);
   where dma_filter_fn is defined as:
        typedef bool (*dma_filter_fn)(struct dma_chan *chan, void *filter_param);

   The 'filter_fn' parameter is optional, but highly recommended for
   slave and cyclic channels, as they typically need to obtain a specific
   DMA channel.

   When the optional 'filter_fn' parameter is NULL, dma_request_channel()
   simply returns the first channel that satisfies the capability mask.

   Otherwise, the 'filter_fn' routine will be called once for each free
   channel which has a capability in 'mask'. 'filter_fn' is expected to
   return 'true' when the desired DMA channel is found.

   A channel allocated via this interface is exclusive to the caller
   until dma_release_channel() is called.

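   As an illustrative sketch (the channel-number match in 'my_filter' is
   hypothetical; real filter functions use platform-specific criteria),
   requesting a slave channel could look like this:

        static bool my_filter(struct dma_chan *chan, void *filter_param)
        {
                /* illustrative test only: match a known channel number */
                return chan->chan_id == *(int *)filter_param;
        }

        dma_cap_mask_t mask;
        int my_chan_id = 5;             /* hypothetical channel number */
        struct dma_chan *chan;

        dma_cap_zero(mask);
        dma_cap_set(DMA_SLAVE, mask);
        chan = dma_request_channel(mask, my_filter, &my_chan_id);
        if (!chan)
                /* no matching channel was available */
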
2. Set slave and controller specific parameters

   The next step is always to pass some specific information to the DMA
   driver. Most of the generic information which a slave DMA can use
   is in struct dma_slave_config. This allows the clients to specify
   DMA direction, DMA addresses, bus widths, DMA burst lengths etc.
   for the peripheral.

   If some DMA controllers have more parameters to be sent then they
   should try to embed struct dma_slave_config in their controller
   specific structure. That gives flexibility to the client to pass more
   parameters, if required.

   Interface:
        int dmaengine_slave_config(struct dma_chan *chan,
                                   struct dma_slave_config *config)

   Please see the dma_slave_config structure definition in dmaengine.h
   for a detailed explanation of the struct members. Please note
   that the 'direction' member will be going away, as it duplicates the
   direction given in the prepare call.

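   As a sketch, configuring a channel for memory-to-peripheral transfers
   might look like this ('fifo_phys' stands in for the bus address of a
   hypothetical peripheral FIFO register):

        struct dma_slave_config cfg;

        memset(&cfg, 0, sizeof(cfg));
        cfg.direction = DMA_TO_DEVICE;
        cfg.dst_addr = fifo_phys;       /* hypothetical FIFO bus address */
        cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
        cfg.dst_maxburst = 4;

        if (dmaengine_slave_config(chan, &cfg))
                /* channel cannot be configured as requested */
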
3. Get a descriptor for transaction

   For slave usage the various modes of slave transfers supported by the
   DMA-engine are:

   slave_sg   - DMA a list of scatter gather buffers from/to a peripheral
   dma_cyclic - Perform a cyclic DMA operation from/to a peripheral till the
                operation is explicitly stopped.

   A non-NULL return of this transfer API represents a "descriptor" for
   the given transaction.

   Interface:
        struct dma_async_tx_descriptor *(*chan->device->device_prep_slave_sg)(
                struct dma_chan *chan, struct scatterlist *sgl,
                unsigned int sg_len, enum dma_data_direction direction,
                unsigned long flags);

        struct dma_async_tx_descriptor *(*chan->device->device_prep_dma_cyclic)(
                struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
                size_t period_len, enum dma_data_direction direction);

   The peripheral driver is expected to have mapped the scatterlist for
   the DMA operation prior to calling device_prep_slave_sg, and must
   keep the scatterlist mapped until the DMA operation has completed.
   The scatterlist must be mapped using the DMA struct device. So,
   normal setup should look like this:

        nr_sg = dma_map_sg(chan->device->dev, sgl, sg_len, direction);
        if (nr_sg == 0)
                /* error */

        desc = chan->device->device_prep_slave_sg(chan, sgl, nr_sg,
                        direction, flags);

   Once a descriptor has been obtained, the callback information can be
   added and the descriptor must then be submitted. Some DMA engine
   drivers may hold a spinlock between a successful preparation and
   submission, so it is important that these two operations are closely
   paired.

   Note:
        Although the async_tx API specifies that completion callback
        routines cannot submit any new operations, this is not the
        case for slave/cyclic DMA.

        For slave DMA, the subsequent transaction may not be available
        for submission prior to the callback function being invoked, so
        slave DMA callbacks are permitted to prepare and submit a new
        transaction.

        For cyclic DMA, a callback function may wish to terminate the
        DMA via dmaengine_terminate_all().

        Therefore, it is important that DMA engine drivers drop any
        locks before calling the callback function; calling it with
        locks held may cause a deadlock.

        Note that callbacks will always be invoked from the DMA
        engine's tasklet, never from interrupt context.

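   For illustration, adding the callback information to a prepared
   descriptor might look like this ('my_dma_complete' and 'my_dev' are
   hypothetical names):

        desc->callback = my_dma_complete;  /* void my_dma_complete(void *) */
        desc->callback_param = my_dev;
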
4. Submit the transaction

   Once the descriptor has been prepared and the callback information
   added, it must be placed on the DMA engine driver's pending queue.

   Interface:
        dma_cookie_t dmaengine_submit(struct dma_async_tx_descriptor *desc)

   This returns a cookie that can be used to check the progress of DMA
   engine activity via other DMA engine calls not covered in this
   document.

   dmaengine_submit() will not start the DMA operation, it merely adds
   it to the pending queue. For this, see step 5, dma_async_issue_pending.

5. Issue pending DMA requests and wait for callback notification

   The transactions in the pending queue can be activated by calling the
   issue_pending API. If the channel is idle then the first transaction
   in the queue is started and subsequent ones queued up.

   On completion of each DMA operation, the next in queue is started and
   a tasklet triggered. The tasklet will then call the client driver's
   completion callback routine for notification, if set.

   Interface:
        void dma_async_issue_pending(struct dma_chan *chan);

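   Putting steps 4 and 5 together, a sketch of submitting a prepared
   descriptor and starting the transfer:

        cookie = dmaengine_submit(desc);
        if (dma_submit_error(cookie))
                /* descriptor was not accepted */

        dma_async_issue_pending(chan);
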
Further APIs:

1. int dmaengine_terminate_all(struct dma_chan *chan)

   This causes all activity for the DMA channel to be stopped, and may
   discard data in the DMA FIFO which hasn't been fully transferred.
   No callback functions will be called for any incomplete transfers.

2. int dmaengine_pause(struct dma_chan *chan)

   This pauses activity on the DMA channel without data loss.

3. int dmaengine_resume(struct dma_chan *chan)

   Resume a previously paused DMA channel. It is invalid to resume a
   channel which is not currently paused.

4. enum dma_status dma_async_is_tx_complete(struct dma_chan *chan,
        dma_cookie_t cookie, dma_cookie_t *last, dma_cookie_t *used)

   This can be used to check the status of the channel. Please see
   the documentation in include/linux/dmaengine.h for a more complete
   description of this API.

   This can be used in conjunction with dma_async_is_complete() and
   the cookie returned from 'descriptor->submit()' to check for
   completion of a specific DMA transaction.

   Note:
        Not all DMA engine drivers can return reliable information for
        a running DMA channel. It is recommended that DMA engine users
        pause or stop (via dmaengine_terminate_all) the channel before
        using this API.
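
   As a sketch, checking a specific transaction for completion (pausing
   the channel first, per the note above; 'cookie' is the value returned
   by dmaengine_submit()) might look like:

        enum dma_status status;
        dma_cookie_t last, used;

        dmaengine_pause(chan);
        status = dma_async_is_tx_complete(chan, cookie, &last, &used);
        if (status == DMA_SUCCESS)
                /* the transaction identified by 'cookie' has completed */
        dmaengine_resume(chan);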