
Commit 1ae105a

Author: Vinod Koul (committed)

Merge branch 'next' into for-linus-3.0

2 parents 02f8c6a + 5a42fb9, commit 1ae105a

30 files changed, +2541 -1011 lines changed

Documentation/dmaengine.txt  (+164 -70)
@@ -10,87 +10,181 @@ NOTE: For DMA Engine usage in async_tx please see:
 Below is a guide to device driver writers on how to use the Slave-DMA API of the
 DMA Engine. This is applicable only for slave DMA usage.
 
-The slave DMA usage consists of following steps
+The slave DMA usage consists of the following steps:
 1. Allocate a DMA slave channel
 2. Set slave and controller specific parameters
 3. Get a descriptor for transaction
-4. Submit the transaction and wait for callback notification
+4. Submit the transaction
+5. Issue pending requests and wait for callback notification
 
 1. Allocate a DMA slave channel
-Channel allocation is slightly different in the slave DMA context, client
-drivers typically need a channel from a particular DMA controller only and even
-in some cases a specific channel is desired. To request a channel
-dma_request_channel() API is used.
-
-Interface:
-struct dma_chan *dma_request_channel(dma_cap_mask_t mask,
-		dma_filter_fn filter_fn,
-		void *filter_param);
-where dma_filter_fn is defined as:
-typedef bool (*dma_filter_fn)(struct dma_chan *chan, void *filter_param);
-
-When the optional 'filter_fn' parameter is set to NULL dma_request_channel
-simply returns the first channel that satisfies the capability mask. Otherwise,
-when the mask parameter is insufficient for specifying the necessary channel,
-the filter_fn routine can be used to disposition the available channels in the
-system. The filter_fn routine is called once for each free channel in the
-system. Upon seeing a suitable channel filter_fn returns DMA_ACK which flags
-that channel to be the return value from dma_request_channel. A channel
-allocated via this interface is exclusive to the caller, until
-dma_release_channel() is called.
+
+Channel allocation is slightly different in the slave DMA context:
+client drivers typically need a channel from a particular DMA
+controller only, and in some cases even a specific channel is desired.
+To request a channel the dma_request_channel() API is used.
+
+Interface:
+struct dma_chan *dma_request_channel(dma_cap_mask_t mask,
+		dma_filter_fn filter_fn,
+		void *filter_param);
+where dma_filter_fn is defined as:
+typedef bool (*dma_filter_fn)(struct dma_chan *chan, void *filter_param);
+
+The 'filter_fn' parameter is optional, but highly recommended for
+slave and cyclic channels as they typically need to obtain a specific
+DMA channel.
+
+When the optional 'filter_fn' parameter is NULL, dma_request_channel()
+simply returns the first channel that satisfies the capability mask.
+
+Otherwise, the 'filter_fn' routine will be called once for each free
+channel which has a capability in 'mask'. 'filter_fn' is expected to
+return 'true' when the desired DMA channel is found.
+
+A channel allocated via this interface is exclusive to the caller,
+until dma_release_channel() is called.
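Illustrative sketch (not from the patch above): requesting a DMA_SLAVE capable
channel with a made-up filter. The my_* names and the chan_id test are
assumptions; the filter_param convention is entirely controller specific.

	#include <linux/dmaengine.h>

	/* Hypothetical filter: match a controller-specific request line */
	static bool my_dma_filter(struct dma_chan *chan, void *filter_param)
	{
		int wanted_line = *(int *)filter_param;

		return chan->chan_id == wanted_line;
	}

	static struct dma_chan *my_request_tx_channel(void)
	{
		dma_cap_mask_t mask;
		int line = 3;			/* assumed request line */

		dma_cap_zero(mask);
		dma_cap_set(DMA_SLAVE, mask);

		/* channel is exclusive until dma_release_channel() */
		return dma_request_channel(mask, my_dma_filter, &line);
	}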

 2. Set slave and controller specific parameters
-Next step is always to pass some specific information to the DMA driver. Most of
-the generic information which a slave DMA can use is in struct dma_slave_config.
-It allows the clients to specify DMA direction, DMA addresses, bus widths, DMA
-burst lengths etc. If some DMA controllers have more parameters to be sent then
-they should try to embed struct dma_slave_config in their controller specific
-structure. That gives flexibility to client to pass more parameters, if
-required.
-
-Interface:
-int dmaengine_slave_config(struct dma_chan *chan,
-			  struct dma_slave_config *config)
+
+The next step is always to pass some specific information to the DMA
+driver. Most of the generic information which a slave DMA can use
+is in struct dma_slave_config. This allows the clients to specify
+DMA direction, DMA addresses, bus widths, DMA burst lengths etc.
+for the peripheral.
+
+If some DMA controllers have more parameters to be sent then they
+should try to embed struct dma_slave_config in their controller
+specific structure. That gives the client flexibility to pass more
+parameters, if required.
+
+Interface:
+int dmaengine_slave_config(struct dma_chan *chan,
+			  struct dma_slave_config *config)
+
+Please see the dma_slave_config structure definition in dmaengine.h
+for a detailed explanation of the struct members. Please note
+that the 'direction' member will be going away, as it duplicates the
+direction given in the prepare call.
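Illustrative sketch (not from the patch above): filling in dma_slave_config
for a memory-to-peripheral transfer. The helper name, the FIFO address and
the width/burst values are made-up examples.

	#include <linux/dmaengine.h>
	#include <linux/string.h>

	static int my_config_tx(struct dma_chan *chan, dma_addr_t fifo_addr)
	{
		struct dma_slave_config cfg;

		memset(&cfg, 0, sizeof(cfg));
		cfg.direction = DMA_TO_DEVICE;	/* note: slated to go away */
		cfg.dst_addr = fifo_addr;	/* peripheral FIFO address */
		cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
		cfg.dst_maxburst = 8;

		return dmaengine_slave_config(chan, &cfg);
	}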

 3. Get a descriptor for transaction
-For slave usage the various modes of slave transfers supported by the
-DMA-engine are:
-slave_sg - DMA a list of scatter gather buffers from/to a peripheral
-dma_cyclic - Perform a cyclic DMA operation from/to a peripheral till the
+
+For slave usage the various modes of slave transfers supported by the
+DMA-engine are:
+
+slave_sg	- DMA a list of scatter gather buffers from/to a peripheral
+dma_cyclic	- Perform a cyclic DMA operation from/to a peripheral till the
 	operation is explicitly stopped.
-The non NULL return of this transfer API represents a "descriptor" for the given
-transaction.
-
-Interface:
-struct dma_async_tx_descriptor *(*chan->device->device_prep_dma_sg)(
-		struct dma_chan *chan,
-		struct scatterlist *dst_sg, unsigned int dst_nents,
-		struct scatterlist *src_sg, unsigned int src_nents,
+
+A non-NULL return of this transfer API represents a "descriptor" for
+the given transaction.
+
+Interface:
+struct dma_async_tx_descriptor *(*chan->device->device_prep_slave_sg)(
+		struct dma_chan *chan, struct scatterlist *sgl,
+		unsigned int sg_len, enum dma_data_direction direction,
 		unsigned long flags);
-struct dma_async_tx_descriptor *(*chan->device->device_prep_dma_cyclic)(
+
+struct dma_async_tx_descriptor *(*chan->device->device_prep_dma_cyclic)(
 		struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
 		size_t period_len, enum dma_data_direction direction);
 
-4. Submit the transaction and wait for callback notification
-To schedule the transaction to be scheduled by dma device, the "descriptor"
-returned in above (3) needs to be submitted.
-To tell the dma driver that a transaction is ready to be serviced, the
-descriptor->submit() callback needs to be invoked. This chains the descriptor to
-the pending queue.
-The transactions in the pending queue can be activated by calling the
-issue_pending API. If channel is idle then the first transaction in queue is
-started and subsequent ones queued up.
-On completion of the DMA operation the next in queue is submitted and a tasklet
-triggered. The tasklet would then call the client driver completion callback
-routine for notification, if set.
-Interface:
-void dma_async_issue_pending(struct dma_chan *chan);
-
-==============================================================================
-
-Additional usage notes for dma driver writers
-1/ Although DMA engine specifies that completion callback routines cannot submit
-any new operations, but typically for slave DMA subsequent transaction may not
-be available for submit prior to callback routine being called. This requirement
-is not a requirement for DMA-slave devices. But they should take care to drop
-the spin-lock they might be holding before calling the callback routine
+The peripheral driver is expected to have mapped the scatterlist for
+the DMA operation prior to calling device_prep_slave_sg, and must
+keep the scatterlist mapped until the DMA operation has completed.
+The scatterlist must be mapped using the DMA struct device. So,
+normal setup should look like this:
+
+	nr_sg = dma_map_sg(chan->device->dev, sgl, sg_len, direction);
+	if (nr_sg == 0)
+		/* error */
+
+	desc = chan->device->device_prep_slave_sg(chan, sgl, nr_sg,
+			direction, flags);
+
+Once a descriptor has been obtained, the callback information can be
+added and the descriptor must then be submitted. Some DMA engine
+drivers may hold a spinlock between a successful preparation and
+submission, so it is important that these two operations are closely
+paired.
+
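Illustrative sketch (not from the patch above): attaching the callback
information right after a successful preparation. The my_* names are
assumptions; the completion object is one possible way to wait, see step 5.

	#include <linux/completion.h>
	#include <linux/dmaengine.h>

	/* Runs from the DMA engine's tasklet when the transfer finishes */
	static void my_dma_complete(void *param)
	{
		complete(param);
	}

	static struct dma_async_tx_descriptor *
	my_prep_tx(struct dma_chan *chan, struct scatterlist *sgl, int nr_sg,
		   struct completion *done)
	{
		struct dma_async_tx_descriptor *desc;

		desc = chan->device->device_prep_slave_sg(chan, sgl, nr_sg,
				DMA_TO_DEVICE,
				DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
		if (!desc)
			return NULL;

		/* callback must be set before dmaengine_submit() */
		desc->callback = my_dma_complete;
		desc->callback_param = done;
		return desc;
	}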
+Note:
+	Although the async_tx API specifies that completion callback
+	routines cannot submit any new operations, this is not the
+	case for slave/cyclic DMA.
+
+	For slave DMA, the subsequent transaction may not be available
+	for submission prior to the callback function being invoked, so
+	slave DMA callbacks are permitted to prepare and submit a new
+	transaction.
+
+	For cyclic DMA, a callback function may wish to terminate the
+	DMA via dmaengine_terminate_all().
+
+	Therefore, it is important that DMA engine drivers drop any
+	locks before calling the callback function, as holding them
+	may cause a deadlock.
+
+	Note that callbacks will always be invoked from the DMA
+	engine's tasklet, never from interrupt context.
+
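Illustrative sketch (not from the patch above): a cyclic transfer whose
per-period callback terminates the channel on shutdown, as permitted by the
note above. The my_* names and the shutdown flag are assumptions.

	#include <linux/dmaengine.h>
	#include <linux/errno.h>

	struct my_stream {			/* hypothetical client state */
		struct dma_chan *chan;
		bool shutting_down;
	};

	/* Runs from the DMA engine's tasklet once per completed period */
	static void my_period_done(void *param)
	{
		struct my_stream *stream = param;

		if (stream->shutting_down)
			dmaengine_terminate_all(stream->chan);
		/* otherwise advance ring-buffer pointers, wake readers, ... */
	}

	static int my_start_cyclic(struct my_stream *stream, dma_addr_t buf,
				   size_t buf_len, size_t period_len)
	{
		struct dma_chan *chan = stream->chan;
		struct dma_async_tx_descriptor *desc;

		desc = chan->device->device_prep_dma_cyclic(chan, buf,
				buf_len, period_len, DMA_TO_DEVICE);
		if (!desc)
			return -ENOMEM;

		desc->callback = my_period_done;
		desc->callback_param = stream;
		dmaengine_submit(desc);		/* step 4 below */
		dma_async_issue_pending(chan);	/* step 5 below */
		return 0;
	}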
+4. Submit the transaction
+
+Once the descriptor has been prepared and the callback information
+added, it must be placed on the DMA engine driver's pending queue.
+
+Interface:
+dma_cookie_t dmaengine_submit(struct dma_async_tx_descriptor *desc)
+
+This returns a cookie which can be used to check the progress of DMA
+engine activity via other DMA engine calls not covered in this document.
+
+dmaengine_submit() will not start the DMA operation; it merely adds
+it to the pending queue. For this, see step 5, dma_async_issue_pending.
+
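Illustrative sketch (not from the patch above): submitting the descriptor and
keeping the cookie for later status queries.

	dma_cookie_t cookie;

	cookie = dmaengine_submit(desc);
	if (dma_submit_error(cookie))
		/* descriptor was rejected; clean up and bail out */
		return -EIO;

	/* save 'cookie' for dma_async_is_tx_complete(), see further below */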
+5. Issue pending DMA requests and wait for callback notification
+
+The transactions in the pending queue can be activated by calling the
+issue_pending API. If the channel is idle then the first transaction
+in the queue is started and subsequent ones queued up.
+
+On completion of each DMA operation, the next in queue is started and
+a tasklet triggered. The tasklet will then call the client driver
+completion callback routine for notification, if set.
+
+Interface:
+void dma_async_issue_pending(struct dma_chan *chan);
+
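Illustrative sketch (not from the patch above): kicking the engine and
sleeping until the completion used in the earlier callback sketch is
signalled. 'done' is assumed to have been set up with init_completion() and
passed as callback_param; the 100ms timeout is arbitrary.

	dma_async_issue_pending(chan);

	if (!wait_for_completion_timeout(&done, msecs_to_jiffies(100))) {
		/* transfer did not finish in time; stop the channel */
		dmaengine_terminate_all(chan);
		return -ETIMEDOUT;
	}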
+Further APIs:
+
+1. int dmaengine_terminate_all(struct dma_chan *chan)
+
+This causes all activity for the DMA channel to be stopped, and may
+discard data in the DMA FIFO which hasn't been fully transferred.
+No callback functions will be called for any incomplete transfers.
+
+2. int dmaengine_pause(struct dma_chan *chan)
+
+This pauses activity on the DMA channel without data loss.
+
+3. int dmaengine_resume(struct dma_chan *chan)
+
+Resume a previously paused DMA channel. It is invalid to resume a
+channel which is not currently paused.
+
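Illustrative sketch (not from the patch above): pausing and resuming a channel
around a period where the peripheral cannot accept data. The helper name is
hypothetical, and not every DMA engine driver supports pausing.

	ret = dmaengine_pause(chan);
	if (ret)
		return ret;	/* pause not supported or failed */

	my_wait_until_peripheral_ready();	/* hypothetical */

	ret = dmaengine_resume(chan);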
+4. enum dma_status dma_async_is_tx_complete(struct dma_chan *chan,
+		dma_cookie_t cookie, dma_cookie_t *last, dma_cookie_t *used)
+
+This can be used to check the status of the channel. Please see
+the documentation in include/linux/dmaengine.h for a more complete
+description of this API.
+
+This can be used in conjunction with dma_async_is_complete() and
+the cookie returned from 'descriptor->submit()' to check for
+completion of a specific DMA transaction.
+
+Note:
+	Not all DMA engine drivers can return reliable information for
+	a running DMA channel. It is recommended that DMA engine users
+	pause or stop (via dmaengine_terminate_all) the channel before
+	using this API.
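Illustrative sketch (not from the patch above): checking a specific
transaction using the cookie returned by dmaengine_submit(); DMA_SUCCESS is
the completed status value in this kernel version, and the helper name is
hypothetical.

	#include <linux/dmaengine.h>

	static bool my_tx_done(struct dma_chan *chan, dma_cookie_t cookie)
	{
		dma_cookie_t last, used;

		if (dma_async_is_tx_complete(chan, cookie, &last, &used)
							== DMA_SUCCESS)
			return true;

		/* or decide purely from the channel's last/used cookies */
		return dma_async_is_complete(cookie, last, used) == DMA_SUCCESS;
	}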

Documentation/spi/ep93xx_spi  (+10)
@@ -88,6 +88,16 @@ static void __init ts72xx_init_machine(void)
 				ARRAY_SIZE(ts72xx_spi_devices));
 }
 
+The driver can also use DMA for the transfers. In this case ts72xx_spi_info
+becomes:
+
+static struct ep93xx_spi_info ts72xx_spi_info = {
+	.num_chipselect	= ARRAY_SIZE(ts72xx_spi_devices),
+	.use_dma	= true,
+};
+
+Note that CONFIG_EP93XX_DMA should be enabled as well.
+
 Thanks to
 =========
 Martin Guy, H. Hartley Sweeten and others who helped me during development of

arch/arm/mach-ep93xx/Makefile  (+3 -1)
@@ -1,11 +1,13 @@
 #
 # Makefile for the linux kernel.
 #
-obj-y			:= core.o clock.o dma-m2p.o gpio.o
+obj-y			:= core.o clock.o gpio.o
 obj-m			:=
 obj-n			:=
 obj-			:=
 
+obj-$(CONFIG_EP93XX_DMA)	+= dma.o
+
 obj-$(CONFIG_MACH_ADSSPHERE)	+= adssphere.o
 obj-$(CONFIG_MACH_EDB93XX)	+= edb93xx.o
 obj-$(CONFIG_MACH_GESBC9312)	+= gesbc9312.o

arch/arm/mach-ep93xx/core.c  (+5 -1)
@@ -492,11 +492,15 @@ static struct resource ep93xx_spi_resources[] = {
 	},
 };
 
+static u64 ep93xx_spi_dma_mask = DMA_BIT_MASK(32);
+
 static struct platform_device ep93xx_spi_device = {
 	.name		= "ep93xx-spi",
 	.id		= 0,
 	.dev		= {
-		.platform_data = &ep93xx_spi_master_data,
+		.platform_data		= &ep93xx_spi_master_data,
+		.coherent_dma_mask	= DMA_BIT_MASK(32),
+		.dma_mask		= &ep93xx_spi_dma_mask,
 	},
 	.num_resources	= ARRAY_SIZE(ep93xx_spi_resources),
 	.resource	= ep93xx_spi_resources,
