@@ -22,12 +22,11 @@ production environments.

## Quickstart

- Starting a worker on a node, with debug flag set to true on configuration file
+ Starting a worker on a node using Redis as the backend

```
- $ tq redis
- Listening for jobs on 127.0.0.1:9000
- Response actor started
+ $ tq redis-worker --log-level DEBUG
+ 2019-04-26 23:15:28 - tasq.remote.supervisor-17903: Worker type: Actor
```

In a Python shell
@@ -42,77 +41,59 @@ Warning: disable autoreload in ipython_config.py to improve performance.

In [1]: from tasq.queue import TasqQueue

- In [2]: tq = TasqQueue(backend='redis://localhost:6379/0?name=test ')
+ In [2]: tq = TasqQueue(backend='redis://localhost:6379')

- In [3]: def dup(n):
- ...: return n * 2
+ In [3]: def fib(n):
+    ...:     if n == 0:
+    ...:         return 0
+    ...:     a, b = 0, 1
+    ...:     for _ in range(n - 1):
+    ...:         a, b = b, a + b
+    ...:     return b
   ...:

- In [4]: fut = tq.put(dup, 5, name='task-01')
-
- In [5]: fut
- Out[5]: <TasqFuture at 0x7f2851826518 state=finished returned JobResult>
-
- In [6]: fut.unwrap()
- Out[6]: 10
- ```
-
- ** Lower-level TasqClient**
-
- ```
- Python 3.6.5 (default, Apr 12 2018, 22:45:43)
- [GCC 7.3.1 20180312] on linux
- Type "help", "copyright", "credits" or "license" for more information.
- >>> from tasq import TasqClient
- >>> tc = TasqClient('127.0.0.1', 9000)
- >>> tc.connect()
- >>>
- >>> def foo(num):
- >>> import time
- >>> import random
- >>> r = random.randint(0, 2)
- >>> time.sleep(r)
- >>> return f'Foo - {random.randint(0, num)}'
- >>>
- >>> fut = tc.schedule(foo, 5, name='Task-1')
- >>> fut
- >>> <Future at 0x7f7d6e048160 state=pending>
- >>> fut.result
- >>>
- >>> # After some time, to let worker complete the job
- >>> fut.result
- >>> 'Foo - 2'
- >>> tc.results
- >>> {'Task-1': <Future at 0x7f7d6e048160 state=finished returned str>}
- >>>
- >>> tc.schedule_blocking(foo, 5, name='Task-2')
- >>> 'Foo - 4'
- >>>
- >>> tc.results
- >>> {'Task-1': <Future at 0x7f7d6e048160 state=finished returned str>,
- >>> 'Task-2': <Future at 0x7f7d6e047268 state=finished returned str>}
- ```
-
- Scheduling a job after a delay
-
- ```
- >>> fut = tc.schedule(foo, 5, name='Delayed-Task', delay=5)
- >>> tc.results
- >>> {'Delayed-Task': <Future at 0x7f7d6e044208 state=pending>}
- >>> # Wait 5 seconds
- >>> tc.results
- >>> {'Delayed-Task': <Future at 0x7f7d6e044208 state=finished returned str>}
- >>> fut.result()
- >>> 'Foo - 2'
+ In [4]: # Asynchronous execution
+ In [5]: fut = tq.put(fib, 50, name='fib-async')
+
+ In [6]: fut
+ Out[6]: <TasqFuture at 0x7f2851826518 state=finished returned JobResult>
+
+ In [7]: fut.unwrap()
+ Out[7]: 12586269025
+
+ In [8]: res = tq.put_blocking(fib, 50, name='fib-sync')
+
+ In [9]: res.unwrap()
+ Out[9]: 12586269025
```

- Scheduling a task to be executed continously in a defined interval
+ Scheduling jobs after a delay
+
+ ```
+ In [10]: fut = tq.put(fib, 5, name='fib-delayed', delay=5)
+
+ In [11]: fut
+ Out[11]: <TasqFuture at 0x7f2951856418 state=pending>
+
+ In [12]: # wait 5 seconds
+
+ In [13]: fut.unwrap()
+ Out[13]: 5
+
+ In [14]: tq.results
+ Out[14]:
+ {'fib-async': <TasqFuture at 0x7f2851826518 state=finished returned JobResult>,
+  'fib-sync': <TasqFuture at 0x7f7d6e047268 state=finished returned JobResult>,
+  'fib-delayed': <TasqFuture at 0x7f2951856418 state=finished returned JobResult>}
```
- >>> tc.schedule(foo, 5, name='8_seconds_interval_task', eta='8s')
- >>> tc.schedule(foo, 5, name='2_hours_interval_task', eta='2h')
+
+ Scheduling a task to be executed continuously at a defined interval
+
```
+ In [15]: tq.put(fib, 5, name='8_seconds_interval_fib', eta='8s')
+
+ In [16]: tq.put(fib, 5, name='2_hours_interval_fib', eta='2h')
+
+ ```

Delayed and interval tasks are supported even when scheduling in a blocking manner.
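
For instance, a blocking delayed call might look like the following sketch; note this assumes `put_blocking` accepts the same `delay` keyword as `put`, which this README does not show explicitly.

```
In [17]: # assumption: put_blocking supports the same delay keyword as put
In [18]: res = tq.put_blocking(fib, 10, name='fib-delayed-sync', delay=3)

In [19]: res.unwrap()
Out[19]: 55
```
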
Tasq also supports an optional static configuration file, in the `tasq.settings.py` module is
@@ -124,27 +105,18 @@ By setting the `-f` flag it is possible to also set a location of a configuratio
filesystem

```
- $ tq -- worker -f path/to/conf/conf.json
+ $ tq worker -c path/to/conf/conf.json
```

A worker can be started by specifying the type of subworker we want:

```
- $ tq - -worker --worker-type process
+ $ tq rabbitmq-worker --worker-type process
```

Using a `process` type subworker it is possible to use a distributed queue for parallel execution,
useful when the majority of the jobs are CPU bound instead of I/O bound (actors are preferable in
that case).

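As an illustration, here is a CPU-bound job scheduled through the same `TasqQueue` API shown above; the `busy_sum` function is made up for this sketch and is not part of tasq.

```
from tasq.queue import TasqQueue

tq = TasqQueue(backend='redis://localhost:6379')

# Hypothetical CPU-bound job: a pure-Python loop that keeps a core busy,
# so it benefits from process-based parallelism rather than actor threads.
def busy_sum(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

fut = tq.put(busy_sum, 10_000_000, name='cpu-bound-job')
print(fut.unwrap())
```
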
- Multiple workers can be started in the same node, this will start two worker process ready to
- receive jobs.
-
- ```
- $ tq --workers 127.0.0.1:9000:9001, 127.0.0.1:9090:9091
- Listening for jobs on 127.0.0.1:9000
- Listening for jobs on 127.0.0.1:9090
- ```
-
If jobs are scheduled for execution on a disconnected client, or remote workers are not up at the
time of the scheduling, all jobs will be enqueued for later execution. This means that there's no
need to actually start workers before scheduling jobs; as soon as the first worker comes up, all jobs will be sent
@@ -201,15 +173,13 @@ See the [CHANGES](CHANGES.md) file.

## TODO:

- - [ ] Possibility of a broker to persist jobs (classic task queue celery like)
+ - [x] Possibility of a broker to persist jobs (classic Celery-like task queue)
+ - [x] Delayed tasks and scheduled cron tasks
+ - [x] Configuration handling throughout the code
+ - [x] Better explanation of the implementation and actors defined
+ - [x] Improve CLI options
- [ ] Check for pynacl for security on pickled data
- [ ] Tests
- - [ ] A meaningful client pool
- - [x] Debugging multiprocessing start for more workers on the same node
- [ ] Refactor of existing code and corner case handling (Still very basic implementation of even
simple heuristics)
- - [x] Delayed tasks and scheduled cron tasks
- - [x] Configuration handling throughout the code
- - [x] Better explanation of the implementation and actors defined
- - [ ] Improve CLI options
- [ ] Dockerfile