From 93058b9d69ea78d82d92c43ccc7dd90f8e7057e6 Mon Sep 17 00:00:00 2001
From: Stan Seibert
Date: Mon, 28 Aug 2017 08:53:31 -0500
Subject: [PATCH] Fix out of date entry on parallelism to also remove NumbaPro reference

---
 docs/source/user/faq.rst | 18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/docs/source/user/faq.rst b/docs/source/user/faq.rst
index 80e6e8a0d85..62a1b67fe40 100644
--- a/docs/source/user/faq.rst
+++ b/docs/source/user/faq.rst
@@ -99,14 +99,20 @@ Does Numba vectorize array computations (SIMD)?
 Numba doesn't implement such optimizations by itself, but it lets LLVM
 apply them.
 
-Does Numba parallelize code?
-----------------------------
+Does Numba automatically parallelize code?
+------------------------------------------
+
+It can, in some cases:
+
+* Ufuncs and gufuncs with the ``target="parallel"`` option will run on multiple threads.
+* The experimental ``parallel=True`` option to ``@jit`` will attempt to optimize
+  array operations and run them in parallel.
 
-No, it doesn't. If you want to run computations concurrently on multiple
-threads (by :ref:`releasing the GIL <jit-nogil>`) or processes, you'll
-have to handle the pooling and synchronisation yourself.
+You can also run computations on multiple threads yourself and use
+the ``nogil=True`` option (see :ref:`releasing the GIL <jit-nogil>`). Numba
+can also target parallel execution on GPU architectures using its CUDA and HSA
+backends.
 
-Or, you can take a look at NumbaPro_.
 
 Can Numba speed up short-running functions?
 -------------------------------------------
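
As a minimal sketch (not part of the patch) of the two automatic-parallelization options the new FAQ text names, the following assumes a Numba build with the parallel ufunc target and the experimental ``parallel=True`` support; the function bodies and array sizes are made up for illustration::

    import numpy as np
    from numba import vectorize, jit

    # Ufunc compiled for the parallel target: elementwise work is spread
    # across multiple threads automatically.
    @vectorize(['float64(float64, float64)'], target='parallel')
    def parallel_add(x, y):
        return x + y

    # Experimental option: ask Numba to parallelize the array operations
    # inside a nopython-compiled function.
    @jit(nopython=True, parallel=True)
    def scaled_sum(a, b):
        return (2.0 * a + b).sum()

    a = np.random.rand(1000000)
    b = np.random.rand(1000000)
    print(parallel_add(a, b)[:5])
    print(scaled_sum(a, b))

For the manual route mentioned in the same paragraph, a function would instead be compiled with ``@jit(nogil=True)`` and dispatched from an ordinary thread pool, since releasing the GIL lets the compiled code run concurrently.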