diff --git a/docs/source/cuda/cuda_array_interface.rst b/docs/source/cuda/cuda_array_interface.rst index c21a936b01f..304f4ecab24 100644 --- a/docs/source/cuda/cuda_array_interface.rst +++ b/docs/source/cuda/cuda_array_interface.rst @@ -30,7 +30,7 @@ that must contain the following entries: - **typestr**: ``str`` The type string. This has the same definition as ``typestr`` in the - `numpy array interface`_. + `NumPy array interface`_. - **data**: ``(integer, boolean)`` @@ -63,14 +63,14 @@ The following are optional entries: - **descr** This is for describing more complicated types. This follows the same - specification as in the `numpy array interface`_. + specification as in the `NumPy array interface`_. - **mask**: ``None`` or object exposing the ``__cuda_array_interface__`` If ``None`` then all values in **data** are valid. All elements of the mask array should be interpreted only as true or not true indicating which elements of this array are valid. This has the same definition as ``mask`` - in the `numpy array interface`_. + in the `NumPy array interface`_. .. note:: Numba does not currently support working with masked CUDA arrays and will raise a ``NotImplementedError`` exception if one is passed @@ -475,7 +475,7 @@ include: - is the pointer a managed memory? -.. _numpy array interface: https://docs.scipy.org/doc/numpy-1.13.0/reference/arrays.interface.html#__array_interface__ +.. _NumPy array interface: https://docs.scipy.org/doc/numpy-1.13.0/reference/arrays.interface.html#__array_interface__ Differences with CUDA Array Interface (Version 0) diff --git a/docs/source/cuda/cudapysupported.rst b/docs/source/cuda/cudapysupported.rst index 79e2b60c498..d5dc5a7908b 100644 --- a/docs/source/cuda/cudapysupported.rst +++ b/docs/source/cuda/cudapysupported.rst @@ -275,7 +275,7 @@ The following functions from the :mod:`operator` module are supported: * :func:`operator.xor` -Numpy support +NumPy support ============= Due to the CUDA programming model, dynamic memory allocation inside a kernel is @@ -283,13 +283,13 @@ inefficient and is often not needed. Numba disallows any memory allocating feat This disables a large number of NumPy APIs. For best performance, users should write code such that each thread is dealing with a single element at a time. -Supported numpy features: +Supported NumPy features: * accessing `ndarray` attributes `.shape`, `.strides`, `.ndim`, `.size`, etc.. * scalar ufuncs that have equivalents in the `math` module; i.e. ``np.sin(x[0])``, where x is a 1D array. * indexing and slicing works. -Unsupported numpy features: +Unsupported NumPy features: * array creation APIs. * array methods. diff --git a/docs/source/cuda/reduction.rst b/docs/source/cuda/reduction.rst index 988fd0d89e2..674728408bd 100644 --- a/docs/source/cuda/reduction.rst +++ b/docs/source/cuda/reduction.rst @@ -13,7 +13,7 @@ kernel. An example follows:: return a + b A = (numpy.arange(1234, dtype=numpy.float64)) + 1 - expect = A.sum() # numpy sum reduction + expect = A.sum() # NumPy sum reduction got = sum_reduce(A) # cuda sum reduction assert expect == got diff --git a/docs/source/cuda/ufunc.rst b/docs/source/cuda/ufunc.rst index 416c11db0db..c690557fce3 100644 --- a/docs/source/cuda/ufunc.rst +++ b/docs/source/cuda/ufunc.rst @@ -123,7 +123,7 @@ the `max_blocksize` attribute on the compiled gufunc object. 
chunksize = 1e+6 chunkcount = N // chunksize - # partition numpy arrays into chunks + # partition NumPy arrays into chunks # no copying is performed sA = np.split(A, chunkcount) sB = np.split(B, chunkcount) diff --git a/docs/source/proposals/jit-classes.rst b/docs/source/proposals/jit-classes.rst index 3f80a0432fa..9ddf67abd13 100644 --- a/docs/source/proposals/jit-classes.rst +++ b/docs/source/proposals/jit-classes.rst @@ -69,7 +69,7 @@ C structure:: complex64 field2; }; -This will also be comptabile with an aligned numpy structure dtype. +This will also be compatible with an aligned NumPy structured dtype. Methods diff --git a/docs/source/reference/jit-compilation.rst b/docs/source/reference/jit-compilation.rst index 36c026de14a..ac67a593b14 100644 --- a/docs/source/reference/jit-compilation.rst +++ b/docs/source/reference/jit-compilation.rst @@ -74,7 +74,7 @@ JIT functions .. _jit-decorator-parallel: If true, *parallel* enables the automatic parallelization of a number of - common Numpy constructs as well as the fusion of adjacent parallel + common NumPy constructs as well as the fusion of adjacent parallel operations to maximize cache locality. The *error_model* option controls the divide-by-zero behavior. @@ -294,7 +294,7 @@ Vectorized functions (ufuncs and DUFuncs) .. decorator:: numba.vectorize(*, signatures=[], identity=None, nopython=True, target='cpu', forceobj=False, cache=False, locals={}) - Compile the decorated function and wrap it either as a `Numpy + Compile the decorated function and wrap it either as a `NumPy ufunc`_ or a Numba :class:`~numba.DUFunc`. The optional *nopython*, *forceobj* and *locals* arguments have the same meaning as in :func:`numba.jit`. @@ -302,10 +302,10 @@ Vectorized functions (ufuncs and DUFuncs) *signatures* is an optional list of signatures expressed in the same form as in the :func:`numba.jit` *signature* argument. If *signatures* is non-empty, then the decorator will compile the user - Python function into a Numpy ufunc. If no *signatures* are given, + Python function into a NumPy ufunc. If no *signatures* are given, then the decorator will wrap the user Python function in a :class:`~numba.DUFunc` instance, which will compile the user - function at call time whenever Numpy can not find a matching loop + function at call time whenever NumPy cannot find a matching loop for the input arguments. *signatures* is required if *target* is ``"parallel"``. @@ -317,7 +317,7 @@ Vectorized functions (ufuncs and DUFuncs) axes can be reordered. If there are several *signatures*, they must be ordered from the more - specific to the least specific. Otherwise, Numpy's type-based + specific to the least specific. Otherwise, NumPy's type-based dispatching may not work as expected. For example, the following is wrong:: @@ -354,7 +354,7 @@ Vectorized functions (ufuncs and DUFuncs) :func:`numba.vectorize` will produce a simple ufunc whose core functionality (the function you are decorating) operates on scalar operands and returns a scalar value, :func:`numba.guvectorize` - allows you to create a `Numpy ufunc`_ whose core function takes array + allows you to create a `NumPy ufunc`_ whose core function takes array arguments of various dimensions. The additional argument *layout* is a string specifying, in symbolic @@ -387,27 +387,27 @@ Vectorized functions (ufuncs and DUFuncs) .. seealso:: Specification of the `layout string `_ - as supported by Numpy. Note that Numpy uses the term "signature", + as supported by NumPy. 
Note that NumPy uses the term "signature", which we unfortunately use for something else. The compiled function can be cached to reduce future compilation time. It is enabled by setting *cache* to True. Only the "cpu" and "parallel" targets support caching. -.. _Numpy ufunc: http://docs.scipy.org/doc/numpy/reference/ufuncs.html +.. _NumPy ufunc: http://docs.scipy.org/doc/numpy/reference/ufuncs.html .. class:: numba.DUFunc The class of objects created by calling :func:`numba.vectorize` with no signatures. - DUFunc instances should behave similarly to Numpy + DUFunc instances should behave similarly to NumPy :class:`~numpy.ufunc` objects with one important difference: - call-time loop generation. When calling a ufunc, Numpy looks at + call-time loop generation. When calling a ufunc, NumPy looks at the existing loops registered for that ufunc, and will raise a :class:`~python.TypeError` if it cannot find a loop that it cannot safely cast the inputs to suit. When calling a DUFunc, Numba - delegates the call to Numpy. If the Numpy ufunc call fails, then + delegates the call to NumPy. If the NumPy ufunc call fails, then Numba attempts to build a new loop for the given input types, and calls the ufunc again. If this second call attempt fails or a compilation error occurs, then DUFunc passes along the exception to @@ -422,7 +422,7 @@ Vectorized functions (ufuncs and DUFuncs) .. attribute:: ufunc - The actual Numpy :class:`~numpy.ufunc` object being built by the + The actual NumPy :class:`~numpy.ufunc` object being built by the :class:`~numba.DUFunc` instance. Note that the :class:`~numba.DUFunc` object maintains several important data structures required for proper ufunc functionality (specifically @@ -482,7 +482,7 @@ Vectorized functions (ufuncs and DUFuncs) .. method:: at(A, indices, *, B) Performs unbuffered in place operation on operand *A* for - elements specified by *indices*. If you are using Numpy 1.7 or + elements specified by *indices*. If you are using NumPy 1.7 or earlier, this method will not be present. See `ufunc.at`_. diff --git a/docs/source/reference/numpysupported.rst b/docs/source/reference/numpysupported.rst index f9baa803639..54060fd906a 100644 --- a/docs/source/reference/numpysupported.rst +++ b/docs/source/reference/numpysupported.rst @@ -17,7 +17,7 @@ NumPy support in Numba comes in many forms: * Numba understands calls to NumPy `ufuncs`_ and is able to generate equivalent native code for many of them. -* NumPy arrays are directly supported in Numba. Access to Numpy arrays +* NumPy arrays are directly supported in Numba. Access to NumPy arrays is very efficient, as indexing is lowered to direct memory accesses when possible. @@ -30,14 +30,14 @@ NumPy support in Numba comes in many forms: .. _ufuncs: http://docs.scipy.org/doc/numpy/reference/ufuncs.html .. _gufuncs: http://docs.scipy.org/doc/numpy/reference/c-api.generalized-ufuncs.html -The following sections focus on the Numpy features supported in +The following sections focus on the NumPy features supported in :term:`nopython mode`, unless otherwise stated. Scalar types ============ -Numba supports the following Numpy scalar types: +Numba supports the following NumPy scalar types: * **Integers**: all integers of either signedness, and any width up to 64 bits * **Booleans** @@ -146,14 +146,14 @@ Without subtyping the last line would fail. With subtyping, no new compilation w compiled function for ``record1`` will be used for ``record2``. .. seealso:: - `Numpy scalars `_ + `NumPy scalars `_ reference. 
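A minimal sketch of how such a structured dtype can be consumed from nopython mode, assuming only the behaviour documented above (attribute-style field access on record arrays); the ``row``/``col`` layout mirrors the ``from_dtype`` example that appears further down in ``types.rst``::

    import numpy as np
    from numba import njit

    # Structured dtype with two float64 fields (same layout as the
    # ``from_dtype`` example in types.rst).
    struct_dtype = np.dtype([('row', np.float64), ('col', np.float64)])

    @njit
    def sum_rows(arr):
        # Attribute-style access to record fields works in nopython mode.
        acc = 0.0
        for i in range(arr.shape[0]):
            acc += arr[i].row
        return acc

    data = np.zeros(3, dtype=struct_dtype)
    data['row'] = np.arange(3.0)
    sum_rows(data)   # -> 3.0

Calling ``sum_rows`` on ``data`` compiles a specialization for that record-array type; with the subtyping behaviour described above, a record dtype that merely adds fields can reuse the same compiled version.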
Array types =========== -`Numpy arrays `_ +`NumPy arrays `_ of any of the scalar types above are supported, regardless of the shape or layout. @@ -166,7 +166,7 @@ advanced index is allowed, and it has to be a one-dimensional array (it can be combined with an arbitrary number of basic indices as well). .. seealso:: - `Numpy indexing `_ + `NumPy indexing `_ reference. @@ -224,7 +224,7 @@ version raises an error because of the unsupported use of attribute access. Attributes ---------- -The following attributes of Numpy arrays are supported: +The following attributes of NumPy arrays are supported: * :attr:`~numpy.ndarray.dtype` * :attr:`~numpy.ndarray.flags` @@ -254,22 +254,22 @@ non-C-contiguous arrays. The ``real`` and ``imag`` attributes '''''''''''''''''''''''''''''''''''' -Numpy supports these attributes regardless of the dtype but Numba chooses to +NumPy supports these attributes regardless of the dtype but Numba chooses to limit their support to avoid potential user error. For numeric dtypes, -Numba follows Numpy's behavior. The :attr:`~numpy.ndarray.real` attribute +Numba follows NumPy's behavior. The :attr:`~numpy.ndarray.real` attribute returns a view of the real part of the complex array and it behaves as an identity function for other numeric dtypes. The :attr:`~numpy.ndarray.imag` attribute returns a view of the imaginary part of the complex array and it returns a zero array with the same shape and dtype for other numeric dtypes. For non-numeric dtypes, including all structured/record dtypes, using these attributes will result in a compile-time (`TypingError`) error. This behavior differs from -Numpy's but it is chosen to avoid the potential confusion with field names that +NumPy's but it is chosen to avoid the potential confusion with field names that overlap these attributes. Calculation ----------- -The following methods of Numpy arrays are supported in their basic form +The following methods of NumPy arrays are supported in their basic form (without any optional arguments): * :meth:`~numpy.ndarray.all` @@ -288,13 +288,13 @@ The following methods of Numpy arrays are supported in their basic form * :meth:`~numpy.ndarray.take` * :meth:`~numpy.ndarray.var` -The corresponding top-level Numpy functions (such as :func:`numpy.prod`) +The corresponding top-level NumPy functions (such as :func:`numpy.prod`) are similarly supported. Other methods ------------- -The following methods of Numpy arrays are supported: +The following methods of NumPy arrays are supported: * :meth:`~numpy.ndarray.argmax` (``axis`` keyword argument supported). * :meth:`~numpy.ndarray.argmin` (``axis`` keyword argument supported). @@ -340,7 +340,7 @@ Where applicable, the corresponding top-level NumPy functions (such as :func:`numpy.argmax`) are similarly supported. .. warning:: - Sorting may be slightly slower than Numpy's implementation. + Sorting may be slightly slower than NumPy's implementation. Functions @@ -507,7 +507,7 @@ The following top-level functions are supported: * :func:`numpy.searchsorted` (only the 3 first arguments) * :func:`numpy.select` (only using homogeneous lists or tuples for the first two arguments, condlist and choicelist). Additionally, these two arguments - can only contain arrays (unlike Numpy that also accepts tuples). + can only contain arrays (unlike NumPy that also accepts tuples). 
* :func:`numpy.shape` * :func:`numpy.sinc` * :func:`numpy.sort` (no optional arguments, quicksort accepts @@ -688,7 +688,7 @@ Permutations array) is not supported * :func:`numpy.random.permutation` * :func:`numpy.random.shuffle`: the sequence argument must be a one-dimension - Numpy array or buffer-providing object (such as a :class:`bytearray` + NumPy array or buffer-providing object (such as a :class:`bytearray` or :class:`array.array`) Distributions @@ -732,7 +732,7 @@ The following functions support all arguments. .. note:: Calling :func:`numpy.random.seed` from non-Numba code (or from - :term:`object mode` code) will seed the Numpy random generator, not the + :term:`object mode` code) will seed the NumPy random generator, not the Numba random generator. .. note:: diff --git a/docs/source/reference/types.rst b/docs/source/reference/types.rst index a84648b2702..75ff3432078 100644 --- a/docs/source/reference/types.rst +++ b/docs/source/reference/types.rst @@ -39,7 +39,7 @@ The most basic types can be expressed through simple expressions. The symbols below refer to attributes of the main ``numba`` module (so if you read "boolean", it means that symbol can be accessed as ``numba.boolean``). Many types are available both as a canonical name and a shorthand alias, -following Numpy's conventions. +following NumPy's conventions. Numbers ------- @@ -270,7 +270,7 @@ Inference reflected list(int64) -Numpy scalars +NumPy scalars ------------- Instead of using :func:`~numba.typeof`, non-trivial scalars such as @@ -278,7 +278,7 @@ structured types can also be constructed programmatically. .. function:: numba.from_dtype(dtype) - Create a Numba type corresponding to the given Numpy *dtype*:: + Create a Numba type corresponding to the given NumPy *dtype*:: >>> struct_dtype = np.dtype([('row', np.float64), ('col', np.float64)]) >>> ty = numba.from_dtype(struct_dtype) @@ -289,18 +289,18 @@ structured types can also be constructed programmatically. .. class:: numba.types.NPDatetime(unit) - Create a Numba type for Numpy datetimes of the given *unit*. *unit* - should be a string amongst the codes recognized by Numpy (e.g. + Create a Numba type for NumPy datetimes of the given *unit*. *unit* + should be a string amongst the codes recognized by NumPy (e.g. ``Y``, ``M``, ``D``, etc.). .. class:: numba.types.NPTimedelta(unit) - Create a Numba type for Numpy timedeltas of the given *unit*. *unit* - should be a string amongst the codes recognized by Numpy (e.g. + Create a Numba type for NumPy timedeltas of the given *unit*. *unit* + should be a string amongst the codes recognized by NumPy (e.g. ``Y``, ``M``, ``D``, etc.). .. seealso:: - Numpy `datetime units `_. + NumPy `datetime units `_. Arrays diff --git a/docs/source/user/cfunc.rst b/docs/source/user/cfunc.rst index b06a78bb101..845dc96341a 100644 --- a/docs/source/user/cfunc.rst +++ b/docs/source/user/cfunc.rst @@ -172,7 +172,7 @@ For example:: tmp = 0 for i in range(n): tmp += base[i].i1 * base[i].f2 / base[i].d3 - tmp += base[i].af4.sum() # nested arrays are like normal numpy array + tmp += base[i].af4.sum() # nested arrays are like normal NumPy arrays return tmp diff --git a/docs/source/user/jitclass.rst b/docs/source/user/jitclass.rst index 73cf406d992..9000bf43678 100644 --- a/docs/source/user/jitclass.rst +++ b/docs/source/user/jitclass.rst @@ -84,8 +84,8 @@ Note that only type annotations on the class will be used to infer spec elements. Method type annotations (e.g. those of ``__init__`` above) are ignored. 
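To make the explicit ``spec`` form concrete (the hunk that follows notes that NumPy array fields must be declared this way, since annotations cannot express dtype and rank), here is a minimal sketch, assuming a recent Numba where ``jitclass`` is importable from ``numba.experimental`` (older releases expose it as ``numba.jitclass``); the class and field names are purely illustrative::

    import numpy as np
    from numba import float64, int32
    from numba.experimental import jitclass

    spec = [
        ('count', int32),         # scalar field
        ('values', float64[:]),   # 1d float64 array: dtype and rank made explicit
    ]

    @jitclass(spec)
    class RunningSum:
        def __init__(self, n):
            self.count = 0
            self.values = np.zeros(n)

        def push(self, x):
            self.values[self.count] = x
            self.count += 1

        def total(self):
            return self.values[:self.count].sum()

    rs = RunningSum(8)
    rs.push(1.0)
    rs.push(2.5)
    rs.total()   # -> 3.5

Declaring ``values`` as ``float64[:]`` gives Numba the dtype and dimensionality it needs, which is exactly the information a plain type annotation cannot convey.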
-Numba requires knowing the dtype and rank of numpy arrays, which cannot -currently be expressed with type annotations. Because of this, numpy arrays need +Numba requires knowing the dtype and rank of NumPy arrays, which cannot +currently be expressed with type annotations. Because of this, NumPy arrays need to be included in the ``spec`` explicitly. diff --git a/docs/source/user/stencil.rst b/docs/source/user/stencil.rst index 5d146b4a7ca..6888a556f57 100644 --- a/docs/source/user/stencil.rst +++ b/docs/source/user/stencil.rst @@ -243,9 +243,9 @@ to be used for the output of the stencil. In this case, the stencil function will not allocate its own output array. Users should assure that the return type of the stencil kernel can be safely cast to the element-type of the user-specified output array -following the `Numpy ufunc casting rules`_. +following the `NumPy ufunc casting rules`_. -.. _`Numpy ufunc casting rules`: http://docs.scipy.org/doc/numpy/reference/ufuncs.html#casting-rules +.. _`NumPy ufunc casting rules`: http://docs.scipy.org/doc/numpy/reference/ufuncs.html#casting-rules An example usage is shown below:: diff --git a/docs/source/user/vectorize.rst b/docs/source/user/vectorize.rst index f3c9ce0e844..dc15cda1bc0 100644 --- a/docs/source/user/vectorize.rst +++ b/docs/source/user/vectorize.rst @@ -30,7 +30,7 @@ loop (or *kernel*) allowing efficient iteration over the actual inputs. The :func:`~numba.vectorize` decorator has two modes of operation: * Eager, or decoration-time, compilation: If you pass one or more type - signatures to the decorator, you will be building a Numpy universal + signatures to the decorator, you will be building a NumPy universal function (ufunc). The rest of this subsection describes building ufuncs using decoration-time compilation. @@ -43,7 +43,7 @@ The :func:`~numba.vectorize` decorator has two modes of operation: As described above, if you pass a list of signatures to the :func:`~numba.vectorize` decorator, your function will be compiled -into a Numpy ufunc. In the basic case, only one signature will be +into a NumPy ufunc. In the basic case, only one signature will be passed:: from numba import vectorize, float64 @@ -301,7 +301,7 @@ Let's try to make a call to :func:`f`:: >>> f.types # shorthand for f.ufunc.types ['ll->l'] -If this was a normal Numpy ufunc, we would have seen an exception +If this was a normal NumPy ufunc, we would have seen an exception complaining that the ufunc couldn't handle the input types. When we call :func:`f` with integer arguments, not only do we receive an answer, but we can verify that Numba created a loop supporting C @@ -317,7 +317,7 @@ We can add additional loops by calling :func:`f` with different inputs:: We can now verify that Numba added a second loop for dealing with floating-point inputs, :code:`"dd->d"`. -If we mix input types to :func:`f`, we can verify that `Numpy ufunc +If we mix input types to :func:`f`, we can verify that `NumPy ufunc casting rules`_ are still in effect:: >>> f(1,2.) @@ -325,10 +325,10 @@ casting rules`_ are still in effect:: >>> f.types ['ll->l', 'dd->d'] -.. _`Numpy ufunc casting rules`: http://docs.scipy.org/doc/numpy/reference/ufuncs.html#casting-rules +.. 
_`NumPy ufunc casting rules`: http://docs.scipy.org/doc/numpy/reference/ufuncs.html#casting-rules This example demonstrates that calling :func:`f` with mixed types -caused Numpy to select the floating-point loop, and cast the integer +caused NumPy to select the floating-point loop, and cast the integer argument to a floating-point value. Thus, Numba did not create a special :code:`"dl->d"` kernel. @@ -410,7 +410,7 @@ floating-point inputs, :code:`"dd->d"`. >>> g.types # shorthand for g.ufunc.types ['ll->l', 'dd->d'] -One can also verify that Numpy ufunc casting rules are working as expected:: +One can also verify that NumPy ufunc casting rules are working as expected:: >>> x = np.arange(5, dtype=np.int64) >>> y = 2.2