Merge pull request numba#8144 from stuartarchibald/doc/numpy_spelling
Fix NumPy capitalisation in docs.
sklam authored Jun 21, 2022
2 parents 7dab9b9 + 146e72d commit 9a7820c
Showing 12 changed files with 60 additions and 60 deletions.
8 changes: 4 additions & 4 deletions docs/source/cuda/cuda_array_interface.rst
@@ -30,7 +30,7 @@ that must contain the following entries:
- **typestr**: ``str``

The type string. This has the same definition as ``typestr`` in the
-`numpy array interface`_.
+`NumPy array interface`_.

- **data**: ``(integer, boolean)``

@@ -63,14 +63,14 @@ The following are optional entries:
- **descr**

This is for describing more complicated types. This follows the same
-specification as in the `numpy array interface`_.
+specification as in the `NumPy array interface`_.

- **mask**: ``None`` or object exposing the ``__cuda_array_interface__``

If ``None`` then all values in **data** are valid. All elements of the mask
array should be interpreted only as true or not true indicating which
elements of this array are valid. This has the same definition as ``mask``
-in the `numpy array interface`_.
+in the `NumPy array interface`_.

.. note:: Numba does not currently support working with masked CUDA arrays
and will raise a ``NotImplementedError`` exception if one is passed
@@ -475,7 +475,7 @@ include:
- is the pointer a managed memory?


-.. _numpy array interface: https://docs.scipy.org/doc/numpy-1.13.0/reference/arrays.interface.html#__array_interface__
+.. _NumPy array interface: https://docs.scipy.org/doc/numpy-1.13.0/reference/arrays.interface.html#__array_interface__
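
As an illustrative aside (not part of this commit), the mandatory entries
described at the top of this file can be inspected directly on a Numba device
array, which itself exposes ``__cuda_array_interface__``; this sketch assumes a
CUDA-capable GPU::

    import numpy as np
    from numba import cuda

    d_arr = cuda.to_device(np.arange(10, dtype=np.float32))
    cai = d_arr.__cuda_array_interface__
    # Expect the required keys, e.g. shape=(10,), typestr='<f4',
    # data=(<device pointer>, False), plus the interface version.
    print(sorted(cai))
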


Differences with CUDA Array Interface (Version 0)
6 changes: 3 additions & 3 deletions docs/source/cuda/cudapysupported.rst
@@ -275,21 +275,21 @@ The following functions from the :mod:`operator` module are supported:
* :func:`operator.xor`


-Numpy support
+NumPy support
=============

Due to the CUDA programming model, dynamic memory allocation inside a kernel is
inefficient and is often not needed. Numba disallows any memory allocating features.
This disables a large number of NumPy APIs. For best performance, users should write
code such that each thread is dealing with a single element at a time.

-Supported numpy features:
+Supported NumPy features:

* accessing `ndarray` attributes `.shape`, `.strides`, `.ndim`, `.size`, etc..
* scalar ufuncs that have equivalents in the `math` module; i.e. ``np.sin(x[0])``, where x is a 1D array.
* indexing and slicing works.

-Unsupported numpy features:
+Unsupported NumPy features:

* array creation APIs.
* array methods.
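
To make the per-element style required above concrete, a minimal sketch (an
editorial illustration assuming a CUDA-capable GPU) applies a scalar ufunc to
one element per thread without allocating anything in the kernel::

    import numpy as np
    from numba import cuda

    @cuda.jit
    def sin_kernel(x, out):
        i = cuda.grid(1)
        if i < x.size:
            out[i] = np.sin(x[i])   # scalar ufunc, backed by math.sin

    x = np.linspace(0, 2 * np.pi, 256)
    out = np.zeros_like(x)
    sin_kernel[4, 64](x, out)
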
2 changes: 1 addition & 1 deletion docs/source/cuda/reduction.rst
@@ -13,7 +13,7 @@ kernel. An example follows::
return a + b

A = (numpy.arange(1234, dtype=numpy.float64)) + 1
-expect = A.sum() # numpy sum reduction
+expect = A.sum() # NumPy sum reduction
got = sum_reduce(A) # cuda sum reduction
assert expect == got

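
For readability, here is the example reassembled as a self-contained sketch
(the ``@cuda.reduce`` decorator and imports are the standard ones; a GPU is
required to run it)::

    import numpy
    from numba import cuda

    @cuda.reduce
    def sum_reduce(a, b):
        return a + b

    A = (numpy.arange(1234, dtype=numpy.float64)) + 1
    expect = A.sum()      # NumPy sum reduction
    got = sum_reduce(A)   # cuda sum reduction
    assert expect == got
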
2 changes: 1 addition & 1 deletion docs/source/cuda/ufunc.rst
@@ -123,7 +123,7 @@ the `max_blocksize` attribute on the compiled gufunc object.
chunksize = 1e+6
chunkcount = N // chunksize
-# partition numpy arrays into chunks
+# partition NumPy arrays into chunks
# no copying is performed
sA = np.split(A, chunkcount)
sB = np.split(B, chunkcount)
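
The chunking idiom above relies on :func:`numpy.split` returning views rather
than copies; a tiny host-side sketch (sizes are illustrative, not taken from
the original example) shows the partitioning step::

    import numpy as np

    N = 4_000_000
    chunksize = 1_000_000
    A = np.arange(N, dtype=np.float32)

    chunks = np.split(A, N // chunksize)   # views into A, no copying
    print(len(chunks), chunks[0].shape)    # 4 (1000000,)
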
2 changes: 1 addition & 1 deletion docs/source/proposals/jit-classes.rst
@@ -69,7 +69,7 @@ C structure::
complex64 field2;
};

-This will also be comptabile with an aligned numpy structure dtype.
+This will also be comptabile with an aligned NumPy structured dtype.
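
For readers unfamiliar with aligned structured dtypes: the NumPy counterpart of
a padded C struct is built with ``align=True``. The field names and types below
are illustrative only, since the proposal's full struct is not shown in this
hunk::

    import numpy as np

    aligned = np.dtype([('field1', np.int32),
                        ('field2', np.complex64)],
                       align=True)
    # align=True inserts the padding a C compiler would use, so the field
    # offsets and itemsize match the corresponding C struct layout.
    print(aligned.itemsize)
    print(aligned.fields)   # each field maps to (dtype, byte offset)
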


Methods
26 changes: 13 additions & 13 deletions docs/source/reference/jit-compilation.rst
@@ -74,7 +74,7 @@ JIT functions
.. _jit-decorator-parallel:

If true, *parallel* enables the automatic parallelization of a number of
-common Numpy constructs as well as the fusion of adjacent parallel
+common NumPy constructs as well as the fusion of adjacent parallel
operations to maximize cache locality.

The *error_model* option controls the divide-by-zero behavior.
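
Returning to the *parallel* option above, a minimal sketch (an editorial
illustration, not part of this change) of the kind of NumPy construct it
targets::

    import numpy as np
    from numba import njit

    @njit(parallel=True)
    def standardize(a):
        # The elementwise subtraction and division are parallelized and can
        # be fused into a single parallel loop over the array.
        return (a - a.mean()) / a.std()

    standardize(np.random.rand(1_000_000))
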
@@ -294,18 +294,18 @@ Vectorized functions (ufuncs and DUFuncs)

.. decorator:: numba.vectorize(*, signatures=[], identity=None, nopython=True, target='cpu', forceobj=False, cache=False, locals={})

-Compile the decorated function and wrap it either as a `Numpy
+Compile the decorated function and wrap it either as a `NumPy
ufunc`_ or a Numba :class:`~numba.DUFunc`. The optional
*nopython*, *forceobj* and *locals* arguments have the same meaning
as in :func:`numba.jit`.

*signatures* is an optional list of signatures expressed in the
same form as in the :func:`numba.jit` *signature* argument. If
*signatures* is non-empty, then the decorator will compile the user
-Python function into a Numpy ufunc. If no *signatures* are given,
+Python function into a NumPy ufunc. If no *signatures* are given,
then the decorator will wrap the user Python function in a
:class:`~numba.DUFunc` instance, which will compile the user
-function at call time whenever Numpy can not find a matching loop
+function at call time whenever NumPy can not find a matching loop
for the input arguments. *signatures* is required if *target* is
``"parallel"``.

@@ -317,7 +317,7 @@ Vectorized functions (ufuncs and DUFuncs)
axes can be reordered.

If there are several *signatures*, they must be ordered from the more
-specific to the least specific. Otherwise, Numpy's type-based
+specific to the least specific. Otherwise, NumPy's type-based
dispatching may not work as expected. For example, the following is
wrong::

@@ -354,7 +354,7 @@ Vectorized functions (ufuncs and DUFuncs)
:func:`numba.vectorize` will produce a simple ufunc whose core
functionality (the function you are decorating) operates on scalar
operands and returns a scalar value, :func:`numba.guvectorize`
-allows you to create a `Numpy ufunc`_ whose core function takes array
+allows you to create a `NumPy ufunc`_ whose core function takes array
arguments of various dimensions.

The additional argument *layout* is a string specifying, in symbolic
@@ -387,27 +387,27 @@ Vectorized functions (ufuncs and DUFuncs)

.. seealso::
Specification of the `layout string <https://numpy.org/doc/stable/reference/c-api/generalized-ufuncs.html#details-of-signature>`_
-as supported by Numpy. Note that Numpy uses the term "signature",
+as supported by NumPy. Note that NumPy uses the term "signature",
which we unfortunately use for something else.

The compiled function can be cached to reduce future compilation time.
It is enabled by setting *cache* to True. Only the "cpu" and "parallel"
targets support caching.

-.. _Numpy ufunc: http://docs.scipy.org/doc/numpy/reference/ufuncs.html
+.. _NumPy ufunc: http://docs.scipy.org/doc/numpy/reference/ufuncs.html
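
Returning to the *layout* argument described above, the standard sketch pairs
the signature ``(int64[:], int64, int64[:])`` with the layout string
``'(n),()->(n)'`` (an illustration, not part of this diff)::

    import numpy as np
    from numba import guvectorize, int64

    @guvectorize([(int64[:], int64, int64[:])], '(n),()->(n)')
    def g(x, y, res):
        # res is the preallocated output array of length n; nothing is returned.
        for i in range(x.shape[0]):
            res[i] = x[i] + y

    g(np.arange(5), 2)   # array([2, 3, 4, 5, 6])
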

.. class:: numba.DUFunc

The class of objects created by calling :func:`numba.vectorize`
with no signatures.

-DUFunc instances should behave similarly to Numpy
+DUFunc instances should behave similarly to NumPy
:class:`~numpy.ufunc` objects with one important difference:
-call-time loop generation. When calling a ufunc, Numpy looks at
+call-time loop generation. When calling a ufunc, NumPy looks at
the existing loops registered for that ufunc, and will raise a
:class:`~python.TypeError` if it cannot find a loop that it cannot
safely cast the inputs to suit. When calling a DUFunc, Numba
-delegates the call to Numpy. If the Numpy ufunc call fails, then
+delegates the call to NumPy. If the NumPy ufunc call fails, then
Numba attempts to build a new loop for the given input types, and
calls the ufunc again. If this second call attempt fails or a
compilation error occurs, then DUFunc passes along the exception to
@@ -422,7 +422,7 @@ Vectorized functions (ufuncs and DUFuncs)

.. attribute:: ufunc

-The actual Numpy :class:`~numpy.ufunc` object being built by the
+The actual NumPy :class:`~numpy.ufunc` object being built by the
:class:`~numba.DUFunc` instance. Note that the
:class:`~numba.DUFunc` object maintains several important data
structures required for proper ufunc functionality (specifically
Expand Down Expand Up @@ -482,7 +482,7 @@ Vectorized functions (ufuncs and DUFuncs)
.. method:: at(A, indices, *, B)

Performs unbuffered in place operation on operand *A* for
-elements specified by *indices*. If you are using Numpy 1.7 or
+elements specified by *indices*. If you are using NumPy 1.7 or
earlier, this method will not be present. See `ufunc.at`_.


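
To make the lazy, call-time loop generation of :class:`~numba.DUFunc` concrete,
a short editorial sketch::

    import numpy as np
    from numba import vectorize

    @vectorize                 # no signatures, so a DUFunc is returned
    def dadd(a, b):
        return a + b

    dadd(1, 2)                          # an integer loop is built and called
    dadd(np.linspace(0, 1, 4), 0.5)     # a float64 loop is compiled on demand
    print(type(dadd.ufunc))             # the underlying numpy.ufunc being built
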
34 changes: 17 additions & 17 deletions docs/source/reference/numpysupported.rst
@@ -17,7 +17,7 @@ NumPy support in Numba comes in many forms:
* Numba understands calls to NumPy `ufuncs`_ and is able to generate
equivalent native code for many of them.

-* NumPy arrays are directly supported in Numba. Access to Numpy arrays
+* NumPy arrays are directly supported in Numba. Access to NumPy arrays
is very efficient, as indexing is lowered to direct memory accesses
when possible.

@@ -30,14 +30,14 @@ NumPy support in Numba comes in many forms:
.. _ufuncs: http://docs.scipy.org/doc/numpy/reference/ufuncs.html
.. _gufuncs: http://docs.scipy.org/doc/numpy/reference/c-api.generalized-ufuncs.html

-The following sections focus on the Numpy features supported in
+The following sections focus on the NumPy features supported in
:term:`nopython mode`, unless otherwise stated.


Scalar types
============

-Numba supports the following Numpy scalar types:
+Numba supports the following NumPy scalar types:

* **Integers**: all integers of either signedness, and any width up to 64 bits
* **Booleans**
@@ -146,14 +146,14 @@ Without subtyping the last line would fail. With subtyping, no new compilation w
compiled function for ``record1`` will be used for ``record2``.

.. seealso::
-`Numpy scalars <http://docs.scipy.org/doc/numpy/reference/arrays.scalars.html>`_
+`NumPy scalars <http://docs.scipy.org/doc/numpy/reference/arrays.scalars.html>`_
reference.


Array types
===========

-`Numpy arrays <http://docs.scipy.org/doc/numpy/reference/arrays.ndarray.html>`_
+`NumPy arrays <http://docs.scipy.org/doc/numpy/reference/arrays.ndarray.html>`_
of any of the scalar types above are supported, regardless of the shape
or layout.

@@ -166,7 +166,7 @@ advanced index is allowed, and it has to be a one-dimensional array
(it can be combined with an arbitrary number of basic indices as well).

.. seealso::
-`Numpy indexing <http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html>`_
+`NumPy indexing <http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html>`_
reference.


@@ -224,7 +224,7 @@ version raises an error because of the unsupported use of attribute access.
Attributes
----------

-The following attributes of Numpy arrays are supported:
+The following attributes of NumPy arrays are supported:

* :attr:`~numpy.ndarray.dtype`
* :attr:`~numpy.ndarray.flags`
@@ -254,22 +254,22 @@ non-C-contiguous arrays.
The ``real`` and ``imag`` attributes
''''''''''''''''''''''''''''''''''''

-Numpy supports these attributes regardless of the dtype but Numba chooses to
+NumPy supports these attributes regardless of the dtype but Numba chooses to
limit their support to avoid potential user error. For numeric dtypes,
-Numba follows Numpy's behavior. The :attr:`~numpy.ndarray.real` attribute
+Numba follows NumPy's behavior. The :attr:`~numpy.ndarray.real` attribute
returns a view of the real part of the complex array and it behaves as an identity
function for other numeric dtypes. The :attr:`~numpy.ndarray.imag` attribute
returns a view of the imaginary part of the complex array and it returns a zero
array with the same shape and dtype for other numeric dtypes. For non-numeric
dtypes, including all structured/record dtypes, using these attributes will
result in a compile-time (`TypingError`) error. This behavior differs from
-Numpy's but it is chosen to avoid the potential confusion with field names that
+NumPy's but it is chosen to avoid the potential confusion with field names that
overlap these attributes.
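
A short illustration of the behaviour described above (not part of this
change)::

    import numpy as np
    from numba import njit

    @njit
    def real_imag(a):
        return a.real, a.imag

    real_imag(np.array([1 + 2j, 3 - 4j]))   # views of the real/imaginary parts
    real_imag(np.arange(3.0))               # identity view and an all-zero array
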

Calculation
-----------

-The following methods of Numpy arrays are supported in their basic form
+The following methods of NumPy arrays are supported in their basic form
(without any optional arguments):

* :meth:`~numpy.ndarray.all`
@@ -288,13 +288,13 @@ The following methods of Numpy arrays are supported in their basic form
* :meth:`~numpy.ndarray.take`
* :meth:`~numpy.ndarray.var`

-The corresponding top-level Numpy functions (such as :func:`numpy.prod`)
+The corresponding top-level NumPy functions (such as :func:`numpy.prod`)
are similarly supported.
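
For example (an editorial sketch), the argument-free forms of these reductions
work directly inside a jitted function::

    import numpy as np
    from numba import njit

    @njit
    def summary(a):
        # Basic forms only, per the list above (e.g. a.sum(), not a.sum(axis=0)).
        return a.min(), a.max(), a.mean(), a.sum()

    summary(np.arange(1.0, 6.0))   # (1.0, 5.0, 3.0, 15.0)
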

Other methods
-------------

-The following methods of Numpy arrays are supported:
+The following methods of NumPy arrays are supported:

* :meth:`~numpy.ndarray.argmax` (``axis`` keyword argument supported).
* :meth:`~numpy.ndarray.argmin` (``axis`` keyword argument supported).
@@ -340,7 +340,7 @@ Where applicable, the corresponding top-level NumPy functions (such as
:func:`numpy.argmax`) are similarly supported.

.. warning::
-Sorting may be slightly slower than Numpy's implementation.
+Sorting may be slightly slower than NumPy's implementation.


Functions
@@ -507,7 +507,7 @@ The following top-level functions are supported:
* :func:`numpy.searchsorted` (only the 3 first arguments)
* :func:`numpy.select` (only using homogeneous lists or tuples for the first
two arguments, condlist and choicelist). Additionally, these two arguments
-can only contain arrays (unlike Numpy that also accepts tuples).
+can only contain arrays (unlike NumPy that also accepts tuples).
* :func:`numpy.shape`
* :func:`numpy.sinc`
* :func:`numpy.sort` (no optional arguments, quicksort accepts
@@ -688,7 +688,7 @@ Permutations
array) is not supported
* :func:`numpy.random.permutation`
* :func:`numpy.random.shuffle`: the sequence argument must be a one-dimension
-Numpy array or buffer-providing object (such as a :class:`bytearray`
+NumPy array or buffer-providing object (such as a :class:`bytearray`
or :class:`array.array`)

Distributions
@@ -732,7 +732,7 @@ The following functions support all arguments.

.. note::
Calling :func:`numpy.random.seed` from non-Numba code (or from
-:term:`object mode` code) will seed the Numpy random generator, not the
+:term:`object mode` code) will seed the NumPy random generator, not the
Numba random generator.

.. note::
16 changes: 8 additions & 8 deletions docs/source/reference/types.rst
@@ -39,7 +39,7 @@ The most basic types can be expressed through simple expressions. The
symbols below refer to attributes of the main ``numba`` module (so if
you read "boolean", it means that symbol can be accessed as ``numba.boolean``).
Many types are available both as a canonical name and a shorthand alias,
-following Numpy's conventions.
+following NumPy's conventions.

Numbers
-------
@@ -270,15 +270,15 @@ Inference
reflected list(int64)


-Numpy scalars
+NumPy scalars
-------------

Instead of using :func:`~numba.typeof`, non-trivial scalars such as
structured types can also be constructed programmatically.

.. function:: numba.from_dtype(dtype)

-Create a Numba type corresponding to the given Numpy *dtype*::
+Create a Numba type corresponding to the given NumPy *dtype*::

>>> struct_dtype = np.dtype([('row', np.float64), ('col', np.float64)])
>>> ty = numba.from_dtype(struct_dtype)
@@ -289,18 +289,18 @@

.. class:: numba.types.NPDatetime(unit)

-Create a Numba type for Numpy datetimes of the given *unit*. *unit*
-should be a string amongst the codes recognized by Numpy (e.g.
+Create a Numba type for NumPy datetimes of the given *unit*. *unit*
+should be a string amongst the codes recognized by NumPy (e.g.
``Y``, ``M``, ``D``, etc.).

.. class:: numba.types.NPTimedelta(unit)

-Create a Numba type for Numpy timedeltas of the given *unit*. *unit*
-should be a string amongst the codes recognized by Numpy (e.g.
+Create a Numba type for NumPy timedeltas of the given *unit*. *unit*
+should be a string amongst the codes recognized by NumPy (e.g.
``Y``, ``M``, ``D``, etc.).

.. seealso::
-Numpy `datetime units <http://docs.scipy.org/doc/numpy/reference/arrays.datetime.html#datetime-units>`_.
+NumPy `datetime units <http://docs.scipy.org/doc/numpy/reference/arrays.datetime.html#datetime-units>`_.
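
A small sketch of the two routes to the same datetime type (editorial; the
equality check shows the expected behaviour rather than anything asserted in
this diff)::

    import numpy as np
    from numba import from_dtype, types

    day_type = types.NPDatetime('D')                  # day-resolution datetime64
    from_np = from_dtype(np.dtype('datetime64[D]'))   # same type via from_dtype
    print(day_type == from_np)                        # expected: True
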


Arrays
2 changes: 1 addition & 1 deletion docs/source/user/cfunc.rst
@@ -172,7 +172,7 @@ For example::
tmp = 0
for i in range(n):
tmp += base[i].i1 * base[i].f2 / base[i].d3
-tmp += base[i].af4.sum() # nested arrays are like normal numpy array
+tmp += base[i].af4.sum() # nested arrays are like normal NumPy arrays
return tmp


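
For orientation, the loop above comes from the cfunc example that receives a
pointer to a structured array. A hedged reconstruction follows; the field
dtypes are assumptions chosen to match the field names visible in the hunk, not
taken from this diff::

    import numpy as np
    from numba import cfunc, carray, from_dtype, types

    # Assumed dtype; only the field names appear in the snippet above.
    rec_dtype = np.dtype([('i1', np.int32), ('f2', np.float32),
                          ('d3', np.float64), ('af4', np.float32, (2,))])
    rec_type = from_dtype(rec_dtype)

    @cfunc(types.float64(types.CPointer(rec_type), types.intp))
    def sum_records(base_ptr, n):
        base = carray(base_ptr, n)
        tmp = 0.0
        for i in range(n):
            tmp += base[i].i1 * base[i].f2 / base[i].d3
            tmp += base[i].af4.sum()   # nested arrays behave like NumPy arrays
        return tmp
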
4 changes: 2 additions & 2 deletions docs/source/user/jitclass.rst
@@ -84,8 +84,8 @@ Note that only type annotations on the class will be used to infer spec
elements. Method type annotations (e.g. those of ``__init__`` above) are
ignored.

-Numba requires knowing the dtype and rank of numpy arrays, which cannot
-currently be expressed with type annotations. Because of this, numpy arrays need
+Numba requires knowing the dtype and rank of NumPy arrays, which cannot
+currently be expressed with type annotations. Because of this, NumPy arrays need
to be included in the ``spec`` explicitly.


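
For completeness, the canonical way to spell out a NumPy array member in the
``spec`` (a sketch following the standard jitclass example; assumes a Numba
version where ``jitclass`` lives in ``numba.experimental``)::

    import numpy as np
    from numba import float32, int32
    from numba.experimental import jitclass

    spec = [
        ('value', int32),
        ('array', float32[:]),   # dtype and rank declared explicitly
    ]

    @jitclass(spec)
    class Bag:
        def __init__(self, value):
            self.value = value
            self.array = np.zeros(value, dtype=np.float32)
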