Commit 17f9585

Merge remote-tracking branch 'numba/main' into cuda-dispatcher-base-20220204

gmarkall committed Feb 17, 2022
2 parents 499623f + 6e7561b
Showing 18 changed files with 336 additions and 79 deletions.
1 change: 1 addition & 0 deletions .github/ISSUE_TEMPLATE/first_rc_checklist.md
Original file line number Diff line number Diff line change
@@ -22,6 +22,7 @@ labels: task
* [ ] Build and upload conda packages on buildfarm (check "upload").
* [ ] Build wheels (`$PYTHON_VERSIONS`) on the buildfarm.
* [ ] Verify packages uploaded to Anaconda Cloud and move to `numba/label/main`.
* [ ] Build sdist locally using `python setup.py sdist --owner=ci --group=numba` with umask `0022`.
* [ ] Upload wheels and sdist to PyPI (upload from `ci_artifacts`).
* [ ] Verify wheels for all platforms arrived on PyPI.
* [ ] Initialize and verify ReadTheDocs build.
1 change: 1 addition & 0 deletions .github/ISSUE_TEMPLATE/sub_rc_checklist.md
@@ -19,6 +19,7 @@ labels: task
* [ ] Verify packages uploaded to Anaconda Cloud and move to
`numba/label/main`.
* [ ] Build wheels (`$PYTHON_VERSIONS`) on the buildfarm.
* [ ] Build sdist locally using `python setup.py sdist --owner=ci --group=numba` with umask `0022`.
* [ ] Upload wheels and sdist to PyPI (upload from `ci_artifacts`).
* [ ] Verify wheels for all platforms arrived on PyPI.
* [ ] Verify ReadTheDocs build.
62 changes: 13 additions & 49 deletions README.rst
@@ -13,11 +13,15 @@ Numba
.. image:: https://zenodo.org/badge/3659275.svg
:target: https://zenodo.org/badge/latestdoi/3659275
:alt: Zenodo DOI

.. image:: https://img.shields.io/pypi/v/numba.svg
:target: https://pypi.python.org/pypi/numba/
:alt: PyPI

.. image:: https://dev.azure.com/numba/numba/_apis/build/status/numba.numba?branchName=main
:target: https://dev.azure.com/numba/numba/_build/latest?definitionId=1?branchName=main
:alt: Azure Pipelines

A Just-In-Time Compiler for Numerical Functions in Python
#########################################################

@@ -31,54 +35,22 @@ parallelization of loops, generation of GPU-accelerated code, and creation of
ufuncs and C callbacks.

For more information about Numba, see the Numba homepage:
https://numba.pydata.org

Supported Platforms
===================

* Operating systems and CPUs:

- Linux: x86 (32-bit), x86_64, ppc64le (POWER8 and 9), ARMv7 (32-bit),
ARMv8 (64-bit).
- Windows: x86, x86_64.
  - macOS: x86_64 (M1/Arm64, unofficial support only).
- \*BSD: (unofficial support only).

* (Optional) Accelerators and GPUs:

* NVIDIA GPUs (Kepler architecture or later) via CUDA driver on Linux and
Windows.
https://numba.pydata.org and the online documentation:
https://numba.readthedocs.io/en/stable/index.html

Dependencies
Installation
============

* Python versions: 3.7-3.10
* llvmlite 0.39.*
* NumPy >=1.18 (can build with 1.11 for ABI compatibility).

Optionally:

* SciPy >=1.0.0 (for ``numpy.linalg`` support).
Please follow the instructions:


Installing
==========

The easiest way to install Numba and get updates is by using the Anaconda
Distribution: https://www.anaconda.com/download

::

$ conda install numba

For more options, see the Installation Guide:
https://numba.readthedocs.io/en/stable/user/installing.html

Documentation
=============
Demo
====

https://numba.readthedocs.io/en/stable/index.html
Please have a look at the demo notebooks via the mybinder service:

https://mybinder.org/v2/gh/numba/numba-examples/master?filepath=notebooks

Contact
=======
@@ -87,11 +59,3 @@ Numba has a discourse forum for discussions:

* https://numba.discourse.group



Continuous Integration
======================

.. image:: https://dev.azure.com/numba/numba/_apis/build/status/numba.numba?branchName=main
:target: https://dev.azure.com/numba/numba/_build/latest?definitionId=1?branchName=main
:alt: Azure Pipelines
5 changes: 2 additions & 3 deletions buildscripts/condarecipe.local/meta.yaml
@@ -27,9 +27,8 @@ requirements:
build:
- {{ compiler('c') }} # [not (armv6l or armv7l or aarch64)]
- {{ compiler('cxx') }} # [not (armv6l or armv7l or aarch64)]
# both of these are needed on osx, llvm for the headers, Intel for the lib
# OpenMP headers from llvm needed for OSX.
- llvm-openmp # [osx]
- intel-openmp # [osx]
host:
- python
- numpy
@@ -71,7 +70,7 @@ test:
- ipython # [not (armv6l or armv7l or aarch64)]
- setuptools
- tbb >=2021 # [not (armv6l or armv7l or aarch64 or linux32 or ppc64le)]
- intel-openmp # [osx]
- llvm-openmp # [osx]
# This is for driving gdb tests
- pexpect # [linux64]
# For testing ipython
5 changes: 3 additions & 2 deletions buildscripts/incremental/setup_conda_environment.sh
@@ -81,8 +81,9 @@ if [[ $(uname) == Linux ]]; then
fi
elif [[ $(uname) == Darwin ]]; then
$CONDA_INSTALL clang_osx-64 clangxx_osx-64
# Install llvm-openmp and intel-openmp on OSX too
$CONDA_INSTALL llvm-openmp intel-openmp
# Install llvm-openmp on OSX for headers during build and runtime during
# testing
$CONDA_INSTALL llvm-openmp
fi

# `pip install` all the dependencies on Python 3.10
1 change: 1 addition & 0 deletions docs/source/cuda/cudapysupported.rst
@@ -65,6 +65,7 @@ Support for the following built-in types is inherited from CPU nopython mode.
* bool
* None
* tuple
* Enum, IntEnum

See :ref:`nopython built-in types <pysupported-builtin-types>`.
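The addition of ``Enum`` and ``IntEnum`` to the supported types means enum members can be passed to and used inside kernels much like their underlying values. A pure-Python illustration of the member/value relationship being relied on (no CUDA required; ``Colour`` and ``Level`` are made-up examples):

```python
from enum import Enum, IntEnum

class Colour(Enum):
    RED = 0
    GREEN = 1

class Level(IntEnum):
    LOW = 1
    HIGH = 2

# An Enum member carries its payload in .value; an IntEnum member
# additionally behaves as an int, so arithmetic on it works directly.
assert Colour.GREEN.value == 1
assert Level.HIGH + 1 == 3
assert isinstance(Level.LOW, int)
```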

3 changes: 1 addition & 2 deletions docs/source/developer/contributing.rst
@@ -99,8 +99,7 @@ Then create an environment with the right dependencies::
.. note::
This installs an environment based on Python 3.8, but you can of course
choose another version supported by Numba. To test additional features,
you may also need to install ``tbb`` and/or ``llvm-openmp`` and
``intel-openmp``.
you may also need to install ``tbb`` and/or ``llvm-openmp``.

To activate the environment for the current shell session::

14 changes: 7 additions & 7 deletions docs/source/user/installing.rst
@@ -167,9 +167,9 @@ otherwise build by default along with information on configuration options.
* For Linux and Windows it is necessary to provide OpenMP C headers and
runtime libraries compatible with the compiler tool chain mentioned above,
and for these to be accessible to the compiler via standard flags.
* For OSX the conda packages ``llvm-openmp`` and ``intel-openmp`` provide
suitable C headers and libraries. If the compilation requirements are not
met the OpenMP threading backend will not be compiled
* For OSX the conda package ``llvm-openmp`` provides suitable C headers and
libraries. If the compilation requirements are not met the OpenMP threading
backend will not be compiled.

.. envvar:: NUMBA_DISABLE_TBB (default: not set)

@@ -211,8 +211,6 @@ vary with target operating system and hardware. The following lists them all

* ``llvm-openmp`` (OSX) - provides headers for compiling OpenMP support into
Numba's threading backend
* ``intel-openmp`` (OSX) - provides OpenMP library support for Numba's
threading backend.
* ``tbb-devel`` - provides TBB headers/libraries for compiling TBB support
into Numba's threading backend (version >= 2021 required).
* ``importlib_metadata`` (for Python versions < 3.9)
@@ -226,8 +224,10 @@ vary with target operating system and hardware. The following lists them all
* ``jinja2`` - for "pretty" type annotation output (HTML) via the ``numba``
CLI
* ``cffi`` - permits use of CFFI bindings in Numba compiled functions
* ``intel-openmp`` - (OSX) provides OpenMP library support for Numba's OpenMP
threading backend
* ``llvm-openmp`` - (OSX) provides OpenMP library support for Numba's OpenMP
threading backend.
* ``intel-openmp`` - (OSX) provides an alternative OpenMP library for use with
Numba's OpenMP threading backend.
* ``ipython`` - if in use, caching will use IPython's cache
directories/caching still works
* ``pyyaml`` - permits the use of a ``.numba_config.yaml``
5 changes: 3 additions & 2 deletions docs/source/user/threading-layer.rst
@@ -157,8 +157,9 @@ and requirements are as follows:
| | Windows | MS OpenMP libraries (very likely this will|
| | | already exist) |
| | | |
| | OSX | The ``intel-openmp`` package (``$ conda |
| | | install intel-openmp``) |
| | OSX | Either the ``intel-openmp`` package or the|
| | | ``llvm-openmp`` package |
| | | (``conda install`` the package as named). |
+----------------------+-----------+-------------------------------------------+
| ``workqueue`` | All | None |
+----------------------+-----------+-------------------------------------------+
7 changes: 6 additions & 1 deletion numba/cuda/cudaimpl.py
@@ -7,6 +7,7 @@

from numba.core.imputils import Registry, lower_cast
from numba.core.typing.npydecl import parse_dtype, signature
from numba.core.datamodel import models
from numba.core import types, cgutils
from .cudadrv import nvvm
from numba import cuda
@@ -987,7 +988,11 @@ def _generic_array(context, builder, shape, dtype, symbol_name, addrspace,
raise ValueError("array length <= 0")

# Check that we support the requested dtype
other_supported_type = isinstance(dtype, (types.Record, types.Boolean))
data_model = context.data_model_manager[dtype]
other_supported_type = (
isinstance(dtype, (types.Record, types.Boolean))
or isinstance(data_model, models.StructModel)
)
if dtype not in types.number_domain and not other_supported_type:
raise TypeError("unsupported type: %s" % dtype)
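The widened check in `_generic_array` above accepts, besides numbers, any dtype that is a `Record` or `Boolean` or whose data model is a `StructModel`. A standalone sketch of that dispatch shape, using hypothetical stand-in classes rather than Numba's real type system:

```python
# Hypothetical stand-ins, not Numba's real type or data-model classes.
class Record: pass
class Boolean: pass
class Integer: pass
class StructModel: pass
class OpaqueModel: pass

# Hypothetical registry playing the role of context.data_model_manager:
# it maps a frontend type to its data model.
DATA_MODELS = {
    Record: StructModel(),
    Boolean: OpaqueModel(),
    Integer: OpaqueModel(),
}

# Stand-in for types.number_domain.
NUMBER_DOMAIN = (Integer,)

def dtype_is_supported(dtype):
    # Mirror of the diff's check: numbers, records, booleans, or any
    # type whose data model is a StructModel are accepted.
    data_model = DATA_MODELS.get(type(dtype), OpaqueModel())
    other_supported = (
        isinstance(dtype, (Record, Boolean))
        or isinstance(data_model, StructModel)
    )
    return isinstance(dtype, NUMBER_DOMAIN) or other_supported

assert dtype_is_supported(Record())
assert dtype_is_supported(Integer())
```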

8 changes: 8 additions & 0 deletions numba/cuda/dispatcher.py
@@ -384,6 +384,14 @@ def _prepare_args(self, ty, val, stream, retr, kernelargs):
for t, v in zip(ty, val):
self._prepare_args(t, v, stream, retr, kernelargs)

elif isinstance(ty, types.EnumMember):
try:
self._prepare_args(
ty.dtype, val.value, stream, retr, kernelargs
)
except NotImplementedError:
raise NotImplementedError(ty, val)

else:
raise NotImplementedError(ty, val)
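The new `EnumMember` branch in `_prepare_args` unwraps an enum member to its payload and retries preparation on the underlying dtype. A pure-Python sketch of that unwrap-and-recurse pattern (`prepare` is a hypothetical stand-in, not Numba's API):

```python
from enum import IntEnum

class Direction(IntEnum):
    LEFT = -1
    RIGHT = 1

def prepare(val, kernelargs):
    # Unwrap an enum member to its underlying value and retry
    # preparation on that payload, mirroring the EnumMember branch.
    if isinstance(val, IntEnum):
        return prepare(val.value, kernelargs)
    if isinstance(val, int):
        kernelargs.append(val)
        return
    raise NotImplementedError(type(val))

args = []
prepare(Direction.RIGHT, args)
assert args == [1]
```

Note that the enum check must come before the plain-int check, since `IntEnum` members are themselves ints.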

4 changes: 3 additions & 1 deletion numba/cuda/target.py
@@ -20,11 +20,13 @@
class CUDATypingContext(typing.BaseContext):
def load_additional_registries(self):
from . import cudadecl, cudamath, libdevicedecl
from numba.core.typing import enumdecl

self.install_registry(cudadecl.registry)
self.install_registry(cudamath.registry)
self.install_registry(cmathdecl.registry)
self.install_registry(libdevicedecl.registry)
self.install_registry(enumdecl.registry)

def resolve_value_type(self, val):
# treat other dispatcher object as another device function
@@ -89,7 +91,7 @@ def load_additional_registries(self):
# side effect of import needed for numba.cpython.*, the builtins
# registry is updated at import time.
from numba.cpython import numbers, tupleobj, slicing # noqa: F401
from numba.cpython import rangeobj, iterators # noqa: F401
from numba.cpython import rangeobj, iterators, enumimpl # noqa: F401
from numba.cpython import unicode, charseq # noqa: F401
from numba.cpython import cmathimpl
from numba.np import arrayobj # noqa: F401
58 changes: 58 additions & 0 deletions numba/cuda/tests/cudapy/extensions_usecases.py
@@ -0,0 +1,58 @@
from numba import types
from numba.core import config


class TestStruct:
def __init__(self, x, y):
self.x = x
self.y = y


class TestStructModelType(types.Type):
def __init__(self):
super().__init__(name="TestStructModelType")


test_struct_model_type = TestStructModelType()


if not config.ENABLE_CUDASIM:
from numba import int32
from numba.core.extending import (
models,
register_model,
make_attribute_wrapper,
typeof_impl,
type_callable
)
from numba.cuda.cudaimpl import lower
from numba.core import cgutils

@typeof_impl.register(TestStruct)
def typeof_teststruct(val, c):
return test_struct_model_type

@register_model(TestStructModelType)
class TestStructModel(models.StructModel):
def __init__(self, dmm, fe_type):
members = [("x", int32), ("y", int32)]
super().__init__(dmm, fe_type, members)

make_attribute_wrapper(TestStructModelType, 'x', 'x')
make_attribute_wrapper(TestStructModelType, 'y', 'y')

@type_callable(TestStruct)
def type_test_struct(context):
def typer(x, y):
if isinstance(x, types.Integer) and isinstance(y, types.Integer):
return test_struct_model_type
return typer

@lower(TestStruct, types.Integer, types.Integer)
def lower_test_type_ctor(context, builder, sig, args):
obj = cgutils.create_struct_proxy(
test_struct_model_type
)(context, builder)
obj.x = args[0]
obj.y = args[1]
return obj._getvalue()
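The `@type_callable` typer in the new test file can be exercised in plain Python to see its shape: it yields the struct type only for two integer-typed arguments, and implicitly nothing otherwise. A sketch with stand-in classes (not Numba's real types):

```python
# Hypothetical stand-ins for Numba's Integer/Float types and the
# struct type registered in the test file above.
class Integer: pass
class Float: pass
class TestStructModelType: pass

test_struct_model_type = TestStructModelType()

def type_test_struct():
    # Mirrors the typer returned by the @type_callable function:
    # only two Integer-typed arguments produce the struct type;
    # any other combination falls through and returns None.
    def typer(x, y):
        if isinstance(x, Integer) and isinstance(y, Integer):
            return test_struct_model_type
        return None
    return typer

typer = type_test_struct()
assert typer(Integer(), Integer()) is test_struct_model_type
assert typer(Integer(), Float()) is None
```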
