
Commit 942642a

Move to manylinux_2_28 base image for wheels (#3306)
* use new Docker image tags (`manylinux_2_28_x86_64` and `manylinux_2_28_aarch64`) in CI for building wheels
* use vendor-provided `flex`, `mpich`, and `openmpi` instead of building from source (still need custom `readline` and `ncurses` as the `*-static` versions do not use `-fPIC`)
* unify MPI headers when building wheels
* remove BBP-specific tests for wheels
* show warning in Python wrapper if host machine is using GCC < 10
* mention new minimum system requirements in docs when using a manylinux NEURON wheel
1 parent 90900fa commit 942642a

File tree

9 files changed: +81 -100 lines changed


.circleci/config.yml

+4 -4

```diff
@@ -4,7 +4,7 @@ orbs:
   python: circleci/[email protected]
 
 jobs:
-  manylinux2014-aarch64:
+  manylinux_2_28-aarch64:
 
     parameters:
       NRN_PYTHON_VERSION_MINOR:
@@ -31,7 +31,7 @@ jobs:
             -e NRN_RELEASE_UPLOAD \
             -e SETUPTOOLS_SCM_PRETEND_VERSION \
             -e NRN_BUILD_FOR_UPLOAD=1 \
-            'neuronsimulator/neuron_wheel:latest-aarch64' \
+            'docker.io/neuronsimulator/neuron_wheel:manylinux_2_28_aarch64' \
             packaging/python/build_wheels.bash linux 3.<< parameters.NRN_PYTHON_VERSION_MINOR >> coreneuron
 
       - store_artifacts:
@@ -71,7 +71,7 @@ workflows:
 
   build-workflow:
     jobs:
-      - manylinux2014-aarch64:
+      - manylinux_2_28-aarch64:
           filters:
             branches:
               only:
@@ -91,7 +91,7 @@ workflows:
               only:
                 - master
     jobs:
-      - manylinux2014-aarch64:
+      - manylinux_2_28-aarch64:
           matrix:
             parameters:
               NRN_PYTHON_VERSION_MINOR: ["9", "10", "11", "12", "13"]
```

azure-pipelines.yml

+1 -1

```diff
@@ -70,7 +70,7 @@ stages:
             -e NRN_RELEASE_UPLOAD \
             -e SETUPTOOLS_SCM_PRETEND_VERSION \
             -e NRN_BUILD_FOR_UPLOAD=1 \
-            'neuronsimulator/neuron_wheel:latest-x86_64' \
+            'docker.io/neuronsimulator/neuron_wheel:manylinux_2_28_x86_64' \
             packaging/python/build_wheels.bash linux $(python.version) coreneuron
         displayName: 'Building ManyLinux Wheel'
```

docs/install/install_instructions.md

+10 -1

````diff
@@ -107,7 +107,7 @@ architecture.
 
 #### Linux
 
-Like Mac OS, since 7.8.1 release python wheels are provided and you can use `pip` to install NEURON by opening a terminal and typing:
+Like Mac OS, since 7.8.1 release Python wheels are provided and you can use `pip` to install NEURON by opening a terminal and typing:
 
 ```
 pip3 install neuron
@@ -116,6 +116,15 @@ pip3 install neuron
 Note that Python2 wheels are provided for the 8.0.x release series exclusively. Also, we are not providing .rpm or .deb
 installers for recent releases.
 
+**Note**: as of NEURON major version 9, the minimum system requirements for using NEURON Python wheels on Linux are:
+
+* Debian 10 or higher
+* Ubuntu 18.10 or higher
+* Fedora 29 or higher
+* CentOS/RHEL 8 or higher
+
+Furthermore, GCC >= 10 is required (older versions of GCC may work, but are not recommended).
+
 #### Windows
 
 On Windows, the only recommended way to install NEURON is using the binary installer. You can download alpha
````
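The distro list added in this diff corresponds to the `manylinux_2_28` baseline, i.e. glibc 2.28 or newer. As an aside (not part of the commit), a host can be checked against that baseline with a small Python sketch; `glibc_at_least` is a hypothetical helper, and note the comparison must be numeric, not lexicographic:

```python
import platform

def glibc_at_least(version: str, minimum: str = "2.28") -> bool:
    """Numeric compare of dotted glibc versions.

    A plain string compare would wrongly rank "2.9" above "2.28".
    """
    def parse(v):
        return tuple(int(x) for x in v.split(".")[:2])
    return parse(version) >= parse(minimum)

libc, version = platform.libc_ver()  # e.g. ("glibc", "2.31"); empty on musl systems
if libc == "glibc":
    print("manylinux_2_28 wheel supported:", glibc_at_least(version))
else:
    print("not a glibc system; manylinux wheels will not work here")
```

Musl-based distros such as Alpine are outside the manylinux umbrella entirely, which is why the sketch checks the libc name before comparing versions.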

docs/install/python_wheels.md

+2 -31

````diff
@@ -4,7 +4,7 @@
 ## Linux wheels
 
 In order to have NEURON binaries run on most Linux distros, we rely on the [manylinux project](https://github.com/pypa/manylinux).
-Current NEURON Linux image is based on `manylinux2014`.
+Current NEURON Linux image is based on `manylinux_2_28`.
 
 ### Setting up Docker
 
@@ -35,23 +35,8 @@ Refer to the following image for the NEURON Docker Image workflow:
 ![](images/docker-workflow.png)
 
 
-### Building the docker images automatically
-If you run the workflow manually on Gitlab (with the "Run pipeline" button), it will now have the `mac_m1_container_build` and `x86_64_container_build` jobs added to it. These jobs need to be started manually and will not affect the overal workflow status. They don't need to be run every time, just when a refresh of the container images is necessary.
-They will build the container images and push to docker hub. If you want to, you can still build manually (see next section), but there shouldn't be a requirement to do so any more.
-
-A word of warning: podman on OSX uses a virtual machine. The job can take care of starting it, but we generally try to have it running to avoid jobs cleaning up after themselves and killing the machine for other jobs. When starting the machine, set the variables that need to be set during the container build, ie. proxy and `BUILDAH_FORMAT`.
-
-`BUILDAH_FORMAT` ensures that `ONBUILD` instructions are enabled.
-
-```
-export http_proxy=http://bbpproxy.epfl.ch:80
-export https_proxy=http://bbpproxy.epfl.ch:80
-export HTTP_PROXY=http://bbpproxy.epfl.ch:80
-export HTTPS_PROXY=http://bbpproxy.epfl.ch:80
-export BUILDAH_FORMAT=docker
-```
-
 ### Building the docker image manually
+
 After making updates to any of the docker files, you can build the image with:
 ```
 cd nrn/packaging/python
@@ -108,11 +93,6 @@ For `HPE-MPT MPI`, since it's not open source, you need to acquire the headers a
 docker run -v $PWD/nrn:/root/nrn -w /root/nrn -v $PWD/mpt-headers/2.21/include:/nrnwheel/mpt/include -it neuronsimulator/neuron_wheel:latest-x86_64 bash
 ```
 where `$PWD/mpt-headers` is the path to the HPE-MPT MPI headers on the host machine that end up mounted at `/nrnwheel/mpt/include`.
-You can download the headers with:
-
-```
-git clone ssh://bbpcode.epfl.ch/user/kumbhar/mpt-headers
-```
 
 ## macOS wheels
 
@@ -206,15 +186,6 @@ bash packaging/python/test_wheels.sh python3.9 "-i https://test.pypi.org/simple/
 On MacOS, launching `nrniv -python` or `special -python` can fail to load `neuron` module due to security restrictions.
 For this specific purpose, please `export SKIP_EMBEDED_PYTHON_TEST=true` before launching the tests.
 
-### Testing on BB5
-On BB5, we can test CPU wheels with:
-
-```
-salloc -A proj16 -N 1 --ntasks-per-node=4 -C "cpu" --time=1:00:00 -p interactive
-module load unstable python
-bash packaging/python/test_wheels.sh python3.9 wheelhouse/NEURON-7.8.0.236-cp39-cp39-manylinux1_x86_64.whl
-```
-
 ## Publishing the wheels on Pypi via Azure
 
 ### Variables that drive PyPI upload
````

packaging/python/Dockerfile

+25 -27

```diff
@@ -1,9 +1,23 @@
-ARG MANYLINUX_IMAGE=manylinux2014_x86_64
+ARG MANYLINUX_IMAGE=manylinux_2_28_x86_64
 
 FROM quay.io/pypa/$MANYLINUX_IMAGE
-LABEL authors="Pramod Kumbhar, Fernando Pereira, Alexandru Savulescu"
+LABEL authors="Pramod Kumbhar, Fernando Pereira, Alexandru Savulescu, Goran Jelic-Cizmek"
+
+# problem: libstdc++ is _not_ forwards compatible, so if we try to compile mod
+# files on a system that ships a version of it older than the one used for
+# building the wheel itself, we'll get linker errors.
+# solution: use a well-defined oldest-supported version of GCC
+# we need to do this _before_ building any libraries from source
+ARG OLDEST_SUPPORTED_GCC_VERSION=10
+RUN yum -y install \
+    gcc-toolset-${OLDEST_SUPPORTED_GCC_VERSION}-gcc \
+    gcc-toolset-${OLDEST_SUPPORTED_GCC_VERSION}-gcc-c++ \
+    && yum -y clean all && rm -rf /var/cache
+ENV PATH /opt/rh/gcc-toolset-${OLDEST_SUPPORTED_GCC_VERSION}/root/usr/bin:$PATH
+ENV LD_LIBRARY_PATH=/opt/rh/gcc-toolset-${OLDEST_SUPPORTED_GCC_VERSION}/root/usr/lib64:/opt/rh/gcc-toolset-${OLDEST_SUPPORTED_GCC_VERSION}/root/usr/lib:/opt/rh/gcc-toolset-${OLDEST_SUPPORTED_GCC_VERSION}/root/usr/lib64/dyninst:/opt/rh/gcc-toolset-${OLDEST_SUPPORTED_GCC_VERSION}/root/usr/lib/dyninst
+ENV DEVTOOLSET_ROOTPATH=/opt/rh/gcc-toolset-${OLDEST_SUPPORTED_GCC_VERSION}/root
 
-RUN gcc --version && python --version
+RUN gcc --version && python3 --version
 
 # install basic packages
 RUN yum -y install \
@@ -13,6 +27,9 @@ RUN yum -y install \
     vim \
     curl \
     unzip \
+    flex \
+    mpich-devel \
+    openmpi-devel \
     bison \
     autoconf \
     automake \
@@ -28,33 +45,13 @@ RUN yum -y install \
 
 WORKDIR /root
 
-# newer flex with rpmbuild (manylinux2014 based on Centos7 currently has flex < 2.6)
-RUN rpmbuild --rebuild https://vault.centos.org/8-stream/AppStream/Source/SPackages/flex-2.6.1-9.el8.src.rpm \
-    && yum -y install rpmbuild/RPMS/*/flex-2.6.1-9.el7.*.rpm \
-    && rm -rf rpmbuild
-
 RUN wget http://ftpmirror.gnu.org/ncurses/ncurses-6.4.tar.gz \
     && tar -xvzf ncurses-6.4.tar.gz \
     && cd ncurses-6.4 \
     && ./configure --prefix=/nrnwheel/ncurses --without-shared --without-debug CFLAGS="-fPIC" \
     && make -j install \
     && cd .. && rm -rf ncurses-6.4 ncurses-6.4.tar.gz
 
-RUN curl -L -o mpich-3.3.2.tar.gz http://www.mpich.org/static/downloads/3.3.2/mpich-3.3.2.tar.gz \
-    && tar -xvzf mpich-3.3.2.tar.gz \
-    && cd mpich-3.3.2 \
-    && ./configure --disable-fortran --prefix=/nrnwheel/mpich \
-    && make -j install \
-    && cd .. && rm -rf mpich-3.3.2 mpich-3.3.2.tar.gz \
-    && rm -rf /nrnwheel/mpich/share/doc /nrnwheel/mpich/share/man
-
-RUN curl -L -o openmpi-4.0.3.tar.gz https://download.open-mpi.org/release/open-mpi/v4.0/openmpi-4.0.3.tar.gz \
-    && tar -xvzf openmpi-4.0.3.tar.gz \
-    && cd openmpi-4.0.3 \
-    && ./configure --prefix=/nrnwheel/openmpi \
-    && make -j install \
-    && cd .. && rm -rf openmpi-4.0.3 openmpi-4.0.3.tar.gz
-
 RUN curl -L -o readline-7.0.tar.gz https://ftp.gnu.org/gnu/readline/readline-7.0.tar.gz \
     && tar -xvzf readline-7.0.tar.gz \
     && cd readline-7.0 \
@@ -83,17 +80,18 @@ RUN curl -L -o Python-3.10.0.tar.gz https://www.python.org/ftp/python/3.10.0/Pyt
     && make -j altinstall \
     && cd .. && rm -rf Python-3.10.0 Python-3.10.0.tar.gz
 
-ENV PATH /nrnwheel/openmpi/bin:$PATH
 RUN yum -y install epel-release libX11-devel libXcomposite-devel vim-enhanced && yum -y clean all && rm -rf /var/cache
 RUN yum -y remove ncurses-devel
 
-# Copy Dockerfile for reference
-COPY Dockerfile .
-
 # build wheels from there
 WORKDIR /root
 
 # remove Python 3.13t since we do not support the free-threaded build yet
 RUN rm -fr /opt/python/cp313-cp313t
 
 ENV NMODL_PYLIB=/nrnwheel/python/lib/libpython3.10.so.1.0
+
+ENV PATH /usr/lib64/openmpi/bin:$PATH
+
+# Copy Dockerfile for reference
+COPY Dockerfile .
```
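The rationale in the new comment block reduces to a one-line rule: the host's GCC/libstdc++ must be at least as new as the builder's, because libstdc++ is backwards- but not forwards-compatible. A toy sketch of that rule (hypothetical helpers, not code from the commit):

```python
# Toy model: can a wheel built with `builder_gcc` be expected to link
# cleanly (nrnivmodl-compiled mod files) on a host running `host_gcc`?

def version_tuple(v: str):
    return tuple(int(x) for x in v.split("."))

def link_should_work(builder_gcc: str, host_gcc: str) -> bool:
    # host libstdc++ must provide every symbol version the wheel references,
    # so the host toolchain must be at least as new as the builder's
    return version_tuple(host_gcc) >= version_tuple(builder_gcc)

# Pinning the build image to gcc-toolset-10 makes GCC 10 the contract:
print(link_should_work("10.0", "12.2"))  # True: newer host is fine
print(link_should_work("10.0", "8.5"))   # False: expect linker errors
```

This is why the toolset must be installed and put on `PATH` before any libraries are built from source: everything baked into the wheel has to be compiled against the same, deliberately old, libstdc++.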

packaging/python/README.md

-1
This file was deleted.

packaging/python/README.md

+1
```diff
@@ -0,0 +1 @@
+Refer to [this document](../../docs/install/python_wheels.md) to see how to build Python wheels for NEURON.
```

packaging/python/build_wheels.bash

+19 -2

```diff
@@ -193,7 +193,15 @@ coreneuron=$3
 case "$1" in
 
     linux)
-        MPI_INCLUDE_HEADERS="/nrnwheel/openmpi/include;/nrnwheel/mpich/include"
+        MPI_POSSIBLE_INCLUDE_HEADERS="/usr/include/openmpi-$(uname -m) /usr/include/mpich-$(uname -m) /usr/lib/$(uname -m)-linux-gnu/openmpi/include /usr/include/$(uname -m)-linux-gnu/mpich"
+        MPI_INCLUDE_HEADERS=""
+        for dir in $MPI_POSSIBLE_INCLUDE_HEADERS
+        do
+            if [ -d "${dir}" ]; then
+                MPI_INCLUDE_HEADERS="${MPI_INCLUDE_HEADERS};${dir}"
+            fi
+        done
+
         # Check for MPT headers. On Azure, we extract them from a secure file and mount them in the docker image in:
         MPT_INCLUDE_PATH="/nrnwheel/mpt/include"
         if [ -d "$MPT_INCLUDE_PATH" ]; then
@@ -221,7 +229,16 @@ case "$1" in
             MPI_INCLUDE_HEADERS="${BREW_PREFIX}/opt/openmpi/include;${BREW_PREFIX}/opt/mpich/include"
             build_wheel_osx $(which python3) "$coreneuron" "$MPI_INCLUDE_HEADERS"
         else
-            MPI_INCLUDE_HEADERS="/usr/lib/x86_64-linux-gnu/openmpi/include;/usr/include/x86_64-linux-gnu/mpich"
+            # first two are for AlmaLinux 8 (default for manylinux_2_28);
+            # second two are for Debian/Ubuntu derivatives
+            MPI_POSSIBLE_INCLUDE_HEADERS="/usr/include/openmpi-$(uname -m) /usr/include/mpich-$(uname -m) /usr/lib/$(uname -m)-linux-gnu/openmpi/include /usr/include/$(uname -m)-linux-gnu/mpich"
+            MPI_INCLUDE_HEADERS=""
+            for dir in $MPI_POSSIBLE_INCLUDE_HEADERS
+            do
+                if [ -d "${dir}" ]; then
+                    MPI_INCLUDE_HEADERS="${MPI_INCLUDE_HEADERS};${dir}"
+                fi
+            done
             build_wheel_linux $(which python3) "$coreneuron" "$MPI_INCLUDE_HEADERS"
         fi
         ls wheelhouse/
```
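The probing loop added in this diff builds a CMake-style `;`-separated list from whichever candidate directories actually exist, which is what lets one script serve both the AlmaLinux 8 build image and Debian/Ubuntu CI hosts. A standalone sketch of the pattern, using hypothetical `/tmp` directories instead of the real MPI include paths:

```shell
# Sketch of the header-probing pattern from build_wheels.bash.
# Only two of the three candidate directories are created, so only
# those two end up in the final list.
mkdir -p /tmp/mpi_demo/openmpi /tmp/mpi_demo/mpich
candidates="/tmp/mpi_demo/openmpi /tmp/mpi_demo/missing /tmp/mpi_demo/mpich"

headers=""
for dir in $candidates; do
    if [ -d "$dir" ]; then
        headers="${headers};${dir}"   # ';' is the CMake list separator
    fi
done

echo "$headers"   # ;/tmp/mpi_demo/openmpi;/tmp/mpi_demo/mpich
```

Note the result keeps a leading `;` (an empty first list element), exactly as the committed loop does; for an include-path list this is harmless in practice.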

packaging/python/test_wheels.sh

+6 -23

```diff
@@ -5,7 +5,7 @@ set -xe
 # See CMake's CMAKE_HOST_SYSTEM_PROCESSOR documentation
 # On the systems where we are building wheel we can rely
 # on uname -m. Note that this is just wheel testing script.
-ARCH_DIR=`uname -m`
+ARCH_DIR="$(uname -m)"
 
 if [ ! -f setup.py ]; then
     echo "Error: Please launch $0 from the root dir"
@@ -36,10 +36,6 @@ run_mpi_test () {
     echo "======= Testing $mpi_name ========"
     if [ -n "$mpi_module" ]; then
         echo "Loading module $mpi_module"
-        if [[ $(hostname -f) = *r*bbp.epfl.ch* ]]; then
-            echo "\tusing unstable on BB5"
-            module load unstable
-        fi
         module load $mpi_module
     fi
 
@@ -184,29 +180,16 @@ run_parallel_test() {
         export DYLD_LIBRARY_PATH=${BREW_PREFIX}/opt/open-mpi/lib:$DYLD_LIBRARY_PATH
         run_mpi_test "${BREW_PREFIX}/opt/open-mpi/bin/mpirun" "OpenMPI" ""
 
-    # CI Linux or Azure Linux
-    elif [[ "$CI_OS_NAME" == "linux" || "$AGENT_OS" == "Linux" ]]; then
+    # CI Linux or Azure Linux or circleCI build (all on Debian/Ubuntu)
+    elif [[ "$CI_OS_NAME" == "linux" || "$AGENT_OS" == "Linux" || "$CIRCLECI" == "true" ]]; then
         # make debugging easier
         sudo update-alternatives --get-selections | grep mpi
-        sudo update-alternatives --list mpi-x86_64-linux-gnu
+        sudo update-alternatives --list mpi-${ARCH_DIR}-linux-gnu
         # choose mpich
-        sudo update-alternatives --set mpi-x86_64-linux-gnu /usr/include/x86_64-linux-gnu/mpich
+        sudo update-alternatives --set mpi-${ARCH_DIR}-linux-gnu /usr/include/${ARCH_DIR}-linux-gnu/mpich
         run_mpi_test "mpirun.mpich" "MPICH" ""
         # choose openmpi
-        sudo update-alternatives --set mpi-x86_64-linux-gnu /usr/lib/x86_64-linux-gnu/openmpi/include
-        run_mpi_test "mpirun.openmpi" "OpenMPI" ""
-
-    # BB5 with multiple MPI libraries
-    elif [[ $(hostname -f) = *r*bbp.epfl.ch* ]]; then
-        run_mpi_test "srun" "HPE-MPT" "hpe-mpi"
-        run_mpi_test "mpirun" "Intel MPI" "intel-oneapi-mpi"
-        run_mpi_test "srun" "MVAPICH2" "mvapich2"
-
-    # circle-ci build
-    elif [[ "$CIRCLECI" == "true" ]]; then
-        sudo update-alternatives --set mpi-aarch64-linux-gnu /usr/include/aarch64-linux-gnu/mpich
-        run_mpi_test "mpirun.mpich" "MPICH" ""
-        sudo update-alternatives --set mpi-aarch64-linux-gnu /usr/lib/aarch64-linux-gnu/openmpi/include
+        sudo update-alternatives --set mpi-${ARCH_DIR}-linux-gnu /usr/lib/${ARCH_DIR}-linux-gnu/openmpi/include
         run_mpi_test "mpirun.openmpi" "OpenMPI" ""
 
     # linux desktop or docker container used for wheel
```

share/lib/python/scripts/_binwrapper.py

+13 -10

```diff
@@ -7,6 +7,7 @@
 import shutil
 import subprocess
 import sys
+import warnings
 from importlib.metadata import metadata, PackageNotFoundError
 from importlib.util import find_spec
 from pathlib import Path
@@ -44,21 +45,23 @@ def _set_default_compiler():
         os.environ.setdefault("CXX", ccompiler.compiler_cxx[0])
 
 
-def _check_cpp_compiler_version():
-    """Check if GCC compiler is >= 9.0 otherwise show warning"""
+def _check_cpp_compiler_version(min_version: str):
+    """Check if GCC compiler is >= min supported one, otherwise show warning"""
     try:
         cpp_compiler = os.environ.get("CXX", "")
         version = subprocess.run(
-            [cpp_compiler, "--version"], stdout=subprocess.PIPE
+            [cpp_compiler, "--version"],
+            stdout=subprocess.PIPE,
         ).stdout.decode("utf-8")
-        if "GCC" in version:
+        if "gcc" in version.lower() or "gnu" in version.lower():
             version = subprocess.run(
-                [cpp_compiler, "-dumpversion"], stdout=subprocess.PIPE
+                [cpp_compiler, "-dumpversion"],
+                stdout=subprocess.PIPE,
             ).stdout.decode("utf-8")
-            if Version(version) <= Version("9.0"):
-                print(
-                    "Warning: GCC >= 9.0 is required with this version of NEURON but found",
-                    version,
+            if Version(version) <= Version(min_version):
+                warnings.warn(
+                    f"Warning: GCC >= {min_version} is required with this version of NEURON"
+                    f"but found version {version}",
                 )
     except:
         pass
@@ -111,7 +114,7 @@ def _wrap_executable(output_name):
 
     if exe.endswith("nrnivmodl"):
         # To create a wrapper for special (so it also gets ENV vars) we intercept nrnivmodl
-        _check_cpp_compiler_version()
+        _check_cpp_compiler_version("10.0")
         subprocess.check_call([exe, *sys.argv[1:]])
         _wrap_executable("special")
         sys.exit(0)
```
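A side note on why the wrapper compares versions with a `Version` class rather than raw strings: lexicographic comparison misorders two-digit majors, which matters exactly at the GCC 9 → 10 boundary this commit cares about. A minimal stand-in for the numeric comparison (hypothetical helpers, not the wrapper's actual code):

```python
# Raw string comparison gets the 9 -> 10 boundary wrong; comparing
# tuples of integers (as a Version class does internally) gets it right.

def version_tuple(v: str):
    """'10.2.1' -> (10, 2, 1); enough for GCC's -dumpversion output."""
    return tuple(int(x) for x in v.strip().split("."))

def gcc_too_old(found: str, minimum: str = "10.0") -> bool:
    return version_tuple(found) < version_tuple(minimum)

print(gcc_too_old("9.4.0"))   # True: the wrapper would emit its warning
print(gcc_too_old("13.2"))    # False
print("9.4.0" < "10.0")       # False: string comparison ranks 9.x above 10.0
```

Also worth noting: the committed check uses `<=` against the minimum, so a compiler reporting exactly 10.0 still triggers the warning.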
