Commit dbe1267

Merge pull request #46 from open-ephys/development
Release version 1.0.0
2 parents eceddca + ce89d43 commit dbe1267

147 files changed (+4764 additions, -528 deletions)


.github/workflows/python-tests.yml

Lines changed: 43 additions & 0 deletions
@@ -0,0 +1,43 @@
+# This workflow will install Python dependencies, run tests and lint with a single version of Python
+# For more information see: https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-python
+
+name: Python Tests
+
+on:
+  push:
+    branches: [ "main" ]
+  pull_request:
+    branches: [ "main" ]
+
+permissions:
+  contents: read
+
+jobs:
+  build:
+
+    runs-on: ubuntu-latest
+
+    steps:
+    - uses: actions/checkout@v4
+    - name: Install uv
+      uses: astral-sh/setup-uv@v5
+      with:
+        enable-cache: true
+        cache-dependency-glob: "uv.lock"
+    - name: Install Python (pin to a wheel-friendly version)
+      run: |
+        uv python install 3.12
+        uv python pin 3.12
+
+    - name: Create virtualenv
+      run: uv venv
+
+    - name: Preinstall h5py as wheel only
+      run: uv pip install "--only-binary=:all:" "h5py==3.13.0"
+
+    - name: Install the project
+      run: uv sync --extra dev
+
+    - name: Run tests
+      run: uv run pytest tests
+

.gitignore

Lines changed: 4 additions & 1 deletion
@@ -14,4 +14,7 @@ dist
 Notebooks
 notebooks
 
-.spyproject
+.vscode
+.spyproject
+
+build

CHANGELOG.md

Lines changed: 46 additions & 0 deletions
@@ -1,5 +1,51 @@
 # `open-ephys-python-tools` Changelog
 
+## 1.0.0
+
+- Dropped support for Python < 3.9
+- Refactoring without new functionality or API changes
+- The `Continuous` and `Spike` classes of the three formats now have an explicit interface
+  (i.e. abstract parent class) and have been renamed to `BinaryContinuous`, `BinarySpike` etc.
+- The metadata of `Continuous` and `Spike` in the analysis package are now typed dataclasses
+  instead of `dict` objects. This makes accessing metadata more reliable.
+- Type hints have been added to the `analysis` package.
+- Automated tests for reading Binary, NWB and OpenEphys data formats have been added.
+- Added a `RecordingFormat` enum for the three formats
+- Added a JSON schema for validating oebin files
+- Added a `uv.lock` file for reproducible development environments.
+- `BinaryContinuous` and `BinarySpike` now have `__str__` methods to give an overview of
+  their contents.
+
+## 0.1.13
+- Improve NWB format loading
+- Add method for selecting channels by name
+
+## 0.1.12
+- Fix bug in global timestamp computation
+
+## 0.1.11
+- Ensure experiment and recording directories are sorted alphanumerically
+
+## 0.1.10
+- Add option to load events without sorting by timestamp
+
+## 0.1.9
+- Allow continuous timestamps to be loaded without memory mapping (necessary when timestamp file will be overwritten)
+
+## 0.1.8
+- Change indexing method for extracting processor ID in NwbRecording
+
+## 0.1.7
+- Raise exception if no events exist on a selected line for global timestamp computation
+- Add option to ignore a sample interval when computing global timestamps
+
+## 0.1.6
+- Add `config` method to `OpenEphysHTTPServer` class
+
+## 0.1.5
+- Speed up loading of Open Ephys data format
+- Add stream names to NWB and Open Ephys events
+
 ## 0.1.4
 
 - Include `source_processor_id` and `source_processor_name` when writing .oebin file

LICENSE

Lines changed: 14 additions & 4 deletions
@@ -1,7 +1,17 @@
-Copyright 2020 Open Ephys
+Copyright 2020-2025 Open Ephys
 
-Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
+Permission is hereby granted, free of charge, to any person obtaining a copy of this
+software and associated documentation files (the "Software"), to deal in the Software
+without restriction, including without limitation the rights to use, copy, modify, merge,
+publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons
+to whom the Software is furnished to do so, subject to the following conditions:
 
-The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
+The above copyright notice and this permission notice shall be included in all copies or
+substantial portions of the Software.
 
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
+INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
+PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE
+FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
+OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+DEALINGS IN THE SOFTWARE.

pyproject.toml

Lines changed: 12 additions & 12 deletions
@@ -5,29 +5,29 @@ build-backend = "setuptools.build_meta"
 [project]
 name = "open-ephys-python-tools"
 description = "Software tools for interfacing with the Open Ephys GUI"
-license = {text = "MIT"}
-requires-python = ">=3.7"
+license = { text = "MIT" }
+requires-python = ">=3.9"
 classifiers = [
     "Programming Language :: Python :: 3",
     "License :: OSI Approved :: MIT License",
-    "Operating System :: OS Independent"
+    "Operating System :: OS Independent",
 ]
 readme = "README.md"
 
 dynamic = ["version"]
 
-dependencies = [
-    'numpy',
-    'pandas',
-    'h5py',
-    'zmq',
-    'requests'
-]
+dependencies = ['numpy', 'pandas', 'h5py', 'zmq', 'requests']
 
 [tool.setuptools.packages.find]
 where = ["src"]
 
 [tool.setuptools.dynamic]
-version = {attr = "open_ephys.__version__"}
-
+version = { attr = "open_ephys.__version__" }
 
+[dependency-groups]
+dev = [
+    "black>=25.1.0",
+    "jsonschema>=4.23.0",
+    "mypy>=1.15.0",
+    "pytest>=8.3.5",
+]

src/open_ephys/__init__.py

Lines changed: 1 addition & 1 deletion
@@ -1 +1 @@
-__version__ = "0.1.13"
+__version__ = "1.0.0"

src/open_ephys/analysis/README.md

Lines changed: 7 additions & 5 deletions
@@ -65,14 +65,16 @@ Recording Index: 0
 
 ## Loading continuous data
 
-Continuous data for each recording is accessed via the `.continuous` property of each `Recording` object. This returns a list of continuous data, grouped by processor/sub-processor. For example, if you have two data streams merged into a single Record Node, each data stream will be associated with a different processor ID. If you're recording Neuropixels data, each probe's data stream will be stored in a separate sub-processor, which must be loaded individually.
+Continuous data for each recording is accessed via the `.continuous` property of each `Recording` object. This now returns a dictionary of continuous data grouped by processor/sub-processor. Each stream is stored twice in the dictionary: once under its zero-based index and once under its stream name. For example, if you have two data streams merged into a single Record Node, each data stream will be associated with a different processor ID. If you're recording Neuropixels data, each probe's data stream will be stored in a separate sub-processor, which must be loaded individually.
+
+Continuous data for individual data streams can be accessed by index (e.g., `continuous[0]`), or by stream name (e.g., `continuous["example_data"]`). If there are multiple streams with the same name, the source processor ID will be appended to the stream name so they can be distinguished (e.g., `continuous["example_data_100"]`). Iterating over the dictionary yields the continuous objects in index order, and `continuous.keys()` lists both the integer indices and stream names that can be used for lookup.
 
 Each `continuous` object has four properties:
 
-- `samples` - a `numpy.ndarray` that holds the actual continuous data with dimensions of samples x channels. For Binary, NWB, and Kwik format, this will be a memory-mapped array (i.e., the data will only be loaded into memory when specific samples are accessed).
+- `samples` - a `numpy.ndarray` that holds the actual continuous data with dimensions of samples x channels. For Binary and NWB formats, this will be a memory-mapped array (i.e., the data will only be loaded into memory when specific samples are accessed).
 - `sample_numbers` - a `numpy.ndarray` that holds the sample numbers since the start of acquisition. This will have the same size as the first dimension of the `samples` array
 - `timestamps` - a `numpy.ndarray` that holds global timestamps (in seconds) for each sample, assuming all data streams were synchronized in this recording. This will have the same size as the first dimension of the `samples` array
-- `metadata` - a `dict` containing information about this data, such as the ID of the processor it originated from.
+- `metadata` - a `ContinousMetadata` dataclass containing information about this data, such as the ID of the processor it originated from.
 
 Because the memory-mapped samples are stored as 16-bit integers in arbitrary units, all analysis should be done on a scaled version of these samples. To load the samples scaled to microvolts, use the `get_samples()` method:
 
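To make the dictionary-style access described in the hunk above concrete, here is a minimal usage sketch. It is illustrative only and not part of this diff: the session path is a placeholder, the stream name reuses the README's own `example_data` example, and the `Session`/`recordnodes`/`recordings` loading pattern is the one documented elsewhere in this README.

```python
from open_ephys.analysis import Session

# Placeholder path; point this at a directory containing a Record Node.
session = Session("/path/to/recording_directory")
recording = session.recordnodes[0].recordings[0]

# The same stream can be retrieved by zero-based index or by stream name.
stream = recording.continuous[0]
same_stream = recording.continuous["example_data"]

# Keys include both the integer indices and the stream names.
print(list(recording.continuous.keys()))

# Metadata is now a typed dataclass rather than a plain dict.
print(stream.metadata)

# First 10,000 samples for all channels, scaled to microvolts.
data = stream.get_samples(start_sample_index=0, end_sample_index=10000)
```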
@@ -81,7 +83,7 @@ Because the memory-mapped samples are stored as 16-bit integers in arbitrary units
 >> data = recording.continuous[0].get_samples(start_sample_index=0, end_sample_index=10000)
 ```
 
-This will return the first 10,000 continuous samples for all channels in units of microvolts. Note that your computer may run out of memory when requesting a large number of samples for many channels at once. It's also important to note that `start_sample_index` and `end_sample_index` represent relative indices in the `samples` array, rather than absolute sample numbers. The default behavior is to return all channels in the order in which they are stored, typically in increasing numerical order. However, if the `channel map` plugin is placed in the signal chain before a `record node`, the order of channels will follow the order of the specified channel mapping.
+This will return the first 10,000 continuous samples for all channels in units of microvolts. Note that your computer may run out of memory when requesting a large number of samples for many channels at once. It's also important to note that `start_sample_index` and `end_sample_index` represent relative indices in the `samples` array, rather than absolute sample numbers. The default behavior is to return all channels in the order in which they are stored, typically in increasing numerical order. However, if the Channel Map plugin is placed in the signal chain before a Record Node, the order of channels will follow the order of the specified channel mapping.
 
 The `get_samples` method includes the arguments:
 
@@ -124,7 +126,7 @@ If spike data has been saved by your Record Node (i.e., there is a Spike Detector
 - `sample_numbers` - `numpy.ndarray` of sample indices (one per spikes)
 - `timestamps` - `numpy.ndarray` of global timestamps (in seconds)
 - `clusters` - `numpy.ndarray` of cluster IDs for each spike (default cluster = 0)
-- `metadata` - `dict` with metadata about each electrode
+- `metadata` - `SpikeMetadata` dataclass with metadata about each electrode
 
 ## Synchronizing timestamps
 
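Similarly, a brief sketch for the spike hunk above, assuming the recording contains spike data and that `recording.spikes` is still the iterable of spike sources documented elsewhere in this README; only the attributes listed in the diff are used.

```python
# Continuing from the `recording` object in the previous sketch.
for spikes in recording.spikes:
    print(spikes.metadata)             # SpikeMetadata dataclass describing this electrode
    print(spikes.sample_numbers[:10])  # sample indices of the first ten spikes
    print(spikes.timestamps[:10])      # corresponding global timestamps, in seconds
    print(spikes.clusters[:10])        # cluster IDs (default cluster = 0)
```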