-Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
+Permission is hereby granted, free of charge, to any person obtaining a copy of this
+software and associated documentation files (the "Software"), to deal in the Software
+without restriction, including without limitation the rights to use, copy, modify, merge,
+publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons
+to whom the Software is furnished to do so, subject to the following conditions:
-The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
+The above copyright notice and this permission notice shall be included in all copies or
+substantial portions of the Software.
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
+INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
+PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE
+FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
+OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+DEALINGS IN THE SOFTWARE.
src/open_ephys/analysis/README.md: 7 additions & 5 deletions
@@ -65,14 +65,16 @@ Recording Index: 0
## Loading continuous data

-Continuous data for each recording is accessed via the `.continuous` property of each `Recording` object. This returns a list of continuous data, grouped by processor/sub-processor. For example, if you have two data streams merged into a single Record Node, each data stream will be associated with a different processor ID. If you're recording Neuropixels data, each probe's data stream will be stored in a separate sub-processor, which must be loaded individually.
+Continuous data for each recording is accessed via the `.continuous` property of each `Recording` object. This now returns a dictionary of continuous data grouped by processor/sub-processor. Each stream is stored twice in the dictionary: once under its zero-based index and once under its stream name. For example, if you have two data streams merged into a single Record Node, each data stream will be associated with a different processor ID. If you're recording Neuropixels data, each probe's data stream will be stored in a separate sub-processor, which must be loaded individually.
+Continuous data for individual data streams can be accessed by index (e.g., `continuous[0]`), or by stream name (e.g., `continuous["example_data"]`). If there are multiple streams with the same name, the source processor ID will be appended to the stream name so they can be distinguished (e.g., `continuous["example_data_100"]`). Iterating over the dictionary yields the continuous objects in index order, and `continuous.keys()` lists both the integer indices and stream names that can be used for lookup.
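As a rough sketch of the access patterns described above: the snippet below loads a recording and looks up the same stream by index and by name. The directory path and the stream name `example_data` are placeholders, and the `Session`/`recordnodes` loading steps are assumed from the open-ephys-python-tools package rather than shown in this diff.

```python
# Hedged sketch of dictionary-style access to continuous data.
from open_ephys.analysis import Session

session = Session("/path/to/recording/directory")   # placeholder path
recording = session.recordnodes[0].recordings[0]

# The same stream can be reached by zero-based index or by stream name
by_index = recording.continuous[0]
by_name = recording.continuous["example_data"]       # placeholder stream name

# Iterating yields the continuous objects in index order;
# keys() lists both the integer indices and the stream names
for stream in recording.continuous:
    print(stream.metadata)
print(list(recording.continuous.keys()))
```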
Each `continuous` object has four properties:
--`samples` - a `numpy.ndarray` that holds the actual continuous data with dimensions of samples x channels. For Binary, NWB, and Kwik format, this will be a memory-mapped array (i.e., the data will only be loaded into memory when specific samples are accessed).
+-`samples` - a `numpy.ndarray` that holds the actual continuous data with dimensions of samples x channels. For Binary and NWB formats, this will be a memory-mapped array (i.e., the data will only be loaded into memory when specific samples are accessed).
-`sample_numbers` - a `numpy.ndarray` that holds the sample numbers since the start of acquisition. This will have the same size as the first dimension of the `samples` array
-`timestamps` - a `numpy.ndarray` that holds global timestamps (in seconds) for each sample, assuming all data streams were synchronized in this recording. This will have the same size as the first dimension of the `samples` array
--`metadata` - a `dict` containing information about this data, such as the ID of the processor it originated from.
+-`metadata` - a `ContinousMetadata` dataclass containing information about this data, such as the ID of the processor it originated from.
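These properties can be inspected without pulling the memory-mapped data into memory. The sketch below assumes `recording` was loaded as in the earlier example.

```python
# Sketch: inspect the four properties of a continuous object.
stream = recording.continuous[0]

print(stream.samples.shape)         # (num_samples, num_channels), int16, memory-mapped
print(stream.sample_numbers.shape)  # (num_samples,), sample numbers since acquisition start
print(stream.timestamps.shape)      # (num_samples,), global timestamps in seconds
print(stream.metadata)              # dataclass describing the stream (e.g., source processor ID)
```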
Because the memory-mapped samples are stored as 16-bit integers in arbitrary units, all analysis should be done on a scaled version of these samples. To load the samples scaled to microvolts, use the `get_samples()` method:
@@ -81,7 +83,7 @@ Because the memory-mapped samples are stored as 16-bit integers in arbitrary uni
>> data = recording.continuous[0].get_samples(start_sample_index=0, end_sample_index=10000)
```
-This will return the first 10,000 continuous samples for all channels in units of microvolts. Note that your computer may run out of memory when requesting a large number of samples for many channels at once. It's also important to note that `start_sample_index` and `end_sample_index` represent relative indices in the `samples` array, rather than absolute sample numbers. The default behavior is to return all channels in the order in which they are stored, typically in increasing numerical order. However, if the `channel map` plugin is placed in the signal chain before a `record node`, the order of channels will follow the order of the specified channel mapping.
+This will return the first 10,000 continuous samples for all channels in units of microvolts. Note that your computer may run out of memory when requesting a large number of samples for many channels at once. It's also important to note that `start_sample_index` and `end_sample_index` represent relative indices in the `samples` array, rather than absolute sample numbers. The default behavior is to return all channels in the order in which they are stored, typically in increasing numerical order. However, if the Channel Map plugin is placed in the signal chain before a Record Node, the order of channels will follow the order of the specified channel mapping.
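One way to keep relative indices and absolute sample numbers straight is sketched below: an absolute sample number is mapped to the index that `get_samples()` expects via the `sample_numbers` array. The target sample number is a made-up placeholder and `recording` is assumed from the examples above.

```python
import numpy as np

# get_samples() takes indices into the samples array, not absolute sample
# numbers, so map an absolute sample number to its index first.
stream = recording.continuous[0]

target_sample_number = 30000      # placeholder absolute sample number
start = int(np.searchsorted(stream.sample_numbers, target_sample_number))
end = start + 10000               # 10,000 samples from that point

data_uV = stream.get_samples(start_sample_index=start, end_sample_index=end)
print(data_uV.shape)              # (10000, num_channels), scaled to microvolts
```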
The `get_samples` method includes the arguments:
@@ -124,7 +126,7 @@ If spike data has been saved by your Record Node (i.e., there is a Spike Detecto
-`sample_numbers` - `numpy.ndarray` of sample indices (one per spike)
-`timestamps` - `numpy.ndarray` of global timestamps (in seconds)
-`clusters` - `numpy.ndarray` of cluster IDs for each spike (default cluster = 0)
--`metadata` - `dict` with metadata about each electrode
+-`metadata` - `SpikeMetadata` dataclass with metadata about each electrode
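To show how these fields fit together, here is a sketch that tallies spikes per cluster. It assumes spike sources are exposed via a `recording.spikes` collection (one entry per electrode), which is not shown in this diff, and that `recording` was loaded as in the examples above.

```python
import numpy as np

# Sketch: count spikes per cluster for each electrode.
for electrode in recording.spikes:   # assumed spike-source collection
    clusters, counts = np.unique(electrode.clusters, return_counts=True)
    print(electrode.metadata, dict(zip(clusters.tolist(), counts.tolist())))
```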