
Commit 87dd13e

EXA: batching. 99.99% of issue #1
1 parent eaa0968 commit 87dd13e

File tree

1 file changed: +83 -0 lines changed
@@ -0,0 +1,83 @@
'''
===========================================
Extracting features from stimulus batches
===========================================

This example shows how to use batches to extract motion-energy features from a video.

When the stimulus is very high-resolution (e.g. 4K) or is multiple hours long, it might not be possible to fit the data in memory. In such situations, it is useful to load a small number of video frames and extract motion-energy features from that subset of frames alone. In order to do this properly, one must avoid convolution edge effects. In this example we show how to batch the stimulus, pad each batch, and trim the results so that the batched features match those computed from the full stimulus.
'''
# %%
# First, we'll specify the stimulus we want to load.

import moten
import numpy as np
import matplotlib.pyplot as plt

stimulus_fps = 24
video_file = 'http://anwarnunez.github.io/downloads/avsnr150s24fps_tiny.mp4'
# %%
# Load the first 300 images and spatially downsample the video.

small_vhsize = (72, 128)  # height x width
luminance_images = moten.io.video2luminance(video_file, size=small_vhsize, nimages=300)
nimages, vdim, hdim = luminance_images.shape
print(vdim, hdim)

fig, ax = plt.subplots()
ax.matshow(luminance_images[200], vmin=0, vmax=100, cmap='inferno')
ax.set_xticks([])
ax.set_yticks([])
# %%
# Next, we construct the pyramid and extract the motion-energy features from the full stimulus. These will serve as the reference against which we compare the batched features.

pyramid = moten.pyramids.MotionEnergyPyramid(stimulus_vhsize=(vdim, hdim),
                                             stimulus_fps=stimulus_fps,
                                             filter_temporal_width=16)

moten_features = pyramid.project_stimulus(luminance_images)
print(moten_features.shape)
# %%
# We have to add some padding to the batches in order to avoid convolution edge effects. The padding is determined by the temporal width of the motion-energy filters. By default, the temporal width is 2/3 of the stimulus frame rate (``int(stimulus_fps*(2/3))``). This parameter can be specified when instantiating a pyramid by passing e.g. ``filter_temporal_width=16``, as we did above. Once the pyramid is defined, the parameter can also be accessed from the ``pyramid.definition`` dictionary.

filter_temporal_width = pyramid.definition['filter_temporal_width']
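# %%
# As a quick sanity check, the default temporal width for this 24 fps stimulus would be ``int(stimulus_fps*(2/3))``, i.e. 16 frames, which is the same value we passed explicitly when constructing the pyramid.

print(int(stimulus_fps * (2 / 3)), filter_temporal_width)  # 16 16
assert int(stimulus_fps * (2 / 3)) == filter_temporal_width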
# %%
# Finally, we define the padding window as half the temporal filter width. Half the width of context on each side is enough for the convolution at every retained frame to see exactly the frames it would see in the full stimulus.

window = int(np.ceil(filter_temporal_width / 2))
print(filter_temporal_width, window)
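# %%
# Concretely, with 300 frames split into 5 batches of 60 and ``window = 8``, the second batch covers frames ``60:120`` but we will project frames ``52:128`` and then drop the first and last 8 feature samples. A quick check of that index arithmetic, using the values computed above:

example_start, example_end = 60, 120
print(max(example_start - window, 0), example_end + window)  # 52 128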
# %%
# Now we are ready to extract motion-energy features in batches:

nbatches = 5
batch_size = int(np.ceil(nimages / nbatches))
batched_data = []
for bdx in range(nbatches):
    start_frame, end_frame = batch_size * bdx, batch_size * (bdx + 1)
    print('Batch %i/%i [%i:%i]' % (bdx + 1, nbatches, start_frame, end_frame))

    # Pad the batch with `window` frames on each side. The first batch
    # has no preceding frames, and slicing past the end of the array
    # simply clips at the last frame.
    batch_start = max(start_frame - window, 0)
    batch_end = end_frame + window
    batched_responses = pyramid.project_stimulus(
        luminance_images[batch_start:batch_end])

    # Trim the feature samples that correspond to the padding frames:
    # the first batch is only padded at the end, the last batch only at
    # the start, and every other batch on both sides.
    if bdx == 0:
        batched_responses = batched_responses[:-window]
    elif bdx + 1 == nbatches:
        batched_responses = batched_responses[window:]
    else:
        batched_responses = batched_responses[window:-window]
    batched_data.append(batched_responses)

batched_data = np.vstack(batched_data)
# %%
# The batched features are exactly the same as those computed from the full stimulus.

assert np.allclose(moten_features, batched_data)
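
# %%
# The batching logic above can be wrapped into a small helper. This is only a sketch (the name ``project_stimulus_batched`` is ours, not part of the moten API), and it still assumes the luminance images fit in memory; for a truly out-of-core stimulus, the slicing step would be replaced by loading only the padded frame range from disk.

def project_stimulus_batched(pyramid, images, nbatches, window):
    '''Project `images` through `pyramid` in `nbatches` padded batches.'''
    nimages = images.shape[0]
    batch_size = int(np.ceil(nimages / nbatches))
    chunks = []
    for bdx in range(nbatches):
        start, end = batch_size * bdx, batch_size * (bdx + 1)
        padded_start = max(start - window, 0)
        responses = pyramid.project_stimulus(images[padded_start:end + window])
        # Drop the padded samples; there is no padding before the first
        # batch or after the last one.
        lo = window if bdx > 0 else 0
        hi = -window if bdx + 1 < nbatches else None
        chunks.append(responses[lo:hi])
    return np.vstack(chunks)

assert np.allclose(moten_features,
                   project_stimulus_batched(pyramid, luminance_images,
                                            nbatches=5, window=window))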
