Removed unreusable reusable code for better readability #3510

Open · wants to merge 5 commits into master

Conversation

@Helveg (Contributor) commented on Jun 25, 2025

This code was written by me during a hackathon, with a lot of initial enthusiasm for experimenting with best pytest patterns, and it completely missed its mark of providing an easy, reusable simulation fixture. I've removed it in favor of straightforward, less complex, much more readable test code.

@Helveg requested a review from heplesser on June 25, 2025 at 08:36
to function print_details.
"""
nest.ResetKernel()
nest.SetKernelStatus({"tics_per_ms": 2**14, "resolution": resolution})

nest.set

Suggested change
nest.SetKernelStatus({"tics_per_ms": 2**14, "resolution": resolution})
nest.set(tics_per_ms=2**14, resolution=resolution)
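
As I understand the NEST 3 interface, nest.set(...) forwards its keyword arguments to the simulation kernel, so this is the keyword-style equivalent of passing a status dictionary to nest.SetKernelStatus.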

neuron = nest.Create(model, params=params)
nest.Simulate(duration)

V_m = nest.GetStatus(neuron, "V_m")[0]

Suggested change
V_m = nest.GetStatus(neuron, "V_m")[0]

V_m = nest.GetStatus(neuron, "V_m")[0]
expected_V_m = params["I_e"] * params["tau_m"] / params["C_m"] * (1.0 - math.exp(-duration / params["tau_m"]))

assert math.fabs(V_m - expected_V_m) < tolerance

Suggested change
assert math.fabs(V_m - expected_V_m) < tolerance
assert neuron.V_m == pytest.approx(expected_V_m)
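
For reference, this expected value is the standard charging solution of the LIF membrane equation under a constant input current, assuming the membrane starts at rest and V_m is read relative to the resting potential (my reading of the test, not something stated in the diff):

C_m \frac{\mathrm{d}V_m}{\mathrm{d}t} = -\frac{C_m}{\tau_m} V_m + I_e, \qquad V_m(0) = 0
\quad\Longrightarrow\quad
V_m(t) = \frac{I_e\,\tau_m}{C_m}\left(1 - e^{-t/\tau_m}\right),

so evaluating at t = duration gives the expected_V_m computed above.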


nest.Simulate(duration)

spike_times = nest.GetStatus(spike_recorder, "events")[0]["times"]

Suggested change
spike_times = nest.GetStatus(spike_recorder, "events")[0]["times"]
spike_times = spike_recorder.get("events", "times")
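
If I read the NEST 3 recorder API correctly, get("events", "times") drills into the events dictionary and returns the times array directly, which is what makes the [0]["times"] indexing unnecessary here.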

Comment on lines +224 to +262
events = nest.GetStatus(vm, "events")[0]
times = np.array(events["times"]).reshape(-1, 1)
V_m = np.array(events["V_m"]).reshape(-1, 1)
results = np.hstack((times, V_m))

actual, expected = testutil.get_comparable_timesamples(
results,
np.array(
[
[0.1, -70],
[0.2, -70],
[0.3, -70],
[0.4, -70],
[0.5, -70],
[2.8, -70],
[2.9, -70],
[3.0, -70],
[3.1, -70],
[3.2, -70],
[3.3, -70],
[3.4, -70],
[3.5, -70],
[4.8, -70],
[4.9, -70],
[5.0, -70],
[5.1, -69.9974],
[5.2, -69.9899],
[5.3, -69.9781],
[5.4, -69.9624],
[5.5, -69.9434],
[5.6, -69.9213],
[5.7, -69.8967],
[5.8, -69.8699],
[5.9, -69.8411],
[6.0, -69.8108],
]
),
)
assert actual == expected

Suggested change
events = nest.GetStatus(vm, "events")[0]
times = np.array(events["times"]).reshape(-1, 1)
V_m = np.array(events["V_m"]).reshape(-1, 1)
results = np.hstack((times, V_m))
actual, expected = testutil.get_comparable_timesamples(
results,
np.array(
[
[0.1, -70],
[0.2, -70],
[0.3, -70],
[0.4, -70],
[0.5, -70],
[2.8, -70],
[2.9, -70],
[3.0, -70],
[3.1, -70],
[3.2, -70],
[3.3, -70],
[3.4, -70],
[3.5, -70],
[4.8, -70],
[4.9, -70],
[5.0, -70],
[5.1, -69.9974],
[5.2, -69.9899],
[5.3, -69.9781],
[5.4, -69.9624],
[5.5, -69.9434],
[5.6, -69.9213],
[5.7, -69.8967],
[5.8, -69.8699],
[5.9, -69.8411],
[6.0, -69.8108],
]
),
)
assert actual == expected
res = vm.get("events", ["times", "V_m"])
vm_expected = testutil.get_samples_at_nearest_times(res["times"],
np.array(
[
[0.1, -70],
[0.2, -70],
[0.3, -70],
[0.4, -70],
[0.5, -70],
[2.8, -70],
[2.9, -70],
[3.0, -70],
[3.1, -70],
[3.2, -70],
[3.3, -70],
[3.4, -70],
[3.5, -70],
[4.8, -70],
[4.9, -70],
[5.0, -70],
[5.1, -69.9974],
[5.2, -69.9899],
[5.3, -69.9781],
[5.4, -69.9624],
[5.5, -69.9434],
[5.6, -69.9213],
[5.7, -69.8967],
[5.8, -69.8699],
[5.9, -69.8411],
[6.0, -69.8108],
]
),
)
assert res["V_m"] == pytest.approx(vm_expected)

@heplesser added the S: Normal (handle this with default priority), T: Maintenance (work to keep up the quality of the code and documentation), and I: No breaking change (previously written code will work as before, no one should notice anything changing aside from the fix) labels on Jun 30, 2025

@Helveg (Author) commented on Jul 7, 2025

@heplesser It seems like the tests do rely on get_comparable_timesamples to subselect both the measured and the expected samples. One way we could remove this util function altogether is to run the old tests, extract the exact solutions into separate arrays, and then simply assert measured == pytest.approx(expected). We'd probably be duplicating the solutions a couple of times, but perhaps that is preferred? I think using this util function leaves a lot of room for errors in the sample times to go unnoticed, because mismatching samples are filtered out of both the measured and the expected array.
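
A minimal sketch of what that could look like, assuming the reference trace has been captured once from a trusted run of the old test and stored next to the test file; the model, stimulation, file name, and parameter values below are placeholders rather than the actual test setup:

import numpy as np
import pytest

import nest

# Hypothetical reference trace, captured once from a trusted run of the old
# test and saved alongside the test file; the file name is a placeholder.
EXPECTED = np.load("expected_vm_trace.npy")  # columns: time, V_m


def test_vm_trace_matches_reference():
    nest.ResetKernel()
    nest.set(resolution=0.1)

    neuron = nest.Create("iaf_psc_alpha")  # placeholder model with default params
    vm = nest.Create("voltmeter", params={"interval": 0.1})
    nest.Connect(vm, neuron)
    nest.Simulate(6.0)

    res = vm.get("events", ["times", "V_m"])
    # Compare the full recorded trace against the reference: both the sample
    # times and the values must match, so a shifted or missing sample fails
    # the test instead of being silently filtered out.
    assert res["times"] == pytest.approx(EXPECTED[:, 0])
    assert res["V_m"] == pytest.approx(EXPECTED[:, 1])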
