
Segfault with SpikeGeneratorGroup #1017

Closed
varshiths opened this issue Oct 19, 2018 · 4 comments · Fixed by brian-team/brian2cuda#164

Comments

@varshiths

Say I have run my network, containing sgg = SpikeGeneratorGroup(), for T seconds.
If I then set the spikes in sgg to time instants earlier than T, the program crashes with a segmentation fault on the next run.
Ideally, it should raise a runtime error instead.

@mstimberg
Member

Hi, thanks for your bug report. I'm afraid I don't follow 100% what the problem is. Can you provide a minimal example that triggers this issue? I tried the following and it seems to work fine:

from brian2 import *
T = 10*ms
sgg = SpikeGeneratorGroup(1, [], []*ms)
run(T)
sgg.set_spikes([0], [5]*ms)  # 5ms < T
run(T)

@varshiths
Author

I reduced my program to the minimal example shown below.
My usage is different from yours; I should have been more detailed in my first report.

from brian2 import *

def generate_input_spikes(neurons, frequency, period):
    # Run an isolated Poisson network and record its spikes
    _gen = PoissonGroup(neurons, frequency)
    _mon = SpikeMonitor(_gen)
    _net = Network(_gen, _mon)
    _net.run(period)

    indices = _mon.i[:]
    times = _mon.t[:]
    return indices, times

N = 2
ilayer = SpikeGeneratorGroup(N, [], []*ms)
network = Network(ilayer)

for j in range(2):
    print(j)
    indices, times = generate_input_spikes(N, 40*Hz, 500*ms)
    # indices = np.arange(N)
    # times = indices * 500*ms / N

    # times += network.t

    ilayer.set_spikes(indices, times)
    network.run(500*ms)

For N=2, the program crashes with a segmentation fault.
For N=1, it aborts with a free(): invalid pointer error.

When I set the indices with arange() instead, the error does not occur (so my initial hunch was probably wrong).
When I use generate_input_spikes(), the error occurs after the second iteration.

However, neither error occurs when I uncomment the line times += network.t.

The error seems to be related to generate_input_spikes() being called multiple times and to resource allocation for Networks. I am, however, not familiar with the internals of the library.
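The times += network.t workaround can be sketched in plain Python (illustrative only, not Brian2 code; plain float milliseconds stand in for Brian2 Quantities, and the function name is made up for this sketch):

```python
# The spike times recorded by the monitor are relative to the start of its
# own isolated run, so they must be offset by the outer network's clock
# before being handed to set_spikes for the next run.

def shift_to_network_time(times_ms, network_t_ms):
    """Offset spike times so they lie at or after the current network time."""
    return [t + network_t_ms for t in times_ms]

recorded = [12.5, 250.0, 375.25]  # times from generate_input_spikes, in ms
shifted = shift_to_network_time(recorded, 500.0)  # network.t after first run
# shifted == [512.5, 750.0, 875.25]: all in the future of the outer network
```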

@mstimberg
Member

Many thanks for that example, that was very useful. I think I understand where the problem is coming from. Our implementation of the SpikeGeneratorGroup assumes that none of its neurons spikes more than once during a time step (and it checks the given indices/times to make sure this really is the case). At the same time, it generates spikes by simply "delivering" all spikes that have not yet been delivered and do not lie in the future. Setting times earlier than the current time therefore leads to an attempt to deliver all the spikes at once, which violates the assumption of no more than one spike per time step.
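The mechanism described above can be illustrated with a plain-Python sketch (this is not Brian2's actual implementation; the function and float-millisecond times are made up for illustration):

```python
# Each time step, deliver every stored spike that has not yet been
# delivered and does not lie in the future.

def deliver_spikes(spike_times, current_t, next_undelivered):
    """Return (indices delivered this step, new next_undelivered cursor).

    spike_times: sorted spike times in ms
    current_t: current simulation time in ms
    next_undelivered: index of the first not-yet-delivered spike
    """
    delivered = []
    i = next_undelivered
    while i < len(spike_times) and spike_times[i] <= current_t:
        delivered.append(i)
        i += 1
    return delivered, i

# Normal case: a spike in the future stays undelivered.
delivered, cursor = deliver_spikes([1.0, 15.0], 10.0, 0)
# delivered == [0], cursor == 1

# Pathological case: times re-set to values earlier than the current time
# are all "due" at once on the next step.
delivered, cursor = deliver_spikes([1.0, 2.0, 3.0], 10.0, 0)
# delivered == [0, 1, 2]: three spikes in one step, violating the
# one-spike-per-neuron-per-step assumption if they share a neuron index.
```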

I'm not 100% sure what the best solution is: on the one hand, it would be good to raise an error for such cases, in the same way we do for negative spike times. On the other hand, this also feels suboptimal, e.g.:

group = SpikeGeneratorGroup(1, [0], [7]*ms)
run(10*ms)
print(group.neuron_index, group.spike_time)
run(5*ms)

and

group = SpikeGeneratorGroup(1, [0], [7]*ms)
run(10*ms)
group.set_spikes([0], [7]*ms)
print(group.neuron_index, group.spike_time)
run(5*ms)

would both print the same stored indices/spike times, but the latter would raise an error.

In the spikegeneratorgroup_time_check branch I have implemented another simple solution: we simply ignore spikes in the past, in the same way we do if they have already been delivered. This means that the two code examples above would give the same output and the same result, without any error.

Maybe as a compromise, we could use this solution and raise a warning? @thesamovar, any thoughts on this?
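The warn-and-ignore compromise could look roughly like this plain-Python sketch (not the actual Brian2 implementation; the function name and float-millisecond times are made up for illustration):

```python
import warnings

# Spikes scheduled before the current time are silently usable nowhere,
# so drop them, and warn the user that provided information was discarded.

def filter_past_spikes(indices, times, current_t):
    """Keep only (index, time) pairs at or after current_t; warn if any dropped."""
    kept = [(i, t) for i, t in zip(indices, times) if t >= current_t]
    n_dropped = len(times) - len(kept)
    if n_dropped:
        warnings.warn(
            f"Ignoring {n_dropped} spike(s) scheduled before t={current_t}"
        )
    return [i for i, _ in kept], [t for _, t in kept]

idx, ts = filter_past_spikes([0, 0, 1], [5.0, 12.0, 20.0], current_t=10.0)
# idx == [0, 1], ts == [12.0, 20.0]; a UserWarning reports the dropped spike
```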

@thesamovar
Member

I like that idea. Whenever we discard information the user has provided, it's a sign that things are not doing what the user expects, but it's not necessarily an error, so a warning is appropriate.
