colldata module

Stores the collected raw data and provides functions to operate on it. Supported functionality:

  • keep only a predefined amount of data (a window of the last n seconds of samples), dropping old data
  • help in quick plotting of raw data (compress it for display)
  • search for spikes over threshold

Data, TTL and timestamp storage happens in the Collector class; spike detection and data compression for raw plotting are performed in DataProc.
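
A minimal usage sketch of the two classes together (assumes a default-constructed Collector and synthetic data; real code feeds the chunks and events received from OE):

    import numpy as np
    from opeth.colldata import Collector, DataProc

    collector = Collector()
    collector.set_sampling_rate(30000)
    dataproc = DataProc(collector)

    # Accumulate raw chunks as they arrive (here: 32 channels x 640 samples
    # of noise, in uV).
    collector.add_data(np.random.randn(32, 640) * 50)
    collector.keep_last(seconds=5)   # trim everything older than 5 seconds

    # Search the buffered data for threshold crossings.
    spikes, spike_ts = dataproc.spikedetect(collector.get_data(),
                                            collector.get_ts(),
                                            threshold=-40.0)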

class opeth.colldata.Collector

Bases: object

Data storage class for raw analog data, timestamps and event timestamps.

databuffer

The 2D data storage, each row representing a channel, each column a sample.

Type: 2D CircularBuffer
tsbuffer

Timestamp buffer storing 1 time stamp value for each data column.

Type: 1D CircularBuffer
timestamp

Sample number, updated on a timestamp event or when received explicitly with a set of data.

spikes

Spike positions - stored if spikes are sent by OE.

Type: deque
ttls

TTL positions as sent by OE.

Type: deque
samples_per_sec

Sampling rate.

Type: int
prev_trigger_ts

Used to detect backward-jumping timestamps among TTL timestamps.

Type: defaultdict(int)
drop_aux

Adjusted through set_drop_aux(); affects whether auxiliary data (the 3 gyroscope channels) is filtered out or not.

Type: bool
add_data(data)

Append a new chunk of analog channel measurements to the end of the storage array.

Auxiliary channel data (gyroscopes) is automatically removed if 35 or 70 channels were received (channels 33-35 or 65-70) and drop_aux is True.

Data sampling timestamps are calculated for each sample position based on the last received timestamp (stored in timestamp); the sampling rate defaults to SAMPLES_PER_SEC.

Parameters: data – input data received from OE; multiple channels, multiple samples (e.g. 35 rows/channels of 640 floating point samples). Values are expected to be in uV.
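
For example, with aux dropping enabled a 35-channel chunk is stored as 32 channels (a sketch, assuming a default-constructed Collector):

    import numpy as np
    from opeth.colldata import Collector

    collector = Collector()
    collector.set_drop_aux(True)             # filter out the gyroscope channels
    collector.add_data(np.zeros((35, 640)))  # 32 neural + 3 aux channels
    assert collector.channel_cnt() == 32     # channels 33-35 were dropped
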
add_spike(spike)

Store a new spike event. (Not used currently.)

add_ttl(ttl)

Store a new TTL event.

All TTLs are stored regardless of the selected TTL channel; the TTL processing happens in process_ttl(). This code assumes the timestamp and the sample count are the same.

channel_cnt()
Returns: the number of channels based on the rows of data in the databuffer.
drop_before(timestamp)

Drop old data which is not required for any of the various displays.

get_data()

Accessor function for the databuffer.

Obsolete. A former version could return the proper structure depending on which data storage backend was used; now databuffer may be used directly, as no other backend is configurable.

get_ts()
Returns: the timestamp buffer tsbuffer.
has_data()
Returns: True if there is (already/still) data in the buffers.
keep_last(seconds=None, samples=None, **kwargs)

Convenience wrapper function for drop_before.

Parameters:
  • seconds (int) – length of data to be kept in the buffer, in seconds. If given, it takes precedence over samples.
  • samples (int) – number of samples to be kept in the buffer.
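
For example, at the default 30000 samples per second the following two calls keep the same window (a sketch; collector is a Collector instance):

    collector.keep_last(seconds=2)      # keep the newest 2 seconds of data
    collector.keep_last(samples=60000)  # the same window expressed in samples
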
process_ttl(start_offset=-0.02, end_offset=0.05, ttl_ch=None, trigger_holdoff=0.001, **kwargs)

Process a TTL event: return data and timestamps around the event on success, or (None, None) otherwise, using the first TTL from ttl_ch.

Drops all TTLs silently from channels other than ttl_ch. Works on data accumulated by add_data() calls (the dataarray numpy array) and TTLs from add_ttl() calls (the self.ttls deque). Too frequent pulses are filtered out by trigger_holdoff.

Parameters:
  • start_offset (float) – TTL-relative start offset in seconds; typically a small negative value to return data collected right before the TTL signal.
  • end_offset (float) – TTL-relative end offset in seconds, specifying the end of the data ROI.
  • ttl_ch (int) – channel whose TTL events are to be processed as the trigger.
  • trigger_holdoff (float) – holdoff time in seconds during which no new triggers are processed (to protect the system against trigger bursts in case of broken cabling etc.).
Returns:

2D numpy array of data (one row per channel) around the TTL in the [start_offset .. end_offset] window, and a 1D numpy array of timestamps (same number of columns as the data). The timestamps are effectively sample numbers.
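
A usage sketch (ttl_ch=1 is an arbitrary example value; the TTL events must already have been stored via add_ttl()):

    data, ts = collector.process_ttl(start_offset=-0.02, end_offset=0.05,
                                     ttl_ch=1, trigger_holdoff=0.001)
    if data is not None:
        n_channels, n_samples = data.shape  # one row per channel
        event_window = ts[0], ts[-1]        # sample-number timestamps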

set_drop_aux(should_drop)

Update AUX channel settings (whether spikes should be searched for on the AUX channels or not).

set_sampling_rate(sampling_rate)
update_ts(timestamp)

Required for old OE versions that sent timestamps separately as events; kept for backward compatibility.

Parameters: timestamp (int) – stored in timestamp for later timestamp interpolation calculations when data arrives.
class opeth.colldata.DataProc(collector=None, drop_aux=False)

Bases: object

Utility functions to handle collected data.

Parameters:
  • collector (Collector) – data on which the operations are performed. Only a reference; the part of the data to operate on is passed to each function.
  • drop_aux (bool) – sets whether a 35 or 70 analog channel setup is treated as 32+3 (or 64+6) with the extra 3 (or 6) channels unimportant and dropped, or whether those channels are important and are to be parsed for spikes as well.
autottl(data, timestamps, base_timestamp, ch=0, threshold=0.5, **kwargs)

Generate TTL signals based on threshold in a channel of data.

Playback from file in OE did not support TTL event playback, so it was necessary to generate them somehow.

Not used in real situations.

Parameters:
  • ch – channel to run thresholding on for TTL signals
  • threshold – threshold level
  • base_timestamp – TTL timestamp relative to the start-of-data-packet timestamp (in samples). Probably unnecessary; kept only for emulating OE TTL data.
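
The underlying idea is plain threshold crossing; a standalone numpy illustration of it (not the library's implementation):

    import numpy as np

    def threshold_crossings(channel, threshold=0.5):
        """Return the sample indices where the signal rises above threshold."""
        above = channel > threshold
        # A crossing is a sample above threshold whose predecessor was not.
        return np.flatnonzero(above[1:] & ~above[:-1]) + 1
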
compress(data, rate, timestamps=None)

Compress a 2D matrix column-wise by keeping the min and max values of the compressed chunks.

Used by the real-time raw display to reduce the number of points to be plotted. The displayed set tries to plot a sawtooth-style signal touching both the min and max values of the original signal over the given range.

Parameters:
  • data (2D CircularBuffer) – array to be compressed.
  • rate (int) – required compression rate.
  • timestamps (1D CircularBuffer) – timestamp axis; compressed the same way as the data axis.
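
A minimal numpy illustration of the min/max technique (not the library's implementation; trailing samples that do not fill a whole chunk are discarded):

    import numpy as np

    def minmax_compress(data, rate):
        """Keep per-chunk min and max of each channel, interleaved column-wise."""
        nch, nsamp = data.shape
        chunks = data[:, :nsamp - nsamp % rate].reshape(nch, -1, rate)
        out = np.empty((nch, chunks.shape[1] * 2))
        out[:, 0::2] = chunks.min(axis=2)  # chunk minima
        out[:, 1::2] = chunks.max(axis=2)  # chunk maxima
        return out
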
set_sampling_rate(sampling_rate)
spikedetect(data, timestamps, threshold=0.5, rising_edge=False, disabled=[])

Detect spikes based on threshold level.

Spike detection: from the first continuous block of data exceeding the threshold, select the maximal (minimal, in case of a negative threshold) value as the spike position, and do not search for further spikes within the SPIKE_HOLDOFF time after the threshold crossing.

The threshold method is selected by the SPIKE_THRESHOLD_BELOW setting (True by default).

Parameters:
  • threshold (scalar or vector) – if a vector, it must have one entry per channel of data.
  • data (ndarray, e.g. CircularBuffer) – samples on which spike filtering will be performed.
  • timestamps – time stamps accompanying the data samples.
  • rising_edge (bool) – False if the threshold level should be considered a negative threshold and a falling edge is to be detected.
Returns:

a list of spike positions (sample indices) and another list of the same positions as timestamps.
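
A simplified single-channel illustration of the threshold-plus-holdoff idea (not the library's implementation; shows the negative-threshold / falling-edge case):

    import numpy as np

    SAMPLES_PER_SEC = 30000
    SPIKE_HOLDOFF = 0.00075  # dead time in seconds

    def detect_spikes(channel, threshold=-40.0):
        """Take the minimum after each falling-edge crossing as the spike."""
        holdoff = max(1, int(SPIKE_HOLDOFF * SAMPLES_PER_SEC))
        spikes, i = [], 0
        while i < len(channel):
            if channel[i] < threshold:                    # crossing detected
                block = channel[i:i + holdoff]
                spikes.append(i + int(np.argmin(block)))  # minimum = spike position
                i += holdoff                              # censor the holdoff window
            else:
                i += 1
        return spikes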

opeth.colldata.EVENT_ROI = (-0.02, 0.05)

Region of interest in seconds (the +/- timestamp range around an event, i.e. the neighbourhood investigated for spikes).

opeth.colldata.SAMPLES_PER_SEC = 30000

Sampling frequency in Hz

opeth.colldata.SPIKE_HOLDOFF = 0.00075

Dead time / censoring period (seconds)