API¶
MDF¶
This class acts as a proxy for the MDF2, MDF3 and MDF4 classes. All attribute access is delegated to the underlying _mdf attribute (MDF2, MDF3 or MDF4 object). See MDF3 and MDF4 for available extra methods (MDF2 and MDF3 share the same implementation).
An empty MDF file is created if the name argument is not provided. If the name argument is provided then the file must exist in the filesystem, otherwise an exception is raised.
The best practice is to use the MDF as a context manager. This way all resources are released correctly in case of exceptions.
with MDF(r'test.mdf') as mdf_file:
# do something
-
class asammdf.mdf.MDF(name=None, memory='full', version='4.10', callback=None, queue=None)[source]¶
Unified access to MDF v3 and v4 files. Underlying _mdf’s attributes and methods are linked to the MDF object via setattr. This is done to expose them to the user code and for performance considerations.
Parameters: - name : string
mdf file name, if provided it must be a real file name
- memory : str
memory option; default full:
- if full the data group binary data block will be loaded in RAM
- if low the channel data is read from disk on request, and the metadata is loaded into RAM
- if minimum only minimal data is loaded into RAM
- version : string
mdf file version from (‘2.00’, ‘2.10’, ‘2.14’, ‘3.00’, ‘3.10’, ‘3.20’, ‘3.30’, ‘4.00’, ‘4.10’, ‘4.11’); default ‘4.10’
-
static concatenate(files, outversion='4.10', memory='full', callback=None)[source]¶
Concatenates several files. The files must have the same internal structure (same number of groups, and same channels in each group).
Parameters: - files : list | tuple
list of MDF file names or MDF instances
- outversion : str
merged file version
- memory : str
memory option; default full
Returns: - concatenate : MDF
new MDF object with concatenated channels
Raises: - MdfException : if there are inconsistencies between the files
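The structural-compatibility rule can be illustrated with a small, hypothetical pre-check, independent of asammdf itself (the helper name and layout representation are made up for illustration, not part of the library):

```python
# Hypothetical pre-check illustrating the "same internal structure"
# rule that concatenate() enforces; not asammdf's implementation.

def same_structure(layouts):
    """layouts: one entry per file, each a list of per-group
    channel-name lists. True if every file matches the first."""
    first = layouts[0]
    return all(layout == first for layout in layouts[1:])

file_a = [['t', 'Speed'], ['t', 'RPM']]   # 2 groups
file_b = [['t', 'Speed'], ['t', 'RPM']]   # same layout
file_c = [['t', 'Speed']]                 # missing a group

print(same_structure([file_a, file_b]))   # True
print(same_structure([file_a, file_c]))   # False
```

If any file fails such a check, concatenate raises MdfException instead of producing a partially merged file.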
-
convert(to, memory='full')[source]¶
Convert MDF to another version.
Parameters: - to : str
new mdf file version from (‘2.00’, ‘2.10’, ‘2.14’, ‘3.00’, ‘3.10’, ‘3.20’, ‘3.30’, ‘4.00’, ‘4.10’, ‘4.11’); default ‘4.10’
- memory : str
memory option; default full
Returns: - out : MDF
new MDF object
-
cut(start=None, stop=None, whence=0)[source]¶
Cut the MDF file. The start and stop limits are absolute values or values relative to the first timestamp, depending on the whence argument.
Parameters: - start : float
start time, default None. If None then the start of measurement is used
- stop : float
stop time, default None. If None then the end of measurement is used
- whence : int
how to search for the start and stop values
- 0 : absolute
- 1 : relative to first timestamp
Returns: - out : MDF
new MDF object
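The whence semantics can be sketched in a few lines (the helper name is hypothetical; this only illustrates how the limits are interpreted, not asammdf internals):

```python
# Illustrative sketch of cut()'s whence argument: with whence=1 the
# start/stop limits are offsets from the first timestamp, with
# whence=0 they are absolute times.

def resolve_limits(start, stop, whence, first_timestamp):
    if whence == 1:  # relative to first timestamp
        if start is not None:
            start += first_timestamp
        if stop is not None:
            stop += first_timestamp
    return start, stop

# measurement starts at t = 10.0 s
print(resolve_limits(2.0, 5.0, whence=1, first_timestamp=10.0))  # (12.0, 15.0)
print(resolve_limits(2.0, 5.0, whence=0, first_timestamp=10.0))  # (2.0, 5.0)
```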
-
export(fmt, filename=None, **kwargs)[source]¶
Export MDF to other formats. The MDF file name is used if available, otherwise the filename argument must be provided.
Parameters: - fmt : string
can be one of the following:
- csv : CSV export that uses the “;” delimiter. This option will generate a new csv file for each data group (<MDFNAME>_DataGroup_<cntr>.csv)
- hdf5 : HDF5 file output; each MDF data group is mapped to a HDF5 group with the name ‘DataGroup_<cntr>’ (where <cntr> is the index)
- excel : Excel file output (very slow). This option will generate a new excel file for each data group (<MDFNAME>_DataGroup_<cntr>.xlsx)
- mat : Matlab .mat version 4, 5 or 7.3 export. If single_time_base==False the channels will be renamed in the mat file to ‘DataGroup_<cntr>_<channel name>’. The channel group master will be renamed to ‘DataGroup_<cntr>_<channel name>_master’ ( <cntr> is the data group index starting from 0)
- pandas : export all channels as a single pandas DataFrame
- filename : string
export file name
- **kwargs
- single_time_base: resample all channels to a common time base, default False (the pandas export always uses a single time base)
- raster: float time raster for resampling. Valid if single_time_base is True and for pandas export
- time_from_zero: adjust time channel to start from 0
- use_display_names: use display name instead of standard channel name, if available.
- empty_channels: behaviour for channels without samples; the options are skip or zeros; default is zeros
- format: only valid for mat export; can be ‘4’, ‘5’ or ‘7.3’, default is ‘5’
Returns: - dataframe : pandas.DataFrame
only in case of pandas export
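The per-data-group file naming used by the csv and excel options can be sketched as follows (the helper is hypothetical; the 0-based counter follows the mat description above):

```python
# Sketch of the <MDFNAME>_DataGroup_<cntr> naming pattern used by the
# per-group export options; not asammdf's implementation.

def export_names(mdf_name, group_count, ext):
    stem = mdf_name.rsplit('.', 1)[0]     # drop the .mdf extension
    return ['{}_DataGroup_{}.{}'.format(stem, i, ext)
            for i in range(group_count)]

print(export_names('test.mdf', 3, 'csv'))
# ['test_DataGroup_0.csv', 'test_DataGroup_1.csv', 'test_DataGroup_2.csv']
```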
-
filter(channels, memory='full')[source]¶
Return a new MDF object that contains only the channels listed in the channels argument.
Parameters: - channels : list
list of items to be filtered; each item can be :
- a channel name string
- (channel name, group index, channel index) list or tuple
- (channel name, group index) list or tuple
- (None, group index, channel index) list or tuple
- memory : str
memory option for filtered MDF; default full
Returns: - mdf : MDF
new MDF file
Examples
>>> from asammdf import MDF, Signal
>>> import numpy as np
>>> t = np.arange(5)
>>> s = np.ones(5)
>>> mdf = MDF()
>>> for i in range(4):
...     sigs = [Signal(s*(i*10+j), t, name='SIG') for j in range(1,4)]
...     mdf.append(sigs)
...
>>> filtered = mdf.filter(['SIG', ('SIG', 3, 1), ['SIG', 2], (None, 1, 2)])
>>> for gp_nr, ch_nr in filtered.channels_db['SIG']:
...     print(filtered.get(group=gp_nr, index=ch_nr))
...
<Signal SIG:
        samples=[ 1.  1.  1.  1.  1.]
        timestamps=[0 1 2 3 4]
        unit=""
        info=None
        comment="">
<Signal SIG:
        samples=[ 31.  31.  31.  31.  31.]
        timestamps=[0 1 2 3 4]
        unit=""
        info=None
        comment="">
<Signal SIG:
        samples=[ 21.  21.  21.  21.  21.]
        timestamps=[0 1 2 3 4]
        unit=""
        info=None
        comment="">
<Signal SIG:
        samples=[ 12.  12.  12.  12.  12.]
        timestamps=[0 1 2 3 4]
        unit=""
        info=None
        comment="">
-
iter_channels(skip_master=True)[source]¶
Generator that yields a Signal for each non-master channel.
Parameters: - skip_master : bool
do not yield master channels; default True
-
iter_get(name=None, group=None, index=None, raster=None, samples_only=False, raw=False)[source]¶
Iterator over a channel.
This is useful for large files with a small number of channels.
-
static merge(files, outversion='4.10', memory='full', callback=None)[source]¶
Concatenates several files. The files must have the same internal structure (same number of groups, and same channels in each group).
Parameters: - files : list | tuple
list of MDF file names or MDF instances
- outversion : str
merged file version
- memory : str
memory option; default full
Returns: - concatenate : MDF
new MDF object with concatenated channels
Raises: - MdfException : if there are inconsistencies between the files
-
resample(raster, memory='full')[source]¶
Resample all channels using the given raster.
Parameters: - raster : float
time raster in seconds
- memory : str
memory option; default full
Returns: - mdf : MDF
new MDF with resampled channels
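Resampling amounts to interpolating every channel onto the new uniform time base. A minimal pure-Python sketch of the linear interpolation involved (asammdf uses numpy internally; this illustrative helper is not its implementation):

```python
# Linear interpolation of samples onto new timestamps; values outside
# the original range are clamped to the edge samples.

def interp(t_new, t_old, samples):
    out = []
    for t in t_new:
        if t <= t_old[0]:
            out.append(samples[0])
        elif t >= t_old[-1]:
            out.append(samples[-1])
        else:
            # find the bracketing pair and interpolate linearly
            for i in range(len(t_old) - 1):
                if t_old[i] <= t <= t_old[i + 1]:
                    f = (t - t_old[i]) / (t_old[i + 1] - t_old[i])
                    out.append(samples[i] + f * (samples[i + 1] - samples[i]))
                    break
    return out

raster = 0.5
t_old = [0.0, 1.0, 2.0]
t_new = [i * raster for i in range(5)]          # 0.0, 0.5, ..., 2.0
print(interp(t_new, t_old, [0.0, 10.0, 20.0]))  # [0.0, 5.0, 10.0, 15.0, 20.0]
```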
-
select(channels, dataframe=False)[source]¶
Retrieve the channels listed in the channels argument as Signal objects.
Parameters: - channels : list
list of items to be filtered; each item can be :
- a channel name string
- (channel name, group index, channel index) list or tuple
- (channel name, group index) list or tuple
- (None, group index, channel index) list or tuple
- dataframe: bool
return a pandas DataFrame instead of a list of Signals; in this case the signals will be interpolated using the union of all timestamps
Returns: - signals : list
list of Signal objects based on the input channel list
Examples
>>> from asammdf import MDF, Signal
>>> import numpy as np
>>> t = np.arange(5)
>>> s = np.ones(5)
>>> mdf = MDF()
>>> for i in range(4):
...     sigs = [Signal(s*(i*10+j), t, name='SIG') for j in range(1,4)]
...     mdf.append(sigs)
...
>>> # select SIG group 0 default index 1 default, SIG group 3 index 1, SIG group 2 index 1 default and channel index 2 from group 1
...
>>> mdf.select(['SIG', ('SIG', 3, 1), ['SIG', 2], (None, 1, 2)])
[<Signal SIG:
        samples=[ 1.  1.  1.  1.  1.]
        timestamps=[0 1 2 3 4]
        unit=""
        info=None
        comment="">
, <Signal SIG:
        samples=[ 31.  31.  31.  31.  31.]
        timestamps=[0 1 2 3 4]
        unit=""
        info=None
        comment="">
, <Signal SIG:
        samples=[ 21.  21.  21.  21.  21.]
        timestamps=[0 1 2 3 4]
        unit=""
        info=None
        comment="">
, <Signal SIG:
        samples=[ 12.  12.  12.  12.  12.]
        timestamps=[0 1 2 3 4]
        unit=""
        info=None
        comment="">
]
-
static stack(files, outversion='4.10', memory='full', sync=True, callback=None)[source]¶
Stack several files and return the resulting MDF object.
Parameters: - files : list | tuple
list of MDF file names or MDF instances
- outversion : str
merged file version
- memory : str
memory option; default full
- sync : bool
sync the files based on the start of measurement, default True
Returns: - merged : MDF
new MDF object with the stacked channels
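The sync behaviour can be sketched as offsetting each file's timestamps so all measurements share a common origin (the helper name is hypothetical and this is only illustrative, not asammdf's implementation):

```python
# Sketch of stack(sync=True): each file's relative timestamps are
# shifted by that file's start-of-measurement offset from the
# earliest start among all files.

def sync_timestamps(starts, timestamps):
    """starts: absolute start-of-measurement time per file;
    timestamps: per-file lists of relative timestamps."""
    origin = min(starts)
    return [[t + (start - origin) for t in ts]
            for start, ts in zip(starts, timestamps)]

# file B started 2 s after file A
print(sync_timestamps([100.0, 102.0], [[0.0, 1.0], [0.0, 1.0]]))
# [[0.0, 1.0], [2.0, 3.0]]
```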
-
whereis(channel)[source]¶
Get the occurrences of a channel name in the file.
Parameters: - channel : str
channel name string
Returns: - occurrences : tuple
Examples
>>> mdf = MDF(file_name)
>>> mdf.whereis('VehicleSpeed') # "VehicleSpeed" exists in the file
((1, 2), (2, 4))
>>> mdf.whereis('VehicleSPD') # "VehicleSPD" doesn't exist in the file
()
MDF3¶
asammdf tries to emulate the mdf structure using Python builtin data types.
The header attribute is an OrderedDict that holds the file metadata.
The groups attribute is a dictionary list with the following keys:
data_group : DataGroup object
channel_group : ChannelGroup object
channels : list of Channel objects with the same order as found in the mdf file
channel_conversions : list of ChannelConversion objects in 1-to-1 relation with the channel list
channel_sources : list of SourceInformation objects in 1-to-1 relation with the channels list
channel_dependencies : list of ChannelDependency objects in a 1-to-1 relation with the channel list
data_block : DataBlock object
texts : dictionary containing TextBlock objects used throughout the mdf
channels : list of dictionaries that contain TextBlock objects related to each channel
- long_name_addr : channel long name
- comment_addr : channel comment
- display_name_addr : channel display name
channel group : list of dictionaries that contain TextBlock objects related to each channel group
- comment_addr : channel group comment
conversion_tab : list of dictionaries that contain TextBlock objects related to VTAB and VTABR channel conversions
- text_{n} : n-th text of the VTABR conversion
sorted : bool flag to indicate if the source file was sorted; it is used when memory is low or minimum
size : data block size; used for lazy loading of measured data
record_size : dict of record ID -> record size pairs
The file_history attribute is a TextBlock object.
The channels_db attribute is a dictionary that holds the (data group index, channel index) pairs for all signals. This is used to speed up the get_signal_by_name method.
The masters_db attribute is a dictionary that holds the channel index of the master channel for all data groups. This is used to speed up the get_signal_by_name method.
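The two lookup structures can be sketched with hand-written contents (the names and indexes below are made up for illustration; in a real file they are built while the blocks are parsed):

```python
# Illustrative contents of the name and master lookup dictionaries.

channels_db = {
    'VehicleSpeed': [(1, 2), (2, 4)],   # name -> (group, channel) pairs
    'EngineRPM': [(1, 3)],
}
masters_db = {1: 0, 2: 0}               # group index -> master channel index

# fast lookup by name, then find the master channel of the same group
group, index = channels_db['VehicleSpeed'][0]
print(group, index, masters_db[group])  # 1 2 0
```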
-
class asammdf.mdf_v3.MDF3(name=None, memory='full', version='3.30', callback=None)[source]
If name exists, it is loaded; otherwise an empty file is created that can later be saved to disk.
Parameters: - name : string
mdf file name
- memory : str
memory optimization option; default full
- if full the data group binary data block will be loaded in RAM
- if low the channel data is read from disk on request, and the metadata is loaded into RAM
- if minimum only minimal data is loaded into RAM
- version : string
mdf file version (‘2.00’, ‘2.10’, ‘2.14’, ‘3.00’, ‘3.10’, ‘3.20’ or ‘3.30’); default ‘3.30’
Attributes: - channels_db : dict
used for fast channel access by name; for each name key the value is a list of (group index, channel index) tuples
- file_history : TextBlock
file history text block; can be None
- groups : list
list of data groups
- header : HeaderBlock
mdf file header
- identification : FileIdentificationBlock
mdf file start block
- masters_db : dict
used for fast master channel access; for each group index key the value is the master channel index
- memory : str
memory optimization option
- name : string
mdf file name
- version : str
mdf version
-
add_trigger(group, timestamp, pre_time=0, post_time=0, comment='')[source]
Add a trigger to a data group.
Parameters: - group : int
group index
- timestamp : float
trigger time
- pre_time : float
trigger pre time; default 0
- post_time : float
trigger post time; default 0
- comment : str
trigger comment
-
append(signals, acquisition_info='Python', common_timebase=False)[source]
Appends a new data group.
For Signals with channel dependencies, the samples attribute must be a numpy.recarray.
Parameters: - signals : list
list of Signal objects
- acquisition_info : str
acquisition information; default ‘Python’
- common_timebase : bool
flag to hint that the signals have the same timebase
Examples
>>> # case 1 conversion type None
>>> s1 = np.array([1, 2, 3, 4, 5])
>>> s2 = np.array([-1, -2, -3, -4, -5])
>>> s3 = np.array([0.1, 0.04, 0.09, 0.16, 0.25])
>>> t = np.array([0.001, 0.002, 0.003, 0.004, 0.005])
>>> names = ['Positive', 'Negative', 'Float']
>>> units = ['+', '-', '.f']
>>> info = {}
>>> s1 = Signal(samples=s1, timestamps=t, unit='+', name='Positive')
>>> s2 = Signal(samples=s2, timestamps=t, unit='-', name='Negative')
>>> s3 = Signal(samples=s3, timestamps=t, unit='flts', name='Floats')
>>> mdf = MDF3('new.mdf')
>>> mdf.append([s1, s2, s3], 'created by asammdf v1.1.0')
>>> # case 2: VTAB conversions from channels inside another file
>>> mdf1 = MDF3('in.mdf')
>>> ch1 = mdf1.get("Channel1_VTAB")
>>> ch2 = mdf1.get("Channel2_VTABR")
>>> sigs = [ch1, ch2]
>>> mdf2 = MDF3('out.mdf')
>>> mdf2.append(sigs, 'created by asammdf v1.1.0')
-
close()[source]
If the MDF was created with memory='minimum' and new channels have been appended, this must be called before the object is discarded, to clean up the temporary file.
-
configure(read_fragment_size=None, write_fragment_size=None, use_display_names=None, single_bit_uint_as_bool=None)[source]
Configure the read and write fragment size for chunked data access.
Parameters: - read_fragment_size : int
size hint of split data blocks, default 8MB; if the initial size is smaller, then no data list is used. The actual split size depends on the data groups’ records size
- write_fragment_size : int
size hint of split data blocks, default 8MB; if the initial size is smaller, then no data list is used. The actual split size depends on the data groups’ records size
- use_display_names : bool
use display name if available for the Signal’s name returned by the get method
-
extend(index, signals)[source]
Extend a group with new samples. The first signal is the master channel’s samples, and the next signals must respect the same order in which they were appended. The samples must have raw or physical values according to the Signals used for the initial append.
Parameters: - index : int
group index
- signals : list
list of numpy.ndarray objects
Examples
>>> # case 1 conversion type None
>>> s1 = np.array([1, 2, 3, 4, 5])
>>> s2 = np.array([-1, -2, -3, -4, -5])
>>> s3 = np.array([0.1, 0.04, 0.09, 0.16, 0.25])
>>> t = np.array([0.001, 0.002, 0.003, 0.004, 0.005])
>>> names = ['Positive', 'Negative', 'Float']
>>> units = ['+', '-', '.f']
>>> s1 = Signal(samples=s1, timestamps=t, unit='+', name='Positive')
>>> s2 = Signal(samples=s2, timestamps=t, unit='-', name='Negative')
>>> s3 = Signal(samples=s3, timestamps=t, unit='flts', name='Floats')
>>> mdf = MDF3('new.mdf')
>>> mdf.append([s1, s2, s3], 'created by asammdf v1.1.0')
>>> t = np.array([0.006, 0.007, 0.008, 0.009, 0.010])
>>> mdf.extend(0, [t, s1, s2, s3])
-
get(name=None, group=None, index=None, raster=None, samples_only=False, data=None, raw=False)[source]
Gets channel samples. The channel can be specified in two ways:
using the first positional argument name
- if there are multiple occurrences for this channel then the group and index arguments can be used to select a specific group.
- if there are multiple occurrences for this channel and either the group or index argument is None then a warning is issued
using the group number (keyword argument group) and the channel number (keyword argument index). Use the info method for group and channel numbers.
If the raster keyword argument is not None the output is interpolated accordingly.
Parameters: - name : string
name of channel
- group : int
0-based group index
- index : int
0-based channel index
- raster : float
time raster in seconds
- samples_only : bool
if True return only the channel samples as numpy array; if False return a Signal object
- data : bytes
prevent redundant data read by providing the raw data group samples
- raw : bool
return channel samples without applying the conversion rule; default False
Returns: - res : (numpy.array | Signal)
returns Signal if samples_only=False (default option), otherwise returns numpy.array. The Signal samples are:
- numpy recarray for channels that have CDBLOCK or BYTEARRAY type channels
- numpy array for all the rest
Raises: - MdfException :
- if the channel name is not found
- if the group index is out of range
- if the channel index is out of range
Examples
>>> from asammdf import MDF, Signal
>>> import numpy as np
>>> t = np.arange(5)
>>> s = np.ones(5)
>>> mdf = MDF(version='3.30')
>>> for i in range(4):
...     sigs = [Signal(s*(i*10+j), t, name='Sig') for j in range(1, 4)]
...     mdf.append(sigs)
...
>>> # first group and channel index of the specified channel name
...
>>> mdf.get('Sig')
UserWarning: Multiple occurances for channel "Sig". Using first occurance from data group 4. Provide both "group" and "index" arguments to select another data group
<Signal Sig:
        samples=[ 1.  1.  1.  1.  1.]
        timestamps=[0 1 2 3 4]
        unit=""
        info=None
        comment="">
>>> # first channel index in the specified group
...
>>> mdf.get('Sig', 1)
<Signal Sig:
        samples=[ 11.  11.  11.  11.  11.]
        timestamps=[0 1 2 3 4]
        unit=""
        info=None
        comment="">
>>> # channel named Sig from group 1 channel index 2
...
>>> mdf.get('Sig', 1, 2)
<Signal Sig:
        samples=[ 12.  12.  12.  12.  12.]
        timestamps=[0 1 2 3 4]
        unit=""
        info=None
        comment="">
>>> # channel index 1 of group 2
...
>>> mdf.get(None, 2, 1)
<Signal Sig:
        samples=[ 21.  21.  21.  21.  21.]
        timestamps=[0 1 2 3 4]
        unit=""
        info=None
        comment="">
>>> mdf.get(group=2, index=1)
<Signal Sig:
        samples=[ 21.  21.  21.  21.  21.]
        timestamps=[0 1 2 3 4]
        unit=""
        info=None
        comment="">
-
get_channel_comment(name=None, group=None, index=None)[source]
Gets channel comment. The channel can be specified in two ways:
using the first positional argument name
- if there are multiple occurrences for this channel then the group and index arguments can be used to select a specific group.
- if there are multiple occurrences for this channel and either the group or index argument is None then a warning is issued
using the group number (keyword argument group) and the channel number (keyword argument index). Use the info method for group and channel numbers.
Parameters: - name : string
name of channel
- group : int
0-based group index
- index : int
0-based channel index
Returns: - comment : str
found channel comment
-
get_channel_name(group, index)[source]
Gets channel name.
Parameters: - group : int
0-based group index
- index : int
0-based channel index
Returns: - name : str
found channel name
-
get_channel_unit(name=None, group=None, index=None)[source]
Gets channel unit.
The channel can be specified in two ways:
using the first positional argument name
- if there are multiple occurrences for this channel then the group and index arguments can be used to select a specific group.
- if there are multiple occurrences for this channel and either the group or index argument is None then a warning is issued
using the group number (keyword argument group) and the channel number (keyword argument index). Use the info method for group and channel numbers.
Parameters: - name : string
name of channel
- group : int
0-based group index
- index : int
0-based channel index
Returns: - unit : str
found channel unit
-
get_master(index, data=None, raster=None)[source]
Returns the master channel samples for the given group.
Parameters: - index : int
group index
- data : (bytes, int)
(data block raw bytes, fragment offset); default None
- raster : float
raster to be used for interpolation; default None
Returns: - t : numpy.array
master channel samples
-
info()[source]
Get MDF information as a dict.
Examples
>>> mdf = MDF3('test.mdf')
>>> mdf.info()
-
iter_get_triggers()[source]
Generator that yields triggers.
Returns: - trigger_info : dict
trigger information with the following keys:
- comment : trigger comment
- time : trigger time
- pre_time : trigger pre time
- post_time : trigger post time
- index : trigger index
- group : data group index of trigger
-
save(dst='', overwrite=False, compression=0)[source]
Save MDF to dst. If dst is not provided, the destination file name is the MDF name. If overwrite is True the destination file is overwritten; otherwise the file name is appended with ‘_<cntr>’, where ‘<cntr>’ is the first counter that produces a new file name (one that does not already exist in the filesystem).
Parameters: - dst : str
destination file name, Default ‘’
- overwrite : bool
overwrite flag, default False
- compression : int
does nothing for mdf version 3; introduced here to share the same API as mdf version 4 files
Returns: - output_file : str
output file name
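The ‘_<cntr>’ naming rule can be sketched with a small hypothetical helper (the starting counter value and helper name are assumptions, not asammdf's implementation):

```python
import os

# Sketch of the collision-avoiding file naming used by save() when
# overwrite=False: append '_<cntr>' with the first counter whose
# name does not exist yet. The exists hook makes this testable
# without touching the filesystem.

def safe_name(dst, exists=os.path.exists):
    if not exists(dst):
        return dst
    stem, ext = os.path.splitext(dst)
    cnt = 0
    while exists('{}_{}{}'.format(stem, cnt, ext)):
        cnt += 1
    return '{}_{}{}'.format(stem, cnt, ext)

# simulate a filesystem where 'out.mdf' and 'out_0.mdf' already exist
taken = {'out.mdf', 'out_0.mdf'}
print(safe_name('out.mdf', exists=lambda p: p in taken))  # out_1.mdf
```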
MDF4¶
asammdf tries to emulate the mdf structure using Python builtin data types.
The header attribute is an OrderedDict that holds the file metadata.
The groups attribute is a dictionary list with the following keys:
data_group : DataGroup object
channel_group : ChannelGroup object
channels : list of Channel objects with the same order as found in the mdf file
channel_conversions : list of ChannelConversion objects in 1-to-1 relation with the channel list
channel_sources : list of SourceInformation objects in 1-to-1 relation with the channels list
data_block : DataBlock object
texts : dictionary containing TextBlock objects used throughout the mdf
channels : list of dictionaries that contain TextBlock objects related to each channel
- name_addr : channel name
- comment_addr : channel comment
channel group : list of dictionaries that contain TextBlock objects related to each channel group
- acq_name_addr : channel group acquisition comment
- comment_addr : channel group comment
conversion_tab : list of dictionaries that contain TextBlock objects related to TABX and RTABX channel conversions
- text_{n} : n-th text of the TABX/RTABX conversion
- default_addr : default text
conversions : list of dictionaries that contain TextBlock objects related to channel conversions
- name_addr : conversion name
- unit_addr : channel unit
- comment_addr : conversion comment
- formula_addr : formula text; only valid for algebraic conversions
sources : list of dictionaries that contain TextBlock objects related to channel sources
- name_addr : source name
- path_addr : source path
- comment_addr : source comment
The file_history attribute is a list of (FileHistory, TextBlock) pairs.
The channels_db attribute is a dictionary that holds the (data group index, channel index) pairs for all signals. This is used to speed up the get_signal_by_name method.
The masters_db attribute is a dictionary that holds the channel index of the master channel for all data groups. This is used to speed up the get_signal_by_name method.
-
class asammdf.mdf_v4.MDF4(name=None, memory='full', version='4.10', callback=None, queue=None)[source]
If name exists, it is loaded; otherwise an empty file is created that can later be saved to disk.
Parameters: - name : string
mdf file name
- memory : str
memory optimization option; default full
- if full the data group binary data block will be loaded in RAM
- if low the channel data is read from disk on request, and the metadata is loaded into RAM
- if minimum only minimal data is loaded into RAM
- version : string
mdf file version (‘4.00’, ‘4.10’, ‘4.11’); default ‘4.10’
Attributes: - attachments : list
list of file attachments
- channels_db : dict
used for fast channel access by name; for each name key the value is a list of (group index, channel index) tuples
- file_comment : TextBlock
file comment TextBlock
- file_history : list
list of (FileHistory, TextBlock) pairs
- groups : list
list of data groups
- header : HeaderBlock
mdf file header
- identification : FileIdentificationBlock
mdf file start block
- masters_db : dict
used for fast master channel access; for each group index key the value is the master channel index
- memory : str
memory optimization option
- name : string
mdf file name
- version : str
mdf version
-
append(signals, source_info='Python', common_timebase=False)[source]
Appends a new data group.
For Signals with channel dependencies, the samples attribute must be a numpy.recarray.
Parameters: - signals : list
list of Signal objects
- source_info : str
source information; default ‘Python’
- common_timebase : bool
flag to hint that the signals have the same timebase
Examples
>>> # case 1 conversion type None
>>> s1 = np.array([1, 2, 3, 4, 5])
>>> s2 = np.array([-1, -2, -3, -4, -5])
>>> s3 = np.array([0.1, 0.04, 0.09, 0.16, 0.25])
>>> t = np.array([0.001, 0.002, 0.003, 0.004, 0.005])
>>> names = ['Positive', 'Negative', 'Float']
>>> units = ['+', '-', '.f']
>>> info = {}
>>> s1 = Signal(samples=s1, timestamps=t, unit='+', name='Positive')
>>> s2 = Signal(samples=s2, timestamps=t, unit='-', name='Negative')
>>> s3 = Signal(samples=s3, timestamps=t, unit='flts', name='Floats')
>>> mdf = MDF4('new.mdf')
>>> mdf.append([s1, s2, s3], 'created by asammdf v1.1.0')
>>> # case 2: VTAB conversions from channels inside another file
>>> mdf1 = MDF4('in.mdf')
>>> ch1 = mdf1.get("Channel1_VTAB")
>>> ch2 = mdf1.get("Channel2_VTABR")
>>> sigs = [ch1, ch2]
>>> mdf2 = MDF4('out.mdf')
>>> mdf2.append(sigs, 'created by asammdf v1.1.0')
-
attach(data, file_name=None, comment=None, compression=True, mime='application/octet-stream')[source]
Attach embedded attachment as application/octet-stream.
Parameters: - data : bytes
data to be attached
- file_name : str
string file name
- comment : str
attachment comment
- compression : bool
use compression for embedded attachment data
- mime : str
mime type string
Returns: - index : int
new attachment index
-
close()[source]
If the MDF was created with memory='minimum' and new channels have been appended, this must be called before the object is discarded, to clean up the temporary file.
-
configure(read_fragment_size=None, write_fragment_size=None, use_display_names=None, single_bit_uint_as_bool=None)[source]
Configure the read and write fragment size for chunked data access.
Parameters: - read_fragment_size : int
size hint of split data blocks, default 8MB; if the initial size is smaller, then no data list is used. The actual split size depends on the data groups’ records size
- write_fragment_size : int
size hint of split data blocks, default 8MB; if the initial size is smaller, then no data list is used. The actual split size depends on the data groups’ records size
- use_display_names : bool
use display name if available for the Signal’s name returned by the get method
-
extend(index, signals)[source]
Extend a group with new samples. The first signal is the master channel’s samples, and the next signals must respect the same order in which they were appended. The samples must have raw or physical values according to the Signals used for the initial append.
Parameters: - index : int
group index
- signals : list
list of numpy.ndarray objects
Examples
>>> # case 1 conversion type None
>>> s1 = np.array([1, 2, 3, 4, 5])
>>> s2 = np.array([-1, -2, -3, -4, -5])
>>> s3 = np.array([0.1, 0.04, 0.09, 0.16, 0.25])
>>> t = np.array([0.001, 0.002, 0.003, 0.004, 0.005])
>>> names = ['Positive', 'Negative', 'Float']
>>> units = ['+', '-', '.f']
>>> s1 = Signal(samples=s1, timestamps=t, unit='+', name='Positive')
>>> s2 = Signal(samples=s2, timestamps=t, unit='-', name='Negative')
>>> s3 = Signal(samples=s3, timestamps=t, unit='flts', name='Floats')
>>> mdf = MDF4('new.mdf')
>>> mdf.append([s1, s2, s3], 'created by asammdf v1.1.0')
>>> t = np.array([0.006, 0.007, 0.008, 0.009, 0.010])
>>> mdf.extend(0, [t, s1, s2, s3])
-
extract_attachment(address=None, index=None)[source]
Extract attachment data by original address or by index. If it is an embedded attachment, this method creates a new file according to the attachment file name information.
Parameters: - address : int
attachment address; default None
- index : int
attachment index; default None
Returns: - data : bytes | str
attachment data
-
get(name=None, group=None, index=None, raster=None, samples_only=False, data=None, raw=False)[source]
Gets channel samples. The channel can be specified in two ways:
using the first positional argument name
- if there are multiple occurrences for this channel then the group and index arguments can be used to select a specific group.
- if there are multiple occurrences for this channel and either the group or index argument is None then a warning is issued
using the group number (keyword argument group) and the channel number (keyword argument index). Use the info method for group and channel numbers.
If the raster keyword argument is not None the output is interpolated accordingly
Parameters: - name : string
name of channel
- group : int
0-based group index
- index : int
0-based channel index
- raster : float
time raster in seconds
- samples_only : bool
if True return only the channel samples as numpy array; if False return a Signal object
- data : bytes
prevent redundant data read by providing the raw data group samples
- raw : bool
return channel samples without applying the conversion rule; default False
Returns: - res : (numpy.array | Signal)
returns Signal if samples_only=False (default option), otherwise returns numpy.array. The Signal samples are:
- numpy recarray for channels that have composition/channel array address or for channel of type CANOPENDATE, CANOPENTIME
- numpy array for all the rest
Raises: - MdfException :
- if the channel name is not found
- if the group index is out of range
- if the channel index is out of range
Examples
>>> from asammdf import MDF, Signal
>>> import numpy as np
>>> t = np.arange(5)
>>> s = np.ones(5)
>>> mdf = MDF(version='4.10')
>>> for i in range(4):
...     sigs = [Signal(s*(i*10+j), t, name='Sig') for j in range(1, 4)]
...     mdf.append(sigs)
...
>>> # first group and channel index of the specified channel name
...
>>> mdf.get('Sig')
UserWarning: Multiple occurances for channel "Sig". Using first occurance from data group 4. Provide both "group" and "index" arguments to select another data group
<Signal Sig:
        samples=[ 1.  1.  1.  1.  1.]
        timestamps=[0 1 2 3 4]
        unit=""
        info=None
        comment="">
>>> # first channel index in the specified group
...
>>> mdf.get('Sig', 1)
<Signal Sig:
        samples=[ 11.  11.  11.  11.  11.]
        timestamps=[0 1 2 3 4]
        unit=""
        info=None
        comment="">
>>> # channel named Sig from group 1 channel index 2
...
>>> mdf.get('Sig', 1, 2)
<Signal Sig:
        samples=[ 12.  12.  12.  12.  12.]
        timestamps=[0 1 2 3 4]
        unit=""
        info=None
        comment="">
>>> # channel index 1 of group 2
...
>>> mdf.get(None, 2, 1)
<Signal Sig:
        samples=[ 21.  21.  21.  21.  21.]
        timestamps=[0 1 2 3 4]
        unit=""
        info=None
        comment="">
>>> mdf.get(group=2, index=1)
<Signal Sig:
        samples=[ 21.  21.  21.  21.  21.]
        timestamps=[0 1 2 3 4]
        unit=""
        info=None
        comment="">
-
get_channel_comment(name=None, group=None, index=None)[source]
Gets channel comment.
Channel can be specified in two ways:
using the first positional argument name
- if there are multiple occurrences for this channel then the group and index arguments can be used to select a specific group.
- if there are multiple occurrences for this channel and either the group or index argument is None then a warning is issued
using the group number (keyword argument group) and the channel number (keyword argument index). Use the info method to find the group and channel numbers
Parameters: - name : string
name of channel
- group : int
0-based group index
- index : int
0-based channel index
Returns: - comment : str
found channel comment
-
get_channel_name
(group, index)[source] Gets channel name.
Parameters: - group : int
0-based group index
- index : int
0-based channel index
Returns: - name : str
found channel name
-
get_channel_unit
(name=None, group=None, index=None)[source] Gets channel unit.
Channel can be specified in two ways:
using the first positional argument name
- if there are multiple occurrences for this channel then the group and index arguments can be used to select a specific group.
- if there are multiple occurrences for this channel and either the group or index argument is None then a warning is issued
using the group number (keyword argument group) and the channel number (keyword argument index). Use the info method to find the group and channel numbers
Parameters: - name : string
name of channel
- group : int
0-based group index
- index : int
0-based channel index
Returns: - unit : str
found channel unit
-
get_master
(index, data=None, raster=None)[source] returns the master channel samples for the given group
Parameters: - index : int
group index
- data : (bytes, int)
(data block raw bytes, fragment offset); default None
- raster : float
raster to be used for interpolation; default None
Returns: - t : numpy.array
master channel samples
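When a raster is given, the master channel is effectively resampled onto a uniform time grid covering the original timestamps. A minimal numpy sketch of that idea (illustrative only, not asammdf's internal implementation; the helper name is hypothetical):

```python
import numpy as np

def resample_master(t, raster):
    """Build a uniform time axis covering the original master channel.

    t      -- original (monotonic) master channel timestamps
    raster -- desired sampling interval in seconds
    """
    # uniform grid from the first to the last original timestamp;
    # the half-raster slack keeps the final point despite float rounding
    return np.arange(t[0], t[-1] + raster / 2, raster)

t = np.array([0.0, 0.3, 1.1, 2.0])
print(resample_master(t, 0.5))  # uniform grid from 0.0 to 2.0
```

Channel samples can then be interpolated onto this grid, which is what the raster argument of get and get_master achieves.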
-
get_valid_indexes
(group_index, channel, fragment)[source] get the valid sample indexes for the channel, based on its invalidation bits
Parameters: - group_index : int
group index
- channel : Channel
channel object
- fragment : (bytes, int)
(fragment bytes, fragment offset)
Returns: - valid_indexes : iterable
iterable of valid channel indexes; if all samples are valid, None is returned
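Conceptually, MDF v4 stores one invalidation bit per sample, and a set bit marks the sample as invalid. A hedged numpy sketch of turning a packed invalidation-bit column into valid indexes (names and layout are illustrative assumptions, not asammdf internals):

```python
import numpy as np

def valid_indexes(invalidation_bytes, bit_pos):
    """Return indexes of valid samples, or None if every sample is valid.

    invalidation_bytes -- uint8 array with one invalidation byte per sample
                          (simplifying assumption; real files may pack more
                          channels per byte row)
    bit_pos            -- bit position of this channel's invalidation flag
    """
    invalid = (invalidation_bytes >> bit_pos) & 1   # 1 means invalid
    if not invalid.any():
        return None                                  # all samples valid
    return np.nonzero(invalid == 0)[0]

flags = np.array([0b0, 0b1, 0b0, 0b1], dtype=np.uint8)
print(valid_indexes(flags, 0))   # indexes of the valid samples
```

The early None return mirrors the documented behaviour above: callers can skip masking entirely when every sample is valid.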
-
info
()[source] get MDF information as a dict
Examples
>>> mdf = MDF4('test.mdf')
>>> mdf.info()
-
save
(dst='', overwrite=False, compression=0)[source] Save MDF to dst. If dst is not provided the destination file name is the MDF name. If overwrite is True then the destination file is overwritten, otherwise the file name is appended with '_<cntr>', where '<cntr>' is the first counter that produces a new file name (one that does not already exist in the filesystem)
Parameters: - dst : str
destination file name, Default ‘’
- overwrite : bool
overwrite flag, default False
- compression : int
use compressed data blocks, default 0; valid since version 4.10
- 0 - no compression
- 1 - deflate (slower, but produces smaller files)
- 2 - transposition + deflate (slowest, but produces the smallest files)
Returns: - output_file : str
output file name
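The '_<cntr>' collision rule described above can be sketched in a few lines. This is an illustrative helper, not asammdf's actual code, and the assumption that the counter starts at 0 is mine:

```python
import os
import tempfile

def next_free_name(dst):
    """Sketch of the documented save() collision rule: if dst exists,
    append '_<cntr>' before the extension until the name is free."""
    if not os.path.exists(dst):
        return dst
    base, ext = os.path.splitext(dst)
    cntr = 0
    while os.path.exists('{}_{}{}'.format(base, cntr, ext)):
        cntr += 1
    return '{}_{}{}'.format(base, cntr, ext)

# demo in a throwaway directory
folder = tempfile.mkdtemp()
target = os.path.join(folder, 'measurement.mf4')
print(next_free_name(target) == target)   # True: no clash yet
open(target, 'w').close()                 # simulate an existing file
print(next_free_name(target))             # ends with '_0.mf4'
```

Passing overwrite=True to save skips this search and replaces the existing file directly.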
Signal¶
-
class
asammdf.signal.
Signal
(samples=None, timestamps=None, unit='', name='', conversion=None, comment='', raw=True, master_metadata=None, display_name='', attachment=(), source=None, bit_count=None)[source]¶ The Signal represents a channel described by its samples and timestamps. It can perform arithmetic operations against other Signal or numeric types. The operations are computed with respect to the timestamps (time correct). Non-float signals are not interpolated; instead, the last value relative to the current timestamp is used. samples, timestamps and name are mandatory arguments.
Parameters: - samples : numpy.array | list | tuple
signal samples
- timestamps : numpy.array | list | tuple
signal timestamps
- unit : str
signal unit
- name : str
signal name
- conversion : dict | channel conversion block
dict that contains extra conversion information about the signal; default None
- comment : str
signal comment, default ‘’
- raw : bool
signal samples are raw values, with no physical conversion applied
- master_metadata : list
master name and sync type
- display_name : str
display name used by mdf version 3
- attachment : bytes, name
channel attachment and name from MDF version 4
-
astype
(np_type)[source]¶ returns new Signal with samples of dtype np_type
Parameters: - np_type : np.dtype
new numpy dtype
Returns: - signal : Signal
new Signal with the samples of np_type dtype
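astype only converts the sample dtype; the time axis is carried over unchanged. A minimal sketch of the equivalent numpy operation (plain arrays stand in for the Signal's samples and timestamps attributes):

```python
import numpy as np

samples = np.array([1.9, 2.2, 3.7])
timestamps = np.array([0.0, 0.1, 0.2])

# converting to an integer dtype truncates toward zero, exactly as
# numpy's astype does; the timestamps are not touched
converted = samples.astype(np.int32)
print(converted)      # [1 2 3]
print(timestamps)     # unchanged time axis
```

Be aware that narrowing conversions (float to int, int64 to int8) can silently lose information, just as with numpy itself.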
-
cut
(start=None, stop=None)[source]¶ Cuts the signal according to the start and stop values, by using the insertion indexes in the signal’s time axis.
Parameters: - start : float
start timestamp for cutting
- stop : float
stop timestamp for cutting
Returns: - result : Signal
new Signal cut from the original
Examples
>>> new_sig = old_sig.cut(1.0, 10.5)
>>> new_sig.timestamps[0], new_sig.timestamps[-1]
0.98, 10.48
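The "insertion indexes in the signal's time axis" mentioned above map naturally onto numpy's searchsorted. A minimal sketch of one straightforward interpretation, keeping the samples whose timestamps fall inside [start, stop] (asammdf's exact edge handling may differ, as the example output above suggests):

```python
import numpy as np

def cut(samples, timestamps, start, stop):
    """Slice a signal to [start, stop] using insertion indexes
    in the (monotonic) time axis."""
    lo = np.searchsorted(timestamps, start, side='left')
    hi = np.searchsorted(timestamps, stop, side='right')
    return samples[lo:hi], timestamps[lo:hi]

t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
s = np.array([10, 11, 12, 13, 14])
print(cut(s, t, 0.5, 1.5))   # samples 11..13, timestamps 0.5..1.5
```

Because only index arithmetic is involved, no interpolation happens during a cut; the returned Signal reuses original samples.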
-
extend
(other)[source]¶ extend signal with samples from another signal
Parameters: - other : Signal
Returns: - signal : Signal
new extended Signal
-
interp
(new_timestamps)[source]¶ returns a new Signal interpolated using the new_timestamps
Parameters: - new_timestamps : np.array
timestamps used for interpolation
Returns: - signal : Signal
new interpolated Signal
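As noted in the Signal class description, float signals are interpolated while non-float signals use the last value relative to each new timestamp (a zero-order hold). A numpy sketch of both cases, assuming a monotonic time axis (illustrative, not asammdf's exact code):

```python
import numpy as np

def interp(samples, timestamps, new_timestamps):
    """Time-correct interpolation sketch: linear for float samples,
    zero-order hold (last known value) for everything else."""
    if np.issubdtype(samples.dtype, np.floating):
        return np.interp(new_timestamps, timestamps, samples)
    # index of the last original timestamp <= each new timestamp
    idx = np.searchsorted(timestamps, new_timestamps, side='right') - 1
    idx = np.clip(idx, 0, len(samples) - 1)
    return samples[idx]

t = np.array([0.0, 1.0, 2.0])
print(interp(np.array([0.0, 10.0, 20.0]), t, np.array([0.5, 1.5])))  # [ 5. 15.]
print(interp(np.array([0, 10, 20]), t, np.array([0.5, 1.5])))        # [ 0 10]
```

The hold behaviour matters for integer status or enumeration channels, where interpolating between discrete states would fabricate values that never occurred.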