MDF3

asammdf tries to emulate the mdf structure using Python builtin data types.

The header attribute is an OrderedDict that holds the file metadata.

The groups attribute is a list of dictionaries, each with the following keys:

  • data_group : DataGroup object

  • channel_group : ChannelGroup object

  • channels : list of Channel objects with the same order as found in the mdf file

  • channel_conversions : list of ChannelConversion objects in 1-to-1 relation with the channel list

  • channel_sources : list of SourceInformation objects in 1-to-1 relation with the channels list

  • channel_dependencies : list of ChannelDependency objects in a 1-to-1 relation with the channel list

  • data_block : DataBlock object

  • texts : dictionary containing TextBlock objects used throughout the mdf

    • channels : list of dictionaries that contain TextBlock objects related to each channel

      • long_name_addr : channel long name
      • comment_addr : channel comment
      • display_name_addr : channel display name
    • channel group : list of dictionaries that contain TextBlock objects related to each channel group

      • comment_addr : channel group comment
    • conversion_tab : list of dictionaries that contain TextBlock objects related to VTAB and VTABR channel conversions

      • text_{n} : n-th text of the VTABR conversion
  • sorted : bool flag to indicate if the source file was sorted; it is used when load_measured_data = False

  • size : data block size; used for lazy loading of measured data

  • record_size : dict of record ID -> record size pairs
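
As an illustration, the group structure could be inspected like this after loading a file (the file name 'test.mdf' is only an example):

>>> from asammdf import MDF3
>>> mdf = MDF3('test.mdf')
>>> first_group = mdf.groups[0]                       # dictionary of the first data group
>>> channel_group = first_group['channel_group']      # ChannelGroup object
>>> channels = first_group['channels']                # Channel objects in file order
>>> conversions = first_group['channel_conversions']  # 1-to-1 with the channel list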

The file_history attribute is a TextBlock object.

The channels_db attribute is a dictionary that holds, for each channel name, the (data group index, channel index) pairs of all its occurrences. This is used to speed up the get method.

The masters_db attribute is a dictionary that holds the channel index of the master channel for all data groups. This is used to speed up the get method.
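
For example, a channel name could be resolved to its group and channel indexes like this (the channel name 'VehicleSpeed' is only illustrative):

>>> mdf = MDF3('test.mdf')
>>> occurrences = mdf.channels_db['VehicleSpeed']   # list of (group index, channel index) tuples
>>> group_index, channel_index = occurrences[0]
>>> master_index = mdf.masters_db[group_index]      # master channel index for that group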

API

class asammdf.mdf3.MDF3(name=None, load_measured_data=True, version='3.20')

If name exists, the file will be loaded; otherwise an empty file is created that can later be saved to disk.

Parameters:

name : string

mdf file name

load_measured_data : bool

load data option; default True

  • if True the data group binary data block will be loaded in RAM
  • if False the channel data is read from disk on request

version : string

mdf file version (‘3.00’, ‘3.10’, ‘3.20’ or ‘3.30’); default ‘3.20’
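
A short sketch of the typical ways to call the constructor (file names are only illustrative):

>>> from asammdf import MDF3
>>> mdf_ram = MDF3('measurement.mdf')                             # data blocks loaded in RAM
>>> mdf_lazy = MDF3('measurement.mdf', load_measured_data=False)  # channel data read from disk on request
>>> mdf_new = MDF3(version='3.30')                                # empty file, to be filled with append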

Attributes

name (string) mdf file name
groups (list) list of data groups
header (OrderedDict) mdf file header
file_history (TextBlock) file history text block; can be None
load_measured_data (bool) load measured data option
version (str) mdf version
channels_db (dict) used for fast channel access by name; for each name key the value is a list of (group index, channel index) tuples
masters_db (dict) used for fast master channel access; for each group index key the value is the master channel index

Methods

add_trigger
append
close
get
info
iter_get_triggers
remove
save
add_trigger(group, time, pre_time=0, post_time=0, comment='')

add trigger to data group

Parameters:

group : int

group index

time : float

trigger time

pre_time : float

trigger pre time; default 0

post_time : float

trigger post time; default 0

comment : str

trigger comment
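
A minimal usage sketch (the group index, times and comment are only illustrative):

>>> mdf = MDF3('test.mdf')
>>> mdf.add_trigger(0, 12.5, pre_time=1.0, post_time=2.0, comment='overspeed event')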

append(signals, acquisition_info='Python', common_timebase=False)

Appends a new data group.

For Signals with channel dependencies, the samples attribute must be a numpy.recarray.

Parameters:

signals : list

list of Signal objects

acquisition_info : str

acquisition information; default ‘Python’

common_timebase : bool

flag to hint that the signals have the same timebase

Examples

>>> import numpy as np
>>> from asammdf import MDF3, Signal
>>> # case 1: conversion type None
>>> s1 = np.array([1, 2, 3, 4, 5])
>>> s2 = np.array([-1, -2, -3, -4, -5])
>>> s3 = np.array([0.1, 0.04, 0.09, 0.16, 0.25])
>>> t = np.array([0.001, 0.002, 0.003, 0.004, 0.005])
>>> s1 = Signal(samples=s1, timestamps=t, unit='+', name='Positive')
>>> s2 = Signal(samples=s2, timestamps=t, unit='-', name='Negative')
>>> s3 = Signal(samples=s3, timestamps=t, unit='flts', name='Floats')
>>> mdf = MDF3('new.mdf')
>>> mdf.append([s1, s2, s3], 'created by asammdf v1.1.0')
>>> # case 2: VTAB conversions from channels inside another file
>>> mdf1 = MDF3('in.mdf')
>>> ch1 = mdf1.get("Channel1_VTAB")
>>> ch2 = mdf1.get("Channel2_VTABR")
>>> sigs = [ch1, ch2]
>>> mdf2 = MDF3('out.mdf')
>>> mdf2.append(sigs, 'created by asammdf v1.1.0')
close()

If the MDF was created with load_measured_data=False and new channels have been appended, this method must be called just before the object is discarded, in order to clean up the temporary file.
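
A sketch of the expected call sequence in this scenario (sig stands for a previously created Signal object):

>>> mdf = MDF3('test.mdf', load_measured_data=False)
>>> mdf.append([sig], 'appended by asammdf')
>>> mdf.save('test_edited.mdf')
>>> mdf.close()    # cleans up the temporary file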

get(name=None, group=None, index=None, raster=None, samples_only=False)

Gets channel samples. Channel can be specified in two ways:

  • using the first positional argument name

    • if there are multiple occurrences of this channel then the group and index arguments can be used to select a specific occurrence.
    • if there are multiple occurrences of this channel and either the group or index argument is None then a warning is issued
  • using the group number (keyword argument group) and the channel number (keyword argument index). Use info method for group and channel numbers

If the raster keyword argument is not None, the output is interpolated accordingly.

Parameters:

name : string

name of channel

group : int

0-based group index

index : int

0-based channel index

raster : float

time raster in seconds

samples_only : bool

if True return only the channel samples as numpy array; if False return a Signal object

Returns:

res : (numpy.array | Signal)

returns Signal if samples_only=False (default option), otherwise returns numpy.array. The Signal samples are:

  • numpy recarray for channels that have CDBLOCK or BYTEARRAY type
  • numpy array for all the rest
Raises:

MdfError :

  • if the channel name is not found
  • if the group index is out of range
  • if the channel index is out of range
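
The lookup styles described above could look like this (channel name, group and index values are only illustrative):

>>> mdf = MDF3('test.mdf')
>>> sig = mdf.get('VehicleSpeed')                          # lookup by name
>>> sig = mdf.get('VehicleSpeed', group=2, index=5)        # disambiguate multiple occurrences
>>> sig = mdf.get(group=2, index=5)                        # lookup by group and channel index
>>> sig = mdf.get('VehicleSpeed', raster=0.01)             # interpolate to a 10 ms time raster
>>> samples = mdf.get('VehicleSpeed', samples_only=True)   # numpy array only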

info()

get MDF information as a dict

Examples

>>> mdf = MDF3('test.mdf')
>>> mdf.info()
iter_get_triggers()

generator that yields triggers

Returns:

trigger_info : dict

trigger information with the following keys:

  • comment : trigger comment
  • time : trigger time
  • pre_time : trigger pre time
  • post_time : trigger post time
  • index : trigger index
  • group : data group index of trigger
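
A sketch of iterating over all triggers in a file:

>>> mdf = MDF3('test.mdf')
>>> for trigger_info in mdf.iter_get_triggers():
...     print(trigger_info['group'], trigger_info['time'], trigger_info['comment'])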
remove(group=None, name=None)

Remove a data group. Use the group or name keyword argument to identify the group to be removed; group has priority.

Parameters:

name : string

name of the channel inside the data group to be removed

group : int

data group index to be removed

Examples

>>> mdf = MDF3('test.mdf')
>>> mdf.remove(group=3)
>>> mdf.remove(name='VehicleSpeed')
save(dst='', overwrite=False, compression=0)

Save MDF to dst. If dst is not provided, the destination file name is the MDF name. If overwrite is True then the destination file is overwritten; otherwise the file name is appended with '_xx', where 'xx' is the first counter that produces a new file name (one that does not already exist in the filesystem).

Parameters:

dst : str

destination file name, default ''

overwrite : bool

overwrite flag, default False

compression : int

does nothing for mdf version 3; introduced here to share the same API as mdf version 4 files
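
A short sketch of the overwrite behaviour (file names are only illustrative):

>>> mdf = MDF3('test.mdf')
>>> mdf.save('copy.mdf')                  # writes copy.mdf, or a '_xx' suffixed name if it already exists
>>> mdf.save('copy.mdf', overwrite=True)  # always writes copy.mdf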