
file Package

exceptions

Exceptions for File Storage layer.

exception tvb.core.entities.file.exceptions.FileStorageException(message)[source]

Bases: tvb.basic.traits.exceptions.TVBException

Generic exception raised when storing data in files.

exception tvb.core.entities.file.exceptions.FileStructureException(message)[source]

Bases: tvb.basic.traits.exceptions.TVBException

Exception to be thrown in case of a problem related to File Structure Storage.

exception tvb.core.entities.file.exceptions.FileVersioningException(message)[source]

Bases: tvb.basic.traits.exceptions.TVBException

A base exception class for all TVB file storage version conversion custom exceptions.

exception tvb.core.entities.file.exceptions.IncompatibleFileManagerException(message)[source]

Bases: tvb.core.entities.file.exceptions.FileVersioningException

Exception raised when a file is handled by a file manager that is incompatible with that version of the TVB file storage.

exception tvb.core.entities.file.exceptions.MissingDataFileException(message)[source]

Bases: tvb.core.entities.file.exceptions.FileStorageException

Exception raised when the file associated with a manager does not exist on disk for some reason.

exception tvb.core.entities.file.exceptions.MissingDataSetException(message)[source]

Bases: tvb.core.entities.file.exceptions.FileStorageException

Exception raised when a dataset is accessed but no written entry exists for it in the HDF5 file. In this case the attribute is considered None.
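
As a hedged illustration (not part of this module), calling code might distinguish these exception types roughly as follows; the storage_manager argument and the helper function are hypothetical:

    from tvb.core.entities.file.exceptions import (MissingDataFileException,
                                                   MissingDataSetException)

    def read_dataset_or_none(storage_manager, dataset_name):
        # Hypothetical helper: return dataset contents, or None when the
        # dataset was never written (mirroring the documented behaviour).
        try:
            return storage_manager.get_data(dataset_name)
        except MissingDataSetException:
            return None
        except MissingDataFileException:
            # The backing file is missing from disk; let callers decide.
            raise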

files_helper

class tvb.core.entities.file.files_helper.FilesHelper[source]

This class manages all structure-related operations using file storage. It handles creating meaningful entities and retrieving existing ones.

ALLEN_MOUSE_CONNECTIVITY_CACHE_FOLDER = 'ALLEN_MOUSE_CONNECTIVITY_CACHE'
IMAGES_FOLDER = 'IMAGES'
PROJECTS_FOLDER = 'PROJECTS'
TEMP_FOLDER = 'TEMP'
TVB_FILE_EXTENSION = '.xml'
TVB_OPERARATION_FILE = 'Operation.xml'
TVB_PROJECT_FILE = 'Project.xml'
TVB_STORAGE_FILE_EXTENSION = '.h5'
check_created(*args, **kw)[source]

The wrapped function will actually write the lock.

static compute_size_on_disk(file_path)[source]

Given a file’s path, return the size occupied on disk by that file, as a number representing the size in KB.

static copy_file(source, dest, dest_postfix=None, buffer_size=1048576)[source]

Copy a file from source to dest. source and dest can either be strings or any object with a read or write method, like StringIO for example.

static find_relative_path(full_path, root_path=u'/home/tester/TVB/')[source]
Parameters:
  • full_path – absolute full path.
  • root_path – find the relative path from full_path to this root.
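
A minimal usage sketch of the static helpers above; the concrete paths are placeholders, assumed to exist:

    from tvb.core.entities.file.files_helper import FilesHelper

    SOURCE = "/tmp/tvb_demo/input.xml"   # placeholder input file
    DEST = "/tmp/tvb_demo/copy.xml"

    FilesHelper.copy_file(SOURCE, DEST)
    size_kb = FilesHelper.compute_size_on_disk(DEST)      # size in KB
    relative = FilesHelper.find_relative_path(DEST, root_path="/tmp/tvb_demo/")
    print(size_kb, relative)
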
get_allen_mouse_cache_folder(project_name)[source]
get_images_folder(project_name)[source]

Computes the name/path of the folder where images are stored.

get_operation_folder(project_name, operation_id)[source]

Computes the folder where operation details are stored.

get_operation_meta_file_path(project_name, operation_id)[source]

Retrieve the path to the operation meta-data file.

Parameters:
  • project_name – name of the current project.
  • operation_id – Identifier of Operation in given project
Returns:

File path for storing Operation meta-data. The file might not be created yet, but its parent folder exists after this method call.

get_project_folder(project, *sub_folders)[source]

Retrieve the root path for the given project. If the root folder is not created yet, it will be created.

get_project_meta_file_path(project_name)[source]

Retrieve project meta info file path.

Returns:File path for storing Project meta-data. The file might not exist yet, but its parent folder is created after this method call.
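
A hedged sketch of the name-based path helpers above; "DemoProject" and the operation identifier are placeholders:

    from tvb.core.entities.file.files_helper import FilesHelper

    helper = FilesHelper()
    images = helper.get_images_folder("DemoProject")
    op_folder = helper.get_operation_folder("DemoProject", 42)
    op_meta = helper.get_operation_meta_file_path("DemoProject", 42)
    project_meta = helper.get_project_meta_file_path("DemoProject")
    print(op_meta, project_meta)
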
move_datatype(datatype, new_project_name, new_op_id)[source]

Move H5 storage to a new location.

static parse_xml_content(xml_content)[source]

Delegate reading of some XML content. Parses the XML and returns a dictionary of elements with at most 2 levels.

read_project_metadata(project_path)[source]
remove_datatype(datatype)[source]

Remove H5 storage fully.

static remove_files(file_list, ignore_exception=False)[source]
Parameters:
  • file_list – list of file paths to be removed.
  • ignore_exception – when False (the default), an exception is raised if one of the specified files could not be removed.
static remove_folder(folder_path, ignore_errors=False)[source]

Given a folder path, try to remove that folder from disk.

Parameters:ignore_errors – when False, throw FileStructureException if folder_path is invalid.

remove_image_metadata(figure)[source]

Remove the file storing image meta-data.

remove_operation_data(project_name, operation_id)[source]

Remove H5 storage fully.

remove_project_structure(project_name)[source]

Remove all folders for the project, or throw FileStructureException.

rename_project_structure(project_name, new_name)[source]

Rename the project folder, or throw FileStructureException.

unpack_zip(uploaded_zip, folder_path)[source]

Simple method to unpack a ZIP archive into a given folder.

update_operation_metadata(project_name, new_group_name, operation_id, is_group=False)[source]

Update operation meta-data.

Parameters:is_group – when False, use parameter ‘new_group_name’ for direct assignment on operation.user_group; when True, update operation.operation_group.name to parameter ‘new_group_name’.

write_image_metadata(figure)[source]

Writes figure meta-data into an XML file.

write_operation_metadata(operation)[source]
Parameters:operation – DB stored operation instance.
write_project_metadata(project)[source]
Parameters:project – Project instance, to get metadata from it.
write_project_metadata_from_dict(project_path, meta_dictionary)[source]
static zip_files(zip_full_path, files)[source]

This method creates a ZIP file with all files provided as parameters.

Parameters:
  • zip_full_path – full path and name of the resulting ZIP file.
  • files – array with the full names/paths of the files to add to the ZIP.

static zip_folder(result_name, folder_root)[source]

Given a folder and a ZIP result name, create the corresponding archive.

static zip_folders(zip_full_path, folders, folder_prefix='')[source]

This method creates a ZIP file with all folders provided as parameters.

Parameters:
  • zip_full_path – full path and name of the resulting ZIP file.
  • folders – array with the full names/paths of the folders to add to the ZIP.
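
A minimal sketch of the ZIP helpers; all paths are placeholders, and the listed input files and folders are assumed to exist:

    from tvb.core.entities.file.files_helper import FilesHelper

    FilesHelper.zip_files("/tmp/result.zip",
                          ["/tmp/data/a.h5", "/tmp/data/b.h5"])
    FilesHelper.zip_folder("/tmp/folder.zip", "/tmp/data")
    FilesHelper.zip_folders("/tmp/all.zip", ["/tmp/data", "/tmp/images"])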

class tvb.core.entities.file.files_helper.TvbZip(dest_path, mode='r')[source]

Bases: zipfile.ZipFile

write_folder(folder, archive_path_prefix='', exclude=None)[source]

Write folder contents into the archive.

Parameters:
  • archive_path_prefix – root folder in the archive; defaults to “” (the archive root).
  • exclude – a list of file or folder names that will be recursively excluded.
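
Since TvbZip extends zipfile.ZipFile, it can be used as a context manager; a hedged sketch, with placeholder paths:

    from tvb.core.entities.file.files_helper import TvbZip

    with TvbZip("/tmp/archive.zip", mode="w") as archive:
        archive.write_folder("/tmp/data",               # placeholder folder
                             archive_path_prefix="data/",
                             exclude=["TEMP"])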

files_update_manager

Manager for the file storage version updates.

class tvb.core.entities.file.files_update_manager.FilesUpdateManager[source]

Bases: tvb.core.code_versions.base_classes.UpdateManager

Manager for updating H5 files version, when code gets changed.

DATA_TYPES_PAGE_SIZE = 500
MESSAGE = 'Done'
PROJECTS_PAGE_SIZE = 20
STATUS = True
UPDATE_SCRIPTS_SUFFIX = '_update_files'
get_file_data_version(file_path)[source]

Return the data version for the given file.

Parameters:file_path – the path on disk to the file for which you need the TVB data version
Returns:a number representing the data version for which the input file was written
is_file_up_to_date(file_path)[source]

Returns True only if the data version of the file is equal to the data version specified in the TVB configuration file.

run_all_updates()[source]

Upgrades all the data types from TVB storage to the latest data version.

Returns:a two-entry tuple (status, message), where status is a boolean that is True in case the upgrade was successful for all DataTypes and False otherwise, and message is a status update message.
upgrade_file(input_file_name, datatype=None)[source]

Upgrades the given file to the latest data version. The file is upgraded sequentially, up to the current version from tvb.basic.config.settings.VersionSettings.DB_STRUCTURE_VERSION.

Parameters:input_file_name – the path to the file which needs to be upgraded.
Returns:True when an update was needed and ran successfully; False when the file is already up to date.
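
A hedged sketch combining the methods above: check a single file, upgrade it if needed, or run all updates; the .h5 path is a placeholder for an existing TVB storage file:

    from tvb.core.entities.file.files_update_manager import FilesUpdateManager

    manager = FilesUpdateManager()
    h5_path = "/tmp/datatype.h5"             # placeholder storage file

    print(manager.get_file_data_version(h5_path))
    if not manager.is_file_up_to_date(h5_path):
        ran = manager.upgrade_file(h5_path)  # True if an upgrade was run

    # Or upgrade everything in storage at once:
    status, message = manager.run_all_updates()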

hdf5_storage_manager

Persistence of data in HDF5 format.

class tvb.core.entities.file.hdf5_storage_manager.HDF5StorageManager(storage_folder, file_name, buffer_size=600000)[source]

Bases: object

This class is responsible for saving / loading data in HDF5 file / format.

BOOL_VALUE_PREFIX = 'bool:'
DATETIME_VALUE_PREFIX = 'datetime:'
DATE_TIME_FORMAT = '%Y-%m-%d %H:%M:%S.%f'
class H5pyStorageBuffer(h5py_dataset, buffer_size=300, buffered_data=None, grow_dimension=-1)[source]

Helper class in order to buffer data for append operations, to limit the number of actual HDD I/O operations.

buffer_data(data_list)[source]

Add data_list to the internal buffer, in order to improve performance for append_data-type operations.

Returns:True if the buffer still has room, False if a flush is necessary because the buffer is full.

flush_buffered_data()[source]

Append the data buffered so far to the underlying dataset, using grow_dimension as the dimension to be expanded.

HDF5StorageManager.LOCKS = {}
HDF5StorageManager.ROOT_NODE_PATH = '/'
HDF5StorageManager.TVB_ATTRIBUTE_PREFIX = 'TVB_'
HDF5StorageManager.append_data(dataset_name, data_list, grow_dimension=-1, close_file=True, where='/')[source]

This method appends data to an existing data set. If the data set does not exist, it is created first.

Parameters:
  • dataset_name – Name of the data set where to store data
  • data_list – Data to be stored / appended
  • grow_dimension – The dimension to be used to grow stored array. By default will grow on the LAST dimension
  • close_file – Specify if the file should be closed automatically after write operation. If not, you have to close file by calling method close_file()
  • where – represents the path where to store our dataset (e.g. /data/info)
HDF5StorageManager.close_file()[source]

The synchronization of open/close no longer seems to be needed for h5py (in contrast to PyTables) for concurrent reads. However, since it should not add much overhead in most situations, we leave it in place for now: in case of concurrent (meta-data) writes it provides extra safety.

HDF5StorageManager.get_data(dataset_name, data_slice=None, where='/', ignore_errors=False, close_file=True)[source]

This method reads data from the given data set, based on the slice specification.

Parameters:
  • dataset_name – Name of the data set from where to read data
  • data_slice – Specify how to retrieve data from array {e.g (slice(1,10,1),slice(1,6,2)) }
  • where – represents the path where dataset is stored (e.g. /data/info)
Returns:

a numpy.ndarray containing filtered data
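
A minimal sketch of a sliced read, following the data_slice example above (rows 1..9 of the first dimension, every second column from 1..5 of the second); folder, file and dataset names are placeholders:

    from tvb.core.entities.file.hdf5_storage_manager import HDF5StorageManager

    manager = HDF5StorageManager("/tmp/tvb_demo", "demo.h5")
    block = manager.get_data("time_series",
                             data_slice=(slice(1, 10, 1), slice(1, 6, 2)))
    # `block` is a numpy.ndarray containing the filtered data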

HDF5StorageManager.get_data_shape(dataset_name, where='/', ignore_errors=False)[source]

This method reads the data shape from the given data set.

Parameters:
  • dataset_name – Name of the data set from where to read data
  • where – represents the path where dataset is stored (e.g. /data/info)
Returns:

a tuple containing the data shape

HDF5StorageManager.get_file_data_version()[source]

Checks the data version for the current file.

HDF5StorageManager.get_gid_attribute()[source]

Used for obtaining the GID of the DataType whose data are stored in the current file.

HDF5StorageManager.get_metadata(dataset_name='', where='/', ignore_errors=False)[source]

Retrieve ALL meta-data information for root node or for a given data set.

Parameters:
  • dataset_name – name of the dataset for which to read metadata. If None, read metadata from ROOT node.
  • where – represents the path where dataset is stored (e.g. /data/info)
Returns:

a dictionary containing all metadata associated with the node

HDF5StorageManager.is_valid_hdf5_file()[source]

This method checks if the specified file exists and if it has the correct HDF5 format.

Returns:True if the file exists and has HDF5 format, False otherwise.

HDF5StorageManager.remove_data(dataset_name, where='/')[source]

Delete a data set from the H5 file.

Parameters:
  • dataset_name – name of the data set to be deleted.
  • where – represents the path where the dataset is stored (e.g. /data/info).

HDF5StorageManager.remove_metadata(meta_key, dataset_name='', tvb_specific_metadata=True, where='/')[source]

Remove meta-data information for root node or for a given data set.

Parameters:
  • meta_key – name of the metadata attribute to be removed
  • dataset_name – name of the dataset from where to delete metadata. If None, metadata will be removed from ROOT node.
  • tvb_specific_metadata – specify if the provided metadata is specific to TVB (keys will have a TVB prefix).
  • where – represents the path where dataset is stored (e.g. /data/info)
HDF5StorageManager.set_metadata(meta_dictionary, dataset_name='', tvb_specific_metadata=True, where='/')[source]

Set meta-data information for root node or for a given data set.

Parameters:
  • meta_dictionary – dictionary containing meta info to be stored on node
  • dataset_name – name of the dataset where to assign metadata. If None, metadata is assigned to ROOT node.
  • tvb_specific_metadata – specify if the provided metadata is TVB specific (All keys will have a TVB prefix)
  • where – represents the path where dataset is stored (e.g. /data/info)
HDF5StorageManager.store_data(dataset_name, data_list, where='/')[source]

This method stores the provided data list into a data set in the H5 file.

Parameters:
  • dataset_name – Name of the data set where to store data
  • data_list – Data to be stored
  • where – represents the path where to store our dataset (e.g. /data/info)
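
A hedged end-to-end sketch of the storage manager: create a dataset via buffered appends, tag it with meta-data, then read the shape and meta-data back; all folder, file and dataset names are placeholders:

    import numpy
    from tvb.core.entities.file.hdf5_storage_manager import HDF5StorageManager

    manager = HDF5StorageManager("/tmp/tvb_demo", "demo.h5")

    # append_data creates the dataset on first call, then grows dimension 0
    manager.append_data("samples", numpy.zeros((4, 3)), grow_dimension=0)
    manager.append_data("samples", numpy.ones((2, 3)), grow_dimension=0)

    manager.set_metadata({"Subject": "demo"}, dataset_name="samples")
    print(manager.get_data_shape("samples"))   # expected (6, 3)
    print(manager.get_metadata("samples"))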

xml_metadata_handlers

This module contains logic for meta-data handling.

It handles read/write operations on XML files for retrieving/storing meta-data. More specifically, it contains an XML Reader/Writer utility for GenericMetaData.

class tvb.core.entities.file.xml_metadata_handlers.XMLReader(xml_path)[source]

Bases: object

Reader for XML with meta-data on generic entities (e.g. Project, Operation).

static get_node_text(node)[source]

From an XML node, read the string content.

parse_xml_content_to_dict(xml_data)[source]
Parameters:xml_data – String representing an XML root.
Returns:Dictionary with text-content read from the given XML.
read_metadata()[source]

Return one instance of GenericMetaData, filled with data read from the XML file.

read_only_element(tag_name)[source]

From the XML file, read only the element specified by tag-name.

Returns:Textual value of the XML node, or None.

class tvb.core.entities.file.xml_metadata_handlers.XMLWriter(entity)[source]

Bases: object

Writer for XML with meta-data on generic entities (e.g. Project, Operation).

ELEM_ROOT = 'tvb_data'
FILE_EXTENSION = '.xml'
write(final_path)[source]

From a meta-data dictionary for an entity, create the XML file.
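
A minimal round-trip sketch using the reader and writer together; the XML path is a placeholder for an existing meta-data file:

    from tvb.core.entities.file.xml_metadata_handlers import XMLReader, XMLWriter

    reader = XMLReader("/tmp/DemoProject/Project.xml")
    entity = reader.read_metadata()          # GenericMetaData instance
    XMLWriter(entity).write("/tmp/DemoProject/Project-copy.xml")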