imagine.tools package

Submodules

imagine.tools.carrier_mapper module

The mapper module implements distribution mapping functions.

imagine.tools.carrier_mapper.exp_mapper(x, a=0, b=1)[source]

Maps x from [0, 1] into the interval [exp(a), exp(b)].

Parameters:
  • x (float) – The variable to be mapped.
  • a (float) – The lower parameter value limit.
  • b (float) – The upper parameter value limit.
Returns:

The mapped parameter value.

Return type:

numpy.float64

imagine.tools.carrier_mapper.unity_mapper(x, a=0.0, b=1.0)[source]

Maps x from [0, 1] into the interval [a, b].

Parameters:
  • x (float) – The variable to be mapped.
  • a (float) – The lower parameter value limit.
  • b (float) – The upper parameter value limit.
Returns:

The mapped parameter value.

Return type:

numpy.float64
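
A minimal usage sketch of the two mappers, assuming unity_mapper applies the linear map x*(b-a) + a and exp_mapper exponentiates the resulting interval limits, as documented above:

from imagine.tools.carrier_mapper import exp_mapper, unity_mapper

unity_mapper(0.5, a=-2.0, b=2.0)  # midpoint of [0, 1] -> midpoint of [-2, 2], i.e. 0.0
exp_mapper(0.5, a=-2.0, b=2.0)    # lies in [exp(-2), exp(2)] ~ [0.135, 7.389]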

imagine.tools.class_tools module

class imagine.tools.class_tools.BaseClass[source]

Bases: object

REQ_ATTRS = []
imagine.tools.class_tools.req_attr(meth)[source]
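
Neither member is documented here, but the names suggest a required-attribute pattern: subclasses list mandatory attribute names in REQ_ATTRS, and methods wrapped by req_attr verify their presence before running. A hypothetical sketch of that pattern (not the actual implementation):

from functools import wraps

def req_attr_sketch(meth):
    # Hypothetical stand-in for req_attr: check REQ_ATTRS before calling.
    @wraps(meth)
    def wrapper(self, *args, **kwargs):
        for attr in self.REQ_ATTRS:
            if not hasattr(self, attr):
                raise AttributeError("missing required attribute: " + attr)
        return meth(self, *args, **kwargs)
    return wrapper

class MyComponent:  # in IMAGINE, this would subclass BaseClass
    REQ_ATTRS = ['name']

    @req_attr_sketch
    def run(self):
        return self.name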

imagine.tools.config module

IMAGINE global configuration

The default behaviour of some aspects of IMAGINE can be set using global rc configuration variables.

These can be accessed and modified using the imagine.rc dictionary, or by setting the corresponding environment variables (named 'IMAGINE_' + RC_VAR_NAME).

For example, to set the default path for the hamx executable, one can either do:

import imagine
imagine.rc['hammurabi_hamx_path'] = 'my_desired_path'

or, alternatively, set this as an environment variable before the execution of the script:

export IMAGINE_HAMMURABI_HAMX_PATH='my_desired_path'

The following list describes all the available global settings variables.

IMAGINE rc variables
temp_dir
Default temporary directory used by IMAGINE. If not set, a temporary directory will be created at /tmp/ with a safe name.
distributed_arrays
If True, arrays containing covariances are distributed among different MPI processes (and so are the corresponding array operations).
pipeline_default_seed
The default value for the master seed used by a Pipeline object (see Pipeline.master_seed).
pipeline_distribute_ensemble
The default value of the distribute_ensemble property of a Pipeline object (see Pipeline.distribute_ensemble).
hammurabi_hamx_path
Default location of the Hammurabi X executable file, hamx.

imagine.tools.covariance_estimator module

This module contains estimation algorithms for the covariance matrix based on a finite number of samples.

For the test suite, see imagine/tests/tools_tests.py.

imagine.tools.covariance_estimator.empirical_cov(data)[source]

Empirical covariance estimator

Given some data matrix, \(D\), where rows are different samples and columns different properties, the covariance can be estimated from

\[U_{ij} = D_{ij} - \overline{D}_j\,,\; \text{with}\; \overline{D}_j=\tfrac{1}{N} \sum_{i=1}^N D_{ij}\]
\[\text{cov} = \tfrac{1}{N} U^T U\]

Notes

While conceptually simple, this is usually not the best option.

Parameters:data (numpy.ndarray) – Ensemble of observables, in global shape (ensemble size, data size).
Returns:cov – Distributed (not copied) covariance matrix in global shape (data size, data size); each node holds part of the rows.
Return type:numpy.ndarray
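
In the single-process case, the formulas above reduce to a few lines of numpy. A sketch (without the MPI row distribution that the actual function performs):

import numpy as np

def empirical_cov_sketch(data):
    # Single-process transcription of the formulas above.
    n = data.shape[0]              # ensemble size N
    u = data - data.mean(axis=0)   # U_ij = D_ij - mean_j
    return u.T @ u / n             # cov = U^T U / N

data = np.random.randn(64, 10)     # (ensemble size, data size)
cov = empirical_cov_sketch(data)   # shape (10, 10)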
imagine.tools.covariance_estimator.oas_cov(data)[source]

Estimate covariance with the Oracle Approximating Shrinkage algorithm.

Given some \(n\times m\) data matrix, \(D\), where rows are different samples and columns different properties, the covariance can be estimated in the following way.

\[U_{ij} = D_{ij} - \overline{D}_j\,,\; \text{with}\; \overline{D}_j=\tfrac{1}{n} \sum_{i=1}^n D_{ij}\]

Let

\[S = \tfrac{1}{n} U^T U\,,\; T = \text{tr}(S)\quad\text{and}\quad V = \text{tr}(S^2)\]
\[\tilde\rho = \min\left[1,\frac{(1-2/m)V + T^2}{ (n+1-2/m)(V-T^2/m)}\right]\]

The covariance is given by

\[\text{cov}_\text{OAS} = (1-\tilde\rho)S + \tfrac{1}{m} \tilde\rho\, T I_m\]
Parameters:data (numpy.ndarray) – Distributed data in global shape (ensemble_size, data_size).
Returns:cov – Covariance matrix in global shape (data_size, data_size).
Return type:numpy.ndarray
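
The OAS formulas also translate directly into numpy. A single-process sketch (the library version additionally distributes rows over MPI processes):

import numpy as np

def oas_cov_sketch(data):
    # Single-process transcription of the OAS formulas above.
    n, m = data.shape
    u = data - data.mean(axis=0)
    s = u.T @ u / n                             # sample covariance S
    t = np.trace(s)                             # T = tr(S)
    v = np.trace(s @ s)                         # V = tr(S^2)
    rho = min(1.0, ((1 - 2/m)*v + t**2)
              / ((n + 1 - 2/m)*(v - t**2/m)))   # shrinkage parameter
    return (1 - rho)*s + rho*(t/m)*np.eye(m)    # shrink towards (T/m) I_m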
imagine.tools.covariance_estimator.oas_mcov(data)[source]

Estimate covariance with the Oracle Approximating Shrinkage algorithm.

See imagine.tools.covariance_estimator.oas_cov for details. This function additionally returns the computed ensemble mean.

Parameters:data (numpy.ndarray) – Distributed data in global shape (ensemble_size, data_size).
Returns:
  • mean (numpy.ndarray) – Copied ensemble mean (on all nodes).
  • cov (numpy.ndarray) – Distributed covariance matrix in shape (data_size, data_size).
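
A usage sketch, assuming a single-process run:

import numpy as np
from imagine.tools.covariance_estimator import oas_mcov

data = np.random.randn(64, 10)  # (ensemble_size, data_size)
mean, cov = oas_mcov(data)      # ensemble mean and OAS covariance estimate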

imagine.tools.io_handler module

The IOHandler class is designed for IMAGINE I/O with HDF5+MPI; parallel HDF5 is not required.

There are two types of data reading, corresponding to the data types defined in the Observable class.

1. For reading 'measured' data (including mask maps), each node reads the full data; 'read_copy' is designed for this case.

2. For reading 'covariance' data, each node reads a certain subset of the rows; 'read_dist' is designed for this case.

Parallel writing is not required, since the output workload is not heavy. There are likewise two types of data writing, corresponding to the two reading modes: 'write_copy' and 'write_dist'.

For the test suite, see imagine/tests/tools_tests.py.

class imagine.tools.io_handler.IOHandler(wk_dir=None)[source]

Bases: object

Handles the I/O.

Parameters:wk_dir (string) – The absolute path of the working directory.
read_copy(file, key)[source]

Reads from an HDF5 file identically on all nodes; by doing so, each node holds an identical copy of the data stored in the file.

Parameters:
  • file (str) – String for filename.
  • key (str) – String for HDF5 group and dataset names, e.g., 'group name/dataset name'.
Returns:

The copied data, in shape (1, n) on each node.

Return type:

Copied numpy.ndarray.

read_dist(file, key)[source]

Reads from an HDF5 file and returns a distributed data-set. Note that the binary file data should contain enough rows to be distributed over the available computing nodes; otherwise the mpi_arrange function will raise an error.

Parameters:
  • file (str) – String for filename.
  • key (str) – String for HDF5 group and dataset names, e.g., 'group name/dataset name'.
Returns:

The distributed data, in shape (m, n) (at least (1, n)) on each node.

Return type:

Distributed numpy.ndarray.

write_copy(data, file, key)[source]

Writes a copied data-set into an HDF5 file. In practice, it writes out the data stored in the master node, by default assuming all nodes hold identical copies.

Parameters:
  • data (numpy.ndarray) – Distributed/copied data.
  • file (str) – String for filename.
  • key (str) – String for HDF5 group and dataset names, e.g., ‘group name/dataset name’.
write_dist(data, file, key)[source]

Writes a distributed data-set into an HDF5 file. If the given file does not exist, it is created. The data shape must be (m, n) on each node; each node passes its content to the master node, which is in charge of the sequential writing.

Parameters:
  • data (numpy.ndarray) – Distributed data.
  • file (str) – String for filename.
  • key (str) – String for HDF5 group and dataset names, e.g., ‘group name/dataset name’.
file_path

Absolute path of the HDF5 binary file.

wk_dir

String containing the absolute path of the working directory.
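
A minimal round trip through the copied read/write pair; the file name and key below are illustrative:

import numpy as np
from imagine.tools.io_handler import IOHandler

io = IOHandler()                      # default working directory
data = np.arange(12.).reshape(1, 12)  # copied data, shape (1, n)
io.write_copy(data, 'example.hdf5', 'mygroup/mydataset')
copy = io.read_copy('example.hdf5', 'mygroup/mydataset')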

imagine.tools.masker module

This module defines methods related to masking out distributed data and/or the associated covariance matrix. For the test suite, see imagine/tests/tools_tests.py.

Implemented with numpy.ndarray raw data.

imagine.tools.masker.mask_cov(cov, mask)[source]

Applies a mask to the observable covariance.

Parameters:
  • cov (distributed numpy.ndarray) – Covariance matrix of observables, in global shape (data size, data size); each node contains part of the global rows.
  • mask (numpy.ndarray) – Copied mask map in shape (1, data size).
Returns:

Masked covariance matrix of shape (masked data size, masked data size).

Return type:

numpy.ndarray

imagine.tools.masker.mask_obs(obs, mask)[source]

Applies a mask to an observable.

Parameters:
  • obs (distributed numpy.ndarray) – Ensemble of observables, in global shape (ensemble size, data size); each node contains part of the global rows.
  • mask (numpy.ndarray) – Copied mask map in shape (1, data size) on each node.
Returns:

Masked observable of shape (ensemble size, masked data size).

Return type:

numpy.ndarray
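
A single-process sketch; the mask convention assumed here (1 keeps a column, 0 removes it) should be checked against the Observable documentation:

import numpy as np
from imagine.tools.masker import mask_obs, mask_cov

obs = np.random.randn(5, 4)          # (ensemble size, data size)
cov = np.cov(obs, rowvar=False)      # (data size, data size)
mask = np.array([[1., 0., 1., 1.]])  # assumed: 1 = keep, 0 = mask out
masked_obs = mask_obs(obs, mask)     # shape (5, 3)
masked_cov = mask_cov(cov, mask)     # shape (3, 3)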

imagine.tools.misc module

imagine.tools.misc.adjust_error_intervals(value, errlo, errup, sdigits=2)[source]

Takes the value of a quantity with associated errors errlo and errup, and prepares them to be reported as \(v^{+err\,up}_{-err\,lo}\).

Parameters:
  • value (int or float) – Value of quantity.
  • errlo, errup (int or float) – Associated lower and upper errors of value.
Returns:

  • value (float) – Rounded value
  • errlo, errup (float) – Asymmetric error values
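
A usage sketch; the precise rounding behaviour implied by sdigits is an assumption:

from imagine.tools.misc import adjust_error_intervals

# Rounds value, errlo and errup consistently, keeping (assumed) about
# sdigits significant digits in the errors.
value, errlo, errup = adjust_error_intervals(3.14159, 0.01234, 0.05678)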

imagine.tools.misc.is_notebook()[source]

Finds out whether Python is running in a Jupyter notebook or in a regular shell.

imagine.tools.mpi_helper module

This MPI helper module is designed for parallel computing and data handling.

For the test suite, see imagine/tests/tools_tests.py.

imagine.tools.mpi_helper.mpi_arrange(size)[source]

Given the global size, the number of MPI nodes, and the current rank, returns the begin and end indices for distributing the global size.

Parameters:size (integer (positive)) – The total size of the target to be distributed. It can be a row size or a column size.
Returns:result – The begin and end indices [begin, end] for slicing the target.
Return type:numpy.uint
imagine.tools.mpi_helper.mpi_shape(data)[source]

Returns the global number of rows and columns of given distributed data.

Parameters:data (numpy.ndarray) – The distributed data.
Returns:result – Global row and column numbers.
Return type:numpy.uint
imagine.tools.mpi_helper.mpi_prosecutor(data)[source]

Checks whether the data is distributed in the correct way: the covariance matrix must be distributed in exactly the same manner as the multi-realization data. If not, an error is raised.

Parameters:data (numpy.ndarray) – The distributed data to be examined.
imagine.tools.mpi_helper.mpi_mean(data)[source]

Calculates the mean of a distributed array. Averaging is preferentially done along the column direction, but if the data has shape (1, n), the average is taken along the row direction. Note that the numerical values will be converted into double precision.

Parameters:data (numpy.ndarray) – Distributed data.
Returns:result – Copied data mean, which means the mean is copied to all nodes.
Return type:numpy.ndarray
imagine.tools.mpi_helper.mpi_trans(data)[source]

Transposes distributed data. Note that the numerical values will be converted into double precision.

Parameters:data (numpy.ndarray) – Distributed data.
Returns:result – Transposed data in distribution.
Return type:numpy.ndarray
imagine.tools.mpi_helper.mpi_mult(left, right)[source]

Calculates the matrix product of two distributed arrays; the result, left*right, is likewise distributed over the nodes. Note that the numerical values will be converted into double precision. The distributed rows of right are sent to the other nodes (akin to Cannon's algorithm).

Parameters:
  • left (numpy.ndarray) – Distributed left side data.
  • right (numpy.ndarray) – Distributed right side data.
Returns:

result – Distributed multiplication result.

Return type:

numpy.ndarray

imagine.tools.mpi_helper.mpi_trace(data)[source]

Computes the trace of the given distributed data.

Parameters:data (numpy.ndarray) – Array of data distributed over different processes.
Returns:result – Copied trace of given data.
Return type:numpy.float64
imagine.tools.mpi_helper.mpi_eye(size)[source]

Produces an identity (eye) matrix of shape (size, size), distributed over the running MPI processes.

Parameters:size (integer) – Distributed matrix size.
Returns:result – Distributed eye matrix.
Return type:numpy.ndarray, double data type
imagine.tools.mpi_helper.mpi_distribute_matrix(full_matrix)[source]

Distributes a full matrix over the running MPI processes.

Parameters:full_matrix (numpy.ndarray) – The full matrix, available on each node.
Returns:result – The distributed matrix.
Return type:numpy.ndarray, double data type
imagine.tools.mpi_helper.mpi_lu_solve(operator, source)[source]

Solves a linear system using a simple LU Gauss method WITHOUT pivot permutation.

Parameters:
  • operator (distributed numpy.ndarray) – Matrix representation of the left-hand-side operator.
  • source (copied numpy.ndarray) – Vector representation of the right-hand-side source.
Returns:

result – Copied solution to the linear algebra problem.

Return type:

numpy.ndarray, double data type

imagine.tools.mpi_helper.mpi_slogdet(data)[source]

Computes the log-determinant using a simple LU Gauss method WITHOUT pivot permutation.

Parameters:data (numpy.ndarray) – Array of data distributed over different processes.
Returns:
  • sign (numpy.ndarray) – Single element numpy array containing the sign of the determinant (copied to all nodes).
  • logdet (numpy.ndarray) – Single element numpy array containing the log of the determinant (copied to all nodes).
imagine.tools.mpi_helper.mpi_global(data)[source]

Gathers data spread across different processes.

Parameters:data (numpy.ndarray) – Array of data distributed over different processes.
Returns:global array – The root process returns the gathered data, while other processes return None.
Return type:numpy.ndarray
imagine.tools.mpi_helper.mpi_local(data)[source]

Distributes data over the available processes.

Parameters:data (numpy.ndarray) – Array of data to be distributed over available processes.
Returns:local array – The distributed array, on all processes.
Return type:numpy.ndarray
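
A sketch of the distribute/reduce/gather cycle; it assumes the script is launched under MPI (e.g., mpirun -n 4 python script.py) with mpi4py available:

import numpy as np
from imagine.tools.mpi_helper import mpi_local, mpi_mean, mpi_global

full = np.arange(20.).reshape(4, 5)  # identical full array on every process
local = mpi_local(full)              # each process keeps its share of rows
mean = mpi_mean(local)               # (1, 5) mean, copied to all processes
gathered = mpi_global(local)         # full array on root, None elsewhere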

imagine.tools.parallel_ops module

Interface module that automatically switches between the routines in the imagine.tools.mpi_helper module and their numpy or pure Python equivalents, depending on the contents of imagine.rc['distributed_arrays'].

imagine.tools.parallel_ops.pshape(data)[source]

imagine.tools.mpi_helper.mpi_shape() or numpy.ndarray.shape depending on imagine.rc['distributed_arrays'].

imagine.tools.parallel_ops.prosecutor(data)[source]

imagine.tools.mpi_helper.mpi_prosecutor() or nothing depending on imagine.rc['distributed_arrays'].

imagine.tools.parallel_ops.pmean(data)[source]

imagine.tools.mpi_helper.mpi_mean() or numpy.mean() depending on imagine.rc['distributed_arrays'].

imagine.tools.parallel_ops.ptrans(data)[source]

imagine.tools.mpi_helper.mpi_trans() or numpy.ndarray.T depending on imagine.rc['distributed_arrays'].

imagine.tools.parallel_ops.pmult(left, right)[source]

imagine.tools.mpi_helper.mpi_mult() or numpy.matmul() depending on imagine.rc['distributed_arrays'].

imagine.tools.parallel_ops.ptrace(data)[source]

imagine.tools.mpi_helper.mpi_trace() or numpy.trace() depending on imagine.rc['distributed_arrays'].

imagine.tools.parallel_ops.peye(size)[source]

imagine.tools.mpi_helper.mpi_eye() or numpy.eye() depending on imagine.rc['distributed_arrays'].

imagine.tools.parallel_ops.distribute_matrix(full_matrix)[source]

imagine.tools.mpi_helper.mpi_distribute_matrix() or nothing depending on imagine.rc['distributed_arrays'].

imagine.tools.parallel_ops.plu_solve(operator, source)[source]

imagine.tools.mpi_helper.mpi_lu_solve() or numpy.linalg.solve() depending on imagine.rc['distributed_arrays'].

Notes

In the non-distributed case, the source is transposed before the calculation.

imagine.tools.parallel_ops.pslogdet(data)[source]

imagine.tools.mpi_helper.mpi_slogdet() or numpy.linalg.slogdet() depending on imagine.rc['distributed_arrays'].

imagine.tools.parallel_ops.pglobal(data)[source]

imagine.tools.mpi_helper.mpi_global() or nothing depending on imagine.rc['distributed_arrays'].

imagine.tools.parallel_ops.plocal(data)[source]

imagine.tools.mpi_helper.mpi_local() or nothing depending on imagine.rc['distributed_arrays'].
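
A sketch of the dispatch: with distributed_arrays disabled, the p-functions fall through to their numpy equivalents, so the same code can run serially or under MPI:

import numpy as np
import imagine
from imagine.tools import parallel_ops as po

imagine.rc['distributed_arrays'] = False   # select the pure-numpy branch
data = np.random.randn(8, 3)
cov = po.pmult(po.ptrans(data), data) / 8  # here dispatches to numpy.matmul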

imagine.tools.random_seed module

This module provides a time-thread dependent seed value.

For the test suite, see imagine/tests/tools_tests.py.

imagine.tools.random_seed.ensemble_seed_generator(size)[source]

Generates fixed random seed values for each realization in the ensemble.

Parameters:size (int) – Number of realizations in ensemble.
Returns:seeds – An array of random seeds.
Return type:numpy.ndarray
imagine.tools.random_seed.seed_generator(trigger)[source]

Setting trigger to 0 generates a seed using the time-thread dependent method; otherwise, the trigger itself is returned as the seed.

Parameters:trigger (int) – Non-negative pre-fixed seed.
Returns:seed – A random seed value.
Return type:int
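
A usage sketch of the two generators:

from imagine.tools.random_seed import ensemble_seed_generator, seed_generator

seed_generator(42)                   # non-zero trigger: returns 42 itself
seed_generator(0)                    # 0: time-thread dependent seed
seeds = ensemble_seed_generator(16)  # one seed per ensemble realization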

imagine.tools.timer module

The Timer class is designed for time recording.

class imagine.tools.timer.Timer[source]

Bases: object

Class designed for time recording.

Simply provide an event name to the tick method to start recording. The tock method stops the recording, and the record property allows one to access the recorded times.

tick(event)[source]

Starts timing with a given event name.

Parameters:event (str) – Event name (will be a key of the record attribute).
tock(event)[source]

Stops timing of the given event.

Parameters:event (str) – Event name (will be a key of the record attribute).
record

Dictionary of recorded times, using event names as keys.
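
A usage sketch:

import time
from imagine.tools.timer import Timer

timer = Timer()
timer.tick('sampling')           # start recording the 'sampling' event
time.sleep(0.1)                  # ... the work being timed ...
timer.tock('sampling')           # stop recording
print(timer.record['sampling'])  # recorded time, keyed by event name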

Module contents

class imagine.tools.BaseClass[source]

Bases: object

REQ_ATTRS = []
class imagine.tools.IOHandler(wk_dir=None)[source]

Bases: object

Handles the I/O.

Parameters:wk_dir (string) – The absolute path of the working directory.
read_copy(file, key)[source]

Reads from an HDF5 file identically on all nodes; by doing so, each node holds an identical copy of the data stored in the file.

Parameters:
  • file (str) – String for filename.
  • key (str) – String for HDF5 group and dataset names, e.g., 'group name/dataset name'.
Returns:

The copied data, in shape (1, n) on each node.

Return type:

Copied numpy.ndarray.

read_dist(file, key)[source]

Reads from an HDF5 file and returns a distributed data-set. Note that the binary file data should contain enough rows to be distributed over the available computing nodes; otherwise the mpi_arrange function will raise an error.

Parameters:
  • file (str) – String for filename.
  • key (str) – String for HDF5 group and dataset names, e.g., 'group name/dataset name'.
Returns:

The distributed data, in shape (m, n) (at least (1, n)) on each node.

Return type:

Distributed numpy.ndarray.

write_copy(data, file, key)[source]

Writes a copied data-set into an HDF5 file. In practice, it writes out the data stored in the master node, by default assuming all nodes hold identical copies.

Parameters:
  • data (numpy.ndarray) – Distributed/copied data.
  • file (str) – String for filename.
  • key (str) – String for HDF5 group and dataset names, e.g., ‘group name/dataset name’.
write_dist(data, file, key)[source]

Writes a distributed data-set into an HDF5 file. If the given file does not exist, it is created. The data shape must be (m, n) on each node; each node passes its content to the master node, which is in charge of the sequential writing.

Parameters:
  • data (numpy.ndarray) – Distributed data.
  • file (str) – String for filename.
  • key (str) – String for HDF5 group and dataset names, e.g., ‘group name/dataset name’.
file_path

Absolute path of the HDF5 binary file.

wk_dir

String containing the absolute path of the working directory.

class imagine.tools.Timer[source]

Bases: object

Class designed for time recording.

Simply provide an event name to the tick method to start recording. The tock method stops the recording, and the record property allows one to access the recorded times.

tick(event)[source]

Starts timing with a given event name.

Parameters:event (str) – Event name (will be a key of the record attribute).
tock(event)[source]

Stops timing of the given event.

Parameters:event (str) – Event name (will be a key of the record attribute).
record

Dictionary of recorded times, using event names as keys.

imagine.tools.exp_mapper(x, a=0, b=1)[source]

Maps x from [0, 1] into the interval [exp(a), exp(b)].

Parameters:
  • x (float) – The variable to be mapped.
  • a (float) – The lower parameter value limit.
  • b (float) – The upper parameter value limit.
Returns:

The mapped parameter value.

Return type:

numpy.float64

imagine.tools.unity_mapper(x, a=0.0, b=1.0)[source]

Maps x from [0, 1] into the interval [a, b].

Parameters:
  • x (float) – The variable to be mapped.
  • a (float) – The lower parameter value limit.
  • b (float) – The upper parameter value limit.
Returns:

The mapped parameter value.

Return type:

numpy.float64

imagine.tools.req_attr(meth)[source]
imagine.tools.empirical_cov(data)[source]

Empirical covariance estimator

Given some data matrix, \(D\), where rows are different samples and columns different properties, the covariance can be estimated from

\[U_{ij} = D_{ij} - \overline{D}_j\,,\; \text{with}\; \overline{D}_j=\tfrac{1}{N} \sum_{i=1}^N D_{ij}\]
\[\text{cov} = \tfrac{1}{N} U^T U\]

Notes

While conceptually simple, this is usually not the best option.

Parameters:data (numpy.ndarray) – Ensemble of observables, in global shape (ensemble size, data size).
Returns:cov – Distributed (not copied) covariance matrix in global shape (data size, data size); each node holds part of the rows.
Return type:numpy.ndarray
imagine.tools.oas_cov(data)[source]

Estimate covariance with the Oracle Approximating Shrinkage algorithm.

Given some \(n\times m\) data matrix, \(D\), where rows are different samples and columns different properties, the covariance can be estimated in the following way.

\[U_{ij} = D_{ij} - \overline{D}_j\,,\; \text{with}\; \overline{D}_j=\tfrac{1}{n} \sum_{i=1}^n D_{ij}\]

Let

\[S = \tfrac{1}{n} U^T U\,,\; T = \text{tr}(S)\quad\text{and}\quad V = \text{tr}(S^2)\]
\[\tilde\rho = \min\left[1,\frac{(1-2/m)V + T^2}{ (n+1-2/m)(V-T^2/m)}\right]\]

The covariance is given by

\[\text{cov}_\text{OAS} = (1-\tilde\rho)S + \tfrac{1}{m} \tilde\rho\, T I_m\]
Parameters:data (numpy.ndarray) – Distributed data in global shape (ensemble_size, data_size).
Returns:cov – Covariance matrix in global shape (data_size, data_size).
Return type:numpy.ndarray
imagine.tools.oas_mcov(data)[source]

Estimate covariance with the Oracle Approximating Shrinkage algorithm.

See imagine.tools.covariance_estimator.oas_cov for details. This function additionally returns the computed ensemble mean.

Parameters:data (numpy.ndarray) – Distributed data in global shape (ensemble_size, data_size).
Returns:
  • mean (numpy.ndarray) – Copied ensemble mean (on all nodes).
  • cov (numpy.ndarray) – Distributed covariance matrix in shape (data_size, data_size).
imagine.tools.mask_cov(cov, mask)[source]

Applies a mask to the observable covariance.

Parameters:
  • cov (distributed numpy.ndarray) – Covariance matrix of observables, in global shape (data size, data size); each node contains part of the global rows.
  • mask (numpy.ndarray) – Copied mask map in shape (1, data size).
Returns:

Masked covariance matrix of shape (masked data size, masked data size).

Return type:

numpy.ndarray

imagine.tools.mask_obs(obs, mask)[source]

Applies a mask to an observable.

Parameters:
  • obs (distributed numpy.ndarray) – Ensemble of observables, in global shape (ensemble size, data size); each node contains part of the global rows.
  • mask (numpy.ndarray) – Copied mask map in shape (1, data size) on each node.
Returns:

Masked observable of shape (ensemble size, masked data size).

Return type:

numpy.ndarray

imagine.tools.mpi_arrange(size)[source]

Given the global size, the number of MPI nodes, and the current rank, returns the begin and end indices for distributing the global size.

Parameters:size (integer (positive)) – The total size of the target to be distributed. It can be a row size or a column size.
Returns:result – The begin and end indices [begin, end] for slicing the target.
Return type:numpy.uint
imagine.tools.mpi_shape(data)[source]

Returns the global number of rows and columns of given distributed data.

Parameters:data (numpy.ndarray) – The distributed data.
Returns:result – Global row and column numbers.
Return type:numpy.uint
imagine.tools.mpi_prosecutor(data)[source]

Checks whether the data is distributed in the correct way: the covariance matrix must be distributed in exactly the same manner as the multi-realization data. If not, an error is raised.

Parameters:data (numpy.ndarray) – The distributed data to be examined.
imagine.tools.mpi_mean(data)[source]

Calculates the mean of a distributed array. Averaging is preferentially done along the column direction, but if the data has shape (1, n), the average is taken along the row direction. Note that the numerical values will be converted into double precision.

Parameters:data (numpy.ndarray) – Distributed data.
Returns:result – Copied data mean, which means the mean is copied to all nodes.
Return type:numpy.ndarray
imagine.tools.mpi_trans(data)[source]

Transposes distributed data. Note that the numerical values will be converted into double precision.

Parameters:data (numpy.ndarray) – Distributed data.
Returns:result – Transposed data in distribution.
Return type:numpy.ndarray
imagine.tools.mpi_mult(left, right)[source]

Calculates the matrix product of two distributed arrays; the result, left*right, is likewise distributed over the nodes. Note that the numerical values will be converted into double precision. The distributed rows of right are sent to the other nodes (akin to Cannon's algorithm).

Parameters:
  • left (numpy.ndarray) – Distributed left side data.
  • right (numpy.ndarray) – Distributed right side data.
Returns:

result – Distributed multiplication result.

Return type:

numpy.ndarray

imagine.tools.mpi_trace(data)[source]

Computes the trace of the given distributed data.

Parameters:data (numpy.ndarray) – Array of data distributed over different processes.
Returns:result – Copied trace of given data.
Return type:numpy.float64
imagine.tools.mpi_eye(size)[source]

Produces an identity (eye) matrix of shape (size, size), distributed over the running MPI processes.

Parameters:size (integer) – Distributed matrix size.
Returns:result – Distributed eye matrix.
Return type:numpy.ndarray, double data type
imagine.tools.mpi_distribute_matrix(full_matrix)[source]

Distributes a full matrix over the running MPI processes.

Parameters:full_matrix (numpy.ndarray) – The full matrix, available on each node.
Returns:result – The distributed matrix.
Return type:numpy.ndarray, double data type
imagine.tools.mpi_lu_solve(operator, source)[source]

Solves a linear system using a simple LU Gauss method WITHOUT pivot permutation.

Parameters:
  • operator (distributed numpy.ndarray) – Matrix representation of the left-hand-side operator.
  • source (copied numpy.ndarray) – Vector representation of the right-hand-side source.
Returns:

result – Copied solution to the linear algebra problem.

Return type:

numpy.ndarray, double data type

imagine.tools.mpi_slogdet(data)[source]

Computes the log-determinant using a simple LU Gauss method WITHOUT pivot permutation.

Parameters:data (numpy.ndarray) – Array of data distributed over different processes.
Returns:
  • sign (numpy.ndarray) – Single element numpy array containing the sign of the determinant (copied to all nodes).
  • logdet (numpy.ndarray) – Single element numpy array containing the log of the determinant (copied to all nodes).
imagine.tools.mpi_global(data)[source]

Gathers data spread across different processes.

Parameters:data (numpy.ndarray) – Array of data distributed over different processes.
Returns:global array – The root process returns the gathered data, while other processes return None.
Return type:numpy.ndarray
imagine.tools.mpi_local(data)[source]

Distributes data over the available processes.

Parameters:data (numpy.ndarray) – Array of data to be distributed over available processes.
Returns:local array – The distributed array, on all processes.
Return type:numpy.ndarray
imagine.tools.pshape(data)[source]

imagine.tools.mpi_helper.mpi_shape() or numpy.ndarray.shape depending on imagine.rc['distributed_arrays'].

imagine.tools.prosecutor(data)[source]

imagine.tools.mpi_helper.mpi_prosecutor() or nothing depending on imagine.rc['distributed_arrays'].

imagine.tools.pmean(data)[source]

imagine.tools.mpi_helper.mpi_mean() or numpy.mean() depending on imagine.rc['distributed_arrays'].

imagine.tools.ptrans(data)[source]

imagine.tools.mpi_helper.mpi_trans() or numpy.ndarray.T depending on imagine.rc['distributed_arrays'].

imagine.tools.pmult(left, right)[source]

imagine.tools.mpi_helper.mpi_mult() or numpy.matmul() depending on imagine.rc['distributed_arrays'].

imagine.tools.ptrace(data)[source]

imagine.tools.mpi_helper.mpi_trace() or numpy.trace() depending on imagine.rc['distributed_arrays'].

imagine.tools.peye(size)[source]

imagine.tools.mpi_helper.mpi_eye() or numpy.eye() depending on imagine.rc['distributed_arrays'].

imagine.tools.distribute_matrix(full_matrix)[source]

imagine.tools.mpi_helper.mpi_distribute_matrix() or nothing depending on imagine.rc['distributed_arrays'].

imagine.tools.plu_solve(operator, source)[source]

imagine.tools.mpi_helper.mpi_lu_solve() or numpy.linalg.solve() depending on imagine.rc['distributed_arrays'].

Notes

In the non-distributed case, the source is transposed before the calculation.

imagine.tools.pslogdet(data)[source]

imagine.tools.mpi_helper.mpi_slogdet() or numpy.linalg.slogdet() depending on imagine.rc['distributed_arrays'].

imagine.tools.pglobal(data)[source]

imagine.tools.mpi_helper.mpi_global() or nothing depending on imagine.rc['distributed_arrays'].

imagine.tools.plocal(data)[source]

imagine.tools.mpi_helper.mpi_local() or nothing depending on imagine.rc['distributed_arrays'].

imagine.tools.ensemble_seed_generator(size)[source]

Generates fixed random seed values for each realization in the ensemble.

Parameters:size (int) – Number of realizations in ensemble.
Returns:seeds – An array of random seeds.
Return type:numpy.ndarray
imagine.tools.seed_generator(trigger)[source]

Setting trigger to 0 generates a seed using the time-thread dependent method; otherwise, the trigger itself is returned as the seed.

Parameters:trigger (int) – Non-negative pre-fixed seed.
Returns:seed – A random seed value.
Return type:int