bilby.gw.prior.CalibrationPriorDict

class bilby.gw.prior.CalibrationPriorDict(dictionary=None, filename=None)[source]

Bases: PriorDict

Prior dictionary class for calibration parameters. This has methods for simplifying the creation of priors for the large numbers of parameters used with the spline model.

__init__(dictionary=None, filename=None)[source]

Initialises a Prior dictionary for calibration parameters

Parameters:
dictionary: dict, optional

See superclass

filename: str, optional

See superclass

__call__(*args, **kwargs)

Call self as a function.

Methods

__init__([dictionary, filename])

Initialises a Prior dictionary for calibration parameters

cdf(sample)

Evaluate the cumulative distribution function at the provided points

check_ln_prob(sample, ln_prob[, normalized])

check_prob(sample, prob)

clear()

constant_uncertainty_spline(amplitude_sigma, ...)

Make a prior assuming constant-in-frequency calibration uncertainty.

convert_floats_to_delta_functions()

Convert all float parameters to delta functions

copy()

We have to overwrite the copy method as it fails due to the presence of defaults.

default_conversion_function(sample)

Placeholder parameter conversion function.

evaluate_constraints(sample)

fill_priors(likelihood[, default_priors_file])

Fill dictionary of priors based on required parameters of likelihood

from_dictionary(dictionary)

from_envelope_file(envelope_file, ...[, ...])

Load in the calibration envelope.

from_file(filename)

Reads in a prior from a file specification

from_json(filename)

Reads in a prior from a json file

fromkeys(iterable[, value])

Create a new dictionary with keys from iterable and values set to value.

get(key[, default])

Return the value for key if key is in the dictionary, else default.

items()

keys()

ln_prob(sample[, axis, normalized])

normalize_constraint_factor(keys[, ...])

pop(key[, default])

If the key is not found, return the default if given; otherwise, raise a KeyError.

popitem(/)

Remove and return a (key, value) pair as a 2-tuple.

prob(sample, **kwargs)

rescale(keys, theta)

Rescale samples from unit cube to prior

sample([size])

Draw samples from the prior set

sample_subset([keys, size])

Draw samples from the prior set for parameters which are not a DeltaFunction

sample_subset_constrained([keys, size])

sample_subset_constrained_as_array([keys, size])

Return an array of samples

setdefault(key[, default])

Insert key with a value of default if key is not in the dictionary.

test_has_redundant_keys()

Test whether there are redundant keys in self.

test_redundancy(key[, disable_logging])

Empty redundancy test; should be overwritten in subclasses

to_file(outdir, label)

Write the prior to file.

to_json(outdir, label)

update([E, ]**F)

If E is present and has a .keys() method, then does: for k in E: D[k] = E[k]. If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v. In either case, this is followed by: for k in F: D[k] = F[k].

values()

Attributes

constraint_keys

fixed_keys

non_fixed_keys

cdf(sample)[source]

Evaluate the cumulative distribution function at the provided points

Parameters:
sample: dict, pandas.DataFrame

Dictionary of the samples for which to calculate the CDF

Returns:
dict, pandas.DataFrame: Dictionary containing the CDF values
clear() → None. Remove all items from D.
static constant_uncertainty_spline(amplitude_sigma, phase_sigma, minimum_frequency, maximum_frequency, n_nodes, label, boundary='reflective')[source]

Make a prior assuming constant-in-frequency calibration uncertainty.

This assumes Gaussian fluctuations about 0.

Parameters:
amplitude_sigma: float

Uncertainty in the amplitude.

phase_sigma: float

Uncertainty in the phase.

minimum_frequency: float

Minimum frequency for the spline.

maximum_frequency: float

Maximum frequency for the spline.

n_nodes: int

Number of nodes for the spline.

label: str

Label for the names of the parameters, e.g., recalib_H1_

boundary: None, ‘reflective’, ‘periodic’

The type of prior boundary to assign

Returns:
prior: PriorDict

Priors for the relevant parameters. This includes the frequencies of the nodes which are _not_ sampled.
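To make the shape of the returned prior concrete, here is a pure-Python sketch of the structure this method produces. The log-spaced node placement and the `<label>amplitude_<i>` / `<label>phase_<i>` / `<label>frequency_<i>` naming are assumptions for illustration, not verified bilby internals:

```python
import math

def constant_uncertainty_spline_sketch(
    amplitude_sigma, phase_sigma, minimum_frequency, maximum_frequency,
    n_nodes, label,
):
    """Sketch of the returned prior: for each spline node, a zero-mean
    Gaussian on amplitude and phase, plus a fixed (delta-function) node
    frequency that is not sampled.  Log-spaced nodes and the parameter
    naming are illustrative assumptions."""
    log_min = math.log(minimum_frequency)
    log_max = math.log(maximum_frequency)
    nodes = [math.exp(log_min + i * (log_max - log_min) / (n_nodes - 1))
             for i in range(n_nodes)]
    prior = {}
    for i, freq in enumerate(nodes):
        prior[f"{label}amplitude_{i}"] = ("Gaussian", 0.0, amplitude_sigma)
        prior[f"{label}phase_{i}"] = ("Gaussian", 0.0, phase_sigma)
        prior[f"{label}frequency_{i}"] = ("DeltaFunction", freq)  # not sampled
    return prior
```

With `label='recalib_H1_'` and `n_nodes=5` this yields fifteen entries: sampled amplitude and phase priors plus the fixed node frequencies.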

convert_floats_to_delta_functions()[source]

Convert all float parameters to delta functions

copy()[source]

We have to overwrite the copy method as it fails due to the presence of defaults.

default_conversion_function(sample)[source]

Placeholder parameter conversion function.

Parameters:
sample: dict

Dictionary to convert

Returns:
sample: dict

Same as input

fill_priors(likelihood, default_priors_file=None)[source]

Fill dictionary of priors based on required parameters of likelihood

Any floats in prior will be converted to delta function prior. Any required, non-specified parameters will use the default.

Note: if likelihood has non_standard_sampling_parameter_keys, then this will set-up default priors for those as well.

Parameters:
likelihood: bilby.likelihood.GravitationalWaveTransient instance

Used to infer the set of parameters to fill the prior with

default_priors_file: str, optional

If given, a file containing the default priors.

Returns:
prior: dict

The filled prior dictionary

static from_envelope_file(envelope_file, minimum_frequency, maximum_frequency, n_nodes, label, boundary='reflective', correction_type=None)[source]

Load in the calibration envelope.

This is a text file with columns

frequency median-amplitude median-phase -1-sigma-amplitude -1-sigma-phase +1-sigma-amplitude +1-sigma-phase

There are two definitions of the calibration correction in the literature: one defines the correction as mapping calibrated strain to theoretical waveform templates (data); the other as mapping theoretical waveform templates to calibrated strain (template). Prior to version 1.4.0, template was assumed; the default changed to data when the correction_type argument was added.

Parameters:
envelope_file: str

Name of file to read in.

minimum_frequency: float

Minimum frequency for the spline.

maximum_frequency: float

Maximum frequency for the spline.

n_nodes: int

Number of nodes for the spline.

label: str

Label for the names of the parameters, e.g., recalib_H1_

boundary: None, ‘reflective’, ‘periodic’

The type of prior boundary to assign

correction_type: str

How the correction is defined, either to the data (default) or the template. In general, data products produced by the LVK calibration groups assume data. The default value will be removed in a future release and this will need to be explicitly specified.

Added in version 1.4.0.

Returns:
prior: PriorDict

Priors for the relevant parameters. This includes the frequencies of the nodes which are _not_ sampled.
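The seven-column layout described above can be parsed with a few lines of plain Python. This sketch only illustrates the file format and the symmetrised one-sigma widths; bilby itself additionally interpolates the envelope onto the spline nodes:

```python
def parse_envelope(text):
    """Parse the seven-column envelope format into per-frequency medians
    and symmetrised one-sigma widths.  Illustrative sketch only."""
    rows = []
    for line in text.splitlines():
        if not line.strip() or line.startswith("#"):
            continue  # skip blank and comment lines
        f, a_med, p_med, a_lo, p_lo, a_hi, p_hi = map(float, line.split())
        rows.append({
            "frequency": f,
            "amplitude_median": a_med,
            "amplitude_sigma": (a_hi - a_lo) / 2,
            "phase_median": p_med,
            "phase_sigma": (p_hi - p_lo) / 2,
        })
    return rows
```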

from_file(filename)[source]

Reads in a prior from a file specification

Parameters:
filename: str

Name of the file to be read in

Notes

Lines beginning with ‘#’ or empty lines will be ignored. Priors can be loaded from:

  • bilby.core.prior as, e.g., foo = Uniform(minimum=0, maximum=1)

  • floats, e.g., foo = 1

  • bilby.gw.prior as, e.g., foo = bilby.gw.prior.AlignedSpin()

  • other external modules, e.g., foo = my.module.CustomPrior(...)
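As an illustration, a calibration prior file following these conventions might look as follows (the parameter names and values are hypothetical; the bare float becomes a delta-function prior via convert_floats_to_delta_functions):

```
# Gaussian calibration priors at the first spline node
recalib_H1_amplitude_0 = Gaussian(mu=0, sigma=0.05, name='recalib_H1_amplitude_0')
recalib_H1_phase_0 = Gaussian(mu=0, sigma=0.03, name='recalib_H1_phase_0')
# A bare float is read as a fixed value
recalib_H1_frequency_0 = 20.0
```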

classmethod from_json(filename)[source]

Reads in a prior from a json file

Parameters:
filename: str

Name of the file to be read in

fromkeys(iterable, value=None, /)

Create a new dictionary with keys from iterable and values set to value.

get(key, default=None, /)

Return the value for key if key is in the dictionary, else default.

items() → a set-like object providing a view on D's items
keys() → a set-like object providing a view on D's keys
ln_prob(sample, axis=None, normalized=True)[source]
Parameters:
sample: dict

Dictionary of the samples for which to calculate the log probability

axis: None or int

Axis along which the summation is performed

normalized: bool

If False, the constraint normalization factor is not calculated when computing the prior probability. Default is True.

Returns:
float or ndarray:

Joint log probability of all the individual sample probabilities
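The joint log probability is the sum of the per-parameter log densities. A minimal stand-in, using statistics.NormalDist in place of bilby prior objects (an illustrative assumption; the real method also handles the axis argument and constraint normalization):

```python
import math
from statistics import NormalDist

def ln_prob_sketch(priors, sample):
    """Joint log probability as the sum of per-parameter log densities.
    ``priors`` maps names to NormalDist objects, a stand-in for bilby
    prior objects."""
    return sum(math.log(priors[key].pdf(sample[key])) for key in priors)
```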

pop(key, default=<unrepresentable>, /)

Remove the specified key and return the corresponding value. If the key is not found, return the default if given; otherwise, raise a KeyError.

popitem(/)

Remove and return a (key, value) pair as a 2-tuple.

Pairs are returned in LIFO (last-in, first-out) order. Raises KeyError if the dict is empty.

prob(sample, **kwargs)[source]
Parameters:
sample: dict

Dictionary of the samples for which to calculate the probability

kwargs:

The keyword arguments are passed directly to np.prod

Returns:
float: Joint probability of all individual sample probabilities
rescale(keys, theta)[source]

Rescale samples from unit cube to prior

Parameters:
keys: list

List of prior keys to be rescaled

theta: list

List of randomly drawn values on a unit cube associated with the prior keys

Returns:
list: List of floats containing the rescaled sample
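rescale is the usual inverse-CDF map from the unit hypercube to the physical parameter space. A sketch with statistics.NormalDist standing in for bilby's Gaussian priors (an illustrative assumption):

```python
from statistics import NormalDist

def rescale_sketch(priors, keys, theta):
    """Map unit-cube draws to physical values via each prior's inverse
    CDF.  ``priors`` maps names to NormalDist objects for illustration;
    ``theta`` holds one unit-interval draw per key."""
    return [priors[key].inv_cdf(u) for key, u in zip(keys, theta)]
```

A draw of 0.5 on the unit cube maps to the median of each prior, as expected for an inverse-CDF transform.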
sample(size=None)[source]

Draw samples from the prior set

Parameters:
size: int or tuple of ints, optional

See numpy.random.uniform docs

Returns:
dict: Dictionary of the samples
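Sampling draws one value per parameter from its prior. A stdlib sketch with (mu, sigma) Gaussian stand-ins for bilby prior objects (illustrative only; a zero sigma mimics a DeltaFunction):

```python
import random

def sample_sketch(priors, size=None, seed=None):
    """Mirror of PriorDict.sample for a dict of (mu, sigma) Gaussian
    stand-ins: one value per key, or a list of ``size`` values per key."""
    rng = random.Random(seed)
    if size is None:
        return {k: rng.gauss(mu, sigma) for k, (mu, sigma) in priors.items()}
    return {k: [rng.gauss(mu, sigma) for _ in range(size)]
            for k, (mu, sigma) in priors.items()}
```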
sample_subset(keys=<list_iterator object>, size=None)[source]

Draw samples from the prior set for parameters which are not a DeltaFunction

Parameters:
keys: list

List of prior keys to draw samples from

size: int or tuple of ints, optional

See numpy.random.uniform docs

Returns:
dict: Dictionary of the drawn samples
sample_subset_constrained_as_array(keys=<list_iterator object>, size=None)[source]

Return an array of samples

Parameters:
keys: list

A list of keys to sample in

size: int

The number of samples to draw

Returns:
array: array_like

An array of shape (len(keys), size) of the samples (ordered by keys)

setdefault(key, default=None, /)

Insert key with a value of default if key is not in the dictionary.

Return the value for key if key is in the dictionary, else default.

test_has_redundant_keys()[source]

Test whether there are redundant keys in self.

Returns:
bool: Whether there are redundancies or not
test_redundancy(key, disable_logging=False)[source]

Empty redundancy test; should be overwritten in subclasses

to_file(outdir, label)[source]

Write the prior to file. This includes information about the source if possible.

Parameters:
outdir: str

Output directory.

label: str

Label for prior.

update([E, ]**F) → None. Update D from dict/iterable E and F.

If E is present and has a .keys() method, then does: for k in E: D[k] = E[k]. If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v. In either case, this is followed by: for k in F: D[k] = F[k].

values() → an object providing a view on D's values