bilby.gw.prior.CBCPriorDict

class bilby.gw.prior.CBCPriorDict(dictionary=None, filename=None, conversion_function=None)[source]

Bases: ConditionalPriorDict

__init__(dictionary=None, filename=None, conversion_function=None)[source]
Parameters:
dictionary: dict

See parent class

filename: str

See parent class

__call__(*args, **kwargs)

Call self as a function.

Methods

__init__([dictionary, filename, ...])

cdf(sample)

Evaluate the cumulative distribution function at the provided points

check_ln_prob(sample, ln_prob[, normalized])

check_prob(sample, prob)

clear()

convert_floats_to_delta_functions()

Convert all float parameters to delta functions

copy()

The copy method is overwritten because the inherited implementation fails due to the presence of defaults.

default_conversion_function(sample)

Placeholder parameter conversion function.

evaluate_constraints(sample)

fill_priors(likelihood[, default_priors_file])

Fill dictionary of priors based on required parameters of likelihood

from_dictionary(dictionary)

from_file(filename)

Reads in a prior from a file specification

from_json(filename)

Reads in a prior from a json file

fromkeys(iterable[, value])

Create a new dictionary with keys from iterable and values set to value.

get(key[, default])

Return the value for key if key is in the dictionary, else default.

get_required_variables(key)

Returns the required variables to sample a given conditional key.

is_nonempty_intersection(pset)

Check if keys in self exist in the parameter set

items()

keys()

ln_prob(sample[, axis, normalized])

normalize_constraint_factor(keys[, ...])

pop(key[, default])

If the key is not found, return the default if given; otherwise, raise a KeyError.

popitem(/)

Remove and return a (key, value) pair as a 2-tuple.

prob(sample, **kwargs)

rescale(keys, theta)

Rescale samples from unit cube to prior

sample([size])

Draw samples from the prior set

sample_subset([keys, size])

Draw samples from the prior set for parameters which are not a DeltaFunction

sample_subset_constrained([keys, size])

sample_subset_constrained_as_array([keys, size])

Return an array of samples

setdefault(key[, default])

Insert key with a value of default if key is not in the dictionary.

test_has_redundant_keys()

Test whether there are redundant keys in self.

test_redundancy(key[, disable_logging])

Empty redundancy test; should be overwritten in subclasses

to_file(outdir, label)

Write the prior distribution to file.

to_json(outdir, label)

update([E, ]**F)

If E is present and has a .keys() method: for k in E: D[k] = E[k]. If E is present and lacks a .keys() method: for k, v in E: D[k] = v. In either case, this is followed by: for k in F: D[k] = F[k].

validate_prior(duration, minimum_frequency)

Validate that the prior is suitable for use

values()

Attributes

conditional_keys

constraint_keys

distance_inclination

Return true if priors include any distance or inclination parameters

extrinsic

Return true if priors include any extrinsic parameters

fixed_keys

intrinsic

Return true if priors include any intrinsic parameters

mass

Return true if priors include any mass parameters

maximum_chirp_mass

measured_spin

Return true if priors include any measured_spin parameters

minimum_chirp_mass

minimum_component_mass

The minimum component mass allowed for the prior dictionary.

non_fixed_keys

phase

Return true if priors include phase parameters

precession

Return true if priors include any precession parameters

sky

Return true if priors include any sky location parameters

sorted_keys

sorted_keys_without_fixed_parameters

spin

Return true if priors include any spin parameters

unconditional_keys

cdf(sample)[source]

Evaluate the cumulative distribution function at the provided points

Parameters:
sample: dict, pandas.DataFrame

Dictionary of the samples of which to calculate the CDF

Returns:
dict, pandas.DataFrame: Dictionary containing the CDF values
clear() → None. Remove all items from D.
convert_floats_to_delta_functions()[source]

Convert all float parameters to delta functions

copy()[source]

The copy method is overwritten because the inherited implementation fails due to the presence of defaults.

default_conversion_function(sample)[source]

Placeholder parameter conversion function.

Parameters:
sample: dict

Dictionary to convert

Returns:
sample: dict

Same as input

property distance_inclination

Return true if priors include any distance or inclination parameters

property extrinsic

Return true if priors include any extrinsic parameters

fill_priors(likelihood, default_priors_file=None)[source]

Fill dictionary of priors based on required parameters of likelihood

Any floats in the prior will be converted to delta function priors. Any required, non-specified parameters will use the default.

Note: if the likelihood has non_standard_sampling_parameter_keys, then this will set up default priors for those as well.

Parameters:
likelihood: bilby.likelihood.GravitationalWaveTransient instance

Used to infer the set of parameters to fill the prior with

default_priors_file: str, optional

If given, a file containing the default priors.

Returns:
prior: dict

The filled prior dictionary

from_file(filename)[source]

Reads in a prior from a file specification

Parameters:
filename: str

Name of the file to be read in

Notes

Lines beginning with ‘#’ or empty lines will be ignored. Priors can be loaded from:

  • bilby.core.prior as, e.g., foo = Uniform(minimum=0, maximum=1)

  • floats, e.g., foo = 1

  • bilby.gw.prior as, e.g., foo = bilby.gw.prior.AlignedSpin()

  • other external modules, e.g., foo = my.module.CustomPrior(...)

classmethod from_json(filename)[source]

Reads in a prior from a json file

Parameters:
filename: str

Name of the file to be read in

fromkeys(iterable, value=None, /)

Create a new dictionary with keys from iterable and values set to value.

get(key, default=None, /)

Return the value for key if key is in the dictionary, else default.

get_required_variables(key)[source]

Returns the required variables to sample a given conditional key.

Parameters:
key: str

Name of the key that we want to know the required variables for

Returns:
dict: key/value pairs of the required variables
property intrinsic

Return true if priors include any intrinsic parameters

is_nonempty_intersection(pset)[source]

Check if keys in self exist in the parameter set

Parameters:
pset: str, set

Either a string referencing a parameter set in PARAMETER_SETS or a set of keys

items() → a set-like object providing a view on D's items
keys() → a set-like object providing a view on D's keys
ln_prob(sample, axis=None, normalized=True)[source]
Parameters:
sample: dict

Dictionary of the samples for which to evaluate the log probability

axis: Union[None, int]

Axis along which the summation is performed

normalized: bool

When False, the constraint normalization factor is not calculated during the prior probability computation. Default is True.

Returns:
float: Joint log probability of all the individual sample probabilities
property mass

Return true if priors include any mass parameters

property measured_spin

Return true if priors include any measured_spin parameters

property minimum_component_mass

The minimum component mass allowed for the prior dictionary.

This property requires either:

  • a prior for mass_2

  • priors for chirp_mass and mass_ratio

Returns:
mass_2: float

The minimum allowed component mass.

property phase

Return true if priors include phase parameters

pop(key, default=<unrepresentable>, /)

If the key is not found, return the default if given; otherwise, raise a KeyError.

popitem(/)

Remove and return a (key, value) pair as a 2-tuple.

Pairs are returned in LIFO (last-in, first-out) order. Raises KeyError if the dict is empty.

property precession

Return true if priors include any precession parameters

prob(sample, **kwargs)[source]
Parameters:
sample: dict

Dictionary of the samples for which to evaluate the probability

kwargs:

The keyword arguments are passed directly to np.prod

Returns:
float: Joint probability of all individual sample probabilities
rescale(keys, theta)[source]

Rescale samples from unit cube to prior

Parameters:
keys: list

List of prior keys to be rescaled

theta: list

List of randomly drawn values on a unit cube associated with the prior keys

Returns:
list: List of floats containing the rescaled sample
sample(size=None)[source]

Draw samples from the prior set

Parameters:
size: int or tuple of ints, optional

See numpy.random.uniform docs

Returns:
dict: Dictionary of the samples
sample_subset(keys=<list_iterator object>, size=None)[source]

Draw samples from the prior set for parameters which are not a DeltaFunction

Parameters:
keys: list

List of prior keys to draw samples from

size: int or tuple of ints, optional

See numpy.random.uniform docs

Returns:
dict: Dictionary of the drawn samples
sample_subset_constrained_as_array(keys=<list_iterator object>, size=None)[source]

Return an array of samples

Parameters:
keys: list

A list of keys to sample in

size: int

The number of samples to draw

Returns:
array: array_like

An array of shape (len(keys), size) of the samples (ordered by keys)

setdefault(key, default=None, /)

Insert key with a value of default if key is not in the dictionary.

Return the value for key if key is in the dictionary, else default.

property sky

Return true if priors include any sky location parameters

property spin

Return true if priors include any spin parameters

test_has_redundant_keys()[source]

Test whether there are redundant keys in self.

Returns:
bool: Whether there are redundancies or not
test_redundancy(key, disable_logging=False)[source]

Empty redundancy test; should be overwritten in subclasses

to_file(outdir, label)[source]

Write the prior distribution to file.

Parameters:
outdir: str

Output directory name

label: str

Output file naming scheme

update([E, ]**F) → None. Update D from dict/iterable E and F.

If E is present and has a .keys() method: for k in E: D[k] = E[k]. If E is present and lacks a .keys() method: for k, v in E: D[k] = v. In either case, this is followed by: for k in F: D[k] = F[k].

validate_prior(duration, minimum_frequency, N=1000, error=True, warning=False)[source]

Validate that the prior is suitable for use

Parameters:
duration: float

The data duration in seconds

minimum_frequency: float

The minimum frequency in Hz of the analysis

N: int

The number of samples to draw when checking

error: bool

Whether to raise a ValueError on failure.

warning: bool

Whether to log a warning on failure.

Returns:
bool: Whether the template will fit within the segment duration
values() → an object providing a view on D's values