bilby.bilby_mcmc.sampler.Bilby_MCMC

class bilby.bilby_mcmc.sampler.Bilby_MCMC(likelihood, priors, outdir='outdir', label='label', use_ratio=False, skip_import_verification=True, check_point_plot=True, diagnostic=False, resume=True, exit_code=130, verbose=True, normalize_prior=True, **kwargs)[source]

Bases: MCMCSampler

The built-in Bilby MCMC sampler

Parameters:
likelihood: likelihood.Likelihood

An object with a log_l method

priors: bilby.core.prior.PriorDict, dict

Priors to be used in the search. This has attributes for each parameter to be sampled.

outdir: str, optional

Name of the output directory

label: str, optional

Naming scheme of the output files

use_ratio: bool, optional

Switch to set whether to use the log-likelihood ratio or just the log-likelihood

skip_import_verification: bool

If true, skip the check that the sampler is installed. This is only advisable for testing environments

check_point_plot: bool

If true, create plots at the check point

check_point_delta_t: float

The time in seconds after which to checkpoint (defaults to 30 minutes)

diagnostic: bool

If true, create deep-diagnostic plots used for checking convergence problems.

resume: bool

If true, resume from any existing check point files

exit_code: int

The exit code raised if the sampler exits early

nsamples: int (1000)

The number of samples to draw

nensemble: int (1)

The number of ensemble-chains to run (with periodic communication)

pt_ensemble: bool (False)

If true, run a parallel-tempered set of chains for each ensemble-chain (in which case the total number of chains is nensemble * ntemps). Otherwise, only the zero-ensemble chain is parallel-tempered (in which case the total number of chains is nensemble + ntemps - 1).

ntemps: int (1)

The number of parallel-tempered chains to run

Tmax: float, (None)

If given, the maximum temperature used to set the initial temperature ladder

Tmax_from_SNR: float (20)

(Alternative to Tmax): The SNR to estimate an appropriate Tmax from.

initial_betas: list (None)

(Alternative to Tmax and Tmax_from_SNR): If given, an initial choice of the inverse temperature ladder.

pt_rejection_sample: bool (False)

If true, use rejection sampling to draw samples from the pt-chains.

adapt, adapt_t0, adapt_nu: bool, float, float (True, 100, 10)

Whether to use adaptation and the adaptation parameters. See arXiv:1501.05823 for a description of adapt_t0 and adapt_nu.

burn_in_nact, thin_by_nact, fixed_discard: float, float, float (10, 1, 0)

The number of auto-correlation times to discard for burn-in and to thin by. fixed_discard is the number of steps discarded before the automatic autocorrelation time analysis begins.

autocorr_c: float (5)

The step-size for the window search. See emcee.autocorr.integrated_time for additional details.

L1steps: int

The number of internal steps to take. Improves the scaling performance of multiprocessing. Note, all ACTs are calculated based on the saved steps. So, the total ACT (or number of steps) is L1steps * tau (or L1steps * position).

L2steps: int

The number of steps to take before swapping between parallel-tempered and ensemble chains.

npool: int

The number of multiprocessing cores to use. For efficiency, the total number of chains should be an integer multiple of npool.

printdt: float

Print an update on the progress every printdt seconds. Note, each print requires an evaluation of the ACT, so short print times are unwise.

min_tau: int (1)

The minimum allowed ACT. Can be used to force a larger ACT.

proposal_cycle: str, bilby.core.sampler.bilby_mcmc.proposals.ProposalCycle

Either a string pointing to one of the built-in proposal cycles, or a proposal cycle object.

stop_after_convergence: bool

If running with parallel-tempered chains, stop updating the chains once they have converged. After this time, random samples will be drawn at swap time.

fixed_tau: int

A fixed value for the ACT: used for testing purposes.

tau_window: int, None

Using tau’, the previous estimate of tau, calculate the new tau from the last tau_window * tau’ steps. If None, the entire chain is used.

evidence_method: str, [stepping_stone, thermodynamic]

The evidence calculation method to use. Defaults to stepping_stone, but the results of all available methods are stored in the ln_z_dict.

initial_sample_method: str

Method to draw the initial sample. Either “prior” (a random draw from the prior) or “maximize” (use an optimization approach to attempt to find the maximum posterior estimate).

initial_sample_dict: dict

A dictionary of initial sample values. Entries given here overwrite the corresponding values of the initial sample drawn using initial_sample_method; the dictionary may be incomplete.

normalize_prior: bool

When False, disables calculation of the constraint normalization factor during prior probability computation. Defaults to True.

verbose: bool

Whether to print diagnostic output during the run.
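
Example: a minimal usage sketch with a toy Gaussian likelihood and a single uniform prior (the likelihood, data, and parameter values below are illustrative stand-ins; the sampler-specific keyword arguments are drawn from the list above and passed through **kwargs):

    import bilby
    import numpy as np

    # Toy data and model: constant mean with known Gaussian noise
    x = np.linspace(0, 1, 100)
    y = np.random.normal(0.5, 0.1, size=100)
    likelihood = bilby.core.likelihood.GaussianLikelihood(
        x=x, y=y, func=lambda x, mu: mu * np.ones_like(x), sigma=0.1
    )
    priors = dict(mu=bilby.core.prior.Uniform(0, 1, "mu"))

    result = bilby.run_sampler(
        likelihood=likelihood,
        priors=priors,
        sampler="bilby_mcmc",
        outdir="outdir",
        label="label",
        nsamples=1000,  # number of samples to draw
        ntemps=4,       # number of parallel-tempered chains
        npool=1,        # multiprocessing cores
    )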

__init__(likelihood, priors, outdir='outdir', label='label', use_ratio=False, skip_import_verification=True, check_point_plot=True, diagnostic=False, resume=True, exit_code=130, verbose=True, normalize_prior=True, **kwargs)[source]
__call__(*args, **kwargs)

Call self as a function.

Methods

__init__(likelihood, priors[, outdir, ...])

add_data_to_result(result, ptsampler, ...)

calc_likelihood_count()

calculate_autocorrelation(samples[, c])

Uses the emcee.autocorr module to estimate the autocorrelation

check_draw(theta[, warning])

Checks if the draw will generate an infinite prior or likelihood

check_point([ignore_time])

draw()

get_expected_outputs([outdir, label])

Get lists of the expected outputs directories and files.

get_initial_points_from_prior([npoints])

Method to draw a set of live points from the prior

get_random_draw_from_prior()

Get a random draw from the prior distribution

get_setup_string()

init_ptsampler()

log_likelihood(theta)

log_prior(theta)

plot_progress(ptsampler, label, outdir, priors)

print_ensemble_acceptance()

print_long_progress()

print_nburn_logging_info()

Prints logging info as to how nburn was calculated

print_per_proposal()

print_progress()

print_pt_acceptance()

print_tau_dict()

prior_transform(theta)

Prior transform method that is passed into the external sampler.

read_current_state()

Read the existing resume file

run_sampler(*args, **kwargs)

A template method to run in subclasses

setup_chain_set()

verify_configuration()

write_current_state()

write_current_state_and_exit([signum, frame])

Make sure that if a pool of jobs is running only the parent tries to checkpoint and exit.

Attributes

abbreviation

check_point_equiv_kwargs

constraint_parameter_keys

list: List of parameters providing prior constraints

default_kwargs

external_sampler_name

fixed_parameter_keys

list: List of parameter keys that are not being sampled

hard_exit

kwargs

dict: Container for the kwargs.

nburn_equiv_kwargs

ndim

int: Number of dimensions of the search parameter space

npool

npool_equiv_kwargs

nwalkers_equiv_kwargs

sampler_name

sampling_seed_equiv_kwargs

sampling_seed_key

Name of keyword argument for setting the sampling seed for the specific sampler.

search_parameter_keys

list: List of parameter keys that are being sampled

target_nsamples

calculate_autocorrelation(samples, c=3)[source]

Uses the emcee.autocorr module to estimate the autocorrelation

Parameters:
samples: array_like

A chain of samples.

c: float

The minimum number of autocorrelation times needed to trust the estimate (default: 3). See emcee.autocorr.integrated_time.
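
For reference, a short sketch of the underlying emcee call this method relies on (illustrative only; the chain here is a random stand-in):

    import numpy as np
    from emcee.autocorr import integrated_time

    chain = np.random.normal(size=5000)  # stand-in chain of samples
    tau = integrated_time(chain, c=3)    # autocorrelation-time estimate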

check_draw(theta, warning=True)[source]

Checks if the draw will generate an infinite prior or likelihood

Also catches the output of numpy.nan_to_num.

Parameters:
theta: array_like

Parameter values at which to evaluate likelihood

warning: bool

Whether or not to print a warning

Returns:
bool

True if the likelihood and prior are finite, false otherwise

property constraint_parameter_keys

list: List of parameters providing prior constraints

property fixed_parameter_keys

list: List of parameter keys that are not being sampled

classmethod get_expected_outputs(outdir=None, label=None)[source]

Get lists of the expected outputs directories and files.

These are used by bilby_pipe when transferring files via HTCondor.

Parameters:
outdir: str

The output directory.

label: str

The label for the run.

Returns:
list

List of file names.

list

List of directory names. Will always be empty for bilby_mcmc.
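
A brief usage sketch (this is a classmethod, so no sampler instance is required; the exact file names returned depend on the bilby version):

    from bilby.bilby_mcmc.sampler import Bilby_MCMC

    filenames, directories = Bilby_MCMC.get_expected_outputs(outdir="outdir", label="label")
    print(filenames)    # expected checkpoint/resume file path(s)
    print(directories)  # always empty for bilby_mcmc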

get_initial_points_from_prior(npoints=1)[source]

Method to draw a set of live points from the prior

This iterates over draws from the prior until all the samples have a finite prior and likelihood (relevant for constrained priors).

Parameters:
npoints: int

The number of values to return

Returns:
unit_cube, parameters, likelihood: tuple of array_like

unit_cube (nlive, ndim) is an array of the prior samples from the unit cube, parameters (nlive, ndim) is the unit_cube array transformed to the target space, while likelihood (nlive) are the likelihood evaluations.
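
A short sketch (assuming sampler is an already-instantiated Bilby_MCMC object):

    # Draw 10 initial points with finite prior and likelihood
    unit_cube, parameters, likelihood = sampler.get_initial_points_from_prior(npoints=10)
    # unit_cube and parameters have shape (10, sampler.ndim); likelihood has length 10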

get_random_draw_from_prior()[source]

Get a random draw from the prior distribution

Returns:
draw: array_like

An ndim-length array of values drawn from the prior. Parameters with delta-function (or fixed) priors are not returned

property kwargs

dict: Container for the kwargs. Has more sophisticated logic in subclasses

log_likelihood(theta)[source]
Parameters:
theta: list

List of values for the likelihood parameters

Returns:
float: Log-likelihood or log-likelihood-ratio given the current likelihood.parameter values

log_prior(theta)[source]
Parameters:
theta: list

List of values for the prior parameters

Returns:
float: Joint ln prior probability of theta

property ndim

int: Number of dimensions of the search parameter space

print_nburn_logging_info()[source]

Prints logging info as to how nburn was calculated

prior_transform(theta)[source]

Prior transform method that is passed into the external sampler.

Parameters:
theta: list

List of sampled values on a unit interval

Returns:
list: Properly rescaled sampled values
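
A short sketch (assuming sampler is an already-instantiated Bilby_MCMC object):

    # Map the centre of the unit hypercube to the physical parameter space
    unit_point = [0.5] * sampler.ndim
    physical_point = sampler.prior_transform(unit_point)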
read_current_state()[source]

Read the existing resume file

Returns:
success: boolean

If true, resume file was successfully loaded, otherwise false

run_sampler(*args, **kwargs)[source]

A template method to run in subclasses

sampling_seed_key = None

Name of keyword argument for setting the sampling seed for the specific sampler. If a specific sampler does not have a sampling seed option, then it should be left as None.

property search_parameter_keys

list: List of parameter keys that are being sampled

write_current_state_and_exit(signum=None, frame=None)[source]

Make sure that if a pool of jobs is running only the parent tries to checkpoint and exit. Only the parent has a ‘pool’ attribute.

Samplers that must hard exit (typically because they run non-Python processes) use os._exit, which cannot be intercepted. For other samplers, the exit can be caught as a SystemExit.
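
For illustration, a hedged sketch of how such a handler can be attached with the standard signal module (shown only to clarify the (signum, frame) signal-handler signature; bilby typically registers handlers like this itself):

    import signal

    # Checkpoint and exit cleanly on interruption or termination
    signal.signal(signal.SIGINT, sampler.write_current_state_and_exit)
    signal.signal(signal.SIGTERM, sampler.write_current_state_and_exit)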