threeML.bayesian.nautilus_sampler module

class threeML.bayesian.nautilus_sampler.NautilusSampler(likelihood_model=None, data_list=None, **kwargs)[source]

Bases: UnitCubeSampler

sample(quiet=False)[source]

sample using the nautilus importance nested sampling algorithm

Returns:
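A minimal usage sketch, assuming the standard threeML workflow in which the sampler is selected by name on a BayesianAnalysis object (model and data_list are an astromodels Model with priors set and a threeML DataList, assumed to already exist):

    from threeML import BayesianAnalysis

    # model: astromodels Model with priors on all free parameters
    # data_list: threeML DataList of plugin instances
    bayes = BayesianAnalysis(model, data_list)

    # select this sampler by name
    bayes.set_sampler("nautilus")

    # run the sampler; quiet=True suppresses progress output
    bayes.sample(quiet=True)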

setup(n_live: int = 2000, n_update: int | None = None, enlarge_per_dim: float = 1.1, n_points_min: int | None = None, split_threshold: int = 100, n_networks: int = 4, neural_network_kwargs: Dict[str, Any] = {}, prior_args: List[Any] = [], prior_kwargs: Dict[str, Any] = {}, likelihood_args: List[Any] = [], likelihood_kwargs: Dict[str, Any] = {}, n_batch: int = 100, n_like_new_bound: int | None = None, vectorized: bool = False, pass_dict: bool | None = None, pool: int | None = None, seed: int | None = None, filepath: str | None = None, resume: bool = True, f_live: float = 0.01, n_shell: int | None = None, n_eff: int = 10000, discard_exploration: bool = False, verbose: bool = False)[source]

Set up the nautilus sampler.

See: https://nautilus-sampler.readthedocs.io/en/stable/index.html

Parameters:
  • n_live (int) – Number of so-called live points. New bounds are constructed so that they encompass the live points. Default is 2000.

  • n_update (Optional[int]) – The maximum number of additions to the live set before a new bound is created. If None, use n_live. Default is None.

  • enlarge_per_dim (float) – Along each dimension, outer ellipsoidal bounds are enlarged by this factor. Default is 1.1.

  • n_points_min (Optional[int]) – The minimum number of points each ellipsoid should have. Effectively, ellipsoids with fewer than twice that number will not be split further. If None, uses n_points_min = n_dim + 50. Default is None.

  • split_threshold (int) – Threshold used for splitting the multi-ellipsoidal bound used for sampling. If the volume of the bound prior to enlarging is larger than split_threshold times the target volume, the multi-ellipsoidal bound is split further, if possible. Default is 100.

  • n_networks (int) – Number of networks used in the estimator. Default is 4.

  • neural_network_kwargs (Dict[str, Any]) – Non-default keyword arguments passed to the constructor of MLPRegressor.

  • prior_args (List[Any]) – List of extra positional arguments for prior. Only used if prior is a function.

  • prior_kwargs (Dict[str, Any]) – Dictionary of extra keyword arguments for prior. Only used if prior is a function.

  • likelihood_args (List[Any]) – List of extra positional arguments for likelihood.

  • likelihood_kwargs (Dict[str, Any]) – Dictionary of extra keyword arguments for likelihood.

  • n_batch (int) – Number of likelihood evaluations that are performed at each step. If likelihood evaluations are parallelized, this should be a multiple of the number of parallel processes. Very large numbers can lead to new bounds being created long after n_update additions to the live set have been achieved. This will not cause any bias but could reduce efficiency. Default is 100.

  • n_like_new_bound (Optional[int]) – The maximum number of likelihood calls before a new bound is created. If None, use 10 times n_live. Default is None.

  • vectorized (bool) – If True, the likelihood function can receive multiple input sets at once. For example, if the likelihood function receives arrays, it should be able to take an array with shape (n_points, n_dim) and return an array with shape (n_points,). Similarly, if the likelihood function accepts dictionaries, it should be able to process dictionaries where each value is an array with shape (n_points,). Default is False.

  • pass_dict (Optional[bool]) – If True, the likelihood function expects model parameters as dictionaries. If False, it expects regular numpy arrays. Default is to set it to True if prior was a nautilus.Prior instance and False otherwise.

  • pool (Optional[int]) – Pool used for parallelization of likelihood calls and sampler calculations. If None, no parallelization is performed. If an integer, the sampler will use a multiprocessing.Pool object with the specified number of processes. Finally, if a tuple is passed, the first element specifies the pool used for likelihood calls and the second the pool used for sampler calculations. Default is None.

  • seed (Optional[int]) – Seed for random number generation used for reproducible results across different runs. If None, results are not reproducible. Default is None.

  • filepath (Optional[str]) – Path to the file where results are saved. Must have a ‘.h5’ or ‘.hdf5’ extension. If None, no results are written. Default is None.

  • resume (bool) – If True, resume from previous run if filepath exists. If False, start from scratch and overwrite any previous file. Default is True.

  • f_live (float) – Maximum fraction of the evidence contained in the live set before building the initial shells terminates. Default is 0.01.

  • n_shell (Optional[int]) – Minimum number of points in each shell. The algorithm will sample from the shells until this is reached. Default is the batch size of the sampler, which is 100 unless otherwise specified.

  • n_eff (int) – Minimum effective sample size. The algorithm will sample from the shells until this is reached. Default is 10000.

  • discard_exploration (bool) – Whether to discard points drawn in the exploration phase. This is required for a fully unbiased posterior and evidence estimate. Default is False.

  • verbose (bool) – If True, print additional information. Default is False.

Returns:
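A hedged sketch of a setup() call that enables checkpointing and parallel likelihood evaluation; the file name and process count are placeholders, and the keyword arguments are those documented above:

    # configure the sampler before calling sample()
    bayes.sampler.setup(
        n_live=2000,                 # number of live points
        n_networks=4,                # size of the neural network ensemble
        pool=4,                      # placeholder: 4 worker processes for likelihood calls
        filepath="nautilus_run.h5",  # placeholder checkpoint file (.h5 or .hdf5)
        resume=True,                 # continue from the checkpoint if it exists
        discard_exploration=True,    # discard exploration points for unbiased estimates
        verbose=True,
    )
    bayes.sample()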

threeML.bayesian.nautilus_sampler.capture_arguments(func, *args, **kwargs)[source]
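The body of this helper is not reproduced here; a plausible minimal sketch, assuming it simply binds the supplied arguments to func's signature and returns them as a name-to-value dictionary:

    import inspect

    def capture_arguments(func, *args, **kwargs):
        # Bind positional and keyword arguments to func's signature,
        # fill in defaults, and return a parameter name -> value mapping.
        bound = inspect.signature(func).bind(*args, **kwargs)
        bound.apply_defaults()
        return dict(bound.arguments)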