# Time-energy fit

3ML can model a time-varying source by explicitly fitting the time-dependent part of the model. Let’s see this with an example.

First we import what we need:

[1]:
import warnings

warnings.simplefilter("ignore")
import numpy as np

np.seterr(all="ignore")
[1]:
{'divide': 'warn', 'over': 'warn', 'under': 'ignore', 'invalid': 'warn'}
[2]:
%%capture
import matplotlib.pyplot as plt

# import matplotlib.animation as animation

from threeML import *
from threeML.io.package_data import get_path_of_data_file
[3]:
from jupyterthemes import jtplot

%matplotlib inline
jtplot.style(context="talk", fscale=1, ticks=True, grid=False)
set_threeML_style()
silence_warnings()

## Generating the datasets

Then we generate a simulated dataset for a source with a cutoff powerlaw spectrum with a constant photon index and cutoff but with a normalization that changes with time following a powerlaw:

[4]:
def generate_one(K):

    # Let's generate some data with y = Cutoff_powerlaw(x)

    gen_function = Cutoff_powerlaw()
    gen_function.K = K

    # Generate a dataset using the cutoff power law, and a
    # constant 30% error

    x = np.logspace(0, 2, 50)

    xyl_generator = XYLike.from_function(
        "sim_data", function=gen_function, x=x, yerr=0.3 * gen_function(x)
    )

    y = xyl_generator.y
    y_err = xyl_generator.yerr

    return x, y, y_err
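Under the hood, `XYLike.from_function` scatters the data around the model with Gaussian deviates whose standard deviation is the supplied `yerr`. A plain-numpy sketch of the same simulation (the functional form matches astromodels' `Cutoff_powerlaw` with pivot fixed at 1; the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)  # arbitrary seed, for reproducibility

def cutoff_powerlaw(x, K=1.0, index=-2.0, xc=10.0, piv=1.0):
    # Same functional form as astromodels' Cutoff_powerlaw with piv = 1:
    # K * (x / piv) ** index * exp(-x / xc)
    return K * (x / piv) ** index * np.exp(-x / xc)

x = np.logspace(0, 2, 50)
y_true = cutoff_powerlaw(x, K=0.23)

y_err = 0.3 * y_true                                   # constant 30% fractional error
y_obs = y_true + y_err * rng.standard_normal(x.size)   # Gaussian scatter around the model
```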

These are the times at which the simulated spectra have been observed:

[5]:
time_tags = np.array([1.0, 2.0, 5.0, 10.0])

This describes the time-varying normalization. If everything works as it should, the fit should recover a normalization of 0.23 and an index of -3.5 for the time law.

[6]:
normalizations = 0.23 * time_tags ** (-3.5)
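Evaluated at the four time tags, the true time law K(t) = 0.23 t^-3.5 gives normalizations spanning almost four decades (plain numpy, using the values from the simulation setup above):

```python
import numpy as np

time_tags = np.array([1.0, 2.0, 5.0, 10.0])

# True time law from the simulation setup: K(t) = 0.23 * t ** -3.5
normalizations = 0.23 * time_tags ** (-3.5)

for t, K in zip(time_tags, normalizations):
    print(f"t = {t:4.1f} s  ->  K = {K:.3e}")
```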

Now that we have a simple function to create the datasets, let’s build them.

[7]:

datasets = [generate_one(k) for k in normalizations]

x = np.logspace(0, 2, 50)

fig, ax = plt.subplots()

for k in normalizations:

    gen_function = Cutoff_powerlaw()
    gen_function.K = k

    ax.loglog(x, gen_function(x))

ax.set_xlabel("Energy")
ax.set_ylabel("Flux")
19:46:41 INFO      Using Gaussian statistic (equivalent to chi^2) with the provided errors.            XYLike.py:89
[7]:
Text(0, 0.5, 'Flux')

## Setup the model

Now let’s set up the model and fit it. First we need to tell 3ML that we are going to fit using an independent variable (time in this case). We initialize it to 1.0 and set its unit to seconds.

[8]:
time = IndependentVariable("time", 1.0, u.s)

Then we load the data that we have generated, tagging them with their time of observation.

[9]:

plugins = []

for i, dataset in enumerate(datasets):

    x, y, y_err = dataset

    xyl = XYLike("data%i" % i, x, y, y_err)

    # This is the important part: we need to tag the instance of the
    # plugin so that 3ML will know that this instance corresponds to the
    # given tag (a time coordinate in this case). If instead of giving
    # one time coordinate we give two time coordinates, then 3ML will
    # take the average of the model between the two time coordinates
    # (computed as the integral of the model between t1 and t2 divided
    # by t2-t1)

    xyl.tag = (time, time_tags[i])

    # To access the tag we have just set we can use:

    independent_variable, start, end = xyl.tag

    # NOTE: xyl.tag will return 3 things: the independent variable, the start and the
    # end. If like in this case you do not specify an end when assigning the tag, end
    # will be None

    plugins.append(xyl)
INFO      Using Gaussian statistic (equivalent to chi^2) with the provided errors.            XYLike.py:89
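As noted in the comment above, tagging with two time coordinates makes 3ML use the time-averaged model over that interval. For a power-law time law the average has a closed form, which we can check numerically (a plain numpy/scipy sketch using the simulation's true values; this is not 3ML API):

```python
import numpy as np
from scipy.integrate import quad

# True time law from the simulation: K(t) = K0 * t ** index
K0, index = 0.23, -3.5

def time_law(t):
    return K0 * t ** index

t1, t2 = 2.0, 5.0

# Numerical average: integral of the model between t1 and t2, divided by (t2 - t1)
avg_numeric, _ = quad(time_law, t1, t2)
avg_numeric /= t2 - t1

# Closed form for a power law (valid for index != -1)
avg_closed = K0 * (t2 ** (index + 1) - t1 ** (index + 1)) / ((index + 1) * (t2 - t1))

print(avg_numeric, avg_closed)
```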

Generate the DataList as usual:

[10]:
data = DataList(*plugins)

Now let’s generate the spectral model, in this case a point source with a cutoff powerlaw spectrum.

[11]:
spectrum = Cutoff_powerlaw()

src = PointSource("test", ra=0.0, dec=0.0, spectral_shape=spectrum)

model = Model(src)

Now we need to tell 3ML that we are going to use the time coordinate to specify a time dependence for some of the parameters of the model.

[12]:
model.add_independent_variable(time)
Now let’s specify the time dependence (a powerlaw) for the normalization of the cutoff powerlaw spectrum.

[13]:
time_po = Powerlaw()
time_po.K.bounds = (0.01, 1000)

Link the normalization of the cutoff powerlaw spectrum with time through the time law we have just generated.

[14]:
model.link(spectrum.K, time, time_po)
model
[14]:
Model summary:

                   N
Point sources      1
Extended sources   0
Particle sources   0

Free parameters (4):

                                                     value  min_value  max_value  unit
test.spectrum.main.Cutoff_powerlaw.K.Powerlaw.K      1.0    0.01       1000.0     keV-1 s-1 cm-2
test.spectrum.main.Cutoff_powerlaw.K.Powerlaw.index  -2.01  -10.0      10.0
test.spectrum.main.Cutoff_powerlaw.index             -2.0   -10.0      10.0
test.spectrum.main.Cutoff_powerlaw.xc                10.0   1.0        None       keV

Fixed parameters (4):
(abridged. Use complete=True to see all fixed parameters)

Properties (0):

(none)

Linked parameters (1):

test.spectrum.main.Cutoff_powerlaw.K
  current value  1.0
  function       Powerlaw
  unit           1 / (cm2 keV s)

Independent variables:

time
  current value  1.0
  unit           s

## Performing the fit

[15]:
jl = JointLikelihood(model, data)

best_fit_parameters, likelihood_values = jl.fit()
19:46:44 INFO      set the minimizer to minuit                                             joint_likelihood.py:1042
Best fit values:

parameter                                            result                      unit
test.spectrum.main.Cutoff_powerlaw.K.Powerlaw.K      (2.18 -0.12 +0.13) x 10^-1  1 / (cm2 keV s)
test.spectrum.main.Cutoff_powerlaw.K.Powerlaw.index  -3.488 +/- 0.023
test.spectrum.main.Cutoff_powerlaw.index             -1.944 +/- 0.032
test.spectrum.main.Cutoff_powerlaw.xc                9.76 -0.15 +0.16            keV

Correlation matrix:

 1     -0.48  -0.74   0.49
-0.48   1     -0      0.01
-0.74  -0      1     -0.87
 0.49   0.01  -0.87   1

Values of -log(likelihood) at the minimum:

       -log(likelihood)
data0  25.960539
data1  23.586684
data2  29.731722
data3  22.843985
total  102.122930

Values of statistical measures:

     statistical measures
AIC  212.450989
BIC  225.439130
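As a sanity check, the information criteria in the table can be reproduced from the total -log(likelihood), the 4 free parameters and the 200 data points (4 datasets of 50 points each); 3ML reports the small-sample corrected AIC (AICc):

```python
import numpy as np

neg_log_like = 102.122930  # total -log(likelihood) from the table above
k = 4                      # number of free parameters
n = 4 * 50                 # data points: 4 datasets of 50 points each

# Small-sample corrected AIC (AICc)
aic = 2 * k + 2 * neg_log_like + 2 * k * (k + 1) / (n - k - 1)

bic = 2 * neg_log_like + k * np.log(n)

print(f"AIC = {aic:.6f}, BIC = {bic:.6f}")
```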
[16]:
for p in plugins:

    _ = p.plot(x_scale="log", y_scale="log")