pax package

Submodules

pax.FolderIO module

pax.InterpolatingMap module

class pax.InterpolatingMap.InterpolateAndExtrapolate(points, values, neighbours_to_use=None)[source]

Bases: object

Linearly interpolate or extrapolate between the nearest N points. We needed to roll our own because scipy's linear Nd interpolator refuses to extrapolate.
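
A minimal sketch of direct use. This assumes instances are callable on an (n_points, n_dims) array, as with scipy's interpolators; within pax this class is normally used internally by InterpolatingMap:

    import numpy as np
    from pax.InterpolatingMap import InterpolateAndExtrapolate

    # Four known points on the unit square and their values
    points = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    values = np.array([1., 2., 3., 4.])

    itp = InterpolateAndExtrapolate(points, values)
    # Unlike scipy.interpolate.LinearNDInterpolator, this also returns
    # values for positions outside the convex hull of the known points:
    print(itp(np.array([[0.5, 0.5], [2., 2.]])))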

class pax.InterpolatingMap.InterpolatingMap(filename)[source]

Bases: object

Constructs a scalar function using linear interpolation, weighted by Euclidean distance.

The map must be specified as a json translating to a dictionary like this:
    'coordinate_system' : [[x1, y1], [x2, y2], [x3, y3], [x4, y4], ...],
    'map' : [value1, value2, value3, value4, ...],
    'another_map' : idem,
    'name' : 'Nice file with maps',
    'description' : 'Say what the maps are, who you are, your favorite food, etc',
    'timestamp' : unix epoch seconds timestamp

with the straightforward generalization to 1d and 3d. The default map name is 'map'; I'd recommend you use that.

For a 0d placeholder map, use:

    'points' : [],
    'map' : 42,
    etc.

The json can be gzip compressed: if so, it must have a .gz extension.

See also examples/generate_mock_correction_map.py

data_field_names = ['timestamp', 'description', 'coordinate_system', 'name', 'irregular']
get_value(*coordinates, **kwargs)[source]

Returns the value of the map at the position given by coordinates. Keyword arguments:

  • map_name: Name of the map to use. By default: ‘map’.
get_value_at(position, map_name='map')[source]

Returns the value of the map map_name at position, a pax.datastructure.ReconstructedPosition instance.
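
For example, a minimal round trip through a toy 2d map (the file name and values are hypothetical; the file follows the format described above):

    import json
    from pax.InterpolatingMap import InterpolatingMap

    # Write a tiny 2d correction map
    with open('toy_map.json', mode='w') as f:
        json.dump({'name': 'toy',
                   'description': 'example correction map',
                   'timestamp': 0,
                   'coordinate_system': [[0, 0], [0, 1], [1, 0], [1, 1]],
                   'map': [1.0, 2.0, 3.0, 4.0]}, f)

    m = InterpolatingMap('toy_map.json')
    print(m.get_value(0.5, 0.5))                  # default map name 'map'
    print(m.get_value(0.5, 0.5, map_name='map'))  # equivalent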

pax.MongoDB_ClientMaker module

class pax.MongoDB_ClientMaker.ClientMaker(config)[source]

Bases: object

Helper class to create MongoDB clients

On __init__, you can specify options that will be used to format MongoDB URIs, in particular user, password, host, and port.

get_client(database_name=None, uri=None, monary=False, host=None, autoreconnect=False, **kwargs)[source]

Get a MongoClient. Returns a Mongo database object. If you provide a MongoDB connection string uri, we will insert the user & password into it; otherwise one will be built from the configuration settings. If database_name=None, we connect to the default database of the uri. database_name=something overrides even the uri's specification of a database. host is special magic for split_hosts. Remaining kwargs will be passed to pymongo.MongoClient / Monary.
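
A usage sketch; the configuration keys shown are assumptions mirroring the URI components named above (user, password, host, port):

    from pax.MongoDB_ClientMaker import ClientMaker

    cm = ClientMaker({'user': 'pax_user',   # hypothetical credentials
                      'password': 'secret',
                      'host': 'localhost',
                      'port': 27017})
    db = cm.get_client(database_name='pax_runs')  # Mongo database object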

pax.MongoDB_ClientMaker.MongoProxy(x, **kwargs)
class pax.MongoDB_ClientMaker.PersistentRunsDBConnection(clientmaker_config)[source]

Bases: object

Helper class for maintaining a persistent connection to the XENON1T runs database

check()[source]

Checks that the runs db connection we currently have is alive. If not, we keep trying to re-acquire it indefinitely.

pax.MongoDB_ClientMaker.dummy(x, **kwargs)[source]
pax.MongoDB_ClientMaker.parse_passwordless_uri(uri)[source]

Return host, port, database_name

pax.PatternFitter module

pax.configuration module

pax.configuration.combine_configs(*args)[source]

Combines a series of configuration dictionaries; later ones have higher priority. Each argument must be a pax configuration dictionary, i.e. have at most one level of sections.
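
For instance (section and key names are illustrative):

    from pax.configuration import combine_configs

    base = {'pax': {'n_cpus': 1}}
    override = {'pax': {'n_cpus': 4}}
    merged = combine_configs(base, override)
    # merged['pax']['n_cpus'] == 4: the later dictionary has higher priority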

pax.configuration.fix_sections_from_mongo(config)[source]

Returns the configuration with | replaced by . in section keys. Needed because . in field names has special meaning in MongoDB.
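
For example (the section name is illustrative):

    from pax.configuration import fix_sections_from_mongo

    config = fix_sections_from_mongo({'pax|plugins': {'some_key': 1}})
    # config == {'pax.plugins': {'some_key': 1}}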

pax.configuration.load_configuration(config_names=(), config_paths=(), config_string=None, config_dict=None, maybe_call_mongo=False)[source]

Load pax configuration using configuration data. See the docstring of Processor for more info.

:param maybe_call_mongo: if True, at the end of loading the config (but before applying config_dict)
:return: nested dictionary of evaluated configuration values; use as config[section][key].
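
A usage sketch, assuming 'XENON1T' is one of the named configurations shipped with pax and using an illustrative override key:

    from pax.configuration import load_configuration

    config = load_configuration(
        config_names=('XENON1T',),            # named .ini configurations
        config_dict={'pax': {'n_cpus': 4}})   # highest-priority override
    print(config['pax']['n_cpus'])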

pax.core module

pax.data_model module

pax.datastructure module

pax.dsputils module

pax.exceptions module

exception pax.exceptions.CoordinateOutOfRangeException[source]

Bases: pax.exceptions.PaxException

exception pax.exceptions.DatabaseConnectivityError[source]

Bases: pax.exceptions.PaxException

A database connectivity error (“Failed to resolve”) we often see, probably due to a small network hiccup.

exception pax.exceptions.EventBlockHeapSizeExceededException[source]

Bases: pax.exceptions.PaxException

exception pax.exceptions.InvalidConfigurationError[source]

Bases: pax.exceptions.PaxException

exception pax.exceptions.MaybeOldFormatException[source]

Bases: pax.exceptions.PaxException

exception pax.exceptions.OutputFileAlreadyExistsError[source]

Bases: pax.exceptions.PaxException

exception pax.exceptions.PaxException[source]

Bases: Exception

exception pax.exceptions.PulseBeyondEventError[source]

Bases: pax.exceptions.PaxException

exception pax.exceptions.QueueTimeoutException[source]

Bases: pax.exceptions.PaxException

exception pax.exceptions.TriggerGroupSignals[source]

Bases: pax.exceptions.PaxException

exception pax.exceptions.UnknownPropagatedException[source]

Bases: Exception

For re-raising an exception of an unknown type in a host process. Do NOT make this subclass PaxException: we don't know where this exception came from.

pax.formats module

Input/output code for tabular processed data formats. Most work has migrated to the pax ROOTClass output format by now; this is retained for backwards compatibility and because CSV is nice for debugging.

Here are the definitions of how to serialize our data structure to and from various formats. Please be careful when editing this file:

  • Do not add any dependencies (e.g. imports at the head of the file without try-except): this file has to stay importable even if not all the python modules for all the formats are installed.
  • Do not use Python 3-specific syntax: this file should be importable by Python 2 applications. (In a sense this applies to all of pax; we aim to support Python 2 and 3.)
class pax.formats.HDF5Dump(log=<logging.Logger object>)[source]

Bases: pax.formats.TableFormat

close()[source]
data_types_present
file_extension = 'hdf5'
n_in_data(df_name)[source]
open(name, mode)[source]
read_data(df_name, start=0, end=None)[source]
supports_array_fields = True
supports_read_back = True
supports_write_in_chunks = True
write_data(data)[source]
class pax.formats.NumpyDump(log=<logging.Logger object>)[source]

Bases: pax.formats.TableFormat

close()[source]
data_types_present
f = None
file_extension = 'npz'
n_in_data(df_name)[source]
open(name, mode)[source]
read_data(df_name, start=0, end=None)[source]
supports_array_fields = True
supports_read_back = True
write_data(data)[source]
class pax.formats.PandasCSV(log=<logging.Logger object>)[source]

Bases: pax.formats.PandasFormat

pandas_format_key = 'csv'
class pax.formats.PandasFormat(log=<logging.Logger object>)[source]

Bases: pax.formats.TableFormat

open(name, mode)[source]
pandas_format_key = None
supports_array_fields = False
write_data(data)[source]
write_pandas_dataframe(df_name, df)[source]
class pax.formats.PandasHDF5(log=<logging.Logger object>)[source]

Bases: pax.formats.PandasFormat

close()[source]
data_types_present
file_extension = 'hdf5'
n_in_data(df_name)[source]
open(name, mode)[source]
prefers_python_strings = True
read_data(df_name, start=0, end=None)[source]
string_data_length = 32
supports_append = True
supports_read_back = True
supports_write_in_chunks = True
write_pandas_dataframe(df_name, df)[source]
class pax.formats.PandasHTML(log=<logging.Logger object>)[source]

Bases: pax.formats.PandasFormat

pandas_format_key = 'html'
class pax.formats.PandasJSON(log=<logging.Logger object>)[source]

Bases: pax.formats.PandasFormat

pandas_format_key = 'json'
class pax.formats.ROOTDump(*args, **kwargs)[source]

Bases: pax.formats.TableFormat

Write data to ROOT file

Converts numpy structured arrays: every array becomes a TTree, and every record becomes a TBranch. For the first event, the structure of the tree and branches is determined; for each branch, the proper datatype is determined by converting the numpy types to their respective ROOT types.

close()[source]
data_types_present
file_extension = 'root'
n_in_data(df_name)[source]
numpy_type = {'D': <class 'numpy.float64'>, 'I': <class 'numpy.int32'>, 'O': <class 'bool'>, 'L': <class 'numpy.int64'>, 'F': <class 'numpy.float32'>, 'C': dtype('O')}
open(name, mode)[source]
read_data(df_name)[source]
root_type = {'int64': '/L', 'S': '/C', 'int16': '/S', 'bool': '/O', 'int32': '/I', 'float32': '/F', 'float64': '/D'}
supports_array_fields = True
supports_read_back = True
supports_write_in_chunks = False
write_data(data)[source]
class pax.formats.TableFormat(log=<logging.Logger object>)[source]

Bases: object

Base class for bulk output formats

close()[source]
data_types_present
file_extension = 'DIRECTORY'
open(name, mode)[source]
prefers_python_strings = False
read_data(df_name, start, end)[source]
supports_append = False
supports_array_fields = False
supports_read_back = False
supports_write_in_chunks = False
write_data(data)[source]

pax.parallel module

pax.plugin module

pax.recarray_tools module

Tools for working with numpy structured arrays. Extends existing functionality in numpy.lib.recfunctions.

pax.recarray_tools.append_fields(base, names, data, dtypes=None, fill_value=-1, usemask=False, asrecarray=False)[source]

Append fields to a numpy structured array. If a field already exists in the data, it will be overwritten.
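
For example:

    import numpy as np
    from pax.recarray_tools import append_fields

    arr = np.zeros(3, dtype=[('Event', np.int64)])
    arr = append_fields(arr, 'area', np.array([1.0, 2.0, 3.0]))
    # arr now has fields ('Event', 'area'); a pre-existing 'area' field
    # would have been overwritten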

pax.recarray_tools.dict_group_by(x, group_by_fields='Event', return_group_indices=False)[source]

Same as group_by, but returns an OrderedDict of value -> group, where value is the value (or tuple of values) of group_by_fields in each subgroup. Gotcha: assumes x is sorted by group_by_fields (works in either order, reversed or not). See also group_by.

pax.recarray_tools.drop_fields(arr, *args, **kwargs)[source]

Drop fields from a numpy structured array. Gives an error if the fields don't exist.

pax.recarray_tools.drop_fields_if_exist(arr, fields)[source]
pax.recarray_tools.fields_data(arr, ignore_fields=None)[source]
pax.recarray_tools.fields_view(arr, fields)[source]

View one or several columns from a numpy record array

pax.recarray_tools.filter_on_fields(to_filter, for_filter, filter_fields, filter_fields_2=None, return_selection=False)[source]

Returns the entries of to_filter whose combination of filter_fields values is present in for_filter. filter_fields_2: names of the filter_fields in for_filter (if different than in to_filter). If return_selection, instead returns a boolean selection array for to_filter.
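
A sketch; the field names are illustrative and filter_fields is assumed to accept a list of field names:

    import numpy as np
    from pax.recarray_tools import filter_on_fields

    peaks = np.array([(0, 10.), (1, 20.), (2, 30.)],
                     dtype=[('Event', np.int64), ('area', np.float64)])
    good_events = np.array([(0,), (2,)], dtype=[('Event', np.int64)])

    kept = filter_on_fields(peaks, good_events, ['Event'])
    # kept holds only the peaks whose Event value occurs in good_events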

pax.recarray_tools.group_by(x, group_by_fields='Event', return_group_indices=False)[source]

Splits x into a LIST of arrays, each containing the rows that share the same group_by_fields values. Gotchas:

  • Assumes x is sorted by group_by_fields (works in either order, reversed or not)
  • Does NOT put in empty lists if indices skip a value! (e.g. events without peaks)

If return_group_indices=True, returns a list of arrays with the indices of the group elements in x instead
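
For example (note x must already be sorted by the grouping field):

    import numpy as np
    from pax.recarray_tools import group_by

    x = np.array([(0, 1.5), (0, 2.5), (2, 3.5)],
                 dtype=[('Event', np.int64), ('area', np.float64)])
    groups = group_by(x, group_by_fields='Event')
    # groups is a list of two arrays: the Event-0 rows and the Event-2 row.
    # No empty array is inserted for the skipped Event 1.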

pax.simulation module

pax.trigger module

pax.units module

Define the unit system for pax (e.g., seconds).

This sets up variables for the various unit abbreviations, ensuring we always have a ‘consistent’ unit system. There are almost no cases in which you should change this without talking to a maintainer.
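
Typical use multiplies a number by a unit variable to express it in pax's internal unit system. The attribute names below (us, cm) are assumptions about which abbreviations are defined:

    from pax import units

    drift_time = 50 * units.us   # 50 microseconds in internal units
    radius = 5 * units.cm        # 5 centimeters, likewise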

pax.utils module

Helper routines needed in pax

Please only put stuff here that you really can't find any other place for! E.g. a list clustering routine that isn't in some standard library, but that several plugins depend on.

class pax.utils.Memoize(function)[source]

Bases: object
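
The signature suggests decorator-style use, caching results per argument (a sketch; the decorated function is hypothetical):

    from pax.utils import Memoize

    @Memoize
    def slow_square(x):
        # Imagine something expensive here
        return x * x

    slow_square(3)   # computed
    slow_square(3)   # served from the cache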

class pax.utils.Timer[source]

Bases: object

Simple stopwatch timer. punch() returns ms since timer creation or last punch.

last_t = 0
punch()[source]
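
For example:

    from pax.utils import Timer

    t = Timer()
    # ... do some work ...
    print(t.punch())   # ms elapsed since creation
    # ... more work ...
    print(t.punch())   # ms elapsed since the previous punch
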
pax.utils.data_file_name(filename)[source]

Returns filename if a file exists there, else returns PAX_DIR/data/filename

pax.utils.get_named_configuration_options()[source]

Return the names of all working named configurations

pax.utils.randomstring(n)[source]
pax.utils.refresh_status_line(text)[source]

Module contents

Processor for analyzing XENON1T

Provides a framework for calling plugins that manipulate data.