In order to do the analysis in Nim, we need two things:
- a filetype to store our processed data in -> HDF5
- a way to fit the eccentricity of our blobs -> Minuit?
However, for both we need a wrapper for the given libraries. HDF5 is relatively straightforward, since it is written in C; using c2nim it should be possible to create a wrapper. Minuit is another story, since it is written in Fortran or C++ and no C implementation exists. Wrapping either of the two is potentially not that easy: c2nim has C++ support, but it is not as mature as its C support.
For both: is it possible to compile the two to a shared library, which can be linked?
- HDF5: https://support.hdfgroup.org/HDF5/Tutor/compile.html See the bottom of the page: it lists the (static) libraries which are shipped with the precompiled HDF5 download.
- Minuit2: ?
If this is the case, the library should:
- either raise an error, because this function (w/ DsetReadWrite == all) should only write the data if the shapes are identical. Problem: currently, if we haven’t read the data yet, we simply create a dataset object, which still contains the wrong shape compared to the real dataset?! Fix that: upon checking for the existence of the dataset, read its actual dimensions instead!
- or simply get the correct hyperslab of the data and write it at the beginning of the dataset, although this does not seem all that nice. It would cause unintended behaviour in many cases, if the user does not realize they are writing a too small array into a larger dataset. A sketch of the first option is shown below.
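A minimal sketch of the first option in Python / h5py (standing in for the intended Nim behaviour; the helper name write_checked is hypothetical):
import h5py
import numpy as np

def write_checked(h5file, name, data):
    # hypothetical helper: if the dataset exists, compare against the *actual*
    # shape on disk instead of the shape we assumed when creating our object
    data = np.asarray(data)
    if name in h5file:
        dset = h5file[name]
        if dset.shape != data.shape:
            raise ValueError("shape mismatch: dataset %s, data %s" % (dset.shape, data.shape))
    else:
        dset = h5file.create_dataset(name, shape=data.shape, dtype=data.dtype)
    dset[...] = data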
In order to be able to resize datasets after they have been created, we need to do the following. See: https://support.hdfgroup.org/ftp/HDF5/current/src/unpacked/examples/h5_extend.c
This also affects reading an existing dataset, since we now need to take the maxshape into account. During creation we now hand a dataset creation property list to the create2 function as follows:
/* Modify dataset creation properties, i.e. enable chunking */
prop = H5Pcreate (H5P_DATASET_CREATE);
status = H5Pset_chunk (prop, RANK, chunk_dims);
/* Create a new dataset within the file using chunk
creation properties. */
dataset = H5Dcreate2 (file, DATASETNAME, H5T_NATIVE_INT, dataspace,
H5P_DEFAULT, prop, H5P_DEFAULT);
In case we read datasets, we need to get the property list from the dataset we have opened and check whether it contains chunked data:
prop = H5Dget_create_plist (dataset);
if (H5D_CHUNKED == H5Pget_layout (prop))
rank_chunk = H5Pget_chunk (prop, rank, chunk_dimsr);
As far as I can tell, to read the dataset the function does not have to be changed, as long as we know the size of the dataset we read. Need to add propl_id to H5DataSet and H5Group.
this one is difficult:
- need to create the dataset with chunked storage
- then call set_extent
- select the elements to write
/* Extend the dataset. Dataset becomes 10 x 3 */
size[0] = dims[0]+ dimsext[0];
size[1] = dims[1];
status = H5Dset_extent (dataset, size);
/* Select a hyperslab in extended portion of dataset */
filespace = H5Dget_space (dataset);
offset[0] = 3;
offset[1] = 0;
status = H5Sselect_hyperslab (filespace, H5S_SELECT_SET, offset, NULL,
dimsext, NULL);
/* Define memory space */
memspace = H5Screate_simple (RANK, dimsext, NULL);
/* Write the data to the extended portion of dataset */
status = H5Dwrite (dataset, H5T_NATIVE_INT, memspace, filespace,
H5P_DEFAULT, dataext);
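For comparison, the same extend-then-write pattern at a higher level in Python / h5py (roughly what the Nim wrapper should eventually offer); the dataset has to be created chunked and with a maxshape for the resize to be allowed:
import h5py
import numpy as np

with h5py.File("extend_test.h5", "w") as f:
    # chunked dataset, unlimited along the first axis
    dset = f.create_dataset("data", shape=(3, 3), maxshape=(None, 3),
                            chunks=(2, 3), dtype="i4")
    dset[...] = np.arange(9, dtype="i4").reshape(3, 3)
    # extend to 10 x 3 and write into the extended portion only
    dset.resize((10, 3))
    dset[3:10, :] = np.ones((7, 3), dtype="i4")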
- read H5 data in reconstruction
- perform cluster analysis
- apply cuts from XrayCalib
- center chip, pos_x, pos_y = 7, radius = 4.5:
double distance = std::sqrt((xray->getCenterX() - _centerX) * (xray->getCenterX() - _centerX) +
                            (xray->getCenterY() - _centerY) * (xray->getCenterY() - _centerY));
if (parameterSet("CutRadius") && distance > _cutRadius) {
  delete xray;
  continue;
}
needs to be fulfilled
- eccentricity < 1.3
- number X-rays == 1 ? only one X-ray per calibration event maybe?
- transverse RMS < 1.2
Every event from a calibration dataset passing these cuts will be used to perform the fit to the iron source. Note: in order to cut on the transverse RMS, we need to calculate it first. So: introduce a cluster geometry calculation, which performs the rotation and then the calculation of the statistical moments using nim.stats (a rough numpy sketch of the idea follows below).
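Rough sketch of that geometry calculation in Python / numpy (not the eventual Nim code); the rotation angle is assumed to come from the eccentricity optimization and to rotate the long axis onto x':
import numpy as np

def cluster_rms(x, y, angle):
    # shift to the cluster center and rotate by the long-axis angle,
    # so that x' is the longitudinal and y' the transverse direction
    xc, yc = x - x.mean(), y - y.mean()
    xr = np.cos(angle) * xc - np.sin(angle) * yc
    yr = np.sin(angle) * xc + np.cos(angle) * yc
    # second moments give the longitudinal and transverse RMS
    return np.sqrt(np.mean(xr ** 2)), np.sqrt(np.mean(yr ** 2))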
- given Fe spectrum do the following:
- write to H5 file in dataset “/reconstruction/calibration/run_<number>/fe_spectrum” or something like this
- have a Python script read this dataset, fit the function described in ~/src/MarlinTPC/vonMarkus/xray-reco-stuff/jane-doe/xmlbase/XrayCalibPixel.C to the spectrum, create a plot and write the resulting fit parameters to the H5 file as attributes of the dataset. Given the peaks of the fit, fit a linear function to the peaks, resulting in the energy calibration function. Plot that as well and also write its parameters to the H5 file.
- the Python script receives the run number and the filename as command line arguments, s.t. we can call the Python script from within Nim, have it perform its actions and continue with the calculation (if any). A rough sketch of such a script is shown after this list.
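Sketch of what such a script could look like (the real fit function lives in XrayCalibPixel.C and is not reproduced here; the single Gaussian, the attribute name fit_params and the assumption that fe_spectrum stores the per-cluster hit counts are placeholders):
import sys
import h5py
import numpy as np
from scipy.optimize import curve_fit

run, fname = sys.argv[1], sys.argv[2]

with h5py.File(fname, "r+") as f:
    dset = f["/reconstruction/calibration/run_%s/fe_spectrum" % run]
    hist, edges = np.histogram(dset[:], bins=100)
    centers = 0.5 * (edges[:-1] + edges[1:])

    # placeholder model: single Gaussian around the photo peak; the real
    # script would use the full Fe55 model from XrayCalibPixel.C
    gauss = lambda x, a, mu, s: a * np.exp(-0.5 * ((x - mu) / s) ** 2)
    p0 = [hist.max(), centers[np.argmax(hist)], 0.1 * (centers[-1] - centers[0])]
    popt, _ = curve_fit(gauss, centers, hist, p0=p0)
    dset.attrs["fit_params"] = popt

    # with both the escape (~2.9 keV) and photo peak (5.9 keV) positions fitted,
    # the energy calibration is a simple linear fit, e.g.:
    #   slope, intercept = np.polyfit([pos_escape, pos_photo], [2.9, 5.9], 1)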
What we have left to do… Now that we have the energy calibration function, we can continue with the analysis. This means:
- for every run get the energy calibration parameters, which are closest in time to the run being analyzed. Use this set of parameters to calculate the energy of all clusters in this run.
- Given all properties and calculated energies, we can calculate the likelihood values for each cluster. This concludes the basic analysis of the InGrid data.
FADC: we still want a couple more plots to understand the FADC better. Noise related:
- loss of data due to FADC noise. Calculate the dead time of the detector based on the event duration and maybe the time until the next event started relative to the start of this event? That should allow for a ratio. Alternative: bin events into e.g. 5 minute intervals, add up the event durations of all events in each interval and compare dead time vs. shutter open time (see the sketch below).
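Sketch of the interval-based alternative in Python / numpy, assuming we have per-event start timestamps and shutter-open durations in seconds:
import numpy as np

def dead_time_fraction(start_times, durations, interval=300.0):
    # bin events into fixed intervals (default 5 min) by their start time
    t0 = start_times.min()
    bins = np.arange(t0, start_times.max() + interval, interval)
    idx = np.digitize(start_times, bins) - 1
    # summed shutter-open time per interval
    open_time = np.bincount(idx, weights=durations, minlength=len(bins) - 1)
    # everything that is not shutter-open counts as dead time here
    return 1.0 - open_time / interval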
actual FADC data:
- FADC spectrum for calibration based on the same events as we use for the energy calibration! (Alternatively we could perform the likelihood analysis on everything to filter out X-rays and do the same, but that’s too much work for now and this way is better for comparison anyway?)
- calculate rise and fall time of events. Done how? Fitting something seems very difficult, since the shapes are so different. Basically fits will mainly work for X-rays, and the uglier an event is, the harder this becomes. Simplest way: we know the location of the minimum, so check the number of steps we need to take until we’re close enough to the baseline again. Could do this for each peak we find in the spectrum. At least it gives us a rough measure for this.
In order to do this we need to determine the baseline of the event, so that we can determine where the start and end of the dip are. As a rough guess for the baseline the median should work well enough: most registers are still at the baseline even with larger peaks, which means the median should lie on it.
import numpy as np
import matplotlib.pyplot as plt

base = np.empty(len(ch))  # ch: register indices, data: `fadc_data` from the H5 file
for i in xrange(1230, 1300):
    baseline = np.percentile(data[i], 50)  # median as baseline estimate
    base.fill(baseline)
    print i
    plt.plot(ch, data[i])
    plt.plot(ch, base)
    plt.show()
where data is the `fadc_data` from the H5 file. By eye this seems to work even in practice.
Given this baseline, we calculate the baseline + (10% of minimum) value. From the minimum we can now search for rise and fall times via:
- starting from minimum, search left (wrapping around end of event) until we cross to baseline + 10% again
- do same for right
- take note of the indices of these crossings. Given the indices, we can calculate the rise and fall time (see the sketch below). For now we can keep them as register values, since we only care about the behavior, not the absolute values (as far as I’m concerned at least).
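Sketch of that search in Python / numpy; the threshold “baseline + (10% of minimum)” is interpreted here as the baseline shifted towards the minimum by 10% of the dip depth, which may need adjusting:
import numpy as np

def rise_fall_time(pulse):
    baseline = np.percentile(pulse, 50)
    imin = np.argmin(pulse)
    # threshold slightly below the baseline (assumption, see text)
    thr = baseline + 0.1 * (pulse[imin] - baseline)
    n = len(pulse)
    # walk left/right from the minimum (wrapping around the register)
    left = imin
    while pulse[left] < thr:
        left = (left - 1) % n
    right = imin
    while pulse[right] < thr:
        right = (right + 1) % n
    rise = (imin - left) % n   # registers from the left crossing down to the minimum
    fall = (right - imin) % n  # registers from the minimum back up to the right crossing
    return rise, fall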
- be able to compare different FADC settings. E.g. when we have plots for rise times of each run (all data!), we can calculate the mean or something, depending on how it looks. If it’s sort of gaussian, take the mean, else whatever. Then we can compare these values for different FADC settings.
- calibration spectrum of all calibration runs
- FADC spectrum of all calibration runs
- separated into 50 ns and 100 ns?
- only show plot after change to amplification?
- FADC spectrum of rise and fall times, compared from 50 and 100 ns
- dead time of detector vs time
- plots for all X-ray properties
Over the last few days (<2017-11-17 Fr 18:32> is today) I have wrapped the C implementation of the HDF5 library successfully, such that we can import the C functions. The CLOSED is to be taken with a huge grain of salt, because basically this is only the start of the HDF5 wrapper…
To wrap HDF5 there are things I need to learn first.
This should be relatively easy, at least in principle. How far do we need to go with wrapping? What has to be wrapped? Can we simply import the static library and call that instead? That would make our lives a lot easier.
We can learn this by doing the two following things:
- check the Python code where we convert the ROOT trees into HDF5 files to give us an idea on the necessary parts
- do the tutorial of HDF5. Seems to be pretty nice, maybe we can learn how to call HDF5 from Nim via its library?
Converting some of the C example programs to Nim and comparing the code with the Python examples gives a lot of ideas on how to implement higher level functionality.
For a start: we need to get rid of most of the necessary ‘default’ arguments, which are usually supplied in the form of the HDF5 constants, and we need to implement proper typecasts of the Nim types to C types.
Closing of datasets, dataspaces etc. needs to be automatic. No need for us to deal with ids. But include an id(file, dataset etc. object) function, which returns the raw id, in case one wants to use a low level function.
For specific files:
- h5_crtdat.nim Opening an HDF5 file and creating a dataset of fixed size and specific datatype needs to be a one-liner (compare with Python!)
- h5_rdwt.nim Opening of a dataset in an H5 file is to be done via the [] operator. Create an H5file object (basically the file id), which receives a string (the name of a dataset) and returns the dataset object (dataset_id). The same functionality needs to be possible on a dataset: include a [] operator and return the data in that set. Give the possibility to read only partial data (basically include slices). The h5py equivalents are shown below.
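For reference, the Python / h5py one-liners the Nim API should roughly mirror:
import h5py
import numpy as np

with h5py.File("dset.h5", "w") as f:
    # h5_crtdat: create a fixed size dataset of a specific type in one line
    dset = f.create_dataset("dset", shape=(4, 6), dtype="i4")
    # h5_rdwt: write and read via the [] operator, including partial reads
    dset[...] = np.arange(24, dtype="i4").reshape(4, 6)
    full = f["dset"][:]       # whole dataset
    part = f["dset"][1:3, :]  # slice / partial read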
As of <2017-12-22 Fr 17:42> the most basic functionality of the H5 high-level library is finally in place. We can read and write data to arbitrary groups in the file.
NOTE: make clear that while by default we hand nested sequences to the HL library for writing, we still require all dimensions to be full (hence why we need to give the shapes in advance!). For ‘ragged arrays’ we need to use the variable length datatypes!
We should use HDF5 in the following way:
- use ‘packet table’ for the raw frames; we can structure it as follows
/runs/run_<number>/
where the metadata of run_<number> stores the information that is usually located in the header of each event
/runs/run_<number>/event_<number>
stores the raw zero suppressed data as a packet using variable length data. Event information is stored in its metadata. FADC files are stored as fixed length data. Use metadata, an attribute or whatever to refer this to the corresponding normal event.
- after reconstruction, we should create structure
/reconstruction/run_<number>/
with:
- one fixed size image (?) of occupancy of the run
- histogram of FADC
- ToT per pixel histogram
and
/reconstruction/run_<number>/events
where we store the calibrated data, based on a referred calibration run. This will be subdivided into
/reconstruction/run_<number>/events/<properties>
the different properties, as we do for the data we extracted from the old ROOT trees.
In the optimal case, the reconstruction branch is so close to the currently existing HDF5 file that we can use the Python CNN analysis almost without changing the reading of the HDF5 file.
NOTE: Does the distinction between reconstruction and analysis, as it was done in Christoph’s case, still make sense? Not in the way Christoph did it, at least. We put the energy of each event into the reconstruction portion of the table. The analysis part will then only contain the calculated likelihood (for reference with the old detector) and everything regarding the CNN analysis.
It is possible to convert most header files of the C++ implementation to Nim files.
For the optimization of the eccentricity function we use NLopt, a C library which provides many different nonlinear optimization algorithms; both global and local as well as gradient based and derivative free algorithms are available.
The C library was wrapped using c2nim, which proved to be pretty easy. Based on this, work is ongoing to build a high level interface for the library, which takes the library state from C to Nim.
The C library internally saves its state, including things like stopping criteria, the user defined minimization function etc. This was lifted into Nim instead by defining an object ‘NloptObj’, which stores the parameters. The settings are set on this object, and this is done lazily: only when the call to optimize() happens are the settings written to the library.
The bindings are located in ~/CastData/ExternCode/nimnlopt.
To minimize the eccentricity function the following algorithms were tested. Currently LN_COBYLA is in use.
opt = nlopt_create(NLOPT_LN_BOBYQA, 1)
opt = nlopt_create(NLOPT_LN_NELDERMEAD, 1)
opt = nlopt_create(NLOPT_LN_SBPLX, 1)
opt = nlopt_create(NLOPT_GN_DIRECT_L, 1)
opt = nlopt_create(NLOPT_GN_CRS2_LM, 1)
opt = nlopt_create(NLOPT_GN_ISRES, 1)
opt = nlopt_create(NLOPT_GN_ESCH, 1)
opt = nlopt_create(NLOPT_LN_NEWUOA_BOUND, 1)
In this case derivative free algorithms are the only useful ones, as calculating the gradient of the eccentricity is somewhat ugly (it would not even be that hard, but since the used algorithms converge quickly enough, there’s no point at this moment in time. Might be a useful optimization though!)
One thing to be wary of is the initial step size. This was the major problem at the beginning: the library chose the step size too large, which caused the algorithm to enter local minima at specific values. This resulted in a non-continuous distribution of the rotation angles; some angles were never seen (which does not make sense physically). See the sketch below.
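Usage sketch via the NLopt Python bindings, mainly to show where the initial step size is set; the toy pixel coordinates, the angle range and the exact form of the objective are placeholders, the real objective lives in the reconstruction code:
import numpy as np
import nlopt

# toy cluster (placeholder pixel coordinates), just to have a runnable objective
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])

def eccentricity(p, grad):
    # ratio of the RMS along the rotated axes; grad is unused (derivative free)
    a = p[0]
    xr = np.cos(a) * (x - x.mean()) - np.sin(a) * (y - y.mean())
    yr = np.sin(a) * (x - x.mean()) + np.cos(a) * (y - y.mean())
    return np.sqrt(np.mean(xr ** 2)) / np.sqrt(np.mean(yr ** 2))

opt = nlopt.opt(nlopt.LN_COBYLA, 1)        # the algorithm currently in use
opt.set_min_objective(eccentricity)
opt.set_lower_bounds([-np.pi / 2.0])       # assumed angle range
opt.set_upper_bounds([np.pi / 2.0])
opt.set_xtol_rel(1e-4)
# keep the initial step small: a too large step drove the optimizer into
# local minima and left gaps in the rotation angle distribution
opt.set_initial_step(0.02)
angle = opt.optimize([0.0])[0]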
Before any real work can begin, we need to do some work on the raw data. This includes
- reading all data*.txt and data*.txt-fadc files and writing them to a HDF5 file, one group for each run
Important things to do after CCM:
Change to sorting by inode and after inode sort again by filename; otherwise there is a problem with the FADC files, since they will be out of order. AND/OR: include event numbers separately for FADC events; then they can basically also be completely shuffled, since we can untangle it easily.
We have the occupancy currently, but we might want to change it to ignore full frames by default, because otherwise it might get ugly. This is especially the case for occupancies of calibration runs, since in some cases the FADC does not trigger, which results in completely filled frames.
Add min of FADC to peaks to file
noisy = int, depth = float -> need separate datasets
… for which we wish to plot a histogram of ALL runs in one plot to something like
/reconstruction/combined/ToTs
with dataset names such as
./ToT_<chipnumber>_<run_number>
In the processFadcData proc, we can use multithreading to accelerate the calculation of whether an FADC file is noisy and the calculation of its minimum. The implementations compare as follows:
- using spawn: 140s
- single threaded: 420s
- single: only calcing Min: 151s
- single: only checking noise: 314s (-> 302s after slight mod)
- final after opt w/ spawn: 45 s (iirc)
In the reconstruction phase, the first part is to find the clusters in the events. In Christoph’s case this is done using a very rough cluster finder algorithm, which performs a square search around a pixel within a given search radius (in practice 50 pixels); if another pixel is found in that range, it becomes part of the cluster and the search starts again from that pixel.
The data in MarlinTPC is stored in TrackerData objects (after having been converted from TrackerRawData) in the following way:
- One whole event (== frame) consists of a std::vector<TrackerData*>, where each TrackerData* is a set of hit pixels next to each other within the same row. E.g. (x denotes a hit pixel, o a non hit pixel):
(x x x x) o (x x) o (x x) o (x)   < TrackerData denoted by ( ); all x next to each other form one TrackerData* until the next o
o (x x) o (x) o (x x x x x) o     < a TrackerData does not span more than 1 row
- Algorithm iterates over said vector and determines
// begin1 is the starting pixel ID of the first pixel in this TrackerData object
int begin1 = ( dataQueue.front()->getCellID0() )%nColumns;
// end1 is the ending pixel ID of the last pixel in that TrackerData object
int end1 = (dataQueue.front()->getCellID0())%nColumns + dataQueue.front()->getChargeValues().size() - 1;
// determine the row of that TrackerData object
int row1 = static_cast<int>((dataQueue.front()->getCellID0() - begin1 )/nRows);
- add first element to a DataQueue, delete element from std::vector<TrackerData*>, start iterating over all remaining TrackerData objects
- for each of these calculate begin2, end2, row2 variables in same way
- perform the following bool comparisons to check whether the current TrackerData is within the search radius (typically 50 pixels) of the last one in the DataQueue
if (row2 > row1 + _searchRadius) break;
//adjacent data same line
bool same1 =
( begin2 < begin1 ) &&
( (end2 + _searchRadius) >= begin1 ) &&
( row1 == row2 );
bool same2 =
( begin2 > begin1 ) &&
( (begin2 - _searchRadius) <= end1 ) &&
( row1 == row2 );
//adjacent data below
bool low1 =
( (end2 - _searchRadius) <= end1 ) &&
( (end2 + _searchRadius) >= begin1 ) &&
( row2 < row1 ) &&
( row1 <= (row2 + _searchRadius) );
bool low2 =
( (begin2 + _searchRadius) >= begin1 ) &&
( (begin2 - _searchRadius) <= end1 ) &&
( row2 < row1 ) &&
( row1 <= (row2 + _searchRadius) );
bool low3 =
( (begin2 + _searchRadius) < begin1 ) &&
( (end2 - _searchRadius) > end1 ) &&
( row2 < row1 ) &&
( row1 <= (row2 + _searchRadius) );
//adjacent data above
bool up1 =
( (end2 - _searchRadius) <= end1 ) &&
( (end2 + _searchRadius) >= begin1 ) &&
( row2 > row1 ) &&
( row1 >= (row2 - _searchRadius) );
bool up2 =
( (begin2 + _searchRadius) >= begin1 ) &&
( (begin2 - _searchRadius) <= end1 ) &&
( row2 > row1 ) &&
( row1 >= (row2 - _searchRadius) );
bool up3 =
( (begin2 + _searchRadius) < begin1 ) &&
( (end2 - _searchRadius) > end1 ) &&
( row2 > row1 ) &&
( row1 >= (row2 - _searchRadius) );
- search above and below, since in MarlinTPC the vector of TrackerData is potentially not sorted
- if any of these bool statements is true:
- add this TrackerData object to the current cluster
- add number of pixels to cluster count
- add this TrackerData as element in DataQueue
- remove this TrackerData from std::vector<TrackerData*>
- if none of the bool statements is true, we found a full cluster; put the cluster away and start a new cluster with the TrackerData which was not part of the found cluster. A simplified sketch of the idea follows below.
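A much simplified, pixel-level sketch of the square-search idea in Python (ignoring the per-row TrackerData representation used in MarlinTPC); search_radius defaults to the 50 pixels mentioned above:
def find_clusters(pixels, search_radius=50):
    # pixels: list of (x, y) tuples; returns a list of clusters (lists of pixels)
    remaining = list(pixels)
    clusters = []
    while remaining:
        # start a new cluster from an arbitrary remaining pixel
        queue = [remaining.pop()]
        cluster = []
        while queue:
            px, py = queue.pop()
            cluster.append((px, py))
            # square search: collect every remaining pixel within the radius
            near = [p for p in remaining
                    if abs(p[0] - px) <= search_radius and abs(p[1] - py) <= search_radius]
            for p in near:
                remaining.remove(p)
            queue.extend(near)
        clusters.append(cluster)
    return clusters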
Regarding cluster finding algorithm:
- cellID0 = from TimePixXYReaderProcessor.cc
int cellID0 = _nPixels * (tempY % _nPixels) + (tempX % _nPixels);
where _nPixels == 256, hence id of a specific pixel on the chip
- cellID1 = globalChipID, which is the chip ID from TOS, plus the different FECs, boards etc
In the main function, which does the reconstruction for a single event, we hand over the following data: c: Cluster. This means we only need 4 different datasets for the InGrid data, i.e.
- event numbers
- raw_x
- raw_y
- raw_ch
from which we can perform single event reco. Need to read this from H5.
Need to write the following additional properties to H5 afterwards:
- individual clusters instead of raw x, etc.
these are already filtered by:
- events smaller than 3 (or 5?) pixels
- pixels of 11810?
- what happens to events with > 4096 pixels?
- eccentricity
- rot_angle
- sum tot
- rms_x
- rms_y
- skewness_x
- skewness_y
- kurtosis_x
- kurtosis_y
- length
- width
- pos_x
- pos_y
The default way to reconstruct the events was single threaded. Reconstructing Run 21 (X-ray Finger) took:
- single-threaded: 155 s
- multi-threaded: 46 s
when compiling without release flag. With release flag:
- single-threaded: 27 s
- multi-threaded: 10.3 s
Need to integrate Christoph’s chip into the ingrid database, in order to perform charge calibration etc.
We’re done performing the fit of the Fe spectrum not only for the # of pixels, but also for the total charge, see Analysis/ingrid/calibration.nim. This will now perform the necessary calculations for the Fe charge spectrum if the runType is rtCalibration. The fit results are then stored on the FeSpectrumCharge dataset for the “center chip” considered.
Need to iterate over all calibration runs and for each get the FeSpectrumCharge datasets and extract the keV_per_electron attribute, as well as the charge datasets and extract the G attribute. Having these two seqs, perform a linear fit of this dependency.
The fit results shall be stored in the ingridDatabase.h5. This fit will then, in addition to the gas gain (which is calculated for each background run from the charge dataset), be used to get the correct calibration factor for this run.
Then just reconstruct the background runs, open the ingridDatabase, get the just fitted parameters, calculate the correction factor and calculate the energy based on the charge. A rough sketch of these two steps is shown below.
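Sketch of the fit and its use in Python / numpy (the numbers are dummy placeholders and the attribute names are the ones mentioned above; the actual layout of ingridDatabase.h5 is still to be decided):
import numpy as np

# per calibration run: gas gain (from the `G` attribute) and fitted keV per
# electron (from the `keV_per_electron` attribute); dummy values for illustration
gains = np.array([2800.0, 3000.0, 3200.0])
calib = np.array([2.1e-5, 2.0e-5, 1.9e-5])

# linear fit of the calibration factor vs. the gas gain; these two parameters
# are what would be stored in ingridDatabase.h5
slope, intercept = np.polyfit(gains, calib, 1)

def calib_factor(gas_gain):
    # for a background run: evaluate the fit at that run's gas gain
    return slope * gas_gain + intercept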
Once that is done we can do the likelihood based on this. :)
We have calc’d all logL values based on the X-ray reference datasets XrayReferenceDataSet.h5 using likelihood.nim. What is left to do:
- define cut values on the reference datasets correctly. Done by: hist.sum[0:cut_value] / hist.sum, check if > software efficiency -> cut value (see the sketch after this list)
- include potential cuts on location of events
- filter out by tracking / non-tracking (see fadc_analysis.nim for an example on how to split by tracking / non-tracking).
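Sketch of the cut value determination from a reference logL histogram in Python / numpy; the software efficiency of 0.8 and the binning are placeholder values:
import numpy as np

def determine_cut(logl_values, software_eff=0.8, nbins=200):
    # histogram the logL values of the reference dataset and find the bin at
    # which the cumulative fraction first exceeds the software efficiency
    hist, edges = np.histogram(logl_values, bins=nbins)
    cum = np.cumsum(hist) / hist.sum()
    idx = np.argmax(cum > software_eff)
    return edges[idx + 1]  # upper edge of that bin is the logL cut value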
Once that is done, we can create spectra based on likelihood cuts.
All cut values are implemented and we’re able to create background spectra.
This allows us to filter out all passing events and write them to a different file.
An attribute which keeps track of the total live time of the detector for each run.
that we actually get the correct data. In the case of reconstructed data, we will have more (or fewer) entries in each dataset depending on the number of clusters etc., while the timestamps still correspond to the actual event numbers. For durations this is fine, because they are unchanged, but once we start reading reconstructed properties that breaks down?
What I need to do is the following: we have the event numbers of each cluster. That means we have to filter out the timestamps, which are still valid. From this, we can extract the event numbers, which are still valid. Then we need to reverse the mapping from event numbers to allowed events by saying:
let
energy = h5f[(somechipgrp / "energyFromPixel").grp_str]
# event numbers of clusters of this chip
evNumber = h5f[(somechipgrp / "eventNumber").grp_str]
# all event numbers. This is not really needed, because by default
# the tracking indices already correspond to event numbers!
allEvNumbers = h5f[(somerungrp / "eventNumber").grp_str]
tracking_inds = getTrackingInds(h5f, somechipgrp, 0)
var
  # get event numbers (not needed strictly speaking)
  allowedEvents = allEvNumbers[tracking_inds]
# using the allowed events, get the indices for other events by iterating
# over all allowed events and removing those, which are not
# in the events of a chip; iterate in reverse so deleting does not
# shift indices we still need to visit
for i in countdown(allowedEvents.high, 0):
  # remove all events of the allowed events, which are not
  # part of the events for one chip
  if allowedEvents[i] notin evNumber:
    # delete the event if it is not in the event numbers
    # (`delete` keeps the order, `del` would swap with the last element)
    allowedEvents.delete(i)
    # or simply, with a (to be written) convenience proc:
    # allowedEvents.delByElement(el)
This group “likelihood”? should contain again groups for each chip. In these however, we explicitly only have the data, which corresponds to the clusters, which passed the cut. We include the cut values as well, of course.
To do that we need to:
- read all properties of all events before we make the cuts
let float_dset_names = getFloatDsetNames()
var float_data_tab = initTable[string, seq[float]]()
for dset in float_dset_names:
float_data_tab[dset] = h5f[(group / dset).dset_str][float64]
- when performing the logL cut (after correctly selecting valid events…)
var passed_inds = initSet[int]()
if logL[ind] <= cutTab[dset]:
# replace this:
energy_passed.add energy[ind]
# by this:
passed_inds.incl ind
# then we can do EITHER:
# iterate in reverse, so that deleting does not shift indices we still visit
for i in countdown(evNumbers.high, 0):
  if i notin passed_inds:
    for dset in keys(float_data_tab):
      # remove all indices, which are not in passed_inds
      float_data_tab[dset].delete(i)
# then write to the new likelihood group for this chip
var logLgroup = &"/likelihood/chip_{chip_number}"
# got all datasets ready for write
for dset_name in keys(float_data_tab):
var dset = h5f.create_dataset((logLgroup / dset_name),
(passed_inds.len, 1), float64)
# write the data to the file
dset[dset.all] = float_data_tab[dset_name]
Currently we observe a quite different background rate spectrum compared with Christoph’s old analysis. While it does produce similar results above \SI{2}{\kilo \electronvolt}, below that we “lose” a lot of background (which would of course be great, if real). Interesting things to keep in mind / to try out:
- [ ] loss of background $≤ \SI{2}{\kilo \electronvolt}$ for the gold region, but increase for the whole chip!
- [ ] difference of the calibration factors for the charge calibrated Fe spectrum of the old vs. the new framework
- [ ] gas gains are different; use the mean of the histogram instead of the fit result as the gas gain. Just multiply bin center * bin content (summed over all bins; see the sketch after this list). Check whether to use the raw histogram or the histogram based on the fit result?
- [ ] check all cuts applied to the reference datasets. Maybe the application of one is wrong? Maybe the value of one is wrong? Especially for the small energy bins?
- [ ] plot the reference datasets / the resulting distributions after the cuts and compare with old plots (see his PhD thesis)
- [ ] calculate properties from single raw event and see if and which properties show different values than event from old framework. Might be done using the reference tree, since there we should have everything in one place?
- [ ] plot again the distributions from the actual data after reconstruction and compare
- [ ] take a look at the PhD thesis; the reference spectra binned in E have a lot of plots to compare against
- [ ] for the calculation of the logL spectra of the reference sets, check for overflow bins, empty bins in the resulting spectra etc., and check the values used to construct these distributions (number of bins, start and end value etc.). If several bins were empty in the low energy ranges and we got an event from the background dataset whose logL value resides in such a bin, it would drop out! That would fit with our observation!
- [ ] what else?
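For the gas gain item above, the “mean of the histogram” is just the weighted mean of the bin centers; a minimal sketch (whether to feed it the raw or the fit-based histogram is the open question):
import numpy as np

def gas_gain_from_hist(hist, bin_edges):
    # weighted mean: sum(bin center * bin content) / sum(bin content)
    centers = 0.5 * (bin_edges[:-1] + bin_edges[1:])
    return np.sum(centers * hist) / np.sum(hist)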
parse_raw_tpx3
Mini benchmark of parsing performance comparing the usage of OrderedSet[int] with IntSet (std/intsets):
Both with debugger:native:
OrderedSet:
Number of dropped batches: 0 of batch size: 100000000
[INFO]: Closing input file /home/basti/CastData/data/Tpx3Data/Data/DataTake_2022-05-18_20-21-53.h5
[INFO]: Closing output file /t/t.h5
./parse_raw_tpx3 -p --out /t/t.h5  18.42s user 0.36s system 99% cpu 18.794 total
IntSet:
Number of dropped slices: 48664
Number of dropped batches: 0 of batch size: 100000000
[INFO]: Closing input file /home/basti/CastData/data/Tpx3Data/Data/DataTake_2022-05-18_20-21-53.h5
[INFO]: Closing output file /t/t.h5
./parse_raw_tpx3 -p --out /t/t.h5  14.43s user 0.51s system 98% cpu 15.144 total
IntSet for `idxToKeep` and for `indices`:
Processed file: /home/basti/CastData/data/Tpx3Data/Data/DataTake_2022-05-18_20-21-53.h5, run number: 0 written to /t/t.h5
Number of slices processed: 461788
Number of dropped slices: 48664
Number of dropped batches: 0 of batch size: 100000000
[INFO]: Closing input file /home/basti/CastData/data/Tpx3Data/Data/DataTake_2022-05-18_20-21-53.h5
[INFO]: Closing output file /t/t.h5
./parse_raw_tpx3 -p --out /t/t.h5  13.23s user 0.37s system 99% cpu 13.609 total
Both with LTO and w/o debugger:native:
OrderedSet:
Number of slices processed: 461788
Number of dropped slices: 48664
Number of dropped batches: 0 of batch size: 100000000
[INFO]: Closing input file /home/basti/CastData/data/Tpx3Data/Data/DataTake_2022-05-18_20-21-53.h5
[INFO]: Closing output file /t/t.h5
./parse_raw_tpx3 -p --out /t/t.h5  15.59s user 0.36s system 99% cpu 15.967 total
IntSet:
[INFO]: === Summary ===
Processed file: /home/basti/CastData/data/Tpx3Data/Data/DataTake_2022-05-18_20-21-53.h5, run number: 0 written to /t/t.h5
Number of slices processed: 461788
Number of dropped slices: 48664
Number of dropped batches: 0 of batch size: 100000000
[INFO]: Closing input file /home/basti/CastData/data/Tpx3Data/Data/DataTake_2022-05-18_20-21-53.h5
[INFO]: Closing output file /t/t.h5
./parse_raw_tpx3 -p --out /t/t.h5  11.87s user 0.36s system 99% cpu 12.235 total
IntSet & IntSet for `indices`:
Number of slices processed: 461788
Number of dropped slices: 48664
Number of dropped batches: 0 of batch size: 100000000
[INFO]: Closing input file /home/basti/CastData/data/Tpx3Data/Data/DataTake_2022-05-18_20-21-53.h5
[INFO]: Closing output file /t/t.h5
./parse_raw_tpx3 -p --out /t/t.h5  10.85s user 0.38s system 99% cpu 11.242 total
All run with:
basti at voidRipper in ~/CastData/ExternCode/TimepixAnalysis/Analysis/ingrid ツ time \
./parse_raw_tpx3 -p /home/basti/CastData/data/Tpx3Data/Data/DataTake_2022-05-18_20-21-53.h5 \
--out /t/t.h5