NSCL DDAS  1.0
Support for XIA DDAS at the NSCL
Author
Jeromy Tompkins
Date
6/23/2016

Introduction

The following tutorial describes how to migrate from the pre-supported version of DDAS to the new supported system. The differences between the two systems are minimal; the main change is in how you think about the software and how you run it.

To provide context for the migration steps, I will first describe what changed and why. The remainder of the document after that will be used to describe how to use each component of DDAS in the new supported system.

What changed?

The experimenter no longer owns the code.

To support the DDAS software, the source code needs to be protected and managed. This is accomplished by installing pre-built binaries of the DDAS software into read/execute-only locations. The impact is that users will no longer be able to customize DDAS for their particular needs, e.g. adding debugging output. Users will need to lean more on the NSCL scientific software team to debug and understand the behavior of their system when it does not make sense. It also means that DDAS will be the same for all groups using it, which allows for more stable software and less domain-specific knowledge that is not easily shared.

The experimenter no longer controls which firmware is loaded.

The Pixie-16 digitizer's behavior and capabilities are largely controlled by the firmware files that are loaded into it on boot. To simplify determining whether bugs are the result of the firmware or the software, we have preselected which firmware files are used during the software build process. A specific DDAS version will therefore always load the same firmware files.

The format of the data emitted by the DDAS Readout may be different.

The supported DDAS system is based on the most recent version of DDAS in the unsupported system. That version may differ from what you are currently using. The main difference is that DDAS Readout added some extra data words to the physics event body to describe the module the data was read from. DDAS Readout was also updated to emit NSCLDAQ 11.x ring items with body headers. The data format is well documented in rdo_dataformat .

SpecTcl support is different

In the past, users of DDAS were provided the source code for a fully-implemented SpecTcl that they could modify to match their specific needs. Supporting this model is not feasible. The support for DDAS in SpecTcl is therefore very different. Pre-compiled implementations of generally useful functionalities, namely parsing the data format, are provided. The implementations also provide a "hook" that calls user-specific code responsible for mapping the parsed data to the user's TreeParameters.

libddaschannel.so tweaked

The ddaschannel class was modified very minimally in ways that break the public API compatibility. Those changes are:

The ddaschannel class was reduced into a struct-like entity. All of the unpacking logic that it used to include has been refactored out of the ddaschannel class and into a separate DDASHitUnpacker class. As a result, the libddaschannel.so library depends on libddasformat.so. This refactoring removed code duplication that existed and complicated the support structure.

Documentation

It exists and can be found at https://docs.nscl.msu.edu/daq.

Tutorial

Step 1 - Start fresh

Probably the simplest way to start out with the newly supported DDAS is to create a new directory to work in. Copy in only the necessary files from your existing DDAS directory. Those are:

The .cfgPixie.txt file that used to be used by nscope is now defunct, so don't copy it. Nscope now uses the same cfgPixie16.txt file that is used by Readout.

Step 2 - Source the ddassetup.bash script

The ddassetup.bash script is similar to the daqsetup.bash script. It exports some useful environment variables for DDAS. In doing so, it also sources the daqsetup.bash script associated with the NSCLDAQ version that DDAS depends on. For that reason, make sure to unset DAQROOT prior to sourcing it:

unset DAQROOT
source /usr/opt/ddas/VERSION/ddassetup.bash

where VERSION is the highest version number.

Step 3 - Run nscope

This is as simple as:

$DDAS_BIN/nscope

Then use nscope just as you normally would.

Step 4 - Run Readout

Running Readout is very similar to how it was done previously.

  1. Make sure that your modevtlen.txt is correct for your crate file.
  2. If you want to run the infinity clock configuration (i.e. clock synchronization only occurs on demand after Readout initialization), then you need to export the INFINITY_CLOCK variable as "YES":
     export INFINITY_CLOCK="YES"
  3. Run Readout:
     $DDAS_BIN/Readout


Step 5 - Transitioning SpecTcl

In principle, there is nothing that needs to be done here to use SpecTcl. If your SpecTcl could parse the most recent DDAS version before the supported version, then you can still use your code. It will work. If you want to lean on the supported software, then you need to do some coding.

To begin, you should understand that you don't need to throw out everything in your SpecTcl. Rather, you need to strip down the CBDecayUnpacker class so that it only has a mapToParameters() method, which should be heavily inspired by the CBDecayUnpacker::ddastospectcl() method. I cannot know whether your unpacker looks exactly like the one this tutorial is based on, so forgive any places where your DDAS unpacker and the example used here do not correlate 100%. This should at least provide a basis for your modifications. There is also a tutorial that describes how to start using the SpecTcl support from scratch.

Unpacker_ddas.h and Unpacker_ddas.cpp

In this file, we are going to throw out all of the methods that were implemented as part of the CEventProcessor public interface. For that reason, the following methods are no longer useful:

We are then going to change CBDecayUnpacker to derive from DAQ::DDAS::CParameterMapper instead of CEventProcessor. This means that the header file will look something like this:

#include <ParameterMapper.h>

class CBDecayUnpacker : public DAQ::DDAS::CParameterMapper
{
public:
    // The entry point for our CParameterMapper.
    //
    // The purpose of this function is to map the values extracted from the
    // raw data to TreeParameters
    //
    // Parameters:
    //   hits   - the data extracted from the raw buffer
    //   rEvent - the SpecTcl event
    virtual void mapToParameters(const std::vector<DAQ::DDAS::DDASHit>& hits,
                                 CEvent& rEvent);

private:
    // Helper method to figure out the global index of each channel
    int computeGlobalSlotId(const DAQ::DDAS::DDASHit& hit) const;
};

Don't fret about the fact that this looks nothing like what CBDecayUnpacker originally looked like. The mapToParameters method is really just a remaking of the CBDecayUnpacker::ddastospectcl(std::vector<ddaschannel*>& channellist) method. Let's dive into the implementation and see how this all works.

My CBDecayUnpacker::ddastospectcl() method took a vector of ddaschannel objects. These ddaschannel objects should be thought of as analogous to DAQ::DDAS::DDASHit objects. They actually contain the same information, but the DDASHit class does not make every data member public. Rather than accessing the data members directly, you need to call a getter function (e.g. hit.GetEnergy() rather than hit.energy). The other change is that we are passing a vector of objects rather than pointers to objects. That means we need to change "->" to "." in many places. I also find that iterator-based iteration through the vector is more verbose and less clear than accessing the elements of the vector by their index. In other words, I have chosen to favor:

for (size_t hitIndex = 0; hitIndex < hits.size(); ++hitIndex) {
    auto& hit = hits[hitIndex];
}

over

for (std::vector<DDASHit>::iterator iter = hits.begin(); iter != hits.end(); ++iter) {
    auto& hit = *iter;
}

In the end, these loops do exactly the same thing. My mapToParameters() method ended up as:

void CBDecayUnpacker::mapToParameters(const vector<DAQ::DDAS::DDASHit>& channellist,
                                      CEvent& rEvent)
{
    size_t eventSize = channellist.size();
    bdecay.ddasdiagnostics.cmult = eventSize;

    double starttime_low, endtime_low;
    double starttime_high, endtime_high;

    bdecay.ddasdiagnostics.adc01hit = 0;
    bdecay.ddasdiagnostics.adc02hit = 0;
    bdecay.ddasdiagnostics.adc03hit = 0;
    bdecay.ddasdiagnostics.adc04hit = 0;
    bdecay.ddasdiagnostics.adc05hit = 0;
    bdecay.ddasdiagnostics.adc06hit = 0;
    bdecay.ddasdiagnostics.adc07hit = 0;
    bdecay.ddasdiagnostics.adc08hit = 0;
    bdecay.ddasdiagnostics.adc09hit = 0;
    bdecay.ddasdiagnostics.adc10hit = 0;
    bdecay.ddasdiagnostics.adc11hit = 0;
    bdecay.ddasdiagnostics.adc12hit = 0;
    bdecay.ddasdiagnostics.adc13hit = 0;
    bdecay.ddasdiagnostics.adc14hit = 0;
    bdecay.ddasdiagnostics.adc15hit = 0;
    bdecay.ddasdiagnostics.adc16hit = 0;
    bdecay.ddasdiagnostics.adc17hit = 0;
    bdecay.ddasdiagnostics.adc18hit = 0;
    bdecay.ddasdiagnostics.adc19hit = 0;
    bdecay.ddasdiagnostics.adc20hit = 0;

    for (size_t hitIndex = 0; hitIndex < eventSize; ++hitIndex) {
        const DAQ::DDAS::DDASHit& hit = channellist[hitIndex];
        int globalId = computeGlobalSlotId(hit);

        /* The time of an event will be taken as the time of the first
           channel in the event */
        if (hitIndex == 0) {
            starttime_low  = hit.GetTimeLow();
            starttime_high = hit.GetTimeHigh();
            bdecay.clock.fast = hit.GetTimeLow();
            bdecay.clock.slow = hit.GetTimeHigh();
            bdecay.clock.cfd  = hit.GetTimeCFD();
        }

        // Unpack the data according to the channel id number
        bdecay.raw.chanidhit.push_back(globalId);
        bdecay.raw.chanid[globalId].adc      = hit.GetEnergy();
        bdecay.raw.chanid[globalId].timehigh = hit.GetTimeHigh();
        bdecay.raw.chanid[globalId].timelow  = hit.GetTimeLow();
        bdecay.raw.chanid[globalId].timecfd  = hit.GetTimeCFD();
        bdecay.raw.chanid[globalId].time     = hit.GetTime();

        endtime_low  = hit.GetTimeLow();
        endtime_high = hit.GetTimeHigh();
    }

    bdecay.ddasdiagnostics.eventlength = (endtime_low - starttime_low) * 10;
}

The CBDecayUnpacker::computeGlobalSlotId() method simply maps the specific slot and crate ids to a global slot number. It is implemented as follows:

int CBDecayUnpacker::computeGlobalSlotId(const DAQ::DDAS::DDASHit& hit) const
{
    int id = 0;

    // Count how many channels are used up by Pixie-16 modules in crates with
    // lower crate ids. This is the global index offset of the first channel in
    // the crate the hit is associated with.
    for (int z = 0; z < hit.GetCrateID(); z++) {
        id += bdecayv.ddas.nmodc[z] * bdecayv.ddas.channelpermod;
    }

    // Adjust index for the number of channels in previous modules within the crate
    id += (hit.GetSlotID() - 2) * bdecayv.ddas.channelpermod;

    // Adjust index for the channel offset in the module
    id += hit.GetChannelID();

    return id;
}

You should understand that the TreeVariables and TreeParameters are named with the convention that they start with "bdecayv" and "bdecay", respectively. These TreeVariables and TreeParameters are declared at global scope with external linkage. That happens behind the scenes in the #include directives at the top of Unpacker_ddas.cpp. Those are now:

#include <config.h>
#include <Event.h>
#include <stdint.h>
#include <string>
#include "Parameters-ddas.h"
#include "Variables-ddas.h"
#include "Unpacker_ddas.h"
using namespace std;

Next, the SpecTcl_ddas.cpp file needs to be modified to set up the event processor pipeline slightly differently than before. We intend to replace the initialization of the CBDecayUnpacker with code that is a little different. Formerly, the CBDecayUnpacker was initialized on the stack at global scope as such:

static CBDecayUnpacker Unpacker;

This needs to be changed so that the new version of the CBDecayUnpacker is initialized on the heap (i.e. constructed with operator new). We are also intending to replace the CBDecayUnpacker in the event processing pipeline with something else, because the CBDecayUnpacker is no longer an event processor (remember it derives from DAQ::DDAS::CParameterMapper now). The object we will replace it with is a DAQ::DDAS::CDDASBuiltUnpacker. Built unpackers expect that the data being read is the output of the NSCLDAQ event builder. To construct one of these, you need to pass it a set of source ids that it should pay attention to and then also a concrete instance of a DAQ::DDAS::CParameterMapper. We can accomplish all of this by writing:

static CBDecayUnpacker* UnpackBDecay = new CBDecayUnpacker;
static DAQ::DDAS::CDDASBuiltUnpacker Unpacker({0, 1}, *UnpackBDecay);

That is it.

The final changes we need to make are to the Makefile. The Makefile needs to be told to link against the DDAS libraries that you are making use of. Those are libddasformat.so, libFragwalker.so, and libDDASUnpacker.so. That can easily be accomplished by adding to USERLDFLAGS:

-L/usr/opt/ddas/VERSION/lib -lddasformat -lFragwalker -lDDASUnpacker

We also need to update the include search path by adding to USERCXXFLAGS:

-I/usr/opt/ddas/VERSION/include

In the above two examples, you need to replace VERSION with the actual version of DDAS that you are using.

Finally, because we have used some language features from the C++11 standard, we need to tell the compiler to use c++11 as its standard. You can do this by adding "-std=c++11" to USERCXXFLAGS.
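Putting the Makefile changes together, the relevant lines might look like the following sketch (VERSION must still be replaced with your installed DDAS version, and the exact variable assignment style should match your skeleton Makefile):

```make
# Link against the DDAS support libraries.
USERLDFLAGS += -L/usr/opt/ddas/VERSION/lib -lddasformat -lFragwalker -lDDASUnpacker

# Find the DDAS headers and enable the C++11 standard.
USERCXXFLAGS += -I/usr/opt/ddas/VERSION/include -std=c++11
```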

That is all.