10.4. Creating the tailored SpecTcl

Now that the data look right, we're going to tailor SpecTcl to do online analysis of those data. In this example, we are going to write our code as if the module is the only item to decode. We will decode our data into parameters named t.00...t.31. In a larger setup you might want to encapsulate the data from the TDC in a packet and have your code search for that packet in the event, only unpacking that part of the data.

SpecTcl analyzes data using the model of an analysis pipeline. Each stage of the pipeline has access to the prior stage's data, as well as the raw event. We'll illustrate this by writing a second stage of the pipeline that produces time differences between adjacent channels of the TDC. The interesting thing is that we can write that stage without having any knowledge of the format of the raw event.

This suggests that your SpecTcl software should consist of analysis stages that first decode the raw data and then produce physically useful parameters once those data are decoded. This protects what is usually the hard part of your analysis software from changes to the structure of the raw event.

10.4.1. Decoding raw TDC data

SpecTcl's data analysis pipeline is expected to take a stream of raw events and unpack it into parameters. Spectra can then be defined on those parameters. Gates can also be defined on parameters and used to conditionalize when a spectrum is incremented. Each stage of the event analysis pipeline is an object from a class derived from CEventProcessor.

In this section we'll write an event processor class that will decode the data our readout program is producing. To do this we will copy the SpecTcl skeleton into a working directory, write the raw unpacker class, and add it to the analysis pipeline.

10.4.1.1. Getting the skeleton

In this section we are going to create a working directory and copy the SpecTcl-v3.4 skeleton into that directory.

Example 10-8. Copying in the SpecTcl-v3.4 skeleton


mkdir spectcl
cd spectcl
cp /usr/opt/spectcl/3.4/Skel/* .

                    

10.4.1.2. Creating the raw unpacker class

Here is the header for our raw unpacker (event processor) pipeline stage:

Example 10-9. Raw TDC unpacker (RawUnpacker.h)


#ifndef _RAWUNPACKER_H
#define _RAWUNPACKER_H
#include <config.h>                             (1)
#include <EventProcessor.h>                     (2)

class CTreeParameterArray;                            (3)

class CRawUnpacker : public CEventProcessor           (4)
{
public:
  CRawUnpacker();                                     (5)
  virtual ~CRawUnpacker();
  virtual Bool_t operator()(const Address_t pEvent,
                            CEvent&         rEvent,  (6)
                            CAnalyzer&     rAnalyzer,
                            CBufferDecoder& rDecoder);
private:
    CTreeParameterArray&  m_times;                 (7)
};


#endif
                        
                    
(1)
SpecTcl's config.h header contains a number of definitions used by other headers so that their declarations are correct for the system on which SpecTcl is installed.

This header must be included prior to any other SpecTcl headers.

(2)
The EventProcessor.h header defines the CEventProcessor abstract base class. Our class will derive from that base class. This means that the compiler will need to know the shape of that class when compiling our header.
(3)
The tree parameter package is a useful package that makes defining and accessing parameters simple. We'll use it to define and access the parameters we will create. This forward declaration lets the compiler know that we will use that class in a way that does not require the compiler to know its shape.
(4)
This defines the new class CRawUnpacker. This will be our class for unpacking the raw data. Note that it is declared with CEventProcessor as a base class.
(5)
Our constructor needs to be declared because it cannot be compiler defined. Specifically, we will make a connection to our parameters by allocating and saving a tree parameter array and binding it to our parameters.
(6)
The function call operator is invoked for each event. Since our job is going to be to unpack the event, we need to declare this method. This method is pure virtual in the base class and therefore all event processors must declare it.
(7)
This reference to a tree parameter array will be used to access our parameters.

Let's fill in the class implementation. We'll do this in sections so that no single sample chunk of code is very large. Note that some sections may be presented out of order for the sake of clarity.

Example 10-10. V775 raw unpacker implementation includes and defs (RawUnpacker.cpp)


#include "RawUnpacker.h"                                (1)
#include <TreeParameter.h>                        (2)
#include <TranslatorPointer.h>                    (3)
#include <BufferDecoder.h>
#include <TCLAnalyzer.h>                          (4)
#include <assert.h>

#include <stdint.h>

static const uint32_t TYPE_MASK (0x07000000);
static const uint32_t TYPE_HDR  (0x02000000);
static const uint32_t TYPE_DATA (0x00000000);
static const uint32_t TYPE_TRAIL(0x04000000);

static const unsigned HDR_COUNT_SHIFT(8);               (5)
static const uint32_t HDR_COUNT_MASK (0x00003f00);
static const unsigned GEO_SHIFT(27);
static const uint32_t GEO_MASK(0xf8000000);

static const unsigned DATA_CHANSHIFT(16);
static const uint32_t DATA_CHANMASK(0x001f0000);
static const uint32_t DATA_CONVMASK(0x00000fff);

                    
(1)
This include directive includes the header that defines the class we are going to implement. The compiler needs the class definition in order to verify that our implementation methods have the right signatures and refer to defined data in the class.
(2)
This header defines the tree parameter package classes. We need it specifically to import the definition of CTreeParameterArray.
(3)
This header imports the definition of the translating pointer. In the discussion of the readout code we mentioned the concept of data endian-ness. Translating pointers are pointer-like objects that work with byte order signatures within the data and automatically convert data, if needed, to native format from the format of the data in the buffer.

Note that byte order signatures will typically be those of the system writing the event data. If those data, in turn, come from a device with a different endian-ness than the host system, the user will have to know this and perform additional conversions.

(4)
These includes are for miscellaneous headers I am not going to describe in detail.
(5)
These constants give symbolic definitions to the field shift counts, masks, and values that will be seen in the raw data from the V775. It is best to give symbolic names to values of this sort rather than to just put them into the code where a reader may have trouble identifying their meaning.

Example 10-11. V775 raw unpacker - getLong utility (RawUnpacker.cpp)


static inline uint32_t getLong(TranslatorPointer<uint16_t>& p)
{
  uint32_t l = *p++ << 16;
  l         |= *p++;

  return l;
}

                    

While our readout program runs on a little-endian computer, the VME bus is big-endian. This utility takes a translator pointer object that points to a specific 32-bit item in big-endian format and converts it to native format.

Note that the translating pointer object is passed by reference. The pointer is incremented to point beyond the uint32_t that was converted.
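
For example, successive calls consume successive 32-bit items; this is a minimal sketch in which p is assumed to have been set up as in the operator() implementation shown later:


uint32_t header = getLong(p);   // consumes the 32-bit header; p now points past it
uint32_t datum  = getLong(p);   // consumes the next 32-bit item; p advances again
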

Example 10-12. V775 raw unpacker object constructor/destructor (RawUnpacker.cpp)


CRawUnpacker::CRawUnpacker() :                        (1)
  m_times(*(new CTreeParameterArray("t", 4096, 0.0, 4095.0, "channels", 32, 0)))
{}

CRawUnpacker::~CRawUnpacker()
{
  delete &m_times;                                    (2)
}


                    
(1)
The constructor is required to initialize the reference we have to a tree parameter array. It does this by dynamically creating a new tree parameter array whose base name is t. This will create actual parameters named t.00 ... t.31.
(2)
If the object is ever destroyed, its tree parameter array must be destroyed as well to prevent memory leaks.

Example 10-13. V775 raw unpacker unpacking events (RawUnpacker.cpp)


Bool_t CRawUnpacker::operator()(const Address_t pEvent,
                                CEvent& rEvent,             (1)
                                CAnalyzer& rAnalyzer,
                                CBufferDecoder& rDecoder)
{
  TranslatorPointer<uint16_t> p(*rDecoder.getBufferTranslator(), pEvent); (2)
  CTclAnalyzer& a(dynamic_cast<CTclAnalyzer&>(rAnalyzer)); (3)

  TranslatorPointer<uint32_t> p32 = p;
  uint32_t  size = *p32++;
  p = p32;                                                (4)
  a.SetEventSize(size*sizeof(uint16_t));

  uint32_t header = getLong(p);
  assert((header & TYPE_MASK) == TYPE_HDR);
  assert(((header & GEO_MASK) >> GEO_SHIFT) == 0xa);    (5)
  int nchans = (header & HDR_COUNT_MASK) >> HDR_COUNT_SHIFT;

  for (int i =0; i < nchans; i++) {                      (6)
    uint32_t datum = getLong(p);
    assert((datum & TYPE_MASK) == TYPE_DATA);
    int channel = (datum & DATA_CHANMASK) >> DATA_CHANSHIFT; (7)
    uint16_t conversion = datum & DATA_CONVMASK;

    m_times[channel] = conversion;                         (8)

  }

  uint32_t trailer = getLong(p);
  assert((trailer & TYPE_MASK) == TYPE_TRAIL);        (9)

  return kfTRUE;
}

                    
(1)
The operator() method of a registered event processor is called for each event. This function has access to the raw event via the pEvent parameter and access to the unpacked parameters via the rEvent parameter array, although binding tree parameters and tree parameter arrays to rEvent is usually simpler.

SpecTcl has two other objects that event processors need. The rAnalyzer parameter is a reference to an object that oversees the flow of control through the analysis of the data. It is actually the object that invoked operator(). Knowledge of the top level structure of the event data is held in rDecoder which, for historical reasons, is called a CBufferDecoder. It is responsible for picking apart the outer structure of the data and passing the decoded data, along with its type, to the analyzer for dispatch.

The expectation is that the analysis pipeline will:

  • Decode the raw event turning it into parameters that are in rEvent (tree parameters get automatically bound to elements of rEvent before the analysis pipeline starts).

  • Inform the analyzer about the number of bytes that are in the event.

  • Detect and inform the analyzer about failures in the pipeline that should abort its execution and discard the parameters prior to the histogramming pass over the data.

All of the work done by operator() is directed at one of these three tasks. Note that in a larger pipeline, the second of these tasks, telling the analyzer the event size, only needs to be performed by one of the pipeline elements. There is also no need for all elements of the analysis pipeline to touch the raw event data and, in complex analysis, usually only a few will.

(2)
SpecTcl and NSCLDAQ can run on systems of any endianness. TranslatorPointer objects behave somewhat like pointers but automatically translate data from the readout system's byte ordering to the host system's byte ordering. Therefore, to be fully portable, we encourage all access to raw event data to be done via a translating pointer.

This line of code creates a translating pointer for uint16_t data that points to the raw event.

(3)
SpecTcl's actual default analyzer is a CTclAnalyzer object. While the analyzer object can be configured by the user, normally this is not done.

We need to know the analyzer type because the method used to pass the size of the event back to the analyzer is, unfortunately, analyzer-dependent. In this line we initialize the variable a to be a reference to the analyzer. The dynamic cast will throw an exception if the analyzer is not, in fact, a CTclAnalyzer or an object of a type derived from CTclAnalyzer.

(4)
This section of code pulls the first 32 bit item from the event, the event size, and uses the CTclAnalyzer SetEventSize method to inform the analyzer of the event size.

The code does not care about the byte ordering of the data in the buffer because it creates a TranslatorPointer<uint32_t> to extract this size. Note that translator pointers of various simple data types can be assigned to each other.

(5)
Immediately following the event we should see the header for the V775 data. This code ensures that this is the case. It does this by:

  • Using our utility function getLong to extract the next 32 bit item from the buffer in host order.

  • Ensuring that the type field of that item is that of a header (the assert macro will make the program exit with an error message unless the program is compiled with -DNDEBUG).

  • Ensuring the geographical address field of the item matches the geographical address we programmed into the module.

Once the item is validated as a header, the number of channels of data is extracted from it. Note that in production code, the use of assert is probably not appropriate; some alternatives, sketched in the example after this list, are:

  • Throw an std::string exception. The analyzer catches those exceptions and outputs the message to stdout. If the analyzer catches an exception, it aborts the event processing pipeline and does not histogram any parameters that were extracted up to that point.

  • Output an error message and return kfFALSE. This return value causes the analyzer to abort the event processing pipeline and not to run the histogrammer for the parameters extracted so far.

  • In some cases it may even be appropriate to output a message and return kfTRUE. That stops processing the event data in this stage but lets the analyzer continue with the next stage of the pipeline (or with histogramming the parameters unpacked so far if this is the last stage).
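
Here is a minimal sketch, not part of the skeleton, of how the header check could be written with these alternatives instead of assert. It assumes <string> and <iostream> have also been included, and the message text is illustrative only:


  uint32_t header = getLong(p);
  if ((header & TYPE_MASK) != TYPE_HDR) {
    // Alternative 1: throw an std::string.  SpecTcl catches it, prints the
    // message and aborts the rest of the pipeline for this event.
    throw std::string("CRawUnpacker: expected a V775 header word");

    // Alternative 2: complain and abort the pipeline for this event:
    //   std::cerr << "CRawUnpacker: expected a V775 header word" << std::endl;
    //   return kfFALSE;

    // Alternative 3: complain but let the rest of the pipeline (and the
    // histogramming pass) run:
    //   std::cerr << "CRawUnpacker: expected a V775 header word" << std::endl;
    //   return kfTRUE;
  }
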

(6)
This loop unpacks the channel data from the TDC.
(7)
The loop first asserts that the item that should contain channel data actually does. It then extracts the TDC channel number and data value from the datum.

Note that the data also contain a Valid bit. The default programming of the TDC suppresses data for which this bit is not set. If you turn that suppression off, you will need to decide what to do with data that are not valid.
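
For example, with suppression turned off the inside of the unpacking loop might become something like the sketch below. The DATA_VALIDMASK constant is hypothetical; check the actual position of the Valid bit against the V775 manual before using it:


  // Hypothetical mask for the Valid bit (an assumption, not from the manual):
  // static const uint32_t DATA_VALIDMASK(0x00004000);

  uint32_t datum = getLong(p);
  assert((datum & TYPE_MASK) == TYPE_DATA);
  if (datum & DATA_VALIDMASK) {             // only store conversions flagged valid
    int      channel    = (datum & DATA_CHANMASK) >> DATA_CHANSHIFT;
    uint16_t conversion = datum & DATA_CONVMASK;
    m_times[channel] = conversion;
  }
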

(8)
Tree parameter array objects mimic arrays to the extent that they support indexing. Therefore setting the actual parameter is as easy as this line of code. In this way, t.00 is the data from TDC channel 0 and so on.
(9)
The data words from the TDC should be followed by a trailer. This code asserts that this is the case.

10.4.1.3. Adding the unpacker to the analysis pipeline

We have code to unpack the TDC. SpecTcl needs to be told to use that code. This is done in the method CreateAnalysisPipeline in the skeleton file MySpecTclApp.cpp.

That file contains an example event processing pipeline which needs to be deleted. In this section we'll look at the modifications you need to make to MySpecTclApp.cpp for our simple setup.

First locate the section of that file that contains #include directives. Add the following line after the last #include:


#include "RawUnpacker.h"
                    

That makes our raw event unpacking class CRawUnpacker known to the compiler in this file.

Next modify the CreateAnalysisPipeline method body to look like this:


void
CMySpecTclApp::CreateAnalysisPipeline(CAnalyzer& rAnalyzer)
{
  RegisterEventProcessor(*(new CRawUnpacker), "Raw-TDC");

}

                    

RegisterEventProcessor adds a new event processor to the end of the analysis pipeline. The first parameter is a reference to the event processor object (an instance of a CRawUnpacker). The second parameter is a name to associate with the pipeline element.

The name is an optional parameter, but there is the capability to introspect and to modify the analysis pipeline at run time (adding and removing pipeline elements at specific points in the pipe). The methods that locate a specific pipeline element require a name for that pipeline element.

10.4.1.4. Creating raw time spectra

We have our unpacking code and SpecTcl has an instance of our unpacker as its only analysis pipeline element. If we ran SpecTcl now it could unpack the data just fine but nothing would be done with the unpacked parameters. We also need to define a set of spectra. We are going to write a startup script for SpecTcl that does this and ensure that this script is run by SpecTcl when it starts up.

Before we do this, I want to point out that the Tcl in the name SpecTcl is there because SpecTcl uses an enhanced Tcl interpreter to implement its command language. Tcl is a powerful scripting language with a very simple and regular syntax.

For information about Tcl, and its graphical user interface language Tk, see http://www.tcl.tk/doc/. http://www.tcl.tk/man/tcl8.5/tutorial/tcltutorial.html is a good online tutorial that can get you up and running with the simple stuff quickly. http://www.tkdocs.com/tutorial/index.html is a tutorial for Tk if you are interested in building GUIs on top of SpecTcl.

Basing SpecTcl's command language on Tcl and Tk allows you to automate tasks SpecTcl performs as well as to tailor application-specific graphical user interfaces (GUIs) on top of the program. Most experimental groups have their own GUIs. In our script we're going to look at two approaches to defining our spectra. One is simple but verbose; the other takes better advantage of Tcl's capabilities and is much more concise but still clear.

Example 10-14. Defining raw Time spectra the hard way.


spectrum t.00 1 t.00 {{0 4095 4096}}
spectrum t.01 1 t.01 {{0 4095 4096}}
spectrum t.02 1 t.02 {{0 4095 4096}}
spectrum t.03 1 t.03 {{0 4095 4096}}
spectrum t.04 1 t.04 {{0 4095 4096}}
spectrum t.05 1 t.05 {{0 4095 4096}}
spectrum t.06 1 t.06 {{0 4095 4096}}
spectrum t.07 1 t.07 {{0 4095 4096}}
...
spectrum t.31 1 t.31 {{0 4095 4096}}

                    

Typing all that was pretty traumatic and error prone, wasn't it? Let's first look at the spectrum command, which is used to define, delete and list information about spectra. It is used to create spectra with the general form:


spectrum name type parameter(s) axis-definition(s)
                    

Where

name

Is the name of the spectrum you are creating and must be unique.

type

Is the type of spectrum being created. SpecTcl supports a rich set of spectrum types. Type 1 is a one dimensional spectrum.

parameter(s)

Is a list of parameters that are used to increment the spectrum. The actual meaning of this will vary from spectrum type to spectrum type. A one dimensional spectrum needs only one parameter.

axis-definition(s)

Are a list of three-element lists that define the axis range and the number of bins on each axis. These spectra each have a single axis definition with a range of 0..4095 and 4096 channels along that axis.

The number of axis definitions depends on the type of the spectrum; 2-d spectra (type 2), for example, have two axis definitions (see the example below).
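
For example, a 2-d spectrum of t.00 against t.01 could be defined as shown below. This is a sketch; the spectrum name t00-vs-t01 is arbitrary, and 512 bins are used on each axis to keep the spectrum small:


spectrum t00-vs-t01 2 {t.00 t.01} {{0 4095 512} {0 4095 512}}
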

When you were typing this in I hope you were thinking "If Tcl is a scripting language there must be a better way to do this, right?" (Well, actually I'm hoping you didn't bother to type this all in and were waiting for this next version.)

Have a look at this:


for {set i 0} {$i < 32} {incr i} {
    set name [format t.%02d $i]
    spectrum $name 1 $name {{0 4095 4096}}
}
                    

Key things to know when decoding this:

$ substitution

If a $ precedes a variable name, the value of that variable is substituted at that place in the command prior to executing the command. (e.g. $i < 32).

[] substitution

If a string is enclosed in square brackets, it is considered to be a command and the result of executing that command is substituted at that point in the original command prior to execution.

(e.g. [format t.%02d $i]).

format command

The format command is like the C function sprintf: the first command parameter is, essentially, a sprintf format string. The remaining command parameters are values for the placeholders in that string. The command result is what sprintf would have stored in its output buffer.

For example, in format t.%02d $i, if i has the value 3 the format command result would be t.03.
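
Putting these pieces together, here is a tiny sketch you can paste into tclsh to see the substitutions in action:


set i 3
set name [format t.%02d $i]    ;# [] runs format; $i substitutes the value 3
puts $name                     ;# prints t.03
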

Now that's something that's much more type-able. Create a file spectra.tcl and copy/paste that text into it.

Having created spectra.tcl we want to ensure that our SpecTcl will execute the commands in that file when it starts. SpecTcl automatically executes a script named SpecTclRC.tcl in the current working directory when it starts running. This script is executed towards the end of initialization, after the analysis pipeline has been created (and hence after the tree parameters have been created and bound to actual SpecTcl parameters).

A sample SpecTclRC.tcl is provided with the skeleton you copied. Locate the line:


splash::progress $splash {Loading SpecTcl Tree Gui} 1
                

Insert the following lines above that line:


set here [file dirname [info script]]
source [file join $here spectra.tcl]
sbind -all
                

The first line defines the variable here to be the directory in which the SpecTclRC.tcl file lives. The second line sources the spectra.tcl file we created from that directory. The third line binds all spectra into the shared memory region SpecTcl uses to provide spectra to its displayer.

10.4.1.5. Building and testing what we have so far.

Let's see if what we have so far actually works. To do this we need to:

  • Modify the skeleton Makefile so that our code will be built and linked to SpecTcl

  • Build our tailored SpecTcl

  • Run our tailored SpecTcl and attach it to the online data stream.

  • Start a run so that we're taking data

  • View the spectra SpecTcl creates.

As with the Makefile for the SBS readout program, an OBJECTS variable lists the names of the objects we want to build. Edit the Makefile that came with the skeleton you copied and change the definition of OBJECTS to look like this:


OBJECTS=MySpecTclApp.o RawUnpacker.o              
                

To build your tailored SpecTcl you can then type:


make
                

Run SpecTcl via the command:


./SpecTcl
                

A number of windows will pop up. We're going to use two of them. The window titled treegui will be used to connect to the online data. The window titled Xamine will be used to look at plots of our spectra.

To attach SpecTcl to the online system, use the treegui window and select the Online... menu item from the Data Source menu at the top of that window. In the dialog that pops up, change the radio buttons at the bottom of the dialog to select ring11 and click Ok. You are now connected to the online data coming from the system on which you are logged in (so be sure that system is the one physically connected to your VME crate). You can also acquire data taken on a remote host, as long as the system you are logged into is running NSCLDAQ. Simply type the name of that system in the box labeled Host: before accepting the dialog.

Now, using another terminal window login, run the Readout program you had already created and begin a run. You should see the statistics at the bottom of the SpecTcl treegui window changing, showing that SpecTcl is analyzing data. SpecTcl should not exit (an exit would most likely mean that an assertion failed).

To view a spectrum, click the Display button at the bottom of the Xamine window and select the desired spectrum from the list, either by double clicking it or by selecting it and clicking Ok.

If you are using the sample electronics setup, the spectra that have signals should show sharp peaks that correspond to the delay you have set in your gate and delay generator.

10.4.2. Producing parameters computed from the raw data

In this section we are going to write a second event processor. This event processor will be positioned after the raw unpacker we just wrote in the event processing pipeline. It will produce parameters that are the differences between the times in different channels of the TDC. To test this event processor you will need to fan out your delayed start so that at least two channels will have data. For more fun, delay the fanned signals so that there is a time difference between those channels.

The purpose of this section is to teach the following concepts: computing new parameters from previously unpacked parameters, checking that a parameter has been assigned a value with isValid, and writing pipeline stages that do not need to know the structure of the raw event.

We will produce parameters with names like tdiff.00.01 which will be the time difference between channels 0 and 1. For simplicity we will produce parameters like tdiff.00.00 even though these will always have the value 0. We just won't produce spectra for those parameters.

Let's see what the header for an event processor like this might look like:

Example 10-15. Header for time difference event processor (Tdiff.h)


#ifndef _TDIF_H
#define _TDIF_H

#include <config.h>
#include <EventProcessor.h>

class CTreeParameterArray;

class CTdiff : public CEventProcessor
{
public:
  CTdiff();
  virtual ~CTdiff();

 virtual Bool_t operator()(const Address_t pEvent,
                            CEvent&         rEvent,
                            CAnalyzer&      rAnalyzer,
                            CBufferDecoder& rDecoder);
private:
  CTreeParameterArray& m_times; 
  CTreeParameterArray* m_diffs[32];
  
};

#endif
                    
                

All this should look very familiar. The notable difference (besides the change in the class name) is that, in addition to a tree parameter array reference for the raw times, m_diffs is an array of 32 pointers to CTreeParameterArray objects. We use pointers because, without creating a new class to encapsulate an array of CTreeParameterArray objects, we don't have a good way to initialize an array of references.

The idea of this data structure is that m_diffs[i] will be an array of differences between channel i and the other channels of the TDC.

We will make our life simple by not considering the problems inherent in allowing copy construction and assignment for a class like this.

Let's look at the implementation of the CTdiff class:

Example 10-16. CTdiff implementation (Tdiff.cpp)


#include "Tdiff.h"
#include <TreeParameter.h>
#include <BufferDecoder.h>
#include <TCLAnalyzer.h>
#include <stdio.h>

CTdiff::CTdiff() :
  m_times(*(new CTreeParameterArray("t", 8192, -4095, 4095, "channels", 32, 0)))
{
  char baseName[100];
  for (int i =0; i < 32; i++) {
    sprintf(baseName, "tdiff.%02d", i);
    m_diffs[i] =
      new  CTreeParameterArray(baseName, 8192, -4095, 4095, "channels", 32, 0);
  }
}

CTdiff::~CTdiff()
{
  for (int i =0; i <32; i++) {
    delete m_diffs[i];
  }
}


Bool_t CTdiff::operator()(const Address_t pEvent,
                         CEvent& rEvent,
                         CAnalyzer& rAnalyzer,
                         CBufferDecoder& rDecoder)
{
  for (int i = 0; i < 32; i++) {
    if (m_times[i].isValid()) {                       (1)
      for (int j = 0; j < 32; j++) {
        if (m_times[j].isValid()) {                   (2)
          (*m_diffs[i])[j] = m_times[i] - m_times[j]; (3)
        }
      }
    }
  }

  return kfTRUE;
}
                    
                
(1)
In order to be able to compute the difference of a pair of parameters we need to know that both parameters have been assigned a value by at least one prior stage of the analysis pipeline. Tree parameters, as well as elements of the rEvent vector, have a method called isValid which returns true if this is the case.

This line ensures that the first time has been assigned a value.

(2)
This line ensures that the second parameter in the difference has been assigned a value.
(3)
If both parameters have been assigned a value, the difference is computed and assigned to the appropriate tree parameter.

Note how all of this is done without needing to know the structure of the raw event data. Should the experiment need to change the hardware in a way that changes that structure, this code still works properly. A well structured SpecTcl tailoring should consist of several event processors working together to produce the needed parameters.

Don't forget to add an instance of this class to the analysis pipeline.
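
For example, CreateAnalysisPipeline in MySpecTclApp.cpp might become the sketch below. The pipeline element name "Tdiff" is arbitrary; Tdiff.h must also be added to the #include directives and Tdiff.o to the OBJECTS line of the Makefile:


void
CMySpecTclApp::CreateAnalysisPipeline(CAnalyzer& rAnalyzer)
{
  RegisterEventProcessor(*(new CRawUnpacker), "Raw-TDC");   // decode the raw V775 data first
  RegisterEventProcessor(*(new CTdiff),       "Tdiff");     // then compute the time differences
}
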

We'll leave it as an exercise to create a script that makes these spectra and to modify SpecTclRC.tcl to source that script into SpecTcl at startup. The axis specifications of these spectra should be e.g. {{-4095 4095 8192}}.
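
One possible form of such a script is sketched below, assuming the parameter names tdiff.NN.MM produced above; it skips the always-zero tdiff.NN.NN parameters:


for {set i 0} {$i < 32} {incr i} {
    for {set j 0} {$j < 32} {incr j} {
        if {$i != $j} {
            set name [format tdiff.%02d.%02d $i $j]
            spectrum $name 1 $name {{-4095 4095 8192}}
        }
    }
}
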