Data Acquisition

Chapter 1: Getting Started [return to top]

This chapter describes how to get started with the NSCL data acquisition system. In this chapter we will:

  • Give an overview of the system (Section 1.1).
  • Describe how to log on as a user (Section 1.2).
  • Describe how to get the system started (Section 1.3).
  • Describe how to start and end runs (Section 1.4).
  • Describe how to set up the event trigger signals (Section 1.5).

1.1 Overview of the System [return to top]

The model of data acquisition at NSCL is that data comes from a set of sources and is distributed to many destinations.

  • Data can be received by more than one computer for on-line analysis.
  • Data can be received by more than one program running in a computer.

Data is initially read in via a program called the event readout program. This program describes how and when each input to the experiment is read. The event readout program can also do some data processing. Event buffers (i.e. events collected into convenient-sized blocks) can be transmitted to more than one "back end" computer for more detailed processing. All systems in the NSCL data acquisition system run on the Linux operating system.

A block diagram of a typical configuration is shown in:

    Figure 1-1: Block Diagram.

The items you will need to know the most about are StagerGui, Readout, and SpecTcl.

1.2 Logging In as a User [return to top]

Experimental users will typically log in to one or more Linux and Windows machines located in the Data U's and the experimental vaults. Prior to your arrival, two accounts will have been established for your experiment, one on the Linux data acquisition cluster and one on the Windows system. Both accounts will have the same username and the same password. To log in, supply:

  • your username, which is the 5-digit number 0xxxx, where xxxx is your 4-digit experiment number, and
  • the initial password, which is e0xxxx, where xxxx has the same meaning as above.

Immediately after logging in, change your password on the Linux account by typing yppasswd and on the Windows account by pressing the Ctrl-Alt-Del keys and then clicking on change password; then follow the instructions that appear.

Your Linux account will have a home directory, a set of other directories with skeleton codes that can be tailored for your experiment (see Section 1.3), an assigned stage area for event data, and an assigned tape drive. A stage area is where your event data will be written while you are recording data. The stager component of the NSCL data acquisition system allows you to determine when data is written from this stage area onto tape. The stager also allows you to select runs to be retained or deleted after staging to tape occurs. When you are assigned a stage area, you will also be assigned a tape drive. The tape special file associated with the tape drive will be set up so that only your account can write to it for the duration of your experiment.

1.3 Getting the System Started [return to top]

One of the first things an experimenter must do to use the data acquisition system is to write the readout program. This is done by starting with already-provided skeleton files and adding code to them to tailor them to your experiment. Within the "userdevel" directory in your account, you will find three directories: "readout," "spectcl," and "scalers." Each has its own set of skeleton codes, which have been prepared to analyze data from a very simplified experimental setup; they are well documented and relatively easy to browse through. Within each of these directories is a "Readme" text file which provides some guidance on the steps to take to tailor the codes to an individual experiment. In this way, users who are new to the NSCL system can learn by example. The procedure for tailoring to your experiment is described in:

http://www.nscl.msu.edu/~fox/daq/readoutdocs/Tailoring.htm.

For additional details, please see Chapter 3: Software and the links therein.

1.4 Starting and Ending Runs [return to top]

To start a run: Click the Begin button on the run control graphical user interface. To end a run: Click the End button on the run control graphical user interface.

1.5 Front End Trigger Setup [return to top]

The front end requires trigger signals and emits several status signals. This section describes these signals. For more complete electronics setup information, refer to Chapter 2: Setting Up the Electronics.

The signals in Table 1-1 are NIM fast levels and must be sent to the indicated segments of the front end. This list is not exhaustive. It does not include things like ADC gates and so on.

Table 1-1 Trigger inputs to the front end

Signal        Meaning
Master.Live   Indicates an event start. [1]
Bit strobe    Strobes pattern bits (see below). [2]
Pattern bits  Controls the experiment readout. [3]

Notes to Table 1-1

  1. Should be connected to the VME Branch driver INT2 input, and CAMAC gate generator input 1 start. Often also used to gate the pattern register.
  2. Must come before or simultaneously with Master.Live, and in coincidence with the pattern bits. Goes into the strobe or GATE input of the CAMAC bit register.
  3. For each section of the experiment detection system, the user assigns a bit and readout instructions. The readout instructions associated with a bit are only performed if the corresponding bit is set in the pattern register at readout time.

Various outputs of the status NIM OUT module indicate the occurrence of various conditions in the acquisition system. These conditions are described in Figure 1-2.

Figure 1-2 System NIM OUT configuration

For the system to be able to read these signals, one of the CAMAC crates, crate 2 of branch 0, must be stuffed with modules in a particular way. This crate is referred to as the system crate or scaler crate. Table 1-2 shows the slot assignments for the system crate. There is also a NIM OUT module in slot 19 of crate 2 which provides signals suitable for clearing digitizers after event readout.

Table 1-2 System Crate Slot Assignments

Slot(s)  Usage
1        Dataway Display
19       Clears
20       System status NIM OUT
23       Branch terminator or empty
24/25    Crate controller

1.6 In Case of Problems [return to top]

If you have problems, stay calm and try to work things out rationally. Be sure to fill out a bug report as you solve the problem, so that the details are on record before you forget them and we have to ask you what happened. If you cannot solve the problem that you are having, then call Ron Fox or Eric Kasten from the computer group; both have offices in Room 166. They are considered to be on call during experiments. If they need to be contacted on weekends or after working hours on weekdays, please have the Cyclotron Operators call them.

Chapter 2: Setting Up the Electronics [return to top]

This chapter describes how to set up the NIM logic for the experiment-independent part of the electronics. This includes:

  • Producing a Computer Busy signal and using it to form the MASTER.LIVE trigger.
  • A description of the CAMAC system.
  • How to set up scalers, and how to use scaler inhibits to measure dead time.
  • ADC and QDC strobes and avoiding LAM timeouts.

Of course, experimental set-ups vary greatly, and we cannot cover all aspects of the electronics setup. These notes, however, should aid you in getting the analog signals encoded and correctly read.

2.1 Some Definitions [return to top]

Some terminology needs to be defined. By event we mean something has triggered the front end to begin a readout of some subset of the detectors in the experiment. Associated with each event is a bit mask of event type. Simple experiments might have only one detector; there would then be only one possible event type, even though there might be more than one element in the detector. More complicated experiments have several detectors, typically at different angles to the beam. Each of these detectors constitutes a different event type and needs to be tagged for the data acquisition system. The readout software allows you to produce event packets which identify the subsets of your experiment that have been read out. There are some things you have to do regardless of which detector fired. For example, you have to decide what triggers the event readout. The conditions for this trigger constitute the master event logic.

2.2 CAMAC System Overview [return to top]

The CAMAC system of the front end is based on the CAMAC parallel highway system. This scheme allows a single VME CAMAC controller to control up to seven CAMAC crates. Each VME system can contain as many as 8 VME CAMAC controller modules and 56 associated crates.

Most of the experiments that are set up at NSCL do not require anything close to that number of CAMAC slots. Typically, experiments done here require about one crate full of event signal digitizers and half a crate of scaler modules. This is such a typical upper bound that a number of front end systems have been pre-configured with a single event crate and a single scaler/system crate. These systems are known as spdaq systems. Each major apparatus has a dedicated spdaq system.

2.3 Master Event Logic [return to top]

All event types need to start the computer, which involves sending a NIM signal to the Branch driver INT2 input. Additionally, the bit register (sometimes called the coincidence register) needs to be strobed for all events. This Bit Strobe should come before or simultaneously with the Master start. Typically, all of the above signals come from the same logic fan-out (the Master event).

The Master event signal needs to be present regardless of which detector fired. Thus there has to be a logical fan-in (OR) of the individual logic signals for each event type (assuming that we are discussing an experiment with more than one detector; if only one event type is possible, then the Master logic reduces to the event type logic).

Computer busy can be generated via a Gate and Delay Generator (GADG) in latch mode, started by Master.Live and stopped by the Event Readout Done NIM OUT. This GADG provides information on when the front end is busy handling data. This information comes in the form of a NIM signal which is held true (-1.2 V) from the reception of the start input until the event processing is finished. There is also a NIM complement output. The experimenter should use these outputs to block subsequent strobes to the bit register and digitizers while an event is being processed. This may be done by demanding a coincidence between the Master and computer-not-busy in the individual event logic before the fan-in to the Master logic.

The Gate Generator outputs are also the means to measure the computer dead time, which has to be accounted for in any calculation of yields or cross sections. The recommended way to measure the dead time is to send the Master triggers both before and after the computer-not-busy logic requirement to a pair of scalers. These scalers then record the number of events received and the number accepted, and yields can be corrected for the computer busy time by this ratio. It is not recommended that other scaler inputs, in particular the digitized output of the current integrator, be inhibited by the computer busy, since such inputs may not have the same timing relationship with the event triggers.
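
As a worked illustration of this bookkeeping (the numbers below are invented, not NSCL values), the live fraction is simply the ratio of the two scalers, and the yield is scaled by its inverse:

    #include <cstdio>

    int main()
    {
        // Illustration only: scaler readings accumulated over one run.
        double triggersReceived = 125000.0;  // Masters before the computer-not-busy requirement
        double triggersAccepted = 100000.0;  // Masters after the computer-not-busy requirement
        double rawCounts        = 8000.0;    // some measured yield of interest

        double liveFraction   = triggersAccepted / triggersReceived;  // 0.80 here
        double correctedYield = rawCounts / liveFraction;             // 10000 here
        std::printf("live fraction = %.2f, corrected yield = %.0f\n",
                    liveFraction, correctedYield);
        return 0;
    }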

Note that the duration of the computer-busy period depends largely on what type and how many digitizing modules are being read for a given event. There is also a fixed overhead of CPU time per event to complete the event processing, but this is normally small compared to the digitization time.

A Gate and Delay Generator can also be set up to provide an output to let the experimenter know when the system is reading the scalers. Use the GADG in latched mode, started on Scaler Readout and stopped on the Scaler Readout Done NIM OUT. This information is not intended for dead-time corrections or for inhibiting further bit register strobes, since the Slave processor is still able to handle data. The reason lies in the fact that the scalers are read sequentially (from the lowest channel in the lowest slot through to the highest channel in the highest slot) with some finite time to read each scaler channel, but the channels in each scaler are cleared only after all have been read out. If you know how long it takes to read each scaler, you can correct each scaler for the counts lost in the period between when the scaler was read and when it was reset. In practice, this is usually a small effect: a 12-channel scaler is read in about 72 μs; if four such modules are read every 10 seconds, the discrepancy in the worst case (the first scaler channel) is less than one hundredth of 1%.
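
To make the size of that effect concrete, here is the same back-of-the-envelope estimate in code form, using the numbers quoted above (the values are illustrative):

    #include <cstdio>

    int main()
    {
        // Worst case is the first channel read: it sits read-but-uncleared
        // until every remaining channel of every module has been read.
        double readTimePerModule = 72e-6;  // seconds for one 12-channel scaler
        double nModules          = 4.0;    // modules read per scaler readout
        double readoutPeriod     = 10.0;   // seconds between scaler readouts

        double worstCaseWindow = nModules * readTimePerModule;    // ~288 microseconds
        double lostFraction    = worstCaseWindow / readoutPeriod; // ~2.9e-5
        std::printf("worst-case lost fraction = %.4f%%\n", lostFraction * 100.0);
        return 0;
    }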

2.4 Specific Event Type Logic [return to top]

The logic signals that must be provided independently by each event type are the strobes (gates) to the digitizing modules (ADC's and QDC's). The length of these signals depends on the time duration and characteristics of the analog signals to be encoded. The gate can be opened at any time after the Master Start is given. (Under special circumstances, where digitization modules are first cleared and reset, the gates to the ADC's can come before the Master Start. The strobes should still be in anti-coincidence with the computer-busy signal.) For peak-sensing ADC's (e.g. Phillips 7164H, Caen V785, or Silena 7420/G), the gate need not extend much beyond the positive peak of the analog signal -- see the timing schematic in Figure 2-1.

Figure 2-1 Strobe Timing for Digitizing Devices

Most modern digitizers will have converted before the computer can react to the Master.Live gate. Older modules, however, may require that you wait for conversions to be complete. This is normally done by waiting for the module to assert a LAM (CAMAC Look At Me). It is important to wait only for a limited time, or else a module which does not receive a strobe will cause the system to wait forever for a LAM which will never occur. The normal action to take when a module is not ready after waiting past this "timeout" is to declare a LAM timeout. Usually, some indicator is put into the event, and in many cases a NIM output is signaled so that a scaler channel can count the number of LAM timeouts. LAM timeouts are very undesirable, since the resulting dead time will be large. Usually, LAM timeouts are an indication that the electronics setup is incorrect and a module was not properly gated.
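
A minimal sketch of such a bounded LAM wait is shown below; it assumes nothing about the NSCL macros (the CAMAC test itself is passed in as a callable), and the polling bound is an arbitrary illustration.

    #include <functional>

    // Poll a module's LAM, but give up after a fixed number of tries so that
    // a module which never received a strobe cannot hang the readout.
    bool waitForLam(const std::function<bool()>& lamIsSet, int maxPolls = 1000)
    {
        for (int i = 0; i < maxPolls; ++i) {
            if (lamIsSet()) {
                return true;            // conversion finished in time
            }
        }
        // LAM timeout: the caller would typically flag the event and pulse a
        // NIM output so a scaler channel can count how often this happens.
        return false;
    }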

Aside from avoiding LAM time-outs, the experimenter should be aware that the dead time is largely influenced by the slowest digitizing unit to be read for a given event.

Chapter 3: Setting up the Software [return to top]

Setting up the software for an experiment is the responsibility of the experimentalists. In many cases existing software can be used as a starting point for an experiment. Experimentalists may be able to use ready-made software if they are running on the larger apparatus at the NSCL. The minimal software a user must create is:


  • Readout software: responsible for reading events from the digitization hardware given an event trigger.
  • SpecTcl: Responsible for online analysis and histogramming.

3.1.1 Readout Software [return to top]

Readout software responds to a computer trigger by reading events from the hardware. How this is done is highly experiment dependent. Therefore, the NSCL data acquisition system provides a skeleton that must be modified to produce the actual readout software.

3.1.2 General Procedure [return to top]


  • Create a directory (e.g. readout) in which this software will be developed and built.
    mkdir readout
    cd readout

  • Obtain copies of the experiment-specific parts of the readout skeleton (note: if you are running on established major apparatus at the NSCL, check with the apparatus maintainers to see if there are skeletons which already include the standard parts of the readout):
    cp /opt/daq/Readout/Skel/* .

  • Modify the file skeleton.cpp to create your readout code; comments within that file describe what you need to do, and a schematic example follows this list. Macros in the file /opt/daq/Readout/Include/macros.h support readout from many types of CAMAC devices. C++ classes to support VME digitizers are being continuously added.
  • Build the readout software:
    make
  • Debug the readout software with, for example, the GNU debugger gdb.
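
To give a flavor of the kind of code that gets added, here is a schematic fragment; the routine, constant, and buffer conventions below are placeholders of ours, not the actual macros or classes shipped with the skeleton, so follow the comments in skeleton.cpp and the tailoring document for the real names.

    #include <cstdint>

    // Placeholder sketch: read the modules belonging to one detector only if
    // its assigned pattern bit is set, appending the digitized values to the
    // event buffer and returning the next free buffer position.
    std::uint16_t* readMyDetector(std::uint16_t* pBuffer, std::uint16_t patternBits)
    {
        const std::uint16_t kMyDetectorBit = 0x0001;   // bit assigned to this detector
        if (patternBits & kMyDetectorBit) {
            // In real code these words would come from CAMAC/VME reads made
            // with the macros in macros.h or the supplied C++ classes.
            *pBuffer++ = 0;   // e.g. ADC channel 0
            *pBuffer++ = 0;   // e.g. ADC channel 1
        }
        return pBuffer;
    }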

3.1.3 SpecTcl [return to top]

SpecTcl is the NSCL standard on-line analysis program. It is based on extensions to the Tcl/Tk scripting language. While a knowledge of that language is useful if you want to take advantage of some of the more advanced features of SpecTcl, it is not necessary. You can get by learning only the basic SpecTcl command set documented in the SpecTcl online user guide and command reference. Advanced tailoring of SpecTcl is beyond the scope of this document. To prepare SpecTcl for use with your experiment you must:


  • Create a directory in which you will develop your tailored SpecTcl and build it:
    mkdir spectcl
    cd spectcl

  • Obtain a copy of the experiment-specific Skeleton software (note: if you are running with an established piece of NSCL apparatus, pre-built versions of SpecTcl or starting points for the Skeleton specific to that apparatus may already exist. Discuss your needs with the apparatus maintainers.)
    cp /opt/spectcl/current/Skel/* .

  • Edit the file MySpecTclApp.cpp to meet your needs; a schematic sketch follows this list.

  • Build your tailored SpecTcl:
    make

  • Test your SpecTcl with e.g. gdb.
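
As a schematic of what the editing amounts to (the class and interface below are a simplification of ours, not the actual SpecTcl base-class API), MySpecTclApp.cpp essentially registers code that unpacks each raw event into the parameter vector that your parameter definitions refer to:

    #include <cstdint>
    #include <vector>

    // Simplified stand-in for a SpecTcl event processor: copy raw event words
    // into parameter slots.  Indices and scaling are experiment specific.
    class MyUnpacker {
    public:
        void operator()(const std::vector<std::uint16_t>& rawEvent,
                        std::vector<double>& parameters)
        {
            // Illustration: raw word 1 carries the first ADC value and is
            // mapped to parameter index 0 (both choices are invented here).
            if (rawEvent.size() > 1 && !parameters.empty()) {
                parameters[0] = static_cast<double>(rawEvent[1]);
            }
        }
    };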

3.2 What the Components Do [return to top]

The following software components make up the NSCL data acquisition system:


  • Readout skeleton – a starting point from which to develop software that reads events in response to event triggers. This program is started by the staging subsystem and associated run control graphical user interface (GUI), and must run in the computer physically connected to the digitization hardware.

See http://www.nscl.msu.edu/~fox/daq/readoutdocs/index.htm.

  • Spectrodaq – A distributed buffer management and distribution server. This software is started up at Linux startup time by the script: /etc/rc.d/init.d/rc.Spectrodaq
  • SpecTcl – A histogramming/sorting program that can be connected either to data coming from Spectrodaq (via the pipe data source /opt/daq/Bin/spectcldaq), or from data on disk or pre-CCP event data on ANSI labeled tapes. The user normally starts this program.
  • See http://docs.nscl.msu.edu/daq/spectcl/.

  • Eventlog – Accepts event data from Spectrodaq and logs it to a disk file either on the local node or via ftp to a remote node. This program is normally started up by the staging subsystem and associated run control GUI.
  • See http://www.nscl.msu.edu/~fox/daq/eventlog.htm.

  • TclServer – A wish shell that has been extended to accept TCP/IP connections and execute commands received along those connections. The user normally starts this program and sources in a graphical user interface that monitors variables maintained by its clients.
  • See http://www.nscl.msu.edu/~fox/daq/Tclserver.htm.

  • Sclclient – A program that connects to the Spectrodaq server to receive scaler buffers and to a TclServer in which it maintains variables and invokes procedures that together produce a live display of the scalers. The user normally starts this program.
  • See http://www.nscl.msu.edu/~fox/daq/scaler.htm


    • Stager/RunControl GUI – This Tcl/Tk script manages the data taken by the experiment. The staging component manages the movement of data from the stage area to tape and subsequent retention or deletion of the raw event data. The Runcontrol GUI portion of this software provides a GUI on top of the Readout program that starts the eventlog program as required and manages the standard experimental directory structure.

    See http://www.nscl.msu.edu/~fox/daq/Stager/stager.htm

    3.3 Absolute Minimal Stuff Needed to Run (Testing) [return to top]

    All of the components described above are used during production running. When testing the system, you can simply start the Readout program by hand, or under the control of the gdb debugger, and start SpecTcl by hand to monitor the data produced by the system. Spectrodaq should always be running in a system and therefore need not be started by you.

    3.4 The Full System (Production Data Taking) [return to top]

    See above for detailed descriptions of the data taking components. In practice, the user will start:


    • SpecTcl and spectcldaq as a pipe data source.

    • TclServer and sclclient to provide a live display of the incremental scalers.

    • Stager, which in turn will start the run control GUI. The run control GUI will in turn start:


      • Readout in the appropriate system.

      • eventlog prior to the start of each run that begins with event recording enabled. The eventlog program is run in single run mode (it exits after a run is completely recorded).


    3.5 Scaler Program Instruction [return to top]

    Given here is a brief description of the way the Gamma Ray Group implements the scaler program; it can easily be adapted by other interested users. The fully functional template is available at the following paths at NSCL:

    • /automan/daq-soft/hu (from the Data-U Linux machines)

    • /usr/TruCluster/analysis/daqsoft/hu (from the Alpha machines)

    Your experimental account will have a well-documented copy of this program to serve as a starting point for your experiment. The program consists of three parts: description, readout, and display. For each experiment, the user should modify the description file to reflect the configuration and tailor the template scaler readout to fit into the whole readout program. The display script does not need to be modified.

    a) Description
    This is basically an XML file describing the scaler modules in the CAMAC crates and the way the user wants to display the channels on screen. The XML file is straightforward to understand. For each scaler module, the necessary information includes its CAMAC crate number, slot number, and the physical signals for each channel. The scaler display itself is implemented as a Microsoft-style property sheet with multiple pages. Each page comprises several rows, and each row has the following fields: scaler-channel name pair, numerator, denominator, ratio of the previous two fields, total of the numerator, total of the denominator, ratio of the previous two fields, and alarm status. The user describes in which page, row, and column to put a specific scaler channel.

    b) Readout
    The readout program parses the description XML file for scaler modules, reads them, and stuffs the data into scaler buffers.

    c) Display
    This part is a Tcl/Tk script which parses the description XML file for display information and produces the actual display on screen.

    3.6 Using SpecTcl [return to top]

    SpecTcl is the NSCL standard histogramming/sorting program. It is described completely at http://www.nscl.msu.edu/~fox/SpecTcl. This section provides a simplified introduction to the program’s major commands and functions.

    Many apparatus maintainers have written Tk scripts that provide a GUI interface to SpecTcl’s functions. As these vary in form, this section will focus on the most frequently used commands. It is a simple matter to build Tk scripts that enclose these commands.

    Basic Control Commands

    source filename
    • filename - name of the file to source.
    Reads commands from filename until the end of file or until the file executes a return statement at top level.

    attach -{file|pipe|tape} source
    • source - data source name.
    Switches the event source to the designated source.

    If the -file switch is present, source is the path to a file from which data will be taken. If the -pipe switch is present, source is a command which is run with stdout connected to a pipe which serves as the data source. If the -tape switch is present, source is a device special file for a tape drive containing an ANSI labeled tape from the pre-CCP NSCL data acquisition system; data will be analyzed from files on that tape. Tape control commands (beyond the scope of this document) control which file on the tape is the data source.

    start
    Begins analyzing from the data source.

    stop
    Stops analyzing from the data source.

    Spectra, Parameters and Simple Gates

    parameter name index bits
    • name - a name given to the parameter.
    • index - index in the parameter vector in which the parameter value is stored.
    • bits - number of bits of resolution associated with the parameter.
    Associates name with an index in the parameter vector. The parameter is assumed to have bits bits worth of resolution for scaling purposes.

    spectrum name type parameters reslist [chansize]
    • name - name to be given to the new spectrum.
    • type - type of spectrum to create.
    • parameters - a TCL-formatted list of parameters which will be histogrammed.
    • reslist - a TCL-formatted list of spectrum resolutions (number of 'bits' worth of spectrum).
    • chansize - optional channel size (e.g. word, long, byte).
    Creates a new spectrum named name. The spectrum will be of type type (e.g. 1, 2, s, b). Depending on the spectrum type, parameters will be a single parameter or a list of parameters (e.g. {param1 param2}) which define the spectrum contents. Depending on the spectrum type, reslist will be a single resolution value or a TCL-formatted list of resolutions.

    The optional chansize parameter specifies a non-default channel data type.

    Simple gates can be entered on the Xamine displayer. Note that in SpecTcl, gates are defined on parameter(s) rather than on spectra. Gates are stored in full parameter resolution. Simple gates are:

    • Slices (also called cuts), an upper/lower limit pair on a 1-d parameter.

    • Contours are represented by the interior of a polygon. For polygons with edge crossings, interior-ness is defined by the set of points from which a line to infinity crosses polygon edges an odd number of times.

    • Bands are represented by the set of points below an open polyline. If the polyline endpoints do not coincide with the endpoints of the spectrum, no points are considered to be in the gate beyond the polyline endpoints. Band points are sorted by increasing X coordinate value prior to computing the Band. If you enter multi-valued or other pathological bands, you can get a band which may not meet your preconceptions.

    To enter a gate into SpecTcl with Xamine:


    1. Select a spectrum that contains the parameter(s) on which the gate will be defined.

    2. Click the appropriate gate button on Xamine.

    3. Click points on the spectrum (or type them into the gate entry dialog box) to define the gate points.

    4. Provide the name of the gate in the gate entry dialog box.

    5. Click the Ok button on the gate entry dialog box.

    Chapter 4: NSCL DAQ System Buffer Structure [return to top]

    This chapter describes the format of tapes written by the NSCL data acquisition system. In this chapter we discuss:

    • Tape event log file structure.

    • The generic buffer structure, and the fields present in buffer headers.

    • The structure of event data buffers.

    • The structure of the various control and scaler buffers.

    4.1 Overall Buffer and Tape Structure [return to top]

    The standard NSCL directory structure is given at:
    http://www.nscl.msu.edu/~fox/daq/standard_nscl_directory_structur.htm
    The program Stager, which works within this directory structure, runs from a graphical user interface and is responsible for managing experimental data storage.

    Details about Stager are posted at:
    http://www.nscl.msu.edu/~fox/daq/Stager/stager.htm

    4.1.1 Buffer Headers [return to top]

    Each data buffer has a fixed format header which describes the buffer and, to some extent, the run which generated the buffer. Following this header is buffer-dependent data. Buffer headers are 16 words long, and contain the data shown in Table 4-1.

    Table 4-1 NSCL buffer headers

    Word   # Words   Contains
    1      1         Number of useful words in buffer
    2      1         Buffer type
    3      1         Buffer checksum over used words
    4      1         Run number
    5      2         Buffer sequence number
    7      1         Number of events in buffer
    8      1         Number of LAM registers in event stream
    9      1         Number of CPU which generated buffer
    10     1         Number of bit registers
    11     1         Buffer revision level
    12     5         Reserved for expansion

    NOTE: Event data buffers have a separate sequence number from the control data buffers. This enables on-line sampling programs to accurately compute the fraction of the data sampled.
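
    For buffer consumers written in C or C++, Table 4-1 maps naturally onto a structure. The sketch below is illustrative only: the field names are ours, 16-bit buffer words are assumed, and the authoritative definitions are in the NSCL include files referenced below.

        #include <cstdint>

        // Illustrative layout of the 16-word NSCL buffer header of Table 4-1.
        struct BufferHeader {
            std::int16_t nWords;     // 1:     number of useful words in the buffer
            std::int16_t type;       // 2:     buffer type (sign encodes the taping state)
            std::int16_t checksum;   // 3:     checksum over the used words
            std::int16_t run;        // 4:     run number
            std::int32_t sequence;   // 5-6:   buffer sequence number (two words)
            std::int16_t nEvents;    // 7:     number of events in the buffer
            std::int16_t nLamRegs;   // 8:     number of LAM registers in the event stream
            std::int16_t cpu;        // 9:     number of the CPU that generated the buffer
            std::int16_t nBitRegs;   // 10:    number of bit registers
            std::int16_t revision;   // 11:    buffer revision level
            std::int16_t unused[5];  // 12-16: reserved for expansion
        };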

    At present, all run control commands go through the front end computer systems. This includes instructions to turn the tape state on or off. Tape state information is contained in the buffer type word: if the buffer type is positive, the buffer is written to tape; if it is negative, it is not. From now on, when we say buffer type, we will be referring to the absolute value of the buffer type word of the buffer header. This may seem somewhat baroque at first. However, it allows all tape logging programs connected to the front end, regardless of where they are on the network, to know the taping state just by looking at the data coming from the front end.

    Buffer type indicates what type of data the buffer contains. Table 4-2 describes all buffer types. Buffer types are defined symbolically in the include file: DAQ_LIB:DAQTXTLIB(FEDEF) for FORTRAN.

    Table 4-2 NSCL buffer types supported

    Value   Symbol                          Contents
    1       FE_K_DATA                       Physics data
    2       FE_K_SCALER                     Taped scaler data
    3       FE_K_SCALER_PEEK                Untaped scaler data
    4-10    ---                             Reserved for future use
    11      FE_K_START                      Begin run buffer
    12      FE_K_STOP                       End run buffer
    13      FE_K_PAUSE                      Pause run data
    14      FE_K_RESUME                     Resume run data
    15      FE_K_LINKLOST                   Link was lost with front end
    16-31   up to FE_K_LASTSYS              Reserved for future expansion
    32-64   FE_K_FIRSTUSR to FE_K_LASTUSR   Available for special user applications

    Outside users wishing to import this include file will find its text in DAQ_INCLUDE:FEDEF.FOR.
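
    For C++ consumers the same information can be transcribed directly from Table 4-2; the enumeration below is a sketch of ours (not a copy of the include file), and the helpers encode the sign convention for the taping state described above.

        #include <cstdlib>

        // Buffer type codes transcribed from Table 4-2 (sketch only).
        enum BufferType {
            FE_K_DATA        = 1,   // physics data
            FE_K_SCALER      = 2,   // taped scaler data
            FE_K_SCALER_PEEK = 3,   // untaped scaler data
            FE_K_START       = 11,  // begin run buffer
            FE_K_STOP        = 12,  // end run buffer
            FE_K_PAUSE       = 13,  // pause run buffer
            FE_K_RESUME      = 14,  // resume run buffer
            FE_K_LINKLOST    = 15,  // link was lost with the front end
            FE_K_FIRSTUSR    = 32,  // first user buffer type
            FE_K_LASTUSR     = 64   // last user buffer type
        };

        // A negative type word means the buffer is not being taped; the buffer
        // type proper is the absolute value of the word.
        inline bool isTaped(int typeWord)    { return typeWord > 0; }
        inline int  bufferType(int typeWord) { return std::abs(typeWord); }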

    4.2 Control Buffers [return to top]

    Control buffers are used to hold all data besides physics data. These buffers include:

    • Scaler data -- both taped and untaped.

    • Begin run events.

    • End run events.

    • Pause run events.

    • Resume run events.

    4.2.1 Scaler Data [return to top]

    Periodically the data acquisition system reads and clears the scalers. These incremental scaler values are written to tape, where they may be summed by off-line event processing programs. In addition, the scaler buffers are available to online buffer consumer programs. Scaler buffers begin with a standard buffer header with an event count of zero and a buffer type of FE_K_SCALER or FE_K_SCALER_PEEK. FE_K_SCALER_PEEK buffers are sent to allow the scaler display programs to update more rapidly than scalers are sent to tape. FE_K_SCALER buffers are what actually go to tape, and each one contains the number of scaler counts since the previous FE_K_SCALER buffer.

    The event count word (word 7) of the header contains the number of scalers that have been read out. The front end allows user code to append non-scaler data to scaler buffers. In such cases, the word count field of the header may be larger than you would deduce from the event count field. The difference lets you know how many words of additional non-scaler data are present.

    Following the 16 word header, these buffers have the form shown in Figure 4-1.

    Figure 4-1 Layout of scaler buffers

    Figure 4-2 Begin run buffer format

    4.2.2 Begin Run Buffers [return to top]

    Begin run buffers are written on tape whenever a run starts. Begin run buffers have the type FE_K_START. The body of the buffer has the format shown in Figure 4-2.

    The run title field of the buffer header is filled with zeroes. Following the title field is a time stamp that indicates when the buffer was formatted.

    4.2.3 End Run Buffers [return to top]

    End run buffers have the same format as begin run buffers with the following exceptions:

    • The buffer type is FE_K_STOP

    • The longword following the run title is the duration of the run in tenths of a second.

    End run buffers are the last buffer written in a run. They are preceded by a final scaler buffer. Following the end run buffer, the taping program will write ANSI file trailer labels, emit two end of file marks and back up over one of them. This ensures that runs will be separated by ANSI file label sets, and that the tape can be removed from the drive when the run is stopped, and yet remain logically closed.

    4.2.4 Pause Run Buffers [return to top]

    Pause run buffers have the same format as begin run buffers with the following exceptions:

    • The buffer type is FE_K_PAUSE

    • The longword following the title is the time of the pause from the start of the run, in tenths of a second.

    Pause run buffers are written as a result of a pause command. They should be immediately preceded by a scaler buffer. When the tape program writes a pause buffer, it will write two EOF marks on the tape and then backspace over both of them. Thus, the tape can be taken off the drive between pause and resume buffers and still be logically closed. This is useful in case the VAX or the tape drive fails while the run is paused.

    4.2.5 Resume Run Buffers [return to top]

    The resume run buffer has the same format as a begin run buffer except that:

    • The buffer type is FE_K_RESUME

    • The longword following the title is the time since the beginning of the run.

    A resume run buffer is sent whenever a paused run is resumed. Thus, on a properly formatted tape, one will always see Pause and Resume buffers back to back.

    4.3 Event Buffers [return to top]

    Event data buffers consist of the standard NSCL buffer header of 16 words followed by event data. Each event has the structure shown in Figure 4-3.

    Figure 4-3 Format of event data in event buffers

    Note 1: If more than one bit register is needed, then additional bit registers follow immediately. Note that the high-order bit of the first bit register is used by readout code written by the program generator to indicate a LAM timeout, and therefore cannot be used to indicate the presence of an event packet.

    Note 2: If more than one of these is read, the additional data follows immediately.

    Note 3: Event packets will only be present for the bits which are SET in the pattern register; that is, the data is zero suppressed.
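
    A hedged sketch of what Notes 1 and 3 imply for unpacking code follows; it assumes 16-bit bit registers and invents the function name, neither of which is specified here.

        #include <cstdint>

        // Interpret the first bit (pattern) register of an event.  Bit 15 is
        // the LAM-timeout flag (Note 1); any other set bit means the event
        // packet for that part of the detector is present (Note 3).
        void decodeFirstBitRegister(std::uint16_t firstBitRegister)
        {
            bool lamTimeout = (firstBitRegister & 0x8000) != 0;

            for (int bit = 0; bit < 15; ++bit) {
                if (firstBitRegister & (1u << bit)) {
                    // ...unpack the event packet associated with this bit...
                }
            }
            (void)lamTimeout;   // e.g. flag or count timed-out events here
        }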

    4.4 User Application Buffers [return to top]

    It is possible for the user to generate event types in the front end which are "experiment specific". These events will be put in user event buffers, one event per buffer. The user event buffers begin with the standard buffer header shown in Table 4-1. They have sequence numbers in the range of sequence numbers produced by control buffers. In application buffers, word 7 (the number-of-events field) is set by the user at the time the event is created. The format of the body of the user buffers is given in Figure 4-4.

    Figure 4-4 Format of user event buffers

    In Figure 4-4, the time stamp section of the buffer gives the time at which the buffer was formatted. The user-supplied data is byte-swapped so that INTEGER*2 variables will have VAX byte order, and is then placed unchanged in the buffer.

    Note that the discussion above implies that if the user wishes to present ASCII data, then this must be byte swapped prior to handing the buffer to the system in the front end. If the user wishes to pass INTEGER*4 information, then he or she must word swap this data before putting it in the buffer. This is because the buffer formatter cannot know a priori the data types of the words placed in user application buffers.
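
    For concreteness, the two operations described above look like this in C++ (the function names are ours):

        #include <cstdint>

        // Swap the two bytes of a 16-bit quantity (INTEGER*2).
        inline std::uint16_t byteSwap16(std::uint16_t v)
        {
            return static_cast<std::uint16_t>((v << 8) | (v >> 8));
        }

        // Swap the two 16-bit halves of a 32-bit quantity (INTEGER*4),
        // leaving the bytes within each half untouched.
        inline std::uint32_t wordSwap32(std::uint32_t v)
        {
            return (v << 16) | (v >> 16);
        }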

    NSCL Front End Tag Assignment [return to top]

    The data acquisition system assembles several event packets coming from various sources. For instance, the S800 spectrograph could be run with the Ge and NaI arrays around the target, while time-of-flight and momentum are measured in the A1900 fragment separator. In that case, four different local data acquisition systems will be synchronized to send their event packets over Ethernet. In order for the back-end computer(s) to recognize the origin of each packet, unique tags have to be assigned to each local data acquisition system. This is the purpose of the following table. The parameter root name column is part of an attempt to standardize parameter creation and handling in SpecTcl (see the SpecTcl wish list at the end of this document).

    Acronym     Description                      Tag                 Parameter root name         Person responsible
    A1900       A1900 fragment separator         0x1900              a1900.                      Marc Hausmann
    S800        S800 spectrograph                0x5800              s800.                       Daniel Bazin
    Sweeper     Sweeper magnet focal plane       0x5900              sweeper.                    Daniel Bazin
    Ge          Ge detector array                0x2000              ge.                         Wilhelm Mueller
    NaI         NaI detector array               0x2100              nai.                        Wilhelm Mueller
    BaF         BaF detector array               0x2200              baf.
    Miniball    Miniball detector array          0x3000              mini.                       Betty Tsang
    HiRA        High Resolution Array            0x3100              hira.                       Betty Tsang
    Lassa       Large Area Si Strip Array        0x3200              lassa.                      Betty Tsang
    Silicon     Silicon detector array           0x3300              si.
    4π          4π detector                      0x4000              fourpi.                     Skip Van der Molen
    Neutron     Neutron wall detector            0x5000              neutron.                    Thomas Baumann
    Track0…15   Tracking detectors 0 through 15  0x6000 to 0x6F00    track00.
    β-NMR       β Nuclear Magnetic Resonance     0x7000              bnmr.                       Colin Morton
    β-decay     β-decay detection                0x7100              bdecay.                     Colin Morton
    Trap        Penning trap                     0x7200              trap.
    Users       User detectors                   0x8000 to 0xFF00    To be assigned on demand

    Event structure

    Overall event length
    First event packet length
    First event packet tag
    Data…
    Second event packet length
    Second event packet tag
    Data…
    And so on…
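
    A hedged sketch of walking this structure (it assumes 16-bit words and that each length count includes itself; the function name is ours):

        #include <cstddef>
        #include <cstdint>

        // Iterate over the packets of one assembled event laid out as:
        // overall event length, then repeated (packet length, packet tag, data...).
        void forEachPacket(const std::uint16_t* event)
        {
            std::size_t total  = event[0];   // overall event length, in words
            std::size_t offset = 1;          // skip the overall length word
            while (offset < total) {
                std::uint16_t packetLength = event[offset];
                std::uint16_t packetTag    = event[offset + 1];  // e.g. 0x5800 for the S800
                if (packetLength < 2) {
                    break;                   // malformed packet; stop rather than loop forever
                }
                // ...dispatch &event[offset + 2] (packetLength - 2 words) on packetTag...
                (void)packetTag;
                offset += packetLength;
            }
        }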

    Wish List [return to top]

    The purpose of this page is to reflect the users' wishes with regard to the new data acquisition system and analysis tools. If you would like to see your wish(es) added to this list, please e-mail them to bazin@nscl.msu.edu. As progress is made towards fulfilling the wishes, links pointing to a description of the work done will be created.

    Data acquisition wish list:

    • Documentation buffer emitted periodically
    • Means to direct data from RS232 devices to documentation buffer
    • Structured buffer dumper
    • Event packet filter application to produce subset of data for online/offline analysis

    SpecTcl and Xamine wish list:

    • Create a tree-like structure for parameter creation and handling in SpecTcl
    • Hide parameter id from user side
    • Provide support for displaying spectra in real units rather than only channels
    • Tree-like structure for spectrum selection
    • Support for multiply incremented spectra and coincidence gates (useful for γ spectra)
    • Network data source
    • Set of basic tools for parameter and spectrum manipulations