Documentation > Development > Writing New System Stored Procedures

The following page describes how to create a new system stored procedure in H-Store. System stored procedures (“sysprocs”) are special transactions that perform some administrative operation (e.g., invoking the JVM’s garbage collector, retrieving internal statistics).

A sysproc generally consists of two phases. In the first phase, the system executes some operation at every partition in the cluster and generates intermediate output (DISTRIBUTE). In the second phase, the transaction's base partition combines the output from each partition into a single result that is returned to the client (AGGREGATE). Sysproc transactions are not included in the command log, thus any changes they make to the database are not durable (excluding @LoadMultipartitionTable).
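The two-phase control flow can be sketched with plain Java collections (a hypothetical stand-in for illustration only; the real implementation passes VoltTables and DependencySets between partitions):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical model of the DISTRIBUTE/AGGREGATE pattern: every partition
// produces one intermediate row, and the base partition unions them.
public class TwoPhaseSketch {
    // DISTRIBUTE: runs once at each partition; output is unique to that partition
    static List<String> distribute(int partitionId) {
        List<String> rows = new ArrayList<>();
        rows.add("row-from-partition-" + partitionId);
        return rows;
    }

    // AGGREGATE: runs once at the base partition, combining all intermediate outputs
    static List<String> aggregate(Map<Integer, List<String>> dependencies) {
        List<String> combined = new ArrayList<>();
        for (List<String> partial : dependencies.values()) {
            combined.addAll(partial);
        }
        return combined;
    }

    public static void main(String[] args) {
        Map<Integer, List<String>> deps = new TreeMap<>();
        for (int p = 0; p < 3; p++) {
            deps.put(p, distribute(p));        // phase 1: fan out to each partition
        }
        List<String> result = aggregate(deps); // phase 2: union at the base partition
        System.out.println(result.size());     // one row per partition
    }
}
```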

On this page, we will create a new sysproc called @ToneLoc that generates an output table with the current time at each node.

Adding PlanFragment Ids

Before you create the class for your new sysproc, you first need to add unique identifiers for the synthetic PlanFragments that will be used to denote which phase the transaction is in. Add two new entries to SysProcFragmentId:

// @ToneLoc
public static final int PF_ToneLocDistribute = 310;
public static final int PF_ToneLocAggregate = 311;

Note that the fragment ids used in the new entries must be different from those of all other entries in SysProcFragmentId.
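Because the ids are plain integer constants, the compiler will not catch a duplicate for you. A quick way to sanity-check uniqueness (the id values below are hypothetical) is:

```java
import java.util.HashSet;
import java.util.Set;

public class FragmentIdCheck {
    // Hypothetical copy of a few SysProcFragmentId constants for illustration
    static final int[] FRAGMENT_IDS = {
        300, 301, // an existing sysproc's DISTRIBUTE/AGGREGATE ids
        310, 311, // PF_ToneLocDistribute, PF_ToneLocAggregate
    };

    // Returns false as soon as the same id appears twice
    static boolean allUnique(int[] ids) {
        Set<Integer> seen = new HashSet<>();
        for (int id : ids) {
            if (!seen.add(id)) return false; // duplicate found
        }
        return true;
    }
}
```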

Creating Procedure Class

Create a new class file in the src/frontend/org/voltdb/sysprocs directory. This class must extend VoltSystemProcedure and implement three methods:

  • initImpl()
    In this method, you need to register the PlanFragment ids created in the previous step with the new class's PartitionExecutor. This lets the runtime know which operations to invoke when a sysproc request arrives. See the example below:

    @Override
    public void initImpl() {
        executor.registerPlanFragment(SysProcFragmentId.PF_ToneLocDistribute, this);
        executor.registerPlanFragment(SysProcFragmentId.PF_ToneLocAggregate, this);
    }
  • executePlanFragment()
    This is the method that is invoked at each partition involved in the transaction. It takes in a fragment id (as defined in the previous step) and returns a DependencySet that contains a VoltTable. For the DISTRIBUTE step, each invocation creates output that is unique to the partition it is running on. For the AGGREGATE step, there is only a single invocation of executePlanFragment() at the base partition, which takes in all of the output generated by the DISTRIBUTE step and combines it into a single result. See the sample below for our @ToneLoc example:

    @Override
    public DependencySet executePlanFragment(Long txn_id,
                                             Map<Integer, List<VoltTable>> dependencies,
                                             int fragmentId,
                                             ParameterSet params,
                                             SystemProcedureExecutionContext context) {
        // Output result
        ColumnInfo schema[] = {
            new ColumnInfo("TIMESTAMP", VoltType.TIMESTAMP),
            new ColumnInfo(VoltSystemProcedure.CNAME_HOST_ID, VoltSystemProcedure.CTYPE_ID),
            new ColumnInfo("HOSTNAME", VoltType.STRING),
            new ColumnInfo("PARTITION", VoltType.INTEGER),
        };
        VoltTable vt = new VoltTable(schema);
     
        switch (fragmentId) {
            // DISTRIBUTE
            case SysProcFragmentId.PF_ToneLocDistribute: {
                Object row[] = {
                    new TimestampType(),
                    this.hstore_site.getSiteId(),
                    this.hstore_site.getSiteName(),
                    this.partitionId,
                };
                vt.addRow(row);
                break;
            }
            // AGGREGATE
            case SysProcFragmentId.PF_ToneLocAggregate:
                List<VoltTable> siteResults = dependencies.get(SysProcFragmentId.PF_ToneLocDistribute);
                vt = VoltTableUtil.union(siteResults);
                break;
        } // SWITCH
     
        DependencySet result = new DependencySet(fragmentId, vt);
        return (result);
    }
  • run()
    Finally, you need to implement the run() method that will schedule the execution of the DISTRIBUTE and AGGREGATE tasks for the sysproc. The run method is similar to a regular stored procedure in that it can take in scalar primitives, arrays, and VoltTables as its input. There are some helper methods in VoltSystemProcedure to automatically schedule work either at every partition in the cluster (VoltSystemProcedure.executeOncePerPartition()) or once at every HStoreSite in the cluster (VoltSystemProcedure.executeOncePerSite()).

    public VoltTable[] run() {
        return this.executeOncePerPartition(SysProcFragmentId.PF_ToneLocDistribute,
                                            SysProcFragmentId.PF_ToneLocAggregate,
                                            new ParameterSet());
    }

Add Sysproc To Project Compiler

In the last step, we need to add the new sysproc to the project compiler. In VoltCompiler, modify addSystemProcsToCatalog() to include your new class. Now when you invoke hstore-prepare from the command line to build a project jar, the new sysproc will be included. Note that the “readonly” and “everysite” flags are currently ignored (as of November 2013), so it does not matter what values you provide for them.
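Conceptually, each entry in that method names the sysproc class along with its flags. The exact field layout in H-Store's VoltCompiler may differ from this sketch, which only mirrors the general shape of such a table:

```java
public class SysprocEntrySketch {
    // Hypothetical mirror of the kind of table that addSystemProcsToCatalog()
    // walks; the real field layout in VoltCompiler may differ.
    // Each entry: fully-qualified class name, "readonly" flag, "everysite" flag
    // (both flags are currently ignored by the compiler).
    static final String[][] SYSPROCS = {
        { "org.voltdb.sysprocs.ToneLoc", "true", "true" },
    };

    public static void main(String[] args) {
        for (String[] entry : SYSPROCS) {
            System.out.println(entry[0]); // class that will be loaded into the catalog
        }
    }
}
```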

Bypassing Transaction Scheduling (Optional)

Since sysprocs are essentially regular transactions, they will get queued and scheduled for execution along with other transactions. If the system is overloaded, then it may take a long time for the sysproc to acquire the partition locks for the entire cluster, which will increase the latency of other transactions.

To avoid this problem, you can intercept a sysproc request as it comes into the system and use the HStoreCoordinator to invoke operations asynchronously at each partition in the cluster. Modify the HStoreCoordinator’s ProtoRPC API to include a new command to broadcast the work requests to different nodes. Then in HStoreSite.processSysProc(), you can check whether the current transaction request matches the name of the sysproc that you want to override and use the ProtoRPC operation instead. Note that you will still need to return a response to the client from the transaction’s base partition HStoreSite.