Benchmark Generation Modeling Language



Motivation

Reusable performance-intensive software is often used by applications with stringent quality of service (QoS) requirements, such as low latency and bounded jitter. The QoS of such software is influenced heavily by factors such as (1) the configuration options set by end-users to tune the underlying hardware/software platform and (2) characteristics of the underlying platform itself. With a conventional approach, creating a benchmarking experiment to measure QoS properties requires QA engineers to write (1) the header files and source code that implement the benchmark functionality, (2) the configuration and script files that tune the underlying ORB and automate running tests and generating output, and (3) the project build files, e.g., makefiles, required to generate the executable code.

The Benchmark Generation Modeling Language (BGML) was designed to help QA engineers easily compose benchmarking experiments that measure the QoS of a scenario, such as round-trip latency, throughput, and jitter. BGML is part of a larger tool suite called CoSMIC that addresses several challenges involved in developing QoS-enabled component middleware for Distributed Real-time and Embedded (DRE) systems. In particular, the CoSMIC tool chain consists of tools that aid DRE system building in the following phases:

Figure: CoSMIC Tool Chain

As shown in the figure, the BGML tool can be used in the Analysis Benchmarking phase to provide feedback for deployment planning, i.e., how the placement of the components on nodes affects the overall system QoS.

Modeling Language Capabilities

The BGML tool, like the other tools in the CoSMIC tool chain, is built on top of the Generic Modeling Environment (GME), which provides a meta-programmable framework for creating domain-specific modeling languages and generative tools. GME is programmed via meta-models and model interpreters. The meta-models define modeling languages, called paradigms, that specify the allowed modeling elements, their properties, and their relationships. Model interpreters associated with a paradigm can be built to traverse the paradigm's modeling elements, performing analysis and generating code. The following modeling constructs are present in the BGML paradigm:

Operation Reference points to a two-way IDL operation. BGML requires the signature of the operation to generate benchmarking code.

Event Reference points to a CORBA event exchanged between one or more components. The type of the event is used by BGML to generate benchmarking code.

Latency element is associated with an IDL operation or event. A latency element can be associated with only one operation or event. The BGML model interpreter uses this association to generate code that measures the latency of each invocation of the operation or event. For each latency element, attributes such as the file to send the output to, the number of iterations, the number of warmup iterations, and the rate and priority of the benchmarking tasks can be specified. (A sketch of such a generated measurement loop appears after this list.)

Throughput element, like Latency, is associated with an IDL operation or event. Every throughput element can be associated with one operation or event, and the generated benchmarking code computes throughput measures analogously to the latency measures. The same attributes associated with Latency can also be associated with Throughput.

Benchmarking experiments often require background load, either CPU load or additional requests, during the benchmarking process; such load makes the results more representative of realistic operating conditions. A TaskSet specifies a set of tasks used to create a given number of background tasks during the benchmarking experiment.

A task represents a background activity, in this case the generation of requests during the benchmarking process. These requests are remote CORBA operations on the target server process, and the number of iterations is the same as for the benchmarked operation. Each task set can be associated with n (1..n) background tasks.
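
To make the TaskSet/task idea concrete, the following is a minimal sketch, not actual BGML output, of how a fixed number of background tasks could issue requests concurrently while an experiment runs. The invoke_background_request() function and the std::thread-based implementation are assumptions made for illustration; real BGML-generated code targets CORBA/TAO.

// Minimal sketch (hypothetical, not BGML-generated code): spawn N background
// tasks that issue remote requests while the benchmark runs.
#include <thread>
#include <vector>

// Hypothetical stand-in for a remote CORBA operation on the target server.
void invoke_background_request() { /* remote call goes here */ }

void run_background_tasks(int num_tasks, int iterations)
{
  std::vector<std::thread> tasks;
  for (int t = 0; t < num_tasks; ++t)
    tasks.emplace_back([iterations] {
      // Each background task issues the same number of requests as the
      // benchmarked operation.
      for (int i = 0; i < iterations; ++i)
        invoke_background_request();
    });

  for (auto& task : tasks)
    task.join();   // wait for the background load to finish
}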

Time probe element can be used to generate timing information for both IDL operations and CORBA events. The start_time_probe() and stop_time_probe() functions generated by the BGML interpreters can be used to take timestamps before and after a method invocation; these timestamps can then serve as input to other timing analysis tools. Every time probe element can be associated with many operations and events.
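
As a concrete illustration, below is a minimal sketch of the kind of latency/throughput measurement loop that the Latency, Throughput, and Time probe elements describe. The invoke_operation() stand-in for the two-way CORBA call, the use of std::chrono in place of the timers in the real generated code, and the literal attribute values are all assumptions; this is not literal BGML output.

// Minimal sketch (hypothetical, not literal BGML output) of a generated
// latency/throughput benchmark: warm up, time each invocation, and write
// the samples plus an overall throughput figure to the configured file.
#include <chrono>
#include <fstream>
#include <vector>

// Hypothetical stand-in for the two-way IDL operation under test; real
// generated code would call the CORBA stub produced from the IDL signature.
void invoke_operation() { /* remote two-way call goes here */ }

int main()
{
  const int warmup_iterations = 100;    // "number of warmup iterations" attribute
  const int iterations        = 10000;  // "number of iterations" attribute
  std::ofstream out("latency.txt");     // "file to send the output to" attribute

  // Warm up connections, caches, and the ORB before measuring.
  for (int i = 0; i < warmup_iterations; ++i)
    invoke_operation();

  std::vector<double> samples_usec;
  samples_usec.reserve(iterations);

  const auto run_start = std::chrono::steady_clock::now();
  for (int i = 0; i < iterations; ++i)
  {
    const auto t0 = std::chrono::steady_clock::now();   // start_time_probe()
    invoke_operation();
    const auto t1 = std::chrono::steady_clock::now();   // stop_time_probe()
    samples_usec.push_back(
      std::chrono::duration<double, std::micro>(t1 - t0).count());
  }
  const auto run_end = std::chrono::steady_clock::now();

  // Per-invocation latency samples (Latency element) ...
  for (double s : samples_usec)
    out << s << '\n';

  // ... and calls per second over the measured run (Throughput element).
  const double elapsed_sec =
    std::chrono::duration<double>(run_end - run_start).count();
  out << "throughput (calls/sec): " << iterations / elapsed_sec << '\n';
}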

Workflow

The figure illustrates the BGML workflow for creating benchmarking experiments at a high level. As shown in the figure, the workflow can be decomposed into four related steps, which are explained below.

1. In the first step, the application scenario is visually depicted via PICML. This step also involves representation of the deployment plan.

2. Using BGML, an experimenter associates QoS properties, such as latency, jitter, or throughput with this scenario.

3. Using the model interpreters, synthesize the appropriate test code to run the experiment and measure the QoS (see the sketch after these steps).

4. Feed the measured metrics back into the models to verify whether the system meets its QoS requirements.
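
To give a flavor of what step 3 involves, here is a purely illustrative sketch, using hypothetical, simplified types rather than GME's actual interpreter API, of how a model interpreter conceptually traverses the latency elements in a model and emits benchmarking code for each associated operation. The element names, fields, and the run_latency_test output are assumptions for illustration only.

// Purely illustrative sketch (hypothetical types; not GME's real API) of a
// model interpreter: walk the BGML elements in a model and emit code.
#include <iostream>
#include <string>
#include <vector>

// Simplified stand-ins for BGML modeling elements and their attributes.
struct OperationRef { std::string name; };   // the two-way IDL operation
struct LatencyElement
{
  OperationRef operation;    // the single operation this element is tied to
  int iterations;            // attributes the QA engineer sets in the model
  int warmup_iterations;
  std::string output_file;
};

// "Interpreter" pass: generate a benchmark invocation per latency element.
void generate_latency_benchmarks(const std::vector<LatencyElement>& model)
{
  for (const auto& e : model)
    std::cout << "// generated benchmark for " << e.operation.name << "\n"
              << "run_latency_test(\"" << e.operation.name << "\", "
              << e.warmup_iterations << ", " << e.iterations << ", \""
              << e.output_file << "\");\n";
}

int main()
{
  // A toy "model" with one latency element attached to one operation.
  generate_latency_benchmarks({
    { { "Navigator::get_position" }, 10000, 100, "latency.txt" } });
}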

A detailed description of how to use BGML is available in the BGML documentation guide. Additionally, one can install CoSMIC and explore the examples available in the distribution to learn more about BGML.

Publications:

Arvind S. Krishna, Emre Turkay, Aniruddha Gokhale, and Douglas C. Schmidt, Model-Driven Techniques for Evaluating the QoS of Middleware Configurations for DRE Systems, Proceedings of the 11th IEEE Real-Time and Embedded Technology and Applications Symposium, San Francisco, CA, March 2005.

Middleware is increasingly being used to develop and deploy large-scale distributed real-time and embedded (DRE) systems in domains ranging from avionics to industrial process control and financial services. Applications in these DRE systems require various levels and types of quality of service (QoS) from the middleware, and often run on heterogeneous hardware, network, OS, and compiler platforms. To support a wide range of DRE systems with diverse QoS needs, middleware often provides many (i.e., 10's-100's of) options and configuration parameters that enable it to be customized and tuned for different use cases. Supporting this level of flexibility, however, can significantly complicate middleware and hence application QoS. This problem is exacerbated for developers of DRE systems, who must understand precisely how various configuration options affect system performance on their target platforms.

Arvind Krishna, Douglas C. Schmidt, Adam Porter, Atif Memon, Diego Sevilla-Ruiz, Improving the Quality of Performance-intensive Software via Model-integrated Distributed Continuous Quality Assurance, The 8th International Conference on Software Reuse, ACM/IEEE, Madrid, Spain, July 2004.

Quality assurance (QA) tasks, such as testing, profiling, and performance evaluation, have historically been done in-house on developer-generated workloads and regression suites. Performance-intensive systems software, such as that found in the scientific computing grid and distributed real-time and embedded (DRE) domains, increasingly run on heterogeneous combinations of OS, compiler, and hardware platforms. Such software has stringent quality of service (QoS) requirements and often provides a variety of configuration options to optimize QoS. As a result, QA performed solely in-house is inadequate since it is hard to manage software variability, i.e., ensuring software quality on all supported target platforms across all desired configuration options. This paper describes how the Skoll project is addressing these issues by
developing advanced QA processes and tools that leverage the extensive computing resources of user communities in a distributed, continuous
manner to improve key software quality attributes.

