Benchmark Generation Modeling Language

Motivation

Reusable performance-intensive software is often used by applications with stringent quality of service (QoS) requirements, such as low latency and bounded jitter. The QoS of such software is influenced heavily by factors such as (1) the configuration options set by end-users to tune the underlying hardware/software platform and (2) characteristics of the underlying platform itself.

In the conventional approach, creating a benchmarking experiment to measure QoS properties requires QA engineers to write (1) the header files and source code that implement the test functionality, (2) the configuration and script files that tune the underlying ORB and automate test execution and output generation, and (3) the project build files (e.g., makefiles) required to generate the executable code. The Benchmark Generation Modeling Language (BGML) was designed to help QA engineers easily compose benchmarking experiments that measure the QoS of a scenario, such as round-trip latency, throughput, and jitter. BGML is part of a larger tool suite called CoSMIC that addresses several challenges involved in developing QoS-enabled component middleware for distributed real-time and embedded (DRE) systems. In particular, the CoSMIC tool chain consists of tools that aid DRE system development in the different phases shown in the figure below.

Figure: CoSMIC Tool Chain

As shown in the figure, the BGML tool can be used in the Analysis Benchmarking phase to provide feedback for deployment planning, i.e., how the placement of components on nodes affects the overall system QoS.

Modeling Language Capabilities

BGML, like the other tools in the CoSMIC tool
chain, is built on top of the Generic Modeling Environment (GME),
which provides a meta-programmable framework for creating domain-specific
modeling languages and generative tools. GME is programmed via meta-models
and model interpreters. The meta-models define modeling languages
called paradigms that specify allowed modeling elements, their properties, and
their relationships. Model interpreters associated with a paradigm can also be
built to traverse the paradigm's modeling elements, performing analysis and
generating code. The BGML paradigm provides modeling constructs for describing benchmarking experiments and for associating QoS metrics, such as latency, jitter, and throughput, with an application scenario.
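To make the split between meta-models and model interpreters concrete, the sketch below shows, in a deliberately simplified and hypothetical form, what a BGML-style interpreter does: it traverses the modeled experiment elements and emits the benchmarking code a QA engineer would otherwise write by hand. The Experiment, Operation, and QoSMetric types and the generate_benchmark function are illustrative stand-ins for this sketch, not the actual GME or BGML interpreter API.

    // Hypothetical sketch of a BGML-style model interpreter: walk the modeled
    // experiment and emit benchmarking code. Types are illustrative stand-ins.
    #include <iostream>
    #include <string>
    #include <vector>

    struct Operation  { std::string name; };                 // an operation in the scenario
    struct QoSMetric  { std::string kind; int iterations; }; // e.g. "latency", 10000 calls
    struct Experiment { std::string component;               // component under test
                        std::vector<Operation> ops;
                        std::vector<QoSMetric> metrics; };

    // Traverse the (hypothetical) model and generate one test-driver call per
    // metric/operation pair; a real interpreter would emit complete source files.
    void generate_benchmark(const Experiment& e, std::ostream& out) {
      for (const QoSMetric& m : e.metrics) {
        out << "// Auto-generated " << m.kind << " test for " << e.component << "\n";
        for (const Operation& op : e.ops) {
          out << "run_" << m.kind << "_test(\"" << op.name << "\", "
              << m.iterations << ");\n";
        }
      }
    }

    int main() {
      Experiment exp{"LatencyServer", {{"ping"}}, {{"latency", 10000}}};
      generate_benchmark(exp, std::cout);   // prints the generated test-driver calls
    }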
Workflow

Figure: BGML Workflow

The figure illustrates the BGML workflow for creating benchmarking experiments at a high level. As shown in the figure, the workflow decomposes into four related steps, which are explained below.
1. In the first step, the application scenario is visually depicted via PICML. This step also involves representation of the deployment plan.
2. Using BGML, an experimenter associates QoS properties, such as latency, jitter, or throughput, with this scenario.
3. Using the model interpreters, the appropriate test code is synthesized to run the experiment and measure the QoS (a sketch of this kind of generated code appears after the list).
4. The measured metrics are fed back into the models to verify whether the system meets the desired QoS.

A detailed description of how to use BGML is available in the BGML documentation guide. Additionally, one can install CoSMIC and experiment with the examples included in the distribution to learn more about BGML.
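For illustration only, below is a minimal sketch of the kind of test code that step 3 might synthesize, assuming a simple two-way ping operation on the component under test. The invoke_ping stub, the iteration count, and the use of std::chrono are assumptions made for this sketch; the code BGML actually generates targets the modeled scenario and the underlying ORB.

    // Hypothetical sketch of a generated latency/jitter test (not actual BGML output).
    #include <chrono>
    #include <cmath>
    #include <iostream>
    #include <ratio>
    #include <thread>
    #include <vector>

    // Stand-in for the two-way operation invoked on the component under test.
    void invoke_ping() { std::this_thread::sleep_for(std::chrono::microseconds(100)); }

    int main() {
      const int iterations = 10000;            // number of two-way calls to time
      std::vector<double> samples;
      samples.reserve(iterations);

      for (int i = 0; i < iterations; ++i) {
        auto start = std::chrono::steady_clock::now();
        invoke_ping();
        auto end = std::chrono::steady_clock::now();
        samples.push_back(std::chrono::duration<double, std::micro>(end - start).count());
      }

      // Report average round-trip latency and jitter (standard deviation of samples).
      double sum = 0.0;
      for (double s : samples) sum += s;
      const double mean = sum / samples.size();
      double var = 0.0;
      for (double s : samples) var += (s - mean) * (s - mean);
      std::cout << "round-trip latency (us): " << mean
                << ", jitter (us): " << std::sqrt(var / samples.size()) << "\n";
    }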
Publications:

Arvind S. Krishna, Emre Turkay, Aniruddha Gokhale, and Douglas C. Schmidt, Model-Driven Techniques for Evaluating the QoS of Middleware Configurations for DRE Systems, Proceedings of the 11th IEEE Real-Time and Embedded Technology and Applications Symposium, San Francisco, CA, March 2005.

Abstract: Middleware is increasingly being used to develop and deploy large-scale distributed real-time and embedded (DRE) systems in domains ranging from avionics to industrial process control and financial services. Applications in these DRE systems require various levels and types of quality of service (QoS) from the middleware, and often run on heterogeneous hardware, network, OS, and compiler platforms. To support a wide range of DRE systems with diverse QoS needs, middleware often provides many (i.e., 10's to 100's of) options and configuration parameters that enable it to be customized and tuned for different use cases. Supporting this level of flexibility, however, can significantly complicate middleware configuration and hence application QoS. This problem is exacerbated for developers of DRE systems, who must understand precisely how various configuration options affect system performance on their target platforms.

Arvind Krishna, Douglas C. Schmidt, Adam Porter, Atif Memon, and Diego Sevilla-Ruiz, Improving the Quality of Performance-intensive Software via Model-integrated Distributed Continuous Quality Assurance, Proceedings of the 8th International Conference on Software Reuse, ACM/IEEE, Madrid, Spain, July 2004.

Abstract: Quality assurance (QA) tasks, such as testing, profiling, and performance evaluation, have historically been done in-house on
developer-generated workloads and regression suites. Performance-intensive
systems software, such as that found in the scientific computing grid and
distributed real-time and embedded (DRE) domains, increasingly runs on
heterogeneous combinations of OS, compiler, and hardware platforms. Such
software has stringent quality of service (QoS) requirements and often provides
a variety of configuration options to optimize QoS. As a result, QA performed
solely in-house is inadequate since it is hard to manage software variability,
i.e., ensuring software quality on all supported target platforms across all
desired configuration options. This paper describes how the Skoll project is addressing these issues by applying model-integrated distributed continuous quality assurance techniques.