% $Id$
Response to reviewer and editorial comments
--------------------------------------------
Our responses are shown with an asterisk (*).

1. The misuse of the term "QoS requirements" should be removed (Reviewer 1). What is meant by "QoS requirements" is actually "QoS policies".
[Response]
* Replaced every usage of "QoS requirements" with "QoS policies", as suggested by the reviewer and the editorial comments. This was done in the text as well as in all figures.
* Moved Figure 15 from the related work section to the introduction to give the paper more of a top-down feel. This was a suggestion by a reviewer.
* Moved the use case (BasicSP) earlier, to the challenges section, so that the challenges are described in the context of RTCORBA and the use case.

2. Figure 2 is very vague and should be improved (Reviewer 3). To understand which QoS policies the authors mean, those QoS policies should be explicitly listed.
[Response]
* Removed the figure on RTCORBA since it did not add any value but rather confused the reader. More explanation of RTCORBA's capabilities has been added, covering all the important mechanisms it supports and the ones we use in this paper.

3. In Figure 3 the QoS configuration process is unclear (Reviewer 2). From this picture it seems that a transformer takes a metamodel as input and produces a metamodel as output. Shouldn't it produce models that conform to some metamodels?
[Response]
* Improved the figure and fixed the error: we now clearly show that the developer provides a model (not a metamodel, as was incorrectly shown earlier).

4. Figure 4 has too many relations crossing each other. This makes the figure unclear. (Reviewer 2)
[Response]
* Cleaned up the Policies metamodel figure by reducing crossings of associations. Also increased the font size of the cardinalities.
* Also, in the challenges section we added more information on the semantics of the component and its ports.
This makes the details of this diagram clearer.

5. Figure 5 does not have the "bursty_client_request" (Reviewer 2).
[Response]
* Added "bursty_client_requests" to the metamodel figure.

6. Figure 6 is not explained (Reviewer 2).
[Amogh] (this is now Fig. 7)

7. In Subsection 3.2 an example of a CQML model and the corresponding XML deployment descriptor is needed (Reviewer 3).
[Response]
* Since the XML descriptors are simply a textual representation of the model, we are reproducing a snippet that represents part of the CQML model.

8. Figure 8 should be either better elaborated or changed according to the suggestions of Reviewer 2. The vague places are:
[Amogh] (this is now Fig. 9)
8.1. The relation between Component Assembly and Component
[Amogh] (in Fig. 9)
8.2. The relation between the RTECConfiguration and the QoSCharacteristic
[Amogh] (in Fig. 9)

9. It should be clarified how the transformation is performed (Reviewers 2 and 3).
[Response]
* Provided more details on how GReAT transformations work and how the algorithms presented in this paper are encoded. This description now appears (along with a new figure, Figure 10) in Section 3.3.
9.1. The behavior of the algorithm should be described with an example (Reviewer 2).
[Amogh] (if not an example, at least describe more about what each step does for both algorithms)
9.2. It should be discussed whether the mapping rules defined in the algorithms follow some graph rewriting rules (Reviewer 3). It is stated that the algorithms are implemented in GReAT, but the notation in the submission does not look like it.
[Response]
* Hopefully the figure on GReAT, shown before the algorithms are discussed, will help address this dilemma.
9.3. The pseudo-functions used in the algorithms (ClientComponents, ThreadResources, createTPlanes, etc.) and their output should be described (Reviewers 2 and 3).
[Amogh] (explain these)
9.4. The discussion on the limits of the algorithms is missing.
The discussion should answer at least the following questions:
[Amogh]
9.4.1. Does an admissible solution for the mapping always exist? If not, how is such a case handled? (Reviewer 2)
[Amogh]
9.4.2. The properties in the algorithms should clearly refer to the properties of the DSLs defined in Section 3 (Reviewers 2 and 3).
[Amogh]

10. Section 4 should contain the following improvements:
10.1. The internal report referenced as [17] should be publicly available via a URL. It describes the key technique adopted for the verification of the transformation's correctness. Furthermore, this approach should be summarized in the submission (Reviewers 1, 2, and 3).
[Response]
* We replaced this with a journal citation (Electronic Communications of the EASST).
10.2. Figure 9 should be either changed or better explained. From the figure it is unclear whether it depicts the transformation definition (which should be defined between two metamodels) or the transformation execution (which takes a model conformant to one metamodel and generates a model conformant to some other metamodel) (Reviewer 1).
[Response]
* We have added more explanation of what structural correspondence is and how the verification process proceeds.
* Moreover, the paragraphs following the figure discuss what is in the figure.
10.3. The discussion on Figure 10 is missing. The discussion should answer at least the following questions (Reviewer 3):
10.3.1. Who makes the model?
[Response]
* We added a sentence indicating that the transformation developer must add the rules for structural correspondence.
10.3.2. What is the relation of this model to the verification process described on page 9?
[Andy]
10.3.3. How is the model used in the verification process?
[Andy]
10.4. The elaboration on the verification of the generated QoS configurations should be improved. The improvement should answer the following questions:
10.4.1. What is the motivation for the verification process, and what benefits does it bring?
(Reviewer 3)
[Response]
* The beginning of the section on correctness checking motivates why the two different verifications are needed.
10.4.2. Is there a strong reason for using BIR? (Reviewer 3)
[Response]
* We wanted to show a concrete approach to verification, and model checking is one such approach. We were familiar with Bogor and its capabilities, which is why we used BIR. Would the reviewer want us to add this motivation to the paper?
10.4.3. How is the BIR specification automatically generated from the QoS configuration model? Is it another model-to-model transformation? Has a metamodel for BIR been introduced and defined? (Reviewer 1)
10.4.4. What is the relation of BIR to the difficulties/challenges? Does BIR cause them? (Reviewer 3)
10.4.5. Why isn't some other approach used? One approach that could be used is OCL with verification at the model level. Would such an approach alleviate the difficulties/challenges? (Reviewer 3)
10.4.6. Why is Bogor used for the verification process? (Reviewer 3)
10.4.7. More on the generative capabilities of GME should be added (Reviewer 1).
[Response]
* At the beginning of Section 3.3 we have added an overview of GME's capabilities.
10.4.8. The verification should be supported with an example to help the reader follow the approach (Reviewers 1 and 3).

11. Section 5 should be improved in such a way that it addresses the following remarks:
[Response]
* To help address the following questions, we moved the case study earlier, to the challenges section. In that section we explain the individual challenges both in general terms and by alluding to the case study. That should hopefully showcase the software engineering challenges in more detail.
11.1. A discussion of the software engineering problems in the case study is needed (Reviewers 2 and 3).
[Response]
* By moving the case study to the challenges section we are better able to describe these challenges.
Hopefully it is clearer this way.
11.2. The case study should show what the input model is, what the output model is, and what QoS configurations are applied to the output model (Reviewer 3).
11.3. A discussion of Figure 12's relation to the verification process is missing. Furthermore, the term "dependency structure" should be introduced somewhere earlier; as it stands, it is confusing for the reader (Reviewer 3).
11.4. The verification process should be elaborated step by step, to show how it is carried out (Reviewer 3).
11.5. More discussion on the performed statistical analysis is needed. The discussion should answer at least the following questions:
[Response]
* We decided to remove the empirical validation section for two reasons. First, it was confusing the reader and not providing any additional benefit. Second, it was hard for us to run experiments for multiple different scenarios.
11.5.1. How do the selected metrics describe the method? What is the purpose of performing those measurements? (Reviewers 2 and 3)
11.5.2. How do the results characterize the proposed approach? What do they tell us about the approach? How good is it? (Reviewer 3)
11.5.3. A detailed analysis of the obtained results is needed. Some of the questions that need to be answered can be found in the reviews provided by Reviewers 2 and 3.
11.6. What is the overhead of the QUICKER dependency recalculation in the case of a system evolution? (Reviewer 1)
11.7. This section should also be extended with at least one more efficiency evaluation on one more case study. This evaluation is needed to justify the journal publication, because the complete previous material has already been published.

12. The related work lacks a comparative analysis of the proposed approach against the OMG standards for QoS specification. The modeling standards that need to be compared with the proposed approach are the MARTE UML profile and the UML profile for modeling QoS and Fault Tolerance.
[TO-DO]

Kind regards,
SoSyM NFPinDSML Theme Section Editors