>RE: TSE-0331-1205, "Reliable Effects Screening: A Distributed >Continuous Quality Assurance Process for Monitoring Performance >Degradation in Evolving Software >Systems" >Manuscript Type: Regular > >Dear Dr. Porter, > >We have completed the review process of the above referenced >paper that was submitted to the IEEE Transactions on Software >Engineering for possible publication. Your reviews are >enclosed. > >Based on these reports, Associate Editor, Dr. Littlewood has >recommended to the Editor-in-Chief that your paper undergo a >major revision. We would suggest that you revise your paper >according to the reviewers' comments and resubmit the paper for >a second round of reviews. If you wish to revise the paper, >please do so by <<4 July 2006>>. The reviewers have several >questions and suggestions. Dr. Littlewood would like you to >address all the questions raised by the reviewers in your >revision. > >If you do not intend to submit a revised version of your paper, >please let us know so that we can formally close your file. > >MANUSCRIPT LENGTH >There is no length limit on submitted papers. However, it is >recommended that papers not exceed 11,000 words including >figures where a half-page figure equals approximately 250 words. > This length corresponds roughly to 35 double-spaced pages. >Papers that exceed this recommended maximum length will still be >considered, but might not be guaranteed editorial review or >publication in a timely fashion. Accepted papers that exceed >twelve journal pages will be charged mandatory page charges on >the excess pages. > >MANUSCRIPT FORMAT >The text of submitted manuscripts should be double spaced and in >a single column. Appropriate fonts and font sizes should be >used to maximize readability. 
> >Please upload your revision and summary of changes to your TSE >author center under your original paper's log number at > >http://cs-ieee.manuscriptcentral.com > >Click on Revised Manuscripts then on the View Comments/Respond >button of the paper with your original log number (you'll need >to include comments to the reviewers and the editor). Next, >click on the title to upload the new revision. This will take >you to the File upload screen - just follow the on-screen >instructions from there. Be sure to send an email to the >Transactions Assistant announcing your newly uploaded revision, >because the system does not do so automatically. When the submission process is >complete, you will receive an automated confirmation email >immediately. If you do not receive that email, your submission >is not yet complete. You will also receive an email from the >Transactions Assistant after it is confirmed that your revision >meets all submission guidelines. > >I will send your revised manuscript and your revision summary to >the associate editor and reviewers for comments. > >Should you realize that you forgot to include a file before >submission, please email it to me at tse@computer.org. > >Please do not hesitate to contact me or my supervisor, >Suzanne Werner, should you have any questions about our process >or experience technical difficulties. You can reach me at >tse@computer.org and Suzanne at swerner@computer.org. Our full >contact information is below. > >Thank you for your contribution to TSE, and we look forward to >receiving your revised manuscript. > >Sincerely, > >Mrs. Selina Flynn, TSE >Transactions Assistant >IEEE Computer Society >10662 Los Vaqueros Circle >Los Alamitos, CA 90720 >USA >tse@computer.org >Phone: +714.821.8380 >Fax: +714.821.9975 > >*********** > >Editor Comments > >The reviewers generally liked your paper, as I did myself. Even >reviewer 3, who was most critical, thought there was a very good >paper struggling to get out of this one! 
> >I am recommending 'major revision' largely because I agree with >the views of reviewer 3, and I hope that you will be able to >respond to his extensive suggestions (see his .pdf file). I >should say that this reviewer is a statistician who has great >experience in dependability and software engineering. He is >someone for whom I have the greatest respect. > >Since your revision will mainly be addressing the criticisms of >reviewer 3, I would be happy to speed up the re-review process >by only asking for views from him and not from the others >(although, of course, you will have to satisfy me!) >************************** > >Reviewer Comments > >Please note that some reviewers may have included additional >comments in a separate file. If a review contains the note "see >the attached file" under Section III A - Public Comments, you >will need to log on to Manuscript Central to view the file. >After logging in to Manuscript Central, enter the Author Center. >Then, click on Submitted Manuscripts and find the correct paper >and click on "View Letter". Scroll down to the bottom of the >decision letter and click on the file attachment link. This >will pop-up the file that the reviewer included for you along >with their review. > >************************** > > >Reviewer 1 > > >Section I. Overview > >A. Reader Interest > >1. Which category describes this manuscript? > >( ) Practice / Application / Case Study / Experience Report > >(X) Research / Technology > >( ) Survey / Tutorial / How-To > > >2. How relevant is this manuscript to the readers of this >periodical? Please explain your rating under IIIA. Public >Comments. > >(X) Very Relevant > >( ) Relevant > >( ) Interesting - but not very relevant > >( ) Irrelevant > > >B. Content > >1. Please explain how this manuscript advances this field of >research and / or contributes something new to the literature. >Please explain your answer under IIIA. Public Comments. > >2. Is the manuscript technically sound? 
Please explain your >answer under IIIA. Public Comments. > >( ) Yes > >(X) Appears to be - but didn't check completely > >( ) Partially > >( ) No > > >C. Presentation > >1. Are the title, abstract, and keywords appropriate? Please >explain your answer under IIIA. Public Comments. > >(X) Yes > >( ) No > > >2. Does the manuscript contain sufficient and appropriate >references? Please explain your answer under IIIA. Public >Comments. > >(X) References are sufficient and appropriate > >( ) Important references are missing; more references are >needed > >( ) Number of references are excessive > > >3. Does the introduction state the objectives of the manuscript >in terms that encourage the reader to read on? Please explain >your answer under IIIA. Public Comments. > >(X) Yes > >( ) Could be improved > >( ) No > > >4. How would you rate the organization of the manuscript? Is it >focused? Is the length appropriate for the topic? Please explain >your answer under IIIA. Public Comments. > >(X) Satisfactory > >( ) Could be improved > >( ) Poor > > >5. Please rate and comment on the readability of this >manuscript. Please explain your answer under IIIA. Public >Comments. > >( ) Easy to read > >(X) Readable - but requires some effort to understand > >( ) Difficult to read and understand > >( ) Unreadable > > >Section II. Summary and Recommendation > > >A. Evaluation > >Please rate the manuscript. Please explain your answer under >IIIA. Public Comments. > >( ) Award Quality > >( ) Excellent > >(X) Good > >( ) Fair > >( ) Poor > > >B. Recommendation > >Please make your recommendation. Please explain your answer >under IIIA. Public Comments. > >( ) Accept with no changes > >(X) Author should prepare a minor revision > >( ) Author should prepare a major revision for a second review > >( ) Reject > > >Section III. Detailed Comments > > >A. 
Public Comments (these will be made available to the author) > The paper extends the authors' previous work regarding "main >effects screening" - a distributed continuous quality assurance >(DCQA) process for assessing and improving performance of >evolving systems having a large number of configuration options. > >They propose a new process called "reliable effects screening", >which includes verifying key assumptions taken during the >screening process without too much effort. > >The topic is very relevant to IEEE TSE, and the paper is well >written; most parts are easy to read. > >Comments: >The configuration options that have been used are all binary. >It would be useful to explain in a paragraph how some of >the key equations would change if, say, options that can take four >values were used. Specifically, what changes would occur in >equations 1 - 4? >If possible, the authors should give appropriate reference(s) >in Section IIIC to a previous study that uses non-binary >options for similar experiments. > >On page 28, 3rd paragraph: Kindly mention how you arrived at >exactly "additional 2328 configurations" for benchmarking. > >Page 31, figure 11: If possible, please explain why >the last point is an outlier in both the top-5 and top-2 cases. > > >Figure 3 is slightly confusing to look at and does not >immediately convey what is being explained. I believe that it is >a tool screenshot (and so cannot be changed) - kindly mention >that it is a screenshot in the paper. > >Typos and minor corrections: > >Page 35: Reliable effects screening: .... to be "rapid", cheap >and .... > >Page 29, 3rd line: "99.99%" > >Page 33: Please mention the units of latency on the y-axis (i.e. >ms, secs, etc.) > >References: >In [31] - "K.S.T. Vibhu ..." is not properly cited. 
Please >correct the names and the ordering of authors as given >at: http://dx.doi.org/10.1007/11424529_5 > >Please consistently use 'In Proceedings of' in all the >references to papers in conference proceedings, rather than >using 'in Proc.', etc. > >Please be consistent in naming the authors: please mention all >authors in [14] (and not "et al."). > >************************** > >Reviewer 2 > > >Section I. Overview > >A. Reader Interest > >1. Which category describes this manuscript? > >( ) Practice / Application / Case Study / Experience Report > >(X) Research / Technology > >( ) Survey / Tutorial / How-To > > >2. How relevant is this manuscript to the readers of this >periodical? Please explain your rating under IIIA. Public >Comments. > >(X) Very Relevant > >( ) Relevant > >( ) Interesting - but not very relevant > >( ) Irrelevant > > >B. Content > >1. Please explain how this manuscript advances this field of >research and / or contributes something new to the literature. >Please explain your answer under IIIA. Public Comments. > >2. Is the manuscript technically sound? Please explain your >answer under IIIA. Public Comments. > >( ) Yes > >(X) Appears to be - but didn't check completely > >( ) Partially > >( ) No > > >C. Presentation > >1. Are the title, abstract, and keywords appropriate? Please >explain your answer under IIIA. Public Comments. > >(X) Yes > >( ) No > > >2. Does the manuscript contain sufficient and appropriate >references? Please explain your answer under IIIA. Public >Comments. > >(X) References are sufficient and appropriate > >( ) Important references are missing; more references are >needed > >( ) Number of references are excessive > > >3. Does the introduction state the objectives of the manuscript >in terms that encourage the reader to read on? Please explain >your answer under IIIA. Public Comments. > >( ) Yes > >( ) Could be improved > >( ) No > > >4. How would you rate the organization of the manuscript? Is it >focused? 
Is the length appropriate for the topic? Please explain >your answer under IIIA. Public Comments. > >(X) Satisfactory > >( ) Could be improved > >( ) Poor > > >5. Please rate and comment on the readability of this >manuscript. Please explain your answer under IIIA. Public >Comments. > >( ) Easy to read > >(X) Readable - but requires some effort to understand > >( ) Difficult to read and understand > >( ) Unreadable > > >Section II. Summary and Recommendation > > >A. Evaluation > >Please rate the manuscript. Please explain your answer under >IIIA. Public Comments. > >( ) Award Quality > >(X) Excellent > >( ) Good > >( ) Fair > >( ) Poor > > >B. Recommendation > >Please make your recommendation. Please explain your answer >under IIIA. Public Comments. > >(X) Accept with no changes > >( ) Author should prepare a minor revision > >( ) Author should prepare a major revision for a second review > >( ) Reject > > >Section III. Detailed Comments > > >A. Public Comments (these will be made available to the author) > Very thorough, substantially improved from conference paper. > >The example discussed in Section V, "using changes in option >importance to detect bugs" is intriguing. It would be nice if >the authors could produce some numbers to back up the suggestion >that reliable effects screening would produce a "dramatic >change". Could the numbers be run and reported? Not essential, >but it would make the applications discussion stronger. > >************************** > >Reviewer 3 > > >Section I. Overview > >A. Reader Interest > >1. Which category describes this manuscript? > >(X) Practice / Application / Case Study / Experience Report > >( ) Research / Technology > >( ) Survey / Tutorial / How-To > > >2. How relevant is this manuscript to the readers of this >periodical? Please explain your rating under IIIA. Public >Comments. > >(X) Very Relevant > >( ) Relevant > >( ) Interesting - but not very relevant > >( ) Irrelevant > > >B. Content > >1. 
Please explain how this manuscript advances this field of >research and / or contributes something new to the literature. >Please explain your answer under IIIA. Public Comments. > >2. Is the manuscript technically sound? Please explain your >answer under IIIA. Public Comments. > >( ) Yes > >( ) Appears to be - but didn't check completely > >(X) Partially > >( ) No > > >C. Presentation > >1. Are the title, abstract, and keywords appropriate? Please >explain your answer under IIIA. Public Comments. > >(X) Yes > >( ) No > > >2. Does the manuscript contain sufficient and appropriate >references? Please explain your answer under IIIA. Public >Comments. > >( ) References are sufficient and appropriate > >(X) Important references are missing; more references are >needed > >( ) Number of references are excessive > > >3. Does the introduction state the objectives of the manuscript >in terms that encourage the reader to read on? Please explain >your answer under IIIA. Public Comments. > >( ) Yes > >(X) Could be improved > >( ) No > > >4. How would you rate the organization of the manuscript? Is it >focused? Is the length appropriate for the topic? Please explain >your answer under IIIA. Public Comments. > >( ) Satisfactory > >( ) Could be improved > >(X) Poor > > >5. Please rate and comment on the readability of this >manuscript. Please explain your answer under IIIA. Public >Comments. > >( ) Easy to read > >( ) Readable - but requires some effort to understand > >(X) Difficult to read and understand > >( ) Unreadable > > >Section II. Summary and Recommendation > > >A. Evaluation > >Please rate the manuscript. Please explain your answer under >IIIA. Public Comments. > >( ) Award Quality > >( ) Excellent > >( ) Good > >( ) Fair > >(X) Poor > > >B. Recommendation > >Please make your recommendation. Please explain your answer >under IIIA. Public Comments. 
> >( ) Accept with no changes > >( ) Author should prepare a minor revision > >(X) Author should prepare a major revision for a second review > >( ) Reject > > >Section III. Detailed Comments > > >A. Public Comments (these will be made available to the author) > >General Comments >This paper contains some interesting and important developments >which I think should be reported. However, it is far too long >and convoluted and I just got bored stiff reading it. I'll >address the general issue of structure and then move on to some >detailed comments. > >Structure >The paper consists of a number of >elements: a description of the group's approach to software >design, the analytical and experimental treatment of the >candidate designs, and their conclusions. The approach to >software design and the arguments supporting the analytical and >experimental work deserve to be published. Unfortunately, these >have been mixed in with a rather poor exposition of the use of >designed experiments. A very large part of the paper can be >referenced away. It is also striking that after the problem >statement, they give a description of something that is already >widely available in proprietary software. Lastly, as an applied >statistician rather than a software engineer, I find the >description of the problem excessively complicated and full of >grandiose language when what they are doing is, at least >conceptually, simple. > >Problem definition >I should like the authors to say something similar to the >following in clear plain English. >• They are developing a system to design and test software >systems. 
>• The software systems are largely Distributed Object Computing >systems. >• The design phase uses, conceptually: >o a number of sockets into which software elements can be fitted >(a modular approach); >o the system has to meet a number of performance requirements, >largely QoS; >o the software elements are standard components but their >performance can be adjusted through controllable parameters; >o there are environmental and interaction effects which are >difficult to predict. >• The objective is to automate the design by developing a system >to select the software elements and to set the appropriate >parameter values. >• The automation is necessary to allow the designers to cope >with the very large number of combinations within a single >design (there is a combinatorial explosion). > >Design and analysis >This is the SKOLL part. > >The key points are: >• the system produces a design which may be a restricted version >of the final product to allow effective testing; >• the system produces the testing environment which can exercise >the product. > >Experimentation >This part is largely a rather poor and heavy-handed exposition >of experimental design. It should be dispensed with and >referenced out. Further, it is a re-invention of the wheel, and >the references suggest an almost perverse ignorance of the >application of experimental design and statistical techniques in >other industries. They seem to borrow, without referencing: >• six-sigma approaches >• "the seven old tools" >• Exploratory Data Analysis >There is much ready-made software to implement these parts of >their process which has been widely used and thoroughly >proven: MINITAB does everything they describe and provides >rather more help to the user. There is also a very good system >for statistical analysis available through the National >Institute of Standards and Technology (http://www.nist.gov/) and >Sematech (http://www.sematech.org/). 
The system is The >Engineering Statistics Handbook >(http://www.itl.nist.gov/div898/handbook/index.htm). >What they need to hold over in the article is the value of >screening experiments and the choice of the resolution of a >design. The details of fractional designs and D-optimal designs >can be dropped. If they do this, they can explain more clearly >the values and limitations of their approach. In >particular, screening designs are all two-level designs and so >miss the possibility of non-linearities in responses; they are >also poor on interactions (their discussion of aliasing and >resolution). They do not need to say much about the >computational simplification that follows from using orthogonal >designs; we'll assume that there will be software available to >do this drudgery (MINITAB, for example). It is also striking >that they have not mentioned robust design, something which is >widely used in other industrial sectors. Robust design would >seem to be highly relevant in making software systems proof >against environmental variation. Lastly, while discussing the use >of experimental design, it really isn't very useful when the >number of factors in the experiment is large. It is well known >(look at any book by Doug Montgomery) that while >DOE experiments may produce feasible designs, even robust >designs, they cannot distinguish well between the large number >of alternatives that meet the product requirements (an >identifiability problem). >They should look at the lecture from Richard Parry-Jones, >Engineering for Corporate Success in the New Millennium >(http://www.raeng.org.uk/news/publications/). > >Recommendation >I think there is a really interesting and useful piece of work >struggling to escape from this article, and the authors should >be encouraged to resubmit a heavily revised version which >addresses the issues I have raised. They need to cut out all >the statistical exposition. 
They should clarify the nature of the problem >addressed, as outlined above, and relate their approach to other uses >of experimental methods in engineering design. > >References >I have mentioned exploratory analysis, the seven old tools and >Exploratory Data Analysis. They can find an excellent exposition >of all of these things in the book below. Since this book covers >all the statistics mentioned in the article, there does not seem >to be much of a case for leaving them in. >Quality: Systems, Concepts, Strategies and Tools, W.J. Kolarik, >McGraw-Hill Education, March 1995, ISBN: 0070352178 >************************** > >