We have chosen to adopt a Multiattribute Utility Evaluation approach to evaluating the alternative contenders for a VRE. Despite its formal name, the approach is straightforward and simple to apply. The core of the procedure is to identify the values or criteria most relevant to the functioning of a VRE. Measurements are then made to determine the degree to which each criterion is attained. By doing this systematically, and by making numerical judgements wherever possible, we can compare the VRE contenders on a more objective basis than is usually the case. We have identified the following 10 broad criteria, some of which are further subdivided.
These criteria are as follows:
a) It is generally accepted that software modules used by many projects end up robust and well understood because of the exposure they receive. It therefore makes good sense to build on publicly available libraries when constructing a software product, rather than re-implementing the same algorithms repeatedly. This criterion reflects the degree to which the VRE makes use of open-source libraries.
b) Conformance to ratified standards is another feature generally seen as important. Standards conformance fosters interoperability and ease of extension via pluggable components. JSR 168 and WSRP have been identified as the two main standards that, when adhered to, allow Java components and Web-service-based components to be added to a VRE. Interface standardization also facilitates reuse: many tools currently in circulation, written in Java, could be re-factored to plug into a standards-based interface such as JSR 168. Many more tools are written in other languages; as long as these can be re-factored to speak the WSRP protocol, they too could be reused in a WSRP-compliant container.
a) This measures the degree of support available to developers who wish to write tools for, or even extend, the VRE framework.
b) This measures the degree of support available to users and administrators who wish to install, configure and use the VRE framework.
Have all the criteria been listed? There are others we could include, such as the track record of the developer team, but this is taken into account when we allocate a score under sub-criterion F(a). We felt that other, perhaps domain-specific, criteria would be less important in an overall evaluation and are thus likely to have a lower impact and not affect the overall rankings. This has been partially tested with a sensitivity analysis (see below).
We then ranked the 10 criteria in order of importance and allocated each a score out of 10; these scores were then standardized to sum to 1. Criteria that overlapped substantially with one another were given similar rankings. For simplicity, and as a starting point, we gave them all equal weights (0.1).
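As a small illustration of the standardization step, converting a set of importance scores (out of 10) into weights that sum to 1 is a single division. The scores below are hypothetical placeholders, not our actual allocations:

```python
# Hypothetical importance scores (out of 10) for ten criteria.
# These are placeholders for illustration, not our actual scores.
raw_scores = [9, 8, 8, 7, 6, 6, 5, 5, 4, 4]

total = sum(raw_scores)
weights = [s / total for s in raw_scores]  # standardized to sum to 1

assert abs(sum(weights) - 1.0) < 1e-9
```

With equal raw scores, every weight comes out the same, which is exactly the equal-weight (0.1 each) starting point described above.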
The same process was then applied to the sub-criteria. This gives us the weights $w_{ij}$ that can be used to aggregate the score $s_{ij}$ of each component $j$ of criterion $i$ to produce a total score $S_i$ for that criterion. So, for example, for a criterion $i$ that is subdivided into 4 components, the total score is given by $S_i = w_{i1}s_{i1} + w_{i2}s_{i2} + w_{i3}s_{i3} + w_{i4}s_{i4}$. The total score over all the criteria is then given by $S = \sum_i W_i S_i$, where $W_i$ is the weight applied to criterion $i$. The weightings we used are shown in parentheses in the following figure.
The scores (out of 100) for each component were then obtained using our judgment.
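The sensitivity analysis mentioned earlier can be sketched as a simple perturbation loop: nudge each criterion weight up or down, renormalize the weights so they still sum to 1, and check whether the ranking of contenders changes. The contender names and score matrix below are hypothetical:

```python
# Hypothetical scores (out of 100) for three contenders on four
# criteria; names and values are placeholders for illustration.
scores = {
    "VRE-1": [70, 80, 60, 90],
    "VRE-2": [85, 60, 75, 70],
    "VRE-3": [50, 90, 80, 65],
}

def ranking(weights):
    """Order the contenders by weighted total score, best first."""
    totals = {name: sum(w * s for w, s in zip(weights, row))
              for name, row in scores.items()}
    return sorted(totals, key=totals.get, reverse=True)

baseline_weights = [0.25, 0.25, 0.25, 0.25]
baseline = ranking(baseline_weights)

for i in range(len(baseline_weights)):
    for delta in (-0.05, 0.05):
        perturbed = baseline_weights[:]
        perturbed[i] = max(perturbed[i] + delta, 0.0)
        total = sum(perturbed)
        perturbed = [w / total for w in perturbed]  # renormalize to sum to 1
        if ranking(perturbed) != baseline:
            print(f"Ranking changes when weight {i} shifts by {delta:+.2f}")
```

A ranking that survives such perturbations supports the claim that small changes to the weights do not affect the overall result.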