One of the main challenges in scaling symbolic execution to large systems is that constraint solving consumes a large fraction of the available CPU cycles. The larger the tested software system, the deeper the explored paths and the more complex the resulting constraints. In the papers for today and last week, you have seen various approaches to scaling the constraint-solving part.
For this OP, devise a metric for measuring this "scalability of constraint solving" property. Define the metric precisely and explain how you would apply it to experimentally evaluate the scalability of constraint solving in a distributed/parallel constraint solver (e.g., the one used in Cloud9). The ideal OP will design one or more experiments, explain what would be measured and how, and provide some expected results (e.g., if you were to plot the measurements, what would the curves look like?).
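To make the task concrete, here is a minimal sketch of one possible metric, not the expected answer: parallel efficiency of the solver, i.e., how close the speedup at n workers comes to the ideal factor of n on a fixed batch of constraint queries. All function names and the sample timings below are hypothetical; a real experiment would measure wall-clock solving times on an actual workload.

```python
# Illustrative sketch of a candidate scalability metric (hypothetical, not
# from the assignment): speedup and parallel efficiency of constraint
# solving as the number of solver workers grows.

def speedup(t1: float, tn: float) -> float:
    """Speedup S(n) = T(1) / T(n): time with 1 worker over time with n workers,
    for the same fixed batch of constraint queries."""
    return t1 / tn

def efficiency(t1: float, tn: float, n: int) -> float:
    """Parallel efficiency E(n) = S(n) / n; 1.0 means perfect linear scaling,
    values well below 1.0 indicate coordination or load-balancing overhead."""
    return speedup(t1, tn) / n

# Hypothetical wall-clock times (seconds) to solve the same query batch with
# 1, 2, 4, and 8 workers; in a real experiment these come from measurements.
measurements = {1: 100.0, 2: 55.0, 4: 32.0, 8: 21.0}

t1 = measurements[1]
for n, tn in sorted(measurements.items()):
    print(f"workers={n}: speedup={speedup(t1, tn):.2f}, "
          f"efficiency={efficiency(t1, tn, n):.2f}")
```

Plotting efficiency E(n) against the worker count n would then directly visualize the scalability claim: a flat curve near 1.0 suggests the solver scales well, while a curve that decays toward 0 exposes the point where parallelization stops paying off.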