When parallelizing an exploration task of any sort (symbolic execution, test generation, model checking, and so on), one always faces the challenge of dealing with redundant work and/or redundant states. What would it take for an exploration task of this kind not to incur any redundancy when parallelized on a shared-nothing cluster? Can you conceive of a case in which it is practical to parallelize symbolic execution without incurring any redundancy, or will shared-nothing architectures always require a tradeoff between coordination overhead and redundancy overhead? Argue your position with evidence that is as quantitative as possible.
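To make the tradeoff concrete, here is a minimal Python sketch (hypothetical, not part of the assignment) of one extreme point in the design space: static prefix partitioning of the symbolic execution tree. Each of N workers owns exactly the subtree whose first log2(N) branch decisions spell out its worker id, so workers explore disjoint path sets with zero redundancy and zero coordination; the cost is that nothing balances the load. The helpers `explore`, `worker_paths`, and the toy `is_feasible` predicate are all illustrative stand-ins, not real symbolic-execution APIs.

```python
def explore(path, depth_limit, is_feasible):
    """Enumerate feasible paths below `path` (a tuple of 0/1 branch choices)."""
    if len(path) == depth_limit:
        yield path
        return
    for choice in (0, 1):
        child = path + (choice,)
        if is_feasible(child):          # stand-in for an SMT feasibility check
            yield from explore(child, depth_limit, is_feasible)

def worker_paths(worker_id, num_workers, depth_limit, is_feasible):
    """Statically assign worker `worker_id` the subtree matching its id bits."""
    k = num_workers.bit_length() - 1    # assumes num_workers is a power of two
    prefix = tuple((worker_id >> (k - 1 - i)) & 1 for i in range(k))
    if not is_feasible(prefix):
        return []                       # this worker's whole subtree is dead
    return list(explore(prefix, depth_limit, is_feasible))

if __name__ == "__main__":
    # Toy feasibility: any path containing three consecutive 1s is infeasible.
    feasible = lambda p: "111" not in "".join(map(str, p))
    all_paths = [p for w in range(4) for p in worker_paths(w, 4, 10, feasible)]
    assert len(all_paths) == len(set(all_paths))    # disjoint: no redundancy
    for w in range(4):
        print(w, len(worker_paths(w, 4, 10, feasible)))  # uneven workloads
```

Running the toy shows workers receiving visibly different path counts (worker 3's subtree is mostly pruned), which is exactly why real systems reintroduce coordination, such as dynamic work stealing or load balancing, and with it the risk of duplicated or transferred state. Your answer should weigh these two costs quantitatively.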