Jialu Huang will present her preFPO on Monday April 9 at 1PM in Room 301 (note room!!)
The members of her committee are:  David August, advisor; David Walker and Kai Li,
readers; Doug Clark and JP Singh, nonreaders.  Everyone is invited to attend her talk.
Her abstract follows.

=========================================================================================
Abstract:

Automatic parallelization is a promising approach to delivering scalable multi-threaded
programs for multi-core architectures. Most existing techniques parallelize independent
loops and insert a global synchronization at the end of each loop invocation. For programs
with few loop invocations, these global synchronizations do not limit parallel
performance. For programs with many loop invocations, however, they can easily become
the performance bottleneck: they frequently force all threads to wait, sacrificing
parallelization opportunities. Some automatic parallelization techniques apply static
analysis to enable cross-invocation parallelization, so that instead of waiting, threads
may execute iterations from subsequent invocations as long as doing so causes no conflict.
However, static analysis must be conservative, and therefore cannot handle programs with
irregular dependence patterns.
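To make the bottleneck concrete, below is a minimal sketch in C with POSIX threads of
the pattern described above: the loop body runs in parallel, but every invocation ends
with a global barrier at which all threads wait, whether or not the next invocation's
iterations actually conflict. All names and constants are illustrative and are not
taken from the thesis.

    #define _POSIX_C_SOURCE 200809L
    #include <pthread.h>
    #include <stdio.h>

    #define NUM_THREADS      4
    #define NUM_ITERATIONS   64
    #define NUM_INVOCATIONS  1000

    static pthread_barrier_t barrier;
    static double data[NUM_ITERATIONS];

    static void *worker(void *arg)
    {
        long tid = (long)arg;

        for (int inv = 0; inv < NUM_INVOCATIONS; inv++) {
            /* Each thread executes its share of this invocation's iterations. */
            for (int i = (int)tid; i < NUM_ITERATIONS; i += NUM_THREADS)
                data[i] += 1.0;

            /* Global synchronization at the end of every invocation:
               with many invocations, this wait becomes the bottleneck. */
            pthread_barrier_wait(&barrier);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t threads[NUM_THREADS];

        pthread_barrier_init(&barrier, NULL, NUM_THREADS);
        for (long t = 0; t < NUM_THREADS; t++)
            pthread_create(&threads[t], NULL, worker, (void *)t);
        for (long t = 0; t < NUM_THREADS; t++)
            pthread_join(threads[t], NULL);
        pthread_barrier_destroy(&barrier);

        printf("data[0] = %f\n", data[0]);
        return 0;
    }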

To enable more parallelization across loop invocations, this thesis presents two novel
automatic parallelization techniques: BLISS and SBS. Unlike existing techniques that rely
on static analysis, these two techniques exploit runtime information to achieve much more
aggressive parallelization. BLISS constructs a custom runtime engine that non-speculatively
observes dependences at runtime and synchronizes iterations only when necessary, while SBS
applies software speculative barriers to permit some threads to execute past invocation
boundaries. The two techniques are complementary in that they parallelize programs with
very different characteristics. SBS is most effective when a program's cross-invocation
dependences rarely cause a runtime conflict. BLISS's runtime engine imposes a small amount
of overhead, but that engine also enables it to effectively parallelize programs whose
dependences cause frequent conflicts. A preliminary implementation and evaluation
demonstrate that both techniques achieve much better scalability than existing automatic
parallelization techniques.
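As a rough illustration of what cross-invocation parallelization buys, the global barrier
in the earlier sketch can be replaced with per-element counters so that a thread waits only
for the specific value its next iteration reads and may otherwise run ahead into later
invocations. This is only a sketch of the general idea of fine-grained, non-speculative
synchronization, not the thesis's BLISS runtime engine or SBS speculative barriers; every
name and the dependence pattern below are hypothetical.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    #define NUM_THREADS      4
    #define NUM_ITERATIONS   64
    #define NUM_INVOCATIONS  100

    /* data[inv][i] is element i as of the start of invocation inv. */
    static double     data[NUM_INVOCATIONS + 1][NUM_ITERATIONS];
    /* ready[i] = number of invocations that have finished updating element i. */
    static atomic_int ready[NUM_ITERATIONS];

    static void *worker(void *arg)
    {
        long tid = (long)arg;

        /* Static ownership: thread tid handles elements with i % NUM_THREADS == tid. */
        for (int inv = 0; inv < NUM_INVOCATIONS; inv++) {
            for (int i = (int)tid; i < NUM_ITERATIONS; i += NUM_THREADS) {
                if (i > 0) {
                    /* Wait only for the one cross-invocation dependence this
                       iteration has (its left neighbor), instead of holding
                       every thread at a barrier between invocations. */
                    while (atomic_load_explicit(&ready[i - 1],
                                                memory_order_acquire) < inv)
                        ; /* spin */
                    data[inv + 1][i] = data[inv][i] + data[inv][i - 1];
                } else {
                    data[inv + 1][i] = data[inv][i] + 1.0;
                }
                /* Publish element i's result for invocation inv. */
                atomic_store_explicit(&ready[i], inv + 1, memory_order_release);
            }
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t threads[NUM_THREADS];

        for (long t = 0; t < NUM_THREADS; t++)
            pthread_create(&threads[t], NULL, worker, (void *)t);
        for (long t = 0; t < NUM_THREADS; t++)
            pthread_join(threads[t], NULL);

        printf("final value = %f\n",
               data[NUM_INVOCATIONS][NUM_ITERATIONS - 1]);
        return 0;
    }

In this sketch a thread may be several invocations ahead of its neighbors, which is
exactly the opportunity that a barrier at every invocation boundary forfeits.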

==============================================================================================