[parsec-users] serial version of parsec suite

Jim Dempsey jim at quickthreadprogramming.com
Thu Feb 2 15:29:41 EST 2012


What is it you are trying to measure?

Or are you stress testing to see if your simulation code breaks?

>> I really don't understand why I should care about memory bandwidth or lock
contention in this configuration.

Memory bandwidth here means the memory bandwidth within the simulated (virtual)
machine; the same applies to the lock contention.

If you run the PARSEC single-thread build configuration, the code ought not to
contain the lock-contention code, though you will need to verify this. Running
the multithreaded build with one thread will not eliminate the lock code,
should it be present.
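That distinction can be sketched with PARSEC's parsecmgmt tool (a sketch,
assuming the gcc-serial and gcc-pthreads build configurations shipped with
PARSEC; names and available inputs may differ on your installation):

```shell
# Build and run the truly serial version (no threading, so no lock
# code compiled in), using the 'gcc-serial' build configuration:
parsecmgmt -a build -p ferret -c gcc-serial
parsecmgmt -a run   -p ferret -c gcc-serial -i simsmall

# By contrast, the multithreaded build run with a single thread (-n 1)
# still executes whatever locking code was compiled in:
parsecmgmt -a build -p ferret -c gcc-pthreads
parsecmgmt -a run   -p ferret -c gcc-pthreads -i simsmall -n 1
```

Comparing the two runs is one way to check whether lock overhead is present
even when only one thread is active.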

Your original post stated:

>>is it fair to run ferret, fluidanimate, dedup and facesim on a 4 core CMP
(each core runs one thread) and report the results?

As long as the results include the disclaimer: 4 virtual cores, each
concurrently running the single-thread version of one of ferret, fluidanimate,
dedup, and facesim.

Otherwise, someone reading your posted results (including yourself at a later
time) might construe them as showing something different from what the test
actually shows.

When running concurrently you have memory bandwidth issues, both virtual
within your simulation (when implemented properly) and physical as you run the
simulation. The same applies to the cache evictions listed in the earlier
message.

Without this disclaimer, one might construe or promote design conclusions
about the processor designs being emulated.
With the disclaimer, the claim is limited to: under these conditions, CPU
design X is superior to CPU design Y.

Jim Dempsey

-----Original Message-----
From: parsec-users-bounces at lists.cs.princeton.edu
[mailto:parsec-users-bounces at lists.cs.princeton.edu] On Behalf Of Mahmood
Naderan
Sent: Thursday, February 02, 2012 1:50 PM
To: PARSEC
Subject: Re: [parsec-users] serial version of parsec suite



It seems that I asked the question wrongly. The problem is not running one
program 4 times; I meant 4 different programs on 4 cores. As I said:

core 0 runs ferret
core 1 runs fluidanimate
core 2 runs dedup
core 3 runs facesim

Each core runs the serial version of application (not multithreaded). 
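On Linux, that core-per-program setup could be pinned down explicitly with
taskset (a sketch; it assumes the serial builds exist and parsecmgmt is on
PATH, and the input size chosen here is illustrative):

```shell
# Pin each serial workload to its own core and run all four concurrently:
taskset -c 0 parsecmgmt -a run -p ferret       -c gcc-serial -i simlarge &
taskset -c 1 parsecmgmt -a run -p fluidanimate -c gcc-serial -i simlarge &
taskset -c 2 parsecmgmt -a run -p dedup        -c gcc-serial -i simlarge &
taskset -c 3 parsecmgmt -a run -p facesim      -c gcc-serial -i simlarge &
wait    # block until all four workloads finish
```

Without explicit pinning, the OS scheduler may migrate the processes between
cores, which changes cache and bandwidth behavior across runs.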


I really don't understand why I should care about memory bandwidth or lock
contention in this configuration.



To biswabandan:
The paper says this in section 5.1:
We simulate both 4-core (for sequential workloads) and 16-core (for parallel
workloads) CMP systems

So I think it separates the workloads based on whether the applications are
parallel or sequential.

// Naderan *Mahmood;


________________________________
From: kishore kumar <kishoreguptaos at gmail.com>
To: Mahmood Naderan <nt_mahmood at yahoo.com>; PARSEC Users
<parsec-users at lists.cs.princeton.edu>
Sent: Thursday, February 2, 2012 6:56 PM
Subject: Re: [parsec-users] serial version of parsec suite


Apart from memory bandwidth, another important factor that influences the
scalable performance of a multithreaded program is lock contention. I have
observed that the main reason most of the PARSEC programs do not scale on a
multicore machine with a large number of cores (e.g., a 64-core machine) is
lock contention rather than memory bandwidth.


Best,
Kishore Kumar Pusukuri
http://www.cs.ucr.edu/~kishore




On Thu, Feb 2, 2012 at 2:23 AM, Mahmood Naderan <nt_mahmood at yahoo.com>
wrote:

Hi,
>The main characteristic of the PARSEC suite is that it is a collection of
multithreaded applications. However, it is also possible to run serial
versions of those applications. Now I want to know: is it fine to use the
serial versions the way SPEC benchmarks are used? For example, is it fair to
run ferret, fluidanimate, dedup and facesim on a 4-core CMP (each core running
one thread) and report the results?
> 
>
>// Naderan *Mahmood;
>_______________________________________________
>parsec-users mailing list
>parsec-users at lists.cs.princeton.edu
>https://lists.cs.princeton.edu/mailman/listinfo/parsec-users
> 


