[parsec-users] Question regarding fluidanimate in Parsec 3.0

Jason Kwok kwok.jason at live.com
Wed Oct 24 14:43:15 EDT 2012


Hi Rob,

I want to make sure I understand your argument, because I think there are two
independent things here: floating-point inaccuracy and path dependence.

Are you saying that the delta is caused by the non-deterministic ordering
combined with floating-point inaccuracy? In other words, if the calculations
were integer-based, would the non-deterministic ordering still produce
identical results?

If the order in which the particles are updated affects the values used in
subsequent calculations, then the difference is not due to floating-point
inaccuracy alone.
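
To make the distinction concrete, here is a minimal sketch I put together
(not code from fluidanimate): merely changing the order in which three
floating-point values are added changes the result, while the same
reordering with integers does not.

    // order_demo.cpp -- a minimal sketch (not fluidanimate code) showing
    // that the order of floating-point additions changes the result,
    // while the same reordering of integer additions does not.
    #include <cstdio>

    int main() {
        float a = 1e8f, b = -1e8f, c = 1e-3f;

        // Two evaluation orders of the "same" sum of three floats.
        float sum1 = (a + b) + c;   // 0 + 0.001           -> 0.001
        float sum2 = a + (b + c);   // c is lost next to b -> 0.0
        std::printf("float: %g vs %g\n", sum1, sum2);

        // The same reordering with integer arithmetic is exact.
        long x = 100000000L, y = -100000000L, z = 1L;
        std::printf("int:   %ld vs %ld\n", (x + y) + z, x + (y + z));
        return 0;
    }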

Thanks,
Jason

Date: Tue, 23 Oct 2012 18:39:24 -0500
From: rsmolin2 at illinois.edu
To: parsec-users at lists.cs.princeton.edu
Subject: Re: [parsec-users] Question regarding fluidanimate in Parsec 3.0

Hi Jason,

This is expected behavior for fluidanimate, since updates to a border cell
(i.e. a cell that can be modified by two or more threads concurrently) are
not applied in a deterministic order. That non-determinism, combined with
the fact that finite-precision floating-point arithmetic is not associative,
is why your test fails when you check for exact equality. Programs that use
floating-point arithmetic usually test for equality by checking that the
absolute difference between the computed value and the expected value is
less than some small tolerance (e.g. 10^-7).
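
For example, a check along these lines (a minimal sketch, not the actual
fluidcmp code, and presumably roughly what non-zero --ptol and --vtol
values give you) accepts a difference in the last bit that an exact
comparison rejects:

    // tolerance_check.cpp -- a minimal sketch of an absolute-tolerance
    // comparison; not the actual fluidcmp implementation.
    #include <cmath>
    #include <cstdio>

    // True when computed and expected agree to within an absolute tolerance.
    bool nearly_equal(float computed, float expected, float tol = 1e-7f) {
        return std::fabs(computed - expected) < tol;
    }

    int main() {
        float expected = 0.5f;
        // One representable float above 0.5f -- the kind of last-bit
        // difference that appears when the summation order changes.
        float computed = std::nextafter(expected, 1.0f);

        std::printf("exact:     %s\n", computed == expected ? "PASS" : "FAIL");
        std::printf("tolerance: %s\n",
                    nearly_equal(computed, expected) ? "PASS" : "FAIL");
        return 0;
    }

In practice you often want a relative tolerance as well, since the size of a
one-bit difference grows with the magnitude of the value.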

To get a better understanding of floating-point arithmetic you might want to
read "What Every Computer Scientist Should Know About Floating-Point
Arithmetic", since there are many small details of floating-point arithmetic
that might trip you up in your future research.

Rob.

On 10/23/2012 03:00 PM, Jason Kwok wrote:

    Hi All,

    I noticed some inconsistency between the results produced by the
    single-threaded and multi-threaded versions of fluidanimate.

    Here is what I did:

    I built fluidanimate & fluidcmp on my Ubuntu 10.04 box and ran
    fluidanimate on the simlarge dataset "in_300K.fluid" as follows:

        ./fluidanimate 1 5 in_300K.fluid t1f5out1.fluid
        ./fluidanimate 1 5 in_300K.fluid t1f5out2.fluid
        ./fluidcmp t1f5out1.fluid t1f5out2.fluid --ptol 0 --vtol 0 --bbox 0
          Position test:        PASS
          Velocity test:        PASS
          Bounding box test:    PASS

        ./fluidanimate 2 5 in_300K.fluid t2f5out1.fluid
        ./fluidanimate 2 5 in_300K.fluid t2f5out2.fluid
        ./fluidcmp t2f5out1.fluid t2f5out2.fluid --ptol 0 --vtol 0 --bbox 0
          Position test:        FAIL
          Velocity test:        FAIL
          Bounding box test:    PASS

        ./fluidcmp t1f5out1.fluid t2f5out1.fluid --ptol 0 --vtol 0 --bbox 0
          Position test:        FAIL
          Velocity test:        FAIL
          Bounding box test:    PASS

    As you can see, the tests pass on 1 thread but fail on 2 threads across
    different runs with the exact same number of frames, and the results for
    1 thread and 2 threads over 5 frames also differ. I would expect the
    calculation to be the same regardless of how many threads are used, as
    long as the number of frames is the same. I have also tried with just 1
    frame, and the results are the same as above. You probably noticed that
    I set the tolerance value to 0 for all tests, which means all compared
    data must match exactly to pass. In verbose mode, I can see that the
    failed tests are actually due to very small differences between the data
    values being compared.

    My questions are: is this behavior expected? Should I use a bigger
    tolerance value? And if so, why does the single-threaded version never
    produce different data values?

    Thanks,
    Jason