************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

/work/00131/fuentes/exec/dddas_barcelona-cxx on a barcelona named i130-305.ranger.tacc.utexas.edu with 48 processors, by fuentes Wed Nov 24 19:45:03 2010
Using Petsc Release Version 3.0.0, Patch 11, Mon Feb  1 11:01:51 CST 2010

                         Max       Max/Min        Avg      Total 
Time (sec):           1.824e+03      1.00000   1.824e+03
Objects:              4.370e+04      1.00000   4.370e+04
Flops:                4.283e+09      2.23702   3.198e+09  1.535e+11
Flops/sec:            2.348e+06      2.23702   1.753e+06  8.414e+07
MPI Messages:         5.929e+05      3.09053   3.730e+05  1.790e+07
MPI Message Lengths:  1.598e+09      3.58034   1.762e+03  3.154e+10
MPI Reductions:       1.020e+05      1.00000
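
As a quick consistency check on the table above: the Total column is the Avg scaled by the 48 processors
(e.g. 3.198e+09 flops/proc * 48 = 1.535e+11 total flops), and the total flop rate is total flops over wall
time, 1.535e+11 / 1.824e+03 s ~ 8.4e+07 flops/sec, matching the Flops/sec row.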

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops
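
For example, under this convention a single VecAXPY() on real vectors of length N is logged as 2N flops
(one multiply and one add per entry). A minimal sketch against the 3.0-era C API used in this run (error
checking via CHKERRQ elided for brevity):

    #include <petscvec.h>
    int main(int argc, char **argv)
    {
      Vec      x, y;
      PetscInt N = 1000;

      PetscInitialize(&argc, &argv, NULL, NULL);
      VecCreateSeq(PETSC_COMM_SELF, N, &x);
      VecDuplicate(x, &y);
      VecSet(x, 1.0);
      VecSet(y, 0.0);
      VecAXPY(y, 2.0, x);  /* y <- 2*x + y: logged as 2N = 2000 flops */
      VecDestroy(x);       /* 3.0-era call; later releases take &x   */
      VecDestroy(y);
      PetscFinalize();     /* flop totals appear under -log_summary  */
      return 0;
    }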

Summary of Stages:       ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                            Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:          Main Stage: 6.5064e+01   3.6%  1.3004e+10   8.5%  2.481e+06  13.9%  6.597e+02       37.5%  2.530e+03   2.5% 
 1:      Initialization: 2.1302e+01   1.2%  0.0000e+00   0.0%  6.862e+03   0.0%  1.794e+00        0.1%  1.900e+01   0.0% 
 3: function evaluation: 1.3437e+03  73.7%  1.0293e+11  67.1%  1.126e+07  62.9%  5.529e+02       31.4%  6.378e+04  62.5% 
 4: gradient evaluation: 3.9406e+02  21.6%  3.7551e+10  24.5%  4.157e+06  23.2%  5.471e+02       31.1%  2.556e+04  25.1% 
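
(The %Total columns are each stage's share of the global totals above; e.g. the function evaluation stage
accounts for 1.3437e+03 s / 1.824e+03 s ~ 73.7% of the time and 1.0293e+11 / 1.535e+11 ~ 67.1% of the
flops. Stages 2 and 5-7 show no recorded activity, appearing only as "Unknown" in the event listing below,
so they are omitted from this summary.)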

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop() (see the sketch after this list).
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 1.0e-6 * (sum of flops over all processors)/(max time over all processors)
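
The named stages above are user-defined brackets around code regions: every PETSc flop, message, and
reduction executed inside the bracket is attributed to that stage. A minimal sketch (3.0-era C API; later
releases reverse the PetscLogStageRegister() arguments so the name comes first):

    #include <petsc.h>
    int main(int argc, char **argv)
    {
      PetscLogStage stage;

      PetscInitialize(&argc, &argv, NULL, NULL);
      PetscLogStageRegister(&stage, "function evaluation");
      PetscLogStagePush(stage);
      /* ... work to be attributed to the "function evaluation" stage ... */
      PetscLogStagePop();
      PetscFinalize();  /* -log_summary prints the per-stage report here */
      return 0;
    }
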
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage


--- Event Stage 1: Initialization


--- Event Stage 2: Unknown


--- Event Stage 3: function evaluation


--- Event Stage 4: gradient evaluation


--- Event Stage 5: Unknown


--- Event Stage 6: Unknown


--- Event Stage 7: Unknown

------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions   Memory  Descendants' Mem.

--- Event Stage 0: Main Stage

          TAO Solver     1              2       3864     0
                 Vec    26             28      36928     0
         Vec Scatter   363            364     285824     0
           Index Set   363            363   82144872     0
              Matrix     3              0          0     0
     TAO Application     1              2      25512     0
       Krylov Solver     1              1       1048     0
      Preconditioner     1              1        640     0
              Viewer     1              0          0     0

--- Event Stage 1: Initialization

                 Vec  7189           3586   49228608     0
         Vec Scatter     2              1        788     0
           Index Set     3              3     229204     0
              Matrix     9              0          0     0

--- Event Stage 2: Unknown


--- Event Stage 3: function evaluation

                 Vec 22136          22135  286023992     0
         Vec Scatter  2857           2856    2250528     0
           Index Set  5000           5000   39927260     0
              Matrix   714            714  315956424     0
       Krylov Solver  1428           1428   13480320     0
      Preconditioner  1428           1428    1005312     0
                SNES   714            714     736848     0

--- Event Stage 4: gradient evaluation

                 Vec    20              1       1304     0
         Vec Scatter   715            714     562632     0
           Index Set   719            716    6383540     0
              Matrix     1              0          0     0
       Krylov Solver     2              0          0     0
      Preconditioner     2              0          0     0

--- Event Stage 5: Unknown


--- Event Stage 6: Unknown


--- Event Stage 7: Unknown

========================================================================================================================
Average time to get PetscTime(): 5.96046e-07
Average time for MPI_Barrier(): 1.64032e-05
Average time for zero size MPI_Send(): 4.58459e-06
#PETSc Option Table entries:
-info
-info_exclude null,vec,mat,pc,ksp,snes
-ksp_rtol 1.e-9
-pc_type bjacobi
-snes_converged_reason
-snes_ls basic
-snes_monitor
-snes_rtol 1.e-6
-tao_max_funcs 0
#End of PETSc Option Table entries
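These options would normally be supplied on the command line (or in a .petscrc file) when the job is
launched; a hypothetical invocation using the executable and processor count from the header above (the
actual batch launcher command is not recorded in this log):

    mpirun -np 48 ./dddas_barcelona-cxx -info -info_exclude null,vec,mat,pc,ksp,snes \
        -ksp_rtol 1.e-9 -pc_type bjacobi -snes_converged_reason -snes_ls basic \
        -snes_monitor -snes_rtol 1.e-6 -tao_max_funcs 0
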
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8
Configure run at: Tue Mar 23 14:13:09 2010
Configure options: --with-x=0 -with-pic --with-blas-lib="[/opt/apps/intel/mkl/10.0.1.014/lib/em64t/libmkl_em64t.a,libmkl.a,libguide.a,libpthread.a]" --with-lapack-lib="[/opt/apps/intel/mkl/10.0.1.014/lib/em64t/libmkl_em64t.a,libmkl.a,libguide.a,libpthread.a]" --with-external-packages-dir=/opt/apps/intel10_1/mvapich1_1_0_1/petsc/3.0.0/externalpackages --with-mpi-compilers=1 --with-mpi-dir=/opt/apps/intel10_1/mvapich/1.0.1 --with-clanguage=C++ --with-scalar-type=real --with-dynamic=0 --with-shared=0 --with-spai=1 --download-spai=1 --with-parmetis=1 --download-parmetis=yes --with-hdf5=1 --with-hdf5-dir=/opt/apps/intel10_1/mvapich1_1_0_1/phdf5/1.8.2 --with-hypre=1 --download-hypre=1 --with-plapack=1 --download-plapack=1 --with-ml=1 --download-ml=yes --with-mumps=1 --download-mumps=/share/home/0000/build/rpms/SOURCES/MUMPS_4.9.tar.gz --with-scalapack=1 --download-scalapack=yes --with-blacs=1 --download-blacs=yes --with-spooles=1 --download-spooles=1 --with-superlu=1 --download-superlu=yes --with-superlu_dist=1 --download-superlu_dist=yes --with-parmetis=1 --download-parmetis=yes --with-debugging=no --COPTFLAGS=-xW --CXXOPTFLAGS=-xW --FOPTFLAGS=-xW
-----------------------------------------
Libraries compiled on Tue Mar 23 14:13:09 CDT 2010 on build.ranger.tacc.utexas.edu 
Machine characteristics: Linux build.ranger.tacc.utexas.edu 2.6.18.8.TACC.lustre.perfctr #9 SMP Mon Oct 19 22:06:10 CDT 2009 x86_64 x86_64 x86_64 GNU/Linux 
Using PETSc directory: /opt/apps/intel10_1/mvapich1_1_0_1/petsc/3.0.0
Using PETSc arch: barcelona-cxx
-----------------------------------------
Using C compiler: /opt/apps/intel10_1/mvapich/1.0.1/bin/mpicxx -xW   -fPIC  -xW 
Using Fortran compiler: /opt/apps/intel10_1/mvapich/1.0.1/bin/mpif90 -fPIC -xW    
-----------------------------------------
Using include paths: -I/opt/apps/intel10_1/mvapich1_1_0_1/petsc/3.0.0/barcelona-cxx/include -I/opt/apps/intel10_1/mvapich1_1_0_1/petsc/3.0.0/include -I/opt/apps/intel10_1/mvapich1_1_0_1/petsc/3.0.0/barcelona-cxx/include -I/opt/apps/intel10_1/mvapich1_1_0_1/phdf5/1.8.2/include -I/opt/apps/intel10_1/mvapich/1.0.1/include   
------------------------------------------
Using C linker: /opt/apps/intel10_1/mvapich/1.0.1/bin/mpicxx -xW 
Using Fortran linker: /opt/apps/intel10_1/mvapich/1.0.1/bin/mpif90 -fPIC -xW  
Using libraries: -Wl,-rpath,/opt/apps/intel10_1/mvapich1_1_0_1/petsc/3.0.0/barcelona-cxx/lib -L/opt/apps/intel10_1/mvapich1_1_0_1/petsc/3.0.0/barcelona-cxx/lib -lpetscts -lpetscsnes -lpetscksp -lpetscdm -lpetscmat -lpetscvec -lpetsc        -Wl,-rpath,/opt/apps/intel10_1/mvapich1_1_0_1/petsc/3.0.0/barcelona-cxx/lib -L/opt/apps/intel10_1/mvapich1_1_0_1/petsc/3.0.0/barcelona-cxx/lib -lsuperlu_dist_2.3 -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lparmetis -lmetis -lscalapack -lblacs -lHYPRE -lspai -lspooles -lsuperlu_3.1 -lPLAPACK -lml -Wl,-rpath,/opt/apps/intel10_1/mvapich1_1_0_1/phdf5/1.8.2/lib -L/opt/apps/intel10_1/mvapich1_1_0_1/phdf5/1.8.2/lib -lhdf5 -lz -Wl,-rpath,/opt/apps/intel/mkl/10.0.1.014/lib/em64t -L/opt/apps/intel/mkl/10.0.1.014/lib/em64t -lmkl_em64t -lmkl -lguide -lpthread -lmkl_em64t -lmkl -lguide -lpthread -lPEPCF90 -Wl,-rpath,/opt/apps/intel10_1/mvapich/1.0.1/lib/shared -L/opt/apps/intel10_1/mvapich/1.0.1/lib/shared -Wl,-rpath,/opt/apps/intel10_1/mvapich/1.0.1/lib -L/opt/apps/intel10_1/mvapich/1.0.1/lib -ldl -lmpich -Wl,-rpath,/opt/ofed/lib64 -L/opt/ofed/lib64 -libverbs -libumad -lpthread -lrt -Wl,-rpath,/opt/apps/intel/10.1/cc/lib -L/opt/apps/intel/10.1/cc/lib -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6 -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6 -limf -lsvml -lipgo -lintlc -lgcc_s -lirc_s -lmpichf90nc -lmpichfarg -Wl,-rpath,/opt/apps/intel/10.1/fc/lib -L/opt/apps/intel/10.1/fc/lib -lifport -lifcore -lm -lm -lpmpich++ -lstdc++ -lpmpich++ -lstdc++ -ldl -lmpich -libverbs -libumad -lpthread -lrt -limf -lsvml -lipgo -lintlc -lgcc_s -lirc_s -ldl 
------------------------------------------