> On Jul 20, 2016, at 3:16 PM, Neil Horman <nhorman at tuxdriver.com> wrote:
> 
> On Wed, Jul 20, 2016 at 07:47:32PM +0000, Wiles, Keith wrote:
>> 
>>> On Jul 20, 2016, at 12:48 PM, Neil Horman <nhorman at redhat.com> wrote:
>>> 
>>> On Wed, Jul 20, 2016 at 07:40:49PM +0200, Thomas Monjalon wrote:
>>>> 2016-07-20 13:09, Neil Horman:
>>>>> From: Neil Horman <nhorman at redhat.com>
>>>>> 
>>>>> John Mcnamara and I were discussing enhancing the validate_abi script to
>>>>> build the dpdk tree faster with multiple jobs.  There's no reason not to
>>>>> do it, so this implements that requirement.  It uses a MAKE_JOBS variable
>>>>> that can be set by the user to limit the job count.  By default the job
>>>>> count is set to the number of online cpus.
>>>> 
>>>> Please could you use the variable name DPDK_MAKE_JOBS?
>>>> This name is already used in scripts/test-build.sh.
>>>> 
>>> Sure
>>> 
>>>>> +if [ -z "$MAKE_JOBS" ]
>>>>> +then
>>>>> + # This counts the number of cpus on the system
>>>>> + MAKE_JOBS=`lscpu -p=cpu | grep -v "#" | wc -l`
>>>>> +fi
>>>> 
>>>> Is lscpu common enough?
>>>> 
>>> I'm not sure how to answer that.  lscpu is part of the util-linux package,
>>> which is part of any base install.  There's a variant for BSD, but I'm not
>>> sure how common it is there.
>>> Neil
>>> 
>>>> Another acceptable default would be just "-j" without any number.
>>>> It would make the number of jobs unlimited.
>> 
>> I think the best is to just use -j, as it tries to use the correct number of
>> jobs based on the number of cores, right?
>> 
> -j with no argument (or -j 0) is sort of, maybe, what you want.  With either
> of those options, make will just issue jobs as fast as it processes
> dependencies.  Depending on how parallel the build is, that can lead to tons
> of waiting processes (i.e. more than your number of online cpus), which can
> actually hurt your build time.

I read the manual and looked at the code, which supports your statement. (I
think I had read some statement on Stack Overflow, and that is the last time I
believe anything on the internet :-) I have not seen a lot of difference in
compile times with -j on my system. Mostly I suspect it comes down to the
number of paths in the dependency tree, and the cores and memory on the system.

I have 72 lcores: 2 sockets, 18 cores per socket (with hyperthreading), Xeon 2.3GHz cores.
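
For reference, the cpu count the patch's lscpu default would pick up is just this online lcore count (same invocation as in the quoted hunk; on this box it should report 72):

$ lscpu -p=cpu | grep -v "#" | wc -l
72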

$ export RTE_TARGET=x86_64-native-linuxapp-gcc 

$ time make install T=${RTE_TARGET}
real    0m59.445s user  0m27.344s sys   0m7.040s

$ time make install T=${RTE_TARGET} -j
real    0m26.584s user  0m14.380s sys   0m5.120s

# Remove the x86_64-native-linuxapp-gcc directory between the following runs.

$ time make install T=${RTE_TARGET} -j 72
real    0m23.454s user  0m10.832s sys   0m4.664s

$ time make install T=${RTE_TARGET} -j 8
real    0m23.812s user  0m10.672s sys   0m4.276s

$ cd x86_64-native-linuxapp-gcc
$ make clean
$ time make
real    0m28.539s user  0m9.820s sys    0m3.620s

# Do a make clean between each build.

$ time make -j
real    0m7.217s user   0m6.532s sys    0m2.332s

$ time make -j 8
real    0m8.256s user   0m6.472s sys    0m2.456s

$ time make -j 72
real    0m6.866s user   0m6.184s sys    0m2.216s

Just the real time numbers in the following table.

processes    real time    depdirs build included
no -j        59.4s        Yes
-j 8         23.8s        Yes
-j 72        23.5s        Yes
-j           26.5s        Yes

no -j        28.5s        No
-j 8          8.2s        No
-j 72         6.8s        No
-j            7.2s        No

Looks like this is the depdirs build time on my system:
$ make clean -j
$ rm .depdirs
$ time make -j
real    0m23.734s user  0m11.228s sys   0m4.844s

About 16 seconds (23.7s here vs 7.2s with .depdirs in place), which is not a
lot of savings. Now the difference from no -j to -j is a lot, but the
difference between -j and -j <cpu_count> is not a huge saving. This leads me
back to thinking we are over-engineering the problem when '-j' would work just
as well here.
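
If plain -j were the default, a minimal sketch of the script's job handling could look like this (just an illustration using Thomas's DPDK_MAKE_JOBS name; the variable handling and the build line are my assumptions, not the actual validate_abi.sh code):

if [ -z "$DPDK_MAKE_JOBS" ] || [ "$DPDK_MAKE_JOBS" = "0" ]
then
	# No cap (or an explicit 0): let make spawn jobs freely
	JOBS_OPT="-j"
else
	# A specific cap was requested
	JOBS_OPT="-j $DPDK_MAKE_JOBS"
fi
# Left unquoted on purpose so "-j N" splits into the two words make expects
make install T=$RTE_TARGET $JOBS_OPT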

Even on my MacBook Pro i7 system the difference is not that much: 1m8s for -j
(without the depdirs build) in a VirtualBox VM with all 4 cores and 8G of RAM,
compared to 1m13s with the -j 4 option.

I just wonder if it makes a lot of sense to use cpuinfo in this case, given
that it turns out plain -j works under the 80% rule?
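
If we do keep a cpu-count default instead, something along these lines might also sidestep the lscpu-on-BSD question (a sketch only; nproc is from coreutils and sysctl -n hw.ncpu is the usual BSD/macOS route, so treat the fallbacks as assumptions rather than tested code):

if [ -z "$DPDK_MAKE_JOBS" ]
then
	if command -v nproc >/dev/null 2>&1; then
		# coreutils: number of processing units available
		DPDK_MAKE_JOBS=$(nproc)
	elif command -v lscpu >/dev/null 2>&1; then
		# util-linux: count online cpus, as in the original patch
		DPDK_MAKE_JOBS=$(lscpu -p=cpu | grep -v "#" | wc -l)
	else
		# BSD/macOS fallback; fall back to 1 if even that is missing
		DPDK_MAKE_JOBS=$(sysctl -n hw.ncpu 2>/dev/null || echo 1)
	fi
fi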

On some other project with a lot more files, like FreeBSD or a Linux distro,
yes, it would make a fair amount of real-time difference.

Keith

> 
> While it's fine in lots of cases, it's not always fine, and with this
> implementation you can still opt in to that behavior by setting
> DPDK_MAKE_JOBS=0
> 
> Neil
> 
>> 
