Discussion: [gentoo-user] long compiles
Alan McKinnon
2023-09-11 19:20:01 UTC
After my long time away from Gentoo, I thought perhaps some packages that
always took ages to compile would have improved. I needed to change to
~amd64 anyway (dumb n00b mistake leaving it at amd64). So that's what I did
and let emerge do its thing.

chromium has been building since 10:14, it's now 21:16 and still going so 9
hours at least on this machine to build a browser - almost as bad as
openoffice at its worst (regularly took 12 hours). Nodejs also took a
while, but I didn't record time.


What other packages have huge build times?
--
Alan McKinnon
alan dot mckinnon at gmail dot com
Siddhanth Rathod
2023-09-11 19:40:01 UTC
Chromium and qtwebengine have the longest build times that I have
encountered
Post by Alan McKinnon
After my long time away from Gentoo, I thought perhaps some packages that
always took ages to compile would have improved. I needed to change to
~amd64 anyway (dumb n00b mistake leaving it at amd64). So that's what I did
and let emerge do its thing.
chromium has been building since 10:14, it's now 21:16 and still going so
9 hours at least on this machine to build a browser - almost as bad as
openoffice at its worst (regularly took 12 hours). Nodejs also took a
while, but I didn't record time.
What other packages have huge build times?
--
Alan McKinnon
alan dot mckinnon at gmail dot com
Dale
2023-09-11 19:40:01 UTC
Post by Alan McKinnon
After my long time away from Gentoo, I thought perhaps some packages
that always took ages to compile would have improved. I needed to
change to ~amd64 anyway (dumb n00b mistake leaving it at amd64). So
that's what I did and let emerge do its thing.
chromium has been building since 10:14, it's now 21:16 and still going
so 9 hours at least on this machine to build a browser - almost as bad
as openoffice at its worst (regularly took 12 hours). Nodejs also
took a while, but I didn't record time.
What other packages have huge build times?
--
Alan McKinnon
alan dot mckinnon at gmail dot com
I have some software you don't likely use that takes a while but one
that is common is qtwebengine or something.  If it's not that one, it's
qtweb something.  It takes about 4 hours, sometimes 5 or so. 

I think the software takes longer to compile so that we will build new
rigs.  ROFL 

Dale

:-)  :-) 
Alan McKinnon
2023-09-11 19:50:01 UTC
qtwebengine! yes that one took forever also. It also said my 16G of RAM was
smaller than the 16G it needed. Weird.

Anyways I enabled a swapfile and left it to run overnight
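
A minimal sketch of that swapfile route, in case anyone wants to copy it -
the 16G size and /swapfile path are illustrative choices, not a recipe:

dd if=/dev/zero of=/swapfile bs=1M count=16384  # allocate 16G; swap files can't be sparse
chmod 600 /swapfile                             # swap must not be world-readable
mkswap /swapfile                                # write the swap signature
swapon /swapfile                                # enable now; add an fstab entry to persist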

Alan
Post by Alan McKinnon
After my long time away from Gentoo, I thought perhaps some packages that
always took ages to compile would have improved. I needed to change to
~amd64 anyway (dumb n00b mistake leaving it at amd64). So that's what I did
and let emerge do its thing.
chromium has been building since 10:14, it's now 21:16 and still going so
9 hours at least on this machine to build a browser - almost as bad as
openoffice at its worst (regularly took 12 hours). Nodejs also took a
while, but I didn't record time.
What other packages have huge build times?
--
Alan McKinnon
alan dot mckinnon at gmail dot com
I have some software you don't likely use that takes a while but one that
is common is qtwebengine or something. If it's not that one, it's qtweb
something. It takes about 4 hours, sometimes 5 or so.
I think the software takes longer to compile so that we will build new
rigs. ROFL
Dale
:-) :-)
--
Alan McKinnon
alan dot mckinnon at gmail dot com
Wol
2023-09-12 21:10:02 UTC
Post by Alan McKinnon
qtwebengine! yes that one took forever also. It also said my 16G of RAM
was smaller than the 16G it needed. Weird.
Anyways I enabled a swapfile and left it to run overnight
16GB physical ram <> 16GB usable ram for the compile ...

I concur with others that tmpfs is the way to go - I don't think my
system is set up that way just now, but I always have googols of swap
(twice max physical ram per disk) so I just declare a huge ramdisk for
compiling on.

My current system has four ram slots (a maximum of 64GB), currently two
filled with 16GB chips each; twice the maximum physical ram makes 128GB
swap partitions per disk (4 of them), equals 512GB swap ...

(Yes the people at openSUSE said I was being stupid with that much swap)

But declare a 128GB ramdisk and it'll spill over into swap as required,
while anything that fits in ram will compile very quickly.
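
For reference, the fstab line for such a ramdisk might look like this - the
mount point and size are illustrative, and tmpfs pages spill over into swap
under memory pressure:

tmpfs  /var/tmp/portage  tmpfs  size=128G,uid=portage,gid=portage,mode=775,noatime  0 0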

And yes, I also used to have my systems configured so they had one
shared portage area, matching make.conf, and were set up to "use binary
if it exists, else build one". I had the opposite problem though, my
nice fast system had a habit of crashing, so I used the old slow one to
build most things, because it was more reliable. Hey ho.

There's all sorts of tricks, some work for some people, others work for
others.

Cheers,
Wol
Peter Humphrey
2023-09-13 11:30:01 UTC
Post by Wol
There's all sorts of tricks, some work for some people, others work for
others.
Quite so. Here I have two swap partitions: 8GB priority 20 on NVME and 50GB
priority 10 on SSD. I've never noticed either of them being used, so I suppose
I could dispense with the smaller one. When I bought the machine I wanted it
to be as powerful as I could reasonably justify (to run BOINC projects, making
my own small contribution to the state of knowledge in astrophysics), so it has
64 GB RAM in its four slots.

Tmpfs earns its keep here. I don't set limits on it, preferring to let the
kernel manage it itself. One tmpfs is on /tmp, the other on /var/tmp/portage.
With all that RAM to play in, swap is rarely used, if ever.

A thought on compiling, which I hope some devs will read: I was tempted to
push the system hard at first, with load average and jobs as high as I thought
I could set them. I've come to believe, though, that job control by portage
and /usr/bin/make is weak at very high loads, because I would usually find that
a few packages had failed to compile; also that some complex programs were
sometimes unstable. Therefore I've had to throttle the system to be sure(r) of
correctness. Seems a waste. Thus:

$ grep '\-j' /etc/portage/make.conf
EMERGE_DEFAULT_OPTS="--jobs=4 --load-average=32 [...]"
MAKEOPTS="-j14"

That 14 will revert to its previous 12 if I find things going bump in the night
again, or perhaps go up a bit more if not.
--
Regards,
Peter.
Wols Lists
2023-09-13 12:00:03 UTC
Post by Peter Humphrey
A thought on compiling, which I hope some devs will read: I was tempted to
push the system hard at first, with load average and jobs as high as I thought
I could set them. I've come to believe, though, that job control by portage
and /usr/bin/make is weak at very high loads, because I would usually find that
a few packages had failed to compile; also that some complex programs were
sometimes unstable. Therefore I've had to throttle the system to be sure(r) of
Bear in mind a lot of systems are thermally limited and can't run at
full pelt anyway ...

You might find it's actually better (and more efficient) to run at lower
loading. Certainly following the kernel lists you get the impression
that the CPU regularly goes into thermal throttling under heavy load,
and also that using a couple of cores lightly is more efficient than
using one core heavily.

It's so difficult to know what's best ... (because too many people make
decisions based on their interests, and then when you come along their
decisions may conflict with each other and certainly conflict with you ...)

Cheers,
Wol
Frank Steinmetzger
2023-09-13 12:40:01 UTC
Post by Wols Lists
Bear in mind a lot of systems are thermally limited and can't run at full
pelt anyway ...
Usually those are space-constrained systems like mini PCs or laptops.
Typical desktops shouldn't be limited; even the stock CPU coolers should be
capable of dissipating all the heat, as long as the case has enough air flow.
Post by Wols Lists
You might find it's actually better (and more efficient) to run at lower
loading. Certainly following the kernel lists you get the impression that
the CPU regularly goes into thermal throttling under heavy load, and also
that using a couple of cores lightly is more efficient than using one core
heavily.
Very recent CPUs, at least, tend to boost to such high clock speeds that they
become quite inefficient. If you set a 105 W Ryzen 7700X to 65 W eco mode in
the BIOS (which means that the actual maximum intake goes down from 144 W to
84 W), you reduce consumption by a third, but only lose ~15 % in performance.

At very low figures (15 W), Ryzen 5000 and 7000 CPUs are almost as efficient
as Apple M1.
--
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

Einstein is dead. Newton is dead. I’m feeling sick, too.
Peter Humphrey
2023-09-13 12:50:01 UTC
Post by Wols Lists
Post by Peter Humphrey
A thought on compiling, which I hope some devs will read: I was tempted to
push the system hard at first, with load average and jobs as high as I
thought I could set them. I've come to believe, though, that job control
by portage and /usr/bin/make is weak at very high loads, because I would
usually find that a few packages had failed to compile; also that some
complex programs were sometimes unstable. Therefore I've had to throttle
the system to be sure(r) of correctness. Seems a waste.
Bear in mind a lot of systems are thermally limited and can't run at
full pelt anyway ...
No doubt, but apparently not this box: I run it 24x7 with all 24 CPU threads
fully loaded with floating-point calculations, which make a good deal more heat
than 'mere' compiling with (I assume) integer arithmetic. :)
Post by Wols Lists
You might find it's actually better (and more efficient) to run at lower
loading. Certainly following the kernel lists you get the impression
that the CPU regularly goes into thermal throttling under heavy load,
and also that using a couple of cores lightly is more efficient than
using one core heavily.
See above; besides, I have to limit the load anyway when compiling, for the
reasons I gave last time.
Post by Wols Lists
It's so difficult to know what's best ... (because too many people make
decisions based on their interests, and then when you come along their
decisions may conflict with each other and certainly conflict with you ...)
I agree with you there Wol, even without the parenthesis. :)
--
Regards,
Peter.
Michael
2023-09-13 15:20:01 UTC
Post by Peter Humphrey
Post by Wols Lists
Post by Peter Humphrey
A thought on compiling, which I hope some devs will read: I was tempted to
push the system hard at first, with load average and jobs as high as I
thought I could set them. I've come to believe, though, that job control
by portage and /usr/bin/make is weak at very high loads, because I would
usually find that a few packages had failed to compile; also that some
complex programs were sometimes unstable. Therefore I've had to throttle
the system to be sure(r) of correctness. Seems a waste.
Bear in mind a lot of systems are thermally limited and can't run at
full pelt anyway ...
No doubt, but apparently not this box: I run it 24x7 with all 24 CPU threads
fully loaded with floating-point calculations, which make a good deal more
heat than 'mere' compiling with (I assume) integer arithmetic. :)
I recall this being discussed in a previous thread, but if your CPU has 24
threads and you've set:

EMERGE_DEFAULT_OPTS="--jobs=4 --load-average=32 [...]"
MAKEOPTS="-j14"

You will be asking emerge to run up to 4 x 14 = 56 threads, which could
potentially eat up to 2G of RAM each, i.e. 112G of RAM. This will exhaust
your 64G of RAM, not taking into account whatever else the OS will be trying
to run at the time. The --load-average value is normally expected to be a
floating point number indicating a fraction of full load multiplied by the
number of cores; e.g. for 12 cores you could set it at 12 x 0.9 = 10.8 to
limit the load to 90%, so as to maintain some system responsiveness.
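
On a 24-thread CPU, that guideline would translate to something like this -
a sketch with illustrative numbers, not a recommendation:

EMERGE_DEFAULT_OPTS="--jobs=2 --load-average=21.6"  # fewer parallel emerges bounds worst-case RAM use
MAKEOPTS="-j24 -l21.6"                              # 24 threads x 0.9 = 21.6, i.e. cap load at ~90%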

Of course, not all emerges use make and you may never or rarely emerge 4 x
monster packages in parallel to need 2G of RAM for each thread at the same
time.

If only we had at our disposal some AI algorithm to calculate dynamically each
time we run emerge the optimal combo of parallel emerge jobs and number of
make tasks, so as to achieve the highest total time saving vs. energy spent!
Or just the highest total time saving. ;-)

I haven't performed any meaningful comparisons to determine where the greatest
gains are to be achieved: parallel emerges of many small packages, or a large
number of make jobs for big packages. The combination would change each time
according to the individual packages waiting for an update. In my use case, it
instinctively feels more beneficial to reduce the time I have to wait for huge
packages like qtwebengine to finish than to accelerate the updates of half a
dozen smaller packages. Therefore, as a rule I leave EMERGE_DEFAULT_OPTS
unset. I set MAKEOPTS jobs to the number of CPU threads +1 and the load
average at 95%, so I can continue using the PC without any noticeable latency.
On PCs where RAM is constrained I reduce the MAKEOPTS in
/etc/portage/package.env for any large packages which are bound to start
swapping and thrashing the disk.
Peter Humphrey
2023-09-14 09:50:01 UTC
Post by Michael
I recall this being discussed in a previous thread, but if your CPU has 24
EMERGE_DEFAULT_OPTS="--jobs=4 --load-average=32 [...]"
MAKEOPTS="-j14"
You will be asking emerge to run up to 4 x 14 = 56 threads, which could
potentially eat up to 2G of RAM each, i.e. 112G of RAM. This will exhaust
your 64G of RAM, not taking into account whatever else the OS will be trying
to run at the time.
Yes, I understand that, but I've spent a lot of time watching, and
instrumenting, and in practice the load doesn't exceed about 33.
Post by Michael
The --load-average=32 is normally expected to be a floating point number
That stipulation has only appeared recently. I have tried adding a '.0' to the
number, and it's made no noticeable difference. Besides, I could attempt
pedantry and declare that the set of all integers is a proper subset of all
real numbers. ;-)
Post by Michael
Of course, not all emerges use make and you may never or rarely emerge 4 x
monster packages in parallel to need 2G of RAM for each thread at the same
time.
I have actually had three or four big packages running together, but my
observation is that they don't pump the load up too far.
Post by Michael
If only we had at our disposal some AI algorithm to calculate dynamically
each time we run emerge the optimal combo of parallel emerge jobs and
number of make tasks, so as to achieve the highest total time saving Vs
energy spent! Or just the highest total time saving. ;-)
Yes, that's what we need, all right.
Post by Michael
I haven't performed any meaningful comparisons to determine where the
greatest gains are to be achieved. Parallel emerges of many small
packages, or a large number of make jobs for big packages. The combination
would change each time according to the individual packages waiting for an
update. In my use case, instinctively feels more beneficial reducing the
time I have to wait for huge packages like qtwebengine to finish, than
accelerating the updates of half a dozen smaller packages.
That is the difficulty. I do often rebuild a new system, not trusting the
existing one any longer, and a lot of time is spent fiddling with four tiny
packages at a time in the early and middle stages, then benefitting from the
limit of 4 once the desktop packages begin.
Post by Michael
Therefore, as a rule I leave EMERGE_DEFAULT_OPTS unset. I set MAKEOPTS jobs
to the number of CPU threads +1 and the load average at 95%, so I can
continue using the PC without any noticeable latency. On PCs where RAM is
constrained I reduce the MAKEOPTS in /etc/portage/package.env for any large
packages which are bound to start swapping and thrashing the disk.
Interesting. Do you mean 95% of the jobs figure? I'll do some more
experimenting.

Meanwhile, perhaps I'll run new builds in two stages: the first without any
desktop packages - I do have sets defined to enable that - and the second with
them. I can't do that with emerge -e though.
--
Regards,
Peter.
Michael
2023-09-14 12:30:01 UTC
Post by Peter Humphrey
Post by Michael
I recall this being discussed in a previous thread, but if your CPU has 24
EMERGE_DEFAULT_OPTS="--jobs=4 --load-average=32 [...]"
MAKEOPTS="-j14"
You will be asking emerge to run up to 4 x 14 = 56 threads, which could
potentially eat up to 2G of RAM each, i.e. 112G of RAM. This will exhaust
your 64G of RAM, not taking into account whatever else the OS will be
trying to run at the time.
Yes, I understand that, but I've spent a lot of time watching, and
instrumenting, and in practice the load doesn't exceed about 33.
Post by Michael
The --load-average=32 is normally expected to be a floating point number
That stipulation has only appeared recently. I have tried adding a '.0' to
the number, and it's made no noticeable difference. Besides, I could
attempt pedantry and declare that the set of all integers is a proper
subset of all real numbers. ;-)
Post by Michael
Of course, not all emerges use make and you may never or rarely emerge 4 x
monster packages in parallel to need 2G of RAM for each thread at the same
time.
I have actually had three or four big packages running together, but my
observation is that they don't pump the load up too far.
Post by Michael
If only we had at our disposal some AI algorithm to calculate dynamically
each time we run emerge the optimal combo of parallel emerge jobs and
number of make tasks, so as to achieve the highest total time saving Vs
energy spent! Or just the highest total time saving. ;-)
Yes, that's what we need, all right.
Post by Michael
I haven't performed any meaningful comparisons to determine where the
greatest gains are to be achieved. Parallel emerges of many small
packages, or a large number of make jobs for big packages. The combination
would change each time according to the individual packages waiting for an
update. In my use case, it instinctively feels more beneficial to reduce the
time I have to wait for huge packages like qtwebengine to finish than to
accelerate the updates of half a dozen smaller packages.
That is the difficulty. I do often rebuild a new system, not trusting the
existing one any longer, and a lot of time is spent fiddling with four tiny
packages at a time in the early and middle stages, then benefitting from the
limit of 4 once the desktop packages begin.
I very rarely reinstall. So rarely, I can't remember the last time I did it.
Post by Peter Humphrey
Post by Michael
Therefore, as a rule I leave EMERGE_DEFAULT_OPTS unset. I set MAKEOPTS
jobs to the number of CPU threads +1 and the load average at 95%, so I
can continue using the PC without any noticeable latency. On PCs where
RAM is constrained I reduce the MAKEOPTS in /etc/portage/package.env for
any large packages which are bound to start swapping and thrashing the
disk.
Interesting. Do you mean 95% of the jobs figure? I'll do some more
experimenting.
Yes. Example:

A CPU with 4 cores and 8 threads is set like this:

MAKEOPTS="-j8 -l7.2"

This allows up to 8 make jobs at a time, with the average load kept at 90%
to leave some breathing space for day-to-day desktop work and to minimise
frequent swapping:

8 x 0.9 = 7.2

Looking at top shows 7 to 8 compiling jobs at a time. The PC in question has
16G of RAM. I've observed small packages rarely if ever reach 2G of RAM per
job. So, I set the above MAKEOPTS in make.conf, but bypass it for large
packages in /etc/portage/env/ by setting, e.g.:

/etc/portage/env/j4.conf

with MAKEOPTS="-j4" in it, and specify this for each large package in
/etc/portage/package.env, e.g.:

dev-qt/qtwebengine j4.conf
www-client/firefox j4.conf
dev-lang/rust j4.conf
etc.
Post by Peter Humphrey
Meanwhile, perhaps I'll run new builds in two stages: the first without any
desktop packages - I do have sets defined to enable that - and the second
with them. I can't do that with emerge -e though.
Unlike MAKEOPTS, the EMERGE_DEFAULT_OPTS variable cannot be set in
/etc/portage/package.env. Therefore, I leave it unset in make.conf and
specify the options on the CLI when I'm about to update a lot of smaller
packages and remember to do it. ;-)
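
Such an invocation might look like this (a sketch; the jobs/load figures are
illustrative):

emerge --jobs=4 --load-average=7.2 --update --deep --newuse @world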

Neil Bothwick
2023-09-11 20:10:02 UTC
Post by Alan McKinnon
chromium has been building since 10:14, it's now 21:16 and still going
so 9 hours at least on this machine to build a browser - almost as bad
as openoffice at its worst (regularly took 12 hours). Nodejs also took
a while, but I didn't record time.
Chromium is definitely the worst, and strangely variable. The last few
compiles have taken between 6 and 14 hours. Since it takes longer than
everything else to build, it is usually compiling on its own, so parallel
emerges aren't a factor.

Qtwebengine is also bad, not surprising as it is a cut-down Chromium.
Emerging world with --exclude, then timing the build to coincide with sleep,
helps, although I haven't quite reached the age where I need 14 hours of
sleep a day.
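
In practice that can be as simple as (atom illustrative):

emerge --update --deep --newuse @world --exclude www-client/chromium  # everything else now
emerge --oneshot www-client/chromium                                  # the monster, overnight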
--
Neil Bothwick

If it isn't broken, I can fix it.
Alan McKinnon
2023-09-11 20:30:01 UTC
Post by Neil Bothwick
Post by Alan McKinnon
chromium has been building since 10:14, it's now 21:16 and still going
so 9 hours at least on this machine to build a browser - almost as bad
as openoffice at its worst (regularly took 12 hours). Nodejs also took
a while, but I didn't record time.
Chromium is definitely the worst, and strangely variable. The last few
compiles have taken between 6 and 14 hours. Since it takes longer than
everything else to build, it is usually compiling on its own, so parallel
emerges aren't a factor.
Qtwebengine is also bad, not surprising as it is a cut down Chromium.
Emerging world with --exclude then timing build to coincide with sleep
helps, although I haven't quite reached the age where I need 14 hours of
sleep a day.
--
Neil Bothwick
If it isn't broken, I can fix it.
Yup, that jibes with what I see. Oh well, just means that the need for
overnight compiles did not go away haha

Thanks to everyone else that replied too - everyone said much the same
thing, so I figured one reply to rule them all was the best way


Alan
--
Alan McKinnon
alan dot mckinnon at gmail dot com
Michael
2023-09-11 21:30:02 UTC
Post by Alan McKinnon
Post by Neil Bothwick
Post by Alan McKinnon
chromium has been building since 10:14, it's now 21:16 and still going
so 9 hours at least on this machine to build a browser - almost as bad
as openoffice at its worst (regularly took 12 hours). Nodejs also took
a while, but I didn't record time.
Chromium is definitely the worst, and strangely variable. The last few
compiles have taken between 6 and 14 hours. Since it takes longer than
everything else to build, it is usually compiling on its own, so parallel
emerges aren't a factor.
Qtwebengine is also bad, not surprising as it is a cut down Chromium.
Emerging world with --exclude then timing build to coincide with sleep
helps, although I haven't quite reached the age where I need 14 hours of
sleep a day.
--
Neil Bothwick
If it isn't broken, I can fix it.
Yup, that jibes with what I see. Oh well, just means that the need for
overnight compiles did not go away haha
Thanks to everyone else that replied too - everyone said much the same
thing, so I figured one reply to rule them all was the best way
Alan
As the old saying goes, "there ain't no substitute for cubic inches". Moar
cores and moar RAM is almost always the solution, but with laptops and older
PCs in general overnight builds soon become inevitable. Selectively reducing
jobs and adding swap, or, for packages like rust, placing /var/tmp/portage on
disk, becomes necessary.
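
Moving the build directory onto real disk per package can use the same
/etc/portage/env mechanism - a sketch, with file and directory names purely
illustrative:

# /etc/portage/env/on-disk.conf (the target directory must exist, on disk):
PORTAGE_TMPDIR="/var/tmp/on-disk"    # portage builds under $PORTAGE_TMPDIR/portage/

# /etc/portage/package.env:
dev-lang/rust on-disk.conf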

A solution I use for older/smaller laptops is to build binaries on a more
powerful PC and emerge these in turn on the weaker PCs.
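
A sketch of that arrangement (hostname illustrative; on current portage the
client side can also be configured in /etc/portage/binrepos.conf):

# make.conf on the strong build host:
FEATURES="buildpkg"        # keep a binary package of everything emerged

# make.conf on the weaker PC (CHOST/CFLAGS/USE should match the host):
FEATURES="getbinpkg"
PORTAGE_BINHOST="https://buildhost.example/packages"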

There's also the option of using bin alternatives where available, e.g.
google-chrome, firefox-bin, libreoffice-bin.
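
e.g. (atoms illustrative - check the tree):

emerge --ask www-client/google-chrome www-client/firefox-bin app-office/libreoffice-bin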

Finally, there is a small scale project to provide systemd based binaries as
an alternative to building your own:

https://wiki.gentoo.org/wiki/Experimental_binary_package_host
Alan McKinnon
2023-09-11 21:50:01 UTC
Post by Michael
[...]
As the old saying goes, "there ain't no substitute for cubic inches". Moar
cores and moar RAM is almost always the solution, but with laptops and older
PCs in general overnight builds soon become inevitable. Selectively reducing
jobs and adding swap, or, for packages like rust, placing /var/tmp/portage on
disk, becomes necessary.
A solution I use for older/smaller laptops is to build binaries on a more
powerful PC and emerge these in turn on the weaker PCs.
There's also the option of using bin alternatives where available, e.g.
google-chrome, firefox-bin, libreoffice-bin.
Finally, there is a small scale project to provide systemd based binaries as
an alternative to building your own:
https://wiki.gentoo.org/wiki/Experimental_binary_package_host
As it turns out this laptop is the most powerful machine I have available,
my large collection of previous work laptops are getting older and older.

Although, I *could* create a ginormous build host on one of the
virtualization clusters at work hahaha :-)

That link looks interesting, I'll check it out, thanks!
--
Alan McKinnon
alan dot mckinnon at gmail dot com
Alan McKinnon
2023-09-12 09:00:01 UTC
You may also want to take a look at "distcc", with which you can set up
distributed compiling across several machines:
https://wiki.gentoo.org/wiki/Distcc#With_ccache
-Ramon
Hi Ramon,

distcc is way more than I need. I'm not complaining about long compile
times and wanting a solution; I was more curious about which packages take
a long time these days, compared to when I was last here 5/6 years ago

Alan
Post by Alan McKinnon
[...]
As it turns out this laptop is the most powerful machine I have
available, my large collection of previous work laptops are getting
older and older.
Although, I *could* create a ginormous build host on one of the
virtualization clusters at work hahaha :-)
That link looks interesting, I'll check it out, thanks!
--
GPG public key: 5983 98DA 5F4D A464 38FD CF87 155B E264 13E6 99BF
--
Alan McKinnon
alan dot mckinnon at gmail dot com
Peter Humphrey
2023-09-12 10:20:01 UTC
Post by Michael
There's also the option of using bin alternatives where available, e.g.
google-chrome, firefox-bin, libreoffice-bin.
...and rust-bin, which is now the default in at least some desktop profiles.
--
Regards,
Peter.
Jacques Montier
2023-09-12 11:00:01 UTC
Post by Peter Humphrey
Post by Michael
There's also the option of using bin alternatives where available, e.g.
google-chrome, firefox-bin, libreoffice-bin.
...and rust-bin, which is now the default in at least some desktop profiles.
--
Regards,
Peter.
Hello,
I could get rid of webkit-gtk with pleasure, and I use rust-bin.
I tried libreoffice-bin but encountered some issues with PDF import, so I
went back to libreoffice.

Cheers,

Jacques
Nikos Chantziaras
2023-09-12 09:30:01 UTC
Post by Alan McKinnon
Yup, that jibes with what I see. Oh well, just means that the need for
overnight compiles did not go away haha
Ever since I added the following to my make.conf:

PORTAGE_NICENESS=19
PORTAGE_IONICE_COMMAND="sh -c \"schedtool -D \${PID} && ionice -c 3 -p \${PID}\""

I never needed overnight compiles again. Make sure sys-process/schedtool
is installed. As long as you have plenty of RAM so the system doesn't
swap, you can use the system normally even while building monster
packages. I can even play video games without issue while portage is
emerging now.
Nikos Chantziaras
2023-09-12 09:30:01 UTC
Post by Alan McKinnon
chromium has been building since 10:14, it's now 21:16 and still going
so 9 hours at least on this machine to build a browser - almost as bad
as openoffice at its worst (regularly took 12 hours). Nodejs also took
a while, but I didn't record time.
What's your CPU and how much RAM? Even on the older system I had (a
4-core i5 2500K), libreoffice took like 2 hours or so to build.
Post by Alan McKinnon
What other packages have huge build times?
IIRC, dev-qt/qtwebengine is one of the heaviest when it comes to build
times.

Anyway, a nice way to cut down on build times is to build on tmpfs. To
do that with heavy packages like these, however, I had to upgrade to 32GB
RAM. There was a large price drop in the memory market a couple of months
ago, so I snatched a 32GB DDR4 3600 kit (2x16GB) for like 80€. So now
with plenty of RAM, I configured a 14GB tmpfs in /var/tmp/portage. I
never hit swap when emerging.
Alan McKinnon
2023-09-12 19:10:01 UTC
Post by Nikos Chantziaras
Post by Alan McKinnon
chromium has been building since 10:14, it's now 21:16 and still going
so 9 hours at least on this machine to build a browser - almost as bad
as openoffice at its worst (regularly took 12 hours). Nodejs also took
a while, but I didn't record time.
What's your CPU and how much RAM? Even on the older system I had (a
4-core i5 2500K), libreoffice took like 2 hours or so to build.
Post by Alan McKinnon
What other packages have huge build times?
IIRC, dev-qt/qtwebengine is one of the heaviest when it comes to build
times.
Anyway, a nice way to cut down on build times is to build on tmpfs. To
do that with heavy packages like these, however, I had to upgrade to 32GB
RAM. There was a large price drop in the memory market a couple months
ago, so I snatched a 32GB DDR4 3600 kit (2x16GB) for like 80€. So now
with plenty of RAM, I configured a 14GB tmpfs in /var/tmp/portage. I
never hit swap when emerging.
That's not an option for me, this is a corporate laptop with 16G RAM and a
case I may not open :-)
I'm not interested in a remote build host or distcc either

But anyways, this is not really about how to deal with long compiles; I was
asking what current packages take a long time after a 5 year absence.

The answer is what it always was - browsers and libreoffice. I do recall
icu being a bit of a beast back then


Alan
--
Alan McKinnon
alan dot mckinnon at gmail dot com
Nuno Silva
2023-09-12 19:30:01 UTC
On 2023-09-12, Alan McKinnon wrote:
[...]
Post by Alan McKinnon
But anyways, this is not really about how to deal with long compiles, I was
asking what current packages take a long time after a 5 year absence.
The answer is what it always was - browsers and libreoffice. I do recall
icu being a bit of a beast back then
I remember insn-attrtab.c making the GCC compilation swap a lot :-)

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=29442
--
Nuno Silva
Neil Bothwick
2023-09-12 22:20:01 UTC
Post by Alan McKinnon
But anyways, this is not really about how to deal with long compiles, I
was asking what current packages take a long time after a 5 year
absence.
The answer is what it always was - browsers and libreoffice. I do recall
icu being a bit of a beast back then
LibreOffice doesn't seem too bad these days. icu and boost are a pain
because of the number of other packages they rebuild.
--
Neil Bothwick

Light travels faster than sound. This is why some people appear bright
until you hear them speak.
Kristian Poul Herkild
2023-09-13 21:20:02 UTC
Hi.

Nothing compares to Chromium (browser) in terms of compilation times. On
my system with 12 core threads it takes about 8 hours to compile - which
is 4 times longer than 10 years ago with 2 core threads ;)

Libreoffice takes a few hours, but less than half of Chromium's time.
Nothing gets close to Chromium. But otherwise webkitgtk and qtwebengine are
the big ones - and still only about a quarter of Chromium.

Kristian Poul Herkild
Post by Alan McKinnon
After my long time away from Gentoo, I thought perhaps some packages
that always took ages to compile would have improved. I needed to change
to ~amd64 anyway (dumb n00b mistake leaving it at amd64). So that's what
I did and let emerge do its thing.
chromium has been building since 10:14, it's now 21:16 and still going
so 9 hours at least on this machine to build a browser - almost as bad
as openoffice at its worst (regularly took 12 hours). Nodejs also took
a while, but I didn't record time.
What other packages have huge build times?
--
Alan McKinnon
alan dot mckinnon at gmail dot com
Grant Edwards
2023-09-13 21:30:02 UTC
Post by Kristian Poul Herkild
Nothing compares to Chromium (browser) in terms of compilation times. On
my system with 12 core threads it takes about 8 hours to compile - which
is 4 times longer than 10 years ago with 2 core threads ;)
About a year ago I finally gave up building Chromium and switched to
www-client/google-chrome. It got to the point where it sometimes took
longer to build Chromium than it did for the next version to come out.

--
Grant
Neil Bothwick
2023-09-13 21:40:01 UTC
Post by Grant Edwards
About a year ago I finally gave up building Chromium and switched to
www-client/google-chrome. It got to the point where it sometimes took
longer to build Chromium than it did for the next version to come out.
That's why I run stable Chromium on an otherwise testing system.
--
Neil Bothwick

We all know what comes after 'X', said Tom, wisely.