Discussion:
[gentoo-user] Glibc and binpackages
Andreas Fink
2023-01-12 19:20:01 UTC
On Thu, 12 Jan 2023 17:53:55 +0000
I'm not sure if I'm doing something horribly wrong, or missing something blindingly obvious, but I've just had to boot a rescue shell yet again, so I'm going to ask.
To save time and effort, I have my big, powerful machine create binpackages for everything when it updates, and then let all my smaller machines pull from that. It works pretty well for the most part.
But when there's a glibc update I have to specifically install it first. If I don't, then about half the time emerge decides that, because it doesn't have to worry about build dependencies for binpackages, it can obviously update glibc last... Then it updates something that's needed for handling updates, and that's it: stuck.

If I'm lucky it's a compile tool, not one for handling binpackages, and I can just tell it to do the glibc package next. Quite often though it's something like tar that's fundamental to installing anything at all, and I have to ferry the new glibc over on a USB stick and unpack it with busybox... Occasionally it's something critical for the whole system, and then I have to boot to a rescue shell of some kind.
Think it's worth a feature request to have emerge prioritize glibc as high up in the list as it can when installing things? Or am I the only one who runs into this?
LMP
I was running into this issue too, but at some point I started
updating more regularly and the problem disappeared.
I fully agree that glibc should be updated earlier rather than later,
especially when compression libraries were built against a newer glibc.

/Andreas
John Blinka
2023-01-14 16:50:01 UTC
I’m not sure if I’m doing something horribly wrong, or missing something
blindingly obvious, but I’ve just had to boot a rescue shell yet again, so
I’m going to ask.
To save time and effort, I have my big, powerful machine create
binpackages for everything when it updates, and then let all my smaller
machines pull from that. It works pretty well for the most part.
I do something quite similar, but have never had a glibc problem. Maybe the
problem lies in differences between the specific details of our two
approaches.

I have 3 boxes with different hardware but identical portage setup,
identical world file, identical o.s., etc, even identical CFLAGS, CPPFLAGS
and CPU_FLAGS_X86 despite different processors. Like you, I build on my
fastest box (but offload work via distcc), and save binpkgs. After a world
update (emerge -DuNv --changed-deps @world), I rsync all repositories and
binpkgs from the fast box to the others. An emerge -DuNv --changed-deps
--usepkgonly @world on the other boxes completes the update. I do this
anywhere from daily to (rarely) weekly. Portage determines when to update
glibc relative to other packages. There hasn’t been a problem in years with
glibc.

I believe there are more sophisticated ways to supply updated portage trees
and binary packages across a local network. I think there are others on
the list using these more sophisticated techniques successfully. Just a
plain rsync satisfies my needs.
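
For reference, a plain-rsync pull like the one described above can be wrapped in a small script on each client box. This is only a sketch: the hostname "fastbox" is a placeholder, the paths are the current Portage defaults (older installs may use /usr/portage and /usr/portage/packages instead), and the flag set mirrors the commands quoted above.

```shell
# Sketch of a cron-able sync-and-update script for a client box.
cat > /usr/local/bin/pull-from-fastbox <<'EOF'
#!/bin/sh
set -e
# Mirror the ebuild repository and the binpkg cache from the build box.
rsync -a --delete fastbox:/var/db/repos/gentoo/ /var/db/repos/gentoo/
rsync -a --delete fastbox:/var/cache/binpkgs/ /var/cache/binpkgs/
# Finish with the usual binpkg-only world update.
emerge -DuNv --changed-deps --usepkgonly @world
EOF
chmod +x /usr/local/bin/pull-from-fastbox
```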

It’s not clear to me whether you have the problem on your big powerful
machine or on your other machines. If it’s the other machines, that
suggests that portage knows the proper build sequence on the big machine
and somehow doesn’t on the lesser machines. Why? What’s different?

Perhaps there’s something in my update frequency or maintaining an
identical setup on all my machines that avoids the problem you’re having?

If installing glibc first works, then maybe put a wrapper around your
emerge? Something that installs glibc first if there’s a new binpkg then
goes on to the remaining updates.
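
A minimal sketch of such a wrapper, assuming the binpkgs are already synced over; the script path, name, and exact flag set are illustrative, not a recommendation:

```shell
# Write a small wrapper that merges glibc from its binpkg before
# anything else, then runs the normal world update.
cat > /usr/local/bin/update-world <<'EOF'
#!/bin/sh
set -e
# --oneshot keeps glibc out of the world file; --usepkgonly insists on
# the prebuilt package. This is a no-op when glibc is already current.
emerge --oneshot --update --usepkgonly sys-libs/glibc
# With the new glibc in place, tools like tar and xz that were built
# against it can run, so the remaining updates are safe.
emerge -DuNv --changed-deps --usepkgonly @world
EOF
chmod +x /usr/local/bin/update-world
```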

Just offered in case there’s a useful hint from my experience - not arguing
that mine is the one true way (tm).

HTH,

John Blinka
Andreas Fink
2023-01-17 17:10:01 UTC
On Fri, 13 Jan 2023 11:17:29 -0500
Post by John Blinka
I’m not sure if I’m doing something horribly wrong, or missing something
blindingly obvious, but I’ve just had to boot a rescue shell yet again, so
I’m going to ask.
To save time and effort, I have my big, powerful machine create
binpackages for everything when it updates, and then let all my smaller
machines pull from that. It works pretty well for the most part.
I do something quite similar, but have never had a glibc problem. Maybe the
problem lies in differences between the specific details of our two
approaches.
I have 3 boxes with different hardware but identical portage setup,
identical world file, identical o.s., etc, even identical CFLAGS, CPPFLAGS
and CPU_FLAGS_X86 despite different processors. Like you, I build on my
fastest box (but offload work via distcc), and save binpkgs. After a world
update (emerge -DuNv --changed-deps @world), I rsync all repositories and
binpkgs from the fast box to the others. An emerge -DuNv --changed-deps
--usepkgonly @world on the other boxes completes the update. I do this
anywhere from daily to (rarely) weekly. Portage determines when to update
glibc relative to other packages. There hasn’t been a problem in years with
glibc.
I believe there are more sophisticated ways to supply updated portage trees
and binary packages across a local network. I think there are others on
the list using these more sophisticated techniques successfully. Just a
plain rsync satisfies my needs.
It’s not clear to me whether you have the problem on your big powerful
machine or on your other machines. If it’s the other machines, that
suggests that portage knows the proper build sequence on the big machine
and somehow doesn’t on the lesser machines. Why? What’s different?
Perhaps there’s something in my update frequency or maintaining an
identical setup on all my machines that avoids the problem you’re having?
If installing glibc first works, then maybe put a wrapper around your
emerge? Something that installs glibc first if there’s a new binpkg then
goes on to the remaining updates.
Just offered in case there’s a useful hint from my experience - not arguing
that mine is the one true way (tm).
HTH,
John Blinka
In case it is not clear what the underlying problem is:

The slow machine updates too, and is on the same set of packages as the
fast machine.

Fast machine updates glibc to a new version at time T1
Fast machine updates app-arch/xz-utils to a new version at time T2.
This version of xz CAN have glibc symbols from the very newest glibc
version that was merged at time T1. Everything is fine on the fast
machine.

Now the slow machine starts its update process at a time T3 > T2. The
list of packages includes glibc AND xz-utils, however xz-utils is often
pulled in before glibc, which ends in disaster.
Now you have an xz decompressing tool on your slow machine that cannot
run, because some library symbols from glibc are missing (because
glibc was not merged yet), and you're pretty much stuck in the middle
of the update with a broken system.

I have seen this kind of behaviour only when the slow machine had not
been updated for a very long time (i.e. no updates for a year).

Anyway, I think a reasonable default for emerge would be to merge glibc
as early as possible, because any of the other binary packages could
have been built against the newer glibc version, and could potentially
fail to run on the slow machine until glibc is updated.

Hope that clears up what happens, and why it fails to update / breaks
the slow machine.
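
You can watch this failure mode coming: every dynamically linked binary records the glibc symbol versions it needs, so on the slow machine you can compare what a freshly built xz demands against what the still-installed glibc provides. A sketch (the binary's path may differ on your system, and objdump comes from binutils):

```shell
# List the versioned glibc symbols a binary requires. If any version
# shown here is newer than the installed glibc, the binary won't run.
objdump -T /usr/bin/xz | grep -o 'GLIBC_[0-9.]*' | sort -u -V
```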

Andreas
Wol
2023-01-17 22:10:01 UTC
That's not a terrible idea.  Although running emerge twice every time in
order to check would slow things down considerably.  Probably better to
just get it through my thick head to update core libraries first.
I guess that could mean putting all those libs in an @set? Then you
could just do e.g. "emerge --update @libraries; emerge --update @world".

Or maybe what I do if I'm expecting trouble - "emerge --update @system;
emerge --update @world". I guess those libraries are in @system?
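
For what it's worth, a custom set is just a file under /etc/portage/sets/. A minimal sketch, where the set name "core-libs" and the package list are invented; pick whatever your systems actually depend on for installing updates:

```shell
# Create the set file: one package atom per line.
mkdir -p /etc/portage/sets
cat > /etc/portage/sets/core-libs <<'EOF'
sys-libs/glibc
sys-libs/zlib
app-arch/xz-utils
EOF

# Then update it ahead of @world, e.g.:
#   emerge --update --usepkgonly @core-libs
#   emerge -DuNv --usepkgonly @world
```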

Cheers,
Wol
