Discussion:
[WBEL-devel] Heads up on RHEL Update2 Beta
William Hooper
2004-04-06 15:22:02 UTC
Permalink
http://www.redhat.com/archives/taroon-list/2004-April/msg00126.html

"Availability of a full set of updated installable CD ISO images via Red
Hat Network, with OS package updates and _install-time support for new
hardware_."
(Emphasis mine)

It looks like it might be a good idea to start planning a re-spin to pick
up the driver updates (to make install easier):

"Driver updates including IBM ServeRAID (ips), LSI Logic RAID (megaraid),
LSI Logic MPT Fusion (mpt* drivers), Compaq SA53xx Controllers (cciss
driver), QLogic Fibre Channel (qla2xxx), Intel PRO/1000 (e1000), Broadcom
Tigon3 (tg3), Network Bonding (bonding), Serial ATA (libata)"
--
William Hooper
John Morris
2004-04-07 00:17:44 UTC
Permalink
Post by William Hooper
It looks like it might be a good idea to start planning a re-spin to pick
Yup. I'm hearing tales of up to 250 packages changed from the base 3.0
release. Should still be doable in a reasonable timeframe even if I won't
have the big Xeons this time.

Been pondering the longterm requirements more as I settle into this for
the long haul..... Here are my thoughts, if anyone sees any obvious brain
damage whack me, ok?

As the updates roll on over the years, I figure I need to keep a partition
for each supported version/platform, keep each up to date and use that to
build the next update on, then reinstall (only real way to ensure nothing
survives from the previous packages) with it for future errata. So that
means something like this:

3.x - i386
3.x - amd64 (possibly, depends on when I upgrade the box at home)
4.x - i386
4.x - amd64
5.x - i386
5.x - amd64

Since it would be prudent to figure on needing at least 20GB to respin for
3.0, and to add 5-10GB with each successive version, that pretty much means a
dedicated 200G drive can handle it.

Then there is the question of how to handle the respins on the main site
in a way to minimize the burden for the mirrors. The best idea I have had
so far is to make this respin 3.1 and create a new top level, thus:

delete 3.0-RC1
3.0-RC2
3.0
3.1
contrib

Then make 3.1/en/updates and 3.1/en/obsolete-updates links back to the 3.0
tree. But will rsync handle that without just duplicating the files on
the mirrors? It would handle a symlink but would up2date (guess it really
depends on the configuration on the mirror) like that? Or should up2date
just continue to point at the 3.0 directory for updates, making a symlink
safe?

Either way, links are the only way to tackle the problem, since at two
respins per year per base version that is a boatload of saved storage.

Then there is the question of how many versions to plan on keeping online.
The base 3.0 version probably needs to stay up for at least the 5yr
availability of errata, but does a whole tree+iso set for 3.1 need to
remain when 3.8 is available? How many point revisions need to be
available? I'm inclined to be a little conservative and say at least two.
As in 3.1 stays until 3.3 appears and has had some time to be declared
good.
--
John M. http://www.beau.org/~jmorris This post is 100% M$ Free!
Geekcode 3.1:GCS C+++ UL++++$ P++ L+++ W++ w--- Y++ b++ 5+++ R tv- e* r
William Hooper
2004-04-07 13:18:04 UTC
Permalink
Post by John Morris
Post by William Hooper
It looks like it might be a good idea to start planning a re-spin to pick
Yup. I'm hearing tales of up to 250 packages changed from the base 3.0
release. Should still be doable in a reasonable timeframe even if I won't
have the big Xeons this time.
So far the list I have seen (stop me if you are on the Taroon-list):
http://www.redhat.com/archives/taroon-list/2004-April/msg00083.html

And rhn-applet should probably be added to the list for WBEL :-)
Post by John Morris
Been pondering the longterm requirements more as I settle into this for
the long haul..... Here are my thoughts, if anyone sees any obvious brain
damage whack me, ok?
As the updates roll on over the years, I figure I need to keep a partition
for each supported version/platform, keep each up to date and use that to
build the next update on, then reinstall (only real way to ensure nothing
survives from the previous packages) with it for future errata.
Time to invest in VMWare? Not sure if it handles AMD64 yet.
Post by John Morris
Since it would be prudent to figure on needing at least 20GB to respin for
3.0 and add 5-10 with each successive version that pretty much means a
dedicated 200G drive can handle it.
Then there is the question of how to handle the respins on the main site
in a way to minimize the burden for the mirrors. The best idea I have had
delete 3.0-RC1
3.0-RC2
3.0
3.1
contrib
Then make 3.1/en/updates and 3.1/en/obsolete-updates links back to the 3.0
tree. But will rsync handle that without just duplicating the files on
the mirrors? It would handle a symlink but would up2date (guess it really
depends on the configuration on the mirror) like that? Or should up2date
just continue to point at the 3.0 directory for updates, making a symlink
safe?
Making a distinction between 3.0, 3.1, 3.x for errata is probably not going
to make sense, because only one errata package will be released. Once all
the new updates go into the 3.1 tree no one will want to use the 3.0 tree.
Up to this point (and IIUC this should be true for update2), just
installing the errata will get you up to the current level, so pointing to
one repo for updates should do the trick. For that matter the version
number reported doesn't really change:

[***@token whooper]$ cat /etc/redhat-release
Red Hat Enterprise Linux WS release 3 (Taroon Update 1)

This way if you have a copy of the original ISOs you can still get up to
update1 just by using up2date.
Post by John Morris
Either way, links are the only way to tackle the problem, since at two
respins per year per base version that is a boatload of saved storage.
Then there is the question of how many versions to plan on keeping online.
The base 3.0 version probably needs to stay up for at least the 5yr
availability of errata, but does a whole tree+iso set for 3.1 need to
remain when 3.8 is available? How many point revisions need to be
available? I'm inclined to be a little conservative and say at least two.
As in 3.1 stays until 3.3 appears and has had some time to be declared
good.
One of the reasons that RHEL went to 4 CDs is so that the first CD would
have room to allow for updates. When update1 was released the only ISO
that changed was disc1. Unfortunately I'm told that this won't be the
case in update2. With WBEL it would probably make sense from a time
standpoint to just do re-spins when new hardware at install time is
supported. I'm not sure if it would make sense from a bandwidth
standpoint, however. The idea of keeping two 3.x ISO sets plus the 3.0
ISO set makes sense to me.
--
William Hooper
Joe Brouhard
2004-04-07 15:14:18 UTC
Permalink
Post by William Hooper
Making a distinction between 3.0, 3.1, 3.x for errata is probably not going
to make sense, because only one errata package will be released. Once all
the new updates go into the 3.1 tree no one will want to use the 3.0 tree.
Up to this point (and IIUC this should be true for update2), just
installing the errata will get you up to the current level, so pointing to
one repo for updates should do the trick.
I agree. I don't think anyone would want to use older packages, unless
there's a dependency issue that has yet to be resolved... tho YUM should
notice that before even downloading the package.
Post by William Hooper
This way if you have a copy of the original ISOs you can still get up to
update1 just by using up2date.
Yep.
Post by William Hooper
Post by John Morris
Either way, links are the only way to tackle the problem, since at two
respins per year per base version that is a boatload of saved storage.
One of the reasons that RHEL went to 4 CDs is so that the first CD would
have room to allow for updates. When update1 was released the only ISO
that changed was disc1. Unfortunately I'm told that this won't be the
case in update2. With WBEL it would probably make sense from a time
standpoint to just do re-spins when new hardware at install time is
supported. I'm not sure if it would make sense from a bandwidth
standpoint, however. The idea of keeping two 3.x ISO sets plus the 3.0
ISO set makes sense to me.
Going back over my post, I think this is what I recommended, tho Hooper's
post hit the list before mine...
--
Joe Brouhard
Chief of Information Services
Kansas City Open Source Consultants
***@kcosc.com
William Hooper
2004-04-07 16:26:38 UTC
Permalink
Post by Joe Brouhard
Post by William Hooper
The idea of keeping two 3.x ISO sets plus the 3.0
ISO set makes sense to me.
Going back over my post, I think this is what I recommended, tho Hooper's
post hit the list before mine...
You know, the more I ponder this, I might change my mind....

In theory, the day you have a release (let's say update2), any machine
that has been kept updated will be the same as a machine installed from
the new ISOs. Furthering this thought, if you have problems with update2,
you will have the same problem with the updates (excluding installer
issues)...

Keeping the 3.0 images is good from just a historical standpoint. Images
beyond that should probably overlap for a defined period (to verify that
the new installs work), but after that can probably be removed leaving 3.0
and 3.newest.

I'm still wavering on the idea of a re-spin for updates that don't provide
more install-time hardware support. For people buying CDs, maybe working
on an "updates" CD that has the yum headers, etc. on it would make sense?
Yum supports an option to specify an alternate config file. That could
probably be used to just do a "yum update" from the CD. A lot less work
than re-spinning the whole distro, and narrowband users can still
benefit from not having to download a bunch of updates post-install.
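A minimal sketch of what that CD's config might look like; the file names, section label, and mount point here are my assumptions, not an agreed layout:

```
# Hypothetical yum.conf shipped on the updates CD
[main]
cachedir=/var/cache/yum
debuglevel=2

[updates-cd]
name=WBEL 3.0 updates (local CD)
baseurl=file:///mnt/cdrom/updates/
```

Then "yum -c /mnt/cdrom/yum.conf update" would pull everything from the mounted disc instead of the network.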
--
William Hooper
Joe Brouhard
2004-04-07 19:16:29 UTC
Permalink
Post by William Hooper
You know, the more I ponder this, I might change my mind....
In theory, the day you have a release (let's say update2), any machine
that has been kept updated will be the same as a machine installed from
the new ISOs. Furthering this thought, if you have problems with update2,
you will have the same problem with the updates (excluding installer
issues)...
In theory, I would agree. However, I make it a point not to apply updates
unless they're critical and known fixes. For example, my
mailserver (nicknamed Diablo, because it's a dual 333MHz box) has
heavily-customized postfix and courier-imap packages that include mysql
support and a few other features. The updated packages that get sent out
won't work on my mail system, and thus I'd have to re-build the RPMs
with proper support. That makes keeping the mailserver up to date a little
problematic, but nonetheless, it can be done.
Post by William Hooper
Keeping the 3.0 images is good from just a historical standpoint. Images
beyond that should probably overlap for a defined period (to verify that
the new installs work), but after that can probably be removed leaving 3.0
and 3.newest.
On this point, I agree wholeheartedly. Keep older packages for XX weeks, and
then remove them *OR* put them in a separate archive (an ISO or
something?)
Post by William Hooper
I'm still wavering on the idea of a re-spin for updates that don't provide
more install-time hardware support. For people buying CDs, maybe working
on an "updates" CD that has the yum headers, etc. on it would make sense?
Yum supports an option to specify an alternate config file. That could
probably be used to just do a "yum update" from the CD. A lot less work
than re-spinning the whole distro, and narrowband users can still
benefit from not having to download a bunch of updates post-install.
I'm probably not understanding the concept of 're-spin' correctly, but..

I'd make a single install set, then apply updates on the fly (i.e.
download them). Yes, this creates a bandwidth hog on the mirrors, but my
suggestion would be that on the next set, the YUM repositories point to
a round-robin. I know this has been done before (I'm no expert at
this... but I am a fan of the idea of having 'round-robin' yum
repositories so that there is one universal location that a user can 'yum'
to, but that location then sends the request to the fastest mirror it can
find).
--
Joe Brouhard
Chief of Information Services
Kansas City Open Source Consultants
***@kcosc.com
Johnny Hughes
2004-04-08 11:47:42 UTC
Permalink
My whole take on this is:

RedHat only releases bug fixes and security updates to RHEL. There are
no version upgrades without a bug or security reason.

So a respin of the install ISOs, with all fixes incorporated, is good... it
means a much faster install and far fewer updates after the original
install.

Up2date (or yum) will pick the proper updates (so long as the package
name versioning is correct) regardless of whether you installed from the
original ISO or a respun ISO. (Since the respins only include some of
the newer packages.)

So, as William Hooper said, you only need one update section for all the
respins...and just keep the latest package there for each package
updated (from the original ISOs), just like you are doing now.

I would think you would only keep the current respin's files in the OS
directory ( 3.0/en/os/i386/ ) on the mirrors .... and copies of the
original ISOs and the latest respin ISOs.

If you wanted the respins to have a new version number (like 3.0.1 or 3.1
instead of 3.0), then just put a text file in the 3.0 directory that
says go to 3.0.1, and provide updated /etc/yum.conf and
/etc/sysconfig/rhn/sources files that point to the new locations.
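For up2date, that pointer would be a line in /etc/sysconfig/rhn/sources; something like the following, where the label and URL are placeholders, not real WBEL locations:

```
# Hypothetical sources entry after a version bump; the format for
# yum-style repos is "yum <label> <url>"
yum wbel-3.0.1-updates http://mirror.example.org/3.0.1/en/updates/i386/
```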
From a GPL standpoint, the original source ISOs and the updates SRPMS
directory (and the obsolete-updates directory) meet all the
requirements.

-Johnny Hughes
Post by William Hooper
You know, the more I ponder this, I might change my mind....
In theory, the day you have a release (let's say update2), any machine
that has been kept updated will be the same as a machine installed from
the new ISOs. Furthering this thought, if you have problems with update2,
you will have the same problem with the updates (excluding installer
issues)...
In theory, I would agree. However, I make it a point not to apply updates
unless they're critical and known fixes. For example, my
mailserver (nicknamed Diablo, because it's a dual 333MHz box) has
heavily-customized postfix and courier-imap packages that include mysql
support and a few other features. The updated packages that get sent out
won't work on my mail system, and thus I'd have to re-build the RPMs
with proper support. That makes keeping the mailserver up to date a little
problematic, but nonetheless, it can be done.
Post by William Hooper
Keeping the 3.0 images is good from just a historical standpoint. Images
beyond that should probably overlap for a defined period (to verify that
the new installs work), but after that can probably be removed leaving 3.0
and 3.newest.
On this point, I agree wholeheartedly. Keep older packages for XX weeks, and
then remove them *OR* put them in a separate archive (an ISO or
something?)
Post by William Hooper
I'm still wavering on the idea of a re-spin for updates that don't provide
more install-time hardware support. For people buying CDs, maybe working
on an "updates" CD that has the yum headers, etc. on it would make sense?
Yum supports an option to specify an alternate config file. That could
probably be used to just do a "yum update" from the CD. A lot less work
than re-spinning the whole distro, and narrowband users can still
benefit from not having to download a bunch of updates post-install.
I'm probably not understanding the concept of 're-spin' correctly, but..
I'd make a single install set, then apply updates on the fly (i.e.
download them). Yes, this creates a bandwidth hog on the mirrors, but my
suggestion would be that on the next set, the YUM repositories point to
a round-robin. I know this has been done before (I'm no expert at
this... but I am a fan of the idea of having 'round-robin' yum
repositories so that there is one universal location that a user can 'yum'
to, but that location then sends the request to the fastest mirror it can
find).
Joe Brouhard
2004-04-07 13:31:43 UTC
Permalink
Post by John Morris
As the updates roll on over the years, I figure I need to keep a partition
for each supported version/platform, keep each up to date and use that to
build the next update on, then reinstall (only real way to ensure nothing
survives from the previous packages) with it for future errata. So that
3.x - i386
3.x - amd64 (possibly, depends on when I upgrade the box at home)
4.x - i386
4.x - amd64
5.x - i386
5.x - amd64
Looks like RedHat's structure as well.
Post by John Morris
Then there is the question of how to handle the respins on the main site
in a way to minimize the burden for the mirrors. The best idea I have had
delete 3.0-RC1
3.0-RC2
3.0
3.1
contrib
Then make 3.1/en/updates and 3.1/en/obsolete-updates links back to the 3.0
tree. But will rsync handle that without just duplicating the files on
the mirrors? It would handle a symlink but would up2date (guess it really
depends on the configuration on the mirror) like that? Or should up2date
just continue to point at the 3.0 directory for updates, making a symlink
safe?
IMNSHO, older packages should be removed or placed in an "archive" area of
some sort. Some of the mirrors I see keep track of their packages by
putting them into a folder that correlates with the version number of the
product. But if we don't have a LARGE repository to put this stuff on,
then I'd say let's keep two, maybe three versions up, but no more.

I'm probably not making sense here, since I have yet to have my daily
caffeine intake. <G>
Post by John Morris
Then there is the question of how many versions to plan on keeping online.
The base 3.0 version probably needs to stay up for at least the 5yr
availability of errata, but does a whole tree+iso set for 3.1 need to
remain when 3.8 is available? How many point revisions need to be
available? I'm inclined to be a little conservative and say at least two.
As in 3.1 stays until 3.3 appears and has had some time to be declared
good.
I'd keep only two prior revisions, but keep all previous packages?

Better yet: the YUM repositories should only keep the two most recent
trees. If you keep any further back, there's the chance some people
out there are going to end up using obsolete and/or bug-ridden
packages. I don't think it'd be a good idea to keep anything past the
previous 2 releases. Archive the ISOs (and packages as ISO images), but
I wouldn't keep the packages residing on the hard drive. Not sure if the
package vs. ISO idea will save space or not... never really tried it.
--
Joe Brouhard
Chief of Information Services
Kansas City Open Source Consultants
***@kcosc.com
Milan Keršláger
2004-04-13 12:21:20 UTC
Permalink
Post by John Morris
As the updates roll on over the years, I figure I need to keep a partition
for each supported version/platform, keep each up to date and use that to
build the next update on, then reinstall (only real way to ensure nothing
survives from the previous packages) with it for future errata. So that
3.x - i386
3.x - amd64 (possibly, depends on when I upgrade the box at home)
4.x - i386
4.x - amd64
5.x - i386
5.x - amd64
You don't need extra partitions. Everything can live in a chroot
environment with no need to reinstall the base system. With yum it is easy
to check whether all updates/packages are in place (see the list option).

If your build system is AMD64, you can build a fully clean i386
system in an i386-only chroot tree and vice versa (an AMD64 build needs a
clean x86_64 tree with no i386 stuff).

Compilation does not depend on the current (running) kernel.
Post by John Morris
Since it would be prudent to figure on needing at least 20GB to respin for
3.0 and add 5-10 with each successive version that pretty much means a
dedicated 200G drive can handle it.
There is no real reason to waste space like this. RH's build cluster is
in a chroot environment too (there were posts on the devel list in the past).
Post by John Morris
Then make 3.1/en/updates and 3.1/en/obsolete-updates links back to the 3.0
tree. But will rsync handle that without just duplicating the files on
the mirrors? It would handle a symlink but would up2date (guess it really
depends on the configuration on the mirror) like that? Or should up2date
just continue to point at the 3.0 directory for updates, making a symlink
safe?
There is no need to make a 3.1 when Red Hat itself has none; 3.0 +
updates is 3.1. Just respin the ISOs like RH does (ok, as RH will do
after the final U2).
Post by John Morris
Either way, links are the only way to tackle the problem, since at two
respins per year per base version that is a boatload of saved storage.
Then there is the question of how many versions to plan on keeping online.
The base 3.0 version probably needs to stay up for at least the 5yr
availability of errata, but does a whole tree+iso set for 3.1 need to
remain when 3.8 is available? How many point revisions need to be
available? I'm inclined to be a little conservative and say at least two.
As in 3.1 stays until 3.3 appears and has had some time to be declared
good.
I see no real reason to maintain 5+ versions when they are all the same
(except for the installation ISOs). Who needs obsolete ISOs?

Just leave the updates in the same place and remove the old ISOs.
--
Milan Kerslager
E-mail: ***@pslib.cz
WWW: http://www.pslib.cz/~kerslage/
Tres Seaver
2004-04-14 02:23:33 UTC
Permalink
Post by Milan Keršláger
There is no need to make a 3.1 when Red Hat itself has none; 3.0 +
updates is 3.1. Just respin the ISOs like RH does (ok, as RH will do
after the final U2).
Right, except WBEL might do respins for different reasons, or at
different times, than RHEL; the choice should be based on trading off
the pain of a respin versus the pain (and bandwidth) of old ISO + tons
of updates.
Post by Milan Keršláger
I see no real reson to maintain 5+ versions when all are the same
(except installation ISOs). Who needs obsolete ISOs?
Just leave updates in same place and remove old ISOs.
+1. I suppose the original ISOs might be archived somewhere public, for
curiosity's sake, but they don't need to be mirrored once newer ones are
available.

Tres.
--
===============================================================
Tres Seaver ***@zope.com
Zope Corporation "Zope Dealers" http://www.zope.com
John Morris
2004-04-14 05:14:45 UTC
Permalink
Post by Milan Keršláger
You don't need extra partitions. Everything can live in a chroot
environment with no need to reinstall the base system. With yum it is easy
to check whether all updates/packages are in place (see the list option).
Hmm. I had heard that RH did it with a chroot. I have had problems with
grub in a chroot (rescue) before, though, and figured it was one more thing
that could go wrong. Guess it is worth trying; then all I'd need is a spare
spot to do test installs.
Post by Milan Keršláger
If your build system is AMD64, you can build a fully clean i386
system in an i386-only chroot tree and vice versa (an AMD64 build needs a
clean x86_64 tree with no i386 stuff).
Haven't started researching what I'll need to do to get AMD64 builds. So
you are saying that if you install the 32bit compatibility layer so you
can build i386 packages you can no longer build native packages? Oh boy,
life on the bleeding edge! Guess if I use a chroot environment to build
in it won't be that bad, but I pity the normal developer trying to support
both.

Guess I had better start reading though, the tax man screwed up this year
and instead of my usual reaming I'm actually going to get a refund. I
should dedicate the AMD port "This port made possible by the Bush tax
cut." just to watch the fireworks that would ensue. ;) (You being in .cz
you probably don't follow US politics. Basically, President Bush is a
Republican and most of the US IT industry is Democrats and in this
election year the rivalry has degenerated into outright hatred among the
Democrats for Bush and all his works.)
Post by Milan Keršláger
Compilation does not depend on the current (running) kernel.
True, so long as it is possible to build i[356]86 packages with the AMD64
kernel, and it sounds like that part works. I was planning to be paranoid
with the separate partitions, but if the chroot does work I'd be able to
build stuff without a reboot, and that is always good.
Post by Milan Keršláger
There is no need to make a 3.1 when Red Hat itself has none; 3.0 +
updates is 3.1. Just respin the ISOs like RH does (ok, as RH will do
after the final U2).
I think this is the way to go, because after pondering it some more I have
hit a problem with the original scheme. If I create a 3.1 tree I'd have
to keep it up pretty much forever because I have already noticed up2date
go back for packages from the base set to install errata. So I'm now
figuring on keeping up2date pointing at the 3.0 tree. Any packages which
differ in the respin should be in the errata directory anyway so
everything should be good. So all that will be in the 3.1 tree will be
.iso images.
Post by Milan Keršláger
Just leave updates in same place and remove old ISOs.
Does the GPL require making source images available for a length of time?
Being non-commercial, my reading says no, since anyone downloading the
binaries had an opportunity to download the corresponding source.

Ok, questions for those who know more about this stuff than I do. The way I
see it, there are three ways to get a chroot environment.

1. Install a system, then copy the whole live tree over.

2. Try to use rpm --initdb --root [path to chroot], then install enough
packages with rpm --root [path to chroot] to get self-hosted.

3. Copy the running system's tree. Sounds like a bad idea.

Opinions as to which is safer, etc. welcome.
--
John M. http://www.beau.org/~jmorris This post is 100% M$ Free!
Geekcode 3.1:GCS C+++ UL++++$ P++ L+++ W++ w--- Y++ b++ 5+++ R tv- e* r
Johnny Hughes
2004-04-14 09:55:14 UTC
Permalink
On Wed, 2004-04-14 at 00:14, John Morris wrote:
{snip}
Post by John Morris
Guess I had better start reading though, the tax man screwed up this year
and instead of my usual reaming I'm actually going to get a refund. I
should dedicate the AMD port "This port made possible by the Bush tax
cut." just to watch the fireworks that would ensue. ;) (You being in .cz
you probably don't follow US politics. Basically, President Bush is a
Republican and most of the US IT industry is Democrats and in this
election year the rivalry has degenerated into outright hatred among the
Democrats for Bush and all his works.)
Not that it matters...but, just for the record, I am a big Bush and
Reagan fan. :)

- Johnny Hughes
Daniel T. Gynn
2004-04-14 14:52:54 UTC
Permalink
Post by Johnny Hughes
{snip}
Guess I had better start reading though, the tax man screwed up this year
and instead of my usual reaming I'm actually going to get a refund. I
should dedicate the AMD port "This port made possible by the Bush tax
cut." just to watch the fireworks that would ensue. ;) (You being in .cz
you probably don't follow US politics. Basically, President Bush is a
Republican and most of the US IT industry is Democrats and in this
election year the rivalry has degenerated into outright hatred among the
Democrats for Bush and all his works.)
Not that it matters...but, just for the record, I am a big Bush and
Reagan fan. :)
Amen!

--
-----------------------
Daniel T. Gynn
RHCE #806200978201621
Essential Systems, Inc.
412-931-5403 ext. 1
fax: 412-931-5425
***@essensys.com
GnuPG Key http://www.essensys.com/~dan/gpgring.asc
Fingerprint: 0979 73B8 847A 349E 7363 66F4 6A79 DD72 495D CD60

Milan Keršláger
2004-04-14 21:25:36 UTC
Permalink
Post by John Morris
Post by Milan Keršláger
You don't need extra partitions. Everything can live in a chroot
environment with no need to reinstall the base system. With yum it is easy
to check whether all updates/packages are in place (see the list option).
Hmm. I had heard that RH did it with a chroot. I have had problems with
grub in a chroot (rescue) before, though, and figured it was one more thing
that could go wrong. Guess it is worth trying; then all I'd need is a spare
spot to do test installs.
There are the same problems with packages as in a "freshly installed"
system (they need to be built as non-root, etc). When building, no package
should need access to the master system. If you have a problem with grub,
try to (re)install the package in the chroot with --noscripts --notriggers.
Post by John Morris
Post by Milan Keršláger
If your build system is AMD64, you can build a fully clean i386
system in an i386-only chroot tree and vice versa (an AMD64 build needs a
clean x86_64 tree with no i386 stuff).
Haven't started researching what I'll need to do to get AMD64 builds. So
you are saying that if you install the 32bit compatibility layer so you
can build i386 packages you can no longer build native packages? Oh boy,
life on the bleeding edge! Guess if I use a chroot environment to build
in it won't be that bad, but I pity the normal developer trying to support
There are a few broken packages in x86_64 and you need to handle them even
if you don't use a chroot (wrong autodetection in configure etc, so
sometimes you will need to install the glibc.i386 package and sometimes you
will need to remove it, but the safe thing is to have a plain x86_64 tree
except for those exceptions). Chroot is about
I-do-not-need-to-reinstall-system-many-times. It simply saves your time
when you know what you are doing and don't need to sit behind the
build machine walking through the installation process again and
again.
Post by John Morris
I think this is the way to go, because after pondering it some more I have
hit a problem with the original scheme. If I create a 3.1 tree I'd have
to keep it up pretty much forever because I have already noticed up2date
go back for packages from the base set to install errata. So I'm now
figuring on keeping up2date pointing at the 3.0 tree. Any packages which
differ in the respin should be in the errata directory anyway so
everything should be good. So all that will be in the 3.1 tree will be
.iso images.
You were talking about rebuilding the whole of WBEL. Don't do it. It is only
a waste of time and throws away the QA you and others did here (even though
we are unable to track bugs in something like Bugzilla). Every rebuilt
package could be badly linked or miscompiled (because of a configure bug or
so).

If you understand this, you will leave the current packages as they are now,
add U2 like the other errata, not make a 3.1, and only respin the ISOs to
save time for downloaders.

There is really no reason to have 3.1, because 3.0+updates = 3.1 (so
there is no difference and no need to bump the version, except to confuse
non-experienced users who will try to "upgrade").
Post by John Morris
Ok, questions for those who know more about this stuff than I do. The way I
see it, there are three ways to get a chroot environment.
1. Install a system, then copy the whole live tree over.
2. Try to use rpm --initdb --root [path to chroot], then install enough
packages with rpm --root [path to chroot] to get self-hosted.
It is not as easy as that, because of the scripts (I tried this already).
The safer method is to install a minimal system and use tar for a backup
(excluding /proc and the directory where you are putting the backup). Then
transfer it to the build system, extract it, chroot into the new tree,
mkdir /proc, mount /proc and /dev/pts, and then run yum (to update the
system and install the rest) using something like this:

$ yum update
$ yum list | awk '{print $1}' > to-install
$ vim to-install # remove kernels, glibc for other archs etc
$ yum install $(cat to-install)

Maybe I forgot something, but this is the general working scheme.
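Spelled out as commands, that backup-and-restore procedure might look
roughly like the following (the paths /backup and /chroot/rhel3 are only
placeholders, not anything from the actual build setup):

```shell
# On the donor machine: archive the minimal install, skipping
# /proc and the backup directory itself
tar -czf /backup/minimal.tar.gz --exclude=/proc --exclude=/backup /

# On the build machine: unpack the archive into the new tree
mkdir -p /chroot/rhel3
tar -xzf /backup/minimal.tar.gz -C /chroot/rhel3

# Recreate /proc, mount the virtual filesystems, and enter the chroot
mkdir -p /chroot/rhel3/proc
mount -t proc proc /chroot/rhel3/proc
mount -t devpts devpts /chroot/rhel3/dev/pts
chroot /chroot/rhel3 /bin/bash
```

From inside the chroot the yum commands above can then be run as usual.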
Post by John Morris
3. Copy the running system's tree. Sounds like a bad idea.
Opinions as to which is safer, etc. welcome.
--
Milan Kerslager
E-mail: ***@pslib.cz
WWW: http://www.pslib.cz/~kerslage/
Bogdan Costescu
2004-04-14 10:50:02 UTC
Permalink
Post by John Morris
Ok, questions for those who know more about this stuff than I do. The way
I see it, there are three ways to get a chroot environment.
The easiest method that I've seen so far involves yum, which you run
as:

yum --installroot=/chroot/base/dir groupinstall "Base"

This needs the "comps.xml" file to be available in the baseurl of one
of the repositories mentioned in yum.conf.
--
Bogdan Costescu

IWR - Interdisziplinaeres Zentrum fuer Wissenschaftliches Rechnen
Universitaet Heidelberg, INF 368, D-69120 Heidelberg, GERMANY
Telephone: +49 6221 54 8869, Telefax: +49 6221 54 8868
E-mail: ***@IWR.Uni-Heidelberg.De
John A. Tamplin
2004-04-14 11:45:24 UTC
Permalink
Post by John Morris
Basically, President Bush is a
Republican and most of the US IT industry is Democrats and in this
election year the rivalry has degenerated into outright hatred among the
Democrats for Bush and all his works.)
All over-generalizations are wrong :).
--
John A. Tamplin ***@jaet.org
770/436-5387 HOME 4116 Manson Ave
Smyrna, GA 30082-3723
John Morris
2004-04-16 03:57:44 UTC
Permalink
Post by Milan Keršláger
There are the same problems with packages as in a freshly installed
system (they need to be built as non-root, etc). When building, no package
should need access to the master system. If you have a problem with grub,
try to (re)install the package in the chroot with --noscripts --notriggers.
I had problems with grub while rescuing a system. Commands like du and
mount are unreliable at best in a chroot environment because of the lack
of access to the real mtab file and /proc, and grub-install needs those.
Hopefully few if any packages need access to that sort of system detail
to compile.
Post by Milan Keršláger
There are a few broken packages in x86_64 and you need to handle them
even if you don't use chroot (wrong autodetection in configure etc, so
you will sometimes need to install the glibc.i386 package and sometimes
you will need to remove it, but the safe route is to keep a plain x86_64
tree apart from those exceptions). Chroot is about not having to
reinstall the system many times. It simply saves your time when you know
what you are doing and don't want to sit at the build machine walking
through the installation process again and again.
Oh joy of joys! Sounds like this is going to be loads of fun and I'm
going to get sliced up good on the bleeding edge of Linux evolution. So
long as I can manage to actually get a correctly functioning build it will
be a good learning experience.
Post by Milan Keršláger
If you understand this, you will leave the current packages as they are
now, add U2 like any other errata, not make a 3.1, and only respin the
ISOs to save time for downloaders.
Wasn't planning on rebuilding every package, but was thinking of having a
3.1 tree until I thought it through and realized I didn't have to. So long
as I make darned sure any package which differs from 3.0 is in the errata
directory, up2date shouldn't get confused.
Post by Milan Keršláger
There is really no reason to have 3.1, because 3.0+updates = 3.1 (so
there is no difference and no need to bump the version, except to confuse
inexperienced users who will try to "upgrade").
Well, I have to call it something, and I don't really like U2, especially
since I didn't issue a formal U1. Anyway, this really smells like a point
release (new functionality like OO.o 1.1 instead of just bug fixes), so I
might as well call it one.
Post by Milan Keršláger
It is not as easy as that, because of the scripts (I tried this already).
The safer method is to install a minimal system and use tar for a backup
(excluding /proc and the directory where you are putting the backup). Then
transfer it to the build system, extract it, chroot into the new tree,
mkdir /proc, mount /proc and /dev/pts, and then run yum (to update the
system and install the rest) by
Obviously I haven't used chroot enough. Didn't realize you could mount
/proc and /dev/pts multiple times inside chrooted environments. That
changes a few things. Going to start playing with some of this stuff now
that I have enough drive space to toss a few complete trees around.
--
John M. http://www.beau.org/~jmorris This post is 100% M$ Free!
Geekcode 3.1:GCS C+++ UL++++$ P++ L+++ W++ w--- Y++ b++ 5+++ R tv- e* r
Milan Keršláger
2004-04-16 06:58:15 UTC
Permalink
Post by John Morris
Wasn't planning on rebuilding every package, but was thinking of having a
3.1 tree until I thought it through and realized I didn't have to. So long
as I make darned sure any package which differs from 3.0 is in the errata
directory, up2date shouldn't get confused.
There is a checking utility, /usr/lib/anaconda-runtime/check-repository.py
(from the anaconda-runtime package), to make sure there are no duplicates
or unresolved dependency problems in the tree for the Anaconda installer.

There was a change in the comps file format, so if it does not work
directly, use some older comps file (to check the packages only). I don't
remember what I had to do about it six months ago, sorry.
--
Milan Kerslager
E-mail: ***@pslib.cz
WWW: http://www.pslib.cz/~kerslage/
Ewan Mac Mahon
2004-04-16 16:10:12 UTC
Permalink
Post by John Morris
Obviously I haven't used chroot enough. Didn't realize you could mount
/proc and /dev/pts multiple times inside chrooted environments. That
changes a few things.
There are two ways to do it: either mount them multiple times (works for
virtual filesystems) or bind mount them into the new location (works for
anything). Bind mounts even work for files, so for /etc/mtab you could do
(assuming the new system is in /mnt/chroot):
# touch /mnt/chroot/etc/mtab
# mount /etc/mtab /mnt/chroot/etc/mtab --bind
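For the virtual filesystems themselves, the "mount them multiple times"
route would look roughly like this (again assuming the new system lives
in /mnt/chroot; the /dev bind mount is an extra illustration, not part
of the original recipe):

```shell
# /proc and /dev/pts can simply be mounted a second time into the chroot
mount -t proc proc /mnt/chroot/proc
mount -t devpts devpts /mnt/chroot/dev/pts
# a bind mount works for any directory, e.g. the whole of /dev
mount --bind /dev /mnt/chroot/dev
```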
Post by John Morris
Going to start playing with some of this stuff now that I have enough drive
space to toss a few complete trees around.
It might also be instructive to work through the first stage of a Gentoo
install - I don't know if they do anything beyond bind mounting the
necessary bits but the first stage does all take place in a chroot and
IIRC involves building grub, so there must be a way to do it.

Ewan
