Paul Vanderhoof
2012-05-19 00:27:04 UTC
Need Help: Reverted to old BE with older version of ZFS; cannot mount rpool root or get back to new BE
Solaris 10 u9 on a SPARC V490 with 3 non-global zones. Here is what I did: I used the Live Upgrade method to install the latest 10_Recommended patch cluster, which completed with no errors. I rebooted into the new BE with no apparent problems. The ZFS tools then reported that my pools were at an outdated on-disk ZFS version, so I ran zpool upgrade rpool; zpool upgrade dpool. No errors. My pools are now at ZFS version 29.
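For reference, the upgrade sequence was roughly the following (dpool is my separate data pool; the get at the end just confirms the new on-disk version):
# zpool upgrade -v               # list the pool versions this kernel supports
# zpool upgrade rpool
# zpool upgrade dpool
# zpool get version rpool dpool  # both now report 29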
While troubleshooting a strange error that surfaced with the zones -- zoneadm and zlogin both fail with a <segmentation fault> and dump core -- I decided to revert to the original BE with the luactivate command. On reboot the system panics because it cannot mount root (rpool): the original BE is still at an earlier ZFS version and is not compatible with the newer version 29 on-disk format now on the pool.
NOTICE: zfs_parse_bootfs: error 48
Cannot mount root on rpool/51 fstype zfs
panic[cpu2]/thread=180e000: vfs_mountroot: cannot mount root
I obtained the latest Sol10-u10 ISO, burned a DVD, and booted from cdrom, which works. I then tried to follow the instructions luactivate prints for reverting to the previous BE in the event that your patched BE fails or has serious problems. I did a zpool import rpool and worked through the following:
**********************************************************************
The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.
**********************************************************************
In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:
1. Enter the PROM monitor (ok prompt).
2. Boot the machine to Single User mode using a different boot device
(like the Solaris Install CD or Network). Examples:
     At the PROM monitor (ok prompt):
     For boot to Solaris CD:  boot cdrom -s
     For boot to network:     boot net -s
3. Mount the Current boot environment root slice to some directory (like
/mnt). You can use the following commands in sequence to mount the BE:
     zpool import rpool
     zfs inherit -r mountpoint rpool/ROOT/sol25Q10u9-patch-May12
     zfs set mountpoint=<mountpointName> rpool/ROOT/sol25Q10u9-patch-May12
     zfs mount rpool/ROOT/sol25Q10u9-patch-May12
4. Run <luactivate> utility with out any arguments from the Parent boot
environment root slice, as shown below:
     <mountpointName>/sbin/luactivate
5. luactivate, activates the previous working boot environment and
indicates the result.
6. Exit Single User mode and reboot the machine.
**********************************************************************
Modifying boot archive service
Activation of boot environment <sol25Q10u9-baseline> successful.
**********************************************************************
So one issue here is that I do not in fact want to activate the boot environment <sol25Q10u9-baseline> but rather the NEW BE, sol25Q10u9-patch-May12, and then proceed to work out my zone issues from there. Alternatively, I need to figure out how to upgrade the original BE to the new ZFS version without being able to boot it.
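One idea I have not tried yet, and I am not sure it is a supported path, is to skip luactivate altogether and point the pool's bootfs property at the new BE from the miniroot, roughly:
# zpool import -f rpool
# zpool get bootfs rpool        # shows which BE the pool is currently set to boot
# zpool set bootfs=rpool/ROOT/sol25Q10u9-patch-May12 rpool
# zpool export rpool            # export cleanly before booting from disk again
I do not know whether that leaves the Live Upgrade metadata in a consistent state.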
When I did the zpool import and tried to run luactivate or lustatus:
# /mnt/sbin/luactivate
luactivate: ERROR: Live Upgrade not installed properly (/etc/default/lu not found).
# cd /
# /mnt/sbin/luactivate
luactivate: ERROR: Live Upgrade not installed properly (/etc/default/lu not found).
# lustatus
lustatus: not found
# /mnt/sbin/lustatus
/mnt/sbin/lustatus: not found
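My guess is that the lu scripts look for their configuration (/etc/default/lu) on the running miniroot rather than on the mounted BE, and that lustatus only lives under /usr/sbin on the BE, not /sbin. The next thing I was considering is the following, though it is untested and purely a guess:
# ls /mnt/etc/default/lu /mnt/usr/sbin/lustatus   # confirm both exist on the mounted BE
# mkdir -p /etc/default
# cp /mnt/etc/default/lu /etc/default/lu          # untested: give luactivate the config it expects
# /mnt/sbin/luactivate sol25Q10u9-patch-May12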
Also tried:
# chroot /mnt /sbin/luactivate sol25Q10u9-patch-May12
df: Could not find mount point for /
ERROR: Unable to determine major and minor device numbers for boot device </dev/dsk/c3t0d0s0>.
ERROR: Unable to determine the configuration of the current boot environment <sol25Q10u9-patch-May12>.
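I suspect the chroot attempt fails because /devices, /dev, and the mnttab are not visible inside /mnt. What I was planning to try next, assuming the miniroot allows these loopback mounts (unverified):
# mount -F lofs /devices /mnt/devices       # expose the device tree inside the chroot
# mount -F lofs /dev /mnt/dev
# mount -F mntfs mnttab /mnt/etc/mnttab     # so df inside the chroot can resolve mount points
# chroot /mnt /sbin/luactivate sol25Q10u9-patch-May12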
Regards,
Paul Vanderhoof
System Administrator
Data Center Services
Northrop Grumman
***@ngc.com