Error during update to Filr 24.1 on Filr and Index Appliance

Today we tried the Filr update from 23.4 to 24.1

The Database and Content Editor appliances updated without any errors.

On the Filr and Searchindex appliances we get the error "Failed to communicate with the rpc service".

After the reboot the appliance version shows 24.1, but the Filr login fails with an error saying the service is unavailable.

The zypper logs show the following errors during the update.

2024-01-31 14:40:40 <1> filr01(8121) [zypp::exec++] abstractspawnengine.cc(checkStatus):189 Pid 12283 successfully completed
2024-01-31 14:40:40 <1> filr01(8121) [zypp-core] PathInfo.cc(chmod):1098 chmod /var/adm/update-scripts/web-filr-5.0.0-150400.54.156.1-restart-9443-services.sh 0100755
2024-01-31 14:40:40 <1> filr01(8121) [zypp] TargetImpl.cc(RunUpdateScripts):606 Found update script web-filr-5.0.0-150400.54.156.1-restart-9443-services.sh
2024-01-31 14:40:40 <1> filr01(8121) [zypp] TargetImpl.cc(doExecuteScript):449 Execute script /var/adm/update-scripts/web-filr-5.0.0-150400.54.156.1-restart-9443-services.sh{- 0755 0/0 size 4748}
2024-01-31 14:40:40 <1> filr01(8121) [zypp::exec++] forkspawnengine.cc(start):274 Executing[C] '/bin/sh' '-c' '/var/adm/update-scripts/web-filr-5.0.0-150400.54.156.1-restart-9443-services.sh'
2024-01-31 14:40:40 <1> filr01(8121) [zypp::exec++] forkspawnengine.cc(start):427 pid 13192 launched
2024-01-31 14:45:47 <2> filr01(8121) [zypper] main.cc(testPipe):61 FD(1) pipe is broken
2024-01-31 14:45:47 <2> filr01(8121) [zypper] main.cc(signal_nopipe):77 Exiting on SIGPIPE...
2024-01-31 14:45:47 <2> filr01(8121) [zypper] main.cc(signal_nopipe):77 [hd]: (-3) /usr/lib64/libzypp.so.1722 : zypp::dumpBacktrace(std::ostream&)+0x39 [0x7fcb745ac339]
2024-01-31 14:45:47 <2> filr01(8121) [zypper] main.cc(signal_nopipe):77 [hd]: (-2) /usr/bin/zypper : signal_nopipe(int)+0x73 [0x5589b99d44a3]
2024-01-31 14:45:47 <2> filr01(8121) [zypper] main.cc(signal_nopipe):77 [hd]: (-1) /lib64/libc.so.6 : +0x4adc0 [0x7fcb733bfdc0]
2024-01-31 14:45:47 <2> filr01(8121) [zypper] main.cc(signal_nopipe):77 vvvvvvvvvv----------------------------------------
2024-01-31 14:45:47 <2> filr01(8121) [zypper] main.cc(signal_nopipe):77 [bt]: (0) /lib64/libc.so.6 : __write+0x49 [0x7fcb7347c1a7]
2024-01-31 14:45:47 <2> filr01(8121) [zypper] main.cc(signal_nopipe):77 [bt]: (1) /usr/bin/zypper : signal_handler(int)+0x39 [0x5589b99d3ea9]
2024-01-31 14:45:47 <2> filr01(8121) [zypper] main.cc(signal_nopipe):77 [bt]: (2) /lib64/libc.so.6 : +0x4adc0 [0x7fcb733bfdc0]
2024-01-31 14:45:47 <2> filr01(8121) [zypper] main.cc(signal_nopipe):77 [bt]: (3) /lib64/libc.so.6 : __read+0x44 [0x7fcb7347c0f2]
2024-01-31 14:45:47 <2> filr01(8121) [zypper] main.cc(signal_nopipe):77 [bt]: (4) /lib64/libc.so.6 : _IO_file_underflow+0x121 [0x7fcb734091ef]
2024-01-31 14:45:47 <2> filr01(8121) [zypper] main.cc(signal_nopipe):77 [bt]: (5) /lib64/libc.so.6 : getdelim+0x28a [0x7fcb733fbd08]
2024-01-31 14:45:47 <2> filr01(8121) [zypper] main.cc(signal_nopipe):77 [bt]: (6) /usr/lib64/libzypp.so.1722 : zypp::externalprogram::ExternalDataSource::receiveLine[abi:cxx11]()+0x30 [0x7fcb74674ee0]
2024-01-31 14:45:47 <2> filr01(8121) [zypper] main.cc(signal_nopipe):77 [bt]: (7) /usr/lib64/libzypp.so.1722 : +0x23b367 [0x7fcb74404367]
2024-01-31 14:45:47 <2> filr01(8121) [zypper] main.cc(signal_nopipe):77 [bt]: (8) /usr/lib64/libzypp.so.1722 : zypp::target::TargetImpl::commit(zypp::ZYppCommitPolicy const&, zypp::target::CommitPackageCache&, zypp::ZYppCommitResult&)+0x674 [0x7fcb74406144]
2024-01-31 14:45:47 <2> filr01(8121) [zypper] main.cc(signal_nopipe):77 [bt]: (9) /usr/lib64/libzypp.so.1722 : zypp::target::TargetImpl::commit(zypp::ResPool, zypp::ZYppCommitPolicy const&)+0x1e28 [0x7fcb74410ec8]
2024-01-31 14:45:47 <2> filr01(8121) [zypper] main.cc(signal_nopipe):77 [bt]: (10) /usr/lib64/libzypp.so.1722 : zypp::zypp_detail::ZYppImpl::commit(zypp::ZYppCommitPolicy const&)+0x290 [0x7fcb74569970]
2024-01-31 14:45:47 <2> filr01(8121) [zypper] main.cc(signal_nopipe):77 [bt]: (11) /usr/lib64/libzypp.so.1722 : zypp::ZYpp::commit(zypp::ZYppCommitPolicy const&)+0x20 [0x7fcb7455cfc0]
2024-01-31 14:45:47 <2> filr01(8121) [zypper] main.cc(signal_nopipe):77 [bt]: (12) /usr/bin/zypper : solve_and_commit(Zypper&, SolveAndCommitPolicy)+0xe7f [0x5589b9b78a9f]
2024-01-31 14:45:47 <2> filr01(8121) [zypper] main.cc(signal_nopipe):77 [bt]: (13) /usr/bin/zypper : PatchCmd::execute(Zypper&, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&)+0x9a6 [0x5589b9ae66e6]
2024-01-31 14:45:47 <2> filr01(8121) [zypper] main.cc(signal_nopipe):77 [bt]: (14) /usr/bin/zypper : ZypperBaseCommand::run(Zypper&)+0x149 [0x5589b9a531c9]
2024-01-31 14:45:47 <2> filr01(8121) [zypper] main.cc(signal_nopipe):77 [bt]: (15) /usr/bin/zypper : Zypper::doCommand(int, char**, int)+0xd10 [0x5589b99fe470]
2024-01-31 14:45:47 <2> filr01(8121) [zypper] main.cc(signal_nopipe):77 [bt]: (16) /usr/bin/zypper : Zypper::main(int, char**)+0x4a [0x5589b99d197a]
2024-01-31 14:45:47 <2> filr01(8121) [zypper] main.cc(signal_nopipe):77 [bt]: (17) /usr/bin/zypper : main+0x5a4 [0x5589b99d0c84]
2024-01-31 14:45:47 <2> filr01(8121) [zypper] main.cc(signal_nopipe):77 [bt]: (18) /lib64/libc.so.6 : __libc_start_main+0xef [0x7fcb733aa24d]
2024-01-31 14:45:47 <2> filr01(8121) [zypper] main.cc(signal_nopipe):77 [bt]: (19) /usr/bin/zypper : _start+0x2c [0x5589b99d3d8a]
2024-01-31 14:45:47 <2> filr01(8121) [zypp::exec] abstractspawnengine.cc(checkStatus):203 Pid 13192 was killed by signal 15 (Terminated)
2024-01-31 14:45:47 <2> filr01(8121) [zypper] Zypper.h(immediateExit):194 Immediate Exit requested (0,SigExitTreasure-Void).
2024-01-31 14:45:47 <1> filr01(8121) [zypper] Zypper.cc(cleanup):731 START

Same error on the Searchindex Appliance:

2024-01-31 14:22:49 <1> idx01(5575) [zypp] TargetImpl.cc(RunUpdateScripts):606 Found update script web-filrsearch-5.0.0-150400.21.81.1-restart-9443-services.sh
2024-01-31 14:22:49 <1> idx01(5575) [zypp] TargetImpl.cc(doExecuteScript):449 Execute script /var/adm/update-scripts/web-filrsearch-5.0.0-150400.21.81.1-restart-9443-services.sh{- 0755 0/0 size 1861}
2024-01-31 14:22:49 <1> idx01(5575) [zypp::exec++] forkspawnengine.cc(start):274 Executing[C] '/bin/sh' '-c' '/var/adm/update-scripts/web-filrsearch-5.0.0-150400.21.81.1-restart-9443-services.sh'
2024-01-31 14:22:49 <1> idx01(5575) [zypp::exec++] forkspawnengine.cc(start):427 pid 8303 launched
2024-01-31 14:27:57 <2> idx01(5575) [zypper] main.cc(testPipe):61 FD(1) pipe is broken
2024-01-31 14:27:57 <2> idx01(5575) [zypper] main.cc(signal_nopipe):77 Exiting on SIGPIPE...
2024-01-31 14:27:57 <2> idx01(5575) [zypper] main.cc(signal_nopipe):77 [hd]: (-3) /usr/lib64/libzypp.so.1722 : zypp::dumpBacktrace(std::ostream&)+0x39 [0x7f0e94177339]
2024-01-31 14:27:57 <2> idx01(5575) [zypper] main.cc(signal_nopipe):77 [hd]: (-2) /usr/bin/zypper : signal_nopipe(int)+0x73 [0x55f267cfe4a3]
2024-01-31 14:27:57 <2> idx01(5575) [zypper] main.cc(signal_nopipe):77 [hd]: (-1) /lib64/libc.so.6 : +0x4adc0 [0x7f0e92f8adc0]
2024-01-31 14:27:57 <2> idx01(5575) [zypper] main.cc(signal_nopipe):77 vvvvvvvvvv----------------------------------------
2024-01-31 14:27:57 <2> idx01(5575) [zypper] main.cc(signal_nopipe):77 [bt]: (0) /lib64/libc.so.6 : __write+0x49 [0x7f0e930471a7]
2024-01-31 14:27:57 <2> idx01(5575) [zypper] main.cc(signal_nopipe):77 [bt]: (1) /usr/bin/zypper : signal_handler(int)+0x39 [0x55f267cfdea9]
2024-01-31 14:27:57 <2> idx01(5575) [zypper] main.cc(signal_nopipe):77 [bt]: (2) /lib64/libc.so.6 : +0x4adc0 [0x7f0e92f8adc0]
2024-01-31 14:27:57 <2> idx01(5575) [zypper] main.cc(signal_nopipe):77 [bt]: (3) /lib64/libc.so.6 : __read+0x44 [0x7f0e930470f2]
2024-01-31 14:27:57 <2> idx01(5575) [zypper] main.cc(signal_nopipe):77 [bt]: (4) /lib64/libc.so.6 : _IO_file_underflow+0x121 [0x7f0e92fd41ef]
2024-01-31 14:27:57 <2> idx01(5575) [zypper] main.cc(signal_nopipe):77 [bt]: (5) /lib64/libc.so.6 : getdelim+0x28a [0x7f0e92fc6d08]
2024-01-31 14:27:57 <2> idx01(5575) [zypper] main.cc(signal_nopipe):77 [bt]: (6) /usr/lib64/libzypp.so.1722 : zypp::externalprogram::ExternalDataSource::receiveLine[abi:cxx11]()+0x30 [0x7f0e9423fee0]
2024-01-31 14:27:57 <2> idx01(5575) [zypper] main.cc(signal_nopipe):77 [bt]: (7) /usr/lib64/libzypp.so.1722 : +0x23b310 [0x7f0e93fcf310]
2024-01-31 14:27:57 <2> idx01(5575) [zypper] main.cc(signal_nopipe):77 [bt]: (8) /usr/lib64/libzypp.so.1722 : zypp::target::TargetImpl::commit(zypp::ZYppCommitPolicy const&, zypp::target::CommitPackageCache&, zypp::ZYppCommitResult&)+0x674 [0x7f0e93fd1144]
2024-01-31 14:27:57 <2> idx01(5575) [zypper] main.cc(signal_nopipe):77 [bt]: (9) /usr/lib64/libzypp.so.1722 : zypp::target::TargetImpl::commit(zypp::ResPool, zypp::ZYppCommitPolicy const&)+0x1e28 [0x7f0e93fdbec8]
2024-01-31 14:27:57 <2> idx01(5575) [zypper] main.cc(signal_nopipe):77 [bt]: (10) /usr/lib64/libzypp.so.1722 : zypp::zypp_detail::ZYppImpl::commit(zypp::ZYppCommitPolicy const&)+0x290 [0x7f0e94134970]
2024-01-31 14:27:57 <2> idx01(5575) [zypper] main.cc(signal_nopipe):77 [bt]: (11) /usr/lib64/libzypp.so.1722 : zypp::ZYpp::commit(zypp::ZYppCommitPolicy const&)+0x20 [0x7f0e94127fc0]
2024-01-31 14:27:57 <2> idx01(5575) [zypper] main.cc(signal_nopipe):77 [bt]: (12) /usr/bin/zypper : solve_and_commit(Zypper&, SolveAndCommitPolicy)+0xe7f [0x55f267ea2a9f]
2024-01-31 14:27:57 <2> idx01(5575) [zypper] main.cc(signal_nopipe):77 [bt]: (13) /usr/bin/zypper : PatchCmd::execute(Zypper&, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&)+0x9a6 [0x55f267e106e6]
2024-01-31 14:27:57 <2> idx01(5575) [zypper] main.cc(signal_nopipe):77 [bt]: (14) /usr/bin/zypper : ZypperBaseCommand::run(Zypper&)+0x149 [0x55f267d7d1c9]
2024-01-31 14:27:57 <2> idx01(5575) [zypper] main.cc(signal_nopipe):77 [bt]: (15) /usr/bin/zypper : Zypper::doCommand(int, char**, int)+0xd10 [0x55f267d28470]
2024-01-31 14:27:57 <2> idx01(5575) [zypper] main.cc(signal_nopipe):77 [bt]: (16) /usr/bin/zypper : Zypper::main(int, char**)+0x4a [0x55f267cfb97a]
2024-01-31 14:27:57 <2> idx01(5575) [zypper] main.cc(signal_nopipe):77 [bt]: (17) /usr/bin/zypper : main+0x5a4 [0x55f267cfac84]
2024-01-31 14:27:57 <2> idx01(5575) [zypper] main.cc(signal_nopipe):77 [bt]: (18) /lib64/libc.so.6 : __libc_start_main+0xef [0x7f0e92f7524d]
2024-01-31 14:27:57 <2> idx01(5575) [zypper] main.cc(signal_nopipe):77 [bt]: (19) /usr/bin/zypper : _start+0x2c [0x55f267cfdd8a]
2024-01-31 14:27:57 <2> idx01(5575) [zypp::exec] abstractspawnengine.cc(checkStatus):203 Pid 8303 was killed by signal 15 (Terminated)
2024-01-31 14:27:57 <2> idx01(5575) [zypper] Zypper.h(immediateExit):194 Immediate Exit requested (0,SigExitTreasure-Void).
2024-01-31 14:27:57 <1> idx01(5575) [zypper] Zypper.cc(cleanup):731 START

Has anyone else had this problem?

  • Suggested Answer

    0

    I have been wanting to do a write-up on this since 23.4 came out, but never found the time. It took me 1.5 days of analysis, too.

    Yes, this problem affects everyone whose Filr installation was first deployed before a certain date/appliance file version.

    To fix the issue, you need to make sure the vastorage volume has the label "vastorage". You can use ssh and "yast disk" for that as the "most graphical" solution; a command-line sketch follows below.
    If you can, go back to before the upgrade, add the volume label, and try the upgrade again. If you can't go back, add the volume label and follow the TID on the topic of something like "Search Appliance not available after upgrade to 23.4". This should be fine.
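
    If you prefer the command line over "yast disk", a minimal sketch for checking and setting the label (assuming the vastorage disk is /dev/sdb1 and an ext3/ext4 file system; adjust device and tool to your layout):

    blkid /dev/sdb1                               # check whether a LABEL is already set
    e2label /dev/sdb1 vastorage                   # ext2/ext3/ext4
    # xfs_admin -L vastorage /dev/sdb1            # XFS (file system must be unmounted)
    # btrfs filesystem label /dev/sdb1 vastorage  # btrfs (use the mount point instead if it is mounted)
    ls -la /dev/disk/by-label/                    # a "vastorage" symlink should now show up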

    Background: the issue comes from a sub-script (run by "restart-9443-services.sh") that rewrites /etc/fstab to mount by UUID instead of device name (a good thing). But it looks up the UUID via the volume label, and installations first deployed before version X (I couldn't check, but probably something like 4.0) do not have a volume label on the vastorage disk, so the sub-script dies and restart-9443-services.sh hangs in limbo until it is killed by zypper.
    As a result, the script given in the TID mentioned above is not run, and that script is what applies the configuration to the search code and starts it. Without it, the search service has a broken configuration (with tcl/tk variable names in it) and won't run.
    Since I found that, I simply make sure to add volume labels to vastorage on all Filr installations I come in contact with, or have the admins do it (like I do here now :) ).
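
    To make the failure mode concrete, here is a rough sketch of the label-to-UUID logic such a sub-script performs (this is not the actual OpenText script, just an illustration of why a missing label kills it):

    DEV=$(blkid -L vastorage) || exit 1          # no "vastorage" label -> the lookup fails and the script dies here
    UUID=$(blkid -s UUID -o value "$DEV")        # read the UUID of that device
    sed -i "s|^${DEV}|UUID=${UUID}|" /etc/fstab  # simplified: swap the device path for UUID= in /etc/fstab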

    The fun part: this also happens on the Filr appliance, but on 23.4 there is no script after restart-9443-services.sh, so most people didn't notice during the 23.4 upgrade. Maybe 24.1 (which I have not rolled out yet) runs another script after that, and Filr now chokes on the issue too. The PostgreSQL and CE appliances are new enough to have a vastorage label, so they are fine.

    I do hope this is your issue (confidence: high), and I hope this helps.

    Cheers,
    Christian

  • 0 in reply to 

    Hi Christian,

     

    Thanks for the hint. I set the label vastorage on both the Filr and the Index appliance.

    filridx1:/dev/disk/by-label # ls -la

    total 0

    drwxr-xr-x 2 root root 100 Jan 31 14:56 .

    drwxr-xr-x 7 root root 140 Jan 31 14:56 ..

    lrwxrwxrwx 1 root root  10 Jan 31 14:56 EFI -> ../../sda2

    lrwxrwxrwx 1 root root  10 Jan 31 14:56 ROOT -> ../../sda3

    lrwxrwxrwx 1 root root  10 Jan 31 14:56 var -> ../../sdc1

    filridx1:/dev/disk/by-label # ls -la

    total 0

    drwxr-xr-x 2 root root 120 Feb  1 09:27 .

    drwxr-xr-x 7 root root 140 Jan 31 14:56 ..

    lrwxrwxrwx 1 root root  10 Jan 31 14:56 EFI -> ../../sda2

    lrwxrwxrwx 1 root root  10 Jan 31 14:56 ROOT -> ../../sda3

    lrwxrwxrwx 1 root root  10 Jan 31 14:56 var -> ../../sdc1

    lrwxrwxrwx 1 root root  10 Feb  1 09:27 vastorage -> ../../sdb1

    filridx1:/dev/disk/by-label #

     

    The upgrade then ran without any errors, and the upgrade scripts changed the mounts in /etc/fstab to UUIDs (roughly as shown below).
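
    For reference, the change looks roughly like this (the UUID value is illustrative, not from our system):

    # before:  /dev/sdb1                                  /vastorage  ext4  defaults  0 0
    # after:   UUID=0e28c8c5-8d14-4f2a-9c77-1f4b2a6c0d3e  /vastorage  ext4  defaults  0 0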

     

    After the reboot we were not able to log in to Filr; it shows the message "Service is not available please try again later."

     

    I didn't have enough time to troubleshoot this error, so I reverted the snapshot back to version 23.4.

  • 0 in reply to 

    Hi

    Same here (small deployment)

    I also saw that it got stuck at

    web-filr-5.0.0-150400.54.156.1-restart-9443-services.sh, sub-script /opt/novell/filr_config/updateFstab.sh

    so I edited that sub-script with "exit 0" as soon as it appeared, roughly as sketched below.
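
    A hypothetical one-liner for that edit (the exact timing/method may differ; the point is to make the script a no-op before it runs):

    sed -i '1a exit 0' /opt/novell/filr_config/updateFstab.sh    # insert "exit 0" right after the shebang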

    After that the installation finished, but I also got "Service Unavailable, try again later".

    I reverted back to 23.4.

    regards

    Markus

  • 0   in reply to 

    Mine worked fine. Can you open an SR?

    CU,
    Massimo

  • 0   in reply to   

    Another customer told me today that his update worked fine (small deployment).


    Use "Verified Answers" if your problem/issue has been solved!

  • 0   in reply to   

    Bad news. I tried to update two other customers today (small deployments). Both failed. The disks were labeled the right way ...


    Use "Verified Answers" if your problem/issue has been solved!

  • 0 in reply to   

    Hi Diethmar

    Yesterday I opened an SR, after applying the workaround with the disk labels.

    Filr is still not working:
    Service Unavailable, try again later

    None of these TIDs applies:
    https://portal.microfocus.com/s/article/KM000013518?language=en_US
    https://portal.microfocus.com/s/article/KM000019511?language=en_US

    appserver.log shows "Failed to validate zone 1".

    regards, Markus

  • 0   in reply to 

    I will open two additional SRs tomorrow. Always good if they feel it ...


    Use "Verified Answers" if your problem/issue has been solved!

  • 0   in reply to   

    I have tried to find out the difference between the appliances where the update worked and where it failed.

    Which file system do you use for your second disk /vastorage? (A quick way to check is shown below.)
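
    findmnt -n -o SOURCE,FSTYPE /vastorage    # shows the backing device and file system type
    # or: df -T /vastorage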


    Use "Verified Answers" if your problem/issue has been solved!

  • 0   in reply to   

    No, it seems that the file system is not the cause.

    I see very mixed vastorage file systems: btrfs, xfs, ext3 and ext4. So there is a total mix.

    My update worked without any issues. Strangely enough, my file system is btrfs, which is not really recommended, but I assume my vastorage is a very, very old one that has been carried over from the very beginning.

    Just now I updated another small appliance successfully. vastorage was on ext4.

    So I try to find out any other reasons ...


    Use "Verified Answers" if your problem/issue has been solved!
