Update Filr 2.0 to 3.0 issue

Hello,

I am trying to update Filr 2 (fully patched) to Filr 3.0 Advanced in a small environment (all-in-one appliance). I performed everything described in:

https://www.novell.com/documentation/filr-3/filr-inst/data/t43dt993zq1c.html

However, when I power on the appliance (section 11.2.5, "Deploying the Upgraded Filr VM") and access the console, I get a warning:



I have 3 disks in my VM:

First - the boot / disk that is included by default
Second - the existing /vastorage disk, which is a copy of the Filr 2.0 /vastorage disk
Third - the newly created /var disk

Could anyone help me please?

Best Regards
  • Unfortunately, that Filr Upgrade Pre-Flight checklist does not cover anything related to my issue.

    The things I performed (before copying the disk) were: changing WebDAV authentication to basic, and making sure that the second disk in VMware is independent-persistent.

    The rest of the pre-flight checklist refers to clustered environments with /vashare issues or database issues, not to my problem.

    The issue is still occurring. Do you have any other ideas? I would be very grateful.

  • The pre-flight check indeed focuses on a cluster, as that is the most common setup, but it is still valid for all-in-one small and one-of-each deployments; only the large /vashare setups differ.



    I would focus on /etc/sysconfig/novell/NvlVAinit.



    As it appears the appliance is unable to find the vastorage disk, double-check these values on the "old" Filr 2 appliance:
    CONFIG_VAINIT_STORAGE_TYPE="device"
    CONFIG_VAINIT_DEVICE_STORAGE="type:existing drive:sdb name:sdb1 format:MSDOS fs:[chosen.filesystem (in general ext3)]" (!!)



    (!!) In case this value reads "type:uninitialized drive:sdc size:* format:ext3", it should be set to CONFIG_VAINIT_DEVICE_STORAGE="type:existing drive:auto name:auto format:auto"



    If this requires changes, make and save them, then run "python /opt/novell/base_config/zipVAConfig.py" so that vaconfig.zip (used for the upgrade) is updated.
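    As a sketch of that edit (working on a local sample copy rather than the live file, and assuming the value currently has the "uninitialized" form mentioned above):

```shell
# Sketch only: edit a local sample instead of the live
# /etc/sysconfig/novell/NvlVAinit on the Filr 2 appliance.
cat > NvlVAinit.sample <<'EOF'
CONFIG_VAINIT_STORAGE_TYPE="device"
CONFIG_VAINIT_DEVICE_STORAGE="type:uninitialized drive:sdc size:* format:ext3"
EOF

# Rewrite the storage line to the "existing/auto" form suggested above.
sed -i 's|^CONFIG_VAINIT_DEVICE_STORAGE=.*|CONFIG_VAINIT_DEVICE_STORAGE="type:existing drive:auto name:auto format:auto"|' NvlVAinit.sample

cat NvlVAinit.sample

# On the real appliance, edit /etc/sysconfig/novell/NvlVAinit itself, then
# refresh the archive with:  python /opt/novell/base_config/zipVAConfig.py
```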



    Then re-copy or re-clone the disk used for /vastorage and attach it to a (newly deployed) Filr 3 appliance.



    PS: It does no harm to check the other files mentioned in that TID, but the value /vashare/filr should be /vastorage/filr.

    The Lucene (search index) settings should point to the appliance itself; some files may have next to no content.



    Kind Regards,

    Bart.



  • Thank you for your reply!

    I checked everything and compared "/vashare" to "/vastorage" etc. Everything matched, but then I saw that I had:

    CONFIG_VAINIT_DEVICE_STORAGE="type:existing drive:sdb name:sdb1 format:GUID fs:ext3"

    so I changed "GUID" to "MSDOS" and ran python zipVAConfig.py; that was the only change I made. Then I copied the disk (of course, the whole appliance was shut down and the disk set to independent-persistent before copying).

    Unfortunately, my problem still exists. If you have any other ideas, please help.

  • When changing that, it is best to use "type:existing drive:auto name:auto format:auto"; this basically forces the appliance to read the header of the disk.



    One more thing comes to mind to double-check:

    make sure that the SCSI ID for the sys disk is SCSI 0:0, for the disk used for /vastorage it is SCSI 1:0, and for /var (the new disk) it is SCSI 2:0.




    Kind Regards,

    Bart.


  • I checked the disks. They were correct (sys SCSI 0:1, vastorage SCSI 0:2, var SCSI 0:3). I also changed those parameters to "auto", then ran python zipVAConfig.py, and nothing happened. The problem still occurs.

    When I run "python zipVAConfig.py", should I get any output on the screen? I see none. Maybe the script is broken? Do you have any ideas?

  • Not SCSI 0:1, 0:2, 0:3, but:

    - SCSI 0:0 (first controller, first disk) for sys

    - SCSI 1:0 (second controller, first disk) for /vastorage

    - SCSI 2:0 (third controller, first disk) for /var
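    One way to verify this layout without opening the vSphere client is to look at the scsiX:Y entries in the VM's .vmx file. The excerpt below is a hypothetical sample (the disk file names are made up); on a real host you would grep the VM's own .vmx file in the datastore:

```shell
# Hypothetical .vmx excerpt; created here only so the grep has input.
cat > filr3.sample.vmx <<'EOF'
scsi0:0.fileName = "filr3-sys.vmdk"
scsi1:0.fileName = "filr2-vastorage.vmdk"
scsi2:0.fileName = "filr3-var.vmdk"
EOF

# Each disk should sit at unit 0 of its own controller (scsi0:0, scsi1:0, scsi2:0).
grep -E '^scsi[0-9]+:0\.fileName' filr3.sample.vmx
```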



    Make sure to use VMware ESX(i) 5.5 or later.



    If it then still fails, it is best to file a Service Request, as it requires more investigation and debugging.



    Kind Regards,

    Bart.



  • Hello,

    I am sorry for the long inactivity. The problem is solved!

    I believe it was the SCSI controllers (1:0, 2:0, 3:0). I had to create the additional controllers in the VMware machine settings (because I only had SCSI 0), and then everything went fine.

    Best Regards