Appliance migration to NAM 5 SP1

Has anybody already successfully migrated an appliance to NAM 5 SP1? I get to the point where the AM 5.0.1 appliance is installed as a secondary console, but that installation (or rather, its configuration) is not very successful...

Based on the documentation (https://www.microfocus.com/documentation/access-manager/appliance-5.0/install_upgrade/b10jmps7.html), the process is quite simple:

  1. Install the AM 5.0.1 appliance as a secondary appliance
  2. Convert the secondary 5.0.1 appliance to primary
  3. Delete the old 4.5.x appliance

So I started with step 1: install the AM 5.0.1 appliance as a secondary appliance.

As a starting point I have a NAM 4.5.4 appliance with IDP and AGW all green, but there is also an Analytics Server (5.0.0) installed.

The installation proceeds without any problem, and then the configuration starts (using the VA interface on port 9443). The configuration finishes with the very encouraging popup message "Access Manager is successfully configured." But...

The Identity Server cluster has the new server installed, but the new server's health is red.

Looking at the new server's health details, I can see the following errors:

  • Signing key not available tomcat
  • Encryption key not available
  • SSL Keystore is NOT available

The Access Gateway cluster does not have any new server.

Looking at Troubleshooting -> Version, I can see that the new IDP and AGW were imported (version 5.0.1.0.147). Troubleshooting -> Configuration also shows a new Device Manager under "Other Known NAM Appliances".

I repeated the installation multiple times (fortunately I had taken a snapshot of the 4.5.4 appliance before I started) and noticed the following things:

  • Part of the configuration is the "Adding to the Access Gateway cluster" task, followed by the "Apply Changes" task. Before the AGW was added to the cluster, I was able to see the new AGW in the admin console (not yet part of the cluster, of course), but at that time the AGW's health was red and the detailed status showed something like "Signing key not available tomcat", similar to the status of the IDP.
  • Looking at configure_cluster_2021-09-08_15:41:22.log, I can see the following:

Invoking URL: 10.10.1.164:8443/.../cntl
Response code: 302

Is a response code of 302 OK? For all other calls in that log file the response codes are 200.

  • Looking at nam.keystore on the new appliance also shows that a lot of certificates are missing; there is only one entry, the trusted cert for configca (see the comparison check after the listing):

nam5:/opt/novell/nam/idp/conf # keytool -list -keystore /opt/novell/devman/jcc/certs/nam/nam.keystore
Enter keystore password:
Keystore type: jks
Keystore provider: SUN

Your keystore contains 1 entry

configca, Sep 8, 2021, trustedCertEntry,
Certificate fingerprint (SHA-256): B1:86:97:85:6B:0C:8A:12:B4:AF:76:C2:C3:75:9B:B0:9B:FC:B5:63:7C:EB:58:9A:E1:F2:28:4E:68:A8:7F:EB

nam5:/opt/novell/nam/idp/conf # ll /opt/novell/devman/jcc/certs/nam/nam.keystore
-rw-r--r-- 1 novlwww novlwww 1381 Sep 8 16:07 /opt/novell/devman/jcc/certs/nam/nam.keystore
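
For comparison, the same listing on the still-healthy 4.5.4 appliance should show considerably more entries (device key pairs plus trusted roots), so running the identical command there makes the difference obvious (this assumes the keystore path is the same on the old box):

keytool -list -keystore /opt/novell/devman/jcc/certs/nam/nam.keystore

A single configca trusted cert on the new node, versus a whole list of entries on the old one, suggests the certificate push to the new appliance never completed.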

So, has anybody successfully installed a NAM 5.0.1 appliance as a secondary console to an existing 4.5.x appliance? What have I missed?

Kind regards,

Sebastijan

  • I see that the URL with the 302 response code was shortened. The URL was:

    <primaryadminconsole>/roma/autoconfig/cntl?handler=group_create&actionCmd=AddDevicesToGroup&GroupName=AG-Cluster&appliance_ag-58E8373562E2892A=ag-58E8373562E2892A
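
    To see what the admin console actually answers for that call, it might help to replay it by hand and look at the 302's Location header; a rough sketch (-k skips certificate verification, and note the authentication here is an assumption: the real call runs with the installer's session, so an unauthenticated replay may be redirected to the login page even on a healthy system):

    curl -k -v 'https://<primaryadminconsole>:8443/roma/autoconfig/cntl?handler=group_create&actionCmd=AddDevicesToGroup&GroupName=AG-Cluster&appliance_ag-58E8373562E2892A=ag-58E8373562E2892A'

    If the Location header in the -v output points at a login page, the 302 would hint at an authentication or session problem rather than a successful add.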

  • Hi,

    I see more or less the same thing as you; pushing the certificates resolved the certificate warning. The AG does not get added to the cluster, and I can't recall any way to add an AG to a cluster except at install time.

    The process of adding the new server to the existing NAM was painfully slow, and I thought it had stopped several times.

    /Lelle

  • A little update on my problems with upgrading/migrating to the v5.0.1 appliance.

    Regarding adding and deleting nodes in an AG cluster: according to the documentation it should be possible to add and delete nodes in a cluster. I don't have those options, neither in the 4.5.3 primary admin console nor in the v5.0.1 console.

    /Lelle

  • Repushing certificates does not work for me. When I do that, I can see the following in app_sc.0.log on the admin console:

    33790(D)2021-09-09T06:56:09Z(L)webui.sc.servlet(T)2148(C)com.volera.roma.app.handler.TroubleshootingHandler(M)doRepushCertificates(Msg)Starting repush of certificates
    33791(D)2021-09-09T06:56:09Z(L)webui.sc(T)2148(C)com.volera.roma.app.handler.CertHandler(M)repushCertificates(Msg)In repushCertificates - Singlebox deployment
    33792(D)2021-09-09T06:56:09Z(L)webui.sc(T)2148(C)com.volera.roma.app.handler.CertHandler(M)repushCertificates(Msg)Processing keystore ag-58E8373562E2892A-proxy
    33793(D)2021-09-09T06:56:09Z(L)webui.sc(T)2148(C)com.volera.roma.app.handler.CertHandler(M)repushCertificates(Msg)Singlebox associated with the keystore : 10.10.1.161
    33794(D)2021-09-09T06:56:09Z(L)webui.sc(T)2148(C)com.volera.roma.app.handler.CertHandler(M)repushCertificates(Msg)Devices in this box: [idp-3E18D20B66C941D1, ag-58E8373562E2892A, idp-esp-58E8373562E2892A]
    33795(D)2021-09-09T06:56:09Z(L)webui.sc(T)2148(C)com.volera.roma.app.handler.CertHandler(M)repushCertificates(Msg)Pushing certificates to keystore ag-58E8373562E2892A-proxy
    33796(D)2021-09-09T06:56:09Z(L)webui.sc(T)2148(C)com.volera.roma.app.handler.CertHandler(M)repushCertificates(Msg)Repush certificates ignored for keystore (ag-58E8373562E2892A-proxy): No keys in keystore
    33797(D)2021-09-09T06:56:09Z(L)webui.sc(T)2148(C)com.volera.roma.app.handler.CertHandler(M)repushCertificates(Msg)Pushing certificates to keystore ag-58E8373562E2892A-proxy-truststore
    33798(D)2021-09-09T06:56:09Z(L)webui.sc(T)2148(C)com.volera.roma.app.handler.CertHandler(M)repushCertificates(Msg)Repush certificates ignored for keystore (ag-58E8373562E2892A-proxy-truststore): No keys in keystore

    Since the admin console thinks nothing should be in the keystore, I assume it is not just a problem with pushing certs to the local filesystem; something is corrupted in the config store.

  • Your certificate repush problem might be related to something I am experiencing in a 4.5 appliance cluster environment: https://support.microfocus.com/kb/doc.php?id=7025146. Maybe copying nam.keystore from the old to the new appliance (after creating a backup of the target file) will help you with the certificates; see the sketch below. This appliance cluster certificate issue is supposed to be fixed with 5.x, but at this point your primary AC is still the old appliance version...
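
    A minimal sketch of that copy, assuming root SSH access between the appliances and the paths shown earlier in the thread (back up the target first, keep the novlwww ownership seen in the ll output above, and restart the NAM services afterwards so they re-read the keystore):

    # on the new appliance: back up the current, nearly empty keystore
    cp /opt/novell/devman/jcc/certs/nam/nam.keystore /opt/novell/devman/jcc/certs/nam/nam.keystore.bak
    # pull the keystore from the old 4.5.4 appliance (host name is a placeholder)
    scp root@<old-appliance>:/opt/novell/devman/jcc/certs/nam/nam.keystore /opt/novell/devman/jcc/certs/nam/
    # restore the expected ownership
    chown novlwww:novlwww /opt/novell/devman/jcc/certs/nam/nam.keystore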

  • Thanks for the idea. It "solved" the problem with the "keys not available" errors, but I was still not able to push any changes from the admin console to the new IDP. The AGW is also still missing from the cluster.

    I think the certificate error is a "consequence", not the "cause", of the problem.

    Probably something is wrong in the configuration store (internal eDirectory), since app_sc.0.log says "Repush certificates ignored for keystore (ag-58E8373562E2892A-proxy): No keys in keystore".

  • Maybe the repush was ignored because the new AG is not a member of the cluster?

    Since you have snapshots, you could play around with the following command found in the config_cluster log:

    /opt/novell/java/bin/java -classpath "/opt/novell/nam/adminconsole/webapps/roma/WEB-INF/lib/*" com.volera.roma.app.handler.AutoConfigHandlerWrapper addToAGCluster ... ... ... ...

    It seems to be the command that adds the AG to the cluster and leads to the 302 response code on your system. During the first upgrade/migration I am currently testing, the response code was 200, and both IDP and AG were added to their respective clusters and now have a green health status.
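
    The exact arguments for your system should be in the configure_cluster log itself; something like the following should pull out the full invocation (file name taken from the earlier post, path depends on where the installer writes its logs):

    grep -A 2 addToAGCluster configure_cluster_2021-09-08_15:41:22.log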

  • By the way, during the configuration I also saw something like "Signing key not available for tomcat" in the AC for the new AG, but it later disappeared.

  • Any updates on this? I tried doing this in my lab, and it was a miserable failure as well.

    I too am wondering if anyone has been successful.

    Matt

  • Unfortunately not. I've manually repushed the certs to the new appliance and manually triggered AutoConfigHandlerWrapper, but I still got 302 as the response.

    When you're doing the upgrade in a lab, are you using a vanilla 4.5.x appliance as the migration source?

    The appliance that I'm trying to migrate was set up years and versions ago, with a lot of in-place upgrades, so I'm wondering if maybe there's something odd in my appliance configuration store.

    I can see some *tmp* objects in my ou=AppliancesContainer, which seem to be in use.
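
    Those objects can also be listed straight from the configuration store over LDAP; a sketch, assuming the internal eDirectory on the admin console listens on 636 and the usual o=novell tree (the base DN below is an assumption — adjust it to wherever your ou=AppliancesContainer actually sits; LDAPTLS_REQCERT=never just skips verification of the internal CA for a quick look):

    LDAPTLS_REQCERT=never ldapsearch -H ldaps://localhost:636 -x -D "cn=admin,o=novell" -W \
        -b "ou=AppliancesContainer,o=novell" "(cn=*tmp*)" cn modifyTimestamp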