IDM 4.8 upgrade - unable to login to idmdash or idmadmin
Just upgraded to IDM 4.8 from 4.7.2. The environment is integrated with IG 3.5.1 and NAM 4.5 via OSP/SAML. After the upgrade, I am properly redirected to NAM, log in, it bounces back to OSP, and I land on the applications:
- SSPR works fine
- IG works fine
- /idmdcs works fine
- /idmdash and /idmadmin only show "Your login process did not complete successfully." In the OSP log, I see: "Log Data: Err: invalid_request, Sub: invrediruri, Desc: Redirect URI mis-match., Code: 400".
As a note, all of the services are protected by the NAM MAG as a reverse proxy. The only configuration change was adding the new /workflow and /forms endpoints. I'm trying to understand which redirect URI is wrong; everything in ism-configuration.properties looks fine. The URIs currently do not have :443 on them. I tried adding :443, but that gives the expected error from OSP that it does not match the URL. I'm not getting much beyond that one logged OSP error.
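For what it's worth, OSP appears to validate the redirect_uri with a strict string comparison against the registered value (as RFC 6749 recommends), which would explain why adding or dropping :443 just flips which side of the mismatch you land on. A minimal sketch of that comparison, with example hostnames:

```shell
# Illustrative only: two URLs that resolve to the same endpoint can
# still fail a strict string comparison. Hostnames are examples.
registered="https://idmapps.example.com/idmdash/oauth.html"
presented="https://idmapps.example.com:443/idmdash/oauth.html"

if [ "$registered" = "$presented" ]; then
  echo "match"
else
  echo "mismatch"   # this branch runs: the :443 makes the strings differ
fi
```

So the registered value and what the client actually presents (after any proxy rewriting) have to be byte-for-byte identical, default port or not.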
GCA Technology Services
If anyone else runs into a similar issue, please contact Tech Support and reference SR# 101270427541 (the support ticket for which this issue was resolved)
Could you possibly summarize what the resolution was?
I.e., does it require a patch? Does it require config changes?
I have a sneaking suspicion I know one way to resolve this; curious what you found.
My particular situation had a NetIQ Access Manager proxy (access gateway) performing the proxy operations, but I suppose this would have a similar problem with anything else in front of it doing port translation.
My IDApps were configured to run on port 8443 (OOTB port), but I was doing a port translation in the Access Gateway over to 443, so my URL would be normalized and not require the :8443 junk on it. This was ultimately my issue.
I borrowed a page from the NAM documentation (https://www.netiq.com/documentation/access-manager-45/install_upgrade/data/b6fyxpk.html) to have the local Linux firewall redirect port 443 over to 8443, so that none of my URLs in the configuration required a :443 and all of the :8443 garbage would disappear. This seemed to solve my problem.
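The approach in that NAM document boils down to a NAT redirect on the local firewall. A sketch of the idea (run as root; the ports and the loopback restriction are assumptions to adapt to your own environment, and the NAM doc linked above has the authoritative version):

```shell
# Redirect inbound HTTPS (443) to Tomcat's real listener (8443),
# so external URLs never need the :8443.
iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 8443

# Do the same for locally generated traffic to the box's own addresses
# (routed over lo), so callbacks like IDMProv -> OSP on https://host/...
# also land on 8443. Restricting to -o lo avoids rewriting genuine
# outbound 443 calls to external services such as SSPR.
iptables -t nat -A OUTPUT -o lo -p tcp --dport 443 -j REDIRECT --to-ports 8443
```

Remember these rules are not persistent by default; persist them with whatever mechanism your distro uses.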
In my troubleshooting, what I was finding is that in the OAuth transaction (IDApps login), it would redirect over to OSP (then OSP up to NAM via SAML, but that is irrelevant as it wasn't the problem child). The full process would complete, but /IDMProv (the API layer) was unable to verify its bearer tokens. So, I was logged into OSP and had a proper bearer token and refresh token, but when IDMProv would send the token over to OSP to verify it, that step was failing. There wasn't anything good in any of the logs, even with OSP logging cranked all the way up. I found this was the issue by manually doing the entire login process in Postman, then finding that the bearer tokens in subsequent calls weren't working.
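A rough curl equivalent of that Postman check, for anyone who wants to reproduce it: the endpoint paths, client ID/secret, and REST path below are placeholders, not verified values, so take the real ones from your own ism-configuration.properties and OSP configuration before trying this.

```shell
# 1. Get a token from OSP's OAuth2 grant endpoint (placeholder values).
TOKEN=$(curl -sk "https://idapps.mydomain.com/osp/a/idm/auth/oauth2/grant" \
  -d "grant_type=password&username=uaadmin&password=secret" \
  -d "client_id=rbpm&client_secret=secret" \
  | sed -n 's/.*"access_token" *: *"\([^"]*\)".*/\1/p')

# 2. Replay the bearer token against the IDMProv REST layer -- the
#    token-verification step that was silently failing in my case.
curl -sk -H "Authorization: Bearer $TOKEN" \
  "https://idapps.mydomain.com/IDMProv/rest/access/info/version"
```

If step 1 succeeds but step 2 comes back unauthorized, you're looking at the same IDMProv-to-OSP verification failure described above.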
Something under the covers seemed to not like the fact that the traffic would be :8443, but have an X-Forwarded-Port of 443. I suspect that may be some sort of bug, but the way I was able to get around it was by essentially making the entire solution end-to-end work on port 443.
The link to the NAM documentation I provided was to take an Identity Server and have it configured for 443 even though it actually listens on 8443. This uses the local linux firewall to do that translation so that no components of the solution ever see an 8443 anywhere. I've been using this for a while in IDApps configurations to simplify the solution and it seems to prevent quite a bit of madness.
A secondary fix that I also added to prevent cluster issues is to override DNS (via the /etc/hosts file) for the IDApps FQDN (idapps.mydomain.com/idmdash, ..../osp, .../IDMProv, etc.) to point to itself on each of the IDApps servers. The reason I did this is that if you rely on DNS, when /IDMProv has a bearer token it needs to verify, it'll run it back out to the load balancer and possibly hit another node. This was giving me fits because an error on one node could potentially take down completely separate nodes. This simple local network fix kept all traffic isolated within the node itself. Note: you CANNOT do this if you are running SSPR on that server. SSPR will specifically bomb. In my case, I am using SSPR appliances, so it isn't an issue.
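The hosts-file override above is just one line per node; the IP and FQDN here are examples, and each node pins the shared FQDN to its own address:

```shell
# /etc/hosts on IDApps node 1 (10.0.0.11 and the FQDN are examples):
# pin the shared IDApps FQDN to this node itself, so IDMProv's
# token-verification calls to OSP never leave the box via the
# load balancer and can't be bounced to a different cluster node.
10.0.0.11   idapps.mydomain.com
```

Node 2 would carry the same FQDN mapped to its own IP, and so on. As noted above, skip this on any box also running SSPR.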
GCA Technology Services
That was exactly the solution I thought you were going for.
I actually only learned of this a few months ago, working on a client system my colleague had implemented this way, and I did not understand why.
Shorter version of Rob's note:
Using iptables on the OSP box allows OSP to think it is working on 443 even though Tomcat is really listening on 8443, so the metadata OSP offers matches what clients actually see.
Howdy, hopefully someone can assist. I am seeing this issue on a clean install of IDM 4.8, installed using the server's IP address, all on a single box: iManager, IDM Apps, and OSP, with SSPR already running as an external appliance.
I have a support case raised, which pointed me to this thread, but even with the iptables redirect in place, there is still something missing.
I have Access Manager 4.5 as a reverse proxy only; the resource is public, as I am happy to let OSP do the work (if it will bloody work, hahaha).
IDMApps configured to run on 8543
iptables updated to redirect 443 to 8543. It resolves on the host, goes to the login screen, accepts the credentials, and completes the process, but then the logs show:
Priority Level: FINEST
Java: internal.osp.oidp.service.oauth2.handler.RequestHandler.setJsonError()  thread=https-jsse-nio-8543-exec-1
Log Data: Err: invalid_request, Sub: invrediruri, Desc: Redirect URI mis-match., Code: 400
# cat ../conf/ism-configuration.properties | grep redirect
com.novell.afw.portal.aggregation.redirect_after_request = true
com.netiq.forms.redirect.url = http://localhost:8180/forms/oauth.html
com.netiq.sspr.redirect.url = https://sspr.internal.domain/sspr/public/oauth
com.netiq.ualanding.redirect.url = https://10.120.244.145/landing/oauth.html
com.netiq.ualanding.additional.redirect.urls = https://idmportal.internal.domain/landing/oauth.html
com.netiq.idmadmin.redirect.url = https://10.120.244.145/idmadmin/oauth.html
com.netiq.idmadmin.additional.redirect.urls = https://idmportal.internal.domain/idmadmin/oauth.html
com.netiq.rbpm.redirect.url = https://10.120.244.145/IDMProv/oauth
com.netiq.rbpm.additional.redirect.urls = https://idmportal.internal.domain/IDMProv/oauth
com.netiq.idmdash.redirect.url = https://10.120.244.145/idmdash/oauth.html
com.netiq.idmdash.additional.redirect.urls = https://idmportal.internal.domain/idmdash/oauth.html
The host headers coming in are actually the IP address of the service. If I change to the hostname via NAM's "Forward received host name" option, it still breaks, or works but does not return the access token.
If I make the request directly on the IP address, everything works fine; if I make it through the NAM proxy, it will not...
This is doing my head in; what am I missing?
Hello, that's correct. The URL listed in additional.redirect.urls is what the NAM domain uses.
If I tell NAM to send the IP address on to the server, or the above URL, same thing...
and it all works no problem.
Funny thing is I can replicate the behavior in a lab also, with it all installed the same way: Access Manager in front, whitelisting the URL. It will only work with the IP address direct, not through Access Manager.
If I had more hair I would be bald; this is doing my head in, haha.