Identity Manager Applications with HTTPS on standard port 443 using NGINX

I have decided that I am tired of running the Identity Applications on port 8543, and I wanted to put them on the standard port like a "normal person" :-). The focus is primarily on users, so they do not have to remember the exact URL with the port, or need an extra "place to go and click".

The Identity Applications run (at least in my case) under a single Tomcat instance as the regular user 'novlua', and hence cannot bind to privileged ports (1-1024). I am using Identity Manager 4.7 on Linux.

Searching through the NetIQ forums, I found out that, in general, this can be done in several ways:

  • Fronting the IdM Apps with Access Manager (the recommended and supported solution)

  • Running Tomcat as root (do not do this: a compromised application will compromise your whole box)

  • Using solutions that allow Tomcat to bind to privileged ports:

    • authbind [1], [2]

    • iptables rules that redirect ports 80/443 -> 8080/8543 [1]

  • Fronting the IdM Apps Tomcat with an HTTP server acting as a proxy:

    • Apache (they have done this in Open Enterprise Server, and there are some indications that in the future it might be done in the IdM Apps as well)

    • Nginx (as an HTTP proxy; there is also an AJP module, but its development looks quite dead to me)

    • HAProxy?

  • And probably more things could be put in front of Tomcat...
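Of the non-proxy options, the iptables redirect is the quickest to try. A minimal sketch, assuming the default Tomcat connector ports 8080/8543 (note these rules are not persistent across reboots unless saved):

```shell
# Redirect the privileged ports to Tomcat's unprivileged connectors.
# PREROUTING handles traffic from remote clients; add matching OUTPUT
# rules if you also need redirection for connections from localhost.
iptables -t nat -A PREROUTING -p tcp --dport 80  -j REDIRECT --to-ports 8080
iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 8543
```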

Since I have some background in web development and managing web servers, and I also love nginx, I decided to take the challenge and try nginx as an HTTPS proxy for the Identity Applications. This is not even close to a supported solution, and you probably should not try this in a production environment.

Let's have some fun.

I am using two servers:

  • SLES 12 SP3

    • Identity Manager Engine [Version 4.7.0]

    • iManager Web Administration [Version 3.1.0]

    • Identity Reporting [Version "6.0.0"]

  • CentOS Linux 7 (Core)

    • Identity Applications [Version "4.7.0"]

  • CentOS is not supported (!), but RHEL 7 is, and since they are binary compatible I tried it out of interest. I did not notice any obvious issues; I even have SELinux enforcing without any hassle.

IDV installed, IdM Apps installed and working on HTTPS port 8543 like a charm. I want users to not have to care about server names, so I created a CNAME and configured the IdM Apps for it. Now it is time to install nginx on the IdM Apps server.

I added my own nginx repo, where I create my own nginx mainline RPM builds and modules. They are nothing really special, and you can use the official nginx mainline repo instead.

The main nginx config is /etc/nginx/nginx.conf, which I modify to my needs.
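My exact modifications are not reproduced here; a minimal sketch of the kind of setup I mean (stock RPM paths, illustrative worker and timeout values) might look like this:

```nginx
user  nginx;
worker_processes  auto;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    sendfile           on;
    keepalive_timeout  65;

    # Pull in the per-vHost configs created below
    include /etc/nginx/conf.d/*.conf;
}
```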

Now I disable the default server and create the structure for my vHost(s):

```shell
mv /etc/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf.original
mkdir -p /etc/nginx/conf.d/includes
touch /etc/nginx/conf.d/ /etc/nginx/conf.d/includes/ssl_security_options.conf
```


In the configs I left some things commented out on purpose, so you can see or test some different options. Headers like 'X-Xss-Protection' or 'X-Frame-Options' had to be commented out in order for the Apps to behave properly.

The first server{} block, on port 80, just redirects everything to HTTPS.
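A redirect block of that kind is short; the server_name here is a placeholder for the real CNAME:

```nginx
server {
    listen 80;
    server_name idmapps.example.com;   # placeholder for your CNAME

    # Send every plain-HTTP request to the HTTPS vHost
    return 301 https://$host$request_uri;
}
```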

The second server{} block listens on 443 and reacts to the Host header. Basically, the current idea is that it proxies every HTTP request to the Tomcat upstream server. Along the way, it terminates the SSL connection and proxies the request to the HTTPS upstream. Tomcat does not have to run on the same server, as can be seen when proxying /nps/ or in the vHost (iManager runs on a different server). The proxy buffer values were bumped up a little, because I ran into this error when proxying:

```
*1 upstream sent too big header while reading response header from upstream, client
```
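A sketch of the relevant pieces of the 443 server{} block, with bumped buffer values; the certificate paths, upstream address, and exact buffer sizes are illustrative assumptions, not my exact configuration:

```nginx
server {
    listen 443 ssl http2;
    server_name idmapps.example.com;               # placeholder CNAME

    ssl_certificate     /etc/nginx/ssl/idmapps.crt;   # assumed paths
    ssl_certificate_key /etc/nginx/ssl/idmapps.key;

    location / {
        # Terminate SSL here and re-encrypt to Tomcat's HTTPS connector
        proxy_pass https://127.0.0.1:8543;

        # Larger buffers to avoid "upstream sent too big header"
        proxy_buffer_size       16k;
        proxy_buffers           8 16k;
        proxy_busy_buffers_size 32k;
    }
}
```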

I had to set an SELinux boolean, setsebool -P httpd_can_network_connect 1, otherwise I ran into this error:

```
*1 connect() to failed (13: Permission denied) while connecting to upstream, client:, server:, request: "GET /idmdash HTTP/2.0", upstream: "", host: ""
```

I am aware that the nginx config is far from perfect. Some proxy values might be redundant; some caching might be counterproductive or ineffective. Also, when I have nginx in place, I rarely proxy everything, and I would like to see nginx serving static files directly. That, and probably more, should indeed be explored and tested further, but I hope I have established a good starting point for someone who would like to test this path.

We are now at the point where, from the user's perspective, the Identity Apps operate on https@443, and that might be sufficient for certain use cases (like mine).

From the backend perspective, very little has actually changed. The IdM Apps still operate on port :8543 as before; we did not make any changes there. When you open the developer tools in your favourite browser, you can see in the Network tab that some requests are still made to port 8543, mainly the https://* parts.
Note: make sure that Tomcat and OSP use the same SSL certificates as nginx; this might save you some headaches. I took my Base64 certificates and imported them into osp.jks, and also created a keystore with the same certificates and pointed Tomcat to it.

```xml
<Connector port="8543" protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
           clientAuth="false" sslProtocol="TLSv1.2"
           keystoreFile="conf/" keystorePass="changeit"
           sslEnabledProtocols="TLSv1.2" />
```

For keystore manipulation you can use keytool, or my favourite GUI tool, Portecle.
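A keytool sketch of that import; the alias, file names, keystore names, and passwords are illustrative assumptions:

```shell
# Import the Base64 (PEM) server certificate into the OSP keystore
keytool -importcert -alias idmapps -file server.pem \
        -keystore osp.jks -storepass changeit

# Build a Tomcat keystore from the private key + certificate
# by going through an intermediate PKCS12 bundle
openssl pkcs12 -export -in server.pem -inkey server.key \
        -out server.p12 -name idmapps -passout pass:changeit
keytool -importkeystore -srckeystore server.p12 -srcstoretype PKCS12 \
        -srcstorepass changeit -destkeystore tomcat.ks -deststorepass changeit
```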

I wanted to go further and get rid of the :8543 requests completely, so I fired up and started modifying things.
Note: I am using a headless CentOS server, so no GUI is available. To run a GUI application I had to connect with 'ssh -X idm2' and use X forwarding. I also had to install the xorg-x11-xauth, libXext, libXrender and libXtst packages for it to work.

I changed the OAuth server TCP port to 443 and, in the SSO clients, removed all :8543, so they have a clean HTTPS URL with no port specified. This is where I hit a wall with OSP. OSP does not like it. It really hates it when there is no port in the URL and will refuse to work. Even if I type the :443 explicitly, every common browser will strip that redundant :443 and send the URL without it. And OSP will always complain, because the URLs do not match exactly. Even though they kind of do, right?

```json
{"Fault":{"Code":{"Value":"Sender","Subcode":{"Value":"XDAS_OUT_POLICY_VIOLATION"}},"Reason":{"Text":"Unrecognized interface. Invalid Host Header Name or Request URL Domain Name."}}}
```

I was not able to overcome this (yet?). Even though I tried to manually rewrite with my desired values and to remove :443 completely from, I was not able to move forward. Maybe someone else can point me in some direction?

While finishing and polishing this article, I found out what the problem was. I turned on ALL debug on OSP and went through the log.
```
Preamble: [OSP]
Priority Level: FINER
Java: internal.osp.common.logging.HttpRequestLogger.log() [340] thread=https-jsse-nio-8543-exec-9
Time: 2018-07-11T13:46:19.709+0200
Log Data: HttpServletRequest (Number 1)
Method: GET
Request URL: /osp/a/idm/auth/oauth2/grant
Query String: ?redirect_uri=
Scheme: https
Context Path: /osp
Servlet Path: /a
Path Info: /idm/auth/oauth2/grant
Server Name:
Server Port: 8543
Locale: en_US
Host IP Address:
Remote Client IP Address:
user-agent=Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3472.3 Safari/537.36
accept=text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8
accept-encoding=gzip, deflate, br
```

This part of the headers did not seem right to me: the server port was still 8543. Why? I went back to the nginx config, and yeah, I had set Host to $proxy_host. That's why!! After changing it to 'proxy_set_header Host $host;', OSP seems to be happy.
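In nginx terms the fix is a one-line change inside the proxy location; the upstream address here is an illustrative assumption:

```nginx
location / {
    proxy_pass https://127.0.0.1:8543;      # Tomcat HTTPS connector (assumed)

    # proxy_set_header Host $proxy_host;    # wrong: OSP sees the upstream host:8543
    proxy_set_header Host $host;            # right: preserve the client's Host header
}
```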
One interesting article:

I hope you found my testing and findings at least a little bit informative and enjoyed reading. I want to thank you for reading my very first post on Cool Solutions, and if you have any suggestions or pointers for me, I would very much welcome them.



Comment List
  • I have just updated the github gist. Use the latest version if you have problems with cookie and session persistence.

    Also added static file serving for /idmdash/commons/.