IDM stays at 0/2 after upgrading CDF to 2019.02 (readiness probe fail)


After upgrading CDF to version 2019.02, kube-status shows that the idm deployment is not coming up.

From the kube-status output:


kubectl get deployments -n core:

idm   0/2


kubectl get pods -n core:

kubectl describe pod idm-597756fd98-5bksp -n core:



Warning ....  .....               Readiness probe failed: HTTP probe failed with statuscode: 404


I cannot log in to either the CDF administration page or the suite pages.


Any ideas, please?







  • Hi Levent,

    For this issue, I would suggest getting more information from the idm logs; you can run the following command:

    kubectl logs -f idm-xxxx -n core -c idm



  • Hi,

    There is an error in that log. It says one or more configuration files were not provided.


    2019-06-11T09:04:30.623 0000 INFO [localhost-startStop-1] - was not initialized explicitly. Trying to initialize it implicitly from lwssofmconf.xml
    2019-06-11T09:04:31.712 0000 INFO [localhost-startStop-1] - StringEncrypter [ isUseEncryption = false] : One or any of configuration files is not provided ...
    2019-06-11T09:04:31.819 0000 INFO [localhost-startStop-1] - Building of configuration completed in 1195 milliseconds.
    2019-06-11T09:04:31.846000000 0000 SEVERE org.apache.catalina.core.StandardContext startInternal One or more listeners failed to start. Full details will be found in the appropriate container log file
    2019-06-11T09:04:31.852000000 0000 SEVERE org.apache.catalina.core.StandardContext startInternal Context [/idm-service] startup failed due to previous errors



    Before running upgrade-l and upgrade-u, I had not modified any files in the upgrade folder. Should I have done so?
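Once the pod log has been saved locally (for example with kubectl logs idm-xxxx -n core -c idm > idm.log), the lines marking the failed startup can be isolated with a plain grep. A minimal sketch, using sample lines in place of the real log:

```shell
# Sample lines standing in for a real idm.log; in practice you would save it with
#   kubectl logs idm-xxxx -n core -c idm > idm.log
cat > idm.log <<'EOF'
2019-06-11T09:04:31.819 INFO  [localhost-startStop-1] - Building of configuration completed in 1195 milliseconds.
2019-06-11T09:04:31.846 SEVERE org.apache.catalina.core.StandardContext startInternal One or more listeners failed to start.
2019-06-11T09:04:31.852 SEVERE org.apache.catalina.core.StandardContext startInternal Context [/idm-service] startup failed due to previous errors
EOF

# Surface only the SEVERE entries that mark the failed startup, with line numbers.
grep -n 'SEVERE' idm.log
```

The same filter against a full log narrows a few thousand lines down to the actual failure points.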


  • Can you attach the log file generated with this command: kubectl logs idm-xxxx -n core -c idm > idm.log? I am not sure whether the error message you posted is the cause or not.



  • Hi, 

    OK, I have attached those logs. I masked private information such as hostnames and server names in the logs. There are two pods in the core namespace; logs for both are attached.

    Thanks in advance.
  • From the log (lines 370 to 582), we can see that it was caused by the seeded-data migration failing. You may need to check the seeded file /opt/kubernetes/cfg/idm/seeded/com.microfocus.cdf__2019.02__Add_Update_User.json and the "organizations" table of the CDF database, to see whether there is an organization with id 2c90928f67d13d970167d13daddb00b3. If it is missing, that is likely some kind of data issue that needs to be fixed first. Thanks, Elva
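A quick way to confirm whether the seeded file references that organization id at all is a plain text search. The sketch below uses a small stand-in fragment, since the real file lives under /opt/kubernetes/cfg/idm/seeded/ on the cluster:

```shell
# Stand-in fragment of the seeded JSON; on the cluster you would search the real
# file: /opt/kubernetes/cfg/idm/seeded/com.microfocus.cdf__2019.02__Add_Update_User.json
cat > Add_Update_User.json <<'EOF'
[
  {
    "operation": "ADD_OR_UPDATE",
    "names": { "organizationName": "Provider" }
  }
]
EOF

# Does the uuid from the migration error appear anywhere in the seeded data?
if grep -q '2c90928f67d13d970167d13daddb00b3' Add_Update_User.json; then
  echo "uuid referenced in seeded file"
else
  echo "uuid not referenced in seeded file"
fi
```

If the real seeded file does reference the uuid while the organizations table has no matching row, that mismatch is exactly the data issue the migration trips over.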
  • Hi,

    I have checked the organizations table in the CDF database. There is no row with that id. There are two rows, and their ids differ from it. Their display names are "Service Management Automation" and, in the second row, "IdMIntegration". There are no other rows in the table.


  • What are the names of the two records? And can you attach the seeded file here?
  • Hi Elva,

    I attached the seeded file.

    The names in the organizations table rows are:

    uuid: "2c9092866823a1d7016823a1f1a500b3"      name: "Provider"

    uuid: "2c9092866823a1d7016823a1f24600d5"      name : "IdMIntegration"

    Thanks for your help.

  • Please run this command: kubectl get cm idm-conf-file -n core -o yaml


    and check whether the content of com.microfocus.cdf__2019.02__Add_Update_User.json is the same as the seeded file you attached.
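To compare the two copies without eyeballing line numbers, the ConfigMap key can be dumped to a file and diffed against the seeded file. A sketch, with two small stand-in files in place of the real extracted content (the jsonpath expression is an assumption about the key name):

```shell
# In practice you would extract the key from the ConfigMap first, e.g.:
#   kubectl get cm idm-conf-file -n core \
#     -o jsonpath='{.data.com\.microfocus\.cdf__2019\.02__Add_Update_User\.json}' > from-cm.json
# Two stand-in files illustrate the comparison here.
printf '{ "names": { "organizationName": "Provider" } }\n' > from-cm.json
printf '{ "names": { "organizationName": "Provider" } }\n' > seeded.json

# diff exits 0 when the files match; -u shows any divergence with context.
if diff -u from-cm.json seeded.json; then
  echo "ConfigMap content matches the seeded file"
else
  echo "ConfigMap content differs from the seeded file"
fi
```

A clean diff rules out the ConfigMap as the source of the bad seeded data; any hunks it prints show exactly where the two copies diverge.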



  • Hi Elva,

    Their contents are the same up to line 191 in Notepad; a screenshot is below. The command output has some additional lines, up to line 201, which appear below the screenshot.



    kind: ConfigMap
    annotations: |
      [garbled, line-wrapped paste of the ConfigMap JSON omitted; it contains the escaped content of com.microfocus.cdf__2019.02__Add_Update_User.json]
    creationTimestamp: "2019-06-10T07:48:55Z"
    name: idm-conf-file
    namespace: core
    resourceVersion: "21887341"
    selfLink: /api/v1/namespaces/core/configmaps/idm-conf-file
    uid: 327130ed-8b54-11e9-a89f-00505699382a


    Thanks for your interest and help.