leo chang

OO cluster not working well


Hi,

I installed an OO cluster following the HA guide, but it does not seem to work as expected.

OS: Windows 2008

OO: version 9.07, 1 load balancer + 2 Central & RAS nodes

DB: Oracle 11g

Both nodes use a shared repository and point to the same database. I configured the two OO Central (PAS) services and the two OO RAS services in the load balancer, and also configured the HP load balancer admin flow by setting lbUrl to point to the load balancer service.

 

The problems are:

  1. When I connect to the OO Central portal via the load balancer, only one node is listed on the Node Administration tab: the node I am currently connected to.
  2. Under the Reports tab, the HP load balancer admin flow is shown as having failed many times; the error message says the failures are caused by the second node being unavailable.
  3. When I create a new flow via Studio (connecting through the load balancer), it is only visible on one Central. The newly created flow appears on the other Central only after I restart that Central service.

 

I suspect HA is not working correctly, but I don't know why.

I have attached all the configuration files and some screenshots. Any advice is much appreciated.

Thanks.

ChristineB

Re: OO cluster not working well


Are the "tc-config_oo001.xml" and "tc-config_oo002.xml" files labeled correctly?

In the "tc-config.xml" file on a Central cluster node, that node should be listed first, in the top part of the file, and the other node should be listed below the "When configuring multiple terracotta servers" section.

 

I assume the "oo001" and "oo002" strings were added to the filenames you provided to label which files came from which server.  If the "tc-config.xml" files are labeled correctly, then the order of servers is wrong in both.

 

In "tc-config_oo001.xml", server "oo002" is defined first and "oo001" is defined in the second section where other servers are to be defined.

In "tc-config_oo002.xml", the opposite is seen - server "oo001" is defined first and "oo002" is defined in the second section where other servers are to be defined.

 

Again, if these files are labeled correctly, then the server order needs to be switched in both. To do that, shut down all OO services on both Central nodes, reverse the server order in each tc-config.xml, and then restart OO in the following order (a command-line sketch follows this list):

1) On one server - oo001 - start the RSGridServer service, then start RSCentral and RSJRAS.

2) After the services on oo001 are started, log on to Central on oo001 to confirm that you can log on. Check the cluster nodes, either from the Terracotta admin console or from Central > Node Administration, to confirm that node oo001 is listed.

3) Then start the OO services on the second node - oo002 - in the same order as you did on oo001. Do NOT start any OO services on the second node until all OO services have been started on the first node and you have confirmed that the first node is up and running and things appear as expected.

4) After starting the OO services on oo002, log on to Central and check whether oo002 now shows up in the Terracotta admin console or on the Node Administration tab.
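
 

As a sketch of the restart sequence above, assuming the default Windows service names already mentioned (RSGridServer, RSCentral, RSJRAS), the steps could be run from an elevated command prompt on oo001 first and then, after verification, on oo002:

    rem Stop everything first (stopping in reverse order is an assumption, not from the guide)
    net stop RSJRAS
    net stop RSCentral
    net stop RSGridServer

    rem Start the clustering (Terracotta) service first, then Central, then RAS
    net start RSGridServer
    net start RSCentral
    net start RSJRAS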

 

If you are still not seeing oo002 in the Terracotta admin console or on the Node Administration tab, check terracotta-wrapper.log. There was an error in oo001's terracotta-wrapper.log indicating that the Terracotta database on oo002 was dirty. If you are still seeing messages indicating that the Terracotta database on oo002 is dirty, there are knowledge base articles on fixing that (or post here again for follow-up).
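
 

As a quick way to check for that, a search like the following would surface any "dirty" database messages; the log path here is only a placeholder, adjust it to wherever your clustering install writes terracotta-wrapper.log:

    findstr /i "dirty" "<OO clustering install dir>\logs\terracotta-wrapper.log"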

 

leo chang

Re: OO cluster not working well


Thanks so much.

I'll try next Monday.

leo chang

Re: OO cluster not working well


I tried following the steps. No luck.

 

I made a mistake with the names of the tc-config.xml files previously. Sorry.

 

I have attached the log files and screenshots.

There seems to be no 'dirty database' message in the log this time.

 

Thanks again!

leo chang

Re: OO cluster not working well (Accepted Solution)


I reinstalled the OO cluster yesterday following the guide, and it works now.

Here are several points that need attention:

1. Install patch 9.07 after installing the cluster service. Do not apply the patch between installing the Central service and the cluster service.

2. When installing the cluster service on the second node, delete all the existing files in the shared repository folder first.

3. After applying patch 9.07, the Central service on the second node could not start up. I changed localhost to the actual hostname in the 'Servers' section of tc-config.xml (see the sketch below).
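
 

As an illustration of point 3, assuming the oo001/oo002 hostnames used earlier in the thread, the change in the second node's tc-config.xml would look roughly like this (the name attribute and port element are placeholders for the sketch, not copied from the actual file):

    Before:
      <server host="localhost" name="oo002">
        <dso-port>9510</dso-port>
      </server>

    After:
      <server host="oo002" name="oo002">
        <dso-port>9510</dso-port>
      </server>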

 

Thanks for all the help.

 
