Fleet Admiral

Not seeing "Number of messages in history-message-table reached limit


L.S.

 

After migrating from OMU 9.10 on Solaris to OM 9.20 on Linux, I see lines in /var/opt/OV/log/System.txt about too many messages in the history message table, but I do not see this message in the message browser.

I checked using sqlplus, searching for this text, and found 0 rows.
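
For reference, the check was along these lines (a sketch only; the opcdbpwd wrapper and the opc_msg_text table appear later in this thread, but the msg_text column name here is an assumption, so adjust it to your schema):

# echo "select count(*) from opc_msg_text where msg_text like '%history-message-table reached limit%';" | /opt/OV/bin/OpC/opcdbpwd -e sqlplus -s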

 

Further info:

ovconfget (with or without -ovrg) shows OPC_INT_MSG_FLT=TRUE in namespace opc_op
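
For completeness, the check looks like this (same ovconfget form as used later in this thread; the grep keeps it namespace-independent):

# ovconfget | grep OPC_INT_MSG_FLT
# ovconfget -ovrg server | grep OPC_INT_MSG_FLT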

 

I have several opcmsg policies. The policy that contains the conditions to match these messages has the option 'forward unmatched', which I added since I don't see any messages coming in (a test sketch follows the condition listing below).

 

  msgi              "NSR_OM_opcmsg"             enabled    0002.0702

 

DESCRIPTION "OpC40-282: Number of history messages near limit."
 CONDITION_ID "dd7b83ba-b3ae-71e1-12e0-0a16fc290000"
 CONDITION
         SEVERITY Warning
         MSGGRP "OpC"
         TEXT "Number of <*.aantal> messages in history-message-table is near to limit <*.max>. (OpC40-282)"
 SET
         SEVERITY Major
         APPLICATION "HP OpenView Operations"
         MSGGRP "OpC"
         OBJECT "opcctlm"
         TEXT "Number of <aantal> messages in history-message-table is near to limit <max>. (OpC40-282)"
         NOTIFICATION
 DESCRIPTION "OpC40-283: Number of history messages reached limit."
 CONDITION_ID "dd7c556a-b3ae-71e1-12e0-0a16fc290000"
 CONDITION
         SEVERITY Warning Minor Major Critical
         TEXT "^Number of <*.aantal> messages in history-message-table reached limit <#.max>. (OpC40-283)"
 SET
         SEVERITY Major
         APPLICATION "HP OpenView Operations"
         MSGGRP "OpC"
         OBJECT "opcctlm"
         NOTIFICATION
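
To verify that these conditions can match at all, a test message of the same shape can be sent through the opcmsg CLI on the management server (a sketch; the numbers are dummy values, not a real OpC40-283 event):

# opcmsg severity=major application="HP OpenView Operations" object=opcctlm msg_grp=OpC msg_text="Number of 123456 messages in history-message-table reached limit 100000. (OpC40-283)"

If such a test message shows up in the browser, the policy conditions and the interceptor are working, and the problem lies in how the internal message reaches the agent.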

 

Suggestions?

 

JP

11 Replies
Absent Member.

Hello,

Concerning your situation, first of all it is important to clarify the function of the OPC_INT_MSG_FLT variable: if it is TRUE, HPOM-internal messages (message group OpC or OpenView; mainly HPOM-internal status and error messages) are passed to the HP Operations agent and can be filtered through message-interceptor templates. Note: this is also possible on the HPOM management server itself. However, the local HPOM management server's agent must be running and must use the same character set as the server.

In your case the variable is set in the opc namespace, which corresponds to the management server configuration itself. So take a closer look at the note above, and if your management server does not meet those requirements, set the variable back to its default value (FALSE). There are known issues where, if this variable is not set properly, no internal messages reach the browser.

Regarding the large number of messages in the history table, there is an option to download or, if required, delete all the messages accumulated in this table. The command for this is opchistdwn; its usage starts like this:

#opchistdwn -h
opchistdwn [-help][-older
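
For example, a typical download of older history messages would look like this (the flag names here are from memory, so verify with opchistdwn -help before relying on them; the file prefix is just an illustration):

# opchistdwn -older 2w -file /tmp/histdwn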

Fleet Admiral

L.S.

 

I see some messages on the mgmt_sv from the source NSR_OM_opcmsg(2.702), for example:

Could not map the certificate request automatically for node xxxx (OpC40-2032)

I guess this is an 'internal' message that is handled correctly, so why not the 'Number of messages' message?

 

I checked the settings on Solaris:

Both ovconfget and ovconfget -ovrg server show OPC_INT_MSG_FLT=TRUE in namespace opc, the same as on the Linux server.

 

So why a message on Solaris and no message on Linux?

 

JP.

 

P.S. I have no problems downloading history messages.

 

 

 

Micro Focus Expert

Are there any Oracle-related errors in the System.txt log file?
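
For instance, a quick check on the management server (System.txt path as used earlier in this thread):

# grep ORA- /var/opt/OV/log/System.txt | tail -20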

 

One possibility is that you did not set up scheduled maintenance to automatically download history messages.

As a result, the tablespace might have reached its limit.

 

 

May I have the output of the commands below, executed on the OMU server?

echo "select * from (select message_number, count(*) an_amnt from opc_annotation group by message_number order by 2 desc ) where rownum < 10;" | /opt/OV/bin/OpC/opcdbpwd -e sqlplus -s

echo "select * from (select message_number, count(*) hst_an_amnt from opc_hist_annotation group by message_number order by 2 desc ) where rownum < 10;" | /opt/OV/bin/OpC/opcdbpwd -e sqlplus -s

echo "select * from (select message_number, count(*) hst_msg_amnt from opc_hist_msg_text group by message_number order by 2 desc) where rownum < 10;" | /opt/OV/bin/OpC/opcdbpwd -e sqlplus -s

echo "select * from (select message_number, count(*) msg_amnt from opc_msg_text group by message_number order by 2 desc ) where rownum < 10;" | /opt/OV/bin/OpC/opcdbpwd -e sqlplus -s

echo "select * from (select message_number, count(*) orig_msg_amnt from opc_orig_msg_text group by message_number order by 2 desc ) where rownum < 10;" | /opt/OV/bin/OpC/opcdbpwd -e sqlplus -s

echo "select * from (select instruction_id, count(*) instr_amnt from opc_instructions group by instruction_id order by 2 desc ) where rownum < 10;" | /opt/OV/bin/OpC/opcdbpwd -e sqlplus -s

 

 

Micro Focus Support
Fleet Admiral

Vladislav,

 

Currently some performance testing is being done on a new application server, resulting in many messages. DB maintenance is set up, but it is based on normal behavior, so I manually acknowledge messages and run opchistdwn.

 

The last Oracle message in System.txt is from July 30:

 

0: INF: Thu Jul 30 15:55:54 2015: opcmsgm (20722/140576652277536): [chk_sqlcode.scp:111]: Database: ORA-03113: end-of-file on communication channel
Process ID: 9467
Session ID: 333 Serial number: 1561
 (OpC50-15)
Retry. (OpC51-22)

 

Your commands yield:

 

MESSAGE_NUMBER                          AN_AMNT
------------------------------------ ----------
087f6602-36c5-71e5-0b14-0ae8c3ca0000          1
1514fdae-3a31-71e5-03ff-0ae8c3ca0000          1
310480c0-36c6-71e5-0b16-0ae8c3ca0000          1
422e7b8c-36ce-71e5-0b16-0ae8c3ca0000          1
4e270fc6-3a34-71e5-0400-0ae8c3ca0000          1
5f64b6aa-3979-71e5-1674-0ae8c3ca0000          1
6d2c503e-3a34-71e5-0400-0ae8c3ca0000          1
6fdc3714-36c8-71e5-0b16-0ae8c3ca0000          1
757b77ba-36ce-71e5-0b16-0ae8c3ca0000          1

9 rows selected.


MESSAGE_NUMBER                       HST_AN_AMNT
------------------------------------ -----------
7defdb90-2579-71e5-121f-0af3318c0000           3
34b8fc48-3967-71e5-1673-0ae8c3ca0000           2
5a406c6e-3a64-71e5-0400-0ae8c3ca0000           2
2db2675e-370f-71e5-1b4b-0ae8c3ca0000           2
2c257820-3707-71e5-1952-0af331030000           2
53f0d94c-39b1-71e5-1674-0ae8c3ca0000           2
bd307238-396a-71e5-1674-0ae8c3ca0000           2
c957ded0-3801-71e5-1952-0af331030000           2
3bee5fc0-3a30-71e5-03ff-0ae8c3ca0000           2

9 rows selected.


MESSAGE_NUMBER                       HST_MSG_AMNT
------------------------------------ ------------
800d5954-3774-71e5-16d3-0ae7c0f50000            1
801cf21a-3774-71e5-16d3-0ae7c0f50000            1
8024f928-3775-71e5-132f-0ae7c0f40000            1
803cafb0-3774-71e5-16d3-0ae7c0f50000            1
0470e61a-36cc-71e5-0f34-0ae8c1070000            1
04710d02-36cc-71e5-0f34-0ae8c1070000            1
04724a14-36cc-71e5-0f34-0ae8c1070000            1
04748e32-36cc-71e5-0f34-0ae8c1070000            1
04777ade-36cc-71e5-0f34-0ae8c1070000            1

9 rows selected.


MESSAGE_NUMBER                         MSG_AMNT
------------------------------------ ----------
c0111b8e-3a59-71e5-0d28-0ae8c1080000          1
c03801e4-3a5a-71e5-0d29-0ae8c1080000          1
c044238a-3a59-71e5-0d28-0ae8c1080000          1
c052e992-3a59-71e5-0d28-0ae8c1080000          1
c07606e2-3a5a-71e5-0d29-0ae8c1080000          1
38dfc7d6-3a5a-71e5-0d29-0ae8c1080000          1
000bfd34-3a5b-71e5-0f3f-0ae8c1070000          1
000c3dda-3a5b-71e5-0f3f-0ae8c1070000          1
0020eaaa-3a5b-71e5-0f3f-0ae8c1070000          1

9 rows selected.


MESSAGE_NUMBER                       ORIG_MSG_AMNT
------------------------------------ -------------
f7463f96-3a5c-71e5-0d2a-0ae8c1080000             1
f74abe7e-3a55-71e5-0f3d-0ae8c1070000             1
f769b3d8-3a55-71e5-0f3d-0ae8c1070000             1
f788688a-3a5c-71e5-0d2a-0ae8c1080000             1
f78bd044-3a55-71e5-0f3d-0ae8c1070000             1
c0111b8e-3a59-71e5-0d28-0ae8c1080000             1
c03801e4-3a5a-71e5-0d29-0ae8c1080000             1
c044238a-3a59-71e5-0d28-0ae8c1080000             1
c052e992-3a59-71e5-0d28-0ae8c1080000             1

9 rows selected.


INSTRUCTION_ID                       INSTR_AMNT
------------------------------------ ----------
8ed80a58-c033-71e1-1c82-0a16fc290000          2
5b1b8e14-94b3-47f2-bdc8-7f49abc31eab          2
8ed8143a-c033-71e1-1c82-0a16fc290000          2
79ab1ff3-9407-48fe-b207-9ce0d2226af4          2
78e78b47-9ba0-4659-a53c-8998e88e77b5          2
11b28de7-6b71-4bd4-a73e-2fcd78a080fa          2
014d22a4-2574-71e1-19ea-0a1b8c0c0000          1
02e7801e-f177-4726-a696-ee21bf7d0a5b          1
02f8d0e9-02b9-40ed-b7be-da8d1c4e43e7          1

9 rows selected.

I am not worried about the large number of messages; I am worried about not seeing a message about it (on the new Linux mgmt_sv, which is to replace the Solaris server).

 

JP

 

Fleet Admiral

L.S.

 

On the mgmt_sv I changed OPC_INT_MSG_FLT to FALSE, executed ovc -stop / ovc -start, and waited for the message to show up.
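
For reference, the change and restart were along these lines (a sketch; ovconfchg and ovc as used elsewhere in this thread, with or without -ovrg server depending on where the variable is set):

# ovconfchg -ns opc -set OPC_INT_MSG_FLT FALSE
# ovconfchg -ovrg server -ns opc -set OPC_INT_MSG_FLT FALSE
# ovc -stop
# ovc -start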

In System.txt I see:

0: WRN: Tue Aug  4 11:41:20 2015: ovoareqsdr (2746/140360050845472): [ctlmdb.cpp:777]: Number of 58036 messages in active-message-table is near to limit 60000. (OpC40-280)
0: ERR: Tue Aug  4 11:41:20 2015: ovoareqsdr (2746/140360050845472): [ctlmdb.cpp:804]: Number of 159227 messages in history-message-table reached limit 100000. (OpC40-283)

and also 2 minutes earlier:

0: INF: Tue Aug  4 11:39:05 2015: API (5255/140047416846112): [msgs_get.scp:8449]: Invalid message identifier 9dd20f51-3a8c-71e5-1b94-0a19fc230000 received.
This error occurs if an old message is not in the database or a new message is already in the database.  (OpC50-10)

 

JP.

Fleet Admiral

Any suggestions, or do I need to submit a service request / support case?

I should have received this message every 2 hours, but I have received none.

 

JP

Absent Member..
Hello,

 

I ran into the same issue several years ago and, after further research (tracing),

we found the root cause: in a cluster OM, if OPC_IP_ADDRESS in the [opc] namespace is not the OM server IP,

the messages are discarded because there is no matching node information in the OM DB.

 

It happened on an OM server that has multiple network interfaces:

Internal messages from the server are not getting into the message browser

https://softwaresupport.hp.com/ja/group/softwaresupport/search-result/-/facetsearch/document/KM742738

 

After setting OPC_IP_ADDRESS correctly, internal messages from the OM server could be seen in the Java GUI:

#ovconfchg -ovrg server -ns opc -set OPC_IP_ADDRESS <OM IP>

then restart OM.
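
For "restart OM", a minimal sketch using the ovc commands already used elsewhere in this thread:

# ovc -stop
# ovc -start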

 

Hope it is not too late...

 

Best Regards,

 

TangYing

Fleet Admiral
Fleet Admiral

Thanks.

 

I checked, but OPC_IP_ADDRESS is already set correctly.

 

JP.

Absent Member..
 

Hello,

 

Thanks for the confirmation.

 

If the output of "ovconfget -ovrg server opc | grep OPC_IP_ADDRESS" returned the correct OM IP, a trace log will be helpful to see how OM deals with such messages.

 

 

# ovconfchg -ovrg server -ns opc -set OPC_TRACE TRUE -set OPC_TRC_PROCS opcmsgm -set OPC_DBG_PROCS opcmsgm -set OPC_TRACE_AREA ALL,DEBUG

  

After OpC40-280/OpC40-283 is logged to System.txt, turn off the trace:

# ovconfchg -ovrg server -ns opc -clear OPC_TRACE -clear OPC_TRC_PROCS -clear OPC_DBG_PROCS -clear OPC_TRACE_AREA

 

trace log:

/var/opt/OV/share/tmp/OpC/mgmt_sv/trace

 

It may be best to submit a support case and send the trace log.

 

 

Best Regards,

 

TangYing

(This reply was accepted as the solution.)

Fleet Admiral

L.S.

 

We submitted a support case. I will let you know the result.

 

JP

Fleet Admiral

OPC_IP_ADDRESS was only specified in the eaagt namespace.

After adding it to the opc namespace as described above, the messages are shown.
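
For anyone hitting the same issue, a quick way to verify where the value is set (ovconfget forms as used earlier in this thread):

# ovconfget eaagt | grep OPC_IP_ADDRESS
# ovconfget -ovrg server opc | grep OPC_IP_ADDRESS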

 

JP
