Operations Bridge performance

Dear all,

We have OBM 2021.05 installed on two Windows servers (one GW and one DPS) with a dedicated SQL Server 2014 database server.

Recently we increased the number of users and have started to notice slowness in the system, especially in the Event Perspective.

How can we troubleshoot this issue and improve performance?

Best regards,

  • Verified Answer

    +1  

    There are many different parameters that can impact OBM performance, and this forum is probably not the right place for such troubleshooting.

    Please submit a support case for further investigation.

  • 0  

    Hello there,

    I think Asaf has a good point, and I’m sure that OpenText Support can help you. These are the steps I use when working on event-processing issues.

    opr-troubleshooter
    You can run /opt/HP/BSM/opr/support/opr-troubleshooter.sh on your system:

    # /opt/HP/BSM/opr/support/opr-troubleshooter.sh
    Usage: opr-troubleshooter <options>

    -pipeline : analyze the current pipeline status
    -ucmdb : analyze the current ucmdb status
    -gui : analyze the current AS status
    -bbc : analyze the current BBC status
    -bus : analyze the current bus status
    -db : analyze the current DB connectivity and performance
    -fwd : analyze the current event forwarding status
    -ma : analyze the current monitoring automation status
    -all : analyze all relevant components to the system
    -reportDir : the directory where the reports will be created
    -checkJVM : analyze the JVM statistics
    -trh <CI Hint>: analyze the CI Resolution of the given hint
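
    Given a general slowness complaint like this one, running everything and collecting the reports for support is a reasonable first pass (the report directory here is an arbitrary choice):

    # /opt/HP/BSM/opr/support/opr-troubleshooter.sh -all -reportDir /tmp/obm-reports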

    # /opt/HP/BSM/opr/support/opr-troubleshooter.sh -pipeline
    Executing and analyzing thread dump of opr

    Setting opr.mom.event.flowtrace.mode is set to = both
    Waiting 30 seconds to capture a flowtrace snapshot
    Reading in logs opr-flowtrace-backend from /opt/HP/BSM/log/opr-backend
    Got lines: 0
    09:40:35.033 [main] ERROR com.hp.opr.support.tracegui.analyzers.Analyzer - failed to open browser to file:/opt/HP/BSM/opr/support/oprtracegui_reports/index-PipelineTroubleshooter-2024-10-01-09-40-35.html:
    No X11 DISPLAY variable was set, but this program performed an operation which requires it.
    zipping results into: /opt/HP/BSM/opr/support/oprtracegui_reports/troubleshooter-2024-10-01-09-40-35.zip
    /opt/HP/BSM/opr/support/oprtracegui_reports/index-PipelineTroubleshooter-2024-10-01-09-40-35.html

    Then you can view the HTML report in your browser:
    # lynx /opt/HP/BSM/opr/support/oprtracegui_reports/index-PipelineTroubleshooter-2024-10-01-09-40-35.html
    OmiThreadDump_opr.txt-2024-10-01-09-40-00.html
    Thread dump generated at: Tue Oct 01 09:39:59 CEST 2024

    BLOCKED WAITING RUNNING
    0 106 129

    BLOCKED

    WAITING

    H2-save Id=424457

    /opt/HP/BSM/opr/support/oprtracegui_reports/opr.txt

    Thread ID Thread Name Thread State Lock Owned By Root Cause
    4 H2-save Id=424457 WAITING java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@43f756c9 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@43f756c9

    "H2-save" Id=424457 WAITING on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@43f756c9
    at sun.misc.Unsafe.park(Native Method)
    - waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@43f756c9
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2044)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:750)

    H2-serialization Id=424456

    /opt/HP/BSM/opr/support/oprtracegui_reports/opr.txt

    Thread ID Thread Name Thread State Lock Owned By Root Cause
    5 H2-serialization Id=424456 WAITING java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@629c2a9e java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@629c2a9e

    This output is helpful to start identifying the problem area, but it’s not that easy to understand. Giving this information to support would be a good next step.

    Check that you have enough memory:
    # ./opr-troubleshooter.sh -checkJVM

    Checking the free heap of OBM JVM Processes ...

    webapps | 22.19%
    wde | 63.01%
    marble_supervisor | 79.91%
    businessImpact_service | 79.56%
    opr-scripting-host | 69.97%
    opr-backend | 63.91%


    Checking the free heap of the UCMDB JVM Process ...

    log | 64.58%

    Remember these are percentage values, but whatever the percentage works out to, don’t let the absolute free heap fall below 256 MB.
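
    For example, a quick back-of-the-envelope conversion (an unsupported sketch; the 4096 MB max heap is an assumption, so check the actual -Xmx of each JVM):

    # Convert the free-heap percentage reported by -checkJVM into megabytes.
    # Example: webapps reported 22.19% free; assume a 4096 MB max heap.
    pct=22.19
    max_heap_mb=4096
    free_mb=$(awk -v p="$pct" -v m="$max_heap_mb" 'BEGIN { printf "%.0f", m * p / 100 }')
    echo "free heap: ${free_mb} MB"
    # Flag it if the absolute free heap is below the 256 MB rule of thumb
    [ "$free_mb" -lt 256 ] && echo "WARNING: less than 256 MB of free heap"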


    Performance Dashboard:
    Performance Dashboard’s Event Pipeline Statistics gives a really good overview and is a great place to start. If you don’t like PD’s Event Pipeline Statistics, you can always enable flowtrace and process /opt/HP/BSM/log/opr-backend/opr-flowtrace-backend.log with a script. Personally, I find it more useful to understand pipeline performance on a per-event basis, as it gives me more facts to work with when solving the problem. More about that later.


    Flowtrace:
    I always enable flowtrace and keep it running (the opr.mom.event.flowtrace.mode setting, which the pipeline troubleshooter above sets to "both").

    Then you can see each pipeline step (column 8) and how the event flows through the pipeline. Each line gives you the time, the event ID, the pipeline step and the bulk size. Bear in mind that everything up to IMDBStore happens in a pipeline, whereas the steps after that are performed in parallel. It looks like this:

    # grep b9568242-5f8f-71ef-19d4-0a5e4a350000 /opt/HP/BSM/log/opr-backend/opr-flowtrace-backend.log
    2024-08-21 09:33:56,704 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: ProcessingTask: Step: KPIStatusChangeHandler. Bulk events size: 1
    2024-08-21 09:33:56,704 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: ProcessingTask: Step: SameEventFilter. Bulk events size: 1
    2024-08-21 09:33:56,704 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: ProcessingTask: Step: InitEvents. Bulk events size: 1
    2024-08-21 09:33:56,704 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: EventReceiver: EventReceiver: received event with state: OPEN, severity: MINOR, title: FileSystem space utilization for Logical Disk /dev of type devtmpfs - Minor threshold exceeded.
    2024-08-21 09:33:56,704 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: ProcessingTask: Step: SequenceEvents. Bulk events size: 1
    2024-08-21 09:33:56,704 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: ProcessingTask: Step: LogReceivedStatistics. Bulk events size: 1
    2024-08-21 09:33:56,705 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: ProcessingTask: Step: RegisterEvents. Bulk events size: 1
    2024-08-21 09:33:56,705 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: ProcessingTask: Step: EventUpdateHandler. Bulk events size: 1
    2024-08-21 09:33:56,705 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: ProcessingTask: Step: ActionResponseHandler. Bulk events size: 1
    2024-08-21 09:33:56,706 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: ProcessingTask: Step: PipelineEntry. Bulk events size: 1
    2024-08-21 09:33:56,706 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: ProcessingTask: Step: AutoCorrelation. Bulk events size: 1
    2024-08-21 09:33:56,706 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: ProcessingTask: Step: CIResolver. Bulk events size: 1
    2024-08-21 09:33:56,707 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: ProcessingTask: Step: NodeCiGenerator. Bulk events size: 1
    2024-08-21 09:33:56,707 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: ProcessingTask: Step: CiVariableReplacer. Bulk events size: 1
    2024-08-21 09:33:56,707 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: ProcessingTask: Step: EtiResolverByHint. Bulk events size: 1
    2024-08-21 09:33:56,707 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: ProcessingTask: Step: DowntimeProvider. Bulk events size: 1
    2024-08-21 09:33:56,707 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: ProcessingTask: Step: EtiResolverByRule. Bulk events size: 1
    2024-08-21 09:33:56,708 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: ProcessingTask: Step: ResolutionCompleted. Bulk events size: 1
    2024-08-21 09:33:56,708 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: ExternalEventEnrichment: Started: EPI-Executor-1 script: 4d9120c0-f331-44a3-8252-965f6b96bdcd
    2024-08-21 09:33:56,755 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: ProcessingTask: Step: EventSuppression. Bulk events size: 1
    2024-08-21 09:33:56,755 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: ProcessingTask: Step: IMDBStore. Bulk events size: 1
    2024-08-21 09:33:56,755 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: ProcessingTask: Step: KPIStatusChangeHandler. Bulk events size: 1
    2024-08-21 09:33:56,755 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: ProcessingTask: Step: EventStreamCorrelation. Bulk events size: 1
    2024-08-21 09:33:56,755 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: ProcessingTask: Step: SameEventFilter. Bulk events size: 1
    2024-08-21 09:33:56,756 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: ProcessingTask: Step: PairwiseCorrelation. Bulk events size: 1
    2024-08-21 09:33:56,756 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: PairwiseCorrelation: Processing step: PairwiseCorrelation. Started: Pairwise-Worker-2 Key Pattern: ^Sys_FileSystemUtilizationMonitor:puegcsvm43794.swinfra.net:/dev:<*> ETI: fe9b8df4-dd6c-4543-aea3-d06aed3526b4:6768dbcc-2eee-400e-8497-a1b23ff66fc3
    2024-08-21 09:33:56,774 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: PairwiseCorrelation: Processing step: PairwiseCorrelation. Finished: Pairwise-Worker-2
    2024-08-21 09:33:56,785 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: ProcessingTask: Step: Deduplication. Bulk events size: 1
    2024-08-21 09:33:56,786 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: EventDeduplicator: Processing step: Deduplication. Started: Dedup-Worker-7 Key: Sys_FileSystemUtilizationMonitor:puegcsvm43794.swinfra.net:/dev:START:DiskUsageLevel:Minor ETI: fe9b8df4-dd6c-4543-aea3-d06aed3526b4:6768dbcc-2eee-400e-8497-a1b23ff66fc3
    2024-08-21 09:33:56,786 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: EventDeduplicator: Processing step: Deduplication. Finished: Dedup-Worker-7
    2024-08-21 09:33:56,787 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: ProcessingTask: Step: PriorityResolver. Bulk events size: 1
    2024-08-21 09:33:56,805 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: ProcessingTask: Step: KPIStatusChangeHandler. Bulk events size: 1
    2024-08-21 09:33:56,805 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: ProcessingTask: Step: SameEventFilter. Bulk events size: 1
    2024-08-21 09:33:56,829 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: ProcessingTask: Step: UserGroupResolver. Bulk events size: 1
    2024-08-21 09:33:56,829 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: ProcessingTask: Step: PipelineExit. Bulk events size: 1
    2024-08-21 09:33:56,830 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: ProcessingTask: Step: ImdbUpdater. Bulk events size: 1
    2024-08-21 09:33:56,830 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: ProcessingTask: Step: EventStore. Bulk events size: 1
    2024-08-21 09:33:56,846 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: EventUpdater: Event committed. Events bulk size: 1 EventUpdate-Worker-3
    2024-08-21 09:33:56,850 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: ProcessingTask: Step: UnregisterEvents. Bulk events size: 1
    2024-08-21 09:33:56,851 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: ProcessingTask: Step: HIUpdater. Bulk events size: 1
    2024-08-21 09:33:56,851 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: ProcessingTask: Step: NewEventsHandler. Bulk events size: 1
    2024-08-21 09:33:56,851 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: TopoCorrelator: New event. For step: TopoCorrelation
    2024-08-21 09:33:56,851 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: PostStoreEpiCaller: Processing step: com.hp.opr.common.pipeline.StepInfo$InternalStepInfo@701646fe
    2024-08-21 09:33:56,851 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: NNMiCorrelator: NNMiCorrelator executed.
    2024-08-21 09:33:56,851 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: EventAutomation: Automation [New Event]
    2024-08-21 09:33:56,851 INFO [FlowTrace] b9568242-5f8f-71ef-19d4-0a5e4a350000:opr-backend: ProcessingTask: Step: AcknowledgeEvents. Bulk events size: 1
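
    To eyeball gaps for a single event, you can reduce the ProcessingTask lines to just the timestamp and step name (an unsupported one-liner that assumes the exact line layout above, where field 7 is "Step:" and field 8 is the step name):

    # grep b9568242-5f8f-71ef-19d4-0a5e4a350000 /opt/HP/BSM/log/opr-backend/opr-flowtrace-backend.log | awk '$7 == "Step:" { print $2, $8 }'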

    I frequently use stack traces, which may sound complex, but reading them is mostly about logical thinking. Since traces are generated quickly and writing trace files involves I/O, I usually focus on tailing the flowtrace log file to identify gaps, and then turn to stack traces (for example from jstack) when the pipeline isn’t processing; these can be found in DPS:/log/opr-backend/threaddumps.log. It’s also incredibly useful that stack traces are created automatically when the pipeline stalls. Additionally, the DPS:opr-backend/opr-backend.log file can provide valuable insights, especially when you know what to look for or have scripts to help process the data. I also find that correctly configuring the out-of-the-box OBM server monitoring policies is very helpful for identifying that there is an error.
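
    If you want to take a thread dump by hand (a sketch assuming a JDK with jstack on the PATH; the /tmp output location is an arbitrary choice):

    # pgrep may match more than one process, so dump each match to its own file
    for pid in $(pgrep -f opr-backend); do jstack "$pid" > /tmp/opr-backend-threads.$pid.txt; done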


    /opt/HP/BSM/log/opr-backend/opr-flowtrace-backend.log is my favourite log file on an OBM system, as it gives you facts about specific events that you can look up to find out what the problem is. Yes, maybe I should get out more. Trace GUI gives you a great overview in the events section, but I also like to use this unsupported PowerShell script to look for gaps between events, which indicate a slow step.


    param (
        [string]$logDirectoryPath,
        [string]$outputFilePath = $null, # Default: write to the console
        [switch]$append # Append to the output file instead of overwriting it
    )

    # Check that the log directory exists
    if (-not (Test-Path -Path $logDirectoryPath -PathType Container)) {
        Write-Host "Error: Log directory not found at '$logDirectoryPath'. Exiting script."
        exit
    }

    # Get all log files in the directory matching the pattern opr-flowtrace-backend.log.*
    $logFiles = Get-ChildItem -Path $logDirectoryPath -Filter "opr-flowtrace-backend.log.*" | Sort-Object -Property LastWriteTime

    # Start with a fresh output file unless -append was requested
    if (-not [string]::IsNullOrEmpty($outputFilePath) -and -not $append) {
        Clear-Content -Path $outputFilePath -ErrorAction SilentlyContinue
    }

    foreach ($logFile in $logFiles) {
        # Reset the previous-step timestamp for each file
        $previousStepTime = $null

        # Read each line from the log file
        Get-Content -Path $logFile.FullName | ForEach-Object {
            # Extract timestamp, step and bulk size from the log line
            if ($_ -match '^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}).*Step: (.+?)\. Bulk events size: (\d+)?') {
                $logTimestamp = [datetime]::ParseExact($matches[1], "yyyy-MM-dd HH:mm:ss,fff", $null)
                $step = $matches[2]
                $bulkSize = $matches[3]

                # Extract the event UUID from the log line
                if ($_ -match '\[FlowTrace\] ([\w-]+):opr-backend') {
                    $uuid = $matches[1]
                }
                else {
                    $uuid = "UUID not found"
                }

                # Calculate the time difference between consecutive pipeline steps
                if ($null -ne $previousStepTime) {
                    $timeDifference = $logTimestamp - $previousStepTime
                    $formattedTimeDifference = "{0:hh\:mm\:ss\.fff}" -f $timeDifference
                    $logTimestampFormatted = $logTimestamp.ToString("HH:mm:ss,fff")
                    if ($timeDifference.TotalSeconds -gt 5) {
                        $output = "TD = $formattedTimeDifference Time: $logTimestampFormatted UUID: $uuid, Step: $step, Bulk size: $bulkSize (Time difference > 5 seconds)"
                    } else {
                        $output = "TD = $formattedTimeDifference Time: $logTimestampFormatted UUID: $uuid, Step: $step, Bulk size: $bulkSize"
                    }

                    # Output to the console or append to the output file
                    if ([string]::IsNullOrEmpty($outputFilePath)) {
                        Write-Host $output
                    } else {
                        $output | Out-File -FilePath $outputFilePath -Append
                    }
                }

                # Remember this step's timestamp for the next comparison
                $previousStepTime = $logTimestamp
            }
        }
    }
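
    Example usage (the script file name is hypothetical; save it as Find-FlowtraceGaps.ps1 or similar first):

    PS> .\Find-FlowtraceGaps.ps1 -logDirectoryPath 'D:\loggrabber\log\opr-backend' -outputFilePath 'D:\temp\flowtrace-gaps.txt' -append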

    EventStatisticsUtil
    My favourite support command is /opt/HP/BSM/opr/support/EventStatisticsUtil.sh (see portal.microfocus.com/.../KM000012187 and portal.microfocus.com/.../KM000012198), as this is really the only reporting OBM has, and it shows you the relationships between all the relevant event attributes. That is key to understanding and resolving pipeline issues.


    opr-jmsUtil.sh
    From OBM 23.4, the opr-jmsUtil.sh command offers a “-v” (verbose) option. Running the command looks like this:

    # /opt/HP/BSM/opr/support/opr-jmsUtil.sh -v
    Live Server: puegcsvm72311
    Last Alive: Thu Aug 15 15:57:41 CEST 2024 (7 seconds ago)
    ==============================================================================================================================

    queue | total | buffered | delivering | memory | pages | consumers
    -----------------------------------------------------------------------------------------------------------------
    opr_event_forward_queue | 0 | 0 | 0 | 0 | 0 | 0
    opr_action_launch_queue | 0 | 0 | 0 | 0 | 0 | 1
    recipient_notification | 0 | 0 | 0 | 0 | 0 | 1
    queue/alert_engine_alert | 0 | 0 | 0 | 0 | 0 | 1
    queue/alert_engine_notification | 0 | 0 | 0 | 0 | 0 | 1
    opr-so-forwarder-queue | 0 | 0 | 0 | 0 | 0 | 0
    opr_gateway_queue | 71 | 0 | 0 | 0 | 0 | 1
    failed_recipient_notification | 0 | 0 | 0 | 0 | 0 | 1

    topic | total | buffered | delivering | memory | pages | dur-subs | nondur-subs
    ------------------------------------------------------------------------------------------------------------------------------
    opr_oas_rtsm_view_permission_change_topic | 0 | 0 | 0 | 0 | 0 | 0 | 1
    f239e35f-1d45-45bc-81bd-57e3d16ba3a6 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:33432 | Tue Jul 30 14:12:46 CEST 2024
    IMS.customer_1 | 18 | 0 | 0 | 0 | 0 | 0 | 3
    65590ce8-d4b2-422c-abb7-2d5efa724405 | 6 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:51018 | Tue Jul 30 14:13:20 CEST 2024
    50a8c48c-bada-4c82-80ae-374420dde4ed | 6 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:48892 | Tue Jul 30 14:13:58 CEST 2024
    81c6ca70-c831-4134-805f-27b44fe865eb | 6 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:39352 | Tue Jul 30 14:11:21 CEST 2024
    opr_scripting_host_topic | 0 | 0 | 0 | 0 | 0 | 0 | 0
    opr_action_responses | 0 | 0 | 0 | 0 | 0 | 0 | 1
    83edcde5-d14f-479b-b97f-ea5ea5cede3e | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:44612 | Tue Jul 30 14:12:20 CEST 2024
    opr_content_autoupload_topic | 0 | 0 | 0 | 0 | 0 | 0 | 1
    1f41c582-35b2-478a-8a19-bc1a0d5dd58b | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:50758 | Tue Jul 30 14:12:31 CEST 2024
    opr_marble_calc_result_topic | 76 | 0 | 0 | 0 | 0 | 0 | 2
    8f97ca0b-49f5-4fcb-9b63-2a469eadacfe | 35 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:41830 | Wed Jul 31 12:27:00 CEST 2024
    c10f8fb1-5256-411c-bd79-d5c1cad61390 | 41 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:41830 | Tue Jul 30 14:12:13 CEST 2024
    opr_marble_over_time_status | 0 | 0 | 0 | 0 | 0 | 0 | 0
    opr_config_store_update_event_topic | 742 | 0 | 0 | 0 | 0 | 0 | 1
    67cce012-dc80-4518-9dc2-6b51575fac77 | 742 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:44630 | Tue Jul 30 14:10:40 CEST 2024
    topic/repositories_change | 371 | 0 | 0 | 0 | 0 | 0 | 6
    3d1863ef-62f4-4b3b-bc8a-54da3f5a45ad | 50 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:56072 | Tue Jul 30 14:14:30 CEST 2024
    532cf5cc-40d7-4530-9351-01074876d75a | 51 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:48902 | Tue Jul 30 14:13:59 CEST 2024
    b8474258-88b9-4dc3-8963-c0c181ad6059 | 66 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:51006 | Tue Jul 30 14:13:18 CEST 2024
    e13eb85b-31a3-40ee-a2cd-2ea865e3f70e | 66 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:51014 | Tue Jul 30 14:13:19 CEST 2024
    57332793-546d-4d09-bb9f-0c9389c45b1b | 52 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:59204 | Tue Jul 30 14:13:46 CEST 2024
    b1bf3eff-c265-4088-8793-2894ace86b03 | 86 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:39368 | Tue Jul 30 14:11:21 CEST 2024
    opr_ha_backend_sync_topic | 0 | 0 | 0 | 0 | 0 | 0 | 0
    opr_store_update_event_topic | 2878 | 0 | 0 | 0 | 0 | 0 | 9
    02ca0c52-c854-4187-b748-fae78cea85ef | 303 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:59424 | Tue Jul 30 14:13:08 CEST 2024
    872f87da-c2b6-4221-a77b-c92fa6c4c524 | 358 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:51372 | Tue Jul 30 14:12:45 CEST 2024
    b700e927-7f88-4487-aed5-3047cefbcb0b | 363 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:51372 | Tue Jul 30 14:10:53 CEST 2024
    f8f8d172-6b75-4c5f-8563-4f2ac0e88805 | 363 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:51372 | Tue Jul 30 14:11:15 CEST 2024
    ee8abdc9-780c-4dc1-9730-3cdee510be61 | 358 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:51372 | Tue Jul 30 14:12:46 CEST 2024
    de3b582c-d3f0-417d-8309-58c044ca7ba6 | 270 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:47318 | Tue Jul 30 14:13:37 CEST 2024
    7e25bef1-3fdd-4317-b18b-b040de8b2ba5 | 363 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:51372 | Tue Jul 30 14:11:52 CEST 2024
    8c8a3c63-b0f4-4731-af7f-48896c82c2f4 | 243 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:56026 | Tue Jul 30 14:14:43 CEST 2024
    abd22b6c-7f31-4a4c-a0de-c020d06ebac2 | 257 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:56026 | Tue Jul 30 14:14:26 CEST 2024
    opr_store_reinit_topic | 0 | 0 | 0 | 0 | 0 | 0 | 2
    d88d6f47-6688-4b72-9610-d9c1eeb42ce6 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:56096 | Tue Jul 30 14:14:32 CEST 2024
    951de419-2e98-49f8-b320-5f6da6febb10 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:55784 | Tue Jul 30 14:12:01 CEST 2024
    opr_reconciliation_change_topic | 0 | 0 | 0 | 0 | 0 | 0 | 1
    a0b5b1eb-b985-4750-905f-4c3fae7270e7 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:36846 | Tue Jul 30 14:14:40 CEST 2024
    opr_global_store_update_event_topic | 20 | 0 | 0 | 0 | 0 | 0 | 4
    5f0577c3-f65f-45f5-9c2b-bae2b325649b | 5 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:56048 | Tue Jul 30 14:14:27 CEST 2024
    9798068d-d48b-4a63-b083-2a97cff080c1 | 5 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:59468 | Tue Jul 30 14:13:13 CEST 2024
    83c79bbe-1efb-4466-8241-1513958965b6 | 5 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:59468 | Tue Jul 30 14:13:17 CEST 2024
    96b533e5-8743-4410-a729-b28221177e0e | 5 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:44610 | Tue Jul 30 14:10:40 CEST 2024
    activemq.notifications | 0 | 0 | 0 | 0 | 0 | 0 | 0
    opr_event_sync_topic | 0 | 0 | 0 | 0 | 0 | 0 | 1
    4117b4bd-6f8d-42c5-baab-31269c36ce75 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:51040 | Tue Jul 30 14:13:22 CEST 2024
    opr_event_flowtrace_topic | 0 | 0 | 0 | 0 | 0 | 0 | 1
    42c4166d-5eb3-4f0a-b7c5-9710193257c4 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:36828 | Tue Jul 30 14:14:35 CEST 2024
    DTNotification | 0 | 0 | 0 | 0 | 0 | 0 | 1
    07bd8389-cb17-4068-88b1-c5c73be0ab29 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:39338 | Tue Jul 30 14:11:21 CEST 2024
    Notification | 50 | 0 | 0 | 0 | 0 | 0 | 7
    45ff6d3d-fb48-4ef5-acdf-16e5b1b35698 | 7 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:50982 | Tue Jul 30 14:13:16 CEST 2024
    f596e725-9a0d-44cd-882e-09e23e326533 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:47638 | Tue Jul 30 14:10:18 CEST 2024
    a27a3031-e08b-4aee-9828-a8ad25e3fdc2 | 7 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:59402 | Tue Jul 30 14:13:05 CEST 2024
    34c269ca-5d26-459a-bfe0-ac8dd2d1946c | 15 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:45146 | Tue Jul 30 14:09:47 CEST 2024
    8450d231-c887-43fc-8a0e-322853d05d40 | 7 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:59220 | Tue Jul 30 14:13:47 CEST 2024
    40c7e2be-ca5d-4ff3-8aff-c7ca0dbe4d50 | 7 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:59426 | Tue Jul 30 14:13:11 CEST 2024
    378d886e-61de-444f-bc6c-9f9b255b4ff8 | 6 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:54214 | Tue Jul 30 14:14:23 CEST 2024
    ESS_Marble_Topic_1 | 103 | 0 | 0 | 0 | 0 | 1 | 0
    MarbleEssTopicSub_1.MarbleEssTopicSub_1 | 103 | 0 | 0 | 0 | 0 | 1 | 0 | 10.94.78.7:48924 | Tue Jul 30 14:14:02 CEST 2024
    opr_ha_backend_full_sync_topic | 0 | 0 | 0 | 0 | 0 | 0 | 1
    366db09d-d667-4f68-b95e-eebb3454e916 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:46804 | Tue Jul 30 14:14:46 CEST 2024
    opr_toposync_summary_info_topic | 18 | 0 | 0 | 0 | 0 | 0 | 1
    dedc9589-7c0b-41b3-b9c3-820d81237bdb | 18 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:36876 | Tue Jul 30 14:14:41 CEST 2024
    DTStatusChange | 0 | 0 | 0 | 0 | 0 | 0 | 0
    opr_content_uploaded_topic | 0 | 0 | 0 | 0 | 0 | 0 | 0
    GlobalIdJMSTopic | 0 | 0 | 0 | 0 | 0 | 0 | 2
    47721ed8-bac9-403e-aa3c-7b4947ce3522 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:44600 | Tue Jul 30 14:12:20 CEST 2024
    1f8c8874-02e1-48e7-a555-75fbd909b804 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:56056 | Tue Jul 30 14:14:29 CEST 2024
    opr_healthcheck_heartbeat_topic | 772 | 0 | 0 | 0 | 0 | 1 | 0
    DURABLE_HEALTH_TOPIC_LISTENER.DURABLE_HEALTH_TOPIC_LISTENER | 772 | 0 | 0 | 0 | 0 | 1 | 0 | 10.94.78.7:51324 | Tue Jul 30 14:15:02 CEST 2024

    topic | subscriber | buffered | delivering | durable
    -------------------------------------------------------------------------------------------------------------
    ESS_Marble_Topic_1 | MarbleEssTopicSub_1 | 0 | 0 | Y
    opr_healthcheck_heartbeat_topic | DURABLE_HEALTH_TOPIC_LISTENER | 0 | 0 | Y


    The opr-jmsUtil output can be viewed in two parts, queues and topics:

    Queues:
    In queue-based messaging, messages are sent to a queue and consumed by a single consumer.
    queue | total | buffered | delivering | memory | pages | consumers
    -----------------------------------------------------------------------------------------------------------------
    opr_event_forward_queue | 0 | 0 | 0 | 0 | 0 | 0
    opr_action_launch_queue | 0 | 0 | 0 | 0 | 0 | 1


    Topics:
    In topic-based messaging, messages are sent to a topic and received by multiple subscribers, so a topic may have more than one subscriber.
    topic | total | buffered | delivering | memory | pages | dur-subs | nondur-subs
    ------------------------------------------------------------------------------------------------------------------------------
    opr_content_autoupload_topic | 0 | 0 | 0 | 0 | 0 | 0 | 1
    1f41c582-35b2-478a-8a19-bc1a0d5dd58b | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:50758 | Tue Jul 30 14:12:31 CEST 2024
    opr_marble_calc_result_topic | 76 | 0 | 0 | 0 | 0 | 0 | 2
    8f97ca0b-49f5-4fcb-9b63-2a469eadacfe | 35 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:41830 | Wed Jul 31 12:27:00 CEST 2024
    c10f8fb1-5256-411c-bd79-d5c1cad61390 | 41 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:41830 | Tue Jul 30 14:12:13 CEST 2024


    Columns are defined as:
    • queue/topic/subscriber: name of the queue/topic/subscriber.
    • total: total number of messages sent to the destination since the bus server process was started.
    • buffered: number of currently buffered messages. A large value (>1000) here suggests that one of the receivers/consumers has stopped processing messages (see the watch-loop sketch after this list).
    o Once again, this should help you find which log file to look at under /opt/HP/BSM/log.
    • delivering: subset of buffered messages that are currently being processed by a consumer.
    o If this value is zero while buffered is high, one of the receivers/consumers has probably failed.
    o Once again, this should help you find which log file to look at under /opt/HP/BSM/log.
    • memory: number of bytes currently allocated in memory.
    • pages: number of page files (0 if not paging).
    • consumers: number of connected queue readers.
    • dur-subs / nondur-subs: number of durable / non-durable topic subscriptions.
    • durable: Y indicates a durable subscription, N a non-durable one:
    o Durable messages are stored on disk or other persistent storage (/opt/HP/BSM/bus).
    o Non-durable messages are stored only in memory or transient storage; if the messaging system crashes or restarts, these messages are lost. OBM is designed to work with both.
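
    A quick, unsupported way to keep an eye on the buffered column (the field positions assume the pipe-separated layout shown above):

    # Re-run opr-jmsUtil every 60 seconds and print only the rows whose
    # third pipe-separated field (buffered) is greater than zero.
    while true; do
        /opt/HP/BSM/opr/support/opr-jmsUtil.sh -v \
            | awk -F'|' 'NF >= 6 { b = $3; gsub(/ /, "", b); if (b ~ /^[0-9]+$/ && b + 0 > 0) print $1, "buffered:", b }'
        sleep 60
    done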

    You can take each port number and find the subscriber and consumer. In the first example below, hpbsm_bus is the process name for the bus (see /opt/HP/BSM/opr/support/opr-support-utils.sh -ls for more information):
    # lsof -i :59424
    COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
    hpbsm_bus 115095 root 824u IPv6 518972 0t0 TCP puegcsvm72311.swinfra.net:smbdirect->puegcsvm72311.swinfra.net:59424 (ESTABLISHED)
    hpbsm_wde 116942 root 728u IPv6 547817 0t0 TCP puegcsvm72311.swinfra.net:59424->puegcsvm72311.swinfra.net:smbdirect (ESTABLISHED)

    # lsof -i :47318
    COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
    hpbsm_bus 115095 root 844u IPv6 601166 0t0 TCP puegcsvm72311.swinfra.net:smbdirect->puegcsvm72311.swinfra.net:47318 (ESTABLISHED)
    hpbsm_opr 118578 root 669u IPv6 617817 0t0 TCP puegcsvm72311.swinfra.net:47318->puegcsvm72311.swinfra.net:smbdirect (ESTABLISHED)

    # lsof -i :56026
    COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
    hpbsm_bus 115095 root 856u IPv6 601291 0t0 TCP puegcsvm72311.swinfra.net:smbdirect->puegcsvm72311.swinfra.net:56026 (ESTABLISHED)
    hpbsm_opr 119119 root 515u IPv6 667416 0t0 TCP puegcsvm72311.swinfra.net:56026->puegcsvm72311.swinfra.net:smbdirect (ESTABLISHED)


    You can combine these two commands together:
    # cat ./updated-opr-jms.sh
    #!/bin/bash

    check_consumer() {
        local ip=$1
        local port=$2
        local consumer_details

        # Look up which local processes hold a connection on this port
        consumer_details=$(lsof -i :"$port")

        if [ -n "$consumer_details" ]; then
            echo "$consumer_details" | sed 's/^/\t/'
        else
            echo " No process found for port $port"
        fi
    }

    # Walk the opr-jmsUtil output; whenever a line contains an ip:port pair,
    # print the processes on both ends of that connection underneath it.
    while IFS= read -r line; do
        echo "$line"
        if [[ $line =~ ([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+):([0-9]+) ]]; then
            ip="${BASH_REMATCH[1]}"
            port="${BASH_REMATCH[2]}"
            check_consumer "$ip" "$port"
        fi
    done < <(/opt/HP/BSM/opr/support/opr-jmsUtil.sh -v)


    When run, the script shows the networking details inline:

    # ./updated-opr-jms.sh
    Live Server: puegcsvm72311
    Last Alive: Thu Aug 15 16:57:56 CEST 2024 (10 seconds ago)
    ==============================================================================================================================

    queue | total | buffered | delivering | memory | pages | consumers
    -----------------------------------------------------------------------------------------------------------------
    opr_event_forward_queue | 0 | 0 | 0 | 0 | 0 | 0
    opr_action_launch_queue | 0 | 0 | 0 | 0 | 0 | 1
    recipient_notification | 0 | 0 | 0 | 0 | 0 | 1
    queue/alert_engine_alert | 0 | 0 | 0 | 0 | 0 | 1
    queue/alert_engine_notification | 0 | 0 | 0 | 0 | 0 | 1
    opr-so-forwarder-queue | 0 | 0 | 0 | 0 | 0 | 0
    opr_gateway_queue | 73 | 0 | 0 | 0 | 0 | 1
    failed_recipient_notification | 0 | 0 | 0 | 0 | 0 | 1

    topic | total | buffered | delivering | memory | pages | dur-subs | nondur-subs
    ------------------------------------------------------------------------------------------------------------------------------
    opr_oas_rtsm_view_permission_change_topic | 0 | 0 | 0 | 0 | 0 | 0 | 1
    f239e35f-1d45-45bc-81bd-57e3d16ba3a6 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:33432 | Tue Jul 30 14:12:46 CEST 2024
    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
    hpbsm_bus 115095 root 820u IPv6 518271 0t0 TCP puegcsvm72311.swinfra.net:smbdirect->puegcsvm72311.swinfra.net:33432 (ESTABLISHED)
    opr_as 115264 root 2101u IPv6 480031 0t0 TCP puegcsvm72311.swinfra.net:33432->puegcsvm72311.swinfra.net:smbdirect (ESTABLISHED)
    IMS.customer_1 | 18 | 0 | 0 | 0 | 0 | 0 | 3
    65590ce8-d4b2-422c-abb7-2d5efa724405 | 6 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:51018 | Tue Jul 30 14:13:20 CEST 2024
    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
    hpbsm_bus 115095 root 838u IPv6 601113 0t0 TCP puegcsvm72311.swinfra.net:smbdirect->puegcsvm72311.swinfra.net:51018 (ESTABLISHED)
    hpbsm_biz 117245 root 637u IPv6 604169 0t0 TCP puegcsvm72311.swinfra.net:51018->puegcsvm72311.swinfra.net:smbdirect (ESTABLISHED)
    50a8c48c-bada-4c82-80ae-374420dde4ed | 6 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:48892 | Tue Jul 30 14:13:58 CEST 2024
    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
    hpbsm_bus 115095 root 842u IPv6 601239 0t0 TCP puegcsvm72311.swinfra.net:smbdirect->puegcsvm72311.swinfra.net:48892 (ESTABLISHED)
    hpbsm_mar 117041 root 658u IPv6 666418 0t0 TCP puegcsvm72311.swinfra.net:48892->puegcsvm72311.swinfra.net:smbdirect (ESTABLISHED)
    81c6ca70-c831-4134-805f-27b44fe865eb | 6 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:39352 | Tue Jul 30 14:11:21 CEST 2024
    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
    hpbsm_bus 115095 root 806u IPv6 481950 0t0 TCP puegcsvm72311.swinfra.net:smbdirect->puegcsvm72311.swinfra.net:39352 (ESTABLISHED)
    opr_as 115264 root 1700u IPv6 260496 0t0 TCP puegcsvm72311.swinfra.net:39352->puegcsvm72311.swinfra.net:smbdirect (ESTABLISHED)
    opr_scripting_host_topic | 0 | 0 | 0 | 0 | 0 | 0 | 0
    opr_action_responses | 0 | 0 | 0 | 0 | 0 | 0 | 1
    83edcde5-d14f-479b-b97f-ea5ea5cede3e | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 10.94.78.7:44612 | Tue Jul 30 14:12:20 CEST 2024
    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------


    See portal.microfocus.com/.../KM000033009 for more information, but in short: if something is buffering (that’s the buffered column), it’s bad news.


    Using coda to see performance issues:
    These event statistics are also written into coda, which you can then query with ovcodautil. I had stopped the collectors on this system, which is why most of the values are zero, but I think you get the idea:
    # ovcodautil -ds OMi -o EventStats -m ClosedAutomation,ClosedEPI,ClosedPairwise,ClosedSBEC,CorrelAEC,CorrelSBEC,CorrelTBEC,DiscardDedup,DiscardEPI,DiscardAge,DiscardStorm,DiscardSBEC,DiscardSuppression,EventCount > /tmp/l
    # tail /tmp/l
    11/15/22 03:05:00 0.00 0.00 11.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 15.00
    11/15/22 03:10:00 0.00 0.00 12.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18.00
    11/15/22 03:15:00 0.00 0.00 9.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14.00
    11/15/22 03:20:00 0.00 0.00 12.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 16.00
    11/15/22 03:25:00 0.00 0.00 10.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 15.00
    11/15/22 03:30:00 0.00 0.00 12.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 20.00
    11/15/22 03:35:00 0.00 0.00 38.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 44.00
    11/15/22 03:40:00 0.00 0.00 10.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 15.00
    11/15/22 03:45:00 0.00 0.00 11.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 17.00
    11/15/22 03:50:00 0.00 0.00 4.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 20.00
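
    If you just want the event rate over time, you can post-process that dump (a sketch that assumes the metric order of the -m list above, where EventCount is the last column):

    # awk '{ print $1, $2, "EventCount:", $NF }' /tmp/l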


    One big log file:
    Sometimes it’s useful to view all the log files in chronological order. Why this script? The shell version I used before was too slow, taking hours to process one loggrabber, whereas this unsupported Python version processes the logs in less than a minute. To use it, change “log_dir” to match the log directory you want to process:


    # unsupported script
    import os
    import re

    # Please change as required:
    log_dir = "/tmp/pathto/log"
    temp_log_file = "/tmp/temp_biglogfile.log"
    big_log_file = "/tmp/biglogfile.log"

    timestamp_regex = re.compile(r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}")

    def extract_timestamp(line):
        match = timestamp_regex.match(line)
        if match:
            # Extract and return the timestamp part
            return ' '.join(line[:23].split())
        return None

    # Pass 1: copy every timestamped line into one file, prefixed with its
    # path relative to log_dir so you can see which log it came from.
    with open(temp_log_file, 'w') as temp_file:
        for root, _, files in os.walk(log_dir):
            for file_name in files:
                if file_name.startswith("biglogfile"):
                    continue
                if file_name.endswith(".log") or re.match(r".*\.log\.\d+$", file_name):
                    log_file_path = os.path.join(root, file_name)
                    with open(log_file_path, 'r') as log_file:
                        for line in log_file:
                            timestamp_str = extract_timestamp(line)
                            if timestamp_str:
                                relative_path = os.path.relpath(root, log_dir)
                                temp_file.write(f"[{relative_path}/{file_name}] {line}")

    # Pass 2: sort all lines by the timestamp (the two fields after the [path] prefix)
    with open(temp_log_file, 'r') as temp_file, open(big_log_file, 'w') as big_file:
        sorted_lines = sorted(temp_file.readlines(), key=lambda x: ' '.join(x.split()[1:3]))
        big_file.writelines(sorted_lines)

    os.remove(temp_log_file)

    To run:
    # python ./script

    Please note, if you are only interested in log files for opr-backend, use the JVM directory name:
    log_dir = "/tmp/vf/DPS_vgdod2vr/log/opr-backend"

    The output is in /tmp/vf/biglogfile.log.

    This is a sample output where you can see the JVM directory names along with the log file name:


    [opr-backend/opr-heartbeat.log] 2024-08-01 13:32:09,001 [HeartBeatConfig-1] WARN UcmdbConfigSynchronizer.updateConfigurationsCheckAll(1001) - UCMDB has duplicate core IDs for nodes 1622f201197fc2730a630b1b34e1f0a6 and 7336a8d774481283eab2324eacb79adf / core ID d96c1a02-3d03-757a-1ea5-b475324e
    [opr-backend/opr-heartbeat.log] 2024-08-01 13:32:09,001 [HeartBeatConfig-1] WARN UcmdbConfigSynchronizer.updateConfigurationsCheckAll(1001) - UCMDB has duplicate core IDs for nodes 5aee9f6eab74b7bd2cefb78f02d566fa and d555ca7047b2d1c6daf543752dd64269 / core ID 385d6ed6-4f0e-7585-0e70-9cdeb2c8
    [opr-backend/opr-backend.log] 2024-08-01 13:32:09,001 [HeartBeatConfig-1] INFO EventSubmitter.submit(421) - successfully submitted events
    [opr-backend/opr-backend.log] 2024-08-01 13:32:09,001 [HeartBeatConfig-1] INFO EventSubmitter.submit(393) - accepted event Event[Id=a24dad31-bef6-4614-93da-dfb14421192d, Severity=CRITICAL, Title=Agent health not being monitored, due to duplicate core id :d96c1a02-3d03-757a-1ea5-b475324e2f33,
    [opr-backend/opr-backend.log] 2024-08-01 13:32:09,001 [HeartBeatConfig-1] INFO EventSubmitter.submit(421) - successfully submitted events
    [opr-backend/opr-backend.log] 2024-08-01 13:32:09,001 [HeartBeatConfig-1] INFO EventSubmitter.submit(393) - accepted event Event[Id=42c03b06-a4ae-4d33-8104-05be7ba63844, Severity=CRITICAL, Title=Agent health not being monitored, due to duplicate core id :385d6ed6-4f0e-7585-0e70-9cdeb2c856ed,
    [jboss/bsm_sdk_utils.log] 2024-08-01 13:32:09,002 [quartzScheduler_Worker-60] (UcmdbSdkPool.java:254) INFO - Successfully got UCMDB service provider
    [jboss/bsm_sdk_utils.log] 2024-08-01 13:32:09,002 [quartzScheduler_Worker-60] (UcmdbSdkPool.java:235) INFO - Start get UCMDB service provider. Protocol: https; host: localhost; port: 8443; root context: /
    [jboss/bsm_sdk_utils.log] 2024-08-01 13:32:09,002 [quartzScheduler_Worker-60] (UcmdbSdkPool.java:254) INFO - Successfully got UCMDB service provider
    [jboss/bsm_sdk_utils.log] 2024-08-01 13:32:09,002 [quartzScheduler_Worker-60] (UcmdbSdkPool.java:235) INFO - Start get UCMDB service provider. Protocol: https; host: localhost; port: 8443; root context: /
    [opr-backend/opr-heartbeat.log] 2024-08-01 13:32:09,002 [HeartBeatConfig-1] WARN UcmdbConfigSynchronizer.updateConfigurationsCheckAll(1001) - UCMDB has duplicate core IDs for nodes 5a520053bcfb45bbfe713bc6cf85f92e and 4d6b4277d563450b80ccbe9684e8dbb3 / core ID cba13aba-9ca1-7586-11b7-cb566312
    [opr-backend/opr-backend.log] 2024-08-01 13:32:09,002 [HeartBeatConfig-1] INFO EventSubmitter.submit(421) - successfully submitted events
    [opr-backend/opr-backend.log] 2024-08-01 13:32:09,002 [HeartBeatConfig-1] INFO EventSubmitter.submit(393) - accepted event Event[Id=1546cc29-d87d-4a22-9e67-e781833f64b9, Severity=CRITICAL, Title=Agent health not being monitored, due to duplicate core id :cba13aba-9ca1-7586-11b7-cb56631228a2,
    [jboss/bsm_sdk_utils.log] 2024-08-01 13:32:09,003 [quartzScheduler_Worker-60] (UcmdbSdkPool.java:254) INFO - Successfully got UCMDB service provider
    [opr-backend/opr-heartbeat.log] 2024-08-01 13:32:09,003 [HeartBeatConfig-1] WARN UcmdbConfigSynchronizer.updateConfigurationsCheckAll(1001) - UCMDB has duplicate core IDs for nodes ca229e417457d54fe9211fa8f0fe3ebf and 73ea6a1e3104308e465170e9a42270b6 / core ID b113202c-9a0b-75ac-168a-dae8f892
    [opr-backend/opr-backend.log] 2024-08-01 13:32:09,003 [HeartBeatConfig-1] INFO EventSubmitter.submit(421) - successfully submitted events


    If you are only interested in the logs after a certain time, in this case everything from the first line matching ‘2024-08-01 11’ onwards:
    grep -an '2024-08-01 11' /tmp/vf/biglogfile.log | head -n 1 | awk -F: '{print "NR>=" $1}' | xargs -I{} awk '{}' /tmp/vf/biglogfile.log > /tmp/final.file
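
    A simpler equivalent (an unsupported sketch that relies on the fixed-width timestamps sorting lexicographically; fields 2 and 3 are the date and time after the [path] prefix):

    awk '$2 " " $3 >= "2024-08-01 11"' /tmp/vf/biglogfile.log > /tmp/final.file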

    Please let me know if you have any feedback or ideas on how this can be improved.

    I hope you find this as useful as I do.

    --
    If you found this post useful, give it a “Like” or click on "Verify Answer" under the "More" button

  • 0 in reply to   

    Thank you all