
The Use of the OPTIC Data Lake in Operations Bridge (Update)

by Micro Focus Employee in IT Operations Management

In the era of big data technologies, organizations are looking for ways to adopt a new data collection, storage and processing model, in order to solve big data management challenges and increase operational efficiency.

Imagine a situation where multiple data sources each send data in their own format to multiple collection stores. Performing data processing, data analysis and various services on top of all this data becomes an extremely challenging task. There is therefore high demand to simplify data collection and storage from various endpoints and to enable common data analysis, data visualization and value-adding services on top of the collected data.

Industry experts accordingly recommend defining a central data lake in order to simplify and empower capabilities such as advanced analytics.

To address this requirement, we at Micro Focus are continuously developing innovative solutions to help our customers achieve greater operational efficiency in their data management. Our strategy is to collect and store data only once, instead of collecting and storing it multiple times in multiple places for different needs, and to provide various services on top of this multi-source data. (These services can be used across multiple ITOM solutions, or be restricted to a particular product or solution.)

As a result, we developed OPTIC (Operations Platform for Transformation, Intelligence, and Cloud), which is a set of common services used by Micro Focus ITOM software. In part, it is a rebranding of Collect Once Store Once (COSO), but it covers more than COSO did. The OPTIC Data Lake (formerly known as the COSO Data Lake), which is based on Micro Focus Vertica and the ITOM Container Deployment Foundation (CDF), provides data storage and is able to receive and process high-volume and high-velocity data from a variety of independent data sources. The OPTIC Data Lake therefore acts as a common data ingestion platform, offering you the following key benefits:

  • Optimized data ingestion providing a common platform to process and store data, with built-in machine learning and analytics
  • Ability to handle terabytes of data at a high ingestion rate
  • Reduced complexity: easier integrations, pre-integrated use cases, the ability to scale out horizontally as needed
  • Easy installation and configuration, simple management
  • Built-in analytical functions and SQL access to machine learning (Vertica capabilities)
  • Cost-effectiveness: Lower maintenance costs

The OPTIC Data Lake is utilized by a set of our ITOM solutions. Containerized solutions are available as microservice components that include analytics, dashboarding and reporting and can be easily deployed anywhere in your environment. The shift to a microservices architecture enables us to integrate and manage our assets more easily, adapt faster to market changes, offer better scalability and reduce the overall complexity of your IT environment.

For more information, see the blog article Announcing OPTIC - The Operations Platform for Transformation, Intelligence and Cloud.

Use of the OPTIC Data Lake in Containerized Operations Bridge

The OPTIC Data Lake was initially introduced with Operations Bridge 2018.02 (as the COSO Data Lake). With every release, we enhanced it by supporting more data types that can be streamed into OPTIC, bringing services on top of the collected data and introducing performance improvements.

Initially, Operations Bridge was able to collect system metrics from Operations Agents streamed into the Data Lake. Subsequent releases added metric streaming from SiteScope and Business Process Monitor (BPM) probes, followed by data from Management Packs, Real User Monitor (RUM), Diagnostics and Network Operations Management (NOM). The data streamed to the OPTIC Data Lake can be visualized in Business Value Dashboards (the Stakeholder Dashboard capability) and Operations Bridge Manager (OBM) Performance Dashboards (PD). You can also Bring Your Own Business Intelligence (BYOBI) tool to visualize the collected data.

In addition, Automatic Event Correlation (AEC), the analytic capability on top of the OPTIC Data Lake, runs in the background providing automatic correlation of events using a machine-learning algorithm. 

Figure 1. A Graphical overview of the OPTIC Data Lake

As you can see in the diagram, we provide a single framework for managing data collection and storing the data into the OPTIC Data Lake (powered by Vertica). The OPTIC Data Lake possesses scalable interfaces to ingest and process data. Data Access APIs allow you to access the collected data out of the OPTIC Data Lake, in order to use it for other purposes than the ones presented in the diagram.
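The exact surface of the Data Access APIs is product-specific, but conceptually a consumer composes a time-bounded query against a metric table and receives rows back. The sketch below illustrates only that idea; the endpoint path, parameter names and the `build_metric_query` helper are hypothetical, not the documented OPTIC API:

```python
from urllib.parse import urlencode

def build_metric_query(base_url, table, start, end, hosts=None):
    """Compose a hypothetical time-bounded metric query URL.

    base_url  -- Data Lake data-access endpoint (assumed)
    table     -- logical metric table to read from
    start/end -- ISO-8601 timestamps bounding the query window
    hosts     -- optional list of node names to filter on
    """
    params = {"table": table, "start": start, "end": end}
    if hosts:
        params["hosts"] = ",".join(hosts)
    return f"{base_url}/api/v1/query?{urlencode(params)}"

url = build_metric_query(
    "https://datalake.example.com", "system_metrics",
    "2021-06-01T00:00:00Z", "2021-06-02T00:00:00Z",
    hosts=["node1", "node2"],
)
```

A reporting or analytics consumer would issue such a request and receive the stored rows, independently of which collector originally produced them.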

Services on Top of the OPTIC Data Lake

As already outlined in the overview diagram, the following services provided on top of the OPTIC Data Lake are used by Operations Bridge:

OPTIC Reporting: As already mentioned above, the data from multiple sources streamed to the OPTIC Data Lake can be visualized either using BVD (the Operations Bridge Stakeholder Dashboard capability) or a business intelligence (BI) tool of your choice (for example, Microsoft Power BI, SAP Business Intelligence, Tableau, IBM Cognos, etc.).

The reporting methods available in Operations Bridge can therefore be divided into two categories:

  • BVD-based reporting, where you can choose between out-of-the-box reports (which are included with the product and are based on parameterized dashboards) and custom BVD-based reports
  • Custom reporting using the BI tool of your choice (a.k.a. BYOBI – Bring Your Own BI). BYOBI allows controlled access to the OPTIC Data Lake data for reporting and business intelligence with the 3rd-party tools

Out-of-the-box BVD-based reports are available for the following categories:

  • System Management – These reports provide information about the availability and performance of physical systems in your environment, such as average utilization of resources for a chosen period of time. You can use the Agent Metric Collector, Metric Streaming policies or SiteScope to collect system infrastructure metrics.
  • Events – These reports show statistics of the events forwarded to OBM from various data sources, such as Agent Metric Collector, SiteScope, RUM and BPM. Use these reports to gain insight into event trends, the severity of events and event assignments.
  • Synthetic Transactions – These reports display information about synthetic polling of end-user experience, availability and performance of applications. You can use BPM to collect synthetic transaction metrics.
  • Real User Monitors (RUM) – Use these reports to view information about real end-user experience, availability and performance of applications. You can use RUM to monitor real user behaviour and collect the required metrics.
  • Network Operations Management (NOM) – These reports help you predict resource utilization, detect network problems and take corrective actions before critical business availability is impacted.
  • Operations Bridge – Data Center Automation (DCA) composite report (System Details – Utilization and Vulnerability). This report provides you with the detailed information about the nodes that are monitored by both OBM and DCA. You can view the utilization and vulnerability trend of a particular node.

Figure 2. OPTIC Data Lake with cross-domain insights in real time

The metrics collected by Operations Agent, SiteScope, BPM and RUM are sent to the OPTIC Data Lake and are stored in schemas inside the Vertica database in a standardized format called the Operations Bridge Data Model.
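The value of a standardized model is that every collector's records end up in one common shape. The Operations Bridge Data Model defines its own tables and columns; the field names below are purely illustrative, a sketch of the normalization idea rather than the real schema:

```python
# Sketch: mapping collector-specific metric records onto one common
# shape, so downstream reporting does not care about the source format.
COMMON_FIELDS = ("timestamp", "node", "metric", "value", "source")

def normalize(record, source):
    """Map a collector-specific record to the common shape."""
    if source == "oa":            # Operations Agent style record (assumed)
        return {"timestamp": record["ts"], "node": record["host"],
                "metric": record["class"] + ":" + record["name"],
                "value": record["val"], "source": "oa"}
    if source == "sitescope":     # SiteScope style record (assumed)
        return {"timestamp": record["time"], "node": record["target"],
                "metric": record["monitor"], "value": record["measurement"],
                "source": "sitescope"}
    raise ValueError(f"unknown source: {source}")

rows = [
    normalize({"ts": 1622505600, "host": "node1", "class": "CPU",
               "name": "utilization", "val": 73.2}, "oa"),
    normalize({"time": 1622505660, "target": "node2",
               "monitor": "disk_free", "measurement": 41.0}, "sitescope"),
]
```

Once all rows share one schema, a single dashboard or BI query can span data that originated from any collector.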

To get started with OPTIC Reporting, you need to deploy the OPTIC Reporting capability from the containerized Operations Bridge suite, as part of which OPTIC Data Lake and BVD components are deployed along with other services. These components are containerized so they can be installed on premises or in AWS or Azure. They are also deployed and available in Operations Bridge SaaS. For details on Operations Bridge SaaS, see the following blog articles: Announcing Operations Bridge – SaaS, the fast path to AIOps and Operations Bridge – SaaS Overview.

For general information about Operations Bridge OPTIC Reporting, see the Get Started topic. Also, check out this recent blog post: Getting Started with OPTIC Reporting.

Automatic Event Correlation (AEC): The first analytic capability on top of the OPTIC Data Lake, Automatic Event Correlation works by analyzing patterns in the event stream and using these patterns to group together events which, with a high probability, originate from the same problem. A group of related events is then transformed into a single (correlated) event, which is sent back to OBM. This grouping facilitates event processing for an operator, since it shows all related events together (grouped by AEC), making it easier to identify and work on the root cause. Closing the group event automatically closes all related events.
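The mechanics described above can be illustrated with a toy stand-in. The real AEC uses a machine-learning algorithm over patterns in the event stream; the sketch below substitutes a fixed time window as the grouping criterion purely to show the group/close behavior:

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    ts: float          # arrival time in seconds
    node: str          # CI the event was raised on
    title: str
    closed: bool = False

@dataclass
class GroupEvent:
    members: list = field(default_factory=list)

    def close(self):
        # Closing the correlated event closes every member event.
        for e in self.members:
            e.closed = True

def correlate(events, window=60.0):
    """Toy grouping: events arriving within `window` seconds of each
    other are assumed to stem from one problem (the real AEC learns
    patterns instead of using a fixed window)."""
    groups, current = [], None
    for e in sorted(events, key=lambda e: e.ts):
        if current and e.ts - current.members[-1].ts <= window:
            current.members.append(e)
        else:
            current = GroupEvent(members=[e])
            groups.append(current)
    return groups

burst = [Event(0, "db1", "disk full"), Event(10, "app1", "tx slow"),
         Event(20, "web1", "5xx spike"), Event(500, "net3", "link flap")]
groups = correlate(burst)
groups[0].close()   # operator closes the correlated event
```

The operator works on one correlated event per problem instead of three raw events, and closing it propagates to all members.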


Figure 3. AI-based automatic event correlation built on OPTIC

AEC can correlate the events of topologically related CIs. In the past, it was necessary to create CI collections manually in the Run-time Service Model (RtSM). Starting with Operations Bridge 2020.08, AEC can automatically determine correlation groups by reading the RTSM topology data and checking which CIs are linked with Impact Relationships.
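Deriving correlation groups from topology amounts to finding which CIs are connected, directly or transitively, by Impact Relationships. A minimal sketch of that idea (the relationship data would come from the RTSM; here it is just a list of CI pairs, and the function is illustrative, not the product's implementation):

```python
from collections import defaultdict

def correlation_groups(cis, impact_relationships):
    """Group CIs that are linked, directly or transitively, by
    impact relationships (a simple connected-components walk)."""
    adj = defaultdict(set)
    for a, b in impact_relationships:
        adj[a].add(b)
        adj[b].add(a)
    seen, groups = set(), []
    for ci in cis:
        if ci in seen:
            continue
        stack, group = [ci], set()
        while stack:                 # depth-first traversal
            node = stack.pop()
            if node in group:
                continue
            group.add(node)
            stack.extend(adj[node] - group)
        seen |= group
        groups.append(group)
    return groups

groups = correlation_groups(
    ["db1", "app1", "web1", "printer9"],
    [("db1", "app1"), ("app1", "web1")],
)
```

CIs with no impact relationship to the rest (here `printer9`) end up in their own group, so their events are never correlated with the unrelated application stack.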

It is worth noting that Operations Bridge offers multiple correlation techniques, such as Topology-Based Event Correlation (TBEC), Stream-Based Event Correlation (SBEC) and the above-mentioned AEC. These techniques can all be used together so that they complement each other and are each used to their best advantage. For more information, see Best practices to use multiple event correlation techniques. Also, check out the following blog article: Anomaly detection and metric analytics on OPTIC Data Lake.

A Few Words about CDF and Vertica

Now that we have introduced the OPTIC Data Lake, let’s briefly talk about the processes and technologies behind it. The OPTIC Data Lake is based on the Container Deployment Foundation (CDF).

The CDF is a new delivery and deployment model that provides a container management and orchestration framework leveraging the Docker container architecture and using Kubernetes for container orchestration. A set of Micro Focus ITOM solutions are successfully running on the CDF enabling easier integrations of capabilities within the same solution. (In fact, capabilities come as pre-integrated containers, with a lot of pre-integration delivered for standard use cases.) Based on the open-source software – Docker containers and Kubernetes clusters – the CDF brings quick software installation and a consistent upgrade model with rolling updates. All this is done on top of an automated and modern orchestration layer, built natively to handle container clusters at scale.
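The rolling-update model mentioned above is the standard Kubernetes mechanism for replacing pods incrementally during an upgrade. The fragment below is a generic illustration of that mechanism, not an actual CDF manifest; all names and the image reference are placeholders:

```yaml
# Generic Kubernetes Deployment showing the rolling-update strategy
# the CDF builds on -- names and image are illustrative placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-capability
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-capability
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod down during an upgrade
      maxSurge: 1         # at most one extra pod created temporarily
  template:
    metadata:
      labels:
        app: example-capability
    spec:
      containers:
        - name: capability
          image: registry.example.com/capability:2021.05
```

With these settings, an upgrade replaces pods one at a time, so the capability stays available while new container versions roll in.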

The OPTIC Data Lake is powered by Vertica, a software-based analytics platform that provides high-performance advanced analytics and in-database machine learning, supporting all major cloud platforms and all popular data formats. See the Vertica documentation to learn more.

Don’t wait! Start your OPTIC journey today! The container technology it is based on is an industry trend that is rapidly maturing and offers tremendous benefits, so take the opportunity to learn about it and take advantage of container architecture features such as microservices. With our rich portfolio, we can combine containerized and non-containerized capabilities and deliver innovative solutions to support your business needs. If you are already using OBM, you do not have to replace it immediately – if need be, classic versions of OBM can be integrated with the OPTIC Data Lake on a container deployment. If you are interested and need more information, please contact your Micro Focus representative for assistance.

We encourage you to try out our new features and enhancements! For further information on our offerings, visit the Operations Bridge product page, explore our documentation resources and check out our videos and blogs.

If you have feedback or suggestions, don’t hesitate to comment on this article.

Explore the full capabilities of Operations Bridge by taking a look at these pages on our Practitioner Portal: Operations Bridge Manager, SiteScope, Operations Agent, Operations Connector (OpsCx), Operations Bridge Analytics, Application Performance Management (APM) and Operations Orchestration (OO).

Events

See all the Micro Focus events worldwide.

Read all our news at the Operations Bridge blog.

Related items

Have technical questions about Operations Bridge? Visit the OpsBridge User Discussion Forum.

Keep up with the latest Tips & Info about Operations Bridge.

Do you have an idea or Product Enhancement Request about Operations Bridge? Submit it in the Ops Bridge Idea Exchange.


Labels:

Operations Bridge