If you look at this picture, you can surmise that wonky workloads like this are unlikely to deliver business value without excessive risk.
Is the captain aware of the shape of his cargo?
If he is, what can he do?
What is his approach, full steam to the harbour and let someone else deal with the problem once there?
Not so long ago, I was involved in a meeting with people from the IT department of a large customer. They indicated that after several years of standing up cloud services, they knew that roughly 400 different services had been developed, but didn’t really know how, or even if, they were being used. They freely admitted that the goal of this transformation was to offer IT capacity in the form of a private cloud, but that they now had little idea whether any benefits were being derived from it.
Many seem to think that pushing the resource deployment issue to a different model will simplify and resolve the issues. The thought process is to “let someone else deal with the hardware”. But the agility that abstraction provides comes with its fair share of issues, including visibility into what’s being consumed and what’s actually useful. These are recurrent issues we hear about, especially now that IT is expected to account for the cost of cloud resources.
The impact of the weakest link
Whether you’re in a similar context to theirs, haven’t started yet, or operate at a different scale, it’s easy to see that your IT performance can only be as good as the weakest link in the chain of resources used to support the business consumers, both internal and external.
Furthermore, when consuming services from another service provider for specific capabilities, such as stock or credit checks, the end-to-end performance of your own service will be limited if those services degrade. Again, you are only going to be as good as the weakest link in the chain.
Keep in mind that even the big cloud providers have rough days, when reputable worldwide cloud services simply stop. Outages leave complaining users feeling the pinch of the degraded service, but their cost can also be measured in millions of dollars.
The potential of new technologies
Now consider the newer technologies that everyone is excited about, such as containers. Just as with VMs, the fact that a host re-spawns containers when they fail doesn’t make failures someone else’s problem. What happens if those containerized applications or components die and respawn every minute? The end user or service suffers just as much. Put containerized infrastructure into the cloud, and you may simply be compounding the set of issues to debug when the going gets tough.
I think monitoring has a major role to play in bringing the right information to bear and analysing the situation. Since newer technologies are created ever faster, and are in many cases adopted without IT operations being involved at all, you want to make sure that monitoring solutions can handle future complexity and scale.
We’re seeing more and more containerized, componentized application designs come out of DevOps teams. Those teams carry as much responsibility as operations for the targeted business benefits of the newer applications that are rolled out, possibly more.
Research we conducted corroborates these observations, and deciding which level of monitoring is “perfect enough” is easier with tools that incorporate built-in expert knowledge.
What is the responsibility of the captain?
You don’t want to find yourself in the situation of the captain of the ship pictured above, who is clearly ploughing on with possibly zero visibility of the rear of his ship. Maybe he considers that his role is to get the ship to harbour with all the containers aboard, and that somebody else then has the headache of discharging the messy load. (I wouldn’t be surprised if the captain has lost a few along the way.)
In a relationship between the service provider and the business owner, truly working together to gain maximum visibility at all times is key to making the best decisions. Saying “it’s someone else’s problem” is NOT a business-responsible approach.
That customer I mentioned earlier is now faced with a dilemma. Their business is asking for evidence that the investment in the internal cloud is providing benefit. This is when the big questions start to emerge, the ones they should have thought about at the outset. If they had visibility, they would know what’s “useful” and could better manage the resources and their costs.
Now they face a few questions:
How will they determine which services are up and running and serving users?
How will they see which resources are actually assigned to each service?
How will they calculate the cost of commissioning each service versus the business benefit derived?
What if a service is useful to users but has a widely varying workload?
How would services that don’t appear to be used be de-commissioned?
How can they determine if containerized components are the root of application issues?
If the application inside a container has issues, how will they know whether it’s due to resource starvation, a host issue, or a connectivity issue (yes, they still exist) rather than an actual application issue?
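That last question is a triage step that can be partially automated. As a hedged illustration, here is a minimal Python sketch that sorts a stopped container into a coarse root-cause bucket. The field names mirror the `State` block that tools such as `docker inspect` report, but the data here is invented and nothing talks to a real container runtime.

```python
# Hypothetical sketch: triage a dead container from its recorded state.
# Field names follow the shape of `docker inspect` output ("OOMKilled",
# "ExitCode"), but the sample data below is invented for illustration.

def classify_exit(state: dict) -> str:
    """Return a coarse root-cause bucket for a stopped container."""
    if state.get("OOMKilled"):
        return "resource starvation (out of memory)"
    code = state.get("ExitCode", 0)
    if code == 0:
        return "clean exit (check why it was scheduled to stop)"
    if code in (137, 143):  # SIGKILL / SIGTERM delivered from outside
        return "killed externally (host pressure or orchestrator action)"
    return "application error (inspect the container's own logs)"

# Sample states, as a monitoring collector might capture them.
oom   = {"OOMKilled": True,  "ExitCode": 137}
crash = {"OOMKilled": False, "ExitCode": 1}

print(classify_exit(oom))    # resource starvation (out of memory)
print(classify_exit(crash))  # application error (inspect the container's own logs)
```

A real monitoring pipeline would of course also fold in host metrics and network checks before settling on a verdict; this only shows the shape of the decision.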
Many questions may remain unanswered until a specialist comes along and decodes log files and other information to work out where the issues lie.
Few organizations have access to such specialists, and even then they are rarely available when most needed. Few business users will wait for them to free up before using their services.
How to gain Autonomous Operations for Hybrid IT is detailed in our ebook
Imagine an operations configuration where expertise of this nature is built in.
Imagine how things could be for that company (and maybe for you) if that operations setup could automatically discover and show each cloud service, what it depends on, and which application and infrastructure components it is consuming.
How about correlating the measurements such as service response times and throughputs to infrastructure health?
Are you attracted to “What-if” modelling that looks at the current configuration, and proposes a better recipe?
What if that same system could manage containers, capturing all the data on the container hosts, each container (and there might be hundreds or thousands of them), AND the applications inside them?
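To make the correlation idea concrete, here is a minimal, self-contained sketch, with invented sample data, that computes the Pearson correlation between service response times and a host metric. A strong positive correlation is a hint that the infrastructure, not the application, is the bottleneck.

```python
# Hypothetical sketch: correlate service response times with an
# infrastructure metric (here, host CPU utilisation). The numbers are
# invented for illustration, not taken from any real system.
from statistics import mean

def pearson(xs, ys):
    """Plain Pearson correlation coefficient of two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    dx = sum((x - mx) ** 2 for x in xs) ** 0.5
    dy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (dx * dy)

response_ms = [120, 135, 180, 240, 310, 400]  # service response times
cpu_pct     = [35,  40,  55,  70,  85,  95]   # host CPU at the same instants

r = pearson(response_ms, cpu_pct)
print(f"correlation: {r:.2f}")
```

A monitoring platform does this continuously across many metric pairs and topologies; the sketch just shows the single-pair arithmetic.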
These characteristics and more are contained in our Operations Bridge solution, which can now be deployed in a containerized installation to accelerate time to value, and gain flexibility and scalability. See my previous blog to learn about the latest features of OpsBridge.
It’s crucial to gain access to and centralize logs, events, metrics, and topology, because automated analysis can then tune the algorithms applied to all that data, spotting patterns, anomalies, and trends that would take humans forever to find.
It is even better when that analysis derives seasonal baselines, so your staff doesn’t need to set up and modify thresholds.
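As an illustration of what a seasonal baseline means in practice, here is a hedged Python sketch (not the product’s actual algorithm, and the history data is invented): it learns a per-hour-of-day mean and standard deviation, then flags samples more than three sigma from that hour’s baseline, so no one has to set static thresholds by hand.

```python
# Hypothetical sketch of a seasonal baseline: learn a per-hour-of-day
# mean and standard deviation from history, then flag new samples that
# fall more than k sigma from that hour's baseline. Illustration only,
# not the Operations Bridge algorithm.
from collections import defaultdict
from statistics import mean, pstdev

def build_baseline(samples):
    """samples: list of (hour_of_day, value). Returns {hour: (mean, std)}."""
    by_hour = defaultdict(list)
    for hour, value in samples:
        by_hour[hour].append(value)
    return {h: (mean(v), pstdev(v)) for h, v in by_hour.items()}

def is_anomaly(baseline, hour, value, k=3.0):
    mu, sigma = baseline[hour]
    return abs(value - mu) > k * max(sigma, 1e-9)  # guard zero variance

# Invented history: a nightly batch makes hour 2 busy, hour 14 quiet.
history = [(2, v) for v in (80, 85, 78, 82)] + [(14, v) for v in (10, 12, 9, 11)]
bl = build_baseline(history)

print(is_anomaly(bl, 14, 85))  # True: 85 at 2pm is far above its baseline
print(is_anomaly(bl, 2, 85))   # False: 85 at 2am is normal for that hour
```

The point of the seasonality is visible in the last two lines: the same value is an anomaly at one hour and perfectly normal at another, which a single static threshold cannot express.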
According to Gartner and other specialists, analysis of this kind provides the means to enable IT transformation. I have observed numerous customers using our Operations Bridge capabilities achieve a high degree of autonomous operations, freeing their staff to spend valuable time on business priorities.
This newsletter details how we match our Operations Bridge solution with the next generation technologies that Gartner indicates should be adopted going forward.
You can learn how some of our customers have gained substantial benefits through adoption of our solution.
For more information on Operations Bridge
- See our blogs here: https://community.microfocus.com/t5/IT-Operations-Management-ITOM/bg-p/sws-571
- Consult our Operations Bridge web pages here