Photo: Bostjan Kaluza
IT operations teams often pursue more than one approach to infrastructure monitoring (device, network, server, application, and storage), with the implication that the whole equals the sum of its parts. According to a 2015 Application Performance Monitoring survey, 65 percent of surveyed companies own more than 10 different monitoring tools.
Despite the growth in instrumentation capabilities and in the amount of collected data, enterprises have barely used these much larger data sets to make availability and performance processes, such as root cause analysis and incident prediction, more effective. W. Cappelli (Gartner, October 2015) emphasizes in a recent report that “although availability and performance data volumes have increased by an order of magnitude over the last 10 years, enterprises find data in their possession insufficiently actionable … Root causes of performance problems have taken an average of 7 days to diagnose, compared to 8 days in 2005 and only 3 percent of incidents were predicted, compared to 2 percent in 2005”. The key question is: how can enterprises make sense of these piles of data?
Bostjan Kaluza ends his article with the following conclusion:
"The analysis of collected data processed by an ITOA solution powered by machine learning now gains a completely new perspective. The data collected by separated monitoring solutions could be analyzed simultaneously resulting in semantically annotated sequence of events. The short list of possible root causes could be significantly reduced by applying probabilistic matching, fuzzy logic, linguistic correlation, and frequent pattern mining. And, finally, reasoning about the most probable root causes performed by automatic inference now takes into account environment dependency structure as well as previous incidents."
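One of the techniques the quote names, frequent pattern mining, can be illustrated with a minimal sketch: counting which events from separate monitoring tools repeatedly co-occur in the same time window, so that recurring combinations can shortlist likely root causes. The event names and windows below are hypothetical examples, not data from the article, and the code is a simplified Apriori-style pass over event pairs only.

```python
from collections import Counter
from itertools import combinations

def frequent_event_sets(windows, min_support):
    """Count event pairs that co-occur within monitoring time windows
    and keep those appearing in at least min_support windows."""
    counts = Counter()
    for window in windows:
        # Deduplicate and sort so each pair has one canonical ordering.
        events = sorted(set(window))
        for pair in combinations(events, 2):
            counts[pair] += 1
    return {pair: n for pair, n in counts.items() if n >= min_support}

# Hypothetical event windows gathered from separate monitoring tools.
windows = [
    ["disk_latency_high", "db_slow_query", "app_timeout"],
    ["disk_latency_high", "db_slow_query"],
    ["net_packet_loss", "app_timeout"],
    ["disk_latency_high", "db_slow_query", "app_timeout"],
]

patterns = frequent_event_sets(windows, min_support=3)
# ("db_slow_query", "disk_latency_high") co-occurs in 3 of 4 windows,
# which flags the pair as a candidate worth probing for a shared cause.
```

A production ITOA pipeline would mine longer sequences with temporal ordering and combine the result with the dependency structure and incident history mentioned in the quote; this sketch only shows the counting idea.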
Read more...
Source: Data Center Knowledge