
Hi,

Centreon provides plugins for Elasticsearch and Graylog. Does Centreon plan a plugin for Splunk?

Thanks

Hi, 


Indeed, we would love to have such a plugin. Let’s connect and see how we can mutually help.


Cheers,


Hi,

We don’t have the product yet, just a POC planned.

Regards


Hi,
Meanwhile, during the POC, we wrote a Python script that retrieves the number of results returned by a query sent to Splunk (like the Graylog plugin does). This may be a start for a Centreon plugin.

Command:
python3.6 /usr/lib64/nagios/plugins/check_splunk_api.py --hostname='XXX.XXX.XXX.XXX' --api_username='USERNAME' --api_password='PASSWORD' --port=8089 --query='QUERY' --timeframe='-15min' --warning_query_matches='5' --critical_query_matches='10'
 

Script:

from xml.dom import minidom
import sys
import time
import argparse
import urllib.parse
import httplib2

# Get arguments
parser = argparse.ArgumentParser(description='Centreon check Splunk')
parser.add_argument("--hostname", help="IP of the Splunk server")
parser.add_argument("--port", help="Port of the Splunk server", default="8089")
parser.add_argument("--api_username", help="Username")
parser.add_argument("--api_password", help="Password")
parser.add_argument("--query", help="Search request")
parser.add_argument("--timeframe", help="Time frame (e.g. '-15min')")
parser.add_argument("--warning_query_matches", type=int, help="Warning threshold")
parser.add_argument("--critical_query_matches", type=int, help="Critical threshold")
args = parser.parse_args()

# Initialize variables
username = args.api_username
password = args.api_password
baseurl = 'https://' + args.hostname + ':' + args.port
searchQuery = args.query + ' earliest=' + args.timeframe + ' latest=now | stats count'
warningQueryMatches = args.warning_query_matches
criticalQueryMatches = args.critical_query_matches


def getSessionKey(baseurl, username, password):
    try:
        # Authenticate with the server.
        # Disable SSL cert validation (Splunk certs are self-signed); no proxy.
        serverContent = httplib2.Http(disable_ssl_certificate_validation=True, proxy_info=None).request(
            baseurl + '/services/auth/login', 'POST', headers={},
            body=urllib.parse.urlencode({'username': username, 'password': password}))[1]
        sessionKey = minidom.parseString(serverContent).getElementsByTagName('sessionKey')[0].childNodes[0].nodeValue
        return sessionKey
    except Exception:
        unknown("Authentication problem")


def getSearchID(baseurl, sessionKey, searchQuery):
    try:
        # Remove leading and trailing whitespace from the search
        searchQuery = searchQuery.strip()
        # If the query doesn't already start with the 'search' operator or another
        # generating command (e.g. "| inputcsv"), then prepend "search " to it.
        if not (searchQuery.startswith('search') or searchQuery.startswith('|')):
            searchQuery = 'search ' + searchQuery

        # Run the search.
        serverContent = httplib2.Http(disable_ssl_certificate_validation=True, proxy_info=None).request(
            baseurl + '/services/search/jobs', 'POST',
            headers={'Authorization': 'Splunk %s' % sessionKey},
            body=urllib.parse.urlencode({'search': searchQuery}))[1]
        searchId = minidom.parseString(serverContent).getElementsByTagName('sid')[0].childNodes[0].nodeValue
        return searchId
    except Exception:
        unknown("Search problem")


def getSearchDone(baseurl, sessionKey, searchId):
    try:
        countExit = 0
        isDone = '0'
        isFailed = '0'
        dispatchState = "UNKNOWN"
        while dispatchState != "DONE" and countExit < 20:
            countExit += 1
            # Wait 5 seconds between polls.
            time.sleep(5)
            # Check whether the search job has finished.
            serverContent = httplib2.Http(disable_ssl_certificate_validation=True, proxy_info=None).request(
                baseurl + '/services/search/jobs/' + searchId, 'GET',
                headers={'Authorization': 'Splunk %s' % sessionKey})[1]
            keys = minidom.parseString(serverContent).getElementsByTagName('s:key')

            for n in keys:
                if n.getAttribute('name') == 'isDone':
                    isDone = n.childNodes[0].nodeValue
                if n.getAttribute('name') == 'isFailed':
                    isFailed = n.childNodes[0].nodeValue
                if n.getAttribute('name') == 'dispatchState':
                    dispatchState = n.childNodes[0].nodeValue

        # The values returned by the API are strings, not integers.
        if isFailed == '1':
            unknown("Query error")
        if isDone == '0':
            unknown("Query error")
    except Exception:
        unknown("Search problem")


def compareValues(baseurl, sessionKey, searchId, warningQueryMatches, criticalQueryMatches):
    try:
        # Get the result.
        serverContent = httplib2.Http(disable_ssl_certificate_validation=True, proxy_info=None).request(
            baseurl + '/services/search/jobs/' + searchId + '/results', 'GET',
            headers={'Authorization': 'Splunk %s' % sessionKey})[1]
        count = minidom.parseString(serverContent).getElementsByTagName('text')[0].childNodes[0].nodeValue
    except Exception:
        unknown("Result problem")

    # Compare against the warning and critical thresholds.
    message = count + " results found."
    if warningQueryMatches and criticalQueryMatches:
        if int(count) > criticalQueryMatches:
            critical(message, count, str(warningQueryMatches), str(criticalQueryMatches))
        elif int(count) > warningQueryMatches:
            warning(message, count, str(warningQueryMatches), str(criticalQueryMatches))
        else:
            ok(message, count, str(warningQueryMatches), str(criticalQueryMatches))
    else:
        ok(message, count, str(warningQueryMatches), str(criticalQueryMatches))


def unknown(message):
    print("UNKNOWN, " + message)
    sys.exit(3)


def critical(message, count, warningQueryMatches, criticalQueryMatches):
    print("CRITICAL, " + message + "| 'splunk.query.match.count'=" + count + ";" + warningQueryMatches + ";" + criticalQueryMatches + ";;")
    print("Number of results: " + count + " for time frame: " + args.timeframe)
    sys.exit(2)


def warning(message, count, warningQueryMatches, criticalQueryMatches):
    print("WARNING, " + message + "| 'splunk.query.match.count'=" + count + ";" + warningQueryMatches + ";" + criticalQueryMatches + ";;")
    print("Number of results: " + count + " for time frame: " + args.timeframe)
    sys.exit(1)


def ok(message, count, warningQueryMatches, criticalQueryMatches):
    print("OK, " + message + "| 'splunk.query.match.count'=" + count + ";" + warningQueryMatches + ";" + criticalQueryMatches + ";;")
    print("Number of results: " + count + " for time frame: " + args.timeframe)
    sys.exit(0)


sessionKey = getSessionKey(baseurl, username, password)
searchId = getSearchID(baseurl, sessionKey, searchQuery)
getSearchDone(baseurl, sessionKey, searchId)
compareValues(baseurl, sessionKey, searchId, warningQueryMatches, criticalQueryMatches)

Regards


Hi, 

Thanks for the update.

Would it be possible to book a slot to discuss how we can use this POC period to deliver an official Centreon plugin? We can do something quickly.

 

Reach me @ sbomm@centreon.com if you’re interested in such a meeting.

Best,


Hi, 

 

An update to share this information with everybody who might be interested.

 

A new Splunk Pack will be out in the next release. It will bring the following capabilities:

  • Index update: check the MaxTime field to make sure the index is correctly ingesting data.
  • Query: perform an SPL query and count the number of returned rows.
  • Splunkd health: check the state of critical components/features of a Splunk instance:
    • 'file-monitor-input-status', 'index-processor-status', 'resource-usage-status', 'search-scheduler-status', 'workload-management-status'
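For anyone who wants to prototype the "Splunkd health" idea before the pack ships, here is a minimal sketch: map the health color Splunk reports for each component to a Nagios-style exit code. The payload shape and the green/yellow/red color names are assumptions about the splunkd health report, so verify them against your Splunk version; the feature values below are made up for illustration.

```python
# Splunk health colors mapped to Nagios exit codes
# (assumption: Splunk reports green/yellow/red per feature).
COLOR_TO_STATE = {"green": 0, "yellow": 1, "red": 2}

def worst_state(features):
    """Return the worst (highest) Nagios state among the reported features."""
    worst = 0
    for info in features.values():
        # Unrecognized colors map to 3 (UNKNOWN).
        worst = max(worst, COLOR_TO_STATE.get(info.get("health", ""), 3))
    return worst

# Hypothetical payload, not real Splunk output:
payload = {
    "file-monitor-input-status": {"health": "green"},
    "index-processor-status": {"health": "yellow"},
    "search-scheduler-status": {"health": "green"},
}
print(worst_state(payload))  # -> 1 (WARNING: one feature is yellow)
```

In a real check, the payload would come from the splunkd health REST endpoint, and the numeric state would become the plugin's exit code.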

 

