


Using automation to ease the pain of your ES deployment.

Adding automated checks to your Splunk Enterprise Security (ES) deployment will help you avoid some of the configuration issues and subsequent data gaps you may be experiencing. In my travels, even trusted app updates have left inconsistencies in the requirements of ES, resulting in security monitoring gaps. If you're an admin dealing with the management of ES, this write-up will show you how to integrate automation into your configuration and eliminate some of the manual tasks that have been bogging you down.

Splunk as an alerting and reporting tool is great when it comes to the logs it indexes, and it allows for much of the same on the configuration level. Having that same impact on the config level simply requires the REST command. REST allows you to call local config file information through search, in a format similar to log data. Having that type of access lets you monitor your overall configurations the way you would your operational and security information.

Making sure our source types are going to the right spot.

As an ES admin, you'll need to be sure the following all flows together cleanly without losing your mind to a perfectionist rage: Is the data still coming in? Is it being formatted properly for CIM? Is it getting tagged appropriately for the given search/data model? Is it being ingested into the data model summaries? Breathe. If you've been there, or maybe you're there right now, you can agree that's too much manual work.

First things first, let's verify that our source types are making it into the proper data model.

```
| rest splunk_server=local /servicesNS/-/-/configs/conf-savedsearches
| search action.correlationsearch.label=*
| rename action.correlationsearch.label AS rule_name
| fields + title, rule_name
```

This beauty was written by my coworker and friend Toby Deemer. Not only will it show you what source types are going into each data model, it will also dump the configured correlation searches that would be associated as well. This is great for a regular review of your ES deployment as a whole.

```
| rest splunk_server=local /servicesNS/-/-/configs/conf-savedsearches
| search action.correlationsearch.label=*
| rename action.correlationsearch.label AS rule_name
| rex field=search "tstats.*?from datamodel=(?<datamodel>\w+)"
| eval datamodel2=case(match(search, "src_dest_tstats"), mvappend("Network_Traffic", "Intrusion_Detection", "Web"), match(search, "(access_tracker|inactive_account_usage)"), "Authentication", match(search, "malware_operations_tracker"), "Malware", match(search, "(primary_functions|listeningports|localprocesses|services)_tracker"), "Application_State", match(search, "useraccounts_tracker"), "Compute_Inventory")
| eval datamodel=mvappend(datamodel, datamodel2)
| join type=outer datamodel
    [| rest /services/admin/summarization by_tstats=t splunk_server=local count=0
    | eval datamodel=replace('summary.id', "DM_".'eai:acl.app'."_", "")
    | table datamodel
    | map maxsearches=100 search="| tstats `summariesonly` values(sourcetype) as sourcetype from datamodel=$datamodel$ WHERE sourcetype!=\"stash\" earliest_time=-7d | eval datamodel=\"$datamodel$\""]
| eval enabled=if(disabled=0, "Yes", "No")
| table rule_name datamodel sourcetype enabled
```

Using Splunk to help keep an eye on notable events.

Automating the management of the fields and proper CIM is a bit too dynamic for these methods, but those fields do affect the correlation searches and their integrity. When fields change, a correlation search can lose its integrity and no longer produce the intended results as notables. My current best solution for this is to monitor for when notable events are no longer being generated. First, I generate a count of all created notables in a given time period. With a subsearch, I populate the remaining enabled search names. Any existing search names with no available notable count in the core search will have a 0 filled in.
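To get a feel for what the `| rest ... | search ... | rename` pipeline over savedsearches entries is doing, here is a small Python sketch of the same filtering logic. The stanza data, field name, and function are invented for illustration; they are not part of the searches above.

```python
# Mimic of filtering REST savedsearches rows down to correlation searches:
# keep stanzas carrying a correlation-search label, surface it as rule_name.
def correlation_rules(stanzas):
    rows = []
    for title, attrs in stanzas.items():
        label = attrs.get("action.correlationsearch.label")
        if label:  # analogous to filtering on label=* in the search pipeline
            rows.append({"title": title, "rule_name": label})
    return rows

# Invented stanzas for illustration.
stanzas = {
    "Access - Brute Force Detected - Rule": {"action.correlationsearch.label": "Brute Force Detected"},
    "Ad-hoc report": {},
}
print(correlation_rules(stanzas))
```

The same shape applies to any conf file exposed over REST: each stanza becomes a row, and ordinary search filtering narrows it down.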
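The data-model extraction step in the big review search is just a regex over each saved search's SPL text. A minimal Python sketch of that same pattern, useful for prototyping the match outside Splunk (the sample search string and helper name are made up):

```python
import re
from typing import Optional

# Same pattern as the rex in the review search: grab the data model
# that a tstats-based correlation search reads from.
DM_PATTERN = re.compile(r"tstats.*?from datamodel=(?P<datamodel>\w+)")

def datamodel_of(search: str) -> Optional[str]:
    """Return the data model a tstats search references, or None."""
    m = DM_PATTERN.search(search)
    return m.group("datamodel") if m else None

# Hypothetical correlation search body for illustration.
spl = "| tstats `summariesonly` count from datamodel=Authentication by Authentication.src"
print(datamodel_of(spl))  # Authentication
```

Searches driven by lookup trackers rather than tstats won't match, which is exactly why the SPL version follows up with the `case()` mapping for those.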
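The notable-monitoring approach described above (count notables per search, then fill a 0 for every enabled search that produced none) can be sketched in a few lines of Python. The search names and events here are invented; this only illustrates the merge-and-fill logic, not the actual notable index search:

```python
from collections import Counter

def notable_counts(notables, enabled_searches):
    """Count notables per search name over a window, filling 0 for enabled
    searches that produced nothing -- those are the ones worth alerting on."""
    counts = Counter(n["search_name"] for n in notables)
    return {name: counts.get(name, 0) for name in enabled_searches}

# Hypothetical data for illustration.
notables = [{"search_name": "Brute Force Access"}, {"search_name": "Brute Force Access"}]
enabled = ["Brute Force Access", "Malware Outbreak"]
print(notable_counts(notables, enabled))
```

A zero count for a search that normally fires is the signal that its fields or CIM mapping may have drifted.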
