Splunk is great, but it also has its limitations. Here are three scenarios that may sound familiar to you:
- Your company has a clear, integrated software development process and therefore needs three environments (often enough with different names):
  a. DEV (for active development)
  b. INT (for testing what was finished in DEV)
  c. PROD (the production environment)
  Let's say your daily index volume is ~500 GB and you want all data available in all three environments, which honestly is a must. That means you need to index the same data three times to make it available everywhere, and of course pay the price for the license three times as well.
- You want to migrate from a non-clustered to a clustered environment, or vice versa, or to different hardware.
- Backing up indexed data?!
Truth be told, the third one is always questionable, and the why really defines the how. Often enough the reasons for requesting a backup of Splunk-indexed data are not valid and come from non-clustered thinking, but there are (rare) cases where this kind of classical backup is a hard requirement. In those rare cases the how depends on several factors, and one part of it could be syncing data just as in the two scenarios above.
Back to scenarios 1 and 2. I saw both situations often enough that I thought there must be a better way, and so splunk_rsyncix.sh was born. It has been in active development since 2016 and is continuously extended (yes, just a few commits; even when working on it once a week, I do not push publicly often enough). It started as a pure migration tool (scenario 2 above) and is now able to fully handle different clusters of any size (scenario 1 above).
As you can see, at the time of writing the branch is still "develop", so some things might have issues. One known issue is that hot buckets are not handled properly due to the way Splunk numbers and manages them; I am currently working on that part.
The synchronization is always one-way only, because who wants data in PROD to get modified, right? ;)
The most challenging part was understanding how Splunk handles buckets internally, in order to avoid duplicates and, with them, crashes of the remote splunkd (i.e. the sync target).
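To illustrate what makes this tricky: in a clustered index, a warm or cold bucket directory name carries the GUID of the peer it originated on, while a standalone bucket name does not. The sketch below shows that naming difference; it is an illustration only, not the tool's actual code, and the timestamps and GUIDs are made up.

```shell
# A clustered bucket directory is named db_<newestTime>_<oldestTime>_<localId>_<originGUID>;
# a standalone bucket omits the GUID. All values below are invented examples.
bucket="db_1493305400_1493302000_17_0A7B9CDE-1111-2222-3333-444455556666"

# Split the directory name into its parts (the GUID uses hyphens, so an
# underscore split leaves it intact)
IFS='_' read -r prefix newest oldest id guid <<< "$bucket"

# Dropping the GUID yields a standalone-style bucket name ...
standalone="${prefix}_${newest}_${oldest}_${id}"

# ... while swapping in another peer's GUID changes the bucket's apparent
# origin, which is what keeps the remote splunkd from seeing a duplicate
target_guid="AAAA0000-BBBB-1111-CCCC-222233334444"
clustered="${prefix}_${newest}_${oldest}_${id}_${target_guid}"

echo "$standalone"
echo "$clustered"
```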
What you can expect from the current version:
- cluster-to-cluster synchronization (one-way only)
  - continuous sync + oneshot: colddb, warmdb, summarydb
  - oneshot only: hotdb
- standalone-to-cluster synchronization (one-way only)
  - continuous sync + oneshot: colddb, warmdb, summarydb
  - oneshot only: hotdb
- standalone-to-standalone synchronization (one-way only)
  - continuous sync + oneshot: colddb, warmdb, summarydb
  - oneshot only: hotdb
The number of indexers in your source environment (where you sync from) can differ from the target as well. That means if you have a set of 8 indexers in your source PROD cluster and want to sync to an INT cluster with a set of just 2 indexers, well, that works! splunk_rsyncix.sh is not just able to replace or remove the GUID of a bucket on the fly, it can also re-map buckets depending on where they come from to avoid any bucket conflicts.
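Why re-mapping is needed at all: two different source indexers can each own a bucket with the same local ID, and once their data is folded onto fewer targets those IDs would collide. Below is a hypothetical sketch of one way to keep the IDs disjoint; the actual scheme splunk_rsyncix.sh uses may differ, and the offset size is an assumption.

```shell
# Hypothetical bucket-ID re-mapping: reserve a disjoint ID range per
# source indexer so that "bucket 17" from indexer 1 and "bucket 17" from
# indexer 8 never clash on the same target.
remap_id() {
  local source_idx=$1 bucket_id=$2
  # reserve a block of 1,000,000 IDs per source indexer (assumed offset)
  echo $(( source_idx * 1000000 + bucket_id ))
}

remap_id 0 17   # bucket 17 from the 1st source indexer stays 17
remap_id 7 17   # bucket 17 from the 8th source indexer becomes 7000017
```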
If you think that sounds too good to be true, I can tell you it actually is true! :)
Check it out here: splunk_rsyncix.sh, and always keep in mind to test everything first, but NOT on PROD ;)
Don't be worried by the overwhelming set of features splunk_rsyncix.sh provides; many of them are optional, and once you have gone through everything, it is just a simple one-liner overall ;)
Pro tip: combining this with Ansible can dramatically simplify starting/stopping, and thus using, splunk_rsyncix.sh.
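A minimal sketch of what that combination could look like, stopping splunkd on all sync targets, running the sync, and starting them again. The inventory group "target_indexers", the inventory file, and the Splunk install path are all my assumptions here, not something splunk_rsyncix.sh prescribes.

```shell
# Assumed names: inventory group "target_indexers", inventory.ini,
# and Splunk installed under /opt/splunk. Adjust to your environment.
INVENTORY=${INVENTORY:-inventory.ini}
SPLUNK_BIN=/opt/splunk/bin/splunk

sync_with_ansible() {
  # stop splunkd on all sync targets before a oneshot sync ...
  ansible target_indexers -i "$INVENTORY" -b -m command -a "$SPLUNK_BIN stop"
  # ... run the actual sync (options omitted on purpose) ...
  ./splunk_rsyncix.sh
  # ... and bring the targets back up afterwards
  ansible target_indexers -i "$INVENTORY" -b -m command -a "$SPLUNK_BIN start"
}
```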
As soon as time permits I will give some real-world examples in one of the next posts, which will greatly help with understanding and getting started, so stay tuned ;)