
Shuttl – A New Year, a New Release


Data is the lifeblood of the modern business. Managing the flow of data, however, is as important as the data itself. That is why Shuttl was created: with Shuttl, users can move (nay, shuttl!) buckets of data from Splunk to other systems and back again. This has proved immensely useful as people realize how data can be used and reused to drive business value.

Happy New Year 2013!

The elves have been busy bringing Shuttl users a bunch of goodies in the form of the new 0.7.2 release. Christmas came early when the code landed in master on GitHub six days before Santa's big night, and now it's available for download on Splunkbase!

Since Shuttl's release last year, there's been a great deal of interest in it at events such as SplunkLive, Splunk .conf, meetups, and everyday fortuitous meetings with Splunk users, along with a great deal of feedback on what new features would make it more valuable. So, let's get started.

Amazon Glacier

When Amazon Glacier was announced late last year, the spread of the news was anything but glacial. It seemed like every other question was, "How about putting Splunk buckets in Glacier?" Well, we heard that request loud and clear: Shuttl 0.7.2 supports Amazon Glacier, proving that a pluggable backend is a key advantage of the Shuttl architecture. Data now shuttles to HDFS, NFS, Amazon S3, and Amazon Glacier.
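To illustrate what "pluggable backend" means here, the sketch below shows a hypothetical archive/restore contract with a Glacier implementation skeleton. The interface and class names are assumptions for illustration only, not Shuttl's actual API; supporting a new storage system boils down to writing one more implementation of the same small contract.

    import java.io.File;
    import java.io.IOException;

    // Hypothetical sketch of a pluggable archive backend, to illustrate the idea;
    // Shuttl's real interfaces and class names differ.
    interface ArchiveBackend {
        // Copy a Splunk bucket directory to the backend's storage.
        void archiveBucket(File bucketDir, String archivePath) throws IOException;

        // Bring an archived bucket back so Splunk can thaw and search it.
        void restoreBucket(String archivePath, File thawDir) throws IOException;
    }

    // Adding Glacier support then means adding one more implementation.
    class GlacierBackend implements ArchiveBackend {
        @Override
        public void archiveBucket(File bucketDir, String archivePath) throws IOException {
            // e.g. tar the bucket directory and upload it to a Glacier vault
        }

        @Override
        public void restoreBucket(String archivePath, File thawDir) throws IOException {
            // e.g. start a Glacier retrieval job, then download and unpack the archive
        }
    }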

Splunk 5.0

Splunk 5.0 was announced in the fall, a key advancement in Splunk's HA capabilities. However, a funny thing happened on the way to the release: because Splunk calls the cold-to-frozen hook for each bucket, it also calls it on replicated buckets. So if you have N replicas, you end up with N+1 copies of the data archived! What to do? Shuttl 0.7.2 solves this problem, sparing customers from rolling their own de-duplication mechanisms: only one copy of the data is archived, even when Splunk is replicating data across nodes.
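To see the shape of the fix, recall that Splunk names an originating bucket directory with a db_ prefix and its replicated copies with an rb_ prefix. A minimal guard along those lines might look like the sketch below; this is an illustration of the idea only, not Shuttl's actual implementation.

    import java.io.File;

    // Illustration only, not Shuttl's actual code: archive a bucket only if it is
    // the originating copy ("db_" prefix), never a replicated copy ("rb_" prefix).
    final class ReplicaFilter {

        static boolean shouldArchive(File bucketDir) {
            return bucketDir.getName().startsWith("db_");
        }

        public static void main(String[] args) {
            System.out.println(shouldArchive(new File("db_1357000000_1356000000_42")));   // true
            System.out.println(shouldArchive(new File("rb_1357000000_1356000000_42")));   // false
        }
    }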

Cold Bucket Shuttling

And finally, another piece of feedback was the following: "Well, it's all well and good that the data will be archived, but I archive my data every few months; I want data in HDFS much, much sooner! And I want the data to still be available to Splunk once it's been archived! Can you help?" Our answer is, "¡Sí, se puede!", or perhaps "Ja, vi kan!" given that the elves are Swedish. Shuttl now supports warm-to-cold bucket shuttling, so any cold bucket is available both in Splunk for search and in the backend for storage and analysis.
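Conceptually, the flow looks something like the sketch below: each directory in an index's colddb is copied to the backend while the original stays on disk, so Splunk keeps searching it. The local and HDFS paths here are example assumptions for illustration, not Shuttl's configuration.

    import java.io.File;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Rough illustration of cold bucket shuttling to HDFS; paths are example assumptions.
    public class ColdBucketCopy {
        public static void main(String[] args) throws Exception {
            File colddb = new File("/opt/splunk/var/lib/splunk/defaultdb/colddb");
            Path archiveRoot = new Path("/shuttl/archive/defaultdb");
            FileSystem hdfs = FileSystem.get(new Configuration());

            File[] buckets = colddb.listFiles();
            if (buckets == null) return;
            for (File bucket : buckets) {
                if (!bucket.isDirectory()) continue;
                Path target = new Path(archiveRoot, bucket.getName());
                if (!hdfs.exists(target)) {
                    // delSrc=false leaves the cold bucket in place, so it remains
                    // searchable in Splunk after it has been copied to HDFS.
                    hdfs.copyFromLocalFile(false, new Path(bucket.getAbsolutePath()), target);
                }
            }
        }
    }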

Get it Now!

Over the next few weeks, I'll blog in more detail about the individual new features in Shuttl and share some of the testing we've done with it.

In the meantime, you can download Shuttl 0.7.2 from Splunkbase or grab the source from GitHub.

More is in the works. Special thanks to our lead developer, Petter Eriksson, and our lead tester, Fredrik Klevmarken, for making this such an awesome release.

