Channel: Dev – Splunk Blogs

Send JSON objects to HTTP Event Collector using our .NET Logging Library


Recently we shipped a bunch of logging libraries at the same time our new HTTP Event Collector hit the streets: http://blogs.splunk.com/2015/10/06/http-event-collector-your-direct-event-pipe-to-splunk-6-3/

One of the questions I’ve heard from customers using the libraries is “Can I send JSON objects with the .NET logging library?”

Yes, you can. To do it, you need to use our Splunk.Logging.Common library, which our other loggers depend on. Interfaces like TraceListener were designed for sending strings, not objects.

For example, TraceSource has a TraceData method which accepts objects and which, on the surface, should work. However (at least based on my testing) the objects are serialized to strings and then passed on as such to the listeners. Thus by the time we get it we have a string message, not an object. We considered trying to detect whether the string is a JSON object by attempting to deserialize it, but that felt messy and against the spirit of the TraceListener interface.

You can however send objects using our underlying Splunk.Logging.Common library. The library also contains all the robust retry and batching logic. Our TraceListener and SLAB libraries are literally facades on top.

This library fully supports serializable objects, including strongly typed objects, anonymous types, and dynamic/JObject.

Here’s a snippet showing how to send an anonymous type and a dynamic:

var middleware = new HttpEventCollectorResendMiddleware(100);

var ecSender = new HttpEventCollectorSender(
    new Uri("https://localhost:8088"),
    "3E712E99-63C5-4C5A-841D-592DD070DA51",
    null,
    HttpEventCollectorSender.SendMode.Sequential,
    0,
    0,
    0,
    middleware.Plugin
);

ecSender.OnError += o => Console.WriteLine(o.Message);

ecSender.Send(Guid.NewGuid().ToString(), "INFO", null, new { Foo = "Bar" });

dynamic obj = new JObject();
obj.Bar = "Baz";
ecSender.Send(Guid.NewGuid().ToString(), "INFO", null, (JObject) obj);

await ecSender.FlushAsync();

Assuming you have the proper URI and token, and the Event Collector is reachable, running the code will send events similar to the following into Splunk. Notice below that the “data” field’s value is a JSON object, not a string 😉

[Screenshot: an event in Splunk whose “data” field is a JSON object]

In order to use the library directly you have to pass a bunch of information which we handle for you when you use the higher-level libraries. We’re working on making this easier.

As to what the code is doing:

  • Creates middleware. In the lower levels, our logging lib uses middleware to handle automatically resending if it is not able to send. We ship a default HttpEventCollectorResendMiddleware in the box which uses an incremental back-off retry policy. Here we are creating that middleware and configuring it to do 100 retries.
  • We pass the Uri and Token.
  • The metadata param is set to null as it is optional.
  • We set the send mode to sequential. Setting it to parallel will send at a higher throughput rate, but the events may not show up in sequence in Splunk. Sequential is the default that we use for our HttpEventCollectorTraceListener and our HttpEventCollectorSink.
  • The next 3 parameters relate to batching, which can all be defaulted to 0.
  • The last parameter accepts a delegate, which the middleware exposes via its Plugin property.
  • It wires up to the OnError event so that you can see any errors that might occur.
  • Calls Send, passing in an anonymous object.
  • Creates a JObject and passes it in a second call to Send.
  • Calls FlushAsync to force the sender to flush the events to HttpEventCollector.

Using this approach you can easily send JSON objects to Splunk!

You can download the code for this project by cloning my repository here: https://github.com/glennblock/SendJsonToHec

Let us know if it works for you.

 


Wait, what – a YouTube video for my app!?


At Splunkbase we are constantly striving to improve the experience for our users – whether it’s the app-discovery process for a Splunk admin/user, or the app-submission and management experience for our developers. We’ve been busy making changes over the last few months, and I thought this would be a good time to cover some of the more important changes we’ve made recently.

There was a lot of backend engineering work done to spruce up the infrastructure, the API, and search results relevancy – changes that are not always apparent to an end-user of Splunkbase. However, in this post I will talk about some user-facing features we recently added with the goal of improving the experience for our developer community. These features will allow you to better position your app on Splunkbase, as well as understand its adoption by the Splunk community.

YouTube Video Support

We launched a survey at .conf2015 that (among other questions) asked Splunk users and developers about the relevance of video content for understanding an app’s functionality. Based on the feedback and interest from the community, we have now added the ability to provide a link to a YouTube video as part of an app.

This will allow:

  1. Developers, to better explain the features and functionality of their app through richer content
  2. Splunk admins and users, to better understand an app and the functionality it provides before they download and install the app

To ensure a good experience for our users the video content will be:

  1. Tested – to make sure it is a valid YouTube video, and
  2. Curated – to ensure that
    • It is appropriate – similar to curation we already do for screenshots
    • It does not include any advertisements – so our users don’t have to go through advertisements before they can get to the actual content

We have also updated our content submission guidelines to cover video content. This provides guidance on what is considered appropriate content.

What this means for you!

With this change you – an app developer – can now add a YouTube video to your app. You can do this by going to the “Manage App” -> “Media” section for your app and adding the ID of your YouTube video. This will allow you to better market your app to prospective users.

Try out this new feature and let us know what you think, or how we can improve this experience!

Here is a screenshot of the Machine Learning App that recently added a video to explain the functionality:

[Screenshots: Machine Learning App – Manage App media section and the embedded video]

Github Flavored Markdown

Until recently, the markdown you could use in an app’s documentation section was very basic, supporting only the original Markdown language. This kind of worked…until it didn’t, or wasn’t sufficient. Basic Markdown does not even support tables, which fairly limits what you can do with documentation – this is what spurred the discussion and the need to update this functionality. However, we took it further – rather than just adding the capability to support tables, we switched to GitHub Flavored Markdown. It’s popular in the developer community, and is a lot more powerful than basic Markdown!
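
As a quick illustration (the table contents here are invented, just to show the syntax), a GitHub Flavored Markdown table in your app’s documentation can be as simple as:

| Field      | Description                 |
|------------|-----------------------------|
| host       | Host that sent the event    |
| sourcetype | Format of the incoming data |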

Here’s a screenshot of an app using the updated markdown capability and tables:

[Screenshot: app documentation rendered with GitHub Flavored Markdown, including a table]

Application Analytics (and custom time range!)

App developers have had the ability to view analytics on their apps for some time now. You can click on “View Analytics” in your app’s administration section to access these analytics for your app. This includes downloads, and active installations for the app.

The early release of this feature limited you to the last 30 days of analytics for an app. While useful, this did not give you a picture of how downloads and installs were trending over a longer time horizon. We recently enhanced this view so you can use presets for the last month or the last six months, or supply a custom time range to view analytics over any period.

[Screenshot: app analytics with a custom time range selected]

We hope these features make your life better as a Splunk App Developer! Look out for more updates in the future with features targeted at developers as well as Splunk admins and users.

Have suggestions to improve Splunkbase? Please reach out to us at splunkbase-admin@splunk.com.

Splunk Archive Bucket Reader and Hive


This year was my first .conf, and it was an amazingly fun experience! During the keynote, we announced a number of new Hunk features, one of which was the Splunk Archive Bucket Reader. This tool allows you to read Splunk raw data journal files using any Hadoop application that allows the user to configure which InputFormat implementation is used. In particular, if you are using Hunk archiving to copy your indexes onto HDFS, you can now query and analyze the archived data from those indexes using whatever your organization’s favorite Hadoop applications are (e.g. Hive, Pig, Spark). This will hopefully be the first of a series of posts showing in detail how to integrate with these systems. This post is going to cover some general information about using Archive Bucket Reader, and then will discuss how to use it with Hive.

Getting to Know the Splunk Archive Bucket Reader

The Archive Bucket Reader is packaged as a Splunk app, and is available for free here.

It provides implementations of Hadoop classes that read Splunk raw data journal files, and make the data available to Hadoop jobs. In particular, it implements an InputFormat and a RecordReader. These will make available any index-time fields contained in a journal file. This usually includes, at a minimum, the original raw text of the event, the host, source, and sourcetype fields, the event timestamp, and the time the event was indexed. It cannot make available search-time fields, as these are not kept in the journal file. More details are available in the online documentation.

Now let’s get started. If you haven’t already, install the app from the link above. If your Hunk user does not have adequate permissions, you may need the assistance of a Hunk administrator for that step.

Log onto Hunk, and look at your home screen. You should see a “Bucket Reader” icon on the left side of the screen. Click on this. You should see a page of documentation, like this:

[Screenshot: the Archive Bucket Reader documentation page]

Take some time and look around this page. There is lots of good information, including how to configure Archive Bucket Reader to get the fields you want.

Click on the Downloads tab at the top of the page. You should see the following:

[Screenshot: the Downloads tab with the two jar links]

There are two links for downloading the jar file you will need. If you are using a Hadoop version of 2.0 or greater (including any version of Yarn), click the second link. Otherwise, click the first link. Either way, your browser will begin downloading the corresponding jar to your computer.

Using Hive with Splunk Archive Bucket Reader

We’ll assume that you already have a working Hive installation. If not, you can find more information about installing and configuring Hive here.

We need to take the jar we downloaded in the last section, and make it available to Hive. It needs to be available both to the local client, and on the Hadoop cluster where our commands will be executed. The easiest way to do this is to use the “auxpath” argument when starting Hive, with the path to the jar file. For example:

hive --auxpath /home/hive/splunk-bucket-reader-2.0beta.jar

If you forget this step, you may get class-not-found errors in the following steps. Now let’s create a Hive table backed by a journal.gz file. Enter the following into your Hive command line:

CREATE EXTERNAL TABLE splunk_event_table (
    Time DATE,
    Host STRING,
    Source STRING,
    date_wday STRING,
    date_mday INT
)
ROW FORMAT SERDE 'com.splunk.journal.hive.JournalSerDe'
WITH SERDEPROPERTIES (
    "com.splunk.journal.hadoop.value_format" = 
        "_time,host,source,date_wday,date_mday"
)
STORED AS INPUTFORMAT 'com.splunk.journal.hadoop.mapred.JournalInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat'
LOCATION '/user/hive/user_data';

If this was successful, you should see something like this:

OK
Time taken: 0.595 seconds

Let’s look at a few features of this “create table” statement.

  • First of all, note the EXTERNAL keyword in the first line, and the LOCATION keyword in the last line. EXTERNAL tells Hive that we want to leave any data files listed in the LOCATION clause in place, and read them when necessary to complete queries. This assumes that /user/hive/user_data contains only journal files. If you want Hive to maintain its own copy of the data, drop the EXTERNAL keyword, and drop the LOCATION clause at the end. Once the table has been created, use a LOAD DATA statement (see the example after this list).
  • The line
    STORED AS INPUTFORMAT 'com.splunk.journal.hadoop.mapred.JournalInputFormat'

    tells Hive that we want to use the JournalInputFormat class to read the data files. This class is located in the jar file that we told Hive about when we started the command-line. Note the use of “mapred” instead of “mapreduce”—Hive requires “old-style” Hadoop InputFormat classes, instead of new-style. Both are available in the jar.

  • These lines:
    ROW FORMAT SERDE 'com.splunk.journal.hive.JournalSerDe'
    WITH SERDEPROPERTIES (
        "com.splunk.journal.hadoop.value_format" = 
            "_time,host,source,date_wday,date_mday"
    )

    tell Hive which fields we want to pull from the journal files to use in the table. See the app documentation for more detail about which fields are available. Note that we are invoking another class from the Archive Bucket Reader jar, JournalSerDe. “SerDe” stands for serializer-deserializer.

  • This section:
    (Time DATE,
    Host STRING,
    Source STRING,
    date_wday STRING,
    date_mday INT)

    tells Hive how we want the columns to be presented to the user. Note that there are the same number of columns here as in the SERDEPROPERTIES clause. This section could be left out altogether, in which case each field would be treated as a string, and would have the name it has in the journal file, e.g. _time as a string instead of Time as a date.
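
As noted in the first bullet, if you let Hive manage its own copy of the data (no EXTERNAL keyword and no LOCATION clause), you would then load a journal file with a LOAD DATA statement along these lines (the HDFS path is just an example):

LOAD DATA INPATH '/user/hive/user_data/journal.gz' INTO TABLE splunk_event_table;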

Now that you have a Hive table backed by a Splunk journal file, let’s practice using it. Try the following queries:

select * from splunk_event_table limit 10;
select host, count(*) from splunk_event_table group by host;
select min(time) from splunk_event_table;

Hopefully that’s enough to get you started. Happy analyzing!

HTTP Event Collector: a Python Class


(Hi all – welcome to the first of what will be a series of technical blog posts from members of the SplunkTrust, our Community MVP program. We’re very proud to have such a fantastic group of community MVPs, and are excited to see what you’ll do with what you learn from them over the coming months and years.
–rachel perkins, Sr. Director, Splunk Community)

——————————————————————————————————

Happy Holidays everyone!

I am George Starcher, one of the members of the SplunkTrust.

I tend to make new code this time of year. So, I decided to write a Python class after a lovely Thanksgiving with the family.
There is a lot of great content on the HTTP Event Collector thanks to Glenn Block and his development team. However, I found there isn’t a Python Logger already built for it.

That motivated me to write a Python class you can leverage in your own Python code. It can be downloaded from my git repository: https://github.com/georgestarcher/Splunk-Class-httpevent

I encourage you to also vote up a Python Logger from the Splunk development team over at the SplunkDev User Voice page. That will get a fully supported logger made.

About the Class

The class allows you to declare an http_event_collector object. Default behavior is to use SSL and port 8088; you can override these if you need to for your environment. You can then submit events individually using the sendEvent method, which immediately sends the single-event JSON payload to the collector. The more efficient way is to use the batchEvent method to accumulate events and finish your code with a flushBatch call. This method will also auto-flush for you if the accumulated events approach the default maximum payload size the HTTP Event Collector accepts.
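
As a rough sketch of that flow (the keyword names used below for overriding the port and SSL behavior are assumptions – check the class source in the repo for the authoritative signature), basic usage looks something like this:

from splunk_http_event_collector import http_event_collector

# create the sender; default behavior is SSL on port 8088
hec = http_event_collector("YOUR-TOKEN-GUID", "splunk.example.com")

# overriding the defaults (argument names here are assumptions - see the class source)
# hec = http_event_collector("YOUR-TOKEN-GUID", "splunk.example.com",
#                            http_event_port='8080', http_event_server_ssl=False)

payload = {"index": "temp", "sourcetype": "demo", "event": {"message": "hello world"}}

hec.sendEvent(payload)    # send a single event immediately
hec.batchEvent(payload)   # or queue it for batched sending
hec.flushBatch()          # flush anything still queued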

Using the Class

Let’s Collect some Crime Reports. Check out the sample usage of our new class below. It is also in the git repo. I hope you enjoy!

  1. We import the new class
  2. We generate the key value pairs in a JSON payload with our commitCrime function
  3. We create an http_event_collector object called testevent providing the Collector Token and hostname
  4. Note that by default we don’t have to specify that the HTTP Event Collector is using SSL and port 8088. The class will let you override the defaults.
  5. Next we make the payload JSON base for our sample Python code. This is where we add the normal event metadata fields using the update method.
  6. From there we make 5 individual events to collect and immediately send them using the sendEvent method. You can see how we even add a few extra fields to the JSON coming from our crime function.
  7. We also demonstrate the batch event submission by calling batchEvent followed by a flushBatch before we are done.
from splunk_http_event_collector import http_event_collector
import random
import json

def commitCrime():
    # list of sample values
    suspects = ['Miss Scarlett','Professor Plum','Miss Peacock','Mr. Green','Colonel Mustard','Mrs. White']
    weapons = ['candlestick','knife','lead pipe','revolver','rope','wrench']
    rooms = ['kitchen','ballroom','conservatory','dining room','cellar','billiard room','library','lounge','hall','study']

    killer = random.choice(suspects)
    weapon = random.choice(weapons)
    location = random.choice(rooms)
    victims = [suspect for suspect in suspects if suspect != killer]
    victim = random.choice(victims)

    return {"killer":killer, "weapon":weapon, "location":location, "victim":victim}

# Create event collector object, default SSL and HTTP Event Collector Port
http_event_collector_key = "B02336E2-EEC2-48FF-9FA8-267B553A0C6B"
http_event_collector_host = "localhost"

testevent = http_event_collector(http_event_collector_key, http_event_collector_host)

# Start event payload and add the metadata information
payload = {}
payload.update({"index":"temp"})
payload.update({"sourcetype":"crime"})
payload.update({"source":"witness"})
payload.update({"host":"mrboddy"})

# Report 5 crimes individually
for i in range(5):
    event = commitCrime()
    event.update({"action":"success"})
    event.update({"crime_type":"single"})
    payload.update({"event":event})
    testevent.sendEvent(payload)

# Report 50 crimes in a batch
for i in range(50):
    event = commitCrime()
    event.update({"action":"success"})
    event.update({"crime_type":"batch"})
    payload.update({"event":event})
    testevent.batchEvent(payload)
testevent.flushBatch()

Using Splunk Archive Bucket Reader with Pig


This is part II in a series of posts about how to use the Splunk Archive Bucket Reader. For information about installing the app and using it to obtain jar files, please see the first post in this series.

In this post I want to show how to use Pig to read archived Splunk data. Unlike Hive, Pig cannot be directly configured to use InputFormat classes. However, Pig provides a Java interface – LoadFunc – that makes it reasonably easy to use an arbitrary InputFormat with just a small amount of Java code. A LoadFunc is provided with Splunk Archive Bucket Reader: com.splunk.journal.pig.JournalLoadFunc. If you would prefer to write your own, you can find more information here.

Whereas Hive closely resembles a relational database, Pig is more like a high-level imperative language for creating Hadoop jobs. You tell Pig how to make data “relations” from data, and from other relations.

In the following, we’ll assume you already have Pig installed and configured to point to your Hadoop cluster, and that you know how to start an interactive session. If not, you can find more information here.

Here is an example Pig session. The language used is called Pig Latin.

REGISTER splunk-bucket-reader-1.1.h2.jar;
A = LOAD 'journal.gz' USING com.splunk.journal.pig.JournalLoadFunc('host', 'source', '_time') AS (host:chararray, source:chararray, time:long);
B = GROUP A BY host;
C = FOREACH B GENERATE group, COUNT(A);
dump C;

Let’s look at these statements in more detail.

  • First:
    REGISTER splunk-bucket-reader-1.1.h2.jar;

    This statement tells Pig where to find the jar file containing the Splunk-specific classes.

  • Next:
    A = LOAD 'journal.gz' USING com.splunk.journal.pig.JournalLoadFunc('host', 'source', '_time') AS (host:chararray, source:chararray, time:long);

    This statement creates a relation called “A” that contains data loaded from the file ‘journal.gz’ in the user’s HDFS home directory. The expression “(‘host’, ‘source’, ‘_time’)” determines which fields will be loaded from the file. The expression “AS (host:chararray, source:chararray, time:long)” determines what they will be named in this session, and what data types they should be assigned.

  • Next:
    B = GROUP A BY host;
    C = FOREACH B GENERATE group, COUNT(A);

    These statements say that we want to group events (or in Pig-speak, tuples) together based on the “host” field, and then count how many tuples each host has.

  • Finally:
    dump C;

    This tells Pig that we want the results printed to the screen.

I ran these commands on a journal file containing data from the “Buttercup Games” tutorial, which you can download from here. They produced these results:

(host::www1,24221)
(host::www2,22595)
(host::www3,22975)
(host::mailsv,9829)
(host::vendor_sales,30244)
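
If you would rather persist the results to HDFS than dump them to the screen, a standard STORE statement works as well (the output path and delimiter here are just examples):

STORE C INTO 'host_counts' USING PigStorage(',');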

Voilà! Now you can use Pig with archived Splunk data.

Splunk Logging Driver for Docker


With Splunk 6.3 we introduced HTTP Event Collector which offers a simple, high volume way to send events from applications directly to Splunk Enterprise and Splunk Cloud for analysis. HTTP Event Collector makes it possible to cover more cases of collecting logs including from Docker. Previously I blogged on using the Splunk Universal Forwarder to collect logs from Docker containers.

Today following up on Docker’s press release, we’re announcing early availability in the Docker experimental branch of a new log driver for Splunk. The driver uses the HTTP Event Collector to allow forwarder-less collection of your Docker logs. If you are not familiar yet with the Event Collector check out this blog post.

You can get the new Splunk Logging Driver by following the instructions here to install the Docker experimental binary. Note if you are running on OSX or Windows you’ll need to have a dedicated Linux VM. Using the driver, you can configure your host to directly send all logs sent to stdout to Splunk Enterprise or to a clustered Splunk Cloud environment. The driver offers a bunch of additional options for enriching your events as they go to Splunk, including support for format tags, as well as labels, and env.

Now let’s see how to use the new driver. I am going to use the latest Splunk available, which I have installed on my network at address 192.168.1.123. You first need to enable HTTP Event Collector. (Note: in Splunk Cloud you need to work with support to enable HTTP Event Collector.) Open Splunk’s Web UI and go to Settings » Data Inputs. Choose HTTP Event Collector. Enable it in Global Settings and add one new token. After the token is created, you will find the Token Value, which is a GUID. Write it down, as you will need it later when configuring the Splunk logging driver.

Verify that you are running the latest Docker experimental version, 1.10.0-dev.

# docker --version

Now we are ready to test the Splunk logging driver. You can configure the logging driver for the whole Docker daemon or per container. For this example, I am going to use the nginx container and configure the driver at the container level.

# docker run --publish 80:80 --log-driver=splunk --log-opt splunk-token=99E16DCD-E064-4D74-BBDA-E88CE902F600 --log-opt splunk-url=https://192.168.1.123:8088 --log-opt splunk-insecureskipverify=true nginx

Here is more detail on the settings above:

  • First I’ve specified to publish to port 80, so I can test my nginx container.
  • log-driver=splunk specifies that I want to use the Splunk logging driver.
  • splunk-token is the token which I previously created in Splunk Web.
  • splunk-url is set to the host (including port) where the HTTP Event Collector is listening.
  • splunk-insecureskipverify instructs the driver to skip cert validation, as my Splunk Enterprise instance is using the default self-signed cert.
  • Lastly I’ve told Docker to use the nginx image.
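
As mentioned above, the same options can also be set daemon-wide instead of per container, so that every container on the host logs to Splunk by default. On this experimental build that would look roughly like the following (same token and URL as before; double-check the flags against the Docker documentation for the version you are running):

# docker daemon --log-driver=splunk --log-opt splunk-token=99E16DCD-E064-4D74-BBDA-E88CE902F600 --log-opt splunk-url=https://192.168.1.123:8088 --log-opt splunk-insecureskipverify=true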

Now that the container is running, I can send some GET requests to nginx to generate some log output.

# curl localhost:80
# curl localhost:80?hello=world

Heading over to Splunk, I can see the events pouring in in real time:

[Screenshot: Docker container events arriving in Splunk in real time]

These are just the basics. I can now add additional configuration to control how Splunk indexes the events, including changing default index, source and sourcetype.
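
For example, the driver takes additional splunk-* log options for this; the option names below (splunk-index, splunk-source, splunk-sourcetype) are to the best of my knowledge, so verify them against the logging driver documentation for your Docker version:

# docker run --publish 80:80 --log-driver=splunk --log-opt splunk-token=99E16DCD-E064-4D74-BBDA-E88CE902F600 --log-opt splunk-url=https://192.168.1.123:8088 --log-opt splunk-insecureskipverify=true --log-opt splunk-index=docker --log-opt splunk-sourcetype=nginx nginx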

I can also configure the Splunk Logging Driver to include more detailed information about the container itself, something which is very useful for analyzing the logs later.

# docker run --publish 80:80 --label type=test --label location=home --log-driver=splunk --log-opt splunk-token=99E16DCD-E064-4D74-BBDA-E88CE902F600 --log-opt splunk-url=https://192.168.1.123:8088 --log-opt splunk-insecureskipverify=true --log-opt tag="{{.ImageName}}/{{.Name}}/{{.ID}}" --log-opt labels=type,location nginx

The additional options do the following:

  • label – defines one or more labels for the container
  • labels – defines which labels to send to the log driver and which will be included in the event payload
  • tag – changes how my container will be tagged when events are passed to the Splunk Logging Driver

Now I’ll send a few more GET requests again and see the result.

# curl localhost:80
# curl localhost:80?hello=world

[Screenshot: events with the attrs dictionary and custom tag]

As you can see above, each event now has a dictionary of attrs which contains the labels from the driver configuration (this can also include a list of environment variables). The tag has also been changed to the format I specified.

Splunk and Docker better together

With the new Docker driver, we’re making it really easy for customers to combine the power of Splunk with Docker in analyzing their Docker logs. This is just the beginning, there are many more things to come! Go grab the latest experimental branch of Docker and start mining your Docker containers in Splunk!

Splunk in Space: Splunking Satellite Data in the Cloud


Hello all,

This year a Team of Splunkers attended the ESA App Camp 2015 in lovely Frascati, Italy. The topic of this year’s challenge was:

“There are thousands of ways to enrich apps with data from space – what’s yours?”

The Splunk team featured Robert Fujara and Philipp Drieger alongside camp participants Claire Crotty and Anthony Thomas. Together the team created a mobile web app that accessed a Splunk Cloud instance to analyze geolocation-based satellite data and inform users about different environmental indicators across Europe. Users can input their preferences in terms of living environment and, based on different indicators, they then receive recommendations on which city or region would suit them best.

The key data sources for this project included satellite imagery, environmental pollution indicators, and social media data.

All this data was fed into Splunk Cloud, which subsequently presented the data through a mobile web app built utilizing Splunk JS SDK and Google Material Design.

Data Analysis

To begin with, the team used satellite data to create a matrix that split the landscape into buckets. After this, the team went through each bucket and built out information such as how green a certain area is and how it compares to others based on the satellite data. They also analyzed the data available for pollution levels in order to demonstrate how the natural environment in a region fared when compared with another. The factors considered ranged from greenhouse gas emissions to other pollution levels like ozone, SO2 or NO2. The team was also able to integrate social media data to show which issues were currently being discussed online in particular areas.

Data Visualization

To ensure that the tool could be used by other participants at the ESA Camp, the team created a nice interface based on the Splunk SDK and Google Material Design. Users can adjust their preferences and, based on these, receive a list of the top cities that fit their requirements. The following images show the user interface in the web app and in the mobile apps.

[Screenshots: the web app showing landscape and pollution indicators, and mobile views with pollution and social media data]

Conclusion

The ESA App Camp 2015 was a great opportunity to explore data analytics with Splunk in new areas like satellite data. The team also showed how Splunk SDKs can be leveraged to build custom (web) applications. Now imagine what could be done with Splunk 6.3 choropleth maps for geoanalytics. Read more about the camp results here: ESA App Camp 2015 summary article.

 

An Hour of Code with Splunk



The Hour of Code is a global effort to educate children in more than 180 countries with as little as one hour of computer science. Held as part of Computer Science Education Week (December 7-13), the most recent Hour of Code included more than 198,473 events around the world. And this year, several Splunkers taught sessions in events across the country.

Here in the Seattle Area, Shakeel Mohamed, one of our engineers, taught sessions on Lightbot and Minecraft at Rainier View Elementary School, and I had the pleasure of teaching approximately 150 students at Ingraham High School an hour about log / time-series data and how to mine it with Splunk. The courses are a challenging mix of students with little to no programming experience, together with those who have some background. In this case, all the students had experience with coding so Splunk was specifically selected to introduce how event logging and data visualization fit into the world of programming.


I was joined by several volunteers from TEALS – a nonprofit that pairs computer science professionals with educators to teach CS – including Jay Waltmunson and Taylor Weiss, both from Microsoft. We based our sessions on the “Looking at Data with Splunk” tutorial on code.org, developed jointly by StudentRND and Splunk. The version of the tutorial we used in class was updated to use the latest version of Splunk, 6.3.

Part 1: Logs and visualization

In the first part of the lesson we described the importance of log / activity data and visualization. Using Bungie’s Halo game as an example, we talked about the massive volume of activity data generated within the game based on player actions, and how the data can be visualized to not only analyze patterns of activity, but ultimately to improve the game. We showed the students a heat map that Bungie had created which illustrated where people had died in the game. We then put the students to the test and asked them to come up with reasons why the patterns we were seeing might be occurring – why the red zones?

[Image: Bungie’s Halo heat map showing where players died]

The students really engaged in the conversation: Maybe weapon arsenals are heavily concentrated in those zones? Perhaps there are a large number of people entering the game in the red zones, or maybe the issue is that the landscape of the black zones is too difficult to navigate. We illustrated how we could leverage the underlying user data to confirm or refute their assertions. I got the sense that the students walked away with a much greater appreciation for the power of data and visualization.

Part 2: Getting data into Splunk by cutting the rope

For the second part of the lesson we worked on getting data into Splunk, extracting fields, searching, and creating simple visualizations. The scenario we used from the tutorial was something the kids really connected to: the “Cut the Rope” game. Everyone knew about it. I quickly found out that if you are in high school and don’t know about “Cut the Rope” you must be living in a cave or on a remote island!

If you haven’t heard of it, it is a really fun and addicting game where you move a frog through obstacles / swing on vines to get as many stars as you can. You can see a screenshot below.

[Screenshot: the Cut the Rope game]

After talking a bit about the game, we talked about how the game could generate different events for each action in the game. The students took some time to think about the possibilities and what would make sense to log. Some examples were logging each time you get a star, logging how long it takes to complete the level, logging when someone dies, etc. The students were spot on!

The students then downloaded a data file which contained some sample entries that the app might generate. We talked a bit about the structure and how we would read this data into Splunk to further analyze it. The file looked like this:

12-07-2015 10:56:05AM level_loaded 0
12-07-2015 10:56:09AM rope_cut
12-07-2015 10:56:19AM star_collected
12-07-2015 10:56:28AM candy_collected
12-07-2015 10:56:30AM level_loaded 1
12-07-2015 10:56:34AM rope_cut
12-07-2015 10:56:36AM star_collected
12-07-2015 10:56:41AM candy_collected

Part 3: Hands on – importing data and visualization

Then the students got started! Each student logged into a Splunk instance hosted on a Linux VM in Microsoft’s Azure cloud. They imported the data using Splunk’s “Getting Data In” functionality and ran some simple text searches. The students quickly saw how easy it was to enter a simple query and immediately get results!

We then talked about how we might want to do some more complex analysis. For example, we might want to know things like a breakdown of each action by count, or which level is the hardest. This was not yet possible, as all we could do was simple matching, like finding every event that contains the word “Level”.

The students moved on to start telling Splunk about the data to support these kinds of queries. Using field extraction, the students specified that the data had game_event and level fields. With those fields in place the students learned how to use the chart command to answer some of the earlier questions. For example the students wrote “* | chart count by game_event” to see the breakdown of game_events.

Final Step: Visualize and solve

Next we showed how to take the results and turn them into a visualization. Once they had these basics, they moved on to answer the question of which levels were the hardest to finish by looking at the number of resets by level_number. Using visualization allowed them to quickly arrive at the answer.
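
The exact search depends on the field extractions, but assuming the data includes a reset-style game_event value and the level field extracted earlier, a search along these lines does the job:

game_event=reset | chart count by level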

Finally, if any time remained, we talked a bit about how the skills learned could be useful / what other domains this could be applied to.

One fun thing I almost forgot to mention – these students were adventurous! Throughout the day, a number of the students figured out how they could jump to their user page, rename their accounts, change their passwords, and update their profiles. Seeing this and getting a bit of a kick out of it, at one point I said “Folks, don’t change your profile”. All the students immediately froze as if they had committed something horrible. :-) It was great to see their energy, excitement and willingness to try new things. At one point in one of the classes, one of the students even tried to create a Data Model.

It was a very fruitful day. I was amazed at how fast the kids picked up Splunk, and how some of the kids went much further to try and see what they could do.

Because of all of this, it really was exciting to be part of the Hour of Code. This is a great effort that is helping to prepare our young generation to take the reins of technology. Special thanks to Jay and Lawrence for the invitation.

I look forward to being a part of this event in the future!

To learn more about Hour of Code and how you can volunteer to teach students Computer Science, visit http://hourofcode.com.


Splunk Add-on > Where’s That Command – Converting a Field’s Hexadecimal Value to Binary


When looking through Splunk’s Search Reference Manual, there are a ton of search commands with their syntax, descriptions, and examples.  After all, if Splunk is the platform for machine data, there needs to be an extensive list of commands, functions, and references that guide Splunkers through the Search Processing Language (SPL).  But one would think that we had everything covered, right?  Well, almost….

I have a couple of great customers from the Houston, Texas area to thank for this.  Gabe and Andrew (you know who you are) are not only strong Splunkers, but frequent the Splunk Houston User Group (SHUG) meetings and are always looking for ways to expand their use of Splunk as well as get others just as passionate and excited about it as they are!  In two separate instances they brought me a simple question – Where’s that command that converts my hexadecimal values in this field to a binary number?

As I started digging into the Search Reference Manual and across our www.splunk.com website, I quickly found what many had already found at answers.splunk.com – there is not a command that does this!  DOH!  Various people had ideas for searches that included eval functions, even using the replace command (something I blogged about before here), but ultimately there was no SPL-based command.  While it’s cool to have massive, multi-line search strings in your Splunk search bar, it’s not very efficient or a good use of time compared with just making a single command-style call.

The first time I attempted to help with this it was an Energy-based use case that had some IT Security use case to it.  The second time I worked on this it was with a retail/point-of-sale analytics use case.  Regardless of the use case, what I quickly realized is that we needed something to make converting the hexadecimal values in fields to binary as simple as just flipping a switch…. or installing a Splunk Add-on.

Enter the Splunk Add-on – Hexadecimal to Binary Add-on (Hex2Binary Add-on)!

This is a fairly simple add-on which leverages the power of Splunk’s search macros.  You download the add-on and then use the “Manage Apps” to install the app from a file or use the new feature in Splunk 6.3.x to Browse More Apps to find and download the add-on:

[Screenshot: finding the Hex2Binary Add-on via Browse More Apps]

Once installed, the add-on is set to Global Sharing Permission which means any of your apps in Splunk should be able to leverage it.

[Screenshot: the add-on’s Global sharing permission]

For documentation, please refer to the README.txt file in the “…etc/apps/SA_Hex2Binary/” directory:

 

[Screenshot: the README.txt documentation]

To use the “hex2binary()” macro, you use the standard SPL call format for Splunk macros, passing the field which contains the hexadecimal values you wish to convert to binary.  As a simple test (since I was not able to use any of my Splunk customers’ data), I will create a field and give it one hexadecimal value:

* | eval hex_num="BC55"

 

[Screenshot: search results showing the hex_num field]

Now that we have a field with a hexadecimal value, I can pass that field to the “hex2binary()” Splunk macro, where the binary conversion is placed into a field named “binary”:

* | eval hex_num="BC55" | `hex2binary(hex_num)`

 

[Screenshot: search results showing the converted binary field]

That is a LOT easier than having to write eval and loop statements into your search!
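
If you want to see the original and converted values side by side, you can append a table command to the same search (the binary field name comes from the macro, as described above):

* | eval hex_num="BC55" | `hex2binary(hex_num)` | table hex_num, binary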

Enjoy the new add-on and should there be any questions or requests for enhancement/upgrades, please let me know!

Happy Splunking!


Remote Images Retrieval With Splunk Using Custom Command “getimage.py”


Every once in a while my customers ask for functionality that is not natively supported by Splunk. Out of the box Splunk is a very capable platform; however, there are certain tasks Splunk is not designed for. But that never stops a Splunker from finding a solution! The use case I am about to discuss in this blog is an example of that. The customer owns a large chain of pharmacies across the country, and the bulk of the stores’ transactions ends up in a Hadoop data lake; the customer wants to use Hunk/Splunk to visualize and analyze the massive amount of information collected, which is something Hunk can do easily. The challenge came about when I was asked if Splunk could show RX TIFF images (doctors’ handwritten prescriptions) alongside the patient’s records. I was presented with the following criteria:

  • Retrieve patients’ records from Hadoop and marry them to RX images residing on the imaging server(s).
  • The imaging server(s) is running Apache Tomcat.
  • RX images are stored in TIFF format (but sometimes can be in a different format)
  • Must be able to handle billions of images and accommodate error conditions (ex: file not found, failed communication, …etc.)

The Hadoop part was easy with Hunk. After all, that’s what we do best! But dealing with image retrieval required a little more work. Initially I thought I could use Splunk’s built-in workflow actions to retrieve remote URLs: http://docs.splunk.com/Documentation/Splunk/6.3.3/Knowledge/CreateworkflowactionsinSplunkWeb

But I soon discovered that simple approach would not help solve the following problems:

  1. Images are stored in TIFF format and most browsers don’t support it (with the exception of Safari).
  2. We need to process (download) large amounts of image files from multiple image servers. Therefore, performance and error handling are critical.
  3. Retrieved images need to be annotated for added information.
  4. Retrieved images may need to be resized before displaying (if larger than a certain size).

To address those challenges I turned to the power of custom search commands. Splunk Enterprise lets you implement custom search commands to extend SPL (the Search Processing Language). I wrote a search command called getimage.py that satisfies all of the above requirements. To demonstrate the usage of this custom command I also created a little app that can be found here: https://github.com/mhassan2/rx_image_review_app/tree/master/rx_image_review_app
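
For readers who have not written one before, a custom search command of this era is essentially a Python script that reads the piped-in search results, augments each row, and writes the rows back to Splunk. The skeleton below is a generic illustration of that pattern (it is not the actual getimage.py; the field names are placeholders):

import splunk.Intersplunk as si

try:
    # read the incoming search results as a list of dicts (one dict per event)
    results = si.readResults(None, None, True)

    for result in results:
        # look at a field from the event and append new fields to the row
        image_name = result.get("image_file", "")
        if image_name:
            result["new_image"] = image_name.rsplit(".", 1)[0] + ".jpg"
        result["wget_result"] = "not attempted in this skeleton"

    # hand the augmented results back to Splunk
    si.outputResults(results)
except Exception as e:
    si.outputResults(si.generateErrorResults("getimage skeleton error: %s" % e))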

How does it work?

The script accepts two arguments: <fieldname> <url>. The first argument is the field name that contains the image name in the raw data. The second argument is the remote imaging server URL (without the destination file name). I used the image_file field name to retrieve the file from the remote imaging server(s).

Example:

source="patients.records" | getimage image_file http://10.211.55.3/icons | table Patient_Name, Prescription, image_file, new_image, wget_result, link, cached_image

 

The script uses the infamous wget command (https://www.gnu.org/software/wget/) to download images. Once retrieved, we use the “convert” command from a well-known image manipulation package called ImageMagick: http://www.imagemagick.org/script/binary-releases.php

The “convert” command (not to be confused with Splunk’s own convert command) is used to transform the image format from TIFF to JPG (or PNG), and then to add any required annotation or resizing. Additionally, the script uses a caching mechanism to minimize the impact on the network: if an image is retrieved repeatedly (within a pre-configured time window), it is fetched from a cache directory residing on the Splunk search head. The getimage.py script appends several fields to the search results; most of these fields are used for troubleshooting. Depending on how the dashboard XML is written, you may want to use the “link” field or the “new_image” field.

Here is a list of fields injected into the search output:

  • [rc_wget]: the output of the wget command
  • [rc_convert]: the output of ImageMagick’s convert command
  • [new_image]: the converted image name
  • [image_size]: size of the newly converted image (shows up when cached only)
  • [file_loc]: *experimental* location of the new image URI (using file:///)
  • [link]: URL of the new converted image on the Search Head
  • [cached_imaged]: indicates whether the image is served from the local cache or not
  • [wget_result]: a cleaned-up version of the wget output
  • [app_shortcut_url]: location of all cached images for XML use (dashboard)

 

How to test it?

 From the command line:

  1. Setup a VM running apache server (my test VM IP is 10.211.55.3).
  2. Install ImageMagick on your Search Head (we’re interested in convert command only).
  3. Install wget on your Search Head.
  4. Copy sample images (found in the apps sample_images directory) to /var/www/icons/ directory on apache server (default apache configurations for icons).
  5. Modify getimage.conf  to match your environment.
  6. The shipped app rx_image_review_app should have the required files to make the script work (command.conf, authorize.conf).
  7. Verify that you are able to retrieve images manually using wget. Run this on the Search Head: wget --timeout=2 --tries=1 --no-use-server-timestamps http://10.211.55.3/icons/rx_sample1.tif
  8. Verify that you are able to use convert command (part of ImageMagick). Run this on the Search Head: /opt/ImageMagic/bin/convert rx_sample1.tif rx_sample1.jpg

From the app UI:

  1. Manually import patients.records (under sample_logs) into Splunk. Make sure you use CSV source type to get a quick field extraction; otherwise you will have to manually extract the fields.
  2. Verify that the log import was successful using Splunk UI. The most important field to us is “image_file”.
  3. To test connectivity, kill httpd on the image server then try refreshing your dashboard.
  4. To test 404 file not found error, remove one of the sample files from the image server; refresh your browser.
  5. To test 403 forbidden error (which mostly means permission issues), change perms to 600 on a sample file on the image server then try to connect again.
  6. To test handling of different image types, convert an image to any of the 100’s of types the ImageMagick supports on the image server then refresh your dashboard.
  7. A log file to track activity is created in: /opt/splunk/var/log/splunk/getimage.log 

Screenshots:

This is how the sample file (patients.records) looks without getimage.py:

[Screenshot: patient records table without images]

 

Here is an example of how the output looks using the getimage command:

[Screenshot: search results with the fields added by getimage]

Using the built-in dashboard (RX Images Retrieval with icons):

[Screenshot: the built-in “RX Images Retrieval with icons” dashboard]

As you can see the script added an additional field called “link” which is a link to the location of the image on the Search Head.

Script’s capabilities:

  1. Can handle images without file name extension.
  2. Can convert over 100 major image file formats (via ImageMagick).
  3. Built-in caching (can be turned off).
  4. Cached files do not linger around forever. There is a cleaning mechanism.
  5. Dynamically handles many caching scenarios.
  6. Multiple configurable parameters (set in a config file) for agile deployment.
  7. Can handle network connection failures, 404 File not found, and 403 Forbidden errors.
  8. Images are annotated before displaying (Image name, cached condition, and/or error conditions).
  9. The script produces multiple fields that can be used for troubleshooting.

What else can you do with this script?

 I wrote this custom command and created a showcase app in order to solve a specific problem for my customer. I am sure there are a lot more use-cases around images with Splunk. So feel free to borrow whatever you need in order to solve your problem. I attempted to document as much as possible of the code with the intention that someone is going to read, dissect, and/or reverse engineer it.

Here is a list of functionality improvements you can add:

  • Remove the dependency on shell commands (wget, convert) and use equivalent Python modules (this may require additional packages, such as the PIL and wget libraries, imported into Splunk’s shipped Python).
  • Add JavaScript/xml to show images inline without having to open a new browser.
  • Add more annotation to the images to communicate precise/detailed messages to users.
  • In high volume retrieval scenarios, you can download images in bulk to speed up the response time.

The App (rx_image_review_app):

A simple app was created to demonstrate how getimage.py could be used. This app is shipped with sample data and sample images. I borrowed some of the XML and JavaScript code (Table Icon Set – rangemap) from the Splunk 6.x Dashboard Examples app: https://splunkbase.splunk.com/app/1603/

I used the rangemap icons to quickly visualize the status of communication with the imaging server.

 

Here are the relevant directories from the main app directory rx_image_review_app:

  • appserver/static/cache – all retrieved images are deposited here
  • bin – location of the Python script and configuration files
  • /default/data/ui/view – all dashboard XML documents
  • sample_log – sample input data (patients.records)
  • sample_images – sample images (need to be copied to your imaging server)
  • default – authorize.conf and command.conf, both must-haves for the script to work. Please note web.conf is for testing (it turns off splunkweb caching)

Enjoy!

Excelling with Excel in Splunk


Hey all,

If you didn’t already know that you can heavily customize Splunk through our open developer framework, you should check it out. You can even develop and introduce new search commands. This particular blog illustrates this with an example where business people wanted to have Excel files with reports mailed to them from Splunk.

You might already know that Splunk enables you to connect Excel directly to Splunk with the ODBC Connector for Windows, as well as to export a csv file with outputcsv. Dominique from Helvetia Insurance has developed a Splunk TA, freely available on Splunkbase, which allows you to import, export and e-mail data in XLS format.

Dominique notes that, “If tabular reports go to a business user it’s more convenient to provide them with an Excel formatted file rather than a csv.” This app directly sets the correct cell formatting for numbers, dates and strings so they display nicely in Microsoft Excel. The app also converts the normalized _time field from epoch time to a human-readable date. The new search commands that come with the app are called “outputxls” and “sendfile”, and can directly mail reports according to a schedule. The app also brings an xls2csv command which can convert an Excel sheet into a csv that can then be loaded directly into Splunk; the collect command can store it long-term in an index.

outputxls

| outputxls <filename.xls> “<sender>” “<receiver>” “<subject>” “<bodyText>” “<smtpHost>”

Description: This command takes the search results and writes them into an Excel sheet whose name is specified by the parameter. This command also wraps the sendfile command if you add the optional parameters.
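
For example, a scheduled report could end with something along these lines (the addresses and SMTP host are placeholders):

index=_internal | stats count by sourcetype | outputxls report.xls "splunk@example.com" "manager@example.com" "Daily sourcetype report" "Report attached" "smtp.example.com"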

sendfile

| sendfile “<sender>” “<receiver>” “<subject>” “<bodyText>” “<attachment>” “<smtpHost>”

Description: This command takes the file “<attachment>”, which must reside in /var/run/splunk, and sends it via email using the parameters.

csv2xls

| csv2xls <commaseparated.csv> <filename.xls>

Description: This command takes the filename of a csv file residing in /var/run/splunk and writes it into an Excel file.

xls2csv

| xls2csv <filename.xls> <numOfWorksheets> <commaseparated.csv>

Description: This command takes an Excel file located in /var/run/splunk, selects the proper worksheet (worksheet 0 is the first one) and writes it into a Splunk-readable csv file.

 

The creation of the Excel worksheet is based on the Python xlwt module. The extraction capabilities use the xlrd Python module.

If you have any questions related to the app, you can ask directly via Splunk Answers.

Thanks Dominique for developing such a great app and sharing it with the community.

Happy Splunking.

Announcing Splunk Enterprise in Microsoft Azure Marketplace


We are pleased to announce the release of Splunk Enterprise in the Microsoft Azure Marketplace!

Now Azure customers can deploy and purchase Azure-certified Splunk Enterprise clusters in minutes, with the entire point-and-click workflow contained within their Azure portal.

This Bring-Your-Own-License offering on Azure IaaS provides Splunk customers another platform for self-managed Splunk deployments, in addition to on-premises and other public cloud deployment options.

 

What can Splunk Enterprise in Azure Marketplace do for you?

Our mission at Splunk is to make machine data accessible, usable and valuable to everyone. We strive to turn machine data into valuable insights in as little time as possible to help businesses in their journey towards operational intelligence:

[Diagram: time-to-value flowchart]

Splunk Enterprise in Azure Marketplace enables and accelerates that journey by dramatically simplifying the deployment phase: a Splunk Enterprise solution is provisioned in a matter of minutes using Linux VMs and other Azure cloud resources in any user-selected Azure region around the world (there are 19 regions at the time of writing this post). Deployment is powered by Azure Resource Manager templates, which provide extensibility, security, auditing, and tagging features to help you manage your resources as a group after deployment.

Splunk Enterprise in Azure Marketplace therefore provides all the benefits of cloud such as:

  • Low total cost of owning and operating an enterprise-grade operational intelligence solution
  • Faster time-to-value since it is easy and quick to get started on Azure without having to worry about hardware, lengthy installation and configuration processes
  • Easily scale your use case without dealing with months of hardware and capacity planning
  • Increased collaboration with access to your data anywhere, anytime and by any authorized user
  • Less environmental impact with reduced data centers, shared resources and optimized operational usage.

 

How to get started with Splunk Enterprise in Azure Marketplace?


A) From Azure Marketplace:

  1. Search for ‘Splunk’ or visit the Splunk Enterprise offering page directly
  2. Click ‘Deploy’ button which redirects to Azure portal with Splunk Enterprise solution pre-selected

B) From Azure Portal:

  1. Click ‘New’ or ‘+’ from the left panel and type ‘Splunk’ in the top search bar to search the marketplace
  2. Click ‘Splunk Enterprise’ search result to start configuring your Splunk Enterprise solution

From this point onward, Splunk Enterprise solution configuration is straightforward and divided into 3 steps or tabs:

  1. Basics settings: to select location for all resources, and associated resource group and subscription, as well as admin credentials for underlying Azure VM(s).
  2. Infrastructure settings: to select VM size, and optionally customize the virtual network and subnets.
  3. Splunk settings: to configure a custom DNS subdomain to access the solution, in addition to Splunk admin credentials, and to select a deployment size for Splunk Enterprise. For now, you can choose to deploy Splunk Enterprise as either a single instance or a cluster, where the latter is set to 3 indexer peers, a cluster master and a cluster search head. For security hardening, you can also optionally restrict the IP ranges from which VM access is allowed and from which data can be forwarded from.

[Screenshot: the Splunk settings step in the Azure portal]

 

Deployed Topology of Splunk Enterprise

You can specify the desired deployment, whether it’s a single instance or a 3-peer indexer cluster for higher usage and availability. Each indexer has eight 1TB VHDs (Azure Standard Storage) striped in RAID 0 configuration for a total of 8TB per indexer and a whopping 3000 IOPS based on internal tests. At a data ingestion rate of say 100GB/day, there’s enough fast storage in the cluster for about 7-month data retention. The following diagram shows the architecture of the cluster version of Splunk Enterprise deployed in Azure Marketplace by an example company ABC:

[Diagram: topology of the clustered Splunk Enterprise deployment on Azure]

NOTE:

  • This solution uses Splunk’s default certificates to enable secure HTTPS traffic, but this will create a browser warning since the certificates are self-signed. Please follow instructions in Splunk Docs to secure Splunk Web with your own SSL certificates.
  • This solution uses Splunk’s 60-day Enterprise Trial license, which includes only 500MB of indexing per day. Contact the Splunk sales team online if you need to extend your license or need more volume per day. Once you acquire a license, please follow instructions in Splunk Docs to install the license in the single-node deployment, or, in case of a cluster deployment, you can configure a central license master to which the cluster peer nodes can subscribe. You could re-use the existing cluster master for the license master role or create a new dedicated node.
  • The cluster version of this solution will most likely need more than 20 cores, which will require an increase in your default Azure core quota for ARM. Please contact Microsoft support to increase your quota.
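For reference, here is a minimal sketch of the web.conf settings involved when you bring your own certificate on Splunk 6.x; the key and certificate paths are placeholders for files you provide, and the Splunk Docs instructions above remain the authoritative procedure. Place the settings in $SPLUNK_HOME/etc/system/local/web.conf:

[settings]
enableSplunkWebSSL = true
# placeholder paths, relative to $SPLUNK_HOME; point these at your own key and certificate
privKeyPath = etc/auth/splunkweb/mySplunkWebPrivateKey.key
caCertPath = etc/auth/splunkweb/mySplunkWebCertificate.pem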

 

What’s next?

We’re excited about this first release of Splunk Enterprise in Azure Marketplace. Stay tuned for additional enhancements to make the solution even more customizable and to leverage more HA/DR capabilities. Reach us at azure-marketplace@splunk.com or leave us a note below to tell us how you’re using Splunk Enterprise in Azure Marketplace and which features and Azure integrations you’d like to see added. Now deploy Splunk and start listening to your data!

Login Screen of Splunk on Azure

Another Update to Keyword App


It’s been three years since I first released the relatively simple Keyword app on Splunkbase and wrote an initial blog entry describing it, followed by an updated entry. In summary, the Keyword app is a series of form search dashboards designed for Splunk 6.x and later that allow a relatively new user to type in keywords (e.g., error, success, fail*) and get quick analytical results such as baselines, prediction, outliers, etc. Splunk administrators can give this app to their users as is, use the app as a template to write their own keyword dashboards, or take the searches in the app to create new views.

For this update, I’ve used fellow Splunker Hutch’s icons to update the display. I also removed the quotes around the token in the search so that users can now type things like

index=_internal err*

or anything else you want to use before the pipe symbol in a search. Finally, I added a new dashboard using the abstract command. The abstract command in Splunk is a way to view a summary of multi-line events using a scoring mechanism, saving you from having to read the whole event. This is useful for things like stack traces, where you don’t want to view the entire trace as a single event. Rather than continue to describe it, I’ll end with a quick example of the command and a screenshot of the form search dashboard.
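To give a feel for the command itself, here’s a minimal example search; the index and keyword are just placeholders, so point it at any source that has multi-line events:

index=_internal err* | abstract maxlines=5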

Abstract Form Search

Splunking Microsoft Azure Data


There are a lot of services in Microsoft Azure, and a lot of those services are producing machine data. Hal Rottenberg wrote a post covering several of these services and some ways to integrate Splunk with Microsoft Azure. We recently released a new cross-platform Azure add-on that consumes data for some IaaS and PaaS services. In this blog post, I will detail what we are collecting, how to use the data, and what is coming next for the add-on.

What are we collecting?

The add-on ships with two modular inputs:

  1. Azure Diagnostics – this input collects data from an Azure Storage account that contains virtual machine diagnostic information.
  2. Azure Website Diagnostics – this input collects server and application data for Azure Websites. This data is stored in an Azure Storage account blob storage container.

These modular inputs rely on diagnostic data written to an Azure Storage account.  For more information about enabling diagnostic data for your Virtual Machines and Azure Websites, refer to this article.
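Once the inputs are enabled, a quick sanity check is a simple search over whatever index you configured the inputs to write to; the index name and sourcetype filter below are assumptions, so adjust them to match your configuration:

index=main sourcetype=azure* | stats count by sourcetype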

How to use the Azure data

There are several prebuilt panels included in the add-on to get you started quickly:

Windows Events

  • Azure – top Event IDs (last 24 hours)
  • Azure – Security Event ID Count (last 7 days)
  • Azure – Application Event ID Count (last 7 days)
  • Azure – Event Channel Distribution

Azure Windows Events

 

Performance

  • Azure – Performance – Available Counters
  • Azure – Performance – % Processor Time by Instance
  • Azure – Performance – Memory Available Bytes
  • Azure – Performance – Memory Pages/sec
  • Azure – Performance – Disk Reads/sec
  • Azure – Performance – Disk Write/sec
  • Azure – Performance – Thread Context Switches/sec

Azure Performance

 

Azure Website

  • Azure – Website – Top Transfers by IP Address
  • Azure – Website – Top Transfers by HTTP Request
  • Azure – Website – Average Request Size
  • Azure – Website – Application Message Level Distribution
  • Azure – Website – Application Message Details

Azure Websites

 

General

  • Count by sourcetype

 

What is coming next?

The next integration slated to roll into this add-on is Azure audit data. This modular input will pull data from the Azure Insights Events API.  The idea here is to be able to tell who did what and when.

 

Running Splunk in Azure

In addition to collecting data from Microsoft Azure, it is possible to quickly spin up Splunk workloads in Azure. The easiest way to do this is by using the Azure Marketplace. For more information on this, read Roy Arsan’s article about Splunk in the Azure Marketplace.

 

Resources

Download the Azure Add-on on Splunkbase

Azure tag on answers.splunk.com

Azure tag on Splunk blogs

Splunking Microsoft Azure Audit Data


We recently made available a community-supported Splunk Add-on for Microsoft Azure, which gives you insight into Azure IaaS and PaaS. I am happy to announce that this add-on now includes the ability to ingest Azure Audit data. The idea behind Splunking Azure Audit logs is to be able to tell who did what and when, and which events might impact the health of your Azure resources. In this blog post, I will detail what we are collecting, how to use the data, and what is coming next for the add-on.

What are we collecting?

This update adds a new modular input to your Splunk environment:

AzureAuditInput

 

This modular input grabs data using the Azure Insights Events API.
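As a starting point for the “who did what and when” question, a search along these lines works; the sourcetype and field names are assumptions based on what the Azure Insights Events API returns, so verify them against your own events:

sourcetype=azure:audit | stats count by caller, operationName, resourceGroupName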

How to use the Azure Audit data

There are several new prebuilt panels included in the add-on to get you started:

  • Azure – Audit – Event Actions
  • Azure – Audit – Events by Caller
  • Azure – Audit – Events by Resource Group
  • Azure – Audit – Operation Levels by Geography
  • Azure – Audit – Top Events by Resource Type

 
AzureAudit

 

Setting up the Azure Audit input

The Azure Insights Events API is a REST endpoint and requires a little bit of setup on the Azure side. An Azure Active Directory application must be set up and a few key pieces of information must be supplied to the modular input. Don’t worry though, there are step-by-step instructions provided in the docs folder in the add-on.

For a quick start, check out the video below:

What is coming next?

The next integration slated to roll into this add-on is Azure Network Security Group logs – meaning network flow, load balancers, and network security group activity. Stay tuned…


Building add-ons has never been easier


Speaking from personal experience, building add-ons had never been the easiest task for me. There are numerous steps required, and each step may come with its own challenges. Worse, I might spend time on a solution only to hear it wasn’t best practice.

Wouldn’t it be great if there were a way to make this process easier by equipping developers, consultants, and Splunk admins with the right tool to build their own add-ons? To take it a step further, wouldn’t it be even better if this tool actually helped you build the add-on by following tried-and-true best practices?

Allow me to introduce you to the Splunk Add-on Builder, which helps address the challenges highlighted above. Splunk Add-on Builder V1 was released on April 1st, 2016. In this release, the Add-on Builder assists with building the basic components of add-ons, namely:

UI based creation of the add-on and its folder structure:

Screen Shot 2016-04-01 at 5.12.33 PM

Intuitive add-on setup page creation: No need to write xml files, just select the fields you want your add-on setup to expose. Multiple accounts and custom fields are easy to support now:

Screen Shot 2016-04-04 at 2.47.28 PM

Building data collection: in this release, the Add-on Builder helps you build your modular inputs, supporting various mechanisms such as REST APIs, shell commands, or your own Python code to pull data from third-party systems. If you have a REST API, let us generate the code and modular input for you. Just input the API URL and parameters and hit save:

REST

If you need a modular input that requires you to write your own Python code or run a system command, you can use the Add-on Builder to interactively validate the output:

Modinputvalidation

Interactive field extraction: the Add-on Builder uses a machine learning clustering algorithm to classify data ingested by the add-on into groups that share the same format structure. That means it can automatically generate the field extractions for each group, letting you skip the grunt work and go straight to recognizing event types.

Fieldextraction

Mapping to CIM made easy:

CIM2

Last but not least, the Add-on Builder offers validation for best practices so you can see if you’re going to run into trouble before you post your Add-on on Splunkbase:

Score

Please give Splunk Add-on Builder a try and provide us with your feedback. We’re very excited to hear how the first version works for you, and we are looking forward to your help to take this to the next level.

Show Me Your Viz!


Have you just downloaded Splunk 6.4 and asked yourself what’s new and awesome? Have you ever built a dashboard with a custom visualization and wanted to share that with someone or easily replicate it somewhere else? Have Splunk’s core visualizations dulled your senses?

Reader, please meet Splunk 6.4 Custom Visualizations. Are you besties yet? If not, you two will be making sweet love by the end of this article.

I’m going to walk you through a Custom Visualization app I recently wrote and lay it all out there. I’m going to talk about why building these visualizations in Simple XML and HTML is a pain in your ass and how the APIs make your life easier. I’m going to show you techniques I learned the hard way so you can accelerate the creation of your app. Does all that sound good? Great, let’s get started.

First, a little backstory…

I recently presented a Developer session at Splunk Live San Francisco this past March. After the session, a customer came up to me and asked a really simple question: how do you plot single values on a Splunk map? The simple answer is you don’t. But that’s a really shitty answer, especially since there is no way to do it natively. Our maps were built to leverage aggregate statistics clustered into geographic bins from the geostats command. The output is bubbles on a map that optionally show multiple series of data (like a pie chart) if you use a split-by clause in your search. If you’ve ever tried to plot massive numbers of categories using the argument globallimit=0, your system will come to a grinding halt as it spirals into Javascript-induced browser death. I thought about the problem for a while and figured there had to be a better way.

Less than a month later, in the span of a week, I built an app that solves this problem. The best part is it’s distributable on Splunkbase and can be used by anyone running a Splunk 6.4 search head. If you’re a Javascript pro then I’m confident you can build a visualization app in a couple days. Here’s how I did it and what I learned.

Step 0 – Follow The Docs

The docs are quite good for a new feature. Use them as a definitive reference on how to build a Custom Visualization app. Use this article to supplement the docs for some practical tips and tricks.

Step 1 – Put Splunk Into Development Mode

This will prevent splunk from caching javascript and won’t minify your JS or CSS. This allows you to just refresh the browser and re-load any changes you made without having to restart Splunk.

Create $SPLUNK_HOME/etc/system/local/web.conf with the following settings and restart Splunk.

[settings]
minify_js = False
minify_css = False
js_no_cache = True
cacheEntriesLimit = 0
cacheBytesLimit = 0
enableWebDebug = True

Step 2 – Create The App Structure

Follow this section from the docs. There’s a link to a template that has the requisite app structure and files you’ll need to be successful. The only thing it’s missing is the static directory to house your app icons if you decide to post the app on Splunkbase and want it to look pretty.

Step 3 – Create The Visualization Logic

Step 3a – Include Your Dependences

Here’s where we’ll spend the bulk of our time. This step is where you install any dependencies that your visualization relies on and write your code to display your viz.

You may be asking yourself when I’m going to get to some of the relevant points I mentioned above; specifically, how using this API makes your life easier. Here’s where the rubber meets the road. We’re managing dependencies a little differently this time around. If you’ve built a custom visualization in Splunk 6.0-6.3, you’ve done it using one of two methods. The first is converting your Simple XML dashboard into HTML. This works well but isn’t guaranteed to be upgrade friendly. The second method is loading Javascript code from Simple XML. If you’ve used either method, you’ll likely have run into RequireJS. We use RequireJS under the covers in Splunk to manage the loading of Javascript libraries. It works but it’s a major pain in the ass and it’s a nightmare when you have non-AMD compliant modules or conflicting version dependencies for modules that Splunk provides. I come from a Python world where importing dependencies (modules) is easy. Call me simplistic or call me naive, but why shouldn’t Javascript be so simple?

The Custom Visualization framework makes dealing with dependencies a lot easier by leveraging npm and webpack. This makes maintaining and building your application a lot easier than trying to do things in RequireJS. Use npm to download dependencies with a package.json (or manually install with npm install) and webpack will build your code and all the dependencies into a single visualization.js file that the custom viz leverages. This code will integrate smoothly with any dashboard and you won’t run into conflicts like you may have in the past with RequireJS.

The visualization I built requires a couple of libraries: Leaflet and a plugin called Leaflet.markercluster.

Here’s what it looked like to load these libraries using RequireJS in an HTML dashboard within an app called ‘leaflet_maps’. Luckily, Leaflet doesn’t require any newer versions of Jquery or Underscore than are provided by Splunk. I’ve had to shelve an app I want to build because of RequireJS and the need for newer versions of Jquery and Lodash (modified Underscore). If you’re a RequireJS pro you may be screaming at me to use Multiversion support in RequireJS. I’ve tried it unsuccessfully. If you can figure it out, please let me know what you did to get it working.

require.config({
    baseUrl: "{{SPLUNKWEB_URL_PREFIX}}/static/js",
    paths: {
        'leaflet': '/static/app/leaflet_maps/js/leaflet-src',
        'markercluster': '/static/app/leaflet_maps/js/leaflet.markercluster-src',
        'async': '/static/app/leaflet_maps/js/async',
    },
    shim: {
        leaflet: {
            exports: 'L'
        },
        markercluster: {
            deps: ['leaflet']
        }
    }
});

This piece of code literally took me half a day to figure out. Things are easy in RequireJS if your module is AMD compliant. If it isn’t, like Leaflet.markercluster, you have to shim it. The bottom line is it’s a pain in the ass and difficult to get working. It took a lot of Google searching and digging through docs.

Here’s what it looks like using npm and webpack.

npm config – package.json

{
  "name": "leaflet_maps_app",
  "version": "1.0.0",
  "description": "Leaflet maps app with Markercluster plugin functionality.",
  "main": "visualization.js",
  "scripts": {
    "build": "node ./node_modules/webpack/bin/webpack.js",
    "devbuild": "node ./node_modules/webpack/bin/webpack.js --progress",
    "watch": "node ./node_modules/webpack/bin/webpack.js -d --watch --progress"
  },
  "author": "Scott Haskell",
  "license": "End User License Agreement for Third-Party Content",
  "devDependencies": {
    "imports-loader": "^0.6.5",
    "webpack": "^1.12.6"
  },
  "dependencies": {
    "jquery": "^2.2.0",
    "underscore": "^1.8.3",
    "leaflet": "~1.0.0-beta.2"
  }
}

This is the same package.json provided in the sample app template. The only things I modified were the name, author, license, devDependencies and dependencies. The important dependencies are imports-loader and leaflet. Leaflet.markercluster is available via npm but it’s an older version that was missing some features I needed so I couldn’t include it here. Now all I need to do is have nodejs and npm installed and run ‘npm install’ in the same directory as the package.json ($SPLUNK_HOME/etc/apps/leaflet_maps_app/appserver/static/visualizations/leaflet_maps). This creates a node_modules directory with your dependencies code.

webpack config – webpack.config.js

var webpack = require('webpack');
var path = require('path');

module.exports = {
    entry: 'leaflet_maps',
    resolve: {
        root: [
            path.join(__dirname, 'src'),
        ]
    },
    output: {
        filename: 'visualization.js',
        libraryTarget: 'amd'
    },
    module: {
        loaders: [
            {
                test: /leaflet\.markercluster-src\.js$/,
                loader: 'imports-loader?L=leaflet'
            }
        ]
    },
    externals: [
        'vizapi/SplunkVisualizationBase',
        'vizapi/SplunkVisualizationUtils'
    ]
};

Again, this is the same file provided in the template app. The difference here is the ‘loaders’ section of the ‘module’ definition. I’m using the webpack imports-loader to shim the Leaflet.markercluster module since it’s not AMD compliant. This is analogous to the RequireJS shim code I provided above. The difference here is that it’s much more intuitive (once you figure out you need imports-loader) to shim in webpack. The test key is a regex that matches the Leaflet.markercluster source file. The loader key defines the dependency on the function ‘L’ which is exported in the leaflet library.

Lastly, here’s the one small portion of RequireJS that you have to touch in your source.

define([
            'jquery',
            'underscore',
            'leaflet',
            'vizapi/SplunkVisualizationBase',
            'vizapi/SplunkVisualizationUtils',
            '../contrib/leaflet.markercluster-src'
        ],
        function(
            $,
            _,
            L,
            SplunkVisualizationBase,
            SplunkVisualizationUtils
        ) {

I’ve created a contrib directory and added some supporting Javascript and CSS files. I’ve defined my leaflet module and its L function, as well as the leaflet.markercluster source location in contrib. Notice that since leaflet.markercluster is not AMD compliant, I don’t need to define a corresponding function parameter for it.

Now all you have to do is build the code using npm.

bash-3.2$ cd $SPLUNK_HOME/etc/apps/leaflet_maps_app/appserver/static/visualizations/leaflet_maps
bash-3.2$ npm run build

> leaflet_maps_app@1.0.0 build /opt/splunk/etc/apps/leaflet_maps_app/appserver/static/visualizations/leaflet_maps
> node ./node_modules/webpack/bin/webpack.js

Hash: 9ea37b6ef76197f0a3b7
Version: webpack 1.12.14
Time: 511ms
           Asset    Size  Chunks             Chunk Names
visualization.js  649 kB       0  [emitted]  main
   [0] ./src/leaflet_maps.js 7.39 kB {0} [built]
    + 6 hidden modules

Any time you make subsequent changes to your source just re-run the build and re-fresh Splunk.
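If you get tired of re-running the build by hand, the watch script defined in the package.json above rebuilds automatically whenever a source file changes:

bash-3.2$ npm run watch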

I built this app on CentOS 7 in a docker image. I have npm and node installed in the docker image but it’s also possible to leverage node that gets shipped with Splunk. You’d just tweak this section of your package.json.

"scripts": {
    "build": "$SPLUNK_HOME/bin/splunk cmd node ./node_modules/webpack/bin/webpack.js",
    "devbuild": "$SPLUNK_HOME/bin/splunk cmd node ./node_modules/webpack/bin/webpack.js --progress",
    "watch": "$SPLUNK_HOME/bin/splunk cmd node ./node_modules/webpack/bin/webpack.js -d --watch --progress"
  },

Then export the path to your Splunk 6.4 install as SPLUNK_HOME.

bash-3.2$ export SPLUNK_HOME=/opt/splunk

Your visualization code and all dependencies are now built into a single file called visualization.js.

Step 3b – Write The Code

If you’re using the app template and following the docs then you’ll be modifying the file /appserver/static/visualizations//src/visualization_source.js to place your code. There are a bunch of methods in the API that will be relevant.

The first method is updateView

The updateView method is where you stick your custom code to create your visualization. There’s nothing super fancy going on and the docs do a great job explaining what needs to be done here. One important thing to cover is how to handle searches that return > 50,000 results. If you’ve worked with the REST API or written dashboards using the SplunkJS stack you’ll know that you can only get 50,000 results at a time. Things are no different here. It just wasn’t obvious how to do it. I had to dig through the API to figure it out. Here’s how I did it so you don’t have to waste your time.

    updateView: function(data, config) {
        // get data
        var dataRows = data.rows;

        // check for data
        if (!dataRows || dataRows.length === 0 || dataRows[0].length === 0) {
            return this;
        }

        if (!this.isInitializedDom) {
	    // more initialization code here
	    this.chunk = 50000;
	    this.offset = 0;
	}

	// the rest of your code logic here

	// Chunk through data 50k results at a time
	if(dataRows.length === this.chunk) {
	    this.offset += this.chunk;
	    this.updateDataParams({count: this.chunk, offset: this.offset});
	}
    }

I initialize a couple of variables: offset and chunk. I then check to see if I get a full 50k events back. If so, I increment my offset by the chunk size and update my data params. This will continue to page through my result set, calling updateView each time and synchronously running back through the code, until I get < 50k events. It's straightforward but not documented anywhere.

This leads us to the second method getInitialDataParams

This is where you set the output format of your data and how many results the search is limited to.

        // Search data params
        getInitialDataParams: function() {
            return ({
                outputMode: SplunkVisualizationBase.ROW_MAJOR_OUTPUT_MODE,
                count: 0
            });
        },

I set the count to 0, which returns an unlimited number of results. This can be dangerous and could potentially overwhelm your visualization, so be sure it can handle the volume before you go down this route. Here are the available options and output modes.

/**
         * Override to define initial data parameters that the framework should use to
         * fetch data for the visualization.
         *
         * Allowed data parameters:
         *
         * outputMode (required) the data format that the visualization expects, one of
         * - SplunkVisualizationBase.COLUMN_MAJOR_OUTPUT_MODE
         *     {
         *         fields: [
         *             { name: 'x' },
         *             { name: 'y' },
         *             { name: 'z' }
         *         ],
         *         columns: [
         *             ['a', 'b', 'c'],
         *             [4, 5, 6],
         *             [70, 80, 90]
         *         ]
         *     }
         * - SplunkVisualizationBase.ROW_MAJOR_OUTPUT_MODE
         *     {
         *         fields: [
         *             { name: 'x' },
         *             { name: 'y' },
         *             { name: 'z' }
         *         ],
         *         rows: [
         *             ['a', 4, 70],
         *             ['b', 5, 80],
         *             ['c', 6, 90]
         *         ]
         *     }
         * - SplunkVisualizationBase.RAW_OUTPUT_MODE
         *     {
         *         fields: [
         *             { name: 'x' },
         *             { name: 'y' },
         *             { name: 'z' }
         *         ],
         *         results: [
         *             { x: 'a', y: 4, z: 70 },
         *             { x: 'b', y: 5, z: 80 },
         *             { x: 'c', y: 6, z: 90 }
         *         ]
         *     }
         *
         * count (optional) how many rows of results to request, default is 1000
         *
         * offset (optional) the index of the first requested result row, default is 0
         *
         * sortKey (optional) the field name to sort the results by
         *
         * sortDirection (optional) the direction of the sort, one of:
         * - SplunkVisualizationBase.SORT_ASCENDING
         * - SplunkVisualizationBase.SORT_DESCENDING (default)
         *
         * search (optional) a post-processing search to apply to generate the results
         *
         * @param {Object} config The initial config attributes
         * @returns {Object}
         *
         */

Some other methods you may want to look into are initialize, setupView, formatData and drilldown. If you want to look at all the methods take a look at $SPLUNK_HOME/share/splunk/search_mrsparkle/exposed/js/vizapi/SplunkVisualizationBase.js

Steps 4-7 – Add CSS, Add Configuration Settings, Try Out The Visualization, Handle Data Format Errors

Refer to the docs.

Step 8 – Add User Configurable Properties

You’ll most likely want to give your user an interface to tweak the parameters of the visualization. You HAVE to define these properties in default/savedsearches.conf and README/savedsearches.conf.spec. Follow the docs here and don’t skip this step!
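As a hedged illustration using the cluster property from my app (the same property that shows up in the format menu in the next step), the entries look roughly like this; treat the [default] stanza and values as my assumptions and follow the docs for the authoritative format.

default/savedsearches.conf:

[default]
# default value; the property name follows display.visualizations.custom.<app>.<viz>.<property>
display.visualizations.custom.leaflet_maps_app.leaflet_maps.cluster = true

README/savedsearches.conf.spec:

[default]
display.visualizations.custom.leaflet_maps_app.leaflet_maps.cluster = <bool>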

Step 9 – Implement a format Menu

Refer to the docs. If you want to add a default value here’s an example of how you’d do it. Quick warning, I had to strip out the HTML so WordPress wouldn’t mangle it. Have a look at the code in my app if you want a full example.

splunk-radio-input name="display.visualizations.custom.leaflet_maps_app.leaflet_maps.cluster" value="true"

That’s all there is to it! I’m not a Javascript developer and this was pretty damn simple to figure out. I hope you find the experience as enjoyable as I did. If you build something cool, please contribute it back to the community and post the app on Splunkbase.com.

HTTP Event Collector and sending from the browser


Recently we’ve been seeing a bunch of questions coming in related to errors when folks try to send events to HEC (HTTP Event Collector) from the browser and the requests are denied. One reason you might want to send from the browser is to capture errors or logs within your client-side applications. Another is to capture telemetry / how the application is being used. It is a great match for HEC however…

Making calls from a browser to Splunk gets you into the world of cross-domain requests and CORS. In this post I’ll describe quickly what CORS (Cross-Origin Resource Sharing) is and how you can enable your browsers to take advantage of HEC.

Problem

Browser clients are trying to send events to HEC from Javascript and the requests are denied. The issue is related to CORS. Most browsers by default (Chrome, Safari) are not going to allow cross-domain requests (which includes HEC) unless they are authorized. A cross-domain call is when a page served from one domain (like a website) tries to make a request from a script to another domain (like the Splunk server). The browser will first make a pre-flight request asking the target server who is allowed to access it and what methods are supported. The server may respond with an Access-Control-Allow-Origin header which includes either a wildcard (*) or a list of domains that are acceptable. Assuming the browser gets a response that indicates its origin is permitted, it will allow the request to go through. If the origin is not permitted, then an HTTP Status 401 will get returned.
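To make the problem concrete, here is roughly what such a browser-side call looks like. The host, port, and token are placeholders, and I’m using fetch purely for brevity; the same CORS rules apply to XMLHttpRequest or any client library.

// A minimal sketch of a cross-domain request from a web page to HEC.
// splunk.example.com and YOUR-HEC-TOKEN are placeholders; substitute your own.
fetch("https://splunk.example.com:8088/services/collector", {
    method: "POST",
    headers: {
        "Authorization": "Splunk YOUR-HEC-TOKEN"
    },
    body: JSON.stringify({
        event: { message: "page loaded", url: window.location.href }
    })
}).then(function (response) {
    if (!response.ok) {
        console.error("HEC returned HTTP " + response.status);
    }
}).catch(function (err) {
    // a CORS denial surfaces here as a network-level failure
    console.error("Request blocked or failed:", err);
});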

Solution

Splunk supports CORS and it can be enabled in configuration. Depending on the version of Splunk, where you enable it differs. In Splunk 6.4, it is enabled in the [http] stanza of inputs.conf, which is specific to HEC. You’ll see the crossOriginSharingPolicy setting there.

If you are using Splunk 6.3, then the setting is in server.conf under [httpserver] and applies generally to the REST API as well. Once the policy is properly configured, browsers will be able to make cross domain requests.
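For Splunk 6.4, the setting looks something like this in the [http] stanza of inputs.conf; the origin listed is just an example, so use the domain(s) your pages are actually served from, or * to allow any origin:

[http]
# example origin; replace with the site(s) hosting your browser code
crossOriginSharingPolicy = https://www.example.com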

Note: For Splunk Cloud customers, you will need to work with support to get this enabled.

Caveats on SSL and CORS

There is one big caveat though: the SSL cert on the Splunk side MUST be a valid cert. This is not a Splunk constraint, it is a constraint of browsers like Chrome, Firefox, etc. Without a valid SSL cert the request will not complete and you will get an error. The only way to work around this is to not use SSL (which I am guessing you don’t want to do).

Again, which version of Splunk you are using determines where to configure the valid SSL cert. If you are on Splunk 6.4, this is also in inputs.conf. For Splunk 6.3 it is in server.conf under [sslConfig].

Note: If you are in Splunk Cloud trial or Single Instance then the cert is self-signed and you cannot change it today.

Enjoy having fun with HEC and the browser!

Announcing Splunk Add-on for Microsoft Cloud Services


I am pleased to announce the availability of the Splunk Add-On for Microsoft Cloud Services. Released on April 1st, 2016, this add-on, which is available on Splunkbase, provides Splunk admins the ability to collect events from various Microsoft Cloud Services APIs. In this first release, this includes:

  • Admin, user, system, and policy action events from a variety of Office 365 services such as Sharepoint Online and Exchange Online and other services supported by the Office 365 Management API.
  • Audit logs for Azure Active Directory, supported by the Office 365 Management API.
  • Current and historical service status, as well as planned maintenance updates for a variety of services supported by the Office 365 Service Communications API.

If you are wondering what use cases could be achieved by ingesting this data into Splunk Enterprise or Splunk Cloud, following is a small sample:

  • Track all your Sharepoint Online user and admin level activities. Activities include file and folder actions such as view, create, edit, upload, delete, and download, in addition to file sharing and collaboration.
  • Track Azure AD authentications by user and IP for all of your Microsoft cloud apps such as Skype, Yammer, Exchange Online, and other Office 365 apps.
  • Whether it is Skype, Yammer, or any other Microsoft cloud service, track the availability and scheduled maintenance windows of these services.
  • Track Exchange online admin and user activities.
  • Track user adoption by browser and geo locations
  • This add-on is CIM compatible. This means that by using Splunk Enterprise Security, you can address a myriad of security focused use cases such as improbable access detection (snapshot below):

Screen Shot 2016-04-18 at 7.50.52 AM

  • Office 365 apps access anomaly detection (snapshot below):

Screen Shot 2016-04-18 at 7.43.52 AM

  • Using the prebuilt panels that come with the add-on, you can get a quick start on building out your own dashboards. These are some of the panels included in this release:

Screen Shot 2016-04-15 at 12.30.45 PM

Last but not least, the configuration of this add-on supports OAuth v2, allowing you to run the setup without having to save any Azure credentials on your Splunk instance. Please give Splunk Add-on for Microsoft Cloud Services a try and let us know your feedback.

Happy Splunking!

Creating a Splunk Javascript View


One of the best things about Splunk is the ability to customize it. Splunk allows you to make your own Javascript views without imposing many limitations on you. This means you can make apps that include things such as:

  • Custom editors or management interfaces (e.g. lookup editing, slide-show creation)
  • Custom visualizations (though modular visualizations are likely what you will want to use from now on)
  • etc.

That said, getting started on creating a Splunk Javascript view can appear a little daunting at first. It really isn’t that hard though. Keep reading and I’ll explain how to do it.

Parts of a Splunk Javascript View

Before we get started, let’s outline the basic parts of a custom Javascript view:

  • Javascript view file (appserver/static/js/views/HelloSplunkJSView.js): the main view Javascript file
  • HTML template file (appserver/static/js/templates/HelloSplunkJSView.html): where you put the HTML you want rendered
  • Stylesheet file (appserver/static/css/HelloSplunkJSView.css): where you put your custom CSS
  • Third-party libraries (appserver/static/contrib/text.js): any third-party libraries, kept under a contrib directory
  • Splunk view (default/data/ui/views/hellosplunkjs.xml): the view where the custom view will appear
  • View Javascript file (appserver/static/hellosplunkjs.js): the Javascript that puts your custom view on the dashboard

Lets get started

We are going to make a very simple app that will allow you to run a search and get the most recent event. The completed app is available on Github for your reference.

Step 1: make a basic Splunk app

We are going to put our content into a basic Splunk app. To do this, we will make a few files inside of a Splunk install.

Step 1.1: make app directory

First, make a directory in your Splunk install under the /etc/apps directory for the app you are going to create. I’m going to name the app “hello-splunkjs”. Thus, I will make the following directory: /etc/apps/hello-splunkjs

Step 1.2: make app.conf

Now, let’s make the app.conf file. This goes here in your Splunk install: /etc/apps/hello-splunkjs/default/app.conf

The contents will just be this:

[launcher]
version = 0.5
description = Example of writing basic SplunkJS views

[ui]
is_visible = true
label = Hello SplunkJS 

If you restart Splunk, you should see the app.

Step 2: make the basic view

Now let’s get started making the view.

Step 2.1: deploy the template

Now, let’s make the basic Javascript view file. To make things a little easier, you can start with the template available on Github. The Javascript views ought to be placed in your app in the /appserver/static/js/views/ directory. In our case, the view will be in /etc/apps/hello-splunkjs/appserver/static/js/views/HelloSplunkJSView.js.

All of the places in the template with a CHANGEME will be replaced in the next few steps.

Step 2.2: update the app references

We need to update the references in the require.js statement to point to your app. This is done by changing /app/CHANGEME to /app/hello-splunkjs (since our app directory is hello-splunkjs). This results in the following at the top of /etc/apps/hello-splunkjs/appserver/static/js/views/HelloSplunkJSView.js:

define([
    "underscore",
    "backbone",
    "splunkjs/mvc",
    "jquery",
    "splunkjs/mvc/simplesplunkview",
    'text!../app/hello-splunkjs/js/templates/CHANGEMEView.html', // app path updated; the view name is still CHANGEME for now
    "css!../app/hello-splunkjs/css/CHANGEMEView.css" // app path updated; the view name is changed in step 2.4
], function(

See the diff in Github.

Step 2.3: add the template and stylesheet files

We now need to add the files that will contain the stylesheet and the HTML. For the stylesheet, make an empty file in /etc/apps/hello-splunkjs/appserver/static/css/HelloSplunkJSView.css. We will leave it empty for now.

For the HTML template, make a file /etc/apps/hello-splunkjs/appserver/static/js/templates/HelloSplunkJSView.html with the following:

Hellll-lllo Splunk!

See the diff in Github.

Step 2.4: set the view name

Now, let’s change the other CHANGEMEs in the view’s Javascript (/etc/apps/hello-splunkjs/appserver/static/js/views/HelloSplunkJSView.js). We are naming the view “HelloSplunkJSView”, so change the rest of the CHANGEMEs accordingly.

This will result in:

define([
    "underscore",
    "backbone",
    "splunkjs/mvc",
    "jquery",
    "splunkjs/mvc/simplesplunkview",
    'text!../app/hello-splunkjs/js/templates/HelloSplunkJSView.html',
    "css!../app/hello-splunkjs/css/HelloSplunkJSView.css"
], function(
    _,
    Backbone,
    mvc,
    $,
    SimpleSplunkView,
    Template
){
    // Define the custom view class
    var HelloSplunkJSView = SimpleSplunkView.extend({
        className: "HelloSplunkJSView",
        
        defaults: {
        	
        },
        
        initialize: function() {
        	this.options = _.extend({}, this.defaults, this.options);
        	
        	//this.some_option = this.options.some_option;
        },
        
        render: function () {
        	
        	this.$el.html(_.template(Template, {
        		//'some_option' : some_option
        	}));
        	
        }
    });
    
    return HelloSplunkJSView;
});

See the diff in Github.

Step 2.5: insert the text.js third party library

To make things easy, we are going to use a third-party library called text.js. The nice thing about Splunk views is that you can use the plethora of third-party Javascript libraries in your apps. It is best to keep third-party libraries in a dedicated directory so that you can easily determine which parts were made by someone else. Let’s put those under /appserver/static/contrib. In the case of our app, the path will be /etc/apps/hello-splunkjs/appserver/static/contrib.

text.js is available from https://github.com/requirejs/text. Put it in the app at the path /etc/apps/hello-splunkjs/appserver/static/contrib/text.js. Next, we will need to tell our view where to find text.js by adding an entry to the require.js paths configuration. Put the following at the top of /etc/apps/hello-splunkjs/appserver/static/js/views/HelloSplunkJSView.js:

require.config({
    paths: {
    	text: "../app/hello-splunkjs/contrib/text"
    }
});

See the diff in Github.

Step 3: add to a dashboard

Step 3.1: make the view

We need to make a view that will host the Javascript view we just created. To do this, we will create a simple view that includes a placeholder where the view will render.

To do this, create the following view in /etc/apps/hello-splunkjs/default/data/ui/views/hellosplunkjs.xml:

<?xml version='1.0' encoding='utf-8'?>

<form script="hellosplunkjs.js" >
	<label>Hello SplunkJS</label>
	
	<row>
		<html>
			<div id="placeholder_for_view">This placeholder should be replaced with the content of the view</div>
		</html>
	</row>
</form>

See the diff in Github.

Step 3.2: put the view in the app’s navigation file

Next, make the nav.xml to register the view by making the following file in /etc/apps/hello-splunkjs/default/data/ui/nav/default.xml:

<nav color="#3863A0">
  <view name="hellosplunkjs" default="true" />
</nav>

See the diff in Github.

Restart Splunk and navigate to the view; you should see it with the text “This placeholder should be replaced with the content of the view”.

Step 3.3: wire up the Javascript view to dashboard

Now, we need to wire-up the Javascript view to the dashboard. To do so, make the following file at /etc/apps/hello-splunkjs/appserver/static/hellosplunkjs.js:

require.config({
    paths: {
        hello_splunk_js_view: '../app/hello-splunkjs/js/views/HelloSplunkJSView'
    }
});

require(['jquery','underscore','splunkjs/mvc', 'hello_splunk_js_view', 'splunkjs/mvc/simplexml/ready!'],
		function($, _, mvc, HelloSplunkJSView){
	
    // Render the view on the page
    var helloSplunkJSView = new HelloSplunkJSView({
        el: $('#placeholder_for_view')
    });
    
    // Render the view
    helloSplunkJSView.render();
	
});

This script instantiates an instance of the HelloSplunkJSView and tells it to render in the “placeholder_for_view” element (which was declared in the hellosplunkjs.xml view).

See the diff in Github.

Step 4: add click handlers

Now, let’s make something in the view that is interactive (takes input from the user).

Step 4.1: create an HTML element that is clickable

We need to change the template file to include a clickable element. To do this, modify the file /etc/apps/hello-splunkjs/appserver/static/js/templates/HelloSplunkJSView.html with the following:

Hellll-lllo Splunk!
<div class="get-most-recent-event">Get the most recent event in Splunk</div>
<textarea id="most-recent-event"></textarea>

See the diff in Github.

Step 4.2: wire up the clickable element to a function

Next, wire up a click handler along with a function that will fire when the user clicks the “get-most-recent-event” element. We do this by adding an events attribute that connects the HTML to a function called doGetMostRecentEvent(), which we will create in the next step:

define([
    "underscore",
    "backbone",
    "splunkjs/mvc",
    "jquery",
    "splunkjs/mvc/simplesplunkview",
    'text!../app/hello-splunkjs/js/templates/HelloSplunkJSView.html',
    "css!../app/hello-splunkjs/css/HelloSplunkJSView.css"
], function(
    _,
    Backbone,
    mvc,
    $,
    SimpleSplunkView,
    Template
){
    // Define the custom view class
    var HelloSplunkJSView = SimpleSplunkView.extend({
        className: "HelloSplunkJSView",
        
        events: {
        	"click .get-most-recent-event" : "doGetMostRecentEvent"
        },

See the diff in Github.

Step 4.3: run a search from Javascript

Now, let’s add a require statement to import the SearchManager so that we can kick off a search. We do this by adding “splunkjs/mvc/searchmanager” to the define statement and assigning the resulting object to “SearchManager” in the function:

define([
    "underscore",
    "backbone",
    "splunkjs/mvc",
    "jquery",
    "splunkjs/mvc/simplesplunkview",
    "splunkjs/mvc/searchmanager",
    'text!../app/hello-splunkjs/js/templates/HelloSplunkJSView.html',
    "css!../app/hello-splunkjs/css/HelloSplunkJSView.css"
], function(
    _,
    Backbone,
    mvc,
    $,
    SimpleSplunkView,
    SearchManager,
    Template
){

See the diff in Github.

Now, let’s add code in the function doGetMostRecentEvent() that will kick off a search and put the most recent event in the view. See below for the created function:

        doGetMostRecentEvent: function(){ 
        	
           // Make a search
            var search = new SearchManager({
                "id": "get-most-recent-event-search",
                "earliest_time": "-1h@h",
                "latest_time": "now",
                "search":'index=_internal OR index=main | head 1 | fields _raw',
                "cancelOnUnload": true,
                "autostart": false,
                "auto_cancel": 90,
                "preview": false
            }, {tokens: true});
            
        	
            search.on('search:failed', function() {
                alert("Search failed");
            }.bind(this));
            
            search.on("search:start", function() {
                console.log("Search started");
            }.bind(this));
            
            search.on("search:done", function() {
                console.log("Search completed");
            }.bind(this));
        	
            // Get a reference to the search results
            var searchResults = search.data("results");
            
            // Process the results of the search when they become available
            searchResults.on("data", function() {
            	$("#most-recent-event", this.$el).val(searchResults.data().rows[0][0]);
            }.bind(this));
            
            // Start the search
            search.startSearch();
            
        },

See the diff in Github.

Restart Splunk and click the “Get the most recent event in Splunk” text; the most recent event should show up in the view:

most_recent_event

Step 4.4: customize styling

The link we made for getting the raw event doesn’t look like a link. Let’s deploy some styling to make it look clickable. To do this, edit the file /etc/apps/hello-splunkjs/appserver/static/css/HelloSplunkJSView.css with the following:

.get-most-recent-event{
	color: blue;
	text-decoration: underline;
}

This will style the link such that it looks like this:

hello_splunkjs_with_link

See the diff in Github.

Conclusion

There are many more things you can do with Javascript in Splunk; this is just the start. See dev.splunk.com and Splunk Answers if you need more help.

 
