Channel: Dev – Splunk Blogs

Experimental App Helps Find Other Splunkbase Apps


I’ve recently developed a Splunk app called “splunkbase”. It looks at your Splunk installation and suggests apps on splunkbase.com relevant to your data. It analyzes your indexed data, as well as data in your file system not yet indexed. It also suggests apps based on what other Splunk users have installed at similar installations — sort of like how Amazon suggests items to purchase based on what users similar to you have purchased.

The app is simple to run — it’s just one dashboard, with several reports that suggest apps.

Security: At no time is any of your data uploaded or forwarded on. The signatures of all free splunkbase apps are included with this app so everything runs locally on your machines.

Limitation: Currently it does not directly link to the apps on splunkbase.com, but it should be straightforward for you to find them on splunkbase by searching for their names.

Feedback: This is an early app.  I’d like feedback — do the recommended apps make sense?  If not, can you explain?  Did you encounter any problems?

[Example Splunkbase App Screenshot]


Splunk4Good partners with Geeks Without Bounds


I am happy to announce Splunk4Good is formally partnering with Geeks Without Bounds (GWOB) in 2013! The partnership will lend broad support to GWOB’s mission and encourage the use of Splunk products in humanitarian projects.

GWOB is a non-profit organization with a mission to apply technology as an accelerator for humanitarian projects. In order to achieve this mission GWOB hosts humanitarian hackathons around the globe and also works with humanitarian startups in an accelerator program with 2 cohorts per year.

Read a bit more about GWOB in the words of their Executive Director, Willow Brugh (and click through to read their full Theory of Change):

Geeks Without Bounds works towards positive change by bridging the gap between technological capabilities and institutional knowledge within humanitarian response. We work to connect technologists and organizations in scenarios that run the gamut from disaster and crisis response to humanitarian systems management.

To bring these groups together, Geeks Without Bounds facilitates humanitarian hackathons…and bi-annually chooses three particularly promising projects generated at these hackathons to develop via an acceleration process that further connects  technological innovators and humanitarian agencies. We provide six months of mentorship in business development, funding acquisition, user experience and engagement, and ethical usage of these technologies.

Although newly partnered, Splunk4Good and GWOB have closely collaborated on multiple projects in the last year. In addition to co-presenting at the White House as part of the FEMA Think Tank on Innovations in Emergency Management, Splunk sponsored hackathons GWOB organized and facilitated such as NASA Space Apps Challenge and Everyone Hacks.

I am excited to see how our existing relationship will be made even stronger by this formal partnership in 2013. Keep up with upcoming events and other cool goings-on (like this mentorship video from my first startup accelerator mentor session this month with the Pineapple Project) by following @Splunk4Good and @GWOBorg on Twitter.

Splunk SDK for Java now has maven support


Does your project depend on 3rd party jars, like the Splunk SDK for Java? Do you use Maven, Ivy or Gradle to build your project? Do you rely on Maven or similar tools to fetch all your dependencies behind the scenes so you can build your project seamlessly?

If your answer to these questions is yes, then we have great news for you. Now you can add our jar as a dependency. For Maven, here is how you can update your project’s pom.xml in 2 easy steps. You can make similar changes for Ivy as well as Gradle.

Step 1. Add the repository.

<repositories>
  ...
  <repository>
    <id>ext-release-local</id>
    <url>http://splunk.artifactoryonline.com/splunk/ext-releases-local</url>
  </repository>
</repositories>

Step 2. Add the dependency.

<dependencies>
  ...
  <dependency>
    <groupId>com.splunk</groupId>
    <artifactId>splunk</artifactId>
    <version>1.1.0</version>
  </dependency>
</dependencies>

And you are done!
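If you build with Gradle instead, the equivalent configuration should look roughly like the sketch below. The repository URL and the group/artifact/version coordinates are the same ones shown in the two Maven steps above; the rest is a minimal, illustrative build.gradle fragment, so adapt it to your own build file.

repositories {
  // Step 1: the same Splunk repository added to the Maven pom.xml above
  maven { url "http://splunk.artifactoryonline.com/splunk/ext-releases-local" }
}

dependencies {
  // Step 2: the same coordinates as the Maven dependency above
  compile "com.splunk:splunk:1.1.0"
}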

Send us feedback at devinfo@splunk.com.

Splunk components for Apache Camel


Recently David Turanski from SpringSource and I held a joint webinar on Extending Spring Integration for Splunk.

The developer feedback was great, and there is no better feedback than when an audience member gets inspired to go and create a new set of Splunk components for another enterprise Java framework, in this case Apache Camel.

Like Spring Integration, Apache Camel is an open-source integration framework based on Enterprise Integration Patterns. The programming semantics with which the developer builds their integration solution differ between the two frameworks, and for this reason a developer may prefer one framework over the other, but the high-level approach is the same: a development framework that can take input messages from many different sources and then route, transform and filter them out to many different destinations. There is also a degree of interoperability between Spring and Camel if you want to get the best of both worlds.

So Preben Asmussen has started working on some Splunk components for Apache Camel.

The code is being hosted on GitHub: https://github.com/pax95/camel-splunk

Also, there is already a working example available: https://github.com/pax95/camel-splunk-example

Currently only the Apache Camel component for sending events to Splunk is coded. The Apache Camel component for executing Splunk searches is in progress.
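To give a feel for what a Camel-based integration looks like, here is a rough sketch of a route that takes messages from a JMS queue and hands them to a Splunk endpoint. The splunk endpoint URI and its options below are purely illustrative assumptions on my part — check the camel-splunk project README for the component’s actual URI syntax and option names.

import org.apache.camel.builder.RouteBuilder;

public class SplunkRouteBuilder extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // consume messages from a JMS queue and submit each one to Splunk as an event
        from("jms:queue:orders")
            .to("splunk://submit?host=localhost&port=8089&index=main"); // hypothetical URI format
    }
}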

Awesome stuff, Preben!

Splunking Websphere MQ Queues and Topics


What is Websphere MQ

IBM Websphere MQ, formerly known as MQSeries, is IBM’s Message Oriented Middleware offering and has been the most widely implemented system for messaging across multiple platforms over the last couple of decades.

What is Message Oriented Middleware

From Wikipedia :

“Message-oriented middleware (MOM) is software or hardware infrastructure supporting sending and receiving messages between distributed systems. MOM allows application modules to be distributed over heterogeneous platforms and reduces the complexity of developing applications that span multiple operating systems and network protocols. The middleware creates a distributed communications layer that insulates the application developer from the details of the various operating system and network interfaces. APIs that extend across diverse platforms and networks are typically provided by MOM.”

Where does MQ fit into the landscape

In the course of my career, I’ve architected and coded solutions across many different verticals (aviation, core banking, card payments, telco, industrial automation, utilities) and MQ has been a fundamental mainstay in the enterprise IT fabric in all of these industries, stitching these often very heterogeneous enterprise computing environments together. Ergo, the messages being sent to MQ queues and topics represent a massive source of valuable machine data that can be pulled into Splunk to derive operational visibility into the various systems and applications that are communicating via MQ.

Enter Splunk

So how can we tap into MQ from Splunk? The JMS Messaging Modular Input (freely available on splunkbase) is the answer, and I blogged about this in more detail recently.

When I first developed the JMS Messaging Modular Input, it wasn’t particularly feasible to test it against every MOM system that had a JMS provider, so my testing was done against ActiveMQ with the knowledge that JMS is just an API interface and in theory the modular input should work with all JMS providers. Upon release, the emails started coming in, and much to my delight, many users were successfully using the JMS Messaging Modular Input to Splunk their MQ environments. The theory had worked.

So recently, in collaboration with Splunk Client Architect Thomas Mann, we set about building a Websphere MQ environment and hooking the JMS Messaging Modular Input into it so as to test this end to end for ourselves.

I am not an MQ administrator by any means. I am quite familiar with JMS concepts and coding, but the install, admin and configuration of the MQ software is probably what took me the most time. Once MQ was set up properly, configuring the JMS Messaging Modular Input via the Splunk Manager UI was very quick and simple.

If you already have MQ in your environment and have MQ admins that know JMS concepts with respect to setting up the MQ side and configuring the client side, then you should find all of the setup steps to be quite trivial.

Setting up MQ

Prerequisites

  • Websphere MQ version 7.x installed (for this example, but previous MQ versions are compatible also)

Create a Queue

Create an MQ Queue under the default queue manager. This step is optional if you have some destination the messages are already going to.

Create the JMS Objects

Using a JNDI File context is the simplest approach, unless you want to set up a directory service to host your JMS objects. In this step you set up the location of the .bindings file that MQ will create for you. The Provider URL and Factory Class will be used later in your JMS Modular Input configuration.

Configure a connection factory

In this case I called it SplunkConnectionFactory.  This name will also be used in the JMS Modular Input configuration.

Ensure that you set the Transport mode to Client – not Bindings.

Setup server name in bindings file

Right click on the SplunkConnectionFactory and open properties. Select the Connection item on the right hand side. Change localhost(1414) to <servername>(1414). This is the connection info between the .bindings file and the MQ host. If you don’t specify that, then Splunk will try to connect to MQ on localhost, which clearly won’t work in a remote configuration.

Create a new JMS Destination

Create a new Destination under the JMS Administered Context.  This links what is published in the .bindings file with the associated queue you want to manage.

Associate the JMS destination with a MQ Queue Manager

Disable Channel Auth in MQ

This step should not be done in production.  In that scenario, work with your MQ admin to set up appropriate access in MQ 7x and later.

  1. Open a terminal and navigate to <mq install dir>/bin
  2. Run runmqsc
  3. Enter the following command: ALTER QMGR CHLAUTH(DISABLED), then hit Return.
  4. Type END to exit runmqsc

More details here

Create the Channel

This step only needs to be done if the MQ Admin doesn’t create a channel for you.  Also, this may be unnecessary if you disable Channel Auth as outlined above.

  1. First thing to do is make sure your listener is running: runmqlsr -t tcp -m qmgr -p nnnn, where qmgr is the name of your queue manager and nnnn is the port number your listener is on (default is 1414). For my configuration, the command was: runmqlsr -t tcp -m QMgr -p 1414
  2. Create the channel for Splunk: DEFINE CHANNEL('splunkChannel') CHLTYPE(SVRCONN) TRPTYPE(TCP) DESCR('Channel for use by splunk programs')
  3. Create a Channel Authentication rule using the IP address of the splunk indexer that will be reading the queues. Assign it to a user (splunk is a local user created on the MQ box, non admin but in the mqm group). Run the command: SET CHLAUTH('splunkChannel') TYPE(ADDRESSMAP) ADDRESS('10.0.0.20') MCAUSER('splunk')
  4. Grant access to connect to and inquire on the queue manager: SET AUTHREC OBJTYPE(QMGR) PRINCIPAL('splunk') AUTHADD(CONNECT, INQ). Be sure to replace splunk with your local username.
  5. Grant access to inquire / get / put messages on the queue: SET AUTHREC PROFILE('SplunkQueue') OBJTYPE(QUEUE) PRINCIPAL('splunk') AUTHADD(PUT, GET, INQ, BROWSE). SplunkQueue is the name of the queue you created. Replace splunk with the user id specified in step 3.

More details here

Setting up Splunk

Prerequisites

  • Splunk version 5.x installed
  • JMS Messaging Modular Input installed

Jar files

You need to copy the Websphere MQ client jar files into the mod input’s lib directory at $SPLUNK_HOME/etc/apps/jms_ta/bin/lib.
These are the jars I ended up needing. Note: these 4 jars are already part of the core mod input release, so you only need to add the MQ client jars alongside them:

  • jmsmodinput.jar
  • jms.jar
  • splunk.jar
  • log4j-1.2.16.jar

Bindings file

MQ will create your bindings file for you and write it to the location that you specified.
If your Splunk instance is running locally to MQ, then you are good to go.
If your Splunk instance is running remotely from MQ, you can just copy the bindings file to the remote Splunk host.
The directory location of the bindings file can be anywhere you like; the path gets specified as a parameter (jndi_provider_url) when you set up the mod input stanza.

Setup the Input Stanza

You can set up the JMS Modular Input stanza manually (as a stanza entry in an inputs.conf file) or via the Splunk Manager UI.

Browse to Data Inputs >> JMS Messaging

The values that you use for the setup will come from what you set up in MQ.

Optionally, you can also configure which components of the messages you wish to index, and whether you just want to browse the message queue rather than consume the messages.

This is what the resulting stanza declaration that gets written to inputs.conf will look like. Several of the values (such as the connection factory name, the JNDI provider URL and the queue name) come from your MQ setup.

[jms://queue/SplunkQueue]
browse_mode = all
browse_queue_only = 1
durable = 0
index = jms
index_message_header = 1
index_message_properties = 1
init_mode = jndi
jms_connection_factory_name = SplunkConnectionFactory
jndi_initialcontext_factory = com.sun.jndi.fscontext.RefFSContextFactory
jndi_provider_url = file:/home/damien/MQJNDI/
sourcetype = mq
strip_newlines = 1
browse_frequency = -1
disabled = 1

Queues: browsing or consuming?

The JMS Messaging Modular Input allows you to specify browse mode or consume mode (the default).
Browsing does not remove the messages from the queue whereas consuming does (so consuming is slightly more invasive).
However, there are issues with browsing in 2 main respects:

  1. you might miss messages if they are consumed before they are browsed
  2. you might get duplicate messages if you browse multiple times before the messages get consumed

So my preferred “least invasive” approach is actually to have the MQ admin set up an alias queue to which a copy of all the messages you are interested in can be sent; the mod input can then consume from this queue without impacting any other consumers of the source queues.

Testing

Expand the Queue Managers tab on the left hand side. Select your queue manager and expand it (QMgr in this example). Expand Queues. Find the queue you have linked your JMS object to.

Right click the queue and select “Put Test Message”

You can put as many messages as you like in the queue. You will see these messages indexed if everything is configured correctly.
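Once a test message is on the queue, a quick sanity-check search (using the index and sourcetype values from the example stanza above) is all you need to confirm the end to end flow:

index=jms sourcetype=mq | head 10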

Modular Inputs Tools


Tools


I’m a tools kind of guy. I like things that make my life easier or allow me to accomplish some task that would be otherwise prohibitive. I also like Tool the band, but that’s another blog.

And so it is with software. Languages, libraries, frameworks are just tools that make it easier for us to accomplish some task.

Modular Inputs

With the release of Splunk 5 came a great new feature called Modular Inputs.

Modular Inputs extend the Splunk framework to define a custom input capability. In many respects you can think of them as your old friend the “scripted input”, but elevated to first class citizen status in the Splunk Manager. Splunk treats your custom input definitions as if they were part of Splunk’s native inputs, and users interactively create and update the input via Splunk Manager just as they would for native inputs (tcp, files etc…). The Modular Input’s lifecycle, schema, validation and configuration are all managed by Splunk. This is the big differentiator over scripted inputs, which are very loosely coupled to Splunk.
What attracts me most to Modular Inputs is the potential we have to build up a rich collection of these inputs and make it easier and quicker for users to get their data into Splunk.

Modular Inputs Tools

When I wrote my first modular input, there was certainly an initial learning curve to figuring out exactly how to do it. As powerful as modular inputs are, there are many semantics that have to be understood, both for development and for building the release.

So I have created 2 Modular Inputs frameworks that should abstract the developer from having to understand all of these semantics up front, and instead let them just focus on developing their modular input’s business logic, significantly lowering the technical barrier to entry and getting to that point of productivity faster.

You can write a modular input using any language, but for the most part my recommendation would be to stick with Python. It is more seamlessly integrated into the Splunk runtime. The reason you might use another language is if there is a specific library or runtime environment that your modular input depends upon.

The 2 modular inputs frameworks that I have created are for Python and Java. They can be cloned from github, and the best way to get started is to have a look at the hello world example implementations.
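To give a rough idea of what the frameworks are abstracting away, here is a bare-bones sketch of the underlying modular input protocol itself: Splunk runs your script with --scheme to discover its parameters, and otherwise expects event data on stdout. This is a simplified illustration of the plumbing, not the API of either framework — the hello world examples below show the real thing.

import sys

SCHEME = """<scheme>
    <title>helloworld</title>
    <description>Minimal example input</description>
    <streaming_mode>simple</streaming_mode>
    <endpoint>
        <args>
            <arg name="message"><title>Message</title></arg>
        </args>
    </endpoint>
</scheme>"""

def do_scheme():
    # Splunk calls the script with --scheme to discover the input's parameters
    print SCHEME

def run():
    # in simple streaming mode, anything written to stdout is indexed as event data
    print "hello world from a modular input"
    sys.stdout.flush()

if __name__ == '__main__':
    if len(sys.argv) > 1 and sys.argv[1] == "--scheme":
        do_scheme()
    else:
        run()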

Python Modular Inputs framework

Github Repo

https://github.com/damiendallimore/SplunkModularInputsPythonFramework

Helloworld example

https://github.com/damiendallimore/SplunkModularInputsPythonFramework/tree/master/implementations/helloworld

Java Modular Inputs framework

Github Repo

https://github.com/damiendallimore/SplunkModularInputsJavaFramework

Helloworld example

https://github.com/damiendallimore/SplunkModularInputsJavaFramework/tree/master/helloworld

The Splunk SDKs for Ruby and C# are now in Beta


The Splunk SDKs for Ruby and C# have reached Beta! Developers familiar with Ruby and C#/.NET can now easily leverage their existing skills to integrate data and functionality from Splunk with other applications across the enterprise, letting the entire organization get more value out of Splunk. Do you have an existing reporting app or customer support system that would benefit from being able to search and display data from Splunk? Want to build a .NET or Ruby app powered by Splunk data? Then these SDKs are for you. As Beta releases these SDKs are now fully supported: customers with support contracts are covered for any questions about the Splunk SDKs for C# and Ruby.

Let’s take a look at some sample Ruby code …

Connect to Splunk

require 'splunk-sdk-ruby'
service = Splunk::connect(:scheme=>"https", :host=>"localhost", :port=>8089, :username=>"admin", :password=>"changeme")

Run a oneshot search and print the results

stream = service.create_oneshot("search index=_internal | head 10")
reader = Splunk::ResultsReader.new(stream)
reader.each do |result|
  puts result
end

Run an export and print the results

stream = service.create_export("search index=_internal | head 10")
reader = Splunk::ResultsReader.new(stream)
reader.each do |result|
  puts result
end

Write events into Splunk

main = service.indexes["main"]

# Using the simple receiver endpoint
main.submit("This is a test event.")

# Using the streaming receiver endpoint
socket = main.attach()
begin
  socket.write("The first event.\r\n")
  socket.write("The second event.\r\n")
ensure
  socket.close()
end

Now let’s take a look at some sample C# code …

Connect to Splunk

// Define the context of the Splunk service
ServiceArgs svcArgs = new ServiceArgs();
svcArgs.Host = "localhost";
svcArgs.Port = 8089;

// Create a Service instance and log in
Service service = new Service(svcArgs);
service = service.Login("admin", "changeme");

List Splunk objects, e.g. list of apps installed on Splunk

foreach (var app in service.GetApplications().Values)
{
    Console.WriteLine(app.Name);

    // Write a separator between the name and the description of an app.
    Console.WriteLine(Enumerable.Repeat('-', app.Name.Length).ToArray());

    Console.WriteLine(app.Description);
    Console.WriteLine();
}

Run a search and display results

var jobs = service.GetJobs();
var job = jobs.Create("search index=_internal | head 10");
while (!job.IsDone)
{
    Thread.Sleep(1000);
}

var outArgs = new Args
{
    { "output_mode", "json" },
};

using (var stream = job.Results(outArgs))
{
    using (var rr = new ResultsReaderJson(stream))
    {
        foreach (var map in rr)
        {
            System.Console.WriteLine("EVENT:");
            foreach (string key in map.Keys)
            {
                System.Console.WriteLine("   " + key + " -> " + map[key]);
            }
        }
    }
}

Write events into Splunk

var args = new Args
{
    { "source", "splunk-sdk-tests" },
    { "sourcetype", "splunk-sdk-test-event" }
};

Receiver receiver = new Receiver(service);

// Submit to default index using the simple receiver endpoint
receiver.Submit(args, "Hello World from C# SDK!");

Mobile Analytics with Storm (Part 2)


In the previous article, “Mobile Analytics with Storm”, we discussed how to configure the logging library for mobile apps to send stacktrace messages to Storm via the REST API. To make this logging library more usable and robust, mobile app developers are now able to send invaluable stacktrace messages via TCP (through the Network Inputs option). The configuration steps are incredibly simple and are summarized below:

  1. Click “Network data” to enable Storm to receive data via TCP
  2. Click “Authorize your IP address” so that Storm receives data only from authorized IP address(es). Please take note of the “IP/Port combination” in “Send data to” – we are going to use that combination to send data to Storm.
  3. Choose “Manually authorize IP address”
  4. Click “What is my IP?” to obtain an authorized IP address and then set the custom sourcetype to “app_message”

The steps to configure the logging library in your IDE are as follows (if you have not done so already):

  1. Download the logging library either from SplunkBase (http://splunk-base.splunk.com/apps/78613/mobile-analytics-with-splunk-storm-android) or from github (https://github.com/nicholaskeytholeong/splunk-storm-mobile-analytics/blob/master/android/splunkstormmobileanalytics.jar)
  2. Include splunkstormmobileanalytics.jar in your project. This is how you can simply do it: move the jar file into the libs directory of your app project
     [Screenshot: the jar file in the libs directory of the project in Eclipse]
  3. Allow the Internet permission between the <manifest> tags in AndroidManifest.xml:
     <uses-permission android:name="android.permission.INTERNET"/>
  4. Add the import statement to the main activity of your app:
     import com.storm.android.Storm;
  5. Connect to Splunk Storm in the onCreate() method of the main activity:
     protected void onCreate(Bundle savedInstanceState) {
         super.onCreate(savedInstanceState);
         Storm.TCPconnect(RECEIVING_IP, RECEIVING_PORT_NUMBER, getApplicationContext());
         /**
          * an example would be Storm.TCPconnect("logs1.nkey.splunkstorm.com", 20072, getApplicationContext());
          * your other wonderful codes to develop your wonderful mobile app
         **/
     }
  6. You are now able to send data via TCP from your Android mobile app to Splunk Storm

To conclude, this is how we send data with the logging library for Android apps via TCP into Splunk Storm:

  1. Create a Splunk Storm project at https://www.splunkstorm.com
  2. Download the jar file either from SplunkBase (http://splunk-base.splunk.com/apps/78613/mobile-analytics-with-splunk-storm-android) or from github (https://github.com/nicholaskeytholeong/splunk-storm-mobile-analytics/blob/master/android/splunkstormmobileanalytics.jar)
  3. Configure Network Input from Splunk Storm UI accordingly (discussed earlier in this article)

Give it a try and tell us your experience integrating it into your mobile apps. Feedback and suggestions are always welcomed!



The Splunk SDKs for C#, PHP and Ruby have arrived


We’re excited to announce the general availability of the Splunk Software Development Kits (SDKs) for C#, PHP and Ruby. Coupled with the Splunk SDKs for Java, Python and JavaScript, developers are now fully equipped to customize and extend the power of Splunk using the languages, frameworks and tools they know and love.

Developers can use the Splunk SDKs to:

  • Access Splunk data from line of business systems like customer service apps
  • Integrate data from Splunk with other BI and reporting tools
  • Build mobile reporting apps
  • Power customer-facing dashboards and reports with Splunk data
  • Log directly to Splunk from any application

The Splunk SDKs include documentation, code samples, resources, and tools to make it quick and easy for developers to get started and be productive.  The Splunk SDKs enable developers to handle things like HTTP access, authentication and namespaces in just a few lines of code. The SDKs also simplify output from searches, providing results readers that parse search results and return them in a simplified structure with clear key-value pairs.

Thanks to everyone who provided feedback on the SDKs along the way. Stay tuned for more great resources and tools to help developers be productive with Splunk.

Stop by the Splunk Developer portal to download the new Splunk SDKs today, and come back here to let us know what you think. Happy Splunking!

Splunk Powers Up With jQuery!


Splunk is happy to announce that we will be a Diamond level sponsor of the 2013 jQuery Portland Conference on June 13 & 14. This is shaping up to be the best jQuery event to date, so if you haven’t registered yet – do it now, but first read below to find out about two special Splunk related offers.

Did you know jQuery is a Splunk customer? It’s true! In fact, jQuery received the very first Nonprofit License issued by Splunk4Good one year ago. Come to the conference and find out in one of the talks how jQuery is using Splunk to make sense of all their data.

This exciting event will feature 2 tracks, 31 talks, a Splunk breakout session and much more. If you haven’t registered yet, Splunk is extending two special offers.

Splunk is offering one fully paid late-bird conference ticket as a scholarship. This is a $499 value! All you have to do to apply is follow @Splunk4Good on Twitter and email splunk4good at splunk.com to say why you want to go to the conference. Please note travel & expenses are not included.

Splunk is also offering $25 off any regular or late-bird conference or training ticket. Just go here and enter “Splunk25off” to get $25 off.

See you in Portland next week!

Getting data from your REST APIs into Splunk


Overview

More and more products, services and platforms these days are exposing their data and functionality via RESTful APIs.

REST has really emerged over previous architectural approaches as the de facto standard for building and exposing web APIs to enable third parties to hook into your data and functionality. It is simple, lightweight, platform independent, language interoperable and re-uses HTTP constructs. All good gravy. And of course, Splunk has its own REST API also.

The Data Potential

I see a world of data out there available via REST that can be brought into Splunk, correlated and enriched against your existing data, or used for entirely new use cases that you might conceive of once you see what is available and where your data might take you.

What type of data is available? Well, here is a very brief list that came to mind as I typed:

  • Twitter
  • Foursquare
  • LinkedIn
  • Facebook
  • Fitbit
  • Amazon
  • Yahoo
  • Reddit
  • YouTube
  • Flickr
  • Wikipedia
  • GNIP
  • Box
  • Okta
  • Datasift
  • Google APIs
  • Weather Services
  • Seismic monitoring
  • Publicly available socio-economic data
  • Traffic data
  • Stock monitoring
  • Security service providers
  • Proprietary systems and platforms
  • Other “data related” software products

The REST “dataverse” is vast, but I think you get the point.

Getting the Data

I am most interested in the “getting data in” part of the Splunk equation. As our esteemed Ninja once said, “Data First, Sexy Next”.

And I want to make it as easy, simple and intuitive as possible for you to hook Splunk into your REST endpoints, get that data, and start writing searches.

Therefore building a generic Splunk Modular Input for polling data from any REST API is the perfect solution. One input to rule them all, so to speak.

Building the REST Modular Input

From a development point of view it is actually quite a simple proposition for some pretty cool results.

For RESTful APIs we only need to be concerned with HTTP GET requests; this is the HTTP method that we will use for getting the data.

And by building the Modular Input in Python, I can take advantage of the Python Requests library, which simplifies most of the HTTP REST plumbing for me.
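Conceptually, the core of the input boils down to something like the snippet below: poll the endpoint with Requests on an interval and print the response body so that Splunk indexes it. This is an illustrative sketch only, not the actual module code — the endpoint URL and interval are made up.

import time
import requests

ENDPOINT = "https://api.example.com/v1/events"   # hypothetical REST endpoint
POLLING_INTERVAL = 60                            # seconds, for illustration only

while True:
    response = requests.get(ENDPOINT, timeout=30)
    if response.status_code == requests.codes.ok:
        # anything a modular input writes to stdout gets indexed by Splunk
        print response.text
    time.sleep(POLLING_INTERVAL)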

Using my Python Modular Inputs utility on GitHub, I can also rapidly build the Modular Input implementation.

You can check out the REST Modular Input implementation on GitHub.

Using the REST Modular Input

Or if you want to get straight into Splunking some REST data, make your way over to Splunkbase and download the latest release.

Installation is as simple as untarring the release to SPLUNK_HOME/etc/apps and restarting Splunk.

Configuration is via navigating to Manager->Data Inputs->REST

And then clicking on “New” to create a new REST Input. As you can see below, I have already created several that I used for testing.

Configuring your new REST input is simply a matter of filling in the fields.

Then search your data! Many RESTful responses are in JSON format, which is very convenient for Splunk’s auto field extraction.

Key Features

  • Perform HTTP(s) GET requests to REST endpoints and output the responses to Splunk
  • Multiple authentication mechanisms
  • Add custom HTTP(s) Header properties
  • Add custom URL arguments
  • HTTP(s) Streaming Requests
  • HTTP(s) Proxy support
  • Response regex patterns to filter out responses
  • Configurable polling interval
  • Configurable timeouts
  • Configurable indexing of error codes

Authentication

The following authentication mechanisms are supported:

  • None
  • HTTP Basic
  • HTTP Digest
  • OAuth1
  • OAuth2 (with auto refresh of the access token)
  • Custom

Custom Authentication Handlers

You can provide your own custom Authentication Handler. This is a Python class that you should add to the
rest_ta/bin/authhandlers.py module.

You can then declare this class name and any parameters in the REST Input setup page.

Custom Response Handlers

You can provide your own custom Response Handler. This is a Python class that you should add to the
rest_ta/bin/responsehandlers.py module.

You can then declare this class name and any parameters in the REST Input setup page.
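For example, suppose your endpoint returns a JSON array and you would rather index one Splunk event per array element than one big blob. The transformation itself is only a few lines of Python; the class and method signature your handler must actually expose is defined by the existing handlers in rest_ta/bin/responsehandlers.py, so treat this purely as a sketch of the logic you would drop into one of them.

import json

def split_json_array(raw_response):
    # emit one line (one Splunk event) per element of a JSON array response
    for item in json.loads(raw_response):
        print json.dumps(item)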

Mobile Analytics (iOS) with Storm


As those who have been following the articles about mobile analytics with Storm and Splunk already know, there has been demand for an iOS library to help iOS app developers debug their apps. I’m happy to announce that the iOS library is now available for use with the Storm REST API input. The installation steps to use this library are trivial:

[1] Get a Splunk Storm account by registering yourself here: https://www.splunkstorm.com

[2] Create a Storm project and obtain the STORM PROJECT ID and STORM ACCESS TOKEN at the STORM REST API page:

Splunk Storm REST API page

[3] Then download the logging library from http://splunk-base.splunk.com/apps/92296/mobile-analytics-with-splunk-storm-ios or  https://github.com/nicholaskeytholeong/splunk-storm-mobile-analytics/blob/master/ios/splunkmobileanalytics.zip

[4] Unzip it and drag the splunkmobileanalytics folder into the project.

[5] Select Relative to Project at Reference Type, then click Add.


[6] In the AppDelegate interface file (AppDelegate.h), import Storm.h, like so:

#import <UIKit/UIKit.h>
#import "Storm.h"
// other awesome codes that you are writing

[7] In the AppDelegate implementation file (AppDelegate.m), provide the stormAPIProjectId and stormAPIAccessToken values in the message

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    // Override point for customization after application launch.
    // Set the view controller as the window's root view controller and display.
    self.window.rootViewController = self.viewController;
    [self.window makeKeyAndVisible];

    [Storm stormAPIProjectId:@"YOUR_STORM_PROJECT_ID" stormAPIAccessToken:@"YOUR_STORM_ACCESS_TOKEN"];

    return YES;
}

[8] You are set! Splunk Storm is now integrated seamlessly into your iOS mobile app!

This is a sample snapshot of what the stacktrace looks like in Storm upon the triggering of any uncaught exceptions from an iPhone emulator:

There are certainly areas to improve. Nevertheless, give it a try and tell us your experience integrating it into your iOS apps – what works and what doesn’t. Feedback and suggestions are very much appreciated! Until next time … stay tuned for more updates.

Command Modular Input


Simplifying the status quo

I’m often thinking about potential sources of data for Splunk and how to facilitate getting this data into Splunk in the simplest manner possible.

And what better source of data than the existing programs on your operating system that already do the heavy lifting for you?

Now this is nothing new to Splunk: we’ve always been able to wrap up a program in a scripted input, execute it, transform the output and pipe it into Splunk.

But rather than going and creating many of these specific program wrappers for Splunk each time you need to capture a program’s output, why not create 1 single Modular Input that can be used as a generic wrapper for whatever program output you want to capture?

Well, that’s just what I have done. The Command Modular Input is quite simply a wrapper around whatever system programs you want to periodically execute and capture the output from (top, ps, iostat, sar, vmstat, netstat, tcpdump, tshark etc…). It will work on all supported Splunk platforms.

Download and Install

Head on over to Splunkbase and download the Command Modular Input.

Untar to SPLUNK_HOME/etc/apps and restart Splunk

Setup

Login to Splunk and browse to Manager->Data Inputs

Setup a new command input

List command inputs you have setup

Search your command output

Custom Output Handlers

You may want to transform and process the raw command output before sending it to Splunk. So to facilitate this you can provide your own custom output handler.

This is a Python class that you should add to the command_ta/bin/outputhandlers.py module.

You can then declare this class name and any parameters in the Command setup page.
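As an illustration of the kind of processing an output handler might do, the snippet below reshapes one line of vmstat output into key=value pairs so that Splunk’s automatic field extraction picks the values up. The column names are my assumptions about a typical Linux vmstat layout, and the class and method signature your handler must actually expose is defined by the existing handlers in command_ta/bin/outputhandlers.py — so treat this as a sketch of the transformation logic only.

# assumed column order for a typical Linux vmstat line
VMSTAT_COLUMNS = ["r", "b", "swpd", "free", "buff", "cache", "si", "so",
                  "bi", "bo", "in", "cs", "us", "sy", "id", "wa"]

def vmstat_line_to_kv(line):
    # pair each column value with its header name to produce key=value output
    values = line.split()
    return " ".join("%s=%s" % (k, v) for k, v in zip(VMSTAT_COLUMNS, values))

print vmstat_line_to_kv("1 0 0 803456 123456 456789 0 0 5 12 100 200 3 1 95 1")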

Streaming vs Non Streaming Command Output

Some commands will keep STDOUT open and stream results. An example of such a command might be tcpdump.

For these scenarios ensure you check the “streaming output” option on the setup page.

Hunk: Splunk Analytics for Hadoop Intro – Part 1


As you might have already seen, we recently announced the beta availability of our latest product, Hunk: Splunk Analytics for Hadoop. In this post I will cover some of the basic technology aspects of this new product and how they enable Hunk to perform analytics on top of raw data residing in Hadoop.

Introduction to Native Indexes
For those of you new to Splunk, please read this section to get a quick understanding of native indexes, as it will help differentiate them from virtual indexes. If you are already a Splunkguru please feel free to skip this section.

Whenever a Splunk indexer ingests raw data from any source (file, script, network, etc.), it performs some processing on that data and stores it in a format optimized for efficient keyword and time-based searches. We call a collection of data and associated metadata files that we lay on disk an “index.” Generally, one data source goes to one and only one index; however, you can create as many indexes as you need and can search many indexes concurrently in a single search.

To summarize, native indexes are data containers optimized for keyword and time-based searches. Additionally, indexes provide a natural way of implementing access controls, e.g. allow only users of group “ops” to access an index called “os”. Another important feature of native indexes is data retention policies, e.g. age out data that is older than 30 days or when an index grows beyond a certain size.

Introduction to Virtual Indexes
Now, wouldn’t it be cool if Splunk’s Search Processing Language (SPL) would be able to address data sources from anywhere, not just its native indexes?

Well, this is exactly what virtual indexes allow Hunk to do. A virtual index, just like a native index, behaves as an addressable data container that can be referenced by a search. Just like native indexes, you can reference as many virtual indexes as you desire in a search and you can also mix native and virtual indexes together. This gives you the ability to correlate data no matter where it resides.
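For example (the index names here are hypothetical), a single search could pull matching events from a native index and a virtual index and aggregate them together:

index=web_native OR index=web_hadoop_vix status=500 | stats count by host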

There are a few key differences between native and virtual indexes; for example, since the data that resides in the external system is not under the direct management of Splunk, retention policies cannot be applied to the datasets that make up virtual indexes. Another equally important difference is the efficiency of certain searches: data in external systems may be optimized for certain search patterns, or not optimized at all, and those patterns can differ from the ones Splunk users are accustomed to.

More formally, a virtual index is a search time concept that allows a Splunk search to access data and optionally push computation to external systems. Which brings us to the next topic, external result providers (ERPs).

Introduction to External Result Providers (ERPs)
In order for a search process to access data and push computation out to external systems, it needs to know specific details about the external system in question. One way to achieve this would be to add support for all the known systems where data resides. However, even enumerating the different flavors of external systems would be a daunting task.

We opted for the next best thing—an interface that a search process can use to communicate with a helper process that handles all the intricacies of interacting with the external system. We call this helper process an External Result Provider.

So to recap, Hunk is able to provide access to and perform analytics on data that resides in external systems by encapsulating the data into addressable units using virtual indexes, while utilizing ERPs to handle the details of pushing down computations to the external system.

Learn more at www.splunk.com/bigdata

Continue reading the second part for more details …

Making SNMP Simpler


Overview

From Wikipedia :

Simple Network Management Protocol (SNMP) is an “Internet-standard protocol for managing devices on IP networks”. Devices that typically support SNMP include routers, switches, servers, workstations, printers, modem racks, and more.

SNMP exposes management data in the form of variables on the managed systems.

The variables accessible via SNMP are organized in hierarchies. These hierarchies, and other metadata (such as type and description of the variable), are described by Management Information Bases (MIBs).

MIBs describe the structure of the management data of a device subsystem; they use a hierarchical namespace containing object identifiers (OID). Each OID identifies a variable that can be read or set via SNMP. MIBs use the notation defined by ASN.1.

SNMP agents can also send notifications, called Traps, to an SNMP trap listening daemon.

Splunking SNMP Data

SNMP represents an incredibly rich source of data that you can get into Splunk for visibility across a very diverse IT landscape.

For as long as I have been involved with Splunk, one of the most recurring requests on Splunkbase Answers and in conversations has been “how do I get my SNMP data into Splunk?”.

And whilst there has always been a way, it has involved cobbling together a few different steps.

For polling SNMP variables this has typically involved writing a custom scripted input utilizing an existing program or library under the hood, such as snmpget or pysnmp.

And for capturing SNMP traps the approach has been to run a trap daemon such as snmptrapd on your Splunk server to capture the trap, dump it to a file and have Splunk monitor the file.

I think there is a much simpler way, a way that is more natively integrated into Splunk: implementing SNMP data collection in a Splunk Modular Input.

So my colleague Scott Spencer and I set about doing just that.

SNMP Modular Input

The SNMP Modular Input allows you to configure your connections to your SNMP devices, poll attribute values and capture traps. It has no external dependencies; all of the functionality is built into the Modular Input, and it will run on all supported Splunk platforms.

Features overview

  • Simple UI based configuration via Splunk Manager
  • Capture SNMP traps (Splunk becomes an SNMP trap daemon in its own right)
  • Poll SNMP object attributes
  • Declare objects to poll in textual or numeric format
  • Ships with a wide selection of standard industry MIBs
  • Add in your own Custom MIBs
  • Walk object trees using GET BULK
  • Optionally index bulk results as individual events in Splunk
  • Monitor 1 or more Objects per stanza
  • Create as many SNMP input stanzas as you require
  • IPv4 and IPv6 support
  • Indexes SNMP events in key=value semantic format
  • Ships with some additional custom field extractions

SNMP version support

SNMP V1 & V2c support is currently implemented. SNMP V3 is in the pipeline, so you don’t need to email me requesting it :)

Implementation

The Modular Input is implemented in Python, and under the hood pysnmp is used as the library upon which the Modular Input is written.

Getting started

Browse to Splunkbase and download the SNMP Modular Input

To install, simply untar it to SPLUNK_HOME/etc/apps and restart Splunk.

Configuration

Login to SplunkWeb and browse to Manager->Data Inputs->SNMP->New and setup your input stanza

View the SNMP inputs you have setup

Searching

You can then search over the SNMP data that gets indexed. In the example below, in addition to the SNMPv2-MIB, I have also loaded in the Interface MIB (IF-MIB) to resolve the IF-MIB OID names and values to their textual representation.
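Because events are indexed in key=value format, the resolved MIB names become searchable fields with no extra configuration. As a purely illustrative example (the sourcetype and field names below are assumptions — they depend on your input configuration and the MIBs you load), a search over interface counters might look like this:

sourcetype=snmp ifDescr=eth0 | timechart avg(ifInOctets) avg(ifOutOctets)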

A note about MIBs

Many industry standard MIBs ship with the Modular Input.
You can see which MIBs are available by looking in SPLUNK_HOME/etc/apps/snmp_ta/bin/mibs/pysnmp_mibs-0.1.4-py2.7.egg

Any additional custom MIBs need to be converted into Python Modules.

You can simply do this by using the build-pysnmp-mib tool that is part of the pysnmp installation

build-pysnmp-mib -o SOME-CUSTOM-MIB.py SOME-CUSTOM-MIB.mib

build-pysnmp-mib is just a wrapper around smidump.

So alternatively you can also execute:

smidump -f python <mib-text-file.txt> | libsmi2pysnmp > <mib-text-file.py>

Then “egg” up your python MIB modules and place them in SPLUNK_HOME/etc/apps/snmp_ta/bin/mibs

In the configuration screen for the SNMP input in Splunk Manager, there is a field called “MIB Names” (see above).
Here you can specify the MIB names you want applied to the SNMP input definition, i.e. IF-MIB,DNS-SERVER-MIB,BRIDGE-MIB
The MIB Name is the same as the name of the MIB python module in your egg package.

This is all just an interim measure until pysnmp supports plain text MIB files, a development feature in the pipeline for pysnmp.
When that feature is ready, all you will have to do is drop the plain text MIB in SPLUNK_HOME/etc/apps/snmp_ta/bin/mibs and the SNMP Modular Input will do the rest. Watch this space!

What’s next

Now it’s your turn… go and download the Modular Input, plug it in and Splunk some SNMP data. I’d love to hear your feedback about any way to make it better and even simpler. And as mentioned, SNMP V3 support is coming.


Hunk: Splunk Analytics for Hadoop Intro – Part 2


Now that you know the basic technology behind Hunk, let’s take a look at some of the features of Hunk and how they unlock the value of the data resting in Hadoop.

Defining the problem
More and more enterprises these days are storing massive amounts of data in Hadoop, with the goal that someday they will be able to analyze and gain insight from it and ultimately see a positive ROI. Since HDFS is a generic filesystem it can easily store all kinds of data, be it machine data, images, videos, documents etc.; if you can put it in a file, it can reside in HDFS. However, while storing the data in HDFS is relatively straightforward, getting value out of this data is proving to be a daunting task for many. Unlocking the value of the data resting in Hadoop is the primary goal of Hunk.

What customers love about Splunk
So, let’s start with a few things that people love about Splunk. While I don’t claim to have a complete list, here are a few that our customers boast about (in no particular order):

  • Immediate search feedback
  • Ability to process all kinds of data – i.e. late binding schema
  • Ease of setup and rapid time to value

When designing Hunk we wanted to make sure that we preserved as many of the things that people love about Splunk as possible, and even added a few more. So, let’s take a look at how we were able to achieve each of those goals.

Immediate feedback

Hadoop was designed to be a batch job processing system, i.e. you start a job and have no expectation of seeing any results back (except maybe some status reports) for a long time (ranging from tens of minutes to days). I am not going to argue the merits of immediate feedback, but we knew for a fact that anything “batch” was not going to fly with customers already accustomed to Splunk’s immediate feedback. Our first challenge: how can we provide immediate feedback to users when building on top of a system that was designed for the exact opposite?

Data processing modes

There are two widely used computation models for data processing:

1. Move data to the computation unit – yes, this goes completely against what Hadoop stands for, but bear with me. The key disadvantage of this model is that it has low throughput because of the large network bandwidth required. However, this model also has a very important property, namely low latency.

2. Move computation to the data – this model is at the core of MapReduce and almost exclusively the only computation model used on Hadoop. The major advantage of this model is data locality, leading to high throughput. However, the increase in throughput comes at the cost of latency – thus the batch nature of Hadoop. A MapReduce job (and anything built on top of one: Pig jobs, Hive queries etc.) can take tens of seconds all the way to minutes to even set up, let alone get scheduled and executed.

So, above I’ve described the two ends of the spectrum: low latency, low throughput and high latency, high throughput. What we’re actually looking for in solving our challenge is low latency; however, we don’t want to give up on throughput, i.e. we need low latency and high throughput.

Now there’s nothing that says that one and only one of the above models can be used at a time. Do you see where I am going … maybe you’ve already thought of a solution, but here’s ours. In order to give users immediate feedback we start moving data to the compute unit, also known as a Search Head (we call this streaming mode), and concurrently we start moving computation to the data (a MapReduce job). While the MR job is setting up, starting and finally producing some results we display results from the streaming component; then as soon as some MapReduce tasks complete we stop streaming and consume the MR job results. Thus we achieve low latency (via streaming) and high throughput (via MapReduce) – who said you can’t have it all? (I’ll leave the costs of this method as an exercise for the reader.)

Late binding schema

Splunk uses a combination of early and late binding schema. Even though most users care about the flexibility of our search time schema binding, they’re usually unaware that there’s also some minimal index time schema applied to the data. When Splunk ingests data, it first breaks the data stream into events, performs timestamp extraction, source typing etc. Both of these schema applications are important and necessary to allow maximum flexibility in the type of data that can be processed by Hunk. However, in Hunk we could be asked to analyze data that did not necessarily end up in Hadoop via Splunk (or Hunk) – i.e. it’s either already resting in HDFS or getting there via some other mechanism, e.g. Flume, a custom application etc. So, in Hunk we’ve implemented truly late binding schema – i.e. all the index time processing as well as all the search time processing is applied at search time. However, this does not mean that we are creating an index in HDFS; we just perform the index time processing. We treat the HDFS data placed in virtual indexes in Hunk as a read only data source. For those already familiar with Splunk’s index time processing pipeline, the following picture depicts the data flows in Hunk:

I mentioned that we wanted to preserve all the things that people love about Splunk and maybe even add more. The data processing pipeline is one place where we’ve added something – before data is even processed by Hunk we allow you to plug in your own data preprocessor. The preprocessors have to be written in Java and get a chance to transform the data in some way before Hunk does – they can vary in complexity from simple translators (say Avro to JSON) to as complex as doing image/video/document processing.

Ease of setup and rapid time to value
As I mentioned at the beginning of this post, most enterprises are having a hard time getting value out of the data stored in Hadoop. So in Hunk we aimed at making the setup/installation and getting started experience as easy as possible. To this end, the setup is as simple as telling us (a) some key info about the Hadoop cluster, such as NameNode/JobTracker host and port, Hadoop client libraries to use when submitting MR jobs, Kerberos credentials etc. and (b) creating virtual indexes that correspond to data in Hadoop.

In terms of providing a fast time to value, we chose to allow users to run analytics searches against the data that rests in Hadoop without Hunk ever seeing/preprocessing the data! The reason for this is that we don’t want you to have to wait for potentially days until Hunk preprocesses the data before you can execute your first search. Some of the Hunk Beta customers were able to set up Hunk and start running analytics searches against their Hadoop data within minutes of starting the setup process – yes, I said minutes!

Stay tuned for the third part, in which I will walk you through an example …

LPS High School Gets Splunk-ducated


As the Notorious B.I.G. said, “If you don’t know, now you know”: on Tuesday, May 21st, there were some young’ns roaming Splunk’s SF building. Splunk4Good invited LPS High School for a visit as part of our STEM (Science, Technology, Engineering & Mathematics) educational outreach. Splunk4Good volunteers had previously mentored LPS HS students as part of the Technovation App Challenge. On this occasion LPS High School visited Splunk as part of their Week Without Walls, which encourages their students to learn outside the school and gives them the opportunity to engage with their community.

After the high schoolers’ bellies were full from the open taco bar, the day continued with Splunk-ducated fun! Introductions were made, then Splunk Sr Product Manager Divanny gave an overview of the power of Splunk. Next Christy gave a Splunk4Good demo, which blew their minds on how Splunk can be used in ways they never imagined. The meeting continued with Splunk’s former interns, Eddie and Petter, who gave a description of what it’s like to work at Splunk. Later that day the students were broken up into groups for a scaventour (scavenger hunt/tour).

Once the scaventour was over Christy handed out some schwag and some ponies were given to new owners.

As the evening went on and the students were frolicking around the patio, we found ourselves saying our “see-ya-laters” as we watched the future of Corporate America walk away.

The fun doesn’t stop there! On May 30th Splunk SF HQ hosted Oakland International High School! Stay tuned for the upcoming blog post!

Oakland International High School Visit


On Thursday 5/30 Oakland International High School had a blast visiting the Splunk HQ as part of Splunk4Good‘s STEM (Science, Technology, Engineering & Mathematics) educational outreach. Splunk UI Dev Kate Feenney, who volunteers with Oakland International in her spare time, did an awesome job acting as host and making sure that the students’ visit was a great learning experience.

The Splunk4Good volunteers had the day planned out for these young talented minds. Nick Key intrigued our guests with the Splunk demo and worked a little bit of coding into his presentation, Eric Grant did a great job covering the demo, and Splunk intern Ross Lazerowitz covered What It’s Like to Work at Splunk!

After the great food that was provided and the informative presentations, it was time for some scavenger hunt fun! During the tour of the site, the high schoolers enjoyed the life-size ponies and having to count all the baby ponies that always seemed to be hidden.

Once the students completed their scavenger hunt, they showed off their basketball skills on the courtyard. The visit had come to an end, but the memories made will last forever. I can safely say that the Splunk employees have most def inspired the minds of these young adults.

Fremont Robotics Team


On 6/24 Splunk4Good invited Fremont High School’s Robotics team for a visit as part of our STEM (Science, Technology, Engineering & Mathematics) educational outreach. There was a little bit of rain, but Splunk4Good did not allow that to be a roadblock to delivering a memorable experience for the Fremont High School students.

Two important facts: the Fremont Robotics team is an extracurricular club offered to the high school students. Also, Fremont High School is not actually based in Fremont; it is located in Sunnyvale.

Now let’s get back to their visit.

Ashwin, the Splunk BizDev team member extraordinaire, did an awesome job hosting this visit and making sure these students had a unique experience at the SF HQ. Christy covered the Splunk demo and Joe, former intern and now FT Splunker, spoke about what it’s like to work at Splunk. The scavenger hunt/tour was up next, which is always a great opportunity for students to ask questions one-on-one.

This young, bright group asked great questions about technology as well as how to get into the tech industry. It was an educational day for the students, and Splunkers were inspired by the enthusiasm and professionalism of these students! I am sure they will be a force to be reckoned with at the next Robotics competition and beyond.

Happy SysAdmin day! I need to Splunk my brain – does your organization need to?


Hi. I’m having one of those weeks where I could do with Splunking my brain. Why? Because one thought keeps firing off another activity and adding to the unstructured list of things that I need to do. Essentially – it is working a bit like this:

What I really need is for it to work like this:

I’m sure we’ve all had times like this – lots of data coming at you that fits the mythical “three Vs”. There’s a high volume of data, it is moving quickly to give it velocity and there’s a lot of variety. What further adds to the need to Splunk my brain is the fact that the data is at so many different levels. There are low level, everyday practical things all the way up to high level strategic planning. I’ve been trying to think of an alternative solution to writing a “Matt’s Brain” app to go on Splunkbase (a highly niche app I suspect) – thoughts most welcome in the comments below…

So we’ve got all this data flying at us, at all different levels. All that data is linked together and correlated in one way or another. How am I going to turn that into something useful, accessible and actionable at all levels? Rather than using my haphazard brain as an example – I’ll try and make it a bit more real world and topical.

Today is SysAdmin day – Happy SysAdmin Day for anyone reading. You may have seen Splunk in SysAdmin magazine today. If you haven’t – check it out. If you’re lucky enough to be a SysAdmin then there is a Happy SysAdmin Day gift waiting for you (hint: it is a free Splunk T-Shirt) if you visit http://www.co-store.com/splunk with the code sysadminFTW

SysAdmins have their challenges – lots of infrastructure and app considerations, lots of data about lots of technology requiring the “need for speed” (thank you Top Gun) to fix a potentially wide variety of issues that may come up. So you can Splunk that – I’m sure a lot of you already are (if you’re not – you can get a good overview here). You can search it, get alerts on it and get good visibility into what’s going on. There are the three Vs in action that a SysAdmin has to deal with.

Taking it all the way to the opposite end of the spectrum – we’re busy putting together the program for the Gartner Symposium, talking to CIOs about how they can “lead in a digital world”. They have a different set of challenges around operational intelligence across a lot of data, making decisions very quickly to try and change the game and looking for new opportunities with the variety of data they have. There are the three Vs in action for a CIO and the executive team.

So, SysAdmins, CIOs and I all need to do the same thing – get to grips with these three Vs – at multiple levels. This kind of big data and machine data has never been the most accessible, easy to use, manage or visualize. To date, big data has been the remit of data scientists. The Guardian posted an article on how to get the most out of big (and open) data – there was a great quote in it:

“Data doesn’t have to be scary and you don’t need to be a mathematician or a scientist in order to use it to your advantage,”

Loren Treisman, chief executive of the Indigo Trust.

Earlier in the year, Gartner analyst Svetlana Sicular blogged about the fact that big data was falling into the “Trough Of Disillusionment”. Companies are having great ideas about what to do with big data but are finding it very difficult to put those ideas into practice. You can read her post here. <plug>She does mention Splunk as a way of being productive with this data.</plug>

So I’ll leave you with this thought – are you dealing with your data like my brain is dealing with it on a Friday afternoon or are you Splunking it?

Keep an eye out for the Matt’s Brain app on Splunkbase.

Happy SysAdmin day again…
