Splunk Logging Driver for Docker

With Splunk 6.3 we introduced HTTP Event Collector, which offers a simple, high-volume way to send events from applications directly to Splunk Enterprise and Splunk Cloud for analysis. HTTP Event Collector makes it possible to cover more log collection scenarios, including collecting logs from Docker. Previously, I blogged about using the Splunk Universal Forwarder to collect logs from Docker containers.

Today, following up on Docker’s press release, we’re announcing early availability of a new log driver for Splunk in the Docker experimental branch. The driver uses HTTP Event Collector to allow forwarder-less collection of your Docker logs. If you are not yet familiar with the Event Collector, check out this blog post.

You can get the new Splunk Logging Driver by following the instructions here to install the Docker experimental binary. Note that if you are running on OS X or Windows, you’ll need a dedicated Linux VM. Using the driver, you can configure your host to send everything your containers write to stdout directly to Splunk Enterprise or to a clustered Splunk Cloud environment. The driver offers a number of additional options for enriching your events as they go to Splunk, including support for tag formatting, labels, and environment variables.

Now let’s see how to use the new driver. I am going to use the latest Splunk available, which I have installed on my network at address 192.168.1.123. You first need to enable HTTP Event Collector. (Note: in Splunk Cloud you need to work with Support to enable HTTP Event Collector.) Open Splunk’s Web UI and go to Settings > Data Inputs. Choose HTTP Event Collector. Enable it under Global Settings and add a New Token. After the token is created, you will find its Token Value, which is a GUID. Write it down, as you will need it later when configuring the Splunk Logging Driver.
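Before configuring Docker, it’s worth verifying that the token works by sending a test event to the collector endpoint with curl. This is a quick sketch assuming the default HEC port 8088 and using the same token value that appears in the docker run commands below; -k skips verification of the default self-signed certificate:

# curl -k https://192.168.1.123:8088/services/collector -H "Authorization: Splunk 99E16DCD-E064-4D74-BBDA-E88CE902F600" -d '{"event": "hello from curl"}'

If the token is set up correctly, Splunk responds with {"text":"Success","code":0} and the test event becomes searchable.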

Verify that you are running the latest Docker experimental version, 1.10.0-dev.

# docker --version

Now we are ready to test the Splunk logging driver. You can configure the logging driver for the whole Docker daemon or per container. For this example, I am going to use the nginx container and configure the driver at the container level.

# docker run --publish 80:80 --log-driver=splunk --log-opt splunk-token=99E16DCD-E064-4D74-BBDA-E88CE902F600 --log-opt splunk-url=https://192.168.1.123:8088 --log-opt splunk-insecureskipverify=true nginx

Here is more detail on the settings above:

  • First, I’ve published port 80 so I can test my nginx container.
  • log-driver=splunk specifies that I want to use the Splunk logging driver.
  • splunk-token is set to the token I previously created in Splunk Web.
  • splunk-url is set to the host (including port) where HTTP Event Collector is listening.
  • splunk-insecureskipverify instructs the driver to skip cert validation, as my Splunk Enterprise instance is using the default self-signed cert.
  • Lastly I’ve told Docker to use the nginx image.
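The same options can also be set on the Docker daemon itself, so that every container on the host logs to Splunk by default instead of repeating them on each docker run. Here is a minimal sketch, assuming the daemon is started by hand with the same token and URL as above; how you actually pass daemon flags depends on how your host launches Docker:

# docker daemon --log-driver=splunk --log-opt splunk-token=99E16DCD-E064-4D74-BBDA-E88CE902F600 --log-opt splunk-url=https://192.168.1.123:8088 --log-opt splunk-insecureskipverify=true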

Now that the container is running, I can send some GET requests to nginx to generate some log output.

# curl localhost:80
# curl localhost:80?hello=world

Heading over to Splunk, I can see the events arriving in real time.

[Screenshot: splunk_logging_driver_first_results — the nginx container’s events in Splunk search]

These are just the basics. I can now add additional configuration to control how Splunk indexes the events, including changing the default index, source, and sourcetype.
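For example, the driver accepts splunk-index, splunk-source and splunk-sourcetype options for this. Here is a sketch; the docker_logs index and the source and sourcetype names are just illustrations, and the index must already exist in Splunk with the token allowed to write to it:

# docker run --publish 80:80 --log-driver=splunk --log-opt splunk-token=99E16DCD-E064-4D74-BBDA-E88CE902F600 --log-opt splunk-url=https://192.168.1.123:8088 --log-opt splunk-insecureskipverify=true --log-opt splunk-index=docker_logs --log-opt splunk-sourcetype=docker --log-opt splunk-source=nginx-test nginx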

I can also configure the Splunk Logging Driver to include more detailed information about the container itself, something which is very useful for analyzing the logs later.

# docker run --publish 80:80 --label type=test --label location=home --log-driver=splunk --log-opt splunk-token=99E16DCD-E064-4D74-BBDA-E88CE902F600 --log-opt splunk-url=https://192.168.1.123:8088 --log-opt splunk-insecureskipverify=true --log-opt tag="{{.ImageName}}/{{.Name}}/{{.ID}}" --log-opt labels=type,location nginx

The additional options do the following:

  • label – defines one or more labels for the container
  • labels – defines which of the container’s labels the log driver should include in the event payload
  • tag – changes how events from this container are tagged when they are passed to the Splunk Logging Driver

Now I’ll send a few more GET requests and look at the result.

# curl localhost:80
# curl localhost:80?hello=world

[Screenshot: splunk_logging_driver_advanced — events showing the attrs fields and the custom tag]

As you can see above, each event now has an attrs dictionary containing the labels specified in the driver configuration (this can also include a list of environment variables). The tag has also changed to the format I specified.
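Environment variables work the same way as labels: pass them to the container with --env, then tell the driver which ones to include with the env log option. A sketch, where DEPLOY_ENV is just an illustrative variable name:

# docker run --publish 80:80 --env DEPLOY_ENV=staging --log-driver=splunk --log-opt splunk-token=99E16DCD-E064-4D74-BBDA-E88CE902F600 --log-opt splunk-url=https://192.168.1.123:8088 --log-opt splunk-insecureskipverify=true --log-opt env=DEPLOY_ENV nginx

Values picked up through env land in the same attrs dictionary as the labels.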

Splunk and Docker better together

With the new Docker driver, we’re making it really easy for customers to combine the power of Splunk with Docker to analyze their Docker logs. This is just the beginning; there are many more things to come! Go grab the latest experimental branch of Docker and start mining your Docker containers in Splunk!

