Logsene: Upgrade to Elasticsearch 2.2 and Kibana 4.3.1

Last week, we upgraded Logsene to Elasticsearch v2.2.0, including the required upgrade to Kibana 4.3.1. This means you can benefit from Elasticsearch 2.2.0, Kibana 4.3.1 and the updated Logsene UI, which uses Elasticsearch 2.x features. In addition, we migrated all existing data from Elasticsearch 1.7 to Elasticsearch 2.2, which is why you might have noticed a slight delay in log processing recently.

There is a long list of “breaking changes” in Elasticsearch 2.x. However, you won’t notice most of these changes – this is why it pays to use a managed service like Logsene 🙂  That said, a few of them, such as the breaking changes in Elasticsearch mapping (the “database schema”), will have an impact on how logs should be structured before you ship them to Logsene:

  1. Elasticsearch 2.x does not accept documents containing Elasticsearch metadata field names. Field names starting with an underscore (“_”) are reserved for Elasticsearch metadata and must not be used in indexed documents (e.g. _type, _id, _ttl, …). This means existing log shipper configurations that generate such fields should be changed.
  2. Field names may not contain dots, e.g. “memory.free” is no longer a valid field name. Such a field must either be changed to a nested JSON structure, e.g. {"memory": {"free": 110234}}, or use a different separator, e.g. “memory_free” instead of “memory.free”.

The good news: both cases are transparently handled for you by the current Logsene Receiver. Logsene detects invalid field names and renames them to valid Elasticsearch field names. While Logsene handles this for you automatically, we strongly suggest you make the required changes on your side. The automatic field renaming follows a simple method:
1) Remove leading underscores
2) Replace dots with underscores
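To illustrate what that renaming means for a shipped log event, here is a minimal sketch of the rules in Node.js – the sanitizeFieldNames function is purely hypothetical and not Logsene’s actual implementation:

// Hypothetical sketch of the renaming rules described above
function sanitizeFieldNames (doc) {
  var clean = {}
  Object.keys(doc).forEach(function (key) {
    var newKey = key.replace(/^_+/, '')   // 1) remove leading underscores
                    .replace(/\./g, '_')  // 2) replace dots with underscores
    var value = doc[key]
    // recurse into nested objects (arrays are left untouched in this sketch)
    clean[newKey] = (value && typeof value === 'object' && !Array.isArray(value))
      ? sanitizeFieldNames(value)
      : value
  })
  return clean
}

// { "_type": "nginx", "memory.free": 110234 } becomes
// { "type": "nginx", "memory_free": 110234 }
console.log(sanitizeFieldNames({ '_type': 'nginx', 'memory.free': 110234 }))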

Please check whether your log data is affected by this change. If it is, change your log shipper configuration to avoid this time-consuming pre-processing of your logs and to take full control over your field names.

If you use the Sematext Docker Agent to ship logs, please use the latest version available on Docker Hub – it makes sure that the generated field names are compliant with Elasticsearch 2.x for all your container logs.

Goodbye Kibana 3, hello Kibana 4.3.1:

  • Kibana 3 is no longer supported by Elastic and does not work with Elasticsearch 2.x. As a consequence, we had to remove Kibana 3 from Logsene. We know that we have big fans of Kibana 3 out there, but maybe it’s time to move to Kibana 4 … or Grafana might be an alternative for you – check out how to use Grafana with Logsene.
  • Kibana 4.3.1 is now the default Kibana application behind the “Kibana” button. Please note that we automatically changed all index patterns to match your Logsene applications. The new index patterns are no longer based on date formats (i.e. no more TOKEN_YYYY-MM-DD); the new index pattern is simply TOKEN_*. This works without timeouts because Kibana 4.3.1 improved time range queries.
  • Having issues with the new Kibana? Most of the warning or error messages in Kibana are caused by outdated mapping information. Please check this first before you get in touch with support@sematext.com. To refresh the mapping information, open Kibana / Settings / Indices and press the orange “refresh” button:

[Screenshot: Kibana 4 index settings with the orange “refresh” button]

Questions or Feedback?

If you have any questions or feedback for us, please contact us by email or using live chat in Logsene.

Presentation: Top Node.js Metrics to Watch

Fresh from Germany’s largest Node.js Meetup, hosted by Wikimedia in Berlin, is the latest presentation from Sematext DevOps Evangelist Stefan Thies — “Top Node.js Metrics To Watch”. The event was shared with the Node.js Meetup in London via live video stream; the full recording is available on YouTube.

Stefan’s talk goes through the challenges of developing Node.js monitoring solutions, such as SPM for Node.js, and key metrics, including examples from monitoring Kibana’s Node.js server.

Here is the video:

and the slides:

Please note that the article “Top Node.js Metrics to Watch” was originally published by O’Reilly Radar.

Have a look at our other Node.js posts — there is a lot more interesting material to discover, like MongoDB monitoring with Node.js.

Questions or Feedback?
If you have any questions or feedback for us, please contact us by email or hit us on Twitter.  We love talking about performance monitoring — and log management!

 

How to Ship Heroku Logs to Logsene / Managed ELK Stack

Heroku is a cloud platform based on a managed container system, with integrated data services and a powerful ecosystem for deploying and running modern apps.  In this post we’ll show how you can ship logs from Heroku to Logsene, where you can then search your logs, get alerts based on log data, share log dashboards with your team, etc.

Watching Heroku logs in real time in the terminal is easy using the “heroku logs” command, which is fine for ad-hoc log checks, but not for a serious production system. For production, you want to collect, parse, and ship logs to a log management system, where rich reporting and troubleshooting can be done. To do that, setting up a Heroku Log Drain is a must. What is a Heroku Log Drain and what does it do? In short, a Heroku Log Drain streams the logs of your applications deployed on Heroku to either a syslog server or an HTTPS endpoint.

When you have to deal with a large log volume, scalable log storage is required. This is where Logsene comes into play. Logsene provides a hosted ELK Stack and is available On Premises and in the Cloud. Logagent-js is a smart log parser written in Node.js, taking advantage of async I/O to receive, parse and ship logs – including routing different application logs to different full-text indices. We made the logagent-js deployment on Heroku very easy, and scaling out for a very high log volume is just one “heroku scale web=N” command away.
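For example, once the log agent is deployed (see Step 2 below), scaling the receiver to two dynos is a single command – the app name here is just a placeholder:

heroku scale web=2 --app logsene-app1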

Let’s have a look at the architecture of this setup:

  1. Heroku Apps configured with a Heroku Log Drain
  2. logagent-js to receive, parse and ship logs
  3. Logsene as backend to store all your logs

 

Step 1 – Create your Logsene App

If you don’t have a Logsene account already simply get a free account and create a Logsene App. This will get you a Logsene Application Token, which we’ll use in Step 3.

Step 2 – Deploy Logagent-js to Heroku

[Image: “Deploy to Heroku” button]

We’ve prepared a  “Deploy to Heroku” button – just click on it and enter a name for the deployed log agent in the Heroku UI:

[Screenshot: Heroku UI – naming the deployed log agent]

Remember this name because we’ll need it later as the URL for the Log Drain.
Logagent-js can handle multiple Logsene tokens, which means it can be used for more than one Logsene app – each app is simply addressed by appending its token to the URL path (/LOGSENE_TOKEN), as shown below.
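For example, two Heroku apps could share a single logagent-js deployment by pointing their drains at the same host with different tokens – the app names and tokens below are placeholders, and the drain setup itself is covered in Step 3:

# one logagent-js deployment receiving logs for two different Logsene apps
heroku drains:add --app web-app-1 https://logsene-app1.herokuapp.com/LOGSENE_TOKEN_OF_APP_1
heroku drains:add --app web-app-2 https://logsene-app1.herokuapp.com/LOGSENE_TOKEN_OF_APP_2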

To run a short test without deploying logagent-js, feel free to use the one we deployed for demos under the name “logsene-test”, reachable via https://logsene-test.herokuapp.com.

Step 3 – Configure Log Drain for your App

To configure the Heroku Log Drain we need the following information:

  1. The Logsene App Token
  2. The URL for the deployed logagent-js (e.g. logsene-app1.herokuapp.com)
  3. The Heroku App ID or name of your Application on Heroku (e.g. web-app-1 in the example below)

Then we can use the Heroku command line tool, for example like this:

heroku drains:add --app web-app-1 https://logsene-app1.herokuapp.com/25560d7b-ef59-4345-xxx-xxx

Or we could use the Heroku API to activate the Log Drain:

curl -n -X POST https://api.heroku.com/apps/web-app-1/log-drains \
-d '{"url": "https://logsene-app1.herokuapp.com/25560d7b-ef59-4345-xxx-xxx"}' \
-H "Content-Type: application/json" \
-H "Accept: application/vnd.heroku+json; version=3"

Step 4 – Watch your Logs in Logsene

If you now access your App, Heroku should log your HTTP request and a few seconds later the logs will be visible in Logsene. And not in just any format!  You’ll see PERFECTLY STRUCTURED HEROKU LOGS:

[Screenshot: structured Heroku logs in Logsene]

Enjoy!

Like what you saw here? To get started with Logsene, get a free account here, drop us an email, or hit us on Twitter. Logagent-js is open source – if you find any bugs, please create an issue on GitHub; suggestions, questions and comments are welcome, too.

 

Logagent-js – an alternative to Logstash, Filebeat, Fluentd or rsyslog?

What is the easiest way to parse, ship and analyze my web server logs? You should know that I’m a Node.js fanboy and not very thrilled with the idea of running a heavy process like Logstash on the low-memory server hosting my private Ghost blog. I looked into Filebeat, a very lightweight log forwarder written in Go with an impressively low memory footprint of only a few MB, but Filebeat ships only unparsed log lines to Elasticsearch. In other words, it still needs Logstash to parse web server logs, which include many fields and numeric values! Of course, structuring logs is essential for analytics. The setup for rsyslog with Elasticsearch and regex parsers is a bit more time consuming, but very efficient compared to Logstash. Is there a better alternative – one with a quick setup, well-structured logs and a low memory footprint?

Guess what? There is! Meet logagent-js – a log parser and shipper with log patterns for a number of popular log formats – from various Docker images, including Nginx, Apache, Linux and Mac system logs, to Elasticsearch, Redis, Solr, MongoDB and more. Logagent-js detects the log format automatically using the built-in pattern definitions (and also lets you provide your own, custom patterns).

Logagent-js includes a command line tool with default settings for Logsene as the Elasticsearch backend for storing the shipped logs.  Logsene is compatible with the Elasticsearch API, but can do much more, such as role-based access control, account sharing for DevOps teams,  ad-hoc charts in the Logsene UI, alerts on logs, and finally it integrates Kibana to ease the life of everybody dealing with log data!

Now let’s see what I run on my private blog site: logagent-js as a single command to tail, parse and ship logs, all with less than 40 MB of RAM. Compare that to Logstash, which would not even start with just 40 MB of JVM heap. Logagent-js can be installed as a command-line tool with npm, which is included in Node.js (>0.12):

npm i logagent-js -g

Logagent-js needs only the Logsene token as a parameter to ship logs to Logsene. When running it as a background process or daemon, it makes sense to limit the Node.js memory with --max-old-space-size to 60–100 MB, just in case. Without such a setting, Node.js could consume more memory to improve performance in a long-running process:

node --max-old-space-size=60 /usr/local/bin/logagent -s -t your-logsene-token-here logs/access_log &

You can also run logagent-js as an upstart or systemd service, of course – a minimal systemd unit sketch is shown below.
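Here is what such a unit file could look like – just a sketch, with paths, log file and token as placeholders you would replace with your own:

[Unit]
Description=logagent-js log shipper
After=network.target

[Service]
# paths, token and log file below are placeholders – adjust them to your installation
ExecStart=/usr/bin/node --max-old-space-size=60 /usr/local/bin/logagent -s -t your-logsene-token-here /var/log/nginx/access.log
Restart=always

[Install]
WantedBy=multi-user.target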

A few seconds after you start it you’ll see all your logs, parsed and structured into fields, with correct timestamps, numeric fields, etc., all without any additional configuration! A real gift and a huge time saver for busy ops people!

[Screenshot: creating an ad-hoc chart in Logsene]

Charting Logs

Next, let’s create some fancy charts with data from our logs. Logsene has ad-hoc charting functions (look for the little blue chart icons in the above screenshot) that let you draw Pie, Area, Line, Spline, Bar, and other types of charts. Logsene is smart and automatically chooses pie charts for distinct values and bar/line charts for numeric values over time.

[Screenshot: top viewed pages and HTTP status code distribution]

In the above screenshot we see the top viewed pages and the distribution of HTTP status codes. We were able to generate these charts literally with just a few mouse clicks. The charts use the current query, so we could search for specific URLs and exclude e.g. images, stylesheets or traffic from robots using Logsene’s query language (e.g. ‘NOT css AND NOT jpg AND NOT png AND NOT seoscanners’ or, more simply: -css -jpg -png -seoscanners).

Kibana Dashboards

If you prefer Kibana dashboards, you’ll need more complex Elasticsearch queries to remove stylesheets, JavaScript files or other URLs from the top list. Open Kibana 4 in the Logsene UI and create a visualization that filters specific URLs – a Terms aggregation can use regular expressions as Exclude and Include patterns, roughly as sketched below.
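Under the hood, Kibana’s Exclude/Include patterns end up in an Elasticsearch terms aggregation. A minimal sketch of such a query could look like this – the field name path.raw is an assumption and depends on how your logs are mapped:

{
  "aggs": {
    "top_urls": {
      "terms": {
        "field": "path.raw",
        "size": 10,
        "exclude": ".*\\.(css|js|png|jpg|gif)"
      }
    }
  }
}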

[Screenshot: Kibana Terms aggregation with Exclude/Include filter patterns]

This visualization can be saved and added to a Kibana dashboard. If you know Kibana, this takes a few minutes per visualization. The result is a stored dashboard that can be shared with colleagues who might not know how to create such dashboards themselves.

Alert Me

The final thing I usually do is define alert queries e.g. to get notified about a growing number of HTTP error messages. For my private blog I use e-mail notifications, but Logsene integrates well with PagerDuty, HipChat, Slack or arbitrary WebHooks.

There are even more options like using Grafana with Logsene, or shipping logs automatically when using Docker.

Finally, a few more words about logagent-js, which I consider a ‘Swiss Army knife’ for logs. It integrates seamlessly with Logsene, while at the same time it can also work with other log destinations. It provides what I believe is a good compromise in terms of performance and setup time – I’d say it’s somewhere between rsyslog and Logstash.

All tools for log processing require memory for this processing, but looking at the initial memory usage right after starting the tools gives you an impression of the minimum resource usage. Here are some numbers taken from my server:

Contributions to the pattern library for even more log formats are welcome – we are happy to help with additional log formats or input sources besides the existing inputs (standard input, file, Heroku, CloudFoundry and syslog UDP). Feel free to contact me @seti321 or @sematext to get up and running with your special setup!

If you don’t want to run and manage your own Elasticsearch cluster but would like to use Kibana for log and data analysis, then give Logsene a quick try by registering here – we do all the backend heavy lifting so you can focus on what you want to get out of your data and not on infrastructure.  There’s no commitment and no credit card required.  

We are happy to answer questions or receive feedback – please drop us a line or get us @sematext.

Slack Analytics & Search with Elasticsearch, Node.js and React

The Sematext team is highly distributed. We are ex-Skype users who recently switched to Slack for team collaboration. We’ve been happy with Slack’s features, especially the integrations for watching our GitHub repositories and Jenkins, and for receiving SPM or Logsene alerts from our production servers through Slack’s ChatOps support. The ability to add custom integrations is really awesome! Being search experts, it is hard for us to accept any limitation in the search functionality of the tools we use. For example, I personally miss the ability to search over all teams and all channels, and I really miss analytics on user activity or channel usage. Elasticsearch has become a popular data store for analytical queries. What if we could take all Slack messages and index them into Elasticsearch? This would make it possible to perform advanced analytics with Kibana or Grafana, such as finding the top terms used, the most active users or the busiest channels. Finally, a simple mobile web page to access only the indexed data from various teams and channels might be handy to have, too.

In this post we’re going to see how to build what we just described.  We’ll use the Slack API, Node.js, React and Elasticsearch in 3 steps:

  • Index Data from Slack
  • Analyse Data from Slack
  • Create a custom web app for search

[Diagram: indexing Slack messages into Logsene]

Index Data from Slack

The Slack API provides several ways to access data. For example, outgoing webhooks look useful at first; however, they need a setup per channel or per keyword trigger. Then I discovered a better way – the Node.js Slack Client. Simply log in with your Slack account and get all Slack messages! I wrote a little Node.js app to dump the relevant information as JSON to the console or to a file. The JSON output can then be piped to logagent-js, a smart log shipper written in Node.js. I packaged this as “slack-elasticsearch-indexer”, so it’s super easy to run:

npm install slack-elasticsearch-indexer
# Set Elasticsearch Server, btw. the Logsene Receiver is the default
export LOGSENE_URL=https://logsene-receiver.sematext.com/_bulk
# 1 - Slack API Token from https://api.slack.com/web
# 2 - Index name or Logsene Token from https://apps.sematext.com
npm start SLACK_WEB_API_TOKEN LOGSENE_TOKEN

The LOGSENE_TOKEN is what you get from Logsene – the “ELK log management service”. Using Logsene means you don’t have to bother running your own Elasticsearch, plus the volume of most teams’ Slack data is probably so small that it fits in Logsene’s free plan! 🙂

Once you run the above, you should see new Slack messages on the console. At the same time the messages are sent to Logsene, and you will see them in the Logsene UI (or your local Elasticsearch server or cluster) right away.

Analyze Slack Messages in Logsene

Now that our Slack messages are in Logsene, we can build Kibana dashboards to visualize channel utilization, top terms, the chattiest people, and so on. But … did you know that Logsene comes with a nice ad-hoc charting function? Simply open one of the Slack messages in Logsene and click on the little chart symbol next to the userName and channel fields (see below).

[Screenshot: a Slack message in Logsene with ad-hoc chart icons]

This will very quickly render top users and channels for you:

[Screenshot: pie charts of top users and channels]

Slack Alerting

Imagine a support chat channel – wouldn’t it be nice to be notified when people start mentioning “Error”, “Problems” or “Broken” things increasingly often? This is where we can make use of Logsene Alerts and their ability to do anomaly detection. Any triggered alerts can be delivered via email, PagerDuty, Slack, HipChat or WebHooks:

[Screenshot: Logsene alert definition]

While Logsene is great for alerts, analytics and Slack message search, as a general ‘data viewer’ the message rendering in Logsene does not show application-specific things like users’ profile pictures, which would allow much faster recognition of user messages. Thus, as our next step, we’ll create a simple web client with nice rendering of the indexed Slack messages. Let’s see how this can be done very quickly using some cutting-edge web technology together with Logsene.

Create a Custom Web-App for Search

We recently started using Facebook’s React.js for rendering various UI parts, like the views for Top Database Operations, and we came across a new set of React UI components for Elasticsearch called SearchKit. Thanks to Logsene’s Elasticsearch API, SearchKit works out of the box with Logsene!
After a few lines of CSS and some JavaScript, a simple Slack search UI is born. Check it out!

[Screenshot: SearchKit-based Slack search UI]

Edit the source code on codepen.io.

You just need to use your Logsene token as the Elasticsearch index name to run this app on your own data. For production we recommend adding a proxy to Elasticsearch (or Logsene) on the server side as described in the SearchKit UI documentation to hide connection details from the client application.
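If you would rather start from scratch than fork the CodePen, a minimal SearchKit setup pointing at the Logsene Elasticsearch API could look roughly like the sketch below – the token, field names and component choices are assumptions, so adapt them to your own indexed Slack fields:

import React from "react";
import ReactDOM from "react-dom";
import { SearchkitManager, SearchkitProvider, SearchBox, Hits } from "searchkit";

// Logsene exposes the Elasticsearch API per application token – the token below is a placeholder
const searchkit = new SearchkitManager("https://logsene-receiver.sematext.com/YOUR_LOGSENE_TOKEN");

const App = () => (
  <SearchkitProvider searchkit={searchkit}>
    <div>
      {/* full-text search over the Slack message text (field name is an assumption) */}
      <SearchBox searchOnChange={true} queryFields={["message"]} />
      {/* renders the raw hits; a custom itemComponent could add avatars, channels, etc. */}
      <Hits hitsPerPage={20} />
    </div>
  </SearchkitProvider>
);

ReactDOM.render(<App />, document.getElementById("root"));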

While this post shows how to index your Slack messages in Logsene for the purpose of archiving, searching, and analytics, we hope it also serves as an inspiration to build your own custom search application with SearchKit, React, Node.js and Logsene.

If you haven’t used Logsene before, give it a try – you can get a free account and have your logs and other event data in Logsene in no time. Drop us an email or hit us on Twitter with suggestions, questions or comments.

 

 

Elasticsearch Training in London

3 Elasticsearch Classes in London

 


Elasticsearch for Developers ……. April 4-5

Elasticsearch for Logging ……… April 6

Elasticsearch Operations …….  April 6

All classes cover Elasticsearch 2.x

Hands-on — lab exercises follow each class section

Early bird pricing until February 29

Add a second seat for 50% off

[Register Now]

Course overviews are on our Elasticsearch Training page.

Want a training in your city or on-site?  Let us know!

Attendees in all three workshops will go through several sequences of short lectures followed by interactive, group, hands-on exercises. There will be Q&A sessions in each workshop after each such lecture-practicum block.

Got any questions or suggestions for the course? Just drop us a line or hit us @sematext!

Lastly, if you can’t make it… watch this space or follow @sematext — we’ll be adding more Elasticsearch training workshops in the US, Europe and possibly other locations in the coming months. We are also known worldwide for our Elasticsearch Consulting Services and Elasticsearch Production Support.
We hope to see you in London in April!

Core Solr Training in London


April 4 & 5 — Covers Solr 5.x

Hands-on — lab exercises follow each class section

Early bird pricing until February 29

Add a second seat for 50% off

Sematext is running a 2-day, very comprehensive, hands-on workshop in London on April 4 & 5 for Developers and DevOps who want to configure, tune and manage Solr at scale.

The workshop will be taught by Sematext engineer — and author of Solr books — Rafał Kuć. Attendees will go through several sequences of short lectures followed by interactive, group, hands-on exercises. There will be a Q&A session after each such lecture-practicum block.  See details, including training overview.

 

[Register Now]

Target audience:

Developers who want to configure, tune and manage Solr at scale and learn about a wide array of Solr features this training covers in its 23 chapters – we mean it when we say this is comprehensive!

What you’ll get out of it:

In two days of training Rafal will:

  1. Bring Solr novices to the level where they would be comfortable taking Solr to production
  2. Give experienced Solr users proven and practical advice based on years of experience designing, tuning, and operating numerous Solr clusters to help with their most advanced and pressing issues

When & Where:

  • Dates: April 4-5 (Monday & Tuesday)
  • Time: 9:00 am to 5:00 pm
  • Place: Imparando City of London Training Centre — 56 Commercial Road, Aldgate, London, E1 1LP (see map)
  • Cost: GBP £845.00 “Early Bird” rate (valid through February 29) and GBP £1,045.00 afterward.  There’s also a 50% discount for the purchase of a 2nd seat! (limit of 1 discounted seat per full-price seat)
  • Food/Drinks: Light morning & afternoon refreshments and Lunch will be provided

Got any questions or suggestions for the course? Just drop us a line or hit us @sematext!

Lastly, if you can’t make it…watch this space or follow @sematext — we’ll be adding more Solr training workshops in the US, Europe and possibly other locations in the coming months.  We are also known worldwide for our Solr Consulting Services and Solr Production Support.

[Register Now]

Hope to see you in London in April! See detailed info about this training.

 

 

Sending your Windows Event Logs to Logsene using NxLog and Logstash

There are a lot of sources of logs these days. Some may come from mobile devices, some from your Linux servers used to host data, while others can be related to your Docker containers. They are all supported by Logsene. What’s more, you can also ship logs from your Microsoft Windows based hosts and visualize them with Logsene. In this blog post we’ll show how to send your Windows Event Logs to Logsene in a way that will let you build great visualizations and really see what is happening on your Windows-based systems.
Continue reading “Sending your Windows Event Logs to Logsene using NxLog and Logstash”