Recipe: Reindexing Elasticsearch Documents with Logstash

If you’re working with Elasticsearch, it’s very likely that you’ll need to reindex data at some point. The most common reason is a mapping change that is incompatible with your current mapping. New fields can be added by default, but many changes are not allowed, for example:

  • Want to switch to doc values because field data is taking too much heap? Reindex!
  • Want to change the analyzer of a given field? Reindex!
  • Want to break one great big index into time-based indices? Reindex!

Enter Logstash

A while ago I was using stream2es for reindexing, but if you look at its GitHub page, it recommends using Logstash instead. Why? In general, Logstash can do more; here are my top three reasons:

  1. On the input side, you can filter only a subset of documents to reindex
  2. You can add filters to transform documents on their way to the new index (or indices)
  3. It should perform better, as you can add more filter threads (using the -w parameter) and multiple output worker threads (using the workers configuration option)

Show Me the Configuration!

In short, you’ll use the elasticsearch input to read existing data and the elasticsearch output to write it. In between, you can use various filters to change what the documents look like.

Input

To read documents, you’ll use the elasticsearch input. You’ll probably want to specify the host(s) to connect to and the index (check the documentation for more options like query):

input {
  elasticsearch {
   hosts => ["localhost"]
   index => "old-index"
  }
}

By default, this will run a match_all query that does a scan through all the documents of the index, fetches pages of 1,000, and uses a scroll timeout of one minute (i.e., after a minute of inactivity it won’t know where it left off). All of this is configurable, but the defaults are sensible. Scan is good for deep paging (normally, when you ask for documents 1,000,000 through 1,000,020, Elasticsearch has to fetch all 1,000,020 documents, sort them, and return the last 20), and it also works on a “snapshot” of the index (updates made after the scan started won’t be taken into account).
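
For example, here’s a sketch of an input that narrows down what gets reindexed. The term query is purely illustrative, and the exact format the query option expects depends on your plugin version, so check the documentation:

input {
  elasticsearch {
   hosts => ["localhost"]
   index => "old-index"
   # hypothetical filter: reindex only documents matching this query
   query => '{ "query": { "term": { "type": "login" } } }'
   size => 500    # page size, instead of the default 1000
   scroll => "5m" # keep the scroll context alive longer
  }
}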

Filter

Next, you might want to change documents on their way to the new index. For example, if the data you’re reindexing wasn’t originally indexed with Logstash, you’ll probably want to remove the @version and/or @timestamp fields that it automatically adds. To do that, you’ll use the mutate filter:

filter {
 mutate {
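  # @timestamp is worth keeping if you plan to use time-based indices (see below)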
  remove_field => [ "@version" ]
 }
}

Output

Finally, you’ll use the elasticsearch output to send data to the new index. The defaults are once again geared towards the logging use-case. If that’s not your setup, you might want to disable the default Logstash template (manage_template => false) and use your own:

output {
 elasticsearch {
   host => "localhost"
   protocol => "http"
   manage_template => false
   index => "new-index"
   index_type => "new-type"
   workers => 5
 }
}

Final Remarks

If you want to use time-based indices, you can change index to something like “logstash-%{+YYYY.MM.dd}” (this is the default), and the date will be taken from the @timestamp field. By default, this field is populated with the time at which Logstash processes the document, but you can use the date filter to replace it with a timestamp from the document itself:

filter {
 date {
   match => [ "custom_timestamp", "MM/dd/yyyy HH:mm:ss" ]
   target => "@timestamp"
 }
}
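
With that in place, the output section shown earlier only needs a date-based index pattern, along these lines:

output {
 elasticsearch {
   host => "localhost"
   protocol => "http"
   manage_template => false
   index => "logstash-%{+YYYY.MM.dd}"  # date taken from @timestamp
 }
}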

If your Logstash configuration contains only these snippets, it will nicely shut down when it’s done reindexing.
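
To run the whole thing, save the snippets above into a single file and start Logstash with it, optionally adding filter workers (a sketch; the binary’s location depends on how you installed Logstash):

bin/logstash -f reindex.conf -w 4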

That’s it! We are happy to answer questions or receive feedback – please drop us a line or get us @sematext. And, yes, we’re hiring!

Solr Cookbook, 3rd Edition — Now Available and includes Solr 5.0

Hot off the press: a brand new Solr Cookbook!  One of Sematext’s Solr and Elasticsearch experts and authors, Rafał Kuć, has just published the third and latest edition of Solr Cookbook.  This edition covers both Solr 4.x (based on the newest 4.10.3 version of Solr) and the just-released Solr 5.0.

As with previous Solr Cookbooks, Rafal updated the book significantly — half of the previous content has been changed — and rewrote all of the recipes.


Chapter List

Here’s a list of the chapters:

  1. Apache Solr Configuration
  2. Indexing Your Data
  3. Analyzing Your Text Data
  4. Querying Solr
  5. Faceting
  6. Improving Solr Performance
  7. In the Cloud
  8. Using Additional Solr Functionalities
  9. Dealing with Problems
  10. Real-life Situations

For more information about Solr Cookbook, Third Edition — including info on getting a free chapter — check out the Packt Publishing web page dedicated to it.  The book is available in both electronic and paperback versions.  Even better, here is a discount code you can use for 20% off (valid until March 22, 2015; see details for applying code below*): scte20

Need Some Solr Expertise?

Rafal isn’t the only Solr expert at Sematext; we’ve got several more who have helped 100+ clients to architect, scale, tune, and successfully deploy their Solr-based products.  We also offer 24/7 production support for Solr and Elasticsearch.  Here’s more info about our professional services, which also include Elasticsearch and Logging consulting.  You can also monitor Solr performance (and many other platforms) with SPM Performance Monitoring.

Have some feedback or questions for Rafal?

He’d love to hear from you — get him @kucrafal

——-

* Using discount code:

  1. Set up a free Packt account or log into your existing account
  2. Add the title “Solr Cookbook – Third Edition” to the cart
  3. Click on ‘View Cart’
  4. Then in the “Do you have a promo code?” field enter scte20
  5. Click on the “Apply” button for the discount to get applied

 

Using Elasticsearch Mapping Types to Handle Different JSON Logs

By default, Elasticsearch does a good job of figuring out the type of data in each field of your logs. But if you like your logs structured like we do, you probably want more control over how they’re indexed: is time_elapsed an integer or a float? Do you want your tags analyzed so you can search for big in big data? Or do you need them not_analyzed, so you can show top tags via the terms aggregation? Or maybe both?

In this post, we’ll look at how to use index templates to manage multiple types of logs across multiple indices. Also, we’ll explain how to use logging tools (such as Logstash and rsyslog) to handle JSON logging and specify types.

Elasticsearch Mapping and Logs

As you may already know, to control these things in Elasticsearch you’ll need to define a mapping. This works similarly in Logsene, our log analytics SaaS, because it uses Elasticsearch and exposes its API.

With logs you’ll probably use time-based indices, because they scale better (in Logsene, for instance, you get daily indices). Because of that, to make sure the mapping you define today applies to the index you create tomorrow, you need to define it in an index template.

Managing Multiple Types

Mappings provide a nice abstraction when you have to deal with multiple types of structured data. Let’s say you have two apps generating logs of different structures: both have a timestamp field, but one recording logins has a user field, and another one recording purchases has an amount field.

To deal with this, you can define the timestamp field in the _default_ mapping, which applies to all types. Then, in each type’s own mapping, you’ll define the fields unique to that type. The following snippet is an example that works with Logsene, provided that aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee is your Logsene app token. If you roll your own Elasticsearch, you can use whichever template name you want; just make sure the template pattern applies to your index.

curl -XPUT 'logsene-receiver.sematext.com/_template/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee_MyTemplate' -d '{
 "template" : "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee*",
 "order" : 21,
 "mappings" : {
  "_default_" : {
   "properties" : {
    "timestamp" : { "type" : "date" }
   }
  },
  "firstapp" : {
   "properties" : {
    "user" : { "type" : "string" }
   }
  },
  "secondapp" : {
   "properties" : {
    "amount" : { "type" : "long" }
   }
  }
 }
}'
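
If you roll your own Elasticsearch, a quick sanity check is to fetch the template back (a sketch against a local node, with MyTemplate standing in for whatever name you chose):

curl -XGET 'localhost:9200/_template/MyTemplate'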

Sending JSON Logs to Specific Types

When you send a document to Elasticsearch by using the API, you have to provide an index and a type. You can use an Elasticsearch client for your preferred language to log directly to Elasticsearch or Logsene this way. But I wouldn’t recommend this, because then you’d have to manage things like buffering if the destination is unreachable.
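
For completeness, here’s what such a direct call could look like (a sketch; the token and field values are placeholders). Note how the index is the Logsene app token and the type is the application name from the template above:

curl -XPOST 'logsene-receiver.sematext.com/LOGSENE-APP-TOKEN-GOES-HERE/secondapp' -d '{
 "timestamp" : "2015-02-18T10:00:00Z",
 "amount" : 50
}'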

Instead, I’d keep my logging simple and use a specialized logging tool, such as Logstash or rsyslog, to do the hard work for me. Logging to a file is usually the easiest option. It’s local, and you can have your logging tool tail the file and send contents over the network. I usually prefer sockets (like syslog) because they let me configure Logstash/rsyslog to:

– write events in a human-readable format to a local file I can tail if I need to (usually in development)
– forward logs without hitting disk if I need to (usually in production)

Whatever you prefer, I think writing to local files or sockets is better than sending logs over the network from your application – unless you’re willing to make a reliability trade-off and use UDP, which gets rid of most complexities.

Opinions aside, here’s a Logstash configuration that tails a file of newline-separated JSON logs and sends the resulting documents to Logsene via the Elasticsearch API:

input {
 file {
   path => "/var/log/test"
   codec => "json"
 }
}

output {
 elasticsearch {
   host => "logsene-receiver.sematext.com"
   port => 80
   index => "LOGSENE-APP-TOKEN-GOES-HERE"
   index_type => "fileapp"
   protocol => "http"
   manage_template => false
 }
}

Note how the JSON codec does the parsing here, instead of the more expensive and maintenance-heavy approach with grok that we’ve shown in an earlier post on getting started with Logstash. Some applications let you configure the log format, so you can make them write JSON (Apache httpd, for example).
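
For example, with Apache httpd you could define a JSON log format along these lines (a sketch; the format name and field names are our choice):

LogFormat "{ \"timestamp\": \"%{%Y-%m-%dT%H:%M:%S%z}t\", \"client\": \"%a\", \"request\": \"%U\", \"status\": %>s, \"time_elapsed\": %D }" json
CustomLog /var/log/apache_json.log json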

If you want to send JSON over syslog, there’s the JSON-over-syslog (CEE) format that we detailed in a previous post. You can use rsyslog’s JSON parser module to take your structured logs and forward them to Logsene:

module(load="imuxsock")        # can listen to local syslog socket
module(load="omelasticsearch") # can forward to Elasticsearch
module(load="mmjsonparse")     # can parse JSON

action(type="mmjsonparse")  # parse CEE-formatted messages

template(name="syslog-cee" type="list") {  # Elasticsearch documents will contain
  property(name="$!all-json")              # all JSON fields that were parsed
}

action(
  type="omelasticsearch"
  template="syslog-cee"                     # use the template defined earlier
  server="logsene-receiver.sematext.com"
  serverport="80"
  searchType="syslogapp"
  searchIndex="LOGSENE-APP-TOKEN-GOES-HERE"
  bulkmode="on"                                # send logs in batches
  queue.dequeuebatchsize="1000"                # of up to 1000
  action.resumeretrycount="-1"    # retry indefinitely (buffer) if destination is unreachable
)

To send a CEE-formatted syslog message, you can run logger '@cee: {"amount": 50}' for example. Rsyslog would forward this JSON to Elasticsearch or Logsene via HTTP. Note that Logsene also supports CEE-formatted JSON over syslog out of the box, if you want to use a syslog protocol instead of the Elasticsearch API.

Filtering by Type

Once your logs are in, you can filter them by type (via the _type field) in Kibana:
[Screenshot: type filtering in Kibana]
However, if you want more refined filtering by source, we suggest using a separate field for storing the application name. This can be useful when you have different applications using the same logging format. For example, both crond and postfix use plain syslog.
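
With Logstash, for instance, you could copy the program name into a dedicated field (a sketch; it assumes the syslog input has already parsed the program name into the program field):

filter {
 mutate {
   add_field => { "app" => "%{program}" }
 }
}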

If you’re looking for a place to send your logs to, check out Logsene!

Solr vs. Elasticsearch — How to Decide?

by Otis Gospodnetić

[Otis is a Lucene, Solr, and Elasticsearch expert and co-author of “Lucene in Action” (1st and 2nd editions).  He is also the founder and CEO of Sematext. See full bio below.]

“Solr or Elasticsearch?”…well, at least that is the common question I hear from Sematext’s consulting services clients and prospects.  Which one is better, Solr or Elasticsearch?  Which one is faster?  Which one scales better?  Which one can do X, and Y, and Z?  Which one is easier to manage?  Which one should we use?  Which one do you recommend? etc., etc.

These are all great questions, though not always ones with clear and definite, universally applicable answers. So which one do we recommend you use? How do you choose in the end?  Well, let me share how I see the Solr and Elasticsearch past, present, and future, do a bit of comparing and contrasting, and hopefully help you make the right choice for your particular needs.


Early Days: Youth Vs. Experience

Apache Solr is a mature project with a large and active development and user community behind it, as well as the Apache brand.  First released to open-source in 2006, Solr has long dominated the search engine space and was the go-to engine for anyone needing search functionality.  Its maturity translates to rich functionality beyond vanilla text indexing and searching, such as faceting, grouping (aka field collapsing), powerful filtering, pluggable document processing, pluggable search chain components, language detection, etc.

Continue reading “Solr vs. Elasticsearch — How to Decide?”

Solr 5: Replication Throttling

With the release of Solr 5.0, the most recent major version of this great search server, we didn’t get only improvements and changes inherited from the Lucene library.  Of course, we did get features like:

  • segment checksums
  • segment identifiers
  • Lucene using only classes from the Java NIO.2 package to access files
  • lowered heap usage thanks to the new Lucene50Codec

…but those features came from the Lucene core itself.  Solr introduced:

  • improved usability of the start-up scripts
  • scripts for installing and running Solr as a Linux service
  • distributed IDF calculation
  • ability to register new handlers using the API (with jar uploads)
  • replication throttling
  • …and so on

All of these features come with the first release of branch 5 of Solr, and we can expect even more from future releases — like cross data center replication! We want to start sharing what we know about those features and, today, we start with replication throttling.

Continue reading “Solr 5: Replication Throttling”

Job: Sematext is hiring – Elasticsearch Engineer

The Sematext team is more distributed than your average Elasticsearch cluster and, trust me, we’ve seen a good portion of the world’s Elasticsearch clusters.  The thing with Elasticsearch clusters is they often get new nodes added and they keep expanding to handle more data and more queries.  Similarly, we are looking to add a new node to the Sematext team so we can reshard our work a bit, distribute it more evenly, and scale further.  In plain English: we are looking for an Engineer who loves working with Elasticsearch, loves large volumes of data and a wide variety of projects and challenges involving large-scale data processing, high-volume indexing, and high query rates, likes working with our clients, and wants to make Logsene and SPM the killer log management and monitoring platforms.  Advanced knowledge of Elasticsearch is less important than the passion to learn and build, a positive attitude, and the ability to make decisions, work both independently and with the rest of the team, communicate well, and simply be a good person.  We can teach you everything about Elasticsearch and turn you into a bonsai tree loving Elasticsearch samurai, but we need you to be all these other things.

As a member of our team you will get to:

  • Work with world-class search experts
  • Design and implement systems (both our own and our clients’) that process 10s of thousands of queries per second and handle billions of documents, logs, data points, etc.
  • Interact with clients and customers world-wide
  • Provide guidance, architecture design, implementation, and production support around Elasticsearch
  • Participate in and contribute to open-source (we’ve contributed to Solr, Lucene, HBase, Flume, rsyslog, Logstash, etc.)
  • Share your knowledge with clients, at conferences and under-conferences, online community, etc.

This position:

  • Offers a lot of independence, learning, and growth
  • Is open to applicants “west of New York City” (this could be South, Central, or North America, of course), though we’ll happily make an exception if you persuade us we should make one for you!

Our search team members have written several books about search, regularly give talks at conferences, blog, and participate in open-source projects.  For more info, see 19 things you may like about Sematext.

Interested? Please send your resume to jobs@sematext.com.

For other job openings please see Jobs @ Sematext or even our previous job listings.

JOB: Elasticsearch / Lucene Engineer (starts in the Netherlands)

In addition to looking for an Elasticsearch / Solr Engineer to join the Sematext team, we are also looking for a Lucene / Elasticsearch Engineer in the EU for a specific project.  This project calls for 6 months of on-site work with our client in the Netherlands.  After 6 months the collaboration with our client would continue remotely if there is more work to be done for the client or, if the client project(s) are over, this person would join our global team of Engineers and Search Consultants and work remotely (we are all very distributed over several countries and continents). This is a position focused on search – it involves working with Elasticsearch, but also requires enough understanding of Lucene to allow one to write custom Elasticsearch/Lucene components, such as tokenizers, for example. Here are some of the skills one should have for this job:

  •  knowledge of different types of Lucene queries/filters (boolean, spans, etc.) and their capabilities
  •  experience in extending out-of-the-box Lucene functionality via developing custom queries, scorers, collectors
  •  understanding of Lucene document analysis in the process of indexing, experience in writing custom analyzers
  •  experience in mapping advanced hierarchical data structures to Lucene fields
  •  experience in scalable distributed open-source search technologies such as Elasticsearch or Solr

The above is not much information to go by, but if this piqued your interest and if you think you are a good match, please fix up your resume and send it to jobs@sematext.com quickly.

JOB: Elasticsearch / Solr Engineer

We’ve grown nicely this year.  Our team has a new UI Developer, a new Solr/Elasticsearch Engineer, a new Marketing person, a new Automation Engineer, and this summer we have our first ever Intern.

Like all healthy organizations, we keep growing, and we are now looking for good Search Engineers who know Elasticsearch and/or Solr to join our geographically distributed search consulting team.  You will work remotely, from wherever you are, with smart people spread out across the planet and with an amazing array of companies world-wide on projects that range from just a week or two to several months.

At Sematext, we’ve built several exciting products – from smaller, search-focused products that work with Solr and Elasticsearch, to larger ones like SPM, Search Analytics, and most recently Logsene.  While not building products and running services, we help organizations world-wide with their search and big data needs – from fixing issues and providing production support to building complex search systems from scratch.  Our client list is long, with a number of household names on it – from Instagram (Facebook) and Tumblr (Yahoo), Etsy and Shutterstock, to The BBC, Elsevier, Lockheed Martin, Reuters, Library of Congress, etc.  We did this without raising any money.  The demand for our products and services is growing and we are looking for good engineers and good people to join our adventure!

More formally:

Sematext is looking for a responsible, professional individual to join our team of search engineers.

Sematext is a New York-based startup with people spread over multiple continents and several hundred customers from Instagram and Tumblr, Etsy and Shutterstock, to The BBC, Elsevier, Lockheed Martin, Reuters, Library of Congress, etc. We’ve built systems handling over 10,000 QPS and have worked with multi-billion document indices. Our core products are:

  • SPM Performance Monitoring
  • Logsene Log Management and Analytics
  • Search Analytics

In addition to the above products we offer consulting services around open source search and big data.

We are looking for a person who is:

  • Enthusiastic and positive
  • Driven, independent, and professional
  • A good communicator, both written and oral
  • Good with Solr and/or Elasticsearch and hungry to learn more
  • Eager to help organizations make the best out of search

As a member of our search team you will get to:

  • Interact with clients world-wide
  • Provide guidance, architecture design, implementation, and support
  • Participate in Solr, Lucene, and Elasticsearch user and development communities
  • Work on Sematext’s search and data analytics products and participate in open-source search projects

This position:

  • Offers a lot of independence, learning, and growth
  • May require a bit of travel here and there, typically in the US and Europe
  • Is open world-wide

Our search team members have written several books about search, regularly give talks at conferences, blog, and participate in open-source projects.
For more info, see 19 things you may like about Sematext.

Interested? Please send your resume to jobs@sematext.com.

For other job openings please see Jobs @ Sematext or even our previous job listings.

Key Phrases for Better Search: Smart Content Presentation

We are 3 for 3 this month – 3 talks at 3 different conferences – Lucene Revolution (see our presentation), Hadoop World (see our presentation), and Smart Content (see full agenda).  The last conference was a small one-day conference here in New York, organized by Seth Grimes.  It turns out there are tons of vendors in the text analytics / “semantic” analysis space who all do more or less the same thing – Named Entity Recognition, Classification, Clustering, Key Phrase Extraction, etc.  Sematext is not in that space, though we do have a classifier, a Language Identifier, and a Key Phrase Extractor.  It is this last tool, the Key Phrase Extractor, that I made use of in the presentation.  But enough talk, here is our presentation:

Search Analytics: Hadoop World Presentation

After our Lucene Revolution talk in Boston, we got ready for last week’s Hadoop World conference in New York.  Like at the Lucene Revolution, we presented to a packed room of 200+ people. The topic of our talk was the Search Analytics tool we’ve built with the help of Flume, HBase, MapReduce, and other open-source tools, and which we are now starting to use for search-hadoop.com and search-lucene.com.  If you couldn’t make it to Hadoop World, have a look at our presentation below.  And if you’d like to work on Search, Analytics, and related areas, we’re looking for good people world-wide – see our jobs page.  Enjoy!