Thursday, June 6, 2013

Couchbase - use it for Logging!

My Introduction to Couchbase

By way of a short introduction: I've been using Couchbase for almost half a year now, but I've actually been around that "grey" couch-area since 2008/2009, when I played with CouchDB for document storage.

But I introduced Couchbase at my current company (I work as a consultant) because we wanted:
  • To store many JSON documents 
  • To be fast!
  • To build secondary indexes on arbitrary documents
  • Many clients to be able to fetch the documents - so caching is needed. 
  • The database to be able to fail over somehow
  • Something easy to set up, scale and understand

We got almost everything we needed for our architecture, so we based the setup on Couchbase and have now used it for a couple of months in production. We quite easily set up a cluster of 3 nodes and used it to take in large amounts of metadata for a Video On Demand (VOD) workflow engine. We have tested it and got quite good numbers for operations per second (both during writes and reads - we mainly write a lot in this scenario).

Logging - problem statement

The system my customer currently has is a broker/workflow system for VOD that integrates with many other systems and platforms, which means a lot of stuff can go wrong - and that is not always easy to spot in the current system. So with a good logging mechanism I want:

  • Application developers to be able to investigate incidents.
  • The customer support team to be able to investigate a lost transaction reported by a customer complaint.
  • The security team to be able to do security forensics.
  • Easy spotting of trends and some kind of business intelligence (BI) for the business managers demanding statistics.
So I've been using Couchbase for storing trivial document data and all is working out very well indeed.
But then it hit me: why not use it for arbitrary logging? Couchbase sure looked fast enough to store a massive load of documents arriving in a short amount of time. And since we already have a cluster of Couchbase nodes, it would be straightforward to do - why not use what you already have? Besides, the idea could be fun and rewarding!

The Couchbase Idea

Using Couchbase also gives me the ability to shape the log structure as I want and to change it dynamically - if I want! This is one of the true powers of a document storage database such as Couchbase. And the stored documents are first cached in the memcached layer that Couchbase automatically puts in front - which means speed when you are searching your logs!

Even though a lot of different solutions exist for logging, I've always had the feeling that they had shortcomings. Firstly, we have a cluster of application servers behind a load balancer. With ordinary logging on the file system, this means our logs are spread across each application server - where you have to log in, look at the log, compare, etc. This of course becomes more and more complicated the more application servers you add.

Many remote loggers do exist; they either do their logging through a normal RDBMS or via a protocol for exchanging logs, such as syslog. I've had some problems using syslog with my environment, and since we did not have a syslog server, it was even harder to use in this scenario.

A lot of these log services store the logs for safekeeping and make them available later - which often means storing the logs in a centralised place and parsing them later (possibly only when needed).
I wanted my logging to be instant, so that when new software is released, we can see the impact and eventual errors immediately! I also wanted to be able to evict logs that lose their significance over time (say info or other low levels). For this I will use the expiry time that you can set for each item that you persist (both in memory and on disk).
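To make the eviction idea concrete, here is a tiny in-JVM sketch of per-item TTL semantics. This is an illustration only and not how Couchbase is implemented - Couchbase evicts expired documents server-side, both in memory and on disk - but the observable behaviour for the log writer is roughly this:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of per-item expiry semantics, mimicking Couchbase's per-document
// TTL. Illustration only: the real eviction happens server-side.
class TtlSketch {
    private static final class Entry {
        final String json;
        final long expiresAtMillis; // 0 = never expires
        Entry(String json, long expiresAtMillis) {
            this.json = json;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final Map<String, Entry> store = new HashMap<String, Entry>();

    // ttlSeconds = 0 means "keep forever", mirroring eviction=0
    void set(String key, int ttlSeconds, String json) {
        long expires = ttlSeconds == 0 ? 0 : System.currentTimeMillis() + ttlSeconds * 1000L;
        store.put(key, new Entry(json, expires));
    }

    String get(String key) {
        Entry e = store.get(key);
        if (e == null) {
            return null;
        }
        if (e.expiresAtMillis != 0 && System.currentTimeMillis() > e.expiresAtMillis) {
            store.remove(key); // lazily drop an expired log document
            return null;
        }
        return e.json;
    }
}
```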

Almost all of the Java companies I've worked for as a freelancer have been using some sort of Log4j library - meaning Log4j directly or via an SLF4J facade. This company is doing the same, using Log4j (version 1.2).

Log standards?

So instead of implementing yet another REST service where each piece of software on the application servers has to rewrite its logging technique, why not just use the "standard" Java logging mechanism - Log4j in this case?

Speaking of standards, I would also like my log documents stored in some kind of standard format. Coming from an open-standards world (XMPP advocate and XSF member) I know how much it actually means to comply with a standard, as it makes interoperation easy. It would also make it easy to export data from other loggers, import it, etc. But no widespread standard really exists, except syslog, which I have now tried to use a couple of times.

Actually, one shortcoming I've hit with syslog and Java was the length of a stack trace. Stack traces can be enormous and syslog sometimes can't handle that - so it does have its limitations! I don't like limitations, so the ability to "shape" my own log document structure (or standard, if any) was very important as well.

Summary of requirements

So let's assemble the requirements for my idea of a centralised logger:
  • Easy to set up!
  • Log4j compliant (which makes it directly pluggable from a Java perspective)
  • Fast and distributed 
  • Close-to-instant logging
  • Easy to shape the log documents
  • Easy to index the log documents as we go along (some want incident reports, some want BI, some want forensics)
  • Some kind of standardised log format

Log4j-couchbase

So as you can guess, I used Couchbase for storage and implemented a Log4j appender that makes it possible to use our current Log4j setup and format the log document the way we wish.
Because I like standards and because I don't want to reinvent the wheel, I decided to go for Logstash as a "standard" logging format. I researched the alternatives a bit, but only Logstash looked simple and widely used enough for my purpose.

So in half a day or so I created the log4j-couchbase project. It's a simple Log4j appender that formats the logs as Logstash (well, almost). :-)

I've made some simple design assumptions, such as the appender adding the logs asynchronously instead of waiting for an OK from Couchbase. That is, I am not calling the get() method on my OperationFuture, so the appender runs in non-blocking mode for performance. Later I might make this optional with a synchronous mode to turn on, but for now I find performance more important for logging than the robustness that synchronous mode offers.
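That non-blocking behaviour is essentially fire-and-forget: hand the document to the client, get a future back, and never block on it. Making no assumptions about the real appender's internals, the pattern can be sketched with plain JDK futures (the incrementAndGet() below is a stand-in for the actual client.set() call):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Fire-and-forget sketch: submit the write and do NOT call get() on the
// returned Future, so the logging thread never blocks on the store.
class AsyncAppendSketch {
    private final ExecutorService pool = Executors.newSingleThreadExecutor();
    final AtomicInteger written = new AtomicInteger();

    Future<?> append(final String jsonLogDocument) {
        return pool.submit(new Runnable() {
            public void run() {
                // stand-in for client.set(key, ttl, jsonLogDocument)
                written.incrementAndGet();
            }
        });
    }

    void shutdown() {
        pool.shutdown();
        try {
            pool.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```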


If you are using Maven you should be able to add the project as a dependency in your pom.xml. Otherwise, compile the project with "mvn clean install" and put the jar file (located under target) on the classpath of your project.
I have uploaded the snapshot of version 0.4.0 to Maven Central and it should be promoted to a released version soon. Until then you have to live with a snapshot version.
Now that the class is added to your project, you only need to add the log4j.properties file. This file sets up the properties for the couchbase appender. An example could be:
log4j.rootLogger=DEBUG, COUCHBASE
Remember to substitute your Couchbase server settings accordingly. Also remember to add a console appender to the root logger if you would like to see what is happening on your console.
If you use an eviction value of 0 (eviction=0), your logs will be persisted forever. If you set the value (in seconds) higher than zero, the document will be evicted after that number of seconds. E.g. with eviction=60, the documents will be stored for only 60 seconds. In the next major version (1.0.0) of the log appender I will try to add an even cleverer eviction strategy, maybe based on log level, occurrence, etc.
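Putting the pieces together, a complete properties file might look something like the sketch below. Note that the appender class name and the hosts key here are hypothetical placeholders for illustration - only the eviction and loggingBucket properties are described in this post - so check the project's README for the exact keys:

```properties
log4j.rootLogger=DEBUG, CONSOLE, COUCHBASE

# Keep a console appender so you can still follow the log locally
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%d [%t] %-5p %c - %m%n

# The couchbase appender (class name and hosts key assumed for illustration)
log4j.appender.COUCHBASE=dk.braintrust.os.logger.CouchbaseAppender
log4j.appender.COUCHBASE.hosts=http://couchbase1:8091/pools
log4j.appender.COUCHBASE.loggingBucket=logs
# 0 = keep forever; any value > 0 is a TTL in seconds
log4j.appender.COUCHBASE.eviction=0
```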

After adding the property file, you are ready to use the couchbase logger. So in your Java project, just use your normal Log4j statements and levels, like so:
Logger log = Logger.getLogger(TestLogger.class); // org.apache.log4j.Logger
log.warn("This is a warn!");
log.error("Auch we got a stacktrace", exception);
A more detailed description of the log levels etc. can be found in the Log4j documentation.


When the log statements are implemented as above and the property file is set correctly, the couchbase appender will put all your log data into the default bucket in Couchbase. If you want a bucket specific for your logs, you can set it via the property file (the loggingBucket property). The log will be persisted in Couchbase as a data structure that looks something like this:
{
  "message": "Auch we got a stacktrace",
  "hostname": "",
  "thread": "main",
  "timestamp": 1366404509322,
  "fieldData": {
    "level": "ERROR",
    "mdc": {},
    "file": "",
    "exception": {
      "exception_class": "java.lang.StackOverflowError",
      "exception_message": "Craaaap",
      "stacktrace": "java.lang.StackOverflowError: Craaaap\n\tat dk.braintrust.os.logger.TestLogger.testErrorLogger("
    },
    "class": "dk.braintrust.os.logger.TestLogger",
    "line_number": "19",
    "method": "testErrorLogger"
  },
  "exceptionInformation": {
    "exception_class": "java.lang.StackOverflowError",
    "exception_message": "Craaaap",
    "stacktrace": "java.lang.StackOverflowError: Craaaap\n\tat dk.braintrust.os.logger.TestLogger.testErrorLogger(TestL.. java:390)\n\tat org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main("
  }
}
The layout is loosely based on the Logstash format, so it is easily portable to and from Logstash. 

Views and GUI?

So now that the logs are stored in Couchbase, what do we do with them? Once persisted, we can create views based upon whatever we want to explore in the logs. 
We can use these views to make the logs easily searchable in a web GUI, etc.
But because I wanted the appender to be a small and understandable project, I find the views and the GUI a bit out of scope.

I have therefore made a new little pet project called couchpotato. This project will put some views on the documents and add a GUI on top - probably based on AngularJS or Knockout. Another thing the couchpotato project might add is the possibility of full-text search in the JSON documents, no matter what structure they have. That is going to be a neat feature and is easily done through Couchbase using the elasticsearch plugin. I will deal with this project in a later blog post.
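Couchbase views are plain JavaScript map functions over the stored documents, so a first view for such a GUI could index the log documents by level and timestamp. A hypothetical sketch (the design document and view names are up to you; in Couchbase the function is anonymous, it is named here only so it can be exercised standalone):

```javascript
// Hypothetical map function for a Couchbase view over the log documents
// shown above: index by [level, timestamp] so logs can be filtered by
// level and range-scanned by time.
function mapLogByLevel(doc, meta) {
  if (doc.fieldData && doc.fieldData.level && doc.timestamp) {
    emit([doc.fieldData.level, doc.timestamp], doc.message);
  }
}
```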


So what is my log appender missing, and what features would be nice to have?
I've developed this appender specifically for my current customer, so it is for Log4j version 1.2.x only.
It would also be nice to develop it for the 2.x line of Log4j, and maybe do a Logback implementation, which is gaining a lot of traction at the moment. Logback natively implements the SLF4J API, which means that if you are using Logback, you are actually using the SLF4J API.

Otherwise these are on my TODO list:
  • Comply better with the Logstash data model.
  • Maybe store the log objects in a temporary memtable, to be able to continuously pump log data fast.
  • Clean up the code.
  • Make the asynchronous behaviour of the appender configurable (optional synchronous mode).
  • Do a massive load test, to see if it holds water.
  • A strategy for when the log appender gets disconnected from the cluster. Should it put the logs into a queue, write them locally to disk, or?
  • Develop an appender for the Log4j 2.x standard, maybe in the same project.
  • Put the project into the Maven Central releases (right now it resides in snapshots).
  • Make a prototype GUI for testing out the maps and views - I've started a template project.
So what will be next on the roadmap for this little project? I think load testing the framework to see if it scales will be the next priority, along with building my couchpotato project. Building views and using them in a GUI makes it easier to understand what is missing and what flaws it has.

BTW: If you are interested in logging and Couchbase, you should really read Michael Nitschinger's blog post about setting up Log4j for the Couchbase client. It is about the internal logging of the Couchbase Java client and how to set it up and tweak it.


Well, that's it for now. Cheers!

Twitter: zooldk
