Tips series: logging
It usually goes like this:
- Let's hack a prototype or proof of concept
- Deploy to production
- What the hell, people are visiting my website and I didn't add any application logging!
Happens to me all the time. I end up adding a logging framework when it's already too late: users have complained, the site went down, and I don't even know what happened! I'm not really sure why that is, but my theory is that the most popular frameworks require XML configuration in the web/app config, and it's often very difficult to find all the config options, so I tend to leave the "logging feature" for the end. I also hate XML-based configuration for this kind of thing. Some logging frameworks, like Serilog, support code-based configuration.
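With Serilog, for example, the whole configuration can live in code. A minimal sketch (it assumes the console sink package is installed, purely for illustration):

```csharp
using Serilog;

// All configuration in code, no XML: pick a minimum level and one or more sinks.
Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Debug()
    .WriteTo.Console()
    .CreateLogger();

Log.Information("Logger is up and running");
```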
Some options for logging are:
- Elmah - really easy to add, but logs only exceptions
- NLog or log4net - really similar frameworks; which one to use is more of a personal preference
- Serilog - an interesting new framework with easy-to-use, code-based configuration, yay!
- Application Insights, New Relic, Splunk - much bigger and more complex solutions and services; they can be plugged into a framework like NLog or Serilog as a storage for events. They offer a whole set of analytics, diagnostics and reporting tools. A really useful feature of these is the ability to log server events - number of web requests, memory consumption, database requests and timings, etc.
No matter what framework you pick, it's important to get the logging API right first, and then implement it with the framework of your choice. In one of my recent projects, we defined logging semantics along two dimensions: level and type:
- Level is standard logging level like verbose, debug, info, error, etc
- Type is something that describes the event and gives it better semantics; it can be, for example:
- Diagnostics - system and application related events
- Operations - business related events
- Performance - benchmarks
So every log event also needs to describe those two dimensions: level and type. Translating that to an interface, it can look like this:
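Something like this would do; it's just a minimal sketch, and the enum members and method names here are my own, not set in stone:

```csharp
using System;

public enum LogLevel { Verbose, Debug, Info, Warning, Error }

public enum LogType { Diagnostics, Operations, Performance }

public interface ILogger
{
    // Every entry carries both dimensions: how severe it is (level)
    // and what kind of event it describes (type).
    void Log(LogLevel level, LogType type, string message);
    void Log(LogLevel level, LogType type, string message, Exception exception);
}
```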
Some logging frameworks and services, like Serilog and Splunk, support storing structured objects (DTOs), which are then available for querying later. That can be a really useful feature: since those objects are stored in their "native" structure (JSON) inside a NoSQL store, they are not flattened into a single string and all information is preserved (think of queue messages, commands and requests that can be stored and later inspected in case of a problem).
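As a side note, this is roughly what it looks like with Serilog's message templates (assuming a configured static logger; the paymentRequest variable is just a placeholder):

```csharp
// The '@' destructuring operator tells Serilog to serialize the whole object
// into the event as structured data instead of calling ToString() on it.
Log.Information("Processing payment request {@PaymentRequest}", paymentRequest);
```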
To support sending objects, the API can be expanded a little bit:
```csharp
void Log(LogLevel level, LogType type, string message, params object[] data);
```
There's one more neat trick that can be used to make the API semantics a little bit better and easier to use; we can add extension methods for ILogger:
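Something along these lines; the method names are just examples of what such shortcuts could look like:

```csharp
using System;

public static class LoggerExtensions
{
    // Each shortcut bakes in one level/type combination, so call sites stay
    // short and the two dimensions can't be mixed up by accident.
    public static void OperationInfo(this ILogger logger, string message, params object[] data)
    {
        logger.Log(LogLevel.Info, LogType.Operations, message, data);
    }

    public static void OperationError(this ILogger logger, string message, Exception exception)
    {
        logger.Log(LogLevel.Error, LogType.Operations, message, exception);
    }

    public static void Performance(this ILogger logger, string message, params object[] data)
    {
        logger.Log(LogLevel.Debug, LogType.Performance, message, data);
    }
}
```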
And so on. You get the point:)
With those extension methods, usage can look like this:
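Roughly like this; logger, order and paymentService are placeholders for whatever the application already has, and the example mirrors the output shown below:

```csharp
// Stopwatch comes from System.Diagnostics.
var stopwatch = Stopwatch.StartNew();

logger.OperationInfo("Trying to pay the order", order);
try
{
    paymentService.Pay(order);
}
catch (Exception ex)
{
    logger.OperationError("Error processing payment request", ex);
    throw;
}

stopwatch.Stop();
logger.Performance($"Payment service executed in {stopwatch.ElapsedMilliseconds}ms");
```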
which will result in log entries that can look like:
```
14/10/2015 7:00:00 [Info] [Operation] Trying to pay the order { OrderId:1, Prop:1, Prop2:2, …}
14/10/2015 7:00:01 [Error] [Operation] Error processing payment request
full stack trace of exception…
14/10/2015 7:00:10 [Debug] [Performance] Payment service executed in 1234ms
```
With log entries like this, it's much easier to get the context of an entry. We can search for all Performance entries, for example, and find bottlenecks, or get only the business-related (Operation) entries and filter out the technical ones.