Python: logging program state into multiple files for analysis

By Jay

How does the Python logging library enable us to store program state?

Introduction:

Large applications require deliberate, well-planned development strategies.
Functional languages such as Scala and Haskell encourage flexible, modular code by pushing side effects behind IO abstractions.

Python, by contrast, favors an object-oriented, imperative style, and Python programmers often reach for print statements to inspect what their code is doing.

Printing state to the console might look like a viable solution during local development, but it falls apart once the application runs in the cloud or makes remote calls and nobody is watching the terminal.

Logging the state and responses is a far better fit when reporting and debugging are the goal. Python ships with a logging library in its standard library, designed to capture and give clear visibility into what is happening in the application with little effort.

This post focuses on designing and setting up the program structure to log the program state concisely.


Imports:

We will use Python’s built-in logging library to meet these requirements.

import logging

The logging library is built around routing log records to configured destinations so they can be inspected from a central place. For beginners, the module provides an easy entry point: basicConfig().

logging.basicConfig(level=logging.INFO, filename='success_logs.txt', format='[%(asctime)s] %(levelname)s:%(message)s')

In the call above, basicConfig() receives three arguments: level, the minimum severity we want to capture (DEBUG / INFO / WARNING / ERROR); filename, the file the log records are written to; and format, the layout of each record.
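With that single call in place, the module-level logging functions write straight to the file. A minimal sketch, where the message strings are just placeholders:

import logging

logging.basicConfig(level=logging.INFO, filename='success_logs.txt', format='[%(asctime)s] %(levelname)s:%(message)s')

# INFO and above ends up in success_logs.txt; DEBUG is filtered out.
logging.info('Application started')
logging.warning('Low disk space')
logging.debug('Cache miss')  # not written, because DEBUG < INFO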

Formatter:

Although basicConfig() works well for simple scripts, it only configures the root logger (and repeated calls are ignored once handlers exist), so it is awkward for our use case of routing different parts of the program into different files.

We will instead use the Formatter class from the logging module. Formatters specify the layout of log records in the final output.

formatter = logging.Formatter('[%(asctime)s] %(levelname)s %(message)s')

FileHandler:

Our use case requires writing logs to specified file names, creating each file if it does not exist. The logging module provides the FileHandler class for exactly this.

A FileHandler opens the specified file and uses it as the stream for log output.

handler = logging.FileHandler(file_name)
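FileHandler also accepts a few optional arguments worth knowing about; the values below are illustrative choices rather than requirements:

# mode='a' appends to an existing file (the default), encoding avoids
# platform-dependent defaults, and delay=True postpones opening the file
# until the first record is emitted.
handler = logging.FileHandler(file_name, mode='a', encoding='utf-8', delay=True)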

setFormatter:

Prepending each message with the timestamp and log level in a consistent format is essential. setFormatter() sets the Formatter that the handler uses to render each record.

handler.setFormatter(formatter)

getLogger:

Next, we need to get a logger with the specified name:

specified_logger = logging.getLogger(log_name)

setLevel:

We must also set the level at which this logger should record messages; a new logger otherwise inherits the root logger’s effective level (WARNING by default), and our INFO records would be dropped.

specified_logger.setLevel(level)

addHandler:

The final step is to attach the handler to the logger, so that every record at or above the configured level is written to the specified file.

specified_logger.addHandler(handler)

Let’s wrap all of this up into a reusable function.

def extendable_logger(log_name, file_name, level=logging.INFO):
    # Build a formatter and a file handler dedicated to this logger.
    formatter = logging.Formatter('[%(asctime)s] %(levelname)s %(message)s')
    handler = logging.FileHandler(file_name)
    handler.setFormatter(formatter)
    # Fetch (or create) the named logger and wire everything together.
    specified_logger = logging.getLogger(log_name)
    specified_logger.setLevel(level)
    specified_logger.addHandler(handler)
    return specified_logger
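As a quick sketch of how this helper fans program state out into separate files (the logger names and file names below are arbitrary examples):

# Two independent loggers, each bound to its own file.
api_logger = extendable_logger('api_logs', 'api.txt')
db_logger = extendable_logger('db_logs', 'db.txt', level=logging.DEBUG)

api_logger.info('GET /users returned 200')   # written to api.txt only
db_logger.debug('executed SELECT 1')         # written to db.txt only

Note that getLogger() returns the same logger object for the same name, so calling the helper twice with an identical log_name attaches a second handler and duplicates every record.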

“Without a proper illustration, even a straight line looks zig-zag when imagined for the very first time” — Jay Reddy.

Let’s merge all our understanding up to this point into a class, make a couple of simple REST calls, and capture the logs in separate files.

import logging
import requests


class Log_level_capture(object):
    """
    Simple class to call in methods and
    capture INFO logs into multiple files.
    """
    def __init__(self):
        self.formatter = logging.Formatter('[%(asctime)s] %(levelname)s %(message)s')

    def extendable_logger(self, log_name, file_name, level=logging.INFO):
        # Create a named logger that writes to its own file.
        handler = logging.FileHandler(file_name)
        handler.setFormatter(self.formatter)
        specified_logger = logging.getLogger(log_name)
        specified_logger.setLevel(level)
        specified_logger.addHandler(handler)
        return specified_logger

    def google_call(self):
        # Fetch google.com and log the response body to google.txt.
        response = requests.get("https://www.google.com")
        google_logger = self.extendable_logger('google_logs', 'google.txt')
        google_logger.info(response.text)
        return response

    def scala_call(self):
        # Fetch scala-lang.org and log the response body to scala.txt.
        response = requests.get("https://www.scala-lang.org")
        scala_logger = self.extendable_logger('scala_logs', 'scala.txt')
        scala_logger.info(response.text)
        return response


if __name__ == '__main__':
    log_test = Log_level_capture()
    log_test.google_call()
    log_test.scala_call()

Output:

After running the script, google.txt and scala.txt each contain the HTML body of the corresponding response, with every record prefixed by a timestamp and the INFO level as defined by the formatter.
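Because every record follows the '[timestamp] LEVEL message' layout set by the formatter, the files are straightforward to post-process. Below is a minimal sketch of one way to analyze them; the regular expression and the choice to read google.txt are illustrative assumptions, not part of the logging setup itself:

import re

# Matches the '[%(asctime)s] %(levelname)s %(message)s' layout used above.
record = re.compile(r'^\[(?P<ts>[^\]]+)\] (?P<level>\w+) (?P<msg>.*)$')

with open('google.txt', encoding='utf-8') as log_file:
    for line in log_file:
        match = record.match(line)
        if match:  # continuation lines of a multi-line body are skipped
            print(match.group('ts'), match.group('level'), len(match.group('msg')))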

Conclusion:

In this post, we’ve seen how to configure Python’s standard logging library to generate context-rich logs and route them to the appropriate destinations.

We’ve also seen how to make HTTP calls and capture each response in its own log file for later analysis.

I hope this proves useful on your Python journey and helps you apply the pattern to your own use cases.
