Each container produces logs with valuable information. A log record is essentially data written by the container to STDOUT or STDERR. However, if you run a container in detached mode, you can’t view the logs in your console, because detached mode runs the container in the background of your terminal. Therefore, you won’t see any logs or other output from your Docker container.
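As a quick illustration (the container name and the nginx image here are just examples), starting a container in detached mode prints only the new container’s ID; anything the application writes to STDOUT or STDERR stays hidden until you explicitly ask for the logs.
docker run -d --name web nginx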
To display these valuable logs, you can use a range of Docker log commands. And in this article, you’ll learn how to display Docker logs in your console, tips and tricks related to viewing logs, and how to configure Docker to send logs to SolarWinds® Papertrail™.
Let’s first learn about the importance of Docker logs.
Importance of Docker Logs
Docker logs are important to developers and DevOps engineers. Logs are most useful when debugging or analyzing a problem because they provide insight into what went wrong. Therefore, developers and engineers can solve problems faster.
In addition, you can apply trend analysis to Docker logs to detect anomalies. For example, Loggly offers this feature for any type of logging, so you can detect anomalies faster and move from reactive to proactive monitoring.
How to View Docker Logs
Imagine we’re running a container and we want to access its logs. How can we carry out this task?
First, we can use the following command to check the containers that are currently running:
docker ps -a
This command prints a list of your containers (the -a flag includes stopped ones as well). You’ll see their identifiers, the images they use, their startup commands, how long they have been running, and other details.
For us, the most important parameter is the container ID, which we will use in the next step.
CONTAINER ID   IMAGE                 COMMAND                  CREATED       STATUS
fcae22fa4144   xyz/my-image:latest   "/opt/bin/entrypoint…"   2 hours ago   Up 2 minutes
155206af2f03   abc/my-image:latest   "/entrypoint.sh mysql"   2 hours ago   Up 2 minutes
Now that we’re sure our container is running, let’s use the container ID to view all its logs. Type the following command in the command line interface (CLI), replacing <container ID> with your container ID:
docker logs <container ID>
Although this will show us the logs, it won’t let us see a continuous stream of log output. In Docker jargon, creating a continuous stream of log output is called tailing the logs. To tail the logs of our container, we can use the --follow option.
docker logs --follow <container ID>
Next, let’s explore more interesting tricks related to viewing Docker logs.
Other Docker Logging Tricks
Here are three useful logging tricks you can access through your CLI.
#1: Show Only the Last Lines
In some cases, you don’t want to see all the logs for a container. Maybe something happened, and you want to quickly check the last 100 log lines for your container. In this case, you can use the --tail option to specify the number of lines you want to view:
docker logs --tail 100 <container ID>
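If you also want to keep watching new output after those last lines, --tail can be combined with the follow option; for example, the following command prints the last 100 lines and then keeps streaming:
docker logs --follow --tail 100 <container ID>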
#2: Stream Logs Until a Specific Point in Time
Docker also provides the option to stream only logs written before a specific point in time, so you don’t have to create an endless stream of log output. Here, you can use the --until option together with the --follow option. The --until option lets you specify a point in time; only logs written before that point are printed to the CLI. For example, the following command streams everything the container logged up until three seconds ago:
docker logs --follow --until=3s <container ID>
You can use different notations to designate the point in time. For example, a relative value such as 30m refers to 30 minutes ago, so the following command prints the logs written up until 30 minutes ago:
docker logs --until=30m <container ID>
#3: Stream Logs From a Specific Point in Time
The opposite is also possible with the Docker CLI. Let’s say you want to view the logs from a specific point in time up until now. The --since option helps with this task.
docker logs --since 2019-03-02 <container ID>
The accepted format here is YYYY-MM-DDTHH:MM:SS. This means you can specify a very precise timestamp from which to display logs, or a less specific one, as shown in the previous example.
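For instance, an illustrative timestamp in that full format looks like this:
docker logs --since 2019-03-02T13:23:37 <container ID>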
How to Send Docker Logs to Papertrail
In this section, we’ll give you a simple example of how you can configure Docker to send logs to Papertrail.
First, you can run a logspout container, which lets you configure an address for sending logs. The following example starts a logspout container configured to send logs to Papertrail.
docker run --restart=always -d \
  -v=/var/run/docker.sock:/var/run/docker.sock gliderlabs/logspout \
  syslog+tls://logsN.papertrailapp.com:XXXXX
Alternatively, you can configure the syslog log driver to tell the container where to send its logs. Be sure to change the syslog-address property to your own Papertrail address.
docker run --log-driver=syslog --log-opt syslog-address=udp://logsN.papertrailapp.com:XXXXX image-name
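If you’d rather not repeat these flags for every container, one possible approach is to make syslog the default log driver in the Docker daemon configuration (typically /etc/docker/daemon.json) and restart the Docker daemon afterward; the address below is the same placeholder used above:
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "udp://logsN.papertrailapp.com:XXXXX"
  }
}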
You can find more information in the Papertrail tutorial.
4 Best Practices for Docker Logging
Export logs to persistent storage
You can easily create and destroy containers. However, when a container is destroyed, it loses all the data inside it. Therefore, never store application-specific data in the container.
For the same reason, you need to take good care of your logs. Logs can be stored on a persistent volume, but it’s even better to store them somewhere long-term. For example, you can pipe logs to a local hard drive or send them to a log management platform. Both options allow you to keep your logs long-term and use them for future analysis.
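As a minimal sketch of the volume option (the volume name, mount path, and image name here are made up for illustration), mounting a named volume keeps the log files around even after the container itself is removed:
docker run -d --name my-app -v app-logs:/var/log/my-app my-image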
Consider using a log container
A log container helps you scale your logging. The idea is to pipe the log output from multiple containers to a single log container. The log container then takes care of saving the logs to persistent storage or shipping them to a log management platform.
In addition, you can spin up multiple log containers to scale the logging service when you decide to host more containers. It’s a flexible and easy way to handle log output.
Log data to the standard output channels
Before you can collect logs from your containers, you must ensure the applications running in those containers log their data to STDOUT or STDERR, the standard channels for output and error messages. Docker is configured to automatically collect data from both outputs. If you log data to a file inside your container, you risk losing all of it when the container crashes or is destroyed. Therefore, if you don’t want to lose important log data, log to STDOUT or STDERR.
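If an application insists on writing to log files, one common workaround (the official nginx image uses it, with the nginx-specific paths shown here) is to symlink those files to the standard streams in your Dockerfile:
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
    && ln -sf /dev/stderr /var/log/nginx/error.log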
Log data in JSON format
Docker supports the JSON logging format, and it’s recommended to log data in this format. Docker stores logs as JSON files, so it’s optimized to handle JSON data.
For this reason, many Node.js logging libraries, such as Bunyan or Winston, log data in JSON format.
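You can see this for yourself: assuming the default json-file log driver, docker inspect reveals where a container’s JSON log file lives, and each line in that file is a JSON object containing the log message, the stream it came from, and a timestamp.
docker inspect --format='{{.LogPath}}' <container ID>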
Final Words About Docker Logs
As you’ve seen, Docker provides several CLI options for displaying logs in different ways. For example, you can display the logs written before or after a specified point in time. In addition, the --follow option is one of the most widely used Docker logging options because it allows developers to live tail the logs of a specific container.
Finally, you can configure Docker to ship logs to a log management platform, such as Papertrail, which also helps you visualize your logs. Papertrail allows you to easily monitor logs and gives developers the ability to create alerts to warn them if an anomaly is detected.
Do you want to try it? You can sign up for a free trial of Papertrail now and see how it can work for you.
This post was written by Michiel Mulders. Michiel is a passionate blockchain developer who loves to write technical content. On top of that, he loves learning about marketing, UX psychology, and entrepreneurship. When he’s not writing, he’s probably enjoying a Belgian beer!