How To Install and Configure Elasticsearch on Ubuntu 18.04

An earlier version of this article was written by Toli.

Introduction

Elasticsearch is a platform for distributed search and real-time data analysis. It is a popular choice due to its ease of use, powerful features, and scalability.

This article will walk you through installing Elasticsearch, configuring it for your use case, securing your installation, and getting started working with your Elasticsearch server.

Prerequisites

Before following this tutorial, you will need:

  • An Ubuntu 18.04 server with 4 GB of RAM and 2 CPUs, configured with a non-root sudo user. You can achieve this by following the Initial Server Setup with Ubuntu 18.04 guide.

  • OpenJDK 11 installed. For instructions, see our guide How to Install Java with Apt on Ubuntu 18.04.

For this tutorial, we’ll work with the minimum amount of CPU and RAM required to run Elasticsearch. Note that the amount of CPU, RAM, and storage your Elasticsearch server will require depends on the volume of logs you expect.

Step 1 — Installing Elasticsearch

The Elasticsearch components are not available in Ubuntu's default package repositories. However, they can be installed with APT after adding Elastic's package source list.

All packages are signed with the Elasticsearch signing key to protect your system from package spoofing. Packages that have been authenticated with the key will be considered trusted by your package manager. In this step, you will import the Elasticsearch public GPG key and add the list of sources from the Elastic package to install Elasticsearch.

To get started, use cURL, the command-line tool for transferring data with URLs, to import the Elasticsearch public GPG key into APT. Note that we are using the -fsSL flags to silence all progress output and most errors (except for a server failure), and to let cURL follow a redirect if the request is sent to a new location. Pipe the output of the cURL command into the apt-key program, which adds the public GPG key to APT:

curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

Next, add the Elastic source list to the sources.list.d directory, where APT will search for new sources:

echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

Next, update your package lists so APT will read the new Elastic source:

sudo apt update

Then install Elasticsearch with this command:

sudo apt install elasticsearch

Elasticsearch is now installed and ready to be configured.

Step 2 — Configuring Elasticsearch

To configure Elasticsearch, we’ll edit your main elasticsearch.yml configuration file where most of your configuration settings are stored. This file is located in the /etc/elasticsearch directory.

Use your preferred text editor to edit the Elasticsearch configuration file. Here, we will use nano:

sudo nano /etc/elasticsearch/elasticsearch.yml

The elasticsearch.yml file provides configuration options for the cluster, node, paths, memory, network, discovery, and gateway. Most of these options are preconfigured in the file, but you can change them according to your needs. For the purposes of our demonstration of a single server configuration, we will only adjust the configuration for the network host.
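For orientation, the sections of elasticsearch.yml look roughly like the excerpt below. These lines ship commented out with placeholder values, and the exact entries vary by Elasticsearch version, so treat this as an illustrative sketch rather than a listing of your file:

```yaml
# ---------- Cluster ----------
#cluster.name: my-application         # name shared by every node in the cluster

# ---------- Node ----------
#node.name: node-1                    # human-readable name for this node

# ---------- Paths ----------
#path.data: /var/lib/elasticsearch    # where index data is stored

# ---------- Network ----------
#network.host: 192.168.0.1            # interface(s) Elasticsearch binds to
```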

Elasticsearch listens for traffic from everywhere on port 9200. You will want to restrict outside access to your Elasticsearch instance to prevent outsiders from reading your data or shutting down your Elasticsearch cluster through its REST API (https://en.wikipedia.org/wiki/Representational_state_transfer). To restrict access and therefore increase security, find the line that specifies network.host, uncomment it, and replace its value with localhost so it looks like this:

. . .
# – Network –
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: localhost
. . .

We specified localhost so that Elasticsearch listens only on the loopback interface. If you want it to listen on a specific interface instead, you can specify its IP in place of localhost. Save and close elasticsearch.yml. If you're using nano, you can do so by pressing CTRL+X, followed by Y, and then ENTER.

These are the minimum settings you can start with in order to use Elasticsearch. Now you can start Elasticsearch for the first time.

Start the Elasticsearch service with systemctl. Give Elasticsearch a few moments to start up; otherwise, you may get errors about not being able to connect:

sudo systemctl start elasticsearch

Next, run the following command to enable Elasticsearch to start up every time your server boots:

sudo systemctl enable elasticsearch

With Elasticsearch enabled upon startup, let's move on to the next step to discuss security.

Step 3 — Securing Elasticsearch

By default, Elasticsearch can be controlled by anyone who can access the HTTP API. This is not necessarily a security vulnerability, because Elasticsearch listens only on the loopback interface (that is, 127.0.0.1), which can only be accessed locally. Thus, no public access is possible, and as long as all server users are trusted, security may not be a major concern.
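To see what binding to the loopback interface means in practice, here is a small Python sketch (not related to Elasticsearch itself) that opens a listener on 127.0.0.1. Such a socket is reachable only from the machine it runs on, which is exactly why a localhost-bound Elasticsearch is shielded from outside traffic:

```python
import socket

# Bind a TCP listener to the loopback interface only, analogous to
# what network.host: localhost does for Elasticsearch.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0 lets the OS pick any free port
server.listen(1)

host, port = server.getsockname()
print(f"listening on {host}:{port}")  # reachable only from this machine
server.close()
```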

If you need to allow remote access to the HTTP API, you can limit network exposure with Ubuntu’s default firewall, UFW. This firewall should already be enabled if you followed the steps in the initial server setup tutorial with Ubuntu 18.04 as a prerequisite.

Now we'll configure the firewall to allow access to the default Elasticsearch HTTP API port (TCP 9200) for the trusted remote host, generally the server you are using in a single-server setup, such as 198.51.100.0. To allow access, type the following command:

sudo ufw allow from 198.51.100.0 to any port 9200

Once that is complete, you can enable UFW with the command:

sudo ufw enable

Finally, check the status of UFW with the following command:

sudo ufw status

If you specified the rules correctly, the output should look like this:

Output
Status: active

To                         Action      From
--                         ------      ----
9200                       ALLOW       198.51.100.0
22                         ALLOW       Anywhere
22 (v6)                    ALLOW       Anywhere (v6)

UFW should now be enabled and configured to protect Elasticsearch on port 9200.

If you want to invest in additional protection, Elasticsearch offers the commercial Shield plugin for purchase.

Step 4 — Testing Elasticsearch

By now, Elasticsearch should be running on port 9200. You can test it with cURL and a GET request:

curl -X GET 'http://localhost:9200'

You should see the following response:

Output
{
  "name" : "My First Node",
  "cluster_name" : "mycluster1",
  "version" : {
    "number" : "2.3.1",
    "build_hash" : "bd980929010aef404e7cb0843e61d0665269fc39",
    "build_timestamp" : "2020-04-04T12:25:05Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.0"
  },
  "tagline" : "You know, for search"
}

If you see a response similar to the one above, Elasticsearch works correctly. If not, make sure you’ve followed the installation instructions correctly and allowed time for Elasticsearch to fully boot.
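If you would rather check the response programmatically than by eye, you can parse the JSON body with Python's standard library. The snippet below uses a hard-coded sample body modeled on the response above (field names name and cluster_name follow Elasticsearch's response format); in practice you would read the body from the HTTP response:

```python
import json

# Sample body modeled on the response from `curl -X GET 'http://localhost:9200'`.
response_body = """
{
  "name" : "My First Node",
  "cluster_name" : "mycluster1",
  "version" : { "number" : "2.3.1", "lucene_version" : "5.5.0" },
  "tagline" : "You know, for search"
}
"""

info = json.loads(response_body)
print(info["cluster_name"])       # prints: mycluster1
print(info["version"]["number"])  # prints: 2.3.1
```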

To perform a more thorough check of Elasticsearch, run the following command:

curl -XGET 'http://localhost:9200/_nodes?pretty'

In the output from the above command, you can verify all the current settings for the node, cluster, application paths, modules, and more.

Step 5 — Using Elasticsearch

To get started with Elasticsearch, let’s first add some data. Elasticsearch uses a RESTful API, which responds to the usual CRUD commands: create, read, update, and delete. To work with it, we will use the cURL command again.

You can add your first entry like so:

curl -XPOST -H "Content-Type: application/json" 'http://localhost:9200/tutorial/helloworld/1' -d '{ "message": "Hello World!" }'

You should receive the following response:

Output
{"_index":"tutorial","_type":"helloworld","_id":"1","_version":1,"result":"created","_shards":{"total":2,"successful":1,"failed":0},"_seq_no":0,"_primary_term":1}

With cURL, we have sent an HTTP POST request to the Elasticsearch server. The request URI was /tutorial/helloworld/1, with several parameters:

  • tutorial is the index of the data in Elasticsearch.

  • helloworld is the type.

  • 1 is the ID of our entry under the above index and type.
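These three parts map directly onto the request path. As a small illustration (document_uri is a hypothetical helper written for this tutorial, not part of any Elasticsearch client):

```python
# Build the REST path for a single Elasticsearch document from its
# index, type, and ID.
BASE_URL = "http://localhost:9200"  # default Elasticsearch HTTP endpoint

def document_uri(index: str, doc_type: str, doc_id: str) -> str:
    """Return the full URI addressing one document."""
    return f"{BASE_URL}/{index}/{doc_type}/{doc_id}"

print(document_uri("tutorial", "helloworld", "1"))
# prints: http://localhost:9200/tutorial/helloworld/1
```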

You can retrieve this first entry with an HTTP GET request:

curl -X GET 'http://localhost:9200/tutorial/helloworld/1'

This should be the resulting output:

Output
{"_index":"tutorial","_type":"helloworld","_id":"1","_version":1,"found":true,"_source":{ "message": "Hello World!" }}

To modify an existing entry, you can use an HTTP PUT request:

curl -X PUT -H "Content-Type: application/json" 'localhost:9200/tutorial/helloworld/1?pretty' -d '{ "message": "Hello, People!" }'

Elasticsearch should acknowledge a successful modification like this:

Output
{
  "_index" : "tutorial",
  "_type" : "helloworld",
  "_id" : "1",
  "_version" : 2,
  "_shards" : {
    "total" : 2,
    "successful" : 1,
    "failed" : 0
  },
  "created" : false
}

In the previous example, we modified the message of the first entry to "Hello, People!". With that, the version number automatically increased to 2.

You may have noticed the extra ?pretty argument in the previous request. It enables a human-readable format, so that each data field is written on its own row. You can also "prettify" your results when retrieving data, in order to get much more readable output, by entering the following command:

curl -X GET 'http://localhost:9200/tutorial/helloworld/1?pretty'

Now the response will be formatted for a human to parse:

Output
{
  "_index" : "tutorial",
  "_type" : "helloworld",
  "_id" : "1",
  "_version" : 2,
  "found" : true,
  "_source" : {
    "message" : "Hello, People!"
  }
}

We have now added and queried data in Elasticsearch. For information about the other operations, see the API documentation.

Conclusion

You've now installed, configured, and begun using Elasticsearch. Since Elasticsearch's initial release, Elastic has developed three additional tools: Logstash, Kibana, and Beats, which can be used in conjunction with Elasticsearch as part of the Elastic Stack. Used together, these tools allow you to search, analyze, and visualize logs generated from any source and in any format, a practice known as centralized logging. To get started with the Elastic Stack on Ubuntu 18.04, check out our guide How To Install Elasticsearch, Logstash, and Kibana (Elastic Stack) on Ubuntu 18.04.