Update, November 17, 2016: We took this series of blog posts, expanded it, and turned it into a book called Terraform: Up & Running!
Update, July 8, 2019: We’ve updated this series of blog posts for Terraform 0.12 and released the second edition of Terraform: Up & Running!
Update, September 28, 2022: We’ve updated this blog post series for Terraform 1.2 and released the 3rd edition of Terraform: Up & Running!
This is Part 1 of the Complete Terraform Guide series. In the introduction to the series, we discussed why every company should be using infrastructure as code (IaC). In this post, we’re going to discuss why we picked Terraform as our IaC tool of choice.
If you search the internet for “infrastructure as code”, it’s pretty easy to find a list of the most popular tools:
- Chef
- Puppet
- Ansible
- Pulumi
- CloudFormation
- Terraform
- Heat
What’s not easy is figuring out which one of these you should use. All of these tools can be used to manage infrastructure as code. All of them are open source, backed by large communities of contributors, and work with many different cloud providers (with the notable exception of CloudFormation, which is closed source and only works with AWS). All of them offer enterprise support. All of them are well documented, both in terms of official documentation and community resources such as blog posts and Stack Overflow questions. So how do you decide?
What makes this even more difficult is that most of the comparisons you find online between these tools do little more than list the general properties of each tool and make it look like you could have the same success with any of them. And while that’s technically true, it’s not helpful. It’s a bit like telling a programming newbie that you could have just as much success building a website with PHP, C, or Assembly, a statement that’s technically true, but omits a lot of information that would be incredibly helpful in making a good decision.
In this post, we’re going to dive into some very specific reasons why we picked Terraform over the other IaC tools. As with all technology decisions, it’s a question of trade-offs and priorities, and while your particular priorities may be different from ours, we hope that sharing our thought process will help you make your own decision. Here are the main trade-offs we considered:
- Configuration Management vs Provisioning
- Mutable Infrastructure vs Immutable Infrastructure
- Procedural vs Declarative
- General-Purpose Language vs Domain-Specific Language
- Master vs Masterless
- Agent vs Agentless
- Paid vs Free Offering
- Large Community vs Small Community
- Mature vs Cutting Edge
- Use of Multiple Tools Together
Configuration Management vs Provisioning

Chef, Puppet, and Ansible are configuration management tools, which means they are designed to install and manage software on existing servers. CloudFormation, Heat, Pulumi, and Terraform are provisioning tools, which means they are designed to provision the servers themselves (as well as the rest of your infrastructure, such as load balancers, databases, and network configuration), leaving the job of configuring those servers to other tools. The distinction is not entirely clear-cut, since configuration management tools can typically do some degree of provisioning (e.g., you can deploy a server with Ansible) and provisioning tools can typically do some degree of configuration (e.g., you can run configuration scripts on each server you provision with Terraform), but you generally want to pick the tool that’s the best fit for your use case.
In particular, we’ve found that if you use Docker or Packer, the vast majority of your configuration management needs are already taken care of. With Docker and Packer, you can create images (such as containers or virtual machine images) that have all the software your server needs already installed and configured. Once you have such an image, all you need is a server to run it. And if all you need to do is provision a bunch of servers, then a provisioning tool like Terraform will usually be better suited than a configuration management tool (here’s an example of how to use Terraform to deploy Docker on AWS).
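To make the server templating idea concrete, here is a minimal sketch of a Packer template (in HCL) that bakes software into an AMI; the region, source AMI, image name, and installed package are all hypothetical choices for illustration:

```hcl
packer {
  required_plugins {
    amazon = {
      version = ">= 1.0"
      source  = "github.com/hashicorp/amazon"
    }
  }
}

locals {
  # Unique suffix so each build produces a distinct image
  timestamp = regex_replace(timestamp(), "[- TZ:]", "")
}

source "amazon-ebs" "ubuntu" {
  ami_name      = "my-app-${local.timestamp}" # hypothetical name
  instance_type = "t2.micro"
  region        = "us-east-2"                 # hypothetical region
  source_ami    = "ami-0fb653ca2d3203ac1"     # Ubuntu 20.04
  ssh_username  = "ubuntu"
}

build {
  sources = ["source.amazon-ebs.ubuntu"]

  # Install and configure the software once, at image-build time
  provisioner "shell" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get install -y nginx",
    ]
  }
}
```

Once an image like this exists, the configuration work is done, and all a provisioning tool has to do is launch servers from it.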
That said, if you’re not using server templating tools, a good alternative is to use a provisioning tool and a configuration management tool together. For example, a popular combination is to use Terraform to provision your servers and Ansible to configure each one.
Mutable Infrastructure vs Immutable Infrastructure

Configuration management tools such as Chef, Puppet, and Ansible typically default to a mutable infrastructure paradigm. For example, if you instruct Chef to install a new version of OpenSSL, it will run the software update on your existing servers, and the changes will happen in place. Over time, as you apply more and more updates, each server builds up a unique history of changes. As a result, each server becomes slightly different from all the others, leading to subtle configuration bugs that are difficult to diagnose and reproduce (configuration drift). Even with automated tests, these bugs are hard to catch; a configuration management change may work fine on a test server, but that same change may behave differently on a production server because the production server has accumulated months of changes that aren’t reflected in the test environment.
If you’re using a provisioning tool like Terraform to deploy machine images created by Docker or Packer, most “changes” are actually deployments of a completely new server. For example, to deploy a new version of OpenSSL, you would use Packer to create a new image with the new version of OpenSSL, deploy that image across a new set of servers, and then terminate the old servers. Because every deployment uses immutable images on fresh servers, this approach reduces the likelihood of configuration drift bugs, makes it easier to know exactly what software is running on each server, and allows you to easily deploy any previous version of your software (any previous image) at any time. It also makes automated testing more effective, since an immutable image that passes your tests in the test environment is likely to behave exactly the same way in production.
Of course, it’s possible to force configuration management tools to do immutable deployments too, but it’s not the idiomatic approach for those tools, whereas it’s a natural way to use provisioning tools. It’s also worth mentioning that the immutable approach has downsides of its own. For example, rebuilding an image from a server template and redeploying all your servers for a trivial change can be time-consuming. Moreover, immutability only lasts until you actually run the image. Once a server is up and running, it will begin making changes to its hard drive and experiencing some degree of configuration drift (although this is mitigated if you deploy frequently).
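As a sketch of what the immutable pattern looks like in Terraform (the resource and variable names here are hypothetical), you can have Terraform replace servers, rather than mutate them, whenever the image changes, and bring the new server up before the old one is destroyed:

```hcl
variable "ami_id" {
  description = "The image to deploy; bump this to roll out a new version"
  type        = string
}

resource "aws_instance" "app" {
  ami           = var.ami_id
  instance_type = "t2.micro"

  # Changing the AMI replaces the server instead of updating it in
  # place; create_before_destroy launches the replacement before the
  # old server is terminated, reducing downtime during the rollout.
  lifecycle {
    create_before_destroy = true
  }
}
```

With this setup, deploying a new version of OpenSSL means building a new image, updating `ami_id`, and letting Terraform swap the servers.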
Procedural vs Declarative

Chef and Ansible encourage a procedural style in which you write code that specifies, step by step, how to achieve some desired end state. Terraform, CloudFormation, Pulumi, Heat, and Puppet all encourage a more declarative style in which you write code that specifies your desired end state, and the IaC tool itself is responsible for figuring out how to achieve that state.
To demonstrate the difference, let’s go through an example. Imagine that you want to deploy 10 servers (EC2 Instances in AWS lingo) to run an AMI with ID ami-0fb653ca2d3203ac1 (Ubuntu 20.04). Here is a simplified example of an Ansible template that does this using a procedural approach:

```yaml
- ec2:
    count: 10
    image: ami-0fb653ca2d3203ac1
    instance_type: t2.micro
```
And here’s a simplified example of a Terraform template that does the same thing using a declarative approach:

```hcl
resource "aws_instance" "example" {
  count         = 10
  ami           = "ami-0fb653ca2d3203ac1"
  instance_type = "t2.micro"
}
```
On the surface, these two approaches may seem similar, and when you initially run them with Ansible or Terraform, they will produce similar results. What’s interesting is what happens when you want to make a change.
For example, suppose traffic has gone up and you want to increase the number of servers to 15. With Ansible, the procedural code you wrote earlier is no longer useful; if you just updated the number of servers to 15 and reran that code, it would deploy 15 new servers, giving you 25 in total! So, instead, you have to be aware of what has already been deployed and write a totally new procedural script to add the 5 new servers:

```yaml
- ec2:
    count: 5
    image: ami-0fb653ca2d3203ac1
    instance_type: t2.micro
```
With declarative code, because all you do is declare the end state you want and Terraform figures out how to get to that end state, Terraform is also aware of any state it created in the past. Therefore, to deploy 5 more servers, all you need to do is go back to the same Terraform configuration and update the count from 10 to 15:

```hcl
resource "aws_instance" "example" {
  count         = 15
  ami           = "ami-0fb653ca2d3203ac1"
  instance_type = "t2.micro"
}
```
If you applied this configuration, Terraform would realize that it had already created 10 servers and therefore needs to create only 5 new ones. In fact, before applying this configuration, you can use Terraform’s plan command to preview the changes it would make:

```
$ terraform plan

# aws_instance.example[10] will be created
+ resource "aws_instance" "example" {
    + ami           = "ami-0fb653ca2d3203ac1"
    + instance_type = "t2.micro"
    (...)
  }

# aws_instance.example[11] will be created
+ resource "aws_instance" "example" {
    + ami           = "ami-0fb653ca2d3203ac1"
    + instance_type = "t2.micro"
    (...)
  }

# aws_instance.example[12] will be created
+ resource "aws_instance" "example" {
    + ami           = "ami-0fb653ca2d3203ac1"
    + instance_type = "t2.micro"
    (...)
  }

# aws_instance.example[13] will be created
+ resource "aws_instance" "example" {
    + ami           = "ami-0fb653ca2d3203ac1"
    + instance_type = "t2.micro"
    (...)
  }

# aws_instance.example[14] will be created
+ resource "aws_instance" "example" {
    + ami           = "ami-0fb653ca2d3203ac1"
    + instance_type = "t2.micro"
    (...)
  }

Plan: 5 to add, 0 to change, 0 to destroy.
```
Now, what happens when you want to deploy a different version of the app, such as AMI ID ami-02bcbb802e03574ba? With the procedural approach, both of your previous Ansible templates are again of no use, so you’d need to write yet another template to track down the 10 servers you deployed previously (or was it 15 now?) and carefully update each one to the new version. With Terraform’s declarative approach, you go back to the exact same configuration file and simply change the ami parameter to ami-02bcbb802e03574ba:

```hcl
resource "aws_instance" "example" {
  count         = 15
  ami           = "ami-02bcbb802e03574ba"
  instance_type = "t2.micro"
}
```
Obviously, these examples are simplified. Ansible allows you to use tags to find existing EC2 instances before deploying new ones (for example, using the instance_tags and count_tag parameters), but having to manually figure out this type of logic for each resource you manage with Ansible, based on the past history of each resource, can be surprisingly complicated: for example, you might have to manually configure code to search for existing instances not just by tag, but also by image version, availability zone and other parameters. This highlights two main problems with procedural IaC tools:
- Procedural code does not fully capture the state of the infrastructure. Reading the three previous Ansible templates is not enough to know what’s deployed. You’d also have to know the order in which those templates were applied. Had you applied them in a different order, you might have ended up with different infrastructure, and that’s not something you can see in the codebase itself. In other words, to reason about an Ansible or Chef codebase, you have to know the full history of every change that has ever happened.
- The procedural code limits reuse. Reuse of procedural code is inherently limited because you must manually consider the current state of the infrastructure. Because that state is constantly changing, the code you used a week ago may no longer be usable because it was designed to modify a state of your infrastructure that no longer exists. As a result, procedural codebases tend to grow large and complicated over time.
With Terraform’s declarative approach, the code always represents the latest state of your infrastructure. At a glance, you can determine what’s currently deployed and how it’s configured, without having to worry about history or timing. This also makes it easy to create reusable code, as you don’t have to manually account for the current state of the world. Instead, you just focus on describing your desired state, and Terraform figures out how to get from one state to the other automatically. As a result, Terraform codebases tend to stay small and easy to understand.
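This is also what makes Terraform modules practical for reuse: because the code is declarative, the same module can be instantiated repeatedly without any knowledge of past deployments. A minimal sketch, where the module path and input names are hypothetical:

```hcl
# Reuse one hypothetical module for two environments; each module
# block declares its own desired end state, and Terraform tracks
# each instance's state independently.
module "staging_cluster" {
  source         = "./modules/web-cluster" # hypothetical local module
  instance_count = 5
  ami            = "ami-0fb653ca2d3203ac1"
}

module "production_cluster" {
  source         = "./modules/web-cluster"
  instance_count = 15
  ami            = "ami-0fb653ca2d3203ac1"
}
```

Scaling either environment is just a matter of changing its `instance_count` input and re-running plan and apply.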
General-Purpose Language vs Domain-Specific Language

Chef and Pulumi allow you to use a general-purpose programming language (GPL) to manage infrastructure as code: Chef supports Ruby; Pulumi supports a wide variety of GPLs, including JavaScript, TypeScript, Python, Go, C#, Java, and others. Terraform, Puppet, Ansible, CloudFormation, and OpenStack Heat each use a domain-specific language (DSL) to manage infrastructure as code: Terraform uses HCL; Puppet uses Puppet Language; Ansible, CloudFormation, and OpenStack Heat use YAML (CloudFormation also supports JSON).
The distinction between GPLs and DSLs is not entirely clear-cut (it’s more of a useful mental model than a clean, separate categorization), but the basic idea is that DSLs are designed for use in one specific domain, whereas GPLs can be used across a wide variety of domains. For example, the HCL code you write for Terraform only works with Terraform, and is limited to the functionality Terraform supports, such as deploying infrastructure. This is in contrast to using a GPL like JavaScript with Pulumi, where the code you write can not only manage infrastructure using Pulumi’s libraries, but also perform almost any other programming task, such as running a web app (in fact, Pulumi offers an Automation API that you can use to embed Pulumi within your application code), performing complicated control logic (loops, conditionals, and abstraction are easier to do in a GPL than in a DSL), running various validations and tests, integrating with other tools and APIs, and so on.
DSLs have several advantages over GPLs:
- Easier to learn. Since DSLs, by design, deal with a single domain, they tend to be smaller and simpler languages than GPLs and are therefore easier to learn than GPLs. Most developers will be able to learn Terraform faster than, say, Java.
- Clearer and more concise. Because DSLs are designed for one specific purpose, with all the keywords in the language built to do that one thing, code written in a DSL tends to be easier to understand and more concise than code written to do exactly the same thing in a GPL. The code for deploying a single server in AWS will typically be shorter and easier to understand in Terraform than in Java.
- More uniform. Most DSLs are limited in what they allow you to do. This has some drawbacks, as I’ll discuss shortly, but one advantage is that code written in a DSL typically follows a uniform, predictable structure, making it easier to navigate and understand than code written in a GPL, where each developer may solve the same problem in a completely different way. There’s really only one way to deploy a server in AWS with Terraform; there are hundreds of ways to do it in Java.
GPLs also have several advantages over DSLs:
- There may be no need to learn anything new. Since the GPL is used in many domains, there’s a chance you won’t have to learn a new language at all. This is especially true in Pulumi, as it supports several of the world’s most popular languages, including JavaScript, Python, and Java. If you already know Java, you’ll be able to jump into Pulumi faster than if you had to learn HCL to use Terraform.
- Bigger ecosystem and more mature tools. Since GPLs are used in many domains, they have much larger communities and much more mature tools than a typical DSL. The number and quality of integrated development environments (IDEs), libraries, patterns, testing tools, etc. for Java far exceed what is available for Terraform.
- More power. GPLs, by design, can be used to perform almost any programming task, so they offer much more power and functionality than DSLs. Certain tasks, such as control logic (loops and conditionals), automated testing, code reuse, abstraction, and integration with other tools, are much easier with Java than with Terraform.
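To illustrate that last point, Terraform’s HCL does support basic control logic, but only through its own idioms: conditionals are typically simulated with ternary expressions and a count of 0 or 1, and loops with count or for_each. A sketch with hypothetical variable and resource names:

```hcl
variable "enable_monitoring" {
  type    = bool
  default = false
}

variable "user_names" {
  type    = set(string)
  default = ["alice", "bob"] # hypothetical users
}

# A "conditional" in HCL: create 0 or 1 copies of the resource.
resource "aws_cloudwatch_dashboard" "main" {
  count          = var.enable_monitoring ? 1 : 0
  dashboard_name = "main"
  dashboard_body = jsonencode({ widgets = [] })
}

# A "loop" in HCL: one IAM user per name in the set.
resource "aws_iam_user" "users" {
  for_each = var.user_names
  name     = each.value
}
```

This is workable, but it’s easy to see why equivalent logic in a GPL, with real if-statements and for-loops, can feel more natural.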
Master vs Masterless

By default, Chef and Puppet require that you run a master server to store the state of your infrastructure and distribute updates. Every time you want to update something in your infrastructure, you use a client (for example, a command-line tool) to issue new commands to the master server, and the master server either pushes the updates out to all the other servers or those servers pull the latest updates down from the master server on a regular basis.
A master server offers a few advantages. First, it’s a single, central place where you can see and manage the status of your infrastructure. Many configuration management tools even provide a web interface (e.g., the Chef Console, Puppet Enterprise Console) for the master server to make it easier to see what’s going on. Second, some master servers can run continuously in the background and enforce your configuration. That way, if someone makes a manual change to a server, the master server can revert that change to prevent configuration drift.
However, having to run a master server has some serious drawbacks:
- Extra infrastructure. You have to deploy an extra server, or even a cluster of extra servers (for high availability and scalability), just to run the master.
- Maintenance. You have to maintain, upgrade, back up, monitor, and scale the master server(s).
- Security. You have to provide a way for the client to communicate with the master server(s) and a way for the master server(s) to communicate with all the other servers, which typically means opening extra ports and configuring extra authentication systems, all of which increases your surface area to attackers.
Chef and Puppet have different levels of support for masterless modes in which you run only your agent software on each of your servers, usually on a periodic schedule (e.g. a cron job that runs every five minutes) and use it to pull the latest updates from version control (rather than a master server). This significantly reduces the number of moving parts, but, as I discuss in the next section, this still leaves a number of questions unanswered, especially about how to provision the servers and install the agent software on them in the first place.
Ansible, CloudFormation, Heat, Terraform, and Pulumi are all masterless by default. Or, to be more precise, some of them rely on a master server, but it’s already part of the infrastructure you’re using and not an extra piece you have to manage. For example, Terraform communicates with cloud providers using the cloud providers’ APIs, so in some sense, the API servers are master servers, except they don’t require any extra infrastructure or any extra authentication mechanisms (i.e., you just use your API keys). Ansible works by connecting directly to each server over SSH, so again, you don’t have to run any extra infrastructure or manage extra authentication mechanisms (i.e., you just use your SSH keys).
Agent vs Agentless

Chef and Puppet require you to install agent software (for example, Chef Client, Puppet Agent) on each server you want to configure. The agent typically runs in the background on each server and is responsible for installing the latest configuration management updates.
This has some drawbacks:
- Bootstrapping. How do you provision your servers and install the agent software on them in the first place? Some configuration management tools kick the can down the road, assuming some external process will take care of this for them (for example, you first use Terraform to deploy a bunch of servers with an AMI that already has the agent installed); other configuration management tools have a special boot process where you run one-time commands to provision the servers using the cloud provider’s APIs and install the agent software on those servers via SSH.
- Maintenance. You have to update the agent software periodically, taking care to keep it synchronized with the master server if there is one. You also have to monitor the agent software and restart it if it crashes.
- Security. If the agent software pulls configuration down from a master server (or some other server if you’re not using a master), you have to open outbound ports on every server. If the master server pushes configuration to the agent, you have to open inbound ports on every server. In either case, you have to figure out how to authenticate the agent to the server it’s communicating with. All of this increases your surface area to attackers.
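The bootstrapping workaround mentioned above, using a provisioning tool to launch servers from an image that already includes the agent, might be sketched in Terraform as follows (the AMI ID, install path, and master hostname are all hypothetical):

```hcl
resource "aws_instance" "managed_node" {
  # Hypothetical AMI baked (e.g., with Packer) with the Chef Client
  # or Puppet Agent already installed.
  ami           = "ami-0fb653ca2d3203ac1"
  instance_type = "t2.micro"

  # Alternatively, install and enroll the agent on first boot via a
  # hypothetical bootstrap script passed as user data.
  user_data = <<-EOF
              #!/bin/bash
              /opt/agent/install.sh --server master.example.com
              EOF
}
```

Either way, note that a provisioning tool ends up doing the initial heavy lifting before the configuration management agent can take over.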
Once again, Chef and Puppet offer varying levels of support for agentless modes, but these feel like they were tacked on as an afterthought and don’t support the full feature set of the configuration management tool. That’s why, in the wild, the default or idiomatic configuration for Chef and Puppet almost always includes an agent, and usually a master as well.
All of these extra moving parts introduce a large number of new failure modes into your infrastructure. Each time you get a bug report at 3 a.m., you’ll have to figure out whether it’s a bug in your application code, or in your IaC code, or in the configuration management client, or in the master server(s), or in the way the client talks to the master server(s), or in the way other servers talk to the master server(s), or…
Ansible, CloudFormation, Heat, Terraform, and Pulumi do not require you to install any additional agents. Or, to be more precise, some of them require agents, but these are usually already installed as part of the infrastructure you’re using. For example, AWS, Azure, Google Cloud, and all other cloud providers install, manage, and authenticate the agent software on each of your physical servers. As a Terraform user, you don’t need to worry about any of that: you simply issue commands and the cloud provider’s agents run them for you on all their servers. With Ansible, your servers need to run the SSH daemon, which is common to run on most servers anyway.
Paid vs Free Offering

CloudFormation and OpenStack Heat are completely free: the resources you deploy with those tools can cost money, but you pay nothing to use the tools themselves. Terraform, Chef, Puppet, Ansible, and Pulumi are all available in both free and paid versions: for example, you can use the free, open-source version of Terraform on its own, or you can choose to use it with HashiCorp’s paid product, Terraform Cloud. Price points, packaging, and the trade-offs of the paid versions are beyond the scope of this blog post. The only question I want to focus on here is whether the free version is so limited that you’re forced into the paid offering for real-world production use cases.
To be clear, there is nothing wrong with a company offering a paid service for one of these tools; in fact, if you are using these tools in production, I highly recommend that you look for the paid services, as many of them are worthwhile. However, you need to realize that those paid services are not under your control: they could shut down or be acquired (e.g., Chef, Puppet, and Ansible have gone through acquisitions that had a significant impact on their paid product offerings), or change their pricing model (e.g., Pulumi changed its price in 2021, which benefited some users but increased prices by ~10 times for others), or change the product, or discontinue the product altogether, so it’s important to know if the IaC tool you chose would still be usable if, for some reason, you couldn’t use one of these paid services.
In my experience, free versions of Terraform, Chef, Puppet, and Ansible can be successfully used for production use cases; paid services can make these tools even better, but if they weren’t available, I could still survive. Pulumi, on the other hand, is harder to use in production without the paid offering known as Pulumi Service.
A key part of managing infrastructure as code is managing state (you’ll learn how Terraform manages state in How to Manage Terraform State), and Pulumi, by default, uses Pulumi Service as the backend for state storage. You can switch to other supported backends for state storage, such as Amazon S3, Azure Blob storage, or Google Cloud Storage, but the Pulumi backend documentation explains that only Pulumi Service supports transactional checkpoint (for fault tolerance and recovery), concurrent state locking (to prevent corruption of infrastructure state in a team environment), and encrypted state in transit and at rest. In my opinion, without these features, it is not practical to use Pulumi in any kind of production environment (i.e. with more than one developer), so if you are going to use Pulumi, you more or less have to pay for the Pulumi service.
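For comparison, the free, open-source version of Terraform supports locking and encrypted remote state with an ordinary S3 backend; a sketch of such a configuration (the bucket, key, region, and DynamoDB table names are hypothetical) looks like this:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"  # hypothetical S3 bucket
    key            = "prod/terraform.tfstate"
    region         = "us-east-2"
    encrypt        = true                  # encrypt state at rest
    dynamodb_table = "terraform-locks"     # DynamoDB table used for state locking
  }
}
```

With this in place, concurrent runs from a team are serialized by the lock, without any paid service in the loop.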
Large Community vs Small Community

Every time you pick a technology, you’re also picking a community. In many cases, the ecosystem around the project can have a bigger impact on your experience than the inherent quality of the technology itself. The community determines how many people contribute to the project; how many plugins, integrations, and extensions are available; how easy it is to find help online (e.g., blog posts, questions on Stack Overflow); and how easy it is to hire someone to help you (e.g., an employee, consultant, or support company).
It’s hard to make an accurate comparison between communities, but you can spot some trends by searching online. The table below shows a comparison of popular IaC tools, with data I collected in June 2022, including whether the IaC tool is open source or closed source, which cloud providers it supports, the total number of contributors and stars on GitHub, how many open source libraries are available for the tool, and the number of questions listed for that tool on Stack Overflow (note: data on contributors and stars comes from the open source repositories for each tool, but since CloudFormation is closed source, this information is not available.)
Obviously, this is not a perfect apples-to-apples comparison. For example, some of the tools have more than one repository: Terraform, for instance, split its provider code (i.e., the code specific to AWS, Google Cloud, Azure, etc.) into separate repositories in 2017, so the table above significantly understates its activity; some tools offer alternatives to Stack Overflow for questions; and so on.
That said, some trends are obvious. First of all, all of the IaC tools in this comparison are open source and work with many cloud providers, except CloudFormation, which is closed source and only works with AWS. Second, Ansible and Terraform seem to be the clear leaders in terms of popularity.
Another interesting trend to note is how these numbers have changed since the first edition of Terraform: Up & Running. The table below shows the percentage change in each of these numbers from the values I gathered for the first edition, back in September 2016. (Note: Pulumi is not included in this table, as it was not part of this comparison in the first edition of the book.)
Again, the data here isn’t perfect, but it’s good enough to spot a clear trend: Terraform and Ansible are experiencing explosive growth. The increases in the number of contributors, stars, open source libraries, and Stack Overflow posts are through the roof. Both tools have large, active communities today, and judging by these trends, they are likely to become even larger in the future.
Mature vs Cutting Edge

Another key factor to consider when picking any technology is maturity. Is this a technology that has been around for years, where all the usage patterns, best practices, problems, and failure modes are well understood? Or is this a new technology where you’ll have to learn all those hard lessons from scratch? The table below shows the initial release dates, current version numbers (as of June 2022), and my own subjective perception of the maturity of each of the IaC tools.
Again, this is not an apples-to-apples comparison: age alone does not determine maturity, nor does a high version number (different tools use different versioning schemes). Still, some trends are clear. Pulumi is the youngest IaC tool in this comparison and possibly the least mature: this becomes evident when searching for documentation, best practices, community modules, and so on. Terraform is quite a bit more mature these days: the tooling has improved, best practices are better understood, there are many more learning resources available (including this blog post series!), and now that it has hit its 1.0.0 milestone, it is a considerably more stable and dependable tool than it was when the first and second editions of Terraform: Up & Running came out. Chef and Puppet are the oldest and arguably most mature tools on this list.
Using Multiple Tools Together

Although I’ve been comparing IaC tools throughout this blog post, the reality is that you’ll likely need to use multiple tools to build your infrastructure. Each of the tools you’ve seen has strengths and weaknesses, so it’s your job to pick the right tools for the job.
The following sections show three common combinations that I’ve seen work well in various companies.
- Provisioning plus configuration management
- Provisioning plus server templates
- Provisioning plus server templates plus orchestration
Provisioning plus configuration management

Example: Terraform and Ansible. Use Terraform to deploy all the underlying infrastructure, including the network topology (i.e., VPCs, subnets, route tables), data stores (e.g., MySQL, Redis), load balancers, and servers. Then, use Ansible to deploy your apps on top of those servers.
This is an easy approach to get started with, because there’s no extra infrastructure to run (Terraform and Ansible are both client-only applications), and there are many ways to get Ansible and Terraform to work together (for example, Terraform adds special tags to its servers, and Ansible uses those tags to find the servers and configure them). The major downside is that using Ansible typically means you’re writing a lot of procedural code, with mutable servers, so as your codebase, infrastructure, and team grow, maintenance may become more difficult.
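One common version of that tagging pattern (the tag keys and values here are hypothetical) is to have Terraform tag the servers it creates so that an Ansible dynamic inventory can discover them by tag:

```hcl
resource "aws_instance" "web" {
  count         = 3
  ami           = "ami-0fb653ca2d3203ac1"
  instance_type = "t2.micro"

  tags = {
    Name = "web-${count.index}"
    # Hypothetical tag that an Ansible aws_ec2 dynamic inventory
    # could use to group these hosts for configuration.
    Role = "web"
  }
}
```

On the Ansible side, a dynamic inventory keyed on that Role tag would then target exactly the servers Terraform provisioned, with no hardcoded host lists.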
Provisioning plus server templates
Example: Terraform and Packer. Use Packer to package your apps as virtual machine images. Then, use Terraform to deploy servers with those VM images, plus the rest of your infrastructure, including the network topology (i.e., VPCs, subnets, route tables), data stores (e.g., MySQL, Redis), and load balancers.
This is also an easy approach to get started with, because there’s no extra infrastructure to run (Terraform and Packer are both client-only applications), and you’ll get plenty of practice deploying VM images using Terraform later in this blog post series. Moreover, this is an immutable infrastructure approach, which will make maintenance easier. However, there are two major drawbacks. First, VMs can take a long time to build and deploy, which will slow down your iteration speed. Second, as you’ll see later in this blog post series, the deployment strategies you can implement with Terraform are limited (e.g., you can’t implement blue-green deployments natively in Terraform), so you either end up writing lots of complicated deployment scripts or turning to orchestration tools, as described next.
Provisioning plus server templates plus orchestration
Example: Terraform, Packer, Docker, and Kubernetes. Use Packer to create a VM image that has Docker and the Kubernetes agents installed. Then, use Terraform to deploy a cluster of servers, each of which runs this VM image, plus the rest of your infrastructure, including the network topology (i.e., VPCs, subnets, route tables), data stores (e.g., MySQL, Redis), and load balancers. Finally, when the cluster of servers boots up, it forms a Kubernetes cluster that you use to run and manage your Dockerized applications.
The advantage of this approach is that Docker images build fairly quickly, you can run and test them on your local machine, and you can take advantage of all the built-in functionality of Kubernetes, including various deployment strategies, self-healing, auto-scaling, and so on. The downside is the added complexity, both in terms of extra infrastructure to run (Kubernetes clusters are difficult and expensive to deploy and operate, though most major cloud providers now offer managed Kubernetes services, which can offload some of this work) and in terms of several extra layers of abstraction (Kubernetes, Docker, Packer) to learn, manage, and debug.
Conclusion

Putting it all together, the table below shows how the most popular IaC tools stack up. Note that this table shows the default or most common way the various IaC tools are used, though as discussed earlier in this blog post, these IaC tools are flexible enough to be used in other configurations, too (e.g., you can use Chef without a master, you can use Puppet with immutable infrastructure, etc.).
At Gruntwork, what we wanted was an open source, cloud-agnostic provisioning tool with a large community, a mature codebase, and support for immutable infrastructure, a declarative language, a masterless and agentless architecture, and an optional paid service. The table above shows that Terraform, while not perfect, comes closest to matching all of our criteria.
If Terraform sounds like something that might fit your criteria too, head over to Part 2, An Introduction to Terraform, to learn more.
For an expanded version of this blog post series, get a copy of the book Terraform: Up & Running (3rd edition available now!). If you need help with Terraform, DevOps practices, or AWS in your company, feel free to reach out to us at Gruntwork.