
Chapter 4. Server Configuration Tools

Using scripts and automation to create, provision, and update servers is not especially new, but a new generation of tools has emerged over the past decade or so. CFEngine, Puppet, Chef, Ansible, and others define this category of tooling. Virtualization and cloud have driven the popularity of these tools by making it easy to create large numbers of new server instances, which then need to be configured and updated.

Containerization tools such as Docker have emerged even more recently as a method for packaging, distributing, and running applications and processes. Containers bundle elements of the operating system with the application, which has implications for the way that servers are provisioned and updated.

As mentioned in the previous chapter, not all tools are designed to treat infrastructure as code. The guidelines from that chapter for selecting tools apply equally well to server configuration tools: they should be scriptable, run unattended, and use externalized configuration.

This chapter describes how server automation tools designed for infrastructure as code work. This includes the different approaches that tools can take, and the different approaches that teams can use to implement these tools for their own infrastructure.

Patterns for Managing Servers

Several of the chapters in Part II build on the material covered in this chapter. Chapter 6 discusses general patterns and approaches for provisioning servers, Chapter 7 explores ways of managing server templates in more depth, and then Chapter 8 discusses patterns for managing changes to servers.

Goals for Automated Server Management

Using infrastructure as code to manage server configuration should result in the following:

  • A new server can be completely provisioned1 on demand, without waiting more than a few minutes.

  • A new server can be completely provisioned without human involvement—for example, in response to events.

  • When a server configuration change is defined, it is applied to servers without human involvement.

  • Each change is applied to all the servers it is relevant to, and is reflected in all new servers provisioned after the change has been made.

  • The processes for provisioning and for applying changes to servers are repeatable, consistent, self-documented, and transparent.

  • It is easy and safe to make changes to the processes used to provision servers and change their configuration.

  • Automated tests are run every time a change is made to a server configuration definition, and to any process involved in provisioning and modifying servers.

  • Changes to configuration, and changes to the processes that carry out tasks on an infrastructure, are versioned and applied to different environments, in order to support controlled testing and staged release strategies.

Tools for Different Server Management Functions

In order to understand server management tooling, it can be helpful to think about the lifecycle of a server as having several phases (shown in Figure 4-1).

Figure 4-1. A server's lifecycle

This lifecycle will be the basis for discussing different server management patterns, starting in Chapter 6.

This section explores the tools involved in this lifecycle. There are several functions, some of which apply to more than one lifecycle phase. The functions discussed in this section are creating servers, configuring servers, packaging templates, and running commands on servers.

Tools for Creating Servers

A new server is created by the dynamic infrastructure platform using an infrastructure definition tool, as described in the previous chapter. The server is created from a server template, which is a base image of some kind. This might be in a VM image format specific to the infrastructure platform (e.g., an AWS AMI or a VMware VM template), or it could be an OS installation disk image from a vendor (e.g., an ISO image of the Red Hat installation DVD). Most infrastructure platforms allow servers to be created interactively with a UI, as in Figure 4-2. But any important server should be created automatically.

Figure 4-2. AWS web console for creating a new server

There are many use cases where new servers are created:

  • A member of the infrastructure team needs to build a new server of a standard type—for example, adding a new file server to a cluster. They change an infrastructure definition file to specify the new server (a minimal sketch of scripted server creation follows this list).

  • A user wants to set up a new instance of a standard application—for example, an issue-tracking application. They use a self-service portal, which builds an application server with the issue-tracking software installed.

  • A web server VM crashes because of a hardware issue. The monitoring service detects the failure and triggers the creation of a new VM to replace it.

  • User traffic grows beyond the capacity of the existing application server pool, so the infrastructure platform's autoscaling functionality creates new application servers and adds them to the pool to meet the demand.

  • A developer commits a change to the software they are working on. The CI software (e.g., Jenkins or GoCD) automatically provisions an application server in a test environment with the new build of the software so it can run an automated test suite against it.
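What scripted, unattended server creation looks like depends on the platform. As a minimal sketch using the AWS command-line tool (the AMI ID and instance type are hypothetical placeholder values, not values from this book):

    # Create a new server instance unattended, from a server template (AMI).
    # The image ID and instance type are hypothetical placeholders.
    aws ec2 run-instances \
        --image-id ami-0abc1234 \
        --instance-type t2.micro \
        --count 1

The same command can be triggered by an event, such as a monitoring alert or an autoscaling rule, which is what makes the use cases above possible without human involvement.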

Tools for Configuring Servers

Ansible, CFEngine, Chef, Puppet, and Saltstack are examples of tools specifically designed for configuring servers with an infrastructure-as-code approach. They use externalized configuration definition files, with a DSL designed for server configuration. The tool reads the definitions from these files and applies the relevant configuration to a server.

Many server configuration tools use an agent installed on each server. The agent runs periodically, pulling the latest definitions from a central repository and applying them to the server. This is how both Chef and Puppet are designed to work in their default use case.2

Other tools use a push model, where a central server triggers updates to managed servers. Ansible uses this model by default, using SSH keys to connect to servers and run commands.3 This has the advantage of not requiring managed servers to have configuration agents installed on them, but arguably sacrifices security. Chapter 8 discusses these models in more detail.
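As a hedged sketch of the push model in practice, these Ansible commands connect out to the managed servers over SSH; the inventory and playbook filenames are hypothetical:

    # Check connectivity to a group of hosts defined in the inventory file
    ansible webservers -i inventory.ini -m ping

    # Push the configuration definitions in a playbook to those hosts
    ansible-playbook -i inventory.ini site.yml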

Security Trade-Offs with Automated Server Configuration Models

A centralized system that controls how all of your servers are configured creates a wonderful opportunity for evildoers. Push-based configuration opens ports on your servers, which an attacker can potentially use to connect. An attacker might impersonate the configuration master and feed the target server configuration definitions that will open the server up for malicious use. This could even allow an attacker to execute arbitrary commands. Cryptographic keys are normally used to prevent this, but this requires robust key management.

A pull model simplifies security, but of course there are still opportunities for evil. The attack vector in this case is wherever the client pulls its configuration definitions from. If an attacker can compromise the repository of definitions, they can gain complete control of the managed servers.

In any case, the VCS used to store scripts and definitions is a critical part of your infrastructure's attack surface, so it must be part of your security strategy. The same is true if you use a CI or CD server to implement a change management pipeline, as described in Chapter 12.

Security concerns with infrastructure as code are discussed in more detail in "Security".

Server configuration products have wider toolchains beyond basic server configuration. Most have repository servers to manage configuration definitions—for example, Chef Server, Puppetmaster, and Ansible Tower. These may have additional functionality, providing configuration registries, CMDBs, and dashboards. Chapter 5 discusses broader infrastructure orchestration services of this type.

Arguably, choosing a vendor that provides an all-in-one ecosystem of tools simplifies things for an infrastructure team. However, it's useful if elements of the ecosystem can be swapped out for different tools, so the team can choose the best pieces to fit their needs.

Tools for Packaging Server Templates

In many cases, new servers can be built using off-the-shelf server template images. Infrastructure platforms, such as IaaS clouds, often provide template images for common operating systems. Many also offer libraries of templates built by vendors and third parties, who may provide images that have been preinstalled and configured for particular purposes, such as application servers.

But many infrastructure teams find it useful to build their own server templates. They can preconfigure them with their team's preferred tools, software, and configuration.

Packaging common elements onto a template makes it faster to provision new servers. Some teams take this further by creating server templates for particular roles, such as web servers and application servers. Chapter 7 discusses trade-offs and patterns around baking server elements into templates versus adding them when creating servers ("Provisioning Servers Using Templates").

One of the key trade-offs is that, as more elements are managed by packaging them into server templates, the templates need to be updated more often. This in turn requires more sophisticated processes and tooling to build and manage templates.

Netflix pioneered approaches for building server templates with everything prepackaged. They open sourced the tool they created for building AMI templates on AWS, Aminator.4

Aminator is fairly specific to Netflix's needs, limited to building CentOS/Red Hat servers for the AWS cloud. But HashiCorp has released the open source Packer tool, which supports a variety of operating systems as well as different cloud and virtualization platforms. Packer defines server templates using a file format that is designed following the principles of infrastructure as code.
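To give a flavor of that file format, here is a minimal sketch of a Packer template for building an AWS AMI, written from a shell session; the region, source AMI, and names are hypothetical placeholders, not a definitive configuration:

    # Write a minimal Packer template, then build an AMI from it.
    # All of the specific values here are hypothetical.
    cat > web-server.json <<'EOF'
    {
      "builders": [{
        "type": "amazon-ebs",
        "region": "eu-west-1",
        "source_ami": "ami-0abc1234",
        "instance_type": "t2.micro",
        "ssh_username": "ec2-user",
        "ami_name": "web-server-{{timestamp}}"
      }],
      "provisioners": [{
        "type": "shell",
        "script": "install-web-server.sh"
      }]
    }
    EOF
    packer build web-server.json

Because the template is a plain text file, it can be kept in a VCS and built automatically, following the same infrastructure-as-code practices as configuration definitions.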

Different patterns and practices for building server templates using these kinds of tools are covered in detail in Chapter 7.

Tools for Running Commands on Servers

Tools for running commands remotely across multiple machines can be helpful for teams managing many servers. Remote command execution tools like MCollective, Fabric, and Capistrano can be used for ad hoc tasks such as investigating and fixing problems, or they can be scripted to automate routine activities. Example 4-1 shows an example of an MCollective command.

Some people refer to this kind of tool as "SSH-in-a-loop." Many of them do use SSH to connect to target machines, so this isn't completely inaccurate. But they typically have more advanced features as well, to make it easier to script them, to define groupings of servers to run commands on, or to integrate with other tools.
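The literal "SSH-in-a-loop" version of such a task might look like the following sketch, where the hosts file is a hypothetical stand-in for the server-grouping features these tools provide:

    # Naive remote command execution: loop over a flat list of
    # hostnames and run the same command on each one over SSH.
    for host in $(cat staging-web-hosts.txt); do
      ssh "$host" 'sudo service httpd restart'
    done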

Although it is useful to be able to run ad hoc commands interactively across servers, this should only be done in exceptional situations. Manually running a remote command tool to make changes to servers isn't reproducible, so it isn't a good practice for infrastructure as code.

Example 4-1. Sample MCollective command
    $ mco service httpd restart -S "environment=staging and /apache/"

If people find themselves routinely using interactive tools, they should consider how to automate the tasks they're using them for. The ideal is to put the task into a configuration definition if appropriate. Tasks that don't make sense to run unattended can be scripted in the language offered by the team's preferred remote command tool.

The danger of using scripting languages with these tools is that over time the scripts can grow into a complicated mess. These scripting languages are designed for fairly small scripts and lack features to help manage larger codebases in a clean way, such as reusable, shareable modules. Server configuration tools are designed to support larger codebases, so they are more appropriate.

Using Configuration from a Central Registry

Chapter 3 described using a configuration registry to manage data about different elements of an infrastructure. Server configuration definitions can read values from a configuration registry in order to set parameters (as described in "Reusability with Configuration Definitions").

For example, a team running VMs in several data centers may want to configure the monitoring agent software on each VM to connect to a monitoring server running in the same data center. The team is running Chef, so they add these attributes to the Chef server as shown in Example 4-2.

Example 4-2. Using Chef server attributes as configuration registry entries
    default['monitoring']['servers']['sydney'] = 'monitoring.au.myco'
    default['monitoring']['servers']['dublin'] = 'monitoring.eu.myco'

When a new VM is created, it is given a registry field called data_center, which is set to dublin or sydney.

When the chef-client runs on a VM, it runs the recipe in Example 4-3 to configure the monitoring agent.

Example 4-3. Using configuration registry entries in a Chef recipe
    my_datacenter = node['data_center']

    template '/etc/monitoring/agent.conf' do
      owner 'root'
      group 'root'
      mode 0644
      variables(
        :monitoring_server => node['monitoring']['servers'][my_datacenter]
      )
    end

The Chef recipe retrieves values from the Chef server configuration registry with the node['attribute_name'] syntax. In this case, after putting the name of the data center into the variable my_datacenter, that variable is then used to retrieve the monitoring server's IP address for that data center. This address is then passed to the template (not shown here) used to create the monitoring agent configuration file.

Server Change Management Models

Dynamic infrastructure and containerization are leading people to experiment with different approaches to server change management. There are several different models for managing changes to servers, some traditional, some new and controversial. These models are the basis for Part II of this book, particularly Chapter 8, which digs into specific patterns and practices.

Ad Hoc Change Management

Ad hoc change management makes changes to servers only when a specific change is needed. This was the traditional approach before automated server configuration tools became mainstream, and it is still the most commonly used approach. It is vulnerable to configuration drift, snowflakes, and all of the evils described in Chapter 1.

Configuration Synchronization

Configuration synchronization repeatedly applies configuration definitions to servers, for example, by running a Puppet or Chef agent on an hourly schedule. This ensures that any changes to the parts of the system managed by these definitions are kept in line. Configuration synchronization is the mainstream approach for infrastructure as code, and most server configuration tools are designed with this approach in mind.
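As a minimal sketch of how such a schedule might be set up (assuming Chef, with a hypothetical log path; in practice the agent is more often run as a daemon with its own interval setting):

    # Cron entry: run the Chef client hourly so the server repeatedly
    # converges on the centrally held configuration definitions.
    0 * * * * /usr/bin/chef-client >> /var/log/chef-client.log 2>&1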

The main limitation of this approach is that many areas of a server are left unmanaged, leaving them vulnerable to configuration drift.

Immutable Infrastructure

Immutable infrastructure makes configuration changes by completely replacing servers. Changes are made by building new server templates, and then rebuilding the relevant servers using those templates. This increases predictability, as there is little variance between servers as tested and servers in production. It requires sophistication in server template management.
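A hedged sketch of a single change under this model, with hypothetical template and command names standing in for the steps a team's own tooling would carry out:

    # 1. Bake a new server template containing the change.
    packer build web-server.json
    # 2. Test servers built from the new template (hypothetical command).
    ./test-server-template web-server-template
    # 3. Rebuild the relevant servers from the tested template, rather
    #    than editing them in place (hypothetical command).
    ./replace-pool-servers web-pool web-server-template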

Containerized Services

Containerized services work by packaging applications and services in lightweight containers (as popularized by Docker). This reduces coupling between server configuration and the things that run on the servers. So host servers tend to be very simple, with a lower rate of change. One of the other change management models still needs to be applied to these hosts, but their implementation becomes much simpler and easier to maintain. Most effort and attention goes into packaging, testing, distributing, and orchestrating the services and applications, but this follows something similar to the immutable infrastructure model, which again is simpler than managing the configuration of full-blown virtual machines and servers.

Containers

Containerization systems such as Docker, Rocket, Warden, and Windows Containers have emerged as an alternative way to install and run applications on servers. A containerization system is used to define and package a runtime environment for a process into a container image. It can then distribute, create, and run instances of that image. A container uses operating system features to isolate the processes, networking, and filesystem of the container, so it appears to be its own, self-contained server environment.

The value of a containerization system is that it provides a standard format for container images and tools for building, distributing, and running those images. Before Docker, teams could isolate running processes using the same operating system features, but Docker and similar tools make the process much simpler.

The benefits of containerization include:

  • Decoupling the runtime requirements of specific applications from the host server that the container runs on

  • Repeatably creating consistent runtime environments by having a container image that can be distributed and run on any host server that supports the runtime

  • Defining containers as code (e.g., in a Dockerfile) that can be managed in a VCS, used to trigger automated testing, and generally having all of the characteristics of infrastructure as code

The benefits of decoupling runtime requirements from the host system are especially powerful for infrastructure management. It creates a clean separation of concerns between infrastructure and applications. The host system only needs to have the container runtime software installed; then it can run nearly any container image.7 Applications, services, and jobs are packaged into containers along with all of their dependencies, as shown in Figure 4-3. These dependencies can include operating system packages, language runtimes, libraries, and system files. Different containers may have different, even conflicting, dependencies, but still run on the same host without problems. Changes to the dependencies can be made without any changes to the host system.

Figure 4-3. Isolating packages and libraries in containers

Managing Ruby Applications with and without Containers

For example, suppose a team runs many Ruby applications. Without containers, the server they run on might need to have multiple versions of the Ruby runtime installed. If one application requires an upgrade, the upgrade needs to be rolled out to any server where the application needs to run.

This could impact other Ruby applications. Those other applications might start running with the newer Ruby version, but may have incompatibilities. Two applications that use the same version of Ruby might use different versions of a library that has been installed as a system gem (a Ruby shared library package). Although both versions of the gem can be installed, making sure each application uses the correct version is tricky.

These issues are manageable, but it requires the people configuring the servers and applications to be aware of each requirement and potential conflict, and to do some work to make everything play nicely. And each new conflict tends to pop up and interrupt people from working on other tasks.

With Docker containers, each of these Ruby applications has its own Dockerfile, which specifies the Ruby version and which gems to bundle into the container image. These images can be deployed and run on a host system that doesn't need to have any version of Ruby installed. Each Ruby application has its own runtime environment and can be replaced and upgraded with different dependencies, regardless of the other applications running on the same host.

Example 4-4 is a Dockerfile that packages a Ruby Sinatra application.

Example 4-4. Dockerfile to create a Ruby Sinatra application
    # Start with a CentOS docker image
    FROM centos:6.4

    # Directory of the Sinatra app
    ADD . /app

    # Install Sinatra
    RUN cd /app; gem install sinatra

    # Open the Sinatra port
    EXPOSE 4567

    # Run the app
    CMD ["ruby", "/app/hi.rb"]
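Assuming this Dockerfile sits in the application's directory, building and running the container might look like this (the image name is a hypothetical choice):

    # Build a container image from the Dockerfile, then run it with
    # the Sinatra port published on the host.
    docker build -t sinatra-app .
    docker run -d -p 4567:4567 sinatra-app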

Are Containers Virtual Machines?

Containers are sometimes described as being a type of virtual machine. There are similarities, in that they give multiple processes running on a single host server the illusion that they are each running on their own, separate server. But there are significant technical differences, and the use case of a container is quite different from that of a virtual machine.

The differences between virtual machines and containers

A host server runs virtual machines using a hypervisor, such as VMware ESX or Xen (which underlies Amazon's EC2 service). A hypervisor is typically installed on the bare metal of the hardware host server, as the operating system. However, some virtualization packages can be installed on top of another operating system, especially those like VMware Workstation and VirtualBox, which are intended to run on desktops.

A hypervisor provides emulated hardware to a VM. Each VM can have different emulated hardware from the host server, and different hardware from one another. Consider a physical server running the Xen hypervisor, with two different VMs. One VM can have an emulated SCSI hard drive, two CPUs, and 8 GB of RAM. The other can be given an emulated IDE hard drive, one CPU, and 2 GB of RAM. Because the abstraction is at the hardware level, each VM can have a completely different OS installed; for example, you can install CentOS Linux on one and Windows Server on the other, and run them side by side on the same physical server.

Figure 4-4 shows the relationship between virtual machines and containers. Containers are not virtual servers in this sense. They don't have emulated hardware, and they use the same operating system as their host server, actually running on the same kernel. The system uses operating system features to segregate processes, filesystems, and networking, giving a process running in a container the illusion that it is running on its own. But this is an illusion created by restricting what the process can see, not by emulating hardware resources.

Container instances share the operating system kernel of their host system, so they can't run a different OS. Containers can, however, run different distributions of the same OS—for example, CentOS Linux on one and Ubuntu Linux on another. This is because a Linux distribution is just a different set of files and processes. But these instances would still share the same Linux kernel.
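This kernel sharing is easy to see for yourself, assuming a Linux host with Docker installed:

    # The kernel version reported inside a container matches the
    # host's, regardless of which distribution the image is based on.
    uname -r
    docker run --rm centos uname -r
    docker run --rm ubuntu uname -r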

Figure 4-4. Containers and virtual machines

Sharing the OS kernel means a container has less overhead than a hardware virtual machine. A container image can be much smaller than a VM image, because it doesn't need to include the entire OS. It can start up in seconds, as it doesn't need to boot a kernel from scratch. And it consumes fewer system resources, because it doesn't need to run its own kernel. So a given host can run more container processes than full VMs.

Using Containers Rather than Virtual Machines

A naive approach to containers is to build them the same way that you would build a virtual machine image. Multiple processes, services, and agents could all be packaged into a single container and then run the same way you would run a VM. But this misses the sweet spot for containers.

The best way to think of a container is as a method to package a service, application, or job. It's an RPM on steroids, taking the application and adding in its dependencies, as well as providing a standard way for its host system to manage its runtime environment.

Rather than a single container running multiple processes, aim for multiple containers, each running one process. These processes then become independent, loosely coupled entities. This makes containers a nice match for microservice application architectures.8

A container built with this philosophy can start up extremely quickly. This is useful for long-running service processes, because it makes it easy to deploy, redeploy, migrate, and upgrade them routinely. But quick startup also makes containers well suited for processes run as jobs. A script can be packaged as a container image with everything it needs to run, then executed on one or many machines in parallel.

Containers are the next step in the evolution of managing resources across an infrastructure efficiently. Virtualization was one step, allowing you to add and remove VMs to scale your capacity to your load on a timescale of minutes. Containers take this to the next level, allowing you to scale your capacity up and down on a timescale of seconds.9

Running Containers

Packaging and running a single application in a container is fairly simple. Using containers as a routine way to run applications, services, and jobs across multiple host servers is more complicated. Container orchestration systems automate the distribution and execution of containers across host systems (Chapter 5 touches on container orchestration in a bit more detail, in "Container Orchestration Tools").

Containerization has the potential to create a clean separation between layers of infrastructure and the services and applications that run on it. Host servers that run containers can be kept very simple, without needing to be tailored to the requirements of specific applications, and without imposing constraints on the applications beyond those imposed by containerization and supporting services like logging and monitoring.

So the infrastructure that runs containers consists of generic container hosts. These can be stripped down to a bare minimum, including only the minimum toolsets to run containers, and potentially a few agents for monitoring and other administrative tasks. This simplifies management of these hosts, as they change less often and have fewer things that can break or need updating. It also reduces the surface area for security exploits.

Minimal OS Distributions

Container-savvy vendors are offering stripped-down OS distributions for running container hosts, such as Red Hat Atomic, CoreOS, Microsoft Nano, RancherOS, Ubuntu Snappy, and VMware Photon.

Note that these stripped-down OSes are not the same as the earlier-mentioned unikernels. A stripped-down OS combines a full OS kernel with a stripped-down distribution of preinstalled packages and services. A unikernel actually strips down the OS kernel itself, building one up from a set of libraries and including the application in the kernel's memory space.

Some teams run container hosts as virtual machines on a hypervisor, which is in turn installed on hardware. Others take the next step and remove the hypervisor layer entirely, running the host OS directly on hardware. Which of these approaches to use depends on the context.

Teams that already have hypervisor-based virtualization and infrastructure clouds, but don't have much bare-metal automation, will tend to run containers on VMs. This is especially appropriate when the team is still exploring and expanding their use of containers, and particularly when there are many services and applications running outside of containers. This is likely to be the case for many organizations for some time.

When containerization becomes more routine for an organization, and when significant parts of its services are containerized, teams will probably want to test how well running containers directly on hardware-based hosts works for them. This is likely to get easier as virtualization and cloud platform vendors build support for running containers directly into their hypervisors.

Security and Containers

One concern that is inevitably raised when discussing containers is security. The isolation that containers provide can lead people to assume they offer more inherent security than they actually do. And the model that Docker provides for conveniently building custom containers on top of images from community libraries can open serious vulnerabilities if it isn't managed with care.10

Container isolation and security

While containers isolate processes running on a host from one another, this isolation is not impossible to break. Different container implementations have different strengths and weaknesses. When using containers, a team should be sure to fully understand how the technology works, and where its vulnerabilities may lie.

Teams should be especially cautious with untrusted code. Containers appear to offer a safe way to run arbitrary code from people outside the organization. For example, a company running hosted software might offer customers the ability to upload and run code on the hosted platform, as a plug-in or extension model. The assumption is that, because the customer's code runs in a container, an attacker won't be able to take advantage of this to gain access to other customers' data, or to the software company's systems.

However, this is a dangerous assumption. Organizations running potentially untrusted code should thoroughly analyze their technology stack and its potential vulnerabilities. Many companies that offer hosted containers actually keep each customer's containers isolated on their own physical servers (not just on hypervisor-based virtual machines running the container host). As of late 2015, this is true of the hosted container services run by both Amazon and Google.

So teams should use stronger isolation between containers running untrusted code than the isolation provided by the containerization stack. They should also take measures to harden the host systems, all the way down to the metal. This is another good reason to strip down the host OS to only the minimum needed to run the containerization system. Platform services should ideally be partitioned onto different physical infrastructure from that used to run untrusted code.

Even organizations that don't run arbitrary code from outsiders should take appropriate care to ensure the segregation of code, rather than assuming containers provide fully protected runtime environments. This can make it more difficult for an attacker who compromises one part of the system to leverage that to widen their access.

Container image provenance

Even when outsiders can't directly run code in containers on your infrastructure, they may be able to do so indirectly. It's common for insiders to download outside code and then package and run it. This is not unique to containers. As will be discussed in "Provenance of Packages" in Chapter 14, it's common to automatically download and install system packages and language libraries from community repositories.

Docker and other containerization systems offer the ability to layer container images. Rather than having to take an OS installation image and build a complete container from scratch, you can use common images that have a basic OS install already. Other images offer prepackaged applications such as web servers, application servers, and monitoring agents. Many OS and application vendors are offering this as a distribution mechanism.

A container image for an OS distribution like CentOS, for example, may be maintained by people with deep knowledge of that OS. The maintainers can make sure the image is optimized, tuned, and hardened. They can also make sure updated images are always made available with the latest security patches. Ideally, these maintainers are able to invest more time and expertise in maintaining the CentOS image than the people on an infrastructure team that is supporting a variety of systems, servers, and software. Spreading this model out over the various pieces of software used by the infrastructure team means the team is able to leverage a great deal of industry expertise.

The risk is when there aren't sufficient guarantees of the provenance of container base images used in a team's infrastructure. An image published on a public repository may be maintained by responsible, honest experts, or it could have been put there by evil hackers or the NSA. Even if the maintainers are well intentioned, someone evil could have compromised their work, adding subtle back doors.

While community-provided container images aren't inherently less trustworthy than community-provided RPMs or RubyGems, their growing popularity emphasizes the need to manage all of these things carefully. Teams should ensure the provenance of each image used within the infrastructure is well known, trusted, and can be verified and traced. Containerization tool vendors are building mechanisms to automatically validate the provenance of images.11 Teams should ensure that they understand how these mechanisms work and that they are being properly used.
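Docker Content Trust (see footnote 11) is one such mechanism. A minimal sketch of enabling it from the shell; the image name is a hypothetical placeholder:

    # With content trust enabled, the Docker client refuses to pull
    # or run image tags that lack a valid publisher signature.
    export DOCKER_CONTENT_TRUST=1
    docker pull myregistry/myapp:1.0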

Conclusion

The intention of this chapter was to give an understanding of several different high-level models for managing individual servers, and how these models relate to the types of tooling available. Hopefully it will help you consider how your team could go about provisioning and configuring servers.

However, before selecting specific tools, it would be a good idea to become familiar with the patterns in Part II of this book. Those chapters provide more detail on specific patterns and practices for provisioning servers, building server templates, and updating running servers.

The next chapter will look at the bigger picture of the infrastructure, exploring the types of tools that are needed to run the infrastructure as a whole.

1 See Definition of "Provisioning" in Chapter 3 for clarity on how I use the term in this book.

2 It is perfectly possible to use Chef or Puppet in a push model, for example, by having a central server run an ssh command to connect to servers and run the client command-line tool.

3 Although Ansible's main use case is the push model, it can also be run in a pull model, as described in a blog post by Jan-Piet Mens.

4 Netflix described their approach to using AMI templates in this blog post.

5 Neal Ford coined the term "polyglot programming." See this interview with Neal for more about it.

6 The pace of change in containerization is currently quite fast. In the course of writing this book, I've had to expand the coverage from a couple of paragraphs, to a section, and then make it one of the main models for managing server configuration. Many of the details I've described will have changed by the time you read this. But hopefully the general concepts, especially how containers relate to infrastructure-as-code principles and practices, will still be relevant.

7 There is actually some dependency between the host and container. In particular, container instances use the Linux kernel of the host system, so a given image could potentially behave differently, or even fail, when run on different versions of the kernel.

8 See my colleague Sam Newman's book Building Microservices (O'Reilly) for more on microservices.

9 The folks at force12 are doing interesting things with microscaling.

10 The folks at Docker have published an article on container security that offers a number of useful insights.

11 See "Introducing Docker Content Trust".
