A Detailed Introduction to Docker on AWS

Container virtualization, most visibly represented by Docker, is a server paradigm that will likely drive enterprise computing for years to come.

The cloud is the most obvious and logical platform for container deployment.

Amazon Web Services largely dominates the world of cloud computing. Do the math. If you want a piece of that action, you'll definitely need to figure out how it all works.

First, let's quickly define a few key terms.

Virtualization

Virtualization is the division of physical computing and networking resources into smaller, more flexible units, with those smaller units presented to users as though each was a discrete resource.

The idea is that, instead of assigning individual servers to individual computing tasks (servers that can sometimes end up over- or under-used), a single physical server can be logically divided into as many or as few virtual servers as needed.

This means, as the figure below illustrates, that dozens of independently installed operating systems can run side by side on the same hard drive. Each operating system is effectively unaware that it isn't alone in its local environment.

In practice, each operating system instance can be accessed remotely by administrators and clients in exactly the same way as any other server.

In an environment like this, as soon as a virtual server completes its task, or becomes unnecessary, it can be instantly deleted. This frees up the resources it was using for the next task in the queue.

There's no need to over-provision virtual servers in anticipation of possible future needs, because future needs can easily be met whenever they arrive.

In fact, today's virtual server might live for only a few minutes, or even seconds, before completing its task and being shut down for good to make room for the next one. All this allows much more efficient use of expensive hardware. It also lets you provision and launch new servers at will, whether to test new configurations or to add fresh resources to your production services.

Cloud computing providers like AWS rely on virtualized computers. The hundreds of thousands of Amazon EC2 instances, for example, all run on top of the open source Xen or KVM hypervisors, which are themselves installed and running on the many thousands of physical servers maintained in Amazon's enormous server farms.

Whichever hypervisor technology is used, the goal is to provide a largely automated hosting environment for multiple complete, self-contained virtual computers.

Containers like Docker's, by contrast, are not standalone virtual machines but modified filesystems that share the operating system kernel of their physical host. We'll discuss that next.

Containers

What are containers? Well, for one thing, they're not hypervisors. Instead, they're extremely lightweight virtual servers that, as the figure shows, rather than running as full operating systems, share the underlying kernel of their host operating system.

Containers can be built from plain-text scripts, created and launched in seconds, and shared easily and reliably across networks. Container technologies include the Linux Containers (LXC) project, which was Docker's original inspiration.

The script-friendly container design makes it easy to automate and remotely manage complex clusters of containers, often deployed as microservices.

Microservices is a compute service architecture in which multiple containers are deployed, each with a distinct but complementary role. So you might launch one container as a database backend, another as a file server, and a third as a web server.
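A multi-role deployment like that is often described in a Docker Compose file. Here's a minimal sketch of the pattern; it isn't part of this article's own examples, and the image tags and the ./shared volume path are illustrative assumptions:

```yaml
version: "3"
services:
  db:                # database backend
    image: postgres:10
    environment:
      POSTGRES_PASSWORD: example   # placeholder credential, not for production
  files:             # simple file server
    image: httpd:2.4
    volumes:
      - ./shared:/usr/local/apache2/htdocs
  web:               # public-facing web server
    image: nginx:1.15
    ports:
      - "80:80"
    depends_on:
      - db
```

Running docker-compose up from the directory holding this file would bring all three containers up together on a shared private network.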

Docker

As I explore in a Pluralsight course or two of mine, a Docker container is an image whose behavior is defined by a script. A container is a piece of software cleverly disguised as a server.

But what's an image? It's a software file containing a snapshot of a complete operating system filesystem. Everything needed to launch a viable virtual server is included.

An image can consist of just a base operating system, like Ubuntu Linux or the tiny and super-fast Alpine Linux. But an image can also include additional layers containing software applications like web servers and databases. No matter how many layers an image has, or how complicated the relationships between them might be, the image itself never changes.
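If you want to see those layers for yourself, Docker can list them. A quick sketch (this assumes a Docker daemon is running and the ubuntu image has already been pulled):

```shell
# List the read-only layers that make up an image, most recent first.
# Each line corresponds to one instruction from the script that built it.
docker history ubuntu
```

Every layer is immutable and content-addressed, which is why Docker can reuse layers shared between images instead of storing them twice.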

When an image is launched as a container, as shown in the next figure, an extra writable layer is automatically added, into which a record of any ongoing system activity is saved.

What do people normally do with their Docker containers? They'll often load up some application development project to test how it runs, then share it with team members for feedback and updates. When the application is complete, it can be launched as a cluster of containers (or a "swarm", as Docker calls it) that can be programmatically and instantly scaled up or down to meet user demand.
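Scaling such a cluster up or down is a one-line operation in Docker's own swarm mode. This sketch assumes a running Docker daemon and a locally available image; the service name web is hypothetical:

```shell
# Turn this host into a single-node swarm manager
docker swarm init

# Launch three replicas of an image as a load-balanced service
docker service create --name web --replicas 3 -p 80:80 nginx

# Later, scale the service up to meet demand
docker service scale web=10
```

The swarm manager handles scheduling the replicas across whatever nodes have joined the swarm; from the client's perspective, scaling is declarative.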

While Docker is a Linux-based technology and needs a Linux kernel to run, it's possible to run remote or even local Docker containers from Mac or Windows machines using the Docker for Mac or Docker for Windows applications or, on older machines, the Docker Machine tool.

Cloud computing

Cloud computing is the provision of on-demand, self-service compute, memory, and storage resources delivered remotely over a network.

Because cloud services are billed in very small increments, you can quickly configure and launch a wide range of projects. And since the resources are all virtual, spinning one up as part of an experiment, or to solve a short-term problem, often makes a lot of sense. When the work is done, the resource is shut down.

Cloud platforms make it possible to do things that would be impossible, or impossibly expensive, anywhere else.

Not sure how long your project will run or how much demand it will attract? It might be hard to justify purchasing, building, and housing all the expensive hardware you'd need to properly support it.

It may simply make no sense to invest in the server, cooling, and routing equipment.

But if you could rent just enough of someone else’s equipment to match fast-changing demand levels and pay only for what you actually use, then it might work.

AWS

There’s no shortage of ways to manage Docker containers on AWS. In fact, between frameworks, orchestration interfaces, image repositories, and hybrid solutions, the variety can get confusing.

This article won’t dive deeply into every option, but you should at least be aware of all your choices:

Amazon’s EC2 Container Service (ECS) leverages specially configured EC2 instances as hosts for integrated Docker containers. You don’t have to get your hands dirty on the EC2 instance itself, as you can provision and administer your containers through the ECS framework. ECS now offers greater abstraction (and simplicity) through its Fargate mode option.

AWS CloudFormation allows you to configure any combination of AWS resources into a template that can be deployed one or many times. You can include specified dependencies and custom parameters in the template. Given its self-contained and scriptable design, CloudFormation is a natural environment for Docker deployments. In fact, Docker itself offers its Docker for AWS service (currently in beta), that will automatically generate a CloudFormation template to orchestrate a swarm of Docker containers to run on AWS infrastructure within your account.

AWS Elastic Beanstalk effectively sits on top of ECS. It allows you to deploy your application across all the AWS resources normally used by ECS, but with virtually all of the logistics neatly abstracted away. Effectively, all you need in order to launch a fully scalable, complex microservices environment is a declarative JSON-formatted script in a file called Dockerrun.aws.json. You can either upload your script through the GUI or deploy it from an initialized local directory using the AWS Beanstalk CLI.
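For a single-container deployment, a Dockerrun.aws.json file can be remarkably short. This sketch uses the version 1 (single-container) format; the image name is a placeholder:

```json
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "myaccount/webserver",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "80"
    }
  ]
}
```

Beanstalk reads this file, pulls the named image, and wires the container's port 80 into the load-balanced environment it builds around it.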

Amazon Elastic Container Service for Kubernetes (EKS) is currently still in preview. It’s a tool allowing you to manage containers using the open source Kubernetes orchestrator, but without having to install your own clusters. Like ECS, EKS will deploy all the necessary AWS infrastructure for your clusters without manual intervention.

Docker for AWS is, at the time of writing, still in beta. Using its browser interface, you can use the service to install and run a “swarm of Docker Engines” that are fully integrated with AWS infrastructure services like auto scaling, load balancing (ELB), and block storage.

Docker Datacenter (now marketed as part of Docker Enterprise Edition) is a joint AWS/Docker project that provides commercial customers with a more customizable interface for integrating Docker with AWS, Azure, and IBM infrastructures.

Docker Cloud, much like Docker Datacenter, offers a GUI, browser-based console for managing all aspects of your Docker deployments. This includes administration for your host nodes running in public clouds. The big difference is that, unlike Datacenter, the Docker Cloud administration service is hosted from its own site. There’s no server software to install on your own equipment.

Docker Hub is probably the obvious first place to look for and to share Docker images. Provided by Docker itself, Docker Hub holds a vast collection of images that come pre-loaded to support all kinds of application projects. You can find and research images on the hub.docker.com web site, and then pull them directly into your own Docker Engine environment.

EC2 Container Registry (ECR) is Amazon’s own image registry to go with their EC2 Container Service platform. Images can be pushed, pulled, and managed through the AWS GUI or CLI tool. Permissions policies can restrict image access to only the people you select.
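The push workflow to ECR looks much like pushing to Docker Hub, with one extra authentication step. A sketch, assuming the AWS CLI is configured; the account ID, region, and image name are all placeholders:

```shell
# Authenticate your Docker client against your ECR registry
# (get-login prints a docker login command; the $() runs it)
$(aws ecr get-login --no-include-email --region us-east-1)

# Tag a local image with the full registry address, then push it
docker tag webserver 123456789012.dkr.ecr.us-east-1.amazonaws.com/webserver:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/webserver:latest
```

Once pushed, the image can be referenced by that same address from ECS task definitions.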

I think you’re ready to start. If you haven’t yet, do head over to the Amazon Web Services site to create an AWS account. In case you’re not yet familiar with how this all works, new accounts get a generous full year of experimentation with any service level that’s eligible for the Free Tier. Assuming you’re still in your first year, nothing we’re going to do in this course should cost you a penny.

Next, we’ll pop the lid off Docker and see how it works at its most basic level: your laptop command line. Technically, this has very little relevance to AWS workloads, but it’ll be a great way to better understand the workflow.

Introduction to Docker

Properly visualizing how all the many AWS parts work will probably be easier if you first understand what’s going on under the hood with Docker itself. So in this article I’ll walk you through launching and configuring a simple Docker container on my local workstation.

Ready to go?

The Docker command line

Let’s see how this thing actually works. I’m going to get Docker up and running on my local workstation and then test it out with a quick hello-world operation. I will then pull a real working Ubuntu image and run it.

I won’t go through the process of installing Docker on your machine here for a few reasons. First of all, the specifics will vary greatly depending on the operating system you’re running. But they’re also likely to frequently change, so anything I write here will probably be obsolete within a short while. And finally, none of this is all that relevant to AWS. Check out Docker’s own instructions at docs.docker.com/install.

Along the way I’ll try out some of Docker’s command line tools, including creating a new network interface and associating a container with it. This is the kind of environment configuration that can be very useful for real-world deployments involving multiple tiers of resources that need to be logically separated.

Most Linux distributions now use systemd via the systemctl command to handle processes. In this case systemctl start docker will launch the Docker daemon if it’s not already running. systemctl status docker will return some useful information, including in-depth error messages if something has gone wrong. In this case, everything looks healthy.

# systemctl start docker
# systemctl status docker

That’s the only Linux-specific bit. From here on in we’ll be using commands that’ll work anywhere Docker’s properly installed.

Launch a container

Running commands from the Docker command line always begins with the word “docker”. The normal first test of a newly installed system is to use docker run to launch a small image: the purpose-built “hello-world” image in this case.

As you can tell from the output below, Docker first looked for the image on the local system. Docker is particularly efficient in that way. It will always try to reuse locally available elements before turning to remote sources.

In this case, since there are no existing images in this new environment, Docker goes out to pull hello-world from the official Docker library.

$ docker run hello-world
Unable to find image ‘hello-world:latest’ locally
latest: Pulling from library/hello-world
ca4f61b1923c: Pull complete
Digest: sha256:66ef312bbac49c39a89aa9bcc3cb4f3c9e7de3788c944158df3ee0176d32b751
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the “hello-world” image from the Docker Hub. (amd64)
 3. The Docker daemon created a new container from that image which runs the executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://cloud.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/

The full output of this command includes a useful four-part description of what just happened. The Docker client contacted the Docker daemon, which proceeded to download the hello-world image from the repository. The image was converted to a running container by the docker run command, whose output was streamed to our command line shell: the Docker client.

Let me break that jargon down for you just a bit:

  • Docker client — the command line shell activated by running docker commands

  • Docker daemon — the local Docker process we started just before with the systemctl command

  • Image — a file containing the data that will be used to make up an operating system

Typing just docker will print a useful list of common commands along with brief descriptions, and docker info will return information about the current state of our Docker client.

Notice how we’ve currently got one container (created from the hello-world image) and that there are zero containers running right now.

$ docker info
Containers: 1
 Running: 0
 Paused: 0
 Stopped: 1
Images: 3
Server Version: 1.13.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 28
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay

Interactive container sessions

Let’s try out the “more ambitious” docker run -it ubuntu bash command that the Docker documentation previously suggested. This will download the latest official base Ubuntu image and run it as a container.

The -i option makes the session interactive, meaning you’ll be dropped into a live shell within the running container where you’ll be able to control things as you would on any other server. The -t argument allocates a TTY shell.

$ docker run -it ubuntu bash
Unable to find image ‘ubuntu:latest’ locally
latest: Pulling from library/ubuntu
1be7f2b886e8: Pull complete
6fbc4a21b806: Pull complete
c71a6f8e1378: Pull complete
4be3072e5a37: Pull complete
06c6d2f59700: Pull complete
Digest: sha256:e27e9d7f7f28d67aa9e2d7540bdc2b33254b452ee8e60f388875e5b7d9b2b696
Status: Downloaded newer image for ubuntu:latest
root@<container-id>:/#

Note the new command line prompt: root@ followed by the container’s ID. We’re now actually inside a minimal but working Docker container.

We can, for instance, update our software repository indexes.

# ls
bin dev home lib64 mnt proc run srv tmp var
boot etc lib media opt root sbin sys usr
# apt update
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease
Get:2 http://archive.ubuntu.com/ubuntu xenial InRelease
[…]
Fetched 24.8 MB in 48s (515 kB/s)
Reading package lists… Done
Building dependency tree
Reading state information… Done
6 packages can be upgraded. Run ‘apt list --upgradable’ to see them.

If I exit the container, it will shut down and I’ll find myself back in my host server. Typing docker info once more now shows me two stopped containers rather than just one.

$ docker info
Containers: 2
 Running: 0
 Paused: 0
 Stopped: 2
Images: 4
[…]
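Those stopped containers stick around until you remove them. A quick sketch of the cleanup workflow (the container ID shown to docker rm is a placeholder; this assumes a running Docker daemon):

```shell
# docker ps shows only running containers; -a includes stopped ones
docker ps -a

# Remove a stopped container by name or ID to reclaim its disk space
docker rm <container-id>
```

This is worth building into your habits: experiment-heavy Docker hosts accumulate stopped containers surprisingly quickly.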

Running containers in the background

I could launch a container in the background by adding the -d (--detach) flag, which will return a container ID. Listing all active Docker containers with docker ps will then show me my new running container.


Managing containers

As you can see from the wizardly_pasteur name, the people who designed Docker compiled a rather eccentric pool of names to assign to your containers. If you’d like to rename a container — perhaps so managing it will require less typing — run docker rename, followed by the current container name and the new name you’d like to give it. I’ll run docker ps once again to show the update in action.

$ docker rename wizardly_pasteur MyContainer
$ docker ps
CONTAINER ID  IMAGE   COMMAND  CREATED        STATUS        NAMES
232a83013d39  ubuntu  “bash”   3 minutes ago  Up 5 minutes  MyContainer

docker inspect followed by a container name, will return pages and pages of useful information about that container’s configuration and environment. The output snippet I’ve included below displays the container’s network environment details. Note that the network gateway is 172.17.0.1 and the container’s actual IP address is 172.17.0.2 — that will be useful later.

$ docker inspect MyContainer
[...]
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"MacAddress": "02:42:ac:11:00:02",
"Networks": {
    "bridge": {
        "IPAMConfig": null,
        "Links": null,
        "Aliases": null,
[...]

Docker networks

docker network ls will list all the network interfaces currently associated with our Docker client. Note in particular the bridge interface which connects a container to the Docker host, allowing network communication into and out of the container.

$ docker network ls
NETWORK ID    NAME    DRIVER  SCOPE
fa4da6f158de  bridge  bridge  local
18385f695b4e  host    host    local
6daa514c5756  none    null    local

We can create a new network interface by running docker network create followed by the name we’d like to give our new interface. Running inspect against the new interface shows us (through the Driver value) that it uses the same bridge driver we saw earlier, but exists on its own 172.18.0.x network. You’ll remember that our default network used 172.17.0.x.

$ docker network create newNet
715f775551522c43104738dfc2043b66aca6f2946919b39ce06961f3f86e33bb
$ docker network inspect newNet
[
    {
        "Name": "newNet",
        [...]
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
[...]
]

Confused? My Solving for Technology book has a chapter on basic TCP/IP networking.
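If you’d rather not let Docker pick the address range for you, you can specify one yourself at creation time. A sketch, assuming a running Docker daemon; the network name and subnet are illustrative:

```shell
# Create a network with an explicitly chosen address range
docker network create --subnet 172.25.0.0/16 customNet

# Launch a container directly onto it, with a fixed address
# (a static --ip is only allowed on user-defined networks
# that declare a subnet, as customNet does)
docker run -it --network customNet --ip 172.25.0.10 ubuntu bash
```

Pinning subnets this way keeps your container addressing predictable, which matters once firewall rules or service discovery depend on it.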

Moving containers between networks

You might sometimes want to move an existing container from one network to another — perhaps you need to reorganize and better secure your resources. Try it out by moving that Ubuntu container to a different network, like the newNet interface we just created. Use docker network connect followed by the network name newNet and then the container name MyContainer.

$ docker network connect newNet MyContainer

Running inspect on the container once again will show you that MyContainer is now connected to both the bridge interface with its 172.17.0.2 address, and the newNet interface on 172.18.0.2. It’s now like a computer with two network interface cards physically connected to separate networks.

Don’t believe me? You can successfully ping both interfaces from the command line, so we can see they’re both active. All this was possible, by the way, despite the fact that the container was up and running all along. Don’t try that on a physical machine!

$ ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.103 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.070 ms
^C
--- 172.17.0.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.070/0.086/0.103/0.018 ms

$ ping 172.18.0.2
PING 172.18.0.2 (172.18.0.2) 56(84) bytes of data.
64 bytes from 172.18.0.2: icmp_seq=1 ttl=64 time=0.079 ms
64 bytes from 172.18.0.2: icmp_seq=2 ttl=64 time=0.062 ms
^C
--- 172.18.0.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.062/0.070/0.079/0.011 ms

Working with Dockerfiles

While containers can be defined and controlled from the command line, the process can be largely automated through scripts called Dockerfiles. Reading a Dockerfile as part of a docker build operation tells Docker to create an image using the configuration specified by the script.

In the simple Dockerfile example displayed below, the FROM line tells the Docker host to use Ubuntu version 16.04 as the base operating system. If there isn’t already an Ubuntu 16.04 image on the local system, Docker will download one.

# Simple Dockerfile
FROM ubuntu:16.04
RUN apt-get update
RUN apt-get install -y apache2
RUN echo "Welcome to my web site" > /var/www/html/index.html
EXPOSE 80

Each of the RUN lines launches a command within the operating system whose results will be baked into the image, even before it’s actually launched as a live container.

In this case, apt-get update updates the local repository indexes to permit software downloads, apt-get install apache2 will download and install the Apache webserver package. The -y will automatically answer “yes” to any prompts included in the installation process.

The echo command will replace the contents of the index.html file with my customized Welcome text. index.html is, of course, the first file a browser will look for and then load when it visits a new site.

Finally, EXPOSE 80 opens up port 80 on the container to allow HTTP traffic — necessary because this will be a web server. This will allow us to access the web server from the Docker host machine. It’ll be your responsibility to provide access to your host for any remote clients you might want to invite in.

If you’re up on the latest Ubuntu package management news, you’ll know that there’s been a shift away from apt-get to its new apt replacement. So why did I use apt-get in that Dockerfile? Because it’s still more reliable for use in scripted settings.

To actually build an image based on this Dockerfile, you run docker build with -t to create a name (or “tag”) for it. I’ll go with webserver. You add a space and then a dot to tell Docker to read the file named Dockerfile found in the current directory. Docker will immediately get to work building on top of the Ubuntu image we pulled earlier, and running the apt-get and echo commands.

$ docker build -t “webserver” .
Sending build context to Docker daemon 2.048 kB
Step 1/5 : FROM ubuntu:16.04
16.04: Pulling from library/ubuntu
Digest: sha256:e27e9d7f7f28d67aa9e2d7540bdc2b33254b452ee8e60f388875e5b7d9b2b696
Status: Downloaded newer image for ubuntu:16.04
 ---> 0458a4468cbc
Step 2/5 : RUN apt-get update
 ---> Running in c25f5462e0f2
[…]
Processing triggers for systemd (229-4ubuntu21) …
Processing triggers for sgml-base (1.26+nmu4ubuntu1) …
 ---> 3d9f2f14150e
Removing intermediate container 42cd3a92d3ca
Step 4/5 : RUN echo “Welcome to my web site” > /var/www/html/index.html
 ---> Running in ddf45c195467
 ---> a1d21f1ba1f6
Removing intermediate container ddf45c195467
Step 5/5 : EXPOSE 80
 ---> Running in af639e6b1c85
 ---> 7a206b180a62
Removing intermediate container af639e6b1c85
Successfully built 7a206b180a62

If I run docker images, I’ll now see a version of my Ubuntu image with the name webserver.

$ docker images
REPOSITORY   TAG     IMAGE ID      CREATED         SIZE
webserver    latest  7a206b180a62  3 minutes ago   250 MB
ubuntu       16.04   0458a4468cbc  12 days ago     112 MB
hello-world  latest  f2a91732366c  2 months ago    1.85 kB

Now we’re ready to launch the container using docker run.

Structuring this command properly is a bit of a delicate process and there’s a lot that can go wrong. The -d argument tells Docker to run this container detached, meaning we won’t find ourselves on the container’s command line, but it will be running in the background. -p tells Docker to forward any traffic coming in on port 80 (the default HTTP port) through to port 80 on the container. This allows external access to the web server. One ordering detail that trips people up: all of these option flags must appear before the image name, or Docker will treat them as arguments meant for the container itself.

Next, we tell Docker the name of the container we’d like to launch, webserver in our case. And after that, we tell Docker to run a single command once the container is running to get the Apache webserver up.

$ docker run -d -p 80:80 webserver \
    /usr/sbin/apache2ctl -D FOREGROUND

Perhaps you’re wondering why I didn’t use the more modern systemd command systemctl start apache2. Well, I tried it, and discovered that, at this point at least, systemd is good and broken in Ubuntu Docker containers. Stay away if you know what’s good for you. -D FOREGROUND ensures that Apache — and the container as a whole — will remain running even once the launch has completed. Run it for yourself.

We’re given an ID for the new container, but nothing else. You can run docker ps and you should see our webserver among the list of all running containers. You should also be able to open webserver’s index.html page by pointing your browser to the container’s IP address.

What’s that? You don’t know your container’s IP address? Well, since the container will have been associated with the default bridge network, you can use docker network inspect bridge and, within the Containers section of the output, you should find what you’re after. In my case, that was 172.17.0.3.
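Putting those two steps together, here’s a sketch of the check from the host’s command line (this assumes the webserver container from above is running; the 172.17.0.3 address is from my run and yours may differ):

```shell
# Find the container's address in the Containers section of the output
docker network inspect bridge

# Request the page Apache is serving; this should return the
# Welcome text we wrote into index.html in the Dockerfile
curl http://172.17.0.3
```

If curl hangs or is refused, docker ps and docker logs on the container are the first places to look.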

Working with Docker Hub images

We’ve already enjoyed some of the benefits Docker Hub has to offer. The images we used to build the containers in the previous sections were all seamlessly downloaded from Docker Hub behind the scenes.

In fact, using something like docker search apache2, you can manually comb through the repository for publicly available images that come with Apache pre-installed. You can also browse through what’s available on the Docker Hub web site.
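The search command also accepts filters, which can save you from scrolling through pages of results. A quick sketch, assuming a running Docker daemon:

```shell
# Restrict search results to images designated as official
docker search --filter is-official=true ubuntu

# Or require a minimum number of community stars
docker search --filter stars=50 apache
```

Filters like these map directly onto the trust signals discussed below: official status and star counts.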

However, you should remember that not all of those images are reliable or even safe. You’ll want to look for results that have earned lots of review stars and, in particular, are designated as “official.” Running docker search ubuntu returns at least a few official images.

Find something that interests you? You can add it to your local collection using docker pull. Once the download is complete, you can view your images using docker images.

$ docker pull ubuntu-upstart

While you’re on the Docker Hub site, take the time to create a free account. That’ll allow you to store and share your own images much the way you might use a tool like GitHub. This is probably the most popular use-case for Docker, as it allows team members working remotely — or lazy devs working in the same office — to get instant and reliable access to the exact environments being used at every stage of a project’s progress.
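Once you have an account, sharing an image of your own takes three commands. A sketch, where myusername is a placeholder for your actual Docker Hub ID:

```shell
# Log in with your Docker Hub credentials
docker login

# Tag the local image under your Hub namespace
docker tag webserver myusername/webserver:1.0

# Push it so teammates can docker pull it from anywhere
docker push myusername/webserver:1.0
```

Teammates then retrieve exactly the same environment with docker pull myusername/webserver:1.0, which is the instant, reliable access described above.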

Those are the bare-bone basics, and it’s important to understand them clearly. But, because of the complexity involved in coordinating clusters of dozens or thousands of containers all at once, most serious container workloads won’t use those particular command line tools.

Instead, you’re most likely going to want a more robust and feature-rich framework. You can read about some of those tools — including Docker’s own Docker Swarm Mode, Docker Enterprise Edition, or Docker Cloud, and Kubernetes — in my article, “Too Many Choices: how to pick the right tool to manage your Docker clusters”.

This article is largely based on video courses I authored for Pluralsight. I’ve also got loads of Docker, AWS, and Linux content available through my website, including links to my book, Linux in Action, and a hybrid course called Linux in Motion that’s made up of more than two hours of video and around 40% of the text of Linux in Action.