Automate Building your Own Atomic Host

Project Atomic hosts are built from standard RPM packages that have been composed into filesystem trees using rpm-ostree. This post describes how to automate building an Atomic host (that is, creating new trees).

Requirements

Process

Clone the Build-Atomic-Host Git repository on your working machine.

$ git clone https://github.com/trishnaguha/build-atomic-host.git
$ cd build-atomic-host

Create VM from the QCOW2 Image

The following creates a VM from the QCOW2 image with username atomic-user and password atomic. Here atomic-node is the instance name.

$ sudo sh create-vm.sh atomic-node /path/to/fedora-atomic25.qcow2
# For example: /var/lib/libvirt/images/Fedora-Atomic-25-20170131.0.x86_64.qcow2

Start HTTP Server

The tree is made available via a web server. The following playbook creates the directory structure, initializes the OSTree repository, and starts the HTTP server.

$ ansible-playbook httpserver.yml --ask-sudo-pass

Use ip addr to check the IP address of the HTTP server.
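
For example, with the default libvirt network the host side is usually the virbr0 bridge (the device name may differ on your system):

$ ip addr show virbr0 | grep 'inet '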

Give OSTree a name and add HTTP Server IP Address

Replace the variables given in vars/atomic.yml with your OSTree name and HTTP server IP address.

For Instance:

# Variables for Atomic host
atomicname: my-atomic
httpserver: 192.168.122.1

Here my-atomic is the OSTree name and 192.168.122.1 is the HTTP server IP address.

Run Main Playbook

The following playbook installs the requirements, starts the HTTP server, composes the OSTree, sets up SSH, and rebases the host onto the created tree.

$ ansible-playbook main.yml --ask-sudo-pass

Check IP Address of the Atomic instance

The following command returns the IP address of the running Atomic instance:

$ sudo virsh domifaddr atomic-node

Reboot

Now SSH to the Atomic host and reboot it so that it boots into the created OSTree:

$ ssh atomic-user@<atomic-hostIP>
$ sudo systemctl reboot

Verify: SSH to the Atomic Host

Wait about 10 minutes; you may want to go for a coffee now.

$ ssh atomic-user@192.168.122.221
[atomic-user@atomic-node ~]$ sudo rpm-ostree status
State: idle
Deployments:
● my-atomic:fedora-atomic/25/x86_64/docker-host
       Version: 25.1 (2017-02-07 05:34:46)
        Commit: 15b70198b8ec7fd54271f9672578544ff03d1f61df8d7f0fa262ff7519438eb6
        OSName: fedora-atomic

  fedora-atomic:fedora-atomic/25/x86_64/docker-host
       Version: 25.51 (2017-01-30 20:09:59)
        Commit: f294635a1dc62d9ae52151a5fa897085cac8eaa601c52e9a4bc376e9ecee11dd
        OSName: fedora-atomic

Now you have the updated tree.
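
For reference, the rebase that the main playbook performs is roughly equivalent to running this on the Atomic host (a sketch: the remote name and ref come from vars/atomic.yml, and the playbook configures the OSTree remote for you):

$ sudo rpm-ostree rebase my-atomic:fedora-atomic/25/x86_64/docker-host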


A future post will cover customizing packages (adding and removing them) for the OSTree.

Talking to Docker daemon of Fedora Atomic Host

This post is now deprecated; please follow the updated version: http://www.projectatomic.io/blog/2017/01/remote-access-docker-daemon

This post describes how to use the Docker daemon of a Fedora Atomic host remotely. Since we are connecting over the network, we will also secure the Docker daemon, which we will do with TLS.

TLS (Transport Layer Security) provides communication security over a computer network. We will create a client certificate and a server certificate to secure our Docker daemon. OpenSSL will be used to create the keys and certificates for establishing the TLS connection.
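
For context, the manual OpenSSL steps that the Ansible role below automates look roughly like this (an illustrative sketch only; the file names and certificate subjects are placeholders, not the role's actual defaults):

$ openssl genrsa -aes256 -out ca-key.pem 4096
$ openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem
$ openssl genrsa -out server-key.pem 4096
$ openssl req -subj "/CN=IP_OF_ATOMIC_HOST" -new -key server-key.pem -out server.csr
$ openssl x509 -req -days 365 -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server-cert.pem
# ...plus a similar key/CSR/sign cycle for the client certificate.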

I am using a Fedora Atomic host as the remote machine and my workstation as the local host.

Thanks to Chris Houseknecht for writing an Ansible role that creates all the required certs automatically, so there is no need to issue openssl commands manually. Here is the Ansible role repository: https://github.com/ansible/role-secure-docker-daemon. Clone it to your present working host.

$ mkdir secure-docker-daemon
$ cd secure-docker-daemon
$ git clone https://github.com/ansible/role-secure-docker-daemon.git
$ touch ansible.cfg inventory secure-docker-daemon.yml
$ ls 
ansible.cfg  inventory  role-secure-docker-daemon  secure-docker-daemon.yml

$ vim ansible.cfg
[defaults]
inventory=inventory
remote_user='USER_OF_ATOMIC_HOST'

$ vim inventory 
[serveratomic]
'IP_OF_ATOMIC_HOST' ansible_ssh_private_key_file='PRIVATE_KEY_FILE'

$ vim secure-docker-daemon.yml
---
- name: Secure Docker daemon for Atomic host
  hosts: serveratomic
  gather_facts: no
  become: yes
  roles:
    - role: role-secure-docker-daemon
      dds_host: 'IP_OF_ATOMIC_HOST'
      dds_server_cert_path: /etc/docker
      dds_restart_docker: no

Replace ‘USER_OF_ATOMIC_HOST’ with the user of your Atomic host, ‘IP_OF_ATOMIC_HOST’ with the IP of your Atomic host, ‘PRIVATE_KEY_FILE’ with the ssh private key file of your workstation.

Now we will run the ansible playbook. This will create client and server certs on the Atomic host.

$ ansible-playbook secure-docker-daemon.yml

Now ssh to your Atomic host.

We will copy the client certs created on the Atomic host to the workstation. You will find the client cert files in the ~/.docker directory of the root user on the Atomic host. Create a ~/.docker directory on your workstation for your regular user and copy the client certs there. You can use scp to copy the cert files from the Atomic host to the workstation, or do it manually ;-).
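
For example, one way to do this from the workstation (a sketch; it assumes you have first copied the certs from root's ~/.docker into the regular user's ~/.docker on the Atomic host):

$ scp -r USER_OF_ATOMIC_HOST@IP_OF_ATOMIC_HOST:.docker ~/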

We are going to append some Environment variables in the ~/.bashrc file of the workstation for regular user.

$ vim ~/.bashrc
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=~/.docker/
export DOCKER_HOST=tcp://IP_OF_ATOMIC_HOST:2376

Docker listens on port 2376 for TLS-secured connections.

Now go to your Atomic host. We will add TLS options to the Docker daemon options on the Atomic host.

Add --tlsverify --tlscacert=/etc/docker/ca.pem --tlscert=/etc/docker/server-cert.pem --tlskey=/etc/docker/server-key.pem -H=0.0.0.0:2376 -H=unix:///var/run/docker.sock to the OPTIONS line in the /etc/sysconfig/docker file.

$ vi /etc/sysconfig/docker
OPTIONS='--selinux-enabled --log-driver=journald --tlsverify --tlscacert=/etc/docker/ca.pem --tlscert=/etc/docker/server-cert.pem --tlskey=/etc/docker/server-key.pem -H=0.0.0.0:2376 -H=unix:///var/run/docker.sock'

We will need to reload systemd and restart the Docker daemon.

$ sudo systemctl daemon-reload
$ sudo systemctl restart docker.service

Reboot both the Atomic host and the workstation.

Now, if you run any docker command as a regular user on your workstation, it will talk to the Docker daemon of the Atomic host and execute the command there. You no longer need to SSH in and issue docker commands on your Atomic host manually :-).
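
For example, from the workstation (assuming the environment variables above are loaded in your shell):

$ docker info    # reports details of the Atomic host's Docker daemon, not your local one
$ docker ps      # lists containers running on the Atomic host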


Containerization and Deployment of Application on Atomic Host using Ansible Playbook

This article describes how to build a Docker image and deploy a containerized application on an Atomic host (or any remote host) using an Ansible playbook.

Building a Docker image for an application and running a container (or a cluster of containers) is nothing new. The idea here is to automate the whole process, and this is where Ansible playbooks come into play.

Note that you can use a cloud or workstation-based image to run the following steps. Here I am issuing the commands on a Fedora Workstation.

Let's see how to automate the containerization and deployment process for a simple Flask application.

We are going to deploy the container on a Fedora Atomic host.

First, let's create a simple Flask Hello-World application.

This is the directory structure of the entire application:

flask-helloworld/
├── ansible
│   ├── ansible.cfg
│   ├── inventory
│   └── main.yml
├── Dockerfile
└── flask-helloworld
    ├── hello_world.py
    ├── static
    │   └── style.css
    └── templates
        ├── index.html
        └── master.html

hello_world.py

from flask import Flask, render_template

APP = Flask(__name__)

@APP.route('/')
def index():
    return render_template('index.html')

if __name__ == '__main__':
    APP.run(debug=True, host='0.0.0.0')

static/style.css

body {
  background: #F8A434;
  font-family: 'Lato', sans-serif;
  color: #FDFCFB;
  text-align: center;
  position: relative;
  bottom: 35px;
  top: 65px;
}
.description {
  position: relative;
  top: 55px;
  font-size: 50px;
  letter-spacing: 1.5px;
  line-height: 1.3em;
  margin: -2px 0 45px;
}

templates/master.html

<!doctype html>
<html>
<head>
    {% block head %}
    <title>{% block title %}{% endblock %}</title>
    {% endblock %}
    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css" integrity="sha384-1q8mTJOASx8j1Au+a5WDVnPi2lkFfwwEAa8hDDdjZlpLegxhjVME1fgjWPGmkzs7" crossorigin="anonymous">
    <link href="https://maxcdn.bootstrapcdn.com/font-awesome/4.6.3/css/font-awesome.min.css" rel="stylesheet" integrity="sha384-T8Gy5hrqNKT+hzMclPo118YTQO6cYprQmhrYwIiQ/3axmI1hQomh7Ud2hPOy8SP1" crossorigin="anonymous">
    <link rel="stylesheet" href="{{ url_for('static', filename='style.css') }}">
    <link href='http://fonts.googleapis.com/css?family=Lato:400,700' rel='stylesheet' type='text/css'>

</head>
<body>
<div id="container">
    {% block content %}
    {% endblock %}</div>
</body>
</html>

templates/index.html

{% extends "master.html" %}

{% block title %}Welcome to Flask App{% endblock %}

{% block content %}
<div class="description">Hello World</div>
{% endblock %}

Let’s write the Dockerfile.

FROM fedora
MAINTAINER Trishna Guha <tguha@redhat.com>

RUN dnf -y update && dnf -y install python-flask python-jinja2 && dnf clean all
RUN mkdir -p /app

COPY files/ /app/
WORKDIR /app

ENTRYPOINT ["python"]
CMD ["hello_world.py"]

Now we will work on the Ansible playbook for our application, which handles the automation part:

Create inventory file:

[atomic]
IP_ADDRESS_OF_HOST ansible_ssh_private_key_file=<'PRIVATE_KEY_FILE'>

Replace IP_ADDRESS_OF_HOST with the IP address of the atomic/remote host and ‘PRIVATE_KEY_FILE’ with your private key file.

Create ansible.cfg file:

[defaults]
inventory=inventory
remote_user=USER

[privilege_escalation]
become_method=sudo
become_user=root

Replace USER with the user of your remote host.

Create main.yml file:

---
- name: Deploy Flask App
  hosts: atomic
  become: yes

  vars:
    src_dir: [Source Directory]
    dest_dir: [Destination Directory]

  tasks:
    - name: Create Destination Directory
      file:
       path: "{{ dest_dir }}/files"
       state: directory
       recurse: yes

    - name: Copy Dockerfile to host
      copy:
       src: "{{ src_dir }}/Dockerfile"
       dest: "{{ dest_dir }}"

    - name: Copy Application to host
      copy:
       src: "{{ src_dir }}/flask-helloworld/"
       dest: "{{ dest_dir }}/files/"

    - name: Build Docker Image
      command: docker build --rm -t fedora/flask-app:test -f "{{ dest_dir }}/Dockerfile" "{{ dest_dir }}"
      args:
        chdir: "{{ dest_dir }}"

    - name: Run Docker Container
      command: docker run -d --name helloworld -p 5000:5000 fedora/flask-app:test
...

Replace [Source Directory] in the src_dir field in main.yml with the /path/to/src_dir on your current host.

Replace [Destination Directory] in the dest_dir field in main.yml with the /path/to/dest_dir on your remote Atomic host.
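
For instance, the vars section might then look like this (both paths are hypothetical; substitute your own):

  vars:
    src_dir: /home/user/flask-helloworld
    dest_dir: /home/fedora/flask-helloworld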

Now simply run $ ansible-playbook main.yml :). To verify that the application is running, issue $ curl http://localhost:5000 on your Atomic/remote host.

You can also manage your containers running on remote host using Cockpit. Check this article to know how to use Cockpit to manage your containers: https://fedoramagazine.org/deploy-containers-atomic-host-ansible-cockpit


Here is the repository of the above example:  https://github.com/trishnaguha/fedora-cloud-ansible/tree/master/examples/flask-helloworld

My future post will be about ansible-container, where I will describe how to build a Docker image and orchestrate containers without writing any Dockerfile :).

Run Apache on Fedora Atomic Host

This post describes how to run Apache on an Atomic host. I am using a Fedora Atomic host.

Boot up an atomic instance (Fedora preferred).

To test the Apache container, just run (make sure you are using sudo):

atomic run docker.io/fedora/apache

After the container has started successfully, run:

curl http://localhost:80

This will display "Apache".

Now, if you want to build your own image, copy the source from https://github.com/fedora-cloud/Fedora-Dockerfiles/tree/master/apache down to your host.

Then edit the Dockerfile and make your changes.
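
For example, a minimal change could be to ship your own index page (an illustrative line only; adapt it to the Dockerfile you copied):

RUN echo "Hello from my custom Apache image" > /var/www/html/index.html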

Now build the image:

# docker build --rm -t username/httpd .

After the build is successful, run the container:

# docker run -d -p 80:80 username/httpd #Maps port 80 of the host to port 80 in the container
# docker run -d -p 80 username/httpd #Maps a random host port to port 80 in the container

If you do curl http://localhost you will see the required output.


IRC Client: Irssi On Atomic Host

If you are a terminal geek you will always want to do things from the terminal ;). And when it comes to an Atomic host, YES, you will have to do stuff from the terminal.

If you don’t know about Atomic, you must visit http://www.projectatomic.io 🙂

This post describes how to set up and use an IRC client on an Atomic host. This is applicable to any cloud host as well.

Irssi is a terminal-based IRC client for Unix/Linux systems. And the best part is that we will not need to set things up manually, because we have containers :).

Let's Get Started:

I am using a Fedora Atomic host here. Get the Fedora Atomic host from here: https://getfedora.org/en/cloud/download/atomic.html

Make sure you have Docker installed.

Copy the Dockerfile from here: https://github.com/trishnaguha/Fedora-Dockerfiles/blob/irssi/irssi/Dockerfile

Now run docker build -t username/irssi . (note the trailing dot). This will build the image.

After that you just need to run the container 🙂: docker run -it username/irssi

Later on, you will be able to do the whole setup with just docker run -it fedora/irssi, once Fedora adds Irssi to its Docker Hub :).

After you start the container, Irssi launches and you can join a channel.
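
For example, once Irssi is up you can connect to a network and join a channel with the standard commands (the network and channel below are just examples):

/connect chat.freenode.net
/nick mynick
/join #fedora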

You will find the full list of Irssi commands in the Irssi documentation.

Cockpit Container on Atomic Host


Cockpit is a remote manager for GNU/Linux servers.

  • Cockpit is a server manager that makes it easy to administer your GNU/Linux servers via a web browser.
  • Cockpit makes it easy for any sysadmin to perform simple tasks, such as administering storage, inspecting journals and starting and stopping services.
  • Jumping between the terminal and the web tool is no problem. A service started via Cockpit can be stopped via the terminal. Likewise, if an error occurs in the terminal, it can be seen in the Cockpit journal interface.
  • You can monitor and administer several servers at the same time. Just add them with a single click and your machines will look after their buddies.

The Cockpit team is currently uploading the cockpit container to the Fedora repo on the Docker Hub, but Fedora Release Engineering is working on publishing layered images. We now have a super-privileged container (SPC) for the web service (cockpit-ws) with the bridge, shell, and docker components installed by default on the Atomic host.

This means you can simply run atomic run fedora/cockpitws as root or with sudo and cockpit will be running on port 9090. Try it :).

Getting Started

Boot up Fedora Atomic instance.

Install the Container

Install cockpitws container using atomic.

# atomic install fedora/cockpitws
/usr/bin/docker run -ti --rm --privileged -v /:/host fedora/cockpitws /container/atomic-install
+ sed -e /pam_selinux/d -e /pam_sepermit/d /etc/pam.d/cockpit
+ mkdir -p /host/etc/cockpit/ws-certs.d
+ chmod 755 /host/etc/cockpit/ws-certs.d
+ chown root:root /host/etc/cockpit/ws-certs.d
+ mkdir -p /host/var/lib/cockpit
+ chmod 775 /host/var/lib/cockpit
+ chown root:wheel /host/var/lib/cockpit
+ /bin/mount --bind /host/etc/cockpit /etc/cockpit
+ /usr/sbin/remotectl certificate --ensure

There are a few things going on here in the install method.

Note that we’re exposing the Atomic host's root directory to the container at /host. As an SPC (super-privileged container), this allows the container to access the host filesystem and make changes. The install method creates a set of directories in /etc and /var to persist configs. This means we don't need any particular cockpitws container to stick around; any cockpitws container will be able to read the appropriate state from the host. We can upgrade the cockpit image without worrying about losing data. Since /etc and /var are writable on an Atomic host, and /etc content is appropriately merged on a tree change, cockpit data will survive an Atomic host upgrade as well.

Set up the systemd unit

# vi /etc/systemd/system/cockpitws.service

[Unit]
Description=Cockpit Web Interface
Requires=docker.service
After=docker.service

[Service]
Restart=on-failure
RestartSec=10
ExecStart=/usr/bin/docker run --rm --privileged --pid host -v /:/host --name %p fedora/cockpitws /container/atomic-run --local-ssh
ExecStop=-/usr/bin/docker stop -t 2 %p

[Install]
WantedBy=multi-user.target

With the container available to docker, we’ll build the systemd unit file next. For local systemd unit files, we want them to reside in /etc/systemd/system.

The ExecStart line in the unit file looks nearly identical to the RUN label, with one change. When running containers from systemd, we don't want to use docker -d; instead we want either docker -a or docker --rm. We're using docker --rm here since we don't need this particular container instance to survive a restart. We name the container with the %p tag to pick up the systemd service name, just to make it easier to find in docker ps.

Start the Service

Now we can reload systemd to read the new unit file, enable the service to start on reboot, and then start the new cockpitws service.

  # systemctl daemon-reload
  # systemctl enable cockpitws.service
  Created symlink from /etc/systemd/system/multi-user.target.wants/cockpitws.service to /etc/systemd/system/cockpitws.service.
  # systemctl start cockpitws.service
  # systemctl status cockpitws.service

  ● cockpitws.service - Cockpit Web Interface
  Loaded: loaded (/etc/systemd/system/cockpitws.service; enabled; vendor preset: disabled)
  Active: active (running) since Tue 2016-08-16 12:42:23 UTC; 10s ago
Main PID: 2047 (docker)
   Tasks: 6 (limit: 512)
  Memory: 0B
     CPU: 1ms
  CGroup: /system.slice/cockpitws.service
          └─2047 /usr/bin/docker run --rm --privileged --pid host -v /:/host --name cockpitws fedora/cockpitws /container/atomic-run --local-ssh

  Aug 16 12:42:25 atomic.novalocal docker[2047]: + sed -e /pam_selinux/d -e /pam_sepermit/d /etc/pam.d/cockpit
  Aug 16 12:42:25 atomic.novalocal docker[2047]: + mkdir -p /host/etc/cockpit/ws-certs.d
  Aug 16 12:42:25 atomic.novalocal docker[2047]: + chmod 755 /host/etc/cockpit/ws-certs.d
  Aug 16 12:42:25 atomic.novalocal docker[2047]: + chown root:root /host/etc/cockpit/ws-certs.d
  Aug 16 12:42:25 atomic.novalocal docker[2047]: + mkdir -p /host/var/lib/cockpit
  Aug 16 12:42:25 atomic.novalocal docker[2047]: + chmod 775 /host/var/lib/cockpit
  Aug 16 12:42:25 atomic.novalocal docker[2047]: + chown root:wheel /host/var/lib/cockpit
  Aug 16 12:42:25 atomic.novalocal docker[2047]: + /bin/mount --bind /host/etc/cockpit /etc/cockpit
  Aug 16 12:42:25 atomic.novalocal docker[2047]: + /usr/sbin/remotectl certificate --ensure
  Aug 16 12:42:25 atomic.novalocal docker[2047]: INFO: cockpit-ws: Using certificate: /etc/cockpit/ws-certs.d/0-self-signed.cert

Now that the service is up and running, point your web browser at port 9090 on the Atomic host and you should see the Cockpit login page. You'll need to log in with a user in the wheel group in order to administer the system, but you can log in as any user to view the local host. For the published Fedora Atomic cloud image, log in with the fedora credentials and you should be ready to go. You can also log in as the root user; for that you need to set a password for root on your Atomic instance and then change PasswordAuthentication to yes in /etc/ssh/sshd_config.

You can add other hosts to this Cockpit instance, with the knowledge that reboots and upgrades to the host or the container won’t affect the configuration.

Note that if you are using OpenStack you need to open port 9090 in your security group.
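
For example, with the OpenStack CLI this could be done roughly as follows (the security group name default is an assumption; use the group attached to your instance):

$ openstack security group rule create --protocol tcp --dst-port 9090 default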

I just started a Cockpit container on an Atomic host yesterday.


Getting Started with Atomic Commands

Project Atomic is a framework for building operating systems from RHEL, CentOS, and Fedora; its aim is to create a better OS for running containers.

Why Atomic?

  • For running containers we don't need a full-fledged distribution.
  • There are fewer packages to maintain.

rpm-ostree is a software management tool that combines features of both traditional RPM and OSTree. We can be much more confident about updating the system when we know we have a reliable rollback even after the update. It provides clear transactions for updates: since the whole process is atomic, there is almost no chance of a half-finished update, and hence less chance of breaking the system.

The atomic command defines the entrypoint for Project Atomic hosts.

On Atomic hosts there are two software delivery vehicles:

  • rpm-ostree for managing deployment and updates of the host system.
  • Docker for providing containers that run services and applications.

rpm-ostree makes the filesystem immutable, i.e. read-only, except for /var and /etc. Docker uses /var/lib/docker, where all Docker-related files and images are stored; /etc holds all the configuration files.

Atomic Command: Let’s get Started!

We will first need to have an atomic host running.

  • atomic host upgrade upgrades to a newer version.
  • atomic host rollback rolls back to the previous version.
  • atomic host status displays the status of the installed Atomic host.
  • atomic run <name> lets an image provider specify how a container image expects to be run.
  • atomic install <name> installs a container on the Atomic host with a systemd unit file to run it as a service.
  • atomic uninstall <name> uninstalls the container from the Atomic host.
  • atomic info <name> displays the LABEL information of the image.
  • atomic images lists the container images on your Atomic host.

When we ship an application, we often need to run an install script. Using the Atomic management tool we can embed the install and uninstall scripts within the application itself. In the Dockerfile of our application we add a LABEL INSTALL that points to the docker command which runs the application's executable install script. When we execute atomic install, it runs the LABEL INSTALL command from the Dockerfile to install the application on the Atomic host.
In the same way, to uninstall an application we run atomic uninstall, which runs the LABEL UNINSTALL command from the Dockerfile, pointing to the application's uninstall script. A sketch of such a Dockerfile follows below.
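
A minimal sketch of what such a Dockerfile might look like (the script names and paths here are illustrative assumptions, not from a published image; atomic substitutes IMAGE and NAME when it runs the label command):

FROM fedora
LABEL INSTALL="docker run --rm --privileged -v /:/host -e HOST=/host -e IMAGE=IMAGE -e NAME=NAME IMAGE /usr/bin/install.sh"
LABEL UNINSTALL="docker run --rm --privileged -v /:/host -e HOST=/host -e IMAGE=IMAGE -e NAME=NAME IMAGE /usr/bin/uninstall.sh"
ADD install.sh uninstall.sh /usr/bin/
RUN chmod +x /usr/bin/install.sh /usr/bin/uninstall.sh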

For further reading regarding Install and Uninstall: http://www.projectatomic.io/docs/usr-bin-atomic

Atomic Command Cheat Sheet is now available.


Publish Docker image with Automated build


Docker is an open source tool that helps us develop, ship, and run applications. Every application needs a Dockerfile: instructions are written in the Dockerfile in a layered, step-by-step structure to run the application. The Dockerfile is then built to create a Docker image. We can push the image to Docker Hub or keep it on our workstation. Finally, to run the application we run a container from the image, which can be thought of as an instance of the image, since we can have more than one container from a single image. If the Docker image is not on our local workstation, we first have to pull it from Docker Hub and then run a container from it.

But when we need to change the Dockerfile (as the application changes), we have to build a new Docker image from it again. We could instead change the running container and commit it, but that does not update the Dockerfile, which contains all the instructions for running the application, whether for development or production. Also, building the Docker image locally from the Dockerfile means doing the build, push, and so on manually. Automating the build of the Docker image from a Dockerfile hosted on GitHub or Bitbucket is the solution 🙂 .

In short: we first create a Dockerfile and push it to GitHub or Bitbucket. After authenticating our GitHub account with Docker Hub, we choose the GitHub repository on Docker Hub that contains the Dockerfile. From then on, changes to the Dockerfile trigger a build, which creates a Docker image ready to pull.

I will share an example of making a CentOS image with the Apache httpd server pre-installed.

First we need to create a Dockerfile which can be viewed here.

FROM centos:centos6
MAINTAINER trishnag <trishnaguha17@gmail.com>
RUN yum -y update; yum clean all
RUN yum -y install httpd
RUN echo "This is our new Apache Test Site" >> /var/www/html/index.html
EXPOSE 80
RUN echo "/sbin/service httpd start" >> /root/.bashrc 

Then push the Dockerfile to Github. I have created a repository named CentOS6-Apache and pushed the Dockerfile to it. The repository can be found here.

After doing so

  • Go to DockerHub and Settings —> Linked Account —> Link your Github account.
  • Create —> Create Automated Build —> Create Auto-build Github.
  • Select the repository that contains Dockerfile of your application.
  • Click Create, and the automated build process will start.

After the build completes successfully you will see Success in Build Details. The image is now live on Docker Hub: https://hub.docker.com/r/trishnag/centos6-apache.

Now we have to test the image to make sure the Apache httpd server is actually pre-installed.

docker pull trishnag/centos6-apache  #Pull the image from Dockerhub
docker run -t -i -d trishnag/centos6-apache /bin/bash #Run the container detached, with an interactive bash
docker ps #Get the name of the container

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8bcd1199bb8f trishnag/centos6-apache "/bin/bash" 2 minutes ago Up 2 minutes 80/tcp jovial_cray

docker logs jovial_cray #See what is happening in the container's bash session; shows httpd status
docker inspect jovial_cray | grep IPAddress #Shows the IPAddress
curl 172.17.0.1 #curl the IPAddress we got
This is our new Apache Test Site

We got back the text that we echoed into index.html, so we can conclude that the Apache httpd server is indeed pre-installed in the image.

Now even if we commit changes to the Dockerfile on GitHub, we don't have to worry about the image: the Docker image build starts automatically whenever changes are committed to the Dockerfile. When we pull the newly built image and run a container, we will see those changes.

Automating build process really makes our life easier.
