20151012-4 选题

This commit is contained in:
DeadFire 2015-10-12 16:37:11 +08:00
parent 087ad141bf
commit ee9e3e82c8
3 changed files with 683 additions and 0 deletions


@ -0,0 +1,322 @@
Getting Started with Calico Virtual Private Networking on Docker
================================================================================
Calico is free and open source software for virtual networking in data centers. It takes a pure Layer 3 approach to highly scalable cloud virtual networking and integrates seamlessly with cloud orchestration systems such as OpenStack and Docker clusters to enable secure IP communication between virtual machines and containers. It implements a highly efficient vRouter in each node that takes advantage of the existing Linux kernel forwarding engine. Calico can peer directly with the data center's physical fabric, whether L2 or L3, without NAT, tunnels, on/off ramps, or overlays. Calico runs its own components as Docker containers on each node, which makes it multi-platform and very easy to pack, ship and deploy. Calico offers the following salient features out of the box.
- It can scale to tens of thousands of servers and millions of workloads.
- Calico is easy to deploy, operate and diagnose.
- It is open source software licensed under Apache License version 2 and uses open standards.
- It supports container, virtual machine and bare metal workloads.
- It supports both IPv4 and IPv6 internet protocols.
- It is designed internally to support rich, flexible and secure network policy.
In this tutorial, we'll set up virtual private networking between two nodes running Calico and Docker. Here are some easy steps on how we can do that.
### 1. Installing etcd ###
To get started with Calico virtual private networking, we'll need a Linux machine running etcd. CoreOS ships with etcd preinstalled and preconfigured, so we could simply use CoreOS, but to run Calico on another Linux distribution we'll need to set etcd up ourselves. As we are running Ubuntu 14.04 LTS, we'll first install and configure etcd on our machine. To do so, we'll add the official Calico PPA by running the following command on the machine that will run the etcd server. Here, we'll be installing etcd on our 1st node.
# apt-add-repository ppa:project-calico/icehouse
The primary source of Ubuntu packages for Project Calico based on OpenStack Icehouse, an open source solution for virtual networking in cloud data centers. Find out more at http://www.projectcalico.org/
More info: https://launchpad.net/~project-calico/+archive/ubuntu/icehouse
Press [ENTER] to continue or ctrl-c to cancel adding it
gpg: keyring `/tmp/tmpi9zcmls1/secring.gpg' created
gpg: keyring `/tmp/tmpi9zcmls1/pubring.gpg' created
gpg: requesting key 3D40A6A7 from hkp server keyserver.ubuntu.com
gpg: /tmp/tmpi9zcmls1/trustdb.gpg: trustdb created
gpg: key 3D40A6A7: public key "Launchpad PPA for Project Calico" imported
gpg: Total number processed: 1
gpg: imported: 1 (RSA: 1)
OK
Then, we'll need to edit /etc/apt/preferences and make changes to prefer Calico-provided packages for Nova and Neutron.
# nano /etc/apt/preferences
We'll need to add the following lines into it.
Package: *
Pin: release o=LP-PPA-project-calico-*
Pin-Priority: 100
![Calico PPA Config](http://blog.linoxide.com/wp-content/uploads/2015/10/calico-ppa-config.png)
Next, we'll also add the official BIRD PPA for Ubuntu 14.04 LTS so that bug fixes are available before they land in the Ubuntu repositories.
# add-apt-repository ppa:cz.nic-labs/bird
The BIRD Internet Routing Daemon PPA (by upstream & .deb maintainer)
More info: https://launchpad.net/~cz.nic-labs/+archive/ubuntu/bird
Press [ENTER] to continue or ctrl-c to cancel adding it
gpg: keyring `/tmp/tmphxqr5hjf/secring.gpg' created
gpg: keyring `/tmp/tmphxqr5hjf/pubring.gpg' created
gpg: requesting key F9C59A45 from hkp server keyserver.ubuntu.com
gpg: /tmp/tmphxqr5hjf/trustdb.gpg: trustdb created
gpg: key F9C59A45: public key "Launchpad Datové schránky" imported
gpg: Total number processed: 1
gpg: imported: 1 (RSA: 1)
OK
Now that the PPAs have been added, we'll update the local repository index and then install etcd on our machine.
# apt-get update
To install etcd on our Ubuntu machine, we'll run the following apt command.
# apt-get install etcd python-etcd
### 2. Starting Etcd ###
After the installation is complete, we'll adjust the etcd configuration. Here, we'll edit **/etc/init/etcd.conf** with a text editor and append the **exec /usr/bin/etcd** line with its options so that it looks like the configuration below.
# nano /etc/init/etcd.conf
exec /usr/bin/etcd --name="node1" \
--advertise-client-urls="http://10.130.65.71:2379,http://10.130.65.71:4001" \
--listen-client-urls="http://0.0.0.0:2379,http://0.0.0.0:4001" \
--listen-peer-urls "http://0.0.0.0:2380" \
--initial-advertise-peer-urls "http://10.130.65.71:2380" \
--initial-cluster-token $(uuidgen) \
--initial-cluster "node1=http://10.130.65.71:2380" \
--initial-cluster-state "new"
![Configuring ETCD](http://blog.linoxide.com/wp-content/uploads/2015/10/configuring-etcd.png)
**Note**: In the above configuration, replace 10.130.65.71 and node1 with the private IP address and hostname of your etcd server box. After editing, save and exit the file.
We can get the private IP address of our etcd server by running the following command.
# ifconfig
![ifconfig](http://blog.linoxide.com/wp-content/uploads/2015/10/ifconfig1.png)
With the etcd configuration in place, we'll now start the etcd service on our Ubuntu node. To start the etcd daemon, we'll run the following command.
# service etcd start
After that, we'll check whether etcd is really running. To do so, we'll run the following command.
# service etcd status
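As an extra sanity check (an optional step, not from the original article), we can also query etcd's HTTP API directly. Assuming the advertised client URL configured above, the version endpoint should respond with the running etcd version.
# curl http://10.130.65.71:2379/version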
### 3. Installing Docker ###
Next, we'll install Docker on both of our Ubuntu nodes. To install the latest release of Docker, we'll simply run the following command.
# curl -sSL https://get.docker.com/ | sh
![Docker Engine Installation](http://blog.linoxide.com/wp-content/uploads/2015/10/docker-engine-installation.png)
After the installation is completed, we'll start the Docker daemon to make sure it's running before we move on to Calico.
# service docker restart
docker stop/waiting
docker start/running, process 3056
### 4. Installing Calico ###
We'll now install Calico on our Linux machines in order to run the Calico containers. Calico must be installed on every node that we want to connect to the Calico network. To install Calico, we'll run the following commands with root or sudo permission.
#### On 1st Node ####
# wget https://github.com/projectcalico/calico-docker/releases/download/v0.6.0/calicoctl
--2015-09-28 12:08:59-- https://github.com/projectcalico/calico-docker/releases/download/v0.6.0/calicoctl
Resolving github.com (github.com)... 192.30.252.129
Connecting to github.com (github.com)|192.30.252.129|:443... connected.
...
Resolving github-cloud.s3.amazonaws.com (github-cloud.s3.amazonaws.com)... 54.231.9.9
Connecting to github-cloud.s3.amazonaws.com (github-cloud.s3.amazonaws.com)|54.231.9.9|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 6166661 (5.9M) [application/octet-stream]
Saving to: 'calicoctl'
100%[=========================================>] 6,166,661 1.47MB/s in 6.7s
2015-09-28 12:09:08 (898 KB/s) - 'calicoctl' saved [6166661/6166661]
# chmod +x calicoctl
After making it executable, we'll move the calicoctl binary into our PATH so that it is available as a command from any directory. To do so, we'll run the following command.
# mv calicoctl /usr/bin/
#### On 2nd Node ####
# wget https://github.com/projectcalico/calico-docker/releases/download/v0.6.0/calicoctl
--2015-09-28 12:09:03-- https://github.com/projectcalico/calico-docker/releases/download/v0.6.0/calicoctl
Resolving github.com (github.com)... 192.30.252.131
Connecting to github.com (github.com)|192.30.252.131|:443... connected.
...
Resolving github-cloud.s3.amazonaws.com (github-cloud.s3.amazonaws.com)... 54.231.8.113
Connecting to github-cloud.s3.amazonaws.com (github-cloud.s3.amazonaws.com)|54.231.8.113|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 6166661 (5.9M) [application/octet-stream]
Saving to: 'calicoctl'
100%[=========================================>] 6,166,661 1.47MB/s in 5.9s
2015-09-28 12:09:11 (1022 KB/s) - 'calicoctl' saved [6166661/6166661]
# chmod +x calicoctl
After making it executable, we'll move the calicoctl binary into our PATH so that it is available as a command from any directory. To do so, we'll run the following command.
# mv calicoctl /usr/bin/
Likewise, we'll execute the above commands on every other node we want to add to the network.
### 5. Starting Calico services ###
After we have installed Calico on each of our nodes, we'll start the Calico services. To do so, we'll run the following commands.
#### On 1st Node ####
# calicoctl node
WARNING: Unable to detect the xt_set module. Load with `modprobe xt_set`
WARNING: Unable to detect the ipip module. Load with `modprobe ipip`
No IP provided. Using detected IP: 10.130.61.244
Pulling Docker image calico/node:v0.6.0
Calico node is running with id: fa0ca1f26683563fa71d2ccc81d62706e02fac4bbb08f562d45009c720c24a43
#### On 2nd Node ####
Next, we'll export an environment variable so that our Calico nodes connect to the same etcd server, which is hosted on node1 in our case. To do so, we'll run the following command on each of our nodes.
# export ETCD_AUTHORITY=10.130.61.244:2379
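Note that the exported variable only lasts for the current shell session. If we want it to persist across logins (an optional convenience, not part of the original steps), one option is to append it to the root user's ~/.bashrc as shown below.
# echo 'export ETCD_AUTHORITY=10.130.61.244:2379' >> ~/.bashrc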
Then, we'll start the Calico node container on our second node.
# calicoctl node
WARNING: Unable to detect the xt_set module. Load with `modprobe xt_set`
WARNING: Unable to detect the ipip module. Load with `modprobe ipip`
No IP provided. Using detected IP: 10.130.61.245
Pulling Docker image calico/node:v0.6.0
Calico node is running with id: 70f79c746b28491277e28a8d002db4ab49f76a3e7d42e0aca8287a7178668de4
This command should be executed on every node on which we want to start the Calico services. It starts a calico/node container on the respective node. To check whether the container is running, we'll run the following Docker command.
# docker ps
![Docker Running Containers](http://blog.linoxide.com/wp-content/uploads/2015/10/docker-running-containers.png)
If we see output similar to the one shown above, we can confirm that the Calico containers are up and running.
### 6. Starting Containers ###
Next, we'll start a few containers on each of our nodes running the Calico services. We'll assign a different name to each Ubuntu container; here, workload-A, workload-B, and so on are used as the unique names. To do so, we'll run the following commands.
#### On 1st Node ####
# docker run --net=none --name workload-A -tid ubuntu
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
...
91e54dfb1179: Already exists
library/ubuntu:latest: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.
Digest: sha256:73fbe2308f5f5cb6e343425831b8ab44f10bbd77070ecdfbe4081daa4dbe3ed1
Status: Downloaded newer image for ubuntu:latest
a1ba9105955e9f5b32cbdad531cf6ecd9cab0647d5d3d8b33eca0093605b7a18
# docker run --net=none --name workload-B -tid ubuntu
89dd3d00f72ac681bddee4b31835c395f14eeb1467300f2b1b9fd3e704c28b7d
#### On 2nd Node ####
# docker run --net=none --name workload-C -tid ubuntu
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
...
91e54dfb1179: Already exists
library/ubuntu:latest: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.
Digest: sha256:73fbe2308f5f5cb6e343425831b8ab44f10bbd77070ecdfbe4081daa4dbe3ed1
Status: Downloaded newer image for ubuntu:latest
24e2d5d7d6f3990b534b5643c0e483da5b4620a1ac2a5b921b2ba08ebf754746
# docker run --net=none --name workload-D -tid ubuntu
c6f28d1ab8f7ac1d9ccc48e6e4234972ed790205c9ca4538b506bec4dc533555
Similarly, if we have more nodes, we can run Ubuntu containers on them with the above command, assigning a different container name each time.
### 7. Assigning IP addresses ###
Now that our Docker containers are running on each host, we'll add networking support to them. We'll assign a new IP address to each container using calicoctl, which adds a new network interface to the container with the assigned address. To do so, we'll run the following commands on the hosts running the containers.
#### On 1st Node ####
# calicoctl container add workload-A 192.168.0.1
# calicoctl container add workload-B 192.168.0.2
#### On 2nd Node ####
# calicoctl container add workload-C 192.168.0.3
# calicoctl container add workload-D 192.168.0.4
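To verify that calicoctl really attached a new interface with the assigned address (an optional check we are adding here, assuming the iproute2 tools are present in the ubuntu image), we can inspect a container's interfaces, for example workload-D on the 2nd node.
# docker exec workload-D ip addr show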
### 8. Adding Policy Profiles ###
Now that our containers have network interfaces and IP addresses assigned, we need to add policy profiles to enable networking between them. Once the profiles are added, containers will be able to communicate with each other only if they share a common profile; containers with different profiles won't be able to reach each other. Before we can assign profiles, we first need to create them, which can be done on either host. Here, we'll run the following commands on the 1st node.
# calicoctl profile add A_C
Created profile A_C
# calicoctl profile add B_D
Created profile B_D
After the profiles have been created, we'll simply add each workload to the required profile. In this tutorial, we'll place workload A and workload C in the common profile A_C, and workload B and workload D in the common profile B_D. To do so, we'll run the following commands on our hosts.
#### On 1st Node ####
# calicoctl container workload-A profile append A_C
# calicoctl container workload-B profile append B_D
#### On 2nd Node ####
# calicoctl container workload-C profile append A_C
# calicoctl container workload-D profile append B_D
### 9. Testing the Network ###
After we've added a policy profile to each of our containers using calicoctl, we'll test whether the networking works as expected. We'll take a workload and try to communicate with the other containers running on the same or a different node. Because of the profiles, we should only be able to reach containers that share a common profile: workload A should be able to communicate with C and vice versa, whereas workload A shouldn't be able to reach B or D. To test the network, we'll ping the containers with common profiles from the 1st host, which runs workloads A and B.
We'll first ping workload-C, which has IP 192.168.0.3, from workload-A as shown below.
# docker exec workload-A ping -c 4 192.168.0.3
Then, we'll ping workload-D, which has IP 192.168.0.4, from workload-B as shown below.
# docker exec workload-B ping -c 4 192.168.0.4
![Ping Test Success](http://blog.linoxide.com/wp-content/uploads/2015/10/ping-test-success.png)
Now, we'll check whether we can ping containers with different profiles. We'll ping workload-D, with IP address 192.168.0.4, from workload-A.
# docker exec workload-A ping -c 4 192.168.0.4
After that, we'll try to ping workload-C, with IP address 192.168.0.3, from workload-B.
# docker exec workload-B ping -c 4 192.168.0.3
![Ping Test Failed](http://blog.linoxide.com/wp-content/uploads/2015/10/ping-test-failed.png)
Hence, the workloads with the same profile could ping each other, whereas those with different profiles could not.
### Conclusion ###
Calico is an awesome project providing an easy way to configure a virtual network using the latest Docker technology. It is considered a great open source solution for virtual networking in cloud data centers. People are currently experimenting with Calico on different cloud platforms such as AWS, DigitalOcean and GCE. As Calico is still experimental, a stable version hasn't been released yet and it remains in pre-release. The project provides well-written documentation, tutorials and manuals on its [official documentation site][1].
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/calico-virtual-private-networking-docker/
作者:[Arun Pyasi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/
[1]:http://docs.projectcalico.org/


@ -0,0 +1,111 @@
How to Setup DockerUI - a Web Interface for Docker
================================================================================
Docker is gaining popularity day by day. The idea of running a complete operating system inside a container rather than inside a virtual machine is an awesome technology. Docker has made the lives of millions of system administrators and developers much easier, letting them get their work done in no time. It is an open source technology that provides an open platform to pack, ship, share and run any application as a lightweight container, regardless of which operating system runs on the host. It has no boundaries of language support, frameworks or packaging systems, and it can run anywhere, anytime, from small home computers to high-end servers. Running and managing Docker containers can be a bit difficult and time consuming, so there is a web based application named DockerUI which makes managing and running containers pretty simple. DockerUI is highly beneficial to people who are not familiar with the Linux command line but want to run containerized applications. DockerUI is an open source web based application best known for its beautiful design and its simple interface for running and managing Docker containers.
Here are some easy steps to set up Docker Engine with DockerUI on a Linux machine.
### 1. Installing Docker Engine ###
First of all, we'll install Docker Engine on our Linux machine. Thanks to its developers, Docker is very easy to install on any major Linux distribution. To install Docker Engine, we'll run the command that corresponds to the distribution we are running.
#### On Ubuntu/Fedora/CentOS/RHEL/Debian ####
The Docker maintainers have written an awesome script that installs Docker Engine on Ubuntu 15.04/14.10/14.04, CentOS 6.x/7, Fedora 22, RHEL 7 and Debian 8.x. The script detects the Linux distribution installed on the machine, adds the required repository, updates the local repository index and finally installs Docker Engine and its required dependencies. To install Docker Engine using that script, we'll run the following command as root or with sudo.
# curl -sSL https://get.docker.com/ | sh
#### On OpenSuse/SUSE Linux Enterprise ####
To install Docker Engine on a machine running openSUSE 13.1/13.2 or SUSE Linux Enterprise Server 12, we simply use zypper, as the latest Docker Engine is available in the official repository. To do so, we'll run the following command as root or with sudo.
# zypper in docker
#### On ArchLinux ####
Docker is available in the official Arch Linux repository as well as in the community-maintained AUR, so we have two options to install Docker on Arch Linux. To install it from the official repository, we'll run the following pacman command.
# pacman -S docker
But if we want to install Docker from the Arch User Repository (AUR), then we'll execute the following command.
# yaourt -S docker-git
### 2. Starting Docker Daemon ###
After Docker is installed, we'll start the Docker daemon so that we can run and manage containers. We'll run the command that matches our init system to start the daemon.
#### On SysVinit ####
# service docker start
#### On Systemd ####
# systemctl start docker
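Optionally, if we also want the Docker daemon to start automatically at boot on systemd based systems, we can enable the service as well (an extra step beyond the original instructions).
# systemctl enable docker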
### 3. Installing DockerUI ###
Installing DockerUI is even easier than installing Docker Engine. We just need to pull the dockerui image from the Docker Hub and run it inside a container. To do so, we'll simply run the following command.
# docker run -d -p 9000:9000 --privileged -v /var/run/docker.sock:/var/run/docker.sock dockerui/dockerui
![Starting DockerUI Container](http://blog.linoxide.com/wp-content/uploads/2015/09/starting-dockerui-container.png)
In the above command, since the DockerUI web application server listens on port 9000 by default, we map that port to the host with the -p flag. With the -v flag, we bind-mount the Docker socket into the container. The --privileged flag is required for hosts using SELinux.
After executing the above command, we'll check whether the dockerui container is running with the following command.
# docker ps
![Running Docker Containers](http://blog.linoxide.com/wp-content/uploads/2015/09/running-docker-containers.png)
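As a quick illustration of the -p mapping (a hypothetical variation, not from the original article), if port 9000 is already in use on the host we could publish the UI on another host port, for example 8080, and then browse to http://ip-address:8080 instead.
# docker run -d -p 8080:9000 --privileged -v /var/run/docker.sock:/var/run/docker.sock dockerui/dockerui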
### 4. Pulling an Image ###
Currently, we cannot pull an image directly from DockerUI, so we'll pull a Docker image from the Linux console/terminal. To do so, we'll run the following command.
# docker pull ubuntu
![Docker Image Pull](http://blog.linoxide.com/wp-content/uploads/2015/10/docker-image-pull.png)
The above command pulls the image tagged ubuntu from the official [Docker Hub][1]. Similarly, we can pull any other images we require that are available on the hub.
### 5. Managing with DockerUI ###
After we have started the dockerui container, we can use it to start, pause, stop and remove Docker containers and images, and perform many other actions that DockerUI offers. First of all, we'll open the web application in our browser by pointing it to http://ip-address:9000 or http://mydomain.com:9000, depending on the configuration of our system. To start a container, we first need an image of the application we want to run. By default there is no login authentication for user access, but we can put a web server in front of DockerUI to add one, as sketched below.
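As a rough sketch of how such authentication could look (this is our own assumption, not part of the original article, and it presumes nginx and the htpasswd utility are installed), an nginx reverse proxy with HTTP basic authentication in front of DockerUI might be configured like this.
# htpasswd -c /etc/nginx/.htpasswd admin
# nano /etc/nginx/conf.d/dockerui.conf
server {
    listen 80;
    server_name mydomain.com;
    location / {
        auth_basic "DockerUI";
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_pass http://127.0.0.1:9000;
    }
}
After reloading nginx (for example with # service nginx reload), DockerUI would then be reachable on port 80 behind a username and password prompt.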
#### Create a Container ####
To create a container, we go to the section named Images and click on the ID of the image we want to create a container from. After clicking on the required image ID, we click the Create button and are asked to enter the required properties for our container. Once everything is set, we click Create again to finally create the container.
![Creating Docker Container](http://blog.linoxide.com/wp-content/uploads/2015/10/creating-docker-container.png)
#### Stop a Container ####
To stop a container, we go to the Containers page and select the container we want to stop. Then we click the Stop option, which we can find in the Actions drop-down menu.
![Managing Container](http://blog.linoxide.com/wp-content/uploads/2015/10/managing-container.png)
#### Pause and Resume ####
To pause a container, we simply tick the check mark next to the container we want to pause and then click the Pause option under Actions. This pauses the running container; we can later resume it by selecting the Unpause option from the Actions drop-down menu.
#### Kill and Remove ####
Just like the tasks above, it's pretty easy to kill and remove a container or an image. We just check/select the required container or image and then click the Kill or Remove button as needed.
### Conclusion ###
DockerUI makes beautiful use of the Docker Remote API to provide an awesome web interface for managing Docker containers. The developers have built the application in pure HTML and JavaScript. It is currently incomplete and under heavy development, so we don't recommend it for production use yet. It makes it pretty easy for users to manage their containers and images with simple clicks, without needing to execute lines of commands for small jobs. If we want to contribute to DockerUI, we can simply visit its [GitHub repository][2]. If you have any questions, suggestions or feedback, please write them in the comment box below so that we can improve or update our content. Thank you!
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/setup-dockerui-web-interface-docker/
作者:[Arun Pyasi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/
[1]:https://hub.docker.com/
[2]:https://github.com/crosbymichael/dockerui/


@ -0,0 +1,250 @@
How to Setup Red Hat Ceph Storage on CentOS 7.0
================================================================================
Ceph is an open source software platform that stores data on a single distributed computer cluster. When you are planning to build a cloud, on top of the other requirements you have to decide how to implement your storage. Open source Ceph is one of Red Hat's mature technologies, based on an object-store system called RADOS, with a set of gateway APIs that present the data in block, file, and object modes. As a result of its open source nature, this portable storage platform may be installed and used in public or private clouds. The topology of a Ceph cluster is designed around replication and information distribution, which are intrinsic to the system and provide data integrity. It is designed to be fault-tolerant and can run on commodity hardware, but it can also run on a number of more advanced systems with the right setup.
Ceph can be installed on any Linux distribution, but it requires a recent kernel and other up-to-date libraries to run properly. In this tutorial we will be using CentOS 7.0 with a minimal set of installation packages.
### System Resources ###
**CEPH-STORAGE**
OS: CentOS Linux 7 (Core)
RAM: 1 GB
CPU: 1 CPU
DISK: 20 GB
Network: 45.79.136.163
FQDN: ceph-storage.linoxide.com
**CEPH-NODE**
OS: CentOS Linux 7 (Core)
RAM: 1 GB
CPU: 1 CPU
DISK: 20 GB
Network: 45.79.171.138
FQDN: ceph-node.linoxide.com
### Pre-Installation Setup ###
There are a few steps we need to perform on each of our nodes before setting up the Ceph storage. The first thing is to make sure that each node has its networking configured with an FQDN that is reachable from the other nodes.
**Configure Hosts**
To set up the hosts entries on each node, let's open the default hosts configuration file as shown below.
# vi /etc/hosts
----------
45.79.136.163 ceph-storage ceph-storage.linoxide.com
45.79.171.138 ceph-node ceph-node.linoxide.com
**Install VMware Tools**
If you are working in a VMware virtual environment, it's recommended to install the open VM tools. You can install them using the command below.
#yum install -y open-vm-tools
**Firewall Setup**
If you are working in a restrictive environment where the local firewall is enabled, make sure the following ports are allowed on your Ceph storage admin node and client nodes.
You must open ports 80, 2003, and 4505-4506 on your admin Calamari node, and allow inbound port 80 to the Ceph admin or Calamari node so that clients in your network can access the Calamari web user interface.
You can start and enable the firewall on CentOS 7 with the commands given below.
#systemctl start firewalld
#systemctl enable firewalld
To allow the mentioned ports on the admin Calamari node, run the following commands.
#firewall-cmd --zone=public --add-port=80/tcp --permanent
#firewall-cmd --zone=public --add-port=2003/tcp --permanent
#firewall-cmd --zone=public --add-port=4505-4506/tcp --permanent
#firewall-cmd --reload
On the Ceph monitor nodes you have to allow the following port in the firewall.
#firewall-cmd --zone=public --add-port=6789/tcp --permanent
Then allow the following range of default ports for talking to clients and monitors and for sending data to other OSDs.
#firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
It is quite reasonable to disable the firewall and SELinux if you are working in a non-production environment, so we are going to disable both in our test environment.
#systemctl stop firewalld
#systemctl disable firewalld
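The commands above only stop the firewall; to also put SELinux into permissive mode as mentioned (our own addition following standard CentOS practice, not shown in the original), something like the following would do it.
#setenforce 0
#sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config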
**System Update**
Now update your system and then reboot it to apply the required changes.
#yum update
#shutdown -r 0
### Setup CEPH User ###
Now we will create a separate sudo user that will be used to install the ceph-deploy utility on each node, and give that user passwordless access to each node, because ceph-deploy needs to install software and configuration files without prompting for passwords on the Ceph nodes.
To create a new user with its own home directory, run the commands below on the ceph-storage host.
[root@ceph-storage ~]# useradd -d /home/ceph -m ceph
[root@ceph-storage ~]# passwd ceph
Each user created on the nodes must have sudo rights; you can grant them by running the following commands as shown.
[root@ceph-storage ~]# echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
ceph ALL = (root) NOPASSWD:ALL
[root@ceph-storage ~]# sudo chmod 0440 /etc/sudoers.d/ceph
### Setup SSH-Key ###
Now we will generate SSH keys on the admin ceph-node and then copy the key to each Ceph cluster node.
Let's run the following command on ceph-node to copy its SSH key to ceph-storage.
[root@ceph-node ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
5b:*:*:*:*:*:*:*:*:*:c9 root@ceph-node
The key's randomart image is:
+--[ RSA 2048]----+
----------
[root@ceph-node ~]# ssh-copy-id ceph@ceph-storage
![SSH key](http://blog.linoxide.com/wp-content/uploads/2015/10/k3.png)
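Optionally (this reflects common ceph-deploy practice and is our own addition, not a step from the original article), adding an entry to ~/.ssh/config on the admin ceph-node saves typing the username on every connection.
#vim ~/.ssh/config
----------
Host ceph-storage
    Hostname ceph-storage.linoxide.com
    User ceph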
### Configure PID Count ###
To configure the PID count value, we will first check the default kernel value. By default, the maximum number of threads is a relatively small 32768.
So we will raise this value to allow a higher number of threads by editing the system configuration file as shown in the image below.
![Change PID Value](http://blog.linoxide.com/wp-content/uploads/2015/10/3-PID-value.png)
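As a concrete sketch of that change (the exact value here is our assumption, since the article only shows it in the image), we check the current limit and then raise kernel.pid_max in /etc/sysctl.conf.
#cat /proc/sys/kernel/pid_max
32768
#echo 'kernel.pid_max = 4194303' >> /etc/sysctl.conf
#sysctl -p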
### Setup Your Administration Node Server ###
With all the networking set up and verified, we will now install ceph-deploy as the user ceph. First, check the hosts entries by opening the file.
#vim /etc/hosts
45.79.136.163 ceph-storage
45.79.171.138 ceph-node
Now, to add the Ceph repository, run the command below.
#rpm -Uhv http://ceph.com/rpm-giant/el7/noarch/ceph-release-1-0.el7.noarch.rpm
![Adding EPEL](http://blog.linoxide.com/wp-content/uploads/2015/10/k1.png)
Or create a new file and set the Ceph repository parameters, not forgetting to substitute your current release and distribution.
[root@ceph-storage ~]# vi /etc/yum.repos.d/ceph.repo
----------
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-{ceph-release}/{distro}/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
After this, update your system and install the ceph-deploy package.
### Installing CEPH-Deploy Package ###
To update the system with the latest Ceph repository and other packages, we will run the following command together with the ceph-deploy installation command.
#yum update -y && yum install ceph-deploy -y
### Setup the cluster ###
Create a new directory on the admin ceph-node and move into it to collect all output files and logs, using the following commands.
#mkdir ~/ceph-cluster
#cd ~/ceph-cluster
----------
#ceph-deploy new storage
![setup ceph cluster](http://blog.linoxide.com/wp-content/uploads/2015/10/k4.png)
Upon successful execution of the above command you can see it creating its configuration files.
Now, to adjust the default Ceph configuration file, open it with any editor and place the following two lines under its global section, reflecting your public network.
#vim ceph.conf
osd pool default size = 1
public network = 45.79.0.0/16
### Installing CEPH ###
We are now going to install Ceph on each node associated with our Ceph cluster. To do so, we use the following command to install Ceph on both of our nodes, ceph-storage and ceph-node, as shown below.
#ceph-deploy install ceph-node ceph-storage
![installing ceph](http://blog.linoxide.com/wp-content/uploads/2015/10/k5.png)
This will take some time while it processes all the required repositories and installs the required packages.
Once the Ceph installation is complete on both nodes, we will proceed to create the monitor and gather keys by running the following command on the same node.
#ceph-deploy mon create-initial
![CEPH Initial Monitor](http://blog.linoxide.com/wp-content/uploads/2015/10/k6.png)
### Setup OSDs and OSD Daemons ###
Now we will set up the disk storage. First, run the command below to list all of your usable disks.
#ceph-deploy disk list ceph-storage
The output will list the disks on your storage node that you can use for creating OSDs. Let's run the following commands with your disk names as shown below.
#ceph-deploy disk zap storage:sda
#ceph-deploy disk zap storage:sdb
Now, to finalize the OSD setup, let's run the commands below to set up the journaling disk along with the data disk.
#ceph-deploy osd prepare storage:sdb:/dev/sda
#ceph-deploy osd activate storage:/dev/sdb1:/dev/sda1
You will have to repeat the same commands on all the nodes; note that they will wipe everything present on the disks. Afterwards, to have a functioning cluster, we need to copy the keys and configuration files from the admin ceph-node to all the associated nodes by using the following command.
#ceph-deploy admin ceph-node ceph-storage
### Testing CEPH ###
We have almost completed the Ceph cluster setup; let's run the commands below on the admin ceph-node to check the status of the running cluster.
#ceph status
#ceph health
HEALTH_OK
So, if you did not get any error message from ceph status, that means you have successfully set up your Ceph storage cluster on CentOS 7.
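For a slightly more detailed look than ceph health gives (an optional check that is not part of the original walkthrough), the standard commands below show the OSD tree and the cluster's capacity usage.
#ceph osd tree
#ceph df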
### Conclusion ###
In this detailed article we learned how to set up a Ceph storage cluster using two virtual machines running CentOS 7, which can be used as backup or as local storage for other virtual machines by creating pools on it. We hope you have found this article helpful. Do share your experiences when you try this at your end.
--------------------------------------------------------------------------------
via: http://linoxide.com/storage/setup-red-hat-ceph-storage-centos-7-0/
作者:[Kashif Siddique][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/kashifs/