mirror of https://github.com/LCTT/TranslateProject.git
synced 2025-03-21 02:10:11 +08:00

commit cb18a8be80 (parent 711207878d): 20140128-1 article selection (选题)

77  sources/How To Install Gnome 3.10 In Ubuntu 13.10.md  Normal file
@@ -0,0 +1,77 @@
How To Install Gnome 3.10 In Ubuntu 13.10
================================================================================

![Gnome 3.10 in Ubuntu 13.10](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2014/01/Gnome_3_10_Ubuntu_1.jpeg)

Bored of Unity or simply dislike it? Why not **install Gnome 3.10 in Ubuntu 13.10**? Installing a new desktop environment is one of the first few [things to do after installing Ubuntu 13.10][1], if you like experimenting a bit. In this quick tutorial we shall see **how to install Gnome 3.10 in Ubuntu 13.10**.

### Install Gnome 3.10 in Ubuntu 13.10: ###

We shall be using several PPAs to install Gnome 3.10, and the distribution upgrade will take some time to finish. I presume you have a good internet connection; if not, you can use some of these [tips to improve system performance in Ubuntu 13.10][2].

#### Step 1: Install GDM [Optional] ####

The first step is to install [GDM][3] alongside the default [LightDM][4]. This is optional but recommended, as some people have reported issues with LightDM. Open a terminal (Ctrl+Alt+T) and use the following command:

    sudo apt-get install gdm
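When GDM is installed alongside LightDM, the package setup normally asks which display manager should be the default. If that prompt is dismissed, the choice can be revisited later through debconf; this is a generic Debian/Ubuntu mechanism, offered as a hedged aside rather than a step from the original tutorial:

```shell
# Re-open the "default display manager" selection dialog (Debian/Ubuntu).
# Pick gdm or lightdm in the dialog that appears.
sudo dpkg-reconfigure gdm
```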
#### Step 2: Add PPAs and upgrade the system ####

Now it is time to add the Gnome 3.10 PPAs. Adding the PPAs will be followed by a distribution upgrade, which takes time and downloads over 200 MB of data.

    sudo add-apt-repository ppa:gnome3-team/gnome3-next
    sudo add-apt-repository ppa:gnome3-team/gnome3-staging
    sudo apt-get update
    sudo apt-get dist-upgrade

#### Step 3: Install Gnome Shell ####

Once the upgrade is done, use the following command to install Gnome 3.10 in Ubuntu:

    sudo apt-get install gnome-shell

#### Step 4: Install Gnome specific apps [Optional] ####

This step too is optional. You may want to install some Gnome-specific applications to get the full feel of Gnome 3.10 in Ubuntu. You may face issues with some of these apps.

    sudo apt-get install gnome-weather gnome-music gnome-maps gnome-documents gnome-boxes gnome-shell-extensions gnome-tweak-tool gnome-clocks

That is all you need to do. Restart your computer and, at the login screen, choose Gnome by clicking on the gear symbol. Here is what Gnome 3.10 looks like on my laptop:

![Gnome 3.10 desktop in Ubuntu 13.10](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2014/01/Gnome_3_10_Ubuntu.jpeg)

### Uninstall Gnome 3.10: ###

Did not like Gnome 3.10? No worries. Uninstall it by [purging the PPAs][5]. To do that, you need to install PPA Purge (if it is not installed already). Use the following command:

    sudo apt-get install ppa-purge

Afterwards, purge the PPAs you added:

    sudo ppa-purge ppa:gnome3-team/gnome3-staging
    sudo ppa-purge ppa:gnome3-team/gnome3-next

This will revert Gnome 3.10 to Gnome 3.8, which is available in the Ubuntu 13.10 repository. To completely remove Gnome 3, use the following command:

    sudo apt-get remove gnome-shell ubuntu-gnome-desktop

And of course, you should remove any application that you installed specifically for Gnome 3.10.

I hope this tutorial helped you to install Gnome 3.10 in Ubuntu 13.10. Did you try Gnome 3.10 already? Which do you like more, Gnome or Unity?

--------------------------------------------------------------------------------

via: http://itsfoss.com/install-gnome-3-ubuntu-1310/

译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[1]:http://itsfoss.com/things-to-do-after-installing-ubuntu-13-10/
[2]:http://itsfoss.com/speed-up-ubuntu-1310/
[3]:https://wiki.gnome.org/Projects/GDM
[4]:http://en.wikipedia.org/wiki/LightDM
[5]:http://itsfoss.com/how-to-remove-or-delete-ppas-quick-tip/
@@ -0,0 +1,34 @@
Markdown Text Editor CuteMarkEd 0.9.0 Gets New Options
================================================================================
![CuteMarkEd](http://i1-news.softpedia-static.com/images/news2/Markdown-Text-Editor-CuteMarkEd-0-9-0-Gets-News-Options-421082-2.jpg)

**CuteMarkEd 0.9.0, a Qt-based, free, and open source markdown editor with live HTML preview, has been released and is available for download.**

CuteMarkEd is a very useful Qt text editor that provides support for math expressions, code syntax highlighting, and syntax highlighting of markdown documents.

### Highlights of CuteMarkEd 0.9.0: ###

- A snippets system has been added;
- A "Go to Line" menu item has been added;
- The new options "case sensitive," "whole words only," and "use regular expressions" have been added to the find/replace functionality;
- Support has been implemented for adding the selected word to a user dictionary;
- An option to change the width of tab characters has been added.

Check the complete list of changes and improvements in the official [announcement][1].

Download CuteMarkEd 0.9.0 right now:

- [CuteMarkEd 0.9.0 tar.gz][2] [sources] [372 KB]

Remember that this is a development version and it should NOT be installed on production machines. It is intended for testing purposes only.
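If you want to try the release from the source tarball linked above, a qmake-based Qt project such as CuteMarkEd can usually be built along these lines. This is a hedged sketch, not the project's official build instructions; it assumes the Qt development tools and the project's dependencies are already installed:

```shell
# Unpack the release tarball and run a standard qmake build.
tar xzf v0.9.0.tar.gz
cd CuteMarkEd-0.9.0
qmake     # generate Makefiles from the project file
make      # compile the editor
```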
--------------------------------------------------------------------------------

via: http://news.softpedia.com/news/Markdown-Text-Editor-CuteMarkEd-0-9-0-Gets-News-Options-421082.shtml

译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[1]:http://qt-apps.org/content/show.php/CuteMarkEd?content=158801
[2]:https://github.com/cloose/CuteMarkEd/archive/v0.9.0.tar.gz
@@ -0,0 +1,19 @@
Should Canonical Drop the Current Background Theme for Ubuntu 14.04 LTS?
================================================================================
![Ubuntu desktop](http://i1-news.softpedia-static.com/images/news2/Should-Canonical-Drop-the-Curent-Background-Theme-for-Ubuntu-14-04-LTS-420737-2.jpg)

Ubuntu has been sporting the same kind of background for years, but the upcoming Ubuntu 14.04 LTS (Trusty Tahr) could be the perfect time for a change of scenery.

The Ubuntu design team has always aimed at keeping the background simple and familiar. As a rule of thumb, you need to make sure that people recognize the operating system at a glance, just by looking at the colors of the desktop.

The last major change in this direction came with the launch of Ubuntu 10.04 LTS (Lucid Lynx). After Lucid Lynx, the backgrounds have evolved from one version to another in small increments.

Ubuntu 14.04 LTS (Trusty Tahr) might be the time to shake things up a bit. Canonical is also preparing a face-lift for the icons and Unity 7. What better moment to make Ubuntu 14.04 LTS stand apart from all the ones that came before it?

--------------------------------------------------------------------------------

via: http://news.softpedia.com/news/Should-Canonical-Drop-the-Curent-Background-Theme-for-Ubuntu-14-04-LTS-420737.shtml

译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
87  sources/Two Pi R 2--Web Servers.md  Normal file
@@ -0,0 +1,87 @@
Two Pi R 2: Web Servers
================================================================================
In my last [article][1] [Note: covered in the companion piece "Two Pi R"], I talked about how even though an individual Raspberry Pi is not that redundant, two Pis are. I described how to set up two Raspberry Pis as a fault-tolerant file server using the GlusterFS clustered filesystem. Well, now that we have redundant, fault-tolerant storage shared across two Raspberry Pis, we can use that as a foundation to build other fault-tolerant services. In this article, I describe how to set up a simple Web server cluster on top of the Raspberry Pi foundation we already have.

Just in case you didn't catch the first column, I'll go over the setup from last month. I have two Raspberry Pis: Pi1 and Pi2. Pi1 has an IP address of 192.168.0.121, and Pi2 has 192.168.0.122. I've set them up as a GlusterFS cluster, and they are sharing a volume named gv0 between them. I also mounted this shared volume on both machines at /mnt/gluster1, so they each could access the shared storage at the same time. Finally, I performed some failure testing. I mounted this shared storage on a third machine and launched a simple script that wrote the date to a file on the shared storage. Then, I experimented with taking down each Raspberry Pi individually to confirm the storage stayed up.

Now that I have the storage up and tested, I'd like to set up these Raspberry Pis as a fault-tolerant Web cluster. Granted, Raspberry Pis don't have speedy processors or a lot of RAM, but they still have more than enough resources to act as a Web server for static files. Although the example I'm going to give is very simplistic, that's intentional—the idea is that once you have validated that a simple static site can be hosted on redundant Raspberry Pis, you can expand that with some more sophisticated content yourself.

### Install Nginx ###

Although I like Apache just fine, for a limited-resource Web server serving static files, something like nginx has the right blend of features, speed and low resource consumption that makes it ideal for this site. Nginx is available in the default Raspbian package repository, so I log in to the first Raspberry Pi in the cluster and run:

    $ sudo apt-get update
    $ sudo apt-get install nginx

Once nginx was installed, I created a new basic nginx configuration at /mnt/gluster1/cluster that contains the following config:

    server {
        root /mnt/gluster1/www;
        index index.html index.htm;
        server_name twopir twopir.example.com;

        location / {
            try_files $uri $uri/ /index.html;
        }
    }

Note: I decided to name the service twopir, but you would change this to whatever hostname you want to use for the site. Also notice that I set the document root to /mnt/gluster1/www. This way, I can put all of my static files onto shared storage so they are available from either host.

Now that I have an nginx config, I need to move the default nginx config out of the way and set up this config to be the default. Under Debian, nginx organizes its files a lot like Apache, with sites-available and sites-enabled directories. Virtual host configs are stored in sites-available, and sites-enabled contains symlinks to those configs that you want to enable. Here are the steps I performed on the first Raspberry Pi:

    $ cd /etc/nginx/sites-available
    $ sudo ln -s /mnt/gluster1/cluster .
    $ cd /etc/nginx/sites-enabled
    $ sudo rm default
    $ sudo ln -s /etc/nginx/sites-available/cluster .

Now I have a configuration in place but no document root to serve. The next step is to create a /mnt/gluster1/www directory and copy over the default nginx index.html file to it. Of course, you probably would want to create your own custom index.html file here instead, but copying a file is a good start:

    $ sudo mkdir /mnt/gluster1/www
    $ cp /usr/share/nginx/www/index.html /mnt/gluster1/www

With the document root in place, I can restart the nginx service:

    $ sudo /etc/init.d/nginx restart

Now I can go to my DNS server and make sure I have an A record for twopir that points to my first Raspberry Pi at 192.168.0.121. In your case, of course, you would update your DNS server with your hostname and IP. Now I would open up http://twopir/ in a browser and confirm that I see the default nginx page. If I look at the /var/log/nginx/access.log file, I should see evidence that I hit the page.

Once I've validated that the Web server works on the first Raspberry Pi, it's time to duplicate some of the work on the second Raspberry Pi. Because I'm storing configurations on the shared GlusterFS storage, really all I need to do is install nginx, create the proper symlinks to enable my custom nginx config and restart nginx:

    $ sudo apt-get update
    $ sudo apt-get install nginx
    $ cd /etc/nginx/sites-available
    $ sudo ln -s /mnt/gluster1/cluster .
    $ cd /etc/nginx/sites-enabled
    $ sudo rm default
    $ sudo ln -s /etc/nginx/sites-available/cluster .
    $ sudo /etc/init.d/nginx restart

### Two DNS A Records ###

So, now I have two Web hosts that can host the same content, but the next step in this process is an important part of what makes this setup redundant. Although you definitely could set up a service like heartbeat with some sort of floating IP address that changed from one Raspberry Pi to the next depending on what was up, an even better approach is to use two DNS A records for the same hostname that point to each of the Raspberry Pi IPs. Some people refer to this as DNS load balancing, because by default, DNS lookups for a hostname that has multiple A records will return the results in random order each time you make the request:

    $ dig twopir.example.com A +short
    192.168.0.121
    192.168.0.122
    $ dig twopir.example.com A +short
    192.168.0.122
    192.168.0.121

Because the results are returned in random order, clients should get sent evenly between the different hosts, and in effect, multiple A records do result in a form of load balancing. What interests me about a host having multiple A records, though, isn't so much the load balancing as how a Web browser handles failure. When a browser gets two A records for a Web host, and the first host is unavailable, the browser almost immediately will fail over to the next A record in the list. This failover is fast enough that in many cases it's imperceptible to the user and definitely is much faster than the kind of failover you might see in a traditional heartbeat cluster.

So, go to the same DNS server you used to add the first A record and add a second record that references the same hostname but a different IP address—the IP address of the second host in the cluster. Once you save your changes, perform a dig query like the one above, and you should get two IP addresses back.
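For readers who manage their own zone files, the pair of records might look like this in BIND zone-file syntax. This is a sketch reusing the article's example hostname and IPs; your zone origin and TTLs are left out:

```
twopir    IN    A    192.168.0.121
twopir    IN    A    192.168.0.122
```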
Once you have two A records set up, the cluster is basically ready for use and is fault-tolerant. Open two terminals, log in to each Raspberry Pi, and run `tail -f /var/log/nginx/access.log` so you can watch the Web server access logs; then load your page in a Web browser. You should see activity in the access logs on one of the servers but not the other. Now refresh a few times, and you'll notice that your browser should be sticking to a single Web server. After you feel satisfied that your requests are going to that server successfully, reboot it while refreshing the Web page multiple times. If you see a blip at all, it should be a short one, because the moment the Web server drops, you should be redirected to the second Raspberry Pi and be able to see the same index page. You also should see activity in its access logs. Once the first Raspberry Pi comes back from the reboot, you probably will not even notice from the perspective of the Web browser.

Experiment with rebooting one Raspberry Pi at a time, and you should see that as long as you have one server available, the site stays up. Although this is a simplistic example, all you have to do now is copy over any other static Web content you want to serve into /mnt/gluster1/www, and enjoy your new low-cost fault-tolerant Web cluster.

--------------------------------------------------------------------------------

via: http://www.linuxjournal.com/content/two-pi-r-2-web-servers

译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[1]:http://www.linuxjournal.com/content/two-pi-r
149  sources/Two Pi R.md  Normal file
@@ -0,0 +1,149 @@
Two Pi R
================================================================================
Although many people are excited about the hardware-hacking possibilities with the Raspberry Pi, one of the things that interests me most is the fact that it is essentially a small low-power Linux server I can use to replace other Linux servers I already have around the house. In previous columns, I've talked about using the Raspberry Pi to replace the server that controls my beer fridge and colocating a Raspberry Pi in Austria. After I colocated a Raspberry Pi in Austria, I started thinking about the advantages and disadvantages of using something with so many single points of failure as a server I relied on, so I started thinking about ways to handle that single point of failure. When you see "Two Pi R", you probably think the R stands for the radius of a circle. To me, it stands for redundancy. I came to the conclusion that although one Pi isn't redundant, two Pi are.

So, in this article, I'm building the foundation for setting up redundant services with a pair of Raspberry Pis. I start with setting up a basic clustered network filesystem using GlusterFS. In later articles, I'll follow up with how to take advantage of this shared storage to set up other redundant services. Of course, although I'm using a Raspberry Pi for this article, these same steps should work with other hardware as well.

### Configure the Raspberry Pis ###

To begin, I got two SD cards and loaded them with the latest version of the default Raspberry Pi distribution from the official Raspberry Pi downloads page, the Debian-based Raspbian. I followed the documentation to set up the image and then booted into both Raspberry Pis while they were connected to a TV to make sure that the OS booted and that SSH was set to start by default (it should be). You probably also will want to use the raspi-config tool to expand the root partition to fill the SD card, since you will want all that extra space for your redundant storage. After I confirmed I could access the Raspberry Pis remotely, I moved them away from the TV and over to a switch and rebooted them without a display connected.

By default, Raspbian will get its network information via DHCP; however, if you want to set up redundant services, you will want your Raspberry Pis to keep the same IP every time they boot. In my case, I updated my DHCP server so that it handed out the same IP to my Raspberry Pis every time they booted, but you also could edit the /etc/network/interfaces file on your Raspberry Pi and change:

    iface eth0 inet dhcp

to:

    auto eth0
    iface eth0 inet static
        address 192.168.0.121
        netmask 255.255.255.0
        gateway 192.168.0.1

Of course, modify the networking information to match your personal network, and make sure that each Raspberry Pi uses a different IP. I also changed the hostnames of each Raspberry Pi, so I could tell them apart when I logged in. To do this, just edit /etc/hostname as root and change the hostname to what you want. Then, reboot to make sure that each Raspberry Pi comes up with the proper network settings and hostname.

### Configure the GlusterFS Server ###

GlusterFS is a userspace clustered filesystem that I chose for this project because of how simple it makes configuring shared network filesystems. To start, choose a Raspberry Pi that will act as your master. What little initial setup you need to do will be done from the master node, even though once things are set up, nodes should fail over automatically. Here is the information about my environment:

    Master hostname: pi1
    Master IP: 192.168.0.121
    Master brick path: /srv/gv0
    Secondary hostname: pi2
    Secondary IP: 192.168.0.122
    Secondary brick path: /srv/gv0

Before you do anything else, log in to each Raspberry Pi, and install the glusterfs-server package:

    $ sudo apt-get install glusterfs-server

GlusterFS stores its files in what it calls bricks. A brick is a directory path on the server that you set aside for gluster to use. GlusterFS then combines bricks to create volumes that are accessible to clients. GlusterFS potentially can stripe data for a volume across bricks, so although a brick may look like a standard directory full of files, once you start using it with GlusterFS, you will want to modify it only via clients, not directly on the filesystem itself. In the case of the Raspberry Pi, I decided just to create a new directory called /srv/gv0 for my first brick on both Raspberry Pis:

    $ sudo mkdir /srv/gv0

In this case, I will be sharing my standard SD card root filesystem, but in your case, you may want more storage. In that situation, connect a USB hard drive to each Raspberry Pi, make sure the disks are formatted, and then mount them under /srv/gv0. Just make sure that you update /etc/fstab so that it mounts your external drive at boot time. It's not required that the bricks be on the same directory path or have the same name, but the consistency doesn't hurt.
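For the external-drive variant, the /etc/fstab entry might look like the following. This is a sketch assuming the USB disk shows up as /dev/sda1 and was formatted as ext4; adjust the device name and filesystem type to match your setup:

```
/dev/sda1    /srv/gv0    ext4    defaults,noatime    0    2
```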
After the brick directory is available on each Raspberry Pi and the glusterfs-server package has been installed, make sure both Raspberry Pis are powered on. Then, log in to whatever node you consider the master, and use the `gluster peer probe` command to tell the master to trust the IP or hostname that you pass it as a member of the cluster. In this case, I will use the IP of my secondary node, but if you are fancy and have DNS set up, you also could use its hostname instead:

    pi@pi1 ~ $ sudo gluster peer probe 192.168.0.122
    Probe successful

Now that my pi1 server (192.168.0.121) trusts pi2 (192.168.0.122), I can create my first volume, which I will call gv0. To do this, I run the `gluster volume create` command from the master node:

    pi@pi1 ~ $ sudo gluster volume create gv0 replica 2 \
        192.168.0.121:/srv/gv0 192.168.0.122:/srv/gv0
    Creation of volume gv0 has been successful. Please start
    the volume to access data.

Let's break this command down a bit. The first part, `gluster volume create`, tells the gluster command I'm going to create a new volume. The next argument, `gv0`, is the name I want to assign the volume. That name is what clients will use to refer to the volume later on. After that, the `replica 2` argument configures this volume to use replication instead of striping data between bricks. In this case, it will make sure any data is replicated across two bricks. Finally, I define the two individual bricks I want to use for this volume: the /srv/gv0 directory on 192.168.0.121 and the /srv/gv0 directory on 192.168.0.122.

Now that the volume has been created, I just need to start it:

    pi@pi1 ~ $ sudo gluster volume start gv0
    Starting volume gv0 has been successful

Once the volume has been started, I can use the `volume info` command on either node to see its status:

    $ sudo gluster volume info

    Volume Name: gv0
    Type: Replicate
    Status: Started
    Number of Bricks: 2
    Transport-type: tcp
    Bricks:
    Brick1: 192.168.0.121:/srv/gv0
    Brick2: 192.168.0.122:/srv/gv0

### Configure the GlusterFS Client ###

Now that the volume is started, I can mount it as a GlusterFS-type filesystem from any client that has GlusterFS support. First, though, I will want to mount it from my two Raspberry Pis, as I want them to be able to write to the volume themselves. To do this, I will create a new mountpoint on my filesystem on each Raspberry Pi and use the mount command to mount the volume on it:

    $ sudo mkdir -p /mnt/gluster1
    $ sudo mount -t glusterfs 192.168.0.121:/gv0 /mnt/gluster1
    $ df
    Filesystem          1K-blocks    Used  Available Use% Mounted on
    rootfs                1804128  1496464    216016  88% /
    /dev/root             1804128  1496464    216016  88% /
    devtmpfs                86184        0     86184   0% /dev
    tmpfs                   18888      216     18672   2% /run
    tmpfs                    5120        0      5120   0% /run/lock
    tmpfs                   37760        0     37760   0% /run/shm
    /dev/mmcblk0p1          57288    18960     38328  34% /boot
    192.168.0.121:/gv0    1804032  1496448    215936  88% /mnt/gluster1

The more pedantic readers among you may be saying to yourselves, "Wait a minute, if I am specifying a specific IP address here, what happens when 192.168.0.121 goes down?" It turns out that this IP address is used only to pull down the complete list of bricks used in the volume, and from that point on, the redundant list of bricks is what will be used when accessing the volume.

Once you mount the filesystem, play around with creating files and then looking into /srv/gv0. You should be able to see (but again, don't touch) files that you've created from /mnt/gluster1 on the /srv/gv0 bricks on both nodes in your cluster:

    pi@pi1 ~ $ sudo touch /mnt/gluster1/test1
    pi@pi1 ~ $ ls /mnt/gluster1/test1
    /mnt/gluster1/test1
    pi@pi1 ~ $ ls /srv/gv0
    test1
    pi@pi2 ~ $ ls /srv/gv0
    test1

After you are satisfied that you can mount the volume, make it permanent by adding an entry like the following to the /etc/fstab file on your Raspberry Pis:

    192.168.0.121:/gv0  /mnt/gluster1  glusterfs  defaults,_netdev  0  0

Note that if you also want to access this GlusterFS volume from other clients on your network, just install the GlusterFS client package for your distribution (for Debian-based distributions, it's called glusterfs-client), and then create a mountpoint and perform the same mount command as I listed above.

### Test Redundancy ###

Now that I have a redundant filesystem in place, let's test it. Since I want to make sure that I could take down either of the two nodes and still have access to the files, I configured a separate client to mount this GlusterFS volume. Then I created a simple script called glustertest inside the volume:

    #!/bin/bash

    while [ 1 ]
    do
        date > /mnt/gluster1/test1
        cat /mnt/gluster1/test1
        sleep 1
    done

This script runs in an infinite loop and just copies the current date into a file inside the GlusterFS volume and then cats it back to the screen. Once I make the file executable and run it, I should see a new date pop up about every second:

    # chmod a+x /mnt/gluster1/glustertest
    root@moses:~# /mnt/gluster1/glustertest
    Sat Mar  9 13:19:02 PST 2013
    Sat Mar  9 13:19:04 PST 2013
    Sat Mar  9 13:19:05 PST 2013
    Sat Mar  9 13:19:06 PST 2013
    Sat Mar  9 13:19:07 PST 2013
    Sat Mar  9 13:19:08 PST 2013

I noticed every now and then that the output would skip a second, but in this case, I think it was just a function of the date command not being executed exactly one second apart every time, so every now and then the extra sub-second it took to run a loop would add up.

After I started the script, I logged in to the first Raspberry Pi and typed `sudo reboot` to reboot it. The script kept on running just fine, and if there were any hiccups along the way, I couldn't tell them apart from the occasional skipped second I saw beforehand. Once the first Raspberry Pi came back up, I repeated the reboot on the second one, just to confirm that I could lose either node and still be fine. This kind of redundancy is not bad, considering it took only a couple of commands.

There you have it. Now you have the foundation set with a redundant file store across two Raspberry Pis. In my next column, I will build on top of this foundation by adding a new redundant service that takes advantage of the shared storage.

--------------------------------------------------------------------------------

via: http://www.linuxjournal.com/content/two-pi-r

译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出