mirror of
https://github.com/LCTT/TranslateProject.git
synced 2024-12-29 21:41:00 +08:00

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why the founder of Apache is all-in on blockchain)
[#]: via: (https://opensource.com/article/19/9/podcast-interview-brian-behlendorf)
[#]: author: (Gordon Haff https://opensource.com/users/ghaff)

Why the founder of Apache is all-in on blockchain
======
Brian Behlendorf talks about starting Apache, the tension between pragmatism and idealism, and why he’s excited about blockchain.
![Data container block with hexagons][1]

Brian Behlendorf is perhaps best known for being a co-founder of the Apache Project, which became the Apache Software Foundation. Today, he's the executive director of the Hyperledger Foundation, an organization focused on enterprise-grade, open source, distributed ledgers (better known as blockchains). He also says he "put the first ad banner online and have been apologizing ever since."

In a recent [conversation on my podcast][2], Behlendorf talks about the goals of the Apache Project, the role of foundations generally, and his hopes for blockchain.

### Starting Apache

As Behlendorf tells the story, [Apache][3] came out of an environment when "we might have had a more beneficent view of technology companies. We still thought of them as leading the fight for individual empowerment."

At the same time, Behlendorf adds, "there was still a concern that, as the web grew, it would lose its character and its soul as this kind of funky domain, very flat space, supportive of freedoms of speech, freedoms of thought, freedoms of association that were completely novel to us at the time, but now we take for granted—or even we have found weaponized against us."

This led him to want Apache to address concerns that were both pragmatic in nature and more idealistic.

The pragmatic aspect stemmed from the fact that "iteratively improving upon the [NCSA web server][4] was just easier and certainly a lot cheaper than buying Netscape's commercial web server or thinking about [IIS][5] or any of the other commercial options at the time." Behlendorf also acknowledges, "it's nice to have other people out there who can review my code and [to] work together with."

There was also an "idealistic notion that tapped into that zeitgeist in the '90s," Behlendorf says. "This is a printing press. We can help people publish their own blogs, help people publish their own websites, and get as much content liberated as possible and digitized as possible. That was kind of the web movement. In particular, we felt it would be important to make sure that the printing presses remained in the hands of the people."

### Founding the Apache Software Foundation

Once the [Apache HTTPD][6] web server project grew to the point that 70% of the web was running on top of Apache HTTPD, it was clear to the project's participants that more structure was needed.

As Behlendorf describes it: "It was still being built by a group of people whose only connection to each other was that they were all on an email mailing list. All had commit to a CVS repository. All had shell on a Unix box that I maintained off of _Wired_'s internet connection. And otherwise [we] had no formalism between us. In a way, that was liberating; in a way, we were like, 'yeah, you know, we don't need overhead, we don't need stuffy bureaucrats.'"

Behlendorf and the others weren't interested in incorporating a for-profit company, given that they all had other projects and startups and weren't looking to make Apache HTTPD a full-time job. However, they recognized the legal risks of not having some sort of corporate shield, especially as the portfolio of Apache projects grew.

As Behlendorf puts it: "What happens if somebody who owned a patent decided to file a patent lawsuit against the developers of Apache and wanted something as simple and modest as a dollar per copy? If they won—and given patent laws, they certainly could win—they'd seek those tens or hundreds of millions of dollars from the Apache developers. For that crime of giving away free software, we could lose our homes."

In response, the Apache Software Foundation was incorporated in 1999 as a US 501(c)(3) charitable organization that was explicitly membership-based, in contrast to foundations like the Linux Foundation that are organized more along the lines of industry consortia. (The Linux Foundation is a US 501(c)(6) nonprofit mutual benefit corporation.)

Behlendorf observes that there are a lot of different models out there and he's happy "to see quite a few foundations out there and new ones showing up." Whatever the specific approach, however, he argues, "in general, if you're doing anything meaningful in open source software, your activities should be parked somewhere where there is a protective structure around it that helps answer the questions and the needs of the broader user community."

### Joining blockchain

Today, Behlendorf is executive director of the [Hyperledger Foundation][7], which he joined about three years ago, a few months after the first Hyperledger Fabric code drop in late 2015. He says, "with Hyperledger, one thing that pulled me in and got me excited was this notion that there are some really important problems we can solve with distributed systems, with distributed ledgers, and smart contract techniques. It wasn't programmable money, it wasn't regulatory arbitrage. It wasn't … the things people associate with cryptocurrencies that was the driver here. It was the sense that the digitalization of society had led to a future that looked a lot more like big, central systems. It was a very un-internet kind of worldview, but it seemed to be the trend line we were on."

As a result, "blockchain technology seemed urgent to get involved in [and] that lined up with these idealistic and pragmatic impulses that I've had—and I think other people in open source have had," he adds.

Specifically, it was the emergence of a set of use cases beyond programmable money that drew in Behlendorf. "I think the one that pulled me in was land titles and emerging markets," he recalls. It wasn't just about having a distributed database. It was about having a distributed ledger that "actually supported consensus, one that actually had the network enforcing rules about valid transactions versus invalid transactions. One that was programmable, with smart contracts on top. This started to make sense to me, and [it] was something that was appealing to me in a way that financial instruments and proof-of-work was not."

Behlendorf makes the point that for blockchain technology to have a purpose, the network has to be decentralized. For example, you probably want "nodes that are being run by different technology partners or … nodes being run by end-user organizations themselves because otherwise, why not just use a central database run by a single vendor or a single technology partner?" he argues.

### Growing open source today

Behlendorf rounds out our interview by discussing how open source software has continued to grow in importance, often for totally pragmatic reasons. "I think there's an entirely rational, non-idealistic business argument for why we're seeing more and more companies, even the ones we traditionally associated with very proprietary business models, be it Microsoft, be it Uber, be it Facebook, actually recognizing open source is strategically interesting," Behlendorf says.

He feels as if this is a continuation of the thinking the Apache Software Foundation had 20 years ago. "We thought that, if we just involved some of these parties in our projects and kept to our core principles of how to build software, of how our licenses work, how our development processes work publicly, if we made them play by our rules—we may still end up in a much better place and move further faster. I think that's been the story of the last 20 years," Behlendorf concludes.

* * *

**Listen to the [original podcast audio][8] [MP3, 28:42 minutes].**

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/9/podcast-interview-brian-behlendorf

Author: [Gordon Haff][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://opensource.com/users/ghaff
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_container_block.png?itok=S8MbXEYw (Data container block with hexagons)
[2]: https://bitmason.blogspot.com/2019/08/hyperledgers-brian-behlendorf-on.html
[3]: https://www.apache.org/
[4]: https://en.wikipedia.org/wiki/NCSA_HTTPd
[5]: https://en.wikipedia.org/wiki/Internet_Information_Services
[6]: https://en.wikipedia.org/wiki/Apache_HTTP_Server
[7]: https://www.hyperledger.org/
[8]: https://grhpodcasts.s3.amazonaws.com/behlendorf_1908.mp3

[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Working with variables on Linux)
[#]: via: (https://www.networkworld.com/article/3387154/working-with-variables-on-linux.html#tk.rss_all)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)

Working with variables on Linux
======
Variables often look like $var, but they also look like $1, $*, $? and $$. Let's take a look at what all these $ values can tell you.
![Mike Lawrence \(CC BY 2.0\)][1]

A lot of important values are stored on Linux systems in what we call “variables,” but there are actually several types of variables and some interesting commands that can help you work with them. In a previous post, we looked at [environment variables][2] and where they are defined. In this post, we're going to look at variables that are used on the command line and within scripts.

### User variables

While it's quite easy to set up a variable on the command line, there are a few interesting tricks. To set up a variable, all you need to do is something like this:

```
$ myvar=11
$ myvar2="eleven"
```

To display the values, you simply do this:

```
$ echo $myvar
11
$ echo $myvar2
eleven
```

You can also work with your variables. For example, to increment a numeric variable, you could use any of these commands:

```
$ myvar=$((myvar+1))
$ echo $myvar
12
$ ((myvar=myvar+1))
$ echo $myvar
13
$ ((myvar+=1))
$ echo $myvar
14
$ ((myvar++))
$ echo $myvar
15
$ let "myvar=myvar+1"
$ echo $myvar
16
$ let "myvar+=1"
$ echo $myvar
17
$ let "myvar++"
$ echo $myvar
18
```

With some of these, you can add more than 1 to a variable's value. For example:

```
$ myvar0=0
$ ((myvar0++))
$ echo $myvar0
1
$ ((myvar0+=10))
$ echo $myvar0
11
```

With all these choices, you'll probably find at least one that is easy to remember and convenient to use.

You can also _unset_ a variable — basically undefining it.

```
$ unset myvar
$ echo $myvar
```

Another interesting option is that you can set up a variable and make it **read-only**. In other words, once set to read-only, its value cannot be changed (at least not without some very tricky command line wizardry). That means you can't unset it either.

```
$ readonly myvar3=1
$ echo $myvar3
1
$ ((myvar3++))
-bash: myvar3: readonly variable
$ unset myvar3
-bash: unset: myvar3: cannot unset: readonly variable
```

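
Inside scripts, `declare -r` provides the same protection; here's a minimal sketch (the `declare` builtin is bash-specific and isn't covered above):

```shell
#!/bin/bash
# declare -r marks a variable read-only, just like the readonly builtin.
declare -r limit=10
echo "$limit"
# Assigning to a read-only variable is an error; doing it in a subshell
# keeps the failure from aborting the rest of the script.
if ! (limit=11) 2>/dev/null; then
    echo "limit is read-only"
fi
```

This is handy for constants such as configuration paths that a long script should never overwrite by accident.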
You can use any of those setting and incrementing options for assigning and manipulating variables within scripts, but there are also some very useful _internal variables_ for working within scripts. Note that you can't reassign their values or increment them.

### Internal variables

There are quite a few variables that can be used within scripts to evaluate arguments and display information about the script itself.

* $1, $2, $3 etc. represent the first, second, third, etc. arguments to the script.
* $# represents the number of arguments.
* $* represents all of the arguments as a single string.
* $0 represents the name of the script itself.
* $? represents the return code of the previously run command (0=success).
* $$ shows the process ID for the script.
* $PPID shows the process ID for your shell (the parent process for the script).

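
One subtlety worth a quick sketch (not covered in the list above): inside double quotes, $* joins all of the arguments into a single string, while "$@" preserves each argument as a separate word. The difference matters as soon as an argument contains spaces:

```shell
#!/bin/bash
# Simulate two script arguments, the first containing a space.
set -- "one two" three
printf '[%s]\n' "$*"   # one bracketed string: all arguments joined
printf '[%s]\n' "$@"   # one bracket per argument
```

Running this prints `[one two three]` on one line, then `[one two]` and `[three]` on separate lines, which is why loops over script arguments almost always use `"$@"`.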
Some of these variables also work on the command line but show related information:

* $0 shows the name of the shell you're using (e.g., -bash).
* $$ shows the process ID for your shell.
* $PPID shows the process ID for your shell's parent process (for me, this is sshd).

If we throw all of these variables into a script just to see the results, we might do this:

```
#!/bin/bash

echo $0
echo $1
echo $2
echo $#
echo $*
echo $?
echo $$
echo $PPID
```

When we call this script, we'll see something like this:

```
$ tryme one two three
/home/shs/bin/tryme <== script name
one <== first argument
two <== second argument
3 <== number of arguments
one two three <== all arguments
0 <== return code from previous echo command
10410 <== script's process ID
10109 <== parent process's ID
```

If we check the process ID of the shell once the script is done running, we can see that it matches the PPID displayed within the script:

```
$ echo $$
10109 <== shell's process ID
```

Of course, we're more likely to use these variables in considerably more useful ways than simply displaying their values. Let's check out some ways we might do this.

Checking to see if arguments have been provided:

```
if [ $# -eq 0 ]; then
    echo "Usage: $0 filename"
    exit 1
fi
```

Checking to see if a particular process is running:

```
ps -ef | grep -v grep | grep apache2 > /dev/null
if [ $? != 0 ]; then
    echo Apache is not running
    exit
fi
```

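
One caveat about that pattern: a plain `grep` in the pipeline can match its own entry in the `ps` listing and hide a stopped service, which is why `grep -v grep` appears above. A sketch of the same check using `pgrep` instead, assuming the procps `pgrep` tool is installed (it is on most Linux systems):

```shell
#!/bin/bash
# pgrep -x exits 0 only if a process named exactly "apache2" exists,
# so there is no grep process to accidentally match.
if ! pgrep -x apache2 > /dev/null; then
    echo "Apache is not running"
fi
```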
Verifying that a file exists before trying to access it:

```
if [ $# -lt 2 ]; then
    echo "Usage: $0 lines filename"
    exit 1
fi

if [ ! -f "$2" ]; then
    echo "Error: File $2 not found"
    exit 2
else
    head -$1 "$2"
fi
```

And in this little script, we check if the correct number of arguments have been provided, if the first argument is numeric, and if the second argument is an existing file.

```
#!/bin/bash

if [ $# -lt 2 ]; then
    echo "Usage: $0 lines filename"
    exit 1
fi

if [[ ! $1 =~ ^[0-9]+$ ]]; then
    echo "Error: $1 is not numeric"
    exit 2
fi

if [ ! -f "$2" ]; then
    echo "Error: File $2 not found"
    exit 3
else
    echo top of file
    head -$1 "$2"
fi
```

### Renaming variables

When writing a complicated script, it's often useful to assign names to the script's arguments rather than continuing to refer to them as $1, $2, and so on. By the 35th line, someone reading your script might have forgotten what $2 represents. It will be a lot easier on that person if you assign an important parameter's value to $filename or $numlines.

```
#!/bin/bash

if [ $# -lt 2 ]; then
    echo "Usage: $0 lines filename"
    exit 1
else
    numlines=$1
    filename=$2
fi

if [[ ! $numlines =~ ^[0-9]+$ ]]; then
    echo "Error: $numlines is not numeric"
    exit 2
fi

if [ ! -f "$filename" ]; then
    echo "Error: File $filename not found"
    exit 3
else
    echo top of file
    head -$numlines "$filename"
fi
```

Of course, this example script does nothing more than run the head command to show the top X lines in a file, but it is meant to show how internal parameters can be used within scripts to help ensure the script runs well or fails with at least some clarity.

**[ Watch Sandra Henry-Stocker's Two-Minute Linux Tips [to learn how to master a host of Linux commands][3] ]**

Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3387154/working-with-variables-on-linux.html#tk.rss_all

Author: [Sandra Henry-Stocker][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/variable-key-keyboard-100793080-large.jpg
[2]: https://www.networkworld.com/article/3385516/how-to-manage-your-linux-environment.html
[3]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Setup Multi Node Elastic Stack Cluster on RHEL 8 / CentOS 8)
[#]: via: (https://www.linuxtechi.com/setup-multinode-elastic-stack-cluster-rhel8-centos8/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)

How to Setup Multi Node Elastic Stack Cluster on RHEL 8 / CentOS 8
======

The Elastic Stack, widely known as the **ELK stack**, is a group of open source products: **Elasticsearch**, **Logstash**, and **Kibana**, developed and maintained by the Elastic company. With the Elastic Stack, you can feed a system's logs to Logstash, a data collection engine that accepts logs or data from any source, normalizes them, and forwards them to Elasticsearch for **analyzing**, **indexing**, **searching**, and **storing**. Finally, Kibana lets you visualize that data and build interactive graphs and diagrams based on user queries.

[![Elastic-Stack-Cluster-RHEL8-CentOS8][1]][2]

In this article we will demonstrate how to set up a multi node Elastic Stack cluster on RHEL 8 / CentOS 8 servers. Here are the details of my Elastic Stack cluster:

### Elasticsearch:

* Three servers with minimal RHEL 8 / CentOS 8
* IPs & hostnames – 192.168.56.40 (elasticsearch1.linuxtechi.local), 192.168.56.50 (elasticsearch2.linuxtechi.local), 192.168.56.60 (elasticsearch3.linuxtechi.local)

### Logstash:

* Two servers with minimal RHEL 8 / CentOS 8
* IPs & hostnames – 192.168.56.20 (logstash1.linuxtechi.local), 192.168.56.30 (logstash2.linuxtechi.local)

### Kibana:

* One server with minimal RHEL 8 / CentOS 8
* Hostname – kibana.linuxtechi.local
* IP – 192.168.56.10

### Filebeat:

* One server with minimal CentOS 7
* IP & hostname – 192.168.56.70 (web-server)

Let's start with the Elasticsearch cluster setup.

#### Setup 3 node Elasticsearch cluster

As stated above, I have set aside three nodes for the Elasticsearch cluster. Log in to each node, set the hostname, and configure the yum/dnf repositories.

Use the below hostnamectl command to set the hostname on the respective nodes:

```
[root@linuxtechi ~]# hostnamectl set-hostname "elasticsearch1.linuxtechi.local"
[root@linuxtechi ~]# exec bash
[root@linuxtechi ~]#
[root@linuxtechi ~]# hostnamectl set-hostname "elasticsearch2.linuxtechi.local"
[root@linuxtechi ~]# exec bash
[root@linuxtechi ~]#
[root@linuxtechi ~]# hostnamectl set-hostname "elasticsearch3.linuxtechi.local"
[root@linuxtechi ~]# exec bash
[root@linuxtechi ~]#
```

On a CentOS 8 system we don't need to configure any OS package repository, and on a RHEL 8 server, if you have a valid subscription, subscribe the system with Red Hat to get the package repositories. In case you want to configure a local yum/dnf repository for OS packages, refer to the below URL:

[How to Setup Local Yum/DNF Repository on RHEL 8 Server Using DVD or ISO File][3]

Configure the Elasticsearch package repository on all the nodes: create a file named elastic.repo under the /etc/yum.repos.d/ folder with the following content:

```
~]# vi /etc/yum.repos.d/elastic.repo
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
```

Save and exit the file.

Use the below rpm command on all three nodes to import Elastic's public signing key:

```
~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
```

Add the following lines to the /etc/hosts file on all three nodes:

```
192.168.56.40   elasticsearch1.linuxtechi.local
192.168.56.50   elasticsearch2.linuxtechi.local
192.168.56.60   elasticsearch3.linuxtechi.local
```

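
As a quick sanity check (an extra step, not part of the original procedure), `getent` consults /etc/hosts and can confirm that each new entry resolves:

```shell
#!/bin/bash
# getent prints the matching hosts entry on success and returns
# non-zero when the lookup fails.
for h in elasticsearch1 elasticsearch2 elasticsearch3; do
    getent hosts "${h}.linuxtechi.local" >/dev/null \
        || echo "unresolved: ${h}.linuxtechi.local"
done
```

If any hostname prints as unresolved, recheck the /etc/hosts entries before continuing.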
Install Java on all three nodes using the yum/dnf command:

```
[root@linuxtechi ~]# dnf install java-openjdk -y
[root@linuxtechi ~]# dnf install java-openjdk -y
[root@linuxtechi ~]# dnf install java-openjdk -y
```

Install Elasticsearch using the below dnf command on all three nodes:

```
[root@linuxtechi ~]# dnf install elasticsearch -y
[root@linuxtechi ~]# dnf install elasticsearch -y
[root@linuxtechi ~]# dnf install elasticsearch -y
```

**Note:** In case the OS firewall is enabled and running on each Elasticsearch node, allow the following ports using the below firewall-cmd commands:

```
~]# firewall-cmd --permanent --add-port=9300/tcp
~]# firewall-cmd --permanent --add-port=9200/tcp
~]# firewall-cmd --reload
```

Configure Elasticsearch: edit the file **/etc/elasticsearch/elasticsearch.yml** on all three nodes and add the following:

```
~]# vim /etc/elasticsearch/elasticsearch.yml
…………………………………………
cluster.name: opn-cluster
node.name: elasticsearch1.linuxtechi.local
network.host: 192.168.56.40
http.port: 9200
discovery.seed_hosts: ["elasticsearch1.linuxtechi.local", "elasticsearch2.linuxtechi.local", "elasticsearch3.linuxtechi.local"]
cluster.initial_master_nodes: ["elasticsearch1.linuxtechi.local", "elasticsearch2.linuxtechi.local", "elasticsearch3.linuxtechi.local"]
……………………………………………
```

**Note:** On each node, set the correct hostname in the node.name parameter and the IP address in the network.host parameter; the other parameters remain the same.

Now start and enable the Elasticsearch service on all three nodes using the following systemctl commands:

```
~]# systemctl daemon-reload
~]# systemctl enable elasticsearch.service
~]# systemctl start elasticsearch.service
```

Use the below 'ss' command to verify whether each Elasticsearch node has started listening on port 9200:

```
[root@linuxtechi ~]# ss -tunlp | grep 9200
tcp LISTEN 0 128 [::ffff:192.168.56.40]:9200 *:* users:(("java",pid=2734,fd=256))
[root@linuxtechi ~]#
```

Use the following curl commands to verify the Elasticsearch cluster status:

```
[root@linuxtechi ~]# curl http://elasticsearch1.linuxtechi.local:9200
[root@linuxtechi ~]# curl -X GET http://elasticsearch2.linuxtechi.local:9200/_cluster/health?pretty
```

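
The health endpoint returns JSON whose status field should read green once all three nodes have joined. A hedged sketch of checking that from the shell; the sample response body below is an assumption of the typical shape, trimmed to three fields:

```shell
#!/bin/bash
# In practice you would fetch the body with:
#   health=$(curl -s http://elasticsearch1.linuxtechi.local:9200/_cluster/health?pretty)
health='{
  "cluster_name" : "opn-cluster",
  "status" : "green",
  "number_of_nodes" : 3
}'
# Pull the value of the "status" field out of the pretty-printed JSON.
status=$(printf '%s\n' "$health" | grep '"status"' | cut -d'"' -f4)
echo "cluster status: $status"
```

A yellow status usually means replica shards are unassigned; red means primary shards are missing, so wait for green before moving on.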
The output of the above commands would look something like below:

![Elasticsearch-cluster-status-rhel8][1]

The above output confirms that we have successfully created a 3 node Elasticsearch cluster, and the status of the cluster is green.

**Note:** If you want to modify the JVM heap size, edit the file **/etc/elasticsearch/jvm.options** and change the below parameters to suit your environment:

* -Xms1g
* -Xmx1g

Now let's move to the Logstash nodes.

#### Install and Configure Logstash

Perform the following steps on both Logstash nodes.

Log in to both nodes and set the hostname using the following hostnamectl command:

```
[root@linuxtechi ~]# hostnamectl set-hostname "logstash1.linuxtechi.local"
[root@linuxtechi ~]# exec bash
[root@linuxtechi ~]#
[root@linuxtechi ~]# hostnamectl set-hostname "logstash2.linuxtechi.local"
[root@linuxtechi ~]# exec bash
[root@linuxtechi ~]#
```

Add the following entries to the /etc/hosts file on both Logstash nodes:

```
~]# vi /etc/hosts
192.168.56.40   elasticsearch1.linuxtechi.local
192.168.56.50   elasticsearch2.linuxtechi.local
192.168.56.60   elasticsearch3.linuxtechi.local
```

Save and exit the file.

Configure the Logstash repository on both nodes: create a file **logstash.repo** under the /etc/yum.repos.d/ folder with the following content:

```
~]# vi /etc/yum.repos.d/logstash.repo
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
```

Save and exit the file, then run the following rpm command to import the signing key:

```
~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
```

Install Java OpenJDK on both nodes using the following dnf command:

```
~]# dnf install java-openjdk -y
```

Run the following dnf command on both nodes to install Logstash:

```
[root@linuxtechi ~]# dnf install logstash -y
[root@linuxtechi ~]# dnf install logstash -y
```

Now configure Logstash; perform the below steps on both Logstash nodes.

Create a Logstash conf file. For that, first copy the sample Logstash file into '/etc/logstash/conf.d/':

```
# cd /etc/logstash/
# cp logstash-sample.conf conf.d/logstash.conf
```

Edit the conf file and update the following content:

```
# vi conf.d/logstash.conf

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch1.linuxtechi.local:9200", "http://elasticsearch2.linuxtechi.local:9200", "http://elasticsearch3.linuxtechi.local:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}
```

In the output section, specify the FQDNs of all three Elasticsearch nodes in the hosts parameter; leave the other parameters as they are.

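
For illustration, that index pattern produces one index per beat, per version, per day. A sketch of what it expands to at runtime; the values "filebeat" and "7.3.1" are assumptions standing in for the event's @metadata fields:

```shell
#!/bin/bash
# %{[@metadata][beat]} and %{[@metadata][version]} come from the shipping
# beat; %{+YYYY.MM.dd} is the event date, which date +%Y.%m.%d mirrors.
beat="filebeat"
version="7.3.1"
index="${beat}-${version}-$(date +%Y.%m.%d)"
echo "$index"
```

So logs shipped today would land in an index named something like filebeat-7.3.1-&lt;today's date&gt;, which keeps daily indices easy to age out.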
Allow Logstash port 5044 in the OS firewall using the following firewall-cmd commands:

```
~ # firewall-cmd --permanent --add-port=5044/tcp
~ # firewall-cmd --reload
```

Now start and enable the Logstash service; run the following systemctl commands on both nodes:

```
~]# systemctl start logstash
~]# systemctl enable logstash
```

Use the below ss command to verify whether the Logstash service has started listening on port 5044:

```
[root@linuxtechi ~]# ss -tunlp | grep 5044
tcp LISTEN 0 128 *:5044 *:* users:(("java",pid=2416,fd=96))
[root@linuxtechi ~]#
```

The above output confirms that Logstash has been installed and configured successfully. Let's move on to the Kibana installation.

#### Install and Configure Kibana

Log in to the Kibana node and set the hostname with the **hostnamectl** command:

```
[root@linuxtechi ~]# hostnamectl set-hostname "kibana.linuxtechi.local"
[root@linuxtechi ~]# exec bash
[root@linuxtechi ~]#
```

Edit the /etc/hosts file and add the following lines:

```
192.168.56.40   elasticsearch1.linuxtechi.local
192.168.56.50   elasticsearch2.linuxtechi.local
192.168.56.60   elasticsearch3.linuxtechi.local
```

Set up the Kibana repository using the following:

```
[root@linuxtechi ~]# vi /etc/yum.repos.d/kibana.repo
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

[root@linuxtechi ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
```

Execute the below dnf command to install Kibana:

```
[root@linuxtechi ~]# dnf install kibana -y
```

Configure Kibana by editing the file **/etc/kibana/kibana.yml**:

```
[root@linuxtechi ~]# vim /etc/kibana/kibana.yml
…………
server.host: "kibana.linuxtechi.local"
server.name: "kibana.linuxtechi.local"
elasticsearch.hosts: ["http://elasticsearch1.linuxtechi.local:9200", "http://elasticsearch2.linuxtechi.local:9200", "http://elasticsearch3.linuxtechi.local:9200"]
…………
```


Start and enable the kibana service:

```
[root@linuxtechi ~]# systemctl start kibana
[root@linuxtechi ~]# systemctl enable kibana
```

Allow the Kibana port '5601' in the OS firewall:

```
[root@linuxtechi ~]# firewall-cmd --permanent --add-port=5601/tcp
success
[root@linuxtechi ~]# firewall-cmd --reload
success
[root@linuxtechi ~]#
```

Access the Kibana portal / GUI using the following URL:

<http://kibana.linuxtechi.local:5601>

[![Kibana-Dashboard-rhel8][1]][4]

From the dashboard, we can also check our Elastic Stack cluster status:

[![Stack-Monitoring-Overview-RHEL8][1]][5]

This confirms that we have successfully set up a multi-node Elastic Stack cluster on RHEL 8 / CentOS 8.

Now let's send some logs to the logstash nodes via filebeat from other Linux servers. In my case, I have one CentOS 7 server, and I will push all of its important logs to logstash via filebeat.

Log in to the CentOS 7 server and install the filebeat package using the following rpm command:

```
[root@linuxtechi ~]# rpm -ivh https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.3.1-x86_64.rpm
Retrieving https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.3.1-x86_64.rpm
Preparing... ################################# [100%]
Updating / installing...
1:filebeat-7.3.1-1 ################################# [100%]
[root@linuxtechi ~]#
```

Edit the /etc/hosts file and add the following entries:

```
192.168.56.20 logstash1.linuxtechi.local
192.168.56.30 logstash2.linuxtechi.local
```

Now configure filebeat so that it can send logs to the logstash nodes using its load-balancing technique. Edit the file "**/etc/filebeat/filebeat.yml**" and set the following parameters:

Under the '**filebeat.inputs:**' section, change '**enabled: false**' to '**enabled: true**', and under the "**paths**" parameter specify the locations of the log files to send to logstash. In the Elasticsearch output section, comment out "**output.elasticsearch**" and its **hosts** parameter. In the Logstash output section, uncomment "**output.logstash:**" and "**hosts:**", add both logstash nodes to the hosts parameter, and also set "**loadbalance: true**".

```
[root@linuxtechi ~]# vi /etc/filebeat/filebeat.yml
……………………….
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/messages
    - /var/log/dmesg
    - /var/log/maillog
    - /var/log/boot.log
#output.elasticsearch:
#  hosts: ["localhost:9200"]

output.logstash:
  hosts: ["logstash1.linuxtechi.local:5044", "logstash2.linuxtechi.local:5044"]
  loadbalance: true
………………………………………
```
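With "loadbalance: true", Filebeat spreads events across the listed Logstash hosts instead of always using the first one. As a rough illustration only (this is not Filebeat's actual code, which balances per connection with retries), the selection behaves roughly like a round-robin over the two Logstash nodes from our lab setup:

```shell
# Illustrative round-robin sketch, not Filebeat's real implementation.
# The two targets are the Logstash nodes from this article's setup.
n=0
for event in event1 event2 event3 event4; do
    if [ $((n % 2)) -eq 0 ]; then
        target="logstash1.linuxtechi.local:5044"
    else
        target="logstash2.linuxtechi.local:5044"
    fi
    echo "$event -> $target"
    n=$((n + 1))
done
```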

Start and enable the filebeat service using the below systemctl commands:

```
[root@linuxtechi ~]# systemctl start filebeat
[root@linuxtechi ~]# systemctl enable filebeat
```

Now go to the Kibana GUI and verify whether the new indices are visible.

Choose the Management option from the left sidebar and then click on Index Management under Elasticsearch:

[![Elasticsearch-index-management-Kibana][1]][6]

As we can see above, the indices are visible now. Let's create an index pattern.

Click on "Index Patterns" in the Kibana section; it will prompt us to create a new pattern. Click on "**Create Index Pattern**" and specify the pattern name as "**filebeat**":

[![Define-Index-Pattern-Kibana-RHEL8][1]][7]

Click on Next Step.

Choose "**Timestamp**" as the time filter for the index pattern and then click on "Create index pattern":

[![Time-Filter-Index-Pattern-Kibana-RHEL8][1]][8]

[![filebeat-index-pattern-overview-Kibana][1]][9]

Now click on Discover to see the real-time filebeat index pattern:

[![Discover-Kibana-REHL8][1]][10]

This confirms that the Filebeat agent has been configured successfully and we are able to see real-time logs on the Kibana dashboard.

That's all from this article. Please don't hesitate to share your feedback and comments in case these steps helped you set up a multi-node Elastic Stack cluster on RHEL 8 / CentOS 8.

--------------------------------------------------------------------------------

via: https://www.linuxtechi.com/setup-multinode-elastic-stack-cluster-rhel8-centos8/

Author: [Pradeep Kumar][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).

[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Elastic-Stack-Cluster-RHEL8-CentOS8.jpg
[3]: https://www.linuxtechi.com/setup-local-yum-dnf-repository-rhel-8/
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Kibana-Dashboard-rhel8.jpg
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Stack-Monitoring-Overview-RHEL8.jpg
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Elasticsearch-index-management-Kibana.jpg
[7]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Define-Index-Pattern-Kibana-RHEL8.jpg
[8]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Time-Filter-Index-Pattern-Kibana-RHEL8.jpg
[9]: https://www.linuxtechi.com/wp-content/uploads/2019/09/filebeat-index-pattern-overview-Kibana.jpg
[10]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Discover-Kibana-REHL8.jpg
262
translated/tech/20190409 Working with variables on Linux.md
Normal file
@ -0,0 +1,262 @@
[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Working with variables on Linux)
[#]: via: (https://www.networkworld.com/article/3387154/working-with-variables-on-linux.html#tk.rss_all)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)

Working with variables on Linux
======
Variables often look like $var, but they also come in forms like $1, $*, $? and $$. Let's take a look at what all of these $ values can tell you.
![Mike Lawrence \(CC BY 2.0\)][1]

A lot of important values are stored on Linux systems in what we call "variables," but there are actually several types of variables and some interesting commands that can help you work with them. In a previous post, we looked at [environment variables][2] and where they are defined. In this post, we look at variables that are used on the command line and within scripts.

### User variables

While it's quite easy to set a variable on the command line, there are a few interesting tricks. To set a variable, all you need to do is something like this:

```
$ myvar=11
$ myvar2="eleven"
```

To display the values, simply do this:

```
$ echo $myvar
11
$ echo $myvar2
eleven
```

You can also work with these variables. For example, to increment a numeric variable, you can use any of these commands:
```
$ myvar=$((myvar+1))
$ echo $myvar
12
$ ((myvar=myvar+1))
$ echo $myvar
13
$ ((myvar+=1))
$ echo $myvar
14
$ ((myvar++))
$ echo $myvar
15
$ let "myvar=myvar+1"
$ echo $myvar
16
$ let "myvar+=1"
$ echo $myvar
17
$ let "myvar++"
$ echo $myvar
18
```

With some of these, you can add more than one to a variable's value. For example:

```
$ myvar0=0
$ ((myvar0++))
$ echo $myvar0
1
$ ((myvar0+=10))
$ echo $myvar0
11
```

With all of these options, you'll probably find at least one that is easy to remember and convenient to use.

You can also _unset_ a variable, which essentially means undefining it:

```
$ unset myvar
$ echo $myvar
```

Another interesting option is that you can set a variable and make it **read-only**. In other words, once set to read-only, its value cannot be changed (at least not without some very tricky command-line magic). That also means you can't unset it.

```
$ readonly myvar3=1
$ echo $myvar3
1
$ ((myvar3++))
-bash: myvar3: readonly variable
$ unset myvar3
-bash: unset: myvar3: cannot unset: readonly variable
```

You can use any of those setting and incrementing options for assigning and manipulating variables in scripts, but there are also some very useful _internal variables_ for working within scripts. Note that you can't reassign their values or increment them.

### Internal variables

There are quite a few variables that can be used within scripts to evaluate arguments and display information about the script itself:
* $1, $2, $3, etc. represent the first, second, third, etc. arguments to the script.
* $# represents the number of arguments.
* $* represents all of the arguments.
* $0 represents the name of the script.
* $? represents the return code of the previously run command (0 = success).
* $$ shows the process ID for the script.
* $PPID shows the process ID for the shell (the script's parent process).
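As a quick sketch of the positional parameters listed above: you cannot assign to them directly (`1=something` is an error), but the shell builtin `set --` can replace the whole set at once, which is handy for experimenting on the command line:

```shell
# Positional parameters can't be assigned individually, but 'set --'
# replaces them all at once.
set -- alpha beta gamma
echo "first: $1, count: $#"
set -- delta
echo "first: $1, count: $#"
```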

Some of these variables also work on the command line, but they show related information:

* $0 displays the name of the shell you're using (e.g., -bash).
* $$ shows the process ID for your shell.
* $PPID displays the process ID for your shell's parent process (for me, this is sshd).
To see what they yield, we can drop all of these variables into a script like this:

```
#!/bin/bash

echo $0
echo $1
echo $2
echo $#
echo $*
echo $?
echo $$
echo $PPID
```

When we call this script, we'll see something like this:

```
$ tryme one two three
/home/shs/bin/tryme <== script name
one <== first argument
two <== second argument
3 <== number of arguments
one two three <== all of the arguments
0 <== return code of the previous (echo) command
10410 <== the script's process ID
10109 <== the parent process ID
```

If we check the shell's process ID after the script finishes running, we can see that it matches the PPID displayed within the script:

```
$ echo $$
10109 <== the shell's process ID
```
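Closely related to $* (though not shown above) is $@. When quoted, "$@" preserves each argument as a separate word, while "$*" joins them all into one word. A small sketch of the difference:

```shell
# "$*" yields one combined word; "$@" keeps arguments separate.
set -- "one two" three
star_count=0
for a in "$*"; do star_count=$((star_count + 1)); done
at_count=0
for a in "$@"; do at_count=$((at_count + 1)); done
echo "words from \"\$*\": $star_count, from \"\$@\": $at_count"
```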

Of course, we're far more likely to use these variables when we actually need them than to simply display their values. Let's look at some of the ways they might be put to use.

Checking to see if arguments have been supplied:

```
if [ $# == 0 ]; then
    echo "$0 filename"
    exit 1
fi
```
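An alternative worth knowing: the ${var:?message} expansion makes the shell abort with the given message when the variable is unset or empty, condensing a check like the one above into a single line. A small sketch (the failing call runs in a subshell so the demonstration can continue):

```shell
# ${1:?message} aborts the (sub)shell if $1 is unset or empty.
usage_check() { echo "processing ${1:?filename argument required}"; }

usage_check myfile.txt
( usage_check ) 2>/dev/null || echo "missing argument detected"
```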

Checking to see if a particular process is running:

```
ps -ef | grep apache2 > /dev/null
if [ $? != 0 ]; then
    echo Apache is not running
    exit
fi
```
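One caveat with the ps | grep approach above: the grep command itself can show up in the ps output (its arguments contain "apache2") and mask a stopped service. Assuming pgrep is available, as on any modern Linux, a more robust sketch:

```shell
# pgrep matches process names directly, so it never matches itself.
if ! pgrep -x apache2 > /dev/null; then
    echo "Apache is not running"
fi
```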

Verifying that a file exists before trying to access it:

```
if [ $# -lt 2 ]; then
    echo "Usage: $0 lines filename"
    exit 1
fi

if [ ! -f $2 ]; then
    echo "Error: File $2 not found"
    exit 2
else
    head -$1 $2
fi
```

In the little script below, we check the number of arguments, make sure the first argument is numeric, and make sure the file supplied as the second argument exists:

```
#!/bin/bash

if [ $# -lt 2 ]; then
    echo "Usage: $0 lines filename"
    exit 1
fi

if [[ $1 != [0-9]* ]]; then
    echo "Error: $1 is not numeric"
    exit 2
fi

if [ ! -f $2 ]; then
    echo "Error: File $2 not found"
    exit 3
else
    echo top of file
    head -$1 $2
fi
```

### Renaming variables

When writing a complicated script, it's often useful to assign names to the script's arguments rather than continuing to refer to them as $1, $2, and so on. By line 35, someone reading your script may have forgotten what $2 represents. It will be a lot easier on that person if you have assigned an important parameter's value to $filename or $numlines.
```
#!/bin/bash

if [ $# -lt 2 ]; then
    echo "Usage: $0 lines filename"
    exit 1
else
    numlines=$1
    filename=$2
fi

if [[ $numlines != [0-9]* ]]; then
    echo "Error: $numlines is not numeric"
    exit 2
fi

if [ ! -f $filename ]; then
    echo "Error: File $filename not found"
    exit 3
else
    echo top of file
    head -$numlines $filename
fi
```

Of course, this example script does nothing but run the head command to show the top X lines in a file, but it is meant to show how internal parameters can be used within scripts to help ensure the script runs well, or fails with at least some clarity.

**Watch Sandra Henry-Stocker's two-minute Linux tips: [Learn how to master a host of Linux commands][3].**

Join the communities on [Facebook][4] and [LinkedIn][5] to comment on the hottest topics.
--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3387154/working-with-variables-on-linux.html#tk.rss_all

Author: [Sandra Henry-Stocker][a]
Selected by: [lujun9972][b]
Translator: [MjSeven](https://github.com/MjSeven)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).

[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/variable-key-keyboard-100793080-large.jpg
[2]: https://www.networkworld.com/article/3385516/how-to-manage-your-linux-environment.html
[3]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world