Learn Blockchains by Building One
======

![](https://cdn-images-1.medium.com/max/2000/1*zutLn_-fZZhy7Ari-x-JWQ.jpeg)
You’re here because, like me, you’re psyched about the rise of Cryptocurrencies. And you want to know how Blockchains work—the fundamental technology behind them.

But understanding Blockchains isn’t easy—or at least it wasn’t for me. I trudged through dense videos, followed porous tutorials, and dealt with the amplified frustration of too few examples.

I like learning by doing. It forces me to deal with the subject matter at a code level, which makes it stick. If you do the same, at the end of this guide you’ll have a functioning Blockchain and a solid grasp of how they work.

### Before you get started…
Remember that a blockchain is an _immutable, sequential_ chain of records called Blocks. They can contain transactions, files or any data you like, really. But the important thing is that they’re _chained_ together using _hashes_.

If you aren’t sure what a hash is, [here’s an explanation][1].
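For a quick, minimal illustration of what a hash does (a sketch using Python’s standard `hashlib`, which we’ll also rely on later), consider:

```python
from hashlib import sha256

# Hashing is deterministic: the same input always yields the same digest...
d1 = sha256(b"blockchain").hexdigest()
d2 = sha256(b"blockchain").hexdigest()
print(d1 == d2)  # True

# ...but even a tiny change to the input produces a completely different digest
d3 = sha256(b"blockchain!").hexdigest()
print(d1 == d3)  # False

# SHA-256 digests are always 64 hex characters, regardless of input size
print(len(d1))  # 64
```

These two properties (determinism, and sensitivity to any change in the input) are what make hashes useful for chaining Blocks together.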
**_Who is this guide aimed at?_** You should be comfortable reading and writing some basic Python, and have some understanding of how HTTP requests work, since we’ll be talking to our Blockchain over HTTP.

**_What do I need?_** Make sure that [Python 3.6][2]+ (along with `pip`) is installed. You’ll also need to install Flask and the wonderful Requests library:

```
pip install Flask==0.12.2 requests==2.18.4
```
Oh, and you’ll also need an HTTP client, like [Postman][3] or cURL. But anything will do.

**_Where’s the final code?_** The source code is [available here][4].

* * *

### Step 1: Building a Blockchain
Open up your favourite text editor or IDE; personally I ❤️ [PyCharm][5]. Create a new file called `blockchain.py`. We’ll only use a single file, but if you get lost, you can always refer to the [source code][6].

#### Representing a Blockchain

We’ll create a `Blockchain` class whose constructor creates an initial empty list (to store our blockchain) and another to store transactions. Here’s the blueprint for our class:
```
class Blockchain(object):
    def __init__(self):
        self.chain = []
        self.current_transactions = []

    def new_block(self):
        # Creates a new Block and adds it to the chain
        pass

    def new_transaction(self):
        # Adds a new transaction to the list of transactions
        pass

    @staticmethod
    def hash(block):
        # Hashes a Block
        pass

    @property
    def last_block(self):
        # Returns the last Block in the chain
        pass
```
Our Blockchain class is responsible for managing the chain. It will store transactions and have some helper methods for adding new Blocks to the chain. Let’s start fleshing out some methods.

#### What does a Block look like?

Each Block has an index, a timestamp (in Unix time), a list of transactions, a proof (more on that later), and the hash of the previous Block.

Here’s an example of what a single Block looks like:
```
block = {
    'index': 1,
    'timestamp': 1506057125.900785,
    'transactions': [
        {
            'sender': "8527147fe1f5426f9dd545de4b27ee00",
            'recipient': "a77f5cdfa2934df3954a5c7c7da5df1f",
            'amount': 5,
        }
    ],
    'proof': 324984774000,
    'previous_hash': "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824"
}
```
At this point, the idea of a chain should be apparent—each new Block contains within it the hash of the previous Block. This is crucial because it’s what gives blockchains immutability: if an attacker corrupted an earlier Block in the chain, then all subsequent Blocks would contain incorrect hashes.

Does this make sense? If it doesn’t, take some time to let it sink in—it’s the core idea behind blockchains.
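To make the chaining idea concrete, here is a minimal sketch (using toy blocks, not the full Block structure above) of how tampering with an earlier block becomes detectable:

```python
import hashlib
import json

def hash_block(block):
    # Sorting the keys makes the serialization (and therefore the hash) deterministic
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# Two toy blocks; the second records the first one's hash
block1 = {'index': 1, 'transactions': []}
block2 = {'index': 2, 'previous_hash': hash_block(block1)}

# An attacker alters block1 after the fact...
block1['transactions'].append({'sender': 'attacker', 'recipient': 'attacker', 'amount': 9999})

# ...and the link recorded in block2 no longer matches
print(block2['previous_hash'] == hash_block(block1))  # False
```

To cover up the tampering, the attacker would have to recompute and replace the hash stored in every subsequent Block.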
#### Adding Transactions to a Block

We’ll need a way of adding transactions to a Block. Our `new_transaction()` method is responsible for this, and it’s pretty straightforward:
```
class Blockchain(object):
    ...

    def new_transaction(self, sender, recipient, amount):
        """
        Creates a new transaction to go into the next mined Block

        :param sender: <str> Address of the Sender
        :param recipient: <str> Address of the Recipient
        :param amount: <int> Amount
        :return: <int> The index of the Block that will hold this transaction
        """

        self.current_transactions.append({
            'sender': sender,
            'recipient': recipient,
            'amount': amount,
        })

        return self.last_block['index'] + 1
```
After `new_transaction()` adds a transaction to the list, it returns the index of the Block which the transaction will be added to—the next one to be mined. This will be useful later on, to the user submitting the transaction.

#### Creating new Blocks

When our Blockchain is instantiated, we’ll need to seed it with a genesis block—a block with no predecessors. We’ll also need to add a “proof” to our genesis block, which is the result of mining (or proof of work). We’ll talk more about mining later.

In addition to creating the genesis block in our constructor, we’ll also flesh out the methods for `new_block()`, `new_transaction()` and `hash()`:
```
import hashlib
import json
from time import time


class Blockchain(object):
    def __init__(self):
        self.current_transactions = []
        self.chain = []

        # Create the genesis block
        self.new_block(previous_hash=1, proof=100)

    def new_block(self, proof, previous_hash=None):
        """
        Create a new Block in the Blockchain

        :param proof: <int> The proof given by the Proof of Work algorithm
        :param previous_hash: (Optional) <str> Hash of previous Block
        :return: <dict> New Block
        """

        block = {
            'index': len(self.chain) + 1,
            'timestamp': time(),
            'transactions': self.current_transactions,
            'proof': proof,
            'previous_hash': previous_hash or self.hash(self.chain[-1]),
        }

        # Reset the current list of transactions
        self.current_transactions = []

        self.chain.append(block)
        return block

    def new_transaction(self, sender, recipient, amount):
        """
        Creates a new transaction to go into the next mined Block

        :param sender: <str> Address of the Sender
        :param recipient: <str> Address of the Recipient
        :param amount: <int> Amount
        :return: <int> The index of the Block that will hold this transaction
        """
        self.current_transactions.append({
            'sender': sender,
            'recipient': recipient,
            'amount': amount,
        })

        return self.last_block['index'] + 1

    @property
    def last_block(self):
        return self.chain[-1]

    @staticmethod
    def hash(block):
        """
        Creates a SHA-256 hash of a Block

        :param block: <dict> Block
        :return: <str>
        """

        # We must make sure that the Dictionary is Ordered, or we'll have inconsistent hashes
        block_string = json.dumps(block, sort_keys=True).encode()
        return hashlib.sha256(block_string).hexdigest()
```
The above should be straightforward—I’ve added some comments and docstrings to help keep it clear. We’re almost done with representing our blockchain. But at this point, you must be wondering how new Blocks are created, forged or mined.
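A quick way to sanity-check the class is to exercise it directly. The snippet below repeats a condensed copy of the class above (docstrings trimmed) so it runs standalone:

```python
import hashlib
import json
from time import time

# Condensed copy of the Blockchain class above, just enough to exercise it
class Blockchain(object):
    def __init__(self):
        self.current_transactions = []
        self.chain = []
        self.new_block(previous_hash=1, proof=100)  # genesis block

    def new_block(self, proof, previous_hash=None):
        block = {
            'index': len(self.chain) + 1,
            'timestamp': time(),
            'transactions': self.current_transactions,
            'proof': proof,
            'previous_hash': previous_hash or self.hash(self.chain[-1]),
        }
        self.current_transactions = []
        self.chain.append(block)
        return block

    def new_transaction(self, sender, recipient, amount):
        self.current_transactions.append(
            {'sender': sender, 'recipient': recipient, 'amount': amount})
        return self.last_block['index'] + 1

    @property
    def last_block(self):
        return self.chain[-1]

    @staticmethod
    def hash(block):
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()


bc = Blockchain()
idx = bc.new_transaction('alice', 'bob', 5)  # queued for the *next* block
print(idx)                                   # 2: the genesis block is index 1
bc.new_block(proof=12345)                    # proof value is a placeholder for now
print(len(bc.chain))                         # 2
print(bc.chain[1]['transactions'][0]['amount'])  # 5
```

Note how the transaction only lands in the chain once `new_block()` is called; until then it waits in `current_transactions`.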
#### Understanding Proof of Work

A Proof of Work (PoW) algorithm is how new Blocks are created or mined on the blockchain. The goal of PoW is to discover a number which solves a problem. The number must be difficult to find but easy to verify—computationally speaking—by anyone on the network. This is the core idea behind Proof of Work.

We’ll look at a very simple example to help this sink in.
Let’s decide that the hash of some integer x multiplied by another y must end in 0. So, hash(x * y) = ac23dc...0. And for this simplified example, let’s fix x = 5. Implementing this in Python:

```
from hashlib import sha256

x = 5
y = 0  # We don't know what y should be yet...

while sha256(f'{x*y}'.encode()).hexdigest()[-1] != "0":
    y += 1

print(f'The solution is y = {y}')
```
The solution here is y = 21, since the produced hash ends in 0:

```
hash(5 * 21) = 1253e9373e...5e3600155e860
```

The network is able to easily verify this solution.
#### Implementing basic Proof of Work

Let’s implement a similar algorithm for our blockchain. Our rule will be similar to the example above:

> Find a number p that, when hashed with the previous Block’s solution, produces a hash with 4 leading zeroes.
```
import hashlib
import json

from time import time
from uuid import uuid4


class Blockchain(object):
    ...

    def proof_of_work(self, last_proof):
        """
        Simple Proof of Work Algorithm:
         - Find a number p' such that hash(pp') contains 4 leading zeroes
         - p is the previous proof, and p' is the new proof

        :param last_proof: <int>
        :return: <int>
        """

        proof = 0
        while self.valid_proof(last_proof, proof) is False:
            proof += 1

        return proof

    @staticmethod
    def valid_proof(last_proof, proof):
        """
        Validates the Proof: Does hash(last_proof, proof) contain 4 leading zeroes?

        :param last_proof: <int> Previous Proof
        :param proof: <int> Current Proof
        :return: <bool> True if correct, False if not.
        """

        guess = f'{last_proof}{proof}'.encode()
        guess_hash = hashlib.sha256(guess).hexdigest()
        return guess_hash[:4] == "0000"
```
To adjust the difficulty of the algorithm, we could modify the number of leading zeroes. But 4 is sufficient. You’ll find that the addition of a single leading zero makes a mammoth difference to the time required to find a solution.
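You can get a feel for this by counting guesses at increasing difficulty. This is a throwaway sketch (`attempts_for` is a helper for illustration, not part of our class):

```python
import hashlib
from itertools import count

def attempts_for(difficulty, last_proof=100):
    # Count guesses until a hash with `difficulty` leading zeroes appears
    target = "0" * difficulty
    for proof in count():
        guess = f'{last_proof}{proof}'.encode()
        if hashlib.sha256(guess).hexdigest()[:difficulty] == target:
            return proof

for d in range(1, 5):
    print(f'{d} leading zeroes: found after {attempts_for(d)} attempts')
```

On average, each additional leading (hex) zero multiplies the search space by 16, so difficulty 4 already takes tens of thousands of attempts on average, while verifying a candidate always remains a single hash.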
Our class is almost complete, and we’re ready to begin interacting with it using HTTP requests.

* * *
### Step 2: Our Blockchain as an API

We’re going to use the Python Flask framework. It’s a micro-framework, and it makes it easy to map endpoints to Python functions. This allows us to talk to our blockchain over the web using HTTP requests.

We’ll create three endpoints:

* `/transactions/new` to create a new transaction to a block
* `/mine` to tell our server to mine a new Block
* `/chain` to return the full Blockchain

#### Setting up Flask

Our “server” will form a single node in our blockchain network. Let’s create some boilerplate code:
```
import hashlib
import json
from time import time
from uuid import uuid4

from flask import Flask, jsonify


class Blockchain(object):
    ...


# Instantiate our Node
app = Flask(__name__)

# Generate a globally unique address for this node
node_identifier = str(uuid4()).replace('-', '')

# Instantiate the Blockchain
blockchain = Blockchain()


@app.route('/mine', methods=['GET'])
def mine():
    return "We'll mine a new Block"


@app.route('/transactions/new', methods=['POST'])
def new_transaction():
    return "We'll add a new transaction"


@app.route('/chain', methods=['GET'])
def full_chain():
    response = {
        'chain': blockchain.chain,
        'length': len(blockchain.chain),
    }
    return jsonify(response), 200


if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
```
A brief explanation of what we’ve added above:

* We instantiate our Node. Read more about Flask [here][7].
* We create a random name for our node.
* We instantiate our Blockchain class.
* We create the `/mine` endpoint, which accepts GET requests.
* We create the `/transactions/new` endpoint, which accepts POST requests, since we’ll be sending data to it.
* We create the `/chain` endpoint, which returns the full Blockchain.
* Finally, we run the server on port 5000.
#### The Transactions Endpoint

This is what the request for a transaction will look like. It’s what the user sends to the server:

```
{
 "sender": "my address",
 "recipient": "someone else's address",
 "amount": 5
}
```
```
import hashlib
import json
from time import time
from uuid import uuid4

from flask import Flask, jsonify, request

...

@app.route('/transactions/new', methods=['POST'])
def new_transaction():
    values = request.get_json()

    # Check that the required fields are in the POST'ed data
    required = ['sender', 'recipient', 'amount']
    if not all(k in values for k in required):
        return 'Missing values', 400

    # Create a new Transaction
    index = blockchain.new_transaction(values['sender'], values['recipient'], values['amount'])

    response = {'message': f'Transaction will be added to Block {index}'}
    return jsonify(response), 201
```
A method for creating Transactions
#### The Mining Endpoint

Our mining endpoint is where the magic happens, and it’s easy. It has to do three things:

1. Calculate the Proof of Work
2. Reward the miner (us) by adding a transaction granting us 1 coin
3. Forge the new Block by adding it to the chain
```
import hashlib
import json

from time import time
from uuid import uuid4

from flask import Flask, jsonify, request

...

@app.route('/mine', methods=['GET'])
def mine():
    # We run the proof of work algorithm to get the next proof...
    last_block = blockchain.last_block
    last_proof = last_block['proof']
    proof = blockchain.proof_of_work(last_proof)

    # We must receive a reward for finding the proof.
    # The sender is "0" to signify that this node has mined a new coin.
    blockchain.new_transaction(
        sender="0",
        recipient=node_identifier,
        amount=1,
    )

    # Forge the new Block by adding it to the chain
    previous_hash = blockchain.hash(last_block)
    block = blockchain.new_block(proof, previous_hash)

    response = {
        'message': "New Block Forged",
        'index': block['index'],
        'transactions': block['transactions'],
        'proof': block['proof'],
        'previous_hash': block['previous_hash'],
    }
    return jsonify(response), 200
```
Note that the recipient of the mined Block is the address of our node. And most of what we’ve done here is just interact with the methods on our Blockchain class. At this point, we’re done, and can start interacting with our blockchain.

### Step 3: Interacting with our Blockchain

You can use plain old cURL or Postman to interact with our API over a network.

Fire up the server:

```
$ python blockchain.py
```
Let’s try mining a block by making a GET request to http://localhost:5000/mine:

![](https://cdn-images-1.medium.com/max/1600/1*ufYwRmWgQeA-Jxg0zgYLOA.png)
Using Postman to make a GET request

Let’s create a new transaction by making a POST request to http://localhost:5000/transactions/new with a body containing our transaction structure:

![](https://cdn-images-1.medium.com/max/1600/1*O89KNbEWj1vigMZ6VelHAg.png)
Using Postman to make a POST request

If you aren’t using Postman, you can make the equivalent request using cURL:
```
$ curl -X POST -H "Content-Type: application/json" -d '{ "sender": "d4ee26eee15148ee92c6cd394edd974e", "recipient": "someone-other-address", "amount": 5}' "http://localhost:5000/transactions/new"
```

I restarted my server and mined two blocks, to give 3 in total. Let’s inspect the full chain by requesting http://localhost:5000/chain:
```
{
  "chain": [
    {
      "index": 1,
      "previous_hash": 1,
      "proof": 100,
      "timestamp": 1506280650.770839,
      "transactions": []
    },
    {
      "index": 2,
      "previous_hash": "c099bc...bfb7",
      "proof": 35293,
      "timestamp": 1506280664.717925,
      "transactions": [
        {
          "amount": 1,
          "recipient": "8bbcb347e0634905b0cac7955bae152b",
          "sender": "0"
        }
      ]
    },
    {
      "index": 3,
      "previous_hash": "eff91a...10f2",
      "proof": 35089,
      "timestamp": 1506280666.1086972,
      "transactions": [
        {
          "amount": 1,
          "recipient": "8bbcb347e0634905b0cac7955bae152b",
          "sender": "0"
        }
      ]
    }
  ],
  "length": 3
}
```
### Step 4: Consensus

This is very cool. We’ve got a basic Blockchain that accepts transactions and allows us to mine new Blocks. But the whole point of Blockchains is that they should be decentralized. And if they’re decentralized, how on earth do we ensure that they all reflect the same chain? This is called the problem of Consensus, and we’ll have to implement a Consensus Algorithm if we want more than one node in our network.

#### Registering new Nodes

Before we can implement a Consensus Algorithm, we need a way to let a node know about neighbouring nodes on the network. Each node on our network should keep a registry of other nodes on the network. Thus, we’ll need some more endpoints:

1. `/nodes/register` to accept a list of new nodes in the form of URLs.
2. `/nodes/resolve` to implement our Consensus Algorithm, which resolves any conflicts—to ensure a node has the correct chain.

We’ll need to modify our Blockchain’s constructor and provide a method for registering nodes:
```
...
from urllib.parse import urlparse
...


class Blockchain(object):
    def __init__(self):
        ...
        self.nodes = set()
        ...

    def register_node(self, address):
        """
        Add a new node to the list of nodes

        :param address: <str> Address of node. Eg. 'http://192.168.0.5:5000'
        :return: None
        """

        parsed_url = urlparse(address)
        self.nodes.add(parsed_url.netloc)
```
A method for adding neighbouring nodes to our Network
Note that we’ve used a `set()` to hold the list of nodes. This is a cheap way of ensuring that the addition of new nodes is idempotent—meaning that no matter how many times we add a specific node, it appears exactly once.
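A tiny illustration of that idempotency, using the same `urlparse` approach as `register_node()` (the address below is just an example):

```python
from urllib.parse import urlparse

nodes = set()

# Register the same node three times...
for address in ['http://192.168.0.5:5000',
                'http://192.168.0.5:5000',
                'http://192.168.0.5:5000']:
    nodes.add(urlparse(address).netloc)

# ...and it still appears exactly once
print(nodes)  # {'192.168.0.5:5000'}
```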
#### Implementing the Consensus Algorithm

As mentioned, a conflict is when one node has a different chain to another node. To resolve this, we’ll make the rule that the longest valid chain is authoritative. In other words, the longest valid chain on the network is the de facto one. Using this algorithm, we reach Consensus amongst the nodes in our network.
```
...
import requests


class Blockchain(object):
    ...

    def valid_chain(self, chain):
        """
        Determine if a given blockchain is valid

        :param chain: <list> A blockchain
        :return: <bool> True if valid, False if not
        """

        last_block = chain[0]
        current_index = 1

        while current_index < len(chain):
            block = chain[current_index]
            print(f'{last_block}')
            print(f'{block}')
            print("\n-----------\n")
            # Check that the hash of the block is correct
            if block['previous_hash'] != self.hash(last_block):
                return False

            # Check that the Proof of Work is correct
            if not self.valid_proof(last_block['proof'], block['proof']):
                return False

            last_block = block
            current_index += 1

        return True

    def resolve_conflicts(self):
        """
        This is our Consensus Algorithm. It resolves conflicts
        by replacing our chain with the longest valid one in the network.

        :return: <bool> True if our chain was replaced, False if not
        """

        neighbours = self.nodes
        new_chain = None

        # We're only looking for chains longer than ours
        max_length = len(self.chain)

        # Grab and verify the chains from all the nodes in our network
        for node in neighbours:
            response = requests.get(f'http://{node}/chain')

            if response.status_code == 200:
                length = response.json()['length']
                chain = response.json()['chain']

                # Check if the length is longer and the chain is valid
                if length > max_length and self.valid_chain(chain):
                    max_length = length
                    new_chain = chain

        # Replace our chain if we discovered a new, valid chain longer than ours
        if new_chain:
            self.chain = new_chain
            return True

        return False
```
The first method, `valid_chain()`, is responsible for checking whether a chain is valid by looping through each Block and verifying both the hash and the proof.

`resolve_conflicts()` is a method which loops through all our neighbouring nodes, downloads their chains and verifies them using the above method. If a valid chain is found whose length is greater than ours, we replace ours.
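The hash-link half of that check can be sketched in isolation. This toy `valid_chain` only verifies the `previous_hash` links; the real method above also validates each Proof of Work:

```python
import hashlib
import json

def hash_block(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def valid_chain(chain):
    # Walk the chain pairwise, checking each block's previous_hash link
    for prev, block in zip(chain, chain[1:]):
        if block['previous_hash'] != hash_block(prev):
            return False
    return True

genesis = {'index': 1, 'previous_hash': 1}
second = {'index': 2, 'previous_hash': hash_block(genesis)}
print(valid_chain([genesis, second]))  # True

# Break the link and validation fails
second['previous_hash'] = 'bogus'
print(valid_chain([genesis, second]))  # False
```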
Let’s register the two endpoints with our API, one for adding neighbouring nodes and the other for resolving conflicts:
```
@app.route('/nodes/register', methods=['POST'])
def register_nodes():
    values = request.get_json()

    nodes = values.get('nodes')
    if nodes is None:
        return "Error: Please supply a valid list of nodes", 400

    for node in nodes:
        blockchain.register_node(node)

    response = {
        'message': 'New nodes have been added',
        'total_nodes': list(blockchain.nodes),
    }
    return jsonify(response), 201


@app.route('/nodes/resolve', methods=['GET'])
def consensus():
    replaced = blockchain.resolve_conflicts()

    if replaced:
        response = {
            'message': 'Our chain was replaced',
            'new_chain': blockchain.chain
        }
    else:
        response = {
            'message': 'Our chain is authoritative',
            'chain': blockchain.chain
        }

    return jsonify(response), 200
```
At this point you can grab a different machine if you like, and spin up different nodes on your network. Or spin up processes using different ports on the same machine. I spun up another node on my machine, on a different port, and registered it with my current node. Thus, I have two nodes: [http://localhost:5000][9] and http://localhost:5001.

![](https://cdn-images-1.medium.com/max/1600/1*Dd78u-gmtwhQWHhPG3qMTQ.png)
Registering a new Node

I then mined some new Blocks on node 2, to ensure the chain was longer. Afterward, I called GET /nodes/resolve on node 1, where the chain was replaced by the Consensus Algorithm:

![](https://cdn-images-1.medium.com/max/1600/1*SGO5MWVf7GguIxfz6S8NVw.png)
Consensus Algorithm at Work

And that’s a wrap… Go get some friends together to help test out your Blockchain.

* * *

I hope that this has inspired you to create something new. I’m ecstatic about Cryptocurrencies because I believe that Blockchains will rapidly change the way we think about economies, governments and record-keeping.

**Update:** I’m planning on following up with a Part 2, where we’ll extend our Blockchain to have a Transaction Validation Mechanism as well as discuss some ways in which you can productionize your Blockchain.
--------------------------------------------------------------------------------

via: https://hackernoon.com/learn-blockchains-by-building-one-117428612f46

Author: [Daniel van Flymen][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]:https://hackernoon.com/@vanflymen?source=post_header_lockup
[1]:https://learncryptography.com/hash-functions/what-are-hash-functions
[2]:https://www.python.org/downloads/
[3]:https://www.getpostman.com
[4]:https://github.com/dvf/blockchain
[5]:https://www.jetbrains.com/pycharm/
[6]:https://github.com/dvf/blockchain
[7]:http://flask.pocoo.org/docs/0.12/quickstart/#a-minimal-application
[8]:http://localhost:5000/transactions/new
[9]:http://localhost:5000
@ -1,195 +0,0 @@
|
||||
hankchow translating
|
||||
|
||||
How to measure particulate matter with a Raspberry Pi
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bubblehands_fromRHT_520_0612LL.png?itok=_iQ2dO3S)
|
||||
We regularly measure particulate matter in the air at our school in Southeast Asia. The values here are very high, particularly between February and May, when weather conditions are very dry and hot, and many fields burn. These factors negatively affect the quality of the air. In this article, I will show you how to measure particulate matter using a Raspberry Pi.
|
||||
|
||||
### What is particulate matter?
|
||||
|
||||
Particulate matter is fine dust or very small particles in the air. A distinction is made between PM10 and PM2.5: PM10 refers to particles that are smaller than 10µm; PM2.5 refers to particles that are smaller than 2.5µm. The smaller the particles—i.e., anything smaller than 2.5µm—the more dangerous they are to one's health, as they can penetrate into the alveoli and impact the respiratory system.
|
||||
|
||||
The World Health Organization recommends [limiting particulate matter][1] to the following values:
|
||||
|
||||
* Annual average PM10 20 µg/m³
|
||||
* Annual average PM2,5 10 µg/m³ per year
|
||||
* Daily average PM10 50 µg/m³ without permitted days on which exceeding is possible.
|
||||
* Daily average PM2,5 25 µg/m³ without permitted days on which exceeding is possible.
|
||||
|
||||
|
||||
|
||||
These values are below the limits set in most countries. In the European Union, an annual average of 40 µg/m³ for PM10 is allowed.
|
||||
|
||||
### What is the Air Quality Index (AQI)?
|
||||
|
||||
The Air Quality Index indicates how “good” or “bad” air is based on its particulate measurement. Unfortunately, there is no uniform standard for AQI because not all countries calculate it the same way. The Wikipedia article on the [Air Quality Index][2] offers a helpful overview. At our school, we are guided by the classification established by the United States' [Environmental Protection Agency][3].
|
||||
|
||||
|
||||
![Air quality index][5]
|
||||
|
||||
Air quality index
|
||||
|
||||
### What do we need to measure particulate matter?
|
||||
|
||||
Measuring particulate matter requires only two things:
|
||||
|
||||
* A Raspberry Pi (every model works; a model with WiFi is best)
|
||||
* A particulates sensor SDS011
|
||||
|
||||
|
||||
|
||||
![Particulate sensor][7]
|
||||
|
||||
Particulate sensor
|
||||
|
||||
If you are using a Raspberry Pi Zero W, you will also need an adapter cable to a standard USB port because the Zero has only a Micro USB. These are available for about $20. The sensor comes with a USB adapter for the serial interface.
|
||||
|
||||
### Installation
|
||||
|
||||
For our Raspberry Pi we download the corresponding Raspbian Lite Image and [write it on the Micro SD card][8]. (I will not go into the details of setting up the WLAN connection; many tutorials are available online).
|
||||
|
||||
If you want to have SSH enabled after booting, you need to create an empty file named `ssh` in the boot partition. The IP of the Raspberry Pi can best be obtained via your own router/DHCP server. You can then log in via SSH (the default password is raspberry):
|
||||
```
|
||||
$ ssh pi@192.168.1.5
|
||||
|
||||
```
|
||||
|
||||
First we need to install some packages on the Pi:
|
||||
```
|
||||
$ sudo apt install git-core python-serial python-enum lighttpd
|
||||
|
||||
```
|
||||
|
||||
Before we can start, we need to know which serial port the USB adapter is connected to. `dmesg` helps us:
|
||||
```
|
||||
$ dmesg
|
||||
|
||||
[ 5.559802] usbcore: registered new interface driver usbserial
|
||||
|
||||
[ 5.559930] usbcore: registered new interface driver usbserial_generic
|
||||
|
||||
[ 5.560049] usbserial: USB Serial support registered for generic
|
||||
|
||||
[ 5.569938] usbcore: registered new interface driver ch341
|
||||
|
||||
[ 5.570079] usbserial: USB Serial support registered for ch341-uart
|
||||
|
||||
[ 5.570217] ch341 1–1.4:1.0: ch341-uart converter detected
|
||||
|
||||
[ 5.575686] usb 1–1.4: ch341-uart converter now attached to ttyUSB0
|
||||
|
||||
```
|
||||
|
||||
In the last line, you can see our interface: `ttyUSB0`. We now need a small Python script that reads the data and saves it in a JSON file, and then we will create a small HTML page that reads and displays the data.
|
||||
|
||||
### Reading data on the Raspberry Pi
|
||||
|
||||
We first create an instance of the sensor and then read the sensor every 5 minutes, for 30 seconds. These values can, of course, be adjusted. Between the measuring intervals, we put the sensor into a sleep mode to increase its lifespan (according to the manufacturer, the lifespan totals approximately 8000 hours).
We can download the script with this command:

```
$ wget -O /home/pi/aqi.py https://raw.githubusercontent.com/zefanja/aqi/master/python/aqi.py
```

For the script to run without errors, two small things are still needed:

```
$ sudo chown pi:pi /var/www/html/
$ echo "[]" > /var/www/html/aqi.json
```
Now you can start the script:

```
$ chmod +x aqi.py
$ ./aqi.py
PM2.5:55.3, PM10:47.5
PM2.5:55.5, PM10:47.7
PM2.5:55.7, PM10:47.8
PM2.5:53.9, PM10:47.6
PM2.5:53.6, PM10:47.4
PM2.5:54.2, PM10:47.3
…
```
### Run the script automatically

So that we don’t have to start the script manually every time, we can let it start with a cronjob, e.g., with every restart of the Raspberry Pi. To do this, open the crontab file:

```
$ crontab -e
```

and add the following line at the end:

```
@reboot cd /home/pi/ && ./aqi.py
```

Now our script starts automatically with every restart.
### HTML page for displaying measured values and AQI

We have already installed a lightweight webserver, `lighttpd`. So we need to save our HTML, JavaScript, and CSS files in the directory `/var/www/html/` so that we can access the data from another computer or smartphone. With the next three commands, we simply download the corresponding files:

```
$ wget -O /var/www/html/index.html https://raw.githubusercontent.com/zefanja/aqi/master/html/index.html
$ wget -O /var/www/html/aqi.js https://raw.githubusercontent.com/zefanja/aqi/master/html/aqi.js
$ wget -O /var/www/html/style.css https://raw.githubusercontent.com/zefanja/aqi/master/html/style.css
```

The main work is done in the JavaScript file, which opens our JSON file, takes the last value, and calculates the AQI based on this value. Then the background colors are adjusted according to the scale of the EPA.
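That AQI calculation is a piecewise linear interpolation between concentration breakpoints. As a sketch of what the JavaScript does, here is the same computation in Python, assuming the EPA's PM2.5 breakpoint table (`pm25_to_aqi` is an illustrative name, not a function from the project):

```python
# EPA PM2.5 breakpoints (µg/m³) mapped to AQI ranges.
PM25_BREAKPOINTS = [
    (0.0, 12.0, 0, 50),
    (12.1, 35.4, 51, 100),
    (35.5, 55.4, 101, 150),
    (55.5, 150.4, 151, 200),
    (150.5, 250.4, 201, 300),
    (250.5, 500.4, 301, 500),
]

def pm25_to_aqi(concentration: float) -> int:
    """Linearly interpolate a PM2.5 concentration into its AQI value."""
    for c_lo, c_hi, i_lo, i_hi in PM25_BREAKPOINTS:
        if c_lo <= concentration <= c_hi:
            return round((i_hi - i_lo) / (c_hi - c_lo) * (concentration - c_lo) + i_lo)
    raise ValueError("concentration out of AQI range")
```

For the sample reading of PM2.5 = 55.3 above, this lands in the "unhealthy for sensitive groups" band, close to the top of the 101–150 range.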
Now you simply open the address of the Raspberry Pi in your browser and look at the current particulate values, e.g., [http://192.168.1.5][9]

The page is very simple and can be extended, for example, with a chart showing the history of the last hours, etc. Pull requests are welcome.

The complete [source code is available on Github][10].

**[Enter our [Raspberry Pi week giveaway][11] for a chance at this arcade gaming kit.]**

### Wrapping up

For relatively little money, we can measure particulate matter with a Raspberry Pi. There are many possible applications, from a permanent outdoor installation to a mobile measuring device. At our school, we use both: There is a sensor that measures outdoor values day and night, and a mobile sensor that checks the effectiveness of the air conditioning filters in our classrooms.

[Luftdaten.info][12] offers guidance to build a similar sensor. The software is delivered ready to use, and the measuring device is even more compact because it does not use a Raspberry Pi. Great project!

Creating a particulates sensor is an excellent project to do with students in computer science classes or a workshop.

What do you use a [Raspberry Pi][13] for?
--------------------------------------------------------------------------------

via: https://opensource.com/article/18/3/how-measure-particulate-matter-raspberry-pi

作者:[Stephan Tetzel][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/stephan
[1]:https://en.wikipedia.org/wiki/Particulates
[2]:https://en.wikipedia.org/wiki/Air_quality_index
[3]:https://en.wikipedia.org/wiki/United_States_Environmental_Protection_Agency
[5]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/air_quality_index.png?itok=FwmGf1ZS (Air quality index)
[7]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/particulate_sensor.jpg?itok=ddH3bBwO (Particulate sensor)
[8]:https://www.raspberrypi.org/documentation/installation/installing-images/README.md
[9]:http://192.168.1.5/
[10]:https://github.com/zefanja/aqi
[11]:https://opensource.com/article/18/3/raspberry-pi-week-giveaway
[12]:http://luftdaten.info/
[13]:https://openschoolsolutions.org/shutdown-servers-case-power-failure%e2%80%8a-%e2%80%8aups-nut-co/
@ -0,0 +1,201 @@
A Command Line Productivity Tool For Tracking Work Hours
======

![](https://www.ostechnix.com/wp-content/uploads/2018/03/Moro-720x340.jpg)

Keeping track of your work hours will give you an insight about the amount of work you get done in a specific time frame. There are plenty of GUI-based productivity tools available on the Internet for tracking work hours. However, I couldn’t find a good CLI-based tool. Today, I stumbled upon a simple, yet useful tool named **“Moro”** for tracking work hours. Moro is a Finnish word which means “Hello”. Using Moro, you can find how much time you take to complete a specific task. It is free, open source and written using **NodeJS**.

### Moro – A Command Line Productivity Tool For Tracking Work Hours

Since Moro is written using NodeJS, make sure you have installed it on your system. If you haven’t installed it already, follow the link given below to install NodeJS and NPM in your Linux box.

Once NodeJS and NPM are installed, run the following command to install Moro.

```
$ npm install -g moro
```

### Usage

Moro’s working concept is very simple. It saves your work starting time, ending time and the break time in your system. At the end of each day, it will tell you how many hours you have worked!
When you reach the office, just type:

```
$ moro
```

Sample output:

```
💙 Moro \o/

✔ You clocked in at: 9:20
```

Moro will register this time as your starting time.

When you leave the office, again type:

```
$ moro
```

Sample output:

```
💙 Moro \o/

✔ You clocked out at: 19:22

ℹ Today looks like this so far:

┌──────────────────┬─────────────────────────┐
│ Today you worked │ 9 Hours and 72 Minutes  │
├──────────────────┼─────────────────────────┤
│ Clock in         │ 9:20                    │
├──────────────────┼─────────────────────────┤
│ Clock out        │ 19:22                   │
├──────────────────┼─────────────────────────┤
│ Break duration   │ 30 minutes              │
├──────────────────┼─────────────────────────┤
│ Date             │ 2018-03-19              │
└──────────────────┴─────────────────────────┘

ℹ Run moro --help to learn how to edit your clock in, clock out or break duration for today
```

Moro registers that time as your ending time.
Now, Moro subtracts the starting time from the ending time, then subtracts another 30 minutes of break time from that total, and gives you your total working hours for the day. For example, say you came to work at 10:00 in the morning and left at 17:30 in the evening. The total time you spent at the office is 7.5 hours (17:30 - 10:00). Subtract the break time (30 minutes by default), and your total working time is 7 hours.
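The calculation above is plain time subtraction. Here is a small Python sketch of the same arithmetic (illustrative only; Moro itself is written in NodeJS):

```python
from datetime import datetime, timedelta

def worked_hours(clock_in: str, clock_out: str, break_minutes: int = 30) -> float:
    """Hours worked between two HH:MM times on the same day, minus the break."""
    fmt = "%H:%M"
    delta = datetime.strptime(clock_out, fmt) - datetime.strptime(clock_in, fmt)
    delta -= timedelta(minutes=break_minutes)
    return delta.total_seconds() / 3600
```

With the example values, `worked_hours("10:00", "17:30")` gives 7.0 hours.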
**Note:** Don’t confuse “moro” with the “more” command like I did while writing this guide.

To see all your registered hours, run:

```
$ moro report --all
```

Just in case you forgot to register the start time or end time, you can specify them later on the same day.

For example, to register 10 am as the start time, run:

```
$ moro hi 10:00

💙 Moro \o/

✔ You clocked in at: 10:00

⏰ Working until 18:00 will make it a full (7.5 hours) day
```

To register 17:30 as the end time:

```
$ moro bye 17:30

💙 Moro \o/

✔ You clocked out at: 17:30

ℹ Today looks like this so far:

┌──────────────────┬───────────────────────┐
│ Today you worked │ 7 Hours and 0 Minutes │
├──────────────────┼───────────────────────┤
│ Clock in         │ 10:00                 │
├──────────────────┼───────────────────────┤
│ Clock out        │ 17:30                 │
├──────────────────┼───────────────────────┤
│ Break duration   │ 30 minutes            │
├──────────────────┼───────────────────────┤
│ Date             │ 2018-03-19            │
└──────────────────┴───────────────────────┘

ℹ Run moro --help to learn how to edit your clock in, clock out or break duration for today
```
You already know Moro subtracts 30 minutes for break time by default. If you want to set a custom break time, you can simply set it with this command:

```
$ moro break 45
```

Now, the break time is 45 minutes.

To clear all data:

```
$ moro clear --yes

💙 Moro \o/

✔ Database file deleted successfully
```

**Add notes**

Sometimes, you may want to add a note while working. Don’t look for a separate note-taking application. Moro will help you to add notes. To add a note, just run:

```
$ moro note mynotes
```

To search for the registered notes at a later time, simply do:

```
$ moro search mynotes
```

**Change default settings**

The default full work day is 7.5 hours. Since the developer is from Finland, those are the official work hours there. You can, however, change this setting as per your country’s work hours.

Say, for example, to set it to 7 hours, run:

```
$ moro config --day 7
```

Also, the default break time can be changed from 30 minutes like below:

```
$ moro config --break 45
```
**Backup your data**

Like I already said, Moro stores the tracking time data in your home directory, and the file name is **.moro-data.db**.

You can, however, save the backup database file to a different location. To do so, move the **.moro-data.db** file to a different location of your choice and tell Moro to use that database file like below.

```
$ moro config --database-path /home/sk/personal/moro-data.db
```

As per the above command, I have assigned the default database file’s location to the **/home/sk/personal** directory.

For help, run:

```
$ moro --help
```

As you can see, Moro is very simple, yet useful to track how much time you’ve spent to get your work done. It will be useful for freelancers and also anyone who must get things done under a limited time frame.

And, that’s all for today. Hope this helps. More good stuff to come. Stay tuned!

Cheers!

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/moro-a-command-line-productivity-tool-for-tracking-work-hours/

作者:[SK][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.ostechnix.com/author/sk/
@ -1,207 +0,0 @@
hankchow translating

How to use Ansible to patch systems and install applications
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_osyearbook2016_sysadmin_cc.png?itok=Y1AHCKI4)

Have you ever wondered how to patch your systems, reboot, and continue working?

If so, you'll be interested in [Ansible][1], a simple configuration management tool that can make some of the hardest work easy, such as system administration tasks that are complicated, take hours to complete, or have complex security requirements.

In my experience, one of the hardest parts of being a sysadmin is patching systems. Every time you get a Common Vulnerabilities and Exposure (CVE) notification or Information Assurance Vulnerability Alert (IAVA) mandated by security, you have to kick into high gear to close the security gaps. (And, believe me, your security officer will hunt you down unless the vulnerabilities are patched.)

Ansible can reduce the time it takes to patch systems by running [packaging modules][2]. To demonstrate, let's use the [yum module][3] to update the system. Ansible can install, update, remove, or install from another location (e.g., `rpmbuild` from continuous integration/continuous delivery). Here is the task for updating the system:

```
- name: update the system
  yum:
    name: "*"
    state: latest
```

In the first line, we give the task a meaningful `name` so we know what Ansible is doing. In the next line, the `yum` module updates the CentOS virtual machine (VM), `name: "*"` tells yum to update everything, and, finally, `state: latest` updates to the latest RPM.
After updating the system, we need to restart and reconnect:

```
- name: restart system to reboot to newest kernel
  shell: "sleep 5 && reboot"
  async: 1
  poll: 0

- name: wait for 10 seconds
  pause:
    seconds: 10

- name: wait for the system to reboot
  wait_for_connection:
    connect_timeout: 20
    sleep: 5
    delay: 5
    timeout: 60

- name: install epel-release
  yum:
    name: epel-release
    state: latest
```

The `shell` module puts the system to sleep for 5 seconds, then reboots. We use `sleep` to prevent the connection from breaking, `async` to avoid timeout, and `poll` to fire-and-forget. We pause for 10 seconds to wait for the VM to come back and use `wait_for_connection` to connect back to the VM as soon as it can make a connection. Then we install `epel-release` to test the RPM installation. You can run this playbook multiple times to demonstrate idempotence; the only task that will show as changed is the reboot, since we are using the `shell` module. You can use `changed_when: False` to ignore the change when using the `shell` module if you expect no actual changes.
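The retry behavior behind `wait_for_connection` (and the `retries`/`delay` options used on the `uri` task later) is a simple pattern: run a check, wait, and run it again until it succeeds or a limit is hit. A generic Python sketch of that pattern, purely for illustration (this is not Ansible's implementation):

```python
import time

def wait_for(check, retries=10, delay=1.0):
    """Call `check` until it returns truthy; raise if it never does.

    Returns the number of attempts it took, mirroring the
    retries/delay knobs of Ansible's wait-style tasks.
    """
    for attempt in range(1, retries + 1):
        if check():
            return attempt
        time.sleep(delay)
    raise TimeoutError("condition not met after %d attempts" % retries)
```

A real check function might try to open an SSH or HTTP connection and return `True` on success.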
So far we've learned how to update a system, restart the VM, reconnect, and install an RPM. Next we will install NGINX using the role in [Ansible Lightbulb][4].

```
- name: Ensure nginx packages are present
  yum:
    name: nginx, python-pip, python-devel, devel
    state: present
  notify: restart-nginx-service

- name: Ensure uwsgi package is present
  pip:
    name: uwsgi
    state: present
  notify: restart-nginx-service

- name: Ensure latest default.conf is present
  template:
    src: templates/nginx.conf.j2
    dest: /etc/nginx/nginx.conf
    backup: yes
  notify: restart-nginx-service

- name: Ensure latest index.html is present
  template:
    src: templates/index.html.j2
    dest: /usr/share/nginx/html/index.html

- name: Ensure nginx service is started and enabled
  service:
    name: nginx
    state: started
    enabled: yes

- name: Ensure proper response from localhost can be received
  uri:
    url: "http://localhost:80/"
    return_content: yes
  register: response
  until: 'nginx_test_message in response.content'
  retries: 10
  delay: 1
```

And the handler that restarts the nginx service:

```
# handlers file for nginx-example
- name: restart-nginx-service
  service:
    name: nginx
    state: restarted
```

In this role, we install the RPMs `nginx`, `python-pip`, `python-devel`, and `devel` and install `uwsgi` with pip. Next, we use the `template` module to copy over the `nginx.conf` and `index.html` for the page to display. After that, we make sure the service is enabled on boot and started. Then we use the `uri` module to check the connection to the page.

Here is a playbook showing an example of updating, restarting, and installing an RPM, then continuing with the installation of NGINX. This can be done with any other roles/applications you want.

```
- hosts: all
  roles:
    - centos-update
    - nginx-simple
```
Watch this demo video for more insight on the process.

[demo](https://asciinema.org/a/166437/embed?)

This was just a simple example of how to update, reboot, and continue. For simplicity, I added the packages without [variables][5]. Once you start working with a large number of hosts, you will need to change a few settings. This is because in your production environment you might want to update one system at a time (not fire-and-forget) and actually wait a longer time for your system to reboot and continue.

For more ways to automate your work with this tool, take a look at the other [Ansible articles on Opensource.com][6].

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/3/ansible-patch-systems

作者:[Jonathan Lozada De La Matta][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/jlozadad
[1]:https://www.ansible.com/overview/how-ansible-works
[2]:https://docs.ansible.com/ansible/latest/list_of_packaging_modules.html
[3]:https://docs.ansible.com/ansible/latest/yum_module.html
[4]:https://github.com/ansible/lightbulb/tree/master/examples/nginx-role
[5]:https://docs.ansible.com/ansible/latest/playbooks_variables.html
[6]:https://opensource.com/tags/ansible
139
sources/tech/20180322 Simple Load Balancing with DNS on Linux.md
Normal file
@ -0,0 +1,139 @@
Simple Load Balancing with DNS on Linux
======

![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/american-robin-920.jpg?itok=_B_RRbfj)

When you have server back ends built of multiple servers, such as clustered or mirrored web or file servers, a load balancer provides a single point of entry. Large busy shops spend big money on high-end load balancers that perform a wide range of tasks: proxy, caching, health checks, SSL processing, configurable prioritization, traffic shaping, and lots more.

But maybe you don't want all that. You need a simple method for distributing workloads across all of your servers and providing a bit of failover, and don't care whether it is perfectly efficient. DNS round-robin and subdomain delegation with round-robin provide two simple methods to achieve this.

DNS round-robin is mapping multiple servers to the same hostname, so that when users visit foo.example.com, multiple servers are available to handle their requests.

Subdomain delegation with round-robin is useful when you have multiple subdomains or when your servers are geographically dispersed. You have a primary nameserver, and then your subdomains have their own nameservers. Your primary nameserver refers all subdomain requests to their own nameservers. This usually improves response times, as the DNS protocol will automatically look for the fastest links.

### Round-Robin DNS

Round-robin has nothing to do with robins. According to my favorite librarian, it was originally a French phrase, _ruban rond_, or round ribbon. Way back in olden times, French government officials signed grievance petitions in non-hierarchical circular, wavy, or spoke patterns to conceal whoever originated the petition.

Round-robin DNS is also non-hierarchical, a simple configuration that takes a list of servers and sends requests to each server in turn. It does not perform true load-balancing, as it does not measure loads, and does no health checks, so if one of the servers is down, requests are still sent to that server. Its virtue lies in simplicity. If you have a little cluster of file or web servers and want to spread the load between them in the simplest way, then round-robin DNS is for you.
All you do is create multiple A or AAAA records, mapping multiple servers to a single host name. This BIND example uses both IPv4 and IPv6 private address classes:

```
fileserv.example.com. IN A 172.16.10.10
fileserv.example.com. IN A 172.16.10.11
fileserv.example.com. IN A 172.16.10.12

fileserv.example.com. IN AAAA fd02:faea:f561:8fa0:1::10
fileserv.example.com. IN AAAA fd02:faea:f561:8fa0:1::11
fileserv.example.com. IN AAAA fd02:faea:f561:8fa0:1::12
```

Dnsmasq uses _/etc/hosts_ for A and AAAA records:

```
172.16.1.10 fileserv fileserv.example.com
172.16.1.11 fileserv fileserv.example.com
172.16.1.12 fileserv fileserv.example.com
fd02:faea:f561:8fa0:1::10 fileserv fileserv.example.com
fd02:faea:f561:8fa0:1::11 fileserv fileserv.example.com
fd02:faea:f561:8fa0:1::12 fileserv fileserv.example.com
```

Note that these examples are simplified, and there are multiple ways to resolve fully-qualified domain names, so please study up on configuring DNS.

Use the `dig` command to check your work. Replace `ns.example.com` with your name server:

```
$ dig @ns.example.com fileserv A fileserv AAAA
```

That should display both IPv4 and IPv6 round-robin records.
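The distribution effect comes from resolvers and clients rotating through the returned records in turn. Conceptually it is nothing more than cycling through a list, as this Python sketch shows (the addresses are the illustrative ones from the BIND example above; this is a model of the behavior, not how a DNS server is implemented):

```python
from itertools import cycle

def round_robin(addresses, requests):
    """Hand out one address per incoming request, rotating through the list."""
    rotation = cycle(addresses)
    return [next(rotation) for _ in range(requests)]

servers = ["172.16.10.10", "172.16.10.11", "172.16.10.12"]
# Six requests walk the three-server list twice.
assignments = round_robin(servers, 6)
```

Note what the sketch also makes obvious: a dead server stays in the rotation, which is exactly the "no health checks" caveat mentioned above.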
### Subdomain Delegation and Round-Robin

Subdomain delegation combined with round-robin is more work to set up, but it has some advantages. Use this when you have multiple subdomains or geographically-dispersed servers. Response times are often quicker, and a down server will not respond, so clients will not get hung up waiting for a reply. A short TTL, such as 60 seconds, helps this.

This approach requires multiple name servers. In the simplest scenario, you have a primary name server and two subdomains, each with its own name server. Configure your round-robin entries on the subdomain servers, then configure the delegations on your primary server.

In BIND on your primary name server, you'll need at least two additional configurations, a zone statement, and A/AAAA records in your zone data file. The delegation looks something like this on your primary name server:

```
ns1.sub.example.com. IN A 172.16.1.20
ns1.sub.example.com. IN AAAA fd02:faea:f561:8fa0:1::20
ns2.sub.example.com. IN A 172.16.1.21
ns2.sub.example.com. IN AAAA fd02:faea:f561:8fa0:1::21

sub.example.com. IN NS ns1.sub.example.com.
sub.example.com. IN NS ns2.sub.example.com.
```

Then each of the subdomain servers has its own zone file. The trick here is for each server to return its own IP address. The zone statement in `named.conf` is the same on both servers:

```
zone "sub.example.com" {
    type master;
    file "db.sub.example.com";
};
```

Then the data files are the same, except that the A/AAAA records use the server's own IP address. The SOA (start of authority) refers to the primary name server:

```
; first subdomain name server
$ORIGIN sub.example.com.
$TTL 60
sub.example.com IN SOA ns1.example.com. admin.example.com. (
        2018123456 ; serial
        3H         ; refresh
        15         ; retry
        3600000    ; expire
)

sub.example.com. IN NS ns1.sub.example.com.
sub.example.com. IN A 172.16.1.20
ns1.sub.example.com. IN AAAA fd02:faea:f561:8fa0:1::20

; second subdomain name server
$ORIGIN sub.example.com.
$TTL 60
sub.example.com IN SOA ns1.example.com. admin.example.com. (
        2018234567 ; serial
        3H         ; refresh
        15         ; retry
        3600000    ; expire
)

sub.example.com. IN NS ns1.sub.example.com.
sub.example.com. IN A 172.16.1.21
ns2.sub.example.com. IN AAAA fd02:faea:f561:8fa0:1::21
```

Next, make your round-robin entries on the subdomain name servers, and you're done. Now you have multiple name servers handling requests for your subdomains. Again, BIND is complex and has multiple ways to do the same thing, so your homework is to ensure that your configuration fits with the way you use it.
Subdomain delegations are easier in Dnsmasq. On your primary server, add lines like this in `dnsmasq.conf` to point to the name servers for the subdomains:

```
server=/sub.example.com/172.16.1.20
server=/sub.example.com/172.16.1.21
server=/sub.example.com/fd02:faea:f561:8fa0:1::20
server=/sub.example.com/fd02:faea:f561:8fa0:1::21
```

Then configure round-robin on the subdomain name servers in `/etc/hosts`.

For way more details and help, refer to these resources:

Learn more about Linux through the free ["Introduction to Linux"][1] course from The Linux Foundation and edX.

--------------------------------------------------------------------------------

via: https://www.linux.com/learn/intro-to-linux/2018/3/simple-load-balancing-dns-linux

作者:[CARLA SCHRODER][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linux.com/users/cschroder
[1]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
@ -0,0 +1,110 @@
chkservice – A Tool For Managing Systemd Units From Linux Terminal
======

systemd, which stands for "system daemon", is a new init system and system manager that has become very popular and has been widely adopted as the new standard init system by most Linux distributions.

systemctl is a systemd utility that helps us manage systemd daemons. It controls system startup and services, uses parallelization, socket, and D-Bus activation for starting services, offers on-demand starting of daemons, keeps track of processes using Linux control groups, and maintains mount and automount points.

It also offers a logging daemon, utilities to control basic system configuration (like the hostname, date, and locale), maintains a list of logged-in users, running containers and virtual machines, system accounts, runtime directories, and settings, and provides daemons to manage simple network configuration, network time synchronization, log forwarding, and name resolution.

### What Is chkservice

[chkservice][1] is an ncurses-based tool for managing systemd units from the terminal. It provides the user with a comprehensive view of all systemd services and allows them to be changed easily.

It requires superuser privileges to make changes to unit states or sysv scripts.
### How To Install chkservice In Linux

We can install chkservice in two ways, using either a package or the manual method.

For **`Debian/Ubuntu`**, use [APT-GET Command][2] or [APT Command][3] to install chkservice.

```
$ sudo add-apt-repository ppa:linuxenko/chkservice
$ sudo apt-get update
$ sudo apt-get install chkservice
```

For **`Arch Linux`** based systems, use [Yaourt Command][4] or [Packer Command][5] to install chkservice from the AUR repository.

```
$ yaourt -S chkservice
or
$ packer -S chkservice
```

For **`Fedora`**, use [DNF Command][6] to install chkservice.

```
$ sudo dnf copr enable srakitnican/default
$ sudo dnf install chkservice
```

For **`Debian Based Systems`**, use [DPKG Command][7] to install chkservice.

```
$ wget https://github.com/linuxenko/chkservice/releases/download/0.1/chkservice_0.1.0-amd64.deb
$ sudo dpkg -i chkservice_0.1.0-amd64.deb
```

For **`RPM Based Systems`**, use [YUM Command][8] to install chkservice.

```
$ sudo yum install https://github.com/linuxenko/chkservice/releases/download/0.1/chkservice_0.1.0-amd64.rpm
```
### How To Use chkservice
|
||||
|
||||
Just fire the following command to launch the chkservice tool. The output is split to four parts.
|
||||
|
||||
* **`First Part:`** This part shows about daemons status like, enabled [X] or disabled [] or static [s] or masked -m-
|
||||
* **`Second Part:`** This part shows daemons status like, started [ >] or stopped [=]
|
||||
* **`Third Part:`** This part shows the unit name
|
||||
* **`Fourth Part:`** This part showing the unit short description
|
||||
|
||||
|
||||
```
|
||||
$ sudo chkservice
|
||||
|
||||
```
|
||||
|
||||
![][10]
|
||||
|
||||
To view help page, hit `?` button. This will shows you available options to manage the systemd services.
|
||||
![][11]
|
||||
|
||||
Select the units, which you want to enable or disable then hit `Space Bar` button.
|
||||
![][12]
|
||||
|
||||
Select the units, which you want to start or stop then hit `s` button.
|
||||
![][13]
|
||||
|
||||
Select the units, which you want to reload then hit `r` button. After hit `r` key, you can see the `updated` message at the top.
|
||||
![][14]
|
||||
|
||||
Hit `q` button to quit the utility.
|
||||
|
||||
--------------------------------------------------------------------------------

via: https://www.2daygeek.com/chkservice-a-tool-for-managing-systemd-units-from-linux-terminal/

作者:[Ramya Nuvvula][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.2daygeek.com/author/ramya/
[1]:https://github.com/linuxenko/chkservice
[2]:https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
[3]:https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
[4]:https://www.2daygeek.com/install-yaourt-aur-helper-on-arch-linux/
[5]:https://www.2daygeek.com/install-packer-aur-helper-on-arch-linux/
[6]:https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
[7]:https://www.2daygeek.com/dpkg-command-to-manage-packages-on-debian-ubuntu-linux-mint-systems/
[8]:https://www.2daygeek.com/rpm-command-examples/
[10]:https://www.2daygeek.com/wp-content/uploads/2018/03/chkservice-to-manage-systemd-units-1.png
[11]:https://www.2daygeek.com/wp-content/uploads/2018/03/chkservice-to-manage-systemd-units-2.png
[12]:https://www.2daygeek.com/wp-content/uploads/2018/03/chkservice-to-manage-systemd-units-3.png
[13]:https://www.2daygeek.com/wp-content/uploads/2018/03/chkservice-to-manage-systemd-units-4.png
[14]:https://www.2daygeek.com/wp-content/uploads/2018/03/chkservice-to-manage-systemd-units-5.png
How to resolve mount.nfs: Stale file handle error
======

Learn how to resolve the mount.nfs: Stale file handle error on the Linux platform. This Network File System error can be resolved from the client or the server end.

_![][1]_

When you use Network File System in your environment, you must have seen the `mount.nfs: Stale file handle` error at times. This error means the NFS share cannot be mounted because something has changed since the last known good configuration.

Whenever you reboot the NFS server, or some of the NFS processes are not running on the client or the server, or the share is not properly exported on the server — any of these can cause this error. Moreover, it is irritating when the error occurs on a previously mounted NFS share, because that means the configuration is correct, since it was mounted before. In such a case, you can try the following commands:

Make sure NFS services are running fine on both the client and the server:

```
# service nfs status
nfsd (pid 12009 12008 12007 12006 12005 12004 12003 12002) is running...
rpc.rquotad (pid 11988) is running...
```
If the NFS share is currently mounted on the client, un-mount it forcefully and try to remount it on the NFS client. Check whether it is properly mounted with the `df` command and by changing directory inside it:

```
# umount -f /mydata_nfs
server:/nfs_share 41943040 892928 41050112 3% /mydata_nfs
```

In the above mount command, server can be the IP or the [hostname][4] of the NFS server.
If you get an error while forcefully un-mounting, like below:

```
# umount -f /mydata_nfs
umount: /mydata_nfs: device is busy
umount2: Device or resource busy
umount: /mydata_nfs: device is busy
```

Then you can check which processes or users are using that mount point with the `lsof` command, like below:
```
# lsof |grep mydata_nfs
bash 20092 oracle11 cwd unknown
bash 25040 oracle11 cwd unknown /mydata_nfs/MUYR (stat: Stale NFS file handle)
```

In the above example you can see that 4 PIDs are using some files on the said mount point. Try killing them to free the mount point. Once done, you will be able to un-mount it properly.
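The "find the holders" step above can be scripted. The sketch below pulls the unique PIDs (field 2 of `lsof` output) for a given mount point; the `pids_on_mount` helper is my own illustration, shown here with canned lsof-style lines:

```shell
# Extract unique PIDs from lsof-style input for lines mentioning the given
# mount point. On a live system: lsof | pids_on_mount /mydata_nfs
pids_on_mount() {
  awk -v mp="$1" '$0 ~ mp { print $2 }' | sort -u
}

# Canned example input standing in for real lsof output:
printf 'su 3327 root cwd unknown /mydata_nfs/dir\nbash 20092 oracle11 cwd unknown /mydata_nfs/MUYR\n' | pids_on_mount /mydata_nfs
```

You could then pass the resulting PIDs to `kill` — after checking what they are, since killing a user's shell mid-session is disruptive.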
Sometimes the mount command still gives the same error. In that case, try mounting after restarting the NFS service on the client with the command below:

```
# service nfs restart
Starting NFS mountd: [ OK ]
Starting NFS daemon: [ OK ]
```

Also read: [How to restart NFS step by step in HPUX][5]

If even this didn't solve your issue, the final step is to restart the services on the NFS server. Caution! This will disconnect all NFS shares exported from that server, and all clients will see their mount points disconnect. This step resolves the issue 99% of the time. If not, check the [NFS configurations][6], in case you changed the configuration and started seeing this error afterwards.

Outputs in the above post are from a RHEL 6.3 server. Drop us your comments related to this post.
--------------------------------------------------------------------------------

via: https://kerneltalks.com/troubleshooting/resolve-mount-nfs-stale-file-handle-error/

作者:[KernelTalks][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
How to setup and configure network bridge on Debian Linux
======

I am a new Debian Linux user. I want to set up a bridge for virtualised environments (KVM) running on Debian Linux. How do I set up network bridging in /etc/network/interfaces on a Debian Linux 9.x server?

If you want to assign IP addresses to your virtual machines and make them accessible from your LAN, you need to set up a network bridge. By default, a private network bridge is created when using KVM. You need to set up the interfaces manually, avoiding conflicts with the network manager.

### How to install the brctl

Type the following [apt/apt-get command][1]:
`$ sudo apt install bridge-utils`
### How to setup network bridge on Debian Linux

You need to edit the /etc/network/interfaces file. However, I recommend dropping a brand new config into the /etc/network/interfaces.d/ directory. The procedure to configure a network bridge on Debian Linux is as follows:

#### Step 1 – Find out your physical interface

Use the [ip command][2]:
`$ ip -f inet a s`
Sample outputs:
```
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 192.168.2.23/24 brd 192.168.2.255 scope global eno1
       valid_lft forever preferred_lft forever
```

eno1 is my physical interface.
#### Step 2 – Update the /etc/network/interfaces file

Make sure only lo (the loopback) is active in /etc/network/interfaces. Remove any config related to eno1. Here is my config file, printed using the [cat command][3]:
`$ cat /etc/network/interfaces`
```
# This file describes the network interfaces available on your system
iface lo inet loopback
```
#### Step 3 – Configuring bridging (br0) in /etc/network/interfaces.d/br0

Create a text file using a text editor such as the vi command:
`$ sudo vi /etc/network/interfaces.d/br0`
Append the following config:
```
## static ip config file for br0 ##
auto br0
iface br0 inet static
        bridge_fd 0     # no forwarding delay
```
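The diff above only shows part of the static config. For orientation, a complete static stanza typically looks like the following — the address, gateway, and DNS values here are placeholders for illustration, not the article's actual values, so substitute your own:

```
## hypothetical complete static config for br0 (example addresses) ##
auto br0
iface br0 inet static
        address 192.168.2.23
        netmask 255.255.255.0
        gateway 192.168.2.254
        bridge_ports eno1   # attach the physical NIC to the bridge
        bridge_stp off      # disable Spanning Tree Protocol
        bridge_fd 0         # no forwarding delay
```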
If you want the bridge to get an IP address using DHCP:
```
## DHCP ip config file for br0 ##
auto br0
        bridge_ports eno1
```

[Save and close the file in vi/vim][4].
#### Step 4 – [Restart the networking service in Linux][5]

Before you restart the networking service, make sure the firewall is disabled. The firewall may still refer to the older interface such as eno1. Once the service is restarted, you must update the firewall rules for the br0 interface. Type the following to restart the networking service:
`$ sudo systemctl restart network-manager`
Verify that the service has been restarted:
`$ systemctl status network-manager`
Look for the new br0 interface and routing table with the help of the [ip command][2]:
`$ ip a s`
`$ ip r`
`$ ping -c 2 cyberciti.biz`
Sample outputs:
![](https://www.cyberciti.biz/media/new/faq/2018/02/How-to-setup-and-configure-network-bridge-on-Debian-Linux.jpg)
You can also use the brctl command to view info about your bridges:
`$ brctl show`
Show current bridges:
`$ bridge link`
![](https://www.cyberciti.biz/media/new/faq/2018/02/Show-current-bridges-and-what-interfaces-they-are-connected-to-on-Linux.jpg)
### About the author

The author is the creator of nixCraft and a seasoned sysadmin, DevOps engineer, and a trainer for the Linux operating system/Unix shell scripting. Get the **latest tutorials on SysAdmin, Linux/Unix and open source topics via the [RSS/XML feed][6]** or [weekly email newsletter][7].

--------------------------------------------------------------------------------

via: https://www.cyberciti.biz/faq/how-to-configuring-bridging-in-debian-linux/

作者:[Vivek Gite][a]
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
How to measure particulate matter with a Raspberry Pi
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bubblehands_fromRHT_520_0612LL.png?itok=_iQ2dO3S)
We regularly measure particulate matter in the air at our school in Southeast Asia. The values here are very high, particularly between February and May, when dry, hot weather and drought all take their toll on air quality. In this article, I will show you how to measure particulate matter with a Raspberry Pi.

### What is particulate matter?

Particulate matter is fine dust, or very small particles, in the air. The distinction between PM10 and PM2.5 is that PM10 refers to particles smaller than 10 microns, while PM2.5 refers to particles smaller than 2.5 microns. The smaller the particles, the greater the danger to health, because particles below 2.5 microns can be inhaled into the alveoli of the lungs and affect the respiratory system.

The World Health Organization's recommended [particulate matter limits][1] are:

  * Annual mean PM10 of no more than 20 µg/m³
  * Annual mean PM2.5 of no more than 10 µg/m³
  * Daily mean PM10 of no more than 50 µg/m³, with no permitted exceedances
  * Daily mean PM2.5 of no more than 25 µg/m³, with no permitted exceedances

These values are actually below the standards of most countries; the EU, for example, allows an annual mean of up to 40 µg/m³ for PM10.

### What is the Air Quality Index (AQI)?

The Air Quality Index rates air quality on the basis of the measured particulate values; however, there is no uniform standard, because the index is calculated differently from country to country. The Wikipedia article on the [Air Quality Index][2] offers an overview. At our school, we use the classification established by the [United States Environmental Protection Agency][3] (EPA).

![Air quality index][5]

Air quality index
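The EPA classification maps a PM2.5 concentration to an index value by linear interpolation within breakpoint bands. As a minimal sketch of that calculation (the breakpoints are the EPA's published 24-hour PM2.5 table; the `pm25_aqi` function name is my own, not part of this project's code):

```python
# EPA AQI for a 24-hour PM2.5 reading: linear interpolation between
# the published breakpoint bands.
PM25_BREAKPOINTS = [
    # (conc_low, conc_high, aqi_low, aqi_high)
    (0.0, 12.0, 0, 50),        # Good
    (12.1, 35.4, 51, 100),     # Moderate
    (35.5, 55.4, 101, 150),    # Unhealthy for sensitive groups
    (55.5, 150.4, 151, 200),   # Unhealthy
    (150.5, 250.4, 201, 300),  # Very unhealthy
    (250.5, 350.4, 301, 400),  # Hazardous
    (350.5, 500.4, 401, 500),  # Hazardous
]

def pm25_aqi(concentration: float) -> int:
    """Return the AQI for a PM2.5 concentration in µg/m³."""
    c = round(concentration, 1)  # EPA truncates/rounds to 0.1 µg/m³
    for c_lo, c_hi, i_lo, i_hi in PM25_BREAKPOINTS:
        if c_lo <= c <= c_hi:
            return round((i_hi - i_lo) / (c_hi - c_lo) * (c - c_lo) + i_lo)
    raise ValueError(f"concentration out of range: {concentration}")

print(pm25_aqi(55.3))  # → 150, i.e. "Unhealthy for sensitive groups"
```

The project's JavaScript performs an equivalent calculation in the browser; this Python version is just to make the mapping concrete.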
### What do we need to measure particulate matter?

Only two pieces of hardware are required:

  * A Raspberry Pi (any model, ideally with WiFi)
  * An SDS011 particulate matter sensor

![Particulate sensor][7]

Particulate sensor

If you use a Raspberry Pi Zero W, which has only Micro USB ports, you also need an adapter cable to a standard USB port; together they cost about $20. The sensor comes with a USB adapter for its serial interface.

### Installation

For the Raspberry Pi, download the appropriate Raspbian Lite image and [write it to the Micro SD card][8] (many online tutorials cover setting up a WLAN connection, so I won't go into that here).

If you want to use SSH, create an empty file named `ssh` in the boot partition. Get the Raspberry Pi's IP from your router or DHCP server, then log in via SSH (the default password is raspberry):
```
$ ssh pi@192.168.1.5
```
First, install the required packages on the Raspberry Pi:
```
$ sudo apt install git-core python-serial python-enum lighttpd
```

Before we start, we can check with `dmesg` which serial interface the USB adapter is connected to:
```
$ dmesg
[ 5.559802] usbcore: registered new interface driver usbserial
[ 5.559930] usbcore: registered new interface driver usbserial_generic
[ 5.560049] usbserial: USB Serial support registered for generic
[ 5.569938] usbcore: registered new interface driver ch341
[ 5.570079] usbserial: USB Serial support registered for ch341-uart
[ 5.570217] ch341 1-1.4:1.0: ch341-uart converter detected
[ 5.575686] usb 1-1.4: ch341-uart converter now attached to ttyUSB0
```

In the last line you can see the interface `ttyUSB0`. We now need a small Python script that reads the sensor data and stores it as JSON, and then an HTML page that displays the data.
### Reading data on the Raspberry Pi

The script first creates a sensor instance, then reads data from the sensor every 5 minutes, for 30 seconds at a time; these values can be adjusted later. Between measurements, we put the sensor into sleep mode to extend its lifespan (the manufacturer rates the component at about 8,000 hours).

We can download the Python script with this command:
```
$ wget -O /home/pi/aqi.py https://raw.githubusercontent.com/zefanja/aqi/master/python/aqi.py
```

Two more commands are needed for the script to run without errors:
```
$ sudo chown pi:pi /var/www/html/
$ echo "[]" > /var/www/html/aqi.json
```

Now you can start the script:
```
$ chmod +x aqi.py
$ ./aqi.py
PM2.5:55.3, PM10:47.5
PM2.5:55.5, PM10:47.7
PM2.5:55.7, PM10:47.8
PM2.5:53.9, PM10:47.6
PM2.5:53.6, PM10:47.4
PM2.5:54.2, PM10:47.3
…
```
### Autostarting the script

So that we don't have to start the script manually each time, we can use a cronjob. Open the crontab file:
```
$ crontab -e
```

And add this line at the end:
```
@reboot cd /home/pi/ && ./aqi.py
```

Now our script starts automatically every time the Raspberry Pi reboots.
### An HTML page to display particulate values and the AQI

We already installed the lightweight web server `lighttpd` earlier, so we place our HTML, JavaScript, and CSS files in the `/var/www/html` directory so that the data can be accessed from a computer or smartphone. With the following three commands, we download the corresponding files:

```
$ wget -O /var/www/html/index.html https://raw.githubusercontent.com/zefanja/aqi/master/html/index.html
$ wget -O /var/www/html/aqi.js https://raw.githubusercontent.com/zefanja/aqi/master/html/aqi.js
$ wget -O /var/www/html/style.css https://raw.githubusercontent.com/zefanja/aqi/master/html/style.css
```

The JavaScript file opens the JSON file, extracts the data, and calculates the AQI; the page's background color then changes according to the EPA's classification.

You simply visit the Raspberry Pi's address in your browser to see the current particulate values and other data: [http://192.168.1.5:][9]

The page is simple and extensible — you could, for example, add a table with the historical data of the past hours.

The complete [source code is on GitHub][10].

### Conclusion

The Raspberry Pi is one option when funds are relatively tight. Beyond that, there are many applications for measuring particulate matter, including fixed outdoor installations and mobile measuring devices. At our school, we use both: a fixed unit measures particulate levels outdoors around the clock, and a mobile device checks the effectiveness of our air conditioners' filters indoors.

[Luftdaten.info][12] offers a guide to building a similar sensor. The software is outstanding, and since it doesn't use a Raspberry Pi, the hardware is even smaller.

Building a particulate sensor also makes an excellent extracurricular project for students.

What do you use your [Raspberry Pi][13] for?
--------------------------------------------------------------------------------

via: https://opensource.com/article/18/3/how-measure-particulate-matter-raspberry-pi

作者:[Stephan Tetzel][a]
译者:[HankChow](https://github.com/HankChow)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/stephan
[1]:https://en.wikipedia.org/wiki/Particulates
[2]:https://en.wikipedia.org/wiki/Air_quality_index
[3]:https://en.wikipedia.org/wiki/United_States_Environmental_Protection_Agency
[5]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/air_quality_index.png?itok=FwmGf1ZS (Air quality index)
[7]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/particulate_sensor.jpg?itok=ddH3bBwO (Particulate sensor)
[8]:https://www.raspberrypi.org/documentation/installation/installing-images/README.md
[9]:http://192.168.1.5/
[10]:https://github.com/zefanja/aqi
[11]:https://opensource.com/article/18/3/raspberry-pi-week-giveaway
[12]:http://luftdaten.info/
[13]:https://openschoolsolutions.org/shutdown-servers-case-power-failure%e2%80%8a-%e2%80%8aups-nut-co/
How to use Ansible to patch systems and install applications
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_osyearbook2016_sysadmin_cc.png?itok=Y1AHCKI4)
Have you ever wondered how to patch your systems, reboot, and continue working?

If so, you should get to know [Ansible][1]. It is a configuration management tool that can greatly simplify your workflow for those complex sysadmin tasks that can take hours to complete, or that have high security requirements.

In my experience as a sysadmin, patching is one of the most demanding jobs. Every time a Common Vulnerabilities and Exposures (CVE) notification or an Information Assurance Vulnerability Alert (IAVA) arrives, you have to watch the security holes closely, or the security department will hold you accountable.

Ansible can shorten the time it takes to patch by running [packaging modules][2]. Take the [yum module][3] for updating the system as an example: with it you can install, update, remove, or install from another location (for example, `rpmbuild` from continuous integration/continuous development). Here is the task for updating the system:

```
- name: update the system
  yum:
    name: "*"
    state: latest
```

In the first line, we give the task a name so we know what Ansible is doing. In the second line, the `yum` module updates the CentOS virtual machine. The third line, `name: "*"`, tells yum to update everything. The last line, `state: latest`, updates to the latest RPM.
After the system update, the system needs to restart and reconnect:

```
- name: restart system to reboot to newest kernel
  shell: "sleep 5 && reboot"
  async: 1
  poll: 0

- name: wait for 10 seconds
  pause:
    seconds: 10

- name: wait for the system to reboot
  wait_for_connection:
    connect_timeout: 20
    sleep: 5
    delay: 5
    timeout: 60

- name: install epel-release
  yum:
    name: epel-release
    state: latest
```

The command in the `shell` field makes the system sleep for 5 seconds and then reboot. We use `sleep` to keep the connection from breaking, `async` to set a maximum wait time so the task does not time out, and `poll: 0` to fire the command without waiting for its result. We pause for 10 seconds, then use `wait_for_connection` to reconnect as soon as the virtual machine is reachable again. Then the `install epel-release` task checks that RPM installation works. You can run this playbook several times to verify its idempotency; the only thing reported as a change will be the reboot, because we use the `shell` module. If you don't want it reported as a change, you can add `changed_when: False` to the `shell` task.
Now we know how to update the system, restart the virtual machine, reconnect, and install an RPM package. Next we will install NGINX using the role in [Ansible Lightbulb][4]:

```
- name: Ensure nginx packages are present
  yum:
    name: nginx, python-pip, python-devel, devel
    state: present
  notify: restart-nginx-service

- name: Ensure uwsgi package is present
  pip:
    name: uwsgi
    state: present
  notify: restart-nginx-service

- name: Ensure latest default.conf is present
  template:
    src: templates/nginx.conf.j2
    dest: /etc/nginx/nginx.conf
    backup: yes
  notify: restart-nginx-service

- name: Ensure latest index.html is present
  template:
    src: templates/index.html.j2
    dest: /usr/share/nginx/html/index.html

- name: Ensure nginx service is started and enabled
  service:
    name: nginx
    state: started
    enabled: yes

- name: Ensure proper response from localhost can be received
  uri:
    url: "http://localhost:80/"
    return_content: yes
  register: response
  until: 'nginx_test_message in response.content'
  retries: 10
  delay: 1
```
And the handler that restarts the nginx service:
```
# handlers file for the nginx install
- name: restart-nginx-service
  service:
    name: nginx
    state: restarted
```

In this role, we install the RPMs `nginx`, `python-pip`, `python-devel`, and `devel`, and install `uwsgi` with PIP. Next, we use the `template` module to copy over `nginx.conf` and `index.html` for the page to display, and make sure the service starts at boot. Then the `uri` module checks the connection to the page.
Here is a playbook showing an example of updating the system, rebooting, and installing RPM packages, then continuing on to install nginx. Of course, you can substitute any roles and applications you want here:
```
- hosts: all
  roles:
    - centos-update
    - nginx-simple
```

Watch the demo video for a closer look at the process.

[demo](https://asciinema.org/a/166437/embed?)
This was just an example of how to update your system, reboot, and continue afterwards. For simplicity, I added the packages without [variables][5]; once you start working with a large number of hosts, you will want to change some of the settings:

This is because in a production environment, if you update every host's system one after another, you have to spend a considerable amount of time waiting for each host to reboot before you can move on.
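One common way to handle that rolling update is the play-level `serial` keyword, which limits how many hosts a play runs on at a time. A minimal sketch, reusing the roles above:

```
# `serial: 1` makes Ansible run the whole play to completion on one host
# before moving to the next, so only one machine is rebooting at any time.
- hosts: all
  serial: 1
  roles:
    - centos-update
    - nginx-simple
```

Raising `serial` to a number or percentage (for example `serial: "25%"`) trades a faster rollout for more hosts being down at once.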
For more on using Ansible for automation, check out the [other articles][6].

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/3/ansible-patch-systems

作者:[Jonathan Lozada De La Matta][a]
译者:[HankChow](https://github.com/HankChow)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/jlozadad
[1]:https://www.ansible.com/overview/how-ansible-works
[2]:https://docs.ansible.com/ansible/latest/list_of_packaging_modules.html
[3]:https://docs.ansible.com/ansible/latest/yum_module.html
[4]:https://github.com/ansible/lightbulb/tree/master/examples/nginx-role
[5]:https://docs.ansible.com/ansible/latest/playbooks_variables.html
[6]:https://opensource.com/tags/ansible