Merge pull request #2 from LCTT/master

This commit is contained in:
Fuliang.Li 2016-12-29 16:57:46 +08:00 committed by GitHub
commit 43f0d25899
17 changed files with 2059 additions and 591 deletions

View File

@ -1,4 +1,4 @@
PyCharm - The Best Linux Python IDE
=========
![](https://fthmb.tqn.com/AVEbzYN3BPH_8cGYkPflIx58-XE=/768x0/filters:no_upscale()/about/pycharm2-57e2d5ee5f9b586c352c7493.png)
@ -12,9 +12,7 @@ PyCharm is an editor and debugger developed by [Jetbrains][3], [Jetbrains]
### How to Install PyCharm
I have [written a guide][4] on how to get PyCharm: download it, extract the files, and run it.
### The Welcome Screen
@ -24,7 +22,7 @@ PyCharm is an editor and debugger developed by [Jetbrains][3], [Jetbrains]
* Create a new project
* Open a project
* Check out from version control
There is also a configuration settings option that lets you set the default Python version and other settings.
@ -46,7 +44,7 @@ PyCharm is an editor and debugger developed by [Jetbrains][3], [Jetbrains]
* Twitter Bootstrap
* Web Starter Kit
This isn't a programming tutorial, so I won't explain what each of these project types is. If you want to create a simple desktop application that will run on Windows, Linux, and Mac, you can choose a Pure Python project and use the Qt library to develop graphical applications, which will look native on whichever operating system they run on, as though they were developed for that system.
After choosing a project type, enter a project name and choose a Python version to develop with.
@ -58,15 +56,15 @@ PyCharm is an editor and debugger developed by [Jetbrains][3], [Jetbrains]
PyCharm provides options for checking out project source code from various online resources, including [GitHub][5], [CVS][6], Git, [Mercurial][7], and [Subversion][8].
### The PyCharm IDE
The PyCharm IDE opens with a menu across the top, and below that menu you will see tabs for each open project.
On the right side of the screen are debugging options, including stepping through your code.
The left panel shows a list of project files and external libraries.
To create a new file in the project, right-click on the project name and choose 'New'. You can then add one of the following file types to the project:
* File
* Directory
@ -101,13 +99,12 @@ The PyCharm IDE opens with a menu across the top, and below that menu you
As you step through a line of code, you can watch the variables that appear on that line, so you can see when their values change.
Another good option is to run the code with a coverage checker. The programming world has changed a lot over the years; it is now common for developers to practice test-driven development so that every change they make can be checked, ensuring it doesn't break another part of the system.
The coverage checker helps you run the program and perform some tests; when the run is finished, it tells you, as a percentage, how much of the code was covered by the test run.
There is also a tool that shows the name of a 'class function' or 'class', how many times an item was called, and how long a particular piece of code took to run.
### Code Refactoring
A really powerful feature of PyCharm is its code refactoring options.
@ -122,7 +119,7 @@ A really powerful feature of PyCharm is its code refactoring options.
You don't have to follow all of PyCharm's rules. Most of them are simply good coding guidelines and have nothing to do with whether your code runs correctly.
The code menu has other refactoring options as well. For example, you can perform code cleanup and inspect a file or project for problems.
### Summary
@ -130,11 +127,11 @@ PyCharm is an excellent editor for developing Python code on Linux, and
--------------------------------------------------------------------------------
via: https://www.lifewire.com/pycharm-the-best-linux-python-ide-4091045
Author: [Gary Newell][a]
Translator: [ucasFL](https://github.com/ucasFL)
Proofreader: [wxy](https://github.com/wxy)
This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux.cn](https://linux.cn/).

View File

@ -0,0 +1,382 @@
translating---geekpi
Securing Your Server
============================================================
### Update Your System Frequently
Keeping your software up to date is the single biggest security precaution you can take for any operating system. Software updates range from critical vulnerability patches to minor bug fixes, and many software vulnerabilities are actually patched by the time they become public.
### Automatic Security Updates
There are arguments for and against automatic updates on servers. [Fedora's Wiki][15] has a good breakdown of the pros and cons, but the risk of automatic updates will be minimal if you limit them to security updates.
The practicality of automatic updates is something you must judge for yourself because it comes down to what _you_ do with your Linode. Bear in mind that automatic updates apply only to packages sourced from repositories, not self-compiled applications. You may find it worthwhile to have a test environment that replicates your production server. Updates can be applied there and reviewed for issues before being applied to the live environment.
* CentOS uses _[yum-cron][2]_ for automatic updates.
* Debian and Ubuntu use _[unattended upgrades][3]_.
* Fedora uses _[dnf-automatic][4]_.
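As a sketch of the Debian/Ubuntu route, limiting unattended upgrades to the security repository looks roughly like this (the origin line shown is the stock default shipped with the package, but verify it against your release):

```
sudo apt-get install unattended-upgrades
# In /etc/apt/apt.conf.d/50unattended-upgrades, keep only the security origin:
#   Unattended-Upgrade::Allowed-Origins {
#       "${distro_id}:${distro_codename}-security";
#   };
sudo dpkg-reconfigure -plow unattended-upgrades   # enable the periodic run
```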
### Add a Limited User Account
Up to this point, you have accessed your Linode as the `root` user, which has unlimited privileges and can execute _any_ command, even one that could accidentally disrupt your server. We recommend creating a limited user account and using that at all times. Administrative tasks will be done using `sudo` to temporarily elevate your limited user's privileges so you can administer your server.
> Not all Linux distributions include `sudo` on the system by default, but all the images provided by Linode have sudo in their package repositories. If you get the output `sudo: command not found`, install sudo before continuing.
To add a new user, first [log in to your Linode][16] via SSH.
### CentOS / Fedora
1. Create the user, replacing `example_user` with your desired username, and assign a password:
```
useradd example_user && passwd example_user
```
2. Add the user to the `wheel` group for sudo privileges:
```
usermod -aG wheel example_user
```
### Ubuntu
1. Create the user, replacing `example_user` with your desired username. You'll then be asked to assign the user a password:
```
adduser example_user
```
2. Add the user to the `sudo` group so you'll have administrative privileges:
```
adduser example_user sudo
```
### Debian
1. Debian does not include `sudo` among its default packages. Use `apt-get` to install it:
```
apt-get install sudo
```
2. Create the user, replacing `example_user` with your desired username. You'll then be asked to assign the user a password:
```
adduser example_user
```
3. Add the user to the `sudo` group so you'll have administrative privileges:
```
adduser example_user sudo
```
After creating your limited user, disconnect from your Linode:
```
exit
```
Log back in as your new user. Replace `example_user` with your username, and the example IP address with your Linode's IP address:
```
ssh example_user@203.0.113.10
```
Now you can administer your Linode from your new user account instead of `root`. Nearly all superuser commands can be executed with `sudo` (example: `sudo iptables -L -nv`) and those commands will be logged to `/var/log/auth.log`.
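For example, to review recent sudo activity (Debian and Ubuntu log to `/var/log/auth.log` as noted above; CentOS and Fedora write to `/var/log/secure` instead):

```
# Show the last 20 logged sudo invocations on a Debian/Ubuntu system:
sudo grep 'sudo' /var/log/auth.log | tail -n 20
```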
### Harden SSH Access
By default, password authentication is used to connect to your Linode via SSH. A cryptographic key-pair is more secure because a private key takes the place of a password, which is generally much more difficult to brute-force. In this section we'll create a key-pair and configure the Linode to not accept passwords for SSH logins.
### Create an Authentication Key-pair
1. This is done on your local computer, **not** your Linode, and will create a 4096-bit RSA key-pair. During creation, you will be given the option to encrypt the private key with a passphrase. This means that it cannot be used without entering the passphrase, unless you save it to your local desktop's keychain manager. We suggest you use the key-pair with a passphrase, but you can leave this field blank if you don't want to use one.
**Linux / OS X**
> If you've already created an RSA key-pair, this command will overwrite it, potentially locking you out of other systems. If you've already created a key-pair, skip this step. To check for existing keys, run `ls ~/.ssh/id_rsa*`.
```
ssh-keygen -b 4096
```
Press **Enter** to use the default names `id_rsa` and `id_rsa.pub` in `/home/your_username/.ssh` before entering your passphrase.
**Windows**
This can be done using PuTTY as outlined in our guide: [Use Public Key Authentication with SSH][6].
2. Upload the public key to your Linode. Replace `example_user` with the name of the user you plan to administer the server as, and `203.0.113.10` with your Linode's IP address.
**Linux**
From your local computer:
```
ssh-copy-id example_user@203.0.113.10
```
**OS X**
On your Linode (while signed in as your limited user):
```
mkdir -p ~/.ssh && sudo chmod -R 700 ~/.ssh/
```
From your local computer:
```
scp ~/.ssh/id_rsa.pub example_user@203.0.113.10:~/.ssh/authorized_keys
```
> `ssh-copy-id` is available in [Homebrew][5] if you prefer it over SCP. Install with `brew install ssh-copy-id`.
**Windows**
* **Option 1**: This can be done using [WinSCP][1]. In the login window, enter your Linode's public IP address as the hostname, and your non-root username and password. Click _Login_ to connect.
Once WinSCP has connected, you'll see two main sections. The section on the left shows files on your local computer and the section on the right shows files on your Linode. Using the file explorer on the left, navigate to the file where you've saved your public key, select the public key file, and click _Upload_ in the toolbar above.
You'll be prompted to enter a path where you'd like to place the file on your Linode. Upload the file to `/home/example_user/.ssh/authorized_keys`, replacing `example_user` with your username.
* **Option 2:** Copy the public key directly from the PuTTY key generator into the terminal emulator connected to your Linode (as a non-root user):
```
mkdir ~/.ssh; nano ~/.ssh/authorized_keys
```
The above command will open a blank file called `authorized_keys` in a text editor. Copy the public key into the text file, making sure it is copied as a single line exactly as it was generated by PuTTY. Press **CTRL+X**, then **Y**, then **Enter** to save the file.
Finally, you'll want to set permissions for the public key directory and the key file itself:
```
sudo chmod 700 -R ~/.ssh && chmod 600 ~/.ssh/authorized_keys
```
These commands provide an extra layer of security by preventing other users from accessing the public key directory as well as the file itself. For more information on how this works, see our guide on [how to modify file permissions][7].
3. Now exit and log back into your Linode. If you specified a passphrase for your private key, you'll need to enter it.
### SSH Daemon Options
1. **Disallow root logins over SSH.** This requires all SSH connections be by non-root users. Once a limited user account is connected, administrative privileges are accessible either by using `sudo` or changing to a root shell using `su -`.
```
# Authentication:
...
PermitRootLogin no
```
2. **Disable SSH password authentication.** This requires all users connecting via SSH to use key authentication. Depending on the Linux distribution, the line `PasswordAuthentication` may need to be added, or uncommented by removing the leading `#`.
```
# Change to no to disable tunnelled clear text passwords
PasswordAuthentication no
```
> You may want to leave password authentication enabled if you connect to your Linode from many different computers. This will allow you to authenticate with a password instead of generating and uploading a key-pair for every device.
3. **Listen on only one internet protocol.** The SSH daemon listens for incoming connections over both IPv4 and IPv6 by default. Unless you need to SSH into your Linode using both protocols, disable whichever you do not need. _This does not disable the protocol system-wide; it applies only to the SSH daemon._
Use the option:
* `AddressFamily inet` to listen only on IPv4.
* `AddressFamily inet6` to listen only on IPv6.
The `AddressFamily` option is usually not in the `sshd_config` file by default. Add it to the end of the file:
```
echo 'AddressFamily inet' | sudo tee -a /etc/ssh/sshd_config
```
4. Restart the SSH service to load the new configuration.
If you're using a Linux distribution which uses systemd (CentOS 7, Debian 8, Fedora, Ubuntu 15.10+):
```
sudo systemctl restart sshd
```
If your init system is SystemV or Upstart (CentOS 6, Debian 7, Ubuntu 14.04):
```
sudo service ssh restart
```
### Use Fail2Ban for SSH Login Protection
[_Fail2Ban_][17] is an application that bans IP addresses from logging into your server after too many failed login attempts. Since legitimate logins usually take no more than three tries to succeed (and with SSH keys, no more than one), a server being spammed with unsuccessful logins indicates attempted malicious access.
Fail2Ban can monitor a variety of protocols including SSH, HTTP, and SMTP. By default, Fail2Ban monitors SSH only, and is a helpful security deterrent for any server since the SSH daemon is usually configured to run constantly and listen for connections from any remote IP address.
For complete instructions on installing and configuring Fail2Ban, see our guide: [Securing Your Server with Fail2ban][18].
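As a minimal sketch, a local override placed in `/etc/fail2ban/jail.local` (a file Fail2Ban reads on top of `jail.conf`) could ban an address for an hour after five failed SSH logins. The option names below are standard Fail2Ban settings, but the jail name and defaults vary between versions, so verify against your installation:

```
[sshd]
enabled  = true
maxretry = 5
bantime  = 3600
```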
### Remove Unused Network-Facing Services
Most Linux distributions ship with network services running that listen for incoming connections from the internet, the loopback interface, or both. Network-facing services that are not needed should be removed from the system to reduce the attack surface of both running processes and installed packages.
### Determine Running Services
To see your Linodes running network services:
```
sudo netstat -tulpn
```
> If netstat isn't included in your Linux distribution by default, install the package `net-tools` or use the `ss -tulpn` command instead.
The following is an example of netstat's output. Note that because distributions run different services by default, your output will differ:
```
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 7315/rpcbind
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 3277/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 3179/exim4
tcp 0 0 0.0.0.0:42526 0.0.0.0:* LISTEN 2845/rpc.statd
tcp6 0 0 :::48745 :::* LISTEN 2845/rpc.statd
tcp6 0 0 :::111 :::* LISTEN 7315/rpcbind
tcp6 0 0 :::22 :::* LISTEN 3277/sshd
tcp6 0 0 ::1:25 :::* LISTEN 3179/exim4
udp 0 0 127.0.0.1:901 0.0.0.0:* 2845/rpc.statd
udp 0 0 0.0.0.0:47663 0.0.0.0:* 2845/rpc.statd
udp 0 0 0.0.0.0:111 0.0.0.0:* 7315/rpcbind
udp 0 0 192.0.2.1:123 0.0.0.0:* 3327/ntpd
udp 0 0 127.0.0.1:123 0.0.0.0:* 3327/ntpd
udp 0 0 0.0.0.0:123 0.0.0.0:* 3327/ntpd
udp 0 0 0.0.0.0:705 0.0.0.0:* 7315/rpcbind
udp6 0 0 :::111 :::* 7315/rpcbind
udp6 0 0 fe80::f03c:91ff:fec:123 :::* 3327/ntpd
udp6 0 0 2001:DB8::123 :::* 3327/ntpd
udp6 0 0 ::1:123 :::* 3327/ntpd
udp6 0 0 :::123 :::* 3327/ntpd
udp6 0 0 :::705 :::* 7315/rpcbind
udp6 0 0 :::60671 :::* 2845/rpc.statd
```
Netstat tells us that services are running for [Remote Procedure Call][19] (rpc.statd and rpcbind), SSH (sshd), [NTPdate][20] (ntpd) and [Exim][21] (exim4).
#### TCP
See the **Local Address** column of the netstat readout. The process `rpcbind` is listening on `0.0.0.0:111` and `:::111` for a foreign address of `0.0.0.0:*` or `:::*`. This means that it's accepting incoming TCP connections from other RPC clients on any external address, both IPv4 and IPv6, from any port and over any network interface. We see similar behavior for SSH, while Exim is listening locally for traffic from the loopback interface, as shown by the `127.0.0.1` address.
#### UDP
UDP sockets are _[stateless][14]_, meaning they are either open or closed and every process's connection is independent of those which occurred before and after. This is in contrast to TCP connection states such as _LISTEN_, _ESTABLISHED_ and _CLOSE_WAIT_.
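The stateless behavior can be seen in a short Python sketch: a datagram is delivered with no handshake and no connection state on either side.

```
import socket

# Receiver: bind a UDP socket to an ephemeral local port.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))
port = recv.getsockname()[1]

# Sender: no connect() and no handshake -- each datagram stands alone.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"ping", ("127.0.0.1", port))

data, addr = recv.recvfrom(1024)
print(data)  # b'ping'
send.close()
recv.close()
```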
Our netstat output shows that NTPdate is: 1) accepting incoming connections on the Linode's public IP address; 2) communicating over localhost; and 3) accepting connections from external sources. These are over port 123, on both IPv4 and IPv6. We also see more sockets open for RPC.
### Determine Which Services to Remove
If you were to do a basic TCP and UDP [nmap][22] scan of your Linode without a firewall enabled, SSH, RPC and NTPdate would be present in the result with ports open. By [configuring a firewall][23] you can filter those ports, with the exception of SSH because it must allow your incoming connections. Ideally, however, the unused services should be disabled.
* You will likely be administering your server primarily through an SSH connection, so that service needs to stay. As mentioned above, [RSA keys][8] and [Fail2Ban][9] can help protect SSH.
* NTP is necessary for your server's timekeeping, but there are alternatives to NTPdate. If you prefer a time synchronization method which does not hold open network ports, and you do not need nanosecond accuracy, then you may be interested in replacing NTPdate with [OpenNTPD][10].
* Exim and RPC, however, are unnecessary unless you have a specific use for them, and should be removed.
> This section focused on Debian 8. Different Linux distributions have different services enabled by default. If you are unsure of what a service does, do an internet search to understand what it is before attempting to remove or disable it.
### Uninstall the Listening Services
How to remove the offending packages will differ depending on your distributions package manager.
**Arch**
```
sudo pacman -Rs package_name
```
**CentOS**
```
sudo yum remove package_name
```
**Debian / Ubuntu**
```
sudo apt-get purge package_name
```
**Fedora**
```
sudo dnf remove package_name
```
Run `sudo netstat -tulpn` again. You should now only see listening services for SSH (sshd) and NTP (ntpd).
### Configure a Firewall
Using a _firewall_ to block unwanted inbound traffic to your Linode provides a highly effective security layer. By being very specific about the traffic you allow in, you can prevent intrusions and network mapping. A best practice is to allow only the traffic you need, and deny everything else. See our documentation on some of the most common firewall applications:
* [Iptables][11] is the controller for netfilter, the Linux kernel's packet filtering framework. Iptables is included in most Linux distributions by default.
* [FirewallD][12] is the iptables controller available for the CentOS / Fedora family of distributions.
* [UFW][13] provides an iptables frontend for Debian and Ubuntu.
### Next Steps
These are the most basic steps to harden any Linux server, but further security layers will depend on its intended use. Additional techniques can include application configurations, using [intrusion detection][24] or installing a form of [access control][25].
Now you can begin setting up your Linode for any purpose you choose. We have a library of documentation to assist you with a variety of topics ranging from [migration from shared hosting][26] to [enabling two-factor authentication][27] to [hosting a website][28].
--------------------------------------------------------------------------------
via: https://www.linode.com/docs/security/securing-your-server/
Author: [Phil Zona][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux.cn](https://linux.cn/)
[a]:https://www.linode.com/docs/security/securing-your-server/
[1]:http://winscp.net/
[2]:https://fedoraproject.org/wiki/AutoUpdates#Fedora_21_or_earlier_versions
[3]:https://help.ubuntu.com/lts/serverguide/automatic-updates.html
[4]:https://dnf.readthedocs.org/en/latest/automatic.html
[5]:http://brew.sh/
[6]:https://www.linode.com/docs/security/use-public-key-authentication-with-ssh#windows-operating-system
[7]:https://www.linode.com/docs/tools-reference/modify-file-permissions-with-chmod
[8]:https://www.linode.com/docs/security/securing-your-server/#create-an-authentication-key-pair
[9]:https://www.linode.com/docs/security/securing-your-server/#use-fail2ban-for-ssh-login-protection
[10]:https://en.wikipedia.org/wiki/OpenNTPD
[11]:https://www.linode.com/docs/security/firewalls/control-network-traffic-with-iptables
[12]:https://www.linode.com/docs/security/firewalls/introduction-to-firewalld-on-centos
[13]:https://www.linode.com/docs/security/firewalls/configure-firewall-with-ufw
[14]:https://en.wikipedia.org/wiki/Stateless_protocol
[15]:https://fedoraproject.org/wiki/AutoUpdates#Why_use_Automatic_updates.3F
[16]:https://www.linode.com/docs/getting-started#logging-in-for-the-first-time
[17]:http://www.fail2ban.org/wiki/index.php/Main_Page
[18]:https://www.linode.com/docs/security/using-fail2ban-for-security
[19]:https://en.wikipedia.org/wiki/Open_Network_Computing_Remote_Procedure_Call
[20]:http://support.ntp.org/bin/view/Main/SoftwareDownloads
[21]:http://www.exim.org/
[22]:https://nmap.org/
[23]:https://www.linode.com/docs/security/securing-your-server/#configure-a-firewall
[24]:https://linode.com/docs/security/ossec-ids-debian-7
[25]:https://en.wikipedia.org/wiki/Access_control#Access_Control
[26]:https://www.linode.com/docs/migrate-to-linode/migrate-from-shared-hosting
[27]:https://www.linode.com/docs/security/linode-manager-security-controls
[28]:https://www.linode.com/docs/websites/hosting-a-website

View File

@ -0,0 +1,409 @@
Introduction to FirewallD on CentOS
============================================================
[FirewallD][4] is a frontend controller for iptables used to implement persistent network traffic rules. It provides command line and graphical interfaces and is available in the repositories of most Linux distributions. Working with FirewallD has two main differences compared to directly controlling iptables:
1. FirewallD uses _zones_ and _services_ instead of chains and rules.
2. It manages rulesets dynamically, allowing updates without breaking existing sessions and connections.
> FirewallD is a wrapper for iptables to allow easier management of iptables rules; it is **not** an iptables replacement. While iptables commands are still available to FirewallD, it's recommended to use only FirewallD commands with FirewallD.
This guide will introduce you to FirewallD, its notions of zones and services, and show you some basic configuration steps.
### Installing and Managing FirewallD
FirewallD is included by default with CentOS 7 and Fedora 20+ but it's inactive. Controlling it is the same as with other systemd units.
1. To start the service and enable FirewallD on boot:

```
sudo systemctl start firewalld
sudo systemctl enable firewalld
```
To stop and disable it:

```
sudo systemctl stop firewalld
sudo systemctl disable firewalld
```
2. Check the firewall status. The output should say either `running` or `not running`.

```
sudo firewall-cmd --state
```
3. To view the status of the FirewallD daemon:
```
sudo systemctl status firewalld
```
Example output:

```
firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled)
Active: active (running) since Wed 2015-09-02 18:03:22 UTC; 1min 12s ago
Main PID: 11954 (firewalld)
CGroup: /system.slice/firewalld.service
└─11954 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid
```
4. To reload a FirewallD configuration:

```
sudo firewall-cmd --reload
```
### Configuring FirewallD
Firewalld is configured with XML files. Except for very specific configurations, you won't have to deal with them and **firewall-cmd** should be used instead.
Configuration files are located in two directories:
* `/usr/lib/firewalld` holds default configurations like default zones and common services. Avoid updating them because those files will be overwritten by each firewalld package update.
* `/etc/firewalld` holds system configuration files. These files will overwrite a default configuration.
### Configuration Sets
Firewalld uses two _configuration sets_: Runtime and Permanent. Runtime configuration changes are not retained on reboot or upon restarting FirewallD, whereas permanent changes are not applied to a running system.
By default, `firewall-cmd` commands apply to runtime configuration but using the `--permanent` flag will establish a persistent configuration. To add and activate a permanent rule, you can use one of two methods.
1. Add the rule to both the permanent and runtime sets.

```
sudo firewall-cmd --zone=public --add-service=http --permanent
sudo firewall-cmd --zone=public --add-service=http
```
2. Add the rule to the permanent set and reload FirewallD.

```
sudo firewall-cmd --zone=public --add-service=http --permanent
sudo firewall-cmd --reload
```
> The reload command drops all runtime configurations and applies a permanent configuration. Because firewalld manages the ruleset dynamically, it won't break an existing connection and session.
### Firewall Zones
Zones are pre-constructed rulesets for various trust levels you would likely have for a given location or scenario (e.g. home, public, trusted, etc.). Different zones allow different network services and incoming traffic types while denying everything else. After enabling FirewallD for the first time, _Public_ will be the default zone.
Zones can also be applied to different network interfaces. For example, with separate interfaces for both an internal network and the Internet, you can allow DHCP on an internal zone but only HTTP and SSH on an external zone. Any interface not explicitly set to a specific zone will be attached to the default zone.
To view the default zone:

```
sudo firewall-cmd --get-default-zone
```
To change the default zone:
```
sudo firewall-cmd --set-default-zone=internal
```
To see the zones used by your network interface(s):
```
sudo firewall-cmd --get-active-zones
```
Example output:

```
public
interfaces: eth0
```
To get all configurations for a specific zone:
```
sudo firewall-cmd --zone=public --list-all
```
Example output:

```
public (default, active)
interfaces: ens160
sources:
services: dhcpv6-client http ssh
ports: 12345/tcp
masquerade: no
forward-ports:
icmp-blocks:
rich rules:
```
To get all configurations for all zones:

```
sudo firewall-cmd --list-all-zones
```
Example output:
```
block
interfaces:
sources:
services:
ports:
masquerade: no
forward-ports:
icmp-blocks:
rich rules:
...
work
interfaces:
sources:
services: dhcpv6-client ipp-client ssh
ports:
masquerade: no
forward-ports:
icmp-blocks:
rich rules:
```
### Working with Services
FirewallD can allow traffic based on predefined rules for specific network services. You can create your own custom service rules and add them to any zone. The configuration files for the default supported services are located at `/usr/lib/firewalld/services` and user-created service files would be in `/etc/firewalld/services`.
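For example, a custom service definition is a small XML file. A sketch for a hypothetical application listening on TCP port 12345, saved under a made-up name such as `/etc/firewalld/services/myapp.xml`, would look like:

```
<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>MyApp</short>
  <description>Hypothetical application listening on TCP port 12345.</description>
  <port protocol="tcp" port="12345"/>
</service>
```

After reloading FirewallD, the new service could then be added to a zone with `sudo firewall-cmd --zone=public --add-service=myapp`.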
To view the default available services:

```
sudo firewall-cmd --get-services
```
As an example, to enable or disable the HTTP service:

```
sudo firewall-cmd --zone=public --add-service=http --permanent
sudo firewall-cmd --zone=public --remove-service=http --permanent
```
### Allowing or Denying an Arbitrary Port/Protocol
As an example, to allow or deny TCP traffic on port 12345:
```
sudo firewall-cmd --zone=public --add-port=12345/tcp --permanent
sudo firewall-cmd --zone=public --remove-port=12345/tcp --permanent
```
### Port Forwarding
The example rule below forwards traffic from port 80 to port 12345 on **the same server**.
```
sudo firewall-cmd --zone="public" --add-forward-port=port=80:proto=tcp:toport=12345
```
To forward a port to **a different server**:
1. Activate masquerade in the desired zone.

```
sudo firewall-cmd --zone=public --add-masquerade
```

2. Add the forward rule. This example forwards traffic from local port 80 to port 8080 on _a remote server_ located at the IP address 198.51.100.9.

```
sudo firewall-cmd --zone="public" --add-forward-port=port=80:proto=tcp:toport=8080:toaddr=198.51.100.9
```
To remove the rules, substitute `--add` with `--remove`. For example:

```
sudo firewall-cmd --zone=public --remove-masquerade
```
### Constructing a Ruleset with FirewallD
As an example, here is how you would use FirewallD to assign basic rules to your Linode if you were running a web server.
1. Assign the _dmz_ zone as the default zone for eth0. Of the default zones offered, dmz (demilitarized zone) is the most desirable to start with for this application because it allows only SSH and ICMP.
```
sudo firewall-cmd --set-default-zone=dmz
sudo firewall-cmd --zone=dmz --add-interface=eth0
```
2. Add permanent service rules for HTTP and HTTPS to the dmz zone:

```
sudo firewall-cmd --zone=dmz --add-service=http --permanent
sudo firewall-cmd --zone=dmz --add-service=https --permanent
```

3. Reload FirewallD so the rules take effect immediately:

```
sudo firewall-cmd --reload
```

If you now run `firewall-cmd --zone=dmz --list-all`, this should be the output:

```
dmz (default)
interfaces: eth0
sources:
services: http https ssh
ports:
masquerade: no
forward-ports:
icmp-blocks:
rich rules:
```

This tells us that the **dmz** zone is our **default** which applies to the **eth0 interface**, all network **sources** and **ports**. Incoming HTTP (port 80), HTTPS (port 443) and SSH (port 22) traffic is allowed and since there are no restrictions on IP versioning, this will apply to both IPv4 and IPv6. **Masquerading** and **port forwarding** are not allowed. We have no **ICMP blocks**, so ICMP traffic is fully allowed, and no **rich rules**. All outgoing traffic is allowed.
### Advanced Configuration
Services and ports are fine for basic configuration but may be too limiting for advanced scenarios. Rich Rules and Direct Interface allow you to add fully custom firewall rules to any zone for any port, protocol, address and action.
### Rich Rules
Rich rule syntax is extensive but fully documented in the [firewalld.richlanguage(5)][5] man page (or see `man firewalld.richlanguage` in your terminal). Use `--add-rich-rule`, `--list-rich-rules` and `--remove-rich-rule` with the firewall-cmd command to manage them.
Here are some common examples:
Allow all IPv4 traffic from host 192.168.0.14.

```
sudo firewall-cmd --zone=public --add-rich-rule 'rule family="ipv4" source address=192.168.0.14 accept'
```
Deny IPv4 traffic over TCP from host 192.168.1.10 to port 22.

```
sudo firewall-cmd --zone=public --add-rich-rule 'rule family="ipv4" source address="192.168.1.10" port port=22 protocol=tcp reject'
```
Allow IPv4 traffic over TCP from host 10.1.0.3 to port 80, and forward it locally to port 6532.

```
sudo firewall-cmd --zone=public --add-rich-rule 'rule family=ipv4 source address=10.1.0.3 forward-port port=80 protocol=tcp to-port=6532'
```
Forward all IPv4 traffic on port 80 to port 8080 on host 172.31.4.2 (masquerade should be active on the zone).

```
sudo firewall-cmd --zone=public --add-rich-rule 'rule family=ipv4 forward-port port=80 protocol=tcp to-port=8080 to-addr=172.31.4.2'
```
To list your current Rich Rules:

```
sudo firewall-cmd --list-rich-rules
```
### iptables Direct Interface
For the most advanced usage, or for iptables experts, FirewallD provides a direct interface that allows you to pass raw iptables commands to it. Direct Interface rules are not persistent unless the `--permanent` flag is used.
To see all custom chains or rules added to FirewallD:
```
firewall-cmd --direct --get-all-chains
firewall-cmd --direct --get-all-rules
```
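As an illustrative sketch of passing a raw rule through the direct interface (the source range is a documentation address block and the rule itself is arbitrary):

```
sudo firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 0 -s 203.0.113.0/24 -j DROP
```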
Discussing iptables syntax details goes beyond the scope of this guide. If you want to learn more, you can review our [iptables guide][6].
### More Information
You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.
* [FirewallD Official Site][1]
* [RHEL 7 Security Guide: Introduction to FirewallD][2]
* [Fedora Wiki: FirewallD][3]
--------------------------------------------------------------------------------
via: https://www.linode.com/docs/security/firewalls/introduction-to-firewalld-on-centos
Author: [Linode][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux.cn](https://linux.cn/)
[a]:https://www.linode.com/docs/security/firewalls/introduction-to-firewalld-on-centos
[1]:http://www.firewalld.org/
[2]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Security_Guide/sec-Using_Firewalls.html#sec-Introduction_to_firewalld
[3]:https://fedoraproject.org/wiki/FirewallD
[4]:http://www.firewalld.org/
[5]:https://jpopelka.fedorapeople.org/firewalld/doc/firewalld.richlanguage.html
[6]:https://www.linode.com/docs/networking/firewalls/control-network-traffic-with-iptables

View File

@ -1,3 +1,5 @@
translating by ypingcn.
How to Use Old Xorg Apps in Unity 8 on Ubuntu 16.10
====

View File

@ -0,0 +1,142 @@
Time Synchronization with NTP
============================================================
NTP is a TCP/IP protocol for synchronizing time over a network. Basically a client requests the current time from a server, and uses it to set its own clock.
Behind this simple description, there is a lot of complexity - there are tiers of NTP servers, with the tier one NTP servers connected to atomic clocks, and tier two and three servers spreading the load of actually handling requests across the Internet. Also the client software is a lot more complex than you might think - it has to factor out communication delays, and adjust the time in a way that does not upset all the other processes that run on the server. But luckily all that complexity is hidden from you!
Ubuntu uses ntpdate and ntpd.
* [timedatectl][4]
* [timesyncd][5]
* [ntpdate][6]
* [timeservers][7]
* [ntpd][8]
* [Installation][9]
* [Configuration][10]
* [View status][11]
* [PPS Support][12]
* [References][13]
### timedatectl
In recent Ubuntu releases timedatectl replaces ntpdate. By default timedatectl syncs the time once on boot and later uses socket activation to recheck once network connections become active.
If ntpdate / ntp is installed, timedatectl steps back to let you keep your old setup. That ensures no two time-syncing services fight, and retains any old behaviour/config you had through an upgrade. But it also implies that on an upgrade from a former release ntp/ntpdate may still be installed and therefore render the new systemd-based services disabled.
### timesyncd
In recent Ubuntu releases timesyncd replaces the client portion of ntpd. By default timesyncd regularly checks and keeps the time in sync. It also stores time updates locally, so that the clock monotonically advances after reboots, where applicable.
The current status of time and time configuration via timedatectl and timesyncd can be checked with timedatectl status.
```
timedatectl status
Local time: Fri 2016-04-29 06:32:57 UTC
Universal time: Fri 2016-04-29 06:32:57 UTC
RTC time: Fri 2016-04-29 07:44:02
Time zone: Etc/UTC (UTC, +0000)
Network time on: yes
NTP synchronized: no
RTC in local TZ: no
```
If NTP is installed and takes over from timedatectl, the "NTP synchronized" line is set to yes.
The server from which timedatectl and timesyncd fetch time can be specified in /etc/systemd/timesyncd.conf, and with flexible additional config files in /etc/systemd/timesyncd.conf.d/.
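As a sketch, a drop-in file overriding the time servers might look like this. It writes to a local demo directory; on a real system, set CONF_DIR to /etc/systemd/timesyncd.conf.d and run as root. The server names are just the Ubuntu defaults:

```shell
#!/bin/sh
# Demo writes to ./timesyncd.conf.d; point CONF_DIR at
# /etc/systemd/timesyncd.conf.d on a real system (as root).
CONF_DIR="${CONF_DIR:-./timesyncd.conf.d}"
mkdir -p "$CONF_DIR"
cat > "$CONF_DIR/10-local-ntp.conf" <<'EOF'
[Time]
NTP=ntp.ubuntu.com
FallbackNTP=0.ubuntu.pool.ntp.org 1.ubuntu.pool.ntp.org
EOF
# Afterwards: sudo systemctl restart systemd-timesyncd
```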
### ntpdate
ntpdate is considered deprecated in favour of timedatectl and is thereby no longer installed by default. If installed, it runs once at boot time to set your time according to Ubuntu's NTP server. Later, any time a new interface comes up, it retries to update the time; while doing so it will try to slowly drift the time, as long as the delta it has to cover isn't too big. That behaviour can be controlled with the -B/-b switches.
```
ntpdate ntp.ubuntu.com
```
### timeservers
By default the systemd-based tools request time information from ntp.ubuntu.com. The classic ntpd-based service uses the pool [0-3].ubuntu.pool.ntp.org. Of that pool, 2.ubuntu.pool.ntp.org, as well as ntp.ubuntu.com, also supports IPv6 if needed. If one needs to force IPv6, there is also ipv6.ntp.ubuntu.com, which is not configured by default.
### ntpd
The ntp daemon ntpd calculates the drift of your system clock and continuously adjusts it, so there are no large corrections that could lead to inconsistent logs for instance. The cost is a little processing power and memory, but for a modern server this is negligible.
### Installation
To install ntpd, from a terminal prompt enter:
```
sudo apt install ntp
```
### Configuration
Edit /etc/ntp.conf to add/remove server lines. By default these servers are configured:
```
# Use servers from the NTP Pool Project. Approved by Ubuntu Technical Board
# on 2011-02-08 (LP: #104525). See http://www.pool.ntp.org/join.html for
# more information.
server 0.ubuntu.pool.ntp.org
server 1.ubuntu.pool.ntp.org
server 2.ubuntu.pool.ntp.org
server 3.ubuntu.pool.ntp.org
```
After changing the config file you have to reload the ntpd:
```
sudo systemctl reload ntp.service
```
### View status
Use ntpq to see more info:
```
# sudo ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
+stratum2-2.NTP. 129.70.130.70 2 u 5 64 377 68.461 -44.274 110.334
+ntp2.m-online.n 212.18.1.106 2 u 5 64 377 54.629 -27.318 78.882
*145.253.66.170 .DCFa. 1 u 10 64 377 83.607 -30.159 68.343
+stratum2-3.NTP. 129.70.130.70 2 u 5 64 357 68.795 -68.168 104.612
+europium.canoni 193.79.237.14 2 u 63 64 337 81.534 -67.968 92.792
```
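The `*` in the first column marks the peer currently selected for synchronization. A small sketch of extracting that peer and its offset with awk, run here against a captured sample rather than a live `ntpq -p`:

```shell
#!/bin/sh
# Pull the selected peer (marked '*') and its offset (ms) out of
# ntpq -p output. On a live system pipe: ntpq -pn | awk ...
SAMPLE='     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
+stratum2-2.NTP. 129.70.130.70    2 u    5   64  377   68.461  -44.274 110.334
*145.253.66.170  .DCFa.           1 u   10   64  377   83.607  -30.159  68.343'

SELECTED=$(printf '%s\n' "$SAMPLE" |
    awk '/^\*/ { sub(/^\*/, "", $1); print $1, $(NF-1) }')
echo "$SELECTED"   # -> 145.253.66.170 -30.159
```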
### PPS Support
Since 16.04, ntp supports PPS discipline, which can be used to augment ntp with local time sources for better accuracy. For more details on configuration see the external PPS resource listed below.
### References
* See the [Ubuntu Time][1] wiki page for more information.
* [ntp.org, home of the Network Time Protocol project][2]
* [ntp.org faq on configuring PPS][3]
--------------------------------------------------------------------------------
via: https://help.ubuntu.com/lts/serverguide/NTP.html
作者:[Ubuntu][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://help.ubuntu.com/lts/serverguide/NTP.html
[1]:https://help.ubuntu.com/community/UbuntuTime
[2]:http://www.ntp.org/
[3]:http://www.ntp.org/ntpfaq/NTP-s-config-adv.htm#S-CONFIG-ADV-PPS
[4]:https://help.ubuntu.com/lts/serverguide/NTP.html#timedatectl
[5]:https://help.ubuntu.com/lts/serverguide/NTP.html#timesyncd
[6]:https://help.ubuntu.com/lts/serverguide/NTP.html#ntpdate
[7]:https://help.ubuntu.com/lts/serverguide/NTP.html#timeservers
[8]:https://help.ubuntu.com/lts/serverguide/NTP.html#ntpd
[9]:https://help.ubuntu.com/lts/serverguide/NTP.html#ntp-installation
[10]:https://help.ubuntu.com/lts/serverguide/NTP.html#timeservers-conf
[11]:https://help.ubuntu.com/lts/serverguide/NTP.html#ntp-status
[12]:https://help.ubuntu.com/lts/serverguide/NTP.html#ntp-pps
[13]:https://help.ubuntu.com/lts/serverguide/NTP.html#ntp-references

View File

@ -1,3 +1,5 @@
Rusking translating
Manage Samba4 Active Directory Infrastructure from Windows10 via RSAT Part 3
============================================================

View File

@ -1,3 +1,5 @@
Rusking translating
Manage Samba4 AD Domain Controller DNS and Group Policy from Windows Part 4
============================================================

View File

@ -0,0 +1,334 @@
# Kprobes Event Tracing on ARMv8
![core-dump](http://www.linaro.org/wp-content/uploads/2016/02/core-dump.png)
### Introduction
Kprobes is a kernel feature that allows instrumenting the kernel by setting arbitrary breakpoints that call out to developer-supplied routines before and after the breakpointed instruction is executed (or simulated). See the kprobes documentation[[1]][2] for more information. Basic kprobes functionality is selected with CONFIG_KPROBES. Kprobes support was added to mainline for arm64 in the v4.8 release.
In this article we describe the use of kprobes on arm64 using the debugfs event tracing interfaces from the command line to collect dynamic trace events. This feature has been available for some time on several architectures (including arm32), and is now available on arm64. The feature allows use of kprobes without having to write any code.
### Types of Probes
The kprobes subsystem provides three different types of dynamic probes described below.
### Kprobes
The basic probe is a software breakpoint kprobes inserts in place of the instruction you are probing, saving the original instruction for eventual single-stepping (or simulation) when the probe point is hit.
### Kretprobes
Kretprobes is a part of kprobes that allows intercepting a returning function instead of having to set a probe (or possibly several probes) at the return points. This feature is selected whenever kprobes is selected, for supported architectures (including ARMv8).
### Jprobes
Jprobes allows intercepting a call into a function by supplying an intermediary function with the same calling signature, which will be called first. Jprobes is a programming interface only and cannot be used through the debugfs event tracing subsystem. As such we will not be discussing jprobes further here. Consult the kprobes documentation if you wish to use jprobes.
### Invoking Kprobes
Kprobes provides a set of APIs which can be called from kernel code to set up probe points and register functions to be called when probe points are hit. Kprobes is also accessible without adding code to the kernel, by writing to specific event tracing debugfs files to set the probe address and information to be recorded in the trace log when the probe is hit. The latter is the focus of what this document will be talking about. Lastly kprobes can be accessed through the perf command.
### Kprobes API
The kernel developer can write functions in the kernel (often done in a dedicated debug module) to set probe points and take whatever action is desired right before and right after the probed instruction is executed. This is well documented in kprobes.txt.
### Event Tracing
The event tracing subsystem has its own documentation[[2]][3] which might be worth a read to understand the background of event tracing in general. The event tracing subsystem serves as a foundation for both tracepoints and kprobes event tracing. The event tracing documentation focuses on tracepoints, so bear that in mind when consulting that documentation. Kprobes differs from tracepoints in that there is no predefined list of tracepoints but instead arbitrary dynamically created probe points that trigger the collection of trace event information. The event tracing subsystem is controlled and monitored through a set of debugfs files. Event tracing (CONFIG_EVENT_TRACING) will be selected automatically when needed by something like the kprobe event tracing subsystem.
#### Kprobes Events
With the kprobes event tracing subsystem the user can specify information to be reported at arbitrary breakpoints in the kernel, determined simply by specifying the address of any existing probeable instruction along with formatting information. When that breakpoint is encountered during execution kprobes passes the requested information to the common parts of the event tracing subsystem which formats and appends the data to the trace log, much like how tracepoints works. Kprobes uses a similar but mostly separate collection of debugfs files to control and display trace event information. This feature is selected with CONFIG_KPROBE_EVENT. The kprobetrace documentation[[3]][4] provides the essential information on how to use kprobes event tracing and should be consulted to understand details about the examples presented below.
### Kprobes and Perf
The perf tools provide another command line interface to kprobes. In particular “perf probe” allows probe points to be specified by source file and line number, in addition to function name plus offset, and address. The perf interface is really a wrapper for using the debugfs interface for kprobes.
### Arm64 Kprobes
All of the above aspects of kprobes are now implemented for arm64; in practice, though, there are some differences from other architectures:
* Register name arguments are, of course, architecture specific and can be found in the ARM ARM.
* Not all instruction types can currently be probed. Currently unprobeable instructions include mrs/msr (except DAIF read), exception generation instructions, eret, and hint (except for the nop variant). In these cases it is simplest to just probe a nearby instruction instead. These instructions are blacklisted from probing because the changes they cause to processor state are unsafe to do during kprobe single-stepping or instruction simulation, because the single-stepping context kprobes constructs is inconsistent with what the instruction needs, or because the instruction can't tolerate the additional processing time and exception handling in kprobes (ldx/stx).
* An attempt is made to identify instructions within a ldx/stx sequence and prevent probing, however it is theoretically possible for this check to fail resulting in allowing a probed atomic sequence which can never succeed. Be careful when probing around atomic code sequences.
* Note that because of the details of Linux ARM64 calling conventions it is not possible to reliably duplicate the stack frame for the probed function and for that reason no attempt is made to do so with jprobes, unlike the majority of other architectures supporting jprobes. The reason for this is that there is insufficient information for the callee to know for certain the amount of the stack that is needed.
* Note that the stack pointer information recorded from a probe will reflect the particular stack pointer in use at the time the probe was hit, be it the kernel stack pointer or the interrupt stack pointer.
* There is a list of kernel functions which cannot be probed, usually because they are called as part of kprobes processing. Part of this list is architecture-specific and also includes things like exception entry code.
### Using Kprobes Event Tracing
One common use case for kprobes is instrumenting function entry and/or exit. It is particularly easy to install probes for this since one can just use the function name for the probe address. Kprobes event tracing will look up the symbol name and determine the address. The ARMv8 calling standard defines where the function arguments and return values can be found, and these can be printed out as part of the kprobe event processing.
### Example: Function entry probing
Instrumenting a USB ethernet driver reset function:
```
$ pwd
/sys/kernel/debug/tracing
$ cat > kprobe_events <<EOF
p ax88772_reset %x0
EOF
$ echo 1 > events/kprobes/enable
```
At this point a trace event will be recorded every time the driver's ax88772_reset() function is called. The event will display the pointer to the usbnet structure passed in via X0 (as per the ARMv8 calling standard) as this function's only argument. After plugging in a USB dongle requiring this ethernet driver we see the following trace information:
```
$ cat trace
# tracer: nop
#
# entries-in-buffer/entries-written: 1/1   #P:8
#
#                              _-----=> irqs-off
#                             / _----=> need-resched
#                            | / _---=> hardirq/softirq
#                            || / _--=> preempt-depth
#                            ||| /     delay
#           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
#              | |       |   ||||       |         |
     kworker/0:0-4     [000] d... 10972.102939: p_ax88772_reset_0: (ax88772_reset+0x0/0x230) arg1=0xffff800064824c80
```
Here we can see the value of the pointer argument passed in to our probed function. Since we did not use the optional labelling features of kprobes event tracing, the information we requested is automatically labeled arg1. Note that this refers to the first value in the list of values we requested that kprobes log for this probe, not the actual position of the argument to the function. In this case it also just happens to be the first argument to the function we've probed.
### Example: Function entry and return probing
The kretprobe feature is used specifically to probe a function return. At function entry the kprobes subsystem will be called and will set up a hook to be called at function return, where it will record the requested event information. For the most common case the return information, typically in the X0 register, is quite useful. The return value in %x0 can also be referred to as _$retval_. The following example also demonstrates how to provide a human-readable label to be displayed with the information of interest.
Example of instrumenting the kernel _do_fork() function to record arguments and results using a kprobe and a kretprobe:
```
$ cd /sys/kernel/debug/tracing
$ cat > kprobe_events <<EOF
p _do_fork %x0 %x1 %x2 %x3 %x4 %x5
r _do_fork pid=%x0
EOF
$ echo 1 > events/kprobes/enable
```
At this point every call to _do_fork() will produce two kprobe events recorded into the trace file, one reporting the calling argument values and one reporting the return value. The return value will be labeled pid in the trace file. Here are the contents of the trace file after three fork syscalls have been made:
```
$ cat trace
# tracer: nop
#
# entries-in-buffer/entries-written: 6/6   #P:8
#
#                              _-----=> irqs-off
#                             / _----=> need-resched
#                            | / _---=> hardirq/softirq
#                            || / _--=> preempt-depth
#                            ||| /     delay
#           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
#              | |       |   ||||       |         |
              bash-1671  [001] d...   204.946007: p__do_fork_0: (_do_fork+0x0/0x3e4) arg1=0x1200011 arg2=0x0 arg3=0x0 arg4=0x0 arg5=0xffff78b690d0 arg6=0x0
              bash-1671  [001] d..1   204.946391: r__do_fork_0: (SyS_clone+0x18/0x20 <- _do_fork) pid=0x724
              bash-1671  [001] d...   208.845749: p__do_fork_0: (_do_fork+0x0/0x3e4) arg1=0x1200011 arg2=0x0 arg3=0x0 arg4=0x0 arg5=0xffff78b690d0 arg6=0x0
              bash-1671  [001] d..1   208.846127: r__do_fork_0: (SyS_clone+0x18/0x20 <- _do_fork) pid=0x725
              bash-1671  [001] d...   214.401604: p__do_fork_0: (_do_fork+0x0/0x3e4) arg1=0x1200011 arg2=0x0 arg3=0x0 arg4=0x0 arg5=0xffff78b690d0 arg6=0x0
              bash-1671  [001] d..1   214.401975: r__do_fork_0: (SyS_clone+0x18/0x20 <- _do_fork) pid=0x726
```
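When you are finished tracing, probes have to be disabled before the definitions can be cleared; clearing kprobe_events while an event is still enabled fails with -EBUSY. A teardown sketch (it writes to a local demo directory; on a real system set TRACEFS to /sys/kernel/debug/tracing and run as root):

```shell
#!/bin/sh
# On a real system: TRACEFS=/sys/kernel/debug/tracing (as root).
TRACEFS="${TRACEFS:-./demo-tracefs}"
mkdir -p "$TRACEFS/events/kprobes"   # demo scaffolding only

# Order matters: disable the events first, then clear the probe
# definitions, otherwise the second write fails with -EBUSY.
echo 0 > "$TRACEFS/events/kprobes/enable"
: > "$TRACEFS/kprobe_events"
```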
### Example: Dereferencing pointer arguments
For pointer values the kprobe event processing subsystem also allows dereferencing and printing of desired memory contents, for various base data types. It is necessary to manually calculate the offset into structures in order to display a desired field.
Instrumenting the do_wait() function:
```
$ cat > kprobe_events <<EOF
p:wait_p do_wait wo_type=+0(%x0):u32 wo_flags=+4(%x0):u32
r:wait_r do_wait $retval
EOF
$ echo 1 > events/kprobes/enable
```
Note that the argument labels used in the first probe are optional and can be used to more clearly identify the information recorded in the trace log. The signed offset and parentheses indicate that the register argument is a pointer to memory contents to be recorded in the trace log. The ":u32" indicates that the memory location contains an unsigned four-byte wide datum (an enum and an int in a locally defined structure in this case).
The probe labels (after the colon) are optional and will be used to identify the probe in the log. The label must be unique for each probe. If unspecified a useful label will be automatically generated from a nearby symbol name, as has been shown in earlier examples.
Also note that the "$retval" argument could just be specified as "%x0".
Here are the contents of the trace file after two wait syscalls have been made:
```
$ cat trace
# tracer: nop
#
# entries-in-buffer/entries-written: 4/4   #P:8
#
#                              _-----=> irqs-off
#                             / _----=> need-resched
#                            | / _---=> hardirq/softirq
#                            || / _--=> preempt-depth
#                            ||| /     delay
#           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
#              | |       |   ||||       |         |
             bash-1702  [001] d...   175.342074: wait_p: (do_wait+0x0/0x260) wo_type=0x3 wo_flags=0xe
             bash-1702  [002] d..1   175.347236: wait_r: (SyS_wait4+0x74/0xe4 <- do_wait) arg1=0x757
             bash-1702  [002] d...   175.347337: wait_p: (do_wait+0x0/0x260) wo_type=0x3 wo_flags=0xf
             bash-1702  [002] d..1   175.347349: wait_r: (SyS_wait4+0x74/0xe4 <- do_wait) arg1=0xfffffffffffffff6
```
### Example: Probing arbitrary instruction addresses
In previous examples we have inserted probes for function entry and exit, however it is possible to probe an arbitrary instruction (with a few exceptions). If we are placing a probe inside a C function, the first step is to look at the assembler version of the code to identify where we want to place the probe. One way to do this is to use gdb on the vmlinux file and display the instructions in the function where you wish to place the probe. An example of doing this for the module_alloc function in arch/arm64/kernel/module.c follows. In this case, because gdb seems to prefer using the weak symbol definition and its associated stub code for this function, we get the symbol value from System.map instead:
```
$ grep module_alloc System.map
ffff2000080951c4 T module_alloc
ffff200008297770 T kasan_module_alloc
```
In this example we're using cross-development tools, and we invoke gdb on our host system to examine the instructions comprising our function of interest:
```
$ ${CROSS_COMPILE}gdb vmlinux
(gdb) x/30i 0xffff2000080951c4
        0xffff2000080951c4 <module_alloc>:      sub    sp, sp, #0x30
        0xffff2000080951c8 <module_alloc+4>:    adrp   x3, 0xffff200008d70000
        0xffff2000080951cc <module_alloc+8>:    add    x3, x3, #0x0
        0xffff2000080951d0 <module_alloc+12>:   mov    x5, #0x713              // #1811
        0xffff2000080951d4 <module_alloc+16>:   mov    w4, #0xc0               // #192
        0xffff2000080951d8 <module_alloc+20>:   mov    x2, #0xfffffffff8000000 // #-134217728
        0xffff2000080951dc <module_alloc+24>:   stp    x29, x30, [sp,#16]
        0xffff2000080951e0 <module_alloc+28>:   add    x29, sp, #0x10
        0xffff2000080951e4 <module_alloc+32>:   movk   x5, #0xc8, lsl #48
        0xffff2000080951e8 <module_alloc+36>:   movk   w4, #0x240, lsl #16
        0xffff2000080951ec <module_alloc+40>:   str    x30, [sp]
        0xffff2000080951f0 <module_alloc+44>:   mov    w7, #0xffffffff         // #-1
        0xffff2000080951f4 <module_alloc+48>:   mov    x6, #0x0                // #0
        0xffff2000080951f8 <module_alloc+52>:   add    x2, x3, x2
        0xffff2000080951fc <module_alloc+56>:   mov    x1, #0x8000             // #32768
        0xffff200008095200 <module_alloc+60>:   stp    x19, x20, [sp,#32]
        0xffff200008095204 <module_alloc+64>:   mov    x20, x0
        0xffff200008095208 <module_alloc+68>:   bl     0xffff2000082737a8 <__vmalloc_node_range>
        0xffff20000809520c <module_alloc+72>:   mov    x19, x0
        0xffff200008095210 <module_alloc+76>:   cbz    x0, 0xffff200008095234 <module_alloc+112>
        0xffff200008095214 <module_alloc+80>:   mov    x1, x20
        0xffff200008095218 <module_alloc+84>:   bl     0xffff200008297770 <kasan_module_alloc>
        0xffff20000809521c <module_alloc+88>:   tbnz   w0, #31, 0xffff20000809524c <module_alloc+136>
        0xffff200008095220 <module_alloc+92>:   mov    sp, x29
        0xffff200008095224 <module_alloc+96>:   mov    x0, x19
        0xffff200008095228 <module_alloc+100>:  ldp    x19, x20, [sp,#16]
        0xffff20000809522c <module_alloc+104>:  ldp    x29, x30, [sp],#32
        0xffff200008095230 <module_alloc+108>:  ret
        0xffff200008095234 <module_alloc+112>:  mov    sp, x29
        0xffff200008095238 <module_alloc+116>:  mov    x19, #0x0               // #0
```
In this case we are going to display the result from the following source line in this function:
```
p = __vmalloc_node_range(size, MODULE_ALIGN, VMALLOC_START,
VMALLOC_END, GFP_KERNEL, PAGE_KERNEL_EXEC, 0,
NUMA_NO_NODE, __builtin_return_address(0));
```
…and also the return value from the function call in this line:
```
if (p && (kasan_module_alloc(p, size) < 0)) {
```
We can identify these in the assembler code from the calls to the external functions. To display these values we will place probes at 0xffff20000809520c and 0xffff20000809521c on our target system:
```
$ cat > kprobe_events <<EOF
p 0xffff20000809520c %x0
p 0xffff20000809521c %x0
EOF
$ echo 1 > events/kprobes/enable
```
Now after plugging an ethernet adapter dongle into the USB port we see the following written into the trace log:
```
$ cat trace
# tracer: nop
#
# entries-in-buffer/entries-written: 12/12   #P:8
#
#                              _-----=> irqs-off
#                             / _----=> need-resched
#                            | / _---=> hardirq/softirq
#                            || / _--=> preempt-depth
#                            ||| /     delay
#           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
#              | |       |   ||||       |         |
      systemd-udevd-2082  [000] d...    77.200991: p_0xffff20000809520c: (module_alloc+0x48/0x98) arg1=0xffff200001188000
      systemd-udevd-2082  [000] d...    77.201059: p_0xffff20000809521c: (module_alloc+0x58/0x98) arg1=0x0
      systemd-udevd-2082  [000] d...    77.201115: p_0xffff20000809520c: (module_alloc+0x48/0x98) arg1=0xffff200001198000
      systemd-udevd-2082  [000] d...    77.201157: p_0xffff20000809521c: (module_alloc+0x58/0x98) arg1=0x0
      systemd-udevd-2082  [000] d...    77.227456: p_0xffff20000809520c: (module_alloc+0x48/0x98) arg1=0xffff2000011a0000
      systemd-udevd-2082  [000] d...    77.227522: p_0xffff20000809521c: (module_alloc+0x58/0x98) arg1=0x0
      systemd-udevd-2082  [000] d...    77.227579: p_0xffff20000809520c: (module_alloc+0x48/0x98) arg1=0xffff2000011b0000
      systemd-udevd-2082  [000] d...    77.227635: p_0xffff20000809521c: (module_alloc+0x58/0x98) arg1=0x0
      modprobe-2097  [002] d...    78.030643: p_0xffff20000809520c: (module_alloc+0x48/0x98) arg1=0xffff2000011b8000
      modprobe-2097  [002] d...    78.030761: p_0xffff20000809521c: (module_alloc+0x58/0x98) arg1=0x0
      modprobe-2097  [002] d...    78.031132: p_0xffff20000809520c: (module_alloc+0x48/0x98) arg1=0xffff200001270000
      modprobe-2097  [002] d...    78.031187: p_0xffff20000809521c: (module_alloc+0x58/0x98) arg1=0x0
```
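As an aside, kprobe_events also accepts a symbol+offset form, so the two probes above could be defined without hard-coding absolute addresses (the offsets 0x48 and 0x58 are taken from the module_alloc disassembly shown earlier):

```shell
#!/bin/sh
# Equivalent probe definitions using SYM+offset instead of
# absolute addresses (offsets from the disassembly above).
P1="p module_alloc+0x48 %x0"
P2="p module_alloc+0x58 %x0"
printf '%s\n%s\n' "$P1" "$P2"
# On the target, these lines would be written to
# /sys/kernel/debug/tracing/kprobe_events as before.
```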
One more feature of the kprobes event system is recording of statistics information, which can be found in kprobe_profile. After the above trace the contents of that file are:
```
$ cat kprobe_profile
  p_0xffff20000809520c                                    6            0
  p_0xffff20000809521c                                    6            0
```
This indicates that there have been a total of 6 hits on each of the two breakpoints we set, which of course is consistent with the trace log data. More kprobe_profile features are described in the kprobetrace documentation.
There is also the ability to further filter kprobes events.  The debugfs files used to control this are listed in the kprobetrace documentation while the details of their contents are (mostly) described in the trace events documentation.
### Conclusion
Linux on ARMv8 is now on par with other architectures supporting the kprobes feature. Work is being done by others to also add uprobes and systemtap support. These features/tools and other already completed features (e.g. perf, coresight) allow the Linux ARMv8 user to debug and test performance as they would on other, older architectures.
* * *
Bibliography
[[1]][5] Jim Keniston, Prasanna S. Panchamukhi, Masami Hiramatsu. “Kernel Probes (Kprobes).” _GitHub_. GitHub, Inc., 15 Aug. 2016. Web. 13 Dec. 2016.
[[2]][6] Tso, Theodore, Li Zefan, and Tom Zanussi. “Event Tracing.” _GitHub_. GitHub, Inc., 3 Mar. 2016. Web. 13 Dec. 2016.
[[3]][7] Hiramatsu, Masami. “Kprobe-based Event Tracing.” _GitHub_. GitHub, Inc., 18 Aug. 2016. Web. 13 Dec. 2016.
----------------
作者简介:[David Long][8] David works as an engineer in the Linaro Kernel - Core Development team. Before coming to Linaro he spent several years in the commercial and defense industries doing both embedded realtime work, and software development tools for Unix. That was followed by a dozen years at Digital (aka Compaq) doing Unix standards, C compiler, and runtime library work. After that David went to a series of startups doing embedded Linux and Android, embedded custom OS's, and Xen virtualization. He has experience with MIPS, Alpha, and ARM platforms (amongst others). He has used most flavors of Unix starting in 1979 with Bell Labs V6, and has been a long-time Linux user and advocate. He has also occasionally been known to debug a device driver with a soldering iron and digital oscilloscope.
--------------------------------------------------------------------------------
via: http://www.linaro.org/blog/kprobes-event-tracing-armv8/
作者:[David Long][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.linaro.org/author/david-long/
[1]:http://www.linaro.org/blog/kprobes-event-tracing-armv8/#
[2]:https://github.com/torvalds/linux/blob/master/Documentation/kprobes.txt
[3]:https://github.com/torvalds/linux/blob/master/Documentation/trace/events.txt
[4]:https://github.com/torvalds/linux/blob/master/Documentation/trace/kprobetrace.txt
[5]:https://github.com/torvalds/linux/blob/master/Documentation/kprobes.txt
[6]:https://github.com/torvalds/linux/blob/master/Documentation/trace/events.txt
[7]:https://github.com/torvalds/linux/blob/master/Documentation/trace/kprobetrace.txt
[8]:http://www.linaro.org/author/david-long/
[9]:http://www.linaro.org/blog/kprobes-event-tracing-armv8/#comments
[10]:http://www.linaro.org/blog/kprobes-event-tracing-armv8/#
[11]:http://www.linaro.org/tag/arm64/
[12]:http://www.linaro.org/tag/armv8/
[13]:http://www.linaro.org/tag/jprobes/
[14]:http://www.linaro.org/tag/kernel/
[15]:http://www.linaro.org/tag/kprobes/
[16]:http://www.linaro.org/tag/kretprobes/
[17]:http://www.linaro.org/tag/perf/
[18]:http://www.linaro.org/tag/tracing/

View File

@ -1,394 +0,0 @@
How to Manage Samba4 AD Infrastructure from Linux Command Line Part 2
============================================================
This tutorial will cover [some basic daily commands][4] you need to use in order to manage Samba4 AD Domain Controller infrastructure, such as adding, removing, disabling or listing users and groups.
We'll also take a look at how to manage the domain security policy and how to bind AD users to local PAM authentication, so that AD users can perform local logins on the Linux Domain Controller.
#### Requirements
1. [Create an AD Infrastructure with Samba4 on Ubuntu 16.04 Part 1][1]
2. [Manage Samba4 Active Directory Infrastructure from Windows10 via RSAT Part 3][2]
3. [Manage Samba4 AD Domain Controller DNS and Group Policy from Windows Part 4][3]
### Step 1: Manage Samba AD DC from Command Line
1. Samba AD DC can be managed through the samba-tool command line utility, which offers a great interface for administering your domain.
With the help of the samba-tool interface you can directly manage domain users and groups, domain Group Policy, domain sites, DNS services, domain replication and other critical domain functions.
To review the entire functionality of samba-tool, just type the command with root privileges without any option or parameter.
```
# samba-tool -h
```
[
![samba-tool - Manage Samba Administration Tool](http://www.tecmint.com/wp-content/uploads/2016/11/Samba-Administration-Tool.png)
][5]
samba-tool Manage Samba Administration Tool
2. Now, let's start using the samba-tool utility to administer Samba4 Active Directory and manage our users.
In order to create a user on AD use the following command:
```
# samba-tool user add your_domain_user
```
To add a user with several important fields required by AD, use the following syntax:
```
--------- review all options ---------
# samba-tool user add -h
# samba-tool user add your_domain_user --given-name=your_name --surname=your_username --mail-address=your_domain_user@tecmint.lan --login-shell=/bin/bash
```
[
![Create User on Samba AD](http://www.tecmint.com/wp-content/uploads/2016/11/Create-User-on-Samba-AD.png)
][6]
Create User on Samba AD
3. A listing of all samba AD domain users can be obtained by issuing the following command:
```
# samba-tool user list
```
[
![List Samba AD Users](http://www.tecmint.com/wp-content/uploads/2016/11/List-Samba-AD-Users.png)
][7]
List Samba AD Users
4. To delete a samba AD domain user use the below syntax:
```
# samba-tool user delete your_domain_user
```
5. Reset a samba domain user password by executing the below command:
```
# samba-tool user setpassword your_domain_user
```
6. In order to disable or enable a samba AD user account, use the below commands:
```
# samba-tool user disable your_domain_user
# samba-tool user enable your_domain_user
```
7. Likewise, samba groups can be managed with the following command syntax:
```
--------- review all options ---------
# samba-tool group add -h
# samba-tool group add your_domain_group
```
8. Delete a samba domain group by issuing the below command:
```
# samba-tool group delete your_domain_group
```
9. To display all samba domain groups run the following command:
```
# samba-tool group list
```
10. To list all the samba domain members in a specific group use the command:
```
# samba-tool group listmembers "your_domain group"
```
[
![List Samba Domain Members of Group](http://www.tecmint.com/wp-content/uploads/2016/11/List-Samba-Domain-Members-of-Group.png)
][8]
List Samba Domain Members of Group
11. Adding/Removing a member from a samba domain group can be done by issuing one of the following commands:
```
# samba-tool group addmembers your_domain_group your_domain_user
# samba-tool group removemembers your_domain_group your_domain_user
```
12. As mentioned earlier, samba-tool command line interface can also be used to manage your samba domain policy and security.
To review your samba domain password settings use the below command:
```
# samba-tool domain passwordsettings show
```
[
![Check Samba Domain Password](http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba-Domain-Password.png)
][9]
Check Samba Domain Password
13. In order to modify the samba domain password policy, such as the password complexity level, password ageing, length, how many old passwords to remember and other security features required for a Domain Controller, use the below screenshot as a guide.
```
---------- List all command options ----------
# samba-tool domain passwordsettings -h
```
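As a sketch, the loop below prints (but does not run) a few hypothetical `samba-tool domain passwordsettings set` invocations; the option names come from the `-h` output above, while the values are arbitrary demo choices. On a real DC you would run each printed command directly.

```shell
# Dry run: print hypothetical policy changes instead of applying them.
# Review the lines, then run them on the DC itself (never with such weak
# values in production).
for opt in "--complexity=off" "--min-pwd-length=5" "--history-length=0"; do
  echo "samba-tool domain passwordsettings set $opt"
done
```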
[
![Manage Samba Domain Password Settings](http://www.tecmint.com/wp-content/uploads/2016/11/Manage-Samba-Domain-Password-Settings.png)
][10]
Manage Samba Domain Password Settings
Never use the password policy rules as illustrated above on a production environment. The above settings are used just for demonstration purposes.
### Step 2: Samba Local Authentication Using Active Directory Accounts
14. By default, AD users cannot perform local logins on the Linux system outside the Samba AD DC environment.
In order to login on the system with an Active Directory account you need to make the following changes on your Linux system environment and modify Samba4 AD DC.
First, open samba main configuration file and add the below lines, if missing, as illustrated on the below screenshot.
```
$ sudo nano /etc/samba/smb.conf
```
Make sure the following statements appear on the configuration file:
```
winbind enum users = yes
winbind enum groups = yes
```
[
![Samba Authentication Using Active Directory User Accounts](http://www.tecmint.com/wp-content/uploads/2016/11/Samba-Authentication-Using-Active-Directory-Accounts.png)
][11]
Samba Authentication Using Active Directory User Accounts
15. After you've made the changes, use the testparm utility to make sure no errors are found in the samba configuration file, then restart the samba daemons by issuing the below commands.
```
$ testparm
$ sudo systemctl restart samba-ad-dc.service
```
[
![Check Samba Configuration for Errors](http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba-Configuration-for-Errors.png)
][12]
Check Samba Configuration for Errors
16. Next, we need to modify local PAM configuration files in order for Samba4 Active Directory accounts to be able to authenticate and open a session on the local system and create a home directory for users at first login.
Use the pam-auth-update command to open the PAM configuration prompt and make sure you enable all PAM profiles using the `[space]` key, as illustrated on the below screenshot.
When finished, hit the `[Tab]` key to move to Ok and apply the changes.
```
$ sudo pam-auth-update
```
[
![Configure PAM for Samba4 AD](http://www.tecmint.com/wp-content/uploads/2016/11/PAM-Configuration-for-Samba4-AD.png)
][13]
Configure PAM for Samba4 AD
[
![Enable PAM Authentication Module for Samba4 AD Users](http://www.tecmint.com/wp-content/uploads/2016/11/Enable-PAM-Authentication-Module-for-Samba4-AD.png)
][14]
Enable PAM Authentication Module for Samba4 AD Users
17. Now, open the /etc/nsswitch.conf file with a text editor and add the winbind statement at the end of the passwd and group lines, as illustrated on the below screenshot.
```
$ sudo vi /etc/nsswitch.conf
```
[
![Add Windbind Service Switch for Samba](http://www.tecmint.com/wp-content/uploads/2016/11/Add-Windbind-Service-Switch-for-Samba.png)
][15]
Add Windbind Service Switch for Samba
18. Finally, edit the /etc/pam.d/common-password file, search for the line shown on the below screenshot and remove the use_authtok statement.
This setting ensures that Active Directory users can change their password from the command line while authenticated in Linux. With use_authtok present, AD users authenticated locally on Linux cannot change their password from the console.
```
password [success=1 default=ignore] pam_winbind.so try_first_pass
```
[
![Allow Samba AD Users to Change Passwords](http://www.tecmint.com/wp-content/uploads/2016/11/Allow-Samba-AD-Users-to-Change-Password.png)
][16]
Allow Samba AD Users to Change Passwords
You need to remove the use_authtok option again each time PAM updates are installed and applied to the PAM modules, or each time you execute the pam-auth-update command.
19. Samba4 binaries come with a winbindd daemon built-in and enabled by default.
For this reason you're no longer required to separately enable and run the winbind daemon provided by the winbind package from the official Ubuntu repositories.
In case the old and deprecated winbind service is started on the system, make sure you disable it and stop the service by issuing the below commands:
```
$ sudo systemctl disable winbind.service
$ sudo systemctl stop winbind.service
```
Although we no longer need to run the old winbind daemon, we still need to install the winbind package from the repositories in order to install and use the wbinfo tool.
The wbinfo utility can be used to query Active Directory users and groups from the winbindd daemon's point of view.
The following commands illustrate how to query AD users and groups using wbinfo.
```
$ wbinfo -g
$ wbinfo -u
$ wbinfo -i your_domain_user
```
[
![Check Samba4 AD Information ](http://www.tecmint.com/wp-content/uploads/2016/11/Check-Information-of-Samba4-AD.png)
][17]
Check Samba4 AD Information
[
![Check Samba4 AD User Info](http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba4-AD-User-Info.png)
][18]
Check Samba4 AD User Info
20. Apart from the wbinfo utility, you can also use the getent command line utility to query the Active Directory database through the Name Service Switch libraries, which are represented in the /etc/nsswitch.conf file.
Pipe the getent command through a grep filter in order to narrow the results to just your AD realm user or group database.
```
# getent passwd | grep TECMINT
# getent group | grep TECMINT
```
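If you want to see what this filter does without a domain-joined box, here is a small simulation on canned NSS-style output; the TECMINT entry is a fabricated example of how winbind typically renders AD accounts, and the real data would come from `getent passwd` itself.

```shell
# Simulated `getent passwd` output piped through the same grep filter;
# only the AD-realm entry (fabricated for this demo) survives.
printf '%s\n' \
  'root:x:0:0:root:/root:/bin/bash' \
  'TECMINT\tecmint_user:*:3000017:100::/home/TECMINT/tecmint_user:/bin/bash' \
  | grep TECMINT
```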
[
![Get Samba4 AD Details](http://www.tecmint.com/wp-content/uploads/2016/11/Get-Samba4-AD-Details.png)
][19]
Get Samba4 AD Details
### Step 3: Login in Linux with an Active Directory User
21. In order to authenticate on the system with a Samba4 AD user, just use the AD username as the parameter of the `su -` command.
At the first login a message will be displayed on the console notifying you that a home directory has been created on the `/home/$DOMAIN/` system path, with the name of your AD username.
Use the id command to display extra information about the authenticated user.
```
# su - your_ad_user
$ id
$ exit
```
[
![Check Samba4 AD User Authentication on Linux](http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba4-AD-User-Authentication-on-Linux.png)
][20]
Check Samba4 AD User Authentication on Linux
22. To change the password for an authenticated AD user, type the passwd command in the console after you have successfully logged into the system.
```
$ su - your_ad_user
$ passwd
```
[
![Change Samba4 AD User Password](http://www.tecmint.com/wp-content/uploads/2016/11/Change-Samba4-AD-User-Password.png)
][21]
Change Samba4 AD User Password
23. By default, Active Directory users are not granted root privileges in order to perform administrative tasks on Linux.
To grant root powers to an AD user you must add the username to the local sudo group by issuing the below command.
Make sure you enclose the realm, backslash and AD username in single ASCII quotes.
```
# usermod -aG sudo 'DOMAIN\your_domain_user'
```
To test whether the AD user has root privileges on the local system, log in and run a command, such as apt-get update, with sudo permissions.
```
# su - tecmint_user
$ sudo apt-get update
```
[
![Grant sudo Permission to Samba4 AD User](http://www.tecmint.com/wp-content/uploads/2016/11/Grant-sudo-Permission-to-Samba4-AD-User.png)
][22]
Grant sudo Permission to Samba4 AD User
24. In case you want to grant root privileges to all accounts of an Active Directory group, edit the /etc/sudoers file using the visudo command and add the below line after the root privileges line, as illustrated on the below screenshot:
```
%DOMAIN\\your_domain\ group ALL=(ALL:ALL) ALL
```
Pay attention to sudoers syntax so you don't break things.
The sudoers file doesn't handle ASCII quotation marks very well, so make sure you use `%` to denote that you're referring to a group, use a backslash to escape the backslash after the domain name and another backslash to escape spaces if your group name contains spaces (most of the AD built-in groups contain spaces by default). Also, write the realm in uppercase.
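For instance, applying those escaping rules to the built-in Domain Admins group (a hypothetical illustration, assuming the realm is TECMINT) would look like this:

```
%TECMINT\\Domain\ Admins ALL=(ALL:ALL) ALL
```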
[
![Give Sudo Access to All Samba4 AD Users](http://www.tecmint.com/wp-content/uploads/2016/11/Give-Sudo-Access-to-All-Samba4-AD-Users.png)
][23]
Give Sudo Access to All Samba4 AD Users
That's all for now! Managing a Samba4 AD infrastructure can also be achieved with several tools from the Windows environment, such as ADUC, DNS Manager, GPM and others, which can be obtained by installing the RSAT package from the Microsoft download page.
To administer Samba4 AD DC through the RSAT utilities, it's absolutely necessary to join the Windows system to the Samba4 Active Directory. This will be the subject of our next tutorial; till then, stay tuned to TecMint.
------
About the author: I'm a computer-addicted guy, a fan of open source and Linux-based system software, with about 4 years of experience with Linux distributions, desktops, servers and bash scripting.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/manage-samba4-active-directory-linux-command-line/
作者:[Matei Cezar ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/cezarmatei/
[1]:http://www.tecmint.com/install-samba4-active-directory-ubuntu/
[2]:http://www.tecmint.com/manage-samba4-ad-from-windows-via-rsat/
[3]:http://www.tecmint.com/manage-samba4-dns-group-policy-from-windows/
[4]:http://www.tecmint.com/60-commands-of-linux-a-guide-from-newbies-to-system-administrator/
[5]:http://www.tecmint.com/wp-content/uploads/2016/11/Samba-Administration-Tool.png
[6]:http://www.tecmint.com/wp-content/uploads/2016/11/Create-User-on-Samba-AD.png
[7]:http://www.tecmint.com/wp-content/uploads/2016/11/List-Samba-AD-Users.png
[8]:http://www.tecmint.com/wp-content/uploads/2016/11/List-Samba-Domain-Members-of-Group.png
[9]:http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba-Domain-Password.png
[10]:http://www.tecmint.com/wp-content/uploads/2016/11/Manage-Samba-Domain-Password-Settings.png
[11]:http://www.tecmint.com/wp-content/uploads/2016/11/Samba-Authentication-Using-Active-Directory-Accounts.png
[12]:http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba-Configuration-for-Errors.png
[13]:http://www.tecmint.com/wp-content/uploads/2016/11/PAM-Configuration-for-Samba4-AD.png
[14]:http://www.tecmint.com/wp-content/uploads/2016/11/Enable-PAM-Authentication-Module-for-Samba4-AD.png
[15]:http://www.tecmint.com/wp-content/uploads/2016/11/Add-Windbind-Service-Switch-for-Samba.png
[16]:http://www.tecmint.com/wp-content/uploads/2016/11/Allow-Samba-AD-Users-to-Change-Password.png
[17]:http://www.tecmint.com/wp-content/uploads/2016/11/Check-Information-of-Samba4-AD.png
[18]:http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba4-AD-User-Info.png
[19]:http://www.tecmint.com/wp-content/uploads/2016/11/Get-Samba4-AD-Details.png
[20]:http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba4-AD-User-Authentication-on-Linux.png
[21]:http://www.tecmint.com/wp-content/uploads/2016/11/Change-Samba4-AD-User-Password.png
[22]:http://www.tecmint.com/wp-content/uploads/2016/11/Grant-sudo-Permission-to-Samba4-AD-User.png
[23]:http://www.tecmint.com/wp-content/uploads/2016/11/Give-Sudo-Access-to-All-Samba4-AD-Users.png
[24]:http://www.tecmint.com/manage-samba4-active-directory-linux-command-line/#
[25]:http://www.tecmint.com/manage-samba4-active-directory-linux-command-line/#
[26]:http://www.tecmint.com/manage-samba4-active-directory-linux-command-line/#
[27]:http://www.tecmint.com/manage-samba4-active-directory-linux-command-line/#
[28]:http://www.tecmint.com/manage-samba4-active-directory-linux-command-line/#comments
@ -1,129 +0,0 @@
translating---geekpi
# LXD 2.0: LXD and OpenStack [11/12]
This is the eleventh blog post in [this series about LXD 2.0][1].
![LXD logo](https://linuxcontainers.org/static/img/containers.png)
Introduction
============================================================
First of all, sorry for the delay. It took quite a long time before I finally managed to get all of this going. My first attempts used devstack, which ran into a number of issues that had to be resolved. Yet even after all that, I still wasn't able to get networking going properly.
I finally gave up on devstack and tried “conjure-up” to deploy a full Ubuntu OpenStack using Juju in a pretty user friendly way. And it finally worked!
So below is how to run a full OpenStack, using LXD containers instead of VMs and running all of this inside a LXD container (nesting!).
# Requirements
This post assumes you've got a working LXD setup, providing containers with network access, and that you have a pretty beefy CPU, around 50GB of space for the container to use and at least 16GB of RAM.
Remember, we're running a full OpenStack here, this thing isn't exactly light!
# Setting up the container
OpenStack is made of a lot of different components, doing a lot of different things. Some require additional privileges, so to make our lives easier, we'll use a privileged container.
We'll configure that container to support nesting, pre-load all the required kernel modules and allow it access to /dev/mem (as is apparently needed).
Please note that this means that most of the security benefits of LXD containers are effectively disabled for that container. However, the containers that will be spawned by OpenStack itself will be unprivileged and use all the normal LXD security features.
```
lxc launch ubuntu:16.04 openstack -c security.privileged=true -c security.nesting=true -c "linux.kernel_modules=iptable_nat, ip6table_nat, ebtables, openvswitch"
lxc config device add openstack mem unix-char path=/dev/mem
```
There is a small bug in LXD where it would attempt to load kernel modules that have already been loaded on the host. This has been fixed in LXD 2.5 and will be fixed in LXD 2.0.6 but until then, this can be worked around with:
```
lxc exec openstack -- ln -s /bin/true /usr/local/bin/modprobe
```
Then we need to add a couple of PPAs and install conjure-up, the deployment tool well use to get OpenStack going.
```
lxc exec openstack -- apt-add-repository ppa:conjure-up/next -y
lxc exec openstack -- apt-add-repository ppa:juju/stable -y
lxc exec openstack -- apt update
lxc exec openstack -- apt dist-upgrade -y
lxc exec openstack -- apt install conjure-up -y
```
And the last setup step is to configure LXD networking inside the container.
Answer with the default for all questions, except for:
* Use the “dir” storage backend (“zfs” doesn't work in a nested container)
* Do NOT configure IPv6 networking (conjure-up/juju don't play well with it)
```
lxc exec openstack -- lxd init
```
And that's it for the container configuration itself, now we can deploy OpenStack!
# Deploying OpenStack with conjure-up
As mentioned earlier, well be using conjure-up to deploy OpenStack.
This is a nice, user friendly, tool that interfaces with Juju to deploy complex services.
Start it with:
```
lxc exec openstack -- sudo -u ubuntu -i conjure-up
```
* Select “OpenStack with NovaLXD”
* Then select “localhost” as the deployment target (uses LXD)
* And hit “Deploy all remaining applications”
This will now deploy OpenStack. The whole process can take well over an hour depending on what kind of machine you're running this on. You'll see all services getting a container allocated, then getting deployed and finally interconnected.
![Conjure-Up deploying OpenStack](https://www.stgraber.org/wp-content/uploads/2016/10/conjure-up.png)
Once the deployment is done, a few post-install steps will appear. These will import some initial images, set up SSH authentication, configure networking and finally give you the IP address of the dashboard.
# Access the dashboard and spawn a container
The dashboard runs inside a container, so you can't just hit it from your web browser.
The easiest way around this is to setup a NAT rule with:
```
lxc exec openstack -- iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to <IP>
```
Where “<ip>” is the dashboard IP address conjure-up gave you at the end of the installation.
You can now grab the IP address of the “openstack” container (from “lxc info openstack”) and point your web browser to: http://<container ip>/horizon
This can take a few minutes to load the first time around. Once the login screen is loaded, enter the default login and password (admin/openstack) and you'll be greeted by the OpenStack dashboard!
![oslxd-dashboard](https://www.stgraber.org/wp-content/uploads/2016/10/oslxd-dashboard.png)
You can now head to the “Project” tab on the left and the “Instances” page. To start a new instance using nova-lxd, click on “Launch instance”, select what image you want, network, … and your instance will get spawned.
Once it's running, you can assign it a floating IP which will let you reach your instance from within your “openstack” container.
# Conclusion
OpenStack is a pretty complex piece of software, and it's also not something you really want to run at home or on a single server. But it's certainly interesting to be able to do it anyway, keeping everything contained in a single container on your machine.
Conjure-Up is a great tool to deploy such complex software, using Juju behind the scenes to drive the deployment, with LXD containers for every individual service and finally for the instances themselves.
It's also one of the very few cases where multiple levels of container nesting actually make sense!
--------------------------------------------------------------------------
About the author: I'm Stéphane Graber. I'm probably mostly known as the LXC and LXD project leader, currently working as a technical lead for LXD at Canonical Ltd. from my home in Montreal, Quebec, Canada.
--------------------------------------------------------------------------------
via: https://www.stgraber.org/2016/10/26/lxd-2-0-lxd-and-openstack-1112/
作者:[Stéphane Graber ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.stgraber.org/author/stgraber/
[1]:https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/
@ -0,0 +1,251 @@
如何用UFW配置防火墙
============================================================
UFW即 _uncomplicated firewall_“不复杂的防火墙”是 Arch Linux、Debian 和 Ubuntu 中用于管理防火墙规则的一个前端。UFW 通过命令行使用(尽管它也有可用的 GUI其目标正如其名让防火墙配置变得简单而不复杂。
![How to Configure a Firewall with UFW](https://www.linode.com/docs/assets/ufw_tg.png "How to Configure a Firewall with UFW")
### 开始之前
1. 熟悉我们的[入门][1]指南并完成设置Linode主机名和时区的步骤。
2. 本指南将尽可能使用 `sudo`。请完成[保护你的服务器][2]指南中创建标准用户帐户、加强 SSH 访问和删除不必要网络服务的部分。**不要**按照其中创建防火墙的部分操作:本指南介绍的是使用 UFW它是有别于直接使用 iptables 的另一种控制防火墙的方法。
3. 升级系统
**Arch Linux**
```
sudo pacman -Syu
```
**Debian / Ubuntu**
```
sudo apt-get update && sudo apt-get upgrade
```
### 安装 UFW
UFW 默认包含在 Ubuntu 中,但在 Arch 和 Debian 中必须手动安装。Debian 会自动启用 UFW 的 systemd 单元,使其在重新启动时自动运行,但 Arch 不会。_这与告诉 UFW 启用防火墙规则不同_,因为使用 systemd 或者 upstart 启用 UFW 仅仅是告知 init 系统启动 UFW 守护程序。
默认情况下UFW的规则集为空因此即使守护程序正在运行也不会强制执行任何防火墙规则。 强制执行防火墙规则集的部分[在下面][3]。
### Arch Linux
1. 安装 UFW:
```
sudo pacman -S ufw
```
2. 启动并启用UFW的systemd单元:
```
sudo systemctl start ufw
sudo systemctl enable ufw
```
### Debian / Ubuntu
1. 安装 UFW
```
sudo apt-get install ufw
```
### 使用UFW管理防火墙规则
### 设置默认规则
大多数系统只需要打开少量端口接受传入连接,其余端口全部关闭。要从一个简单的规则基础开始,可以用 `ufw default` 命令设置对传入和传出连接的默认响应。要拒绝所有传入连接并允许所有传出连接,运行:
```
sudo ufw default allow outgoing
sudo ufw default deny incoming
```
`ufw default`也允许使用`reject`参数。
> 除非明确允许规则否则配置默认deny或reject规则会锁定你的Linode。确保在应用默认deny或reject规则之前已按照下面的部分配置了SSH和其他关键服务的允许规则。
### 添加规则
可以有两种方式添加规则:用**端口号**或者**服务名**表示。
要允许SSH的22端口的传入和传出连接你可以运行
```
sudo ufw allow ssh
```
你也可以运行:
```
sudo ufw allow 22
```
类似地要拒绝deny特定端口比如 111上的流量运行
```
sudo ufw deny 111
```
为了更好地调整你的规则你也可以允许基于TCP或者UDP的包。下面例子会允许80端口的TCP包
```
sudo ufw allow 80/tcp
sudo ufw allow http/tcp
```
这个会允许1725端口上的UDP包
```
sudo ufw allow 1725/udp
```
### 高级规则
除了基于端口的允许或阻止UFW还允许您通过IP地址、子网和IP地址/子网/端口组合来允许/阻止。
允许从IP地址连接
```
sudo ufw allow from 123.45.67.89
```
允许特定子网的连接:
```
sudo ufw allow from 123.45.67.89/24
```
允许特定IP/端口组合:
```
sudo ufw allow from 123.45.67.89 to any port 22 proto tcp
```
`proto tcp`可以删除或者根据你的需求变成`proto udp`,所有例子的`allow`都可以根据需要变成`deny`。
### 删除规则
要删除一条规则,在规则的前面加上 `delete`。如果你不再希望允许 HTTP 流量,可以运行:
```
sudo ufw delete allow 80
```
删除规则同样允许基于服务名。
### 编辑UFW的配置文件
虽然可以通过命令行添加简单的规则,但仍有可能需要添加或删除更高级或更特定的规则。在运行你通过终端输入的规则之前UFW 会先应用 `before.rules` 文件中的规则它允许回环接口、ping 和 DHCP。要添加或修改这些规则请编辑 `/etc/ufw/before.rules` 文件。同一目录中还有用于 IPv6 的 `before6.rules` 文件。
此外还存在 `after.rules` 和 `after6.rules` 文件,用于添加需要在 UFW 应用命令行规则之后才添加的规则。
额外的配置文件位于`/etc/default/ufw`。 从此处可以禁用或启用IPv6可以设置默认规则并可以设置UFW以管理内置防火墙链。
### UFW状态
你可以在任何时候使用命令 `sudo ufw status` 查看 UFW 的状态。这会显示所有规则的列表,以及 UFW 是否处于激活状态:
```
Status: active
To Action From
-- ------ ----
22 ALLOW Anywhere
80/tcp ALLOW Anywhere
443 ALLOW Anywhere
22 (v6) ALLOW Anywhere (v6)
80/tcp (v6) ALLOW Anywhere (v6)
443 (v6) ALLOW Anywhere (v6)
```
### 启用防火墙
规则配置完成后,最初运行 `ufw status` 时输出的可能是 `Status: inactive`。要启用 UFW 并强制执行防火墙规则,运行:
```
sudo ufw enable
```
类似地,要禁用 UFW 的规则:
```
sudo ufw disable
```
> 这仍会让 UFW 守护进程保持运行,并在下次开机时再次启动。
### 日志记录
你可以用下面的命令启动日志记录:
```
sudo ufw logging on
```
可以通过运行 `sudo ufw logging low|medium|high` 设置日志级别,级别可以选择 `low`、`medium` 或者 `high`。默认级别是 `low`。
常规日志条目类似于下面这样,位于 `/var/log/ufw.log`
```
Sep 16 15:08:14 <hostname> kernel: [UFW BLOCK] IN=eth0 OUT= MAC=00:00:00:00:00:00:00:00:00:00:00:00:00:00 SRC=123.45.67.89 DST=987.65.43.21 LEN=40 TOS=0x00 PREC=0x00 TTL=249 ID=8475 PROTO=TCP SPT=48247 DPT=22 WINDOW=1024 RES=0x00 SYN URGP=0
```
日志条目的开头是日期、时间和你的 Linode 的主机名。其余的重要字段包括:
* **[UFW BLOCK]**:这个位置是对所记录事件的描述。在此例中,它阻止了连接。
* **IN**:如果它包含一个值,那么该事件是传入的
* **OUT**:如果它包含一个值,那么该事件是传出的
* **MAC**:目的地和源 MAC 地址的组合
* **SRC**:数据包源的 IP
* **DST**:数据包目的地的 IP
* **LEN**:数据包长度
* **TTL**:数据包的 TTL_time to live存活时间。在到达目的地之前它会在路由器之间跳转TTL 过期后数据包即被丢弃。
* **PROTO**:数据包的协议
* **SPT**:数据包的源端口
* **DPT**:数据包的目标端口
* **WINDOW**:发送方可以接收的数据包的大小
* **SYN URGP**:指示是否需要三次握手。`0` 表示不需要。
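作为示例,下面的小脚本从一条虚构的 UFW 日志行(字段含义见上文)中提取源地址和目标端口;日志内容为演示而编造,实际分析时可把 `$line` 换成从日志文件读出的行:

```shell
# 从一条示例日志行中提取 SRC 和 DPT 字段(日志内容为虚构的演示数据)
line='[UFW BLOCK] IN=eth0 OUT= SRC=123.45.67.89 DST=10.0.0.1 PROTO=TCP SPT=48247 DPT=22'
echo "$line" | grep -oE 'SRC=[^ ]+|DPT=[^ ]+'
```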
--------------------------------------------------------------------------------
via: https://www.linode.com/docs/security/firewalls/configure-firewall-with-ufw
作者:[Linode ][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linode.com/docs/security/firewalls/configure-firewall-with-ufw
[1]:https://www.linode.com/docs/getting-started
[2]:https://www.linode.com/docs/security/securing-your-server
[3]:http://localhost:4567/docs/security/firewalls/configure-firewall-with-ufw#enable-the-firewall
@ -0,0 +1,63 @@
在Apache中重定向URL从一台服务器到另外一台服务器上
============================================================
如我们前面两篇文章([使用 mod_rewrite 执行内部重定向][1]和[基于浏览器显示自定义内容][2])中所承诺的,在本文中我们将解释如何在 Apache 中使用 mod_rewrite 模块将已移动的资源重定向到另一台服务器上。
假设你正在重新设计公司的网站。你已决定将内容和样式HTML文件JavaScript和CSS存储在一个服务器上将文档存储在另一个服务器上 - 这样可能会更稳健。
**建议阅读:** [5个提高Apache Web服务器性能的提示][3]
但是,你希望这个更改对用户透明,以便他们仍然能够通过常用网址访问文档。
在下面的例子中,名为 “assets.pdf” 的文件已从 192.168.0.100主机名web中的 /var/www/html 移动到 192.168.0.101主机名web2中的相同位置。
为了让用户在浏览 “192.168.0.100/assets.pdf” 时仍能访问此文件,请打开 192.168.0.100 上的 Apache 配置文件,并添加以下重写规则(或者也可以将以下规则添加到 [.htaccess 文件][4]中):
```
RewriteRule "^(/assets\.pdf$)" "http://192.168.0.101$1" [R,L]
```
其中`$1`是与括号中的正则表达式匹配的任何内容的占位符。
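如果想在不启动 Apache 的情况下理解 `$1` 反向引用的替换效果,可以用 sed 做一个近似的模拟(这只是演示,并非 Apache 的实际处理过程):

```shell
# 用 sed 模拟该重写规则:捕获 /assets.pdf 并在前面拼上新服务器地址
echo "/assets.pdf" | sed -E 's|^(/assets\.pdf)$|http://192.168.0.101\1|'
# 输出http://192.168.0.101/assets.pdf
```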
现在保存更改(不要忘记重新启动 Apache让我们看看当我们打开 192.168.0.100/assets.pdf 尝试访问 assets.pdf 时会发生什么:
**建议阅读:** [25 个有用的网站 .htaccess 技巧][5]
在下图中我们就可以看到,对 192.168.0.100 上 assets.pdf 的请求实际上是由 192.168.0.101 处理的。
```
# tail -n 1 /var/log/apache2/access.log
```
[
![Check Apache Logs](http://www.tecmint.com/wp-content/uploads/2016/11/Check-Apache-Logs.png)
][6]
检查Apache日志
在本文中,我们讨论了如何对已移动到其他服务器的资源进行重定向。 总而言之,我强烈建议你看看[mod_rewrite][7]指南和[Apache重定向指南][8],以供将来参考。
一如既往那样,如果您对本文有任何疑虑,请随时使用下面的评论栏回复。 我们期待你的回音!
--------------------------------------------------------------------------------
作者简介Gabriel Cánepa是来自阿根廷圣路易斯Villa Mercedes的GNU/Linux系统管理员和Web开发人员。 他在一家全球领先的消费品公司工作非常高兴使用FOSS工具来提高他日常工作领域的生产力。
-----------
via: http://www.tecmint.com/redirect-website-url-from-one-server-to-different-server/
作者:[Gabriel Cánepa][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/redirection-with-mod_rewrite-in-apache/
[2]:http://www.tecmint.com/mod_rewrite-redirect-requests-based-on-browser/
[3]:http://www.tecmint.com/apache-performance-tuning/
[4]:http://www.tecmint.com/tag/htaccess/
[5]:http://www.tecmint.com/apache-htaccess-tricks/
[6]:http://www.tecmint.com/wp-content/uploads/2016/11/Check-Apache-Logs.png
[7]:http://mod-rewrite-cheatsheet.com/
[8]:https://httpd.apache.org/docs/2.4/rewrite/remapping.html
@ -0,0 +1,119 @@
如何在Linux中找出最近或今天被修改的文件
============================================================
在本文中,我们将解释两个简单的[命令行小贴士][5],它可以帮你列出今天的所有文件。
Linux用户在命令行上遇到的常见问题之一是[定位具有特定名称的文件][6],当你知道真实的文件名时可能会容易得多。
但是,假设你忘记了在白天早些时候创建的文件的名称(在你包含了数百个文件的`home`文件夹中),但你有急用。
下面用不同的方式只[列出所有你今天创建或修改的文件][7](直接或间接)。
1. 使用 [ls 命令][8],你可以按如下所示列出你的 home 文件夹中今天的文件,其中:
1. `-a` - 列出所有文件,包括隐藏文件
2. `-l` - 启用长列表格式
3. `--time-style = FORMAT` - 显示指定FORMAT的时间
4. `+%D` - 以 m/d/y 格式显示/使用日期
```
# ls -al --time-style=+%D | grep "$(date +%D)"
```
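可以在一个临时目录里快速验证上面的过滤方式(示例文件名为演示而虚构):

```shell
# 在临时目录创建一个"今天"的文件,再用同样的 ls | grep 组合把它过滤出来
tmp=$(mktemp -d)
touch "$tmp/new.txt"
ls -al --time-style=+%D "$tmp" | grep "$(date +%D)"
rm -r "$tmp"   # 清理临时目录
```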
[
![Find Recent Files in Linux](http://www.tecmint.com/wp-content/uploads/2016/12/Find-Recent-Files-in-Linux.png)
][9]
在Linux中找出最近的文件
此外,你可以使用 `-X` 标志来[按字母顺序对结果排序][10]:
```
# ls -alX --time-style=+%D | grep "$(date +%D)"
```
你也可以使用`-S`标志来基于大小(大的优先)来排序:
```
# ls -alS --time-style=+%D | grep "$(date +%D)"
```
2. 另外,使用 [find 命令][11]会更灵活,并且为相同的目的提供比 ls 更多的选项。
1. `-maxdepth` - 用于指定从搜索起点(此例中为当前目录)向下搜索的目录层级数。
2. `-newerXY` - 当文件的时间戳 X 比参考的时间戳 Y 更新时匹配。X 和 Y 可以是以下任意字母:
1. a - 参考文件的访问时间
2. B - 参考文件的创建时间
3. c - 参考文件的 inode 状态改变时间
4. m - 参考文件的修改时间
5. t - 参考值直接解释为一个时间
下面的命令会找出自 2016-12-06 以来修改过的文件:
```
# find . -maxdepth 1 -newermt "2016-12-06"
```
[
![Find Today's Files in Linux](http://www.tecmint.com/wp-content/uploads/2016/12/Find-Todays-Files-in-Linux.png)
][12]
在Linux中找出今天的文件
重要:使用上面的 [find 命令][13]时请采用正确的日期格式,一旦使用了错误的格式,你会得到如下错误:
```
# find . -maxdepth 1 -newermt "12-06-2016"
find: I cannot figure out how to interpret '12-06-2016' as a date or time
```
或者使用下面正确的格式:
```
# find . -maxdepth 1 -newermt "12/06/2016"
或者
# find . -maxdepth 1 -newermt "12/06/16"
```
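下面是一个可以直接运行的小例子:先在临时目录里伪造两个不同修改日期的文件,再用 `-newermt` 把指定日期之后修改的文件找出来(文件名为演示而虚构):

```shell
# 创建两个不同修改时间的文件,只找出 2016-12-06 之后(含当天)修改的那个
tmp=$(mktemp -d)
touch -d "2016-12-05 10:00" "$tmp/old.txt"
touch -d "2016-12-06 10:00" "$tmp/new.txt"
find "$tmp" -maxdepth 1 -type f -newermt "2016-12-06"
rm -r "$tmp"   # 清理临时目录
```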
[
![Find Todays Modified Files in Linux](http://www.tecmint.com/wp-content/uploads/2016/12/Find-Todays-Modified-Files.png)
][14]
在Linux中找出今天修改的文件
你可以在我们的下面一系列文章中获得`ls`和`find`命令的更多使用信息。
1. [用 15 个例子掌握 Linux ls 命令][1]
2. [对Linux用户有用的7个奇怪的技巧][2]
3. [用35个例子掌握Linux find 命令][3]
4. [在Linux中使用扩展查找多个文件名的方法][4]
在本文中,我们解释了如何使用 ls 和 find 命令只列出今天的文件。请使用下面的反馈栏向我们发送有关该主题的任何问题或意见。你也可以告诉我们其他可用于此目的的命令。
--------------------------------------------------------------------------------
作者简介Aaron Kili是一名Linux和F.O.S.S的爱好者将来的Linux系统管理员、网站开发人员目前是TecMint的内容创作者他喜欢用电脑工作并坚信分享知识。
------------------
via: http://www.tecmint.com/find-recent-modified-files-in-linux/
作者:[Aaron Kili][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/15-basic-ls-command-examples-in-linux/
[2]:http://www.tecmint.com/linux-ls-command-tricks/
[3]:http://www.tecmint.com/35-practical-examples-of-linux-find-command/
[4]:http://www.tecmint.com/linux-find-command-to-search-multiple-filenames-extensions/
[5]:http://www.tecmint.com/tag/linux-tricks/
[6]:http://www.tecmint.com/linux-find-command-to-search-multiple-filenames-extensions/
[7]:http://www.tecmint.com/sort-ls-output-by-last-modified-date-and-time/
[8]:http://www.tecmint.com/tag/linux-ls-command/
[9]:http://www.tecmint.com/wp-content/uploads/2016/12/Find-Recent-Files-in-Linux.png
[10]:http://www.tecmint.com/sort-command-linux/
[11]:http://www.tecmint.com/35-practical-examples-of-linux-find-command/
[12]:http://www.tecmint.com/wp-content/uploads/2016/12/Find-Todays-Files-in-Linux.png
[13]:http://www.tecmint.com/find-directory-in-linux/
[14]:http://www.tecmint.com/wp-content/uploads/2016/12/Find-Todays-Modified-Files.png
@ -1,27 +1,27 @@
Building an Email Server on Ubuntu Linux, Part 2
在 Ubuntu 上搭建一台 Email 服务器(第二部分)
============================================================
### [dovecot-email.jpg][4]
![Dovecot email](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/dovecot-email.jpg?itok=tY4veggw "Dovecot email")
Part 2 in this tutorial shows how to use Dovecot to move messages off your Postfix server and into your users' email inboxes.[Creative Commons Zero][2]Pixabay
本教程的第2部分将介绍如何使用Dovecot将邮件从Postfix服务器移动到用户的收件箱。以[Creative Commons Zero][2]Pixabay方式授权发布
In [part 1][5], we installed and tested the Postfix SMTP server. Postfix, or any SMTP server, isn't a complete mail server because all it does is move messages between SMTP servers. We need Dovecot to move messages off your Postfix server and into your users' email inboxes.
在[第一部分][5]中我们安装并测试了Postfix SMTP服务器。Postfix或任何SMTP服务器都不是一个完整的邮件服务器因为它所做的是在SMTP服务器之间移动邮件。我们需要Dovecot将邮件从Postfix服务器移动到用户的收件箱中。
Dovecot supports the two standard mail protocols, IMAP (Internet Message Access Protocol) and POP3 (Post Office Protocol). An IMAP server retains all messages on the server. Your users have the option to download messages to their computers or access them only on the server. IMAP is convenient for users who have multiple machines. It's more work for you because you have to ensure that your server is always available, and IMAP servers require a lot of storage and memory.
Dovecot支持两种标准邮件协议IMAPInternet邮件访问协议和POP3邮局协议。 IMAP服务器保留服务器上的所有邮件。您的用户可以选择将邮件下载到计算机或仅在服务器上访问它们。 IMAP对于有多台机器的用户是方便的。但对你而言会有更多的工作因为你必须确保你的服务器始终可用而且IMAP服务器需要大量的存储和内存。
POP3 is an older protocol. A POP3 server can serve many more users than an IMAP server because messages are downloaded to your users' computers. Most mail clients have the option to leave messages on the server for a certain number of days, so POP3 can behave somewhat like IMAP. But it's not IMAP, and when you do this messages are often downloaded multiple times or deleted unexpectedly.
POP3 是较旧的协议。POP3 服务器能比 IMAP 服务器服务更多的用户,因为邮件会被下载到用户的计算机上。大多数邮件客户端可以选择在服务器上保留一定天数的邮件,因此 POP3 的行为可以有点像 IMAP。但它终究不是 IMAP这样用时邮件常常会被下载多次或被意外删除。
### Install Dovecot
### 安装 Dovecot
Fire up your trusty Ubuntu system and install Dovecot:
启动你信任的Ubuntu系统并安装Dovecot
```
$ sudo apt-get install dovecot-imapd dovecot-pop3d
```
It installs with a working configuration and automatically starts after installation, which you can confirm with `ps ax | grep dovecot`:
它自带一套可用的配置,并会在安装完成后自动启动,你可以用 `ps ax | grep dovecot` 确认:
```
@ -31,7 +31,7 @@ $ ps ax | grep dovecot
15991 ? S 0:00 dovecot/log
```
Open your main Postfix configuration file, `/etc/postfix/main.cf`, and make sure it is configured for maildirs and not mbox mail stores; mbox is single giant file for each user, while maildir gives each message its own file. Lots of little files are more stable and easier to manage than giant bloaty files. Add these two lines; the second line tells Postfix you want maildir format, and to create a `.Mail` directory for every user in their home directories. You can name this directory anything you want, it doesn't have to be `.Mail`:
打开你的 Postfix 主配置文件 `/etc/postfix/main.cf`,确保配置的是 maildir 而不是 mbox 邮件存储mbox 是每个用户一个巨大的文件,而 maildir 则是每封邮件一个文件。大量的小文件比臃肿的大文件更稳定、更易管理。添加下面两行:第二行告诉 Postfix 你要使用 maildir 格式,并在每个用户的家目录下创建一个 `.Mail` 目录。目录名可以随意取,不一定非要是 `.Mail`
```
@ -39,14 +39,14 @@ mail_spool_directory = /var/mail
home_mailbox = .Mail/
```
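为了直观理解 maildir 与 mbox 的差别,下面用一个纯 shell 的小示意,在临时目录中手工模拟 maildir 的目录结构(仅作演示,实际目录由 Postfix 自动创建,文件名也是假设的):

```shell
# 在临时目录中模拟一个 maildir纯演示与真实的 Postfix 无关
MAILDIR=$(mktemp -d)/.Mail
mkdir -p "$MAILDIR/cur" "$MAILDIR/new" "$MAILDIR/tmp"   # maildir 的三个标准子目录

# maildir 中每封邮件都是一个独立的小文件,而 mbox 是所有邮件拼成一个大文件
printf 'Subject: test 1\n\nhello\n' > "$MAILDIR/new/1481747995.msg1"
printf 'Subject: test 2\n\nworld\n' > "$MAILDIR/new/1481747996.msg2"

ls "$MAILDIR/new"    # 两封邮件,两个文件
```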
Now tweak your Dovecot configuration. First rename the original `dovecot.conf` file to get it out of the way, because it calls a host of `conf.d` files and it is better to keep things simple while you're learning:
现在调整你的 Dovecot 配置。首先把原始的 `dovecot.conf` 文件重命名移开,因为它会调用 `conf.d` 里的一大堆文件,而在学习阶段保持简单更好:
```
$ sudo mv /etc/dovecot/dovecot.conf /etc/dovecot/dovecot-oldconf
```
Now create a clean new `/etc/dovecot/dovecot.conf` with these contents:
现在新建一个干净的 `/etc/dovecot/dovecot.conf`,写入以下内容:
```
@ -74,7 +74,7 @@ userdb {
}
```
Note that `mail_location = maildir` must match the `home_mailbox` parameter in `main.cf`. Save your changes and reload both Postfix and Dovecot's configurations:
注意`mail_location = maildir` 必须和`main.cf`中的`home_mailbox`参数匹配。保存你的更改并重新加载Postfix和Dovecot配置
```
@ -82,9 +82,9 @@ $ sudo postfix reload
$ sudo dovecot reload
```
### Fast Way to Dump Configurations
### 快速查看配置
Use these commands to quickly review your Postfix and Dovecot configurations:
使用下面的命令来查看你的Postfix和Dovecot配置
```
@ -92,9 +92,9 @@ $ postconf -n
$ doveconf -n
```
### Test Dovecot
### 测试 Dovecot
Now let's put telnet to work again, and send ourselves a test message. The lines in bold are the commands that you type. `studio` is my server's hostname, so of course you must use your own:
现在再次启动telnet并且给自己发送一条测试消息。粗体显示的是你输入的命令。`studio`是我服务器的主机名,因此你必须用自己的:
```
@ -132,7 +132,7 @@ quit
Connection closed by foreign host.
```
Now query Dovecot to fetch your new message. Log in using your Linux username and password:
现在用 Dovecot 取回你的新邮件。使用你的 Linux 用户名和密码登录:
```
@ -173,7 +173,7 @@ quit
Connection closed by foreign host.
```
Take a moment to compare the message entered in the first example, and the message received in the second example. It is easy to spoof the return address and date, but Postfix is not fooled. Most mail clients default to displaying a minimal set of headers, but you need to read the full headers to see the true backtrace.
花一点时间,比较第一个例子中输入的邮件和第二个例子中收到的邮件。伪造回信地址和日期很容易,但 Postfix 不会被骗到。大多数邮件客户端默认只显示最少的一组邮件头,但你需要阅读完整的邮件头才能看到真实的传输路径。
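如果想在命令行里快速查看一封邮件的完整邮件头,可以利用“邮件头以第一个空行结束”这一约定。下面是一个小示意(邮件内容是伪造的示例数据):

```shell
# 构造一封假邮件用于演示(内容为示例数据)
MSG=$(mktemp)
printf 'Return-Path: <fake@example.com>\nReceived: from studio\nSubject: test\n\nbody text\n' > "$MSG"

# 邮件头以第一个空行结束sed 在遇到空行时退出,因此只打印头部
sed '/^$/q' "$MSG"
```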
You can also read your messages by looking in your `~/Mail/cur` directory. They are plain text. Mine has two test messages:
你也可以直接查看 `.Mail/cur` 目录来阅读你的邮件,它们是纯文本文件。我的目录里有两封测试邮件:
@ -184,9 +184,9 @@ $ ls .Mail/cur/
1480555224.V806I28e000eM41463.studio:2,S
```
### Testing IMAP
### 测试 IMAP
Our Dovecot configuration enables both POP3 and IMAP, so let's use telnet to test IMAP.
我们的 Dovecot 配置同时启用了 POP3 和 IMAP因此我们用 telnet 来测试一下 IMAP。
```
@ -221,26 +221,27 @@ A4 OK Logout completed.
Connection closed by foreign host
```
### Thunderbird Mail Client
### Thunderbird邮件客户端
This screenshot in Figure 1 shows what my messages look like in a graphical mail client on another host on my LAN.
图 1 中的截图显示了我的邮件在局域网内另一台主机上的图形邮件客户端中的样子。
### [thunderbird-mail.png][3]
![thunderbird mail](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/thunderbird-mail.png?itok=IkWK5Ti_ "thunderbird mail")
Figure 1: Thunderbird mail.[Used with permission][1]
图 1Thunderbird 邮件。[授权使用][1]
At this point, you have a working IMAP and POP3 mail server, and you know how to test your server. Your users will choose which protocol they want to use when they set up their mail clients. If you want to support only one mail protocol, then name just the one in your Dovecot configuration.
此时,你已经拥有一台可以工作的 IMAP 和 POP3 邮件服务器,并且知道如何测试它。你的用户在设置邮件客户端时会选择他们想用的协议。如果你只想支持一种邮件协议,那么只需在 Dovecot 配置中指定那一种即可。
However, you are far from finished. This is a very simple, wide-open setup with no encryption. It also works only for users on the same system as your mail server. This is not scalable and has some security risks, such as no protection for passwords. Come back [next week ][6]to learn how to create mail users that are separate from system users, and how to add encryption.
然而,这还远远没有完成。这只是一个非常简单、完全开放、没有加密的配置,并且只适用于与邮件服务器在同一系统上的用户。它无法扩展,还有一些安全隐患,比如密码没有任何保护。我们会在[下周][6]学习如何创建独立于系统用户的邮件用户,以及如何添加加密。
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/sysadmin/building-email-server-ubuntu-linux-part-2
作者:[CARLA SCHRODER][a]
译者:[译者ID](https://github.com/译者ID)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -1,22 +1,23 @@
Building an Email Server on Ubuntu Linux, Part 3
在 Ubuntu 上搭建一台 Email 服务器(三)
============================================================
### [mail-server.jpg][2]
![Mail server](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mail-server.jpg?itok=Ox1SCDsV "Mail server")
In the final part of this tutorial series, we go into detail on how to set up virtual users and mailboxes in Dovecot and Postfix.[Creative Commons Zero][1]pixabay
在本系列的最后一部分,我们将详细介绍如何在 Dovecot 和 Postfix 中设置虚拟用户和邮箱。以 [Creative Commons Zero][1] Pixabay 方式授权发布
Welcome back, me hearty Linux syadmins! In [part 1][3] and [part 2][4] of this series, we learned to how to put Postfix and Dovecot together to make a nice IMAP and POP3 mail server. Now we will learn to make virtual users so that we can manage all of our users in Dovecot.
欢迎回来,我热心的 Linux 系统管理员们!在本系列的[第一部分][3]和[第二部分][4]中,我们学习了如何将 Postfix 和 Dovecot 组合在一起,搭建一台不错的 IMAP 和 POP3 邮件服务器。现在我们将学习设置虚拟用户,以便在 Dovecot 中管理我们所有的用户。
### Sorry, No SSL. Yet.
### 抱歉还不能配置SSL
I know I promised to show you how to set up a proper SSL-protected server. Unfortunately, I underestimated how large that topic is. So, I will realio trulio write a comprehensive how-to by next month.
我知道我答应过教你们如何搭建一台正确的、受 SSL 保护的服务器。不幸的是,我低估了这个话题的规模。所以,我会在下个月之前认认真真地写一篇全面的教程。
For today, in this final part of this series, we'll go into detail on how to set up virtual users and mailboxes in Dovecot and Postfix. It's a bit weird to wrap your mind around, so the following examples are as simple as I can make them. We'll use plain flat files and plain-text authentication. You have the options of using database back ends and nice strong forms of encrypted authentication; see the links at the end for more information on these.
今天,在本系列的最后一部分,我们将详细介绍如何在 Dovecot 和 Postfix 中设置虚拟用户和邮箱。这个概念理解起来有点绕,所以下面的例子我会写得尽量简单。我们将使用纯文本文件和明文认证。你也可以选择使用数据库后端和强加密的认证方式,具体请参阅文末链接了解更多信息。
### Virtual Users
### 虚拟用户
You want virtual users on your email server and not Linux system users. Using Linux system users does not scale, and it exposes their logins, and your Linux server, to unnecessary risk. Setting up virtual users requires editing configuration files in both Postfix and Dovecot. We'll start with Postfix. First, we'll start with a clean, simplified `/etc/postfix/main.cf`. Move your original `main.cf` out of the way and create a new clean one with these contents:
你希望邮件服务器上使用的是虚拟用户,而不是 Linux 系统用户。使用 Linux 系统用户无法扩展,而且会把他们的登录账号乃至你的 Linux 服务器暴露在不必要的风险之下。设置虚拟用户需要同时编辑 Postfix 和 Dovecot 的配置文件。我们先从 Postfix 开始。首先,我们准备一份干净、简化的 `/etc/postfix/main.cf`。把你原来的 `main.cf` 移到别处,然后新建一个干净的文件,内容如下:
```
@ -43,9 +44,9 @@ virtual_gid_maps = static:5000
virtual_transport = lmtp:unix:private/dovecot-lmtp0
```
You may copy this exactly, except for the `192.168.0.0/24` parameter for `mynetworks`, as this should reflect your own local subnet.
你或许可以原样拷贝这份文件,但 `mynetworks` 的参数 `192.168.0.0/24` 除外,它应该改成你自己的本地子网。
Next, create the user and group `vmail`, which will own your virtual mailboxes. The virtual mailboxes are stored in `vmail's` home directory.
接下来,创建用户和组 `vmail`,它将拥有你的虚拟邮箱。虚拟邮箱存放在 `vmail` 的家目录下。
```
@ -53,7 +54,7 @@ $ sudo groupadd -g 5000 vmail
$ sudo useradd -m -u 5000 -g 5000 -s /bin/bash vmail
```
Then reload the Postfix configurations:
接下来重新加载Postfix配置
```
@ -62,16 +63,16 @@ $ sudo postfix reload
postfix/postfix-script: refreshing the Postfix mail system
```
### Dovecot Virtual Users
### Dovecot虚拟用户
We'll use Dovecot's `lmtp` protocol to connect it to Postfix. You probably need to install it:
我们会使用 Dovecot 的 `lmtp` 协议来连接到 Postfix。你可能需要先安装它
```
$ sudo apt-get install dovecot-lmtpd
```
The last line in our example `main.cf` references `lmtp`. Copy this example `/etc/dovecot/dovecot.conf`, replacing your existing file. Again, we are using just this single file, rather than calling the files in `/etc/dovecot/conf.d`.
我们示例 `main.cf` 的最后一行引用了 `lmtp`。把下面的示例复制为 `/etc/dovecot/dovecot.conf`,替换你已有的文件。再说一次,我们只使用这一个文件,而不调用 `/etc/dovecot/conf.d` 里的那些文件。
```
@ -111,7 +112,7 @@ service lmtp {
}
```
At last, you can create the file that holds your users and passwords, `/etc/dovecot/passwd`. For simple plain text authorization we need only our users' full email addresses and passwords:
最后,你可以创建保存用户和密码的文件 `/etc/dovecot/passwd` 了。对于简单的明文认证,我们只需要用户的完整邮箱地址和密码:
```
@ -122,7 +123,7 @@ molly@studio:{PLAIN}password
benny@studio:{PLAIN}password
```
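顺带一提,这个纯文本 passwd 文件也很容易用脚本生成和检查。下面是一个小示意(用户名和密码都是假设的占位值):

```shell
# 生成示例 passwd 文件(用户名、密码均为占位值)
PASSWD=$(mktemp)
for user in alrac molly benny; do
    printf '%s@studio:{PLAIN}password\n' "$user" >> "$PASSWD"
done

# 检查每行是否符合 “完整邮箱地址:{PLAIN}密码” 的格式
grep -Ec '^[^:@]+@[^:@]+:\{PLAIN\}.+$' "$PASSWD"
```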
The Dovecot virtual users are independent of the Postfix virtual users, so you will manage your users in Dovecot. Save all of your changes and restart Postfix and Dovecot:
Dovecot 的虚拟用户独立于 Postfix 的虚拟用户,因此你将在 Dovecot 中管理你的用户。保存所有更改,然后重启 Postfix 和 Dovecot
```
@ -130,7 +131,7 @@ $ sudo service postfix restart
$ sudo service dovecot restart
```
Now let's use good old telnet to see if Dovecot is set up correctly.
现在让我们用老朋友 telnet 看看 Dovecot 是否配置正确。
```
@ -148,7 +149,7 @@ quit
Connection closed by foreign host.
```
So far so good! Now let's send some test messages to our users with the `mail` command. Make sure to use the whole user's email address and not just the username.
到目前为止一切顺利!现在让我们用 `mail` 命令给用户发送一些测试邮件。确保使用用户完整的电子邮箱地址,而不只是用户名。
```
@ -158,7 +159,7 @@ Please enjoy your new mail account!
.
```
The period on the last line sends your message. Let's see if it landed in the correct mailbox.
最后一行的英文句点表示发送邮件。我们来看看它是否送达了正确的邮箱。
```
@ -169,7 +170,7 @@ drwx------ 5 vmail vmail 4096 Dec 14 12:39 ..
-rw------- 1 vmail vmail 525 Dec 14 12:39 1481747995.M696591P5790.studio,S=525,W=540
```
And there it is. It is a plain text file that we can read:
找到了。这是一封我们可以阅读的纯文本文件:
```
$ less 1481747995.M696591P5790.studio,S=525,W=540
@ -190,22 +191,22 @@ From: carla@localhost (carla)
Please enjoy your new mail account!
```
You could also use telnet for testing, as in the previous segments of this series, and set up accounts in your favorite mail client, such as Thunderbird, Claws-Mail, or KMail.
你还可以使用telnet进行测试如本系列前面部分所述并在你最喜欢的邮件客户端中设置帐户如ThunderbirdClaws-Mail或KMail。
### Troubleshooting
### 故障排查
When things don't work, check your logfiles (see the configuration examples), and run `journalctl -xe`. This should give you all the information you need to spot typos, uninstalled packages, and nice search terms for Google.
当遇到问题时,请检查日志文件(参见配置示例),并运行 `journalctl -xe`。它会提供你所需的全部信息,帮你发现拼写错误、未安装的软件包,还能给出适合谷歌搜索的关键词。
### What Next?
### 接下来?
Assuming your LAN name services are correctly configured, you now have a nice usable LAN mail server. Obviously, sending messages in plain text is not optimal, and an absolute no-no for Internet mail. See [Dovecot SSL configuration][5] and [Postfix TLS Support][6]. [VirtualUserFlatFilesPostfix][7] covers TLS and database back ends. And watch for my upcoming SSL how-to. Really.
假设你的局域网名称服务配置正确,你现在就有了一台很好用的局域网邮件服务器。显然,以明文发送邮件不是最佳做法,对于互联网邮件更是绝对禁止的。请参阅 [Dovecot SSL 配置][5]和 [Postfix TLS 支持][6][VirtualUserFlatFilesPostfix][7] 涵盖了 TLS 和数据库后端。还有,记得留意我即将推出的 SSL 指南。这次是真的。
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/sysadmin/building-email-server-ubuntu-linux-part-3
作者:[CARLA SCHRODER][a]
译者:[译者ID](https://github.com/译者ID)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,158 @@
sshpass一个很棒的无交互SSH登录工具 - 不要在生产服务器上使用
============================================================
在大多数情况下Linux系统管理员使用SSH通过密码或[无密码SSH登录][1]或基于密钥的SSH身份验证登录到远程Linux服务器。
如果你想在 SSH 中自动提供密码和用户名该怎么办?这时就可以用 sshpass 了。
sshpass是一个简单、轻量级的命令行工具使我们能够向命令提示符本身提供密码非交互式密码验证以便可以通过[cron调度器][2]执行自动化的shell脚本进行备份。
ssh 直接使用 TTY 访问,以确保密码确实是由交互的用户通过键盘输入的。sshpass 在一个专用的 tty 中运行 ssh以此误导 ssh 相信密码是由交互的用户提供的。
重要:使用 sshpass 被认为是最不安全的方式,因为系统中的所有用户只需一条简单的 `ps` 命令,就能在命令行中看到密码。我强烈建议使用 [SSH 无密码身份验证][3]。
### 在Linux中安装sshpass
在基于RedHat/CentOS的系统中首先需要[启用Epel仓库][4]并使用[yum命令安装][5]它。
```
# yum install sshpass
# dnf install sshpass [On Fedora 22+ versions]
```
在Debian/Ubuntu和它的衍生版中你可以使用[apt-get命令][6]来安装。
```
$ sudo apt-get install sshpass
```
另外你也可以从最新的源码安装sshpass首先下载源码并从tar文件中解压出内容
```
$ wget http://sourceforge.net/projects/sshpass/files/latest/download -O sshpass.tar.gz
$ tar -xvf sshpass.tar.gz
$ cd sshpass-1.06
$ ./configure
$ sudo make install
```
### 如何在Linux中使用sshpass
sshpass 与 ssh 一起使用,可以用下面的命令查看 sshpass 的帮助(各选项的完整描述):
```
$ sshpass -h
```
sshpass Help
```
Usage: sshpass [-f|-d|-p|-e] [-hV] command parameters
-f filename Take password to use from file
-d number Use number as file descriptor for getting password
-p password Provide password as argument (security unwise)
-e Password is passed as env-var "SSHPASS"
With no parameters - password will be taken from stdin
-h Show help (this screen)
-V Print version information
At most one of -f, -d, -p or -e should be used
```
正如我之前提到的sshpass 在脚本中使用时才更可靠、更有用,参考下面的示例命令。
使用用户名和密码登录到远程 Linux SSH 服务器10.42.0.1),并[检查文件系统磁盘使用情况][7],如图所示。
```
$ sshpass -p 'my_pass_here' ssh aaronkilik@10.42.0.1 'df -h'
```
重要提示:此处,密码在命令行中提供,实际上不安全,不建议使用此选项。
[
![sshpass - Linux Remote Login via SSH](http://www.tecmint.com/wp-content/uploads/2016/12/sshpass-Linux-Remote-Login.png)
][8]
sshpass 使用SSH远程登录Linux
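为什么说 `-p` 不安全?因为进程的命令行参数对系统中所有用户都是可见的。下面用 `sleep` 代替 sshpass 做一个无害的小演示:

```shell
# 启动一个带参数的后台进程(这里用 sleep 代替 sshpass任何带参数的命令都一样
sleep 3 &
PID=$!

# 任何用户都可以用 ps 看到它完整的命令行参数
OUT=$(ps -p "$PID" -o args=)
echo "$OUT"

kill "$PID" 2>/dev/null
```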
但是,为了防止密码显示在屏幕上,可以使用 `-e` 选项,并把密码作为 `SSHPASS` 环境变量的值提供,如下所示:
```
$ export SSHPASS='my_pass_here'
$ echo $SSHPASS
$ sshpass -e ssh aaronkilik@10.42.0.1 'df -h'
```
[
![sshpass - Hide Password in Prompt](http://www.tecmint.com/wp-content/uploads/2016/12/sshpass-Hide-Password-in-Prompt.png)
][9]
sshpass 在终端中隐藏密码
注意:在上面的示例中,`SSHPASS` 环境变量仅是临时设置,重启后就会失效。
要永久设置SSHPASS环境变量打开/etc/profile文件并在文件开头输入export语句
```
export SSHPASS='my_pass_here'
```
保存文件并退出,接着运行下面的命令使更改生效:
```
$ source /etc/profile
```
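`-e` 之所以有效,是因为用 `export` 导出的变量会被子进程(包括 sshpass继承。下面的小示意验证了这一点密码为占位值

```shell
# 导出变量后,子进程就能读到它
export SSHPASS='my_pass_here'          # 占位密码,仅作演示
CHILD=$(sh -c 'printf %s "$SSHPASS"')  # 模拟 sshpass 这样的子进程读取变量
echo "$CHILD"
unset SSHPASS                          # 演示完立即清除
```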
另一方面,你也可以使用 `-f` 选项,把密码放在一个文件中。这样,你可以从文件中读取密码,如下所示:
```
$ sshpass -f password_filename ssh aaronkilik@10.42.0.1 'df -h'
```
[
![sshpass - Supply Password File to Login](http://www.tecmint.com/wp-content/uploads/2016/12/sshpass-Provide-Password-File.png)
][10]
sshpass 在登录时提供密码文件
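使用 `-f` 时,务必给密码文件设置严格的权限,否则又会把密码暴露给其他用户。下面是创建密码文件的一个小示意文件名和密码均为假设值sshpass 只读取文件的第一行):

```shell
# 创建密码文件sshpass -f 只读取其第一行
PWFILE=$(mktemp)
printf 'my_pass_here\n' > "$PWFILE"

# 限制为仅属主可读写,防止其他用户读取密码
chmod 600 "$PWFILE"
ls -l "$PWFILE"

# 之后即可这样使用(示意,未实际执行):
# sshpass -f "$PWFILE" ssh aaronkilik@10.42.0.1 'df -h'
```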
你也可以结合 sshpass [使用 scp 传输文件][11]或[使用 rsync 备份/同步文件][12],如下所示:
```
------- Transfer Files Using SCP -------
$ scp -r /var/www/html/example.com --rsh="sshpass -p 'my_pass_here' ssh -l aaronkilik" 10.42.0.1:/var/www/html
------- Backup or Sync Files Using Rsync -------
$ rsync --rsh="sshpass -p 'my_pass_here' ssh -l aaronkilik" 10.42.0.1:/data/backup/ /backup/
```
更多的用法我建议你阅读一下sshpass的man页面输入
```
$ man sshpass
```
在本文中我们解释了sshpass是一个启用非交互式密码验证的简单工具。 虽然这个工具可能是有帮助的但是强烈建议使用更安全的ssh公钥认证机制。
请在下面的评论栏写下任何问题或评论,以便可以进一步讨论。
--------------------------------------------------------------------------------
作者简介Aaron Kili is a Linux and F.O.S.S enthusiast, an upcoming Linux SysAdmin, web developer, and currently a content creator for TecMint who loves working with computers and strongly believes in sharing knowledge.
-----------
via: http://www.tecmint.com/sshpass-non-interactive-ssh-login-shell-script-ssh-password/
作者:[Aaron Kili][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/ssh-passwordless-login-using-ssh-keygen-in-5-easy-steps/
[2]:http://www.tecmint.com/11-cron-scheduling-task-examples-in-linux/
[3]:http://www.tecmint.com/ssh-passwordless-login-using-ssh-keygen-in-5-easy-steps/
[4]:http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/
[5]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/
[6]:http://www.tecmint.com/useful-basic-commands-of-apt-get-and-apt-cache-for-package-management/
[7]:http://www.tecmint.com/how-to-check-disk-space-in-linux/
[8]:http://www.tecmint.com/wp-content/uploads/2016/12/sshpass-Linux-Remote-Login.png
[9]:http://www.tecmint.com/wp-content/uploads/2016/12/sshpass-Hide-Password-in-Prompt.png
[10]:http://www.tecmint.com/wp-content/uploads/2016/12/sshpass-Provide-Password-File.png
[11]:http://www.tecmint.com/scp-commands-examples/
[12]:http://www.tecmint.com/rsync-local-remote-file-synchronization-commands/


@ -0,0 +1,128 @@
LXD 2.0 系列十一LXD 和 OpenStack
======================================
这是 [LXD 2.0 系列介绍文章][1]的第十一篇。
![LXD logo](https://linuxcontainers.org/static/img/containers.png)
介绍
============================================================
首先,对这次的延期表示抱歉。为了让一切正常,我花了很长时间。我第一次尝试使用的是 devstack期间遇到了一些必须解决的问题。然而即使这样我还是无法让网络正常工作。
我终于放弃了 devstack转而尝试使用用户友好的、基于 Juju 的 “conjure-up” 来部署完整的 Ubuntu OpenStack。它终于工作了
下面介绍如何运行一个完整的 OpenStack它使用 LXD 容器而不是虚拟机,并且整套系统都运行在一个 LXD 容器中(嵌套!)。
# 要求
这篇文章假设你已有一个可以工作的 LXD 环境、能为容器提供网络访问,并且你有一个性能很强的 CPU、大约 50GB 可供容器使用的磁盘空间,以及至少 16GB 的内存。
记住我们在这里运行一个完整的OpenStack这东西不是很轻量
# 设置容器
OpenStack 由大量分工不同的组件组成。其中一些组件需要额外的特权,为了简化设置,我们将使用特权容器。
我们将配置支持嵌套的容器,预加载所有需要的内核模块,并允许它访问 `/dev/mem`(显然这是需要的)。
请注意,这意味着对该容器而言LXD 的大部分安全特性都被禁用了。不过,由 OpenStack 自身产生的容器将是非特权的,并可以正常使用 LXD 的安全特性。
```
lxc launch ubuntu:16.04 openstack -c security.privileged=true -c security.nesting=true -c "linux.kernel_modules=iptable_nat, ip6table_nat, ebtables, openvswitch"
lxc config device add openstack mem unix-char path=/dev/mem
```
LXD中有一个小bug它会尝试加载已经加载到主机上的内核模块。这已在LXD 2.5中得到修复并将在LXD 2.0.6中修复,但在此之前,可以使用以下方法:
```
lxc exec openstack -- ln -s /bin/true /usr/local/bin/modprobe
```
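这个变通方法的原理很简单:把 `modprobe` 替换成指向 `/bin/true` 的符号链接,调用它就变成了“什么都不做、总是成功”的空操作。可以先在本地验证一下这个技巧(目录是临时的,仅作演示):

```shell
# 在临时目录中建立一个指向 /bin/true 的 modprobe
BIN=$(mktemp -d)
ln -s /bin/true "$BIN/modprobe"

# 无论传什么参数,它都什么也不做并返回成功
"$BIN/modprobe" iptable_nat
echo $?   # 0
```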
我们需要添加几个 PPA并安装 conjure-up它是我们用来部署 OpenStack 的工具。
```
lxc exec openstack -- apt-add-repository ppa:conjure-up/next -y
lxc exec openstack -- apt-add-repository ppa:juju/stable -y
lxc exec openstack -- apt update
lxc exec openstack -- apt dist-upgrade -y
lxc exec openstack -- apt install conjure-up -y
```
最后一步是在容器内部配置LXD网络。
所有问题都选择默认,除了:
* 使用 “dir” 存储后端“zfs” 在嵌套容器中无法工作)
* 不要配置IPv6网络conjure-up/juju不太兼容它
```
lxc exec openstack -- lxd init
```
容器配置好了,接下来我们部署 OpenStack
# 用conjure-up部署OpenStack
如先前提到的我们用conjure-up部署OpenStack。
这是一个很棒的用户友好的可以与Juju交互来部署复杂服务的工具。
首先:
```
lxc exec openstack -- sudo -u ubuntu -i conjure-up
```
* 选择“OpenStack with NovaLXD”
* 选择“localhost”作为部署目标使用LXD
* 点击“Deploy all remaining applications”
接下来就会部署 OpenStack 了。整个过程会花费一个多小时,具体取决于你机器的性能。你将看到每个服务被分配到一个容器中,然后被部署并最终互连。
![Conjure-Up deploying OpenStack](https://www.stgraber.org/wp-content/uploads/2016/10/conjure-up.png)
部署完成后会显示一个安装完成的界面。它会导入一些初始镜像、设置SSH权限、配置网络最后会显示面板的IP地址。
# 访问面板并生成一个容器
面板运行在一个容器中,因此你不能直接从浏览器中访问。
最简单的方法是设置一条NAT规则
```
lxc exec openstack -- iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to <IP>
```
其中“<ip>”是conjure-up在安装结束时给你的面板IP地址。
你现在可以获取“openstack”容器的IP地址来自“lxc info openstack”并将浏览器指向http://<container ip>/horizon
第一次加载可能需要几分钟。 一旦显示了登录界面输入默认登录名和密码admin/openstack你就会看到OpenStack的欢迎面板
![OpenStack dashboard](https://www.stgraber.org/wp-content/uploads/2016/10/oslxd-dashboard.png)
现在可以选择左边的 “Project” 选项卡,进入 “Instances” 页面。要启动一个使用 nova-lxd 的新实例,点击 “Launch instance”选择你想要的镜像、网络等然后你的实例就创建好了。
一旦它运行后你可以为它分配一个浮动IP它将允许你从你的“openstack”容器中访问你的实例。
# 总结
OpenStack 是一个非常复杂的软件,你肯定不会想在家里或单台服务器上运行它。但不管怎样,能把所有这些服务塞进你机器上的一个容器里,还是非常有趣的。
conjure-up 是部署这类复杂软件的一个很好的工具,它在背后使用 Juju 驱动部署,为每个独立的服务使用一个 LXD 容器,最后的实例本身也是容器。
这也是少数几个多层嵌套容器真正有意义的场景之一!
--------------------------------------------------------------------------
作者简介:我是 Stéphane Graber。我是 LXC 和 LXD 项目的领导者,目前在 Canonical 有限公司担任 LXD 的技术主管,在我位于加拿大魁北克蒙特利尔的家中远程工作。
--------------------------------------------------------------------------------
via: https://www.stgraber.org/2016/10/26/lxd-2-0-lxd-and-openstack-1112/
作者:[Stéphane Graber][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.stgraber.org/author/stgraber/
[1]:https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/