mirror of https://github.com/LCTT/TranslateProject.git synced 2025-03-27 02:30:10 +08:00

Merge branch 'master' of github.com:LCTT/TranslateProject into pulls

This commit is contained in:
Kenneth Hawk 2017-03-24 18:32:17 +08:00
commit 304690cd2c
61 changed files with 5239 additions and 1554 deletions


@ -0,0 +1,172 @@
看漫画学 SELinux 强制策略
============================================================
![SELinux policy guide](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life-uploads/selinux_rules_lead_image.png?itok=jxV7NgtD "Your visual how-to guide for SELinux policy enforcement")
>图像来自:  opensource.com
今年是我们一起庆祝 SELinux 纪念日的第十个年头了LCTT 译者注:本文发表于 2013 年)。真是太难以置信了SELinux 最初在 Fedora Core 3 中被引入,随后加入了红帽企业版 Linux 4。从来没有使用过 SELinux 的家伙,你可要好好儿找个理由了……
SELinux 是一个标签型系统。每一个进程都有一个标签。操作系统中的每一个文件/目录客体object也都有一个标签。甚至连网络端口、设备乃至潜在的主机名都被分配了标签。我们编写规则来控制进程标签对客体标签如文件的访问这些规则我们称之为策略policy由内核强制实施。有时候这种“强制”被称为强制访问控制体系Mandatory Access ControlMAC
一个客体的拥有者对客体的安全属性并没有自主权。标准 Linux 访问控制体系,拥有者/分组 + 权限标志如 rwx常常被称作自主访问控制Discretionary Access ControlDAC。SELinux 没有文件 UID 或拥有权的概念。一切都被标签控制,这意味着在没有至高无上的 root 权限进程时,也可以设置 SELinux 系统。
**注意:** _SELinux 不允许你绕过 DAC 控制。SELinux 是一个并行的强制模型。一个应用必须同时通过 SELinux 和 DAC 的检查才能完成特定的行为。这可能会导致管理员困惑:进程为什么会被拒绝访问?有时进程被拒绝访问是因为 DAC 权限有问题,而不是因为 SELinux 标签。_
### 类型强制
让我们更深入的研究下标签。SELinux 最主要的“模型”或“强制”叫做类型强制type enforcement。基本上这意味着我们根据进程的类型来定义其标签以及根据文件系统客体的类型来定义其标签。
_打个比方_
想象一下在一个系统里定义客体的类型为猫和狗。猫CAT和狗DOG都是进程类型process type
![Image showing a cartoon of a cat and dog.](https://opensource.com/sites/default/files/images/life-uploads/type-enforcement_01_catdog.png)
我们有一类希望能与之交互的客体,我们称之为食物。而我希望能够为食物增加类型:`cat_chow`(猫粮)和 `dog_chow`(狗粮)。
![Cartoon Cat eating Cat Food and Dog eating Dog Food](https://opensource.com/sites/default/files/images/life-uploads/type-enforcement_03_foods.png)
作为一个策略制定者,我可以说一只狗有权限去吃狗粮(`dog_chow`),而一只猫有权限去吃猫粮(`cat_chow`)。在 SELinux 中我可以将这条规则写入策略中。
![allow cat cat_chow:food eat; allow dog dog_chow:food eat](https://opensource.com/sites/default/files/images/life-uploads/type-enforcement_04_policy.png "SELinux rule")
`allow cat cat_chow:food eat;`
`允许 猫 猫粮:食物 吃;`
`allow dog dog_chow:food eat;`
`允许 狗 狗粮:食物 吃;`
有了这些规则,内核会允许猫进程去吃打上猫粮标签 `cat_chow` 的食物,允许狗去吃打上狗粮标签 `dog_chow` 的食物。
![Cartoon Cat eating Cat Food and Dog eating Dog Food](https://opensource.com/sites/default/files/images/life-uploads/type-enforcement_02_eat.png)
此外,在 SELinux 系统中,由于禁止是默认规则,这意味着,如果狗进程想要去吃猫粮 `cat_chow`,内核会阻止它。
![](https://opensource.com/sites/default/files/images/life-uploads/type-enforcement_06_tux-dog-leash.png)
同理,猫也不允许去接触狗粮。
![Cartoon cat not allowed to eat dog fooda](https://opensource.com/sites/default/files/images/life-uploads/mcs-enforcement_07_tux-cat-no.png "Cartoon cat not allowed to eat dog fooda")
_现实例子_
我们将 Apache 进程标为 `httpd_t`,将 Apache 内容标为 `httpd_sys_content_t``httpd_sys_content_rw_t`。假设我们把信用卡数据存储在 MySQL 数据库中,其标签为 `mysqld_data_t`。如果一个 Apache 进程被劫持,黑客可以获得 `httpd_t` 进程的控制权,从而能够去读取 `httpd_sys_content_t` 文件并向 `httpd_sys_content_rw_t` 文件执行写操作。但是黑客却不允许去读信用卡数据(`mysqld_data_t`),即使 Apache 进程是在 root 下运行。在这种情况下 SELinux 减轻了这次闯入的后果。
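为了更直观一些,下面用一小段 shell 演示 SELinux 标签的结构(其中的标签值只是示意;在真正启用了 SELinux 的系统上,可以用 `ps -eZ` 和 `ls -Z` 查看进程和文件的实际标签):

```shell
# SELinux 标签的完整形式为 user:role:type:level
# 类型强制只关心其中第三段type也就是上文所说的 httpd_t、mysqld_data_t 等
label='system_u:system_r:httpd_t:s0'

# 用 cut 取出第三段,即进程的类型
type=$(echo "$label" | cut -d: -f3)
echo "$type"    # 输出httpd_t

# 在启用了 SELinux 的系统上,可以这样查看实际标签(此处仅为示意):
#   ps -eZ | grep httpd      # 查看 Apache 进程的标签
#   ls -Z /var/www/html      # 查看网页内容文件的标签
```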
### 多类别安全强制
_打个比方_
上面我们定义了狗进程和猫进程但是如果你有多个狗进程Fido 和 Spot而你想要阻止 Fido 去吃 Spot 的狗粮 `dog_chow` 怎么办呢?
![SELinux rule](https://opensource.com/sites/default/files/resize/images/life-uploads/mcs-enforcement_02_fido-eat-spot-food-500x251.png "SELinux rule")
一个解决方式是创建大量的新类型,如 `Fido_dog``Fido_dog_chow`。但是这很快会变得难以驾驭因为所有的狗都有差不多相同的权限。
为了解决这个问题我们发明了一种新的强制形式叫做多类别安全Multi Category SecurityMCS。在 MCS 中,我们在狗进程和狗粮的标签上增加了另外一部分标签。现在我们将狗进程标记为 `dog:random1(Fido)``dog:random2(Spot)`
![Cartoon of two dogs fido and spot](https://opensource.com/sites/default/files/images/life-uploads/mcs-enforcement_01_fido-spot.png)
我们将狗粮标记为 `dog_chow:random1(Fido)``dog_chow:random2(Spot)`
![SELinux rule](https://opensource.com/sites/default/files/images/life-uploads/mcs-enforcement_03_foods.png "SELinux rule")
MCS 规则声明如果类型强制规则被遵守而且该 MCS 随机标签正确匹配,则访问是允许的,否则就会被拒绝。
Fido (`dog:random1`) 尝试去吃 `cat_chow:food` 被类型强制拒绝了。
![Cartoon of Kernel (Penquin) holding leash to prevent Fido from eating cat food.](https://opensource.com/sites/default/files/images/life-uploads/mcs-enforcement_04-bad-fido-cat-chow.png)
Fido (`dog:random1`) 允许去吃 `dog_chow:random1`
![Cartoon Fido happily eating his dog food](https://opensource.com/sites/default/files/images/life-uploads/mcs-enforcement_05_fido-eat-fido-food.png)
Fido (`dog:random1`) 去吃 spot(`dog_chow:random2`)的食物被拒绝。
![Cartoon of Kernel (Penquin) holding leash to prevent Fido from eating spots dog food.](https://opensource.com/sites/default/files/images/life-uploads/mcs-enforcement_06_fido-no-spot-food.png)
_现实例子_
在计算机系统中我们经常有很多具有同样访问权限的进程但是我们又希望它们各自独立。有时我们称之为多租户环境multi-tenant environment。最好的例子就是虚拟机。如果我有一个运行很多虚拟机的服务器而其中一个被劫持我希望能够阻止它去攻击其它虚拟机和虚拟机镜像。但是在一个类型强制系统中 KVM 虚拟机被标记为 `svirt_t` 而镜像被标记为 `svirt_image_t`。 我们允许 `svirt_t` 可以读/写/删除标记为 `svirt_image_t` 的上下文。通过使用 libvirt 我们不仅实现了类型强制隔离,而且实现了 MCS 隔离。当 libvirt 将要启动一个虚拟机时,它会挑选出一个 MCS 随机标签如 `s0:c1,c2`,接着它会将 `svirt_image_t:s0:c1,c2` 标签分发给虚拟机需要去操作的所有上下文。最终,虚拟机以 `svirt_t:s0:c1,c2` 为标签启动。因此SELinux 内核控制 `svirt_t:s0:c1,c2` 不允许写向 `svirt_image_t:s0:c3,c4`,即使虚拟机被一个黑客劫持并接管,即使它是运行在 root 下。
我们在 OpenShift 中使用[类似的隔离策略][8]。每一个 gearuser/app process都有相同的 SELinux 类型`openshift_t`LCTT 译注gear 为 OpenShift 的计量单位)。策略定义的规则控制着 gear 类型的访问权限,而一个独一无二的 MCS 标签确保了一个 gear 不能影响其他 gear。
请观看[这个短视频][9]来看 OpenShift gear 切换到 root 会发生什么。
### 多级别安全强制
另外一种不经常使用的 SELinux 强制形式叫做多级别安全Multi Level SecurityMLS它开发于上世纪 60 年代,并且主要使用在受信操作系统上如 Trusted Solaris。
其核心观点就是通过进程使用的数据等级来控制进程。一个 _secret_ 进程不能读取 _top secret_ 数据。
MLS 很像 MCS除了它在强制策略中增加了支配的概念。MCS 标签必须完全匹配,但一个 MLS 标签可以支配另一个 MLS 标签并且获得访问。
_打个比方_
不讨论不同名字的狗,我们现在来看不同种类。我们现在有一只格雷伊猎犬和一只吉娃娃。
![Cartoon of a Greyhound and a Chihuahua](https://opensource.com/sites/default/files/images/life-uploads/mls-enforcement_01_chigrey.png)
我们可能想要允许格雷伊猎犬去吃任何狗粮,但是吉娃娃如果尝试去吃格雷伊猎犬的狗粮可能会被呛到。
我们把格雷伊猎犬标记为 `dog:Greyhound`,把它的狗粮标记为 `dog_chow:Greyhound`,把吉娃娃标记为 `dog:Chihuahua`,把它的狗粮标记为 `dog_chow:Chihuahua`
![Cartoon of a Greyhound dog food and a Chihuahua dog food.](https://opensource.com/sites/default/files/images/life-uploads/mls-enforcement_04_mlstypes.png)
使用 MLS 策略,我们可以使 MLS 格雷伊猎犬标签支配吉娃娃标签。这意味着 `dog:Greyhound` 允许去吃 `dog_chow:Greyhound` 和 `dog_chow:Chihuahua`
![SELinux rule](https://opensource.com/sites/default/files/images/life-uploads/mls-enforcement_05_chigreyeating.png "SELinux rule")
但是 `dog:Chihuahua` 不允许去吃 `dog_chow:Greyhound`
![Cartoon of Kernel (Penquin) stopping the Chihahua from eating the greyhound food. Telling him it would be a big too beefy for him.](https://opensource.com/sites/default/files/images/life-uploads/mls-enforcement_03_chichoke.png)
当然,由于类型强制, `dog:Greyhound` 和 `dog:Chihuahua` 仍然不允许去吃 `cat_chow:Siamese`,即使 MLS 级别 Greyhound 支配 Siamese。
![Cartoon of Kernel (Penquin) holding leash to prevent both dogs from eating cat food.](https://opensource.com/sites/default/files/images/life-uploads/mls-enforcement_06_nocatchow.png)
_现实例子_
有两个 Apache 服务器:一个以 `httpd_t:TopSecret` 运行,一个以 `httpd_t:Secret` 运行。如果 Apache 进程 `httpd_t:Secret` 被劫持,黑客可以读取 `httpd_sys_content_t:Secret` 但会被禁止读取 `httpd_sys_content_t:TopSecret`
但是如果运行 `httpd_t:TopSecret` 的 Apache 进程被劫持,它可以读取 `httpd_sys_content_t:Secret` 数据和 `httpd_sys_content_t:TopSecret` 数据。
我们在军事系统上使用 MLS一个用户可能被允许读取 _secret_ 数据,但是另一个用户在同一个系统上可以读取 _top secret_ 数据。
### 结论
SELinux 是一个功能强大的标签系统控制着内核授予每个进程的访问权限。其最主要的特性是类型强制策略规则基于进程被标记的类型和客体被标记的类型来定义进程的访问权限。此外还介绍了另外两种控制手段MCS 用于彼此隔离具有相同类型的进程,而 MLS 则允许进程间存在支配等级。
_*所有的漫画都来自 [Máirín Duffy][6]_
--------------------------------------------------------------------------------
作者简介:
Daniel J Walsh - Daniel Walsh 已经在计算机安全领域工作了将近 30 年。Daniel 于 2001 年 8 月加入红帽。
-------------------------
via: https://opensource.com/business/13/11/selinux-policy-guide
作者:[Daniel J Walsh][a]
译者:[xiaow6](https://github.com/xiaow6)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/rhatdan
[1]:https://opensource.com/resources/what-is-linux?src=linux_resource_menu
[2]:https://opensource.com/resources/what-are-linux-containers?src=linux_resource_menu
[3]:https://opensource.com/article/16/11/managing-devices-linux?src=linux_resource_menu
[4]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=7016000000127cYAAQ
[5]:https://opensource.com/tags/linux?src=linux_resource_menu
[6]:https://opensource.com/users/mairin
[7]:https://opensource.com/business/13/11/selinux-policy-guide?rate=XNCbBUJpG2rjpCoRumnDzQw-VsLWBEh-9G2hdHyB31I
[8]:http://people.fedoraproject.org/~dwalsh/SELinux/Presentations/openshift_selinux.ogv
[9]:http://people.fedoraproject.org/~dwalsh/SELinux/Presentations/openshift_selinux.ogv
[10]:https://opensource.com/user/16673/feed
[11]:https://opensource.com/business/13/11/selinux-policy-guide#comments
[12]:https://opensource.com/users/rhatdan


@ -0,0 +1,66 @@
如何更改 Linux 的 I/O 调度器
==================================
Linux 的 I/O 调度器是以块式 I/O 访问存储卷的机制有时也叫磁盘调度器。Linux I/O 调度器的工作机制是控制块设备的请求队列:确定队列中哪些 I/O 的优先级更高以及何时下发 I/O 到块设备,以此来减少磁盘寻道时间,从而提高系统的吞吐量。
目前 Linux 上有如下几种 I/O 调度算法:
1. noop - 通常用于基于内存的存储设备。
2. cfq - 完全公平调度器。各进程平均使用 I/O 带宽。
3. deadline - 针对延迟的调度器,每一个 I/O 都有一个最晚执行时间。
4. anticipatory - 启发式调度,类似 deadline 算法,但是引入预测机制提高性能。
查看设备当前的 I/O 调度器:
```
# cat /sys/block/<Disk_Name>/queue/scheduler
```
假设磁盘名称是 `/dev/sdc`
```
# cat /sys/block/sdc/queue/scheduler
noop anticipatory deadline [cfq]
```
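输出中方括号括起来的项就是当前生效的调度器。下面的小片段演示如何从这行输出中提取它(其中的输出内容为示意值):

```shell
# scheduler 文件的典型输出,方括号标记当前生效的调度器
line='noop anticipatory deadline [cfq]'

# 提取方括号中的内容
current=$(echo "$line" | grep -o '\[[^]]*\]' | tr -d '[]')
echo "$current"    # 输出cfq
```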
### 如何改变硬盘设备 I/O 调度器
使用如下指令:
```
# echo {SCHEDULER-NAME} > /sys/block/<Disk_Name>/queue/scheduler
```
比如设置 noop 调度器:
```
# echo noop > /sys/block/sdc/queue/scheduler
```
以上设置重启后会失效,要想重启后配置仍生效,需要在内核启动参数中将 `elevator=noop` 写入 `/boot/grub/menu.lst`
#### 1. 备份 menu.lst 文件
```
cp -p /boot/grub/menu.lst /boot/grub/menu.lst-backup
```
#### 2. 更新 /boot/grub/menu.lst
`elevator=noop` 添加到文件末尾,比如:
```
kernel /vmlinuz-2.6.16.60-0.91.1-smp root=/dev/sysvg/root splash=silent splash=off showopts elevator=noop
```
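在较新的发行版(使用 GRUB 2 或 systemd 的系统)上,另一种常见做法是通过 udev 规则为特定磁盘持久地设置调度器。下面是一个假设性的规则示例(文件名与匹配条件均为示意):

```
# /etc/udev/rules.d/60-io-scheduler.rules(示意)
# 对所有 sd* 磁盘设备,在添加或变更时把调度器设置为 noop
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="noop"
```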
--------------------------------------------------------------------------------
via: http://linuxroutes.com/change-io-scheduler-linux/
作者:[UX Techno][a]
译者:[honpey](https://github.com/honpey)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linuxroutes.com/change-io-scheduler-linux/


@ -1,92 +1,89 @@
CentOS 7 上的 FirewallD 简明指南
============================================================
![](https://www.rosehosting.com/blog/wp-content/uploads/2017/02/set-up-and-configure-a-firewall-with-firewalld-on-centos-7.jpg)
FirewallD 是 CentOS 7 服务器上默认可用的防火墙管理工具。基本上,它是 iptables 的封装,有图形配置工具 firewall-config 和命令行工具 `firewall-cmd`。使用 iptables 服务,每次改动都要求刷新旧规则,并且从 `/etc/sysconfig/iptables` 读取新规则,然而 firewalld 只应用改动了的不同部分。
### FirewallD 的区域zone
FirewallD 使用服务service 和区域zone来代替 iptables 的规则rule和链chain
默认情况下有以下的区域zone可用
* **drop** 丢弃所有传入的网络数据包并且无回应,只有传出网络连接可用。
* **block** — 拒绝所有传入网络数据包并回应一条主机禁止的 ICMP 消息,只有传出网络连接可用。
* **public** — 只接受被选择的传入网络连接,用于公共区域。
* **external** — 用于启用了地址伪装的外部网络,只接受选定的传入网络连接。
* **dmz** — DMZ 隔离区,外部受限地访问内部网络,只接受选定的传入网络连接。
* **work** — 对于处在你工作区域内的计算机,只接受被选择的传入网络连接。
* **home** — 对于处在你家庭区域内的计算机,只接受被选择的传入网络连接。
* **internal** — 对于处在你内部网络的计算机,只接受被选择的传入网络连接。
* **trusted** — 所有网络连接都接受。
要列出所有可用的区域,运行:
```
# firewall-cmd --get-zones
work drop internal external trusted home dmz public block
```
列出默认的区域
```
# firewall-cmd --get-default-zone
public
```
改变默认的区域
```
# firewall-cmd --set-default-zone=dmz
# firewall-cmd --get-default-zone
dmz
```
### FirewallD 服务
FirewallD 服务使用 XML 配置文件,记录了 firewalld 服务信息。
列出所有可用的服务:
```
# firewall-cmd --get-services
amanda-client amanda-k5-client bacula bacula-client ceph ceph-mon dhcp dhcpv6 dhcpv6-client dns docker-registry dropbox-lansync freeipa-ldap freeipa-ldaps freeipa-replication ftp high-availability http https imap imaps ipp ipp-client ipsec iscsi-target kadmin kerberos kpasswd ldap ldaps libvirt libvirt-tls mdns mosh mountd ms-wbt mysql nfs ntp openvpn pmcd pmproxy pmwebapi pmwebapis pop3 pop3s postgresql privoxy proxy-dhcp ptp pulseaudio puppetmaster radius rpc-bind rsyncd samba samba-client sane smtp smtps snmp snmptrap squid ssh synergy syslog syslog-tls telnet tftp tftp-client tinc tor-socks transmission-client vdsm vnc-server wbem-https xmpp-bosh xmpp-client xmpp-local xmpp-server
```
XML 配置文件存储在 `/usr/lib/firewalld/services/` 和 `/etc/firewalld/services/` 目录下。
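除了内置服务,也可以自定义服务:在 `/etc/firewalld/services/` 下放置一个服务定义的 XML 文件即可。下面是一个假设性的示例(服务名 `myapp` 和端口 8080 均为示意),保存为 `/etc/firewalld/services/myapp.xml`,执行 `firewall-cmd --reload` 之后,就可以像内置服务一样用 `--add-service=myapp` 来开启:

```
<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>myapp</short>
  <description>示例:监听 TCP 8080 端口的自定义服务</description>
  <port protocol="tcp" port="8080"/>
</service>
```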
### 用 FirewallD 配置你的防火墙
作为一个例子,假设你正在运行一个 web 服务器SSH 服务端口为 7022以及邮件服务你可以利用 FirewallD 这样配置你的服务器:
首先设置默认区域为 dmz。
```
# firewall-cmd --set-default-zone=dmz
# firewall-cmd --get-default-zone
dmz
```
为 dmz 区域添加持久性的 HTTP 和 HTTPS 规则:
```
# firewall-cmd --zone=dmz --add-service=http --permanent
# firewall-cmd --zone=dmz --add-service=https --permanent
```
开启端口 25SMTP和端口 465SMTPS
```
firewall-cmd --zone=dmz --add-service=smtp --permanent
firewall-cmd --zone=dmz --add-service=smtps --permanent
```
开启 IMAP、IMAPS、POP3 和 POP3S 端口:
```
firewall-cmd --zone=dmz --add-service=imap --permanent
firewall-cmd --zone=dmz --add-service=imaps --permanent
@ -94,23 +91,23 @@ firewall-cmd --zone=dmz --add-service=pop3 --permanent
firewall-cmd --zone=dmz --add-service=pop3s --permanent
```
因为将 SSH 端口改到了 7022所以要移除 ssh 服务(端口 22开启端口 7022
```
firewall-cmd --remove-service=ssh --permanent
firewall-cmd --add-port=7022/tcp --permanent
```
要应用这些更改,我们需要重新加载防火墙:
```
firewall-cmd --reload
```
最后,可以列出这些规则:
```
# firewall-cmd --list-all
dmz
target: default
icmp-block-inversion: no
@ -129,11 +126,7 @@ rich rules:
* * *
PS. 如果你喜欢这篇文章,请在下面留下一个回复。谢谢。
--------------------------------------------------------------------------------
@ -145,7 +138,7 @@ via: https://www.rosehosting.com/blog/set-up-and-configure-a-firewall-with-firew
作者:[rosehosting.com][a]
译者:[Locez](https://github.com/locez)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,83 @@
CentOS 与 Ubuntu 有什么不同?
============
[![centos vs. ubuntu](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/centos-vs-ubuntu_orig.jpg)
][4]
Linux 中的选择似乎“无穷无尽”,因为每个人都可以通过修改一个已经发行的版本,或者[白手起家][7]LFS来构建新的 Linux 发行版。
关于 Linux 发行版的选择,我们关注的因素包括用户界面、文件系统、软件包分发、新的特性以及更新周期和可维护性等。
在这篇文章中,我们会讲到两个较为熟知的 Linux 发行版,实际上,更多的是介绍两者之间的不同,以及在哪些方面一方比另一方更好。
### 什么是 CentOS
CentOSCommunity Enterprise Operating System是脱胎于 Red Hat Enterprise Linux (RHEL) 并与之兼容的由社区支持的克隆版 Linux 发行版,所以我们可以认为 CentOS 是 RHEL 的一个免费版。CentOS 的每一套发行版都有 10 年的维护期,每个新版本的释出周期为 2 年。在 2014 年 1 月 8 日,[CentOS 声明正式加入红帽](https://linux.cn/article-2453-1.html),为新的 CentOS 董事会所管理,但仍然保持与 RHEL 的独立性。
扩展阅读:[如何安装 CentOS?][1]
#### CentOS 的历史和第一次释出
[CentOS][8] 第一次释出是在 2004 年,当时名叫 cAos Linux它是由社区维护和管理的一套基于 RPM 的发行版。
CentOS 结合了包括 Debian、Red Hat Linux/Fedora 和 FreeBSD 等在内的许多方面,使其能够令服务器和集群稳定工作 3 到 5 年的时间。它有一群开源软件开发者作为拥趸是一个大型组织CAOS 基金会)的一部分。
在 2006 年 6 月David Parsley 宣布由他开发的 TAO Linux另一个 RHEL 克隆版本)退出历史舞台,并全力转入 CentOS 的开发工作。不过,他的转向并不会影响之前的 TAO 用户,因为他们可以通过 `yum update` 将系统迁移到 CentOS。
2014 年 1 月,红帽开始赞助 CentOS 项目,并移交了所有权和商标。
#### CentOS 设计
确切地说CentOS 是付费的 RHELRed Hat Enterprise Linux版本的克隆。RHEL 提供源码以供之后 CentOS 修改和变更(移除商标和 logo并完善为最终的成品。
### Ubuntu
Ubuntu 是一个基于 Debian 的 Linux 操作系统应用于桌面、服务器、智能手机和平板电脑等多个领域。Ubuntu 是由一个英国的名为 Canonical Ltd. 的公司发行的,由南非的 Mark Shuttleworth 创立并赞助。
扩展阅读:[安装完 Ubuntu 16.10 必须做的 10 件事][2]
#### Ubuntu 的设计
Ubuntu 是一个在全世界的开发者共同努力下生成的开源发行版。在这些年的悉心经营下Ubuntu 的界面变得越来越现代化和人性化,整个系统运行也更加流畅、安全,并且有成千上万的应用可供下载。
由于它是基于 [Debian][10] 的,因此它也支持 .deb 包、较新的包系统和更为安全的 [snap 包格式 (snappy)][11]。
这种新的打包系统允许分发的应用自带满足所需的依赖性。
扩展阅读:[点评 Ubuntu 16.10 中的 Unity 8][3]
### CentOS 与 Ubuntu 的区别
* Ubuntu 基于 DebianCentOS 基于 RHEL
* Ubuntu 使用 .deb 和 .snap 的软件包CentOS 使用 .rpm 和 flatpak 软件包;
* Ubuntu 使用 apt 来更新CentOS 使用 yum
* CentOS 看起来会更稳定,因为它不会像 Ubuntu 那样对包做常规性更新,但这并不意味着 Ubuntu 就不比 CentOS 安全;
* Ubuntu 有更多的文档,以及免费的社区问题解答和信息支持;
* Ubuntu 服务器版本在云服务和容器部署上的支持更多。
### 结论
不论你的选择如何,**是 Ubuntu 还是 CentOS**,两者都是非常优秀稳定的发行版。如果你想要一个发布周期更短的版本,那么就选 Ubuntu如果你想要一个不经常变更包的版本那么就选 CentOS。在下方留下的评论说出你更钟爱哪一个吧
--------------------------------------------------------------------------------
via: http://www.linuxandubuntu.com/home/centos-vs-ubuntu
作者:[linuxandubuntu.com][a]
译者:[Meditator-hkx](http://www.kaixinhuang.com)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.linuxandubuntu.com/home/centos-vs-ubuntu
[1]:http://www.linuxandubuntu.com/home/how-to-install-centos
[2]:http://www.linuxandubuntu.com/home/10-things-to-do-after-installing-ubuntu-16-04-xenial-xerus
[3]:http://www.linuxandubuntu.com/home/linuxandubuntu-review-of-unity-8-preview-in-ubuntu-1610
[4]:http://www.linuxandubuntu.com/home/centos-vs-ubuntu
[5]:http://www.linuxandubuntu.com/home/centos-vs-ubuntu
[6]:http://www.linuxandubuntu.com/home/centos-vs-ubuntu#comments
[7]:http://www.linuxandubuntu.com/home/how-to-create-a-linux-distro
[8]:http://www.linuxandubuntu.com/home/10-things-to-do-after-installing-centos
[9]:http://www.linuxandubuntu.com/home/linuxandubuntu-review-of-unity-8-preview-in-ubuntu-1610
[10]:https://www.debian.org/
[11]:https://en.wikipedia.org/wiki/Snappy_(package_manager)


@ -0,0 +1,86 @@
在独立的 Root 和 Home 硬盘驱动器上安装 Ubuntu
============================================================
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/02/ubuntu-feature-image.jpg "How to Install Ubuntu with Separate Root and Home Hard Drivess")
安装 Linux 系统时,可以有两种不同的方式。第一种方式是在一个超快的固态硬盘上进行安装,这样可以保证迅速开机和高速访问数据。第二种方式是在一个较慢但容量很大的普通硬盘驱动器上安装,这样可以存储大量的应用程序和数据。
然而,一些 Linux 用户都知道,固态硬盘很棒,但是又很贵,而普通硬盘容量很大但速度较慢。如果我告诉你,可以同时利用两种硬盘来安装 Linux 系统,会怎么样?一个超快、现代化的固态硬盘驱动 Linux 内核,一个容量很大的普通硬盘来存储其他数据。
在这篇文章中,我将阐述如何通过分离 Root 目录和 Home 目录安装 Ubuntu 系统 — Root 目录存于 SSD固态硬盘Home 目录存于普通硬盘中。
### 没有多余的硬盘驱动器?尝试一下 SD 卡(内存卡)!
![ubuntu-sd-card](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/02/ubuntu-sd-card.jpg "ubuntu-sd-card")
在多个驱动器上安装 Linux 系统是很不错的,并且每一个高级用户都应该学会这样做。然而,还有一种情况也适合这样安装 Linux 系统 在存储容量很小的笔记本电脑上安装。可能你有一台很便宜的笔记本电脑,上面安装了 Linux 系统,电脑上没有多余的硬盘驱动器,但有一个 SD 卡插槽。
如果是这样,你可以为笔记本电脑买一张高速 SD 卡来存放 Home 目录,以代替第二块硬盘驱动器。本教程同样适用于这种情况。
### 制作 USB 启动盘
首先去[这个网站][11]下载最新的 Ubuntu Linux 版本。然后下载 [Etcher][12]- USB 镜像制作工具。这是一个使用起来很简单的工具,并且支持所有主流的操作系统。你还需要一个至少有 2GB 大小的 USB 驱动器。
![ubuntu-browse-for-ubuntu-iso](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/02/ubuntu-browse-for-ubuntu-iso.jpg "ubuntu-browse-for-ubuntu-iso")
安装好 Etcher 以后,直接打开。点击 <ruby>选择镜像<rt>Select Image</rt></ruby> 按钮来制作镜像。这将提示用户浏览、寻找 ISO 镜像,找到前面下载的 Ubuntu ISO 文件并选择。然后,插入 USB 驱动器Etcher 应该会自动选择它。之后,点击 “Flash!” 按钮Ubuntu 启动盘的制作过程就开始了。
为了能够启动 Ubuntu 系统,需要配置 BIOS。这是必需的这样计算机才能启动新创建的 Ubuntu 启动盘。为了进入 BIOS在插入 USB 的情况下重启电脑然后按正确的键Del、F2 或者任何和你的电脑相应的键)。找到从 USB 启动的选项,然后启用这个选项。
如果你的个人电脑不支持 USB 启动,那么把 Ubuntu 镜像刻入 DVD 中。
### 安装
当用启动盘第一次加载 Ubuntu 时,欢迎界面会出现两个选项。请选择 “安装 Ubuntu” 选项。在下一页中Ubiquity 安装工具会请求用户选择一些选项。这些选项不是强制性的,可以忽略。然而,建议两个选项都勾选,因为这样可以节省安装系统以后的时间,特别是安装 MP3 解码器和更新系统。LCTT 译注:当然如果你的网速不够快,还是不要勾选的好。)
![ubuntu-preparing-to-install](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/02/ubuntu-preparing-to-install.jpg "ubuntu-preparing-to-install")
勾选了<ruby>“准备安装 Ubuntu”<rt>Preparing to install Ubuntu</rt></ruby>页面中的两个选项以后,需要选择安装类型了。有许多种安装类型。然而,这个教程需要选择自定义安装类型。为了进入自定义安装页面,勾选<ruby>“其他”<rt>something else</rt></ruby>选项,然后点击“继续”。
现在将显示 Ubuntu 自定义安装分区工具。它将显示任何/所有能够安装 Ubuntu 系统的磁盘。如果两个硬盘均可用,那么它们都会显示。如果插有 SD 卡,那么它也会显示。
选择用于 Root 文件系统的硬盘驱动器。如果上面已经有分区表,编辑器会显示出来,请使用分区工具把它们全部删除。如果驱动没有格式化也没有分区,那么使用鼠标选择驱动器,然后点击<ruby>“新建分区表”<rt>new partition table</rt></ruby>。对所有驱动器执行这个操作从而使它们都有分区表。LCTT 译注:警告,如果驱动器上有你需要的数据,请先备份,否则重新分区后将永远丢失。)
![ubuntu-create-mount-point](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/02/ubuntu-create-mount-point.jpg "ubuntu-create-mount-point")
现在所有分区都有了分区表(并已删除分区),可以开始进行配置了。在第一个驱动器下选择空闲空间,然后点击加号按钮来创建新分区。然后将会出现一个“创建分区窗口”。允许工具使用整个硬盘。然后转到<ruby>“挂载点”<rt>Mount Point</rt></ruby>下拉菜单。选择 `/` Root作为挂载点之后点击 OK 按钮确认设置。
对第二个驱动器做相同的事,这次选择 `/home` 作为挂载点。两个驱动都设置好以后,选择要放入引导装载器的驱动器,然后点击 <ruby>“现在安装”<rt>install now</rt></ruby>,安装进程就开始了。
![ubuntu-multi-drive-layout](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/02/ubuntu-multi-drive-layout.jpg "ubuntu-multi-drive-layout")
从这以后的安装进程是标准安装。创建用户名、选择时区等。
**注:** 你是以 UEFI 模式进行安装吗?如果是,那么需要给 boot 创建一个 512 MB 大小的 FAT32 分区。在创建其他任何分区前做这件事。确保选择 “/boot” 作为这个分区的挂载点。
如果你需要一个交换分区,那么,在创建用于 `/` 的分区前,在第一个驱动器上进行创建。可以通过点击 + 按钮,然后输入所需大小,选择下拉菜单中的<ruby>“交换区域”<rt>swap area</rt></ruby>来创建交换分区。
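作为参考,按上面的布局安装完成后,`/etc/fstab` 中大致会出现类似下面的两条挂载条目(其中的 UUID 是假设值,实际值可以用 `blkid` 查看):

```
# /etc/fstab 片段(示意)
# <设备>                                    <挂载点>  <类型>  <选项>              <dump> <pass>
UUID=11111111-aaaa-bbbb-cccc-111111111111   /        ext4    errors=remount-ro   0      1
UUID=22222222-dddd-eeee-ffff-222222222222   /home    ext4    defaults            0      2
```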
### 结论
Linux 最好的地方就是可以自己按需配置。有多少其他操作系统可以让你把文件系统分割在不同的硬盘驱动上?并不多,这是肯定的。我希望有了这个指南,你将意识到 Ubuntu 能够提供的真正力量。
安装 Ubuntu 系统时你会用多个驱动器吗?请在下面的评论中让我们知道。
--------------------------------------------------------------------------------
via: https://www.maketecheasier.com/install-ubuntu-with-different-root-home-hard-drives/
作者:[Derrik Diener][a]
译者:[ucasFL](https://github.com/ucasFL)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.maketecheasier.com/author/derrikdiener/
[1]:https://www.maketecheasier.com/author/derrikdiener/
[2]:https://www.maketecheasier.com/install-ubuntu-with-different-root-home-hard-drives/#respond
[3]:https://www.maketecheasier.com/category/linux-tips/
[4]:http://www.facebook.com/sharer.php?u=https%3A%2F%2Fwww.maketecheasier.com%2Finstall-ubuntu-with-different-root-home-hard-drives%2F
[5]:http://twitter.com/share?url=https%3A%2F%2Fwww.maketecheasier.com%2Finstall-ubuntu-with-different-root-home-hard-drives%2F&amp;text=How+to+Install+Ubuntu+with+Separate+Root+and+Home+Hard+Drives
[6]:mailto:?subject=How%20to%20Install%20Ubuntu%20with%20Separate%20Root%20and%20Home%20Hard%20Drives&amp;body=https%3A%2F%2Fwww.maketecheasier.com%2Finstall-ubuntu-with-different-root-home-hard-drives%2F
[7]:https://www.maketecheasier.com/byb-dimmable-eye-care-desk-lamp/
[8]:https://www.maketecheasier.com/download-appx-files-from-windows-store/
[9]:https://support.google.com/adsense/troubleshooter/1631343
[10]:http://www.maketecheasier.com/tag/ssd
[11]:http://ubuntu.com/download
[12]:https://etcher.io/


@ -0,0 +1,139 @@
如何在 Amazon AWS 上设置一台 Linux 服务器
============================================================
AWSAmazon Web Services是全球领先的云服务器提供商之一。你可以使用 AWS 平台在一分钟内设置完服务器。在 AWS 上,你可以微调服务器的许多技术细节,如 CPU 数量、内存、磁盘空间和磁盘类型(更快的 SSD 或者经典的 IDE等。关于 AWS 最好的一点是你只需要为你使用到的服务付费。在开始之前AWS 提供了一个名为 “Free Tier” 的特殊帐户,你可以免费使用一年的 AWS 技术服务,但会有一些小限制,例如,你每个月使用服务器时长不能超过 750 小时,超过这个他们就会向你收费。你可以在 [aws 官网][3]上查看所有相关的规则。
因为我的这篇文章是关于在 AWS 上创建 Linux 服务器,因此拥有 “Free Tier” 帐户是先决条件。要注册帐户,你可以使用此[链接][4]。请注意,你需要在创建帐户时输入信用卡详细信息。
让我们假设你已经创建了 “Free Tier” 帐户。
在继续之前,你必须了解 AWS 中的一些术语以了解设置:
1. EC2弹性计算云此术语用于虚拟机。
2. AMIAmazon 机器镜像):表示操作系统实例。
3. EBS弹性块存储AWS 中的一种存储环境类型。
通过以下链接登录 AWS 控制台:[https://console.aws.amazon.com/][5] 。
AWS 控制台将如下所示:
[
![Amazon AWS console](https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/aws_console.JPG)
][6]
### 在 AWS 中设置 Linux VM
1、 创建一个 EC2虚拟机实例在开始安装系统之前你必须在 AWS 中创建一台虚拟机。要创建虚拟机,在“<ruby>计算<rt>compute</rt></ruby>”菜单下点击 EC2
[
![Create an EC2 instance](https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/aws_console_ec21.png)
][7]
2、 现在在<ruby>创建实例<rt>Create instance</rt></ruby>下点击<ruby>“启动实例”<rt>Launch Instance</rt></ruby>按钮。
[
![Launch the EC2 instance](https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/aws_launch_ec2.png)
][8]
3、 现在,当你使用的是一个 “Free Tier” 帐号,接着最好选择 “Free Tier” 单选按钮以便 AWS 可以过滤出可以免费使用的实例。这可以让你不用为使用 AWS 的资源而付费。
[
![Select Free Tier instances only](https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/aws_free_tier_radio1.png)
][9]
4、 要继续操作,请选择以下选项:
a、 在经典实例向导中选择一个 AMIAmazon Machine Image然后选择使用 **Red Hat Enterprise Linux 7.2HVMSSD 存储**
b、 选择 “**t2.micro**” 作为实例详细信息。
c、 **配置实例详细信息**:不要更改任何内容,只需单击下一步。
d、 **添加存储**:不要更改任何内容,只需点击下一步,因为此时我们将使用默认的 10 GiB 硬盘。
e、 **添加标签**:不要更改任何内容只需点击下一步。
f、 **配置安全组**:现在选择用于 ssh 的 22 端口,以便你可以在任何地方访问此服务器。
[
![Configure AWS server](https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/aws_ssh_port1.png)
][10]
g、 选择“<ruby>查看并启动<rt>Review and Launch</rt></ruby>”按钮。
h、 如果所有的详情都无误,点击 “<ruby>启动<rt>Launch</rt></ruby>”按钮。
i、 单击“<ruby>启动<rt>Launch</rt></ruby>”按钮后,系统会像下面那样弹出一个窗口以创建“密钥对”:选择选项“<ruby>创建密钥对<rt>create a new key pair</rt></ruby>”,并给密钥对起个名字,然后下载下来。在使用 ssh 连接到服务器时,需要此密钥对。最后,单击“<ruby>启动实例<rt>Launch Instance</rt></ruby>”按钮。
[
![Create Key pair](https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/aws_key_pair.png)
][11]
j、 点击“<ruby>启动实例<rt>Launch Instance</rt></ruby>”按钮后,转到左上角的服务。选择“<ruby>计算<rt>compute</rt></ruby>”--> “EC2”。现在点击“<ruby>运行实例<rt>Running Instances</rt></ruby>”:
[
![Go to the running EC2 instance](https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/aws_running_instance.png)
][12]
k、 现在你可以看到,你的新 VM 的状态是 “<ruby>运行中<rt>running</rt></ruby>”。选择实例,请记下登录到服务器所需的 “<ruby>公开 DNS 名称<rt>Public DNS</rt></ruby>”。
[
![Public DNS value of the VM](https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/aws_dns_value.png)
][13]
现在你已完成创建一台运行 Linux 的 VM。要连接到服务器请按照以下步骤操作。
### 从 Windows 中连接到 EC2 实例
1、 首先,你需要有 PuTTYgen 和 PuTTY用于从 Windows 连接到服务器(或在 Linux 上使用 SSH 命令)。你可以通过下面的[链接][14]下载 PuTTY。
2、 现在打开 putty gen `puttygen.exe`。
3、 你需要单击 “Load” 按钮浏览并选择你从亚马逊上面下载的密钥对文件pem 文件)。
4、 你需要选择 “SSH-2 RSA” 选项,然后单击“保存私钥”按钮。请在下一个弹出窗口中选择 “yes”。
5、 将文件以扩展名 `.ppk` 保存。
6、 现在你需要打开 `putty.exe`。在左侧菜单中点击 “connect”然后选择 “SSH”然后选择 “Auth”。你需要单击浏览按钮来选择我们在步骤 4 中创建的 .ppk 文件。
7、 现在点击 “session” 菜单并在“host name” 中粘贴在本教程中 “k” 步骤中的 DNS 值,然后点击 “open” 按钮。
8、 在要求用户名和密码时,输入 `ec2-user` 和空白密码,然后输入下面的命令。
```
$ sudo su -
```
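顺带一提,如果你用的是 Linux 或 macOS可以跳过 PuTTY直接使用 ssh 命令和 pem 密钥连接。下面的片段演示了命令的拼装方式(密钥文件名和主机名均为假设值):

```shell
key='mykey.pem'                                      # 从 AWS 下载的密钥对文件(假设值)
host='ec2-54-0-0-1.compute-1.amazonaws.com'          # 实例的公开 DNS 名称(假设值)

# 密钥文件权限必须收紧,否则 ssh 会拒绝使用它:
#   chmod 400 "$key"

# 拼出连接命令RHEL AMI 的默认用户是 ec2-user
cmd="ssh -i $key ec2-user@$host"
echo "$cmd"    # 输出ssh -i mykey.pem ec2-user@ec2-54-0-0-1.compute-1.amazonaws.com
```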
哈哈,你现在是在 AWS 云上托管的 Linux 服务器上的主人啦。
[
![Logged in to AWS EC2 server](https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/aws_putty1.JPG)
][15]
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/how-to-setup-linux-server-with-aws/
作者:[MANMOHAN MIRKAR][a]
译者:[geekpi](https://github.com/geekpi)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.howtoforge.com/tutorial/how-to-setup-linux-server-with-aws/
[1]:https://www.howtoforge.com/tutorial/how-to-setup-linux-server-with-aws/#setup-a-linux-vm-in-aws
[2]:https://www.howtoforge.com/tutorial/how-to-setup-linux-server-with-aws/#connect-to-an-ec-instance-from-windows
[3]:http://aws.amazon.com/free/
[4]:http://aws.amazon.com/ec2/
[5]:https://console.aws.amazon.com/
[6]:https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/big/aws_console.JPG
[7]:https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/big/aws_console_ec21.png
[8]:https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/big/aws_launch_ec2.png
[9]:https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/big/aws_free_tier_radio1.png
[10]:https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/big/aws_ssh_port1.png
[11]:https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/big/aws_key_pair.png
[12]:https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/big/aws_running_instance.png
[13]:https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/big/aws_dns_value.png
[14]:http://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html
[15]:https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/big/aws_putty1.JPG


@ -1,14 +1,13 @@
如何在 CentOS 7 上安装和安全配置 MariaDB 10
===========================================
**MariaDB** 是 MySQL 数据库的自由开源分支,与 MySQL 在设计思想上同出一源,在未来仍将是自由且开源的。
在这篇博文中,我将会介绍如何在当前使用最广的 RHEL/CentOS 和 Fedora 发行版上安装 **MariaDB 10.1** 稳定版。
目前了解到的情况是Red Hat Enterprise Linux/CentOS 7.0 发行版已将默认的数据库从 MySQL 切换到 MariaDB。
在本文中需要注意的是,我们假定您能够在服务器中使用 root 帐号工作,或者可以使用 [sudo][7] 命令运行任何命令。
### 第一步:添加 MariaDB yum 仓库
@ -39,6 +38,7 @@ baseurl = http://yum.mariadb.org/10.1/rhel7-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
```
[
![Add MariaDB Yum Repo](http://www.tecmint.com/wp-content/uploads/2017/02/Add-MariaDB-Repo.png)
][8]
@ -52,19 +52,21 @@ gpgcheck=1
```
# yum install MariaDB-server MariaDB-client -y
```
[
![Install MariaDB in CentOS 7](http://www.tecmint.com/wp-content/uploads/2017/02/Install-MariaDB-in-CentOS-7.png)
][9]
*在 CentOS 7 中安装 MariaDB*
3、 MariaDB 包安装完毕后,立即启动数据库服务守护进程,并可以通过下面的操作设置,在操作系统重启后自动启动服务。
```
# systemctl start mariadb
# systemctl enable mariadb
# systemctl status mariadb
```
[
![Start MariaDB Service in CentOS 7](http://www.tecmint.com/wp-content/uploads/2017/02/Start-MariaDB-Service-in-CentOS-7.png)
][10]
@@ -73,7 +75,7 @@ gpgcheck=1
### 第三步:在 CentOS 7 中对 MariaDB 进行安全配置
4. 现在可以通过以下操作进行安全配置:设置 MariaDB 的 root 账户密码,禁用 root 远程登录,删除测试数据库以及测试帐号,最后需要使用下面的命令重新加载权限。
4 现在可以通过以下操作进行安全配置:设置 MariaDB 的 root 账户密码,禁用 root 远程登录,删除测试数据库以及测试帐号,最后需要使用下面的命令重新加载权限。
```
# mysql_secure_installation
@@ -84,13 +86,14 @@ gpgcheck=1
*CentOS 7 中的 MySQL 安全配置*
5. 在配置完数据库的安全配置后,你可能想检查下 MariaDB 的特性,比如:版本号,默认参数列表,以及通过 MariaDB 命令行登录。如下所示:
5 在配置完数据库的安全配置后,你可能想检查下 MariaDB 的特性,比如:版本号、默认参数列表、以及通过 MariaDB 命令行登录。如下所示:
```
# mysql -V
# mysqld --print-defaults
# mysql -u root -p
```
[
![Verify MySQL Version](http://www.tecmint.com/wp-content/uploads/2017/02/Verify-MySQL-Version.png)
][12]
@@ -101,15 +104,15 @@ gpgcheck=1
如果你刚开始学习使用 MySQL/MariaDB可以通过以下指南学习
1. [Learn MySQL / MariaDB for Beginners Part 1][1]
2. [Learn MySQL / MariaDB for Beginners Part 2][2]
3. [MySQL Basic Database Administration Commands Part III][3]
4. [20 MySQL (Mysqladmin) Commands for Database Administration Part IV][4]
1. [新手学习 MySQL / MariaDB(一)][1]
2. [新手学习 MySQL / MariaDB(二)][2]
3. [MySQL 数据库基础管理命令(三)][3]
4. [20 个 MySQL 管理命令(Mysqladmin)][4]
同样查看下面的文档学习如何优化你的 MySQL/MariaDB 服务,并使用工具监控数据库的活动情况。
1. [15 Tips to Tune and Optimize Your MySQL/MariaDB Performance][5]
2. [4 Useful Tools to Monitor MySQL/MariaDB Database Activities][6]
1. [15 个 MySQL/MariaDB 调优技巧][5]
2. [4 个监控 MySQL/MariaDB 数据库的工具][6]
文章到此就结束了,本文内容比较浅显,文中主要展示了如何在 RHEL/CentOS 和 Fedora 操作系统中安装 **MariaDB 10.1** 稳定版。您可以通过下面的联系方式将您遇到的任何问题或者想法发给我们。

@@ -0,0 +1,99 @@
在 Linux 中修改 MySQL 或 MariaDB 的 Root 密码
============================================================
如果你是第一次[安装 MySQL 或 MariaDB][1],你可以执行 `mysql_secure_installation` 脚本来实现基本的安全设置。
其中的一个设置是数据库的 root 密码 —— 该密码必须保密,并且只在必要的时候使用。如果你需要修改它(例如,当数据库管理员换了人 —— 或者被解雇了!),本文就会派上用场。
**建议阅读:**[在 Linux 中恢复 MySQL 或 MariaDB 的 Root 密码][2]
我们将说明怎样在 Linux 中修改 MySQL 或 MariaDB 数据库服务器的 root 密码。
尽管我们会在本文中使用 MariaDB 服务器,但本文中的用法说明对 MySQL 也有效。
### 修改 MySQL 或 MariaDB 的 root 密码
你知道 root 密码,但是想要重置它,对于这样的情况,让我们首先确定 MariaDB 正在运行:
```
------------- CentOS/RHEL 7 and Fedora 22+ -------------
# systemctl is-active mariadb
------------- CentOS/RHEL 6 and Fedora -------------
# /etc/init.d/mysqld status
```
[
![Check MySQL Status](http://www.tecmint.com/wp-content/uploads/2017/03/Check-MySQL-Status.png)
][3]
*检查 MySQL 状态*
如果上面的命令返回中没有 `active` 这个关键词,那么该服务就是停止状态,你需要在进行下一步之前先启动数据库服务:
```
------------- CentOS/RHEL 7 and Fedora 22+ -------------
# systemctl start mariadb
------------- CentOS/RHEL 6 and Fedora -------------
# /etc/init.d/mysqld start
```
接下来,我们将以 root 登录进数据库服务器:
```
# mysql -u root -p
```
为了兼容不同版本,我们将使用下面的语句来更新 mysql 数据库中的用户表。注意,你需要将 `YourPasswordHere` 替换为你为 root 选择的新密码。
```
MariaDB [(none)]> USE mysql;
MariaDB [(none)]> UPDATE user SET password=PASSWORD('YourPasswordHere') WHERE User='root' AND Host = 'localhost';
MariaDB [(none)]> FLUSH PRIVILEGES;
```
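如果想在脚本中以非交互方式完成这一步,可以先把上述 SQL 语句生成出来,再通过管道交给 `mysql -u root -p`。下面是一个示意片段(`gen_root_password_sql` 是本文为演示假设的函数名;这里只打印生成的语句,并不真正执行):

```shell
# 生成修改 root 密码所需的 SQL(仅打印)
# 实际使用时可:gen_root_password_sql '新密码' | mysql -u root -p
gen_root_password_sql() {
  cat <<SQL
USE mysql;
UPDATE user SET password=PASSWORD('$1') WHERE User='root' AND Host='localhost';
FLUSH PRIVILEGES;
SQL
}

gen_root_password_sql 'YourPasswordHere'
```

注意把真实密码写进脚本或 shell 历史是有风险的,最好从环境变量或受保护的文件中读取。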
要验证是否操作成功,请输入以下命令退出当前 MariaDB 会话。
```
MariaDB [(none)]> exit;
```
然后,敲回车。你现在应该可以使用新密码连接到服务器了。
[
![Change MySQL/MariaDB Root Password](http://www.tecmint.com/wp-content/uploads/2017/03/Change-MySQL-Root-Password.png)
][4]
*修改 MySQL/MariaDB Root 密码*
##### 小结
在本文中,我们说明了如何修改 MariaDB / MySQL 的 root 密码 —— 或许你知道当前所讲的这个方法,也可能不知道。
像往常一样,如果你有任何问题或者反馈,请尽管使用下面的评论框来留下你宝贵的意见或建议,我们期待着您的留言。
--------------------------------------------------------------------------------
作者简介:
Gabriel Cánepa是一位来自阿根廷圣路易斯的 Villa Mercedes 的 GNU/Linux 系统管理员和 web 开发者。他为世界范围内的主要的消费产品公司工作,也很钟情于在他日常工作的方方面面中使用 FOSS 工具来提高生产效率。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/change-mysql-mariadb-root-password/
作者:[Gabriel Cánepa][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/install-mariadb-in-centos-7/
[2]:http://www.tecmint.com/reset-mysql-or-mariadb-root-password/
[3]:http://www.tecmint.com/wp-content/uploads/2017/03/Check-MySQL-Status.png
[4]:http://www.tecmint.com/wp-content/uploads/2017/03/Change-MySQL-Root-Password.png
[5]:http://www.tecmint.com/author/gacanepa/
[6]:http://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
[7]:http://www.tecmint.com/free-linux-shell-scripting-books/

@@ -0,0 +1,111 @@
如何在 Linux 中安装最新的 Python 3.6 版本
============================================================
在这篇文章中,我将展示如何在 CentOS/RHEL 7、Debian 以及它的衍生版本比如 Ubuntu(最新的 Ubuntu 16.04 LTS 版本已经安装了最新的 Python 版本)或 Linux Mint 上安装和使用 Python 3.x。我们的重点是安装可用于命令行的核心语言工具。
然后,我们也会阐述如何安装 Python IDLE,这是一个基于 GUI 的工具,它允许我们运行 Python 代码和创建独立函数。
### 在 Linux 中安装 Python 3.6
在我写这篇文章的时候2017 年三月中旬),在 CentOS 和 Debian 8 中可用的最新 Python 版本分别是 Python 3.4 和 Python 3.5 。
虽然我们可以使用 [yum][1] 和 [aptitude][2](或 [apt-get][3])安装核心安装包以及它们的依赖,但在这儿,我将阐述如何使用源代码进行安装。
为什么?理由很简单:这样我们能够获取该语言最新的稳定发行版(3.6),并且这种安装方法与具体的 Linux 发行版无关。
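动手编译之前,也可以先用几行 shell 判断系统自带的 Python 是否已经达到 3.6,从而决定是否需要源码安装。下面是一个示意片段(`version_ge` 为本文假设的函数名,借助 GNU `sort -V` 做版本排序比较;示例中的版本号是假设值,实际可代入 `python3 -V` 的输出):

```shell
# 版本比较:当 $1 >= $2 时返回真(依赖 GNU sort 的 -V 版本排序)
version_ge() {
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

current="3.4.5"   # 假设值,实际可从 python3 -V 2>&1 中提取
if version_ge "$current" "3.6.0"; then
  echo "系统 Python 已满足 3.6,无需源码安装"
else
  echo "需要从源码安装 3.6"
fi
```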
在 CentOS 7 中安装 Python 之前,请确保系统中已经有了所有必要的开发依赖:
```
# yum -y groupinstall development
# yum -y install zlib-devel
```
在 Debian 中,我们需要安装 gcc、make 和 zlib 压缩/解压缩库:
```
# aptitude -y install gcc make zlib1g-dev
```
运行下面的命令来安装 Python 3.6
```
# wget https://www.python.org/ftp/python/3.6.0/Python-3.6.0.tar.xz
# tar xJf Python-3.6.0.tar.xz
# cd Python-3.6.0
# ./configure
# make && make install
```
现在,放松一下,或者饿的话去吃个三明治,因为这可能需要花费一些时间。安装完成以后,使用下面的命令查看 Python 3 二进制文件的位置及其版本:
```
# which python3
# python3 -V
```
上面的命令的输出应该和这相似:
[
![Check Python Version in Linux](http://www.tecmint.com/wp-content/uploads/2017/03/Check-Python-Version-in-Linux.png)
][4]
*查看 Linux 系统中的 Python 版本*
要退出 Python 提示符,只需输入以下命令之一:
```
quit()
exit()
```
然后按回车键。
恭喜Python 3.6 已经安装在你的系统上了。
### 在 Linux 中安装 Python IDLE
Python IDLE 是一个基于 GUI 的 Python 工具。如果你想安装 Python IDLE,请安装叫做 idle(Debian)或 python-tools(CentOS)的包:
```
# apt-get install idle [On Debian]
# yum install python-tools [On CentOS]
```
输入下面的命令启动 Python IDLE
```
# idle
```
### 总结
在这篇文章中,我们阐述了如何从源代码安装最新的 Python 稳定版本。
最后同样重要的是,如果你之前使用 Python 2,那么你可能需要看一下 [2to3 的官方文档][5]。2to3 是一个可以读入 Python 2 代码,并将其转化为有效的 Python 3 代码的程序。
你有任何关于这篇文章的问题或想法吗?请使用下面的评论栏与我们联系
--------------------------------------------------------------------------------
作者简介:
Gabriel Cánepa - 一位来自阿根廷圣路易斯梅塞德斯镇 (Villa Mercedes, San Luis, Argentina) 的 GNU/Linux 系统管理员Web 开发者。就职于一家世界领先级的消费品公司,乐于在每天的工作中能使用 FOSS 工具来提高生产力。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/install-python-in-linux/
作者:[Gabriel Cánepa][a]
译者:[ucasFL](https://github.com/ucasFL)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/
[2]:http://www.tecmint.com/linux-package-management/
[3]:http://www.tecmint.com/useful-basic-commands-of-apt-get-and-apt-cache-for-package-management/
[4]:http://www.tecmint.com/wp-content/uploads/2017/03/Check-Python-Version-in-Linux.png
[5]:https://docs.python.org/3.6/library/2to3.html

@@ -0,0 +1,154 @@
Kgif一个从活动窗口创建 GIF 的简单脚本
============================================================
[Kgif][2] 是一个简单的 shell 脚本,它可以从活动窗口创建一个 GIF 文件。我觉得这个程序是专门为捕获终端活动设计的,我自己也经常这样用。
它将窗口的活动捕获为一系列的 PNG 图片,然后组合在一起创建一个 GIF 动画。脚本以 0.5 秒的间隔截取活动窗口。如果你觉得这不符合你的要求,你可以根据你的需要修改脚本。
最初,它是为了捕获 tty 输出以及为 GitHub 项目生成预览图而编写的。
确保你在运行 Kgif 之前已经安装了 scrot 和 ImageMagick 软件包。
推荐阅读:[Peek - 在 Linux 中创建一个 GIF 动画录像机][3]。
什么是 ImageMagickImageMagick 是一个命令行工具,用于图像转换和编辑。它支持所有类型的图片格式(超过 200 种),如 PNG、JPEG、JPEG-2000、GIF、TIFF、DPX、EXR、WebP、Postscript、PDF 和 SVG。
什么是 ScrotScrot 代表 SCReenshOT它是一个开源的命令行工具用于捕获桌面、终端或特定窗口的屏幕截图。
#### 安装依赖
Kgif 需要 scrot 以及 ImageMagick。
对于基于 Debian 的系统:
```
$ sudo apt-get install scrot imagemagick
```
对于基于 RHEL/CentOS 的系统:
```
$ sudo yum install scrot ImageMagick
```
对于 Fedora 系统:
```
$ sudo dnf install scrot ImageMagick
```
对于 openSUSE 系统:
```
$ sudo zypper install scrot ImageMagick
```
对于基于 Arch Linux 的系统:
```
$ sudo pacman -S scrot ImageMagick
```
#### 安装 Kgif 及使用
安装 Kgif 并不困难,因为不需要安装。只需从开发者的 github 页面克隆源文件,你就可以运行 `kgif.sh` 文件来捕获活动窗口了。默认情况下它的延迟为 1 秒,你可以用 `--delay` 选项来修改延迟。最后,按下 `Ctrl + c` 来停止捕获。
```
$ git clone https://github.com/luminousmen/Kgif
$ cd Kgif
$ ./kgif.sh
Setting delay to 1 sec
Capturing...
^C
Stop capturing
Converting to gif...
Cleaning...
Done!
```
检查系统中是否已存在依赖。
```
$ ./kgif.sh --check
OK: found scrot
OK: found imagemagick
```
设置在 N 秒延迟后开始捕获。
```
$ ./kgif.sh --delay=5
Setting delay to 5 sec
Capturing...
^C
Stop capturing
Converting to gif...
Cleaning...
Done!
```
它会将文件保存为 `terminal.gif`,并且每次在生成新文件时都会覆盖。因此,我建议你添加 `--filename` 选项将文件保存为不同的文件名。
```
$ ./kgif.sh --delay=5 --filename=2g-test.gif
Setting delay to 5 sec
Capturing...
^C
Stop capturing
Converting to gif...
Cleaning...
Done!
```
使用 `--noclean` 选项保留 png 截图。
```
$ ./kgif.sh --delay=5 --noclean
```
要了解更多的选项:
```
$ ./kgif.sh --help
usage: ./kgif.sh [--delay] [--filename ] [--gifdelay] [--noclean] [--check] [-h]
-h, --help Show this help, exit
--check Check if all dependencies are installed, exit
--delay= Set delay in seconds to specify how long script will wait until start capturing.
--gifdelay= Set delay in seconds to specify how fast images appears in gif.
--filename= Set file name for output gif.
--noclean Set if you don't want to delete source *.png screenshots.
```
默认的捕获输出如下:
[
![](http://www.2daygeek.com/wp-content/uploads/2017/03/kgif-test.gif)
][4]
我感觉默认的捕获非常快,接着我做了一些修改并得到了合适的输出。
[
![](http://www.2daygeek.com/wp-content/uploads/2017/03/kgif-test-delay-modified.gif)
][5]
--------------------------------------------------------------------------------
via: http://www.2daygeek.com/kgif-create-animated-gif-file-active-window-screen-recorder-capture-arch-linux-mint-fedora-ubuntu-debian-opensuse-centos/
作者:[MAGESH MARUTHAMUTHU][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.2daygeek.com/author/magesh/
[1]:http://www.2daygeek.com/author/magesh/
[2]:https://github.com/luminousmen/Kgif
[3]:http://www.2daygeek.com/kgif-create-animated-gif-file-active-window-screen-recorder-capture-arch-linux-mint-fedora-ubuntu-debian-opensuse-centos/www.2daygeek.com/peek-create-animated-gif-screen-recorder-capture-arch-linux-mint-fedora-ubuntu/
[4]:http://www.2daygeek.com/wp-content/uploads/2017/03/kgif-test.gif
[5]:http://www.2daygeek.com/wp-content/uploads/2017/03/kgif-test-delay-modified.gif

@@ -0,0 +1,100 @@
ELRepo - Enterprise Linux(RHEL、CentOS 及 SL)的社区仓库
============================================================
如果你正在使用 Enterprise Linux 发行版Red Hat Enterprise Linux 或其衍生产品,如 CentOS 或 Scientific Linux并且需要对特定硬件或新硬件支持那么你找对地方了。
在本文中,我们将讨论如何启用 ELRepo 仓库,该软件源包含文件系统驱动以及网络摄像头驱动程序等等(支持显卡、网卡、声音设备甚至[新内核][1])。
### 在 Enterprise Linux 中启用 ELRepo
虽然 ELRepo 是第三方仓库,但它有 Freenode(#elrepo)上的一个活跃社区,以及用户邮件列表的良好支持。
如果你仍然对在软件源中添加一个独立的仓库表示担心,请注意 CentOS 已在它的 wiki([参见此处][2])中将它列为可靠仓库。如果你仍然有疑虑,请随时在评论中提问!
需要注意的是,ELRepo 不仅对 Enterprise Linux 7 提供支持,还支持以前的版本。考虑到 CentOS 5 在本月底(2017 年 3 月)结束支持(EOL),这可能看起来并不是一件很大的事,但请记住,CentOS 6 的 EOL 不会早于 2020 年 3 月。
不管你用的 EL 是何版本,在实际启用时需要先导入 GPG 密钥:
```
# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
```
**在 EL5 中启用 ELRepo**
```
# rpm -Uvh http://www.elrepo.org/elrepo-release-5-5.el5.elrepo.noarch.rpm
```
**在 EL6 中启用 ELRepo**
```
# rpm -Uvh http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm
```
**在 EL7 中启用 ELRepo**
```
# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
```
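三条命令只差在版本号上,因此也可以让脚本根据 `/etc/redhat-release` 自动挑选对应的 elrepo-release 包。下面是一个示意片段(`elrepo_release_url` 为本文假设的函数名;`release_line` 在示例中写死为一行假设内容,实际可改为读取 `/etc/redhat-release`;这里只打印 URL,不执行安装):

```shell
# 根据 EL 主版本号返回对应的 elrepo-release 包地址(仅打印,不安装)
elrepo_release_url() {
  case "$1" in
    5) echo "http://www.elrepo.org/elrepo-release-5-5.el5.elrepo.noarch.rpm" ;;
    6) echo "http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm" ;;
    7) echo "http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm" ;;
    *) echo "不支持的版本: $1" >&2; return 1 ;;
  esac
}

# 示例输入(假设内容);实际可换成:release_line="$(cat /etc/redhat-release)"
release_line="CentOS Linux release 7.3.1611 (Core)"
major="$(printf '%s\n' "$release_line" | grep -o '[0-9]\+' | head -n1)"
elrepo_release_url "$major"
```

确认打印出的 URL 无误后,再把它交给 `rpm -Uvh` 即可。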
这篇文章只会覆盖 EL7,并在接下来的小节中分享几个例子。
### 理解 ELRepo 频道
为了更好地组织仓库中的软件ELRepo 共分为 4 个独立频道:
* elrepo 是主频道,默认情况下启用。它不包含正式发行版中的包。
* elrepo-extras 包含可以替代发行版提供的软件包。默认情况下不启用。为了避免混淆,当需要从该仓库中安装或更新软件包时,可以通过以下方式临时启用该频道(将软件包替换为实际软件包名称):`# yum --enablerepo=elrepo-extras install package`
* elrepo-testing 提供将放入主频道中,但是仍在测试中的软件包。
* elrepo-kernel 提供长期支持版及稳定主线版的内核,它们已经特别为 EL 配置过。
默认情况下elrepo-testing 和 elrepo-kernel 都被禁用,如果我们[需要从中安装或更新软件包][3],可以像 elrepo-extras 那样启用它们。
要列出每个频道中的可用软件包,请运行以下命令之一:
```
# yum --disablerepo="*" --enablerepo="elrepo" list available
# yum --disablerepo="*" --enablerepo="elrepo-extras" list available
# yum --disablerepo="*" --enablerepo="elrepo-testing" list available
# yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
```
下面的图片说明了第一个例子:
[
![List ELRepo Available Packages](http://www.tecmint.com/wp-content/uploads/2017/03/List-ELRepo-Available-Packages.png)
][4]
*列出 ELRepo 可用的软件包*
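如果经常要在不同频道之间切换查询,可以把上面的命令模式封装成一个小函数,按频道名拼出完整命令。下面是一个示意片段(`list_channel_cmd` 为本文假设的函数名;为安全起见只打印命令而不执行,确认无误后可手动运行输出的命令):

```shell
# 按频道名生成 yum 查询命令(dry-run:仅打印,不执行)
list_channel_cmd() {
  printf 'yum --disablerepo="*" --enablerepo="%s" list available\n' "$1"
}

list_channel_cmd elrepo-kernel
```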
##### 总结
本篇文章中,我们解释了 ELRepo 是什么,以及如何将它添加到你的软件源。
如果你对本文有任何问题或意见,请随时在评论栏中联系我们。我们期待你的回音!
--------------------------------------------------------------------------------
作者简介:
Gabriel Cánepa - 一位来自阿根廷圣路易斯梅塞德斯镇 (Villa Mercedes, San Luis, Argentina) 的 GNU/Linux 系统管理员Web 开发者。就职于一家世界领先级的消费品公司,乐于在每天的工作中能使用 FOSS 工具来提高生产力。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/enable-elrepo-in-rhel-centos-scientific-linux/
作者:[Gabriel Cánepa][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/install-upgrade-kernel-version-in-centos-7/
[2]:https://wiki.centos.org/AdditionalResources/Repositories
[3]:http://www.tecmint.com/auto-install-security-patches-updates-on-centos-rhel/
[4]:http://www.tecmint.com/wp-content/uploads/2017/03/List-ELRepo-Available-Packages.png
[5]:http://www.tecmint.com/author/gacanepa/
[6]:http://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
[7]:http://www.tecmint.com/free-linux-shell-scripting-books/

@@ -1,3 +1,4 @@
# rusking translating
What a Linux Desktop Does Better
============================================================

@@ -0,0 +1,76 @@
Hire a DDoS service to take down your enemies
========================
>With the rampant availability of IoT devices, cybercriminals offer denial of service attacks to take advantage of password problems.
![](http://images.techhive.com/images/article/2016/12/7606416730_e659cea89c_o-100698667-large.jpg)
With the onrush of connected internet of things (IoT) devices, distributed denial-of-service attacks are becoming a dangerous trend. Similar to what happened to [DNS service provider Dyn last fall][3], anyone and everyone is in the crosshairs. The idea of using unprotected IoT devices as a way to bombard networks is gaining momentum.
The advent of DDoS-for-hire services means that even the least tech-savvy individual can exact revenge on some website. Step on up to the counter and purchase a stresser that can systematically take down a company.
According to [Neustar][4], almost three quarters of all global brands, organizations and companies have been victims of a DDoS attack. And more than 3,700 [DDoS attacks occur each day][5].
Chase Cunningham, director of cyber operations at A10 Networks, said to find IoT-enabled devices, all you have to do is go on an underground site and ask around for the Mirai scanner code. Once you have that you can scan for anything talking to the internet that can be used for that type of attack.  
“Or you can go to a site like Shodan and craft a couple of simple queries to look for device specific requests. Once you get that information you just go to your DDoS for hire tool and change the configuration to point at the right target and use the right type of traffic emulator and bingo, nuke whatever you like,” he said.
“Basically everything is for sale," he added. "You can buy a 'stresser', which is just a simple botnet type offering that will allow anyone who knows how to click the start button access to a functional DDoS botnet.”
Cybersecurity vendor Imperva says for just a few dozen dollars, users can quickly get an attack up and running. The company writes on its website that these kits contain the bot payload and the CnC (command and control) files. Using these, aspiring bot masters (a.k.a. herders) can start distributing malware, infecting devices through the use of spam email, vulnerability scanners, brute force attacks and more.
Most [stressers and booters][6] have embraced a commonplace SaaS (software as a service) business model, based on subscriptions. As the Incapsula [Q2 2015 DDoS report][7] has shown, the average one hour/month DDoS package will cost $38 (with $19.99 at the lower end of the scale).
![ddos hire](http://images.techhive.com/images/article/2017/03/ddos-hire-100713247-large.jpg)
“Stresser and booter services are just a byproduct of a new reality, where services that can bring down businesses and organizations are allowed to operate in a dubious grey area,” Imperva wrote.
While cost varies, [attacks can run businesses anywhere from $14,000 to $2.35 million per incident][8]. And once a business is attacked, there's an [82 percent chance they'll be attacked again][9].
DDoS of Things (DoT) use IoT devices to build botnets that create large DDoS attacks. The DoT attacks have leveraged hundreds of thousands of IoT devices to attack anything from large service providers to enterprises. 
“Most of the reputable DDoS sellers have changeable configurations for their tool sets so you can easily set the type of attack you want to take place. I haven't seen many yet that specifically include the option to purchase an IoT-specific traffic emulator but I'm sure it's coming. If it were me running the service I would definitely have that as an option,” Cunningham said.
According to an IDG News Service story, building a DDoS-for-hire service can also be easy. Often the hackers will rent six to 12 servers, and use them to push out internet traffic to whatever target. In late October, HackForums.net [shut down][10] its "Server Stress Testing" section, amid concerns that hackers were peddling DDoS-for-hire services through the site for as little as $10 a month.
Also in December, law enforcement agencies in the U.S. and Europe [arrested][11] 34 suspects involved in DDoS-for-hire services.
If it is so easy to do so, why don't these attacks happen more often?
Cunningham said that these attacks do happen all the time; in fact, they happen every second of the day. “You just don't hear about it because a lot of these are more nuisance attacks than big-time, bring-down-the-house DDoS-type events,” he said.
Also a lot of the attack platforms being sold only take systems down for an hour or a bit longer. Usually an hour-long attack on a site will cost anywhere from $15 to $50. It depends, though; sometimes for better attack platforms it can cost hundreds of dollars an hour, he said.
The solution to cutting down on these attacks involves users resetting factory preset passwords on anything connected to the internet. Change the default password settings and disable things that you really don't need.
--------------------------------------------------------------------------------
via: http://www.csoonline.com/article/3180246/data-protection/hire-a-ddos-service-to-take-down-your-enemies.html
作者:[Ryan Francis][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.csoonline.com/author/Ryan-Francis/
[1]:http://csoonline.com/article/3103122/security/how-can-you-detect-a-fake-ransom-letter.html#tk.cso-infsb
[2]:https://www.incapsula.com/ddos/ddos-attacks/denial-of-service.html
[3]:http://csoonline.com/article/3135986/security/ddos-attack-against-overwhelmed-despite-mitigation-efforts.html
[4]:https://ns-cdn.neustar.biz/creative_services/biz/neustar/www/resources/whitepapers/it-security/ddos/2016-apr-ddos-report.pdf
[5]:https://www.a10networks.com/resources/ddos-trends-report
[6]:https://www.incapsula.com/ddos/booters-stressers-ddosers.html
[7]:https://www.incapsula.com/blog/ddos-global-threat-landscape-report-q2-2015.html
[8]:http://www.datacenterknowledge.com/archives/2016/05/13/number-of-costly-dos-related-data-center-outages-rising/
[9]:http://www.networkworld.com/article/3064677/security/hit-by-ddos-you-will-likely-be-struck-again.html
[10]:http://www.pcworld.com/article/3136730/hacking/hacking-forum-cuts-section-allegedly-linked-to-ddos-attacks.html
[11]:http://www.pcworld.com/article/3149543/security/dozens-arrested-in-international-ddos-for-hire-crackdown.html

@@ -0,0 +1,80 @@
Why AlphaGo Is Not AI
============================================================
![null](http://spectrum.ieee.org/img/icub-1458246741752.jpg)
>Photo: RobotCub
>“There is no AI without robotics,” the author argues.
_This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE._
What is AI and what is not AI is, to some extent, a matter of definition. There is no denying that AlphaGo, the Go-playing artificial intelligence designed by Google DeepMind that [recently beat world champion Lee Sedol][1], and similar [deep learning approaches][2] have managed to solve quite hard computational problems in recent years. But is it going to get us to  _full AI_ , in the sense of an artificial general intelligence, or [AGI][3], machine? Not quite, and here is why.
One of the key issues when building an AGI is that it will have to make sense of the world for itself, to develop its own, internal meaning for everything it will encounter, hear, say, and do. Failing to do this, you end up with today's AI programs where all the meaning is actually provided by the designer of the application: the AI basically doesn't understand what is going on and has a narrow domain of expertise.
The problem of meaning is perhaps the most fundamental problem of AI and has still not been solved today. One of the first to express it was cognitive scientist Stevan Harnad, in his 1990 paper about “The Symbol Grounding Problem.” Even if you don't believe we are explicitly manipulating symbols, which is indeed questionable, the problem remains: _the grounding of whatever representation exists inside the system into the real world outside_.
To be more specific, the problem of meaning leads us to four sub-problems:
1. How do you structure the information the agent (human or AI) is receiving from the world?
2. How do you link this structured information to the world, or, taking the above definition, how do you build “meaning” for the agent?
3. How do you synchronize this meaning with other agents? (Otherwise, there is no communication possible and you get an incomprehensible, isolated form of intelligence.)
4. Why does the agent do something at all rather than nothing? How to set all this into motion?
The first problem, about structuring information, is very well addressed by deep learning and similar unsupervised learning algorithms, used for example in the [AlphaGo program][4]. We have made tremendous progress in this area, in part because of the recent gain in computing power and the use of GPUs that are especially good at parallelizing information processing. What these algorithms do is take a signal that is extremely redundant and expressed in a high dimensional space, and reduce it to a low dimensionality signal, minimizing the loss of information in the process. In other words, it “captures” what is important in the signal, from an information processing point of view.
The second problem, about linking information to the real world, or creating “meaning,” is fundamentally tied to robotics. Because you need a body to interact with the world, and you need to interact with the world to build this link. That's why I often say that there is no AI without robotics (although there can be pretty good robotics without AI, but that's another story). This realization is often called the “embodiment problem” and most researchers in AI now agree that intelligence and embodiment are tightly coupled issues. Every different body has a different form of intelligence, and you see that pretty clearly in the animal kingdom.
It starts with simple things like making sense of your own body parts, and how you can control them to produce desired effects in the observed world around you, how you build your own notion of space, distance, color, etc. This has been studied extensively by researchers like [J. Kevin O'Regan][5] and his “sensorimotor theory.” It is just a first step however, because then you have to build up more and more abstract concepts, on top of those grounded sensorimotor structures. We are not quite there yet, but that's the current state of research on that matter.
The third problem is fundamentally the question of the origin of culture. Some animals show some simple form of culture, even transgenerational acquired competencies, but it is very limited and only humans have reached the threshold of exponentially growing acquisition of knowledge that we call culture. Culture is the essential catalyst of intelligence and an AI without the capability to interact culturally would be nothing more than an academic curiosity.
However, culture can not be hand coded into a machine; it must be the result of a learning process. The best way to start looking to try to understand this process is in developmental psychology, with the work of Jean Piaget and Michael Tomasello, studying how children acquire cultural competencies. This approach gave birth to a new discipline in robotics called “developmental robotics,” which is taking the child as a model (as illustrated by the [iCub robot][6], pictured above).
It is also closely linked to the study of language learning, which is one of the topics that I mostly focused on as a researcher myself. The work of people like [Luc Steels][7] and many others has shown that we can see language acquisition as an evolutionary process: the agent creates new meanings by interacting with the world, uses them to communicate with other agents, and selects the most successful structures that help to communicate (that is, to achieve joint intentions, mostly). After hundreds of trial-and-error steps, just like with biological evolution, the system evolves the best meanings and their syntactic/grammatical translation.
This process has been tested experimentally and shows striking resemblance with how natural languages evolve and grow. Interestingly, it accounts for instantaneous learning, when a concept is acquired in one shot, something that heavily statistical models like deep learning are _not_ capable of explaining. Several research labs are now trying to go further into acquiring grammar, gestures, and more complex cultural conventions using this approach, in particular the [AI Lab][8] that I founded at [Aldebaran][9], the French robotics company—now part of the SoftBank Group—that created the robots [Nao][10], [Romeo][11], and [Pepper][12] (pictured below).
![img](http://spectrum.ieee.org/image/MjczMjg3Ng)
>Aldebaran's humanoid robots: Nao, Romeo, and Pepper.
Finally, the fourth problem deals with what is called “intrinsic motivation.” Why does the agent do anything at all, rather than nothing? Survival requirements are not enough to explain human behavior. Even perfectly fed and secure, humans don't just sit idle until hunger comes back. There is more: they explore, they try, and all of that seems to be driven by some kind of intrinsic curiosity. Researchers like [Pierre-Yves Oudeyer][13] have shown that simple mathematical formulations of curiosity, as an expression of the tendency of the agent to maximize its rate of learning, are enough to account for incredibly complex and surprising behaviors (see, for example, [the Playground experiment][14] done at Sony CSL).
It seems that something similar is needed inside the system to drive its desire to go through the previous three steps: structure the information of the world, connect it to its body and create meaning, and then select the most “communicationally efficient” one to create a joint culture that enables cooperation. This is, in my view, the program of AGI.
Again, the rapid advances of deep learning and the recent success of this kind of AI at games like Go are very good news because they could lead to lots of really useful applications in medical research, industry, environmental preservation, and many other areas. But this is only one part of the problem, as I've tried to show here. I don't believe deep learning is the silver bullet that will get us to true AI, in the sense of a machine that is able to learn to live in the world, interact naturally with us, understand deeply the complexity of our emotions and cultural biases, and ultimately help us to make a better world.
**[Jean-Christophe Baillie][15] is founder and president of [Novaquark][16], a Paris-based virtual reality startup developing [Dual Universe][17], a next-generation online world where participants will be able to create entire civilizations through fully emergent gameplay. A graduate from the École Polytechnique in Paris, Baillie received a PhD in AI from Paris IV University and founded the Cognitive Robotics Lab at ENSTA ParisTech and, later, Gostai, a robotics company acquired by the Aldebaran/SoftBank Group in 2012. This article originally [appeared][18] on LinkedIn.**
--------------------------------------------------------------------------------
via: http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/why-alphago-is-not-ai
作者:[Jean-Christophe Baillie][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linkedin.com/in/jcbaillie
[1]:http://spectrum.ieee.org/tech-talk/computing/networks/alphago-wins-match-against-top-go-player
[2]:http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/facebook-ai-director-yann-lecun-on-deep-learning
[3]:https://en.wikipedia.org/wiki/Artificial_general_intelligence
[4]:http://spectrum.ieee.org/tech-talk/computing/software/monster-machine-defeats-prominent-pro-player
[5]:http://nivea.psycho.univ-paris5.fr/
[6]:http://www.icub.org/
[7]:https://ai.vub.ac.be/members/steels
[8]:http://a-labs.aldebaran.com/labs/ai-lab
[9]:https://www.aldebaran.com/en
[10]:http://spectrum.ieee.org/automaton/robotics/humanoids/aldebaran-new-nao-robot-demo
[11]:http://spectrum.ieee.org/automaton/robotics/humanoids/france-developing-advanced-humanoid-robot-romeo
[12]:http://spectrum.ieee.org/robotics/home-robots/how-aldebaran-robotics-built-its-friendly-humanoid-robot-pepper
[13]:http://www.pyoudeyer.com/
[14]:http://www.pyoudeyer.com/SS305OudeyerP-Y.pdf
[15]:https://www.linkedin.com/in/jcbaillie
[16]:http://www.dualthegame.com/novaquark
[17]:http://www.dualthegame.com/
[18]:https://www.linkedin.com/pulse/why-alphago-ai-jean-christophe-baillie

@@ -0,0 +1,98 @@
#rusking translating
Why do you use Linux and open source software?
============================================================
>LinuxQuestions.org readers share reasons they use Linux and open source technologies. How will Opensource.com readers respond?
![Why do you use Linux and open source software?](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/BUS_consensuscollab2.png?itok=j5vPMv-V "Why do you use Linux and open source software?")
>Image by : opensource.com
As I mentioned when [The Queue][4] launched, although typically I will answer questions from readers, sometimes I'll switch that around and ask readers a question. I haven't done so since that initial column, so it's overdue. I recently asked two related questions at LinuxQuestions.org and the response was overwhelming. Let's see how the Opensource.com community answers both questions, and how those responses compare and contrast to those on LQ.
### Why do you use Linux?
The first question I asked the LinuxQuestions.org community is: **[What are the reasons you use Linux?][1]**
### Answer highlights
_oldwierdal_ : I use Linux because it is fast, safe, and reliable. With contributors from all over the world, it has become, perhaps, the most advanced and innovative software available. And, here is the icing on the red-velvet cake; It is free!
_Timothy Miller_ : I started using it because it was free as in beer and I was poor so couldn't afford to keep buying new Windows licenses.
_ondoho_ : Because it's a global community effort, self-governed grassroot operating system. Because it's free in every sense. Because there's good reason to trust in it.
_joham34_ : Stable, free, safe, runs in low specs PCs, nice support community, little to no danger for viruses.
_Ook_ : I use Linux because it just works, something Windows never did well for me. I don't have to waste time and money getting it going and keeping it going.
_rhamel_ : I am very concerned about the loss of privacy as a whole on the internet. I recognize that compromises have to be made between privacy and convenience. I may be fooling myself but I think Linux gives me at least the possibility of some measure of privacy.
_educateme_ : I use Linux because of the open-minded, learning-hungry, passionately helpful community. And, it's free.
_colinetsegers_ : Why I use Linux? There's not only one reason. In short I would say:
1. The philosophy of free shared knowledge.
2. Feeling safe while surfing the web.
3. Lots of free and useful software.
_bamunds_ : Because I love freedom.
_cecilskinner1989_ : I use linux for two reasons: stability and privacy.
### Why do you use open source software?
The second question is, more broadly: **[What are the reasons you use open source software?][2]** You'll notice that, although there is a fair amount of overlap here, the general tone is different, with some sentiments receiving more emphasis, and others less.
### Answer highlights
_robert leleu_ : Warm and cooperative atmosphere is the main reason of my addiction to open source.
_cjturner_ : Open Source is an answer to the Pareto Principle as applied to Applications; OOTB, a software package ends up meeting 80% of your requirements, and you have to get the other 20% done. Open Source gives you a mechanism and a community to share this burden, putting your own effort (if you have the skills) or money into your high-priority requirements.
_Timothy Miller_ : I like the knowledge that I  _can_  examine the source code to verify that the software is secure if I so choose.
_teckk_ : There are no burdensome licensing requirements or DRM and it's available to everyone.
_rokytnji_ : Beer money. Motorcycle parts. Grandkids birthday presents.
_timl_ : Privacy is impossible without free software
_hazel_ : I like the philosophy of free software, but I wouldn't use it just for philosophical reasons if Linux was a bad OS. I use Linux because I love Linux, and because you can get it for free as in free beer. The fact that it's also free as in free speech is a bonus, because it makes me feel good about using it. But if I find that a piece of hardware on my machine needs proprietary firmware, I'll use proprietary firmware.
_lm8_ : I use open source software because I don't have to worry about it going obsolete when a company goes out of business or decides to stop supporting it. I can continue to update and maintain the software myself. I can also customize it if the software does almost everything I want, but it would be nice to have a few more features. I also like open source because I can share my favorite programs with friends and coworkers.
_donguitar_ : Because it empowers me and enables me to empower others.
### Your turn
So, what are the reasons  _**you**_  use Linux? What are the reasons  _**you**_  use open source software? Let us know in the comments.
### Fill The Queue
Lastly, what questions would you like to see answered in a future article? From questions on building and maintaining communities, to what you'd like to know about contributing to an open source project, to questions more technical in nature—[submit your Linux and open source questions][5].
--------------------------------------------------------------------------------
作者简介:
Jeremy Garcia - Jeremy Garcia is the founder of LinuxQuestions.org and an ardent but realistic open source advocate. Follow Jeremy on Twitter: @linuxquestions
------------------
via: https://opensource.com/article/17/3/why-do-you-use-linux-and-open-source-software
作者:[Jeremy Garcia][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/jeremy-garcia
[1]:http://www.linuxquestions.org/questions/linux-general-1/what-are-the-reasons-you-use-linux-4175600842/
[2]:http://www.linuxquestions.org/questions/linux-general-1/what-are-the-reasons-you-use-open-source-software-4175600843/
[3]:https://opensource.com/article/17/3/why-do-you-use-linux-and-open-source-software?rate=lVazcbF6Oern5CpV86PgNrRNZltZ8aJZwrUp7SrZIAw
[4]:https://opensource.com/tags/queue-column
[5]:https://opensource.com/thequeue-submit-question
[6]:https://opensource.com/user/86816/feed
[7]:https://opensource.com/article/17/3/why-do-you-use-linux-and-open-source-software#comments
[8]:https://opensource.com/users/jeremy-garcia

translating by xiaow6
Your visual how-to guide for SELinux policy enforcement
============================================================
![SELinux policy guide](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life-uploads/selinux_rules_lead_image.png?itok=jxV7NgtD "Your visual how-to guide for SELinux policy enforcement")
>Image by : opensource.com
We are celebrating the 10th anniversary of SELinux this year. Hard to believe it. SELinux was first introduced in Fedora Core 3 and later in Red Hat Enterprise Linux 4. For those who have never used SELinux, or would like an explanation...
More Linux resources
* [What is Linux?][1]
* [What are Linux containers?][2]
* [Managing devices in Linux][3]
* [Download Now: Linux commands cheat sheet][4]
* [Our latest Linux articles][5]
SELinux is a labeling system. Every process has a label. Every file/directory object in the operating system has a label. Even network ports, devices, and potentially hostnames have labels assigned to them. We write rules to control the access of a process label to an object label, like a file. We call this  _policy_ . The kernel enforces the rules. Sometimes this enforcement is called Mandatory Access Control (MAC). 
The owner of an object does not have discretion over the security attributes of that object. Standard Linux access control (owner/group plus permission flags like rwx) is often called Discretionary Access Control (DAC). SELinux has no concept of UID or ownership of files; everything is controlled by the labels. This means an SELinux system can be set up without an all-powerful root process. 
**Note:**  _SELinux does not let you sidestep DAC controls. SELinux is a parallel enforcement model. An application has to be allowed by BOTH SELinux and DAC to do certain activities. This can lead to confusion for administrators, because when a process gets Permission Denied, they assume something is wrong with DAC rather than with the SELinux labels._
### Type enforcement
Let's look a little further at the labels. SELinux's primary model of enforcement is called  _type enforcement_ . Basically, this means we define the label on a process based on its type, and the label on a file system object based on its type.
_Analogy_
Imagine a system where we define types on objects like cats and dogs. A cat and dog are process types.
_*all cartoons by [Máirín Duffy][6]_
![Image showing a cartoon of a cat and dog.](https://opensource.com/sites/default/files/images/life-uploads/type-enforcement_01_catdog.png)
We have a class of objects that these processes want to interact with, which we call food. And I want to add types to the food:  _cat_chow_  and  _dog_chow_ . 
![Cartoon Cat eating Cat Food and Dog eating Dog Food](https://opensource.com/sites/default/files/images/life-uploads/type-enforcement_03_foods.png)
As a policy writer, I would say that a dog has permission to eat  _dog_chow_  food and a cat has permission to eat  _cat_chow_  food. In SELinux we would write this rule in policy.
![allow cat cat_chow:food eat; allow dog dog_chow:food eat](https://opensource.com/sites/default/files/images/life-uploads/type-enforcement_04_policy.png "SELinux rule")
allow cat cat_chow:food eat;
allow dog dog_chow:food eat;
With these rules the kernel would allow the cat process to eat food labeled  _cat_chow _ and the dog to eat food labeled  _dog_chow_ .
![Cartoon Cat eating Cat Food and Dog eating Dog Food](https://opensource.com/sites/default/files/images/life-uploads/type-enforcement_02_eat.png)
But in an SELinux system everything is denied by default. This means that if the dog process tried to eat the  _cat_chow_ , the kernel would prevent it.
![](https://opensource.com/sites/default/files/images/life-uploads/type-enforcement_06_tux-dog-leash.png)
Likewise cats would not be allowed to touch dog food.
![Cartoon cat not allowed to eat dog food](https://opensource.com/sites/default/files/images/life-uploads/mcs-enforcement_07_tux-cat-no.png "Cartoon cat not allowed to eat dog food")
_Real world_
We label Apache processes as  _httpd_t_  and we label Apache content as  _httpd_sys_content_t _ and  _httpd_sys_content_rw_t_ . Imagine we have credit card data stored in a mySQL database which is labeled  _msyqld_data_t_ . If an Apache process is hacked, the hacker could get control of the  _httpd_t process_  and would be allowed to read  _httpd_sys_content_t_  files and write to  _httpd_sys_content_rw_t_ . But the hacker would not be allowed to read the credit card data ( _mysqld_data_t_ ) even if the process was running as root. In this case SELinux has mitigated the break in.
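The real-world rules look just like the cat-and-dog rules above. Here is a simplified, illustrative sketch in SELinux policy notation (the type names come from the article; the exact permission sets in the real Fedora/RHEL policy are more involved):

```
# Apache may read static content and read/write rw content.
allow httpd_t httpd_sys_content_t:file { read getattr open };
allow httpd_t httpd_sys_content_rw_t:file { read write create unlink };

# No rule grants httpd_t access to mysqld_data_t, so the
# credit card data is denied by default.
```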
### MCS enforcement
_Analogy _
Above, we typed the dog process and cat process, but what happens if you have multiple dog processes: Fido and Spot? You want to stop Fido from eating Spot's  _dog_chow_ .
![SELinux rule](https://opensource.com/sites/default/files/resize/images/life-uploads/mcs-enforcement_02_fido-eat-spot-food-500x251.png "SELinux rule")
One solution would be to create lots of new types, like  _Fido_dog_  and  _Fido_dog_chow_ . But this would quickly become unwieldy, because all dogs have pretty much the same permissions.
To handle this we developed a new form of enforcement, which we call Multi Category Security (MCS). In MCS, we add another section of the label which we can apply to the dog process and to the dog_chow food. Now we label the dog process as  _dog:random1 _ (Fido) and  _dog:random2_  (Spot).
![Cartoon of two dogs fido and spot](https://opensource.com/sites/default/files/images/life-uploads/mcs-enforcement_01_fido-spot.png)
We label the dog chow as  _dog_chow:random1_  (Fido) and  _dog_chow:random2_  (Spot).
![SELinux rule](https://opensource.com/sites/default/files/images/life-uploads/mcs-enforcement_03_foods.png "SELinux rule")
MCS rules say that if the type enforcement rules pass and the random MCS labels match exactly, then the access is allowed; if not, it is denied.
Fido (dog:random1) trying to eat  _cat_chow:food_  is denied by type enforcement.
![Cartoon of Kernel (Penguin) holding leash to prevent Fido from eating cat food.](https://opensource.com/sites/default/files/images/life-uploads/mcs-enforcement_04-bad-fido-cat-chow.png)
Fido (dog:random1) is allowed to eat  _dog_chow:random1._
![Cartoon Fido happily eating his dog food](https://opensource.com/sites/default/files/images/life-uploads/mcs-enforcement_05_fido-eat-fido-food.png)
Fido (dog:random1) is denied access to Spot's ( _dog_chow:random2_ ) food.
![Cartoon of Kernel (Penguin) holding leash to prevent Fido from eating Spot's dog food.](https://opensource.com/sites/default/files/images/life-uploads/mcs-enforcement_06_fido-no-spot-food.png)
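The three cases above can be written as a tiny decision function. This is a minimal Python sketch of the MCS logic for the analogy (the `TYPE_RULES` table and labels are invented for illustration, not real policy):

```python
# Hypothetical type-enforcement rules from the analogy.
TYPE_RULES = {("dog", "dog_chow"), ("cat", "cat_chow")}

def access_allowed(process_label, object_label):
    """Allow only if a type rule exists AND the MCS categories
    match exactly; everything else is denied by default."""
    ptype, pcats = process_label
    otype, ocats = object_label
    if (ptype, otype) not in TYPE_RULES:
        return False          # denied by type enforcement
    return pcats == ocats     # MCS requires an exact match

fido = ("dog", "random1")
assert access_allowed(fido, ("dog_chow", "random1"))      # his own chow
assert not access_allowed(fido, ("dog_chow", "random2"))  # Spot's chow
assert not access_allowed(fido, ("cat_chow", "random1"))  # cat chow
```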
_Real world_
In computer systems we often have lots of processes all with the same access, but we want them separated from each other. We sometimes call this a  _multi-tenant environment_ . The best example of this is virtual machines. If I have a server running lots of virtual machines, and one of them gets hacked, I want to prevent it from attacking the other virtual machines and virtual machine images. In a type enforcement system, the KVM virtual machine is labeled  _svirt_t_  and the image is labeled  _svirt_image_t_ . We have rules that say  _svirt_t_  can read/write/delete content labeled  _svirt_image_t_ . With libvirt we implemented not only type enforcement separation, but also MCS separation. When libvirt is about to launch a virtual machine, it picks out a random MCS label like  _s0:c1,c2_ , then assigns the  _svirt_image_t:s0:c1,c2_  label to all of the content that the virtual machine is going to need to manage. Finally, it launches the virtual machine as  _svirt_t:s0:c1,c2_ . The SELinux kernel then ensures that  _svirt_t:s0:c1,c2_  cannot write to  _svirt_image_t:s0:c3,c4_ , even if a hacker takes over the virtual machine, and even if it is running as root.
We use [similar separation][8] in OpenShift. Each gear (user/app process) runs with the same SELinux type (openshift_t). Policy defines the rules controlling the access of the gear type, and a unique MCS label ensures that one gear cannot interact with other gears.
Watch [this short video][9] on what would happen if an Openshift gear became root.
### MLS enforcement
Another form of SELinux enforcement, used much less frequently, is called Multi Level Security (MLS); it was developed back in the 60s and is used mainly in trusted operating systems like Trusted Solaris.
The main idea is to control processes based on the level of the data they will be using. A  _secret_  process cannot read  _top secret_  data.
MLS is very similar to MCS, except it adds a concept of dominance to enforcement. Where MCS labels have to match exactly, one MLS label can dominate another MLS label and get access.
_Analogy_
Instead of talking about different dogs, we now look at different breeds. We might have a Greyhound and a Chihuahua.
![Cartoon of a Greyhound and a Chihuahua](https://opensource.com/sites/default/files/images/life-uploads/mls-enforcement_01_chigrey.png)
We might want to allow the Greyhound to eat any dog food, but a Chihuahua could choke if it tried to eat Greyhound dog food.
We want to label the Greyhound as  _dog:Greyhound_  and his dog food as  _dog_chow:Greyhound, _ and label the Chihuahua as  _dog:Chihuahua_  and his food as  _dog_chow:Chihuahua_ .
![Cartoon of a Greyhound dog food and a Chihuahua dog food.](https://opensource.com/sites/default/files/images/life-uploads/mls-enforcement_04_mlstypes.png)
With the MLS policy, we would have the MLS Greyhound label dominate the Chihuahua label. This means  _dog:Greyhound_  is allowed to eat  _dog_chow:Greyhound _ and  _dog_chow:Chihuahua_ .
![SELinux rule](https://opensource.com/sites/default/files/images/life-uploads/mls-enforcement_05_chigreyeating.png "SELinux rule")
But  _dog:Chihuahua_  is not allowed to eat  _dog_chow:Greyhound_ .
![Cartoon of Kernel (Penguin) stopping the Chihuahua from eating the greyhound food, telling him it would be a bit too beefy for him.](https://opensource.com/sites/default/files/images/life-uploads/mls-enforcement_03_chichoke.png)
Of course,  _dog:Greyhound_  and  _dog:Chihuahua_  are still prevented from eating  _cat_chow:Siamese_  by type enforcement, even if the MLS type Greyhound dominates Siamese.
![Cartoon of Kernel (Penguin) holding leash to prevent both dogs from eating cat food.](https://opensource.com/sites/default/files/images/life-uploads/mls-enforcement_06_nocatchow.png)
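The only change from MCS is the comparison: instead of an exact match, the process level must dominate the object level. A minimal Python illustration (the two-level ordering is hypothetical, for the analogy only):

```python
# Hypothetical sensitivity ordering: a higher number dominates a lower one.
LEVELS = {"Chihuahua": 0, "Greyhound": 1}

def mls_allowed(type_rule_ok, process_level, object_level):
    """MLS: type enforcement must still pass, and the process
    level must dominate (be >=) the object level."""
    if not type_rule_ok:
        return False  # type enforcement always applies first
    return LEVELS[process_level] >= LEVELS[object_level]

assert mls_allowed(True, "Greyhound", "Chihuahua")       # dominates: allowed
assert not mls_allowed(True, "Chihuahua", "Greyhound")   # would choke
assert not mls_allowed(False, "Greyhound", "Chihuahua")  # e.g. cat_chow
```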
_Real world_
I could have two Apache servers: one running as  _httpd_t:TopSecret_  and another running as  _httpd_t:Secret_ . If the Apache process  _httpd_t:Secret_  were hacked, the hacker could read  _httpd_sys_content_t:Secret_  but would be prevented from reading  _httpd_sys_content_t:TopSecret_ .
However, if the Apache server running  _httpd_t:TopSecret_  was hacked, it could read  _httpd_sys_content_t:Secret data_  as well as  _httpd_sys_content_t:TopSecret_ .
We use MLS in military environments, where a user might only be allowed to see  _secret_  data, but another user on the same system could read  _top secret_  data.
### Conclusion
SELinux is a powerful labeling system, controlling the access granted to individual processes by the kernel. The primary feature is type enforcement, where rules define the access allowed to a process based on the labeled type of the process and the labeled type of the object. Two additional controls have been added to separate processes with the same type from each other: MCS, which keeps them totally separate from each other, and MLS, which allows for process dominance.
--------------------------------------------------------------------------------
作者简介:
Daniel J Walsh - Daniel Walsh has worked in the computer security field for almost 30 years. Dan joined Red Hat in August 2001.
-------------------------
via: https://opensource.com/business/13/11/selinux-policy-guide
作者:[Daniel J Walsh][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/rhatdan
[1]:https://opensource.com/resources/what-is-linux?src=linux_resource_menu
[2]:https://opensource.com/resources/what-are-linux-containers?src=linux_resource_menu
[3]:https://opensource.com/article/16/11/managing-devices-linux?src=linux_resource_menu
[4]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=7016000000127cYAAQ
[5]:https://opensource.com/tags/linux?src=linux_resource_menu
[6]:https://opensource.com/users/mairin
[7]:https://opensource.com/business/13/11/selinux-policy-guide?rate=XNCbBUJpG2rjpCoRumnDzQw-VsLWBEh-9G2hdHyB31I
[8]:http://people.fedoraproject.org/~dwalsh/SELinux/Presentations/openshift_selinux.ogv
[9]:http://people.fedoraproject.org/~dwalsh/SELinux/Presentations/openshift_selinux.ogv
[10]:https://opensource.com/user/16673/feed
[11]:https://opensource.com/business/13/11/selinux-policy-guide#comments
[12]:https://opensource.com/users/rhatdan

[Data-Oriented Hash Table][1]
============================================================
In recent years, theres been a lot of discussion and interest in “data-oriented design”—a programming style that emphasizes thinking about how your data is laid out in memory, how you access it and how many cache misses its going to incur. With memory reads taking orders of magnitude longer for cache misses than hits, the number of misses is often the key metric to optimize. Its not just about performance-sensitive code—data structures designed without sufficient attention to memory effects may be a big contributor to the general slowness and bloatiness of software.
The central tenet of cache-efficient data structures is to keep things flat and linear. For example, under most circumstances, to store a sequence of items you should prefer a flat array over a linked list—every pointer you have to chase to find your data adds a likely cache miss, while flat arrays can be prefetched and enable the memory system to operate at peak efficiency.
This is pretty obvious if you know a little about how the memory hierarchy works—but its still a good idea to test things sometimes, even if theyre “obvious”! [Baptiste Wicht tested `std::vector` vs `std::list` vs `std::deque`][4] (the latter of which is commonly implemented as a chunked array, i.e. an array of arrays) a couple of years ago. The results are mostly in line with what youd expect, but there are a few counterintuitive findings. For instance, inserting or removing values in the middle of the sequence—something lists are supposed to be good at—is actually faster with an array, if the elements are a POD type and no bigger than 64 bytes (i.e. one cache line) or so! It turns out to actually be faster to shift around the array elements on insertion/removal than to first traverse the list to find the right position and then patch a few pointers to insert/remove one element. Thats because of the many cache misses in the list traversal, compared to relatively few for the array shift. (For larger element sizes, non-POD types, or if you already have a pointer into the list, the list wins, as youd expect.)
Thanks to data like Baptistes, we know a good deal about how memory layout affects sequence containers. But what about associative containers, i.e. hash tables? There have been some expert recommendations: [Chandler Carruth tells us to use open addressing with local probing][5] so that we dont have to chase pointers, and [Mike Acton suggests segregating keys from values][6] in memory so that we get more keys per cache line, improving locality when we have to look at multiple keys. These ideas make good sense, but again, its a good idea to test things, and I couldnt find any data. So I had to collect some of my own!
### [][7]The Tests
I tested four different quick-and-dirty hash table implementations, as well as `std::unordered_map`. All five used the same hash function, Bob Jenkins [SpookyHash][8] with 64-bit hash values. (I didnt test different hash functions, as that wasnt the point here; Im also not looking at total memory consumption in my analysis.) The implementations are identified by short codes in the results tables:
* **UM**: `std::unordered_map`. In both VS2012 and libstdc++-v3 (used by both gcc and clang), UM is implemented as a linked list containing all the elements, and an array of buckets that store iterators into the list. In VS2012, its a doubly-linked list and each bucket stores both begin and end iterators; in libstdc++, its a singly-linked list and each bucket stores just a begin iterator. In both cases, the list nodes are individually allocated and freed. Max load factor is 1.
* **Ch**: separate chaining—each bucket points to a singly-linked list of element nodes. The element nodes are stored in a flat array pool, to avoid allocating each node individually. Unused nodes are kept on a free list. Max load factor is 1.
* **OL**: open addressing with linear probing—each bucket stores a 62-bit hash, a 2-bit state (empty, filled, or removed), key, and value. Max load factor is 2/3.
* **DO1**: “data-oriented 1”—like OL, but the hashes and states are segregated from the keys and values, in two separate flat arrays.
* **DO2**: “data-oriented 2”—like OL, but the hashes/states, keys, and values are segregated in three separate flat arrays.
All my implementations, as well as VS2012s UM, use power-of-2 sizes by default, growing by 2x upon exceeding their max load factor. In libstdc++, UM uses prime-number sizes by default and grows to the next prime upon exceeding its max load factor. However, I dont think these details are very important for performance. The prime-number thing is a hedge against poor hash functions that dont have enough entropy in their lower bits, but were using a good hash function.
The OL, DO1 and DO2 implementations will collectively be referred to as OA (open addressing), since well find later that their performance characteristics are often pretty similar.
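To make the segregated layout concrete, here is an illustrative Python model of the probing logic. This is a sketch only: the benchmarked tables were C++, and details such as the 62-bit hash/2-bit state packing and removal support are omitted.

```python
class DO1Table:
    """Open addressing with linear probing, segregated layout: the hot
    probing data (hash + state) lives in flat arrays separate from the
    keys and values, which are only touched on a hash match."""
    EMPTY, FILLED = 0, 1

    def __init__(self, capacity=8):
        self.cap = capacity                   # always a power of 2
        self.state = [self.EMPTY] * capacity
        self.hashes = [0] * capacity
        self.keys = [None] * capacity
        self.vals = [None] * capacity
        self.count = 0

    def _probe(self, key, h):
        # Linear probing with a power-of-2 mask instead of modulo.
        i = h & (self.cap - 1)
        while True:
            if self.state[i] == self.EMPTY:
                return i, False
            if self.hashes[i] == h and self.keys[i] == key:
                return i, True
            i = (i + 1) & (self.cap - 1)

    def insert(self, key, val):
        # Grow at a 2/3 max load factor, doubling the capacity.
        if 3 * (self.count + 1) > 2 * self.cap:
            self._grow()
        h = hash(key)
        i, found = self._probe(key, h)
        if not found:
            self.count += 1
            self.state[i], self.hashes[i], self.keys[i] = self.FILLED, h, key
        self.vals[i] = val

    def lookup(self, key):
        i, found = self._probe(key, hash(key))
        return self.vals[i] if found else None

    def _grow(self):
        live = [(self.keys[i], self.vals[i])
                for i in range(self.cap) if self.state[i] == self.FILLED]
        self.__init__(self.cap * 2)
        for k, v in live:
            self.insert(k, v)

t = DO1Table()
for k in range(100):
    t.insert(k, k * 2)
assert t.lookup(42) == 84 and t.lookup(1000) is None
```

In the real C++ versions, DO1 packs hash and state together in one flat array and key/value pairs in a second, while DO2 splits keys and values as well; Python lists hold references either way, so this models only the algorithm, not the memory layout.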
For each of these implementations, I timed several different operations, at element counts from 100K to 1M and for payload sizes (i.e. total key+value size) from 8 to 4K bytes. For my purposes, keys and values were always POD types and keys were always 8 bytes (except for the 8-byte payload, in which key and value were 4 bytes each). I kept the keys to a consistent size because my purpose here was to test memory effects, not hash function performance. Each test was repeated 5 times and the minimum timing was taken.
The operations tested were:
* **Fill**: insert a randomly shuffled sequence of unique keys into the table.
* **Presized fill**: like Fill, but first reserve enough memory for all the keys well insert, to prevent rehashing and reallocing during the fill process.
* **Lookup**: perform 100K lookups of random keys, all of which are in the table.
* **Failed lookup**: perform 100K lookups of random keys, none of which are in the table.
* **Remove**: remove a randomly chosen half of the elements from a table.
* **Destruct**: destroy a table and free its memory.
You can [download my test code here][9]. It builds for Windows or Linux, in 64-bit only. There are some flags near the top of `main()` that you can toggle to turn on or off different tests—with all of them on, it will likely take an hour or two to run. The results I gathered are also included, in an Excel spreadsheet in that archive. (Beware that the Windows and Linux results are run on different CPUs, so timings arent directly comparable.) The code also runs unit tests to verify that all the hash table implementations are behaving correctly.
Incidentally, I also tried two additional implementations: separate chaining with the first node stored in the bucket instead of the pool, and open addressing with quadratic probing. Neither of these was good enough to include in the final data, but the code for them is still there.
### [][10]The Results
Theres a ton of data here. In this section Ill discuss the results in some detail, but if your eyes are glazing over in this part, feel free to skip down to the conclusions in the next section.
### [][11]Windows
Here are the graphed results of all the tests, compiled with Visual Studio 2012, and run on Windows 8.1 on a Core i7-4710HQ machine. (Click to zoom.)
[
![Results for VS 2012, Windows 8.1, Core i7-4710HQ](http://reedbeta.com/blog/data-oriented-hash-table/results-vs2012.png "Results for VS 2012, Windows 8.1, Core i7-4710HQ")
][12]
From left to right are different payload sizes, from top to bottom are the various operations, and each graph plots time in milliseconds versus hash table element count for each of the five implementations. (Note that not all the Y-axes have the same scale!) Ill summarize the main trends for each operation.
**Fill**: Among my hash tables, chaining is a bit better than any of the OA variants, with the gap widening at larger payloads and table sizes. I guess this is because chaining only has to pull an element off the free list and stick it on the front of its bucket, while OA may have to search a few buckets to find an empty one. The OA variants perform very similarly to each other, but DO1 appears to have a slight advantage.
All of my hash tables beat UM by quite a bit at small payloads, where UM pays a heavy price for doing a memory allocation on every insert. But theyre about equal at 128 bytes, and UM wins by quite a bit at large payloads: there, all of my implementations are hamstrung by the need to resize their element pool and spend a lot of time moving the large elements into the new pool, while UM never needs to move elements once theyre allocated. Notice the extreme “steppy” look of the graphs for my implementations at large payloads, which confirms that the problem comes when resizing. In contrast, UM is quite linear—it only has to resize its bucket array, which is cheap enough not to make much of a bump.
**Presized fill**: Generally similar to Fill, but the graphs are more linear, not steppy (since theres no rehashing), and theres less difference between all the implementations. UM is still slightly faster than chaining at large payloads, but only slightly—again confirming that the problem with Fill was the resizing. Chaining is still consistently faster than the OA variants, but DO1 has a slight advantage over the other OAs.
**Lookup**: All the implementations are closely clustered, with UM and DO2 the front-runners, except at the smallest payload, where it seems like DO1 and OL may be faster. Its impressive how well UM is doing here, actually; its holding its own against the data-oriented variants despite needing to traverse a linked list.
Incidentally, its interesting to see that the lookup time weakly depends on table size. Hash table lookup is expected constant-time, so from the asymptotic view it shouldnt depend on table size at all. But thats ignoring cache effects! When we do 100K lookups on a 10K-entry table, for instance, well get a speedup because most of the table will be in L3 after the first 10K20K lookups.
**Failed lookup**: Theres a bit more spread here than the successful lookups. DO1 and DO2 are the front-runners, with UM not far behind, and OL a good deal worse than the rest. My guess is this is probably a case of OL having longer searches on average, especially in the case of a failed lookup; with the hash values spaced out in memory between keys and values, that hurts. DO1 and DO2 have equally-long searches, but they have all the hash values packed together in memory, and that turns things around.
**Remove**: DO2 is the clear winner, with DO1 not far behind, chaining further behind, and UM in a distant last place due to the need to free memory on every remove; the gap widens at larger payloads. The remove operation is the only one that doesnt touch the value data, only the hashes and keys, which explains why DO1 and DO2 are differentiated from each other here but pretty much equal in all the other tests. (If your value type was non-POD and needed to run a destructor, that difference would presumably disappear.)
**Destruct**: Chaining is the fastest except at the smallest payload, where its about equal to the OA variants. All the OA variants are essentially equal. Note that for my hash tables, all theyre doing on destruction is freeing a handful of memory buffers, but [on Windows, freeing memory has a cost proportional to the amount allocated][13]. (And its a significant cost—an allocation of ~1 GB is taking ~100 ms to free!)
UM is the slowest to destruct—by an order of magnitude at small payloads, and only slightly slower at large payloads. The need to free each individual element instead of just freeing a couple of arrays really hurts here.
### [][14]Linux
I also ran tests with gcc 4.8 and clang 3.5, on Linux Mint 17.1 on a Core i5-4570S machine. The gcc and clang results were very similar, so Ill only show the gcc ones; the full set of results are in the code download archive, linked above. (Click to zoom.)
[
![Results for g++ 4.8, Linux Mint 17.1, Core i5-4570S](http://reedbeta.com/blog/data-oriented-hash-table/results-g++4.8.png "Results for g++ 4.8, Linux Mint 17.1, Core i5-4570S")
][15]
Most of the results are quite similar to those in Windows, so Ill just highlight a few interesting differences.
**Lookup**: Here, DO1 is the front-runner, where DO2 was a bit faster on Windows. Also, UM and chaining are way behind all the other implementations, which is actually what I expected to see in Windows as well, given that they have to do a lot of pointer chasing while the OA variants just stride linearly through memory. Its not clear to me why the Windows and Linux results are so different here. UM is also a good deal slower than chaining, especially at large payloads, which is odd; Id expect the two of them to be about equal.
**Failed lookup**: Again, UM is way behind all the others, even slower than OL. Again, its puzzling to me why this is so much slower than chaining, and why the results differ so much between Linux and Windows.
**Destruct**: For my implementations, the destruct cost was too small to measure at small payloads; at large payloads, it grows quite linearly with table size—perhaps proportional to the number of virtual memory pages touched, rather than the number allocated? It's also orders of magnitude faster than the destruct cost on Windows. However, this isn't anything to do with hash tables, really; we're seeing the behavior of the respective OSes' and runtimes' memory systems here. It seems that Linux frees large blocks of memory a lot faster than Windows (or it hides the cost better, perhaps deferring work to process exit, or pushing things off to another thread or process).
UM with its per-element frees is now orders of magnitude slower than all the others, across all payload sizes. In fact, I cut it from the graphs because it was screwing up the Y-axis scale for all the others.
### [][16]Conclusions
Well, after staring at all that data and the conflicting results for all the different cases, what can we conclude? I'd love to be able to tell you unequivocally that one of these hash table variants beats out the others, but of course it's not that simple. Still, there is some wisdom we can take away.
First, in many cases it's  _easy_  to do better than `std::unordered_map`. All of the implementations I built for these tests (and they're not sophisticated; it only took me a couple hours to write all of them) either matched or improved upon `unordered_map`, except for insertion performance at large payload sizes (over 128 bytes), where `unordered_map`'s separately-allocated per-node storage becomes advantageous. (Though I didn't test it, I also expect `unordered_map` to win with non-POD payloads that are expensive to move.) The moral here is that if you care about performance at all, don't assume the data structures in your standard library are highly optimized. They may be optimized for C++ standard conformance, not performance. :P
Second, you could do a lot worse than to just use DO1 (open addressing, linear probing, with the hashes/states segregated from keys/values in separate flat arrays) whenever you have small, inexpensive payloads. It's not the fastest for insertion, but it's not bad either (still way better than `unordered_map`), and it's very fast for lookup, removal, and destruction. What do you know—"data-oriented design" works!
Note that my test code for these hash tables is far from production-ready—they only support POD types, don't have copy constructors and such, don't check for duplicate keys, etc. I'll probably build some more realistic hash tables for my utility library soon, though. To cover the bases, I think I'll want two variants: one based on DO1, for small, cheap-to-move payloads, and another that uses chaining and avoids ever reallocating and moving elements (like `unordered_map`) for large or expensive-to-move payloads. That should give me the best of both worlds.
In the meantime, I hope this has been illuminating. And remember, if Chandler Carruth and Mike Acton give you advice about data structures, listen to them. 😉
--------------------------------------------------------------------------------
作者简介:
I'm a graphics programmer, currently freelancing in Seattle. Previously I worked at NVIDIA on the DevTech software team, and at Sucker Punch Productions developing rendering technology for the Infamous series of games for PS3 and PS4.
I've been interested in graphics since about 2002 and have worked on a variety of assignments, including fog, atmospheric haze, volumetric lighting, water, visual effects, particle systems, skin and hair shading, postprocessing, specular models, linear-space rendering, and GPU performance measurement and optimization.
You can read about what I'm up to on my blog. In addition to graphics, I'm interested in theoretical physics, and in programming language design.
You can contact me at nathaniel dot reed at gmail dot com, or follow me on Twitter (@Reedbeta) or Google+. I can also often be found answering questions at Computer Graphics StackExchange.
--------------
via: http://reedbeta.com/blog/data-oriented-hash-table/
作者:[Nathan Reed][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://reedbeta.com/about/
[1]:http://reedbeta.com/blog/data-oriented-hash-table/
[2]:http://reedbeta.com/blog/category/coding/
[3]:http://reedbeta.com/blog/data-oriented-hash-table/#comments
[4]:http://baptiste-wicht.com/posts/2012/12/cpp-benchmark-vector-list-deque.html
[5]:https://www.youtube.com/watch?v=fHNmRkzxHWs
[6]:https://www.youtube.com/watch?v=rX0ItVEVjHc
[7]:http://reedbeta.com/blog/data-oriented-hash-table/#the-tests
[8]:http://burtleburtle.net/bob/hash/spooky.html
[9]:http://reedbeta.com/blog/data-oriented-hash-table/hash-table-tests.zip
[10]:http://reedbeta.com/blog/data-oriented-hash-table/#the-results
[11]:http://reedbeta.com/blog/data-oriented-hash-table/#windows
[12]:http://reedbeta.com/blog/data-oriented-hash-table/results-vs2012.png
[13]:https://randomascii.wordpress.com/2014/12/10/hidden-costs-of-memory-allocation/
[14]:http://reedbeta.com/blog/data-oriented-hash-table/#linux
[15]:http://reedbeta.com/blog/data-oriented-hash-table/results-g++4.8.png
[16]:http://reedbeta.com/blog/data-oriented-hash-table/#conclusions


@ -0,0 +1,255 @@
5 ways to change GRUB background in Kali Linux
============================================================
This is a simple guide on how to change the GRUB background in Kali Linux (strictly speaking, the Kali Linux GRUB splash image). The Kali dev team did a few things that seem like almost too much work, so in this article I will explain a thing or two about GRUB and make this post a little unnecessarily long and boring, because I like to write! So here goes …
[
![Change GRUB background in Kali Linux - blackMORE OPs -10](http://www.blackmoreops.com/wp-content/uploads/2015/11/Change-GRUB-background-in-Kali-Linux-blackMORE-OPs-10.jpg)
][10]
### Finding GRUB settings
This is usually the first issue everyone faces: where do I look? There are many ways to find GRUB settings. Users might have their own opinions, but I always found that `update-grub` is the easiest way. If you run `update-grub` in a VMware/VirtualBox VM, you will see something like this:
```
root@kali:~# update-grub
Generating grub configuration file ...
Found background image: /usr/share/images/desktop-base/desktop-grub.png
Found linux image: /boot/vmlinuz-4.0.0-kali1-amd64
Found initrd image: /boot/initrd.img-4.0.0-kali1-amd64
No volume groups found
done
root@kali:~#
```
If you're dual- or triple-booting, you will see GRUB go in and find the other OSes as well. However, the part we're interested in is the background image line; in my case this is what I see (you will see exactly the same thing):
```
Found background image: /usr/share/images/desktop-base/desktop-grub.png
```
### GRUB splash image search order
In GRUB 2.02 on a Debian-based system, the splash image is searched for in the following order:
1. The `GRUB_BACKGROUND` line in `/etc/default/grub`
2. The first image found in `/boot/grub/` (if more than one image is found, the first in alphanumeric order is used)
3. The image specified in `/usr/share/desktop-base/grub_background.sh`
4. The file listed in the `WALLPAPER` line in `/etc/grub.d/05_debian_theme`
Now hang onto this info and we will soon revisit it.
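Before moving on, each of these locations can be inspected directly. A sketch (Debian-style paths as above; entries that don't exist simply print nothing):

```shell
# Inspect each location GRUB consults for a splash image, in priority order.
grep -H '^GRUB_BACKGROUND=' /etc/default/grub 2>/dev/null
ls /boot/grub/*.png /boot/grub/*.jpg /boot/grub/*.tga 2>/dev/null
cat /usr/share/desktop-base/grub_background.sh 2>/dev/null
grep -H 'WALLPAPER' /etc/grub.d/05_debian_theme 2>/dev/null
```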
### Kali Linux GRUB splash image
Since I use Kali Linux (because I like to do stuff), we found that Kali is using a background image from here: `/usr/share/images/desktop-base/desktop-grub.png`
Just to be sure, let's check that `.png` file and its properties.
```
root@kali:~#
root@kali:~# ls -l /usr/share/images/desktop-base/desktop-grub.png
lrwxrwxrwx 1 root root 30 Oct 8 00:31 /usr/share/images/desktop-base/desktop-grub.png -> /etc/alternatives/desktop-grub
root@kali:~#
```
[
![Change GRUB background in Kali Linux - blackMORE OPs -1](http://www.blackmoreops.com/wp-content/uploads/2015/11/Change-GRUB-background-in-Kali-Linux-blackMORE-OPs-1.jpg)
][11]
What? It's just a symbolic link to the `/etc/alternatives/desktop-grub` file? But `/etc/alternatives/desktop-grub` is not an image file. Looks like I need to check that file and its properties as well.
```
root@kali:~#
root@kali:~# ls -l /etc/alternatives/desktop-grub
lrwxrwxrwx 1 root root 44 Oct 8 00:27 /etc/alternatives/desktop-grub -> /usr/share/images/desktop-base/kali-grub.png
root@kali:~#
```
[
![Change GRUB background in Kali Linux - blackMORE OPs -3](http://www.blackmoreops.com/wp-content/uploads/2015/11/Change-GRUB-background-in-Kali-Linux-blackMORE-OPs-3.jpg)
][12]
Alright, that's confusing! So `/etc/alternatives/desktop-grub` is another symbolic link, which points back to
`/usr/share/images/desktop-base/kali-grub.png`
which is in the same folder we started from. Doh! That's all I can say. But at least now we can just replace that file and get it over with.
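By the way, rather than following the links one hop at a time, `readlink -f` resolves the whole chain in one command (a small sketch; on Kali it should print the `kali-grub.png` path we just found):

```shell
# Resolve the full symlink chain to the real target file
readlink -f /usr/share/images/desktop-base/desktop-grub.png
```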
Before we do that, we need to check the properties of the file `/usr/share/images/desktop-base/kali-grub.png` to ensure that we download a file of the same type and dimensions.
```
root@kali:~#
root@kali:~# file /usr/share/images/desktop-base/kali-grub.png
/usr/share/images/desktop-base/kali-grub.png: PNG image data, 640 x 480, 8-bit/color RGB, non-interlaced
root@kali:~#
```
So this file is definitely PNG image data, 640 x 480 in size.
### GRUB background image properties
GRUB 2 can use `PNG`, `JPG`/`JPEG` and `TGA` images for the background. The image must meet the following specifications:
* `JPG`/`JPEG` images must be `8-bit` (`256 color`)
* Images should be non-indexed, `RGB`
By default, if the `desktop-base` package is installed, images conforming to the above specification will be located in the `/usr/share/images/desktop-base/` directory. A quick Google search found similar files. Out of those, I picked one.
```
root@kali:~#
root@kali:~# file Downloads/wallpaper-1.png
Downloads/wallpaper-1.png: PNG image data, 640 x 480, 8-bit/color RGB, non-interlaced
root@kali:~#
```
[
![Change GRUB background in Kali Linux - blackMORE OPs -6](http://www.blackmoreops.com/wp-content/uploads/2015/11/Change-GRUB-background-in-Kali-Linux-blackMORE-OPs-6.jpg)
][13]
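If you want to sanity-check a candidate image before using it, the output of `file` can be matched in a small sketch (the image path is hypothetical; adjust it to your own download):

```shell
# Warn unless the image matches the 640 x 480 PNG the stock splash uses
img="Downloads/wallpaper-1.png"   # hypothetical path to your candidate image
info=$(file -b "$img")
case "$info" in
  *"PNG image data, 640 x 480"*) echo "OK: $img looks usable as a GRUB background" ;;
  *) echo "Warning: $img is '$info'; expected a 640 x 480 PNG" ;;
esac
```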
### Option 1: replace the image
Now we simply need to replace the `/usr/share/images/desktop-base/kali-grub.png` file with our new file. Note that this is the easiest way, without mucking around with GRUB config files. If you are familiar with GRUB, then go ahead, simply modify the GRUB default config and run `update-grub`.
As usual, I will make a backup of the original file by renaming it to `kali-grub.png.bkp`
```
root@kali:~#
root@kali:~# mv /usr/share/images/desktop-base/kali-grub.png /usr/share/images/desktop-base/kali-grub.png.bkp
root@kali:~#
```
[
![Change GRUB background in Kali Linux - blackMORE OPs -4](http://www.blackmoreops.com/wp-content/uploads/2015/11/Change-GRUB-background-in-Kali-Linux-blackMORE-OPs-4.jpg)
][14]
Now let's copy our downloaded file and rename it to `kali-grub.png`.
```
root@kali:~#
root@kali:~# cp Downloads/wallpaper-1.png /usr/share/images/desktop-base/kali-grub.png
root@kali:~#
```
[
![Change GRUB background in Kali Linux - blackMORE OPs -5](http://www.blackmoreops.com/wp-content/uploads/2015/11/Change-GRUB-background-in-Kali-Linux-blackMORE-OPs-5.jpg)
][15]
And finally run `update-grub`
```
root@kali:~# update-grub
Generating grub configuration file ...
Found background image: /usr/share/images/desktop-base/desktop-grub.png
Found linux image: /boot/vmlinuz-4.0.0-kali1-amd64
Found initrd image: /boot/initrd.img-4.0.0-kali1-amd64
No volume groups found
done
root@kali:~#
```
[
![Change GRUB background in Kali Linux - blackMORE OPs -7](http://www.blackmoreops.com/wp-content/uploads/2015/11/Change-GRUB-background-in-Kali-Linux-blackMORE-OPs-7.jpg)
][16]
Next time you restart your Kali Linux, you will see your own image as the GRUB background (the GRUB splash image).
Following is what my new GRUB splash image looks like in Kali Linux now. What about you? Tried this method yet?
[
![Change GRUB background in Kali Linux - blackMORE OPs -9](http://www.blackmoreops.com/wp-content/uploads/2015/11/Change-GRUB-background-in-Kali-Linux-blackMORE-OPs-9.jpg)
][17]
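For reference, the whole of Option 1 can be wrapped into a small script. A sketch using the paths from this article; run it as root, and note that it takes the backup only once:

```shell
#!/bin/sh
# Option 1 in one go: back up the stock splash, drop in the new one, regenerate.
SRC="Downloads/wallpaper-1.png"                     # your 640 x 480 PNG
DST="/usr/share/images/desktop-base/kali-grub.png"
[ -f "$DST.bkp" ] || cp "$DST" "$DST.bkp"           # back up the original once
cp "$SRC" "$DST"
update-grub
```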
This was the easiest and safest way; if you muck it up, at worst you will see a blue background in GRUB but will still be able to log in and fix things later. Now if you're confident, let's move on to better (slightly more complex) ways of changing GRUB settings. The next steps are more fun and work with any Linux distro using the GRUB bootloader.
Now remember those four places GRUB looks for a background splash image? Here they are again:
1. The `GRUB_BACKGROUND` line in `/etc/default/grub`
2. The first image found in `/boot/grub/` (if more than one image is found, the first in alphanumeric order is used)
3. The image specified in `/usr/share/desktop-base/grub_background.sh`
4. The file listed in the `WALLPAPER` line in `/etc/grub.d/05_debian_theme`
So let's try a few of these options in Kali Linux (or any Linux using GRUB 2).
### Option 2: Define an image path in GRUB_BACKGROUND
You can use any of the above, in order of priority, to make GRUB display your own image. The following is the content of the `/etc/default/grub` file on my system.
```
root@kali:~# vi /etc/default/grub
```
Add a line similar to this: `GRUB_BACKGROUND="/root/World-Map.jpg"`, where `World-Map.jpg` is the image file you want to use as the GRUB background.
```
# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full documentation of the options in this file, see:
#   info -f grub -n 'Simple configuration'
GRUB_DEFAULT=0
GRUB_TIMEOUT=15
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX="initrd=/install/gtk/initrd.gz"
GRUB_BACKGROUND="/root/World-Map.jpg"
```
Once the changes have been made using any of the above methods, make sure you execute the `update-grub` command as shown below.
```
root@kali:~# update-grub
Generating grub configuration file ...
Found background: /root/World-Map.jpg
Found background image: /root/World-Map.jpg
Found linux image: /boot/vmlinuz-4.0.0-kali1-amd64
Found initrd image: /boot/initrd.img-4.0.0-kali1-amd64
  No volume groups found
done
root@kali:~#
```
Now, when you boot your machine, you will see the customized image in GRUB.
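If you prefer not to open an editor, the same change can be scripted. A sketch that appends the line only when it is not already present (run as root):

```shell
# Set GRUB_BACKGROUND in /etc/default/grub without duplicating the line
IMG="/root/World-Map.jpg"
grep -q '^GRUB_BACKGROUND=' /etc/default/grub \
  || echo "GRUB_BACKGROUND=\"$IMG\"" >> /etc/default/grub
update-grub
```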
### Option 3: Put an image on /boot/grub/ folder
If nothing is specified in `GRUB_BACKGROUND` in the `/etc/default/grub` file, GRUB should pick the first image found in the `/boot/grub/` folder and use that as its background. If GRUB finds more than one image in `/boot/grub/`, it will use the first one in alphanumeric order.
### Option 4: Specify an image path in grub_background.sh
If nothing is specified in `GRUB_BACKGROUND` in the `/etc/default/grub` file and there is no image in the `/boot/grub/` folder, GRUB will look in the `/usr/share/desktop-base/grub_background.sh` file for the image path specified there. For Kali Linux, that is where the image path is defined; every Linux distro has its own take on it.
### Option 5: Define an image in WALLPAPER line in /etc/grub.d/05_debian_theme file
This is the last place GRUB looks for a background image; it searches here only if everything else has failed.
### Conclusion
This post was long, but I wanted to cover a few important basics. If you've followed it carefully, you now know how to follow symbolic links back and forth in Kali Linux, and you have a very good idea of exactly which places to search to find the GRUB background image on any Linux system. Just read a bit more on how colors in GRUB work and you're all set.
--------------------------------------------------------------------------------
via: https://www.blackmoreops.com/2015/11/27/change-grub-background-in-kali-linux/
作者:[https://www.blackmoreops.com/][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.blackmoreops.com/2015/11/27/change-grub-background-in-kali-linux/
[1]:http://www.facebook.com/sharer.php?u=https://www.blackmoreops.com/?p=5958
[2]:https://twitter.com/intent/tweet?text=5+ways+to+change+GRUB+background+in+Kali+Linux%20via%20%40blackmoreops&url=https://www.blackmoreops.com/?p=5958
[3]:https://plusone.google.com/_/+1/confirm?hl=en&url=https://www.blackmoreops.com/?p=5958&name=5+ways+to+change+GRUB+background+in+Kali+Linux
[4]:https://www.blackmoreops.com/how-to/
[5]:https://www.blackmoreops.com/kali-linux/
[6]:https://www.blackmoreops.com/kali-linux-2-x-sana/
[7]:https://www.blackmoreops.com/administration/
[8]:https://www.blackmoreops.com/usability/
[9]:https://www.blackmoreops.com/2015/11/27/change-grub-background-in-kali-linux/#comments
[10]:http://www.blackmoreops.com/wp-content/uploads/2015/11/Change-GRUB-background-in-Kali-Linux-blackMORE-OPs-10.jpg
[11]:http://www.blackmoreops.com/2015/11/27/change-grub-background-in-kali-linux/change-grub-background-in-kali-linux-blackmore-ops-1/
[12]:http://www.blackmoreops.com/2015/11/27/change-grub-background-in-kali-linux/change-grub-background-in-kali-linux-blackmore-ops-3/
[13]:http://www.blackmoreops.com/2015/11/27/change-grub-background-in-kali-linux/change-grub-background-in-kali-linux-blackmore-ops-6/
[14]:http://www.blackmoreops.com/2015/11/27/change-grub-background-in-kali-linux/change-grub-background-in-kali-linux-blackmore-ops-4/
[15]:http://www.blackmoreops.com/2015/11/27/change-grub-background-in-kali-linux/change-grub-background-in-kali-linux-blackmore-ops-5/
[16]:http://www.blackmoreops.com/2015/11/27/change-grub-background-in-kali-linux/change-grub-background-in-kali-linux-blackmore-ops-7/
[17]:http://www.blackmoreops.com/wp-content/uploads/2015/11/Change-GRUB-background-in-Kali-Linux-blackMORE-OPs-9.jpg


@ -1,3 +1,5 @@
translating by flankershen
# Network management with LXD (2.3+)
![LXD logo](https://linuxcontainers.org/static/img/containers.png)


@ -1,3 +1,5 @@
[HaitaoBio](https://github.com/HaitaoBio)
TypeScript: the missing introduction
============================================================


@ -1,85 +0,0 @@
Yuan0302 Translating
FTPS (FTP over SSL) vs SFTP (SSH File Transfer Protocol)
============================================================
[
![ftps sftp](http://www.techmixer.com/pic/2015/07/ftps-sftp.png "ftps sftp")
][5]
**SSH File Transfer Protocol (SFTP)** and **FTP over Secure Sockets Layer (FTPS)** are the most common secure FTP communication technologies used to transfer computer files from one host to another over a TCP network. Both SFTP and FTPS offer a high level of file transfer security, protecting any transferred data with strong encryption algorithms such as AES and Triple DES.
But the most notable difference between SFTP and FTPS is how connections are authenticated and managed.
FTPS is FTP utilising a Secure Sockets Layer (SSL) certificate for security. The secure FTP connection is authenticated using a user ID, a password and an SSL certificate. Once the FTPS connection is established, the [FTP client software][6] checks whether the destination [FTP server][7]'s certificate is trusted.
The SSL certificate will be considered trusted if either it was signed by a known certificate authority (CA), or it was self-signed (by your partner) and you have a copy of the public certificate in your trusted key store. All username and password information for FTPS is encrypted over the secure FTP connection.
### Below are the FTPS pros and cons:
Pros:
* The communication can be read and understood by a human
* Provides services for server-to-server file transfer
* SSL/TLS has good authentication mechanisms (X.509 certificate features)
* FTP and SSL support is built into many internet communications frameworks
Cons:
* Does not have a uniform directory listing format
* Requires a secondary DATA channel, which makes it hard to use behind firewalls
* Does not define a standard for file name character sets (encodings)
* Not all FTP servers support SSL/TLS
* Does not have a standard way to get and change file or directory attributes
SFTP, or SSH File Transfer Protocol, is another secure file transfer protocol. It is designed as an SSH extension to provide file transfer capability, so it usually uses only the SSH port for both data and control. When your [FTP client][8] software connects to an SFTP server, it transmits the public key to the server for authentication. If the keys match, along with any username/password supplied, then the authentication will succeed.
### Below are the SFTP Pros and Cons:
Pros:
* Has only one connection (no need for a DATA connection).
* FTP connection is always secured
* FTP directory listing is uniform and machine-readable
* FTP protocol includes operations for permission and attribute manipulation, file locking, and more functionality.
Cons:
* The communication is binary and cannot be logged "as is" for human reading
* SSH keys are harder to manage and validate
* The standards define certain things as optional or recommended, which leads to certain compatibility problems between different software titles from different vendors.
* No server-to-server copy and recursive directory removal operations
* No built-in SSH/SFTP support in VCL and .NET frameworks.
Overall, most FTP server software supports both secure FTP technologies with strong authentication options.
But SFTP is the clear winner since it is very firewall-friendly. SFTP needs only a single port (22 by default) to be opened through the firewall. This port is used for all SFTP communications, including the initial authentication, any commands issued, and any data transferred.
FTPS is more difficult to implement through a tightly secured firewall, since FTPS uses multiple network ports. Every time a file transfer request (get, put) or directory listing request is made, another port needs to be opened. You therefore have to open a range of ports in your firewall to allow FTPS connections, which can be a security risk for your network.
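The firewall difference can be made concrete with a couple of rule sketches. These use `ufw` syntax as an example; the passive range shown is illustrative and must match whatever your FTPS server is configured to use:

```
# SFTP: one port covers authentication, commands and data
ufw allow 22/tcp

# FTPS: the control port plus a pre-opened passive data-port range
ufw allow 21/tcp
ufw allow 49152:65534/tcp   # example passive range; a wide hole to keep open
```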
FTP Server software that supports FTPS and SFTP:
1. [Cerberus FTP Server][2]
2. [FileZilla the most famous free FTP and FTPS server software][3]
3. [Serv-U FTP Server][4]
--------------------------------------------------------------------------------
via: http://www.techmixer.com/ftps-sftp/
作者:[Techmixer.com][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.techmixer.com/
[1]:http://www.techmixer.com/ftps-sftp/#respond
[2]:http://www.cerberusftp.com/
[3]:http://www.techmixer.com/free-ftp-server-best-windows-ftp-server-download/
[4]:http://www.serv-u.com/
[5]:http://www.techmixer.com/pic/2015/07/ftps-sftp.png
[6]:http://www.techmixer.com/free-ftp-file-transfer-protocol-softwares/
[7]:http://www.techmixer.com/free-ftp-server-best-windows-ftp-server-download/
[8]:http://www.techmixer.com/best-free-mac-ftp-client-connect-ftp-server/


@ -1,11 +1,12 @@
translating by xiaow6
Git in 2016
============================================================
![](https://cdn-images-1.medium.com/max/2000/1*1SiSsLMsNSyAk6khb63W9g.png)
Git had a  _huge_  year in 2016, with five feature releases[¹][57] ( _v2.7_  through  _v2.11_ ) and sixteen patch releases[²][58]. 189 authors[³][59] contributed 3,676 commits[⁴][60] to `master`, which is up 15%[⁵][61] over 2015! In total, 1,545 files were changed with 276,799 lines added and 100,973 lines removed[⁶][62].
However, commit counts and LOC are pretty terrible ways to measure productivity. Until deep learning develops to the point where it can qualitatively grok code, we're going to be stuck with human judgment as the arbiter of productivity.
@ -632,7 +633,6 @@ Or if you cant wait til then, head over to Atlassians excellent selecti
_If you scrolled to the end looking for the footnotes from the first paragraph, please jump to the _ [ _[Citation needed]_ ][77] _ section for the commands used to generate the stats. Gratuitous cover image generated using _ [ _instaco.de_ ][78] _ _
--------------------------------------------------------------------------------
via: https://hackernoon.com/git-in-2016-fad96ae22a15#.t5c5cm48f


@ -1,95 +0,0 @@
wcnnbdk1 translating
NMAP Common Scans Part One
========================
In a previous article, [NMAP Installation][1], ten different ZeNMAP Profiles were listed. Most of the Profiles used various parameters, and most of the parameters represent different scans which can be performed. This article covers the four common scan types.
**The Common Four Scan Types**
The four main scan types which are used most often are the following:
1. PING Scan (-sP)
2. TCP SYN Scan (-sS)
3. TCP Connect() Scan (-sT)
4. UDP Scan (-sU)
When using NMAP, these are the four scans to keep in mind. The main thing to remember about them is what they do and how they do it. This article covers the PING and UDP scans; the next article will cover the TCP scans.
**PING Scan (-sP)**
Some scans can flood the network with packets, but the PING Scan puts, at most, two packets on the network per host (not counting any DNS Lookups or ARP Requests that may be needed). A minimum of one packet is required per IP Address being scanned.
A typical PING operation is used to determine if a network host is on-line with the IP Address specified. For example, if I were on the Internet and found that I could not reach a specific Web Server I could PING the Server to determine if it were on-line. The PING would also verify that the route between my system and the Web Server was also functioning.
**NOTE:** When discussing TCP/IP the information is both useful for the Internet and a Local Area Network (LAN) using TCP/IP. The procedures work for both. The procedures would also work for a Wide Area Network (WAN) just as well.
If the Domain Name Service (DNS) Server is needed to find the IP Address (if a Domain Name is given) then extra packets are generated. For example, to ping linuxforum.com would first require that the IP Address (98.124.199.63) be found for the Domain Name (linuxforum.com). If the command ping 98.124.199.63 was executed then the DNS Lookup is not needed. If the MAC Address is unknown, then an ARP Request is sent to find the MAC Address of the system with the specified IP Address.
The PING command sends an Internet Control Message Protocol (ICMP) packet to the given IP Address. The packet is an ICMP Echo Request which needs a response. A response will be sent back if the system is on-line. If a Firewall exists between the two systems a PING can be dropped by the Firewall. Some servers can be configured to ignore PING requests as well to prevent the possibility of a PING of Death.
**NOTE:** The PING of Death is a malformed, oversized PING packet which is sent to a system and can cause the target to crash or hang. This is one reason some servers are configured to ignore PING requests altogether.
Once a system receives the ICMP Echo Request it will respond with an ICMP Echo Reply. Once the source system receives the ICMP Echo Reply then it knows the system is on-line.
Using NMAP you specify a single IP Address or a range of IP Addresses. A PING is then performed on each IP Address when a PING Scan (-sP) is specified.
In Figure 1 you can see that I performed the command `nmap -sP 10.0.0.1-10`. The program will try to contact every system with an IP Address from 10.0.0.1 to 10.0.0.10. An ARP Request is sent out for each IP Address given to the command, three times per address; in this case thirty requests went out, three for each of the ten IP Addresses.
![Figure 01.jpg](https://www.linuxforum.com/attachments/figure-01-jpg.105/)
**FIGURE 1**
Figure 2 shows the Wireshark capture from another machine on the network (yes, it is a Windows system). Line 1 shows the first request, sent to IP Address 10.0.0.2. The IP Address 10.0.0.1 was skipped because it is the local system on which NMAP was being run, so we can say that only 27 ARP Requests were actually sent. Line 2 shows the ARP Response from the system with the IP Address 10.0.0.2. Lines 3 through 10 are ARP Requests for the remaining IP Addresses. Line 11 is another response from the system at IP Address 10.0.0.2, since it had not heard back from the requesting system (10.0.0.1). Line 12 is a response from the source system to 10.0.0.2, responding with SYN at Sequence 0. Lines 13 and 14 are the system at 10.0.0.2 responding twice with Reset (RST) and Synchronize (SYN) responses to close the two connections it had opened on Lines 2 and 11. Notice the Sequence ID is 1, the source Sequence ID + 1. Lines 15 onward are a continuation of the same.
![Figure 02.jpg](https://www.linuxforum.com/attachments/figure-02-jpg.106/)
**FIGURE 2**
Looking back at Figure 1 we can see that there were two hosts found up and running. Of course the local system was found (10.0.0.1) and one other (10.0.0.2). The whole scan took a total time of 14.40 seconds.
The PING Scan is a fast scan used to find systems which are up and running. No other information is really found about the network or the systems from the scan. The scan is a good start to see what is available on a network so you can perform more complex scans on the on-line systems only. You may also be able to find systems on the network which should not exist. Rogue systems on a network can be dangerous because they can be gathering internal network and system information easily.
Once you have a list of on-line systems you can then detect what Ports may be open on each system with a UDP Scan.
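One hedged way to turn the PING Scan results into a machine-readable host list is NMAP's grepable output (`-oG -`) piped through awk; the address range is the one used in this article:

```shell
# Print only the IP addresses of hosts that answered the PING Scan
nmap -sP -oG - 10.0.0.1-10 | awk '/Status: Up/{print $2}'
```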
**UDP Scan (-sU)**
Now that you know what systems are available to scan you can concentrate on these IP Addresses only. It is not a good idea to flood a network with a lot of scan activity. Administrators can have programs monitor network traffic and alert them when large amounts of suspicious activities occur.
The User Datagram Protocol (UDP) is useful for determining open Ports on an on-line system. Since UDP is a connectionless protocol, a response is not required by the protocol itself. The scan sends a UDP packet to a specified Port on the target system. If the Port is closed, the target system will in most cases send back an ICMP message saying the Port is unreachable, which lets NMAP know the Port is closed. If the Port is open, the target system either answers with a UDP packet of its own or sends nothing at all, so a silent Port is reported as either open or filtered.
**NOTE:** Only the top 1,000 most used Ports are scanned. A deeper scan will be covered in later articles.
In my scan I will only scan the system with IP Address 10.0.0.2, since I know it is on-line. The scan sends and receives a total of 3,278 packets. The result of the NMAP command `sudo nmap -sU 10.0.0.2` is shown in Figure 3.
![Figure 03.jpg](https://www.linuxforum.com/attachments/figure-03-jpg.107/)
**FIGURE 3**
Here you can see that one Port was found open: 137 (netbios-ns). The results from Wireshark are shown in Figure 4. Not much to see but a bunch of UDP packets.
![Figure 4.jpg](https://www.linuxforum.com/attachments/figure-4-jpg.108/)
**FIGURE 4**
What would happen if I turned off the Firewall on the target system? My results are quite a bit different. The NMAP command and results are shown in Figure 5.
![Figure 05.png](https://www.linuxforum.com/attachments/figure-05-png.109/)
**FIGURE 5**
**NOTE:** When performing a UDP Scan you are required to have root permissions.
The high number of packets is due to the fact that UDP is being used. Once the NMAP system sends a request, there is no guarantee that the packet was received. Because of this possible packet loss, the packets are sent multiple times.
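As with the PING Scan, the UDP results can be captured in grepable form for scripting. A sketch against the same target (root permissions are required for `-sU`):

```shell
# Print the Ports section of the grepable output for each scanned host
sudo nmap -sU -oG - 10.0.0.2 | awk -F'Ports: ' '/Ports: /{print $2}'
```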
--------------------------------------------------------------------------------
via: https://www.linuxforum.com/threads/nmap-common-scans-part-one.3637/
作者:[Jarret][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linuxforum.com/members/jarret.268/
[1]:https://www.linuxforum.com/threads/nmap-installation.3431/

View File

@ -1,70 +0,0 @@
How to Keep Hackers out of Your Linux Machine Part 2: Three More Easy Security Tips
============================================================
![security tips](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/security-tips.jpg?itok=JMp34oc3 "security tips")
In this series, we'll cover essential information for keeping hackers out of your system. Watch the free webinar on-demand for more information. [Creative Commons Zero][1] Pixabay
In [part 1][3] of this series, I shared two easy ways to prevent hackers from eating your Linux machine. Here are three more tips from my recent Linux Foundation webinar where I shared more tactics, tools and methods hackers use to invade your space. Watch the entire [webinar on-demand][4] for free.
### Easy Linux Security Tip #3
**Sudo.**
Sudo is really, really important. I realize this is just really basic stuff but these basic things make my life as a hacker so much more difficult. If you don't have it configured, configure it.
Also, all your users must use their password. Don't allow “sudo all” with no password. That doesn't do anything other than make my life easy when I find a user that has “sudo all” with no password. If I can “sudo <blah>” and hit you without having to authenticate again, and I have your SSH key with no passphrase, that makes it pretty easy to get around. I now have root on your machine.
Keep the timeout low. We like to hijack sessions, and if you have a user that has Sudo and the timeout is three hours and I hijack your session, then you've given me a free pass again even though you require a password.
I recommend a timeout of about 10 minutes, or even 5 minutes. They'll enter their password over and over again, but if you keep the timeout low, you reduce your attack surface.
Also limit the available commands and don't allow shell access with sudo. Most default distributions right now will allow you to do “sudo bash” and get a root shell, which is great if you are doing massive amounts of admin tasks. However, most users should have a limited amount of commands that they need to actually run. The more you limit them, the smaller your attack surface. If you give me shell access I am going to be able to do all kinds of stuff.
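As a hedged sketch of these recommendations, a drop-in sudoers fragment might look like the following (the user name, commands, and timeout are illustrative, and sudoers files should only ever be edited with visudo):

```
# /etc/sudoers.d/webadmin -- hypothetical example
# Lower the credential cache from the default 15 minutes to 5
Defaults:alice timestamp_timeout=5
# Allow only specific commands, with a password, and no shell access
alice ALL=(root) /usr/bin/systemctl restart nginx, /usr/bin/journalctl
```

Because no shell or editor appears in the command list, a hijacked session for this user can only restart the web server and read logs.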
### Easy Linux Security Tip #4
**Limit running services.**
Firewalls are great. Your perimeter firewall is awesome. There are several manufacturers out there that do a fantastic job when the traffic comes across your network. But what about the people on the inside?
Are you using a host-based firewall or host-based intrusion detection system? If so, configure it right. How do you know if something goes wrong that you are still protected?
The answer is to limit the services that are currently running. Don't run MySQL on a machine that doesn't need it. If you have a distribution that installs a full LAMP stack by default and you're not running anything on top of it, then uninstall it. Disable those services and don't start them.
And make sure users don't have default credentials. Make sure those services are configured securely. If you are running Tomcat, make sure users are not allowed to upload their own applets, and make sure it doesn't run as root. If I am able to run an applet, I don't want to be able to run it as root and give myself access. The more you can restrict what people can do, the better off you are going to be.
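A quick audit along these lines might look like the following sketch (a systemd-based distribution is assumed, and "mysql" is only an illustrative service name):

```shell
# List every service currently running, then trim the list
systemctl list-units --type=service --state=running
# Example: stop and disable a database this machine doesn't need
sudo systemctl disable --now mysql
```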
### Easy Linux Security Tip #5
**Watch your logs.**
Look at them. Seriously. Watch your logs. We ran into an issue six months ago where one of our customers wasn't looking at their logs and they had been owned for a very, very long time. Had they been watching them, they would have been able to tell that their machines had been compromised and their whole network was wide open. I do this at home. I have a regimen every morning. I get up, I check my email. I go through my logs, and it takes me 15 minutes, but it tells me a wealth of information about what's going on.
Just this morning, I had three systems fail in the cabinet and I had to go and reboot them, and I have no idea why but I could tell in my logs that they weren't responding. They were lab systems. I really don't care about them but other people do.
Centralizing your logging via Syslog or Splunk or any of those logging consolidation tools is fantastic. It is better than keeping them local. My favorite thing to do is to edit your logs so you don't know that I have been there. If I can do that then you have no clue. It's much more difficult for me to modify a central set of logs than a local set.
Just like your significant other, bring them flowers, aka, disk space. Make sure you have plenty of disk space available for logging. Going into a read-only file system is not a good thing.
Also, know what's abnormal. It's such a difficult thing to do, but in the long run it is going to pay dividends. You'll know what's going on and when something's wrong. Be sure you know that.
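A minimal sketch of a scripted morning log check is below: it counts failed SSH logins. A fabricated sample log is used so the commands are safe to try anywhere; on a real system you would point grep at /var/log/auth.log or use journalctl instead.

```shell
# Build a small fake auth log (addresses are documentation ranges)
log=$(mktemp)
cat > "$log" <<'EOF'
Mar 24 06:01:02 host sshd[1000]: Failed password for root from 203.0.113.9 port 52144 ssh2
Mar 24 06:01:05 host sshd[1001]: Accepted publickey for alice from 198.51.100.7 port 41022 ssh2
Mar 24 06:01:09 host sshd[1002]: Failed password for invalid user admin from 203.0.113.9 port 52150 ssh2
EOF
# Count the failed login attempts
fails=$(grep -c 'Failed password' "$log")
echo "$fails failed SSH login attempts"   # prints: 2 failed SSH login attempts
rm -f "$log"
```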
In the [third and final blog post][5], I'll answer some of the excellent security questions asked during the webinar. [Watch the entire free webinar on-demand][6] now.
_Mike Guthrie works for the Department of Energy doing Red Team engagements and penetration testing._
--------------------------------------------------------------------------------
via: https://www.linux.com/news/webinar/2017/how-keep-hackers-out-your-linux-machine-part-2-three-more-easy-security-tips
作者:[MIKE GUTHRIE][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/anch
[1]:https://www.linux.com/licenses/category/creative-commons-zero
[2]:https://www.linux.com/files/images/security-tipsjpg
[3]:https://www.linux.com/news/webinar/2017/how-keep-hackers-out-your-linux-machine-part-1-top-two-security-tips
[4]:http://portal.on24.com/view/channel/index.html?showId=1101876&showCode=linux&partnerref=linco
[5]:https://www.linux.com/news/webinar/2017/how-keep-hackers-out-your-linux-machine-part-3-your-questions-answered
[6]:http://bit.ly/2j89ISJ

View File

@ -1,130 +0,0 @@
[HaitaoBio](https://github.com/HaitaoBio) translating...
Linux command line navigation tips/tricks 3 - the CDPATH environment variable
============================================================
### On this page
1. [The CDPATH environment variable][1]
2. [Points to keep in mind][2]
3. [Conclusion][3]
In the first part of this series, we discussed the **cd -** command in detail, and in the second part, we took an in-depth look into the **pushd** and **popd** commands as well as the scenarios where-in they come in handy.
Continuing with our discussion on command line navigation, in this tutorial we'll discuss the **CDPATH** environment variable through easy to understand examples. We'll also discuss some advanced details related to this variable.
_But before we proceed, it's worth mentioning that all the examples in this tutorial have been tested on Ubuntu 14.04 with Bash version 4.3.11(1)._
### The CDPATH environment variable
Even if your command line work involves performing all operations under a particular directory - say your home directory - you still have to provide absolute paths while switching directories. For example, consider a situation where-in I am in the _/home/himanshu/Downloads_ directory:
$ pwd
/home/himanshu/Downloads
And the requirement is to switch to the _/home/himanshu/Desktop_ directory. To do this, usually, I'll have to either run:
cd /home/himanshu/Desktop/
or 
cd ~/Desktop/
or
cd ../Desktop/
Wouldn't it be easy if I could just run the following command:
cd Desktop
Yes, that's possible. And this is where the CDPATH environment variable comes in. You can use this variable to define the base directory for the **cd** command.
If you try printing its value, you'll see that this env variable is empty by default:
$ echo $CDPATH
$
Now, considering the case we've been discussing so far, let's use this environment variable to define _/home/himanshu_ as the base directory for the cd command.
The easiest way to do this is:
export CDPATH=/home/himanshu
And now, I can do what I wasn't able to do earlier - from within the _/home/himanshu/Downloads_ directory, run the _cd Desktop_ command successfully.
$ pwd
/home/himanshu/Downloads
$ **cd Desktop/**
**/home/himanshu/Desktop**
$
This means that I can now do a cd to any directory under _/home/himanshu_ without explicitly specifying _/home/himanshu_ or _~_ or _../_ (or multiple _../_) in the cd command.
### Points to keep in mind
So you now know how we used the CDPATH environment variable to easily switch to/from _/home/himanshu/Downloads_ from/to _/home/himanshu/Desktop_. Now, consider a situation where-in the _/home/himanshu/Desktop_ directory contains a subdirectory named _Downloads_, and it's the latter where you intend to switch.
But suddenly you realize that doing a _cd Desktop_ will take you to _/home/himanshu/Desktop_. So, to make sure that doesn't happen, you do:
cd ./Downloads
While there's no problem with the aforementioned command per se, it's an extra effort on your part (however little it may be), especially considering that you'll have to do this each time such a situation arises. A more elegant solution is to set the CDPATH variable from the start in the following way:
export CDPATH=".:/home/himanshu"
This means, you're telling the cd command to first look for the directory in the current working directory, and then try searching the _/home/himanshu_ directory. Of course, whether or not you want the cd command to behave this way depends entirely on your preference or requirement - my idea behind discussing this point was to let you know that this kind of situation may arise.
As you would have understood by now, once the CDPATH env variable is set, its value - the set of paths it contains - defines the only places on the system where the cd command searches for directories (except, of course, when you use absolute paths). So, it's entirely up to you to make sure that the behavior of the command remains consistent.
Moving on, if there's a bash script that uses the cd command with relative paths, then it's better to clear or unset the CDPATH environment variable first, unless you are ok with running into unforeseen problems. Alternatively, rather than using the _export_ command on the terminal to set CDPATH, you can set the environment variable in your `.bashrc` file after testing for interactive/non-interactive shells, to make sure that the change you're trying to make is only reflected in interactive shells.
The order in which paths appear in the environment variable's value is also important. For example, if the current directory is listed before _/home/himanshu_, then the cd command will first search for a directory in the present working directory and then move on to _/home/himanshu_. However, if the value is _"/home/himanshu:."_ then the first search will be made in _/home/himanshu_ and only after that in the current directory. Needless to say, this affects what the cd command does, and may cause problems if you aren't aware of the order of paths.
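The search-order behavior can be demonstrated with a small, self-contained sketch (the directory names here are made up for the demo, not taken from the article):

```shell
# Two CDPATH entries, each containing a "Desktop" subdirectory
base=$(mktemp -d)
mkdir -p "$base/projects/Desktop" "$base/work/Desktop"
export CDPATH="$base/projects:$base/work"
# cd prints the resolved path when CDPATH is used; silence that here
cd Desktop > /dev/null
# We land in the first match, i.e. $base/projects/Desktop
echo "$PWD"
```

Reversing the two entries in CDPATH would land the same `cd Desktop` in `$base/work/Desktop` instead.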
Always keep in mind that the CDPATH environment variable, as the name suggests, works only for the cd command. This means that while inside the _/home/himanshu/Downloads_ directory, you can run the _cd Desktop_ command to switch to _/home/himanshu/Desktop_ directory, but you can't do an _ls_. Here's an example:
$ pwd
/home/himanshu/Downloads
**$ ls Desktop**
**ls: cannot access Desktop: No such file or directory**
$
However, there could be some simple workarounds. For example, we can achieve what we want with minimal effort in the following way:
$ **cd Desktop/;ls**
/home/himanshu/Desktop
backup backup~ Downloads gdb.html outline~ outline.txt outline.txt~
But yeah, there might not be a workaround for every situation.
Another important point: as you might have observed, whenever you use the cd command with the CDPATH environment variable set, the command prints the full path of the directory you are switching to. Needless to say, not everybody would want to see this information each time they run the cd command on their machine.
To make sure this output gets suppressed, you can use the following command:
alias cd='>/dev/null cd'
The aforementioned command will mute the output whenever the cd command is successful, but will allow the error messages to be produced whenever the command fails.
Lastly, in case you face a problem where-in after setting the CDPATH environment variable, you can't use the shell's tab completion feature, then you can try installing and enabling bash-completion - more on it [here][4].
### Conclusion
The CDPATH environment variable is a double-edged sword - if not used with caution and complete knowledge, it may land you in some complex traps that may require a lot of your precious time to resolve. Of course, that doesn't mean you should never give it a try; just evaluate all the available options, and if you conclude that using CDPATH would be of great help, then do go ahead and use it.
Have you been using CDPATH like a pro? Do you have some more tips to share? Please share your thoughts in comments below.
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/linux-command-line-tips-tricks-part-3-cdpath/
作者:[Ansh][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.howtoforge.com/tutorial/linux-command-line-tips-tricks-part-3-cdpath/
[1]:https://www.howtoforge.com/tutorial/linux-command-line-tips-tricks-part-3-cdpath/#the-cdpath-environment-variable
[2]:https://www.howtoforge.com/tutorial/linux-command-line-tips-tricks-part-3-cdpath/#points-to-keep-in-mind
[3]:https://www.howtoforge.com/tutorial/linux-command-line-tips-tricks-part-3-cdpath/#conclusion
[4]:http://bash-completion.alioth.debian.org/

View File

@ -1,174 +0,0 @@
Translating by Flowsnow
### Hosting Django With Nginx and Gunicorn on Linux
![](https://linuxconfig.org/images/gunicorn_logo.png?58963dfd)
Contents
* * [1. Introduction][4]
* [2. Gunicorn][5]
* [2.1. Installation][1]
* [2.2. Configuration][2]
* [2.3. Running][3]
* [3. Nginx][6]
* [4. Closing Thoughts][7]
### Introduction
Hosting Django web applications is fairly simple, though it can get more complex than a standard PHP application. There are a few ways to handle making Django interface with a web server. Gunicorn is easily one of the simplest. 
Gunicorn (short for Green Unicorn) acts as an intermediary server between your web server, Nginx in this case, and Django itself. It handles serving the application itself while Nginx picks up the static content.
### Gunicorn
### Installation
Installing Gunicorn is super easy with Pip. If you've already set up your Django project using virtualenv, you have Pip and should be familiar with the way it works. So, install Gunicorn in your virtualenv.
```
$ pip install gunicorn
```
### Configuration
One of the things that makes Gunicorn an appealing choice is the simplicity of its configuration. The best way to handle the configuration is to create a `Gunicorn` folder in the root directory of your Django project. Inside that folder, create a configuration file. 
For this guide, it'll be called `gunicorn-conf.py`. In that file, create something similar to the configuration below.
```
import multiprocessing
bind = 'unix:///tmp/gunicorn1.sock'
workers = multiprocessing.cpu_count() * 2 + 1
reload = True
daemon = True
```
In the case of the above configuration, Gunicorn will create a Unix socket at `/tmp/gunicorn1.sock`. It will also spin up a number of worker processes equal to double the number of CPU cores plus one. It will also automatically reload and run as a daemonized process.
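As a quick sanity check of that worker formula, the same arithmetic can be reproduced in the shell (a sketch; `nproc` from coreutils is assumed to be available):

```shell
# Reproduce Gunicorn's worker formula from the config above: cores * 2 + 1
cores=$(nproc)
workers=$((cores * 2 + 1))
echo "A $cores-core machine gets $workers Gunicorn workers"
```

The result is always odd and always exceeds the core count, which keeps at least one worker free to accept requests while others are busy.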
### Running
The command to run Gunicorn is a bit long, but it has additional configuration options specified in it. The most important part is to point Gunicorn to your project's `.wsgi` file.
```
gunicorn -c gunicorn/gunicorn-conf.py -D --error-logfile gunicorn/error.log yourproject.wsgi
```
The command above should be run from your project's root. It tells Gunicorn to use the configuration that you created with the `-c` flag. `-D` once again specifies that it should be daemonized. The last part specifies the location of Gunicorn's error log in the `Gunicorn` folder that you created. The command ends by telling Gunicorn the location of your `.wsgi` file.
### Nginx
Now that Gunicorn is configured and running, you can set up Nginx to connect with it and serve your static files. This guide is going to assume that you have Nginx already configured and that you are using separate `server` blocks for the sites hosted through it. It is also going to include some SSL info. 
If you want to learn how to get free SSL certificates for your site, take a look at our [LetsEncrypt Guide][8].
```
# Set up the connection to Gunicorn
upstream yourproject-gunicorn {
server unix:/tmp/gunicorn1.sock fail_timeout=0;
}
# Redirect unencrypted traffic to the encrypted site
server {
listen 80;
server_name yourwebsite.com;
return 301 https://yourwebsite.com$request_uri;
}
# The main server block
server {
# Set the port to listen on and specify the domain to listen for
listen 443 default ssl;
client_max_body_size 4G;
server_name yourwebsite.com;
# Specify log locations
access_log /var/log/nginx/yourwebsite.access_log main;
error_log /var/log/nginx/yourwebsite.error_log info;
# Point Nginx to your SSL certs
ssl on;
ssl_certificate /etc/letsencrypt/live/yourwebsite.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/yourwebsite.com/privkey.pem;
# Set your root directory
root /var/www/yourvirtualenv/yourproject;
# Point Nginx at your static files
location /static/ {
# Autoindex the files to make them browsable if you want
autoindex on;
# The location of your files
alias /var/www/yourvirtualenv/yourproject/static/;
# Set up caching for your static files
expires 1M;
access_log off;
add_header Cache-Control "public";
proxy_ignore_headers "Set-Cookie";
}
# Point Nginx at your uploaded files
location /media/ {
# Autoindex the files if you want
autoindex on;
# The location of your uploaded files
alias /var/www/yourvirtualenv/yourproject/media/;
# Set up caching for your uploaded files
expires 1M;
access_log off;
add_header Cache-Control "public";
proxy_ignore_headers "Set-Cookie";
}
location / {
# Try your static files first, then redirect to Gunicorn
try_files $uri @proxy_to_app;
}
# Pass off requests to Gunicorn
location @proxy_to_app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://yourproject-gunicorn;
}
# Caching for HTML, XML, and JSON
location ~* \.(html?|xml|json)$ {
expires 1h;
}
# Caching for all other static assets
location ~* \.(jpg|jpeg|png|gif|ico|css|js|ttf|woff2)$ {
expires 1M;
access_log off;
add_header Cache-Control "public";
proxy_ignore_headers "Set-Cookie";
}
}
```
Okay, so that's a bit much, and there can be a lot more. The important points to note are the `upstream` block that points to Gunicorn and the `location` blocks that pass traffic to Gunicorn. Most of the rest is fairly optional, but you should do it in some form. The comments in the configuration should help you with the specifics. 
Once that file is saved, you can restart Nginx for the changes to take effect.
```
# systemctl restart nginx
```
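A configuration typo will keep Nginx from coming back up after a restart, so it can be worth validating first (a sketch; a standard systemd-based Nginx install is assumed):

```shell
# Test the configuration syntax, and only restart if the check passes
sudo nginx -t && sudo systemctl restart nginx
```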
Once Nginx comes back online, your site should be accessible via your domain.
### Closing Thoughts
There is much more that can be done with Nginx, if you want to dig deep. The configurations provided, though, are a good starting point and are something you can actually use. If you're used to Apache and bloated PHP applications, the speed of a server configuration like this should come as a pleasant surprise.
--------------------------------------------------------------------------------
via: https://linuxconfig.org/hosting-django-with-nginx-and-gunicorn-on-linux
作者:[Nick Congleton][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://linuxconfig.org/hosting-django-with-nginx-and-gunicorn-on-linux
[1]:https://linuxconfig.org/hosting-django-with-nginx-and-gunicorn-on-linux#h2-1-installation
[2]:https://linuxconfig.org/hosting-django-with-nginx-and-gunicorn-on-linux#h2-2-configuration
[3]:https://linuxconfig.org/hosting-django-with-nginx-and-gunicorn-on-linux#h2-3-running
[4]:https://linuxconfig.org/hosting-django-with-nginx-and-gunicorn-on-linux#h1-introduction
[5]:https://linuxconfig.org/hosting-django-with-nginx-and-gunicorn-on-linux#h2-gunicorn
[6]:https://linuxconfig.org/hosting-django-with-nginx-and-gunicorn-on-linux#h3-nginx
[7]:https://linuxconfig.org/hosting-django-with-nginx-and-gunicorn-on-linux#h4-closing-thoughts
[8]:https://linuxconfig.org/generate-ssl-certificates-with-letsencrypt-debian-linux

View File

@ -1,3 +1,5 @@
translating by Flowsnow
# [Use tmux for a more powerful terminal][3]
@ -45,7 +47,7 @@ Stretch your terminal window to make it much larger. Now lets experiment with
* Hit  _Ctrl+b, “_  to split the current single pane horizontally. Now you have two command line panes in the window, one on top and one on bottom. Notice that the new bottom pane is your active pane.
* Hit  _Ctrl+b, %_  to split the current pane vertically. Now you have three command line panes in the window. The new bottom right pane is your active pane.
![tmux window with three panes](https://cdn.fedoramagazine.org/wp-content/uploads/2017/01/Screenshot-from-2017-02-04-12-54-59.png)
![tmux window with three panes](https://cdn.fedoramagazine.org/wp-content/uploads/2017/01/Screenshot-from-2017-02-04-12-54-59.png)
Notice the highlighted border around your current pane. To navigate around panes, do any of the following:
@ -119,9 +121,9 @@ via: https://fedoramagazine.org/use-tmux-more-powerful-terminal/
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://fedoramagazine.org/author/pfrields/
[1]:http://man.openbsd.org/OpenBSD-current/man1/tmux.1
[2]:https://pragprog.com/book/bhtmux2/tmux-2
[3]:https://fedoramagazine.org/use-tmux-more-powerful-terminal/
[4]:http://www.cryptonomicon.com/beginning.html
[5]:https://fedoramagazine.org/howto-use-sudo/
[a]: https://fedoramagazine.org/author/pfrields/
[1]: http://man.openbsd.org/OpenBSD-current/man1/tmux.1
[2]: https://pragprog.com/book/bhtmux2/tmux-2
[3]: https://fedoramagazine.org/use-tmux-more-powerful-terminal/
[4]: http://www.cryptonomicon.com/beginning.html
[5]: https://fedoramagazine.org/howto-use-sudo/

View File

@ -1,82 +0,0 @@
Yoo-4x translating
# [CentOS Vs. Ubuntu][5]
[
![centos vs. ubuntu](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/centos-vs-ubuntu_orig.jpg)
][4]Linux options available are almost “limitless” because anyone can build a distribution, either by changing an already existing distro or via a new [Linux From Scratch][7] (LFS). Our choice of a Linux distribution depends on its user interface, file system, package distribution, new feature options, and even its update period and maintenance.
In this article we will talk about two big Linux distributions; specifically, the differences between them, where one is better than the other, and other features.
### What is CentOS?
CentOS ( _Community Enterprise Operating System_ ) is a community-supported distribution derived from Red Hat Enterprise Linux (RHEL) and compatible with it, so we can say that CentOS is a free version of RHEL. Every release is maintained for 10 years, and a new version is released every 2 years. It was on January 14th, 2014 that CentOS announced its official joining with Red Hat, while staying independent from RHEL under a new CentOS board.
Also Read - [How To Install CentOS?][1]
### History and first release of CentOS
[CentOS][8] was first released in 2004 as cAOs Linux which was an RPM-based distribution and was community maintained and managed.
It combined aspects of Debian, Red Hat Linux/Fedora and FreeBSD in a way that was stable enough for servers and clusters in a life cycle of 3 to 5 years. It was a part of a larger organization (the CAOS Foundation) with a group of open source developers[1].
In June 2006, TAO Linux, another RHEL clone developed by David Parsley, announced its retirement and its rolling into the development of CentOS. The migration didn't affect existing TAO users, as they were able to migrate just by upgrading their systems using yum update.
In January 2014 Red Hat started sponsoring CentOS Project transferring the ownership and trademarks to it.
[[1\. Open Source Software][9]]
### CentOS Design
CentOS is essentially a clone of the paid Red Hat product, RHEL (Red Hat Enterprise Linux). RHEL publishes its source code, which is then modified (the branding and logos are removed) and released as the final CentOS product.
### Ubuntu
Ubuntu is a Linux operating system based on Debian, currently used on desktops, servers, smartphones, and tablets. Ubuntu is developed by Canonical Ltd, a company based in the UK, founded and funded by South African entrepreneur Mark Shuttleworth.
Also Read - [10 Things To Do After Installing Ubuntu 16.10][2]
### Ubuntu Design
Ubuntu is an open source distro with many contributions from developers around the world. Over the years it has evolved to a state where its interface has become more intuitive and modern, the whole system has become faster in response and more secure, with tons of applications to download.
Since it is based on [Debian][10], it supports .deb packages as well as the more recent and more secure [snap package format (snappy)][11].
This new packaging system allows applications to be delivered with all dependencies satisfied.
Also Read - [LinuxAndUbuntu Review Of Unity 8 In Ubuntu 16.10][3]
### Differences between CentOS and Ubuntu
* While Ubuntu is based on Debian, CentOS is based on RHEL;
* Ubuntu uses .deb and .snap packages while CentOS uses .rpm and flatpak;
* Ubuntu uses apt for updates while CentOS uses yum;
* CentOS seems to be more stable because it doesn't update its packages as often as Ubuntu, but this doesn't mean that Ubuntu is less secure;
* Ubuntu has more documentation and free support for problem solving and information;
* Ubuntu Server has more support for cloud and container deployments.
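For illustration, the routine update commands on each side look like this (not from the article; default repositories are assumed):

```shell
# Ubuntu: refresh package lists, then upgrade
sudo apt update && sudo apt upgrade
# CentOS: yum combines both steps
sudo yum update
```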
### Conclusion
Regardless of your choice, **Ubuntu or CentOS**, both are very good and stable distros. If you want a distro with a short release cycle, stick with Ubuntu; if you want a distro that doesn't change its packages so often, go with CentOS. Leave your comments below and tell us which one you prefer.
--------------------------------------------------------------------------------
via: http://www.linuxandubuntu.com/home/centos-vs-ubuntu
作者:[linuxandubuntu.com][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.linuxandubuntu.com/home/centos-vs-ubuntu
[1]:http://www.linuxandubuntu.com/home/how-to-install-centos
[2]:http://www.linuxandubuntu.com/home/10-things-to-do-after-installing-ubuntu-16-04-xenial-xerus
[3]:http://www.linuxandubuntu.com/home/linuxandubuntu-review-of-unity-8-preview-in-ubuntu-1610
[4]:http://www.linuxandubuntu.com/home/centos-vs-ubuntu
[5]:http://www.linuxandubuntu.com/home/centos-vs-ubuntu
[6]:http://www.linuxandubuntu.com/home/centos-vs-ubuntu#comments
[7]:http://www.linuxandubuntu.com/home/how-to-create-a-linux-distro
[8]:http://www.linuxandubuntu.com/home/10-things-to-do-after-installing-centos
[9]:https://en.wikipedia.org/wiki/Open-source_software
[10]:https://www.debian.org/
[11]:https://en.wikipedia.org/wiki/Snappy_(package_manager)

View File

@ -1,89 +0,0 @@
ucasFL translating
How to Install Ubuntu with Separate Root and Home Hard Drives
============================================================
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/02/ubuntu-feature-image.jpg "How to Install Ubuntu with Separate Root and Home Hard Drivess")
When building a Linux installation, there are two options. The first option is to find a super-fast solid state drive. This will ensure very fast boot times and overall speed when accessing data. The second option is to go for a slower but beefier spinning disk hard drive: one with fast RPMs and a large amount of storage. This ensures a massive amount of storage for applications and data.
However, as some Linux users are aware, [solid state drives][10] are nice, but expensive, and spinning disk drives have a lot of storage but tend to be slow. What if I told you that it was possible to have both? A super-fast, modern solid state drive powering the core of your Linux and a large spinning disk drive for all the data.
In this article we'll go over how to install Ubuntu Linux with separate root and home hard drives, with the root folder on the SSD and the home folder on the spinning disk hard drive.
### No extra hard drives? Try SD cards!
![ubuntu-sd-card](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/02/ubuntu-sd-card.jpg "ubuntu-sd-card")
Setting up a multi-drive Linux installation is great and something advanced users should get behind. However, there is another reason for users to do a setup like this: low-storage-capacity laptops. Maybe you have a cheap laptop with Linux installed on it. It's not much, but the laptop has an SD card slot.
This guide is for those types of computers as well. Follow this guide, and instead of a second hard drive, maybe go out and buy a fast and speedy SD card for the laptop, and use that as a home folder. This tutorial will work for that use case too!
### Making the USB disk
Start out by heading over to [this website][11] to download the latest version of Ubuntu Linux. Then [download][12] the Etcher USB imaging tool. This is a very easy-to-use tool and supports all major operating systems. You will also need a USB drive of at least 2 GB in size.
![ubuntu-browse-for-ubuntu-iso](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/02/ubuntu-browse-for-ubuntu-iso.jpg "ubuntu-browse-for-ubuntu-iso")
Install Etcher, then launch it. Make an image by clicking the “Select Image” button. This will prompt the user to browse for the ISO image. Find the Ubuntu ISO file downloaded earlier and select it. From here, insert the USB drive. Etcher should automatically select it. Then, click the “Flash!” button. The Ubuntu live disk creation process will begin.
To boot into Ubuntu, configure the BIOS. This is needed so that the computer will boot the newly-created Ubuntu live USB. To get into the BIOS, reboot with the USB in, and press the correct key (Del, F2, or whatever the key is on your particular machine). Find where the option is to enable booting from USB and enable it.
If your PC does not support booting from USB, burn the Ubuntu image to a DVD.
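If you prefer the command line over Etcher, the same image can be written with dd (a sketch: the ISO filename and /dev/sdX are placeholders, and writing to the wrong device will destroy its data, so confirm the device with lsblk first):

```shell
# Identify the USB drive before writing anything
lsblk
# Write the downloaded ISO to the USB drive (replace /dev/sdX!)
sudo dd if=ubuntu-desktop-amd64.iso of=/dev/sdX bs=4M status=progress conv=fsync
```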
### Installation
When Ubuntu first loads, the welcome screen appears with two options. Select the “Install Ubuntu” button to proceed. On the next page the Ubiquity installation tool asks the user to select some options. These options aren't mandatory and can be ignored. However, it is recommended that both boxes be checked, as they save time after the installation, specifically with the installation of MP3 codecs and updating the system.
![ubuntu-preparing-to-install](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/02/ubuntu-preparing-to-install.jpg "ubuntu-preparing-to-install")
After selecting both boxes on the “Preparing to install Ubuntu” page, it is time to choose the installation type. There are several, but for this tutorial the required option is the custom one. To get to the custom installation page, select the “Something else” option, then click Continue.
This reveals Ubuntu's custom partitioning tool. It shows every disk that Ubuntu can be installed to. If two hard drives are available, both will appear; if an SD card is plugged in, it will appear as well.
Select the hard drive you plan to use for the root file system. If it already has a partition table, the editor will show the existing partitions; delete all of them with the tool. If the drive isn't formatted and has no partitions, select it with the mouse, then click “New Partition Table.” Do this for all drives so that each has a partition table.
![ubuntu-create-mount-point](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/02/ubuntu-create-mount-point.jpg "ubuntu-create-mount-point")
Now that both drives have partition tables (and any old partitions deleted), the configuration can begin. Select the free space under drive one, then click the plus sign button to create a new partition. This brings up the “Create partition” window. Allow the tool to use the entire hard drive, then go to the “Mount Point” drop-down menu. Select `/` as the mount point, then click the OK button to confirm the settings.
Do the same with the second drive, this time selecting `/home` as the mount point. With both drives set up, choose the drive the boot loader will be installed to, then click the “Install now” button to start the installation process.
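Once installation completes, the split shows up in `/etc/fstab`. A sketch of what the installer might generate (the UUIDs below are placeholders; the installer writes the real ones for your drives):

```
# /etc/fstab (illustrative sketch; UUIDs are placeholders)
# <file system>                            <mount point>  <type>  <options>          <dump> <pass>
UUID=1111aaaa-0000-4000-8000-000000000001  /              ext4    errors=remount-ro  0      1
UUID=2222bbbb-0000-4000-8000-000000000002  /home          ext4    defaults           0      2
```

Keeping `/home` as its own entry like this is what lets your personal files survive a later reinstall of the root partition.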
![ubuntu-multi-drive-layout](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/02/ubuntu-multi-drive-layout.jpg "ubuntu-multi-drive-layout")
The rest of the installation is the standard process: create a username, select the time zone, etc.
**Notes:** Are you installing in UEFI mode? A 512 MB FAT32 partition will need to be created for booting. Do this before creating any other partitions, and be sure to select “/boot” as the mount point for this partition as well.
If you require Swap, create a partition on the first drive before making the partition used for `/`. This can be done by clicking the “+” (plus) button, entering the desired size, and selecting “swap area” in the drop-down.
### Conclusion
The best thing about Linux is how configurable it is. How many other operating systems let you split up the file system onto separate hard drives? Not many, that's for sure! I hope that with this guide you'll realize the true power Ubuntu can offer!
Would you use multiple drives in your Ubuntu installation? Let us know below in the comments.
--------------------------------------------------------------------------------
via: https://www.maketecheasier.com/install-ubuntu-with-different-root-home-hard-drives/
作者:[Derrik Diener][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.maketecheasier.com/author/derrikdiener/
[1]:https://www.maketecheasier.com/author/derrikdiener/
[2]:https://www.maketecheasier.com/install-ubuntu-with-different-root-home-hard-drives/#respond
[3]:https://www.maketecheasier.com/category/linux-tips/
[4]:http://www.facebook.com/sharer.php?u=https%3A%2F%2Fwww.maketecheasier.com%2Finstall-ubuntu-with-different-root-home-hard-drives%2F
[5]:http://twitter.com/share?url=https%3A%2F%2Fwww.maketecheasier.com%2Finstall-ubuntu-with-different-root-home-hard-drives%2F&text=How+to+Install+Ubuntu+with+Separate+Root+and+Home+Hard+Drives
[6]:mailto:?subject=How%20to%20Install%20Ubuntu%20with%20Separate%20Root%20and%20Home%20Hard%20Drives&body=https%3A%2F%2Fwww.maketecheasier.com%2Finstall-ubuntu-with-different-root-home-hard-drives%2F
[7]:https://www.maketecheasier.com/byb-dimmable-eye-care-desk-lamp/
[8]:https://www.maketecheasier.com/download-appx-files-from-windows-store/
[9]:https://support.google.com/adsense/troubleshooter/1631343
[10]:http://www.maketecheasier.com/tag/ssd
[11]:http://ubuntu.com/download
[12]:https://etcher.io/

Understanding the difference between sudo and su
============================================================
### On this page
1. [The su command in Linux][7]
    1. [su -][1]
    2. [su -c][2]
2. [Sudo vs Su][8]
    1. [Password][3]
    2. [Default behavior][4]
    3. [Logging][5]
    4. [Flexibility][6]
3. [Sudo su][9]
In one of our [earlier articles][11], we discussed the 'sudo' command in detail. Toward the end of that tutorial, there was a brief mention of another similar command, 'su'. In this article, we will discuss the 'su' command in detail as well as how it differs from the 'sudo' command.
But before we do that, please note that all the instructions and examples in this tutorial have been tested on Ubuntu 14.04 LTS.
### The su command in Linux
The main work of the su command is to let you switch to some other user during a login session. In other words, the tool lets you assume the identity of some other user without having to logout and then login (as that user).
The su command is mostly used to switch to the superuser/root account (as root privileges are frequently required while working on the command line), but - as already mentioned - you can use it to switch to any other, non-root user as well.
Here's how you can use this command to switch to the root user:
[
![The su command without commandline options](https://www.howtoforge.com/images/sudo-vs-su/su-command.png)
][12]
The password that this command requires is the password of the root user. In general, the su command requires you to enter the password of the target user. After the correct password is entered, the tool starts a sub-session inside the existing session on the terminal.
### su -
There's another way to switch to the root user: run the 'su -' command:
[
![The su - command](https://www.howtoforge.com/images/sudo-vs-su/su-hyphen-command.png)
][13]
Now, what's the difference between 'su' and 'su -'? Well, the former keeps the environment of the old/original user even after the switch to root has been made, while the latter creates a new environment (as dictated by the ~/.bashrc of the root user), similar to the case when you explicitly log in as the root user from the log-in screen.
The man page of 'su' also makes it clear:
```
The optional argument - may be used to provide an environment similar to what the user would expect had the user logged in directly.
```
So, you'll agree that logging in with 'su -' makes more sense. But as the 'su' command also exists, one might wonder when that's useful. The following excerpt - taken from the [ArchLinux wiki website][14] - gives a good idea about the benefits and pitfalls of the 'su' command:
* It sometimes can be advantageous for a system administrator to use the shell account of an ordinary user rather than its own. In particular, occasionally the most efficient way to solve a user's problem is to log into that user's account in order to reproduce or debug the problem.
* However, in many situations it is not desirable, or it can even be dangerous, for the root user to be operating from an ordinary user's shell account and with that account's environmental variables rather than from its own. While inadvertently using an ordinary user's shell account, root could install a program or make other changes to the system that would not have the same result as if they were made while using the root account. For instance, a program could be installed that could give the ordinary user power to accidentally damage the system or gain unauthorized access to certain data.
Note: In case you want to pass more arguments after - in 'su -', then you should use the -l command line option that the command offers (instead of -). Here's the definition of - and the -l command line option:
```
-, -l, --login
Provide an environment similar to what the user would expect had the user logged in directly.
When - is used, it must be specified as the last su option. The other forms (-l and --login) do not have this restriction.
```
### su -c
There's another option of the 'su' command that's worth mentioning: -c. It lets you provide a command that you want to run after switching to the target user.
The man page of 'su' explains it as:
```
-c, --command COMMAND
Specify a command that will be invoked by the shell using its -c.
The executed command will have no controlling terminal. This option cannot be used to execute interactive programs which need a controlling TTY.
```
Consider the following example template:
su [target-user] -c [command-to-run]
So in this case, the 'command-to-run' will be executed as:
[shell] -c [command-to-run]
Where 'shell' is the target user's login shell as defined in the /etc/passwd file.
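As a concrete sketch (the username 'mark' is hypothetical), running a single command as another user looks like the commented line below; the underlying shell `-c` mechanism can also be observed directly, without switching users at all:

```shell
# Run one command as user 'mark' (hypothetical), then return to your session:
#   su mark -c 'whoami'      # prompts for mark's password and prints: mark
#
# su hands the command to the target user's login shell via that shell's
# own -c option, which behaves like this:
sh -c 'echo running via the shell -c option'
```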
### Sudo vs Su
Now since we have discussed the basics of the 'su' command as well, it's time we discuss the differences between the 'sudo' and the 'su' commands.
### Password
The primary difference between the two is the password they require: while 'sudo' requires the current user's password, 'su' requires you to enter the root user's password.
Quite clearly, 'sudo' is a better alternative between the two as far as security is concerned. For example, consider the case of computer being used by multiple users who also require root access. Using 'su' in such a scenario means sharing the root password with all of them, which is not a good practice in general.
Moreover, in case you want to revoke the superuser/root access of a particular user, the only way is to change the root password and then redistribute the new root password among all the other users.
With sudo, on the other hand, you can handle both these scenarios effortlessly. Given that 'sudo' requires users to enter their own password, you don't need to share the root password with all the users in the first place. And to stop a particular user from accessing root privileges, all you have to do is tweak the corresponding entry in the 'sudoers' file.
### Default behavior
The other difference between the two commands is their default behavior. While 'sudo' only allows you to run a single command with elevated privileges, the 'su' command launches a new shell, allowing you to run as many commands as you want with root privileges until you explicitly exit that shell.
So the default behavior of the 'su' command is potentially dangerous, given the possibility that the user can forget they are working as root and might inadvertently make some irrecoverable changes (such as running 'rm -rf' in the wrong directory). For a detailed discussion on why it's not encouraged to always work as root, head [here][10].
### Logging
Although commands run through 'sudo' are executed as the target user (which is 'root' by default), they are tagged with the sudoer's user name. But in the case of 'su', it's not possible to directly trace what a user did after they su'd to the root account.
### Flexibility
The 'sudo' command is far more flexible in that you can even limit the commands that sudoers have access to. In other words, users with access to 'sudo' can be given access only to the commands required for their job. With 'su', that's not possible: you either have the privilege to do everything or nothing.
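As a sketch of that flexibility (the username 'alice' and the command paths are hypothetical), a sudoers entry restricting a user to just the commands their job requires might look like this; such entries should always be edited with `visudo`, never directly:

```
# /etc/sudoers fragment (hypothetical user 'alice'):
# alice may run only these two commands as root, nothing else
Cmnd_Alias WEBCTL = /usr/sbin/service nginx restart, /usr/bin/systemctl status nginx
alice   ALL=(root) WEBCTL
```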
### Sudo su
Presumably due to the potential risks involved with using 'su' or logging directly as root, some Linux distributions - like Ubuntu - disable the root user account by default. Users are encouraged to use 'sudo' whenever they need root privileges.
However, you can still run 'su' successfully, i.e., without entering the root password. All you need to do is run the following command:
sudo su
Since you're running the command with 'sudo', you'll only be required to enter your password. So once that is done, the 'su' command will be run as root, meaning it won't ask for any passwords.
**PS**: In case you want to enable the root account on your system (although that's strongly discouraged because you can always use 'sudo' or 'sudo su'), you'll have to set the root password manually, which you can do using the following command:
sudo passwd root
### Conclusion
Both this as well as our previous tutorial (which focuses on 'sudo') should give you a good idea about the available tools that let you do tasks that require escalated (or a completely different set of) privileges. In case you have something to share about 'su' or 'sudo', or want to share your own experience, you are welcome to do that in comments below.
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/sudo-vs-su/
作者:[Himanshu Arora][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.howtoforge.com/tutorial/sudo-vs-su/
[1]:https://www.howtoforge.com/tutorial/sudo-vs-su/#su-
[2]:https://www.howtoforge.com/tutorial/sudo-vs-su/#su-c
[3]:https://www.howtoforge.com/tutorial/sudo-vs-su/#password
[4]:https://www.howtoforge.com/tutorial/sudo-vs-su/#default-behavior
[5]:https://www.howtoforge.com/tutorial/sudo-vs-su/#logging
[6]:https://www.howtoforge.com/tutorial/sudo-vs-su/#flexibility
[7]:https://www.howtoforge.com/tutorial/sudo-vs-su/#the-su-command-in-linux
[8]:https://www.howtoforge.com/tutorial/sudo-vs-su/#sudo-vs-su
[9]:https://www.howtoforge.com/tutorial/sudo-vs-su/#sudo-su
[10]:http://askubuntu.com/questions/16178/why-is-it-bad-to-login-as-root
[11]:https://www.howtoforge.com/tutorial/sudo-beginners-guide/
[12]:https://www.howtoforge.com/images/sudo-vs-su/big/su-command.png
[13]:https://www.howtoforge.com/images/sudo-vs-su/big/su-hyphen-command.png
[14]:https://wiki.archlinux.org/index.php/Su

An introduction to the Linux boot and startup processes
============================================================
> Ever wondered what it takes to get your system ready to run applications? Here's what is going on under the hood.
![The boot process](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/linux_boot.png?itok=pSGmf8Ca "The boot process")
> Image by: [Penguin][15], [Boot][16]. Modified by Opensource.com. [CC BY-SA 4.0][17].
Understanding the Linux boot and startup processes is important to being able to both configure Linux and to resolving startup issues. This article presents an overview of the bootup sequence using the [GRUB2 bootloader][18] and the startup sequence as performed by the [systemd initialization system][19].
In reality, there are two sequences of events required to boot a Linux computer and make it usable: _boot_ and _startup_. The _boot_ sequence starts when the computer is turned on and is completed when the kernel is initialized and systemd is launched. The _startup_ process then takes over and finishes the task of getting the Linux computer into an operational state.
Overall, the Linux boot and startup process is fairly simple to understand. It comprises the following steps, which will be described in more detail in the following sections.
* BIOS POST
* Boot loader (GRUB2)
* Kernel initialization
* Start systemd, the parent of all processes.
Note that this article covers GRUB2 and systemd because they are the current boot loader and initialization software for most major distributions. Other software options have been used historically and are still found in some distributions.
### The boot process
The boot process can be initiated in one of a couple of ways. First, if power is turned off, turning it on will begin the boot process. Second, if the computer is already running, a local user, whether root or an unprivileged user, can initiate the boot sequence by using the GUI or command line to request a reboot. A reboot first performs a shutdown and then restarts the computer.
### BIOS POST
The first step of the Linux boot process really has nothing whatever to do with Linux. This is the hardware portion of the boot process and is the same for any operating system. When power is first applied to the computer, it runs the POST (Power On Self Test), which is part of the BIOS (Basic Input/Output System).
When IBM designed the first PC back in 1981, BIOS was designed to initialize the hardware components. POST is the part of BIOS whose task is to ensure that the computer hardware functions correctly. If POST fails, the computer may not be usable, and so the boot process does not continue.
BIOS POST checks the basic operability of the hardware and then it issues a BIOS [interrupt][20], INT 13H, which locates the boot sectors on any attached bootable devices. The first boot sector it finds that contains a valid boot record is loaded into RAM and control is then transferred to the code that was loaded from the boot sector.
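A valid boot record is identified by the 0x55AA signature in the final two bytes of the 512-byte sector. As a sketch, that check can be reproduced with standard tools (reading a real disk such as /dev/sda requires root; the function works on any 512-byte sector image):

```shell
# Check whether a 512-byte sector image ends with the 0x55AA boot signature
check_boot_sig() {
    # read bytes 510-511 and render them as lowercase hex
    sig=$(dd if="$1" bs=1 skip=510 count=2 2>/dev/null | od -An -tx1 | tr -d ' \n')
    [ "$sig" = "55aa" ]
}
# e.g. as root: check_boot_sig /dev/sda && echo "valid boot record"
```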
The boot sector is really the first stage of the boot loader. There are three boot loaders used by most Linux distributions: GRUB, GRUB2, and LILO. GRUB2 is the newest and is used much more frequently these days than the older options.
### GRUB2
GRUB2 stands for "GRand Unified Bootloader, version 2" and it is now the primary bootloader for most current Linux distributions. GRUB2 is the program which makes the computer just smart enough to find the operating system kernel and load it into memory. Because it is easier to write and say GRUB than GRUB2, I may use the term GRUB in this document but I will be referring to GRUB2 unless specified otherwise.
GRUB has been designed to be compatible with the [multiboot specification][21] which allows GRUB to boot many versions of Linux and other free operating systems; it can also chain load the boot record of proprietary operating systems.
GRUB can also allow the user to choose to boot from among several different kernels for any given Linux distribution. This affords the ability to boot to a previous kernel version if an updated one fails somehow or is incompatible with an important piece of software. GRUB can be configured using the /boot/grub/grub.conf file.
GRUB1 is now considered to be legacy and has been replaced in most modern distributions with GRUB2, which is a rewrite of GRUB1. Red Hat based distros upgraded to GRUB2 around Fedora 15 and CentOS/RHEL 7. GRUB2 provides the same boot functionality as GRUB1, but GRUB2 is also a mainframe-like command-based pre-OS environment and allows more flexibility during the pre-boot phase. GRUB2 is configured with /boot/grub2/grub.cfg.
The primary function of either GRUB is to get the Linux kernel loaded into memory and running. Both versions of GRUB work essentially the same way and have the same three stages, but I will use GRUB2 for this discussion of how GRUB does its job. The configuration of GRUB or GRUB2 and the use of GRUB2 commands is outside the scope of this article.
Although GRUB2 does not officially use the stage notation for the three stages of GRUB2, it is convenient to refer to them in that way, so I will in this article.
#### Stage 1
As mentioned in the BIOS POST section, at the end of POST, BIOS searches the attached disks for a boot record, usually located in the Master Boot Record (MBR). It loads the first one it finds into memory and then starts executing it. The bootstrap code, i.e., GRUB2 stage 1, is very small because it must fit into the first 512-byte sector on the hard drive along with the partition table. The total amount of space allocated for the actual bootstrap code in a [classic generic MBR][22] is 446 bytes. The 446-byte file for stage 1 is named boot.img and does not contain the partition table, which is added to the boot record separately.
Because the boot record must be so small, it is also not very smart and does not understand filesystem structures. Therefore the sole purpose of stage 1 is to locate and load stage 1.5. To accomplish this, stage 1.5 of GRUB must be located in the space between the boot record itself and the first partition on the drive. After loading GRUB stage 1.5 into RAM, stage 1 turns control over to stage 1.5.
#### Stage 1.5
As mentioned above, stage 1.5 of GRUB must be located in the space between the boot record itself and the first partition on the disk drive. This space was historically left unused for technical reasons. The first partition on the hard drive begins at sector 63 and, with the MBR in sector 0, that leaves 62 512-byte sectors (31,744 bytes) in which to store the core.img file, which is stage 1.5 of GRUB. The core.img file is 25,389 bytes, so there is plenty of space available between the MBR and the first disk partition in which to store it.
Because of the larger amount of code that can be accommodated for stage 1.5, it can have enough code to contain a few common filesystem drivers, such as the standard EXT and other Linux filesystems, FAT, and NTFS. The GRUB2 core.img is much more complex and capable than the older GRUB1 stage 1.5. This means that stage 2 of GRUB2 can be located on a standard EXT filesystem, but it cannot be located on a logical volume. So the standard location for the stage 2 files is in the /boot filesystem, specifically /boot/grub2.
Note that the /boot directory must be located on a filesystem that is supported by GRUB. Not all filesystems are. The function of stage 1.5 is to begin execution with the filesystem drivers necessary to locate the stage 2 files in the /boot filesystem and load the needed drivers.
#### Stage 2
All of the files for GRUB stage 2 are located in the /boot/grub2 directory and several subdirectories. Unlike stages 1 and 1.5, stage 2 does not have an image file. Instead, it consists mostly of runtime kernel modules that are loaded as needed from the /boot/grub2/i386-pc directory.
The function of GRUB2 stage 2 is to locate and load a Linux kernel into RAM and turn control of the computer over to the kernel. The kernel and its associated files are located in the /boot directory. The kernel files are identifiable as they are all named starting with vmlinuz. You can list the contents of the /boot directory to see the currently installed kernels on your system.
GRUB2, like GRUB1, supports booting from one of a selection of Linux kernels. The Red Hat package manager, DNF, supports keeping multiple versions of the kernel so that if a problem occurs with the newest one, an older version of the kernel can be booted. By default, GRUB provides a pre-boot menu of the installed kernels, including a rescue option and, if configured, a recovery option.
Stage 2 of GRUB2 loads the selected kernel into memory and turns control of the computer over to the kernel.
### Kernel
All of the kernels are in a self-extracting, compressed format to save space. The kernels are located in the /boot directory, along with an initial RAM disk image, and device maps of the hard drives.
After the selected kernel is loaded into memory and begins executing, it must first extract itself from the compressed version of the file before it can perform any useful work. Once the kernel has extracted itself, it loads [systemd][23], which is the replacement for the old [SysV init][24] program, and turns control over to it.
This is the end of the boot process. At this point, the Linux kernel and systemd are running but unable to perform any productive tasks for the end user because nothing else is running.
### The startup process
The startup process follows the boot process and brings the Linux computer up to an operational state in which it is usable for productive work.
### systemd
systemd is the mother of all processes and it is responsible for bringing the Linux host up to a state in which productive work can be done. Some of its functions, which are far more extensive than the old init program, are to manage many aspects of a running Linux host, including mounting filesystems, and starting and managing system services required to have a productive Linux host. Any of systemd's tasks that are not related to the startup sequence are outside the scope of this article.
First, systemd mounts the filesystems as defined by **/etc/fstab**, including any swap files or partitions. At this point, it can access the configuration files located in /etc, including its own. It uses its configuration file, **/etc/systemd/system/default.target**, to determine which state, or target, it should boot the host into. The **default.target** file is only a symbolic link to the true target file. For a desktop workstation, this is typically going to be **graphical.target**, which is equivalent to **runlevel 5** in the old SystemV init. For a server, the default is more likely to be **multi-user.target**, which is like **runlevel 3** in SystemV. The **emergency.target** is similar to single-user mode.
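On a live host you can inspect this with `systemctl get-default` or `ls -l /etc/systemd/system/default.target`. The symlink mechanism itself can be sketched with a throwaway directory (the /tmp paths below are illustrative stand-ins, not the real systemd tree):

```shell
# Mimic how default.target is just a symbolic link to the real target file
mkdir -p /tmp/systemd-demo
touch /tmp/systemd-demo/graphical.target
ln -sf /tmp/systemd-demo/graphical.target /tmp/systemd-demo/default.target
readlink /tmp/systemd-demo/default.target   # shows which target "default" points to
```

Changing the default boot state is then just a matter of repointing that one symlink, which is what `systemctl set-default` does for you.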
Note that targets and services are systemd units.
Table 1, below, is a comparison of the systemd targets with the old SystemV startup runlevels. The **systemd target aliases** are provided by systemd for backward compatibility. The target aliases allow scripts—and many sysadmins like myself—to use SystemV commands like **init 3** to change runlevels. Of course, the SystemV commands are forwarded to systemd for interpretation and execution.
|SystemV Runlevel | systemd target | systemd target aliases | Description |
|:--|:--|:--|:--|
|   | halt.target |   | Halts the system without powering it down. |
| 0 | poweroff.target | runlevel0.target | Halts the system and turns the power off. |
| S | emergency.target |   | Single user mode. No services are running; filesystems are not mounted. This is the most basic level of operation with only an emergency shell running on the main console for the user to interact with the system. |
| 1 | rescue.target | runlevel1.target | A base system including mounting the filesystems with only the most basic services running and a rescue shell on the main console. |
| 2 |   | runlevel2.target | Multiuser, without NFS but all other non-GUI services running. |
| 3 | multi-user.target | runlevel3.target | All services running but command line interface (CLI) only. |
| 4 |   | runlevel4.target | Unused. |
| 5 | graphical.target | runlevel5.target | multi-user with a GUI. |
| 6 | reboot.target | runlevel6.target | Reboot |
|   | default.target |   | This target is always aliased with a symbolic link to either multi-user.target or graphical.target. systemd always uses the default.target to start the system. The default.target should never be aliased to halt.target, poweroff.target, or reboot.target. |
_Table 1: Comparison of SystemV runlevels with systemd targets and some target aliases._
Each target has a set of dependencies described in its configuration file. systemd starts the required dependencies. These dependencies are the services required to run the Linux host at a specific level of functionality. When all of the dependencies listed in the target configuration files are loaded and running, the system is running at that target level.
systemd also looks at the legacy SystemV init directories to see if any startup files exist there. If so, systemd uses them as configuration files to start the services they describe. The deprecated network service is a good example of a service that still uses SystemV startup files in Fedora.
Figure 1, below, is copied directly from the **bootup** [man page][25]. It shows the general sequence of events during systemd startup and the basic ordering requirements to ensure a successful startup.
The **sysinit.target** and **basic.target** targets can be considered as checkpoints in the startup process. Although systemd has as one of its design goals to start system services in parallel, there are still certain services and functional targets that must be started before other services and targets can be started. These checkpoints cannot be passed until all of the services and targets required by that checkpoint are fulfilled.
So the **sysinit.target** is reached when all of the units on which it depends are completed. All of those units (mounting filesystems, setting up swap files, starting udev, setting the random generator seed, initiating low-level services, and setting up cryptographic services if one or more filesystems are encrypted) must be completed, but within the **sysinit.target** those tasks can be performed in parallel.
The **sysinit.target** starts up all of the low-level services and units required for the system to be marginally functional and that are required to enable moving on to the basic.target.
```
local-fs-pre.target
|
v
(various mounts and (various swap (various cryptsetup
fsck services...) devices...) devices...) (various low-level (various low-level
| | | services: udevd, API VFS mounts:
v v v tmpfiles, random mqueue, configfs,
local-fs.target swap.target cryptsetup.target seed, sysctl, ...) debugfs, ...)
| | | | |
\__________________|_________________ | ___________________|____________________/
\|/
v
sysinit.target
|
____________________________________/|\________________________________________
/ | | | \
| | | | |
v v | v v
(various (various | (various rescue.service
timers...) paths...) | sockets...) |
| | | | v
v v | v rescue.target
timers.target paths.target | sockets.target
| | | |
v \_________________ | ___________________/
\|/
v
basic.target
|
____________________________________/| emergency.service
/ | | |
| | | v
v v v emergency.target
display- (various system (various system
manager.service services services)
| required for |
| graphical UIs) v
| | multi-user.target
| | |
\_________________ | _________________/
\|/
v
graphical.target
```
_Figure 1: The systemd startup map._
After the **sysinit.target** is fulfilled, systemd next starts the **basic.target**, starting all of the units required to fulfill it. The basic target provides some additional functionality by starting units that are required for the next target. These include setting up things like paths to various executable directories, communication sockets, and timers.
Finally, the user-level targets, **multi-user.target** or **graphical.target**, can be initialized. Notice that the **multi-user.target** must be reached before the graphical target dependencies can be met.
The underlined targets in Figure 1 are the usual startup targets. When one of these targets is reached, startup has completed. If **multi-user.target** is the default, then you should see a text-mode login on the console. If **graphical.target** is the default, then you should see a graphical login; the specific GUI login screen you see will depend on the default [display manager][26] you use.
### Issues
I recently had a need to change the default boot kernel on a Linux computer that used GRUB2. I found that some of the commands did not seem to work properly for me, or that I was not using them correctly. I am not yet certain which was the case, and need to do some more research.
The grub2-set-default command did not properly set the default kernel index in the **/etc/default/grub** file, so the desired alternate kernel did not boot. I manually changed **GRUB_DEFAULT=saved** to **GRUB_DEFAULT=2** in /etc/default/grub, where 2 is the index of the installed kernel I wanted to boot. Then I ran **grub2-mkconfig > /boot/grub2/grub.cfg** to create the new GRUB configuration file. This workaround worked as expected and booted the alternate kernel.
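The manual edit described above amounts to this fragment of **/etc/default/grub** (the index 2 was specific to my machine; GRUB counts menu entries from 0, so count your own):

```
# /etc/default/grub (excerpt)
#GRUB_DEFAULT=saved    # original setting, commented out
GRUB_DEFAULT=2         # boot the third menu entry, counting from 0

# then regenerate the configuration:
#   grub2-mkconfig > /boot/grub2/grub.cfg
```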
### Conclusions
GRUB2 and the systemd init system are the key components in the boot and startup phases of most modern Linux distributions. Despite the fact that there has been controversy surrounding systemd especially, these two components work together smoothly to first load the kernel and then to start up all of the system services required to produce a functional Linux system.
Although I do find both GRUB2 and systemd more complex than their predecessors, they are also just as easy to learn and manage. The man pages have a great deal of information about systemd, and freedesktop.org has the complete set of [systemd man pages][27] online. Refer to the resources, below, for more links.
### Additional resources
* [GNU GRUB][6] (Wikipedia)
* [GNU GRUB Manual][7] (GNU.org)
* [Master Boot Record][8] (Wikipedia)
* [Multiboot specification][9] (Wikipedia)
* [systemd][10] (Wikipedia)
* [systemd bootup process][12] (Freedesktop.org)
* [systemd index of man pages][13] (Freedesktop.org)
--------------------------------------------------------------------------------
作者简介:
David Both - David Both is a Linux and Open Source advocate who resides in Raleigh, North Carolina. He has been in the IT industry for over forty years and taught OS/2 for IBM where he worked for over 20 years. While at IBM, he wrote the first training course for the original IBM PC in 1981. He has taught RHCE classes for Red Hat and has worked at MCI Worldcom, Cisco, and the State of North Carolina. He has been working with Linux and Open Source Software for almost 20 years.
---------------------------------------
via: https://opensource.com/article/17/2/linux-boot-and-startup
作者:[David Both ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/dboth
[1]:https://opensource.com/resources/what-is-linux?src=linux_resource_menu
[2]:https://opensource.com/resources/what-are-linux-containers?src=linux_resource_menu
[3]:https://opensource.com/article/16/11/managing-devices-linux?src=linux_resource_menu
[4]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=7016000000127cYAAQ
[5]:https://opensource.com/tags/linux?src=linux_resource_menu
[6]:https://en.wikipedia.org/wiki/GNU_GRUB
[7]:https://www.gnu.org/software/grub/manual/grub.html
[8]:https://en.wikipedia.org/wiki/Master_boot_record
[9]:https://en.wikipedia.org/wiki/Multiboot_Specification
[10]:https://en.wikipedia.org/wiki/Systemd
[11]:https://www.freedesktop.org/software/systemd/man/bootup.html
[12]:https://www.freedesktop.org/software/systemd/man/bootup.html
[13]:https://www.freedesktop.org/software/systemd/man/index.html
[14]:https://opensource.com/article/17/2/linux-boot-and-startup?rate=zi3QD2ADr8eV0BYSxcfeaMxZE3mblRhuswkBOhCQrmI
[15]:https://pixabay.com/en/penguins-emperor-antarctic-life-429136/
[16]:https://pixabay.com/en/shoe-boots-home-boots-house-1519804/
[17]:https://creativecommons.org/licenses/by-sa/4.0/
[18]:https://en.wikipedia.org/wiki/GNU_GRUB
[19]:https://en.wikipedia.org/wiki/Systemd
[20]:https://en.wikipedia.org/wiki/BIOS_interrupt_call
[21]:https://en.wikipedia.org/wiki/Multiboot_Specification
[22]:https://en.wikipedia.org/wiki/Master_boot_record
[23]:https://en.wikipedia.org/wiki/Systemd
[24]:https://en.wikipedia.org/wiki/Init#SysV-style
[25]:http://man7.org/linux/man-pages/man7/bootup.7.html
[26]:https://opensource.com/article/16/12/yearbook-best-couple-2016-display-manager-and-window-manager
[27]:https://www.freedesktop.org/software/systemd/man/index.html
[28]:https://opensource.com/user/14106/feed
[29]:https://opensource.com/article/17/2/linux-boot-and-startup#comments
[30]:https://opensource.com/users/dboth

How to setup a Linux server on Amazon AWS
============================================================
### On this page
1. [Setup a Linux VM in AWS][1]
2. [Connect to an EC2 instance from Windows][2]
AWS (Amazon Web Services) is one of the leading cloud server providers worldwide. You can set up a server within a minute using the AWS platform. On AWS, you can fine-tune many technical details of your server, such as the number of CPUs, memory and HDD space, the type of HDD (a faster SSD or a classic IDE), and so on. And the best thing about AWS is that you pay only for the services you actually use. To get started, AWS provides a special account called "Free tier" where you can use the AWS technology free for one year, with some minor restrictions; for example, you may use the server only up to 750 hours a month, and when you cross this threshold they will charge you. You can check all the related rules on the [aws portal][3].
Since I am writing this post about creating a Linux server on AWS, having a "Free Tier" account is the main prerequisite. To sign up for this account you can use this [link][4]. Note that you need to enter your credit card details while creating the account.
So let's assume that you have created the "free tier" account.
Before we proceed, you must know some of the terminologies in AWS to understand the setup:
1. EC2 (Elastic Compute Cloud): This term is used for the virtual machine.
2. AMI (Amazon Machine Image): The OS image used for an instance.
3. EBS (Elastic Block Store): One of the storage types available in AWS.
Now log in to the AWS console at the location below:
[https://console.aws.amazon.com/][5]
The AWS console will look like this:
[
![Amazon AWS console](https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/aws_console.JPG)
][6]
### Setup a Linux VM in AWS
1\. Create an EC2 (virtual machine) instance: Before installing the OS, you must create a VM in AWS. To create it, click on EC2 under the Compute menu:
[
![Create an EC2 instance](https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/aws_console_ec21.png)
][7]
2\. Now click the "Launch Instance" button under Create Instance.
[
![Launch the EC2 instance](https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/aws_launch_ec2.png)
][8]
3\. Since you are using a free tier account, it is better to select the Free Tier radio button so that AWS will filter the instances that are available for free usage. This keeps you from paying AWS for billed resources.
[
![Select Free Tier instances only](https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/aws_free_tier_radio1.png)
][9]
4\. To proceed further, select the following options:
a. **Choose an AMI**: in the classic instance wizard, select **Red Hat Enterprise Linux 7.2 (HVM), SSD Volume Type**.
b. Select "**t2.micro**" for the instance details.
c. **Configure Instance Details**: Do not change anything, simply click next.
d. **Add Storage**: Do not change anything, simply click next, as we will be using the default 10 GiB hard disk in this case.
e. **Add Tags**: Do not change anything, simply click next.
f. **Configure Security Group**: Select port 22, which is used for SSH, so that you can access this server from anywhere.
[
![Configure AWS server](https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/aws_ssh_port1.png)
][10]
g. **Select the Review and Launch button.**
h. If all the details are OK, press the "**Launch**" button.
i. Once you click the Launch button, a popup window is displayed to create a "key pair", as shown below: select the option "create a new key pair", give the key pair a name, and then download it. You need this key pair while connecting to the server using SSH. At the end, click the "Launch Instance" button.
[
![Create Key pair](https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/aws_key_pair.png)
][11]
j. After clicking the Launch Instance button, go to Services at the top left side. Select Compute --> EC2. Now click the Running Instances link as below:
[
![Go to the running EC2 instance](https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/aws_running_instance.png)
][12]
k. Now you can see that your new VM is ready, with status "running", as shown below. Select the instance and note down the "Public DNS value", which is required to log in to the server.
[
![Public DNS value of the VM](https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/aws_dns_value.png)
][13]
Now you are done with creating a sample Linux VM. To connect to the server, follow the steps below.
### Connect to an EC2 instance from Windows
1\. First of all, you need the PuTTYgen and PuTTY executables to connect to the server from Windows (or the ssh command on Linux). You can download PuTTY by following this [Link][14].
2\. Now open PuTTYgen ("puttygen.exe").
3\. Click the "Load" button, then browse to and select the key pair file (the .pem file) that you downloaded from Amazon above.
4\. Select the "SSH-2 RSA" option and click the "Save private key" button. Select yes in the next popup.
5\. Save the file with the file extension .ppk.
6\. Now open putty.exe. Go to Connection in the menu on the left, select "SSH", and then select "Auth". Click the browse button and select the .ppk file that we created in step 4.
7\. Now click the "Session" menu, paste the DNS value captured in step 'k' of this tutorial into the "Host Name" box, and hit the Open button.
8\. When asked for the username, enter "**ec2-user**" with a blank password, and then run the command below.
$ sudo su -
Hurray, you are now root on the Linux server hosted on the AWS cloud.
[
![Logged in to AWS EC2 server](https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/aws_putty1.JPG)
][15]
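For readers on Linux or macOS, the same connection can be made with the stock OpenSSH client, using the downloaded .pem key directly and skipping PuTTYgen entirely. This is a sketch with placeholder names: "my-key.pem" stands in for your downloaded key pair file, and the hostname stands in for the Public DNS value noted in step k.

```shell
# Placeholder key file standing in for the real key downloaded from AWS.
touch my-key.pem
chmod 400 my-key.pem    # ssh refuses private keys readable by others

# Drop the leading "echo" to actually connect; shown here as a dry run.
echo ssh -i my-key.pem ec2-user@your-public-dns-value
```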
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/how-to-setup-linux-server-with-aws/
作者:[MANMOHAN MIRKAR][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.howtoforge.com/tutorial/how-to-setup-linux-server-with-aws/
[1]:https://www.howtoforge.com/tutorial/how-to-setup-linux-server-with-aws/#setup-a-linux-vm-in-aws
[2]:https://www.howtoforge.com/tutorial/how-to-setup-linux-server-with-aws/#connect-to-an-ec-instance-from-windows
[3]:http://aws.amazon.com/free/
[4]:http://aws.amazon.com/ec2/
[5]:https://console.aws.amazon.com/
[6]:https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/big/aws_console.JPG
[7]:https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/big/aws_console_ec21.png
[8]:https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/big/aws_launch_ec2.png
[9]:https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/big/aws_free_tier_radio1.png
[10]:https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/big/aws_ssh_port1.png
[11]:https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/big/aws_key_pair.png
[12]:https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/big/aws_running_instance.png
[13]:https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/big/aws_dns_value.png
[14]:http://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html
[15]:https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/big/aws_putty1.JPG

ucasFL translating
Installation of Devuan Linux (Fork of Debian)
============================================================

申请翻译
Many SQL Performance Problems Stem from “Unnecessary, Mandatory Work”
============================================================ 

How to Change Root Password of MySQL or MariaDB in Linux
============================================================
If you're [installing MySQL or MariaDB in Linux][1] for the first time, chances are you will be executing the mysql_secure_installation script to secure your MySQL installation with basic settings.
One of these settings is the database root password, which you must keep secret and use only when it is required. You may need to change it, for example, when a database administrator changes roles or is laid off.
**Suggested Read:** [Recover MySQL or MariaDB Root Password in Linux][2]
This article will come in handy in that case. We will explain how to change the root password of a MySQL or MariaDB database server in Linux.
Although we will use a MariaDB server in this article, the instructions should work for MySQL as well.
### Change MySQL or MariaDB Root Password
If you know the root password and want to reset it, first let's make sure MariaDB is running:
```
------------- CentOS/RHEL 7 and Fedora 22+ -------------
# systemctl is-active mariadb
------------- CentOS/RHEL 6 and Fedora -------------
# /etc/init.d/mysqld status
```
[
![Check MySQL Status](http://www.tecmint.com/wp-content/uploads/2017/03/Check-MySQL-Status.png)
][3]
Check MySQL Status
If the above command does not return the word `active` as output, or the service is stopped, you will need to start the database service before proceeding:
```
------------- CentOS/RHEL 7 and Fedora 22+ -------------
# systemctl start mariadb
------------- CentOS/RHEL 6 and Fedora -------------
# /etc/init.d/mysqld start
```
Next, we will login to the database server as root:
```
# mysql -u root -p
```
For compatibility across versions, we will use the following statement to update the user table in the mysql database. Note that you need to replace `YourPasswordHere` with the new password you have chosen for root.
```
MariaDB [(none)]> USE mysql;
MariaDB [(none)]> UPDATE user SET password=PASSWORD('YourPasswordHere') WHERE User='root' AND Host = 'localhost';
MariaDB [(none)]> FLUSH PRIVILEGES;
```
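A note for newer releases (an addition of mine, not from the original article): the direct UPDATE on the mysql.user table shown above relies on the old `password` column, which was renamed to `authentication_string` in MySQL 5.7.6. On MySQL 5.7.6+ and MariaDB 10.2+, the supported way to do the same thing is the ALTER USER statement:

```
MariaDB [(none)]> ALTER USER 'root'@'localhost' IDENTIFIED BY 'YourPasswordHere';
MariaDB [(none)]> FLUSH PRIVILEGES;
```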
To validate, exit your current MariaDB session by typing:
```
MariaDB [(none)]> exit;
```
and then press Enter. You should now be able to connect to the server using the new password.
[
![Change MySQL/MariaDB Root Password](http://www.tecmint.com/wp-content/uploads/2017/03/Change-MySQL-Root-Password.png)
][4]
Change MySQL/MariaDB Root Password
##### Summary
In this article we have explained how to change the MariaDB / MySQL root password when you know the current one.
As always, feel free to drop us a note if you have any questions or feedback using our comment form below. We look forward to hearing from you!
--------------------------------------------------------------------------------
作者简介:
Gabriel Cánepa is a GNU/Linux sysadmin and web developer from Villa Mercedes, San Luis, Argentina. He works for a worldwide leading consumer product company and takes great pleasure in using FOSS tools to increase productivity in all areas of his daily work.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/change-mysql-mariadb-root-password/
作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/install-mariadb-in-centos-7/
[2]:http://www.tecmint.com/reset-mysql-or-mariadb-root-password/
[3]:http://www.tecmint.com/wp-content/uploads/2017/03/Check-MySQL-Status.png
[4]:http://www.tecmint.com/wp-content/uploads/2017/03/Change-MySQL-Root-Password.png
[5]:http://www.tecmint.com/author/gacanepa/
[6]:http://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
[7]:http://www.tecmint.com/free-linux-shell-scripting-books/

Integrate Ubuntu 16.04 to AD as a Domain Member with Samba and Winbind Part 8
============================================================
This tutorial describes how to join an Ubuntu machine to a Samba4 Active Directory domain in order to authenticate AD accounts with local ACLs for files and directories, or to create and map volume shares for domain users (act as a file server).
#### Requirements:
1. [Create an Active Directory Infrastructure with Samba4 on Ubuntu][1]
### Step 1: Initial Configurations to Join Ubuntu to Samba4 AD
1. Before starting to join an Ubuntu host to an Active Directory DC, you need to make sure that some services are configured properly on the local machine.
An important aspect of your machine is its hostname. Set up a proper machine name before joining the domain with the help of the hostnamectl command or by manually editing the /etc/hostname file.
```
# hostnamectl set-hostname your_machine_short_name
# cat /etc/hostname
# hostnamectl
```
[
![Set System Hostname](http://www.tecmint.com/wp-content/uploads/2017/03/Set-Ubuntu-System-Hostname.png)
][2]
Set System Hostname
2. In the next step, open and manually edit your machine's network settings with the proper IP configuration. The most important settings here are the DNS IP addresses, which must point back to your domain controller.
Edit the /etc/network/interfaces file and add a dns-nameservers statement with your AD IP addresses and domain name, as illustrated in the screenshot below.
Also, make sure that the same DNS IP addresses and the domain name are added to the /etc/resolv.conf file.
[
![Configure Network Settings for AD](http://www.tecmint.com/wp-content/uploads/2017/03/Configure-Network-Settings-for-AD.png)
][3]
Configure Network Settings for AD
In the above screenshot, 192.168.1.254 and 192.168.1.253 are the IP addresses of the Samba4 AD DC, and Tecmint.lan is the name of the AD domain, which will be queried by all machines integrated into the realm.
3. Restart the network services or reboot the machine in order to apply the new network configuration. Issue a ping command against your domain name in order to test whether DNS resolution is working as expected.
The AD DC should reply with its FQDN. If you have configured a DHCP server in your network to automatically assign IP settings to your LAN hosts, make sure you add the AD DC IP addresses to the DHCP server's DNS configuration.
```
# systemctl restart networking.service
# ping -c2 your_domain_name
```
4. The last important configuration required is time synchronization. Install the ntpdate package, then query and sync time with the AD DC by issuing the commands below.
```
$ sudo apt-get install ntpdate
$ sudo ntpdate -q your_domain_name
$ sudo ntpdate your_domain_name
```
[
![Time Synchronization with AD](http://www.tecmint.com/wp-content/uploads/2017/03/Time-Synchronization-with-AD.png)
][4]
Time Synchronization with AD
5. In the next step, install the software required for the Ubuntu machine to be fully integrated into the domain by running the command below.
```
$ sudo apt-get install samba krb5-config krb5-user winbind libpam-winbind libnss-winbind
```
[
![Install Samba4 in Ubuntu Client](http://www.tecmint.com/wp-content/uploads/2017/03/Install-Samba4-in-Ubuntu-Client.png)
][5]
Install Samba4 in Ubuntu Client
While the Kerberos packages are installing, you will be asked to enter the name of your default realm. Use the name of your domain in uppercase and press the Enter key to continue the installation.
[
![Add AD Domain Name](http://www.tecmint.com/wp-content/uploads/2017/03/Add-AD-Domain-Name.png)
][6]
Add AD Domain Name
6. After all the packages finish installing, test Kerberos authentication against an AD administrative account and list the ticket by issuing the commands below.
```
# kinit ad_admin_user
# klist
```
[
![Check Kerberos Authentication with AD](http://www.tecmint.com/wp-content/uploads/2017/03/Check-Kerberos-Authentication-with-AD.png)
][7]
Check Kerberos Authentication with AD
### Step 2: Join Ubuntu to Samba4 AD DC
7. The first step in integrating the Ubuntu machine into the Samba4 Active Directory domain is to edit the Samba configuration file.
Back up the default configuration file of Samba, provided by the package manager, in order to start with a clean configuration, by running the following commands.
```
# mv /etc/samba/smb.conf /etc/samba/smb.conf.initial
# nano /etc/samba/smb.conf 
```
In the new Samba configuration file, add the lines below:
```
[global]
workgroup = TECMINT
realm = TECMINT.LAN
netbios name = ubuntu
security = ADS
dns forwarder = 192.168.1.1
idmap config * : backend = tdb
idmap config *:range = 50000-1000000
template homedir = /home/%D/%U
template shell = /bin/bash
winbind use default domain = true
winbind offline logon = false
winbind nss info = rfc2307
winbind enum users = yes
winbind enum groups = yes
vfs objects = acl_xattr
map acl inherit = Yes
store dos attributes = Yes
```
[
![Configure Samba for AD](http://www.tecmint.com/wp-content/uploads/2017/03/Configure-Samba.png)
][8]
Configure Samba for AD
Replace the workgroup, realm, netbios name, and dns forwarder variables with your own custom settings.
The winbind use default domain parameter causes the winbind service to treat any unqualified AD usernames as AD users. You should omit this parameter if you have local system account names that overlap with AD account names.
8. Now restart all the Samba daemons, stop and disable the unneeded samba-ad-dc service, and enable the Samba services system-wide by issuing the commands below.
```
$ sudo systemctl restart smbd nmbd winbind
$ sudo systemctl stop samba-ad-dc
$ sudo systemctl enable smbd nmbd winbind
```
9. Join the Ubuntu machine to the Samba4 AD DC by issuing the following command. Use an AD DC account with administrator privileges so that binding to the realm works as expected.
```
$ sudo net ads join -U ad_admin_user
```
[
![Join Ubuntu to Samba4 AD DC](http://www.tecmint.com/wp-content/uploads/2017/03/Join-Ubuntu-to-Samba4-AD-DC.png)
][9]
Join Ubuntu to Samba4 AD DC
10. From a [Windows machine with RSAT tools installed][10] you can open AD UC and navigate to the Computers container. Here, your joined Ubuntu machine should be listed.
[
![Confirm Ubuntu Client in Windows AD DC](http://www.tecmint.com/wp-content/uploads/2017/03/Confirm-Ubuntu-Client-in-RSAT-.png)
][11]
Confirm Ubuntu Client in Windows AD DC
### Step 3: Configure AD Accounts Authentication
11. In order to perform authentication for AD accounts on the local machine, you need to modify some services and files on the local machine.
First, open and edit the Name Service Switch (NSS) configuration file.
```
$ sudo nano /etc/nsswitch.conf
```
Next, append the winbind value to the passwd and group lines, as illustrated in the excerpt below.
```
passwd: compat winbind
group: compat winbind
```
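If you prefer to script this edit, a sed one-liner can append winbind to both lines. The sketch below runs against a scratch copy so it is safe to try as-is; on a real system, point it at /etc/nsswitch.conf after making a backup.

```shell
# Scratch copy standing in for /etc/nsswitch.conf.
nss=$(mktemp)
printf 'passwd:         compat\ngroup:          compat\n' > "$nss"

# Append " winbind" to the passwd: and group: lines only.
sed -i -E 's/^((passwd|group):.*)/\1 winbind/' "$nss"
grep -E '^(passwd|group):' "$nss"
```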
[
![Configure AD Accounts Authentication](http://www.tecmint.com/wp-content/uploads/2017/03/Configure-AD-Accounts-Authentication.png)
][12]
Configure AD Accounts Authentication
12. In order to test whether the Ubuntu machine was successfully integrated into the realm, run the wbinfo command to list domain accounts and groups.
```
$ wbinfo -u
$ wbinfo -g
```
[
![List AD Domain Accounts and Groups](http://www.tecmint.com/wp-content/uploads/2017/03/List-AD-Domain-Accounts-and-Groups.png)
][13]
List AD Domain Accounts and Groups
13. Also, check the Winbind nsswitch module by issuing the getent command, piping the results through a filter such as grep to narrow the output to specific domain users or groups.
```
$ sudo getent passwd| grep your_domain_user
$ sudo getent group|grep 'domain admins'
```
[
![Check AD Domain Users and Groups](http://www.tecmint.com/wp-content/uploads/2017/03/Check-AD-Domain-Users-and-Groups.png)
][14]
Check AD Domain Users and Groups
14. In order to authenticate on the Ubuntu machine with domain accounts, run the pam-auth-update command with root privileges, add all the entries required for the winbind service, and enable automatic creation of home directories for each domain account at first login.
Check all entries by pressing the `[space]` key and hit OK to apply the configuration.
```
$ sudo pam-auth-update
```
[
![Authenticate Ubuntu with Domain Accounts](http://www.tecmint.com/wp-content/uploads/2017/03/Authenticate-Ubuntu-with-Domain-Accounts.png)
][15]
Authenticate Ubuntu with Domain Accounts
15. On Debian systems you need to manually edit the /etc/pam.d/common-account file and add the following line in order to automatically create home directories for authenticated domain users.
```
session required pam_mkhomedir.so skel=/etc/skel/ umask=0022
```
[
![Authenticate Debian with Domain Accounts](http://www.tecmint.com/wp-content/uploads/2017/03/Authenticate-Debian-with-Domain-Accounts.png)
][16]
Authenticate Debian with Domain Accounts
16. In order for Active Directory users to be able to change their password from the command line in Linux, open the /etc/pam.d/common-password file and remove the use_authtok statement from the password line, so that it finally looks like the excerpt below.
```
password [success=1 default=ignore] pam_winbind.so try_first_pass
```
[
![Users Allowed to Change Password](http://www.tecmint.com/wp-content/uploads/2017/03/AD-Domain-Users-Change-Password.png)
][17]
Users Allowed to Change Password
17. To authenticate on the Ubuntu host with a Samba4 AD account, use the domain username as the argument to the su command. Run the id command to get extra information about the AD account.
```
$ su - your_ad_user
```
[
![Find AD User Information](http://www.tecmint.com/wp-content/uploads/2017/03/Find-AD-User-Information.png)
][18]
Find AD User Information
Use the [pwd command][19] to see your domain user's current directory, and the passwd command if you want to change the password.
18. To use a domain account with root privileges on your Ubuntu machine, add the AD username to the sudo system group by issuing the command below:
```
$ sudo usermod -aG sudo your_domain_user
```
Log in to Ubuntu with the domain account and update your system by running the apt-get update command to check whether the domain user has root privileges.
[
![Add Sudo User Root Group](http://www.tecmint.com/wp-content/uploads/2017/03/Add-Sudo-User-Root-Group.png)
][20]
Add Sudo User Root Group
19. To add root privileges for a domain group, open and edit the /etc/sudoers file using the visudo command and add the following line, as illustrated in the screenshot below.
```
%YOUR_DOMAIN\\your_domain\ group ALL=(ALL:ALL) ALL
```
[
![Add Root Privileges to Domain Group](http://www.tecmint.com/wp-content/uploads/2017/03/Add-Root-Privileges-to-Domain-Group.jpg)
][21]
Add Root Privileges to Domain Group
Use backslashes to escape spaces contained in your domain group name or to escape the first backslash. In the above example, the domain group for the TECMINT realm is named "domain admins".
The preceding percent sign `(%)` symbol indicates that we are referring to a group, not a username.
20. If you are running the graphical version of Ubuntu and want to log in to the system with a domain user, you need to modify the LightDM display manager by editing the /usr/share/lightdm/lightdm.conf.d/50-ubuntu.conf file: add the following lines, then reboot the machine for the changes to take effect.
```
greeter-show-manual-login=true
greeter-hide-users=true
```
You should now be able to log in to the Ubuntu Desktop with a domain account, using either the your_domain_username, your_domain_username@your_domain.tld, or your_domain\your_domain_username format.
--------------------------------------------------------------------------------
作者简介:
I'm a computer-addicted guy, a fan of open source and Linux-based system software, with about 4 years of experience with Linux distributions on desktops and servers, and with bash scripting.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/join-ubuntu-to-active-directory-domain-member-samba-winbind/
作者:[Matei Cezar][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/cezarmatei/
[1]:http://www.tecmint.com/install-samba4-active-directory-ubuntu/
[2]:http://www.tecmint.com/wp-content/uploads/2017/03/Set-Ubuntu-System-Hostname.png
[3]:http://www.tecmint.com/wp-content/uploads/2017/03/Configure-Network-Settings-for-AD.png
[4]:http://www.tecmint.com/wp-content/uploads/2017/03/Time-Synchronization-with-AD.png
[5]:http://www.tecmint.com/wp-content/uploads/2017/03/Install-Samba4-in-Ubuntu-Client.png
[6]:http://www.tecmint.com/wp-content/uploads/2017/03/Add-AD-Domain-Name.png
[7]:http://www.tecmint.com/wp-content/uploads/2017/03/Check-Kerberos-Authentication-with-AD.png
[8]:http://www.tecmint.com/wp-content/uploads/2017/03/Configure-Samba.png
[9]:http://www.tecmint.com/wp-content/uploads/2017/03/Join-Ubuntu-to-Samba4-AD-DC.png
[10]:http://www.tecmint.com/manage-samba4-ad-from-windows-via-rsat/
[11]:http://www.tecmint.com/wp-content/uploads/2017/03/Confirm-Ubuntu-Client-in-RSAT-.png
[12]:http://www.tecmint.com/wp-content/uploads/2017/03/Configure-AD-Accounts-Authentication.png
[13]:http://www.tecmint.com/wp-content/uploads/2017/03/List-AD-Domain-Accounts-and-Groups.png
[14]:http://www.tecmint.com/wp-content/uploads/2017/03/Check-AD-Domain-Users-and-Groups.png
[15]:http://www.tecmint.com/wp-content/uploads/2017/03/Authenticate-Ubuntu-with-Domain-Accounts.png
[16]:http://www.tecmint.com/wp-content/uploads/2017/03/Authenticate-Debian-with-Domain-Accounts.png
[17]:http://www.tecmint.com/wp-content/uploads/2017/03/AD-Domain-Users-Change-Password.png
[18]:http://www.tecmint.com/wp-content/uploads/2017/03/Find-AD-User-Information.png
[19]:http://www.tecmint.com/pwd-command-examples/
[20]:http://www.tecmint.com/wp-content/uploads/2017/03/Add-Sudo-User-Root-Group.png
[21]:http://www.tecmint.com/wp-content/uploads/2017/03/Add-Root-Privileges-to-Domain-Group.jpg
[22]:http://www.tecmint.com/author/cezarmatei/
[23]:http://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
[24]:http://www.tecmint.com/free-linux-shell-scripting-books/

#rusking translating
An introduction to GRUB2 configuration for your Linux machine
============================================================
> Learn how the GRUB boot loader works to prepare your system and launch your operating system kernel.
![An introduction to GRUB2 configuration in Linux](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/car-penguin-drive-linux-yellow.png?itok=ueZE5mph "An introduction to GRUB2 configuration in Linux")
>Image by : Internet Archive [Book][5] [Images][6]. Modified by Opensource.com. CC BY-SA 4.0
When researching my article from last month,  _[An introduction to the Linux][1] [boot and startup process][2]_ , I became interested in learning more about GRUB2. This article provides a quick introduction to configuring GRUB2, which I will mostly refer to as GRUB for simplicity.
### GRUB
GRUB stands for  _GRand Unified Bootloader_ . Its function is to take over from BIOS at boot time, load itself, load the Linux kernel into memory, and then turn over execution to the kernel. Once the kernel takes over, GRUB has done its job and it is no longer needed.
GRUB supports multiple Linux kernels and allows the user to select between them at boot time using a menu. I have found this to be a very useful tool because there have been many instances where I encountered problems with an application or system service that fails with a particular kernel version. Many times, booting to an older kernel can circumvent issues such as these. By default, three kernels are kept (the newest and the two previous) when **yum** or **dnf** is used to perform upgrades. The number of kernels to be kept before the package manager erases them is configurable in the **/etc/dnf/dnf.conf** or **/etc/yum.conf** file. I usually change the **installonly_limit** value to 9 to retain a total of nine kernels. This has come in handy on a couple of occasions when I had to revert to a kernel that was several versions down-level.
### GRUB menu
The function of the GRUB menu is to allow the user to select one of the installed kernels to boot in the case where the default kernel is not the desired one. Using the up and down arrow keys allows you to select the desired kernel and pressing the **Enter** key continues the boot process using the selected kernel.
The GRUB menu also provides a timeout so that, if the user does not make any other selection, GRUB will continue to boot with the default kernel without user intervention. Pressing any key on the keyboard except the **Enter** key terminates the countdown timer which is displayed on the console. Pressing the **Enter** key immediately continues the boot process with either the default kernel or an optionally selected one.
The GRUB menu also provides a "rescue" kernel for use when troubleshooting, or when the regular kernels don't complete the boot process for some reason. Unfortunately, this rescue kernel does not boot to rescue mode. More on this later in this article.
### The grub.cfg file
The **grub.cfg** file is the GRUB configuration file. It is generated by the **grub2-mkconfig** program using a set of primary configuration files and the grub default file as a source for user configuration specifications. The **/boot/grub2/grub.cfg** file is first generated during Linux installation and regenerated when a new kernel is installed.
The **grub.cfg** file contains Bash-like code and a list of installed kernels in an array ordered by sequence of installation. For example, if you have four installed kernels, the most recent kernel will be at index 0, the previous kernel will be at index 1, and the oldest kernel will be at index 3. If you have access to a **grub.cfg** file, you should look at it to get a feel for what one looks like. The **grub.cfg** file is just too large to be included in this article.
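To see the indexing at a glance, you can count the **menuentry** lines. The sketch below uses a fabricated **grub.cfg** fragment (the kernel versions are made up) so it can be run anywhere without touching a real boot configuration:

```shell
# Illustrative only: a fabricated grub.cfg fragment showing how menu
# entries map to indices. Real files are generated by grub2-mkconfig.
f=$(mktemp)
cat > "$f" <<'EOF'
menuentry 'Fedora (4.9.14-200.fc25.x86_64) 25 (Workstation Edition)' {
}
menuentry 'Fedora (4.9.13-201.fc25.x86_64) 25 (Workstation Edition)' {
}
menuentry 'Fedora (4.8.16-300.fc25.x86_64) 25 (Workstation Edition)' {
}
EOF
# Print each menu entry with its GRUB index; the newest kernel is index 0.
awk -F"'" '/^menuentry/ {print i++ ": " $2}' "$f"
```

Running the same awk one-liner against your real **/boot/grub2/grub.cfg** shows the indices that **GRUB_DEFAULT** refers to.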
### GRUB configuration files
The main set of configuration files for **grub.cfg** is located in the **/etc/grub.d** directory. Each of the files in that directory contains GRUB code that is collected into the final **grub.cfg** file. The numbering scheme used in the names of these configuration files is designed to provide ordering, so that the final **grub.cfg** file is assembled in the correct sequence. Each of these files has a comment to denote the beginning and end of the section, and those comments are also part of the final **grub.cfg** file, so that it is possible to see from which file each section is generated. The delimiting comments look like this:
```
### BEGIN /etc/grub.d/10_linux ###
### END /etc/grub.d/10_linux ###
```
These files should not be modified unless you are a GRUB expert and understand what the changes will do. Even then you should always keep a backup copy of the original, working **grub.cfg** file. The files **40_custom** and **41_custom** are specifically intended to hold user modifications to the GRUB configuration. You should still be aware of the consequences of any changes you make to these files and maintain a backup of the original **grub.cfg** file.
You can also add your own files to the **/etc/grub.d** directory. One reason for doing so might be to add a menu entry for a non-Linux operating system. Just be sure to follow the naming convention to ensure that the additional menu item is added either immediately before or after the **10_linux** entry in the configuration file.
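As a hedged illustration, a custom entry appended to **40_custom** might look like the following. The first two lines are the stub that ships in the file; the menu title and the device **(hd0,1)** are placeholders you would adapt to your own disk layout:

```
#!/bin/sh
exec tail -n +3 $0
# Entries below this line are copied into grub.cfg verbatim.
menuentry "My other OS (example)" {
    set root=(hd0,1)
    chainloader +1
}
```

Remember that nothing appears in the GRUB menu until **grub.cfg** is regenerated.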
### GRUB defaults file
Configuration of the original GRUB was fairly simple and straightforward: I would just modify **/boot/grub/grub.conf** and be good to go. I could still modify GRUB2 by changing **/boot/grub2/grub.cfg**, but the new version is considerably more complex than the original GRUB. In addition, **grub.cfg** may be overwritten when a new kernel is installed, so any changes may disappear. However, the GNU.org GRUB Manual does discuss direct creation and modification of **/boot/grub2/grub.cfg**.
Changing the configuration for GRUB2 is fairly easy once you actually figure out how to do it. I only discovered this while researching GRUB2 for a previous article. The secret formula is in the **/etc/default** directory, in a file called, naturally enough, **grub**, which is then used in concert with a simple terminal command. The **/etc/default** directory contains configuration files for a few programs such as Google Chrome, useradd, and grub.
The **/etc/default/grub** file is very simple. It already lists a number of valid key/value pairs; you can simply change the values of existing keys or add keys that are not already in the file. Listing 1, below, shows an unmodified **/etc/default/grub** file.
```
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="rd.lvm.lv=fedora_fedora25vm/root rd.lvm.lv=fedora_fedora25vm/swap rd.lvm.lv=fedora_fedora25vm/usr rhgb quiet"
GRUB_DISABLE_RECOVERY="true"
```
_Listing 1: An original grub default file for Fedora 25._
[Section 5.1 of the GRUB Manual][7] contains information about all of the possible keys that can be included in the grub file. I have never needed to do anything other than modify the values of some of the keys that are already in the grub default file. Let's look at what each of these keys means, as well as some that don't appear in the grub default file.
* **GRUB_TIMEOUT** The value of this key determines the length of time that the GRUB selection menu is displayed. GRUB offers the capability to keep multiple kernels installed simultaneously and choose between them at boot time using the GRUB menu. The default value for this key is 5 seconds, but I usually change that to 10 seconds to allow more time to view the choices and make a selection.
* **GRUB_DISTRIBUTOR** This key defines a [sed][3] expression that extracts the distribution name from the /etc/system-release file. This information is used to generate the text names for each kernel release that appear in the GRUB menu, such as "Fedora". Due to variations in the structure of the data in the system-release file between distributions, this sed expression may be different on your system.
* **GRUB_DEFAULT** Determines which kernel is booted by default. The value "saved" means the most recent kernel. Another option is a number representing an index into the list of kernels in **grub.cfg**. Using an index such as 3, however, will always load the fourth kernel in the list, even after a new kernel is installed, so an index points to a different kernel release once a new kernel is added. The only way to ensure that a specific kernel release is booted is to set the value of **GRUB_DEFAULT** to the name of the desired kernel, like 4.8.13-300.fc25.x86_64.
* **GRUB_SAVEDEFAULT** Normally, this option is not specified in the grub defaults file. In normal operation, when a different kernel is selected for boot, it is booted only that one time and the default kernel is not changed. When this option is set to "true" and used with **GRUB_DEFAULT=saved**, selecting a different kernel at boot saves it as the new default.
* **GRUB_DISABLE_SUBMENU** Some people may wish to create a hierarchical menu structure of kernels for the GRUB menu screen. This key, along with some additional configuration of the kernel stanzas in **grub.cfg**, allows creating such a hierarchy. For example, one might have a main menu with "production" and "test" sub-menus, where each sub-menu contains the appropriate kernels. Setting this to "false" enables the use of sub-menus.
* **GRUB_TERMINAL_OUTPUT** In some environments it may be desirable or necessary to redirect output to a different display console or terminal. The default is to send output to the default terminal, usually the "console", which equates to the standard display on an Intel-class PC. Another useful option is to specify "serial" in a data center or lab environment in which serial terminals or Integrated Lights-Out (ILO) terminal connections are in use.
* **GRUB_TERMINAL_INPUT** As with **GRUB_TERMINAL_OUTPUT**, it may be desirable or necessary to redirect input from a serial terminal or ILO device rather than the standard keyboard input.
* **GRUB_CMDLINE_LINUX** This key contains the command-line arguments that will be passed to the kernel at boot time. Note that these arguments will be added to the kernel line of grub.cfg for all installed kernels. This means that all installed kernels will have the same arguments when booted. I usually remove the "rhgb" and "quiet" arguments so that I can see all of the very informative messages output by the kernel and systemd during boot and startup.
* **GRUB_DISABLE_RECOVERY** When the value of this key is set to "false," a recovery entry is created in the GRUB menu for every installed kernel. When set to "true" no recovery entries are created. Regardless of this setting, the last kernel entry is always a "rescue" option. However, I encountered a problem with the rescue option, which I'll talk more about below.
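As a concrete sketch, the two values I most often change can be edited non-interactively with **sed**. The example below operates on a throwaway copy so it is safe to experiment with; point it at the real **/etc/default/grub** (as root) to apply the change for real:

```shell
# Work on a temporary copy of the defaults file; substitute
# /etc/default/grub (as root) to change the real configuration.
f=$(mktemp)
cat > "$f" <<'EOF'
GRUB_TIMEOUT=5
GRUB_DISABLE_RECOVERY="true"
EOF

# Lengthen the menu timeout and enable per-kernel recovery entries.
sed -i -e 's/^GRUB_TIMEOUT=.*/GRUB_TIMEOUT=10/' \
       -e 's/^GRUB_DISABLE_RECOVERY=.*/GRUB_DISABLE_RECOVERY="false"/' "$f"
cat "$f"
```

After changing the real file, the new values take effect only once **grub.cfg** is regenerated, as described in the next section.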
There are other keys that I have not covered here that you might find useful. Their descriptions are located in Section 5.1 of the [GRUB Manual 2][8].
### Generate grub.cfg
After completing the desired configuration, it is necessary to generate the **/boot/grub2/grub.cfg** file. This is accomplished with the following command.
```
grub2-mkconfig > /boot/grub2/grub.cfg
```
This command takes the configuration files located in **/etc/grub.d** in sequence to build the **grub.cfg** file, and uses the contents of the grub defaults file to modify the output to achieve the final desired configuration. The **grub2-mkconfig** command attempts to locate all of the installed kernels and creates an entry for each in the **10_linux** section of the **grub.cfg** file. It also creates a "rescue" entry to provide a method for recovering from significant problems that prevent Linux from booting.
It is strongly recommended that you do not edit the **grub.cfg** file manually, because any direct modifications to the file will be overwritten the next time a new kernel is installed or **grub2-mkconfig** is run manually.
### Issues
I encountered one problem with GRUB2 that could have serious consequences if you are not aware of it: the rescue kernel does not boot; instead, one of the other kernels boots. I found that to be the kernel at index 1 in the list, i.e., the second kernel in the list. Additional testing showed that this problem occurred whether using the original **grub.cfg** configuration file or one that I generated. I have tried this on both virtual and real hardware, and the problem is the same on each. I only tried this with Fedora 25, so it may not be an issue with other Fedora releases.
Note that the "recovery" kernel entry that is generated from the "rescue" kernel does work and boots to a maintenance mode login.
I recommend changing **GRUB_DISABLE_RECOVERY** to "false" in the grub defaults file, and generating your own **grub.cfg**. This will generate usable recovery entries in the GRUB menu for each of the installed kernels. These recovery configurations work as expected and boot to runlevel 1—according to the runlevel command—at a command line entry that requests a password to enter maintenance mode. You could also press **Ctrl-D** to continue a normal boot to the default runlevel.
### Conclusions
GRUB is the first step after BIOS in the sequence of events that boot a Linux computer to a usable state. Understanding how to configure GRUB is important to be able to recover from or to circumvent various types of problems.
I have had to boot to recovery or rescue mode many times over the years to resolve many types of problems. Some of those problems were actual boot problems due to things like improper entries in **/etc/fstab** or other configuration files, and others were due to problems with application or system software that was incompatible with the newest kernel. Hardware compatibility issues might also prevent a specific kernel from booting.
I hope this information will help you get started with GRUB configuration.
--------------------------------------------------------------------------------
作者简介:
David Both - David Both is a Linux and Open Source advocate who resides in Raleigh, North Carolina. He has been in the IT industry for over forty years and taught OS/2 for IBM where he worked for over 20 years. While at IBM, he wrote the first training course for the original IBM PC in 1981. He has taught RHCE classes for Red Hat and has worked at MCI Worldcom, Cisco, and the State of North Carolina. He has been working with Linux and Open Source Software for almost 20 years.
----------------
via: https://opensource.com/article/17/3/introduction-grub2-configuration-linux
作者:[David Both ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/dboth
[1]:https://opensource.com/article/17/2/linux-boot-and-startup
[2]:https://opensource.com/article/17/2/linux-boot-and-startup
[3]:https://en.wikipedia.org/wiki/Sed
[4]:https://opensource.com/article/17/3/introduction-grub2-configuration-linux?rate=QrIzRpQ3YhewYlBD0AFp0JiF133SvhyAq783LOxjr4c
[5]:https://www.flickr.com/photos/internetarchivebookimages/14746482994/in/photolist-ot6zCN-odgbDq-orm48o-otifuv-otdyWa-ouDjnZ-otGT2L-odYVqY-otmff7-otGamG-otnmSg-rxnhoq-orTmKf-otUn6k-otBg1e-Gm6FEf-x4Fh64-otUcGR-wcXsxg-tLTN9R-otrWYV-otnyUE-iaaBKz-ovcPPi-ovokCg-ov4pwM-x8Tdf1-hT5mYr-otb75b-8Zk6XR-vtefQ7-vtehjQ-xhhN9r-vdXhWm-xFBgtQ-vdXdJU-vvTH6R-uyG5rH-vuZChC-xhhGii-vvU5Uv-vvTNpB-vvxqsV-xyN2Ai-vdXcFw-vdXuNC-wBMhes-xxYmxu-vdXxwS-vvU8Zt
[6]:https://www.flickr.com/photos/internetarchivebookimages/14774719031/in/photolist-ovAie2-otPK99-xtDX7p-tmxqWf-ow3i43-odd68o-xUPaxW-yHCtWi-wZVsrD-DExW5g-BrzB7b-CmMpC9-oy4hyF-x3UDWA-ow1m4A-x1ij7w-tBdz9a-tQMoRm-wn3tdw-oegTJz-owgrs2-rtpeX1-vNN6g9-owemNT-x3o3pX-wiJyEs-CGCC4W-owg22q-oeT71w-w6PRMn-Ds8gyR-x2Aodm-owoJQm-owtGp9-qVxppC-xM3Gw7-owgV5J-ou9WEs-wihHtF-CRmosE-uk9vB3-wiKdW6-oeGKq3-oeFS4f-x5AZtd-w6PNuv-xgkofr-wZx1gJ-EaYPED-oxCbFP
[7]:https://www.gnu.org/software/grub/manual/grub.html#Simple-configuration
[8]:https://www.gnu.org/software/grub/manual/grub.html#Simple-configuration
[9]:https://opensource.com/user/14106/feed
[10]:https://opensource.com/article/17/3/introduction-grub2-configuration-linux#comments
[11]:https://opensource.com/users/dboth

vim-kakali translating
What is Linux VPS Hosting?
============================================================
![what is linux vps hosting](https://www.rosehosting.com/blog/wp-content/uploads/2017/03/what-is-linux-vps-hosting.jpg)
If you have a site that gets a lot of traffic, or at least is expected to generate a lot of traffic, then you might want to consider getting a [Linux VPS hosting][6] package. A Linux VPS hosting package is also one of your best options if you want more control over the things that are installed on the server where your website is hosted. Here are some of the frequently asked questions about Linux VPS hosting, answered.
### What does Linux VPS stand for?
Basically, Linux VPS stands for a virtual private server running on a Linux system. A virtual private server is a virtual server hosted on a physical server. A server is virtual if it runs in a host computers memory. The host computer, in turn, can run a few other virtual servers.
### So I have to share a server with other users?
In most cases, yes. However, this does not mean that you will suffer from downtime or decreased performance. Each virtual server can run its own operating system, and each of these systems can be administered independently of each other. A virtual private server has its own operating system, data, and applications that are separated from all the other systems, applications, and data on the physical host server and the other virtual servers.
Despite sharing the physical server with other virtual private servers, you can still enjoy the benefits of a more expensive dedicated server without spending a lot of money for the service.
### What are the benefits of a Linux VPS hosting service?
There are many benefits when using a Linux VPS hosting service, including ease of use, increased security, and improved reliability at a lower total cost of ownership. However, for most webmasters, programmers, designers, and developers, the true benefit of a Linux VPS is the flexibility. Each virtual private server is isolated with its own operating environment, which means that you can easily and safely install the operating system that you prefer or need—in this case, Linux—as well as remove or add software and applications easily whenever you want to.
You can also modify the environment of your VPS to suit your performance needs, as well as improve the experience of your sites users or visitors. Flexibility can be the advantage you need to set you apart from your competitors.
Note that some Linux VPS providers wont give you full root access to your Linux VPS, in which case youll have limited functionality. Be sure to get a [Linux VPS where youll have full access to the VPS][7], so you can modify anything you want.
### Is Linux VPS hosting for everyone?
Yes! Even if you run a personal blog dedicated to your interests, you can surely benefit from a Linux VPS hosting package. If you are building and developing a website for a company, you would also enjoy the benefits. Basically, if you are expecting growth and heavy site traffic on your website, then a Linux VPS is for you.
Individuals and companies that want more flexibility in their customization and development options should definitely go for a Linux VPS, especially those who are looking to get great performance and service without paying for a dedicated server, which could eat up a huge chunk of the sites operating costs.
### I dont know how to work with Linux, can I still use a Linux VPS?
Of course! If you get a fully managed Linux VPS, your provider will manage the server for you, and most probably, will install and configure anything you want to run on your Linux VPS. If you get a VPS from us, well take care of your server 24/7 and well install, configure and optimize anything for you. All these services are included for free with all our [Managed Linux VPS hosting][8] packages.
So if you use our hosting services, it means that you get to enjoy the benefits of a Linux VPS, without any knowledge of working with Linux.
Another option to ease the use of a Linux VPS for beginners is to get a [VPS with cPanel][9], [DirectAdmin][10] or any [other hosting control panel][11]. If you use a control panel, you can manage your server via a GUI, which is a lot easier, especially for beginners. That said, [managing a Linux VPS from the command line][12] is fun, and you can learn a lot by doing so.
### How different is a Linux VPS from a dedicated server?
As mentioned earlier, a virtual private server is just a virtual partition on a physical host computer. The physical server is divided into several virtual servers, which could diffuse the cost and overhead expenses between the users of the virtual partitions. This is why a Linux VPS is comparatively cheaper than a [dedicated server][13], which, as its name implies, is dedicated to only one user. For a more detailed overview of the differences, check our [Physical Server (dedicated) vs Virtual Server (VPS) comparison][14].
Aside from being more cost-efficient than dedicated servers, Linux virtual private servers often run on host computers that are more powerful than typical dedicated servers, so performance and capacity are often greater as well.
### I want to move from a shared hosting environment to a Linux VPS, can I do that?
If you currently use [shared hosting][15], you can easily move to a Linux VPS. One option is to [do it yourself][16], but the migration process can be a bit complicated and is definitely not recommended for beginners. Your best option is to find a host that offers [free website migrations][17] and let them do it for you. You can even move from shared hosting with a control panel to a Linux VPS without a control panel.
### Any more questions?
Feel free to leave a comment below.
If you get a VPS from us, our expert Linux admins will help you with anything you need with your Linux VPS and will answer any questions you have about working with your Linux VPS. Our admins are available 24/7 and will take care of your request immediately.
PS. If you liked this post please share it with your friends on the social networks using the buttons below or simply leave a reply in the comments section. Thanks.
--------------------------------------------------------------------------------
via: https://www.rosehosting.com/blog/what-is-linux-vps-hosting/
作者:[https://www.rosehosting.com ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.rosehosting.com/blog/what-is-linux-vps-hosting/
[1]:https://www.rosehosting.com/blog/what-is-linux-vps-hosting/
[2]:https://www.rosehosting.com/blog/what-is-linux-vps-hosting/#comments
[3]:https://www.rosehosting.com/blog/category/guides/
[4]:https://plus.google.com/share?url=https://www.rosehosting.com/blog/what-is-linux-vps-hosting/
[5]:http://www.linkedin.com/shareArticle?mini=true&url=https://www.rosehosting.com/blog/what-is-linux-vps-hosting/&title=What%20is%20Linux%20VPS%20Hosting%3F&summary=If%20you%20have%20a%20site%20that%20gets%20a%20lot%20of%20traffic,%20or%20at%20least,%20is%20expected%20to%20generate%20a%20lot%20of%20traffic,%20then%20you%20might%20want%20to%20consider%20getting%20a%20Linux%20VPS%20hosting%20package.%20A%20Linux%20VPS%20hosting%20package%20is%20also%20one%20of%20your%20best%20options%20if%20you%20want%20more%20...
[6]:https://www.rosehosting.com/linux-vps-hosting.html
[7]:https://www.rosehosting.com/linux-vps-hosting.html
[8]:https://www.rosehosting.com/linux-vps-hosting.html
[9]:https://www.rosehosting.com/cpanel-hosting.html
[10]:https://www.rosehosting.com/directadmin-hosting.html
[11]:https://www.rosehosting.com/control-panel-hosting.html
[12]:https://www.rosehosting.com/blog/basic-shell-commands-after-putty-ssh-logon/
[13]:https://www.rosehosting.com/dedicated-servers.html
[14]:https://www.rosehosting.com/blog/physical-server-vs-virtual-server-all-you-need-to-know/
[15]:https://www.rosehosting.com/linux-shared-hosting.html
[16]:https://www.rosehosting.com/blog/from-shared-to-vps-hosting/
[17]:https://www.rosehosting.com/website-migration.html

AWS cloud terminology
============================================================
* * *
![AWS Cloud terminology](http://cdn2.kerneltalks.com/wp-content/uploads/2017/03/AWS-Cloud-terminology-150x150.png)
_Understand AWS cloud terminology of 71 services! Get acquainted with terms used in AWS world to start with your AWS cloud career!_
* * *
AWS, i.e. Amazon Web Services, is a cloud platform providing a list of web services on a pay-per-use basis. It is one of the most popular cloud platforms to date. Due to its flexibility, availability, elasticity, scalability, and freedom from maintenance, many corporations are moving to the cloud. Since so many companies use these services, it has become necessary for sysadmins and DevOps engineers to be familiar with AWS.
This article aims at listing services provided by AWS and explaining terminology used in AWS world.
As of today, AWS offers a total of 71 services, grouped into the 17 categories below:
* * *
_Compute_
This group covers cloud computing, i.e. virtual server provisioning. It provides the services below.
1. EC2 : EC2 stands for Elastic Compute Cloud. This service provides you scalable [virtual machines per your requirement][11].
2. EC2 Container Service : A high-performance, highly scalable service which allows you to run your services in a clustered EC2 environment.
3. Lightsail : This service enables users to launch and manage virtual servers (EC2) very easily.
4. Elastic Beanstalk : This service automatically manages capacity provisioning, load balancing, scaling, and health monitoring of your application, thus reducing your management load.
5. Lambda : Allows you to run your code only when needed, without managing servers for it.
6. Batch : Enables users to run computing workloads (batches) in a customized, managed way.
* * *
_Storage_
This is the cloud storage facility provided by Amazon. The group includes:
1. S3 : S3 stands for Simple Storage Service (three S's). It provides online storage to store/retrieve any data, at any time, from anywhere.
2. EFS : EFS stands for Elastic File System. It is online storage which can be used with EC2 servers.
3. Glacier : A low-cost, slower-access data storage solution mainly aimed at archives and long-term backups.
4. Storage Gateway : An interface which connects your on-premises applications (hosted outside AWS) with AWS storage.
* * *
_Database_
AWS also offers to host databases on its infrastructure, so that clients can benefit from the cutting-edge technology Amazon has for faster, more efficient, and secure data processing. This group includes:
1. RDS : RDS stands for Relational Database Service. It helps you set up, operate, and manage a relational database in the cloud.
2. DynamoDB : A NoSQL database providing fast processing and high scalability.
3. ElastiCache : A way to manage an in-memory cache for your web applications to make them run faster!
4. Redshift : A huge (petabyte-scale), fully scalable data warehouse service in the cloud.
* * *
_Networking & Content Delivery_
As AWS provides cloud EC2 servers, networking naturally comes into the picture too. Content delivery is used to serve files to users from the location geographically nearest to them; this is a popular way of speeding up websites nowadays.
1. VPC : VPC stands for Virtual Private Cloud. It is your very own virtual network dedicated to your AWS account.
2. CloudFront : The content delivery network by AWS.
3. Direct Connect : A dedicated network connection between your datacenter/premises and AWS, used to increase throughput, reduce network cost, and avoid connectivity issues which may arise with internet-based connectivity.
4. Route 53 : A cloud Domain Name System (DNS) web service.
* * *
_Migration_
This is a set of services to help you migrate from on-premises services to AWS. It includes:
1. Application Discovery Service : A service dedicated to analyzing your servers, network, and applications to help speed up migration.
2. DMS : DMS stands for Database Migration Service. It is used to migrate your data from an on-premises database to RDS or a database hosted on EC2.
3. Server Migration : Also called SMS (Server Migration Service), an agentless service which moves your workloads from on-premises to AWS.
4. Snowball : Intended for when you want to transfer huge amounts of data in/out of AWS using physical storage appliances (rather than internet/network-based transfers).
* * *
_Developer Tools_
As the name suggests, this is a group of services that help developers code more easily in the cloud.
1. CodeCommit : A secure, scalable, managed source control service for hosting code repositories.
2. CodeBuild : A code builder in the cloud. It compiles and tests code and builds software packages for deployment.
3. CodeDeploy : A deployment service to automate application deployments to AWS servers or on-premises.
4. CodePipeline : This deployment service enables coders to visualize their application release process before release.
5. X-Ray : Analyzes applications by tracing event calls.
* * *
_Management Tools_
A group of services which help you manage your web services in the AWS cloud.
1. CloudWatch : A monitoring service for your AWS resources and applications.
2. CloudFormation : Infrastructure as code! A way of managing AWS-related infrastructure in a collective and orderly manner.
3. CloudTrail : An audit & compliance tool for your AWS account.
4. Config : AWS resource inventory, configuration history, and configuration change notifications, enabling security and governance.
5. OpsWorks : Automation for configuring and deploying EC2 or on-premises compute.
6. Service Catalog : Create and manage catalogs of IT services which are approved for use in your or your company's account.
7. Trusted Advisor : An automated advisor which helps you build better, money-saving AWS infrastructure by inspecting your existing AWS infra.
8. Managed Service : Provides ongoing infrastructure management.
* * *
_Security, Identity & compliance_
An important group of AWS services which help you secure your AWS space.
1. IAM : IAM stands for Identity and Access Management. It controls user access to your AWS resources and services.
2. Inspector : An automated security assessment which helps you keep the apps on your AWS infrastructure secure and compliant.
3. Certificate Manager : Provision, manage, and deploy SSL/TLS certificates for AWS applications.
4. Directory Service : Microsoft Active Directory for AWS.
5. WAF & Shield : WAF stands for Web Application Firewall. It monitors and controls access to your content on CloudFront or a load balancer.
6. Compliance Reports : Compliance reporting on your AWS infra space to make sure your apps and infra are compliant with your policies.
* * *
_Analytics_
Data analytics of your AWS space to help you see, plan, and act on what happens in your account.
1. Athena : A SQL-based query service to analyze data stored in S3.
2. EMR : EMR stands for Elastic MapReduce. A service for big data processing and analysis.
3. CloudSearch : Search capability within AWS applications and services.
4. Elasticsearch Service : To create a domain and deploy, operate, and scale Elasticsearch clusters in the AWS cloud.
5. Kinesis : Streams large amounts of data in real time.
6. Data Pipeline : Helps to move data between different AWS services.
7. QuickSight : Collect, analyze, and present insights from business data on AWS.
* * *
_Artificial Intelligence_
AI in AWS!
1. Lex : Helps build conversational interfaces into applications using voice and text.
2. Polly : A text-to-speech service.
3. Rekognition : Gives you the ability to add image analysis to applications.
4. Machine Learning : Provides algorithms to learn patterns in your data.
* * *
_Internet of Things_
This group of services makes AWS available on many kinds of connected devices.
1. AWS IoT : Lets connected hardware devices interact with AWS applications.
* * *
_Game Development_
As the name suggests, this group of services aims at game development.
1. Amazon GameLift : This service aims at deploying and managing dedicated game servers for session-based multiplayer games.
* * *
_Mobile Services_
A group of services mainly aimed at handheld devices.
1. Mobile Hub : Helps you create mobile app backend features and integrate them into mobile apps.
2. Cognito : Controls mobile users' authentication and access to AWS on internet-connected devices.
3. Device Farm : A mobile app testing service which enables you to test apps across Android and iOS on real phones hosted by AWS.
4. Mobile Analytics : Measure, track, and analyze mobile app data on AWS.
5. Pinpoint : Targeted push notifications and mobile engagement.
* * *
_Application Services_
This is a group of services which can be used with your applications in AWS.
1. Step Functions : Define and coordinate the various functions in your applications.
2. SWF : SWF stands for Simple Workflow Service. A cloud workflow management service which helps developers coordinate and contribute at different stages of the application life cycle.
3. API Gateway : Helps developers create, manage, and host APIs.
4. Elastic Transcoder : Helps developers convert media files for playback on various devices.
* * *
_Messaging_
Notification and messaging services in AWS.
1. SQS : SQS stands for Simple Queue Service. A fully managed message queuing service for communication between services and apps in AWS.
2. SNS : SNS stands for Simple Notification Service. A push notification service for AWS users to alert them about their services in the AWS space.
3. SES : SES stands for Simple Email Service. A cost-effective email service from AWS for its own customers.
* * *
_Business Productivity_
A group of services to help boost your business productivity.
1. WorkDocs : A collaborative file sharing, storing, and editing service.
2. WorkMail : A secure business mail and calendar service.
3. Amazon Chime : Online business meetings!
* * *
_Desktop & App Streaming_
Desktop and app streaming over the cloud.
1. WorkSpaces : A fully managed, secure desktop computing service in the cloud.
2. AppStream 2.0 : Stream desktop applications from the cloud.
--------------------------------------------------------------------------------
via: http://kerneltalks.com/virtualization/aws-cloud-terminology/
作者:[Shrikant Lavhate][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://kerneltalks.com/virtualization/aws-cloud-terminology/

How to Build Your Own Media Center with OpenELEC
============================================================
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-media-center.jpg "How to Build Your Own Media Center with OpenELECs")
Have you ever wanted to make your own home theater system? If so, this is the guide for you! In this article we'll go over how to set up a home entertainment system powered by OpenELEC and Kodi. We'll cover how to make the installation medium, which devices can run the software, how to install it, and everything else there is to know!
### Choosing a device
Before setting up the software in the media center, you'll need to choose a device. OpenELEC supports a multitude of devices, from regular desktops and laptops to the Raspberry Pi 2/3, etc. With a device chosen, think about how you'll access the media on the OpenELEC system and get it ready to use.
**Note:** as OpenELEC is based on Kodi, there are many ways to load playable media (Samba network shares, external devices, etc.).
### Making the installation disk
The OpenELEC installation disk requires a USB flash drive of at least 1 GB. This is the only way to install the software, as the developers do not currently distribute an ISO file. A raw IMG file needs to be created instead. Choose the link that corresponds with your device and [download][10] the raw disk image. With the image downloaded, open a terminal and use the command to extract the data from the archive.
**On Linux/macOS**
```
cd ~/Downloads
gunzip -d OpenELEC*.img.gz
```
**On Windows**
Download [7zip][11], install it, and then extract the archive.
With the raw .IMG file extracted, download the [Etcher USB creation tool][12] and follow the instructions on the page to install it and create the USB disk.
**Note:** for Raspberry Pi users, Etcher supports burning to SD cards as well.
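For Linux users who prefer the command line, `dd` can write the raw image instead of Etcher. Treat the following as a sketch only: `/dev/sdX` is a placeholder you must replace with your flash drive's actual device node (verify with `lsblk` first!), and this dry-run version writes to a temporary file so it is safe to try as-is.

```shell
IMG=OpenELEC-demo.img                 # stand-in name for your extracted .img file
printf 'raw image bytes' > "$IMG"     # fake payload, just for the dry run

# Real run: set TARGET=/dev/sdX and prefix dd with sudo -- check lsblk first!
TARGET=$(mktemp)

# copy the raw image onto the target and flush buffers to disk
dd if="$IMG" of="$TARGET" bs=4M 2>/dev/null
sync

# verify the write by comparing source and target byte for byte
cmp -s "$IMG" "$TARGET" && echo "write verified"
```

Writing to the wrong device node will destroy its contents, which is exactly why the article recommends Etcher: it refuses to write to system drives.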
### Installing OpenELEC
The OpenELEC installation process is probably one of the easiest of any operating system. To start, plug in the USB device and configure your device to boot from the USB drive. For some machines, this is done by pressing the DEL or F2 key. However, as every BIOS is different, it is best to look into the manual and find out.
![openelec-installer-selection](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-installer-selection.png "openelec-installer-selection")
Once in the BIOS, configure it to load the USB stick directly. This will allow the computer to boot the drive, which will bring you to the Syslinux boot screen. Enter “installer” in the prompt, then press the Enter key.
![openelec-installation-selection-menu](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-installation-selection-menu.png "openelec-installation-selection-menu")
By default, the quick installation option is selected. Press Enter to start the install. This will move the installer onto the drive selection page. Select the hard drive where OpenELEC should be installed, then press the Enter key to start the installation process.
![openelec-installation-in-progress](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-installation-in-progress.png "openelec-installation-in-progress")
Once done, reboot the system and load OpenELEC.
### Configuring OpenELEC
![openelec-wireless-network-setup](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-wireless-network-setup.jpg "openelec-wireless-network-setup")
On first boot, the user must configure a few things. If your media center device has a wireless network card, OpenELEC will prompt the user to connect it to a wireless access point. Select a network from the list and enter the access code.
![openelec-sharing-setup](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-sharing-setup.jpg "openelec-sharing-setup")
On the next “Welcome to OpenELEC” screen, the user must configure various sharing settings (SSH and Samba). It is advised that you turn these settings on, as this will make it easier to remotely transfer media files as well as gain command-line access.
### Adding Media
To add media to OpenELEC (Kodi), first select the section that you want to add media to. Adding media for Photos, Music, etc. follows the same process. In this guide we'll focus on adding videos.
![openelec-add-files-to-kodi](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-add-files-to-kodi.jpg "openelec-add-files-to-kodi")
Click the “Video” option on the home screen to go to the videos area. Select the “Files” option. On the next page click “Add videos…” This will take the user to the Kodi add-media screen. From here it is possible to add new media sources (both internal and external).
![openelec-add-media-source-kodi](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-add-media-source-kodi.jpg "openelec-add-media-source-kodi")
OpenELEC automatically mounts external devices (like USB drives, DVD data discs, etc.), and they can be added by browsing for the folder's mount point. Usually these devices are placed in “/run.” Alternatively, go back to the page where you clicked on “Add videos…” and click on the device there. Any external device, including DVDs/CDs, will show up there and can be accessed directly. This is a good option for those who don't know how to find mount points.
![openelec-name-video-source-folder](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-name-video-source-folder.jpg "openelec-name-video-source-folder")
Now that the device is selected within Kodi, the interface will ask the user to browse for the individual directory on the device that holds the media files, using the media center's file browser tool. Once that directory is found, add it, give it a name, and press the OK button to save it.
![openelec-show-added-media-kodi](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-show-added-media-kodi.jpg "openelec-show-added-media-kodi")
When a user browses “Videos,” they'll see a clickable folder which brings up the media added from the external device. The media in these folders can easily be played on the system.
### Using OpenELEC
When the user logs in they'll see a “home screen.” This home screen has several sections the user is able to click on and go to: Pictures, Videos, Music, Programs, etc. When hovering over any of these sections, subsections appear. For example, when hovering over “Pictures,” the subsections “Files” and “Add-ons” appear.
![openelec-navigation-bar](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-navigation-bar.jpg "openelec-navigation-bar")
If a user clicks on one of the subsections under a section, like “Add-ons,” the Kodi add-on chooser appears. This installer will allow users to either browse for new add-ons to install in relation to that subsection (like picture-related add-ons) or to launch existing ones that are already on the system.
Additionally, clicking the files subsection of any section (e.g. Videos) takes the user directly to any available files in that section.
### System Settings
![openelec-system-settings](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-system-settings.jpg "openelec-system-settings")
Kodi has an extensive settings area. To get to the Settings, hover the mouse to the right, and the menu selector will scroll right and reveal “System.” Click on it to open the global system settings area.
Any setting can be modified and changed by the user, from installing add-ons from the Kodi-repository, to activating various services, to changing the theme, and even the weather. To exit the settings area and return to the home screen, press the “home” icon in the bottom-right corner.
### Conclusion
With OpenELEC installed and configured, you are now free to use your very own Linux-powered home theater system. Of all the home-theater-oriented Linux distributions, this one is the most user-friendly. Do keep in mind that although this operating system is known as “OpenELEC,” it runs Kodi and is compatible with all of the different Kodi add-ons, tools, and programs.
--------------------------------------------------------------------------------
via: https://www.maketecheasier.com/build-media-center-with-openelec/
作者:[Derrik Diener][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.maketecheasier.com/author/derrikdiener/
[1]:https://www.maketecheasier.com/author/derrikdiener/
[2]:https://www.maketecheasier.com/build-media-center-with-openelec/#comments
[3]:https://www.maketecheasier.com/category/linux-tips/
[4]:http://www.facebook.com/sharer.php?u=https%3A%2F%2Fwww.maketecheasier.com%2Fbuild-media-center-with-openelec%2F
[5]:http://twitter.com/share?url=https%3A%2F%2Fwww.maketecheasier.com%2Fbuild-media-center-with-openelec%2F&text=How+to+Build+Your+Own+Media+Center+with+OpenELEC
[6]:mailto:?subject=How%20to%20Build%20Your%20Own%20Media%20Center%20with%20OpenELEC&body=https%3A%2F%2Fwww.maketecheasier.com%2Fbuild-media-center-with-openelec%2F
[7]:https://www.maketecheasier.com/permanently-disable-windows-defender-windows-10/
[8]:https://www.maketecheasier.com/repair-mac-hard-disk-with-fsck/
[9]:https://support.google.com/adsense/troubleshooter/1631343
[10]:http://openelec.tv/get-openelec/category/1-openelec-stable-releases
[11]:http://www.7-zip.org/
[12]:https://etcher.io/

How to control GPIO pins and operate relays with the Raspberry Pi
============================================================
> Learn how to operate relays and control GPIO pins with the Pi using PHP and a temperature sensor.
![How to control GPIO pins and operate relays with the Raspberry Pi](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/raspberry_pi_day_lead_0.jpeg?itok=lCxmviRD "How to control GPIO pins and operate relays with the Raspberry Pi")
>Image by : opensource.com
Ever wondered how to control items like your fans, lights, and more using your phone or computer from anywhere?
I was looking to control my Christmas lights using any mobile phone, tablet, laptop... simply by using a Raspberry Pi. Let me show you how to operate relays and control GPIO pins with the Pi using PHP and a temperature sensor. I put them all together using AJAX.
### Hardware requirements
* Raspberry Pi
* SD Card with Raspbian installed (any SD card would work, but I prefer to use a 32GB class 10 card)
* Power adapter
* Jumper wires (female to female and male to female)
* Relay board (I use a 12V relay board with four relays)
* DS18B20 temperature probe
* Wi-Fi adapter for Raspberry Pi
* Router (for Internet access; you need a router that supports port forwarding)
* 10K-ohm resistor
### Software requirements
* Download and install Raspbian on your SD Card
* Working Internet connection
* Apache web server
* PHP
* WiringPi
* SSH client on a Mac or Windows client
### General configurations and setup
1\. Insert the SD card into the Raspberry Pi and connect it to the router using an Ethernet cable.
2\. Connect the Wi-Fi adapter.
3\. Now SSH to the Pi and edit the **interfaces** file using:
**sudo nano /etc/network/interfaces**
This will open the file in an editor called **nano**. It is a very simple text editor that is easy to approach and use. If you're not familiar with Linux-based operating systems, just use the arrow keys.
After opening the file in **nano** you will see a screen like this:
![File editor nano](https://opensource.com/sites/default/files/putty_0.png "File editor nano")
4\. To configure your wireless network, modify the file as follows:
**iface lo inet loopback**
**iface eth0 inet dhcp**
**allow-hotplug wlan0**
**auto wlan0**
**iface wlan0 inet dhcp**
**   wpa-ssid "Your Network SSID"**
**   wpa-psk "Your Password"**
5\. Press CTRL + O to save it, and then CTRL + X to exit the editor.
At this point, everything is configured and all you need to do is reload the network interfaces by running:
**sudo service networking reload**
(Warning: if you are connected using a remote connection it will disconnect now.)
### Software configurations
### Installing Apache Web Server
Apache is a popular web server application you can install on the Raspberry Pi to allow it to serve web pages. On its own, Apache can serve HTML files over HTTP, and with additional modules it can serve dynamic web pages using scripting languages such as PHP.
Install Apache by typing the following command on the command line:
**sudo apt-get install apache2 -y**
Once the installation is complete, type the IP address of your Pi into a browser to test the server. If you see the page shown in the next image, then you have installed and set up your server successfully.
![Successful server setup](https://opensource.com/sites/default/files/itworks.png "Successful server setup")
To change this default page and add your own HTML file, go to **/var/www/html**:
**cd /var/www/html**
To test this, add any file to this folder.
### Installing PHP
PHP is a preprocessor, meaning this is code that runs when the server receives a request for a web page. It runs, works out what needs to be shown on the page, then sends that page to the browser. Unlike static HTML, PHP can show different content under different circumstances. Other languages are capable of this, but PHP is what we'll use here. It is a very popular language on the web, with large projects like Facebook and Wikipedia written in it.
Install the PHP and Apache packages with the following command:
**sudo apt-get install php5 libapache2-mod-php5 -y**
### Testing PHP
Create the file **index.php**:
**sudo nano index.php**
Put some PHP content in it:
**<?php echo "hello world"; ?>**
Save the file. Next, delete "index.html" because it takes precedence over "index.php":
**sudo rm index.html**
Refresh your browser. You should see “hello world.” This is not dynamic, but it is still served by PHP. If you see the raw PHP above instead of “hello world,” reload and restart Apache with:
**sudo /etc/init.d/apache2 reload**
**sudo /etc/init.d/apache2 restart**
### Installing WiringPi
WiringPi is maintained under **git** for ease of change tracking; however, you have a plan B if you're unable to use **git** for whatever reason. (Usually your firewall will be blocking you, so do check that first!)
If you do not have **git** installed, then under any of the Debian releases (e.g., Raspbian), you can install it with:
**sudo apt-get install git-core**
If you get any errors here, make sure your Pi is up to date with the latest version of Raspbian:
**sudo apt-get update**
**sudo apt-get upgrade**
To obtain WiringPi using **git**:
**sudo git clone git://git.drogon.net/wiringPi**
If you have already cloned it before, you can update it with:
**cd wiringPi**
**git pull origin**
It will fetch an updated version, and then you can re-run the build script below.
To build/install there is a new simplified script:
**cd wiringPi**
**./build**
The new build script will compile and install it all for you. It does use the **sudo** command at one point, so you may wish to inspect the script before running it.
### Testing WiringPi
Run the **gpio** command to check the installation:
**gpio -v**
**gpio readall**
This should give you some confidence that it's working OK.
### Connecting DS18B20 To Raspberry Pi
* The Black wire on your probe is for GND
* The Red wire is for VCC
* The Yellow wire is the GPIO wire
![GPIO image](https://opensource.com/sites/default/files/gpio_0.png "GPIO image")
Connect:
* VCC to 3.3V Pin 1
* GPIO wire to Pin 7 (GPIO 04)
* Ground wire to a GND pin, e.g. Pin 9
### Software Configuration
To use the DS18B20 temperature sensor module with PHP, you need to activate the kernel modules for the GPIO pins on the Raspberry Pi and the DS18B20 by executing the commands:
**sudo modprobe w1-gpio**
**sudo modprobe w1-therm**
You do not want to do that manually every time the Raspberry Pi reboots, so you want to enable these modules on every boot. This is done by adding the following lines to the file **/etc/modules**:
**sudo nano /etc/modules**
Add the following lines to it:
**w1-gpio**
**w1-therm**
To test this, type in:
**cd /sys/bus/w1/devices/**
Now type **ls**.
You should see your device information. In the device drivers, your DS18B20 sensor should be listed as a series of numbers and letters. In this case, the device is registered as 28-000005e2fdc3. You then need to access the sensor with the cd command, replacing my serial number with your own: **cd 28-000005e2fdc3**.
The DS18B20 sensor periodically writes to the **w1_slave** file, so you simply use the cat command to read it: **cat w1_slave**.
This yields the following two lines of text, with the output **t=** showing the temperature in degrees Celsius. Place a decimal point after the first two digits (e.g., the temperature reading I received is 30.125 degrees Celsius).
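The decimal-point rule above can also be scripted. Here is a minimal shell sketch that pulls the **t=** value out of a captured w1_slave reading and converts it to degrees Celsius; the sample text is a stand-in for your sensor's real output, and the serial number will differ on your device.

```shell
# sample w1_slave output from a DS18B20 (your CRC bytes and reading will differ)
sample='2d 00 4b 46 ff ff 02 10 19 : crc=19 YES
2d 00 4b 46 ff ff 02 10 19 t=30125'

# take everything after "t=" and divide by 1000 to get degrees Celsius
temp_c=$(printf '%s\n' "$sample" | awk -F 't=' '/t=/ { printf "%.3f", $2 / 1000 }')
echo "$temp_c"   # prints 30.125
```

On the Pi itself the same awk line works against the real file, e.g. `awk -F 't=' '/t=/ {printf "%.3f", $2/1000}' /sys/bus/w1/devices/28-*/w1_slave`.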
### Connecting the relay
1\. Take two jumper wires and connect one of them to GPIO 24 (Pin 18) on the Pi and the other one to a GND pin. You may refer to the following diagram.
2\. Now connect the other ends of the wires to the relay board. Connect the GND to the GND on the relay board and the GPIO output wire to the relay channel pin number, which depends on the relay that you are using. Remember, the GND goes to GND on the relay and the GPIO output goes to the relay input pin.
![Headers](https://opensource.com/sites/default/files/headers.png "Headers")
Caution! Be very careful with the relay connections to the Pi, because a backflow of current will cause a short circuit.
3\. Now connect the power supply to the relay, either using a 12V power adapter or by connecting the VCC pin to 3.3V or 5V on the Pi.
### Controlling the relay using PHP
Let's create a PHP script to control the GPIO pins on the Raspberry Pi, with the help of the WiringPi software.
1\. Create a file in the Apache server's root web directory. Navigate to it using:
**cd /var/www/html/**
2\. Create a new folder called Home:
**sudo mkdir Home**
3\. Create a new PHP file called **on.php**:
**sudo nano on.php**
4\. Add the following code to it:
```
<?php
          system("gpio -g mode 24 out");
          system("gpio -g write 24 1");
?>
```
5\. Save the file using CTRL + O and exit using CTRL + X
In the code above, the first line sets GPIO pin 24 to output mode using the command:
```
system("gpio -g mode 24 out");
```
In the second line, you've turned on GPIO pin 24 by writing “1,” where “1” in binary means ON and “0” means OFF.
6\. To turn off the relay, create another file called **off.php** and replace “1” with “0.”
```
<?php
system("gpio -g mode 24 out");
system("gpio -g write 24 0");
?>
```
7\. If you have your relay connected to the Pi, visit your web browser and type in the IP address of your Pi followed by the directory name and file name (note the path is case sensitive, matching the Home folder created above):
**http://{IPADDRESS}/Home/on.php**
This will turn ON the relay.
8\. To turn it OFF, open the page called **off.php**:
**http://{IPADDRESS}/Home/off.php**
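Both PHP files simply shell out to the same two gpio calls, so for quick testing over SSH you can drive the relay directly from the shell. This is a sketch: the `relay` helper is my own wrapper, and the gpio command is stubbed when wiringPi is absent so the sequence can also be dry-run on any machine.

```shell
# dry-run stub: fake gpio when wiringPi is not installed (on the Pi the real one runs)
if ! command -v gpio >/dev/null 2>&1; then
  gpio() { echo "gpio $*"; }
fi

relay() {                 # usage: relay 1 (on) / relay 0 (off)
  gpio -g mode 24 out     # BCM pin 24 as output, same as on.php/off.php
  gpio -g write 24 "$1"   # 1 energizes the relay, 0 releases it
}

relay 1
relay 0
```

If this works from the shell but the PHP pages do nothing, check that the www-data user is allowed to run gpio.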
Now you need to control both these things from a single page without refreshing or visiting the pages individually. For that you'll use AJAX.
9\. Create a new HTML file and add this code to it.
```
<html>
<head>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.3/jquery.min.js"></script>
<script type="text/javascript">
$(document).ready(function() {
    $('#on').click(function() {
        var a = new XMLHttpRequest();
        a.open("GET", "on.php");
        a.onreadystatechange = function() {
            if (a.readyState == 4 && a.status != 200) {
                alert("http error");
            }
        };
        a.send();
    });
    $('#off').click(function() {
        var a = new XMLHttpRequest();
        a.open("GET", "off.php");
        a.onreadystatechange = function() {
            if (a.readyState == 4 && a.status != 200) {
                alert("http error");
            }
        };
        a.send();
    });
});
</script>
</head>
<body>
<button id="on" type="button"> Switch Lights On </button>
<button id="off" type="button"> Switch Lights Off </button>
</body>
</html>
```
10\. Save the file, go to your web browser, and open that page. You'll see two buttons, which will turn the lights on and off. Based on the same idea, you can create a beautiful web interface using Bootstrap and CSS skills.
### Viewing temperature on this web page
1\. Create a file called **temperature.php**:
```
sudo nano temperature.php
```
2\. Add the following code to it, replace 10-000802292522 with your device ID:
```
<?php
//File to read
$file = '/sys/devices/w1_bus_master1/10-000802292522/w1_slave';
//Read the file line by line
$lines = file($file);
//Get the temp from second line
$temp = explode('=', $lines[1]);
//Setup some nice formatting (i.e., 21,3)
$temp = number_format($temp[1] / 1000, 1, ',', '');
//And echo that temp
echo $temp . " °C";
?>
```
3\. Go to the HTML file that you just created, and create a new **<div>** with the **id** "screen": **<div id="screen"></div>**.
4\. Add the following code after the **<body>** tag or at the end of the document:
```
<script>
$(document).ready(function(){
setInterval(function(){
$("#screen").load('temperature.php')
}, 1000);
});
</script>
```
In this, **#screen** is the **id** of **<div>** in which you want to display the temperature. It loads the **temperature.php** file every 1000 milliseconds.
I have used Bootstrap to make a beautiful panel for displaying the temperature. You can add multiple icons and glyphicons as well to make it more attractive.
This was just a basic system that controls a relay board and displays the temperature. You can develop it even further by creating event-based triggers based on timings, temperature readings from the thermostat, etc.
--------------------------------------------------------------------------------
作者简介:
Abdul Hannan Mustajab - I'm 17 years old and live in India. I am pursuing an education in science, math, and computer science. I blog about my projects at spunkytechnology.com. I've been working on AI-based IoT using different microcontrollers and boards.
--------
via: https://opensource.com/article/17/3/operate-relays-control-gpio-pins-raspberry-pi
作者:[ Abdul Hannan Mustajab][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/mustajabhannan
[1]:http://www.php.net/system
[2]:http://www.php.net/system
[3]:http://www.php.net/system
[4]:http://www.php.net/system
[5]:http://www.php.net/system
[6]:http://www.php.net/file
[7]:http://www.php.net/explode
[8]:http://www.php.net/number_format
[9]:https://opensource.com/article/17/3/operate-relays-control-gpio-pins-raspberry-pi?rate=RX8QqLzmUb_wEeLw0Ee0UYdp1ehVokKZ-JbbJK_Cn5M
[10]:https://opensource.com/user/123336/feed
[11]:https://opensource.com/users/mustajabhannan

#rusking translating
Join CentOS 7 Desktop to Samba4 AD as a Domain Member Part 9
============================================================
by [Matei Cezar][23] | Published: March 17, 2017 | Last Updated: March 17, 2017
This guide describes how to integrate a CentOS 7 Desktop with a Samba4 Active Directory Domain Controller using Authconfig-gtk, in order to authenticate users across your network infrastructure from a single centralized account database held by Samba.
#### Requirements
1. [Create an Active Directory Infrastructure with Samba4 on Ubuntu][1]
2. [CentOS 7.3 Installation Guide][2]
### Step 1: Configure CentOS Network for Samba4 AD DC
1. Before starting to join CentOS 7 Desktop to a Samba4 domain, you need to ensure that the network is properly set up to query the domain via the DNS service.
Open Network Settings and turn off the Wired network interface if it is enabled. Hit the lower Settings button as illustrated in the below screenshots and manually edit your network settings, especially the DNS IPs that point to your Samba4 AD DC.
When you finish, Apply the configurations and turn on your Network Wired Card.
[
![Network Settings](http://www.tecmint.com/wp-content/uploads/2017/03/Network-Settings.jpg)
][3]
Network Settings
[
![Configure Network](http://www.tecmint.com/wp-content/uploads/2017/03/Configure-Network.jpg)
][4]
Configure Network
2. Next, open your network interface configuration file and add a line at the end of the file with the name of your domain. This line ensures that the domain name is automatically appended by DNS resolution (FQDN) when you use only a short name for a domain DNS record.
```
$ sudo vi /etc/sysconfig/network-scripts/ifcfg-eno16777736
```
Add the following line:
```
SEARCH="your_domain_name"
```
[
![Network Interface Configuration](http://www.tecmint.com/wp-content/uploads/2017/03/Network-Interface-Configuration.jpg)
][5]
Network Interface Configuration
3. Finally, restart the network service to reflect the changes, verify that the resolver configuration file is correctly configured, and issue a series of ping commands against your DCs' short names and against your domain name in order to verify that DNS resolution is working.
```
$ sudo systemctl restart network
$ cat /etc/resolv.conf
$ ping -c1 adc1
$ ping -c1 adc2
$ ping tecmint.lan
```
[
![Verify Network Configuration](http://www.tecmint.com/wp-content/uploads/2017/03/Verify-Network-Configuration.jpg)
][6]
Verify Network Configuration
4. Also, configure your machine hostname and reboot the machine to properly apply the settings by issuing the following commands:
```
$ sudo hostnamectl set-hostname your_hostname
$ sudo init 6
```
Verify if hostname was correctly applied with the below commands:
```
$ cat /etc/hostname
$ hostname
```
5. The last setting will ensure that your system time is in sync with Samba4 AD DC by issuing the below commands:
```
$ sudo yum install ntpdate
$ sudo ntpdate -ud domain.tld
```
### Step 2: Install Required Software to Join Samba4 AD DC
6. In order to integrate CentOS 7 to an Active Directory domain install the following packages from command line:
```
$ sudo yum install samba samba-winbind krb5-workstation
```
7. Finally, install the graphical interface software used for domain integration provided by CentOS repos: Authconfig-gtk.
```
$ sudo yum install authconfig-gtk
```
### Step 3: Join CentOS 7 Desktop to Samba4 AD DC
8. The process of joining CentOS to a domain controller is very straightforward. From command line open Authconfig-gtk program with root privileges and make the following changes as described below:
```
$ sudo authconfig-gtk
```
On Identity & Authentication tab.
* User Account Database = select Winbind
* Winbind Domain = YOUR_DOMAIN
* Security Model = ADS
* Winbind ADS Realm = YOUR_DOMAIN.TLD
* Domain Controllers = domain machines FQDN
* Template Shell = /bin/bash
* Allow offline login = checked
[
![Authentication Configuration](http://www.tecmint.com/wp-content/uploads/2017/03/Authentication-Configuration.jpg)
][7]
Authentication Configuration
On Advanced Options tab.
* Local Authentication Options = check Enable fingerprint reader support
* Other Authentication Options = check Create home directories on the first login
[
![Authentication Advance Configuration](http://www.tecmint.com/wp-content/uploads/2017/03/Authentication-Advance-Configuration.jpg)
][8]
Authentication Advance Configuration
9. After you've added all the required values, return to the Identity & Authentication tab, hit the Join Domain button, and then hit the Save button in the alert window to save the settings.
[
![Identity and Authentication](http://www.tecmint.com/wp-content/uploads/2017/03/Identity-and-Authentication.jpg)
][9]
Identity and Authentication
[
![Save Authentication Configuration](http://www.tecmint.com/wp-content/uploads/2017/03/Save-Authentication-Configuration.jpg)
][10]
Save Authentication Configuration
10. After the configuration has been saved you will be asked to provide a domain administrator account in order to join the domain. Supply the credentials for a domain administrator user and hit OK button to finally join the domain.
[
![Joining Winbind Domain](http://www.tecmint.com/wp-content/uploads/2017/03/Joining-Winbind-Domain.jpg)
][11]
Joining Winbind Domain
11. After your machine has been integrated into the realm, hit on Apply button to reflect changes, close all windows and reboot the machine.
[
![Apply Authentication Configuration](http://www.tecmint.com/wp-content/uploads/2017/03/Apply-Authentication-Configuration.jpg)
][12]
Apply Authentication Configuration
12. In order to verify if the system has been joined to Samba4 AD DC open AD Users and Computers from a Windows machine with [RSAT tools installed][13] and navigate to your domain Computers container.
The name of your CentOS machine should be listed in the right pane.
[
![Active Directory Users and Computers](http://www.tecmint.com/wp-content/uploads/2017/03/Active-Directory-Users-and-Computers.jpg)
][14]
Active Directory Users and Computers
### Step 4: Login to CentOS Desktop with a Samba4 AD DC Account
13. In order to log in to the CentOS Desktop, hit the Not listed? link and enter the username of a domain account preceded by the domain name, as illustrated below.
```
Domain\domain_account
or
Domain_user@domain.tld
```
[
![Not listed Users](http://www.tecmint.com/wp-content/uploads/2017/03/Not-listed-Users.jpg)
][15]
Not listed Users
[
![Enter Domain Username](http://www.tecmint.com/wp-content/uploads/2017/03/Enter-Domain-Username.jpg)
][16]
Enter Domain Username
14. To authenticate with a domain account from command line in CentOS use one of the following syntaxes:
```
$ su - domain\domain_user
$ su - domain_user@domain.tld
```
[
![Authenticate Domain Username](http://www.tecmint.com/wp-content/uploads/2017/03/Authenticate-Domain-User.jpg)
][17]
Authenticate Domain Username
[
![Authenticate Domain User Email](http://www.tecmint.com/wp-content/uploads/2017/03/Authenticate-Domain-User-Email.jpg)
][18]
Authenticate Domain User Email
15. To add root privileges for a domain user or group, edit the sudoers file using the visudo command with root privileges and add the following lines as illustrated in the excerpt below:
```
YOUR_DOMAIN\\domain_username ALL=(ALL:ALL) ALL #For domain users
%YOUR_DOMAIN\\your_domain\ group ALL=(ALL:ALL) ALL #For domain groups
```
[
![Assign Permission to User and Group](http://www.tecmint.com/wp-content/uploads/2017/03/Assign-Permission-to-User-and-Group.jpg)
][19]
Assign Permission to User and Group
16. To display a summary about the domain controller use the following command:
```
$ sudo net ads info
```
[
![Check Domain Controller Info](http://www.tecmint.com/wp-content/uploads/2017/03/Check-Domain-Controller-Info.jpg)
][20]
Check Domain Controller Info
17. In order to verify that the machine trust account created when CentOS was added to the Samba4 AD DC is functional, and to list domain accounts from the command line, install the Winbind client by issuing the below command:
```
$ sudo yum install samba-winbind-clients
```
Then issue a series of checks against Samba4 AD DC by executing the following commands:
```
$ wbinfo -p #Ping domain
$ wbinfo -t #Check trust relationship
$ wbinfo -u #List domain users
$ wbinfo -g #List domain groups
$ wbinfo -n domain_account #Get the SID of a domain account
```
[
![Get Samba4 AD DC Details](http://www.tecmint.com/wp-content/uploads/2017/03/Get-Samba4-AD-DC-Details.jpg)
][21]
Get Samba4 AD DC Details
18. In case you want to leave the domain, issue the following command against your domain name, using a domain account with administrator privileges:
```
$ sudo net ads leave your_domain -U domain_admin_username
```
[
![Leave Domain from Samba4 AD](http://www.tecmint.com/wp-content/uploads/2017/03/Leave-Domain-from-Samba4-AD.jpg)
][22]
Leave Domain from Samba4 AD
That's all! Although this procedure is focused on joining CentOS 7 to a Samba4 AD DC, the same steps described in this documentation are also valid for integrating a CentOS 7 desktop machine into a Microsoft Windows Server 2008 or 2012 domain.
--------------------------------------------------------------------------------
作者简介:
I'm a computer addict, a fan of open source and Linux-based system software, with about 4 years of experience with Linux distributions on desktops and servers, and with bash scripting.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/join-centos-7-to-samba4-active-directory/
作者:[Matei Cezar][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/cezarmatei/
[1]:http://www.tecmint.com/install-samba4-active-directory-ubuntu/
[2]:http://www.tecmint.com/centos-7-3-installation-guide/
[3]:http://www.tecmint.com/wp-content/uploads/2017/03/Network-Settings.jpg
[4]:http://www.tecmint.com/wp-content/uploads/2017/03/Configure-Network.jpg
[5]:http://www.tecmint.com/wp-content/uploads/2017/03/Network-Interface-Configuration.jpg
[6]:http://www.tecmint.com/wp-content/uploads/2017/03/Verify-Network-Configuration.jpg
[7]:http://www.tecmint.com/wp-content/uploads/2017/03/Authentication-Configuration.jpg
[8]:http://www.tecmint.com/wp-content/uploads/2017/03/Authentication-Advance-Configuration.jpg
[9]:http://www.tecmint.com/wp-content/uploads/2017/03/Identity-and-Authentication.jpg
[10]:http://www.tecmint.com/wp-content/uploads/2017/03/Save-Authentication-Configuration.jpg
[11]:http://www.tecmint.com/wp-content/uploads/2017/03/Joining-Winbind-Domain.jpg
[12]:http://www.tecmint.com/wp-content/uploads/2017/03/Apply-Authentication-Configuration.jpg
[13]:http://www.tecmint.com/manage-samba4-ad-from-windows-via-rsat/
[14]:http://www.tecmint.com/wp-content/uploads/2017/03/Active-Directory-Users-and-Computers.jpg
[15]:http://www.tecmint.com/wp-content/uploads/2017/03/Not-listed-Users.jpg
[16]:http://www.tecmint.com/wp-content/uploads/2017/03/Enter-Domain-Username.jpg
[17]:http://www.tecmint.com/wp-content/uploads/2017/03/Authenticate-Domain-User.jpg
[18]:http://www.tecmint.com/wp-content/uploads/2017/03/Authenticate-Domain-User-Email.jpg
[19]:http://www.tecmint.com/wp-content/uploads/2017/03/Assign-Permission-to-User-and-Group.jpg
[20]:http://www.tecmint.com/wp-content/uploads/2017/03/Check-Domain-Controller-Info.jpg
[21]:http://www.tecmint.com/wp-content/uploads/2017/03/Get-Samba4-AD-DC-Details.jpg
[22]:http://www.tecmint.com/wp-content/uploads/2017/03/Leave-Domain-from-Samba4-AD.jpg
[23]:http://www.tecmint.com/author/cezarmatei/
[24]:http://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
[25]:http://www.tecmint.com/free-linux-shell-scripting-books/


@ -0,0 +1,174 @@
Make Container Management Easy With Cockpit
============================================================
![cockpit](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/cockpit-containers.jpg?itok=D3MMNlkg "cockpit")
If you're looking for an easy way to manage a Linux server that includes containers, you should check out Cockpit.[Creative Commons Zero][6]
If you administer a Linux server, you've probably been in search of a solid administration tool. That quest has probably taken you to such software as [Webmin][14] and [cPanel][15]. But if you're looking for an easy way to manage a Linux server that also includes Docker, one tool stands above the rest for that particular purpose: [Cockpit][16].
Why Cockpit? Because it includes the ability to handle administrative tasks such as:
* Connect to and manage multiple machines
* Manage containers via Docker
* Interact with Kubernetes or OpenShift clusters
* Modify network settings
* Manage user accounts
* Access a web-based shell
* View system performance information by way of helpful graphs
* View system services and log files
Cockpit can be installed on [Debian][17], [Red Hat][18], [CentOS][19], [Arch Linux][20], and [Ubuntu][21]. Here, I will focus on installing the system on an Ubuntu 16.04 server that already includes Docker.
Out of the list of features, the one that stands out is container management. Why? Because it makes installing and managing containers incredibly simple. In fact, you might be hard-pressed to find a better container management solution.
With that said, let's install this solution and see just how easy it is to use.
### Installation
As I mentioned earlier, I will be installing Cockpit on an instance of Ubuntu 16.04, with Docker already running. The steps for installation are quite simple. The first thing you must do is log into your Ubuntu server. Next you must add the necessary repository with the command:
```
sudo add-apt-repository ppa:cockpit-project/cockpit
```
When prompted, hit the Enter key on your keyboard and wait for the prompt to return. Once you are back at your bash prompt, update apt with the command:
```
sudo apt-get update
```
Install Cockpit by issuing the command:
```
sudo apt-get -y install cockpit cockpit-docker
```
After the installation completes, it is necessary to start the Cockpit service and then enable it so it auto-starts at boot. To do this, issue the following two commands:
```
sudo systemctl start cockpit
sudo systemctl enable cockpit
```
That's all there is to the installation.
### Logging into Cockpit
To gain access to the Cockpit web interface, point a browser (that happens to be on the same network as the Cockpit server) to http://IP_OF_SERVER:9090, and you will be presented with a login screen (Figure 1).
![login](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/cockpit_a.jpg?itok=RViOst2V "login")
Figure 1: The Cockpit login screen.[Used with permission][1]
A word of warning with using Cockpit and Ubuntu. Many of the tasks that can be undertaken with Cockpit require administrative access. If you log in with a standard user, you won't be able to work with some of the tools like Docker. To get around that, you can enable the root user on Ubuntu. This isn't always a good idea. By enabling the root account, you are bypassing the security system that has been in place for years. However, for the purpose of this article, I will enable the root user with the following two commands:
```
sudo passwd root
sudo passwd -u root
```
NOTE: Make sure you give the root account a very challenging password.
Should you want to revert this change, you need only issue the command:
```
sudo passwd -l root
```
With other distributions, such as CentOS and Red Hat, you will be able to log into Cockpit with the username _root_ and the root password, without having to go through the extra hoops described above.
If you're hesitant to enable the root user, you can always pull down the images from the server terminal (using the command `docker pull IMAGE_NAME`, where `IMAGE_NAME` is the image you want to pull). That would add the image to your Docker server, which can then be managed via a regular user. The only caveat to this is that the regular user must be added to the docker group with the command:
```
sudo usermod -aG docker USER
```
Where USER is the actual username to be added to the group. Once you've done that, log out, log back in, and then restart Docker with the command:
```
sudo service docker restart
```
Now the regular user can start and stop the added Docker images/containers without having to enable the root user. The only caveat is that the user will not be able to add new images via the Cockpit interface.
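A quick way to confirm that the group membership took effect is to inspect the user's groups from the shell. This is a small sketch using standard tools (the `in_group` helper is our own, not part of Docker):

```
#!/bin/sh
# Report whether a user belongs to the docker group.
# in_group USER GROUP -> exit status 0 if USER is a member of GROUP.
in_group() {
    id -nG "$1" 2>/dev/null | tr ' ' '\n' | grep -qx "$2"
}

user=$(id -un)
if in_group "$user" docker; then
    echo "$user is in the docker group"
else
    echo "$user is NOT in the docker group yet (log out and back in after usermod)"
fi
```

Remember that group changes only apply to new login sessions, which is why the log-out/log-in step above matters.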
### Using Cockpit
Once you've logged in, you will be treated to the Cockpit main window (Figure 2).
![main window](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/cockpit_b.jpg?itok=tZCHcq-Y "main window")
Figure 2: The Cockpit main window.[Used with permission][2]
You can go through each of the sections to check on the status of the server, work with users, etc., but we want to go right to the containers. Click on the Containers section to display the currently running containers as well as the available images (Figure 3).
![Cockpit](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/cockpit_c.jpg?itok=OOYJt2yv "Cockpit")
Figure 3: Managing containers is incredibly simple with Cockpit.[Used with permission][3]
To start an image, simply locate the image and click the associated start button. From the resulting popup window (Figure 4), you can check all the information about the image (and adjust as needed), before clicking the Run button.
![Running Docker image](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/cockpit_d.jpg?itok=8uldEq_r "Running Docker image")
Figure 4: Running a Docker image with the help of Cockpit.[Used with permission][4]
Once the image is running, you can check its status by clicking on the entry under the Containers section and then Stop, Restart, or Delete the instance. You can also click Change resource limits and then adjust the memory limit and/or CPU priority.
### Adding new images
Say you have logged on as the root user. If so, you can add new images with the help of the Cockpit GUI. From the Containers section, click the Get new image button and then, in the resulting window, search for the image you want to add. Say you want to add the latest official build of CentOS. Type "centos" in the search field and then, once the search results populate, select the official listing and click Download (Figure 5).
![Adding image](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/cockpit_f.jpg?itok=_S5g8Da2 "Adding image")
Figure 5: Adding the latest build of the official CentOS image to Docker, via Cockpit.[Used with permission][5]
Once the image has downloaded, it will be available to Docker and can be run via Cockpit.
### As simple as it gets
Managing Docker doesn't get any easier. Yes, there is a caveat when working with Cockpit on Ubuntu, but if it's your only option, there are ways to make it work. With the help of Cockpit, you can not only easily manage Docker images, you can do so from any web browser that has access to your Linux server. Enjoy your newfound Docker ease.
_Learn more about Linux through the free ["Introduction to Linux" ][13]course from The Linux Foundation and edX._
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2017/3/make-container-management-easy-cockpit
作者:[JACK WALLEN][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/jlwallen
[1]:https://www.linux.com/licenses/category/used-permission
[2]:https://www.linux.com/licenses/category/used-permission
[3]:https://www.linux.com/licenses/category/used-permission
[4]:https://www.linux.com/licenses/category/used-permission
[5]:https://www.linux.com/licenses/category/used-permission
[6]:https://www.linux.com/licenses/category/creative-commons-zero
[7]:https://www.linux.com/files/images/cockpitajpg
[8]:https://www.linux.com/files/images/cockpitbjpg
[9]:https://www.linux.com/files/images/cockpitcjpg
[10]:https://www.linux.com/files/images/cockpitdjpg
[11]:https://www.linux.com/files/images/cockpitfjpg
[12]:https://www.linux.com/files/images/cockpit-containersjpg
[13]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
[14]:http://www.webmin.com/
[15]:http://cpanel.com/
[16]:http://cockpit-project.org/
[17]:https://www.debian.org/
[18]:https://www.redhat.com/en
[19]:https://www.centos.org/
[20]:https://www.archlinux.org/
[21]:https://www.ubuntu.com/


@ -0,0 +1,112 @@
[A formal spec for GitHub Flavored Markdown][8]
============================================================
We are glad we chose Markdown as the markup language for user content at GitHub. It provides a powerful yet straightforward way for users (both technical and non-technical) to write plain text documents that can be rendered richly as HTML.
Its main limitation, however, is the lack of standardization on the most ambiguous details of the language. Things like how many spaces are needed to indent a line, how many empty lines you need to break between different elements, and a plethora of other trivial corner cases change between implementations: very similar-looking Markdown documents can be rendered as wildly different outputs depending on your Markdown parser of choice.
Five years ago, we started building GitHub's custom version of Markdown, GFM (GitHub Flavored Markdown), on top of [Sundown][13], a parser we specifically developed to solve some of the shortcomings of the existing Markdown parsers at the time.
Today we're hoping to improve on this situation by releasing a formal specification of the syntax for GitHub Flavored Markdown, and its corresponding reference implementation.
This formal specification is based on [CommonMark][14], an ambitious project to formally specify the Markdown syntax used by many websites on the internet in a way that reflects its real world usage. CommonMark allows people to continue using Markdown the same way they always have, while offering developers a comprehensive specification and reference implementations to interoperate and display Markdown in a consistent way between platforms.
#### The Specification
Taking the CommonMark spec and re-engineering our current user content stack around it is not a trivial endeavour. The main issue we struggled with is that the spec (and hence its reference implementations) focuses strictly on the common subset of Markdown that is supported by the original Perl implementation. This does not include some of the extended features that have always been available on GitHub. Most notably, support for  _tables, strikethrough, autolinks and task lists_  is missing.
In order to fully specify the version of Markdown we use at GitHub (known as GFM), we had to formally define the syntax and semantics of these features, something which we had never done before. We did this on top of the existing CommonMark spec, taking special care to ensure that our extensions are a strict and optional superset of the original specification.
When reviewing [the GFM spec][15], you can clearly tell which parts are GFM-specific additions because they're highlighted as such. You can also tell that no parts of the original spec have been modified and therefore should remain fully compliant with all other implementations.
#### The Implementation
To ensure that the rendered Markdown in our website is fully compliant with the CommonMark spec, the new backend implementation for GFM parsing on GitHub is based on `cmark`, the reference implementation for CommonMark developed by [John MacFarlane][16] and many other [fantastic contributors][17].
Just like the spec itself, `cmark` focuses on parsing a strict subset of Markdown, so we had to also implement support for parsing GitHubs custom extensions on top of the existing parser. You can find these changes on our [fork of `cmark`][18]; in order to track the always-improving upstream project, we continuously rebase our patches on top of the upstream master. Our hope is that once a formal specification for these extensions is settled, this patchset can be used as a base to upstream the changes in the original project.
Besides implementing the GFM-specific features in our fork of `cmark`, we've also contributed many changes of general interest to the upstream. The vast majority of these contributions are focused around performance and security. Our backend renders a massive volume of Markdown documents every day, so our main concern lies in ensuring we're doing these operations as efficiently as possible, and making sure that it's not possible to abuse malicious Markdown documents to attack our servers.
The first Markdown parsers in C had a terrible security history: it was feasible to cause stack overflows (and sometimes even arbitrary code execution) simply by nesting particular Markdown elements sufficiently deep. The `cmark` implementation, just like our earlier parser Sundown, has been designed from scratch to be resistant to these attacks. The parsing algorithms and its AST-based output are thought out to gracefully handle deep recursion and other malicious document formatting.
The performance side of `cmark` is a tad more rough: we've contributed many optimizations upstream based on performance tricks we learnt while implementing Sundown, but despite all these changes, the current version of `cmark` is still not faster than Sundown itself: our benchmarks show it to be between 20% and 30% slower on most documents.
The old optimization adage that  _"the fastest code is the code that doesn't run"_  applies here: the fact is that `cmark` just does  _more things_  than Sundown ever did. Amongst other functionality, `cmark` is UTF8-aware, has better support for references, cleaner interfaces for extension, and most importantly: it doesn't  _translate_  Markdown into HTML, like Sundown did. It actually generates an AST (Abstract Syntax Tree) out of the source Markdown, which we can transform and eventually render into HTML.
If you consider the amount of HTML parsing that we had to do with Sundown's original implementation (particularly regarding finding user mentions and issue references in the documents, inserting task lists, etc.), `cmark`'s AST-based approach saves us a tremendous amount of time  _and_  complexity in our user content stack. The Markdown AST is an incredibly powerful tool, and well worth the performance cost that `cmark` pays to generate it.
#### The Migration
Changing our user content stack to be CommonMark compliant is not as simple as switching the library we use to parse Markdown: the fundamental roadblock we encountered here is that the corner cases that CommonMark specifies (and that the original Markdown documentation left ambiguous) could cause some old Markdown content to render in unexpected ways.
Through synthetic analysis of GitHub's massive Markdown corpus, we determined that less than 1% of the existing user content would be affected by the new implementation: we gathered these stats by rendering a large set of Markdown documents with both the old (Sundown) and the new (`cmark`, CommonMark-compliant) libraries, normalizing the resulting HTML, and diffing their trees.
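That comparison pipeline can be sketched in a few lines of shell. This is purely illustrative: the two HTML strings stand in for Sundown and `cmark` output, and the normalization step is reduced to whitespace folding (the real pass compared normalized HTML trees):

```
#!/bin/sh
# Render-compare sketch: normalize two HTML renderings and compare them.
sundown_html='<p>Hello <em>world</em></p>'    # stand-in for the old renderer
cmark_html='<p>Hello  <em>world</em></p>'     # stand-in for the new renderer

normalize() {
    # Collapse all runs of whitespace to a single space.
    printf '%s' "$1" | tr -s '[:space:]' ' '
}

if [ "$(normalize "$sundown_html")" = "$(normalize "$cmark_html")" ]; then
    echo "render-identical"
else
    echo "render-differs"
fi
```

Documents that came out "render-identical" after normalization needed no attention; only the small "render-differs" remainder counted toward the 1% figure.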
1% of documents with minor rendering issues seems like a reasonable tradeoff to swap in a new implementation and reap its benefits, but at GitHub's scale, 1% is a lot of content, and a lot of affected users. We really don't want anybody to check back on an old issue and see that a table that was previously rendering as HTML now shows as ASCII; that is a bad user experience, even though obviously none of the original content was lost.
Because of this, we came up with ways to soften the transition. The first thing we did was gathering separate statistics on the two different kinds of Markdown user content we host on the website: comments by the users (such as in Gists, issues, Pull Requests, etc), and Markdown documents inside the Git repositories.
There is a fundamental difference between these two kinds of content: the user comments are stored in our databases, which means their Markdown syntax can be normalized (e.g. by adding or removing whitespace, fixing the indentation, or inserting missing Markdown specifiers until they render properly). The Markdown documents stored in Git repositories, however, cannot be touched  _at all_ , as their contents are hashed as part of Gits storage model.
Fortunately, we discovered that the vast majority of user content that used complex Markdown features was user comments (particularly issue bodies and pull request bodies), while the documents stored in Git repositories rendered properly with both the old and the new renderer in the overwhelming majority of cases.
With this in mind, we proceeded to normalize the syntax of the existing user comments, so as to make them render identically in both the old and the new implementations.
Our approach to translation was rather pragmatic: Our old Markdown parser, Sundown, has always acted as a translator more than a parser. Markdown content is fed in, and a set of semantic callbacks convert the original Markdown document into the corresponding markup for the target language (in our use case, this was always HTML5). Based on this design approach, we decided to use the semantic callbacks to make Sundown translate from Markdown to CommonMark-compliant Markdown, instead of HTML.
More than translation, this was effectively a normalization pass, which we had high confidence in because it was performed by the same parser we've been using for the past 5 years, and hence all the existing documents should be parsed cleanly while keeping their original semantic meaning.
Once we updated Sundown to normalize input documents and sufficiently tested it, we were ready to start the transition process. The first step of the process was flipping the switch on the new `cmark` implementation for all new user content, so as to ensure that we had a finite cut-off point to finish the transition at. We actually enabled CommonMark for all **new** user comments on the website several months ago, with barely anybody noticing; this is a testament to the CommonMark team's fantastic job at formally specifying the Markdown language in a way that is representative of its real-world usage.
In the background, we started a MySQL transition to update the contents of all Markdown user content in place. After running each comment through the normalization process, and before writing it back to the database, we'd render it with the new implementation and compare the tree to the previous implementation's, to ensure that the resulting HTML output was visually identical and that user data was never destroyed under any circumstances. All in all, less than 1% of the input documents were modified by the normalization process, matching our expectations and again proving that the CommonMark spec really represents the real-world usage of the language.
The whole process took several days, and the end result was that all the Markdown user content on the website was updated to conform to the new Markdown standard while ensuring that the final rendered output was visually identical to our users.
#### The Conclusion
Starting today, we've also enabled CommonMark rendering for all the Markdown content stored in Git repositories. As explained earlier, no normalization has been performed on the existing documents, as we expect the overwhelming majority of them to render just fine.
We are really excited to have all the Markdown content in GitHub conform to a live and pragmatic standard, and to be able to provide our users with a [clear and authoritative reference][19] on how GFM is parsed and rendered.
We also remain committed to following the CommonMark specification as it irons out any last bugs before a final point release. We hope GitHub.com will be fully conformant to the 1.0 spec as soon as it is released.
To wrap up, here are some useful links for those willing to learn more about CommonMark or implement it on their own applications:
* [The CommonMark website][1], with information on the project.
* [The CommonMark discussion forum][2], where questions and changes to the specification can be proposed.
* [The CommonMark specification][3]
* [The reference C Implementation][4]
* [Our fork with support for all GFM extensions][5]
* [The GFM specification][6], based on the original spec.
* [A list of CommonMark implementations in many programming languages][7]
--------------------------------------------------------------------------------
via: https://githubengineering.com/a-formal-spec-for-github-markdown/?imm_mid=0ef032&cmp=em-prog-na-na-newsltr_20170318
作者:[Yuki Izumi][a][Vicent Martí][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://github.com/kivikakk
[b]:https://github.com/vmg
[1]:http://commonmark.org/
[2]:http://talk.commonmark.org/
[3]:http://spec.commonmark.org/
[4]:https://github.com/jgm/cmark/
[5]:https://github.com/github/cmark/
[6]:https://github.github.com/gfm/
[7]:https://github.com/jgm/CommonMark/wiki/List-of-CommonMark-Implementations
[8]:https://githubengineering.com/a-formal-spec-for-github-markdown/
[9]:https://github.com/vmg
[10]:https://github.com/vmg
[11]:https://github.com/kivikakk
[12]:https://github.com/kivikakk
[13]:https://github.com/vmg/sundown
[14]:http://commonmark.org/
[15]:https://github.github.com/gfm/
[16]:https://github.com/jgm
[17]:https://github.com/jgm/cmark/#authors
[18]:https://github.com/github/cmark
[19]:https://github.github.com/gfm/


@ -0,0 +1,56 @@
Translation claimed
# This Xfce Bug Is Wrecking Users' Monitors
The Xfce desktop environment for Linux may be fast and flexible, but it's currently affected by a very serious flaw.
Users of this lightweight alternative to GNOME and KDE have reported that the choice of default wallpaper in Xfce is causing damage to laptop displays and LCD monitors.
And there's damning photographic evidence to back the claims up.
### Xfce Bug #12117
_“The default desktop startup screen causes damage to monitor!”_  screams one user in [a bug filed][1] on the Xfce bugzilla.
_“The defualt wallpaper is having my animal scritch (sic) all the plastic off my LED MONITOR! Can we choose a different wallpaper? I cannot expect the scratches and whu not? Lets end the mouse games over here.”_
[
![](http://www.omgubuntu.co.uk/wp-content/uploads/2017/03/cat-xfce-bug-2-750x801.jpg)
][6]
The flaw (or should that be claw?) is not isolated to just one user's desktop, either. Other users have been able to reproduce the issue, albeit inconsistently, as this second, separate photo from an affected Redditor proves:
![](http://www.omgubuntu.co.uk/wp-content/uploads/2017/03/cat-xfce-bug-1-750x395.jpeg)
It's not clear whether the fault lies with Xfce or with cats. If it's the latter, the hope for a fix is moot; like cheap Android phones, cats do not receive upgrades from their OEM.
Thankfully, Xubuntu users are not affected by this clawful issue. This is because the Xfce-based Ubuntu flavor ships with its own, mouse-free desktop wallpaper.
But for users of Xfce on other Linux distributions the outlook is less paw-sitive.
A patch has been proposed to fix the foul-up but has yet to be accepted upstream. If you're at all concerned by bug #12117, you can apply the patch manually on your own system by downloading the image below and setting it as your wallpaper.
[
![](http://www.omgubuntu.co.uk/wp-content/uploads/2017/03/xfce-dog-wallpaper-750x363.jpg)
][7]
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2017/03/xfce-wallpaper-cat-bug
作者:[JOEY SNEDDON ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:https://bugzilla.xfce.org/show_bug.cgi?id=12117
[2]:https://plus.google.com/117485690627814051450/?rel=author
[3]:http://www.omgubuntu.co.uk/category/random-2
[4]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/02/xubuntu.jpg
[5]:http://www.omgubuntu.co.uk/2017/03/xfce-wallpaper-cat-bug
[6]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/03/cat-xfce-bug-2.jpg
[7]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/03/xfce-dog-wallpaper.jpg


@ -0,0 +1,85 @@
Translated by Yuan0302
FTPS (FTP over SSL) vs SFTP (SSH File Transfer Protocol)
==========================================================
[
![ftps sftp](http://www.techmixer.com/pic/2015/07/ftps-sftp.png "ftps sftp")
][5]
**SSH File Transfer Protocol (SFTP)** and **FTP over SSL (FTPS)**, FTP over Secure Sockets Layer, are the most common secure FTP communication technologies used to transfer computer files from one host to another over TCP. Both SFTP and FTPS provide a high level of file transfer security, protecting any transferred data with strong encryption algorithms such as AES and Triple DES.
The most notable difference between SFTP and FTPS, however, is how connections are authenticated and managed.
FTPS is an FTP security technology based on Secure Sockets Layer (SSL) certificates. The entire secure FTP connection is authenticated with a user ID, a password, and an SSL certificate. Once an FTPS connection is established, the [FTP client software][6] checks whether the destination [FTP server's][7] certificate is trusted.
An SSL certificate is considered trusted if it was either issued by a known certificate authority (CA), or if it was self-signed by your partner and you have a copy of their public certificate in your trusted key store. All username and password information for FTPS is encrypted over the secure FTP connection.
### The following are the pros and cons of FTPS:
Pros:
* Communication can be read and understood by humans
* Provides services for server-to-server file transfer
* SSL/TLS has good authentication mechanisms (X.509 certificate features)
* FTP and SSL support is built into many internet communication frameworks
Cons:
* No uniform directory listing format
* Requires a secondary DATA channel, which makes it hard to use behind firewalls
* No standard defined for file name character sets (encodings)
* Not all FTP servers support SSL/TLS
* No standard way to get and change file or directory attributes
SFTP, or SSH File Transfer Protocol, is another secure file transfer protocol. It was designed as an SSH extension to provide file transfer capability, so it usually uses only the SSH port for both data transfer and control. When [FTP client][8] software connects to an SFTP server, it transmits a public key to the server for authentication. If the keys match, along with any user/password supplied, the authentication succeeds.
### The following are the pros and cons of SFTP:
Pros:
* Uses only one connection; no DATA connection is needed
* The FTP connection is always secured
* FTP directory listings are uniform and machine-readable
* The FTP protocol includes operations for permission and attribute manipulation, file locking, and more
Cons:
* Communication is binary and cannot be logged "as is" for human reading, and SSH keys are harder to manage and validate
* The standards define certain options as optional or recommended, which leads to compatibility problems between different software titles from different vendors
* No server-to-server copy or recursive directory removal operations
* No built-in SSH/SFTP support in the VCL and .NET frameworks
Most FTP server software supports both secure FTP technologies, along with strong authentication options.
But SFTP is the winner when it comes to firewalls, because it is firewall-friendly. SFTP needs only a single port number (22 by default) to be opened through the firewall. This port is used for all SFTP communication, including the initial authentication, any commands issued, and any data transferred.
FTPS is harder to deploy through a tightly secured firewall, because FTPS uses multiple network port numbers. Every time a file transfer request (get, put) or a directory listing request is made, another port number has to be opened. Your firewall therefore has to open a range of ports to allow FTPS connections, which can be a security risk to your network.
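To see the firewall difference in practice, you can probe whether a single port, such as SFTP's default port 22, is reachable. This is a hedged sketch using bash's built-in `/dev/tcp` device; the host and port are examples:

```
#!/bin/bash
# Check whether a TCP port accepts connections.
port_open() {
    local host=$1 port=$2
    timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null
}

if port_open 127.0.0.1 22; then
    echo "port 22 reachable: an SFTP client could connect"
else
    echo "port 22 closed or filtered"
fi
```

With SFTP, this one check covers the whole session; with FTPS, you would have to repeat it for every data port the server might hand out.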
FTP server software that supports both FTPS and SFTP:
1. [Cerberus FTP Server][2]
2. [FileZilla - the best-known free FTP and FTPS server software][3]
3. [Serv-U FTP Server][4]
--------------------------------------------------------------------------------
via: http://www.techmixer.com/ftps-sftp/
作者:[Techmixer.com][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.techmixer.com/
[1]:http://www.techmixer.com/ftps-sftp/#respond
[2]:http://www.cerberusftp.com/
[3]:http://www.techmixer.com/free-ftp-server-best-windows-ftp-server-download/
[4]:http://www.serv-u.com/
[5]:http://www.techmixer.com/pic/2015/07/ftps-sftp.png
[6]:http://www.techmixer.com/free-ftp-file-transfer-protocol-softwares/
[7]:http://www.techmixer.com/free-ftp-server-best-windows-ftp-server-download/
[8]:http://www.techmixer.com/best-free-mac-ftp-client-connect-ftp-server/


@ -2,110 +2,110 @@
============================================================
![Min Browser Muffles the Web's Noise](http://www.linuxinsider.com/ai/637666/browser-tabs.jpg)
[Min][1] 是一款具有最小设计的 web 浏览器,可以通过简单的功能提供快速操作
[Min][1] 是一款精简设计的 web 浏览器,功能简便,响应迅速
当涉及到软件设计时,“最小”并不意味着潜在的低级功能或未开发。如果你喜欢文本编辑器和笔记程序中的最小防干扰工具,那么你会在 Min 浏览器中有同样舒适的感觉。
在软件设计中,“简单”并不意味着功能低级、有待改进。你如果喜欢花哨工具比较少的文本编辑器和笔记程序,那么在 Min 浏览器中会有同样舒适的感觉。
大多在我的台式机和笔记本电脑上使用 Google Chrome、Chromium和 Firefox。我研究了很多它们的附加功能所以我可以在我的长期研究和工作中可以访问所有的专业服务。
经常在台式机和笔记本电脑上使用 Google Chrome、Chromium和 Firefox。我研究了它们的很多附加功能所以我在长期的研究和工作中可以享用它们的特色服务。
然而,我有时喜欢一个快速、整洁的替代品来上网。随着多个项目的进行,我可以很快打开一大批选项卡甚至是独立窗口的强大浏览器。
然而,有时我希望有个快速、整洁的替代品来上网。随着多个项目的进行,我需要很快打开一大批选项卡甚至是独立窗口的强大浏览器。
我试过其他浏览器选项但很少成功。替代品通常有自己的一套分散注意力的附件和功能,它们会让我开小差。
我试过其他浏览器但很少能令我满意。替代品通常有一套独特的花哨的附件和功能,它们会让我开小差。
Min 浏览器不这样。它是一个易于使用并在 GitHub 开源的 web浏览器不会使我分心。
Min 浏览器不这样。它是一个易于使用并在 GitHub 开源的 web 浏览器,不会使我分心。
![Min browser ](http://www.linuxinsider.com/article_images/2017/84212_620x514.jpg)
Min 浏览器是最小化浏览器,提供了简单的功能以及迅速的操作。只是不要指望马上上手
Min 浏览器是精简的浏览器,提供了简单的功能以及快速的响应。只是不要指望马上上手。
### 它做些什么
Min 浏览器提供了 Debian Linux 版本、Windows 和 Mac 机器的版本。它不能与主流跨平台 web 浏览器中的可用功能竞争。
Min 浏览器提供了 Debian Linux、Windows 和 Mac 机器的版本。它不能与功能众多的主流跨平台 web 浏览器竞争。
它不必竞争,但是它的声誉非常好,它可能是补充而不是取代它们
但它不必竞争,它很有名的原因应该是补充而不是取代那些主流浏览器
其中一个主要原因是其内置的广告拦截功能。开箱即用的 Min 浏览器不需要配置或寻找兼容的第三方应用程序来拦截广告。
在 Edit/Preferences 中,关于内容阻止你有三个选项可以点击/取消点击。它很容易修改屏蔽策略来适应你的喜好。阻止跟踪器和广告选项使用 EasyList 和 EasyPrivacy。 如果没有其他原因,请保持此选项选中。
在 Edit/Preferences 中,你可以通过三个选项来设置阻止的内容。它很容易修改屏蔽策略来满足你的喜好。阻止跟踪器和广告选项使用 EasyList 和 EasyPrivacy。 如果没有其他原因,请保持此选项选中。
你还可以阻止脚本和图像。这样做可以最大限度地提高网站加载速度,并真正提高你对恶意代码的防御
你还可以阻止脚本和图像。这样做可以最大限度地提高网站加载速度,并能有效防御恶意代码
### 按你的方式搜索
如果你花费大量时间在搜索上,你会喜欢 Min 处理搜索的方式。这是一个顶级的功能。
如果你在搜索上花费大量时间,你会喜欢 Min 处理搜索的方式。这是一个顶级的功能。
可以直接在浏览器的网址栏中访问搜索功能。Min 使用搜索引擎有 DuckDuckGo 和维基百科。你可以直接在 web 地址栏中输入搜索查询
可以直接在浏览器的网址栏中使用搜索功能。Min 使用搜索引擎 DuckDuckGo 和维基百科的内容进行搜索。你可以直接在 web 地址栏中输入要搜索的东西
这种方法很节省时间,因为你不必先进入搜索引擎窗口。 一个额外的好处是可以搜索你的书签。
这种方法很节省时间,因为你不必先进入搜索引擎窗口。 还有一个好处是可以搜索你的书签。
在 Edit/Preferences 菜单中,选择默认的搜索引擎。该列表包括 DuckDuckGo、Google、Bing、Yahoo、Baidu、Wikipedia 和 Yandex。
尝试将 DuckDuckGo 作为默认搜索引擎。 Min 默认使用这个选项,但它不会强加给你
尝试将 DuckDuckGo 作为默认搜索引擎。 Min 默认使用这个引擎,但你也能更换
![Min browser search function ](http://www.linuxinsider.com/article_images/2017/84212_620x466.jpg)
Min 浏览器的搜索功能是 URL 栏的一部分。Min 使用 DuckDuckGo 和维基百科作为搜索引擎。你可以直接在 web 地址栏中输入搜索查询
Min 浏览器的搜索功能是 URL 栏的一部分。Min 利用搜索引擎 DuckDuckGo 和维基百科的内容。你可以直接在 web 地址栏中输入要搜索的东西
搜索栏会非常快速地显示问题的答案。它会使用 DuckDuckGo 的信息,包括维基百科条目、计算器以及更多
搜索栏会非常快速地显示问题的答案。它会使用 DuckDuckGo 的信息,包括维基百科条目、计算器和其它的内容
它能提供快速片段、答案和网络建议。它是基于 Google 环境的一个替代。
它能快速提供片段、答案和网络建议。它有点像一个不基于 Google 环境的替代品。
### 导航辅助
Min 允许你使用模糊搜索快速跳转到任何网站。它几乎能立即向你抛出建议。
Min 允许你使用模糊搜索快速跳转到任何网站。它能立即向你提出建议。
我喜欢在当前标签旁边打开标签的方式。你不必设置此选项。它在默认情况下没有其他选择,但它是有道理的
我喜欢在当前标签旁边打开标签的方式。你不必设置此选项。它在默认情况下没有其他选择,但这也有道理
[
![Min browser Tasks](http://www.linuxinsider.com/article_images/2017/84212_620x388-small.jpg)
][2]
Min 的一个很酷的操作是将标签整理到任务中,这样你可以随时搜索。(点击图片放大)
Min 的一个很酷的功能是将标签整理成不同的任务,这样你随时都可以搜索。(点击图片放大)
用一直点击标签。这使你可以专注于当前的任务,而不会分心。
不点击标签,过一会儿它就会消失。这使你可以专注于当前的任务,而不会分心。
Min 不需要附加工具来控制多个标签。浏览器会显示标签列表,并允许你将它们分组。
### 保持专注
Min 在“视图”菜单中隐藏了一个可选的“聚焦模式”。启用后,除了你打开的选项卡外,它会隐藏所有选项卡。 你必须返回到菜单以关闭“聚焦模式”,然后才能打开新选项卡。
Min 在“视图”菜单中有一个可选的“聚焦模式”。启用后,除了你打开的选项卡外,它会隐藏其它所有选项卡。 你必须返回到菜单,关闭“聚焦模式”,才能打开新选项卡。
任务功能还可以帮助你保持专注。你可以从“文件”菜单或使用 Ctrl+Shift+N 创建任务。如果要打开新选项卡,可以在“文件”菜单中选择该选项,或使用 Control+T。
任务功能还可以帮助你保持专注。你可以在“文件File”菜单或使用 Ctrl+Shift+N 创建任务。如果要打开新选项卡,可以在“文件”菜单中选择该选项,或使用 Control+T。
调用符合你的风格的新任务。我喜欢能够组织与显示与工作项目或与我的研究的特定部分相关联的所有标签。我可以在任何时间召回整个列表,以轻松快速的方式找到我的浏览记录。
按照你的风格打开新任务。我喜欢按组来管理和显示标签,这组标签与工作项目或研究的某些部分相关。我可以在任何时间重新打开整个列表,从而以轻松快速的方式找到我的浏览记录。
另一个整洁的功能是在 tab 区域可以找到段落对齐按钮。单击它启用阅读模式。此模式会保存文章以供将来参考,并删除页面上的一切,以便你可以专注于阅读任务。
另一个好用的功能是可以在 tab 区域找到段落对齐按钮。单击它启用阅读模式。此模式会保存文章以供将来参考,并删除页面上的一切,以便你可以专注于阅读任务。
### 并不完美
Min 浏览器并不是强大的,功能丰富的完美替代品。它有一些明显的弱点,开发人员花了太长时间而不能改正。
Min 浏览器并不是强大、功能丰富的浏览器的完美替代品。它有一些明显的缺点,开发人员花了很多时间也没有修正。
例如,它缺乏一个支持论坛和详细用户指南的开发人员网站。可能部分原因是它的官网在 GitHub而不是一个独立的开发人员网站。尽管如此对新用户而言这是一个点。
例如,它缺乏一个支持论坛和详细用户指南的开发人员网站。可能部分原因是它的官网在 GitHub而不是一个独立的开发人员网站。尽管如此对新用户而言这是一个不足之处。
没有网站支持,用户被迫在 GitHub 上寻找自述文件和各种目录列表。你也可以在 Min 浏览器的帮助菜单中访问它们 - 但这没有太多帮助。
一个例子是当你启动浏览器时,屏幕会显示欢迎界面。它会显示两个按钮,一个人是 “Start Browsing”另一个是 “Take a Tour.”。但是没有一个按钮可以使用
一个例子是当你启动浏览器时,屏幕会显示欢迎界面。它会显示两个按钮,一个是 “Start Browsing”另一个是 “Take a Tour”。但是没有一个按钮可以使用。
但是,你可以通过单击 Min 窗口顶部的菜单栏开始浏览。但是,缺少导览还没有解决办法。
但是,你可以通过单击 Min 窗口顶部的菜单栏开始浏览。不过,缺少导览的问题还没有解决办法。
### 底线
Min 并不是一个有完整功能的 web 浏览器。它不是为通常在成熟的 web 浏览器中有的插件和其他许多功能而设计的。然而Min 通过提供速度和免打扰来达到它重要的目的
Min 并不是一个功能完善、丰富的 web 浏览器。你在功能完善的主流浏览器中所用的插件和其它许多功能都不是 Min 的设计目标。然而Min 在快速响应和免打扰方面很有用
我越使用 Min 浏览器,它对我来说越有效率 - 但是当你第一次使用它时要小心。
我越使用 Min 浏览器,我越觉得它高效 - 但是当你第一次使用它时要小心。
Min 并不复杂或让人困惑 - 它只是有点古怪。你必须要玩弄一下才能明白它如何使用。
Min 并不复杂,也不难操作 - 它只是有点古怪。你必须要摆弄一下,才能明白如何使用它。
### 想要提建议么?
有没有一个你想提议 Linux 程序或发行版?有没有你爱的或者想要了解的?
有没有你建议回顾的 Linux 程序或发行版?有没有你爱的或者想要了解的?
请[在电子邮件中给我发送你的想法][3],我会考虑将来在 Linux Picks and Pans 专栏上登出。
请[在电子邮件中给我发送你的想法][3],我会考虑将来在 Linux Picks and Pans 专栏上登出。
并使用下面的读者评论功能提出你的想法!
可以使用下方的读者评论功能说出你的想法!
--------------------------------------------------------------------------------
作者简介:
Jack M. Germain 从苹果 II 和 PC 的早期起就一直在写关于计算机技术。他仍然有他原来的 IBM PC-Jr 和一些其他遗留的 DOS 和 Windows 盒子。他为 Linux 桌面的开源世界留下过共享软件。他运行几个版本的 Windows 和 Linux 操作系统,还通常不能决定是否用他的平板电脑、上网本或 Android 智能手机,而不是用他的台式机或笔记本电脑。你可以在 Google+ 上与他联系。
Jack M. Germain 从苹果 II 和 PC 的早期起就一直在写关于计算机技术的文章。他仍然保留着他原来的 IBM PC-Jr 和一些其他遗留的 DOS 和 Windows 机器。他为 Linux 桌面的开源世界留下过共享软件。他运行几个版本的 Windows 和 Linux 操作系统,还经常不能决定是用他的平板电脑、上网本或 Android 智能手机,还是用他的台式机或笔记本电脑。你可以在 Google+ 上与他联系。
--------------------------------------------------------------------------------
@ -113,7 +113,7 @@ via: http://www.linuxinsider.com/story/84212.html?rss=1
作者:[Jack M. Germain][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[GitFuture](https://github.com/GitFuture)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,94 @@
NMAP 常用扫描简介 - 第一部分
========================
我们之前在‘[NMAP 的安装][1]’一文中,列出了 10 种不同的 ZeNMAP 扫描模式(这里将 Profiles 翻译成了模式,不知是否合适)。大多数的模式使用了各种参数。大多数的参数代表了执行不同的扫描模式。这篇文章将介绍其中的四种通用的扫描类型。
**四种通用扫描类型**
下面列出了最常使用的四种扫描类型:
1. PING 扫描 (-sP)
2. TCP SYN 扫描 (-sS)
3. TCP Connect() 扫描 (-sT)
4. UDP 扫描 (-sU)
当我们利用 NMAP 来执行扫描的时候,这四种扫描类型是我们需要熟练掌握的。更重要的是需要知道这些命令做了什么并且需要知道这些命令是怎么做的。本文将介绍 PING 扫描和 UDP 扫描。在之后的文中会介绍 TCP 扫描。
**PING 扫描 (-sP)**
某些扫描会造成网络拥塞,然而 Ping 扫描在网络中最多只会产生两个包。当然这两个包不包括可能需要的 DNS 搜索和 ARP 请求。每个被扫描的 IP 最少只需要一个包来完成 Ping 扫描。
通常 Ping 扫描是用来查看在指定的 IP 地址上是否有在线的主机存在。例如,当我拥有网络连接却联不上一台指定的网络服务器的时候,我就可以使用 PING 来判断这台服务器是否在线。PING 同样也可以用来验证我的当前设备与网络服务器之间的路由是否正常。
**注意:** 当我们讨论 TCP/IP 的时候,相关信息在使用 TCP/IP 协议的互联网与局域网LAN中都是相当有用的这些过程的工作方式是一样的。同样在广域网WAN中也能工作得相当好。
当参数给出的是一个域名的时候,我们就需要域名解析服务来找到相对应的 IP 地址,这个时候将会生成一些额外的包。例如,当我们执行 ping linuxforum.com 的时候需要首先请求域名linuxforum.com的 IP 地址98.124.199.63)。当我们执行 ping 98.124.199.63 的时候 DNS 查询就不需要了。当 MAC 地址未知的时候,就需要发送 ARP 请求来获取指定 IP 地址的 MAC 地址了(这里的指定 IP 地址,未必是目的 IP
Ping 命令会向指定的 IP 地址发送一个互联网控制消息协议ICMP包。这个包是一个需要响应的 ICMP Echo 请求。当服务器系统在线的状态下我们会得到一个响应包。当两个系统之间存在防火墙的时候PING 请求包可能会被防火墙丢弃。一些服务器也会被配置成不响应 PING 请求,来避免可能发生的“死亡之 PING”。不过现代的操作系统基本不会再受其影响了。
**注意:** “死亡之 PING” 是一种恶意构造的 PING 包,当它被发送到系统的时候,会造成被打开的连接等待一个 RST 包。一旦有一堆这样的恶意请求被系统响应,由于所有的可用连接都已经被打开,所以系统将会拒绝所有其它的连接。技术上来说,这种状态下的系统就是不可达的。
当系统收到 ICMP Echo 请求后它将会返回一个 ICMP Echo 响应。当源系统收到 ICMP Echo 响应后我们就能知道目的系统是在线可达的。
使用 NMAP 的时候,你可以指定单个 IP 地址,也可以指定某个 IP 地址段。当被指定为 PING 扫描(-sP的时候PING 命令将会对每一个 IP 地址执行。
在图 1 中你可以看到我执行 `nmap -sP 10.0.0.1-10` 命令后的结果。命令会为指定的每个 IP 地址发出 ARP 请求本例中对十个 IP 地址共发出了三十个请求。原文先说每个 IP 地址三个请求,随后又说每个 IP 地址两个请求,前后表述不一致;从图 2 来看,前者指的可能是两个 ARP 包和一个 ICMP 包。但译者自己在 CentOS 6 上实验时,它只发了两个 ARP 请求,没获取到 MAC 地址也就结束了,这里不清楚究竟该怎么理解。)
![Figure 01.jpg](https://www.linuxforum.com/attachments/figure-01-jpg.105/)
**图 1**
图 2 中展示了利用 Wireshark 抓取的从网络上另一台计算机发出的请求-的确是在 Windows 系统下完成这次抓取的。第一行展示了发出的第一条请求,广播请求的是 10.0.0.2 IP 地址对应 MAC 地址。由于 NMAP 是在 10.0.0.1 这台机器上执行的,因此 10.0.0.1 被略过了。由于本机 IP 地址被略过,我们现在可以说总共只发出了 27 个 ARP 请求。第二行展示了 10.0.0.2 这台机器的 ARP 响应。第三行到第十行是其它八个 IP 地址的 ARP 请求。第十一行是由于没有收到请求系统10.0.0.1)的反馈所以发送的另一个 ARP 响应。(自己试的话它发送一个请求收到一个响应就结束了,也没有搜到相关的重发响应是否存在的具体说明,不是十分清楚)第十二行是源系统向 10.0.0.2 响应的 SYN 和 Sequence 0。这行感觉更像是三次握手里的首包第十三行和第十四行的两次 RestartRST和 SynchronizeSYN响应是用来关闭第二行和第十一行所打开的连接的。这个描述似乎有问题 ARP 请求怎么会需要 TCP 来关闭连接呢,感觉像是第十二行的响应)注意 Sequence ID 是 1 - 是源 Sequence ID + 1。(这个不理解,不是应该 ACK = seq + 1 的么)第十五行开始就是类似相同的内容。
![Figure 02.jpg](https://www.linuxforum.com/attachments/figure-02-jpg.106/)
**图 2**
回到图 1 中我们可以看到有两台主机在线。其中一台是本机10.0.0.1另一台是 10.0.0.2。整个扫描花费了 14.40 秒。
PING 扫描是一种用来发现在线主机的快速扫描方式。扫描结果中没有关于网络、系统的其它信息。这是一种较好的初步发现网络上在线主机的方式,接着你就可以针对在线系统执行更加复杂的扫描了。你可能还会发现一些不应该出现在网络上的系统。出现在网络上的未授权设备是很危险的,它们可以很轻易地收集内网信息和相关的系统信息。
一旦你获得了在线系统的列表,你就可以使用 UDP 扫描来查看哪些端口是可能开启了的。
**UDP 扫描 (-sU)**
现在你已经知道了哪些系统是在线的,你的扫描就可以聚焦在这些 IP 地址之上。在整个网络上执行大量没有针对性的扫描活动可不是一个好主意。系统管理员可以使用程序来监控网络流量,当有大量可疑活动发生的时候就会触发警报。
用户数据报协议UDP在发现在线系统的开放端口方面十分有用。由于 UDP 不是一个面向连接的协议,因此不一定会有响应。这种扫描方式会向指定的端口发送一个 UDP 包。如果目标系统没有回应,那么这个端口可能是关闭的,也可能是被过滤了的。在大多数的情况下,对于关闭的端口,目标系统会返回一个 ICMP 端口不可达信息ICMP 信息让 NMAP 知道端口是被关闭了。如果端口是开启的状态,目标系统可能会以一个 UDP 包进行响应,这让 NMAP 知道该端口是开放的。
**注意:** 只有最前面的 1024 个常用端口会被扫描。(这里将 1000 改成了 1024因为手册中写的是默认扫描 1 到 1024 端口)在后面的文章中我们会介绍如何进行深度扫描。
由于我知道 10.0.0.2 这个主机是在线的,因此我只会针对这个 IP 地址来执行扫描。扫描过程中总共收发了 3278 个包。`sudo nmap -sU 10.0.0.2` 这个命令的输出结果在图 3 中展现。
![Figure 03.jpg](https://www.linuxforum.com/attachments/figure-03-jpg.107/)
**图 3**
在这副图中你可以看见端口 137netbios-ns被发现是开放的。在图 4 中展示了 Wireshark 抓包的结果。不能看到所有抓取的包,但是可以看到一长串的 UDP 包。
![Figure 4.jpg](https://www.linuxforum.com/attachments/figure-4-jpg.108/)
**图 4**
如果我把目标系统上的防火墙关闭之后会发生什么呢我的结果有那么一点的不同。NMAP 命令的执行结果在图 5 中展示。
![Figure 05.png](https://www.linuxforum.com/attachments/figure-05-png.109/)
**图 5**
**注意:** 当你执行 UDP 扫描的时候是需要 root 权限的。
会产生大量的包是由于我们使用了 UDP。当 NMAP 发送 UDP 请求时它是不保证数据包会被收到的。因为数据包可能会在中途丢失因此它会多次发送请求。
--------------------------------------------------------------------------------
via: https://www.linuxforum.com/threads/nmap-common-scans-part-one.3637/
作者:[Jarret][a]
译者:[wcnnbdk1](https://github.com/wcnnbdk1)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linuxforum.com/members/jarret.268/
[1]:https://www.linuxforum.com/threads/nmap-installation.3431/

View File

@ -0,0 +1,75 @@
## 如何阻止黑客入侵你的 Linux 机器(二):另外三个简单的安全建议
![security tips](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/security-tips.jpg?itok=JMp34oc3 "security tips")
在这个系列中,我们会讨论一些重要信息来阻止黑客入侵你的系统。观看这个免费的网络研讨会(可点播)获取更多的信息。[Creative Commons Zero][1]Pixabay
在这个系列的[第一部分][3]中,我分享过其他两种简单的方法来阻止黑客黑掉你的Linux主机。这里是另外三条来自于我最近的Linux基础网络研讨会的建议在这次研讨会中我分享了更多的黑客用来入侵你的主机的策略、工具和方法。完整的[研讨会on-demand][4]视频可以在网上免费观看。
### 简单的Linux安全建议 #3
**Sudo**
Sudo 是非常、非常重要的。我认为这只是很基本的东西,但就是这些基本的东西让我(黑客)的生活变得更困难。如果你没有配置 sudo还请配置好它。
还有你主机上所有的用户必须使用他们自己的密码。不要都免密码使用sudo执行所有命令那样做毫无意义除了让我黑客的生活变得更简单这时我能获取一个不需要密码就能以sudo方式运行所有命令的帐号。如果我可以使用sudo命令而且不需要再次确认那么我就能入侵你同时当我获得你的没有使用密码的SSH密钥后我就能十分容易的开始任何黑客活动。现在我已经拥有了你机器的root权限。
保持较低的超时时间。我们喜欢劫持用户的会话如果你的某个用户能够使用sudo,并且设置的超时时间是3小时当我劫持了你的会话那么你就再次给了我一个自由的通道哪怕你需要一个密码。
我推荐超时时间大约为10分钟甚至是5分钟。用户们将需要反复地输入他们的密码但是如果你设置了较低的超时时间你将减少你的受攻击面。
还要限制可以访问的命令,以及禁止通过 sudo 来访问 shell。大多数 Linux 发行版目前默认允许你使用 `sudo bash` 来获取一个 root 身份的 shell当你需要做大量的系统管理任务时这种机制是非常好的。然而应该对大多数用户实际需要运行的命令加以限制。你对他们限制越多你主机的受攻击面就越小。如果你允许我访问 shell我将能够做任何类型的事情。
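上面几条建议(设置密码、降低超时时间、限制可用命令)都可以在 sudoers 里落实。下面是一个示意性的配置片段,其中的用户名 `alice` 和允许的命令列表都是假设的例子,实际修改应通过 `visudo` 进行:

```
# 把 sudo 的密码缓存超时时间降低到 5 分钟
Defaults timestamp_timeout=5

# 假设的用户 alice只允许通过 sudo 运行少数几个命令,不给 shell
alice ALL=(root) /usr/bin/systemctl restart nginx, /usr/bin/journalctl
```

这样即便会话被劫持,攻击者能做的事情也被限制在这几条命令之内。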
### 简单的Linux安全建议 #4
**限制正在运行的服务**
防火墙是很棒的,你的边界防火墙非常强大。有几家厂商的产品在处理流经网络边界的流量方面表现非常出色。但是防火墙内部的流量呢?
你正在使用基于主机的防火墙或者基于主机的入侵检测系统吗?如果是,请正确配置好它。怎样可以知道你的正在受到保护的东西是否出了问题呢?
答案是限制当前正在运行的服务。不要在不需要提供 MySQL 服务的机器上运行它。如果你有一个默认会安装完整 LAMP 套件的 Linux 发行版,而你不会在它上面运行任何东西,那么卸载它。禁止那些服务,不要开启它们。
同时,确保用户没有使用默认的凭证,确保那些内容已被安全地配置。如果你正在运行 Tomcat不要允许上传自定义的小程序applet。确保它们不会以 root 的身份运行。如果我能够运行一个小程序,我也不想以管理员的身份来运行它,而是希望访问权限越小越好。你对人们能够做的事情限制越多,你的机器就将越安全。
### 简单的Linux安全建议 #5
**小心你的日志记录**
看看它们,认真地对待你的日志记录。六个月前我们遇到一个问题:我们的一个顾客从来不看日志记录,尽管他们已经记录了很久、很久的日志。假如他们曾经看过日志,他们就会发现自己的机器早就已经被入侵了,并且他们的整个网络都是对外开放的。我在家里也是这么处理的:每天早上起来,我都会检查我的 email浏览我的日志记录。这仅花费我 15 分钟,但是它却能告诉我很多正在发生的事情。
就在这个早上,机房里的三台电脑死机了,我不得不去重启它们。我不知道为什么会出现这样的情况,但是我可以从日志记录里面查出什么出了问题。它们是实验室的机器,我并不关心它们,但是有人需要关心。
通过 Syslog、Splunk 或者任何其他日志整合工具将你的日志进行集中是极佳的选择,这比将日志保存在本地要好。(作为黑客)我最喜欢做的事情就是修改你的日志记录,让你不知道我曾经入侵过你的电脑。如果我这样做了,那你将不会有任何线索。对我来说,修改集中的日志记录比修改本地的日志更难。
像对待你的爱人一样对待你的日志:送上鲜花,还有磁盘空间。确保你有足够的磁盘空间用来记录日志。磁盘被写满、文件系统变成只读可不是一件好事。
还需要知道什么是不正常的。这是一件非常困难的事情,但是从长远来看,这将使你受益匪浅。你应该知道系统里正在发生什么、什么时候出现了异常。确保你清楚这一点。
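作者说的“每天花几分钟浏览日志”,可以从类似下面的小脚本入手。这只是一个示意:这里用几行假设的日志内容代替真实的 `/var/log/auth.log`(路径和日志内容均为举例),统计 SSH 登录失败次数最多的来源 IP

```shell
# 用几行假设的日志内容代替真实的 /var/log/auth.log
cat > /tmp/auth_sample.log <<'EOF'
Mar 24 10:00:01 host sshd[101]: Failed password for root from 203.0.113.5 port 22 ssh2
Mar 24 10:00:03 host sshd[102]: Failed password for admin from 203.0.113.5 port 22 ssh2
Mar 24 10:00:07 host sshd[103]: Failed password for root from 198.51.100.7 port 22 ssh2
EOF

# 统计登录失败次数最多的来源 IP倒数第 4 个字段是 IP 地址)
grep "Failed password" /tmp/auth_sample.log \
    | awk '{print $(NF-3)}' | sort | uniq -c | sort -rn
# 输出2 203.0.113.5 在前1 198.51.100.7 在后
```

真实环境中把文件名换成你发行版的认证日志路径即可;关键是把这类检查变成每天的固定习惯。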
在[第三篇也是最后一篇博客][5]里,我将就这次研讨会中问到的一些比较好的安全问题进行回答。[现在开始观看这个完整的免费网络研讨会(可点播)][6]吧。
*Mike Guthrie 就职于能源部,主要做红队交战和渗透测试。*
--------------------------------------------------------------------------------
via: https://www.linux.com/news/webinar/2017/how-keep-hackers-out-your-linux-machine-part-2-three-more-easy-security-tips
作者:[MIKE GUTHRIE][a]
译者:[zhousiyu325](https://github.com/zhousiyu325)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/anch
[1]:https://www.linux.com/licenses/category/creative-commons-zero
[2]:https://www.linux.com/files/images/security-tipsjpg
[3]:https://www.linux.com/news/webinar/2017/how-keep-hackers-out-your-linux-machine-part-1-top-two-security-tips
[4]:http://portal.on24.com/view/channel/index.html?showId=1101876&showCode=linux&partnerref=linco
[5]:https://www.linux.com/news/webinar/2017/how-keep-hackers-out-your-linux-machine-part-3-your-questions-answered
[6]:http://bit.ly/2j89ISJ

View File

@ -1,23 +1,17 @@
Linux 命令行导航贴士pushd 和 popd 命令基础
Linux 命令行工具使用小贴士及技巧(二)pushd 和 popd 命令基础
============================================================
### 在本篇中
1. [pushd 和 popd 命令基础][1]
2. [一些高级用法][2]
3. [总结][3]
在本系列的[第一部分][4]中,我们通过讨论 **cd -** 命令的用法,重点介绍了 Linux 中的命令行导航。还讨论了一些其他相关要点/概念。现在进一步讨论,在本文中,我们将讨论如何使用 pushd 和 popd 命令在 Linux 命令行上获得更快的导航体验。
在我们开始之前,值得分享的一点是,此后提到的所有指导和命令已经在 Ubuntu 14.04 和 Bash shell4.3.11)上测试过。
在我们开始之前,值得说明的一点是,此后提到的所有指导和命令已经在 Ubuntu 14.04 和 Bash shell4.3.11)上测试过。
### pushd 和 popd 命令基础
为了更好地理解 pushd 和 popd 命令的作用,让我们先讨论堆栈的概念。想象一下你厨房案板上的一个空白区域,现在想象一下你想在上面放一套盘子。你会怎么做?很简单,一个接一个地放在上面。
为了更好地理解 pushd 和 popd 命令的作用,让我们先讨论堆栈的概念。想象你厨房案板上有一个空白区域,你想在上面放一套盘子。你会怎么做?很简单,一个接一个地放在上面。
所以在整个过程的最后,案板上的第一个盘子是盘子中的最后一个,你手中最后一个盘子是盘子堆中的第一个。现在当你需要一个盘子时,你选择在堆的顶部使用它,然后在下次需要时选择下一个。
所以在整个过程的最后,案板上的第一个盘子是盘子中的最后一个,你手中最后一个盘子是盘子堆中的第一个。现在当你需要一个盘子时,你选择在堆的顶部的那个盘子并使用它,然后需要时选择下一个。
pushd 和 popd 命令是类似的概念。在Linux系统上有一个目录堆栈你可以堆叠目录路径以供将来使用。你可以使用 **dirs** 命令来在任何时间点快速查看堆栈的内容。
pushd 和 popd 命令是类似的概念。在 Linux 系统上有一个目录堆栈,你可以堆叠目录路径以供将来使用。你可以使用 **dirs** 命令来在任何时间点快速查看堆栈的内容。
下面的例子显示了在命令行终端启动后立即在我的系统上使用 dirs 命令的输出:
@ -38,7 +32,7 @@ pushd /home/himanshu/Downloads/
输出显示现在堆栈中有两个目录路径:一个是用户的主目录,还有用户的下载目录。它们的保存顺序是:主目录位于底部,新添加的 Downloads 目录位于其上。
要验证 pushd 的输出是正确的你还可以使用dirs命令
要验证 pushd 的输出是正确的,你还可以使用 dirs 命令:
$ dirs
~/Downloads ~
@ -73,7 +67,7 @@ $ dirs
pushd +1
这里是上面的命令对目录堆栈做的:
上面的命令对目录堆栈做的结果
$ dirs
/usr/lib ~ ~/Downloads ~/Desktop
@ -120,11 +114,11 @@ $ dirs -v
上述命令确保 popd 保持静默(不产生任何输出)。同样,你也可以静默 pushd。
pushd 和 popd 命令也被 Linux 服务器管理员使用,他们通常在几个相同的目录之间移动。 [这里][5]一些其他真正有信息量的案例解释
pushd 和 popd 命令也被 Linux 服务器管理员使用,他们通常在几个相同的目录之间移动。 在[这里][5]介绍了一些其他有用的使用场景
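下面是一个小例子(目录仅作示意),把上文介绍的入栈、查看、旋转和出栈串起来:

```shell
cd /tmp
pushd /usr > /dev/null     # 把 /usr 压入栈并切换过去
pushd /var > /dev/null     # 再压入 /var
dirs                       # 输出:/var /usr /tmp栈顶在最左边
pushd +1 > /dev/null       # 旋转堆栈,让第 1 项(/usr成为栈顶
popd > /dev/null           # 弹出栈顶,切换到新的栈顶目录
pwd                        # 输出:/tmp
```

注意 `pushd``popd` 是 bash 的内建命令,目录堆栈只在当前 shell 会话中有效。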
### 总结
我同意 pushd 和 popd 的概念不是很直接。但是,它需要的只是一点练习 - 是的,你需要让你实践。花一些时间在这些命令上,你就会开始喜欢他们,特别是如果有一些能方便你生活的用例存在时。
我同意 pushd 和 popd 的概念不是很直接。但是,它需要的只是一点练习 - 是的,你需要多实践。花一些时间在这些命令上,你就会开始喜欢它们,特别是当它们提供了方便时。
--------------------------------------------------------------------------------
@ -132,7 +126,7 @@ via: https://www.howtoforge.com/tutorial/linux-command-line-tips-tricks-part-2/
作者:[Ansh ][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,149 @@
Linux 命令行工具使用小贴士及技巧(三) - 环境变量 CDPATH
============================================================
在这个系列的第一部分,我们详细地讨论了 `cd -` 命令,在第二部分,我们深入探究了 `pushd``popd` 两个命令,以及它们使用的场景。
继续对命令行的讨论,在这篇教程中,我们将会通过简单易懂的实例来讨论 `CDPATH` 这个环境变量。我们也会讨论关于此变量的一些进阶细节。
_在这之前先声明一下此教程中的所有实例都已经在 Ubuntu 14.04 和 4.3.11(1) 版本的 Bash 下测试过。_
### 环境变量 CDPATH
即使你的命令行所有操作都在特定的目录下 - 例如你的主目录 - 在切换目录时你也不得不提供绝对路径。比如,考虑我现在的情况,就是在 _/home/himanshu/Downloads_ 目录下:
```
$ pwd
/home/himanshu/Downloads
```
现在要求切换至 _/home/himanshu/Desktop_ 目录,我一般会这样做:
```sh
cd /home/himanshu/Desktop/
```
或者
```sh
cd ~/Desktop/
```
或者
```sh
cd ../Desktop/
```
能不能只是运行以下命令就能简单地实现呢:
```sh
cd Desktop
```
是的,这完全有可能。这就是环境变量 `CDPATH` 出现的时候了。你可使用这个变量来为 `cd` 命令定义基础目录。
如果你尝试打印它的值,你会看见这个环境变量默认是空值的:
```sh
$ echo $CDPATH
$
```
现在 ,考虑到上面提到的场景,我们使用这个环境变量,将 _/home/himanshu_ 作为 `cd` 命令的基础目录来使用。
最简单的做法这样:
```sh
export CDPATH=/home/himanshu
```
现在,我能做到之前所不能做到的事了 - 当前工作目录在 _/home/himanshu/Downloads_ 目录里时,成功地运行了 `cd Desktop` 命令。
```sh
$ pwd
/home/himanshu/Downloads
$ cd Desktop/
/home/himanshu/Desktop
$
```
这表明了我可以使用 `cd` 命令来到达 _`/home/himanshu`_ 下的任意一个目录,而不需要在 `cd` 命令中显式地指定 _`/home/himanshu`_ 或者 _`~`_,又或者是 _`../`_(或者多个 _`../`_)。
### 要点
现在你应该知道了怎样利用环境变量 CDPATH 在 _/home/himanshu/Downloads__/home/himanshu/Desktop_ 之间轻松切换。现在,考虑以下这种情况, 在 _/home/himanshu/Desktop_ 目录里包含一个名字叫做 _Downloads_ 的子目录,这是将要切换到的目录。
但突然你会意识到 _cd Desktop_ 会切换到 _/home/himanshu/Desktop_。所以,为了确保这不会发生,你可以这样做:
```sh
cd ./Downloads
```
虽然上述命令本身没有问题,但你还是需要耗费点额外的精力( 虽然很小 ),尤其是每次这种情况发生时你都不得不这样做。所以,有一个更加优雅的解决方案来处理,就是以如下方式来设定 `CDPATH` 环境变量。
```sh
export CDPATH=".:/home/himanshu"
```
它的意思是告诉 `cd` 命令先在当前的工作目录查找该目录,然后再尝试搜寻 _/home/himanshu_ 目录。当然, `cd` 命令是否以这样的方式运行,完全取决于你的偏好和要求 - 讨论这一点的目的是为了让你知道这种情况可能会发生。
就如你现在所知道的,一旦环境变量 `CDPATH` 被设置,它的值 - 或者它所包含的路径集合 - 就是系统中 `cd` 命令搜索目录的地方 ( 当然除了使用绝对路径的场景 )。所以,完全取决于你来确保该命令行为的一致性。
继续说,如果一个 bash 脚本以相对路径使用 `cd` 命令,最好还是先清除或者重置环境变量 `CDPATH`,除非你觉得遇上不可预测的麻烦也无所谓。还有一个可选的方法,比起在终端使用 `export` 命令来设置 `CDPATH`,你可以在测试完交互式/非交互式 shell 之后,在你的 `.bashrc` 文件里设置环境变量,这样可以确保你对环境变量的改动只对交互式 shell 生效。
环境变量中,路径出现的顺序同样也是很重要。举个例子,如果当前目录是在 _/home/himanshu_ 目录之前列出来,`cd` 命令就会先搜索当前的工作目录然后才会移动到 _/home/himanshu_ 目录。然而,如果该值为 _"/home/himanshu:."_,搜索就首先从 _/home/himanshu_ 开始,然后到当前目录。不用说,这会影响 `cd` 命令的行为,并且不注意路径的顺序可能会导致一些麻烦。
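路径顺序的影响可以用下面这个小实验来验证(目录名均为假设):

```shell
mkdir -p /tmp/cdpath_demo/base/docs /tmp/cdpath_demo/work/docs
cd /tmp/cdpath_demo/work

# "." 在前:优先匹配当前工作目录下的 docs
CDPATH=".:/tmp/cdpath_demo/base"
cd docs >/dev/null; pwd    # 输出:/tmp/cdpath_demo/work/docs
cd /tmp/cdpath_demo/work

# base 在前:优先匹配 /tmp/cdpath_demo/base 下的 docs
CDPATH="/tmp/cdpath_demo/base:."
cd docs >/dev/null; pwd    # 输出:/tmp/cdpath_demo/base/docs
```

两个同名的 `docs` 目录,仅仅因为 `CDPATH` 中路径的先后顺序不同,`cd` 就会进入不同的地方。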
要牢记在心的是,环境变量 `CDPATH`,就像其名字表达的,只对 `cd` 命令有作用。这意味着在 _/home/himanshu/Downloads_ 目录里面时,你能运行 `cd Desktop` 命令来切换到 _/home/himanshu/Desktop_ 目录,但你不能对其使用 `ls`。以下是一个例子:
```sh
$ pwd
/home/himanshu/Downloads
$ ls Desktop
ls: cannot access Desktop: No such file or directory
$
```
然而,这还是有简单的变通处理的。例如,我们可以用以下不怎么费力的方式来达到目的:
```sh
$ cd Desktop/;ls
/home/himanshu/Desktop
backup backup~ Downloads gdb.html outline~ outline.txt outline.txt~
```
不过,不是每种情况就能变通处理的。
另一个重点是: 就像你可能已经观察到的,每次你使用 `CDPATH` 环境变量集来运行 `cd` 命令时,该命令都会在输出里显示你切换到的目录的完整路径。不用说,不是所有人都想在每次运行 `cd` 命令时看到这些信息。
为了确保该输出被制止,你可以使用以下命令:
```sh
alias cd='>/dev/null cd'
```
如果 `cd` 命令运行成功,上述命令不会输出任何东西,如果失败,则允许产生错误信息。
最后,假如你遇到设置 CDPATH 环境变量后,不能使用 shell 的 tab 自动补全功能的问题,可以尝试安装并启动 bash 自动补全( bash-completion )。更多请参考 [这里][4]。
### 总结
`CDPATH` 环境变量是一把双刃剑,如果没有完全掌握相关知识就随意使用,可能会令你陷入困境,并花费你大量宝贵时间去解决问题。当然,这不代表你不应该试一下;只需要了解一下所有的可用选项,如果你得出结论,使用 CDPATH 会带来很大的帮助,就继续使用它吧。
你已经能够熟练地使用 `CDPATH` 了吗?你有更多的贴士要分享?请在评论区里发表一下你的想法吧。
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/linux-command-line-tips-tricks-part-3-cdpath/
作者:[Ansh][a]
译者:[HaitaoBio](https://github.com/HaitaoBio)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.howtoforge.com/tutorial/linux-command-line-tips-tricks-part-3-cdpath/
[1]:https://www.howtoforge.com/tutorial/linux-command-line-tips-tricks-part-3-cdpath/#the-cdpath-environment-variable
[2]:https://www.howtoforge.com/tutorial/linux-command-line-tips-tricks-part-3-cdpath/#points-to-keep-in-mind
[3]:https://www.howtoforge.com/tutorial/linux-command-line-tips-tricks-part-3-cdpath/#conclusion
[4]:http://bash-completion.alioth.debian.org/

View File

@ -1,155 +1,119 @@
### 在 Linux 上用火狐保护你的隐私
在 Linux 上用火狐保护你的隐私
=============================
内容
[1. 介绍][12]
[2. 火狐设置][13]
[2.1. 健康报告][1]
[2.2. 搜索][2]
[2.3. 请勿跟踪][3]
[2.4. 禁用 Pocket][4]
[3. 附加组件][14]
[3.1. HTTPS Everywhere][5]
[3.2. Privacy Badger][6]
[3.3. Ublock Origin][7]
[3.4. NoScript][8]
[3.5. Disconnect][9]
[3.6. Random Agent Spoofer][10]
[4. 系统设置][15]
[4.1. 私人 DNS][11]
[5. 关闭联想][16]
## 介绍
### 介绍
隐私和安全正在逐渐成为一个重要的话题。虽然不可能做到 100% 安全,但是,特别是在 Linux 上,还是有几个你能做措施,在你浏览网页的时候保卫你的在线隐私安全。
隐私和安全正在逐渐成为一个重要的话题。虽然不可能做到 100% 安全,但是,还是能采取一些措施,特别是在 Linux 上,在你浏览网页的时候保护你的在线隐私安全。
基于这些目的选择浏览器的时候,火狐或许是你的最佳选择。谷歌 Chrome 不能信任。它是属于谷歌的,一个众所周知的数据收集公司,而且它是闭源的。 Chromium 或许还可以,但并不能保证。只有火狐保持了一定程度的用户权利承诺。
### 火狐设置
![火狐隐私浏览](https://linuxconfig.org/images/private-browsing-firefox-linux.jpg)
## 火狐设置
火狐里有几个你能设定的设置,能更好地保护你的隐私。这些设置唾手可得,能帮你控制那些在你浏览的时候分享的数据。
### 健康报告
能设置以限制数据发送总量的第一件事就是火狐的健康报告。当然,这些数据只是被发送到 Mozilla ,但是它仍然在传输数据。
你首先可以设置的是对火狐健康报告发送的限制,以限制数据发送总量。当然,这些数据只是被发送到 Mozilla但这也是传输数据。
打开火狐的菜单,点击选项。来到侧边栏里的高级选项卡,点击数据反馈。这里你能禁用任意数据的报告。
打开火狐的菜单,点击<ruby>“选项”<rt>Preferences</rt></ruby>。来到侧边栏里的<ruby>“高级”<rt>Advanced</rt></ruby>选项卡,点击<ruby>“数据选项”<rt>Data Choices</rt></ruby>。这里你能禁用任意数据的报告。
### 搜索
新版的火狐浏览器默认使用雅虎搜索引擎。一些发行版更改设置,替代使用的是谷歌。两个方法都不理想。火狐有默认使用 DuckDuckGo 的选项。
![在火狐中使用 DuckDuckGo ](https://linuxconfig.org/images/ff-ddg.jpg?58cf18fd)
![在火狐中使用 DuckDuckGo ](https://linuxconfig.org/images/ff-ddg.jpg)
为了启用 DuckDuckGo你得打开火狐菜单点击<ruby>“选项”<rt>Preferences</rt></ruby>。直接来到侧边栏的<ruby>“搜索”<rt>Search</rt></ruby>选项卡。然后,用<ruby>“默认搜索引擎”<rt>Default Search Engine</rt></ruby>的下拉菜单来选择 DuckDuckGo 。
为了启用 DuckDuckGo你得打开火狐菜单点击选项。直接来到侧边栏的搜索选项卡。然后用默认搜索引擎的下拉菜单来选择 DuckDuckGo 。
### <ruby>请勿跟踪<rt>Do Not Track</rt></ruby>
### 请勿跟踪
这个功能并不完美,但它确实向站点发送了一个信号,告诉它们不要使用分析工具来记录你的活动。这些网页或许会遵从,会许不会。但是,最好启用请勿跟踪,也许它们会遵从呢。
请勿跟踪并不完美,但它确实向网页发送了一个信号,告诉他们不要使用分析工具来记录你的活动。这些网页或许会遵从,会许不会。但是,万一他们会遵从,最好启用请勿跟踪。
![启用火狐中的请勿跟踪](https://linuxconfig.org/images/ff-tracking.jpg)
再次打开火狐的菜单,点击选项,然后是隐私。页面的最上面有一个跟踪部分。点击那一行写着 “ 您还可以管理您的 ‘请勿跟踪’ 设置 ” 的链接。会出现一个有单选框的弹出窗口,那里允许你启用请勿跟踪设置。
![启用火狐中的请勿跟踪](https://linuxconfig.org/images/ff-tracking.jpg?58cf18fc)
再次打开火狐的菜单,点击<ruby>“选项”<rt>Preferences</rt></ruby>,然后是<ruby>“隐私”<rt>Privacy</rt></ruby>。页面的最上面有一个<ruby>“跟踪”<rt>Tracking</rt></ruby>部分。点击那一行写着<ruby>“您还可以管理您的‘请勿跟踪’设置”<rt>You can also manage your Do Not Track settings</rt></ruby>的链接。会出现一个有复选框的弹出窗口,那里允许你启用“请勿跟踪”设置。
### 禁用 Pocket
没有任何证据显示 Pocket 正在做一些不好的事情,但是禁用它或许更好,因为它确实连接了一个专有的应用。
禁用 Pocket 不是太难,但是你得注意只改变 Pocket 相关设置。为了来到你所需的配置页面,在火狐的地址栏里输入`about:config`。
禁用 Pocket 不是太难,但是你得注意 Pocket 是唯一扰乱你的东西。为了来到你所需的配置页面,在火狐的地址栏里输入`about:config`。
页面会加载一个设置表格,在表格的最上面是搜索栏,在那儿搜索 Pocket 。
你将会看到一个包含结果的新表格。找一下名为 extensions.pocket.enabled 的设置。当你找到它的时候,双击使其转变为“否”。你也能在这儿编辑 Pocket 的其他相关设置。不过没什么必要。注意不要编辑那些跟 Pocket 扩展不直接相关的任何东西。
![禁用火狐的 Pocket](https://linuxconfig.org/images/ff-pocket.jpg?58cf18fd)
这些页面会加载一个设置表格,在表格的最上面是搜索栏,在那儿搜索 Pocket 。
## <ruby>附加组件<rt>Add-ons</rt></ruby>
你将会看到一个包含结果的新表格。找一下名为 extensions.pocket.enabled 的设置。当你找到它的时候,双击使其转变为否。你也能在这儿编辑 Pocket 的其他相关设置。虽说这不是必要的。只是得保证不要编辑那些不是直接跟 Pocket 应用相关设置的任何东西。
![禁用火狐的 Pocket](https://linuxconfig.org/images/ff-pocket.jpg)
### 附加组件
![安全化火狐的附加组件](https://linuxconfig.org/images/ff-addons.jpg)
火狐最有效地保护你隐私和安全的方式来自附加组件。火狐有大量的附加组件库,有许多附加组件是免费、开源的。在这篇指导中着重提到的附加组件,对于安全化你的浏览器方面是名列前茅的。
![安全化火狐的附加组件](https://linuxconfig.org/images/ff-addons.jpg?58cf18fd)
火狐最有效地保护你隐私和安全的方式来自附加组件。火狐有大量的附加组件库,其中很多是免费、开源的。在这篇指导中着重提到的附加组件,在使浏览器更安全方面是名列前茅的。
### HTTPS Everywhere
电子前线基金会开发了HTTPS Everywhere这是对大量没有使用 SSL 证书的网页、许多不使用`https`前缀的链接、指引用户前往不安全版本的网页等做出的反应。HTTPS Everywhere 确保了如果存在有一个加密版本的网页,用户将会使用它。
针对大量没有使用 SSL 证书的网页、许多不使用 `https` 前缀的链接、指引用户前往不安全版本的网页等现状,<ruby>电子前线基金会<rt>Electronic Frontier Foundation</rt></ruby>开发了 HTTPS Everywhere。HTTPS Everywhere 确保了如果存在有一个加密版本的网页,用户将会使用它。
给火狐设计的 HTTPS Everywhere 已经可以使用,在火狐的附加组件搜索网页上。`https://addons.mozilla.org/en-us/firefox/addon/https-everywhere/`(译注:对应的中文页面是`https://addons.mozilla.org/zh-CN/firefox/addon/https-everywhere/`
给火狐设计的 HTTPS Everywhere 已经可以使用,在火狐的附加组件搜索网页上。`https://addons.mozilla.org/en-us/firefox/addon/https-everywhere/`LCTT 译注:对应的中文页面是 `https://addons.mozilla.org/zh-CN/firefox/addon/https-everywhere/`
### Privacy Badger
电子前线基金会同样开发了 Privacy Badger。 Privacy Badger 旨在通过阻止不想要的网页跟踪,弥补请勿跟踪功能的不足之处。它同样能通过火狐附加组件仓库安装。`https://addons.mozilla.org/en-us/firefox/addon/privacy-badger17`. (译者注:对应的中文页面是`https://addons.mozilla.org/zh-CN/firefox/addon/privacy-badger17/`
电子前线基金会同样开发了 Privacy Badger。 Privacy Badger 旨在通过阻止不想要的网页跟踪,弥补“请勿跟踪”功能的不足之处。它同样能通过火狐附加组件仓库安装。`https://addons.mozilla.org/en-us/firefox/addon/privacy-badger17`。LCTT 译注:对应的中文页面是`https://addons.mozilla.org/zh-CN/firefox/addon/privacy-badger17/`
### Ublock Origin
现在有一类更通用的的隐私附加组件,屏蔽广告。这里的选择是 uBlock Origin uBlock Origin 是个更轻量级的广告屏蔽插件,几乎不遗漏所有它会屏蔽的广告。 uBlock Origin 将主要屏蔽所有广告,特别是侵略性的广告。你能在这儿找到它。`https://addons.mozilla.org/en-us/firefox/addon/ublock-origin/`.(译者注:对应的中文页面是`https://addons.mozilla.org/zh-CN/firefox/addon/ublock-origin/`
现在有一类更通用的的隐私附加组件,屏蔽广告。这里的选择是 uBlock OriginuBlock Origin 是个更轻量级的广告屏蔽插件,几乎不遗漏所有它会屏蔽的广告。 uBlock Origin 将主要屏蔽所有广告,特别是侵略性的广告。你能在这儿找到它。`https://addons.mozilla.org/en-us/firefox/addon/ublock-origin/`LCTT 译注:对应的中文页面是 `https://addons.mozilla.org/zh-CN/firefox/addon/ublock-origin/`
### NoScript
阻止 JavaScript 是有点争议, JavaScript 虽说驱动了那么多的网站,但还是臭名昭著,因为 JavaScript 成为侵略隐私和攻击的媒介。NoScript 是应对 JavaScript 的绝佳方案。
![向 NoScript 的白名单添加网页](https://linuxconfig.org/images/ff-noscript.jpg?58cf18fc)
![向 NoScript 的白名单添加网页](https://linuxconfig.org/images/ff-noscript.jpg)
NoScript 是一个 JavaScript 的白名单,它通常会屏蔽 JavaScript除非该站点被添加进白名单中。可以通过插件的“选项”菜单事先将一个站点加入白名单或者通过在页面上点击 NoScript 图标的方式添加。
NoScript 是一个 JavaScript 的白名单,它通常会屏蔽 JavaScript 直到一个网页被添加进白名单中。添加一个网页进白名单能提前完成,通过插件的选项菜单,或者能通过点击页面上的 NoScript 图标完成。
![添加你所在的网页到 NoScript 的白名单中](https://linuxconfig.org/images/ff-noscript2.jpg)
![添加你所在的网页到 NoScript 的白名单中](https://linuxconfig.org/images/ff-noscript2.jpg?58cf18fd)
通过火狐附加组件仓库可以安装 NoScript `https://addons.mozilla.org/en-US/firefox/addon/noscript/`
如果网页提示不支持你使用的火狐版本,点“无论如何下载”。它已经测试过能在Firefox 51 上使用
如果网页提示不支持你使用的火狐版本,点<ruby>“无论如何下载”<rt>Download Anyway</rt></ruby>。这已经在 Firefox 51 上测试有效。
### Disconnect
Disconnect 做很多跟 Privacy Badger 一样的事情,它只是提供了另一个保护的方法。你能在附加组件仓库中找到它 `https://addons.mozilla.org/en-US/firefox/addon/disconnect/` (译者注:对应的中文页面是`https://addons.mozilla.org/zh-CN/firefox/addon/disconnect/`如果网页提示不支持你使用的火狐版本点“无论如何下载”。它已经测试过能在Firefox 51 上使用。
Disconnect 做很多跟 Privacy Badger 一样的事情,它只是提供了另一个保护的方法。你能在附加组件仓库中找到它 `https://addons.mozilla.org/en-US/firefox/addon/disconnect/` LCTT 译注:对应的中文页面是`https://addons.mozilla.org/zh-CN/firefox/addon/disconnect/`)。如果网页提示不支持你使用的火狐版本,点<ruby>“无论如何下载”<rt>Download Anyway</rt></ruby>。这已经在 Firefox 51 上测试有效。
### Random Agent Spoofer
Random Agent Spoofer 能改变火狐浏览器的签名,让浏览器看起来像是在其他任意平台上的其他任意浏览器。虽然有许多其他的应用,但是它也能预防浏览器指纹侦查。
浏览器指纹侦查是网站基于所使用的浏览器和操作系统来跟踪用户的另一个方式。相比于 Windows 用户,浏览器指纹侦查更多影响到 Linux 和其他替代性操作系统用户,因为他们的浏览器特征更独特。
<ruby>浏览器指纹侦查<rt>Browser Fingerprinting</rt></ruby>是网站基于所使用的浏览器和操作系统来跟踪用户的另一个方式。相比于 Windows 用户,浏览器指纹侦查更多影响到 Linux 和其他替代性操作系统用户,因为他们的浏览器特征更独特。
你能通过火狐附加插件仓库添加 Random Agent Spoofer。`https://addons.mozilla.org/en-us/firefox/addon/random-agent-spoofer/`(译注:对应的中文页面是`https://addons.mozilla.org/zh-CN/firefox/addon/random-agent-spoofer/`)像其他附加组件那样,页面或许会提示在最新版的火狐兼容性不好。再说一次,那并不是真的。
你能通过火狐附加插件仓库添加 Random Agent Spoofer。`https://addons.mozilla.org/en-us/firefox/addon/random-agent-spoofer/`LCTT 译注:对应的中文页面是`https://addons.mozilla.org/zh-CN/firefox/addon/random-agent-spoofer/`像其他附加组件那样,页面或许会提示它不兼容最新版的火狐。再说一次,那并不是真的。
![在火狐上使用Random Agent Spoofer ](https://linuxconfig.org/images/ff-random-agent.jpg)
![在火狐上使用Random Agent Spoofer ](https://linuxconfig.org/images/ff-random-agent.jpg?58cf18fc)
通过点击火狐菜单栏上的图标来使用 Random Agent Spoofer。点开后将会出现一个下拉菜单有不同模拟的浏览器选项。最好的选项之一是选择"Random Desktop" 和任意的改变时间。这样,就不会有绝对的模式来跟踪,也保证了你只能获得网页的桌面版本。
可以通过点击火狐菜单栏上的图标来使用 Random Agent Spoofer。点开后将会出现一个下拉菜单有不同模拟的浏览器选项。最好的选项之一是选择"Random Desktop" 和任意的改变时间。这样,就不会有绝对的模式来跟踪,也保证了你只能获得网页的桌面版本。
### 系统设置
## 系统设置
### 私人 DNS
避免使用公共或者 ISP 的 DNS 服务器。即使你配置了你的浏览器满足绝对的隐私标准,你向公共 DNS 服务器发出的 DNS 请求暴露了所有你访问过的网页。服务,例如谷歌公共 DNSIP8.8.8.8 、8.8.4.4)将会记录你的 IP 地址、关于你的 ISP 和地理位置信息。这些信息或许会被任何合法程序或者强制性的政府请求所分享。
请避免使用公共或者 ISP 的 DNS 服务器!即使你配置你的浏览器满足绝对的隐私标准,你向公共 DNS 服务器发出的 DNS 请求却暴露了所有你访问过的网页。诸如谷歌公共 DNSIP8.8.8.8 、8.8.4.4)这类的服务将会记录你的 IP 地址、你的 ISP 和地理位置信息。这些信息或许会被任何合法程序或者强制性的政府请求所分享。
> **当我在使用谷歌公共 DNS 服务时,谷歌会记录什么信息?**
>
> 谷歌公共 DNS 隐私页面有一个完整的收集信息列表。谷歌公共 DNS 遵循谷歌主隐私政策,在我们的隐私中心可以看到。 你客户端 IP 地址是唯一会被临时记录的(一到两天后删除),但是为了让我们的服务更快、更好、更安全,关于 ISP 和城市/都市级别的信息将会被保存更长的时间。
> 谷歌公共 DNS 隐私页面有一个完整的收集信息列表。谷歌公共 DNS 遵循谷歌的主隐私政策,在<ruby>“隐私中心”<rt>Privacy Center</rt></ruby>可以看到。 用户的客户端 IP 地址是唯一会被临时记录的(一到两天后删除),但是为了让我们的服务更快、更好、更安全,关于 ISP 和城市/都市级别的信息将会被保存更长的时间。
> 参考资料: `https://developers.google.com/speed/public-dns/faq#privacy`
以上原因,如果可能的话,配置并使用你私人的非转发 DNS 服务器。现在,这项任务或许跟在本地部署一些预先配置好的 DNS 服务器 Docker 容器一样琐碎。例如,假设 docker 服务已经在你的系统安装完成,下列命令将会部署你的私人本地 DNS 服务器:
由于以上原因,如果可能的话,配置并使用你私人的非转发 DNS 服务器。现在,这项任务或许跟在本地部署一些预先配置好的 DNS 服务器 Docker 容器一样简单。例如,假设 docker 服务已经在你的系统安装完成,下列命令将会部署你的私人本地 DNS 服务器:
```
# docker run -d --name bind9 -p 53:53/udp -p 53:53 fike/bind9
@ -175,7 +139,7 @@ DNS 服务器现在已经启动并正在运行:
google.com. 242 IN A 216.58.199.46
```
现在,在`/etc/resolv.conf `里设置你的域名服务器:
现在,在 `/etc/resolv.conf` 里设置你的域名服务器:
```
@ -183,9 +147,9 @@ google.com. 242 IN A 216.58.199.46
nameserver 127.0.0.1
```
### 关闭联想
## 结束语
没有完美的安全隐私解决方案。虽然这篇指导里的步骤明显是个改进。如果你真的很在乎隐私Tor 浏览器`https://www.torproject.org/projects/torbrowser.html.en`是最佳选择。Tor 对于日常使用有点过犹不及,但是它的确使用了同样在这篇指导里列出的一些措施。
没有完美的安全隐私解决方案,但本篇指导里的步骤可以带来明显的改进。如果你真的很在乎隐私Tor 浏览器 `https://www.torproject.org/projects/torbrowser.html.en` 是最佳选择。Tor 对于日常使用有点过犹不及,但是它的确使用了这篇指导里列出的一些措施。
--------------------------------------------------------------------------------
@ -193,24 +157,8 @@ via: https://linuxconfig.org/protecting-your-privacy-with-firefox-on-linux
作者:[Nick Congleton][a]
译者:[ypingcn](https://ypingcn.github.io/wiki/lctt)
校对:[校对者ID](https://github.com/校对者ID)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://linuxconfig.org/protecting-your-privacy-with-firefox-on-linux
[1]: https://linuxconfig.org/protecting-your-privacy-with-firefox-on-linux#h2-1-health-report
[2]: https://linuxconfig.org/protecting-your-privacy-with-firefox-on-linux#h2-2-search
[3]: https://linuxconfig.org/protecting-your-privacy-with-firefox-on-linux#h2-3-do-not-track
[4]: https://linuxconfig.org/protecting-your-privacy-with-firefox-on-linux#h2-4-disable-pocket
[5]: https://linuxconfig.org/protecting-your-privacy-with-firefox-on-linux#h3-1-https-everywhere
[6]: https://linuxconfig.org/protecting-your-privacy-with-firefox-on-linux#h3-2-privacy-badger
[7]: https://linuxconfig.org/protecting-your-privacy-with-firefox-on-linux#h3-3-ublock-origin
[8]: https://linuxconfig.org/protecting-your-privacy-with-firefox-on-linux#h3-4-noscript
[9]: https://linuxconfig.org/protecting-your-privacy-with-firefox-on-linux#h3-5-disconnect
[10]: https://linuxconfig.org/protecting-your-privacy-with-firefox-on-linux#h3-6-random-agent-spoofer
[11]: https://linuxconfig.org/protecting-your-privacy-with-firefox-on-linux#h4-1-private-dns
[12]: https://linuxconfig.org/protecting-your-privacy-with-firefox-on-linux#h1-introduction
[13]: https://linuxconfig.org/protecting-your-privacy-with-firefox-on-linux#h2-firefox-settings
[14]: https://linuxconfig.org/protecting-your-privacy-with-firefox-on-linux#h3-add-ons
[15]: https://linuxconfig.org/protecting-your-privacy-with-firefox-on-linux#h4-system-settings
[16]: https://linuxconfig.org/protecting-your-privacy-with-firefox-on-linux#h5-closing-thoughts

View File

@ -0,0 +1,178 @@
### 在 Linux 上使用 Nginx 和 Gunicorn 托管 Django
![](https://linuxconfig.org/images/gunicorn_logo.png?58963dfd)
内容
* * [1. 介绍][4]
* [2. Gunicorn][5]
* [2.1. 安装][1]
* [2.2. 配置][2]
* [2.3. 运行][3]
* [3. Nginx][6]
* [4. 结语][7]
### 介绍
托管 Django Web 应用程序相当简单虽然它比标准的 PHP 应用程序更复杂一些。让 Django 与 Web 服务器对接的方法有很多Gunicorn 就是其中最简单的一个。
GunicornGreen Unicorn的缩写在你的Web服务器Django之间作为中间服务器使用在这里Web服务器就是Nginx。 Gunicorn服务于应用程序而Nginx处理静态内容。
### Gunicorn
### 安装
使用Pip安装Gunicorn是超级简单的。 如果你已经使用virtualenv搭建好了你的Django项目那么你就有了Pip并且应该熟悉Pip的工作方式。 所以在你的virtualenv中安装Gunicorn。
```
$ pip install gunicorn
```
### 配置
Gunicorn 最有吸引力的一个地方就是它的配置非常简单。处理配置最好的方法就是在 Django 项目的根目录下创建一个名为 `gunicorn` 的文件夹,然后在该文件夹内创建一个配置文件。
在本篇教程中,配置文件名称是 `gunicorn-conf.py`。在该文件中,创建类似于下面的配置:
```
import multiprocessing
bind = 'unix:///tmp/gunicorn1.sock'
workers = multiprocessing.cpu_count() * 2 + 1
reload = True
daemon = True
```
在上述配置的情况下Gunicorn会在`/tmp/`目录下创建一个名为`gunicorn1.sock`的Unix套接字。 还会启动一些工作进程进程数量相当于CPU内核数量的2倍。 它还会自动重新加载并作为守护进程运行。
### 运行
Gunicorn的运行命令有点长指定了一些附加的配置项。 最重要的部分是将Gunicorn指向你项目的`.wsgi`文件。
```
gunicorn -c gunicorn/gunicorn-conf.py -D --error-logfile gunicorn/error.log yourproject.wsgi
```
上面的命令应该从项目的根目录运行。Gunicorn 会使用你用 `-c` 选项指定的配置文件。`-D` 再次指定 Gunicorn 以守护进程方式运行。倒数第二部分在 `gunicorn` 文件夹中指定 Gunicorn 错误日志文件的位置。命令的结尾部分为 Gunicorn 指定了 `.wsgi` 文件的位置。
### Nginx
现在Gunicorn配置好了并且已经开始运行了你可以设置Nginx连接它为你的静态文件提供服务。 本指南假定你已经配置了Nginx而且你通过它托管的站点使用了单独的服务块。 它还将包括一些SSL信息。
如果你想知道如何让你的网站获得免费的SSL证书请查看我们的[LetsEncrypt指南][8]。
```nginx
# 连接到Gunicorn
upstream yourproject-gunicorn {
server unix:/tmp/gunicorn1.sock fail_timeout=0;
}
# 将未加密的流量重定向到加密的网站
server {
listen 80;
server_name yourwebsite.com;
return 301 https://yourwebsite.com$request_uri;
}
# 主服务块
server {
# 设置监听的端口,指定监听的域名
listen 443 default ssl;
client_max_body_size 4G;
server_name yourwebsite.com;
# 指定日志位置
access_log /var/log/nginx/yourwebsite.access_log main;
error_log /var/log/nginx/yourwebsite.error_log info;
# 将nginx指向你的ssl证书
ssl on;
ssl_certificate /etc/letsencrypt/live/yourwebsite.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/yourwebsite.com/privkey.pem;
# 设置根目录
root /var/www/yourvirtualenv/yourproject;
# 为Nginx指定静态文件路径
location /static/ {
# Autoindex the files to make them browsable if you want
autoindex on;
# The location of your files
alias /var/www/yourvirtualenv/yourproject/static/;
# Set up caching for your static files
expires 1M;
access_log off;
add_header Cache-Control "public";
proxy_ignore_headers "Set-Cookie";
}
# 为Nginx指定你上传文件的路径
location /media/ {
# Autoindex if you want
autoindex on;
# The location of your uploaded files
alias /var/www/yourvirtualenv/yourproject/media/;
# 为上传的文件设置缓存
expires 1M;
access_log off;
add_header Cache-Control "public";
proxy_ignore_headers "Set-Cookie";
}
location / {
# 先尝试静态文件,找不到再转发给 Gunicorn
try_files $uri @proxy_to_app;
}
# 将请求传递给Gunicorn
location @proxy_to_app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://yourproject-gunicorn;
}
# 缓存HTMLXML和JSON
location ~* \.(html?|xml|json)$ {
expires 1h;
}
# 缓存所有其他的静态资源
location ~* \.(jpg|jpeg|png|gif|ico|css|js|ttf|woff2)$ {
expires 1M;
access_log off;
add_header Cache-Control "public";
proxy_ignore_headers "Set-Cookie";
}
}
```
配置文件有点长,但是还可以更长一些。其中重点是指向 Gunicorn 的 `upstream` 块,以及将流量传递给 Gunicorn 的 `location` 块。大多数其他的配置项都是可选的,但是你应该按照一定的形式来配置。配置中的注释应该可以帮助你了解具体细节。
保存文件之后你可以重启Nginx让修改的配置生效。
```
# systemctl restart nginx
```
一旦Nginx在线生效你的站点就可以通过域名访问了。
### 结语
Nginx 可以做的事情还有很多,值得深入研究,但上面提供的配置是一个很好的开始,你可以将其用于实践中。 如果你习惯了 Apache 和臃肿的 PHP 应用程序,这样的服务器配置速度应该会是一个惊喜。
--------------------------------------------------------------------------------
via: https://linuxconfig.org/hosting-django-with-nginx-and-gunicorn-on-linux
作者:[Nick Congleton][a]
译者:[Flowsnow](https://github.com/Flowsnow)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://linuxconfig.org/hosting-django-with-nginx-and-gunicorn-on-linux
[1]: https://linuxconfig.org/hosting-django-with-nginx-and-gunicorn-on-linux#h2-1-installation
[2]: https://linuxconfig.org/hosting-django-with-nginx-and-gunicorn-on-linux#h2-2-configuration
[3]: https://linuxconfig.org/hosting-django-with-nginx-and-gunicorn-on-linux#h2-3-running
[4]: https://linuxconfig.org/hosting-django-with-nginx-and-gunicorn-on-linux#h1-introduction
[5]: https://linuxconfig.org/hosting-django-with-nginx-and-gunicorn-on-linux#h2-gunicorn
[6]: https://linuxconfig.org/hosting-django-with-nginx-and-gunicorn-on-linux#h3-nginx
[7]: https://linuxconfig.org/hosting-django-with-nginx-and-gunicorn-on-linux#h4-closing-thoughts
[8]: https://linuxconfig.org/generate-ssl-certificates-with-letsencrypt-debian-linux
# [Ubuntu 和 Fedora 上 10 个最好的 Linux 终端仿真器][12]
[
![10 Best Linux Terminals](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/10-best-linux-terminals_orig.jpg)
][3]
对于 Linux 用户来说,最重要的应用程序之一就是终端仿真器。它允许每个用户获得对 shell 的访问。Bash 是 Linux 和 UNIX 发行版中最常用的 shell它很强大对于新手和高级用户来说掌握 bash 都很有必要。因此,在这篇文章中,你可以了解 Linux 用户有哪些优秀的终端仿真器可以选择。
### 1、Terminator
这个项目的目标是创造一个能够很好排列终端的有用工具。它受到一些如 gnome-multi-term、quadkonsole 等程序的启发,重点是以网格的形式排列终端。
#### 特性浏览
* 以网格形式排列终端
* Tab 设定
* 通过拖放重排终端
* 大量的快捷键
* 通过 GUI 参数编辑器保存多个布局和配置文件
* 同时对不同分组的终端进行输入
[
![terminator linux terminals](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/published/terminator-linux-terminals.png?1487082905)
][4]
你可以通过下面的命令安装 Terminator
```
sudo apt-get install terminator
```
### 2、Tilda一个可以拖动的终端
**Tilda** 的独特之处在于它不像一个普通的窗口,相反,你可以使用一个特殊的热键从屏幕的顶部上下拖动它。
另外Tilda 是高度可配置的,可以自定义绑定热键,改变外观,以及其他许多能够影响 Tilda 特性的选项。
在 Ubuntu 和 Fedora 上都可以使用包管理器安装 Tilda当然你也可以查看它的 [GitHub 仓库][14]。
[
![tilda linux terminal](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/tilda-linux-terminal_orig.png)
][5]
### 3、Guake
Guake 是一个和 Tilda 或 yakuake 类似的可拖动终端仿真器。如果你知道一些关于 Python、Git 和 GTK 的知识的话,你可以给 Guake 添加一些新的特性。
Guake 在许多发行版上均可用,所以如果你想安装它,你可以查看你的版本仓库。
#### 特性浏览
* 轻量
* 简单、容易且很优雅
* 从终端到 GUI 的流畅集成
* 当你使用的时候出现,一旦按下预定义热键便消失(默认情况下是 F12
* Compiz 透明支持
* 多重 Tab
* 丰富的调色板
* 还有更多……
主页: [http://guake-project.org/][15]
### 4、ROXTerm
如果你正在寻找一个轻量型、高度可定制的终端仿真器,那么 ROXTerm 就是专门为你准备的。这是一个旨在提供和 gnome-terminal 相似特性的终端仿真器,它们都基于相同的 VTE 库。它的最初设计只占用很小的空间并且能够快速启动,它具有比 gnome-terminal 更强的可配置性,更加针对经常使用终端的 “Power” 用户。
[
![roxterm linux terminal](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/roxterm-linux-terminal_orig.png)
][6]
[http://roxterm.sourceforge.net/index.php?page=index&lang=en][16]
### 5、XTerm
Xterm 是 Linux 和 UNIX 系统上最受欢迎的终端仿真器,因为它是 X 窗口系统的默认终端仿真器,并且很轻量、很简单。
[
![xterm linux terminal](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/published/xterm-linux-terminal.png?1487083067)
][7]
### 6、Eterm
如果你正在寻找一个漂亮、强大的终端仿真器,那么 Eterm 是你最好的选择。Eterm 是一个彩色 vt102 终端仿真器,被当作是 Xterm 的替代品。它按照自由选择的哲学思想进行设计,将尽可能多的权利、灵活性和自由交到用户手中。
[
![etern linux terminal](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/published/etern-linux-terminal.jpg?1487083129)
][8]
官网: [http://www.eterm.org/][17]
### 7、Gnome Terminal
Gnome Terminal 是最受欢迎的终端仿真器之一,它被许多 Linux 用户使用,因为它默认安装在 Gnome 桌面环境中,而 Gnome 桌面很常用。它有许多特性并且支持大量主题。
在许多 Linux 发行版中都默认安装有 Gnome Terminal但你也可以使用你的包管理器来安装它。
[
![gnome terminal](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/gnome-terminal_orig.jpg)
][9]
### 8、Sakura
Sakura 是一个基于 GTK 和 VTE 的终端仿真器。它是一个只有很少依赖的终端仿真器,所以你不需要先安装一个完整的 GNOME 桌面才能有一个像样的终端仿真器。
你可以使用你的包管理器来安装它,因为 Sakura 在绝大多数发行版中都是可用的。
### 9、LilyTerm
LilyTerm 是一个基于 libvte 的终端仿真器,旨在快速和轻量,是 GPLv3 授权许可的。
#### 特性浏览
* 低资源占用
* 多重 Tab
* 配色方案丰富
* 支持超链接
* 支持全屏
* 还有更多的……
[
![lilyterm linux terminal](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/published/lilyterm-linux-terminal.jpg?1487083285)
][10]
### 10、Konsole
如果你是一名 KDE 或 Plasma 用户,那么你一定知道 Konsole因为它是 KDE 桌面的默认终端仿真器,也是我最喜爱的终端仿真器之一,因为它很舒适易用。
它在 Ubuntu 和 Fedora 上均可用,但如果你在使用 Ubuntu Unity那么你可能需要选择别的终端仿真器或者可以考虑使用 Kubuntu。
[
![konsole linux terminal](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/editor/konsole-linux-terminal.png?1487083345)
][11]
### 结论
我们是 Linux 用户,根据自己的需求,可以有许多选择来挑选更好的应用。因此,你可以选择**最好的终端**来满足个人需求,虽然你也可以选择另一个 shell 来满足个人需求,比如你也可以使用 fish shell。
--------------------------------------------------------------------------------
via: http://www.linuxandubuntu.com/home/10-best-linux-terminals-for-ubuntu-and-fedora
作者:[Mohd Sohail][a]
译者:[ucasFL](https://github.com/ucasFL)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://disqus.com/by/MohdSohail1/
[1]:http://www.linuxandubuntu.com/home/terminator-a-linux-terminal-emulator-with-multiple-terminals-in-one-window
[2]:http://www.linuxandubuntu.com/home/another-linux-terminal-app-guake
[3]:http://www.linuxandubuntu.com/home/10-best-linux-terminals-for-ubuntu-and-fedora
[4]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/terminator-linux-terminals_orig.png
[5]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/tilda-linux-terminal_orig.png
[6]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/roxterm-linux-terminal_orig.png
[7]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/xterm-linux-terminal_orig.png
[8]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/etern-linux-terminal_orig.jpg
[9]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/gnome-terminal_orig.jpg
[10]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/lilyterm-linux-terminal_orig.jpg
[11]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/konsole-linux-terminal_orig.png
[12]:http://www.linuxandubuntu.com/home/10-best-linux-terminals-for-ubuntu-and-fedora
[13]:http://www.linuxandubuntu.com/home/10-best-linux-terminals-for-ubuntu-and-fedora#comments
[14]:https://github.com/lanoxx/tilda
[15]:http://guake-project.org/
[16]:http://roxterm.sourceforge.net/index.php?page=index&amp;lang=en
[17]:http://www.eterm.org/
理解 sudo 与 su 之间的区别
============================================================
### 本文导航
1. [Linux su 命令][7]
1. [su -][1]
2. [su -c][2]
2. [Sudo vs Su][8]
1. [关于密码][3]
2. [默认行为][4]
3. [日志记录][5]
4. [灵活性][6]
3. [Sudo su][9]
在[早前的一篇文章][11]中,我们深入讨论了 `sudo` 命令的相关内容。同时,在该文章的末尾有提到相关的命令 `su` 的部分内容。本文,我们将详细讨论关于 su 命令与 sudo 命令之间的区别。
在开始之前有必要说明一下,文中所涉及到的示例教程都已经在 Ubuntu 14.04 LTS 上测试通过。
### Linux su 命令
su 命令的主要作用是让你可以在已登录的会话中切换到另外一个用户。换句话说,这个工具可以让你在不登出当前用户的情况下登录另外一个用户(以该用户的身份)。
su 命令经常被用于切换到超级用户或 root 用户(因为在命令行下工作,经常需要 root 权限),但是,正如前面所提到的su 命令也可以用于切换到任意非 root 用户。
如何使用 su 命令切换到 root 用户,如下:
[
![不带命令行参数的 su 命令](https://www.howtoforge.com/images/sudo-vs-su/su-command.png)
][12]
如上su 命令要求输入的密码是 root 用户密码。所以,一般 su 命令需要输入目标用户的密码。在输入正确的密码之后su 命令会在终端的当前会话中打开一个子会话。
### su -
还有一种方法可以切换到 root 用户:运行 `su -` 命令,如下:
[
![su - 命令](https://www.howtoforge.com/images/sudo-vs-su/su-hyphen-command.png)
][13]
那么,`su` 命令与 `su -` 命令之间有什么区别呢?前者在切换到 root 用户之后仍然保持旧的或原始用户的环境,而后者则是创建一个新的环境(由 root 用户 ~/.bashrc 文件所设置的环境),相当于使用 root 用户正常登录(从登录屏幕显示登录)。
`su` 命令手册页很清楚地说明了这一点:
```
可选参数 `-` 可提供的环境为用户在直接登录时的环境。
```
因此,你会觉得使用 `su -` 登录更有意义。但是,既然 `su` 命令也同时存在,大家可能会想知道它在什么时候用到。以下内容摘自 [ArchLinux wiki][14],是关于 `su` 命令的好处和坏处:
* 有的时候,对于系统管理员来讲,使用其他普通用户的 Shell 账户而不是自己的 Shell 账户更好一些。尤其是在处理用户问题时,最有效的方法就是:登录目标用户,以便重现以及调试问题。
* 然而,在多数情况下,当从普通用户切换到 root 用户进行操作时,如果仍然使用普通用户的环境变量,那是不可取甚至是危险的。因为是在无意间沿用了普通用户的环境,所以当以 root 用户进行程序安装或系统更改时,可能产生与正常使用 root 用户操作时不一样的结果。例如,普通用户环境中的程序可能借此意外损坏系统,或者获得对某些数据的未授权访问。
注意:如果你想在 `su -` 命令后面传递更多的参数,那么你必须使用 `su -l` 来实现。以下是 `-``-l` 命令行选项的说明:
```
-, -l, --login
提供相当于用户在直接登录时所期望的环境。
当使用 - 时,它必须是 su 命令的最后一个选项。其他形式(-l 和 --login无此限制。
```
### su -c
还有一个值得一提的 `su` 命令行选项为:`-c`。该选项允许你提供在切换到目标用户之后要运行的命令。
`su` 命令手册页是这样说明:
```
-c, --command COMMAND
使用 -c 选项指定由 Shell 调用的命令。
被执行的命令无法控制终端。所以,此选项不能用于执行需要控制 TTY 的交互式程序。
```
参考示例:
```
su [target-user] -c [command-to-run]
```
示例中,`command-to-run` 将会被这样执行:
```
[shell] -c [command-to-run]
```
示例中的 `shell` 类型将会被目标用户在 `/etc/passwd` 文件中定义的登录 shell 类型所替代。
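可以用下面的命令(示例)查看某个用户在 `/etc/passwd` 中定义的登录 shell也就是 `su -c` 所调用的 shell

```shell
# 从 /etc/passwd 中取出 root 用户的登录 shell第 7 个字段)
getent passwd root | cut -d: -f7
```

在多数 Linux 系统上,它会输出类似 `/bin/bash` 的路径。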
### Sudo vs Su
现在,我们已经讨论了关于 `su` 命令的基础知识,是时候来探讨一下 `sudo``su` 命令之间的区别了。
### 关于密码
两个命令的最大区别是:`sudo` 命令需要输入当前用户的密码,`su` 命令需要输入 root 用户的密码。
很明显,就安全而言,`sudo` 命令更好。例如,考虑到需要 root 访问权限的多用户使用的计算机。在这种情况下,使用 `su` 意味着需要与其他用户共享 root 用户密码,这显然不是一种好习惯。
此外,如果要撤销特定用户的超级用户/root 用户的访问权限,唯一的办法就是更改 root 密码,然后再告知所有其他用户新的 root 密码。
而使用 `sudo` 命令就不一样了,你可以很好地处理以上两种情况。鉴于 `sudo` 命令要求用户输入的是自己的密码,所以不需要共享 root 密码。同时,想要阻止特定用户访问 root 权限,只需要调整 `sudoers` 文件中的相应配置即可。
### 默认行为
两个命令之间的另外一个区别是默认行为。`sudo` 命令只允许使用提升的权限运行单个命令,而 `su` 命令会启动一个新的 shell允许以 root 权限运行任意多的命令,直到明确退出登录。
因此,`su` 命令的默认行为是有风险的,因为用户很有可能会忘记他们正在以 root 用户身份进行工作,于是,无意中做出了一些不可恢复的更改(例如:对错误的目录运行 `rm -rf` 命令)。关于为什么不鼓励以 root 用户身份进行工作的详细内容,请参考[这里][10]。
### 日志记录
尽管 `sudo` 命令是以目标用户(默认情况下是 root 用户)的身份执行命令,但它会用执行者自己的用户名来记录是谁执行了命令。而使用 `su` 命令后,则无法直接跟踪用户切换到 root 之后执行了什么操作。
### 灵活性
`sudo` 命令会比 `su` 命令灵活很多,因为你甚至可以限制 sudo 用户可以访问哪些命令。换句话说,用户通过 `sudo` 命令只能访问他们工作需要的命令。而 `su` 命令让用户有权限做任何事情。
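这种限制是通过 `sudoers` 文件实现的。下面是一个假设性的片段示例(用户名 alice 为假设,编辑时应使用 `visudo`

```
# /etc/sudoers 片段示例:只允许 alice 以 root 身份重启 nginx 服务
# 而不能通过 sudo 执行其他任何命令
alice ALL=(root) /usr/bin/systemctl restart nginx
```

这样alice 只能运行 `sudo systemctl restart nginx`,对其他命令使用 `sudo` 都会被拒绝并记录。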
### Sudo su
大概是因为使用 `su` 命令或直接以 root 用户身份登录有风险,所以,一些 Linux 发行版(如 Ubuntu默认禁用 root 用户帐户。鼓励用户在需要 root 权限时使用 `sudo` 命令。
然而,您还是可以成功执行 `su` 命令,即不用输入 root 用户的密码。运行以下命令:
```
sudo su
```
由于你使用 `sudo` 运行命令,你只需要输入当前用户的密码。所以,一旦完成操作,`su` 命令将会以 root 用户身份运行,这意味着它不会再要求输入任何密码。
**PS**:如果你想在系统中启用 root 用户帐户(强烈不建议这样做,因为你总是可以使用 `sudo` 命令或 `sudo su` 命令),你必须手动设置 root 用户密码,可以使用以下命令:
```
sudo passwd root
```
### 结论
这篇文章以及之前的教程(其中侧重于 `sudo` 命令)应该能让你比较清楚:当需要以提升的(或完全不同的一组)权限来执行任务时,有哪些可用的工具。 如果您也想分享关于 `su``sudo` 的相关内容或者经验,欢迎您在下方进行评论。
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/sudo-vs-su/
作者:[Himanshu Arora][a]
译者:[zhb127](https://github.com/zhb127)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.howtoforge.com/tutorial/sudo-vs-su/
[1]:https://www.howtoforge.com/tutorial/sudo-vs-su/#su-
[2]:https://www.howtoforge.com/tutorial/sudo-vs-su/#su-c
[3]:https://www.howtoforge.com/tutorial/sudo-vs-su/#password
[4]:https://www.howtoforge.com/tutorial/sudo-vs-su/#default-behavior
[5]:https://www.howtoforge.com/tutorial/sudo-vs-su/#logging
[6]:https://www.howtoforge.com/tutorial/sudo-vs-su/#flexibility
[7]:https://www.howtoforge.com/tutorial/sudo-vs-su/#the-su-command-in-linux
[8]:https://www.howtoforge.com/tutorial/sudo-vs-su/#sudo-vs-su
[9]:https://www.howtoforge.com/tutorial/sudo-vs-su/#sudo-su
[10]:http://askubuntu.com/questions/16178/why-is-it-bad-to-login-as-root
[11]:https://www.howtoforge.com/tutorial/sudo-beginners-guide/
[12]:https://www.howtoforge.com/images/sudo-vs-su/big/su-command.png
[13]:https://www.howtoforge.com/images/sudo-vs-su/big/su-hyphen-command.png
[14]:https://wiki.archlinux.org/index.php/Su
如何在 AWS EC2 的 Linux 服务器上打开端口
============================================================
_这是一篇用屏幕截图解释如何在 AWS EC2 Linux 服务器上打开端口的教程。它能帮助你管理 EC2 服务器上特定端口的服务。_
* * *
AWS即 Amazon Web Services不是 IT 世界中的新术语了。它是亚马逊提供的云服务平台。它的免费帐户能为你提供一年的有限免费服务。这是尝试新技术而不用花费金钱的最好的方式之一。
AWS 提供服务器计算作为他们的服务之一,他们称之为 EC2<ruby>弹性计算云<rt>Elastic Compute Cloud</rt></ruby>)。使用它可以构建我们的 Linux 服务器。我们已经看到了[如何在 AWS 上设置免费的 Linux 服务器][11]了。
默认情况下,所有基于 EC2 的 Linux 服务器都只打开 22 端口,即 SSH 服务端口(所有 IP 的入站)。因此,如果你托管了任何特定端口的服务,则要为你的服务器在 AWS 防火墙上打开相应端口。
同样它的 1 到 65535 的端口是打开的(所有出站流量)。如果你想改变这个,你可以使用下面的方法编辑出站规则。
在 AWS 上为你的服务器设置防火墙规则很容易。你能够在几秒钟内为你的服务器打开端口。我将用截图指导你如何打开 EC2 服务器的端口。
_步骤 1_
登录 AWS 帐户并进入 **EC2 管理控制台**。进入<ruby>“网络及安全”<rt>Network & Security </rt></ruby>菜单下的<ruby>**安全组**<rt>Security Groups</rt></ruby>,如下高亮显示:
![AWS EC2 management console](http://cdn2.kerneltalks.com/wp-content/uploads/2017/03/AWS-EC2-management-console.jpg)
*AWS EC2 管理控制台*
* * *
_步骤 2_
<ruby>安全组<rt>Security Groups</rt></ruby>中选择你的 EC2 服务器,并在 <ruby>**行动**<rt>Actions</rt></ruby> 菜单下选择 <ruby>**编辑入站规则**<rt>Edit inbound rules</rt></ruby>
![AWS inbound rules](http://cdn2.kerneltalks.com/wp-content/uploads/2017/03/AWS-inbound-rules.jpg)
*AWS 入站规则菜单*
_步骤 3:_
现在你会看到入站规则窗口。你可以在此处添加/编辑/删除入站规则。下拉菜单中列出了几个常见服务http、nfs 等),选择它们可以自动填充端口。如果你有自定义服务和端口,也可以自行定义。
![AWS add inbound rule](http://cdn2.kerneltalks.com/wp-content/uploads/2017/03/AWS-add-inbound-rule.jpg)
*AWS 添加入站规则*
比如,如果你想要打开 80 端口,你需要选择:
* 类型http
* 协议TCP
* 端口范围80
* 源:任何来源(打开 80 端口,接受来自任何 IP0.0.0.0/0的请求或“我的 IP”自动填充你当前的公网 IP
* * *
_步骤 4:_
就是这样了。保存完毕后,你的服务器入站 80 端口将会打开!你可以通过 telnet 到 EC2 服务器公共域名的 80 端口来检验(可以在 EC2 服务器详细信息中找到)。
你也可以在 [ping.eu][12] 等网站上检验。
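如果手边没有 telnet也可以用 bash 内置的 /dev/tcp 做一个简单的端口连通性检查(下面的主机和端口均为示例,请替换为你的 EC2 公共域名和端口):

```shell
# 检查 127.0.0.1 的 22 端口是否可达3 秒超时)
if timeout 3 bash -c 'cat < /dev/null > /dev/tcp/127.0.0.1/22' 2>/dev/null; then
    echo "端口可达"
else
    echo "端口不可达"
fi
```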
* * *
同样的方式可以编辑出站规则,这些更改都是即时生效的。
--------------------------------------------------------------------------------
via: http://kerneltalks.com/virtualization/how-to-open-port-on-aws-ec2-linux-server/
作者:[Shrikant Lavhate ][a]
译者:[geekpi](https://github.com/geekpi)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://kerneltalks.com/virtualization/how-to-open-port-on-aws-ec2-linux-server/
[1]:http://kerneltalks.com/virtualization/how-to-open-port-on-aws-ec2-linux-server/#
[2]:http://kerneltalks.com/virtualization/how-to-open-port-on-aws-ec2-linux-server/#
[3]:http://kerneltalks.com/virtualization/how-to-open-port-on-aws-ec2-linux-server/#
[4]:http://kerneltalks.com/virtualization/how-to-open-port-on-aws-ec2-linux-server/#
[5]:http://kerneltalks.com/virtualization/how-to-open-port-on-aws-ec2-linux-server/#
[6]:http://kerneltalks.com/virtualization/how-to-open-port-on-aws-ec2-linux-server/#
[7]:http://kerneltalks.com/virtualization/how-to-open-port-on-aws-ec2-linux-server/#
[8]:http://kerneltalks.com/virtualization/how-to-open-port-on-aws-ec2-linux-server/#
[9]:http://kerneltalks.com/virtualization/how-to-open-port-on-aws-ec2-linux-server/#
[10]:http://kerneltalks.com/author/shrikant/
[11]:http://kerneltalks.com/howto/install-ec2-linux-server-aws-with-screenshots/
[12]:http://ping.eu/port-chk/
如何在树莓派上安装 Fedora 25
============================================================
### 继续阅读,了解 Fedora 第一个官方支持 Pi 的版本
![How to install Fedora 25 on your Raspberry Pi](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/gnome_raspberry_pi_fedora.jpg?itok=Efm6IKxP "How to install Fedora 25 on your Raspberry Pi")
>图片提供 opensource.com
2016 年 10 月Fedora 25 Beta 发布了,随之而来的还有对 [Raspberry Pi 2 和 3 的初步支持][6]。Fedora 25 的最终“通用”版在一个月后发布,从那时起,我一直在树莓派上尝试不同的 Fedora spin。
这篇文章不仅是一篇 Raspberry Pi 3 上的 Fedora 25 的评论,还集合了提示、截图以及我对 Fedora 第一个官方支持 Pi 的这个版本的一些个人想法。
更多关于树莓派的内容:

* [我们最新的树莓派信息][1]
* [什么是树莓派?][2]
* [树莓派入门][3]
* [给我们发送你的树莓派项目和教程][4]

在我开始之前,需要说明的是,为写这篇文章所做的所有工作都是在我运行 Fedora 25 的个人笔记本电脑上完成的。我使用一张 microSD 插到 SD 适配器中,复制和编辑所有的 Fedora 镜像到 32GB 的 microSD 卡中,然后用它在一台三星电视上启动了 Raspberry Pi 3。因为 Fedora 25 尚不支持内置 Wi-Fi所以 Raspberry Pi 3 使用了以太网线缆进行网络连接。最后,我使用了 Logitech K410 无线键盘和触摸板进行输入。

如果你没有机会使用以太网线连接,在你的树莓派上玩 Fedora 25我曾经用过一个 Edimax Wi-Fi USB 适配器,它也可以在 Fedora 25 上工作,但在本文中,我只使用了以太网连接。

### 在你开始在树莓派上安装 Fedora 25 之前

阅读 Fedora 项目 wiki 上的[树莓派支持文档][7]。你可以从 wiki 下载 Fedora 25 安装所需的镜像,那里还列出了所有支持和不支持的内容。

此外,请注意,这是初始支持版本,还有许多新的工作和支持将随着 Fedora 26 的发布而出现,所以请随时报告 bug并通过 [Bugzilla][8]、Fedora 的 [ARM 邮件列表][9],或者 Freenode IRC 频道fedora-arm分享你在树莓派上使用 Fedora 25 的体验反馈。
### 安装
我下载并安装了五个不同的 Fedora 25 spinGNOME工作站默认、KDE、Minimal、LXDE 和 Xfce。在多数情况下它们都有一致且易于遵循的步骤以确保我的 Raspberry Pi 3 启动正常。有的 spin 有人们正在解决的已知 bug有的则按照 Fedora wiki 的标准操作程序来处理。
![GNOME on Raspberry Pi](https://opensource.com/sites/default/files/gnome_on_rpi.png "GNOME on Raspberry Pi")
*Raspberry Pi 3 上的 Fedora 25 workstation、 GNOME 版本*
### 安装步骤
1\. 在你的笔记本上,从支持文档页面的链接下载树莓派的 Fedora 25 镜像。
2\. 在笔记本上,使用 **fedora-arm-installer** 或命令行将镜像复制到 microSD 中:
**xzcat Fedora-Workstation-armhfp-25-1.3-sda.raw.xz | dd bs=4M status=progress of=/dev/mmcblk0**
注意:**/dev/mmcblk0** 是我的 microSD 插到 SD 适配器后,在我的笔记本电脑上挂载的设备。虽然我在笔记本上使用 Fedora可以使用 **fedora-arm-installer**,但我还是喜欢命令行。
3\. 复制完镜像后_先不要启动你的系统_。我知道你很想这么做但你仍然需要进行几个调整。
4\. 为了使镜像文件尽可能小以便下载,镜像上的根文件系统是很小的,因此你必须增大根文件系统。如果你不这么做,你仍然可以启动你的派,但一旦运行 **dnf update** 来升级系统,它就会填满文件系统,导致糟糕的事情发生,所以趁着 microSD 还在你的笔记本上,扩大那个分区:
**growpart /dev/mmcblk0 4
resize2fs /dev/mmcblk0p4**
注意:在 Fedora 中,**growpart** 命令由 **cloud-utils-growpart.noarch** 这个 RPM 提供。
5\. 文件系统更新后,你需要将 **vc4** 模块列入黑名单。[更多有关此 bug 的信息。][10]
我建议在启动树莓派之前这样做,因为不同的 spin 会有不同的表现。例如,(至少对我来说)在没有将 **vc4** 加入黑名单的情况下GNOME 在我首次启动后可以出现,但在系统更新后,它就不再出现了。KDE spin 则在第一次启动时根本不会出现。因此,在第一次启动之前,我们可能需要将 **vc4** 加入黑名单,直到该 bug 解决。
黑名单应该出现在两个不同的地方。首先,在你的 microSD 根分区上,在 **etc/modprobe.d/** 下创建一个 **vc4.conf**,内容是:**blacklist vc4**。其次,在你的 microSD 启动分区,将 **rd.driver.blacklist=vc4** 添加到 **extlinux/extlinux.conf** 文件的末尾。
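下面是创建上述黑名单文件的一个演示(在 /tmp 下模拟 microSD 的根分区;实际操作时应写入 microSD 根分区的 etc/modprobe.d/ 目录,挂载点以你的系统为准):

```shell
# 模拟 microSD 根分区,创建 vc4 黑名单文件
mkdir -p /tmp/sdroot/etc/modprobe.d
echo 'blacklist vc4' > /tmp/sdroot/etc/modprobe.d/vc4.conf
cat /tmp/sdroot/etc/modprobe.d/vc4.conf
```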
6\. 现在,你可以启动你的树莓派了。
### 启动
你要有耐心,特别是对于 GNOME 和 KDE 发行版来说。在 SSD固态硬盘和几乎即时启动的时代你很容易对派的启动速度感到不耐烦特别是第一次启动时。在第一次启动窗口管理器之前会先弹出一个初始配置页面可以配置 root 密码、常规用户、时区和网络。配置完毕后,你应该就能 SSH 到你的树莓派上,方便调试显示问题了。
### 系统更新
在树莓派上运行 Fedora 25 后,你最终(或立即)会想要更新系统。
首先,进行内核升级时,先熟悉你的 **/boot/extlinux/extlinux.conf** 文件。如果升级内核下次启动时除非手动选择正确的内核否则很可能会启动进入救援Rescue模式。避免这种情况发生的最好方法是在你的 **extlinux.conf** 中将定义救援镜像的那五行移动到文件的底部,这样最新的内核将在下次自动启动。你可以直接在派上编辑 **/boot/extlinux/extlinux.conf**,也可以将 microSD 挂载到笔记本上编辑:
**label Fedora 25 Rescue fdcb76d0032447209f782a184f35eebc (4.9.9-200.fc25.armv7hl)
            kernel /vmlinuz-0-rescue-fdcb76d0032447209f782a184f35eebc
![KDE on Raspberry Pi 3](https://opensource.com/sites/default/files/kde_on_rpi.png "KDE on Raspberry Pi 3")
*Raspberry Pi 3 上的 Fedora 25 workstation、 KDE 版本*
### Fedora Spins
在我尝试过的所有 Fedora spin 中,唯一有问题的是 XFCE spin我相信这是由于这个[已知的 bug][11] 造成的。
按照我在这里分享的步骤操作GNOME、KDE、LXDE 和 Minimal 都运行得很好。考虑到 KDE 和 GNOME 会占用更多资源,我会推荐想要在树莓派上使用 Fedora 25 的人使用 LXDE 或 Minimal。如果你是一位系统管理员想要一台廉价的、支持 SELinux 的服务器来满足你的安全考虑,只是想把树莓派用作服务器,有 22 端口和 vi 可用就行,那就用 Minimal 版本。对于开发人员或刚开始学习 Linux 的人来说LXDE 可能是更好的选择,因为它可以快速方便地访问所有基于 GUI 的工具如浏览器、IDE 和你可能需要的客户端。
![LXDE on Raspberry Pi 3](https://opensource.com/sites/default/files/lxde_on_rpi.png "LXDE on Raspberry Pi 3")
*Raspberry Pi 3 上的 Fedora 25 workstation、LXDE 版本*
看到越来越多的 Linux 发行版在基于 ARM 的树莓派上可用那真是太棒了。对于其第一个支持的版本Fedora 团队为日常 Linux 用户提供了更好的体验。我期待 Fedora 26 的改进和 bug 修复。
--------------------------------------------------------------------------------
作者简介:
Anderson Silva - Anderson 于 1996 年开始使用 Linux。更精确地说是 Red Hat Linux。 2007 年,他作为 IT 部门的发布工程师时加入红帽,他的职业梦想成为了现实。此后,他在红帽担任过多个不同角色,从发布工程师到系统管理员、高级经理和信息系统工程师。他是一名 RHCE 和 RHCA 以及一名活跃的 Fedora 包维护者。
----------------
via: https://opensource.com/article/17/3/how-install-fedora-on-raspberry-pi
作者:[Anderson Silva][a]
译者:[geekpi](https://github.com/geekpi)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
# [EPEL-5 的终点][1]
![](https://cdn.fedoramagazine.org/wp-content/uploads/2017/03/epel5-eol-945x400.png)
在过去十年中Fedora 项目还一直在为另一个操作系统用同样的版本构建软件包。**然而,随着 Red Hat Enterprise LinuxRHEL 5 在 2017 年 3 月 31 日停止支持EPEL-5 也将随之终结**。
### EPEL 的简短历史
RHEL 是 Fedora 发布版一个子集的下游重建,并由 Red Hat 提供多年支持。虽然这些软件包组成了完整的操作系统,但系统管理员一直都需要“更多”软件包。在 RHEL-5 之前,许多这类软件包由不同的人各自打包并提供。随着 Fedora Extras 逐渐包含了许多软件包,并有几位打包者加入了 Fedora随之出现了一个想法联合起来创建一个专门的子项目为特定的 RHEL 版本重建 Fedora 软件包,然后从 Fedora 的中心化服务器上分发。
经过多次讨论虽然还是未能提出一个引人注目的名称这个子项目还是创建了起来名为“Enterprise Linux 额外软件包”EPEL。在首次为 RHEL-4 重建软件包时,主要目标是在 RHEL-5 发布时提供尽可能多的可用于 RHEL-5 的软件包。打包者做了很多艰苦的工作,但大部分工作是在制定 EPEL 未来十年的规则以及指导。[从所有人都能看到的邮件归档中][2]可以看到 Fedora 贡献者的激烈讨论,他们担心将 Fedora 的发布重心转移到外部贡献者,会与已经存在的软件包产生冲突。
最后EPEL-5 在 2007 年 4 月的某个时候上线,在接下来的十年中,它已经成长为一个仓库,其中包含 5000 多个源码包以及每天会有 20 万个左右独立 IP 地址检查,并在 2013 年初达到 24 万的高峰。虽然为 EPEL 构建的每个包都是使用 RHEL 软件包完成的,但所有这些软件包对于 RHEL 的各种社区重建版本CentOS、Scientific Linux、Amazon Linux都是有用的。这意味着随着 RHEL 版本的推出,这些生态系统的增长带来了更多的用户使用 EPEL 并帮助打包随后的 RHEL 版本。然而随着新版本以及重建的使用增加EPEL-5 的用户数量逐渐下降为每天大约 16 万个独立 IP 地址。此外,在此期间,开发人员支持的软件包数量已经下降,仓库大小已缩小到 2000 个源代码包。
收缩的部分原因是由于 2007 年制定的规定。当时Red Hat Enterprise Linux 被认为只有 6 年的活跃生命周期。有人认为在这样一个“有限”的周期中软件包可以像在 RHEL 中那样在 EPEL 中被“冻结”这意味着无论何时都只允许向后移植可能的修复而不允许大的改动。由于没有人来打包软件包不断从 EPEL-5 中移除,因为打包者不再愿意尝试向后移植。尽管各种规则被放宽以允许更大的更改,但 Fedora 使用的打包规则从 2007 年以来也在不断变化和改进,这使得在较旧的操作系统上重新打包较新的版本变得越来越难。
### 2017 年 3 月 31 日会发生什么
如上所述3 月 31 日,红帽将终止支持并不再为普通客户提供 RHEL-5 的更新。这意味着 Fedora 和各种重建者将开始各种归档流程。对于 EPEL 项目,这意味着我们将遵循 Fedora 发行版每年发布的步骤。
1. 在 **3 月 27 日**,任何新版本都不会被允许推送到 EPEL-5仓库本质上被冻结。这可以让镜像拥有一个清晰的文件树。
2. EPEL-5 中的所有包将从主镜像 `/pub/epel/5/` 以及 `/pub/epel/testing/5/` 移动到 `/pub/archive/epel/`。**这将会在 27 号开始**,以便所有的归档镜像可以把它写入磁盘。
3. 因为 3 月 31 日是星期五,而系统管理员并不喜欢周五惊喜,所以当天不会有任何变化。**4 月 3 日**,镜像管理器将更新指向归档。
4. **4 月 6 日**/pub/epel/5/ 树将被删除,镜像也将相应更新。
对于使用 cron 执行 yum 更新的系统管理员而言,这应该只是一个小麻烦,系统能继续更新,甚至能安装归档中的任何软件包。那些直接使用脚本从镜像下载的系统管理员会有点麻烦,需要将脚本更改到 /pub/archive/epel/5/ 这个新的位置。
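对于第二种情况,修改脚本通常只是一次查找替换。下面用一个临时文件演示这个替换(文件名和 URL 均为示例):

```shell
# 演示:把配置中指向 /pub/epel/5/ 的 URL 替换为归档位置
printf 'baseurl=https://example.org/pub/epel/5/os/\n' > /tmp/epel-demo.repo
sed -i 's#/pub/epel/5/#/pub/archive/epel/5/#g' /tmp/epel-demo.repo
cat /tmp/epel-demo.repo
```

替换后该文件的内容指向 /pub/archive/epel/5/os/。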
虽然有点烦人,但对许多仍在使用旧版 Linux 的系统管理员来说,这反而算是因祸得福。由于软件包会定期从 EPEL-5 中删除,各种支持邮件列表以及 IRC 频道里经常有系统管理员想知道他们需要的软件包为何消失了。归档完成后,这将不再是一个问题,因为不会再有软件包被删除了 :)。
对于受此问题影响的系统管理员,较旧的 EPEL 软件包仍然可用,但速度较慢。所有 EPEL 软件包都是在 Fedora Koji 系统中构建的,所以你可以使用[ Koji 搜索][3]到较旧版本的软件包。
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/the-end-of-the-line-for-epel-5/
作者:[smooge][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://smooge.id.fedoraproject.org/
[1]:https://fedoramagazine.org/the-end-of-the-line-for-epel-5/
[2]:https://www.redhat.com/archives/epel-devel-list/2007-March/thread.html
[3]:https://koji.fedoraproject.org/koji/search
如何在树莓派上部署 Kubernetes
============================================================
> 只用几步,使用 Weave Net 在树莓派上设置 Kubernetes。
![How to deploy Kubernetes on the Raspberry Pi ](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/raspberrypi_cartoon.png?itok=sntNdheJ "How to deploy Kubernetes on the Raspberry Pi ")
>图片提供 opensource.com
当我开始对[ARM][6]设备,特别是 Raspberry Pi 感兴趣时,我的第一个项目是一个 OpenVPN 服务器。
通过将 Raspberry Pi 作为家庭网络的安全网关,我可以使用我的手机来控制我的桌面,并远程播放 Spotify打开文档以及一些其他有趣的东西。我在第一个项目中使用了一个现有的教程因为我害怕自己使用命令行。
更多关于 Raspberry Pi 的:
* [最新的 Raspberry Pi][1]
* [什么是 Raspberry Pi][2]
* [开始使用 Raspberry Pi][3]
* [给我们发送你的 Raspberry Pi 项目和教程][4]
几个月后,这种恐惧消失了。我扩展了我的原始项目,并使用[ Samba 服务器][7]从文件服务器隔离了 OpenVPN 服务器。这是我第一个没有完全按照教程来的项目。不幸的是,在我的 Samba 项目结束后,我意识到我没有记录任何东西,所以我无法复制它。为了重新创建它,我不得不重新参考那些单独的教程,并将它们放在一起。
我学到了关于开发人员工作流程的宝贵经验 - 跟踪你所有的更改。我在本地做了一个小的 git 仓库,并记录了我输入的所有命令。
### 发现 Kubernetes
2015 年 5 月,我发现了 Linux 容器和 Kubernetes。我觉得 Kubernetes 很有魅力,我可以使用仍在技术上发展的概念 - 并且我实际上可以用它。平台本身及其所呈现的可能性令人兴奋。直到那时,我刚刚在一块 Raspberry Pi 上运行了一个程序。有了 Kubernetes我可以做出比以前更高级的配置。
那时候Docker如果我没记错的话还是 v1.6)在 ARM 上有一个 bug这意味着在 Raspberry Pi 上运行 Kubernetes 实际上是不可能的。在早期的 0.x 版本中Kubernetes 的变化很快。每次我找到一篇关于如何在 AMD64 上设置 Kubernetes 的指南时,它针对的都是一个较旧的版本,并且与我当时使用的完全不兼容。
我用自己的方法在 Raspberry Pi 上创建了一个 Kubernetes 节点,在 Kubernetes v1.0.1 时,我用 Docker v1.7.1 [让它工作了][8]。这是将 Kubernetes 完整部署到 ARM 上的方法。
在 Raspberry Pi 上运行 Kubernetes 的优势在于,由于 ARM 设备非常小巧,因此不会产生大量的功耗。如果程序以正确的方式构建,那么同样的程序也能以同样的方法在 AMD64 上运行。有这样一块小型 IoT 板,为教育创造了巨大的机会,在会议等场合做演示时也非常有用,而且携带 Raspberry Pi通常比携带大型英特尔机器容易得多。
现在,按照[我的建议][9]ARM32 位和 64 位)支持已被合并到 Kubernetes 核心中ARM 的二进制文件会随 Kubernetes 自动发布。虽然我们还没有自动化的 CI持续集成系统在 PR 合并之前自动确认它能在 ARM 上工作,但它目前工作得不错。
### Raspberry Pi 上的分布式网络
我在使用 [kubeadm][10] 时发现了 Weave Net。[Weave Mesh][11] 是一个有趣的分布式网络解决方案,因此我开始阅读更多关于它的内容。2016 年 12 月,我收到了来自 [Weaveworks][12] 的第一份合同工作,成为 Weave Net 中 ARM 支持团队的一员。
我很高兴 Weave Net 在 Raspberry Pi 上能有工业应用场景,比如那些需要更加移动化的工厂。目前,把 Weave Scope 或 Weave Cloud 部署到 Raspberry Pi 上可能还不行(尽管可以考虑使用其他 ARM 设备),因为我猜这些软件需要更多的内存才能运行。理想情况下,如果树莓派升级到 2GB 内存,我想就可以在它上面运行 Weave Cloud 了。
在 Weave Net 1.9 中Weave Net 支持了 ARM。kubeadm以及一般意义上的 Kubernetes可以在多个平台上工作。你可以像在任何 AMD64 设备上一样,使用 Weave 将 Kubernetes 部署到 ARM 上:先安装 Docker、kubeadm、kubectl 和 kubelet然后初始化主节点
```
kubeadm init
```
接下来,用下面的命令安装你的 pod 网络:
```
kubectl apply -f https://git.io/weave-kube
```
在此之前,在 ARM 上你只能用 Flannel 安装 pod 网络,但 Weave Net 1.9 改变了这一点,它正式支持了 ARM。
最后,加入你的节点:
```
kubeadm join --token <token> <master-ip>
```
就是这样了Kubernetes 已经部署到树莓派上了。和在 Intel/AMD64 上一样,你不需要做任何特别的事情Weave Net 在 ARM 上同样能工作。
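部署完成后,可以用一个简单的清单验证集群能否调度容器。下面是一个假设性的示例(名称和镜像均为示例;在树莓派上必须选择支持 ARM 架构的镜像API 版本以你的集群为准):

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-arm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-arm
  template:
    metadata:
      labels:
        app: hello-arm
    spec:
      containers:
      - name: hello
        image: arm32v7/nginx   # 示例:带 ARM 架构支持的镜像
        ports:
        - containerPort: 80
```

保存为 hello-arm.yaml 后,运行 `kubectl apply -f hello-arm.yaml`,再用 `kubectl get pods` 观察 Pod 是否进入 Running 状态。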
### Raspberry Pi 社区
我希望 Raspberry Pi 社区能成长起来,并把这种心态传播到世界其他地方。他们在英国和其他国家已经取得了成功,但在芬兰还不算成功。我希望这个生态系统能够继续扩展,让更多的人学习如何把 Kubernetes 或 Weave 部署到 ARM 设备上。毕竟,这些正是我学到的东西:通过在 Raspberry Pi 设备上自学,我更好地了解了 ARM 以及在其上部署软件。
### 最后的思考
关于 Raspberry Pi 和 Kubernetes我所了解的一切都来自加入用户社区、提出问题以及各种程度的测试。
我是居住在芬兰的说瑞典语的高中生,到目前为止,我从来没有参加过编程或计算机课程。但我仍然能够加入开源社区,因为它对年龄或教育没有限制:你的工作是根据其优点来判断的。
对于任何那些第一次参与开源项目的人而言:深入进去,因为这是完全值得的。你做什么没有任何限制,你将永远不知道开源世界将为你提供哪些机会。这会很有趣,我保证!
--------------------------------------------------------------------------------
作者简介:
Lucas Käldström - 谢谢你发现我!我是一名来自芬兰的说瑞典语的高中生。
------------------
via: https://opensource.com/article/17/3/kubernetes-raspberry-pi
作者:[ Lucas Käldström][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/luxas
[1]:https://opensource.com/tags/raspberry-pi?src=raspberry_pi_resource_menu
[2]:https://opensource.com/resources/what-raspberry-pi?src=raspberry_pi_resource_menu
[3]:https://opensource.com/article/16/12/getting-started-raspberry-pi?src=raspberry_pi_resource_menu
[4]:https://opensource.com/article/17/2/raspberry-pi-submit-your-article?src=raspberry_pi_resource_menu
[5]:https://opensource.com/article/17/3/kubernetes-raspberry-pi?rate=xHFaLw4Y4mkFiZww6sIHYnkEleqbqObgjXTC0ALUn9s
[6]:https://en.wikipedia.org/wiki/ARM_architecture
[7]:https://www.samba.org/samba/what_is_samba.html
[8]:https://github.com/luxas/kubernetes-on-arm
[9]:https://github.com/kubernetes/community/blob/master/contributors/design-proposals/multi-platform.md
[10]:https://kubernetes.io/docs/getting-started-guides/kubeadm/
[11]:https://github.com/weaveworks/mesh
[12]:https://www.weave.works/
[13]:https://opensource.com/user/113281/feed
[14]:https://opensource.com/users/luxas