[#]: collector: (lujun9972)
[#]: translator: (Yufei-Yan)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12411-1.html)
[#]: subject: (What is IoT? The internet of things explained)
[#]: via: (https://www.networkworld.com/article/3207535/what-is-iot-the-internet-of-things-explained.html)
[#]: author: (Josh Fruhlinger https://www.networkworld.com/author/Josh-Fruhlinger/)

What is IoT? The internet of things explained
======

> The internet of things (IoT) is a network of connected smart devices that provides rich data, but it can also be a security nightmare.

![](https://img.linux.net.cn/data/attachment/album/202007/13/144427tnrjnnh686n5kgtn.jpg)

The internet of things (IoT) is a catch-all term for the growing number of electronic devices that aren't traditional computing devices, but are connected to the internet to receive data, send data, or both.

Countless things now fall into that category: internet-connected "smart" versions of traditional appliances, such as refrigerators and light bulbs; gadgets that can only exist in an internet-enabled world, such as Alexa-style digital assistants; and internet-connected sensors that are transforming factories, healthcare, transportation, distribution centers, and farms.

### What is the internet of things?

The IoT brings the power of the internet, data processing, and analytics to the real world of physical objects. For consumers, this means interacting with the global information network without the intermediary of a keyboard and screen; many of their everyday objects can take instructions from that network with minimal human intervention.

The internet has long made knowledge work easier, and in enterprise settings the IoT can bring the same efficiency to manufacturing and distribution. Millions, even billions, of embedded, internet-enabled sensors worldwide are providing an incredibly rich set of data that companies can use to keep their operations safe, track assets, and reduce manual processes. Researchers can also use the IoT to gather data about people's preferences and behavior, though that can have serious implications for privacy and security.

### How big is it?

In a word: enormous. [Priceonomics breaks it down][2]: in 2020 there were more than 5 billion IoT devices, generating 4.4 zettabytes of data. (Translator's note: 1 zettabyte = 10<sup>9</sup> terabytes = 10<sup>12</sup> gigabytes.) By comparison, IoT devices generated a mere 100 billion gigabytes of data in 2013. The money to be made in the IoT market is just as staggering: estimates of the market's value by 2025 range from $1.6 trillion to $14.4 trillion.

### The history of the IoT

A world of connected devices and sensors everywhere is one of the oldest tropes of science fiction. IoT lore credits [a vending machine at Carnegie Mellon University that was connected to ARPANET in 1970][3] as the world's first IoT device, and many technologies have been touted as enabling "smart", IoT-style features to lend them a futuristic sheen. But the term "internet of things" was coined in 1999 by British technologist [Kevin Ashton][4].

At first, the technology lagged behind the vision. Every internet-connected thing needed a processor and a way to communicate with other things, preferably wirelessly, and those factors added costs and performance requirements that made widespread IoT rollouts impractical until at least the mid-2000s, when Moore's Law caught up.

One important milestone was the [widespread adoption of RFID tags][5], cheap minimalist transponders that can be stuck on any object to connect it to the larger internet. Ubiquitous Wi-Fi and 4G made wireless connectivity simple for designers to assume just about anywhere. And the arrival of IPv6 means that connecting billions of gadgets to the internet no longer risks exhausting the supply of IP addresses. (Related: [Can IoT networking drive adoption of IPv6?][6])

### How does the IoT work?

The basic elements of the IoT are devices that gather data. Broadly speaking, they are internet-connected devices, so each one has an IP address. They range in complexity from autonomous vehicles that haul products around factory floors to simple sensors that monitor the temperature in buildings; they also include personal devices such as the fitness trackers that count your daily steps. To make that data useful, it needs to be collected, processed, filtered, and analyzed, and each of those steps can happen in a variety of ways.

Collecting the data works by transmitting it from the devices to a gathering point. The data can be moved over a range of wireless or wired networks. It can be sent over the internet to a data center or a cloud with storage and compute power, or the transfer can be staged, with intermediary devices aggregating the data before sending it along.

Processing the data can take place in data centers or the cloud, but sometimes that isn't an option. For critical devices, such as shutoffs in industrial settings, the delay of sending data from the device to a remote data center is too great. The round trip of sending the data, processing it, analyzing it, and returning instructions (close that valve before the pipes burst) can simply take too long. In such cases, edge computing can come into play: a smart edge device can aggregate data, analyze it, and respond if necessary, all within relatively close physical distance, thereby reducing the delay. Edge devices also have upstream connectivity so data can be sent on for further processing and storage.

![][7]

*How the internet of things works.*

### Examples of IoT devices

Essentially, any device that can gather data from the physical world and send it back can participate in the IoT ecosystem. Typical examples include smart home devices, RFID tags, and industrial sensors. These sensors can monitor a wide range of factors: temperature and pressure in industrial systems, the status of critical parts in machinery, patient vital signs, the use of water and electricity, and many, many other possibilities.

Factory robots can be considered IoT devices, as can the autonomous vehicles that move products around industrial settings and warehouses.

Other examples include wearables and home security systems. There are also more foundational devices, such as the [Raspberry Pi][8] and [Arduino][9], that let you build your own IoT endpoints. And even though you may think of your smartphone as a pocket computer, it may well be beaming data about your location and behavior to back-end services in very IoT-like ways.

#### Device management

For all of these devices to work together, they need to be authenticated, provisioned, configured, and monitored, and patched and updated as necessary. Too often, all of this happens within a single device vendor's proprietary system; or, riskier still, it doesn't happen at all. But the industry is transitioning to [standards-based device management models][10], which allow IoT devices to interoperate and ensure that devices don't become orphaned.

#### IoT communication standards and protocols

When IoT gadgets talk to other devices, they can use a variety of communication standards and protocols, many of them tailored to devices with limited processing power or little electrical power. Some of these you've definitely heard of: some devices use Wi-Fi or Bluetooth, but many more are specialized for the IoT world. ZigBee, for example, is a wireless protocol for low-power, short-range communication, while MQTT (Message Queuing Telemetry Transport) is a publish/subscribe messaging protocol built for devices connected over unreliable or delay-prone networks. (See Network World's glossary: [IoT standards and protocols][11].)
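To make the publish/subscribe model concrete, here is a hedged sketch of MQTT in action. It assumes the Mosquitto command-line clients are installed (the `mosquitto-clients` package on many distributions) and that a broker is reachable at the hypothetical host `broker.example.com`; the topic names are invented for illustration.

```
# A sensor publishes one temperature reading to a topic:
$ mosquitto_pub -h broker.example.com -t "sensors/greenhouse/temperature" -m "21.7"

# A collector subscribes to every topic under sensors/ and prints
# each message as it arrives:
$ mosquitto_sub -h broker.example.com -t "sensors/#"
```

Because publishers and subscribers agree only on topic names, a battery-powered sensor can drop on and off a flaky network without the collector needing to know anything about it, which is exactly the use case MQTT was designed for.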
The IoT will also benefit from the speed and bandwidth that 5G brings to cellular networks, although that use case will [lag behind ordinary phones][12].

### IoT, edge computing, and the cloud

![][13]

*How edge computing enables the IoT.*

For many IoT systems, there's a lot of data coming in fast, and that flood has given rise to a new technology category: [edge computing][14], which consists of devices placed near IoT devices to process the data coming from them. These machines process that data and send only the relevant material on to a more centralized system for analysis. Imagine, for instance, a network of dozens of IoT security cameras: instead of flooding the building's security operations center (SOC) with every live stream at once, edge computing analyzes the incoming video directly and alerts the SOC only when one of the cameras detects movement.

And once the data has been processed, where does it go? Well, it might go to your data center, but more often than not it will end up in the cloud.

The elasticity of cloud computing is a great fit for IoT scenarios where data arrives intermittently or asynchronously. Many of the cloud giants, including [Google][15], [Microsoft][16], and [Amazon][17], have IoT offerings.

### IoT platforms

The cloud giants are trying to sell more than just a place to park the data your sensors collect. They are offering full platforms that orchestrate the various elements of an IoT system and bundle many functions together. In essence, an IoT platform serves as middleware that connects IoT devices and edge gateways with the applications you use to handle the IoT data. That said, every platform vendor seems to have a slightly different interpretation of what an IoT platform is, the better to [differentiate itself from the competition][18].

### IoT and data

As mentioned earlier, all of these IoT devices collect zettabytes of data, which is sent through edge gateways to a platform for processing. In many cases, that data is the very reason the IoT is deployed in the first place. By gathering data from sensors in the real world, organizations can make nimble decisions in real time.

Oracle, for example, [imagines a scenario][19] in which people at a theme park are encouraged to download an app that offers information about the park. At the same time, the app sends GPS signals back to the park's management to help predict wait times in lines. With that information, the park can act in the short term (for example, by adding staff to increase the capacity of some attractions) and in the long term (by learning which attractions are the most and least popular).

These decisions can also be made without any human intervention. For example, data gathered by pressure sensors in a chemical plant's pipeline could be analyzed by software on an edge device that spots the threat of a pipe burst, and that information could trigger a signal to shut valves and avert a spill.

### IoT and big-data analytics

The theme park example is easy to get your head around, but it's small potatoes compared with many real-world IoT data-harvesting operations. Many big-data operations use information collected from IoT devices, correlated with other data points, to analyze and predict human behavior. Software Advice gives [a few examples][20], including a service from Birst that matches coffee-brewing information collected from internet-connected coffee makers with social media posts to see whether customers are talking about the coffee brand online.

In another recent, dramatic example, X-Mode released a map based on location-tracking data showing [where people who partied at Ft. Lauderdale during spring break in March 2020 ended up][21], just as the coronavirus was accelerating across the United States. The map was stunning not only because it showed the potential spread of the virus, but also because it illustrated just how closely IoT devices can track us. (For more on the IoT and analytics, see [here][22].)

### IoT data and AI

The volume of data IoT devices can gather is far larger than any human can deal with in a useful way, and certainly not in real time. We've already seen that edge computing devices are needed just to make sense of the raw data coming in from IoT endpoints. There is also a need to detect and deal with data that might be [just plain wrong][23].

Many IoT providers therefore also offer machine learning and artificial intelligence capabilities to make sense of the collected data. IBM's Jeopardy!-winning Watson platform, for instance, can be [trained on IoT data sets][24] to produce useful results in the field of predictive maintenance; analyzing data from drones, for example, can distinguish between trivial damage to a bridge and cracks that need attention. Meanwhile, ARM is working on [low-power chips][25] that can provide AI capabilities on IoT endpoints themselves.

### IoT and business

Business uses for the IoT include keeping track of customers, inventory, and the status of important components. [IoT for All][26] points to four industries that have been transformed by the IoT:

  * **Oil and gas**: Isolated drilling sites can be monitored by IoT sensors better than by human intervention.
  * **Agriculture**: Data from IoT sensors on crops in the field can be used to increase yields.
  * **HVAC**: Manufacturers can monitor climate-control systems all over the country.
  * **Brick-and-mortar retail**: Customers can be micro-targeted with offers on their phones as they linger in certain parts of a store.

More generally, enterprises are looking for IoT solutions that can help in [four areas][27]: energy use, asset tracking, security, and the customer experience.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3207535/what-is-iot-the-internet-of-things-explained.html

Author: [Josh Fruhlinger][a]
Selected by: [lujun9972][b]
Translated by: [Yufei-Yan](https://github.com/Yufei-Yan)
Proofread by: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://www.networkworld.com/author/Josh-Fruhlinger/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/category/internet-of-things/
[2]: https://priceonomics.com/the-iot-data-explosion-how-big-is-the-iot-data/
[3]: https://www.machinedesign.com/automation-iiot/article/21836968/iot-started-with-a-vending-machine
[4]: https://www.visioncritical.com/blog/kevin-ashton-internet-of-things
[5]: https://www.networkworld.com/article/2319384/rfid-readers-route-tag-traffic.html
[6]: https://www.networkworld.com/article/3338106/can-iot-networking-drive-adoption-of-ipv6.html
[7]: https://images.idgesg.net/images/article/2020/05/nw_how_iot_works_diagram-100840757-orig.jpg
[8]: https://www.networkworld.com/article/3176091/10-killer-raspberry-pi-projects-collection-1.html
[9]: https://www.networkworld.com/article/3075360/arduino-targets-the-internet-of-things-with-primo-board.html
[10]: https://www.networkworld.com/article/3258812/the-future-of-iot-device-management.html
[11]: https://www.networkworld.com/article/3235124/internet-of-things-definitions-a-handy-guide-to-essential-iot-terms.html
[12]: https://www.networkworld.com/article/3291778/what-s-so-special-about-5g-and-iot.html
[13]: https://images.idgesg.net/images/article/2017/09/nw_how_edge_computing_works_diagram_1400x1717-100736111-orig.jpg
[14]: https://www.networkworld.com/article/3224893/what-is-edge-computing-and-how-it-s-changing-the-network.html
[15]: https://cloud.google.com/solutions/iot
[16]: https://azure.microsoft.com/en-us/overview/iot/
[17]: https://aws.amazon.com/iot/
[18]: https://www.networkworld.com/article/3336166/why-are-iot-platforms-so-darn-confusing.html
[19]: https://blogs.oracle.com/bigdata/how-big-data-powers-the-internet-of-things
[20]: https://www.softwareadvice.com/resources/iot-data-analytics-use-cases/
[21]: https://www.cnn.com/2020/04/04/tech/location-tracking-florida-coronavirus/index.html
[22]: https://www.networkworld.com/article/3311919/iot-analytics-guide-what-to-expect-from-internet-of-things-data.html
[23]: https://www.networkworld.com/article/3396230/when-iot-systems-fail-the-risk-of-having-bad-iot-data.html
[24]: https://www.networkworld.com/article/3449243/watson-iot-chief-ai-can-broaden-iot-services.html
[25]: https://www.networkworld.com/article/3532094/ai-everywhere-iot-chips-coming-from-arm.html
[26]: https://www.iotforall.com/4-unlikely-industries-iot-changing/
[27]: https://www.networkworld.com/article/3396128/the-state-of-enterprise-iot-companies-want-solutions-for-these-4-areas.html
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12413-1.html)
[#]: subject: (Turn your Raspberry Pi homelab into a network filesystem)
[#]: via: (https://opensource.com/article/20/5/nfs-raspberry-pi)
[#]: author: (Chris Collins https://opensource.com/users/clcollins)

Turn your Raspberry Pi homelab into a network filesystem
======

> Add shared filesystems to your homelab with an NFS server.

![](https://img.linux.net.cn/data/attachment/album/202007/14/154349bgrgrwzexluuvzev.jpg)

A shared filesystem is a great way to add versatility and functionality to a homelab. Having a centralized filesystem shared with the clients in the lab makes organizing data, doing backups, and sharing data considerably easier. This is especially useful for web applications load-balanced across multiple servers and for persistent volumes used by [Kubernetes][2], since it allows Pods to be spun up with persistent data on any number of nodes.

Whether your homelab is made up of ordinary computers, surplus enterprise servers, or Raspberry Pis and other single-board computers (SBCs), a shared filesystem is a useful asset, and a network filesystem (NFS) server is a great way to create one.

I have written before about [setting up a "private cloud at home"][3], a homelab made up of Raspberry Pis or other SBCs and perhaps some other consumer hardware or a desktop PC. An NFS server is an ideal way to share data between these components. Since the operating system on most SBCs runs off an SD card, there are some challenges. SD cards suffer from increased failure rates, especially when used as the OS disk for a computer, because they are not made to be read from and written to constantly. What you really need is a real hard drive: hard drives are generally cheaper per gigabyte than SD cards, especially for larger disks, and they are less likely to sustain continual failures. Raspberry Pi 4s now come with USB 3.0 ports, and USB 3.0 hard drives are ubiquitous and affordable. It's a perfect match. For this project, I will use a 2TB USB 3.0 external hard drive plugged into a Raspberry Pi 4 running the NFS server.

![Raspberry Pi with a USB hard disk][4]

### Install the NFS server software

I run Fedora Server on my Raspberry Pi, but this project will work with other distributions as well. To run an NFS server on Fedora, you need the `nfs-utils` package, which, luckily, is already installed (at least in Fedora 31). You also need the `rpcbind` package if you plan to run NFSv3 services, but it is not strictly required for NFSv4.

If these packages are not already on your system, install them with the `dnf` command:

```
# Install nfs-utils and rpcbind
$ sudo dnf install nfs-utils rpcbind
```

Raspbian, another popular OS used with Raspberry Pis, has an almost identical setup. The package names differ, but that is the only major difference. To install an NFS server on a system running Raspbian, you need these packages:

  * `nfs-common`: files common to NFS servers and clients
  * `nfs-kernel-server`: the main NFS server package

Raspbian uses `apt-get` for package management (rather than `dnf`, as Fedora does), so use that to install the packages:

```
# For a Raspbian system, use apt-get to install the NFS packages
$ sudo apt-get install nfs-common nfs-kernel-server
```

### Prepare a USB hard drive as the storage device

As I mentioned above, a USB hard drive is a good choice for providing storage for Raspberry Pis or other SBCs, especially because the SD card used for the OS disk image isn't suited to that use. For your private cloud at home, you can use cheap USB 3.0 hard drives for large-scale storage. Plug the disk in and use `fdisk` to find the device ID assigned to it, so you can work with it.

```
# Find your disk with fdisk
# Unrelated disk info omitted
$ sudo fdisk -l

Disk /dev/sda: 1.84 TiB, 2000398933504 bytes, 3907029167 sectors
Disk model: BUP Slim BK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xe3345ae9

Device     Boot Start        End    Sectors  Size Id Type
/dev/sda1        2048 3907028991 3907026944  1.8T 83 Linux
```

For clarity, in the example output above I have omitted the information for all disks other than the one I'm interested in. You can see that the USB disk I want to use was assigned the device `/dev/sda`, and you can see some information about the model (`Disk model: BUP Slim BK`), which helps me identify the correct disk. The disk already has a partition, and its size confirms it is the disk I'm looking for.

Note: make sure to identify the correct disk and partition for your device. It may differ from the example above.

Each partition created on a drive gets a special universally unique identifier (UUID). The computer uses the UUID to make sure it mounts the right partition to the right location, via the `/etc/fstab` configuration file. You can retrieve a partition's UUID with the `blkid` command:

```
# Get the block device attributes for the partition
# Make sure to use the partition appropriate for your device; it may differ
$ sudo blkid /dev/sda1

/dev/sda1: LABEL="backup" UUID="bd44867c-447c-4f85-8dbf-dc6b9bc65c91" TYPE="xfs" PARTUUID="e3345ae9-01"
```

In this case, the UUID of `/dev/sda1` is `bd44867c-447c-4f85-8dbf-dc6b9bc65c91`. Yours will be different, so make a note of it.

### Configure the Raspberry Pi to mount this disk on startup, then mount it

Now that you have identified the disk and partition to use, you need to tell the computer how to mount it, to do so every time it boots, and to go ahead and mount it now. Because this is a USB disk that might be unplugged, you will also configure the Raspberry Pi not to wait on boot if the disk is not plugged in or is otherwise unavailable.

In Linux, this is done by adding the partition to the `/etc/fstab` configuration file, including where you want it to be mounted and some arguments telling the computer how to treat it. This example will mount the partition at `/srv/nfs`, so start by creating that path:

```
# Create the mount point for the disk partition
$ sudo mkdir -p /srv/nfs
```

Next, modify the `/etc/fstab` file using the following syntax:

```
<disk id> <mountpoint> <filesystem type> <options> <fs_freq> <fs_passno>
```

Use the UUID you identified earlier for the disk ID. As mentioned in the previous step, the mount point is `/srv/nfs`. For the filesystem type, it is usually best to select the actual filesystem, but since this is a USB disk, use `auto`.

For the options, use `nosuid,nodev,nofail`.

#### An aside about man pages

There are, in fact, a *lot* of possible options, and the manual (`man`) pages are the best way to see them. Looking at the man page for `fstab` is a good place to start:

```
# Open the man page for fstab
$ man fstab
```

This opens the manual/documentation associated with the `fstab` command. In the man page, each option is broken down to show what it does and the common choices. For example, "The fourth field (fs_mntopts)" gives some basic information about the options available in that field and directs you to `man 8 mount` for a more in-depth description of the `mount` options. That makes sense, because the `/etc/fstab` file, in essence, tells the computer how to automatically mount disks, in the same way you would do it manually with the `mount` command.

You can get more information about the options you will use from `mount`'s man page. The number 8 refers to the man page section; here, section 8 covers *System Administration tools and Daemons*.

You can get a list of the standard sections from the man page for `man` itself.

Back to mounting the disk: let's take a look at `man 8 mount`.

```
# Open the man page for mount in section 8
$ man 8 mount
```

From this man page, you can see what the options listed above do:

  * `nosuid`: do not honor the suid/guid bits. Do not allow any files that might be on the USB disk to be executed as root. A good security practice.
  * `nodev`: do not interpret character or block special devices on the filesystem; that is, ignore any device nodes that might be on the USB disk. Another good security practice.
  * `nofail`: do not log any errors if the device does not exist. This is a USB disk and might not be plugged in, so it will be ignored in that case.

Returning to the line you are adding to the `/etc/fstab` file, there are two final options: `fs_freq` and `fs_passno`. Their values are related to somewhat legacy options, and *most* modern systems just use `0` for both, especially for filesystems on USB disks. The `fs_freq` value relates to the `dump` command and making dumps of the filesystem. The `fs_passno` value defines which filesystems to `fsck` on boot, and in what order; if it is set, usually the root partition gets `1` and other filesystems get `2`. Set the value to `0` to skip using `fsck` on this partition.

In your preferred editor, open the `/etc/fstab` file and add an entry for the partition on the USB disk, replacing the values here with those gathered in the previous steps.

```
# With sudo, or as root, add the partition info to the /etc/fstab file
UUID="bd44867c-447c-4f85-8dbf-dc6b9bc65c91" /srv/nfs auto nosuid,nodev,nofail,noatime 0 0
```
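To confirm the new entry works without rebooting, you can ask `mount` to process everything in `/etc/fstab` and then check the result. This is a quick verification step (not in the original instructions); the mount point matches the example above:

```
# Mount everything listed in /etc/fstab, then confirm the mount point is active
$ sudo mount -a
$ findmnt /srv/nfs
```

If `findmnt` prints a line for `/srv/nfs`, the disk is mounted with the options you configured.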
### Enable and start the NFS server

With the packages installed and the partition added to your `/etc/fstab` file, you can now start the NFS server. On a Fedora system, you need to enable and start two services: `rpcbind` and `nfs-server`. Use the `systemctl` command to do so:

```
# Start the NFS server and rpcbind
$ sudo systemctl enable rpcbind.service
$ sudo systemctl enable nfs-server.service
$ sudo systemctl start rpcbind.service
$ sudo systemctl start nfs-server.service
```

On Raspbian or other Debian-based distributions, you just need to enable and start the `nfs-kernel-server` service using `systemctl`, the same way as above.

#### RPCBind

The rpcbind utility is used to map remote procedure call (RPC) services to the ports on which they listen. According to the rpcbind man page:

> "When an RPC service is started, it tells rpcbind the address at which it is listening and the RPC program numbers it is prepared to serve. When a client wishes to make an RPC call to a given program number, it first contacts rpcbind on the server machine to determine the address where RPC requests should be sent."

In the case of an NFS server, rpcbind maps the protocol number for NFS to the port on which the NFS server is listening. However, NFSv4 does not require the use of rpcbind. If you use *only* NFSv4 (by removing versions 2 and 3 from the configuration), rpcbind is not needed. I have included it here for backward compatibility with NFSv3.

### Export the mounted filesystem

The NFS server decides which filesystems are shared with (exported to) which remote clients based on another configuration file, `/etc/exports`. This file is just a map of IPs (or subnets) to the filesystems to be shared, plus some options (read-only or read-write, root squash, etc.). The format of the file is:

```
<directory> <host>(options)
```

In this example, you will export the partition mounted at `/srv/nfs`. That is the "directory" piece.

The second part, the host, covers the hosts you want to export this partition to. These can be a single host, specified by fully qualified domain name (FQDN), hostname, or IP address, or a group of hosts, using a wildcard character to match domains (such as *.example.org), an IP network (such as classless inter-domain routing, or CIDR, notation), or netgroups.

The third part consists of the options applied to the export:

  * `ro/rw`: export the filesystem as read-only or read-write
  * `wdelay`: delay writes to the disk if another write is imminent, to improve performance (this is *probably* less useful with a solid-state USB disk, if that's what you're using)
  * `root_squash`: prevent any root users on the clients from having root access on the host, and set the root UID to `nfsnobody` as a security precaution

Test exporting the partition you have mounted at `/srv/nfs` to a single client, for example, a laptop. Identify your client's IP address (my laptop's is `192.168.2.64`, but yours will likely be different). You could share it to a large subnet, but for testing, limit it to the single IP address. The CIDR notation for just this IP is `192.168.2.64/32`; a `/32` subnet represents a single IP.

Using your preferred editor, edit the `/etc/exports` file with your directory, the host's CIDR, and the `rw` and `root_squash` options:

```
# Edit your /etc/exports file like so, substituting the information from your systems
/srv/nfs 192.168.2.64/32(rw,root_squash)
```

Note: if you copied the `/etc/exports` file from somewhere else, or overwrote the original with a copy, you may need to restore the file's SELinux context. You can do this with the `restorecon` command:

```
# Restore the SELinux context of the /etc/exports file
$ sudo restorecon /etc/exports
```

Once this is done, restart the NFS server to pick up the changes to the `/etc/exports` file:

```
# Restart the NFS server
$ sudo systemctl restart nfs-server.service
```
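As a quick sanity check after the restart (an extra step, not part of the original instructions), you can list what the server is actually exporting, along with the effective options:

```
# Show the current exports and their effective options
$ sudo exportfs -v
```

The output should include your `/srv/nfs` entry with the client CIDR and the `rw` and `root_squash` options you configured.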
### Open the firewall for the NFS service

Some systems do not run a [firewall service][6] by default. Raspbian, for example, defaults to open iptables rules, so ports opened by different services are immediately available from outside the machine. Fedora Server, by contrast, runs the firewalld service by default, so you must open the ports for the NFS server (and for rpcbind, if you will be using NFSv3). You can do this with the `firewall-cmd` command.

Check the zones used by firewalld and get the default zone. For Fedora Server, this is the `FedoraServer` zone:

```
# List the zones
# Output omitted for brevity
$ sudo firewall-cmd --list-all-zones

# Get the default zone info
# Make a note of the default zone
$ sudo firewall-cmd --get-default-zone

# Permanently add the nfs service to the list of allowed ports
$ sudo firewall-cmd --add-service=nfs --permanent

# For NFSv3, a few more services are needed: nfs3, mountd, and rpc-bind
$ sudo firewall-cmd --add-service={nfs3,mountd,rpc-bind} --permanent

# Check the services for the default zone, substituting the default zone in use by your system
$ sudo firewall-cmd --list-services --zone=FedoraServer

# If everything looks good, reload firewalld
$ sudo firewall-cmd --reload
```

And with that, you have successfully configured the NFS server with your mounted USB disk partition and exported it to your test system for sharing. Now you can test mounting it on the system you added to the exports list.

### Test the NFS exports

First, from the NFS server, create a file to read in the `/srv/nfs` directory:

```
# Create a test file to share
echo "Can you see this?" >> /srv/nfs/nfs_test
```

Now, on the client system you added to the exports list, first make sure the NFS client packages are installed. On Fedora systems, this is the `nfs-utils` package, which can be installed with `dnf`. Raspbian systems have the `libnfs-utils` package, which can be installed with `apt-get`.

Install the NFS client packages:

```
# Install the nfs-utils package with dnf
$ sudo dnf install nfs-utils
```

Once the client package is installed, you can test out the NFS export. Again on the client, use the `mount` command with the IP of the NFS server and the path to the export, and mount it to a location on the client; for this test, that is the `/mnt` directory. In this example, my NFS server's IP is `192.168.2.109`, but yours will likely be different:

```
# Mount the export from the NFS server onto the client host
# Make sure to substitute the information appropriate for your hosts
$ sudo mount 192.168.2.109:/srv/nfs /mnt

# See if the nfs_test file is visible
$ cat /mnt/nfs_test
Can you see this?
```

Success! You now have a working NFS server, capable of sharing files with multiple hosts, allowing multiple read/write access, and providing centralized storage and backups for your data. There are many options for shared storage in a homelab, but NFS is a venerable, efficient, and great option to add to your "private cloud at home" homelab. Future articles in this series will expand on how to automatically mount NFS shares on clients and how to use NFS as a storage class for Kubernetes persistent volumes.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/5/nfs-raspberry-pi

Author: [Chris Collins][a]
Selected by: [lujun9972][b]
Translated by: [wxy](https://github.com/wxy)
Proofread by: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://opensource.com/users/clcollins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_cloud21x_cc.png?itok=5UwC92dO (Blue folders flying in the clouds above a city skyline)
[2]: https://opensource.com/resources/what-is-kubernetes
[3]: https://linux.cn/article-12277-1.html
[4]: https://opensource.com/sites/default/files/uploads/raspberrypi_with_hard-disk.jpg (Raspberry Pi with a USB hard disk)
[5]: https://creativecommons.org/licenses/by-sa/4.0/
[6]: https://opensource.com/article/18/9/linux-iptables-firewalld
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12421-1.html)
[#]: subject: (Build a Kubernetes cluster with the Raspberry Pi)
[#]: via: (https://opensource.com/article/20/6/kubernetes-raspberry-pi)
[#]: author: (Chris Collins https://opensource.com/users/clcollins)

Build a Kubernetes cluster with the Raspberry Pi
======

> Install Kubernetes on several Raspberry Pis for your own "private cloud at home" container service.

![](https://img.linux.net.cn/data/attachment/album/202007/15/234152ivw1y2wwhmhmpuvo.jpg)

[Kubernetes][2] was designed from the start as a cloud-native, enterprise-grade container orchestration system. It has grown to be the de facto cloud container platform, and it continues to evolve as it embraces new technologies such as container-native virtualization and serverless computing.

From micro-scale edge computing to massive container environments, in both public and private clouds, Kubernetes can manage the containers anywhere. It is an ideal choice for a "private cloud at home" project, offering both robust container orchestration and the opportunity to learn about a technology in such demand, and so thoroughly integrated into the cloud, that its name is practically synonymous with "cloud computing."

Nothing says "cloud" quite like Kubernetes, and nothing screams "cluster me!" quite like Raspberry Pis. Running a local Kubernetes cluster on cheap Raspberry Pi hardware is a great way to gain experience managing and developing on a true cloud-technology giant.

### Install a Kubernetes cluster on Raspberry Pis

This exercise will install a Kubernetes 1.18.2 cluster on three or more Raspberry Pi 4s running Ubuntu 20.04. Ubuntu 20.04 (Focal Fossa) offers a Raspberry Pi image for 64-bit ARM (ARM64), with both a 64-bit kernel and userspace. Since the goal is to use these Raspberry Pis for running a Kubernetes cluster, the ability to run AArch64 container images is important: it can be difficult to find 32-bit images for common software, or even standard base images. With its ARM64 image, Ubuntu 20.04 lets you use 64-bit container images with Kubernetes.

#### AArch64 vs. ARM64; 32-bit vs. 64-bit; ARM vs. x86

Note that AArch64 and ARM64 are effectively the same thing. The different names arise from their use within different communities. Many container images are labeled AArch64 and will run fine on systems labeled ARM64. Systems with the AArch64/ARM64 architecture can also run 32-bit ARM images, but the opposite is not true: a 32-bit ARM system cannot run 64-bit container images. This is why the Ubuntu 20.04 ARM64 image is so useful.

Without going too deep into the weeds of different architecture types, it is worth noting that ARM64/AArch64 and x86\_64 architectures differ, and Kubernetes nodes running on 64-bit ARM cannot run container images built for x86\_64. In practice, you will find some images that are not built for both architectures and may not be usable in your cluster. You will also need to build your own images on an AArch64-based system, or jump through some hoops to allow your regular x86\_64 systems to build AArch64 images. In a future article in the "private cloud at home" project, I will cover how to build AArch64 images on your regular system.

For the best of both worlds, after you set up the Kubernetes cluster in this tutorial, you can add x86\_64 nodes to it later. You can have the Kubernetes scheduler run images of a given architecture on the appropriate nodes by using [Kubernetes taints and tolerations][3].

Enough about architectures and images. It's time to install Kubernetes. Let's go!

#### Requirements

The requirements for this exercise are minimal. You will need:

  * Three (or more) Raspberry Pi 4s (preferably the 4GB RAM models)
  * Ubuntu 20.04 ARM64 installed on all the Raspberry Pis

To simplify the initial setup, read [Modify a disk image to create a Raspberry Pi-based homelab][4] to add a user and SSH authorized keys (`authorized_keys`) to the Ubuntu image before writing it to an SD card and installing it on the Raspberry Pi.

### Configure the hosts

Once Ubuntu is installed on the Raspberry Pis and they are accessible via SSH, you need to make a few changes before installing Kubernetes.

#### Install and configure Docker

As of this writing, Ubuntu 20.04 ships the most recent version of Docker, v19.03, in the base repository, so it can be installed directly with the `apt` command. Note that the package name is `docker.io`. Install Docker on all of the Raspberry Pis:

```
# Install the docker.io package
$ sudo apt install -y docker.io
```

After the package is installed, you need to make some changes to enable [cgroups][5] (control groups). Cgroups allow the Linux kernel to limit and isolate resources. Practically speaking, this lets Kubernetes better manage the resources used by the containers it runs and adds security by isolating containers from one another.

Check the output of `docker info` before making the following changes on all of the Raspberry Pis:

```
# Check docker info
$ sudo docker info
# (output omitted)
```

The output highlights the pieces that need to be changed: the cgroup driver and limit support.

First, change the default cgroup driver Docker uses from `cgroups` to `systemd`, so that systemd acts as the cgroup manager and only a single cgroup manager is in use. This helps with system stability, and it is what Kubernetes recommends. To do this, create the `/etc/docker/daemon.json` file or replace its contents with:

```
# Create or replace the contents of /etc/docker/daemon.json to enable the systemd cgroup driver
{
    "exec-opts": ["native.cgroupdriver=systemd"]
}
```

#### Install the Ubuntu Kubernetes packages

Since you are using Ubuntu, you can install the Kubernetes packages from the Kubernetes.io apt repository. There is not currently a repository for Ubuntu 20.04 (Focal), but Kubernetes 1.18.2 is available in the repository for the most recent Ubuntu LTS release, Ubuntu 18.04 (Xenial). The latest Kubernetes packages can be installed from there.

Add the Kubernetes repository to Ubuntu's sources:

```
# Add the apt key for packages.cloud.google.com
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

# Add the Kubernetes repository
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
```

### Create the Kubernetes cluster

With the Kubernetes packages installed, you can continue on to creating a cluster. Before getting started, you need to make some decisions. First, one of the Raspberry Pis needs to be designated as the control plane node (i.e., the primary node). The remaining nodes will be designated as compute nodes.

You also need to pick a [CIDR][6] (classless inter-domain routing) address range to use for the Pods in the Kubernetes cluster. Setting the `pod-network-cidr` during cluster creation ensures that the `podCIDR` value is set, which can later be used by the Container Network Interface (CNI) add-on. This exercise uses the [Flannel][7] CNI. The CIDR you pick should not overlap with any CIDR currently used within your home network, nor one managed by your router or DHCP server. Make sure to use a subnet larger than you expect to need: there are **always** more Pods than you initially plan for! In this example, I will use the CIDR address range `10.244.0.0/16`, but pick one that works for you.

With those decisions made, you can initialize the control plane node. SSH or otherwise log in to the node you have designated for the control plane.
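The initialization command itself is elided in the diff above; based on the CIDR chosen here and the join command shown below, it takes roughly this form (a sketch; add flags as your environment requires):

```
# On the control plane node only: initialize the cluster, passing
# the Pod network CIDR selected above
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16
```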
The tail end of the initialization output includes the information you will need next:

```
# (output omitted)
kubeadm join 192.168.2.114:6443 --token zqqoy7.9oi8dpkfmqkop2p5 \
    --discovery-token-ca-cert-hash sha256:71270ea137214422221319c1bdb9ba6d4b76abfa2506753703ed654a90c4982b
```

Note two things here. First, the Kubernetes `kubectl` connection information has been written to `/etc/kubernetes/admin.conf`. This kubeconfig file can be copied to `~/.kube/config`, either for root or a regular user on the primary node or for a remote machine. This will allow you to control your cluster with the `kubectl` command.

Second, the last line of the output, starting with `kubernetes join`, is a command you can run to join more nodes to the cluster.
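Copying the kubeconfig for a regular user follows the standard kubeadm pattern; the paths below are the kubeadm defaults:

```
# On the control plane node, as your regular user
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
```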
After copying the new kubeconfig to somewhere your user can use it, you can verify that the control plane has been installed with the `kubectl get nodes` command:

```
# Show the nodes in the Kubernetes cluster
# Your node name will vary
$ kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
elderberry   Ready    master   7m32s   v1.18.2
```

#### Install a CNI add-on

The CNI add-on handles configuration and cleanup of the Pod networks. As mentioned, this exercise uses the Flannel CNI add-on. With the `podCIDR` value already set, you can just download the Flannel YAML and use `kubectl apply` to install it into the cluster. This can be done on one line using `kubectl apply -f -` to take the data from standard input. This will create the ClusterRoles, ServiceAccounts, DaemonSets, etc., necessary to manage the Pod networking.

Download and apply the Flannel YAML data to the cluster:

```
# Download the Flannel YAML data and apply it
# (output omitted)
$ curl -sSL https://raw.githubusercontent.com/coreos/flannel/v0.12.0/Documentation/kube-flannel.yml | kubectl apply -f -
```

#### Join the compute nodes to the cluster

On each of the compute nodes, run the `kubeadm join` command recorded from the initialization output:

```
$ sudo kubeadm join 192.168.2.114:6443 --token zqqoy7.9oi8dpkfmqkop2p5 \
    --discovery-token-ca-cert-hash sha256:71270ea137214422221319c1bdb9ba6d4b76abfa2506753703ed654a90c4982b
```

Once you have completed the join process on each node, you should be able to see the new nodes in the output of `kubectl get nodes`:

```
# Show the nodes in the Kubernetes cluster
# (some output omitted)
$ kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
huckleberry   Ready    <none>   17s     v1.18.2
```

At this point, you have a fully working Kubernetes cluster. You can run Pods, create deployments and jobs, and more. You can access applications running in the cluster from any of the cluster's nodes using [Services][8]. You can achieve external access with a NodePort service or an ingress controller.

To validate that the cluster is running, create a new namespace, deployment, and service, and check that the Pods running in the deployment respond as expected. This deployment uses the `quay.io/clcollins/kube-verify:01` image, an Nginx container listening for requests (in fact, the same image used in the article [Add nodes to your private cloud using Cloud-init][9]). You can view the image's Containerfile [here][10].

Create a namespace named `kube-verify` for the deployment:

```
# Create the namespace
$ kubectl create namespace kube-verify
```

### Go forth, Kubernetes

"Kubernetes" (κυβερνήτης) is Greek for pilot. But does that mean the person who pilots a ship, or the act of guiding it? Nope. "Kubernan" (κυβερνάω) is Greek for "to pilot" or "to steer", so go forth and Kubernan, and if you see me at a conference or whatever, give me a pass for trying to verb a noun, in another language, one I don't speak.

Disclaimer: as mentioned, I don't read or speak Greek, especially the ancient variety, so I'm choosing to believe something I read on the internet. You know how that goes. Take it with a grain of salt, and give me a little break, since I didn't make an "it's all Greek to me" joke. However, just mentioning it, I could have made the joke and didn't, so I'm either sneaky or clever or both. Or neither. I didn't claim it was a good joke.

So, go forth and pilot your containers like a pro with your own Kubernetes container service in your private cloud at home! As you become more comfortable, you can modify your Kubernetes cluster and try different options, such as the ingress controllers mentioned earlier and dynamic StorageClasses for persistent volumes.

This continuous learning is at the heart of [DevOps][14], and the continuous integration and delivery of new services mirrors the agile methodology; we embraced both of these when we learned to deal with the massive scale made possible by the cloud and discovered that our traditional practices could not keep pace.

And look at that: technology, policy, philosophy, a tiny bit of Greek, and a terrible meta-joke, all in one article!

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/6/kubernetes-raspberry-pi

Author: [Chris Collins][a]
Selected by: [lujun9972][b]
Translated by: [wxy](https://github.com/wxy)
Proofread by: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[6]: https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing
[7]: https://github.com/coreos/flannel
[8]: https://kubernetes.io/docs/concepts/services-networking/service/
[9]: https://linux.cn/article-12407-1.html
[10]: https://github.com/clcollins/homelabCloudInit/blob/master/simpleCloudInitService/data/Containerfile
[11]: http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd
[12]: https://opensource.com/article/20/4/http-kubernetes-skipper
[13]: https://linux.cn/article-12413-1.html
[14]: https://opensource.com/tags/devops
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12417-1.html)
[#]: subject: (Protect your system with fail2ban and firewalld blacklists)
[#]: via: (https://fedoramagazine.org/protect-your-system-with-fail2ban-and-firewalld-blacklists/)
[#]: author: (hobbes1069 https://fedoramagazine.org/author/hobbes1069/)

Protect your system with fail2ban and firewalld blacklists
======

![][1]

If you run a server with public-facing SSH access, you have probably seen malicious login attempts. This article shows how to use two utilities to keep intruders out of your systems.

To protect against repeated SSH login attempts, we'll look at fail2ban. And if you don't travel much and mostly stay in one or two countries, you can configure firewalld to [allow access only from the countries you choose][2].

First, some terminology for those unfamiliar with the applications needed to make this work:

**fail2ban**: a daemon that bans hosts that cause multiple authentication errors. fail2ban monitors the systemd journal, looking for failed authentication attempts against any enabled "jails". After the specified number of failures, it adds a firewall rule blocking that specific IP address for the configured amount of time.

**firewalld**: a firewall daemon with a D-Bus interface that provides a dynamic firewall. Unless you have decided to use traditional iptables instead, firewalld is already installed on all supported releases of Fedora and CentOS.

### Assumptions

  * The host system has an internet connection and is either exposed directly to the internet, sits behind a DMZ (both of which are really bad ideas unless you know what you're doing), or has a port being forwarded to it from a router.
  * While most of this probably applies to other systems, this article assumes the current system is a Fedora (31 and up) or RHEL/CentOS 8 release. On CentOS, you must enable the Fedora EPEL repository with `sudo dnf install epel-release`.

### Installation and configuration

#### Fail2Ban

More than likely, some firewalld zone already allows SSH access, but the sshd service itself is not enabled by default. To start it manually, without permanently enabling it on boot:

```
$ sudo systemctl start sshd
```

Or to enable it on boot and start it at the same time:

```
$ sudo systemctl enable --now sshd
```

The next step is to install, configure, and enable fail2ban. As usual, the install can be done from the command line:

```
$ sudo dnf install fail2ban
```

Once installed, the next step is to configure a jail (a service you want to watch and ban at whatever thresholds you set). By default, IPs are banned for one hour (which is really not long enough). The best practice is to override the system defaults using `*.local` files instead of modifying the `*.config` files directly. If we look at my `jail.local`, we see:

```
# cat /etc/fail2ban/jail.local
[DEFAULT]

# "bantime" is the number of seconds that a host is banned.
bantime = 1d

# A host is banned if it has generated "maxretry" during the last "findtime"
findtime = 1h

# "maxretry" is the number of failures before a host get banned.
maxretry = 5
```

Put into plain language: after five attempts within the last hour, the IP is banned for one day. There is also the option of increasing the ban time for IPs that get banned multiple times, but that is the subject of another article.

The next step is to configure the jail itself; in this tutorial that is `sshd`, but the steps are more or less the same for other services. Create a configuration file inside `/etc/fail2ban/jail.d`. Here is mine:

```
# cat /etc/fail2ban/jail.d/sshd.local
[sshd]
enabled = true
```

It's that simple! A lot of the configuration is already handled within the package built for Fedora (hint: I'm the current maintainer). Next, enable and start the fail2ban service:

```
$ sudo systemctl enable --now fail2ban
```

Hopefully there were no immediate errors; if not, check the status of fail2ban with the following command:

```
$ sudo systemctl status fail2ban
```

If it started without errors, it should look something like this:

```
$ systemctl status fail2ban
● fail2ban.service - Fail2Ban Service
   Loaded: loaded (/usr/lib/systemd/system/fail2ban.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-06-16 07:57:40 CDT; 5s ago
     Docs: man:fail2ban(1)
  Process: 11230 ExecStartPre=/bin/mkdir -p /run/fail2ban (code=exited, status=0/SUCCESS)
 Main PID: 11235 (f2b/server)
    Tasks: 5 (limit: 4630)
   Memory: 12.7M
      CPU: 109ms
   CGroup: /system.slice/fail2ban.service
           └─11235 /usr/bin/python3 -s /usr/bin/fail2ban-server -xf start
Jun 16 07:57:40 localhost.localdomain systemd[1]: Starting Fail2Ban Service…
Jun 16 07:57:40 localhost.localdomain systemd[1]: Started Fail2Ban Service.
Jun 16 07:57:41 localhost.localdomain fail2ban-server[11235]: Server ready
```

If it was recently started, fail2ban is unlikely to show anything interesting going on just yet, but to check the status of fail2ban and make sure the jail is enabled, enter:

```
$ sudo fail2ban-client status
Status
|- Number of jail: 1
`- Jail list: sshd
```

The high-level status of the sshd jail is shown. If multiple jails were enabled, they would all show up here.

To check the detailed status of a jail, just add the jail name to the previous command. Here is the output from my system, which has been running for a while. I have removed the banned IPs from the output:

```
$ sudo fail2ban-client status sshd
Status for the jail: sshd
|- Filter
|  |- Currently failed: 8
|  |- Total failed: 4399
|  `- Journal matches: _SYSTEMD_UNIT=sshd.service + _COMM=sshd
`- Actions
   |- Currently banned: 101
   |- Total banned: 684
   `- Banned IP list: ...
```
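If you ever ban yourself while testing (it happens), `fail2ban-client` can also remove an address from a jail. The IP below is a documentation address standing in for your own:

```
# Remove a banned IP address from the sshd jail
$ sudo fail2ban-client set sshd unbanip 203.0.113.4
```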
Monitoring the fail2ban log file for intrusion attempts can be achieved by "tailing" the log:

```
$ sudo tail -f /var/log/fail2ban.log
```

`tail` is a nice command-line utility that by default shows the last 10 lines of a file. Adding the `-f` tells it to follow the file, which is a great way to watch a file that is still being written to.

Since the output contains real IPs, a sample won't be provided here, but it is pretty human-readable. The `INFO` lines will usually be login attempts. If enough attempts are made from a specific IP address, you will see a `NOTICE` line showing an IP address was banned. After the ban time has been reached, you will see a `NOTICE` unban line.

Watch out for several WARNING lines. Most often this occurs when a ban is added but fail2ban finds the IP address already in its ban database, which means banning may not be working correctly. If the fail2ban package was installed recently, it should be set up for firewalld rich rules. The package was switched from ipset to rich rules as of fail2ban-0.11.1-6, so if you have an older fail2ban installation, it may still be trying to use the ipset method, which utilizes legacy iptables and is not very reliable.

#### firewalld configuration

##### Reactive or proactive?

There are two strategies that can be used either separately or together: **reactively**, permanently blacklisting individual IP addresses, or **proactively**, blacklisting entire subnets based on the country of origin.

For the reactive approach, once fail2ban has been running for a while, it's a good idea to run `sudo fail2ban-client status sshd` again to see who the bad actors are. There will likely be many banned IP addresses. Pick one and try running `whois` on it. There can be a lot of interesting information in the output, but for this method only the country of origin matters. To keep things simple, let's filter out everything except the country.

For this example, we'll use some well-known domain names:

```
$ whois google.com | grep -i country
Registrant Country: US
Admin Country: US
Tech Country: US
```

```
$ whois rpmfusion.org | grep -i country
Registrant Country: FR
```

```
$ whois aliexpress.com | grep -i country
Registrant Country: CN
```

The reason for `grep -i` is to make `grep` case-insensitive: most entries use "Country", while some are in all lowercase, "country", so this matches either way.

Now that you know the country of origin of the intrusion attempts, the question is: "Does anyone from that country have a legitimate reason to connect to this computer?" If the answer is NO, then blocking the entire country should be acceptable.

Functionally, the proactive approach differs little from the reactive one; however, intrusion attempts from some countries are extremely common. If your system neither resides in any of those countries nor has any customers originating from them, then why not add them to the blacklist now rather than wait? (Translator's note: in my experience, blacklisting by whole country can be rather heavy-handed. I'd suggest instead blacklisting the WHOIS network segment the IP belongs to: the addresses in such a segment usually serve the same purpose, for example subscriber access or IDC hosting, and share roughly the same security posture, so a malicious attempt from one IP in the segment suggests other IPs in it may be used for the same kind of attempts.)

##### Blacklisting script and configuration

So how do you do that? With a firewalld ipset. I developed the following script to automate the process as much as possible:

```
#!/bin/bash
# Based on the below article
# https://www.linode.com/community/questions/11143/top-tip-firewalld-and-ipset-country-blacklist

# Source the blacklisted countries from the configuration file
. /etc/blacklist-by-country

# Create a temporary working directory
ipdeny_tmp_dir=$(mktemp -d -t blacklist-XXXXXXXXXX)
pushd $ipdeny_tmp_dir

# Download the latest network addresses by country file
curl -LO http://www.ipdeny.com/ipblocks/data/countries/all-zones.tar.gz
tar xf all-zones.tar.gz

# For updates, remove the ipset blacklist and recreate
if firewall-cmd -q --zone=drop --query-source=ipset:blacklist; then
    firewall-cmd -q --permanent --delete-ipset=blacklist
fi

# Create the ipset blacklist which accepts both IP addresses and networks
firewall-cmd -q --permanent --new-ipset=blacklist --type=hash:net \
    --option=family=inet --option=hashsize=4096 --option=maxelem=200000 \
    --set-description="An ipset list of networks or ips to be dropped."

# Add the address ranges by country per ipdeny.com to the blacklist
for country in $countries; do
    firewall-cmd -q --permanent --ipset=blacklist \
        --add-entries-from-file=./$country.zone && \
        echo "Added $country to blacklist ipset."
done

# Block individual IPs if the configuration file exists and is not empty
if [ -s "/etc/blacklist-by-ip" ]; then
    echo "Adding IPs blacklists."
    firewall-cmd -q --permanent --ipset=blacklist \
        --add-entries-from-file=/etc/blacklist-by-ip && \
        echo "Added IPs to blacklist ipset."
fi

# Add the blacklist ipset to the drop zone if not already setup
if firewall-cmd -q --zone=drop --query-source=ipset:blacklist; then
    echo "Blacklist already in firewalld drop zone."
else
    echo "Adding ipset blacklist to firewalld drop zone."
    firewall-cmd --permanent --zone=drop --add-source=ipset:blacklist
fi

firewall-cmd -q --reload

popd
rm -rf $ipdeny_tmp_dir
```

This should be installed to `/usr/local/sbin`, and don't forget to make it executable!

```
$ sudo chmod +x /usr/local/sbin/firewalld-blacklist
```

Then create the configuration file `/etc/blacklist-by-country`:

```
# Which countries should be blocked?
# Use the two letter designation separated by a space.
countries=""
```

And create another configuration file, `/etc/blacklist-by-ip`, with one IP per line and no additional formatting.

For this example, 10 countries were randomly selected from the ipdeny zone files:

```
# ls | shuf -n 10 | sed "s/\.zone//g" | tr '\n' ' '
nl ee ie pk is sv na om gp bn
```

Now, as long as at least one country is in the configuration file, it's ready to run!

```
$ sudo firewalld-blacklist
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   142  100   142    0     0   1014      0 --:--:-- --:--:-- --:--:--  1014
100  662k  100  662k    0     0   989k      0 --:--:-- --:--:-- --:--:--  989k
Added nl to blacklist ipset.
Added ee to blacklist ipset.
Added ie to blacklist ipset.
Added pk to blacklist ipset.
Added is to blacklist ipset.
Added sv to blacklist ipset.
Added na to blacklist ipset.
Added om to blacklist ipset.
Added gp to blacklist ipset.
Added bn to blacklist ipset.
Adding ipset blacklist to firewalld drop zone.
success
```

To verify that the firewalld blacklist was successful, check the `drop` zone and the `blacklist` ipset:

```
$ sudo firewall-cmd --info-zone=drop
drop (active)
  target: DROP
  icmp-block-inversion: no
  interfaces:
  sources: ipset:blacklist
  services:
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

$ sudo firewall-cmd --info-ipset=blacklist | less
blacklist
  type: hash:net
  options: family=inet hashsize=4096 maxelem=200000
  entries:
```

The second command will output all of the subnets that were added based on the countries blocked, and it can be quite lengthy.

### So now what do I do?

While it will be a good idea to monitor things more frequently at the beginning, over time the number of intrusion attempts should decline as the blacklists grow. The goal should then be maintenance rather than active monitoring.

To that end, I created a systemd service file and timer so that the by-country subnets maintained by ipdeny are refreshed monthly. In fact, everything discussed here can be downloaded from my [pagure.io](https://pagure.io/firewalld-blacklist) project.
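The actual unit files live in that pagure.io repository; as a rough sketch of what a monthly refresh might look like (the names and schedule here are assumptions, so adapt them to taste), the pair could be:

```
# /etc/systemd/system/firewalld-blacklist.service (hypothetical sketch)
[Unit]
Description=Refresh the firewalld blacklist ipset

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/firewalld-blacklist
```

```
# /etc/systemd/system/firewalld-blacklist.timer (hypothetical sketch)
[Unit]
Description=Run firewalld-blacklist monthly

[Timer]
OnCalendar=monthly
Persistent=true

[Install]
WantedBy=timers.target
```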
Aren't you glad you read the whole article? Now just download the service file and timer to `/etc/systemd/system/` and enable the timer:

```
$ sudo systemctl daemon-reload
$ sudo systemctl enable --now firewalld-blacklist.timer
```

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/protect-your-system-with-fail2ban-and-firewalld-blacklists/

Author: [hobbes1069][a]
Selected by: [lujun9972][b]
Translated by: [wxy](https://github.com/wxy)
Proofread by: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://fedoramagazine.org/author/hobbes1069/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2020/06/fail2ban-and-firewalld-816x345.png
[2]: https://www.linode.com/community/questions/11143/top-tip-firewalld-and-ipset-country-blacklist
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12414-1.html)
[#]: subject: (How to Disable Dock on Ubuntu 20.04 and Gain More Screen Space)
[#]: via: (https://itsfoss.com/disable-ubuntu-dock/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

How to disable the dock on Ubuntu 20.04
======

The launcher on the left side has become the identity of the [Ubuntu][1] desktop. It was introduced with the [Unity desktop][2] and survived even when [Ubuntu switched to GNOME][3], forking "Dash to Panel" to create a similar dock on [GNOME][4].

Personally, I find it handy for quickly accessing frequently used applications, but not everyone wants it taking up some extra space on the screen.

Starting with [Ubuntu 20.04][5], you can easily disable the dock. In this tutorial, let me show you how to do that both graphically and from the command line.

![][6]

### Disable the Ubuntu dock with the Extensions app

One of the [main features of Ubuntu 20.04][7] is the introduction of the "Extensions" app for managing GNOME extensions on your system. Just look for it in the GNOME menu (press the Windows key and type):

![Look for Extensions app in the menu][8]

> Don't have the Extensions app?
>
> If it isn't installed yet, you should enable GNOME Shell Extensions; the Extensions GUI is part of this package:
>
> ```
> sudo apt install gnome-shell-extensions
> ```
>
> This only works for [GNOME 3.36][9] or later, as in Ubuntu 20.04 (and newer releases).

Start the Extensions app, and you should see Ubuntu Dock under the "Built-in" extensions. You just have to toggle it off to disable the dock.

![Disable Ubuntu Dock][10]

The change is instantaneous, and you'll see the dock disappear immediately.

You can bring it back the same way: just toggle it on and it shows up at once.

It's so easy to hide the dock in Ubuntu 20.04, isn't it?

### Alternative method: disable the Ubuntu dock from the command line

If you are a terminal lover and prefer doing things in the terminal, I have good news for you: you can disable the Ubuntu dock from the command line.

Open a terminal with `Ctrl+Alt+T`. You probably already know this [keyboard shortcut in Ubuntu][11].

In the terminal, list all available GNOME extensions with the following command:

```
gnome-extensions list
```

![List GNOME Extensions][12]

The default Ubuntu dock extension is `ubuntu-dock@ubuntu.com`. You can disable it with this command:

```
gnome-extensions disable ubuntu-dock@ubuntu.com
```

There is no output on the screen, but you'll notice that the launcher (the dock) disappears from the left side.
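If you want to double-check from the same terminal, the `gnome-extensions` tool can also report the state of a single extension (available in GNOME 3.34 and later; output varies by version):

```
gnome-extensions info ubuntu-dock@ubuntu.com
```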
If you want, you can enable it again using the same command as above, but with the enable option:

```
gnome-extensions enable ubuntu-dock@ubuntu.com
```

### Conclusion

There are ways to disable the dock in Ubuntu 18.04 as well. However, if you try to remove it in 18.04, it may lead to unwanted results: removing this package also removes the ubuntu-desktop package, and you can end up with a broken system, for example with no application menu.

This is why I don't recommend removing it on Ubuntu 18.04.

The good news is that Ubuntu 20.04 provides a way to hide the taskbar. Users have more freedom and more screen space. Speaking of more screen space, did you know that you can [remove the top title bar from Firefox and gain more screen space][14]?

I'm wondering how you prefer your Ubuntu desktop: with the dock, without the dock, or without GNOME?

--------------------------------------------------------------------------------

via: https://itsfoss.com/disable-ubuntu-dock/

Author: [Abhishek Prakash][a]
Selected by: [lujun9972][b]
Translated by: [geekpi](https://github.com/geekpi)
Proofread by: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[#]: collector: (lujun9972)
[#]: translator: (summer2233)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12415-1.html)
[#]: subject: (Back up your phone's storage with this Linux utility)
[#]: via: (https://opensource.com/article/20/7/gphoto2-linux)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

Back up your phone's storage with the Linux utility gPhoto2
======

> Take as many photos as you want; gphoto2 makes transferring them from your device to your Linux computer quick and easy.

![](https://img.linux.net.cn/data/attachment/album/202007/14/170729pzljppapojy44ro4.jpg)

One of the great failings of mobile devices is how difficult it can be to transfer data from your device to your computer. Mobile devices have a long history of this. Early mobile devices, like the Pilot and handheld PDA devices, required special synchronization software to transfer data (and you had to do it carefully, because your device could permanently lose data if its battery ran out); old iPods offered only a platform-specific interface. Modern mobile devices default to sending your data to an online account so you can download it again on your computer.

The good news: if you're running Linux, you can use the `gphoto2` command to interface with your mobile device. Originally developed as a way to communicate with digital cameras back when a digital camera was just a camera, `gphoto2` can talk to many different kinds of mobile devices now. Don't let the name fool you, either: it can handle all types of files, not just photos. Better yet, it's scriptable, flexible, and a lot more powerful than most GUI interfaces.

If you've ever struggled with finding a comfortable way to sync your data between your computer and mobile device, take a look at `gphoto2`.

### Install gPhoto2

Chances are your Linux system already has libgphoto2 installed, because it's a key library for interfacing with mobile devices, but you may still need to install the `gphoto2` command, which is probably available from your repository.

On Fedora or RHEL:

```
$ sudo dnf install gphoto2
```

On Debian or Ubuntu:

```
$ sudo apt install gphoto2
```

### Verify compatibility

To confirm that your mobile device is supported, use `--list-cameras` piped through `less`:

```
$ gphoto2 --list-cameras | less
```

Or you can pipe it through `grep` to search for a term. For example, if you have a Samsung Galaxy, use `grep` with the `-i` option to turn off case sensitivity:

```
$ gphoto2 --list-cameras | grep -i galaxy
  "Samsung Galaxy models (MTP)"
  "Samsung Galaxy models (MTP+ADB)"
  "Samsung Galaxy models Kies mode"
```

This confirms that Samsung Galaxy devices are supported through MTP and through MTP with ADB.

If you don't find your device listed, you can still try using `gphoto2`; your device may be listed under a different name.

### Find your mobile device

To use gPhoto2, you first have to plug your mobile device into your computer, set it to MTP mode, and give your computer permission to interact with it. This usually requires physical interaction with your device: typically, pressing a button on its screen to allow its filesystem to be accessed by the computer it has just been attached to.

![Screenshot of allow access message][2]

If you don't give your computer access to your mobile device, then gPhoto2 can detect the device, but it cannot interact with it.

To make sure your computer detects the attached device, use the `--auto-detect` option:

```
$ gphoto2 --auto-detect
Model                       Port
---------------------------------------
Samsung Galaxy models (MTP) usb:002,010
```

If your device isn't detected, check your cables first, and then check that your device is configured to interface over MTP, ADB, or whatever other protocol gPhoto2 supports, as shown by `--list-cameras`.

### Query your device for supported features

With modern devices, there's usually a plethora of potential features, but not all mobile devices support all of them. You can find out what features your device supports with the `--abilities` option. I find the results intuitive to read.

```
$ gphoto2 --abilities
Abilities for camera            : Samsung Galaxy models (MTP)
Serial port support             : no
USB support                     : yes
Capture choices                 : Capture not supported by driver
Configuration support           : no
Delete selected files on camera : yes
Delete all files on camera      : no
File preview (thumbnail) support: no
File upload support             : yes
```

There's no need to specify which device you're querying as long as only one device is attached. If more than one device that gPhoto2 can interact with is attached, you can specify the device by port, camera model, or usbid.

### Interact with your mobile device

If your device supports capture, you can invoke its camera from your computer to grab media. For example, to capture an image:

```
$ gphoto2 --capture-image
```

To capture an image and immediately transfer it to the attached computer:

```
$ gphoto2 --capture-image-and-download
```

You can also capture video and sound. If more than one capture device is attached, you can specify which one to use by port, camera model, or usbid:

```
$ gphoto2 --camera "Samsung Galaxy models (MTP)" \
  --capture-image-and-download
```

### Files and folders

To interact intelligently with the files on your mobile device, you need to understand the structure of the filesystem gPhoto2 is connected to.

You can view the available folders with the `--list-folders` option:

```
$ gphoto2 --list-folders
There are 2 folders in folder '/'.
 - store_00010001
 - store_00020002
There are 0 folders in folder '/store_00010001'.
There are 0 folders in folder '/store_00020002'.
```

Each of these folders represents a storage destination on the device. In this example, `store_00010001` is the internal storage and `store_00020002` is an SD card. Your device's structure may differ.

### Get files

Now that you know the folder layout of your device, you can retrieve photos from it. There are many different options you can use, depending on what you want to take from the device.

You can get a specific file if you know its absolute path:

```
$ gphoto2 --get-file IMG_0001.jpg --folder /store_00010001/myphotos
```

You can get all files at once:

```
$ gphoto2 --get-all-files --folder /store_00010001/myfiles
```

You can get just the audio files:

```
gphoto2 --get-all-audio-data --folder /store_00010001/mysounds
```

There are other options for gPhoto2 as well, and most of them depend on what your attached device and the protocol you're using support.

### Upload files

Now that you know the potential destination folders, you can upload files from your computer to your device. For example, assuming a file called `example.epub` is in your current directory, you can send it to your device with the `--upload-file` option combined with the `--folder` option to specify the directory to upload to:

```
$ gphoto2 --upload-file example.epub \
  --folder store_00010001
```

If you want to upload several files to the same location, you can create a directory on your device first:

```
$ gphoto2 --mkdir books \
  --folder store_00010001
$ gphoto2 --upload-file *.epub \
  --folder store_00010001/books
```

### List files

To see the files on your device, use the `--list-files` option:

```
$ gphoto2 --list-files --folder /store_00010001
There is 1 file in folder '/store_00010001'
#1     example.epub  17713 KB application/x-unknown
$ gphoto2 --list-files --folder /store_00010001/books
There is 1 file in folder '/store_00010001'
#1     example0.epub 17713 KB application/x-unknown
#2     example1.epub 12264 KB application/x-unknown
[...]
```

### Explore what works for you

Much of gPhoto2's functionality depends on your device, so your experience may differ from everyone else's. There are many operations listed in `gphoto2 --help` for you to explore. With gPhoto2, you'll never struggle to transfer files from your device to your computer again!
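Since everything shown above is a plain command, a tiny script turns it into a repeatable backup. This is a hedged sketch: the storage folder matches the example device above and will likely differ on yours (check `gphoto2 --list-folders` first), and the backup path is an assumption.

```bash
#!/bin/bash
# Back up every file from the device's first storage unit into a
# dated directory. gphoto2 downloads into the current directory,
# so change into the backup directory before retrieving files.
backup_dir="$HOME/phone-backup/$(date +%Y-%m-%d)"
mkdir -p "$backup_dir"
cd "$backup_dir" || exit 1
gphoto2 --get-all-files --folder /store_00010001
```

Run it after plugging in and authorizing the device, and each invocation lands in its own dated folder.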
--------------------------------------------------------------------------------

via: https://opensource.com/article/20/7/gphoto2-linux

Author: [Seth Kenlon][a]
Selected by: [lujun9972][b]
Translated by: [summer2233](https://github.com/summer2233)
Proofread by: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/idea_innovation_mobile_phone.png?itok=RqVtvxkd (A person looking at a phone)
[2]: https://opensource.com/sites/default/files/uploads/gphoto2-mtp-allow.jpg (Screenshot of allow access message)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12418-1.html)
[#]: subject: (How to decipher Linux release info)
[#]: via: (https://www.networkworld.com/article/3565432/how-to-decipher-linux-release-info.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)

如何解密 Linux 版本信息
======

> 显示和解释有关 Linux 版本的信息比看起来要复杂一些。

![christin hume / Linux / Modified by IDG Comm.][1]
与引用一个简单的版本号不同,识别 Linux 版本有很多种方法。即使只是快速查看一下 `uname` 命令的输出,也可以告诉你一些信息。这些信息是什么,它告诉你什么?

在本文中,我们将认真研究 `uname` 命令的输出以及其他一些命令和文件提供的版本说明。

### 使用 uname

每当在 Linux 系统终端窗口中执行命令 `uname -a` 时,都会显示很多信息。那是因为这个小小的 `a` 告诉 `uname` 命令你想查看该命令能提供的*全部*输出。结果显示的内容将告诉你许多有关该系统的各种信息。实际上,显示的每一块信息都会告诉你一些关于系统的不同信息。

例如,`uname -a` 输出看起来像这样:

```
$ uname -a
Linux dragonfly 5.4.0-37-generic #41-Ubuntu SMP Wed Jun 3 18:57:02 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
```

尽管这可能无关紧要,但你也可以使用一条按适当顺序包含 `uname` 所有选项的命令来显示相同的信息:

```
$ uname -snmrvpio
Linux dragonfly 5.4.0-37-generic #41-Ubuntu SMP Wed Jun 3 18:57:02 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
```
要将这一长串信息分解为单独的块,可以使用类似这样的 `for` 循环来遍历每个选项:

```
$ for option in s n m r v p i o; do echo -n "$option: "; uname -$option; done
s: Linux
n: dragonfly
m: x86_64
r: 5.4.0-37-generic
v: #41-Ubuntu SMP Wed Jun 3 18:57:02 UTC 2020
p: x86_64
i: x86_64
o: GNU/Linux
```

该循环显示了每个选项提供的信息。`uname` 手册页提供了每个选项的描述。以下是清单:
- Linux:内核名称(选项 `s`)
- dragonfly:节点名(选项 `n`)
- x86_64:机器硬件名(选项 `m`)
- 5.4.0-37-generic:内核发布版本(选项 `r`)
- #41-Ubuntu SMP Wed Jun 3 18:57:02 UTC 2020:内核版本(选项 `v`)
- x86_64:处理器(选项 `p`)
- x86_64:硬件平台(选项 `i`)
- GNU/Linux:操作系统(选项 `o`)
要更深入地研究显示的信息,请认真查看其中的内核发布版本。第四行中的 `5.4.0-37` 不仅仅是一串任意数字,每个部分都有含义:

* `5` 表示内核版本
* `4` 表示主要版本
* `0` 表示次要版本
* `37` 表示最新补丁

此外,在上面的循环输出的第 5 行(内核版本)中,`#41` 表示此发布版本已编译 41 次。
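作为示意,下面的 shell 片段演示了如何用 `cut` 从 `uname -r` 的输出中拆出这些字段(假设内核发布版本形如 `5.4.0-37-generic`):

```
$ release=$(uname -r)              # 例如 5.4.0-37-generic
$ echo $release | cut -d. -f1      # 内核版本:5
$ echo $release | cut -d. -f2      # 主要版本:4
$ echo $release | cut -d- -f2      # 补丁:37
```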
如果你只想显示所有信息中的某一项,那么单个选项可能很有用。例如,命令 `uname -n` 可以仅告诉你系统名称,而 `uname -r` 仅告诉你内核发布版本。在盘点服务器或编写脚本时,这些选项可能很有用。

在 Red Hat 系统上,`uname -a` 命令将提供相同种类的信息。这是一个例子:

```
$ uname -a
Linux fruitfly 4.18.0-107.el8.x86_64 #1 SMP Fri Jun 14 13:46:34 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
```
### 发行版信息

如果你需要了解运行的发行版是什么,那么 `uname` 的输出不会对你有太大帮助。毕竟,内核版本与发行版不同。关于这个信息,你可以在 Ubuntu 和其他基于 Debian 的系统上使用 `lsb_release -r` 命令,而在 Red Hat 上可以显示 `/etc/redhat-release` 文件的内容。

对于 Debian 系统:

```
$ lsb_release -r
Release:    20.04
```

对于 Red Hat 及相关系统:

```
$ cat /etc/redhat-release
Red Hat Enterprise Linux release 8.1 Beta (Ootpa)
```
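此外,大多数现代发行版(无论 Debian 系还是 Red Hat 系)还提供统一格式的 `/etc/os-release` 文件,也可以从中读取发行版信息(以下输出仅为示意,具体内容因系统而异):

```
$ cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04 LTS (Focal Fossa)"
ID=ubuntu
VERSION_ID="20.04"
```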
### 使用 /proc/version

`/proc/version` 文件也可以提供有关 Linux 版本的信息。该文件中提供的信息与 `uname -a` 输出有很多共同点。以下是例子。

在 Ubuntu 上:

```
$ cat /proc/version
Linux version 5.4.0-37-generic (buildd@lcy01-amd64-001) (gcc version 9.3.0 (Ubuntu 9.3.0-10ubuntu2)) #41-Ubuntu SMP Wed Jun 3 18:57:02 UTC 2020
```

在 Red Hat 上:

```
$ cat /proc/version
Linux version 4.18.0-107.el8.x86_64 (mockbuild@x86-vm-09.build.eng.bos.redhat.com) (gcc version 8.3.1 20190507 (Red Hat 8.3.1-4) (GCC)) #1 SMP Fri Jun 14 13:46:34 UTC 2019
```
### 总结

Linux 系统提供了很多关于内核和发行版的信息,你只需要知道到哪里寻找、如何寻找,并理解它的含义。
--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3565432/how-to-decipher-linux-release-info.html

作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2020/07/linux_woman-with-laptop_keyboard_code_programmer_devops_by-christin-hume-via-unsplash_linux_-100850842-large.jpg
[2]: https://creativecommons.org/publicdomain/zero/1.0/
[3]: https://www.facebook.com/NetworkWorld/
[4]: https://www.linkedin.com/company/network-world
@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (Yufei-Yan)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (sunwenquan)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -0,0 +1,91 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (My Linux story: From Linux user to contributor)
[#]: via: (https://opensource.com/article/20/7/linux-user-contributor)
[#]: author: (Taz Brown https://opensource.com/users/heronthecli)

My Linux story: From Linux user to contributor
======
Open source communities welcome a diverse group of contributors, from every background and skill set.
![Penguin, stylized, contributor spotlight][1]
I am an IT professional with over 15 years of experience in a number of different roles—systems administrator, senior Linux administrator, DevOps engineer, automation consultant, and senior scrum master. I started learning Linux on Ubuntu but shifted to CentOS as a sysadmin, and later I moved to Fedora for personal use. But my joy for technology started much earlier than my first Linux distribution, and it came in the form of a movie.

My favorite movie is Hackers. The best scene occurs at the beginning of the movie. The movie starts with a group of special agents breaking into a house to catch the infamous hacker, Zero Cool. We soon discover that Zero Cool is actually 11-year-old Dade Murphy, who managed to crash 1,507 computer systems in one day. He is charged with his crimes, and his family is heavily fined. Additionally, he is banned from using computers or touch-tone telephones until he is 18.

Paul Cook, a.k.a. Lord Nikon, played by Laurence Mason, was my favorite character. One of the main reasons is that I never really saw a hacker movie that had characters that looked like me, so I was fascinated by his portrayal. He was enigmatic. It was refreshing to see, and it made me really proud that I was passionate about IT and that I was a geek of a similar sort.

![Taz with astronaut][2]
### Becoming a Linux contributor

I first started using Linux about 15 years ago. When I became a Linux administrator, Linux became my passion. I was trying to find my way in terms of contributing to open source, and I didn't know where to go. I wondered if I could truly be an influencer because the community is so vast, but once I found a few people who embraced my interests and could show me the way, I was able to open up and ask questions and learn from the community. The [Fedora community][3] has been a core part of my contribution ever since.

I am relatively new to contributing to open source. My idea of open source changed when I realized I could contribute in ways other than code. I prefer to contribute through documentation, as I am not a software developer at heart, and one of the most pressing needs in the community is often documentation. Remember: user skills matter as much as developer skills.

### What about the hardware?

Hardware matters too, and almost everything can run Linux these days. My current home setup includes:

  * Lenovo Thinkserver TS140 with 64 GB of RAM, 4x1TB SSDs and a 1TB HD for data storage, currently running Fedora 30
  * Synology NAS with 164 TB of storage using a RAID 5 configuration
  * Logitech MX Master and MX Master 2S for input and output configuration
  * Kinesis Advantage 2 for a customized and ergonomic keyboard
  * Two 38-inch LG ultrawide curved monitors and a single 34-inch LG ultrawide monitor
  * System76 16.1" Oryx Pro with an IPS display and an i7 processor with six cores and 12 threads

I love the way Fedora handles peripherals like my mouse and keyboard. Everything works seamlessly. Plug-and-play works as it should, and performance never suffers.

![Fedora double monitor setup][4]
### And software?

Using open source software is essential to my workflow. I rely on:

  * Fedora 30 as my go-to Linux distribution
  * Wekan, an open source kanban, for my projects
  * [Atom][5] as my text editor
  * Terminator as my go-to terminal, because of its grid arrangement as well as its many keyboard shortcuts
  * Neofetch to show off system information every time I log in to the terminal

Last but not least, I have my terminal pimped out using [Powerline][6], Powerlevel9k, and Vim-Powerline as well.

![Multiple fedora screens][7]

### Linux brings us together

America is a melting pot, and that's how I see Linux, too, as well as specific communities like the Fedora Project. There is plenty of room for diverse contributions in every Linux community. There are so many ways to be involved, and there is always space for new ideas. I hope that sharing my experiences over the last 15 years in open source can help underrepresented members of the tech community learn about the amazing commitment that open source communities have to diversity and inclusion.

---

_Editor's note: This article is an adaptation of [Taz Brown: How Do You Fedora?][8] and is republished with permission._
--------------------------------------------------------------------------------

via: https://opensource.com/article/20/7/linux-user-contributor

作者:[Taz Brown][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/heronthecli
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/contributor_spotlight_penguin.jpg?itok=azJA5Cj8 (Penguin, stylized, contributor spotlight)
[2]: https://opensource.com/sites/default/files/uploads/taz_with_astronaut_0.png (Taz with astronaut)
[3]: https://getfedora.org/
[4]: https://opensource.com/sites/default/files/uploads/fedora_double_monitor_setup.jpg (Fedora double monitor setup)
[5]: https://fedoramagazine.org/install-atom-fedora/
[6]: https://fedoramagazine.org/add-power-terminal-powerline/
[7]: https://opensource.com/sites/default/files/uploads/fedora_screens.jpg (Multiple fedora screens)
[8]: https://fedoramagazine.org/taz-brown-how-do-you-fedora/
@ -0,0 +1,56 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (My Linux story: breaking language barriers with open source)
[#]: via: (https://opensource.com/article/20/7/linux-bengali)
[#]: author: (Dr Anirban Mitra https://opensource.com/users/mitradranirban)

My Linux story: breaking language barriers with open source
======
Open source projects can help change the world by removing barriers, linguistic and otherwise.
![India on the globe][1]
My open source journey started rather late in comparison to many of my peers and colleagues.

I was pursuing a post-graduate degree in medicine in 2000 when I managed to fulfill a dream I’d had since high school—to buy my own PC. Before that, my only exposure to computers was through occasional access in libraries or cyber cafés, which charged exorbitant prices for access at that time. So I saved up portions of my grad student stipend and managed to buy a Pentium III 550 MHz with 128MB RAM and, as came standard in most computers in India at that time, a pirated version of Windows 98.

There was no Internet access in my hostel room. I had to go to the nearby cyber café, download software there, and then carry around dozens of floppy discs.

As happy as I was finally owning my own computer, it bothered me that I could not write in my mother tongue, Bangla. I came across resources provided by CDAC, a government agency that provided Indian language tools based on ISCII, an older national standard upon which the Unicode standard for Indic languages was based. It was difficult to learn the keyboard layouts.

### My first contribution

Soon, I came across a program called [Yudit][2], which offered phonetic typing of Indic languages using the standard QWERTY keyboard. It was with Yudit that I first came across terms like open source and free software, GNU, and Linux. Yudit allowed me to translate UI elements into Bengali too, and when I submitted the translations to the developer, he gladly incorporated them into the next version and credited me in the README of the software.

This was exciting for me, as I was seeing, for the very first time, an application user element in my mother tongue. Moreover, I had been able to contribute to the development of software despite having almost zero knowledge of coding. I went on to create an ISCII-to-Unicode converter for Yudit, which can also be used for transliteration between various Indian languages. I also bought a Linux magazine that came with a free live CD of Knoppix, and that’s how I got a feel for the Linux desktop.

Another issue I faced was the lack of availability of a Unicode-compliant OpenType Bangla font. The font I used was shareware, and I was supposed to pay a license fee for it. I thought, “Why not try my hand at developing it myself?” In the process, I came in contact with Bangla speakers scattered worldwide who were trying to enable Bangla in the Linux operating system, via `bengalinux.org` (later renamed the Ankur group).

I joined their mailing list, and we discussed among ourselves and with the authorities the various flaws in the Unicode and OpenType specifications of Bangla, which were then corrected in due course. I contributed by converting legacy Bangla fonts into OpenType Unicode-compliant fonts, translating the UI, and so on. That group also came out with the world’s first live Linux CD with a Bangla user interface.

In 2003, I moved to a place where I did not have access to the Internet; I could only connect with the group on Sundays when I came to Kolkata. By that time, Bangla localization of Linux had become a mainstream thing. Some of our volunteers joined Red Hat to work on translation and font development. I also became busy in my medical practice and had little time left for open source development.

Now, I feel more comfortable using Linux to do my daily work than any other operating system. I also feel proud to be associated with a project that allows people to communicate in their own language. It also brought computing power to a population that was for a long time considered to be on the other side of the “digital divide” because they did not speak English. Bangla is actually one of the most widely spoken languages in the world, and this project removed a major barrier to access for a large chunk of the global population.

### Joining open source

Joining in on the open source movement is easy. Take the initiative to do something that is useful to yourself, and then think about how it could be useful to others. The key is to keep it freely available, and it can add untold value to the world.
--------------------------------------------------------------------------------

via: https://opensource.com/article/20/7/linux-bengali

作者:[Dr Anirban Mitra][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/mitradranirban
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/india-globe-map.jpg?itok=6sSEL5iO (India on the globe)
[2]: http://www.yudit.org/
@ -0,0 +1,79 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Impact of COVID-19 on the Global Digital Economy)
[#]: via: (https://www.networkworld.com/article/3566911/the-impact-of-covid-19-on-the-global-digital-economy.html)
[#]: author: (QTS https://www.networkworld.com/author/Andy-Patrizio/)

The Impact of COVID-19 on the Global Digital Economy
======
Post-Pandemic IT Landscape Demands a New Data Center Paradigm
QTS
Prior to COVID-19, the world was undergoing massive business transformation, illustrated by the stunning success of mega-scale Internet businesses such as Amazon Prime, Twitter, Uber, Netflix, Xbox and others that are now an integral part of our lives and exemplify the new global digital economy.

The ability to apply artificial intelligence (AI), machine learning (ML), speech recognition, location services, speed and identity tracking in real time enabled new applications and services once thought to be beyond the reach of computing technology.

Fueling this transformation are exponential increases in digitized data and the ability to apply enormous compute and storage capacity to it. Since 2016, digitization has created 90% of the world’s data. According to IDC, more than 59 zettabytes (ZB) of data are going to be created and consumed globally in 2020, and this is forecast to grow to 175 ZB by 2025. How much is 1 ZB, you ask? It is equivalent to a trillion gigabytes, or 100 million HD movies’ worth of data.

As it turns out, according to IDC, instead of hindering growth, COVID-19 is accelerating data growth, particularly in 2020 and 2021, due to abrupt increases in work-from-home employees, a changing mix of richer data sets, and a surge in video-based content consumption.

For the enterprise, an unforeseen byproduct is an even greater urgency for agility, adaptability and transformation. Business models are being disrupted while the digitalization of the economy is accelerating as new technologies and services serve a reshaped workforce.

The competitive landscape across all market sectors is changing. Now more than ever, business is looking to technology to be agile in the face of disruption and create new digitally enabled business models for the post-COVID “new normal.”

That will require new capabilities and expertise in thousands of data centers and networks behind the scenes. That infrastructure powers the critical applications and services keeping the economy afloat -- and all of us connected. The “cloud” lives in data centers, and data centers are the commerce platforms of the 21st century.
**Data centers are essential…the best ones are innovating**

At the outset of the pandemic, data centers were designated “essential businesses” since virtually all industries and consumers depend on them. The more advanced data centers were quickly recognized for their [ability to provide customers with remote access and management of their systems][1] without operators or enterprise customers having to be there physically.

[QTS Realty Trust (NYSE: QTS)][2] is at the forefront of providing these services to all its customers. Supporting its commitment to digitize its entire end-to-end systems and processes, QTS is the first and only multi-tenant data center operator with a sophisticated software-defined orchestration platform powered by AI, ML, predictive analytics (PA) and virtual reality (VR) technologies.

QTS’ API-driven [Service Delivery Platform (SDP)][3] empowers customers to interact with their data, services, and connectivity ecosystem by providing real-time visibility, access and dynamic control of critical metrics across hybrid and hyperscale environments from a single platform and/or mobile device. It is akin to having a software company within the data center, delivering the operational savings and business innovation that are central to every IT investment.

SDP applications leverage next-generation AI, ML and PA to accurately forecast power consumption, automate service provisioning, perform online ordering and manage assets. VR technologies are enabling new virtual collaboration tools and a 3D visualization application that renders an exact replication of a customer’s IT environment in real time.

QTS’ SDP was profiled in the Raymond James Industry Brief _Data Maps and Killer Apps_ (released June 2020), which surveyed the platforms of three global data center operators:

_“While all three platforms have the ability to track and report common data, the ease of use of the systems was quite different. Only QTS had the entire system wrapped up into an app that was available across multiple desktop, tablet, and mobile platforms, complete with 3D imaging and simple graphics that outlined the situation visually down to an individual rack within a cabinet. QTS’ system also has video capability using facial recognition to detect and identify employees and contractors separately, highlight an open cabinet door and other potential hazards inside the customer’s cage, and it is all either real time or on a recorded basis to highlight potential errors and problems. Live heat maps allow customers to see the areas with potential and existing performance issues and to see outages in real time and track down problems. As far as features and functionality, QTS’ SDP system was the clear winner.”_
**Customer experience is the dealmaker**

As consumers become more adamant in their demand for quality of experience in their digital lives, businesses must ensure the services they provide, and the data generated from them, are available in real time, on the go, via any network, and are personalized.

Post COVID, the ability of data centers to ensure an excellent customer experience will play an even greater role as large numbers of customers continue to work remotely with less on-premises interaction. Enterprises will seek data center operators that can ensure secure, ubiquitous, real-time access to services and data backed by superior customer support.

Given a purchasing decision based on performance and price between two equally qualified data center operators, the first tiebreaker is increasingly coming down to proven and documented customer support.

In the data center industry, QTS is the undisputed leader in customer service and support, boasting an [independent Net Promoter Score of 88][4] - more than double the average NPS score for data center companies (42).

Customers rated QTS highly in a range of service areas, including its customer service, service delivery platform, physical facilities, processes, responsiveness, and the service of onsite staff and the 24-hour [Operations Service Center][5]. QTS’ score of 88 is its highest yet and exceeds the NPS scores of companies well known for their customer service, including Starbucks (71) and Apple (72).

Enterprise, hyperscale and government organizations recognize that the post-COVID landscape is going to present an increasing need for innovation in their IT environments. In terms of IT service delivery, this means higher levels of transparency, visibility, compliance and sustainability, which are at the foundation of QTS’ Service Delivery Platform.

With innovation come new technologies and complexity, raising the profile of the best service and support partners for companies looking to re-establish themselves in a new, post-COVID competitive landscape.

For more information, visit [www.qtsdatacenters.com][6]
--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3566911/the-impact-of-covid-19-on-the-global-digital-economy.html

作者:[QTS][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://www.qtsdatacenters.com/company/news/2020/03/26/qts-reports-significant-increase-in-customer-usage-of-sdp
[2]: http://www.qtsdatacenters.com/
[3]: https://www.qtsdatacenters.com/why-qts/service-delivery-platform
[4]: https://www.qtsdatacenters.com/why-qts/customer-benefits/nps
[5]: https://www.qtsdatacenters.com/why-qts/customer-benefits/operations-support-center
[6]: http://www.qtsdatacenters.com
@ -0,0 +1,87 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Options grow for migrating mainframe apps to the cloud)
[#]: via: (https://www.networkworld.com/article/3567058/options-grow-for-migrating-mainframe-apps-to-the-cloud.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)

Options grow for migrating mainframe apps to the cloud
======
Mainframe modernization vendor LzLabs is targeting Big Iron enterprises that want to move legacy applications into public or private cloud environments.
Thinkstock
Mainframe users looking to bring legacy applications into the public or private cloud world have a new option: [LzLabs][1], a mainframe software migration vendor.

Founded in 2011 and based in Switzerland, LzLabs this week said it's setting up shop in North America to help mainframe users move legacy applications – think COBOL – into the more modern and flexible cloud application environment.

**Read also: [How to plan a software-defined data-center network][2]**

At the heart of LzLabs' service is its Software Defined Mainframe (SDM), an open-source, Eclipse-based system that's designed to let legacy applications, particularly those without typically available source code, such as COBOL, run in the cloud without recompilation.

The company says it works closely with cloud providers such as Amazon Web Services, and its service can be implemented on [Microsoft Azure][3]. It also works with other technology partners such as Red Hat and Accenture.

Legacy applications have become locked into mainframe infrastructures, and the lack of skilled personnel who understand the platform and its development process has revealed the urgent need to move these critical applications to open platforms and the cloud, according to Mark Cresswell, CEO of LzLabs.

"With SDM, customers can run mainframe workloads on x86 or the cloud without recompilation or data reformatting. This approach significantly reduces the risks associated with mainframe migration, enables incremental modernization and integrates applications with DevOps, open-source and the cloud," the company [stated][4].

Cresswell pointed to news stories from the COVID-19 pandemic reporting that many mainframe and COBOL-based state government systems were having trouble keeping up with the huge volume of unemployment claims hitting those systems.

For example, [CNN in April reported][5] that in New Jersey, Gov. Phil Murphy put out a call for volunteers who know how to code COBOL because many of the state's systems still run on older mainframes. Connecticut is also reportedly struggling to process the large volume of unemployment claims with its decades-old mainframe; it's working to develop a new benefits system with Maine, Rhode Island, Mississippi and Oklahoma, but the system won't be finished before next year, according to the CNN story.

"An estimated 70% of the world's commercial transactions are processed by a mainframe application at some point in their cycle, which means U.S. state governments are merely the canaries in the coal mine. Banks, insurance, telecom and manufacturing companies (to mention a few) should be planning their exit," Cresswell wrote in a [blog][6].

LzLabs is part of an ecosystem of mainframe modernization service providers that includes [Astadia][7], [Asysco][8], [GTSoftware][9], [Micro Focus][10] and others.

Large cloud players are also involved in modernizing mainframe applications. For example, Google Cloud in February [bought mainframe cloud-migration service firm Cornerstone Technology][11] with an eye toward helping Big Iron customers move workloads to the private and public cloud. Google said the Cornerstone technology – found in its [G4 platform][12] – will shape the foundation of its future mainframe-to-Google Cloud offerings and help mainframe customers modernize applications and infrastructure.

"Through the use of automated processes, Cornerstone's tools can break down your Cobol, PL/1, or Assembler programs into services and then make them cloud native, such as within a managed, containerized environment," wrote Howard Weale, Google's director, transformation practice, in a [blog][13] about the acquisition.

"As the industry increasingly builds applications as a set of services, many customers want to break their mainframe monolith programs into either Java monoliths or Java microservices," Weale stated.

The Cornerstone move is also part of Google's effort to stay competitive in the face of mainframe-migration offerings from [Amazon Web Services][14], [IBM/Red Hat][15] and [Microsoft][16].

While the number of services looking to aid in mainframe modernization might be increasing, the actual migration to those services should be well planned, experts say.

A _Network World_ article on [mainframe migration options][17] from 2017 still holds true today: "The problem facing enterprises wishing to move away from mainframes has always been the 'all-or-nothing' challenge posed by their own workloads. These workloads are so interdependent and complex that everything has to be moved at once or the enterprise suffers. The suffering typically centers on underperformance, increased complexity caused by bringing functions over piecemeal, or having to add new development or operational staff to support the target environment. In the end, the savings on hardware or software turns out to be less than the increased costs required of a hybrid computing solution."

Gartner last year also warned that migrating legacy applications should be undertaken very deliberately.

"The value gained by moving applications from the traditional enterprise platform onto the next 'bright, shiny thing' rarely provides an improvement in the business process or the company's bottom line. A great deal of analysis must be performed and each cost accounted for," Gartner stated in a 2019 report, [Considering Leaving Legacy IBM Platforms? Beware, as Cost Savings May Disappoint, While Risking Quality][18]. "Legacy platforms may seem old, outdated and due for replacement. Yet IBM and other vendors are continually integrating open-source tools to appeal to more developers while updating the hardware. Application leaders should reassess the capabilities and quality of these platforms before leaving them."

Join the Network World communities on [Facebook][19] and [LinkedIn][20] to comment on topics that are top of mind.
--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3567058/options-grow-for-migrating-mainframe-apps-to-the-cloud.html

作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://twitter.com/LzLabsGmbH
[2]: https://www.networkworld.com/article/3284352/data-center/how-to-plan-a-software-defined-data-center-network.html
[3]: https://www.astadia.com/video/mainframe-transformation-azure-is-the-new-mainframe
[4]: https://appsource.microsoft.com/en-us/product/web-apps/lzlabsgmbh-5083555.lzlabs-softwaredefinedmainframe?src=office&tab=Overview
[5]: https://www.cnn.com/2020/04/08/business/coronavirus-cobol-programmers-new-jersey-trnd/index.html
[6]: https://blog.lzlabs.com/cobol-crisis-for-us-state-government-departments-is-the-canary-in-the-coal-mine
[7]: https://www.astadia.com/blog/mainframe-migration-to-azure-in-5-steps
[8]: https://www.asysco.com/code-transformation/
[9]: https://www.gtsoftware.com/services/migration-services/
[10]: https://www.microfocus.com/en-us/home
[11]: https://www.networkworld.com/article/3528451/google-cloud-moves-to-aid-mainframe-migration.html
[12]: https://www.cornerstone.nl/solutions/modernization
[13]: https://cloud.google.com/blog/topics/inside-google-cloud/helping-customers-migrate-their-mainframe-workloads-to-google-cloud
[14]: https://aws.amazon.com/blogs/enterprise-strategy/yes-you-should-modernize-your-mainframe-with-the-cloud/
[15]: https://www.networkworld.com/article/3438542/ibm-z15-mainframe-amps-up-cloud-security-features.html
[16]: https://azure.microsoft.com/en-us/migration/mainframe/
[17]: https://www.networkworld.com/article/3192652/why-are-mainframes-still-in-the-enterprise-data-center.html
[18]: https://www.gartner.com/doc/3905276
[19]: https://www.facebook.com/NetworkWorld/
[20]: https://www.linkedin.com/company/network-world
@ -1,310 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Turn your Raspberry Pi homelab into a network filesystem)
[#]: via: (https://opensource.com/article/20/5/nfs-raspberry-pi)
[#]: author: (Chris Collins https://opensource.com/users/clcollins)

Turn your Raspberry Pi homelab into a network filesystem
======
Add shared filesystems to your homelab with an NFS server.
![Blue folders flying in the clouds above a city skyline][1]
A shared filesystem is a great way to add versatility and functionality to a homelab. Having a centralized filesystem shared to the clients in the lab makes organizing data, doing backups, and sharing data considerably easier. This is especially useful for web applications load-balanced across multiple servers and for persistent volumes used by [Kubernetes][2], as it allows pods to be spun up with persistent data on any number of nodes.

Whether your homelab is made up of ordinary computers, surplus enterprise servers, or Raspberry Pis or other single-board computers (SBCs), a shared filesystem is a useful asset, and a network filesystem (NFS) server is a great way to create one.

I have written before about [setting up a "private cloud at home"][3], a homelab made up of Raspberry Pis or other SBCs and maybe some other consumer hardware or a desktop PC. An NFS server is an ideal way of sharing data between these components. Since most SBCs' operating systems (OSes) run off an SD card, there are some challenges. SD cards suffer from increased failures, especially when used as the OS disk for a computer, and they are not made to be constantly read from and written to. What you really need is a real hard drive: they are generally cheaper per gigabyte than SD cards, especially for larger disks, and they are less likely to sustain failures. Raspberry Pi 4s now come with USB 3.0 ports, and USB 3.0 hard drives are ubiquitous and affordable. It's a perfect match. For this project, I will use a 2TB USB 3.0 external hard drive plugged into a Raspberry Pi 4 running an NFS server.

![Raspberry Pi with a USB hard disk][4]

(Chris Collins, [CC BY-SA 4.0][5])
### Install the NFS server software

I am running Fedora Server on a Raspberry Pi, but this project can be done with other distributions as well. To run an NFS server on Fedora, you need the nfs-utils package, and luckily it is already installed (at least in Fedora 31). You also need the rpcbind package if you are planning to run NFSv3 services, but it is not strictly required for NFSv4.

If these packages are not already on your system, install them with the **dnf** command:

```
# Install nfs-utils and rpcbind
$ sudo dnf install nfs-utils rpcbind
```

Raspbian is another popular OS used with Raspberry Pis, and the setup is almost exactly the same. The package names differ, but that is about the only major difference. To install an NFS server on a system running Raspbian, you need the following packages:

  * **nfs-common:** These files are common to NFS servers and clients
  * **nfs-kernel-server:** The main NFS server software package

Raspbian uses **apt-get** for package management (not **dnf**, as Fedora does), so use that to install the packages:

```
# For a Raspbian system, use apt-get to install the NFS packages
$ sudo apt-get install nfs-common nfs-kernel-server
```
### Prepare a USB hard drive as storage

As I mentioned above, a USB hard drive is a good choice for providing storage for Raspberry Pis or other SBCs, especially because the SD card used for the OS disk image is not ideal. For your private cloud at home, you can use cheap USB 3.0 hard drives for large-scale storage. Plug the disk in and use **fdisk** to find out the device ID assigned to it, so you can work with it.

```
# Find your disk using fdisk
# Unrelated disk content omitted
$ sudo fdisk -l

Disk /dev/sda: 1.84 TiB, 2000398933504 bytes, 3907029167 sectors
Disk model: BUP Slim BK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xe3345ae9

Device     Boot Start        End    Sectors  Size Id Type
/dev/sda1        2048 3907028991 3907026944  1.8T 83 Linux
```

For clarity, in the example output above, I omitted all the disks except the one I'm interested in. You can see the USB disk I want to use was assigned the device **/dev/sda**, and you can see some information about the model (**Disk model: BUP Slim BK**), which helps me identify the correct disk. The disk already has a partition, and its size confirms it is the disk I am looking for.

_Note:_ Make sure to identify the correct disk and partition for your device. It may be different than the example above.
Each partition created on a drive gets a special universally unique identifier (UUID). The computer uses the UUID to make sure it is mounting the correct partition to the correct location using the **/etc/fstab** config file. You can retrieve the UUID of the partition using the **blkid** command:

```
# Get the block device attributes for the partition
# Make sure to use the partition that applies in your case. It may differ.
$ sudo blkid /dev/sda1

/dev/sda1: LABEL="backup" UUID="bd44867c-447c-4f85-8dbf-dc6b9bc65c91" TYPE="xfs" PARTUUID="e3345ae9-01"
```

In this case, the UUID of **/dev/sda1** is **bd44867c-447c-4f85-8dbf-dc6b9bc65c91**. Yours will be different, so make a note of it.
### Configure the Raspberry Pi to mount this disk on startup, then mount it

Now that you have identified the disk and partition you want to use, you need to tell the computer how to mount it, to do so whenever it boots up, and to go ahead and mount it now. Because this is a USB disk and might be unplugged, you will also configure the Raspberry Pi to not wait on boot if the disk is not plugged in or is otherwise unavailable.

In Linux, this is done by adding the partition to the **/etc/fstab** configuration file, including where you want it to be mounted and some arguments to tell the computer how to treat it. This example will mount the partition to **/srv/nfs**, so start by creating that path:

```
# Create the mountpoint for the disk partition
$ sudo mkdir -p /srv/nfs
```

Next, modify the **/etc/fstab** file using the following syntax format:

```
<disk id> <mountpoint> <filesystem type> <options> <fs_freq> <fs_passno>
```

Use the UUID you identified earlier for the disk ID. As I mentioned in the prior step, the mountpoint is **/srv/nfs**. For the filesystem type, it is usually best to select the actual filesystem, but since this will be a USB disk, use **auto**.

For the options values, use **nosuid,nodev,nofail**.
#### An aside about man pages

That said, there are a _lot_ of possible options, and the manual (man) pages are the best way to see what they are. Investigating the man page for fstab is a good place to start:

```
# Open the man page for fstab
$ man fstab
```

This opens the manual/documentation associated with the fstab command. In the man page, each of the options is broken down to show what it does and the common selections. For example, **The fourth field (fs_mntopts)** gives some basic information about the options that work in that field and directs you to **man (8) mount** for a more in-depth description of the mount options. That makes sense, as the **/etc/fstab** file, in essence, tells the computer how to automate mounting disks, in the same way you would manually use the mount command.

You can get more information about the options you will use from mount's man page. The numeral 8, in parentheses, indicates the man page section. In this case, section 8 is for _System Administration tools and Daemons_.

Helpfully, you can get a list of the standard sections from the man page for **man**.

Back to mounting the disk, take a look at **man 8 mount**:

```
# Open section 8 of the man pages for mount
$ man 8 mount
```
In this man page, you can examine what the options listed above do:

  * **nosuid:** Do not honor the suid/sgid bits. Do not allow any files that might be on the USB disk to be executed as root. This is a good security practice.
  * **nodev:** Do not interpret character or block special devices on the filesystem; i.e., do not honor any device nodes that might be on the USB disk. Another good security practice.
  * **nofail:** Do not log any errors if the device does not exist. This is a USB disk and might not be plugged in, so it will be ignored if that is the case.

Returning to the line you are adding to the **/etc/fstab** file, there are two final options: **fs_freq** and **fs_passno**. Their values are related to somewhat legacy options, and _most_ modern systems just use a **0** for both, especially for filesystems on USB disks. The fs_freq value relates to the dump command and making dumps of the filesystem. The fs_passno value defines which filesystems to **fsck** on boot and their order. If it's set, usually the root partition would be **1** and any other filesystems would be **2**. Set the value to **0** to skip using **fsck** on this partition.

In your preferred editor, open the **/etc/fstab** file and add the entry for the partition on the USB disk, replacing the values here with those gathered in the previous steps.

```
# With sudo, or as root, add the partition info to the /etc/fstab file
UUID="bd44867c-447c-4f85-8dbf-dc6b9bc65c91"    /srv/nfs    auto    nosuid,nodev,nofail,noatime    0 0
```
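The heading for this section promises to mount the disk now as well. A minimal way to do that, assuming the fstab entry above is in place, is to ask mount to process **/etc/fstab** and then confirm the result:

```
# Mount everything in /etc/fstab that is not yet mounted
$ sudo mount -a

# Confirm the partition is mounted at the expected mountpoint
$ df -h /srv/nfs
```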
### Enable and start the NFS server

With the packages installed and the partition added to your **/etc/fstab** file, you can now go ahead and start the NFS server. On a Fedora system, you need to enable and start two services: **rpcbind** and **nfs-server**. Use the **systemctl** command to accomplish this:

```
# Start NFS server and rpcbind
$ sudo systemctl enable rpcbind.service
$ sudo systemctl enable nfs-server.service
$ sudo systemctl start rpcbind.service
$ sudo systemctl start nfs-server.service
```

On Raspbian or other Debian-based distributions, you just need to enable and start the **nfs-kernel-server** service using the **systemctl** command the same way as above.
#### RPCBind

The rpcbind utility is used to map remote procedure call (RPC) services to the ports on which they listen. According to the rpcbind man page:

> "When an RPC service is started, it tells rpcbind the address at which it is listening, and the RPC program numbers it is prepared to serve. When a client wishes to make an RPC call to a given program number, it first contacts rpcbind on the server machine to determine the address where RPC requests should be sent."

In the case of an NFS server, rpcbind maps the protocol number for NFS to the port on which the NFS server is listening. However, NFSv4 does not require the use of rpcbind. If you use _only_ NFSv4 (by removing versions two and three from the configuration), rpcbind is not required. I've included it here for backward compatibility with NFSv3.
### Export the mounted filesystem

The NFS server decides which filesystems are shared with (exported to) which remote clients based on another configuration file, **/etc/exports**. This file is just a map of host internet protocol (IP) addresses (or subnets) to the filesystems to be shared and some options (read-only or read-write, root squash, etc.). The format of the file is:

```
<directory> <host or hosts>(options)
```

In this example, you will export the partition mounted to **/srv/nfs**. This is the "directory" piece.

The second part, the host or hosts, includes the hosts you want to export this partition to. These can be specified as a single host with a fully qualified domain name or hostname, the IP address of the host, a number of hosts using wildcard characters to match domains (e.g., *.example.org), IP networks (e.g., classless inter-domain routing, or CIDR, notation), or netgroups.

The third piece includes options to apply to the export:

  * **ro/rw:** Export the filesystem as read only or read write
  * **wdelay:** Delay writes to the disk if another write is imminent, to improve performance (this is _probably_ not as useful with a solid-state USB disk, if that is what you are using)
  * **root_squash:** Prevent any root users on the client from having root access on the host, and set the root UID to **nfsnobody** as a security precaution
Test exporting the partition you have mounted at **/srv/nfs** to a single client—for example, a laptop. Identify your client's IP address (my laptop's is **192.168.2.64**, but yours will likely be different). You could share it to a large subnet, but for testing, limit it to the single IP address. The CIDR notation for just this IP is **192.168.2.64/32**; a **/32** subnet is just a single IP.

Using your preferred editor, edit the **/etc/exports** file with your directory, host CIDR, and the **rw** and **root_squash** options:

```
# Edit your /etc/exports file like so, substituting the information from your systems
/srv/nfs 192.168.2.64/32(rw,root_squash)
```

_Note:_ If you copied the **/etc/exports** file from another location or otherwise overwrote the original with a copy, you may need to restore the SELinux context for the file. You can do this with the **restorecon** command:

```
# Restore the SELinux context of the /etc/exports file
$ sudo restorecon /etc/exports
```

Once this is done, restart the NFS server to pick up the changes to the **/etc/exports** file:

```
# Restart the nfs server
$ sudo systemctl restart nfs-server.service
```
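As a quick sanity check at this point (not part of the original steps, but using standard nfs-utils tooling), you can confirm the export is active before moving on:

```
# Show the current export table, with options, as the server sees it
$ sudo exportfs -v

# Or query the export list the way a client would (uses rpcbind/showmount)
$ showmount -e localhost
```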
### Open the firewall for the NFS service

Some systems, by default, do not run a [firewall service][6]. Raspbian, for example, defaults to open iptables rules, with ports opened by different services immediately available from outside the machine. Fedora Server, by contrast, runs the firewalld service by default, so you must open the port for the NFS server (and rpcbind, if you will be using NFSv3). You can do this with the **firewall-cmd** command.

Check the zones used by firewalld and get the default zone. For Fedora Server, this will be the FedoraServer zone:

```
# List the zones
# Output omitted for brevity
$ sudo firewall-cmd --list-all-zones

# Retrieve just the default zone info
# Make a note of the default zone
$ sudo firewall-cmd --get-default-zone

# Permanently add the nfs service to the list of allowed ports
$ sudo firewall-cmd --add-service=nfs --permanent

# For NFSv3, we need to add a few more services: nfs3, mountd, rpc-bind
$ sudo firewall-cmd --add-service={nfs3,mountd,rpc-bind} --permanent

# Check the services for the zone, substituting the default zone in use by your system
$ sudo firewall-cmd --list-services --zone=FedoraServer

# If all looks good, reload firewalld
$ sudo firewall-cmd --reload
```

And with that, you have successfully configured the NFS server with your mounted USB disk partition and exported it to your test system for sharing. Now you can test mounting it on the system you added to the exports list.
### Test the NFS exports

First, from the NFS server, create a file to read in the **/srv/nfs** directory:

```
# Create a test file to share
$ echo "Can you see this?" >> /srv/nfs/nfs_test
```

Now, on the client system you added to the exports list, first make sure the NFS client packages are installed. On Fedora systems, this is the **nfs-utils** package, and it can be installed with **dnf**. Raspbian systems have the **libnfs-utils** package, which can be installed with **apt-get**.

Install the NFS client packages:

```
# Install the nfs-utils package with dnf
$ sudo dnf install nfs-utils
```

Once the client package is installed, you can test out the NFS export. Again on the client, use the mount command with the IP of the NFS server and the path to the export, and mount it to a location on the client, which for this test is the **/mnt** directory. In this example, my NFS server's IP is **192.168.2.109**, but yours will likely be different:

```
# Mount the export from the NFS server to the client host
# Make sure to substitute the information for your own hosts
$ sudo mount 192.168.2.109:/srv/nfs /mnt

# See if the nfs_test file is visible:
$ cat /mnt/nfs_test
Can you see this?
```

Success! You now have a working NFS server for your homelab, ready to share files with multiple hosts, allow multi-read/write access, and provide centralized storage and backups for your data. There are many options for shared storage for homelabs, but NFS is venerable, efficient, and a great option to add to your "private cloud at home" homelab. Future articles in this series will expand on how to automatically mount NFS shares on clients and how to use NFS as a storage class for Kubernetes Persistent Volumes.
--------------------------------------------------------------------------------

via: https://opensource.com/article/20/5/nfs-raspberry-pi

作者:[Chris Collins][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/clcollins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_cloud21x_cc.png?itok=5UwC92dO (Blue folders flying in the clouds above a city skyline)
[2]: https://opensource.com/resources/what-is-kubernetes
[3]: https://opensource.com/article/20/5/disk-image-raspberry-pi
[4]: https://opensource.com/sites/default/files/uploads/raspberrypi_with_hard-disk.jpg (Raspberry Pi with a USB hard disk)
[5]: https://creativecommons.org/licenses/by-sa/4.0/
[6]: https://opensource.com/article/18/9/linux-iptables-firewalld
@ -1,347 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Protect your system with fail2ban and firewalld blacklists)
[#]: via: (https://fedoramagazine.org/protect-your-system-with-fail2ban-and-firewalld-blacklists/)
[#]: author: (hobbes1069 https://fedoramagazine.org/author/hobbes1069/)

Protect your system with fail2ban and firewalld blacklists
======

![][1]
If you run a server with public-facing SSH access, you might have experienced malicious login attempts. This article shows how to use two utilities to keep intruders out of your systems.

To protect against repeated SSH login attempts, we’ll look at _fail2ban_. And if you don’t travel much, and perhaps stay in one or two countries, you can configure _firewalld_ to only [allow access from the countries you choose][2].

First let’s work through a little terminology for those not familiar with the various applications we’ll need to make this work:

**fail2ban:** Daemon to ban hosts that cause multiple authentication errors.

fail2ban will monitor the systemd journal to look for failed authentication attempts for whichever jails have been enabled. After the specified number of failed attempts, it will add a firewall rule to block that specific IP address for the configured amount of time.

**firewalld:** A firewall daemon with a D-Bus interface providing a dynamic firewall.

Unless you’ve manually decided to use traditional iptables, you’re already using firewalld on all supported releases of Fedora and CentOS.
### Assumptions
|
||||
|
||||
* The host system has an internet connection and is either fully exposed directly, through a DMZ (both REALLY bad ideas unless you know what you’re doing), or has a port being forwarded to it from a router.
|
||||
* While most of this might apply to other systems, this article assumes a current version of Fedora (31 and up) or RHEL/CentOS 8. On CentOS, you must first enable the Fedora EPEL repo (see the command after this list).
|
||||
|
||||
|
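On CentOS, that one-time setup looks like this:

```
$ sudo dnf install epel-release
```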
||||
|
||||
### Install & Configuration
|
||||
|
||||
#### Fail2Ban
|
||||
|
||||
More than likely, whichever firewalld zone is active already allows SSH access, but the sshd service itself is not enabled by default. To start it manually, without permanently enabling it on boot:
|
||||
|
||||
```
|
||||
$ sudo systemctl start sshd
|
||||
```
|
||||
|
||||
Or to start and enable on boot:
|
||||
|
||||
```
|
||||
$ sudo systemctl enable --now sshd
|
||||
```
|
||||
|
||||
The next step is to install, configure, and enable fail2ban. As usual the install can be done from the command line:
|
||||
|
||||
```
|
||||
$ sudo dnf install fail2ban
|
||||
```
|
||||
|
||||
Once installed, the next step is to configure a jail (a service you want to monitor and ban at whatever thresholds you’ve set). By default, IPs are banned for 1 hour (which is not nearly long enough). The best practice is to override the system defaults using *.local files instead of directly modifying the *.conf files. If we look at my jail.local, we see:
|
||||
|
||||
```
|
||||
# cat /etc/fail2ban/jail.local
|
||||
[DEFAULT]
|
||||
|
||||
# "bantime" is the number of seconds that a host is banned.
|
||||
bantime = 1d
|
||||
|
||||
# A host is banned if it has generated "maxretry" during the last "findtime"
|
||||
findtime = 1h
|
||||
|
||||
# "maxretry" is the number of failures before a host get banned.
|
||||
maxretry = 5
|
||||
```
|
||||
|
||||
Turning this into plain language: after 5 failed attempts within the last hour, the IP will be blocked for 1 day. There are also options for increasing the ban time for IPs that get banned multiple times, but that’s the subject for another article.
|
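As a teaser, here’s a sketch of what incremental banning can look like in _jail.local_; the options below exist in fail2ban 0.11+, but check the names against your installed version before relying on them:

```
[DEFAULT]
# Grow the ban time each time the same host is banned again (fail2ban 0.11+)
bantime.increment = true
# Never ban a host for longer than five weeks
bantime.maxtime = 5w
```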
||||
|
||||
The next step is to enable a jail. In this tutorial, sshd is shown, but the steps are more or less the same for other services. Create a configuration file inside _/etc/fail2ban/jail.d_. Here’s mine:
|
||||
|
||||
```
|
||||
# cat /etc/fail2ban/jail.d/sshd.local
|
||||
[sshd]
|
||||
enabled = true
|
||||
```
|
||||
|
||||
It’s that simple! A lot of the configuration is already handled within the package built for Fedora (Hint: I’m the current maintainer). Next enable and start the fail2ban service.
|
||||
|
||||
```
|
||||
$ sudo systemctl enable --now fail2ban
|
||||
```
|
||||
|
||||
Hopefully there were no immediate errors. Either way, check the status of fail2ban using the following command:
|
||||
|
||||
```
|
||||
$ sudo systemctl status fail2ban
|
||||
```
|
||||
|
||||
If it started without errors it should look something like this:
|
||||
|
||||
```
|
||||
$ systemctl status fail2ban
|
||||
● fail2ban.service - Fail2Ban Service
|
||||
Loaded: loaded (/usr/lib/systemd/system/fail2ban.service; disabled; vendor preset: disabled)
|
||||
Active: active (running) since Tue 2020-06-16 07:57:40 CDT; 5s ago
|
||||
Docs: man:fail2ban(1)
|
||||
Process: 11230 ExecStartPre=/bin/mkdir -p /run/fail2ban (code=exited, status=0/SUCCESS)
|
||||
Main PID: 11235 (f2b/server)
|
||||
Tasks: 5 (limit: 4630)
|
||||
Memory: 12.7M
|
||||
CPU: 109ms
|
||||
CGroup: /system.slice/fail2ban.service
|
||||
└─11235 /usr/bin/python3 -s /usr/bin/fail2ban-server -xf start
|
||||
Jun 16 07:57:40 localhost.localdomain systemd[1]: Starting Fail2Ban Service…
|
||||
Jun 16 07:57:40 localhost.localdomain systemd[1]: Started Fail2Ban Service.
|
||||
Jun 16 07:57:41 localhost.localdomain fail2ban-server[11235]: Server ready
|
||||
```
|
||||
|
||||
If fail2ban was started recently, it is unlikely to show anything interesting going on just yet. To check the status of fail2ban and make sure the jail is enabled, enter:
|
||||
|
||||
```
|
||||
$ sudo fail2ban-client status
|
||||
Status
|
||||
|- Number of jail: 1
|
||||
`- Jail list: sshd
|
||||
```
|
||||
|
||||
This shows the high-level status: the sshd jail is enabled. If multiple jails were enabled, they would all show up here.
|
||||
|
||||
To check the detailed status of a jail, just add the jail name to the previous command. Here’s the output from my system, which has been running for a while. I have removed the banned IPs from the output:
|
||||
|
||||
```
|
||||
$ sudo fail2ban-client status sshd
|
||||
Status for the jail: sshd
|
||||
|- Filter
|
||||
| |- Currently failed: 8
|
||||
| |- Total failed: 4399
|
||||
| `- Journal matches: _SYSTEMD_UNIT=sshd.service + _COMM=sshd
|
||||
`- Actions
|
||||
|- Currently banned: 101
|
||||
|- Total banned: 684
|
||||
`- Banned IP list: ...
|
||||
```
|
||||
|
||||
Monitoring the fail2ban log file for intrusion attempts can be achieved by “tailing” the log:
|
||||
|
||||
```
|
||||
$ sudo tail -f /var/log/fail2ban.log
|
||||
```
|
||||
|
||||
Tail is a nice little command-line utility that, by default, shows the last 10 lines of a file. Adding “-f” tells it to follow the file, which is a great way to watch a file that’s still being written to.
|
||||
|
||||
Since the output has real IPs in it, a sample won’t be provided, but it’s pretty human readable. The INFO lines will usually be attempts at a login. If enough attempts are made from a specific IP address, you will see a NOTICE line showing that the IP address was banned. After the ban time has been reached, you will see a NOTICE unban line.
|
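If you manage to ban yourself while testing (it happens), _fail2ban-client_ can lift a ban manually. A sketch, using the sshd jail and a placeholder address from the documentation range:

```
$ sudo fail2ban-client set sshd unbanip 192.0.2.10
```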
||||
|
||||
Look out for WARNING lines. Most often these appear when a ban is added but fail2ban finds the IP address already in its ban database, which means banning may not be working correctly. If you recently installed the fail2ban package, it should be set up for firewalld rich rules. The package was only switched from “ipset” to “rich rules” as of _fail2ban-0.11.1-6_, so if you have an older install of fail2ban, it may still be trying to use the ipset method, which utilizes legacy iptables and is not very reliable.
|
||||
|
||||
#### FirewallD Configuration
|
||||
|
||||
##### Reactive or Proactive?
|
||||
|
||||
There are two strategies that can be used either separately or together: reactive or proactive permanent blacklisting of individual IP addresses or subnets based on country of origin.
|
||||
|
||||
For the reactive approach, once fail2ban has been running for a while, it’s a good idea to take a look at how “bad is bad” by running _sudo fail2ban-client status sshd_ again. There will most likely be many banned IP addresses. Just pick one and try running _whois_ on it. There can be quite a bit of interesting information in the output, but for this method, only the country of origin is of importance. To keep things simple, let’s filter out everything but the country.
|
||||
|
||||
For this example a few well known domain names will be used:
|
||||
|
||||
```
|
||||
$ whois google.com | grep -i country
|
||||
Registrant Country: US
|
||||
Admin Country: US
|
||||
Tech Country: US
|
||||
```
|
||||
|
||||
```
|
||||
$ whois rpmfusion.org | grep -i country
|
||||
Registrant Country: FR
|
||||
```
|
||||
|
||||
```
|
||||
$ whois aliexpress.com | grep -i country
|
||||
Registrant Country: CN
|
||||
```
|
||||
|
||||
The reason for the _grep -i_ is to make grep case-insensitive; while most entries use “Country”, some are in all lowercase, so this method matches regardless.
|
||||
|
||||
Now that the country of origin of an intrusion attempt is known the question is, “Does anyone from that country have a legitimate reason to connect to this computer?” If the answer is NO, then it should be acceptable to block the entire country.
|
||||
|
||||
Functionally, the proactive approach is not very different from the reactive approach; however, there are countries from which intrusion attempts are very common. If the system neither resides in one of those countries, nor has any customers originating from them, then why not add them to the blacklist now rather than waiting?
|
||||
|
||||
##### Blacklisting Script and Configuration
|
||||
|
||||
So how do you do that? With FirewallD ipsets. I developed the following script to automate the process as much as possible:
|
||||
|
||||
```
|
||||
#!/bin/bash
|
||||
# Based on the below article
|
||||
# https://www.linode.com/community/questions/11143/top-tip-firewalld-and-ipset-country-blacklist
|
||||
|
||||
# Source the blacklisted countries from the configuration file
|
||||
. /etc/blacklist-by-country
|
||||
|
||||
# Create a temporary working directory
|
||||
ipdeny_tmp_dir=$(mktemp -d -t blacklist-XXXXXXXXXX)
|
||||
pushd $ipdeny_tmp_dir
|
||||
|
||||
# Download the latest network addresses by country file
|
||||
curl -LO http://www.ipdeny.com/ipblocks/data/countries/all-zones.tar.gz
|
||||
tar xf all-zones.tar.gz
|
||||
|
||||
# For updates, remove the ipset blacklist and recreate
|
||||
if firewall-cmd -q --zone=drop --query-source=ipset:blacklist; then
|
||||
firewall-cmd -q --permanent --delete-ipset=blacklist
|
||||
fi
|
||||
|
||||
# Create the ipset blacklist which accepts both IP addresses and networks
|
||||
firewall-cmd -q --permanent --new-ipset=blacklist --type=hash:net \
|
||||
--option=family=inet --option=hashsize=4096 --option=maxelem=200000 \
|
||||
--set-description="An ipset list of networks or ips to be dropped."
|
||||
|
||||
# Add the address ranges by country per ipdeny.com to the blacklist
|
||||
for country in $countries; do
|
||||
firewall-cmd -q --permanent --ipset=blacklist \
|
||||
--add-entries-from-file=./$country.zone && \
|
||||
echo "Added $country to blacklist ipset."
|
||||
done
|
||||
|
||||
# Block individual IPs if the configuration file exists and is not empty
|
||||
if [ -s "/etc/blacklist-by-ip" ]; then
|
||||
echo "Adding IPs blacklists."
|
||||
firewall-cmd -q --permanent --ipset=blacklist \
|
||||
--add-entries-from-file=/etc/blacklist-by-ip && \
|
||||
echo "Added IPs to blacklist ipset."
|
||||
fi
|
||||
|
||||
# Add the blacklist ipset to the drop zone if not already setup
|
||||
if firewall-cmd -q --zone=drop --query-source=ipset:blacklist; then
|
||||
echo "Blacklist already in firewalld drop zone."
|
||||
else
|
||||
echo "Adding ipset blacklist to firewalld drop zone."
|
||||
firewall-cmd --permanent --zone=drop --add-source=ipset:blacklist
|
||||
fi
|
||||
|
||||
firewall-cmd -q --reload
|
||||
|
||||
popd
|
||||
rm -rf $ipdeny_tmp_dir
|
||||
```
|
||||
|
||||
This should be installed as _/usr/local/sbin/firewalld-blacklist_, and don’t forget to make it executable!
|
||||
|
||||
```
|
||||
$ sudo chmod +x /usr/local/sbin/firewalld-blacklist
|
||||
```
|
||||
|
||||
Then create a configuration file, _/etc/blacklist-by-country_:
|
||||
|
||||
```
|
||||
# Which countries should be blocked?
|
||||
# Use the two letter designation separated by a space.
|
||||
countries=""
|
||||
```
|
||||
|
||||
And create another configuration file, _/etc/blacklist-by-ip_, which takes just one IP address per line without any additional formatting.
|
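For example, a sketch using placeholder addresses from the documentation ranges:

```
# cat /etc/blacklist-by-ip
192.0.2.10
203.0.113.77
```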
||||
|
||||
For this example 10 random countries were selected from the ipdeny zones:
|
||||
|
||||
```
|
||||
# ls | shuf -n 10 | sed "s/\.zone//g" | tr '\n' ' '
|
||||
nl ee ie pk is sv na om gp bn
|
||||
```
|
||||
|
||||
Now as long as at least one country has been added to the config file it’s ready to run!
|
||||
|
||||
```
|
||||
$ sudo firewalld-blacklist
|
||||
% Total % Received % Xferd Average Speed Time Time Time Current
|
||||
Dload Upload Total Spent Left Speed
|
||||
100 142 100 142 0 0 1014 0 --:--:-- --:--:-- --:--:-- 1014
|
||||
100 662k 100 662k 0 0 989k 0 --:--:-- --:--:-- --:--:-- 989k
|
||||
Added nl to blacklist ipset.
|
||||
Added ee to blacklist ipset.
|
||||
Added ie to blacklist ipset.
|
||||
Added pk to blacklist ipset.
|
||||
Added is to blacklist ipset.
|
||||
Added sv to blacklist ipset.
|
||||
Added na to blacklist ipset.
|
||||
Added om to blacklist ipset.
|
||||
Added gp to blacklist ipset.
|
||||
Added bn to blacklist ipset.
|
||||
Adding ipset blacklist to firewalld drop zone.
|
||||
success
|
||||
```
|
||||
|
||||
To verify that the firewalld blacklist was successful, check the drop zone and blacklist ipset:
|
||||
|
||||
```
|
||||
$ sudo firewall-cmd --info-zone=drop
|
||||
drop (active)
|
||||
target: DROP
|
||||
icmp-block-inversion: no
|
||||
interfaces:
|
||||
sources: ipset:blacklist
|
||||
services:
|
||||
ports:
|
||||
protocols:
|
||||
masquerade: no
|
||||
forward-ports:
|
||||
source-ports:
|
||||
icmp-blocks:
|
||||
rich rules:
|
||||
|
||||
$ sudo firewall-cmd --info-ipset=blacklist | less
|
||||
blacklist
|
||||
type: hash:net
|
||||
options: family=inet hashsize=4096 maxelem=200000
|
||||
entries:
|
||||
```
|
||||
|
||||
The second command will output all of the subnets that were added based on the countries blocked and can be quite lengthy.
|
||||
|
||||
##### So now what do I do?
|
||||
|
||||
While it’s a good idea to monitor things more frequently at the beginning, over time the number of intrusion attempts should decline as the blacklist grows. Then the goal should be maintenance rather than active monitoring.
|
||||
|
||||
To this end, I created a systemd service file and timer so that the by-country subnets maintained by ipdeny are refreshed on a monthly basis. In fact, everything discussed here can be downloaded from my pagure.io project:
|
||||
|
||||
<https://pagure.io/firewalld-blacklist>
|
||||
|
||||
Aren’t you glad you read the whole article? Now just download the service file and timer to _/etc/systemd/system/_ and enable the timer:
|
||||
|
||||
```
|
||||
$ sudo systemctl daemon-reload
|
||||
$ sudo systemctl enable --now firewalld-blacklist.timer
|
||||
```
|
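If you’d rather write the units yourself, here is a minimal sketch of what such a service/timer pair could look like; the actual files in the pagure.io repo may differ:

```
# /etc/systemd/system/firewalld-blacklist.service (sketch)
[Unit]
Description=Refresh the firewalld blacklist ipset

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/firewalld-blacklist
```

```
# /etc/systemd/system/firewalld-blacklist.timer (sketch)
[Unit]
Description=Run firewalld-blacklist monthly

[Timer]
OnCalendar=monthly
Persistent=true

[Install]
WantedBy=timers.target
```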
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/protect-your-system-with-fail2ban-and-firewalld-blacklists/
|
||||
|
||||
作者:[hobbes1069][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/hobbes1069/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://fedoramagazine.org/wp-content/uploads/2020/06/fail2ban-and-firewalld-816x345.png
|
||||
[2]: https://www.linode.com/community/questions/11143/top-tip-firewalld-and-ipset-country-blacklist
|
@ -1,228 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (summer2233)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Back up your phone's storage with this Linux utility)
|
||||
[#]: via: (https://opensource.com/article/20/7/gphoto2-linux)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
|
||||
Back up your phone's storage with this Linux utility
|
||||
======
|
||||
Take as many shots as you want; gphoto2 makes transferring photos from
|
||||
your device to your Linux computer quick and easy.
|
||||
![A person looking at a phone][1]
|
||||
|
||||
One of the great failings of mobile devices is how difficult it can be to transfer data from your device to your computer. Mobile devices have a long history of this. Early mobiles, like Pilot and Handspring PDA devices, required special synchronization software (and you had to sync religiously for fear of your device running out of batteries and losing all of your data forever). Old iPods required a platform-specific interface. Modern mobile devices default to sending your data to an online account so you can download it again on your computer.
|
||||
|
||||
Good news—if you're running Linux, you can probably interface with your mobile device using the `gphoto2` command. Originally developed as a way to communicate with digital cameras back when a digital camera was just a camera, `gphoto2` can talk to many different kinds of mobile devices now. Don't let the name fool you, either. It can handle all types of files, not just photos. Better yet, it's scriptable, flexible, and a lot more powerful than most GUI interfaces.
|
||||
|
||||
If you've ever struggled with finding a comfortable way to sync your data between your computer and mobile, take a look at `gphoto2`.
|
||||
|
||||
### Install gPhoto2
|
||||
|
||||
Chances are your Linux system already has libgphoto2 installed, because it's a key library for interfacing with mobile devices. You may have to install the command `gphoto2`, however, which is probably available from your repository.
|
||||
|
||||
On Fedora or RHEL:
|
||||
|
||||
|
||||
```
|
||||
$ sudo dnf install gphoto2
|
||||
```
|
||||
|
||||
On Debian or Ubuntu:
|
||||
|
||||
|
||||
```
|
||||
$ sudo apt install gphoto2
|
||||
```
|
||||
|
||||
### Verify compatibility
|
||||
|
||||
To verify that your mobile device is supported, use the `--list-cameras` option, piping the output through `less`:
|
||||
|
||||
|
||||
```
|
||||
$ gphoto2 --list-cameras | less
|
||||
```
|
||||
|
||||
Or you can pipe it through `grep` to search for a term. For example, if you have a Samsung Galaxy, then use `grep` with case sensitivity turned off with the `-i` switch:
|
||||
|
||||
|
||||
```
|
||||
$ gphoto2 --list-cameras | grep -i galaxy
|
||||
"Samsung Galaxy models (MTP)"
|
||||
"Samsung Galaxy models (MTP+ADB)"
|
||||
"Samsung Galaxy models Kies mode"
|
||||
```
|
||||
|
||||
This confirms that Samsung Galaxy devices are supported through MTP and MTP with ADB.
|
||||
|
||||
If you can't find your device listed, you can still try using `gphoto2` on the off chance that your device is actually something on the list masquerading as a different brand.
|
||||
|
||||
### Find your mobile device
|
||||
|
||||
To use gPhoto2, you first have to have a mobile device plugged into your computer, set to MTP mode, and you probably need to give your computer permission to interact with it. This usually requires physical interaction with your device, specifically pressing a button in the UI to permit its filesystem to be accessed by the computer it's just been attached to.
|
||||
|
||||
![Screenshot of allow access message][2]
|
||||
|
||||
If you don't give your computer access to your mobile, then gPhoto2 detects your device, but it isn't able to interact with it.
|
||||
|
||||
To ensure your computer detects the device you've attached, use the `--auto-detect` option:
|
||||
|
||||
|
||||
```
|
||||
$ gphoto2 --auto-detect
|
||||
Model Port
|
||||
---------------------------------------
|
||||
Samsung Galaxy models (MTP) usb:002,010
|
||||
```
|
||||
|
||||
If your device isn't detected, check your cables first, and then check that your device is configured to interface over MTP or ADB, or whatever protocol gPhoto2 supports for your device, as shown in the output of `--list-cameras`.
|
||||
|
||||
### Query your device for features
|
||||
|
||||
With modern devices, there's usually a plethora of potential features, but not all features are supported. You can find out for sure with the `--abilities` option, which I find rather intuitive.
|
||||
|
||||
|
||||
```
|
||||
$ gphoto2 --abilities
|
||||
Abilities for camera : Samsung Galaxy models (MTP)
|
||||
Serial port support : no
|
||||
USB support : yes
|
||||
Capture choices : Capture not supported by driver
|
||||
Configuration support : no
|
||||
Delete selected files on camera : yes
|
||||
Delete all files on camera : no
|
||||
File preview (thumbnail) support: no
|
||||
File upload support : yes
|
||||
```
|
||||
|
||||
There's no need to specify what device you're querying as long as you only have one device attached. If you have attached more than one device that gPhoto2 can interact with, though, you can specify the device by port, camera model, or usbid.
|
||||
|
||||
### Interacting with your device
|
||||
|
||||
If your device supports capture, then you can grab media through your camera from your computer. For instance, to capture an image:
|
||||
|
||||
|
||||
```
|
||||
$ gphoto2 --capture-image
|
||||
```
|
||||
|
||||
To capture an image and immediately transfer it to the computer you're on:
|
||||
|
||||
|
||||
```
|
||||
$ gphoto2 --capture-image-and-download
|
||||
```
|
||||
|
||||
You can also capture video and sound. If you have more than one camera attached, you can specify which device you want to use by port, camera model, or usbid:
|
||||
|
||||
|
||||
```
|
||||
$ gphoto2 --camera "Samsung Galaxy models (MTP)" \
|
||||
    --capture-image-and-download
|
||||
```
|
||||
|
||||
### Files and folders
|
||||
|
||||
To interact with files on your device intelligently, you need to understand the structure of the filesystem being exposed to gPhoto2.
|
||||
|
||||
You can view available folders with the `--list-folders` option:
|
||||
|
||||
|
||||
```
|
||||
$ gphoto2 --list-folders
|
||||
There are 2 folders in folder '/'.
|
||||
- store_00010001
|
||||
- store_00020002
|
||||
There are 0 folders in folder '/store_00010001'.
|
||||
There are 0 folders in folder '/store_00020002'.
|
||||
```
|
||||
|
||||
Each of these folders represents a storage destination on the device. In this example, `store_00010001` is the internal storage and `store_00020002` is an SD card. Your device may be structured differently.
|
||||
|
||||
### Getting files
|
||||
|
||||
Now that you know the folder layout of your device, you can ingest photos from your device. There are many different options you can use, depending on what you want to take from the device.
|
||||
|
||||
You can get a specific file, providing you know the full path:
|
||||
|
||||
|
||||
```
|
||||
$ gphoto2 --get-file IMG_0001.jpg --folder /store_00010001/myphotos
|
||||
```
|
||||
|
||||
You can get all files at once:
|
||||
|
||||
|
||||
```
|
||||
$ gphoto2 --get-all-files --folder /store_00010001/myfiles
|
||||
```
|
||||
|
||||
You can get just audio files:
|
||||
|
||||
|
||||
```
|
||||
$ gphoto2 --get-all-audio-data --folder /store_00010001/mysounds
|
||||
```
|
||||
|
||||
There are other options, too, and most of them depend on what your device, and the protocol you're using, support.
|
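For instance, here's a rough backup sketch. It assumes the internal-storage path found with `--list-folders` earlier and relies on the fact that gphoto2 downloads files into the current directory:

```
#!/bin/sh
# Pull everything from the phone's internal storage into a dated directory
backup_dir="$HOME/phone-backup/$(date +%Y-%m-%d)"
mkdir -p "$backup_dir"
cd "$backup_dir" || exit 1
gphoto2 --get-all-files --folder /store_00010001
```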
||||
|
||||
### Uploading files
|
||||
|
||||
Now that you know your potential target folders, you can upload files from your computer to your device. For example, assuming there's a file called `example.epub` in your current directory, you can send the file to your device with the `--upload-file` option combined with the `--folder` option to specify which storage location you want to upload to:
|
||||
|
||||
|
||||
```
|
||||
$ gphoto2 --upload-file example.epub \
|
||||
    --folder store_00010001
|
||||
```
|
||||
|
||||
You can make a directory on your device, should you prefer to upload several files to a consolidated location:
|
||||
|
||||
|
||||
```
|
||||
$ gphoto2 --mkdir books \
|
||||
    --folder store_00010001
|
||||
$ gphoto2 --upload-file *.epub \
|
||||
    --folder store_00010001/books
|
||||
```
|
||||
|
||||
### Listing files
|
||||
|
||||
To see files uploaded to your device, use the `--list-files` option:
|
||||
|
||||
|
||||
```
|
||||
$ gphoto2 --list-files --folder /store_00010001
|
||||
There is 1 file in folder '/store_00010001'
|
||||
#1 example.epub 17713 KB application/x-unknown
|
||||
$ gphoto2 --list-files --folder /store_00010001/books
|
||||
There are 2 files in folder '/store_00010001/books'
|
||||
#1 example0.epub 17713 KB application/x-unknown
|
||||
#2 example1.epub 12264 KB application/x-unknown
|
||||
[...]
|
||||
```
|
||||
|
||||
### Exploring your options
|
||||
|
||||
Much of gPhoto2's power depends on your device, so your experience will be different than anyone else's. There are many operations listed in `gphoto2 --help` for you to explore. Use gPhoto2 and never struggle with transferring files from your device to your computer ever again!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/7/gphoto2-linux
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/idea_innovation_mobile_phone.png?itok=RqVtvxkd (A person looking at a phone)
|
||||
[2]: https://opensource.com/sites/default/files/uploads/gphoto2-mtp-allow.jpg (Screenshot of allow access message)
|
@ -1,5 +1,5 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
@ -1,142 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to decipher Linux release info)
|
||||
[#]: via: (https://www.networkworld.com/article/3565432/how-to-decipher-linux-release-info.html)
|
||||
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
|
||||
|
||||
How to decipher Linux release info
|
||||
======
|
||||
Displaying and interpreting information about Linux releases is a bit more complicated than it might seem.
|
||||
[christin hume / Linux / Modified by IDG Comm.][1] [(CC0)][2]
|
||||
|
||||
There’s a lot more to identifying a Linux release than citing a simple version number. Even a quick look at the output from the **uname** command can tell you that. What is all of that information, and what does it tell you?
|
||||
|
||||
In this post, we’ll take a closer look at the output from the **uname** command along with release descriptions provided by some other commands and files.
|
||||
|
||||
### Using uname
|
||||
|
||||
A lot of information is displayed whenever you issue the command **uname -a** in a Linux system terminal window. That's because that little “a” tells the **uname** command that you want to see _all_ of the output that the command is able to provide. The resultant display will tell you a lot of different things about the system. In fact, each chunk of information displayed tells you something different about the system.
|
||||
|
||||
As an example, the **uname -a** output might look like this:
|
||||
|
||||
```
|
||||
$ uname -a
|
||||
Linux dragonfly 5.4.0-37-generic #41-Ubuntu SMP Wed Jun 3 18:57:02 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
|
||||
```
|
||||
|
||||
While it's probably not much of a temptation, you could retrieve this very same information by using a command that includes all of the **uname** options in the proper order:
|
||||
|
||||
```
|
||||
$ uname -snmrvpio
|
||||
Linux dragonfly 5.4.0-37-generic #41-Ubuntu SMP Wed Jun 3 18:57:02 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
|
||||
```
|
||||
|
||||
To break this long string of information into separate chunks, you can use a **for** loop like this that runs through each of the options:
|
||||
|
||||
```
|
||||
$ for option in s n m r v p i o; do echo -n "$option: "; uname -$option; done
|
||||
s: Linux
|
||||
n: dragonfly
|
||||
m: x86_64
|
||||
r: 5.4.0-37-generic
|
||||
v: #41-Ubuntu SMP Wed Jun 3 18:57:02 UTC 2020
|
||||
p: x86_64
|
||||
i: x86_64
|
||||
o: GNU/Linux
|
||||
```
|
||||
|
||||
That loop shows what information is provided by which option. The **uname** man page provides descriptions for each option. Here's a list:
|
||||
|
||||
```
|
||||
Linux -- kernel name (option "s")
|
||||
dragonfly -- nodename (option "n")
|
||||
x86_64 -- machine hardware name (option "m")
|
||||
5.4.0-37-generic -- kernel release (option "r")
|
||||
#41-Ubuntu SMP Wed Jun 3 18:57:02 UTC 2020 -- kernel version (option "v")
|
||||
x86_64 -- processor (option "p")
|
||||
x86_64 -- hardware platform (option "i")
|
||||
GNU/Linux -- operating system (option "o")
|
||||
```
|
||||
|
||||
To delve a little more deeply into the information being displayed, take a closer look at the kernel release data shown. That **5.4.0-37** in the 4th line is not just a string of arbitrary numbers. Each numeric value is significant.
|
||||
|
||||
* **5** is the kernel version
|
||||
* **4** signifies the major revision
|
||||
* **0** indicates the minor revision
|
||||
* **37** represents the most recent patch
|
||||
|
||||
|
||||
|
||||
In addition, that **#41** in the 5th line of the loop output (kernel version) indicates that this release has been compiled 41 times.
|
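As a quick sketch, you can slice those fields out of **uname -r** with standard tools:

```
$ uname -r
5.4.0-37-generic
$ uname -r | cut -d. -f1-2     # kernel version and major revision
5.4
$ uname -r | cut -d- -f2       # the most recent patch
37
```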
||||
|
||||
Individual options can be useful when and if you want to display only one piece of all the available information. For example, the command **uname -n** can tell you just the name of the system and **uname -r** will show you just the kernel release. These and other options can be useful when you're taking inventory of your servers or building scripts.
|
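For example, here's a sketch of a small inventory loop over SSH; the hostnames are hypothetical:

```
for host in web1 web2 db1; do
    printf '%s: ' "$host"     # label each line with the hostname
    ssh "$host" uname -r      # print just that host's kernel release
done
```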
||||
|
||||
The same variety of information will be provided by the **uname -a** command when working on Red Hat systems. Here’s an example:
|
||||
|
||||
```
|
||||
$ uname -a
|
||||
Linux fruitfly 4.18.0-107.el8.x86_64 #1 SMP Fri Jun 14 13:46:34 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
|
||||
```
|
||||
|
||||
### Distribution release information
|
||||
|
||||
If you need to know what version of a distribution you’re running, the **uname** output isn’t going to help you much. The kernel version is, after all, not the same as the distribution version. For that information, you can use the **lsb_release -r** command on Ubuntu and other Debian-based systems and display the contents of the **/etc/redhat-release** file for Red Hat.
|
||||
|
||||
For Ubuntu and other Debian-based systems:
|
||||
|
||||
```
|
||||
$ lsb_release -r
|
||||
Release: 20.04
|
||||
```
|
||||
|
||||
For Red Hat and related systems:
|
||||
|
||||
```
|
||||
$ cat /etc/redhat-release
|
||||
Red Hat Enterprise Linux release 8.1 Beta (Ootpa)
|
||||
```
|
||||
|
||||
### Using /proc/version
|
||||
|
||||
The **/proc/version** file can also provide information on your Linux release. The information provided in this file has a lot in common with the **uname -a** output. Here are some examples.
|
||||
|
||||
On Ubuntu:
|
||||
|
||||
```
|
||||
$ cat /proc/version
|
||||
Linux version 5.4.0-37-generic (buildd@lcy01-amd64-001) (gcc version 9.3.0 (Ubuntu 9.3.0-10ubuntu2)) #41-Ubuntu SMP Wed Jun 3 18:57:02 UTC 2020
|
||||
```
|
||||
|
||||
On Red Hat:
|
||||
|
||||
```
|
||||
$ cat /proc/version
|
||||
Linux version 4.18.0-107.el8.x86_64 (mockbuild@x86-vm-09.build.eng.bos.redhat.com) (gcc version 8.3.1 20190507 (Red Hat 8.3.1-4) (GCC)) #1 SMP Fri Jun 14 13:46:34 UTC 2019
|
||||
```
|
||||
|
||||
### Wrap-Up
|
||||
|
||||
Linux systems provide a lot of information on the kernel and distributions installed. You just have to know where or how to look and make sense of what it means.
|
||||
|
||||
Join the Network World communities on [Facebook][3] and [LinkedIn][4] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3565432/how-to-decipher-linux-release-info.html
|
||||
|
||||
作者:[Sandra Henry-Stocker][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://unsplash.com/photos/mfB1B1s4sMc
|
||||
[2]: https://creativecommons.org/publicdomain/zero/1.0/
|
||||
[3]: https://www.facebook.com/NetworkWorld/
|
||||
[4]: https://www.linkedin.com/company/network-world
|
@ -1,95 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Btrfs to be the Default Filesystem on Fedora? Fedora 33 Starts Testing Btrfs Switch)
|
||||
[#]: via: (https://itsfoss.com/btrfs-default-fedora/)
|
||||
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
|
||||
|
||||
Btrfs to be the Default Filesystem on Fedora? Fedora 33 Starts Testing Btrfs Switch
|
||||
======
|
||||
|
||||
While we’re months away from Fedora’s next stable release ([Fedora 33][1]), there are a few changes worth keeping tabs on.
|
||||
|
||||
Among all the other [accepted system-wide changes for Fedora 33][1], the proposal of having Btrfs as the default filesystem for desktop variants is the most interesting one.
|
||||
|
||||
Here’s what Fedora mentions for the proposal:
|
||||
|
||||
> For laptop and workstation installs of Fedora, we want to provide file system features to users in a transparent fashion. We want to add new features, while reducing the amount of expertise needed to deal with situations like running out of disk space. Btrfs is well adapted to this role by design philosophy, let’s make it the default.
|
||||
|
||||
It’s worth noting that this isn’t an accepted system-wide change as of now and is subject to tests made on the [Test Day][2] (**8th July 2020**).
|
||||
|
||||
So, why is Fedora proposing this change? Is it going to be useful in any way? Is it a bad move? How is it going to affect Fedora distributions? Let’s talk a few things about it here.
|
||||
|
||||
![][3]
|
||||
|
||||
### What Fedora Editions will it Affect?
|
||||
|
||||
As per the proposal, all the desktop editions of Fedora 33, spins, and labs will be subject to this change, if the tests are successful.
|
||||
|
||||
So, you should expect the [workstation editions][4] to get Btrfs as the default file system on Fedora 33.
|
||||
|
||||
### Potential Benefits of Implementing This Change
|
||||
|
||||
To improve Fedora for laptop and workstation use-cases, the Btrfs file system offers some benefits.
|
||||
|
||||
Even though this change hasn’t been accepted for Fedora 33 yet, let me point out the advantages of having Btrfs as the default file system:
|
||||
|
||||
* Improves the lifespan of storage hardware
|
||||
* Provides an easy solution when a user runs out of free space on the root or home directory
|
||||
* Is less prone to data corruption and easier to recover
|
||||
* Gives better file system re-size ability
|
||||
* Ensures desktop responsiveness under heavy memory pressure by enforcing I/O limits
|
||||
* Makes complex storage setups easy to manage
|
||||
|
||||
|
||||
|
||||
If you’re curious, you might want to dive in deeper to know about [Btrfs][5] and its benefits in general.
|
||||
|
||||
Not to forget, Btrfs was already a supported option — it just wasn’t the default file system.
|
||||
|
||||
But, overall, it feels like introducing Btrfs as the default file system on Fedora 33 could be a useful change, if implemented properly.
|
||||
|
||||
### Will Red Hat Enterprise Linux Implement This?
|
||||
|
||||
It’s quite obvious that Fedora is considered the cutting-edge version of [Red Hat Enterprise Linux][6].
|
||||
|
||||
So, if Fedora rejects the change, Red Hat won’t implement it. On the other hand, if you want RHEL to use Btrfs, Fedora should be the first to approve the change.
|
||||
|
||||
To give you more clarity on this, Fedora has mentioned it in detail:
|
||||
|
||||
> Red Hat supports Fedora well, in many ways. But Fedora already works closely with, and depends on, upstreams. And this will be one of them. That’s an important consideration for this proposal. The community has a stake in ensuring it is supported. Red Hat will never support Btrfs if Fedora rejects it. Fedora necessarily needs to be first, and make the persuasive case that it solves more problems than alternatives. Feature owners believe it does, hands down.
|
||||
|
||||
Also, it’s worth noting that if you’re someone who does not want btrfs in Fedora, you should be looking at [OpenSUSE][7] and [SUSE Linux Enterprise][8] instead.
|
||||
|
||||
### Wrapping Up
|
||||
|
||||
Even though it looks like the change should not affect any upgrades or compatibility, you can find more information on the changes with Btrfs by default in [Fedora Project’s wiki page][9].
|
||||
|
||||
What do you think about this change targeted for the Fedora 33 release? Do you want the Btrfs file system as the default?
|
||||
|
||||
Feel free to let me know your thoughts in the comments below!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/btrfs-default-fedora/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/ankush/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://fedoraproject.org/wiki/Releases/33/ChangeSet
|
||||
[2]: https://fedoraproject.org/wiki/Test_Day:2020-07-08_Btrfs_default?rd=Test_Day:F33_btrfs_by_default_2020-07-08
|
||||
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/07/btrfs-default-fedora.png?ssl=1
|
||||
[4]: https://getfedora.org/en/workstation/
|
||||
[5]: https://en.wikipedia.org/wiki/Btrfs
|
||||
[6]: https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux
|
||||
[7]: https://www.opensuse.org
|
||||
[8]: https://www.suse.com
|
||||
[9]: https://fedoraproject.org/wiki/Changes/BtrfsByDefault
|
@ -1,89 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (What you need to know about automation testing in CI/CD)
|
||||
[#]: via: (https://opensource.com/article/20/7/automation-testing-cicd)
|
||||
[#]: author: (Taz Brown https://opensource.com/users/heronthecli)
|
||||
|
||||
What you need to know about automation testing in CI/CD
|
||||
======
|
||||
Continuous integration and continuous delivery is powered by testing.
|
||||
Here's how.
|
||||
![Net catching 1s and 0s or data in the clouds][1]
|
||||
|
||||
> "If things seem under control, you're just not going fast enough." —Mario Andretti
|
||||
|
||||
Test automation means focusing continuously on detecting defects, errors, and bugs as early and quickly as possible in the software development process. This is done using tools that pursue quality as the highest value and are put in place to _ensure_ quality—not just pursue it.
|
||||
|
||||
One of the most compelling features of a continuous integration/continuous delivery (CI/CD) solution (also called a DevOps pipeline) is the opportunity to test more frequently without burdening developers or operators with more manual work. Let's talk about why that's important.
|
||||
|
||||
### Why automate testing in CI/CD?
|
||||
|
||||
Agile teams iterate faster to deliver software and customer satisfaction at higher rates, and these pressures can jeopardize quality. Global competition has created _low tolerance_ for defects while increasing pressure on agile teams for _faster iterations_ of software delivery. What's the industry solution to alleviate this pressure? [DevOps][2].
|
||||
|
||||
DevOps is a big idea with many definitions, but one technology that is consistently essential to DevOps success is CI/CD. Designing a continuous cycle of improvement through a pipeline of software development can lead to new opportunities for testing.
|
||||
|
||||
### What does this mean for testers?
|
||||
|
||||
For testers, this generally means they must:
|
||||
|
||||
* Test earlier and more often (with automation)
|
||||
* Continue to test "real-world" workflows (automated and manual)
|
||||
|
||||
|
||||
|
||||
To be more specific, the role of testing in any form, whether it's run by the developers who write the code or designed by a team of quality assurance engineers, is to take advantage of the CI/CD infrastructure to increase quality while moving fast.
|
||||
|
||||
### What else do testers need to do?
|
||||
|
||||
To get more specific, testers are responsible for:
|
||||
|
||||
* Testing new and existing software applications
|
||||
* Verifying and validating functionality by evaluating software against system requirements
|
||||
* Utilizing automated-testing tools to develop and maintain reusable automated tests
|
||||
* Collaborating with all members of the scrum team to understand the functionality being developed and the implementation's technical design to design and develop accurate, high-quality automated tests
|
||||
* Analyzing documented user requirements and creating or assisting in designing test plans for moderately to highly complex software or IT systems
|
||||
* Developing automated tests and working with the functional team to review and evaluate test scenarios
|
||||
* Collaborating with the technical team to identify the proper approach to automating tests within the development environment
|
||||
* Working with the team to understand and resolve software problems with automated tests, and responding to suggestions for modifications or enhancements
|
||||
* Participating in backlog grooming, estimation, and other agile scrum ceremonies
|
||||
* Assisting in defining standards and procedures to support testing activities and materials (e.g., scripts, configurations, utilities, tools, plans, and results)
|
||||
|
||||
|
||||
|
||||
Testing is a great deal of work, but it's an essential part of building software effectively.
|
||||
|
||||
### What kind of continuous testing is important?
|
||||
|
||||
There are many types of tests you can use. The different types aren't firm lines between disciplines; instead, they are different ways of expressing how to test. It is less important to compare the types of tests and more important to have coverage for each test type.
|
||||
|
||||
* **Functional testing:** Ensures that the software has the functionality in its requirements
|
||||
* **Unit testing:** Independently tests smaller units/components of a software application to check their functionality
|
||||
* **Load testing:** Tests the performance of the software application during heavy load or usage
|
||||
* **Stress testing:** Determines the software application's breakpoint when under stress (maximum load)
|
||||
* **Integration testing:** Tests a group of components that are combined or integrated to produce an output
|
||||
* **Regression testing:** Tests the entire application's functionality when any component (no matter how small) has been modified
|
||||
|
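As a minimal illustration of how coverage across these types plugs into a pipeline, here's a sketch of a CI test stage in shell; the script names are hypothetical stand-ins for whatever runners your project uses:

```
#!/bin/sh
set -e                        # fail the pipeline at the first failing suite
./run_unit_tests.sh           # fast, isolated component checks on every commit
./run_integration_tests.sh    # combined-component checks
./run_regression_suite.sh     # whole-application checks after any change
```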
||||
|
||||
|
||||
### Conclusion
|
||||
|
||||
Any software development process that includes continuous testing is on its way toward establishing a critical feedback loop to go fast and build effective software. Most importantly, the practice builds quality into the CI/CD pipeline and implies an understanding of the connection between increasing speed while reducing risk and waste in the software development lifecycle.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/7/automation-testing-cicd
|
||||
|
||||
作者:[Taz Brown][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/heronthecli
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_analytics_cloud.png?itok=eE4uIoaB (Net catching 1s and 0s or data in the clouds)
|
||||
[2]: https://opensource.com/resources/devops
|
@ -0,0 +1,133 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Recovering audio from a lost format with open source)
|
||||
[#]: via: (https://opensource.com/article/20/7/hdcd)
|
||||
[#]: author: (Chris Hermansen https://opensource.com/users/clhermansen)
|
||||
|
||||
Recovering audio from a lost format with open source
|
||||
======
|
||||
The history of the HDCD format and how I recovered lost audio on Linux.
|
||||
![11 CDs in a U shape][1]
|
||||
|
||||
Back in the early 2000s, we made a family decision to upgrade the living room stereo. The equipment in place at the time was based on a collection of gear that I had purchased some 20 years earlier when I first had a steady post-university income. That early collection could best be described as "industrial chic," most notably the [Hafler amplifiers][2] I had built from kits and the [Polk speakers][3] made from some kind of composite wood product and finished with an ugly faux-rosewood vinyl wrap. They produced decent sound, but the dorm-room-style decor just wasn't working out in the living room.
|
||||
|
||||
Those of you who remember the early 2000s will recall that most of the world was still consuming music on CD. Our family was no exception, and we ended up with a fine CD player that had an interesting feature—it was able to decode regular CDs as well as high-definition-compatible digital (HDCD) discs.
|
||||
|
||||
According to [Wikipedia][4], HDCD is a proprietary audio encode-decode process that claims to provide increased dynamic range over that of standard Red Book audio CDs, while retaining backward compatibility with existing compact disc players.
|
||||
|
||||
The [manual for our CD player][5] states: "HDCD system is manufactured under license from Pacific Microsonics, Inc." and "HDCD is a digital signal processing system developed by Pacific Microsonics of California which conceals control codes into a very small fraction of the recorded CD digital audio stream. An HDCD decoder recognizes these control codes and uses them to process the digital audio to increase its dynamic range and resolution, while leaving the original digital stream compatible with conventional CD players."
|
||||
|
||||
How does HDCD work this magic, you may ask? The same Wikipedia entry states, "HDCD encodes the equivalent of 20 bits worth of data in a 16-bit digital audio signal by using custom dithering, audio filters, and some reversible amplitude and gain encoding; Peak Extend, which is a reversible soft limiter; and Low-Level Range Extend, which is a reversible gain on low-level signals."
|
||||
|
||||
Whatever the merits of this technology, its parent company was unable to continue business and ceased operations sometime in 2000. The Wikipedia article indicates that Microsoft acquired the company and incorporated code in Windows Media Player to allow the decoding of HDCD, but seemingly lost interest in its promotion. Perhaps this was due to the emergence of other proprietary high-resolution audio formats such as SACD and DVD-A, which were able to encode a full 24 bits of signal on a similar-looking but incompatible media. Neither of these latter formats was especially successful, at least not in commercial terms, though studios continue to release music on SACD. As it happens, SACD included a "hybrid" standard that provided both SACD and backward-compatible CD layers on the same disc, allowing the playback of those albums on regular CD players at standard CD resolution.
|
||||
|
||||
How many artists and studios actually made use of HDCD? Well, Discogs offers [a list of 11,284 HDCD recordings][6] (as of this writing). [This web site][7] offers an interesting analysis of some of the facilities HDCD provided, using actual HDCD encoded music. And for those interested in the original patent, which Google Patents claims has expired, [it can be found here][8].
|
||||
|
||||
### My HDCD story
|
||||
|
||||
Anyone who is interested enough in audio equipment to read promotional brochures or audiophile magazines will recognize the fascination many audiophiles have with proprietary designs—they seem to view a patent as validation of the equipment that uses that technology.
|
||||
|
||||
Though now I do what I can to avoid being swayed by "proprietary technology fascination," I admit I was not such a staunch proponent of all things open back in the early 2000s. Not only did I buy the aforementioned fine CD player with its proprietary innards, but I also bought—the horror!—a few actual HDCD-encoded titles.
|
||||
|
||||
This past weekend, I managed to find three of them in our collection, but I am certain there are more. The three I managed to find include Ensemble Dumont's [La Messe du Roi][9], Musica Secreta's [Dangerous Graces][10], and the Orchestra of the Age of Enlightenment's [Vivaldi Concerti][11], all from the Linn Records Linux-friendly [music store][12]. While making sure these titles were still available, I noticed that they are no longer offered in HDCD.
|
||||
|
||||
Given that I have these albums on hand and that the patent seems to have expired, I decided to find out whether I could convert these discs in their full intended resolution to an open music format, and moreover, whether I could do so without using proprietary software.
|
||||
|
||||
The first software I stumbled upon for decoding the HDCD format was hdcd.exe, [described and offered here][13]. Since the source code for this software was not offered, and since it required Windows, or at least Wine, to run, my initial interest mostly evaporated.
|
||||
|
||||
The Wikipedia article noted above mentioned that some other Windows-based music players offered HDCD decoding. Hmm. But then I spotted:
|
||||
|
||||
"FFmpeg's libavfilter includes an HDCD filter as of FFmpeg 3.1 (June 2016) that will convert 16-bit PCM with HDCD data to 20-bit PCM."
|
||||
|
||||
This seemed like a promising starting point, so I installed `ffmpeg` from my distro's repositories, and then went looking for some more hints, at which point I stumbled on [the very concise description on hydrogenaudio][14], which even supplies a script for finding HDCD-encoded files in one's music directory. I used the line that runs `ffmpeg` against one of the files ripped from the Musica Secreta CD mentioned previously, as follows:
|
||||
|
||||
|
||||
```
|
||||
ffmpeg -hide_banner -nostats -y -v verbose -i \
|
||||
'01 - Musica Secreta - Questi odorati fiori.flac' \
|
||||
-vn -af hdcd -f s24le /dev/null 2>&1 | grep "_hdcd_"
|
||||
```
|
||||
|
||||
and received the following output:
|
||||
|
||||
|
||||
```
|
||||
[Parsed_hdcd_0 @ 0x55b2137e2c80] Disabling automatic format conversion.
|
||||
[Parsed_hdcd_0 @ 0x55b2137e2c80] Auto-convert: disabled
|
||||
[Parsed_hdcd_0 @ 0x55b2137e2c80] Looking for 16-bit HDCD in sample format s16
|
||||
[Parsed_hdcd_0 @ 0x55b2137e2c80] CDT period: 2000ms (88200 samples @44100Hz)
|
||||
[Parsed_hdcd_0 @ 0x55b2137e2c80] Process mode: process stereo channels together
|
||||
[Parsed_hdcd_0 @ 0x55b2137e2c80] Force PE: off
|
||||
[Parsed_hdcd_0 @ 0x55b2137e2c80] Analyze mode: [0] disabled
|
||||
[Parsed_hdcd_0 @ 0x55b2137e2c80] Channel 0: counter A: 0, B: 1657, C: 1657
|
||||
[Parsed_hdcd_0 @ 0x55b2137e2c80] Channel 0: pe: 1657, tf: 0, almost_A: 0, checkfail_B: 0, unmatched_C: 0, cdt_expired: 0
|
||||
[Parsed_hdcd_0 @ 0x55b2137e2c80] Channel 0: tg 0.0: 1657
|
||||
[Parsed_hdcd_0 @ 0x55b2137e2c80] Channel 1: counter A: 0, B: 1657, C: 1657
|
||||
[Parsed_hdcd_0 @ 0x55b2137e2c80] Channel 1: pe: 1657, tf: 0, almost_A: 0, checkfail_B: 0, unmatched_C: 0, cdt_expired: 0
|
||||
[Parsed_hdcd_0 @ 0x55b2137e2c80] Channel 1: tg 0.0: 1657
|
||||
[Parsed_hdcd_0 @ 0x55b2137e2c80] Packets: type: B, total: 3314
|
||||
[Parsed_hdcd_0 @ 0x55b2137e2c80] HDCD detected: yes, peak_extend: enabled permanently, max_gain_adj: 0.0 dB, transient_filter: not detected, detectable errors: 0
|
||||
```
|
||||
|
||||
Note the last line above mentioning that HDCD was, in fact, detected. Also, it seems that the "peak extend" capability is enabled. As I understand this capability, it reverses the compression/limiting applied to the loudest parts of the music after dropping the overall signal level by a factor of two, thus restoring some of the original recording's extra dynamic range. Goodwin's High End's web site has a detailed description of this topic [here][15].
|
||||
|
||||
At this point, it was time to try this whole thing out. For some reason, I did not feel confident doing a one-step conversion from 16-bit undecoded FLAC to 24-bit decoded FLAC, so I ran the conversion in two steps, as follows:
|
||||
|
||||
|
||||
```
|
||||
for f16 in *.flac; do
|
||||
    # Strip the .flac extension to get the bare track name
    trk=$(basename "$f16" .flac)
|
||||
    w24="${trk}_24.wav"
|
||||
    # Decode the HDCD data into a 24-bit WAV file
    ffmpeg -i "$f16" -af hdcd -acodec pcm_s24le "$w24"
|
||||
    # Losslessly compress the 24-bit WAV back to FLAC
    flac "$w24"
|
||||
done
|
||||
```
|
||||
|
||||
This gave me a set of 24-bit 44.1kHz FLAC files, which I verified with the **file** command. At that point, all I needed to do was make sure all the tags looked good, and that was that.
|
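For a second opinion on the bit depth, the `metaflac` utility that ships with FLAC can report it directly; a quick sketch against one of the converted files:

```
$ metaflac --show-bps "01 - Musica Secreta - Questi odorati fiori_24.flac"
24
```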
||||
|
||||
### And speaking of music…
|
||||
|
||||
I've been taking a break from this music column this year, as I haven't done much except listen to things I already have on hand. But a few new items have crept into my collection.
|
||||
|
||||
Emancipator's latest, [Mountain of Memory][16], is available from that great Linux-friendly and artist-friendly online store, [Bandcamp][17]. If you like Emancipator's earlier stuff, you won't be disappointed with this.
|
||||
|
||||
The Choir of Clare College at Cambridge and the Dmitri Ensemble have released [a fine collection of music by Arvo Pärt, Peteris Vasks, and James MacMillan][18], entitled "Arvo Pärt Stabat." I haven't listened to this album carefully, but even so, I am struck by the similarity between the three composers' work presented here. Maybe something about the shared influence of a northern European landscape and weather? I bought this beautiful choral music as a 96/24 FLAC download from [Presto Classical][19], another fine Linux-friendly online store. For those interested in more information on this music, there is an interview on that site with Graham Ross, the conductor of the Choir of Clare College.
|
||||
|
||||
Finally, some other interesting news—an online store with lots of great high-resolution downloads that has frustrated me for many years (to the point of sending them numerous whiny emails), [HDtracks][20], has finally made it possible to purchase music from them without using their download manager! I haven't actually bought anything there yet, but I will soon give it a whirl and report back.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/7/hdcd
|
||||
|
||||
作者:[Chris Hermansen][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/clhermansen
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life_cd_dvd.png?itok=RBwVIzmi (11 CDs in a U shape)
|
||||
[2]: https://audiokarma.org/forums/index.php?threads/questions-about-hafler-dh-110-connections.790899/
|
||||
[3]: https://forum.polkaudio.com/discussion/166859/monitor-7b-question
|
||||
[4]: https://en.wikipedia.org/wiki/High_Definition_Compatible_Digital
|
||||
[5]: https://docs.linn.co.uk/wiki/images/0/08/Ikemi_genki_user_manual.pdf
|
||||
[6]: https://www.discogs.com/search/?format_exact=HDCD
|
||||
[7]: http://www.audiomisc.co.uk/HFN/HDCD/Examined.html
|
||||
[8]: https://patents.google.com/patent/US5479168?oq=US5479168
|
||||
[9]: https://www.linnrecords.com/recording-la-messe-du-roi
|
||||
[10]: https://www.linnrecords.com/recording-dangerous-graces-music-cipriano-de-rore-and-pupils
|
||||
[11]: https://www.linnrecords.com/recording-vivaldi-concerti
|
||||
[12]: https://www.linnrecords.com/
|
||||
[13]: http://forum.doom9.org/showthread.php?t=129136
|
||||
[14]: https://wiki.hydrogenaud.io/index.php?title=High_Definition_Compatible_Digital#FFmpeg
|
||||
[15]: http://www.goodwinshighend.com/music/hdcd/gain_scaling.htm
|
||||
[16]: https://emancipator.bandcamp.com/album/mountain-of-memory
|
||||
[17]: https://bandcamp.com/
|
||||
[18]: https://www.clarecollegechoir.com/product/arvo-p%C3%A4rt-stabat
|
||||
[19]: https://www.prestomusic.com/classical/products/8766094--arvo-part-stabat-mater
|
||||
[20]: https://www.hdtracks.com/
|
@ -0,0 +1,116 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (4 Mac terminal customizations even a curmudgeon can love)
[#]: via: (https://opensource.com/article/20/7/mac-terminal)
[#]: author: (Katie McLaughlin https://opensource.com/users/glasnt)

4 Mac terminal customizations even a curmudgeon can love
======
Open source means I can find Linux familiarity on any terminal.
![Coffee and laptop][1]
A decade ago, I started my first job that required me to use Linux as my laptop's operating system. I was offered a range of variants, including Gentoo, if I was so inclined, but since I had used Ubuntu briefly in the past, I opted for Ubuntu Lucid Lynx 10.04.

My terminal, [Konsole][2], was themed in [Zenburn][3] and had a Bash prompt that looked like this:

```
machinename ~/path/to/folder $
```

Nowadays, I'm running on a Mac, specifically macOS Catalina, using [iTerm2][4] with the [Zenburn theme][5] and a zsh prompt that looks like this:

```
machinename ~/path/to/folder
$
```

I think that after a decade with a near-identical prompt, I have earned the title of _curmudgeon_, if only as a sign that I have preferences and habits that go against what the cool kids are doing nowadays.

As if to prove my curmudgeonly point, I wanted to change my terminal to match my old one. Getting a setup that looks and feels like Lucid Lynx on a Mac isn't simple, and it took some time.

My biggest recent change was moving from Bash to zsh and migrating my [Bash hacks][6]. But that was only one of the major shifts. I learned many new-fangled lessons that I now bestow onto you, dear reader.
### Coreutils forgives flag order

Moving from Ubuntu to macOS wasn't too much of a shift until I started thinking I was losing my Unix-foo. I'd try running basic operations like removing folders and be told that I was invoking `rm` incorrectly.

It turns out that the BSD-style utilities that ship with macOS may look like their GNU-style counterparts, but one of the biggest usability differences is _flag order_. GNU-style utilities accept flags before or after the unnamed parameters, while the BSD-style versions expect all flags to come before them. Take `rm`, for instance.

Here's the familiar GNU-style command to remove a directory:

```
$ rm path/to/folder -rf
```

This contrasts with the BSD-style version of the same command:

```
$ rm path/to/folder -rf
rm: path/to/folder: is a directory
rm: -rf: No such file or directory
```

I got around this by installing [Coreutils][7] through [Homebrew][8]. This brings the GNU utilities to macOS and makes flag order more forgiving, so I don't have to unlearn the flag order for commands that are deeply ingrained in my muscle memory.
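If you want to do the same, here is a minimal sketch of that install. The `PATH` line reflects Homebrew's documented layout for Coreutils (the tools are installed with a `g` prefix, with unprefixed names available under a `gnubin` directory); treat the exact paths as assumptions to verify on your machine:

```
# Install the GNU core utilities (assumes Homebrew is already set up).
$ brew install coreutils

# By default, the GNU tools are available with a "g" prefix:
$ gls -l
$ grm -rf path/to/folder

# To use the unprefixed names, put gnubin first in PATH (e.g., in ~/.zshrc):
$ export PATH="$(brew --prefix coreutils)/libexec/gnubin:$PATH"

# Now the GNU-style flag order works again:
$ rm path/to/folder -rf
```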
### iTerm2 is powerful

I'm not sure of any operating system whose power users are happy with the default terminal. In macOS land, I settled on [iTerm2][4], which allows me more flexibility than the base OS's terminal application. One of my favorite iTerm2 power features is being able to use **Command**+**D** and **Command**+**Shift**+**D** to split panes vertically and horizontally. There are many more tricks to be learned, but easy split panes alone can make iTerm2 worth the switch from the default option.

### Context-aware plugins

One reason even a curmudgeon of a user customizes a terminal prompt is to gain some situational awareness. I enjoy it when a terminal gives me context and answers all the questions that come to mind. Not just what folder I'm in, but: What machine am I on? Is this a Git repository? If so, what branch am I in? Am I in a Python virtual environment?

Answers to these questions go into a category of terminal extensions that can be called "context-aware plugins."

For the current Git branch, I used this [parse_git_branch()][9] method (there is a similar plugin for [Oh My Zsh][10], if you're using that); a sketch of the idea appears after this section. For Python, virtualenv prepends its name to the prompt automatically. Oh My Zsh has so many [plugins][11], you're sure to find something to improve your life.

As for my local machine? I just place it directly in the PS1 format because I'm basic like that, and macOS doesn't _really_ let you name your machines.
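For the curious, here is a minimal sketch of the idea (my own illustration, not the linked gist verbatim): a small shell function asks Git for the current branch, and the prompt splices its output in at display time.

```
# Print the current Git branch in parentheses, or nothing if the
# working directory is not inside a Git repository (Bash syntax).
parse_git_branch() {
  git branch 2>/dev/null | sed -n 's/^\* \(.*\)/ (\1)/p'
}

# Splice it into the prompt: \h is the hostname, \w the working folder.
PS1='\h \w$(parse_git_branch) $ '
```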
### Multi-line prompts are fine

The observant reader may notice the one change in my prompt over a decade: it's now two lines. This is a recent change that I'm slowly learning to love, because all those plugins I mentioned earlier make my prompt looonnngggg. You can navigate only so deep into a filesystem before line-wrapped command input makes doing anything basic a chore, and with that come occasional redraw issues and readability concerns.

The suggestions I received about resolving this revolved mostly around, "Oh, you're using zsh? Use [Powerlevel10k][12]!" That's fine for those who aren't stuck in their ways, like me. But I was able to learn from these themes and take a small bit of suggestion from them.

What I've done is to add a `$'\n'` before the final `$` in my prompt, which allows my context-aware information—current machine, current folder, current Git branch, current virtualenv, and the like—to all live on one line, and then my commands can be entered without issues.
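As a concrete sketch in zsh (the exact contents of your prompt will differ, and `parse_git_branch` is the hypothetical helper from the earlier sketch):

```
# In ~/.zshrc: context on line one, commands on line two.
setopt PROMPT_SUBST   # allow $(...) substitution inside the prompt
PROMPT='%m %~$(parse_git_branch)'$'\n''$ '
```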
The only problem I've found is learning where to _look_. I'm used to having my eyes start at the center of the line, because that's where the prompt used to start. I'm slowly learning to look left for the prompt, but it's a slow process. I have a decade of eye training to undo.

### Use what works for you

If you prefer a certain style or tool, then you are absolutely valid in that preference. You can try other things, but never think you have to use the latest and greatest just to be like the cool kids. Your style and preferences can change over time, but never be forced into changes that aren't comfortable for you.

_Join us next time, when Aunty Katie complains about IDEs._
--------------------------------------------------------------------------------

via: https://opensource.com/article/20/7/mac-terminal

作者:[Katie McLaughlin][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/glasnt
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_cafe_brew_laptop_desktop.jpg?itok=G-n1o1-o (Coffee and laptop)
[2]: https://konsole.kde.org/
[3]: https://github.com/brson/zenburn-konsole
[4]: https://www.iterm2.com/
[5]: https://gist.github.com/fooforge/3373215
[6]: https://opensource.com/article/20/1/bash-scripts-aliases
[7]: https://formulae.brew.sh/formula/coreutils
[8]: https://opensource.com/article/20/6/homebrew-mac
[9]: https://gist.github.com/kevinchappell/09ca3805a9531b818579
[10]: https://github.com/ohmyzsh/ohmyzsh/tree/master/plugins/git
[11]: https://github.com/ohmyzsh/ohmyzsh/wiki/Plugins
[12]: https://github.com/romkatv/powerlevel10k
@ -0,0 +1,88 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (My feature-rich and minimal Linux terminal)
[#]: via: (https://opensource.com/article/20/7/minimal-linux-terminal)
[#]: author: (Sumantro Mukherjee https://opensource.com/users/sumantro)

My feature-rich and minimal Linux terminal
======
These apps and themes help make my terminal my own.
![Digital images of a computer desktop][1]
Everyone likes to set up their workspace in a specific way; it helps your productivity and makes life easier to have things organized in a way that feels organic and to have an environment that feels good to you. That definitely applies to terminals too; that's probably why there are so many terminal options available.

When starting on a new computer, the very first thing I do is set up my terminal to make it my own.

My preferred terminal app is [terminator][2] because of its minimalist design and built-in windowing options. But it gets more complex from there. I would describe my preferred terminal style as "feature-rich, yet keeping it minimal." That balance is one I'm often fine-tuning.

I use zsh as my default shell and Oh My Zsh to give it additional features. You can install Oh My Zsh by downloading its install script:

```
$ curl -fsSL \
  https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh \
  --output install-zsh.sh
```

Read the script over to see what it does, and to ensure you feel confident running it on your computer. Once you're ready, run the script:

```
$ sh ./install-zsh.sh
```
My favorite theme/prompt is [Powerlevel10k][3], which gives an incredibly detailed view of my environment. It includes everything from color highlighting of commands to timestamps for when they were run. All the details integrate into an elegant, context-aware prompt.

Installing Powerlevel10k begins with cloning the source code into the `.oh-my-zsh/` custom themes directory:

```
git clone --depth=1 https://github.com/romkatv/powerlevel10k.git \
  ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/themes/powerlevel10k
```
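One step that's easy to miss after cloning is telling zsh to use the theme. A minimal sketch, assuming the default Oh My Zsh layout described in the Powerlevel10k README:

```
# In ~/.zshrc, before oh-my-zsh.sh is sourced:
ZSH_THEME="powerlevel10k/powerlevel10k"
```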
However, to make Powerlevel10k look as it does in the repository, we need to add some fonts that are not included by default; these are listed below:

  * [MesloLGS NF Regular.ttf][4]
  * [MesloLGS NF Bold.ttf][5]
  * [MesloLGS NF Italic.ttf][6]
  * [MesloLGS NF Bold Italic.ttf][7]

This results in a beautiful and context-aware terminal (as shown by [screenfetch][8]):

![terminator terminal shot via screenFetch ][9]

I've become accustomed to this particular setup, but, as important as it is to make your work environment your own, that's not a reason to be stubborn about trying new things. New terminals emerge to answer the needs and demands of new generations of users. That means that, even if it's unfamiliar at first, one of the more recently developed terminals could be better suited to today's environments and responsibilities than your old standby.

I have been considering other options recently. I started watching the development of [Starship][10], which describes itself as a minimal, blazing-fast, and infinitely customizable prompt for any shell. It still has a lot of visually immersive detail without as much of what some might find distracting in Powerlevel10k.

What's your favorite terminal, and why? Share in the comments!
--------------------------------------------------------------------------------

via: https://opensource.com/article/20/7/minimal-linux-terminal

作者:[Sumantro Mukherjee][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/sumantro
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_desk_home_laptop_browser.png?itok=Y3UVpY0l (Digital images of a computer desktop)
[2]: https://terminator-gtk3.readthedocs.io/en/latest/
[3]: https://github.com/romkatv/powerlevel10k
[4]: https://github.com/romkatv/powerlevel10k-media/raw/master/MesloLGS%20NF%20Regular.ttf
[5]: https://github.com/romkatv/powerlevel10k-media/raw/master/MesloLGS%20NF%20Bold.ttf
[6]: https://github.com/romkatv/powerlevel10k-media/raw/master/MesloLGS%20NF%20Italic.ttf
[7]: https://github.com/romkatv/powerlevel10k-media/raw/master/MesloLGS%20NF%20Bold%20Italic.ttf
[8]: https://github.com/KittyKatt/screenFetch
[9]: https://opensource.com/sites/default/files/uploads/osdc00_edit.png (terminator terminal shot via screenFetch )
[10]: https://starship.rs/
@ -0,0 +1,143 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Top 5 Open Source Video Conferencing Tools for Remote Working and Online Meetings)
[#]: via: (https://itsfoss.com/open-source-video-conferencing-tools/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

Top 5 Open Source Video Conferencing Tools for Remote Working and Online Meetings
======
You will find several video conferencing tools available online. Some are tailored for professional use and some for daily casual conversations.

However, with hundreds of options to choose from, security and privacy are often a concern when picking a video conferencing app or service. Among all those options, which is the best and most secure service?

Well, all (or most) of them claim to provide the best possible security and privacy. But you know that this cannot be taken at face value.

Fortunately, at It's FOSS, we focus on open-source solutions and privacy-friendly options. So, let's take a look at a list of open-source video conferencing tools that you can utilize.

### Top Open Source Video Conferencing Solutions

![][1]

Most of these video conferencing solutions can be installed on your own server if you are a small business or enterprise.

For normal, non-sysadmin folks, some of these solutions also provide a ready-to-use, free, web-based video conferencing service. I'll mention this information in the description of each item on the list.

**Note:** The list is in no particular order of ranking.
#### 1\. Jitsi Meet

![][2]

Jitsi Meet is an impressive open-source video conferencing service. You can easily find out more about it in our separate coverage of [Jitsi Meet][3].

To give you a head start, it offers a free [official public instance][4] that you can test and use for as long as you need.

If you need to host it on your own server while customizing some options for your requirements, you can download it from its [official website][5].

Even though they offer an Electron-based app for Linux, you don't need to download an app on your desktop to set it up. All you need is a browser, and you're good to go. For mobile, you will find apps for both Android and iOS.

[Jitsi Meet][5]

#### 2\. Jami

![][6]

Jami is a peer-to-peer based open-source video conferencing solution. It's good to see a distributed service that relies on peer-to-peer connections rather than servers.

Of course, a distributed service has its pros and cons. But it's free and open source, and that's what matters.

Jami is available for Linux, Windows, macOS, Android, and iOS, so it's a truly cross-platform solution for secure messaging and video conferencing. You can take a look at its [GitLab page][7] to explore more about it.

[Jami][8]

#### 3\. Nextcloud Talk

![][9]

[Nextcloud][10] is undoubtedly the open-source Swiss Army knife of remote working tools. We at It's FOSS utilize Nextcloud. So, if you already have your server set up, [Nextcloud Talk][11] can prove to be an excellent video conferencing and communication tool.

Of course, if you don't have your own Nextcloud instance, you will require some technical expertise to set it up and start using Nextcloud Talk.

[Nextcloud Talk][11]
#### 4\. Riot.im

![][12]

Riot.im (soon to be rebranded) is already one of the [best open-source alternatives to Slack][13].

It gives you the ability to create communities, send text messages, and start video conferences in a group or community. You can use it for free on any of the public [Matrix servers][14] available.

If you want your own dedicated decentralized Matrix network, you can also opt for the paid hosting plans on [Modular.im][15].

[Riot.im][16]

#### 5\. BigBlueButton

![][17]

BigBlueButton is an interesting open-source video conferencing option tailored for online learning.

If you are a teacher or run a school, you might want to try it out. Even though you can try it for free, the free demo usage comes with limitations. So, it's best to host it on your own server, and you can also integrate it with your other products/services, if any.

It offers a good set of features that let you easily teach students. You can explore its [GitHub page][18] to learn more about it.

[BigBlueButton][19]

#### Additional mention: Wire

![][20]

Wire is quite a popular open-source secure messaging platform tailored for business and enterprise users. It also offers video calls and web conferencing options.

If you want a premium open-source option dedicated to your business or your team, you can try Wire and decide whether to upgrade after the 30-day trial expires.

Personally, I love the user experience, but it comes at a cost. So, I'd recommend you give it a try and explore its [GitHub page][21] before you decide.

[Wire][22]

### Wrapping Up

Now that you know some popular open-source web conferencing options, which one do you prefer to use?

Did I miss any of your favorites? Let me know your thoughts in the comments below!
--------------------------------------------------------------------------------

via: https://itsfoss.com/open-source-video-conferencing-tools/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/07/open-source-video-conferencing-tools.jpg?ssl=1
[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/jitsi-meet-browser-screenshot.png?ssl=1
[3]: https://itsfoss.com/jitsi-meet/
[4]: https://meet.jit.si/
[5]: https://jitsi.org/jitsi-meet/
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/07/jami-screenshot.png?ssl=1
[7]: https://git.jami.net/savoirfairelinux/ring-project
[8]: https://jami.net/
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/07/nextcloud-talk.png?ssl=1
[10]: https://itsfoss.com/nextcloud/
[11]: https://nextcloud.com/talk/
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/07/riot-communication-video.png?ssl=1
[13]: https://itsfoss.com/open-source-slack-alternative/
[14]: https://matrix.org/
[15]: https://modular.im/
[16]: https://about.riot.im/
[17]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/07/big-blue-button.png?ssl=1
[18]: https://github.com/bigbluebutton/bigbluebutton
[19]: https://bigbluebutton.org/
[20]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/07/wire-video-conferencing.png?ssl=1
[21]: https://github.com/wireapp
[22]: https://wire.com/en/
@ -0,0 +1,90 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 things to look for in an open source alternative to SharePoint)
[#]: via: (https://opensource.com/article/20/7/sharepoint-alternative)
[#]: author: (Will Kelly https://opensource.com/users/willkelly)

5 things to look for in an open source alternative to SharePoint
======
If you're considering an open source collaboration platform to meet your remote workforce's needs, here are five things to keep in mind.
![Digital images of a computer desktop][1]
We're entering a collaboration platform renaissance as remote work becomes the norm for enterprises large and small. [Microsoft SharePoint][2]—a collaboration platform available on premises or in the cloud—is the de facto standard for corporations and government agencies. However, SharePoint implementations are infamous for the [challenges][3] that prevent their completion. Combine those common speed bumps with shrinking IT budgets and rising collaboration requirements driven by remote work, and open source alternatives to SharePoint become well worth a look.

Here are five things to consider in an open source alternative to SharePoint.

### Is it easy to install, set up, and use in the cloud?

Looking beyond installation and initial configuration, you want an open source alternative that's easy to set up. Treat open source collaboration tools as something you must take responsibility for, particularly in setup and user support, whether you have your IT department's approval or you're going shadow IT.

Chances are you'll be installing the platform in a public or private cloud space, so look for an open source collaboration platform that's cloud-friendly. For example, if your organization is running Amazon Web Services (AWS), you can install open source wikis, including [MediaWiki][4], [DokuWiki][5], and [TikiWiki][6], from the AWS Marketplace. After installing them, you can get an idea of how much using the platform will affect your organization's cloud bill.
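If you'd rather kick the tires locally before committing to cloud hosting, one quick path (my own illustration, not from the article, assuming you have Docker installed and using the official `mediawiki` image from Docker Hub) is a throwaway container:

```
# Run a disposable MediaWiki instance on http://localhost:8080
# (its data disappears when the container is removed).
$ docker run --name test-mediawiki -p 8080:80 -d mediawiki
```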
### Is it friendly to end-users?

Show me a complex collaboration site, and I'll show you the developers and other staff who are doing their darndest to work around it. You don't want to make this mistake.

Keep it simple if you want to spin up an open source collaboration platform to replace or augment SharePoint for your remote workers. The easier the collaboration platform is for your users, the better chance you have of winning them over as allies.

With features like a Configure Sites wizard, TikiWiki is an example of an open source collaboration platform that's end-user friendly.

### Are the content-editing tools easy to use?

Editing options are a major benefit of the diverse communities of open source contributors building these technologies. MediaWiki is one example of how open source collaboration platforms approach authoring tools. The project has an [Editing team][7] that focuses just on editing and authoring tools. Some of its projects include the [WikiEditor][8], [VisualEditor][9], and [CodeEditor][10] extensions. You're bound to find an editor that fits your users' workstyle.

This feature becomes especially important for developers, who have been known to rebel against SharePoint because it lacks Markdown support. Get feedback from your developers about their authoring needs. If Markdown is one of their requirements, make sure you choose an open source collaboration platform that supports it.

Also be sure to follow open source adoption best practices by ensuring the technology has an active community. For example, some DokuWiki editor plugins, such as [Ace Editor][11] and [Editor Plugin][12], haven't been updated in years.

### What kind of access control is available to protect content?

If you're dealing with project documentation or any type of sensitive corporate information, examine the access-control options in any open source collaboration platform you're considering. Look for support for read-only pages and access-control lists (ACLs).

Open source wikis are open by default. That's not necessarily a bad thing, depending on your security posture. SharePoint permissions are a [known trouble spot][13], even in the eyes of SharePoint experts. In contrast, DokuWiki has a well-documented [ACL feature][14].
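To get a feel for what that looks like in practice, here is a sketch based on DokuWiki's ACL documentation; the page and group names are hypothetical. Rules live in `conf/acl.auth.php` (usually managed through the admin UI) as whitespace-separated triples of scope, user or `@group`, and a numeric permission level (0 none, 1 read, 2 edit, 4 create, 8 upload, 16 delete):

```
# conf/acl.auth.php -- hypothetical rules for illustration
*             @ALL    1    # everyone may read by default
projects:*    @staff  8    # staff may read, edit, create, and upload here
projects:*    @ALL    0    # everyone else is locked out of projects:
```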
### Is it integration-friendly for your organization?

Even if you're moving to an open source collaboration platform as a last-minute replacement for an ailing SharePoint implementation, you can't ignore your integration requirements.

MediaWiki and TikiWiki use a MySQL backend. DokuWiki doesn't require a database; it uses plain text files. Databases can be an integration consideration, depending on your team members' database chops.

Integration with an authentication backend such as LDAP will also be necessary for some organizations. Security and compliance people get worried about new platforms that aren't aligned with corporate standards, and users often resent having yet another password to remember.

### Deploy with care

Open source collaboration alternatives have a unique growth opportunity as organizations find that their once-ignored collaboration tools aren't serving their burgeoning remote workforces. Regardless of your goals, deploy your open source SharePoint alternative with care.

Have you moved to an open source collaboration platform to better serve your remote workers? If so, please share your experiences in the comments.
--------------------------------------------------------------------------------

via: https://opensource.com/article/20/7/sharepoint-alternative

作者:[Will Kelly][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/willkelly
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_browser_web_desktop.png?itok=Bw8ykZMA (Digital images of a computer desktop)
[2]: https://www.microsoft.com/en-us/microsoft-365/sharepoint/collaboration
[3]: https://sharepointmaven.com/sharepoint-implementation-failed/
[4]: https://www.mediawiki.org/wiki/MediaWiki
[5]: https://www.dokuwiki.org/
[6]: https://tiki.org/HomePage
[7]: https://www.mediawiki.org/wiki/Editing_team
[8]: https://www.mediawiki.org/wiki/Extension:WikiEditor
[9]: https://www.mediawiki.org/wiki/Extension:VisualEditor
[10]: https://www.mediawiki.org/wiki/Extension:CodeEditor
[11]: https://www.dokuwiki.org/plugin:aceeditor
[12]: https://www.dokuwiki.org/plugin:editor
[13]: https://www.varonis.com/blog/why-do-sharepoint-permissions-cause-so-much-trouble/
[14]: https://www.dokuwiki.org/acl
@ -0,0 +1,741 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (An example of very lightweight RESTful web services in Java)
[#]: via: (https://opensource.com/article/20/7/restful-services-java)
[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)

An example of very lightweight RESTful web services in Java
======
Explore lightweight RESTful services in Java through a full code example to manage a book collection.
![Coding on a computer][1]
Web services, in one form or another, have been around for more than two decades. For example, [XML-RPC services][2] appeared in the late 1990s, followed shortly by ones written in the SOAP offshoot. Services in the [REST architectural style][3] also made the scene about two decades ago, soon after the XML-RPC and SOAP trailblazers. [REST][4]-style (hereafter, Restful) services now dominate in popular sites such as eBay, Facebook, and Twitter. Despite the alternatives to web services for distributed computing (e.g., web sockets, microservices, and new frameworks for remote-procedure calls), Restful web services remain attractive for several reasons:

  * Restful services build upon existing infrastructure and protocols, in particular, web servers and the HTTP/HTTPS protocols. An organization that has HTML-based websites can readily add web services for clients interested more in the data and underlying functionality than in the HTML presentation. Amazon, for example, has pioneered making the same information and functionality available through both websites and web services, either SOAP-based or Restful.

  * Restful services treat HTTP as an API, thereby avoiding the complicated software layering that has come to characterize the SOAP-based approach to web services. For example, the Restful API supports the standard CRUD (Create-Read-Update-Delete) operations through the HTTP verbs POST-GET-PUT-DELETE, respectively; HTTP status codes inform a requester whether a request succeeded or why it failed.

  * Restful web services can be as simple or complicated as needed. Restful is a style—indeed, a very flexible one—rather than a set of prescriptions about how services should be designed and structured. (The attendant downside is that it may be hard to determine what does _not_ count as a Restful service.)

  * For a consumer or client, Restful web services are language- and platform-neutral. The client makes requests in HTTP(S) and receives text responses in a format suitable for modern data interchange (e.g., JSON).

  * Almost every general-purpose programming language has at least adequate (and often strong) support for HTTP/HTTPS, which means that web-service clients can be written in those languages.

This article explores lightweight Restful services in Java through a full code example.
### The Restful novels web service

The Restful novels web service consists of three programmer-defined classes:

  * The `Novel` class represents a novel with just three properties: a machine-generated ID, an author, and a title. The properties could be expanded for more realism, but I want to keep this example simple.

  * The `Novels` class consists of utilities for various tasks: converting a plain-text encoding of a `Novel` or a list of them into XML or JSON; supporting the CRUD operations on the novels collection; and initializing the collection from data stored in a file. The `Novels` class mediates between `Novel` instances and the servlet.

  * The `NovelsServlet` class derives from `HttpServlet`, a sturdy and flexible piece of software that has been around since the very early enterprise Java of the late 1990s. The servlet acts as an HTTP endpoint for client CRUD requests. The servlet code focuses on processing client requests and generating the appropriate responses, leaving the devilish details to utilities in the `Novels` class.

Some Java frameworks, such as Jersey (JAX-RS) and Restlet, are designed for Restful services. Nonetheless, the `HttpServlet` on its own provides a lightweight, flexible, powerful, and well-tested API for delivering such services. I'll demonstrate this with the novels example.
### Deploy the novels web service

Deploying the novels web service requires a web server, of course. My choice is [Tomcat][5], but the service should work (famous last words!) if it's hosted on, for example, Jetty or even a Java Application Server. The code and a README that summarizes how to install Tomcat are [available on my website][6]. There is also a documented Apache Ant script that builds the novels service (or any other service or website) and deploys it under Tomcat or the equivalent.

Tomcat is available for download from its [website][7]. Once you install it locally, let `TOMCAT_HOME` be the install directory. There are two subdirectories of immediate interest:

  * The `TOMCAT_HOME/bin` directory contains startup and stop scripts for Unix-like systems (`startup.sh` and `shutdown.sh`) and Windows (`startup.bat` and `shutdown.bat`). Tomcat runs as a Java application. The web server's servlet container is named Catalina. (In Jetty, the web server and container have the same name.) Once Tomcat starts, enter `http://localhost:8080/` in a browser to see extensive documentation, including examples.

  * The `TOMCAT_HOME/webapps` directory is the default for deployed websites and web services. The straightforward way to deploy a website or web service is to copy a JAR file with a `.war` extension (hence, a WAR file) to `TOMCAT_HOME/webapps` or a subdirectory thereof. Tomcat then unpacks the WAR file into its own directory. For example, Tomcat would unpack `novels.war` into a subdirectory named `novels`, leaving `novels.war` as-is. A website or service can be removed by deleting the WAR file and updated by overwriting the WAR file with a new version. By the way, the first step in debugging a website or service is to check that Tomcat has unpacked the WAR file; if not, the site or service was not published because of a fatal error in the code or configuration.

Because Tomcat listens by default on port 8080 for HTTP requests, a request URL for Tomcat on the local machine begins:

```
http://localhost:8080/
```

Access a programmer-deployed WAR file by adding the WAR file's name but without the `.war` extension:

```
http://localhost:8080/novels/
```

If the service was deployed in a subdirectory (e.g., `myapps`) of `TOMCAT_HOME/webapps`, this would be reflected in the URL:

```
http://localhost:8080/myapps/novels/
```

I'll offer more details about this in the testing section near the end of the article.

As noted, the ZIP file on my homepage contains an Ant script that compiles and deploys a website or service. (A copy of `novels.war` is also included in the ZIP file.) For the novels example, a sample command (with `%` as the command-line prompt) is:

```
% ant -Dwar.name=novels deploy
```

This command compiles Java source files and then builds a deployable file named `novels.war`, leaves this file in the current directory, and copies it to `TOMCAT_HOME/webapps`. If all goes well, a `GET` request (using a browser or a command-line utility, such as `curl`) serves as a first test:

```
% curl http://localhost:8080/novels/
```

Tomcat is configured, by default, for _hot deploys_: the web server does not need to be shut down to deploy, update, or remove a web application.
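Putting those pieces together, a typical local cycle looks like this minimal sketch (assuming `TOMCAT_HOME` is set as an environment variable, per the description above):

```
# Start the Catalina container, hot-deploy the service, and stop it later.
$ $TOMCAT_HOME/bin/startup.sh
$ cp novels.war $TOMCAT_HOME/webapps/
$ $TOMCAT_HOME/bin/shutdown.sh
```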
### The novels service at the code level

Let's get back to the novels example, but at the code level. Consider the `Novel` class below:

#### Example 1. The Novel class

```
package novels;

import java.io.Serializable;

public class Novel implements Serializable, Comparable<Novel> {
    static final long serialVersionUID = 1L;
    private String author;
    private String title;
    private int id;

    public Novel() { }

    public void setAuthor(final String author) { this.author = author; }
    public String getAuthor() { return this.author; }
    public void setTitle(final String title) { this.title = title; }
    public String getTitle() { return this.title; }
    public void setId(final int id) { this.id = id; }
    public int getId() { return this.id; }

    public int compareTo(final Novel other) { return this.id - other.id; }
}
```

This class implements the `compareTo` method from the `Comparable` interface because `Novel` instances are stored in a thread-safe `ConcurrentHashMap`, which does not enforce a sorted order. In responding to requests to view the collection, the novels service sorts a collection (an `ArrayList`) extracted from the map; the implementation of `compareTo` enforces an ascending sorted order by `Novel` ID.
The class `Novels` contains various utility functions:

#### Example 2. The Novels utility class

```
package novels;

import java.io.IOException;
import java.io.File;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.BufferedReader;
import java.nio.file.Files;
import java.util.stream.Stream;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.Collections;
import java.beans.XMLEncoder;
import javax.servlet.ServletContext; // not in JavaSE
import org.json.JSONObject;
import org.json.XML;

public class Novels {
    private final String fileName = "/WEB-INF/data/novels.db";
    private ConcurrentMap<Integer, Novel> novels;
    private ServletContext sctx;
    private AtomicInteger mapKey;

    public Novels() {
        novels = new ConcurrentHashMap<Integer, Novel>();
        mapKey = new AtomicInteger();
    }

    public void setServletContext(ServletContext sctx) { this.sctx = sctx; }
    public ServletContext getServletContext() { return this.sctx; }

    public ConcurrentMap<Integer, Novel> getConcurrentMap() {
        if (getServletContext() == null) return null; // not initialized
        if (novels.size() < 1) populate();
        return this.novels;
    }

    public String toXml(Object obj) { // default encoding
        String xml = null;
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            XMLEncoder encoder = new XMLEncoder(out);
            encoder.writeObject(obj);
            encoder.close();
            xml = out.toString();
        }
        catch(Exception e) { }
        return xml;
    }

    public String toJson(String xml) { // option for requester
        try {
            JSONObject jobt = XML.toJSONObject(xml);
            return jobt.toString(3); // 3 is indentation level
        }
        catch(Exception e) { }
        return null;
    }

    public int addNovel(Novel novel) {
        int id = mapKey.incrementAndGet();
        novel.setId(id);
        novels.put(id, novel);
        return id;
    }

    private void populate() {
        InputStream in = sctx.getResourceAsStream(this.fileName);
        // Convert novels.db string data into novels.
        if (in != null) {
            try {
                InputStreamReader isr = new InputStreamReader(in);
                BufferedReader reader = new BufferedReader(isr);

                String record = null;
                while ((record = reader.readLine()) != null) {
                    String[] parts = record.split("!");
                    if (parts.length == 2) {
                        Novel novel = new Novel();
                        novel.setAuthor(parts[0]);
                        novel.setTitle(parts[1]);
                        addNovel(novel); // sets the Id, adds to map
                    }
                }
                in.close();
            }
            catch (IOException e) { }
        }
    }
}
```
The most complicated method is `populate`, which reads from a text file contained in the deployed WAR file. The text file contains the initial collection of novels. To open the text file, the `populate` method needs the `ServletContext`, a Java map that contains all of the critical information about the servlet embedded in the servlet container. The text file, in turn, contains records such as this:

```
Jane Austen!Persuasion
```

The line is parsed into two parts (author and title) separated by the bang symbol (`!`). The method then builds a `Novel` instance, sets the author and title properties, and adds the novel to the collection, which acts as an in-memory data store.

The `Novels` class also has utilities to encode the novels collection into XML or JSON, depending upon the format that the requester prefers. XML is the default, but JSON is available upon request. A lightweight XML-to-JSON package provides the JSON. Further details on encoding are below.
#### Example 3. The NovelsServlet class

```
package novels;

import java.util.concurrent.ConcurrentMap;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.util.Arrays;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.OutputStream;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.beans.XMLEncoder;
import org.json.JSONObject;
import org.json.XML;

public class NovelsServlet extends HttpServlet {
    static final long serialVersionUID = 1L;
    private Novels novels; // back-end bean

    // Executed when servlet is first loaded into container.
    @Override
    public void init() {
        this.novels = new Novels();
        novels.setServletContext(this.getServletContext());
    }

    // GET /novels
    // GET /novels?id=1
    @Override
    public void doGet(HttpServletRequest request, HttpServletResponse response) {
        String param = request.getParameter("id");
        Integer key = (param == null) ? null : Integer.valueOf((param.trim()));

        // Check user preference for XML or JSON by inspecting
        // the HTTP headers for the Accept key.
        boolean json = false;
        String accept = request.getHeader("accept");
        if (accept != null && accept.contains("json")) json = true;

        // If no query string, assume client wants the full list.
        if (key == null) {
            ConcurrentMap<Integer, Novel> map = novels.getConcurrentMap();
            Object[] list = map.values().toArray();
            Arrays.sort(list);

            String payload = novels.toXml(list);        // defaults to Xml
            if (json) payload = novels.toJson(payload); // Json preferred?
            sendResponse(response, payload);
        }
        // Otherwise, return the specified Novel.
        else {
            Novel novel = novels.getConcurrentMap().get(key);
            if (novel == null) { // no such Novel
                String msg = key + " does not map to a novel.\n";
                sendResponse(response, novels.toXml(msg));
            }
            else { // requested Novel found
                if (json) sendResponse(response, novels.toJson(novels.toXml(novel)));
                else sendResponse(response, novels.toXml(novel));
            }
        }
    }

    // POST /novels
    @Override
    public void doPost(HttpServletRequest request, HttpServletResponse response) {
        String author = request.getParameter("author");
        String title = request.getParameter("title");

        // Are the data to create a new novel present?
        if (author == null || title == null)
            throw new RuntimeException(Integer.toString(HttpServletResponse.SC_BAD_REQUEST));

        // Create a novel.
        Novel n = new Novel();
        n.setAuthor(author);
        n.setTitle(title);

        // Save the ID of the newly created Novel.
        int id = novels.addNovel(n);

        // Generate the confirmation message.
        String msg = "Novel " + id + " created.\n";
        sendResponse(response, novels.toXml(msg));
    }

    // PUT /novels
    @Override
    public void doPut(HttpServletRequest request, HttpServletResponse response) {
        /* A workaround is necessary for a PUT request because Tomcat does not
           generate a workable parameter map for the PUT verb. */
        String key = null;
        String rest = null;
        boolean author = false;

        /* Let the hack begin. */
        try {
            BufferedReader br =
                new BufferedReader(new InputStreamReader(request.getInputStream()));
            String data = br.readLine();
            /* To simplify the hack, assume that the PUT request has exactly
               two parameters: the id and either author or title. Assume, further,
               that the id comes first. From the client side, a hash character
               # separates the id and the author/title, e.g.,

               id=33#title=War and Peace
            */
            String[] args = data.split("#");      // id in args[0], rest in args[1]
            String[] parts1 = args[0].split("="); // id = parts1[1]
            key = parts1[1];

            String[] parts2 = args[1].split("="); // parts2[0] is key
            if (parts2[0].contains("author")) author = true;
            rest = parts2[1];
        }
        catch(Exception e) {
            throw new RuntimeException(Integer.toString(HttpServletResponse.SC_INTERNAL_SERVER_ERROR));
        }

        // If no key, then the request is ill formed.
        if (key == null)
            throw new RuntimeException(Integer.toString(HttpServletResponse.SC_BAD_REQUEST));

        // Look up the specified novel.
        Novel p = novels.getConcurrentMap().get(Integer.valueOf((key.trim())));
        if (p == null) { // not found
            String msg = key + " does not map to a novel.\n";
            sendResponse(response, novels.toXml(msg));
        }
        else { // found
            if (rest == null) {
                throw new RuntimeException(Integer.toString(HttpServletResponse.SC_BAD_REQUEST));
            }
            // Do the editing.
            else {
                if (author) p.setAuthor(rest);
                else p.setTitle(rest);

                String msg = "Novel " + key + " has been edited.\n";
                sendResponse(response, novels.toXml(msg));
            }
        }
    }

    // DELETE /novels?id=1
    @Override
    public void doDelete(HttpServletRequest request, HttpServletResponse response) {
        String param = request.getParameter("id");
        Integer key = (param == null) ? null : Integer.valueOf((param.trim()));
        // Only one Novel can be deleted at a time.
        if (key == null)
            throw new RuntimeException(Integer.toString(HttpServletResponse.SC_BAD_REQUEST));
        try {
            novels.getConcurrentMap().remove(key);
            String msg = "Novel " + key + " removed.\n";
            sendResponse(response, novels.toXml(msg));
        }
        catch(Exception e) {
            throw new RuntimeException(Integer.toString(HttpServletResponse.SC_INTERNAL_SERVER_ERROR));
        }
    }

    // Methods Not Allowed
    @Override
    public void doTrace(HttpServletRequest request, HttpServletResponse response) {
        throw new RuntimeException(Integer.toString(HttpServletResponse.SC_METHOD_NOT_ALLOWED));
    }

    @Override
    public void doHead(HttpServletRequest request, HttpServletResponse response) {
        throw new RuntimeException(Integer.toString(HttpServletResponse.SC_METHOD_NOT_ALLOWED));
    }

    @Override
    public void doOptions(HttpServletRequest request, HttpServletResponse response) {
        throw new RuntimeException(Integer.toString(HttpServletResponse.SC_METHOD_NOT_ALLOWED));
    }

    // Send the response payload (Xml or Json) to the client.
    private void sendResponse(HttpServletResponse response, String payload) {
        try {
            OutputStream out = response.getOutputStream();
            out.write(payload.getBytes());
            out.flush();
        }
        catch(Exception e) {
            throw new RuntimeException(Integer.toString(HttpServletResponse.SC_INTERNAL_SERVER_ERROR));
        }
    }
}
```
Recall that the `NovelsServlet` class above extends the `HttpServlet` class, which in turn extends the `GenericServlet` class, which implements the `Servlet` interface:

```
NovelsServlet extends HttpServlet extends GenericServlet implements Servlet
```

As the name makes clear, the `HttpServlet` is designed for servlets delivered over HTTP(S). The class provides empty methods named after the standard HTTP request verbs (officially, _methods_):

  * `doPost` (Post = Create)
  * `doGet` (Get = Read)
  * `doPut` (Put = Update)
  * `doDelete` (Delete = Delete)

Some additional HTTP verbs are covered as well. An extension of the `HttpServlet`, such as the `NovelsServlet`, overrides any `do` method of interest, leaving the others as no-ops. The `NovelsServlet` overrides seven of the `do` methods.

Each of the `HttpServlet` CRUD methods takes the same two arguments. Here is `doPost` as an example:

```
public void doPost(HttpServletRequest request, HttpServletResponse response) {
```

The `request` argument is a map of the HTTP request information, and the `response` provides an output stream back to the requester. A method such as `doPost` is structured as follows:

  * Read the `request` information, taking whatever action is appropriate to generate a response. If information is missing or otherwise deficient, generate an error.

  * Use the extracted request information to perform the appropriate CRUD operation (in this case, create a `Novel`) and then encode an appropriate response to the requester, using the `response` output stream to do so. In the case of `doPost`, the response is a confirmation that a new novel has been created and added to the collection. Once the response is sent, the output stream is closed, which closes the connection as well.

### More on the do method overrides

An HTTP request has a relatively simple structure. Here is a sketch in the familiar HTTP 1.1 format, with comments introduced by double sharp signs:

```
GET /novels              ## start line
Host: localhost:8080     ## header element
Accept-type: text/plain  ## ditto
...
[body]                   ## POST and PUT only
```

The start line begins with the HTTP verb (in this case, `GET`) and the URI (Uniform Resource Identifier), which is the noun (in this case, `novels`) that names the targeted resource. The headers consist of key-value pairs, with a colon separating the key on the left from the value(s) on the right. The header with key `Host` (case insensitive) is required; the hostname `localhost` is the symbolic address of the local machine on the local machine, and the port number `8080` is the default for the Tomcat web server awaiting HTTP requests. (By default, Tomcat listens on port 8443 for HTTPS requests.) The header elements can occur in arbitrary order. In this example, the `Accept-type` header's value is the MIME type `text/plain`.

Some requests (in particular, `POST` and `PUT`) have bodies, whereas others (in particular, `GET` and `DELETE`) do not. If there is a body (perhaps empty), two newlines separate the headers from the body; the HTTP body consists of key-value pairs. For bodyless requests, header elements, such as the query string, can be used to send information. Here is a request to `GET` the `/novels` resource with the ID of 2:

```
GET /novels?id=2
```

The query string starts with the question mark and, in general, consists of key-value pairs, although a key without a value is possible.

The `HttpServlet`, with methods such as `getParameter` and `getParameterMap`, nicely hides the distinction between HTTP requests with and without a body. In the novels example, the `getParameter` method is used to extract the required information from the `GET`, `POST`, and `DELETE` requests. (Handling a `PUT` request requires lower-level code because Tomcat does not provide a workable parameter map for `PUT` requests.) Here, for illustration, is a slice of the `doPost` method in the `NovelsServlet` override:

```
@Override
public void doPost(HttpServletRequest request, HttpServletResponse response) {
    String author = request.getParameter("author");
    String title = request.getParameter("title");
    ...
```

For a bodyless `DELETE` request, the approach is essentially the same:

```
@Override
public void doDelete(HttpServletRequest request, HttpServletResponse response) {
    String param = request.getParameter("id"); // id of novel to be removed
    ...
```

The `doGet` method needs to distinguish between two flavors of a `GET` request: one flavor means _get all_, whereas the other means _get a specified one_. If the `GET` request URL contains a query string whose key is an ID, then the request is interpreted as "get a specified one":

```
http://localhost:8080/novels?id=2  ## GET specified
```

If there is no query string, the `GET` request is interpreted as "get all":

```
http://localhost:8080/novels       ## GET all
```
### Some devilish details

The novels service design reflects how a Java-based web server such as Tomcat works. At startup, Tomcat builds a thread pool from which request handlers are drawn, an approach known as the _one thread per request_ model. Modern versions of Tomcat also use non-blocking I/O to boost performance.

The novels service executes as a _single_ instance of the `NovelsServlet` class, which in turn maintains a _single_ collection of novels. Accordingly, a race condition would arise, for example, if these two requests were processed concurrently:

  * One request changes the collection by adding a new novel.
  * The other request gets all the novels in the collection.

The outcome is indeterminate, depending on exactly how the _read_ and _write_ operations overlap. To avoid this problem, the novels service uses a thread-safe `ConcurrentMap`. Keys for this map are generated with a thread-safe `AtomicInteger`. Here is the relevant code segment:

```
public class Novels {
    private ConcurrentMap<Integer, Novel> novels;
    private AtomicInteger mapKey;
    ...
```
By default, a response to a client request is encoded as XML. The novels program uses the old-time `XMLEncoder` class for simplicity; a far richer option is the JAX-B library. The code is straightforward:

```
public String toXml(Object obj) { // default encoding
    String xml = null;
    try {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        XMLEncoder encoder = new XMLEncoder(out);
        encoder.writeObject(obj);
        encoder.close();
        xml = out.toString();
    }
    catch (Exception e) { }
    return xml;
}
```

The `Object` parameter is either a sorted `ArrayList` of novels (in response to a "get all" request), a single `Novel` instance (in response to a "get one" request), or a `String` (a confirmation message).

If an HTTP request header refers to JSON as a desired type, then the XML is converted to JSON. Here is the check in the `doGet` method of the `NovelsServlet`:

```
String accept = request.getHeader("accept"); // "accept" is case-insensitive
if (accept != null && accept.contains("json")) json = true;
```

The `Novels` class houses the `toJson` method, which converts XML to JSON:

```
public String toJson(String xml) { // option for requester
    try {
        JSONObject jobt = XML.toJSONObject(xml);
        return jobt.toString(3); // 3 is the indentation level
    }
    catch (Exception e) { }
    return null;
}
```
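
Putting the two methods together, the response step in `doGet` presumably looks something like the following sketch (the article does not show this exact code):

```
// Hypothetical sketch: encode as XML by default, convert to JSON on request.
String payload = novels.toXml(obj);         // obj: list, novel, or message
if (json) payload = novels.toJson(payload); // json flag set from the Accept header
response.getWriter().write(payload);        // may throw IOException in a real servlet
```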

The `NovelsServlet` checks for errors of various types. For example, a `POST` request should include an author and a title for the new novel. If either is missing, the `doPost` method throws an exception:

```
if (author == null || title == null)
    throw new RuntimeException(Integer.toString(HttpServletResponse.SC_BAD_REQUEST));
```

The `SC` in `SC_BAD_REQUEST` stands for _status code_, and `BAD_REQUEST` has the standard HTTP numeric value of 400. If the HTTP verb in a request is `TRACE`, a different status code is returned:

```
public void doTrace(HttpServletRequest request, HttpServletResponse response) {
    throw new RuntimeException(Integer.toString(HttpServletResponse.SC_METHOD_NOT_ALLOWED));
}
```
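
The article does not show how these thrown exceptions become HTTP status codes. One plausible arrangement (an assumption, not the author's confirmed mechanism) is to override `service`, which dispatches to the `do` methods, and translate the exception message there:

```
// Hypothetical sketch: turn a thrown status-code message into an HTTP error.
@Override
public void service(HttpServletRequest request, HttpServletResponse response) {
    try {
        super.service(request, response); // dispatches to doGet, doPost, and so on
    }
    catch (Exception e) {
        try { response.sendError(Integer.parseInt(e.getMessage())); } // e.g., "400"
        catch (Exception ignored) { }
    }
}
```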

### Testing the novels service

Testing a web service with a browser is tricky. Among the CRUD verbs, modern browsers generate only `POST` (Create) and `GET` (Read) requests. Even a `POST` request is challenging from a browser, as the key-value pairs for the body need to be included, typically through an HTML form. A command-line utility such as [curl][21] is a better way to go, as this section illustrates with some `curl` commands, which are included in the ZIP file on my website.

Here are some sample tests without the corresponding output:

```
% curl localhost:8080/novels/
% curl localhost:8080/novels?id=1
% curl --header "Accept: application/json" localhost:8080/novels/
```

The first command requests all the novels, which are encoded by default in XML. The second command requests the novel with an ID of 1, again encoded in XML. The last command adds an `Accept` header element with `application/json` as the desired MIME type. The "get one" command could also use this header element. Such requests receive JSON rather than XML responses.

The next two commands create a new novel in the collection and confirm the addition:

```
% curl --request POST --data "author=Tolstoy&title=War and Peace" localhost:8080/novels/
% curl localhost:8080/novels?id=4
```

A `PUT` command in `curl` resembles a `POST` command except that the `PUT` body does not use standard syntax. The documentation for the `doPut` method in the `NovelsServlet` goes into detail, but the short version is that Tomcat does not generate a proper parameter map on `PUT` requests. Here is a sample `PUT` command and a confirmation command:

```
% curl --request PUT --data "id=3#title=This is an UPDATE" localhost:8080/novels/
% curl localhost:8080/novels?id=3
```

The second command confirms the update.
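
The article does not reproduce `doPut` itself. A minimal sketch of the lower-level body handling, assuming the `#`-separated key-value syntax from the sample command above, could read:

```
// Hypothetical sketch of doPut: Tomcat supplies no parameter map for PUT,
// so the body is read and parsed by hand ('#' matches the sample command above).
@Override
public void doPut(HttpServletRequest request, HttpServletResponse response) {
    try {
        BufferedReader reader = request.getReader();   // java.io.BufferedReader
        String body = reader.readLine();               // e.g., "id=3#title=This is an UPDATE"
        for (String pair : body.split("#")) {
            String[] kv = pair.split("=", 2);          // kv[0] is the key, kv[1] the value
            // ...update the matching field of the stored novel.
        }
    }
    catch (Exception e) { }
}
```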

Finally, the `DELETE` command works as expected:

```
% curl --request DELETE localhost:8080/novels?id=2
% curl localhost:8080/novels/
```

The request is for the novel with the ID of 2 to be deleted. The second command shows the remaining novels.

### The web.xml configuration file

Although it's officially optional, a `web.xml` configuration file is a mainstay in a production-grade website or service. The configuration file allows routing, security, and other features of a site or service to be specified independently of the implementation code. The configuration for the novels service handles routing by providing a URL pattern for requests dispatched to this service:

```
<?xml version = "1.0" encoding = "UTF-8"?>
<web-app>
  <servlet>
    <servlet-name>novels</servlet-name>
    <servlet-class>novels.NovelsServlet</servlet-class>
  </servlet>
  <servlet-mapping>
    <servlet-name>novels</servlet-name>
    <url-pattern>/*</url-pattern>
  </servlet-mapping>
</web-app>
```

The `servlet-name` element provides an abbreviation (`novels`) for the servlet's fully qualified class name (`novels.NovelsServlet`), and this name is used in the `servlet-mapping` element below.

Recall that a URL for a deployed service has the WAR file name right after the port number:

```
http://localhost:8080/novels/
```

The slash immediately after the port number begins the URI known as the _path_ to the requested resource, in this case, the novels service; hence, the term `novels` occurs right after the first single slash.

In the `web.xml` file, the `url-pattern` is specified as `/*`, which, within this application's context, means _any path that starts with `/novels`_. Suppose Tomcat encounters a contrived request URL, such as this:

```
http://localhost:8080/novels/foobar/
```

The `web.xml` configuration specifies that this request, too, should be dispatched to the novels servlet because the `/*` pattern covers `/foobar`. The contrived URL thus has the same result as the legitimate one shown above it.

A production-grade configuration file might include information on security, both wire-level and user roles. Even in this case, the configuration file would be only two or three times the size of the sample one.
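
For illustration only (this snippet is not part of the novels service), the Servlet specification's standard security elements can express both of those concerns in `web.xml`:

```
<!-- Illustrative sketch: standard web.xml security elements, with a hypothetical role. -->
<security-constraint>
  <web-resource-collection>
    <web-resource-name>novels</web-resource-name>
    <url-pattern>/*</url-pattern>
  </web-resource-collection>
  <auth-constraint>
    <role-name>librarian</role-name> <!-- hypothetical role name -->
  </auth-constraint>
  <user-data-constraint>
    <transport-guarantee>CONFIDENTIAL</transport-guarantee> <!-- wire-level security: HTTPS -->
  </user-data-constraint>
</security-constraint>
```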

### Wrapping up

The `HttpServlet` is at the center of Java's web technologies. A website or web service, such as the novels service, extends this class, overriding the `do` verbs of interest. A Restful framework such as Jersey (JAX-RS) or Restlet does essentially the same by providing a customized servlet, which then acts as the HTTP(S) endpoint for requests against a web application written in the framework.

A servlet-based application has access, of course, to any Java library required in the web application. If the application follows the separation-of-concerns principle, then the servlet code remains attractively simple: the code checks a request, issuing the appropriate error if there are deficiencies; otherwise, the code calls out for whatever functionality may be required (e.g., querying a database, encoding a response in a specified format), and then sends the response to the requester. The `HttpServletRequest` and `HttpServletResponse` types make it easy to perform the servlet-specific work of reading the request and writing the response.

Java has APIs that range from the very simple to the highly complicated. If you need to deliver some Restful services using Java, my advice is to give the low-fuss `HttpServlet` a try before anything else.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/7/restful-services-java

作者:[Marty Kalin][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/mkalindepauledu
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_laptop_hack_work.png?itok=aSpcWkcl (Coding on a computer)
[2]: http://xmlrpc.com/
[3]: https://en.wikipedia.org/wiki/Representational_state_transfer
[4]: https://www.redhat.com/en/topics/integration/whats-the-difference-between-soap-rest
[5]: http://tomcat.apache.org/
[6]: https://condor.depaul.edu/mkalin
[7]: https://tomcat.apache.org/download-90.cgi
[8]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+serializable
[9]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
[10]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+integer
[11]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+object
[12]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+bytearrayoutputstream
[13]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+exception
[14]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+inputstream
[15]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+inputstreamreader
[16]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+bufferedreader
[17]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+ioexception
[18]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+arrays
[19]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+runtimeexception
[20]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+outputstream
[21]: https://curl.haxx.se/
@ -0,0 +1,97 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What's the difference between DevSecOps and agile software development)
[#]: via: (https://opensource.com/article/20/7/devsecops-vs-agile)
[#]: author: (Sam Bocetta https://opensource.com/users/sambocetta)

What's the difference between DevSecOps and agile software development
======

Are you focused more on security or software delivery? Or can you have both?

![Brick wall between two people, a developer and an operations manager][1]

There is a tendency in the tech community to use the terms DevSecOps and agile development interchangeably. While there are some similarities, such as that both aim to detect risks earlier, there are also distinctions that [drastically alter how each would work][2] in your organization.

DevSecOps builds on some of the principles that agile development established. However, DevSecOps is [especially focused on integrating security features][3], while agile is focused on delivering software.

Knowing how to protect your website or application from ransomware and other threats really comes down to the software and systems development practices you use. Your needs may determine whether you choose DevSecOps, agile development, or both.

### Differences between DevSecOps and agile

The main distinction between these two systems comes down to one simple concept: security. Depending on your software development practices, your company's security measures (and when, where, and by whom they are implemented) may differ significantly.

Every business [needs IT security][4] to protect its vital data. Virtual private networks (VPNs), digital certificates, firewall protection, multi-factor authentication, secure cloud storage, and teaching employees basic cybersecurity measures are all actions a business should take if it truly values IT security.

When you adopt DevSecOps, you are essentially making your company's security tantamount to continuous integration and delivery. DevSecOps methodologies emphasize security at the very beginning of development and make it an integral component of overall software quality.

This is due to three major principles in DevSecOps security:
* Balancing user access with data security
* [Encrypting data][5] with VPN and SSL to protect it from intruders while it is in transit
* Anticipating future risks with tools that scan new code for security flaws and notify developers about them

While DevOps has always intended to include security, not every organization practicing DevOps has kept it in mind. That is where DevSecOps, as an evolution of DevOps, can offer clarity. Despite the similarity of their names, the two [should not be confused][6]. In a DevSecOps model, security is the primary driving force for the organization.

Meanwhile, agile development is more focused on iterative development cycles, which means feedback is constantly integrated into continuous software development. [Agile's key principles][7] are to embrace changing environments to provide customers and clients with competitive advantages, to collaborate closely with developers and stakeholders, and to maintain a consistent focus on technical excellence throughout the process to help boost efficiency. In other words, unless an agile team includes security in its definition of excellence, security _is_ an afterthought in agile.

### Challenges for defense agencies

If there's any organization dedicated to the utmost in security, it's the U.S. Department of Defense. In 2018, the DoD published a [guide to detecting "fake agile"][8] or "agile in name only" software development. The guide was designed to warn DoD executives about bad programming practices and explain how to spot them in order to avoid risks.

It's not only the DoD that has something to gain by using these methodologies. The healthcare and financial sectors also [maintain massive quantities][9] of sensitive data that must remain secure.

The DoD's changing of the guard with its modernization strategy, which includes the adoption of DevSecOps, is essential. This is particularly pertinent in an age when even the DoD is susceptible to hacker attacks and data breaches, as evidenced by its [massive data breach][10] in February 2020.

There are also risks inherent in transferring cybersecurity best practices into real-life development. Things won't go perfectly 100% of the time. At best, things will be uncomfortable, and at worst, they could create a whole new set of risks.

Developers, especially those working on code for military software, may not have a thorough [understanding of all contexts][11] where DevSecOps should be employed. There will be a steep learning curve, but, for the greater good of security, these are necessary growing pains.

### New models in the age of automation

To address growing concerns about previous security measures, DoD contractors have begun to assess the DevSecOps model. The key is deploying the methodology into continuous service delivery contexts.

There are three ways this can happen. The first involves automation, which is [already being used][12] in most privacy and security tools, including VPNs and privacy-enhanced mobile operating systems. Instead of relying on human-based checks and balances, automation in large-scale cloud infrastructures can handle ongoing maintenance and security assessments.

The second element involves the transition to DevSecOps as the primary security checkpoint. Traditionally, systems were designed with zero expectation that data would be accessible as it moves between various components.

The third and final element involves bringing corporate approaches to military software development. Many DoD contractors and employees come from the commercial sector rather than the military. Their background gives them knowledge and experience in [providing cybersecurity][13] to large-scale businesses, which they can bring into government positions.

### Challenges worth overcoming

Switching to a DevSecOps-based methodology presents some challenges. In the last decade, many organizations have completely redesigned their development lifecycles to comply with agile development practices, and making another switch so soon may seem daunting.

Businesses should gain peace of mind knowing that even the DoD has had trouble with this transition; they are not alone in the challenge of rolling out new processes to make commercial techniques and tools more widely accessible.

Looking into the future, the switch to DevSecOps will be no more painful than the switch to agile development was. Firms have a lot to gain by acknowledging the [value of building security][4] into development workflows, as well as by building upon the advantages of existing agile networks.
--------------------------------------------------------------------------------

via: https://opensource.com/article/20/7/devsecops-vs-agile

作者:[Sam Bocetta][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/sambocetta
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/devops_confusion_wall_questions.png?itok=zLS7K2JG (Brick wall between two people, a developer and an operations manager)
[2]: https://tech.gsa.gov/guides/understanding_differences_agile_devsecops/
[3]: https://www.redhat.com/en/topics/devops/what-is-devsecops
[4]: https://www.redhat.com/en/topics/security
[5]: https://surfshark.com/blog/does-vpn-protect-you-from-hackers
[6]: https://www.infoq.com/articles/evolve-devops-devsecops/
[7]: https://enterprisersproject.com/article/2019/9/agile-project-management-explained
[8]: https://www.governmentciomedia.com/defense-innovation-board-issues-guide-detecting-agile-bs
[9]: https://www.redhat.com/en/solutions/financial-services
[10]: https://www.military.com/daily-news/2020/02/25/dod-agency-suffers-data-breach-potentially-compromising-ssns.html
[11]: https://fcw.com/articles/2020/01/23/dod-devsecops-guidance-williams.aspx
[12]: https://privacyaustralia.net/privacy-tools/
[13]: https://www.securitymagazine.com/articles/88301-cybersecurity-is-standard-business-practice-for-large-companies
@ -1,155 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: (Yufei-Yan)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What is IoT? The internet of things explained)
[#]: via: (https://www.networkworld.com/article/3207535/what-is-iot-the-internet-of-things-explained.html)
[#]: author: (Josh Fruhlinger https://www.networkworld.com/author/Josh-Fruhlinger/)

什么是 IoT?物联网简介
======

物联网(IoT)是一个将智能设备连接起来的网络,并提供了丰富的数据,但是它也有可能是一场安全领域的噩梦。

对于越来越多不属于传统计算,但却相互连接用来收发数据的电子设备来说,物联网(IoT)这个词可以说是最夺人眼球的。

现在有数不胜数的东西可以归为这一类:传统家用电器里面可以联网的那一部分“智能设备”,比如说电冰箱和灯泡;那些只能运行于有互联网环境的小设备,比如像 Alexa 之类的电子助手;与互联网连接的传感器,它们正在改变着工厂、医疗、运输、物流中心和农场。

### 什么是物联网?

物联网(IoT)将互联网、数据处理和分析的能力带给了现实世界中的各种物品。对于消费者来说,这就意味着不需要键盘和显示器这些东西,就能和全世界的信息进行互动;日常用品当中,很多都可以通过网络得到操作指令,从而最大程度地减少了人工操作。

**[更多关于物联网(IoT)在网络世界中的内容][1]**

在企业环境当中,互联网长期以来为制造和分销提供了便利,物联网(IoT)也能带来同样的效率。全世界数以百万计甚至数十亿计的嵌入式互联网传感器正在提供着丰富的数据,企业可以利用这些数据来保证运营安全、跟踪资产和减少人工处理的次数。研究人员也可以使用物联网(IoT)来获取人们的喜好和行为数据,尽管这些做法可能会严重地涉及隐私和安全问题。

### 它有多大?

一句话:非常庞大。[Priceonomics 对它进行了拆解][2]:在 2020 年的时候,有超过 50 亿的物联网(IoT)设备,这些设备可以生成 4.4 泽字节(zettabyte,译者注:1 zettabyte = 10<sup>9</sup> terabyte = 10<sup>12</sup> gigabyte)的数据。相比较,物联网(IoT)设备在 2013 年仅仅产生了 1000 亿千兆字节(gigabyte)的数据。在物联网(IoT)市场上可能挣到的钱也同样让人瞠目;到 2025 年,这块市场的价值可以达到 1.6 万亿美元到 14.4 万亿美元不等。

### 物联网(IoT)的历史

一个所有设备和传感器无处不连接的世界,是科幻小说中最经典的景象之一。按照物联网(IoT)的传说,世界上第一个物联网设备是 1970 年 [卡耐基•梅隆大学的一台自动贩卖机][3],它被连接到了 ARPANET 上;此后,很多技术也被发明出来,向大家展示了一个充满未来感的智能物联网。但是直到 1999 年,物联网这个词才由英国的技术人员 [Kevin Ashton][4] 提出来。

一开始,技术是滞后于当时对未来的憧憬的。每个与互联网相连的设备都需要一个处理器和一种能和其他东西通信的方式,无线的最好,这些因素都增加了物联网(IoT)大规模实际应用的成本和性能要求,这种情况至少一直持续到 21 世纪头十年中期,直到摩尔定律赶上来。

一个重要的里程碑是 [RFID 标签的大规模使用][5],这种价格低廉的转发器可以被贴在任何物品上,然后这些物品就可以连接到更大的互联网上了。对于设计者来说,无处不在的 Wi-Fi 和 4G 让任何地方的无线连接都变得非常简单。而且,IPv6 的出现让人们再也不用担心把数十亿小设备连接到互联网上会将 IP 地址耗尽。(相关故事:[物联网(IoT)网络可以促进 IPv6 的使用吗?][6])

### 物联网(IoT)是如何工作的?

物联网(IoT)的基本元素是收集数据的设备。广义地说,它们是和互联网相连的设备,所以每一个设备都有 IP 地址。这些设备涵盖了从工厂运输货物的自动驾驶车辆到简单的温度监控传感器,其中也包括每天统计步数的个人手环。为了让这些数据变得有意义,就需要收集、处理、过滤和分析这些数据,而每一种数据都可以通过多种方式进行处理。

搜集数据是通过把数据从设备上发送到搜集端来完成的,可以通过各种无线或者有线网络进行数据的转移。数据可以通过互联网发送到具有存储空间或者计算能力的数据中心或者云端,或者这些数据也可以分段地进行传输,由中间设备将数据汇总后再发送。

处理数据可以在数据中心或者云端进行,但是有时候这不太可行。对于一些非常重要的设备,比如说工业领域的关停设备,从设备上将数据发送到远程数据中心的延迟代价实在是太高了。发送、处理、分析数据和返回指令(在管道爆炸之前关闭阀门)这些操作,来回一趟的时间可能要花费非常多的时间。在这种情况下,边缘计算(edge-computing)就可以大显身手了:智能边缘设备可以就近汇总数据、分析数据,并在需要的时候作出回应,从而降低延迟。边缘设备还可以有上游连接,这样数据就可以进一步被处理和储存。

[][7] 网络世界 / IDG

*物联网(IoT)是如何工作的。*

### 物联网(IoT)设备的一些例子

本质上,任何可以搜集来自真实世界的数据,并且可以将数据发送回去的设备都可以参与到物联网(IoT)生态系统中。典型的例子包括智能家居设备、射频识别标签(RFID)和工业传感器。这些传感器可以监控一系列的要素:工业系统中的温度和压力、机器中关键设备的状态、患者身上与生命体征相关的信号、水电的使用情况,以及其他许多可能的东西。

整个工厂的机器人可以被认为是物联网(IoT)设备,在工业环境和仓库之间转移货物的自动化载具也是如此。

其他的例子包括可穿戴设备和家庭安防系统。还有一些其他更基础的设备,比如说[树莓派(Raspberry Pi)][8]和 [Arduino][9],这些设备可以让你构建你自己的物联网(IoT)终端节点。

#### 设备管理

为了能让这些设备一起工作,所有这些设备都需要进行验证、合理分配、调试和监控,并且在需要的时候进行更新。这些操作往往发生在单一设备供应商的专有系统中;要么就根本不会进行,而这样是最有风险的。但是整个业界正在向[标准化的设备管理模型][10]过渡,让物联网(IoT)设备之间可以互操作,也保证设备不会被孤立。

#### 物联网(IoT)通信标准和协议

当物联网(IoT)小设备和其他设备通信的时候,它们可以使用各种通信标准和协议,这其中许多都是为这些处理能力有限或缺少电源供应的设备专门定制的。你一定听说过其中的一些,尽管有一些设备使用的是 Wi-Fi 或者蓝牙,但是更多的设备是使用了专门为物联网(IoT)世界定制的标准。比如,ZigBee 就是一个低功耗、近距离传输的无线通信协议,而 MQTT(Message Queuing Telemetry Transport)是为连接在不可靠或者有延迟网络上的设备定制的一个发布/订阅(publish/subscribe)模式的消息传递协议。(参考网络世界的词汇表:[物联网(IoT)标准和协议][11]。)

物联网(IoT)也会受益于 5G 为蜂窝网络带来的高速度和高带宽,尽管这种使用场景会[滞后于普通的手机][12]。

### 物联网(IoT)、边缘计算(edge computing)和云(cloud)

[][13] 网络世界 / IDG

*边缘计算如何使物联网(IoT)成为可能。*

对于许多物联网(IoT)系统来说,大量的数据会以极快的速度涌来,这种情况催生了一个新的科技领域:[边缘计算(edge computing)][14],它由放置在物联网(IoT)设备附近、处理来自这些设备的数据的设备组成。这些机器会处理这些数据,并且只将相关的材料发送到一个更集中的系统进行分析。比如,假设在一个有几十个物联网(IoT)安防摄像头的网络中,边缘计算会直接分析传入的视频,而且只有当其中一个摄像头检测到有物体移动的时候,才向大楼的安全操作中心(SoC)发出警报,而不会一下子将所有的在线数据流全部发送到 SoC。

一旦这些数据已经被处理过了,它们又去哪里了呢?好吧,它也许会被送到你的数据中心,但是更多情况下,它最终会进入云端。

对于物联网(IoT)这种间歇性或者不同步的数据来往场景来说,具有弹性的云计算是再适合不过的了。许多云计算巨头,包括[谷歌][15]、[微软][16]和[亚马逊][17],都会提供物联网(IoT)产品。

### 物联网(IoT)平台

云计算巨头们正在尝试出售的,不仅仅是存放传感器搜集的数据的地方。他们要提供的是可以协调物联网(IoT)系统中各种元素的完整平台方案,平台会将很多功能捆绑在一起。本质上,物联网(IoT)平台作为中间件,将物联网(IoT)设备和边缘网关与处理物联网(IoT)数据的应用程序连接起来。也就是说,每一个平台的厂商看上去都会对物联网(IoT)平台应该是什么这个问题有一些稍微不同的解释,这样就能更好地[与其他竞争者拉开差距][18]。

### 物联网(IoT)和数据

正如前面所提到的,所有那些物联网(IoT)设备收集了泽字节(zettabyte)数量级的数据,这些数据通过边缘网关被发送到平台上进行处理。在很多情况下,这些数据就是部署物联网(IoT)的首要原因。通过从现实世界中的传感器搜集来的数据,各种组织就可以实时地作出灵活的决定。

例如,Oracle 公司[假设了一个场景][19]:当人们在主题公园的时候,会被鼓励下载一个可以提供公园信息的应用。同时,这个程序会将 GPS 信号发回到公园的管理部门来帮助他们预测排队时间。有了这些信息,公园就可以在短期内(比如通过增加员工数量来提高一些景点的容量)和长期内(通过了解哪些设施最受欢迎,哪些最不受欢迎)采取行动。

这些决定完全可以在没有人工干预的情况下作出。比如,从化工厂管道中的压力传感器收集的数据可以通过边缘设备的软件进行分析,从而发现管道破裂的威胁,这样的信息可以触发关闭阀门从而避免泄漏的信号。

### 物联网(IoT)和大数据分析

主题公园的例子可以让你很容易理解,但是和许多现实世界中物联网(IoT)收集数据的操作相比,就显得小菜一碟了。许多大数据操作都会使用到从物联网(IoT)设备收集的信息,然后与其他数据关联,这样就可以分析预测人类的行为。_Software Advice_ 给出了[一些例子][20],其中包括由 Birst 提供的一项服务,该服务将从联网的咖啡机中收集的咖啡冲泡信息与社交媒体上发布的帖子进行匹配,看看顾客是否在网上谈论咖啡品牌。

另一个最近才发生的戏剧性的例子是,X-Mode 发布了一张基于位置追踪数据的地图,地图上显示了在 2020 年 3 月春假的时候,正当新冠病毒在美国加速传播之际,人们在劳德代尔堡(Ft. Lauderdale)聚会完[最终都去了哪里][21]。这张地图令人震撼,不仅仅是因为它显示出病毒可能的扩散方向,更是因为它说明了物联网(IoT)设备可以多么密切地追踪我们。(更多关于物联网(IoT)和分析的信息,请点击[此处][22]。)

### 物联网(IoT)数据和 AI

物联网(IoT)设备能够收集的数据量远远大于任何人类能够以有效方式处理的数据量,而且这肯定也无法实时完成。我们已经看到,来自物联网(IoT)终端的原始数据需要边缘计算设备去进行解释。此外,还需要检测和处理可能[完全错误的数据][23]。

许多物联网(IoT)供应商也同时提供机器学习和人工智能的功能,可以用来理解收集来的数据。比如,赢得了 Jeopardy! 比赛的 IBM Watson 平台,就可以在[物联网(IoT)数据集上进行训练][24],这样就可以在预测性维护领域产生有用的结果。比如说,分析来自无人机的数据,可以区分桥梁上轻微的损坏和需要重视的裂缝。同时,ARM 也在研发[低功耗芯片][25],它可以在物联网(IoT)终端上提供 AI 能力。

### 物联网(IoT)和商业

在商业领域,物联网(IoT)的用途包括跟踪客户、库存和重要部件的状态。[IoT for All][26] 列举了四个已经被物联网(IoT)改变的行业:

* **石油和天然气**:与人工干预相比,物联网传感器可以更好地监测孤立的钻井场地。
* **农业**:通过物联网(IoT)传感器获得的田间作物的数据,可以用来提高产量。
* **采暖通风**:制造商可以监控全国各地的气候控制系统。
* **实体零售**:当顾客在商店的某一部分停留的时候,可以向他们的手机发送优惠信息,从而进行精准营销。

更普遍的情况是,企业正在寻找能够在[四个领域][27]提供帮助的物联网(IoT)解决方案:能源使用、资产跟踪、安全领域和客户体验。
--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3207535/what-is-iot-the-internet-of-things-explained.html

作者:[Josh Fruhlinger][a]
选题:[lujun9972][b]
译者:[Yufei-Yan](https://github.com/Yufei-Yan)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Josh-Fruhlinger/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/category/internet-of-things/
[2]: https://priceonomics.com/the-iot-data-explosion-how-big-is-the-iot-data/
[3]: https://www.machinedesign.com/automation-iiot/article/21836968/iot-started-with-a-vending-machine
[4]: https://www.visioncritical.com/blog/kevin-ashton-internet-of-things
[5]: https://www.networkworld.com/article/2319384/rfid-readers-route-tag-traffic.html
[6]: https://www.networkworld.com/article/3338106/can-iot-networking-drive-adoption-of-ipv6.html
[7]: https://images.idgesg.net/images/article/2020/05/nw_how_iot_works_diagram-100840757-orig.jpg
[8]: https://www.networkworld.com/article/3176091/10-killer-raspberry-pi-projects-collection-1.html
[9]: https://www.networkworld.com/article/3075360/arduino-targets-the-internet-of-things-with-primo-board.html
[10]: https://www.networkworld.com/article/3258812/the-future-of-iot-device-management.html
[11]: https://www.networkworld.com/article/3235124/internet-of-things-definitions-a-handy-guide-to-essential-iot-terms.html
[12]: https://www.networkworld.com/article/3291778/what-s-so-special-about-5g-and-iot.html
[13]: https://images.idgesg.net/images/article/2017/09/nw_how_edge_computing_works_diagram_1400x1717-100736111-orig.jpg
[14]: https://www.networkworld.com/article/3224893/what-is-edge-computing-and-how-it-s-changing-the-network.html
[15]: https://cloud.google.com/solutions/iot
[16]: https://azure.microsoft.com/en-us/overview/iot/
[17]: https://aws.amazon.com/iot/
[18]: https://www.networkworld.com/article/3336166/why-are-iot-platforms-so-darn-confusing.html
[19]: https://blogs.oracle.com/bigdata/how-big-data-powers-the-internet-of-things
[20]: https://www.softwareadvice.com/resources/iot-data-analytics-use-cases/
[21]: https://www.cnn.com/2020/04/04/tech/location-tracking-florida-coronavirus/index.html
[22]: https://www.networkworld.com/article/3311919/iot-analytics-guide-what-to-expect-from-internet-of-things-data.html
[23]: https://www.networkworld.com/article/3396230/when-iot-systems-fail-the-risk-of-having-bad-iot-data.html
[24]: https://www.networkworld.com/article/3449243/watson-iot-chief-ai-can-broaden-iot-services.html
[25]: https://www.networkworld.com/article/3532094/ai-everywhere-iot-chips-coming-from-arm.html
[26]: https://www.iotforall.com/4-unlikely-industries-iot-changing/
[27]: https://www.networkworld.com/article/3396128/the-state-of-enterprise-iot-companies-want-solutions-for-these-4-areas.html
@ -0,0 +1,95 @@

[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Btrfs to be the Default Filesystem on Fedora? Fedora 33 Starts Testing Btrfs Switch)
[#]: via: (https://itsfoss.com/btrfs-default-fedora/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

Btrfs 将成为 Fedora 上的默认文件系统?Fedora 33 开始测试切换到 Btrfs
======

尽管距离 Fedora 的下一个稳定版本([Fedora 33][1])还有几个月的时间,但已经有一些值得关注的变化。

在所有其他 [Fedora 33 已接受的系统级更改][1]中,最有趣的提议是将 Btrfs 作为桌面版的默认文件系统。

这是 Fedora 对该提案的说明:

> 对于 Fedora 的笔记本电脑和工作站安装,我们希望以透明的方式向用户提供文件系统功能。我们希望添加新功能,同时减少处理磁盘空间不足之类的情况所需的专业知识。Btrfs 的设计理念非常适合这个角色,让我们将其设为默认值吧。

值得注意的是,到目前为止,这还不是系统级的更改,并且需要在[测试日][2](**2020 年 7 月 8 日**)进行测试。

那么,为什么 Fedora 提出这一更改?这会有什么用么?这是糟糕的举动吗?对 Fedora 的发行有何影响?让我们在这里谈论一下。

![][3]

### 它会影响哪些 Fedora 版本?

根据提议,如果测试成功,那么 Fedora 33 的所有桌面版本、定制版(spin)和实验室版(lab)都可能采用此更改。

因此,你可以期待[工作站版本][4]将 Btrfs 作为 Fedora 33 上的默认文件系统。

### 实施此更改的潜在好处

为了改进 Fedora 在笔记本和工作站上的使用体验,Btrfs 文件系统提供了一些好处。

尽管 Fedora 33 尚未正式接受此更改,但我先说一下使用 Btrfs 作为默认文件系统的优点:

* 延长存储硬件的使用寿命
* 提供一个简单的方案来解决用户耗尽根目录或主目录上的可用空间的情况
* 不易损坏数据,且易于恢复
* 提供更好的文件系统大小调整功能
* 通过强制 I/O 限制来确保桌面在高内存压力下的响应能力
* 使复杂的存储设置易于管理

如果你感到好奇,你可能想更深入地了解 [Btrfs][5] 及其总体优点。

不要忘记,Btrfs 已经是受支持的选项,只是它还不是默认的文件系统。

但是总的来说,如果实施得当,引入 Btrfs 作为 Fedora 33 上的默认文件系统可能会是一个有用的更改。

### Red Hat Enterprise Linux 会实现它吗?

很明显,Fedora 被认为是 [Red Hat Enterprise Linux][6] 的前沿版本。

因此,如果 Fedora 拒绝此更改,那么 Red Hat 就不会实施它。另一方面,如果你希望 RHEL 使用 Btrfs,那么 Fedora 应该首先同意此更改。

为了让你更加清楚,Fedora 对其进行了详细说明:

> Red Hat 在许多方面都很好地支持了 Fedora。但是 Fedora 已经与上游紧密合作,并依赖上游,这将是其中之一。这是该提案的重要考虑因素。社区有责任确保它得到支持。如果 Fedora 拒绝,那么 Red Hat 将永远不会支持 Btrfs。Fedora 必然需要走在前面,并提出令人信服的理由,证明它解决的问题比替代方案多。功能所有者相信它,这是毫无疑问的。

另外,值得注意的是,如果你想使用 Btrfs,你应该看看 [openSUSE][7] 和 [SUSE Linux Enterprise][8]。

### 总结

尽管这个更改看起来不会影响任何升级或兼容性,你也可以在 [Fedora 项目的 Wiki 页面][9]中找到有关 Btrfs 更改的更多信息。

你对针对 Fedora 33 发行版的这一更改有何看法?你是否希望将 Btrfs 文件系统作为默认文件系统?

请在下面的评论中告诉我你的想法!

--------------------------------------------------------------------------------

via: https://itsfoss.com/btrfs-default-fedora/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://fedoraproject.org/wiki/Releases/33/ChangeSet
[2]: https://fedoraproject.org/wiki/Test_Day:2020-07-08_Btrfs_default?rd=Test_Day:F33_btrfs_by_default_2020-07-08
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/07/btrfs-default-fedora.png?ssl=1
[4]: https://getfedora.org/en/workstation/
[5]: https://en.wikipedia.org/wiki/Btrfs
[6]: https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux
[7]: https://www.opensuse.org
[8]: https://www.suse.com
[9]: https://fedoraproject.org/wiki/Changes/BtrfsByDefault
@ -0,0 +1,88 @@

[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What you need to know about automation testing in CI/CD)
[#]: via: (https://opensource.com/article/20/7/automation-testing-cicd)
[#]: author: (Taz Brown https://opensource.com/users/heronthecli)

你需要了解的 CI/CD 中的自动化测试知识
======

持续集成和持续交付是靠测试来支撑的。下面介绍如何做。

![Net catching 1s and 0s or data in the clouds][1]

> “如果一切似乎都在控制之中,那只能说明你跑得还不够快。” —Mario Andretti

测试自动化意味着在软件开发过程中持续专注于尽早地检测缺陷、错误和 bug。这是使用那些以质量为最高价值的工具来完成的,这些工具旨在_确保_质量,而不仅仅是追求质量。

持续集成/持续交付(CI/CD)解决方案(也称为 DevOps 管道)最引人注目的功能之一,是可以更频繁地进行测试,而又不会给开发人员或运维人员增加更多的手动工作。让我们谈谈为什么这很重要。

### 为什么要在 CI/CD 中自动化测试?

敏捷团队要更快地迭代,以更高的速度交付软件和客户满意度,而这些压力可能会危及质量。全球竞争造成了对缺陷的_低容忍度_,同时也给敏捷团队带来了_更快迭代_交付软件的压力。减轻这种压力的行业解决方案是什么?是 [DevOps][2]。

DevOps 是一个有着许多定义的大概念,但是对 DevOps 成功至关重要的一项技术是 CI/CD。在软件开发流程中设计一个持续的改进循环,可以带来新的测试机会。

### 这对测试人员意味着什么?

对于测试人员,这通常意味着他们必须:

* 更早且更频繁地进行测试(使用自动化)
* 持续测试“真实世界”的工作流(自动和手动)

更具体地说,任何形式的测试(无论是由编写代码的开发人员运行,还是由质量保证工程师团队设计)的作用,都是利用 CI/CD 基础架构在快速推进的同时提高质量。

### 测试人员还需要做什么?

具体来说,测试人员负责:

* 测试新的和现有的软件应用
* 通过根据系统需求评估软件来验证功能
* 利用自动化测试工具来开发和维护可重复使用的自动化测试
* 与 scrum 团队的所有成员合作,了解正在开发的功能以及实施的技术设计,以设计和开发准确、高质量的自动化测试
* 分析记录的用户需求,并针对中度到高度复杂的软件或 IT 系统创建或协助设计测试计划
* 开发自动化测试,并与职能团队一起审查和评估测试方案
* 与技术团队合作,确定在开发环境中自动化测试的正确方法
* 与团队合作,通过自动化测试来理解和解决软件问题,并对有关修改或增强的建议作出回应
* 参与需求梳理、估算和其他敏捷 scrum 仪式
* 协助定义标准和流程,以支持测试活动和材料(例如脚本、配置、程序、工具、计划和结果)

测试是一项艰巨的工作,但这是有效构建软件的重要组成部分。

### 哪些持续测试很重要?

你可以使用多种测试。不同的类型并不是相互割裂的领域;相反,它们是表达测试的不同方式。比较各种测试类型不太重要,更重要的是对每一种测试类型都有覆盖。

* **功能测试**:确保软件具有其需求所要求的功能
* **单元测试**:独立测试软件的较小单元/组件,以检查其功能
* **负载测试**:测试软件在重负载或高使用率期间的性能
* **压力测试**:确定软件承受压力(最大负载)时的断点
* **集成测试**:测试一组组合或集成在一起的组件的输出
* **回归测试**:在修改了任意组件(无论多么小)之后,测试整个应用的功能

### 总结

任何包含持续测试的软件开发流程,都会朝着建立关键反馈环路的方向发展,从而变得快速并构建出有效的软件。最重要的是,这种实践将质量内置到了 CI/CD 管道中,并意味着理解了在软件开发生命周期中提高速度与减少风险和浪费之间的联系。

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/7/automation-testing-cicd

作者:[Taz Brown][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/heronthecli
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_analytics_cloud.png?itok=eE4uIoaB (Net catching 1s and 0s or data in the clouds)
[2]: https://opensource.com/resources/devops
Loading…
Reference in New Issue
Block a user