mirror of https://github.com/LCTT/TranslateProject.git
synced 2025-03-21 02:10:11 +08:00

Merge remote-tracking branch 'LCTT/master'

This commit is contained in: commit cb4db0dbe9
@@ -1,8 +1,8 @@
 [#]: collector: (lujun9972)
 [#]: translator: (wxy)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11487-1.html)
 [#]: subject: (Installation Guide of Manjaro 18.1 (KDE Edition) with Screenshots)
 [#]: via: (https://www.linuxtechi.com/install-manjaro-18-1-kde-edition-screenshots/)
 [#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
@@ -35,15 +35,15 @@ Manjaro 18.1(KDE)安装图解

 #### 步骤 1) 下载 Manjaro 18.1 ISO

-在安装之前,你需要从位于 [这里] [1] 的官方下载页面下载 Manjaro 18.1 的最新副本。由于我们这里介绍的是 KDE 版本,因此我们选择 KDE 版本。但是对于所有桌面环境(包括 Xfce、KDE 和 Gnome 版本),安装过程都是相同的。
+在安装之前,你需要从位于 [这里][1] 的官方下载页面下载 Manjaro 18.1 的最新副本。由于我们这里介绍的是 KDE 版本,因此我们选择 KDE 版本。但是对于所有桌面环境(包括 Xfce、KDE 和 Gnome 版本),安装过程都是相同的。

 #### 步骤 2) 创建 USB 启动盘

-从 Manjaro 下载页面成功下载 ISO 文件后,就可以创建 USB 磁盘了。将下载的 ISO 文件复制到 USB 磁盘中,然后创建可引导磁盘。确保将你的引导设置更改为使用 USB 引导并重新启动系统。
+从 Manjaro 下载页面成功下载 ISO 文件后,就可以创建 USB 磁盘了。将下载的 ISO 文件复制到 USB 磁盘中,然后创建可引导磁盘。确保将你的引导设置更改为使用 USB 引导,并重新启动系统。

 #### 步骤 3) Manjaro Live 版安装环境

-系统重新启动时,它将自动检测到 USB 驱动器并开始启动进入 Manjaro Live 版安装屏幕。
+系统重新启动时,它将自动检测到 USB 驱动器,并开始启动进入 Manjaro Live 版安装屏幕。

 ![Boot-Manjaro-18-1-kde-installation][3]
@@ -77,14 +77,14 @@ Manjaro 18.1(KDE)安装图解

 #### 步骤 8) 选择分区类型

-这是安装过程中非常关键的一步。 它将允许你选择:
+这是安装过程中非常关键的一步。 它将允许你选择分区方式:

   * 擦除磁盘
   * 手动分区
   * 并存安装
   * 替换分区

-如果要在 VM(虚拟机)中安装 Manjaro 18.1,则将看不到最后两个选项。
+如果在 VM(虚拟机)中安装 Manjaro 18.1,则将看不到最后两个选项。

 如果你不熟悉 Manjaro Linux,那么我建议你使用第一个选项(<ruby>擦除磁盘<rt>Erase Disk</rt></ruby>),它将为你自动创建所需的分区。如果要创建自定义分区,则选择第二个选项“<ruby>手动分区<rt>Manual Partitioning</rt></ruby>”,顾名思义,它将允许我们创建自己的自定义分区。

@@ -102,7 +102,7 @@ Manjaro 18.1(KDE)安装图解

   * `/opt` – 4 GB(ext4)
   * <ruby>交换分区<rt>Swap</rt></ruby> – 2 GB

-当我们在上方窗口中单击“<ruby>下一步<rt>Next</rt></ruby>”时,将显示以下屏幕,选择创建“<ruby>新分区表<rt>new partition table</rt></ruby>”:
+当我们在上方窗口中单击“<ruby>下一步<rt>Next</rt></ruby>”时,将显示以下屏幕,选择“<ruby>新建分区表<rt>new partition table</rt></ruby>”:

 ![Create-Partition-Table-Manjaro18-1-Installation][9]

@@ -110,10 +110,6 @@ Manjaro 18.1(KDE)安装图解

-现在选择可用空间,然后单击“<ruby>创建<rt>create</rt></ruby>”以将第一个分区设置为大小为 2 GB 的 `/boot`,
-
-点击“<ruby>确定<rt>OK</rt></ruby>”。
-
+现在选择可用空间,然后单击“<ruby>创建<rt>create</rt></ruby>”以将第一个分区设置为大小为 2 GB 的 `/boot`:

 ![boot-partition-manjaro-18-1-installation][10]

 单击“<ruby>确定<rt>OK</rt></ruby>”以继续操作,在下一个窗口中再次选择可用空间,然后单击“<ruby>创建<rt>create</rt></ruby>”以将第二个分区设置为 `/`,大小为 10 GB:

@@ -174,7 +170,7 @@ Manjaro 18.1(KDE)安装图解

 ![Login-screen-after-manjaro-18-1-installation][22]

-点击“<ruby>登录<rt>Login</rt></ruby>。
+点击“<ruby>登录<rt>Login</rt></ruby>”。

 ![KDE-Desktop-Screen-Manjaro-18-1][23]

@@ -187,7 +183,7 @@ via: https://www.linuxtechi.com/install-manjaro-18-1-kde-edition-screenshots/

 作者:[Pradeep Kumar][a]
 选题:[lujun9972][b]
 译者:[wxy](https://github.com/wxy)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)

 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@@ -1,8 +1,8 @@
 [#]: collector: (lujun9972)
 [#]: translator: (geekpi)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11490-1.html)
 [#]: subject: (Bash Script to Delete Files/Folders Older Than “X” Days in Linux)
 [#]: via: (https://www.2daygeek.com/bash-script-to-delete-files-folders-older-than-x-days-in-linux/)
 [#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
@@ -10,29 +10,21 @@
 在 Linux 中使用 Bash 脚本删除早于 “X” 天的文件/文件夹
 ======

-**[磁盘使用率][1]**监控工具能够在达到给定阈值时提醒我们。
-
-但它们无法自行解决**[磁盘使用率][2]**问题。
-
-需要手动干预才能解决该问题。
-
-如果你想完全自动化此类操作,你会做什么。
-
-是的,可以使用 bash 脚本来完成。
+[磁盘使用率][1] 监控工具能够在达到给定阈值时提醒我们。但它们无法自行解决 [磁盘使用率][2] 问题。需要手动干预才能解决该问题。
+
+如果你想完全自动化此类操作,你会做什么。是的,可以使用 bash 脚本来完成。

-该脚本可防止来自**[监控工具][3]**的警报,因为我们会在填满磁盘空间之前删除旧的日志文件。
+该脚本可防止来自 [监控工具][3] 的警报,因为我们会在填满磁盘空间之前删除旧的日志文件。

 我们过去做了很多 shell 脚本。如果要查看,请进入下面的链接。

-  * **[如何使用 shell 脚本自动化日常活动?][4]**
+  * [如何使用 shell 脚本自动化日常活动?][4]

 我在本文中添加了两个 bash 脚本,它们有助于清除旧日志。

 ### 1)在 Linux 中删除早于 “X” 天的文件夹的 Bash 脚本

-我们有一个名为 **“/var/log/app/”** 的文件夹,其中包含 15 天的日志,我们将删除早于 10 天的文件夹。
+我们有一个名为 `/var/log/app/` 的文件夹,其中包含 15 天的日志,我们将删除早于 10 天的文件夹。

 ```
 $ ls -lh /var/log/app/
@@ -56,7 +48,7 @@ drwxrw-rw- 3 root root 24K Oct 15 23:52 app_log.15

 该脚本将删除早于 10 天的文件夹,并通过邮件发送文件夹列表。

-你可以根据需要修改 **“-mtime X”** 的值。另外,请替换你的电子邮箱,而不是用我们的。
+你可以根据需要修改 `-mtime X` 的值。另外,请替换你的电子邮箱,而不是用我们的。

 ```
 # /opt/script/delete-old-folders.sh
@@ -81,7 +73,7 @@ rm $MESSAGE /tmp/folder.out
 fi
 ```
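
上面的 diff 只显示了脚本的首尾几行,主体部分被折叠省略了。下面是一个按文中描述的逻辑(找出早于 10 天的目录、邮件发送列表后删除)补全的示意脚本;其中目录路径、邮箱地址和临时文件名均为示例假设,并非原文脚本本身:

```
#!/bin/bash
# 示意脚本:列出 /var/log/app 下修改时间早于 10 天的子目录,
# 先把列表邮件发出,再删除这些目录(路径与邮箱均为示例假设)。
MESSAGE="/tmp/folder.out"

find /var/log/app/ -mindepth 1 -maxdepth 1 -type d -mtime +10 > "$MESSAGE"

if [ -s "$MESSAGE" ]; then
    mail -s "Folders older than 10 days deleted" admin@example.com < "$MESSAGE"
    xargs rm -rf < "$MESSAGE"
fi

rm -f "$MESSAGE"
```

将这样的脚本交给 cron 定时执行,即可实现全自动清理。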

-给 **“delete-old-folders.sh”** 设置可执行权限。
+给 `delete-old-folders.sh` 设置可执行权限。

 ```
 # chmod +x /opt/script/delete-old-folders.sh
@@ -109,15 +101,13 @@ Oct 15 /var/log/app/app_log.15

 ### 2)在 Linux 中删除早于 “X” 天的文件的 Bash 脚本

-我们有一个名为 **“/var/log/apache/”** 的文件夹,其中包含15天的日志,我们将删除 10 天前的文件。
+我们有一个名为 `/var/log/apache/` 的文件夹,其中包含 15 天的日志,我们将删除 10 天前的文件。

 以下文章与该主题相关,因此你可能有兴趣阅读。

-  * **[如何在 Linux 中查找和删除早于 “X” 天和 “X” 小时的文件?][6]**
-  * **[如何在 Linux 中查找最近修改的文件/文件夹][7]**
-  * **[如何在 Linux 中自动删除或清理 /tmp 文件夹内容?][8]**
+  * [如何在 Linux 中查找和删除早于 “X” 天和 “X” 小时的文件?][6]
+  * [如何在 Linux 中查找最近修改的文件/文件夹][7]
+  * [如何在 Linux 中自动删除或清理 /tmp 文件夹内容?][8]

 ```
 # ls -lh /var/log/apache/
@@ -141,7 +131,7 @@ Oct 15 /var/log/app/app_log.15

 该脚本将删除 10 天前的文件并通过邮件发送文件夹列表。

-你可以根据需要修改 **“-mtime X”** 的值。另外,请替换你的电子邮箱,而不是用我们的。
+你可以根据需要修改 `-mtime X` 的值。另外,请替换你的电子邮箱,而不是用我们的。

 ```
 # /opt/script/delete-old-files.sh
@@ -166,7 +156,7 @@ rm $MESSAGE /tmp/file.out
 fi
 ```
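
这个脚本的主体在 diff 中同样被折叠了。它与上面的目录版本几乎相同,核心区别是 find 改用 `-type f` 匹配文件;以下仍是按文中逻辑补全的示意(路径与邮箱为示例假设):

```
#!/bin/bash
# 示意脚本:删除 /var/log/apache 下修改时间早于 10 天的文件,
# 先邮件发送列表,再删除(路径与邮箱均为示例假设)。
MESSAGE="/tmp/file.out"

find /var/log/apache/ -type f -mtime +10 > "$MESSAGE"

if [ -s "$MESSAGE" ]; then
    mail -s "Files older than 10 days deleted" admin@example.com < "$MESSAGE"
    xargs rm -f < "$MESSAGE"
fi

rm -f "$MESSAGE"
```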

-给 **“delete-old-files.sh”** 设置可执行权限。
+给 `delete-old-files.sh` 设置可执行权限。

 ```
 # chmod +x /opt/script/delete-old-files.sh
@@ -199,7 +189,7 @@ via: https://www.2daygeek.com/bash-script-to-delete-files-folders-older-than-x-days-in-linux/

 作者:[Magesh Maruthamuthu][a]
 选题:[lujun9972][b]
 译者:[geekpi](https://github.com/geekpi)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)

 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

@@ -0,0 +1,70 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Kubernetes networking, OpenStack Train, and more industry trends)
[#]: via: (https://opensource.com/article/19/10/kubernetes-openstack-and-more-industry-trends)
[#]: author: (Tim Hildred https://opensource.com/users/thildred)

Kubernetes networking, OpenStack Train, and more industry trends
======
A weekly look at open source community and industry trends.

![Person standing in front of a giant computer screen with numbers, data][1]

As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.

## [A look at the most exciting features in OpenStack Train][2]

> But given all the technology goodies ([you can see the release highlights here][3]) that the Train release has to offer, you may be curious about the features that we at Red Hat believe are among the top capabilities that will benefit our telecommunications and enterprise customers and their use cases. Here's an overview of the features we are most excited about this release.

**The impact**: OpenStack to me is like Shia LaBeouf: it reached peak hype a couple of years ago and then continued turning out good work. The Train release looks like yet another pretty incredible drop of innovation.

## [Building Kubernetes Operators in an Ansible-native way][4]

> Operators simplify management of complex applications on Kubernetes. They are usually written in Go and require expertise with the internals of Kubernetes. But, there’s an alternative to that with a lower barrier to entry. Ansible is a first-class citizen in the Operator SDK. Using Ansible frees up application engineers, maximizes time to automate and orchestrate your applications, and doing it across new & existing platforms with one simple language. Here we see how.

**The impact**: This is like finding out you can make pretty good ice cream with a blender and frozen bananas: Ansible (which is generally thought of as being pretty simple to pick up) lets you do some pretty impressive Operator magic way easier than you thought you could.

## [Kubernetes networking: Behind the scenes][5]

> While there are very good resources around this topic (links [here][6]), I couldn’t find a single example that connects all of the dots with commands outputs that network engineers love and hate, showing what is actually happening behind the scenes. So, I decided to curate this information from a number of different sources to hopefully help you better understand how things are tied together.

**The impact**: An accessible, well-written take on a complicated topic (with pictures). Guaranteed to make Kube networking 10% less confusing.

## [Securing the container supply chain][7]

> With the emergence of containers, Software as a Service and Functions as a Service, the focus is on consuming existing services, functions and container images in the race to provide new value. Scott McCarty, Principal Product Manager, Containers at [Red Hat][8], says that focus has both advantages and disadvantages. “It allows us to focus our energy on writing new application code that is specific to our needs, while shifting the concern for the underlying infrastructure to someone else,” says McCarty. “Containers are in a sweet spot providing enough control, but offloading a lot of tedious infrastructure work.” But containers can also create disadvantages related to security.

**The impact**: I sit amongst a group of ~10 security people, and can safely say that it takes a certain disposition to want to think about software security all day. When you stare into the abyss for long enough, it stares back into you. If you are a software developer who is not so disposed, please take Scott's advice and make sure your suppliers are.

## [Fedora at 15: Why Matthew Miller sees a bright future for the Linux distribution][9]

> In a wide-ranging interview with TechRepublic, Fedora project leader Matthew Miller discussed lessons learned from the past, popular adoption and competing standards for software containers, potential changes coming to Fedora, as well as hot-button topics, including systemd.

**The impact**: What I like about the Fedora project is its clarity; the project knows what it stands for. People like Matt are why.

## _I hope you enjoyed this list of what stood out to me from last week and come back next Monday for more open source community, market, and industry trends._

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/10/kubernetes-openstack-and-more-industry-trends

作者:[Tim Hildred][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/thildred
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://www.redhat.com/en/blog/look-most-exciting-features-openstack-train
[3]: https://releases.openstack.org/train/highlights.html
[4]: https://www.cncf.io/webinars/building-kubernetes-operators-in-an-ansible-native-way/
[5]: https://itnext.io/kubernetes-networking-behind-the-scenes-39a1ab1792bb
[6]: https://github.com/nleiva/kubernetes-networking-links
[7]: https://www.devprojournal.com/technology-trends/open-source/securing-the-container-supply-chain/
[8]: https://www.redhat.com/en
[9]: https://www.techrepublic.com/article/fedora-at-15-why-matthew-miller-sees-a-bright-future-for-the-linux-distribution/
@@ -1,5 +1,5 @@
 [#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: (Morisun029)
 [#]: reviewer: ( )
 [#]: publisher: ( )
 [#]: url: ( )
@@ -0,0 +1,71 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Enterprises find new uses for mainframes: blockchain and containerized apps)
[#]: via: (https://www.networkworld.com/article/3446140/enterprises-find-a-new-use-for-mainframes-blockchain-and-containerized-apps.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

Enterprises find new uses for mainframes: blockchain and containerized apps
======
Blockchain and containerized microservices can benefit from the mainframe’s integrated security and massive parallelization capabilities.

News flash: Mainframes still aren't dead.

On the contrary, mainframe use is increasing, and not to run COBOL, either. Mainframes are being eyed for modern technologies including blockchain and containers.

A survey of 153 IT decision makers found that 50% of organizations will continue with the mainframe and increase its use over the next two years, while just 5% plan to decrease or remove mainframe activity. The survey was conducted by Forrester Research and commissioned by Ensono, a hybrid IT services provider, and Wipro Limited, a global IT consulting services company.

**READ MORE:** [Data center workloads become more complex despite promises to the contrary][2]

That kind of commitment to the mainframe is a bit of a surprise, given the trend to reduce or eliminate the on-premises data center footprint and move to the cloud. However, enterprises are now taking a hybrid approach to their infrastructure, migrating some applications to the cloud while keeping the most business-critical applications on-premises and on mainframes.

Forrester's research found mainframes continue to be considered a critical piece of infrastructure for the modern business – and not solely to run old technologies. Of course, traditional enterprise applications and workloads remain firmly on the mainframe, with 48% of ERP apps, 45% of finance and accounting apps, 44% of HR management apps, and 43% of ECM apps staying on mainframes.

But that's not all. Among survey respondents, 25% said that mobile sites and applications were being put onto the mainframe, and 27% said they're running new blockchain initiatives and containerized applications. Blockchain and containerized applications benefit from the integrated security and massive parallelization inherent in a mainframe, Forrester said in its report.

"We believe this research challenges popular opinion that mainframe is for legacy," said Brian Klingbeil, executive vice president of technology and strategy at Ensono, in a statement. "Mainframe modernization is giving enterprises not only the ability to continue to run their legacy applications, but also allows them to embrace new technologies such as containerized microservices, blockchain and mobile applications."

Wipro's Kiran Desai, senior vice president and global head of cloud and infrastructure services, added that enterprises should adopt two strategies to take full advantage of mainframes. The first is to refactor applications to take advantage of cloud, while the second is to adopt DevOps to modernize mainframes.

**Learn more about mixing cloud and on-premises workloads**

  * [5 times when cloud repatriation makes sense][3]
  * [Network monitoring in the hybrid cloud/multi-cloud era][4]
  * [Data center workloads become more complex][2]
  * [The benefits of mixing private and public cloud services][5]

Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3446140/enterprises-find-a-new-use-for-mainframes-blockchain-and-containerized-apps.html

作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[2]: https://www.networkworld.com/article/3400086/data-center-workloads-become-more-complex-despite-promises-to-the-contrary.html
[3]: https://www.networkworld.com/article/3388032/5-times-when-cloud-repatriation-makes-sense.html
[4]: https://www.networkworld.com/article/3398482/network-monitoring-in-the-hybrid-cloudmulti-cloud-era.html
[5]: https://www.networkworld.com/article/3233132/what-is-hybrid-cloud-computing.html
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world
@@ -0,0 +1,84 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Tokalabs Software Defined Labs automates configuration of lab test-beds)
[#]: via: (https://www.networkworld.com/article/3446816/tokalabs-software-defined-labs-automates-configuration-of-lab-test-beds.html)
[#]: author: (Linda Musthaler https://www.networkworld.com/author/Linda-Musthaler/)

Tokalabs Software Defined Labs automates configuration of lab test-beds
======
The primary challenge of running a test lab is the amount of time it takes to provision the test beds within the lab. This software defined lab platform automates the setup and configuration process so that tests can be accelerated.

Network environments have become so complex that companies such as systems integrators, equipment manufacturers and enterprise organizations feel compelled to test their configurations and equipment in lab environments before deployment. Performance test labs are used extensively for quality, proof of concept, customer support, and technical sales initiatives. Labs are the perfect place to see how well something performs before it’s put into a production environment.

The primary challenge of running a test lab is the amount of time it takes to provision the test environments. A network lab infrastructure might include switches, routers, servers, [virtual machines][1] running on various server clusters, security services, cloud resources, software and so on. It takes considerable time to wire the configurations, physically build the desired test beds, login to each individual device and load the proper software configurations. Quite often, lab staffers spend more time on setup than they do on conducting actual tests.

[[Get regularly scheduled insights by signing up for Network World newsletters.]][2]

This is a problem that the networking company Allied Telesis was having in building test beds for its own development engineers. The company developed an application for internal use that would ease the setup and reconfiguration problem. The equipment could be physically cabled once and then configured and controlled centrally through software. The application worked so well that Allied Telesis spun it off for others to use, and this is the origin of [Tokalabs Software Defined Labs][3] (SDL) technology.

Tokalabs provides a platform that enables engineers to manage a lab-network infrastructure and create sandboxes or topologies that can be used for R&D, product development and quality testing, customer support, sales demos, competitive benchmarking, driving proof of concept efforts, etc. There’s an automation sequencer built into the platform that allows users to automate test cases, sales demos, troubleshooting methods, image upgrades and the like.

The Tokalabs SDL controller is a virtual appliance that can be imported into any virtualization environment. Once installed, the customer can access the controller’s UI using a web browser. The controller has an auto-discovery mechanism that inventories everything within a specified range of IP addresses, including cloud resources.

Tokalabs probes the addresses to figure out what ports are open on them, what management types are supported, and the vendor information of the devices. This results in an inventory of hundreds of devices that are discovered by the SDL controller.

On the hardware side, lab engineers only need to cable and configure their lab devices once, which eliminates the cumbersome setup and tear-down processes. These devices are abstracted and managed centrally through the SDL controller, which maintains a centralized networking fabric. Lab engineers have full visibility of every physical and virtual device and every public and [private cloud][5] instance within their domain.

Engineers can use the Tokalabs SDL controller to dynamically create and reserve test-bed resources and then save them as a template for future use. Engineers also can automate and schedule test executions, and the controller will release the resources once the tests are done. The controller’s codeless automation feature means users don’t need to know how to write scripts to orchestrate and automate a pretty comprehensive configuration and test scenario. They can use the controller to automate sequences without writing code or instruct the controller to execute external scripts developed by an engineer.

The automation is helpful to set up a specific configuration quickly. For example, a customer-support engineer might need to replicate a scenario that one of its customers has in order to troubleshoot an issue. Using the controller’s automation feature, devices can be configured and loaded with specific firmware quickly to ease the setup process.

Tokalabs logs everything that transpires through its controller, so a lab administrator has oversight into how the equipment is being used or what types of tests are being created and executed. This helps with resource capacity planning, to ensure that there is enough equipment without having devices sit idle for too long.

One leader in cybersecurity became an early adopter of Tokalabs. This vendor has a test lab to conduct comparative benchmark tests against competitors’ products in order to close large deals and to confirm its product strengths and performance numbers for marketing materials.

Prior to using the Tokalabs SDL controller, engineering teams would physically cable the topologies, configure the devices and execute various benchmark tests. Then they would tear down that configuration and start all over again for every set of devices and firmware revisions.

Given that this is a multi-billion-dollar equipment manufacturer, there are a lot of new product releases and updates to existing products. That means there’s a heck of a lot of work for the engineers in the lab to test each product and compare it to competitors’ offerings. They can’t really afford the time spent configuring rather than testing, so they turned to Tokalabs’ technology to manage the lab infrastructure and to automate the configurations and scheduling of test executions. They chose this solution largely for the ease of setup and use.

Now, each engineer can create hundreds of reusable templates, thus eliminating the repetitive work of creating test beds, and also automate test scripts using the Tokalabs automation sequencer. Additionally, all their existing test scripts are available to use through the SDL controller. This has helped the team reduce its backlog and keep up with the product release cycles.

Beyond this use case for comparative benchmark tests, some of the other uses for Tokalabs SDL include:

  * Creating a portal for others to use lab resources; for example, for training purposes or for customers to test network environments prior to purchasing them
  * Doing sales demonstrations and customer PoCs in order to showcase a feature, an application, or even an entire configuration
  * Automating the bring-up of virtualized environments

Tokalabs claims to work closely with its customers to tailor the Software Defined Labs platform to specific use cases and customer needs.

Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3446816/tokalabs-software-defined-labs-automates-configuration-of-lab-test-beds.html

作者:[Linda Musthaler][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Linda-Musthaler/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3234795/what-is-virtualization-definition-virtual-machine-hypervisor.html
[2]: https://www.networkworld.com/newsletters/signup.html
[3]: https://tokalabs.com/
[5]: https://www.networkworld.com/article/2159885/cloud-computing-gartner-5-things-a-private-cloud-is-not.html
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world
@@ -1,74 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (DevSecOps pipelines and tools: What you need to know)
[#]: via: (https://opensource.com/article/19/10/devsecops-pipeline-and-tools)
[#]: author: (Sagar Nangare https://opensource.com/users/sagarnangare)

DevSecOps pipelines and tools: What you need to know
======
DevSecOps evolves DevOps to ensure security remains an essential part of the process.

![An intersection of pipes.][1]

DevOps is well-understood in the IT world by now, but it's not flawless. Imagine you have implemented all of the DevOps engineering practices in modern application delivery for a project. You've reached the end of the development pipeline—but a penetration testing team (internal or external) has detected a security flaw and come up with a report. Now you have to re-initiate all of your processes and ask developers to fix the flaw.

This is not terribly tedious in a DevOps-based software development lifecycle (SDLC) system—but it does consume time and affects the delivery schedule. If security were integrated from the start of the SDLC, you might have tracked down the glitch and eliminated it on the go. But pushing security to the end of the development pipeline, as in the above scenario, leads to a longer development lifecycle.

This is the reason for introducing DevSecOps, which consolidates the overall software delivery cycle in an automated way.

In modern DevOps methodologies, where containers are widely used by organizations to host applications, we see greater use of [Kubernetes][2] and [Istio][3]. However, these tools have their own vulnerabilities. For example, the Cloud Native Computing Foundation (CNCF) recently completed a [Kubernetes security audit][4] that identified several issues. All tools used in the DevOps pipeline need to undergo security checks while running in the pipeline, and DevSecOps pushes admins to monitor the tools' repositories for upgrades and patches.

### What is DevSecOps?

Like DevOps, DevSecOps is a mindset or a culture that developers and IT operations teams follow while developing and deploying software applications. It integrates active and automated security audits and penetration testing into agile application development.

To utilize [DevSecOps][5], you need to:

  * Introduce the concept of security right from the start of the SDLC to minimize vulnerabilities in software code.
  * Ensure everyone (including developers and IT operations teams) shares responsibility for following security practices in their tasks.
  * Integrate security controls, tools, and processes at the start of the DevOps workflow. These will enable automated security checks at each stage of software delivery.

DevOps has always been about including security—as well as quality assurance (QA), database administration, and everyone else—in the dev and release process. However, DevSecOps is an evolution of that process to ensure security is never forgotten as an essential part of the process.

### Understanding the DevSecOps pipeline

There are different stages in a typical DevOps pipeline; a typical SDLC process includes phases like Plan, Code, Build, Test, Release, and Deploy. In DevSecOps, specific security checks are applied in each phase (a minimal sketch of one such check follows this list).

  * **Plan:** Execute security analysis and create a test plan to determine scenarios for where, how, and when testing will be done.
  * **Code:** Deploy linting tools and Git controls to secure passwords and API keys.
  * **Build:** While building code for execution, incorporate static application security testing (SAST) tools to track down flaws in code before deploying to production. These tools are specific to programming languages.
  * **Test:** Use dynamic application security testing (DAST) tools to test your application while in runtime. These tools can detect errors associated with user authentication, authorization, SQL injection, and API-related endpoints.
  * **Release:** Just before releasing the application, employ security analysis tools to perform thorough penetration testing and vulnerability scanning.
  * **Deploy:** After completing the above tests in runtime, send a secure build to production for final deployment.
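
As an illustration of the “Code” stage, here is a minimal, hypothetical pre-commit gate in shell; the secret patterns and paths are assumptions for the sketch, not a prescribed toolchain:

```
#!/usr/bin/env bash
# Hypothetical pre-commit hook: block the commit if staged files
# appear to contain hard-coded credentials or API keys.
set -euo pipefail

pattern='(api[_-]?key|secret|password|token)[[:space:]]*[:=]'

# Files staged for this commit (added/copied/modified).
staged=$(git diff --cached --name-only --diff-filter=ACM)
if [ -z "$staged" ]; then
    exit 0
fi

# grep -I skips binary files; any match blocks the commit.
if echo "$staged" | xargs grep -nEI "$pattern"; then
    echo "Possible hard-coded secret found; commit blocked." >&2
    exit 1
fi

echo "No obvious secrets detected."
```

Saved as `.git/hooks/pre-commit` and made executable, this runs before every commit; a real pipeline would use a dedicated scanner, but the control point is the same.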

### DevSecOps tools

Tools are available for every phase of the SDLC. Some are commercial products, but most are open source. In my next article, I will talk more about the tools to use in different stages of the pipeline.

DevSecOps will play a more crucial role as we continue to see an increase in the complexity of enterprise security threats built on modern IT infrastructure. However, the DevSecOps pipeline will need to improve gradually over time, rather than relying on implementing all security changes at once; that incremental approach eliminates the risk of backtracking or a failed application delivery.
--------------------------------------------------------------------------------

via: https://opensource.com/article/19/10/devsecops-pipeline-and-tools

作者:[Sagar Nangare][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/sagarnangare
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW-Internet_construction_9401467_520x292_0512_dc.png?itok=RPkPPtDe (An intersection of pipes.)
[2]: https://opensource.com/resources/what-is-kubernetes
[3]: https://opensource.com/article/18/9/what-istio
[4]: https://www.cncf.io/blog/2019/08/06/open-sourcing-the-kubernetes-security-audit/
[5]: https://resources.whitesourcesoftware.com/blog-whitesource/devsecops
@@ -1,98 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Object-Oriented Programming and Essential State)
[#]: via: (https://theartofmachinery.com/2019/10/13/oop_and_essential_state.html)
[#]: author: (Simon Arneaud https://theartofmachinery.com)

Object-Oriented Programming and Essential State
======

Back in 2015, Brian Will wrote a provocative blog post: [Object-Oriented Programming: A Disaster Story][1]. He followed it up with a video called [Object-Oriented Programming is Bad][2], which is much more detailed. I recommend taking the time to watch the video, but here’s my one-paragraph summary:

The Platonic ideal of OOP is a sea of decoupled objects that send stateless messages to one another. No one really makes software like that, and Brian points out that it doesn’t even make sense: objects need to know which other objects to send messages to, and that means they need to hold references to one another. Most of the video is about the pain that happens trying to couple objects for control flow, while pretending that they’re decoupled by design.

Overall his ideas resonate with my own experiences of OOP: objects can be okay, but I’ve just never been satisfied with object-_orientation_ for modelling a program’s control flow, and trying to make code “properly” object-oriented always seems to create layers of unnecessary complexity.

There’s one thing I don’t think he explains fully. He says outright that “encapsulation does not work”, but follows it with the footnote “at fine-grained levels of code”, and goes on to acknowledge that objects can sometimes work, and that encapsulation can be okay at the level of, say, a library or file. But he doesn’t explain exactly why it sometimes works and sometimes doesn’t, and how/where to draw the line. Some people might say that makes his “OOP is bad” claim flawed, but I think his point stands, and that the line can be drawn between essential state and accidental state.

If you haven’t heard this usage of the terms “essential” and “accidental” before, you should check out Fred Brooks’ classic [No Silver Bullet][3] essay. (He’s written many great essays about building software systems, by the way.) I’ve already written [my own post about essential and accidental complexity][4] before, but here’s a quick TL;DR: Software is complex. Partly that’s because we want software to solve messy real-world problems, and we call that “essential complexity”. “Accidental complexity” is all the other complexity that exists because we’re trying to use silicon and metal to solve problems that have nothing to do with silicon and metal. For example, code for memory management, or transferring data between RAM and disk, or parsing text formats, is all “accidental complexity” for most programs.

Suppose you’re building a chat application that supports multiple channels. Messages can arrive for any channel at any time. Some channels are especially interesting and the user wants to be notified or pinged when a new message comes in. Other channels are muted: the message is stored, but the user isn’t interrupted. You need to keep track of the user’s preferred setting for each channel.

One way to do it is to use a map (a.k.a. hash table, dictionary or associative array) between the channels and channel settings. Note that a map is the kind of abstract data type (ADT) that Brian Will said can work as an object.

If we get a debugger and look inside the map object in memory, what will we see? We’ll find channel IDs and channel settings data of course (or pointers to them, at least). But we’ll also find other data. If the map is implemented using a red-black tree, we’ll see tree node objects with red/black labels and pointers to other nodes. The channel-related data is the essential state, and the tree nodes are the accidental state. Notice something, though: The map effectively encapsulates its accidental state — you could replace the map with another one implemented using AVL trees and your chat app would still work. On the other hand, the map doesn’t encapsulate the essential state (simply using `get()` and `set()` methods to access data isn’t encapsulation). In fact, the map is as agnostic as possible about the essential state — you could use basically the same map data structure to store other mappings unrelated to channels or notifications.

And that’s why the map ADT is so successful: it encapsulates accidental state and is decoupled from essential state. If you think about it, the problems that Brian describes with encapsulation are problems with trying to encapsulate essential state. The benefits that others describe are benefits from encapsulating accidental state.

It’s pretty hard to make entire software systems meet this ideal, but scaling up, I think it looks something like this:

  * No global, mutable state
  * Accidental state encapsulated (in objects or modules or whatever)
  * Stateless accidental complexity enclosed in free functions, decoupled from data
  * Inputs and outputs made explicit using tricks like dependency injection
  * Components fully owned and controlled from easily identifiable locations

Some of this goes against instincts I had a long time ago. For example, if you have a function that makes a database query, the interface looks simpler and nicer if the database connection handling is hidden inside the function, and the only parameters are the query parameters. However, when you build a software system out of functions like this, it actually becomes more complex to coordinate the database usage. Not only are the components doing things their own ways, they’re trying to hide what they’re doing as “implementation details”. The fact that a database query requires a database connection never was an implementation detail. If something can’t be hidden, it’s saner to make it explicit.

I’m wary of feeding the OOP and functional programming false dichotomy, but I think it’s interesting that FP goes to the opposite extreme of OOP: OOP tries to encapsulate things, including the essential complexity that can’t be encapsulated, while pure FP tends to make things explicit, including some accidental complexity. Most of the time, that’s the safer side to go wrong, but sometimes (such as when [building self-referential data structures in a purely functional language][5]) you can get designs that are more for the sake of FP than for the sake of simplicity (which is why [Haskell includes some escape hatches][6]). I’ve written before about [the middle ground of so-called “weak purity”][7].

Brian found that encapsulation works at a larger scale for a couple of reasons. One is that larger components are simply more likely to contain accidental state, just because of size. Another is that what’s “accidental” is relative to what problem you’re solving. From the chat app user’s point of view, “accidental complexity” is anything unrelated to messages and channels and users, etc. As you break the problems into subproblems, however, more things become essential. For example, the mapping between channel names and channel IDs is arguably accidental complexity when solving the “build a chat app” problem, but it’s essential complexity when solving the “implement the `getChannelIdByName()` function” subproblem. So, encapsulation tends to be less useful for subcomponents than supercomponents.

By the way, at the end of his video, Brian Will wonders if any language supports anonymous functions that _can’t_ access the scope they’re in. [D][8] does. Anonymous lambdas in D are normally closures, but anonymous stateless functions can also be declared if that’s what you want:

```
import std.stdio;

void main()
{
    int x = 41;

    // Value from immediately executed lambda
    auto v1 = () {
        return x + 1;
    }();
    writeln(v1);

    // Same thing
    auto v2 = delegate() {
        return x + 1;
    }();
    writeln(v2);

    // Plain functions aren't closures
    auto v3 = function() {
        // Can't access x
        // Can't access any mutable global state either if also marked pure
        return 42;
    }();
    writeln(v3);
}
```

--------------------------------------------------------------------------------

via: https://theartofmachinery.com/2019/10/13/oop_and_essential_state.html

作者:[Simon Arneaud][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://theartofmachinery.com
[b]: https://github.com/lujun9972
[1]: https://medium.com/@brianwill/object-oriented-programming-a-personal-disaster-1b044c2383ab
[2]: https://www.youtube.com/watch?v=QM1iUe6IofM
[3]: http://www.cs.nott.ac.uk/~pszcah/G51ISS/Documents/NoSilverBullet.html
[4]: https://theartofmachinery.com/2017/06/25/compression_complexity_and_software.html
[5]: https://wiki.haskell.org/Tying_the_Knot
[6]: https://en.wikibooks.org/wiki/Haskell/Mutable_objects#The_ST_monad
[7]: https://theartofmachinery.com/2016/03/28/dirtying_pure_functions_can_be_useful.html
[8]: https://dlang.org
@@ -1,5 +1,5 @@
 [#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: (geekpi)
 [#]: reviewer: ( )
 [#]: publisher: ( )
 [#]: url: ( )

sources/tech/20191021 How to build a Flatpak.md (new file, 320 lines)
@@ -0,0 +1,320 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to build a Flatpak)
[#]: via: (https://opensource.com/article/19/10/how-build-flatpak-packaging)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

How to build a Flatpak
======
A universal packaging format with a decentralized means of distribution. Plus, portability and sandboxing.

![][1]

A long time ago, a Linux distribution shipped an operating system along with _all_ the software available for it. There was no concept of “third party” software because everything was a part of the distribution. Applications weren’t so much installed as they were enabled from a great big software repository that you got on one of the many floppy disks or, later, CDs you purchased or downloaded.

This evolved into something even more convenient as the internet became ubiquitous, and the concept of what is now the “app store” was born. Of course, Linux distributions tend to call this a _software repository_ or just _repo_ for short, with some variations for “branding”, such as _Ubuntu Software Center_ or, with typical GNOME minimalism, simply _Software_.

This model worked well back when open source software was still a novelty and the number of open source applications was a number rather than a _theoretical_ number. In today’s world of GitLab and GitHub and Bitbucket (and [many][2] [many][3] more), it’s hardly possible to count the number of open source projects, much less package them up in a repository. No Linux distribution today, even [Debian][4] and its formidable group of package maintainers, can claim or hope to have a package for every installable open source project.

Of course, a Linux package doesn’t have to be in a repository to be installable. Any programmer can package up their software and distribute it from their own website. However, because repositories are seen as an integral part of a distribution, there isn’t a universal packaging format, meaning that a programmer must decide whether to release a `.deb` or `.rpm`, or an AUR build script, or a Nix or Guix package, or a Homebrew script, or just a mostly-generic `.tgz` archive for `/opt`. It’s overwhelming for a developer who lives and breathes Linux every day, much less for a developer just trying to make a best-effort attempt at supporting a free and open source target.

### Why Flatpak?

The Flatpak project provides a universal packaging format along with a decentralized means of distribution, plus portability and sandboxing.

  * **Universal**: Install the Flatpak system, and you can run Flatpaks, regardless of your distribution. No daemon or systemd required. The same Flatpak runs on Fedora, Ubuntu, Mageia, Pop OS, Arch, Slackware, and more.
  * **Decentralized**: Developers can create and sign their own Flatpak packages and repositories. There’s no repository to petition in order to get a package included.
  * **Portable**: If you have a Flatpak on your system and want to hand it to a friend so they can run the same application, you can export the Flatpak to a USB thumbdrive (see the sketch after this list).
  * **Sandboxed**: Flatpaks use a container-based model, allowing multiple versions of libraries and applications to exist on one system. Yes, you can easily install the latest version of an app to test out while maintaining the old version you rely on.
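
As a sketch of that portability point: the `flatpak create-usb` subcommand copies an installed application, along with the metadata needed to install it elsewhere, onto a mounted drive. The mount point and application ID below are illustrative assumptions:

```
$ flatpak create-usb /run/media/$USER/thumbdrive org.gnu.Hello
```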

### Building a Flatpak

To build a Flatpak, you must first install Flatpak (the subsystem that enables you to use Flatpak packages) and the flatpak-builder application.

On Fedora, CentOS, RHEL, and similar:

```
$ sudo dnf install flatpak flatpak-builder
```

On Debian, Ubuntu, and similar:

```
$ sudo apt install flatpak flatpak-builder
```

You must also install the development tools required to build the application you are packaging. By nature of developing the application you’re now packaging, you may already have a development environment installed, so you might not notice that these components are required, but should you start building Flatpaks with Jenkins or from inside containers, then you must ensure that your build tools are a part of your toolchain.

For the first example build, this article assumes that your application uses [GNU Autotools][5], but Flatpak itself supports other build systems, such as `cmake`, `cmake-ninja`, `meson`, `ant`, as well as custom commands (a `simple` build system, in Flatpak terminology, but by no means does this imply that the build itself is actually simple).

#### Project directory

Unlike the strict RPM build infrastructure, Flatpak doesn’t impose a project directory structure. I prefer to create project directories based on the **dist** packages of software, but there’s no technical reason you can’t instead integrate your Flatpak build process with your source directory. It is technically easier to build a Flatpak from your **dist** package, though, and it’s an easier demo too, so that’s the model this article uses. Set up a project directory for GNU Hello, serving as your first Flatpak:

```
$ mkdir hello_flatpak
$ mkdir hello_flatpak/src
```

Download your distributable source. For this example, the source code is located at `https://ftp.gnu.org/gnu/hello/hello-2.10.tar.gz`. Save it into the `src` directory, where the manifest below expects to find it:

```
$ cd hello_flatpak
$ wget -P src/ https://ftp.gnu.org/gnu/hello/hello-2.10.tar.gz
```

#### Manifest

A Flatpak is defined by a manifest, which describes how to build and install the application it is delivering. A manifest is atomic and reproducible. A Flatpak exists in a “sandbox” container, though, so the manifest is based on a mostly empty environment with a root directory called `/app`.

The first two attributes are the ID of the application you are packaging and the command provided by it. The application ID must be unique to the application you are packaging. The canonical way of formulating a unique ID is to use a triplet value consisting of the entity responsible for the code followed by the name of the application, such as `org.gnu.Hello`. The command provided by the application is whatever you type into a terminal to run the application. This does not imply that the application is intended to be run from a terminal instead of a `.desktop` file in the Activities or Applications menu.

In a file called `org.gnu.Hello.yaml`, enter this text:

```
id: org.gnu.Hello
command: hello
```

A manifest can be written in [YAML][6] or in JSON. This article uses YAML.

Next, you must define each “module” delivered by this Flatpak package. You can think of a module as a dependency or a component. For GNU Hello, there is only one module: GNU Hello. More complex applications may require a specific library or another application entirely.

```
modules:
  - name: hello
    buildsystem: autotools
    no-autogen: true
    sources:
      - type: archive
        path: src/hello-2.10.tar.gz
```

The `buildsystem` value identifies how Flatpak must build the module. Each module can use its own build system, so one Flatpak can have several build systems defined.

The `no-autogen` value tells Flatpak not to run the setup commands for `autotools`, which aren’t necessary because the GNU Hello source code is the product of `make dist`. If the code you’re building isn’t in an easily buildable form, then you may need to install `autogen` and `autoconf` to prepare the source for `autotools`. This option doesn’t apply at all to projects that don’t use `autotools`.

The `type` value tells Flatpak that the source code is in an archive, which triggers the requisite unarchival tasks before building. The `path` points to the source code. In this example, the source exists in the `src` directory on your local build machine, but you could instead define the source as a remote location:

```
modules:
  - name: hello
    buildsystem: autotools
    no-autogen: true
    sources:
      - type: archive
        url: https://ftp.gnu.org/gnu/hello/hello-2.10.tar.gz
```

Finally, you must define the platform required for the application to run and build. The Flatpak maintainers supply runtimes and SDKs that include common libraries, including `freedesktop`, `gnome`, and `kde`. The basic requirement is the `freedesktop` runtime and SDK, although this may be superseded by GNOME or KDE, depending on what your code needs to run. For this GNU Hello example, only the basics are required.

```
runtime: org.freedesktop.Platform
runtime-version: '18.08'
sdk: org.freedesktop.Sdk
```

The entire GNU Hello flatpak manifest:

```
id: org.gnu.Hello
runtime: org.freedesktop.Platform
runtime-version: '18.08'
sdk: org.freedesktop.Sdk
command: hello
modules:
  - name: hello
    buildsystem: autotools
    no-autogen: true
    sources:
      - type: archive
        path: src/hello-2.10.tar.gz
```

#### Building a Flatpak

Now that the package is defined, you can build it. The build process prompts flatpak-builder to parse the manifest and to resolve each requirement: it ensures that the necessary Platform and SDK are available (if they aren’t, then you’ll have to install them with the `flatpak` command), it unarchives the source code, and executes the `buildsystem` specified.
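
If the runtime and SDK named in the manifest are missing, they can be installed first; the command below is an illustrative sketch that assumes the Flathub remote has already been configured on your system:

```
$ flatpak install flathub org.freedesktop.Platform//18.08 org.freedesktop.Sdk//18.08
```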

The command to start:

```
$ flatpak-builder build-dir org.gnu.Hello.yaml
```

The directory `build-dir` is created if it does not already exist. The name `build-dir` is arbitrary; you could call it `build` or `bld` or `penguin`, and you can have more than one build destination in the same project directory. However, the term `build-dir` is a frequent value used in documentation, so using it as the literal value can be helpful.

#### Testing your application

You can test your application before or after it has been built by running the build command along with the `--run` option, and ending the command with the command provided by the Flatpak:

```
$ flatpak-builder --run build-dir \
  org.gnu.Hello.yaml hello
Hello, world!
```
|
||||
|
||||
### Packaging GUI apps with Flatpak
|
||||
|
||||
Packaging up a simple self-contained _hello world_ application is trivial, and fortunately packaging up a GUI application isn’t much harder. The most difficult applications to package are those that don’t rely on common libraries and frameworks (in the context of packaging, “common” means anything _not_ already packaged by someone else). The Flatpak community provides SDKs and SDK Extensions for many components you might otherwise have had to package yourself. For instance, when packaging the pure Java implementation of `pdftk`, I use the OpenJDK SDK extension I found in the Flatpak Github repository:
|
||||
|
||||
|
||||
```
|
||||
runtime: org.freedesktop.Platform
|
||||
runtime-version: '18.08'
|
||||
sdk: org.freedesktop.Sdk
|
||||
sdk-extensions:
|
||||
- org.freedesktop.Sdk.Extension.openjdk11
|
||||
```
|
||||
|
||||
The Flatpak community does a lot of work on the foundations that applications run upon, to make the packaging process easy for developers. For instance, the Kblocks game from the KDE community requires the KDE platform to run, and that platform is already available from Flatpak. The additional `libkdegames` library is not included, but it’s as easy to add it to your list of `modules` as it is to add `kblocks` itself.
|
||||
|
||||
Here’s a manifest for the Kblocks game:
|
||||
|
||||
|
||||
```
|
||||
id: org.kde.kblocks
|
||||
command: kblocks
|
||||
modules:
|
||||
- buildsystem: cmake-ninja
|
||||
name: libkdegames
|
||||
sources:
|
||||
- type: archive
|
||||
path: src/libkdegames-19.08.2.tar.xz
|
||||
- buildsystem: cmake-ninja
|
||||
name: kblocks
|
||||
sources:
|
||||
- type: archive
|
||||
path: src/kblocks-19.08.2.tar.xz
|
||||
runtime: org.kde.Platform
|
||||
runtime-version: '5.13'
|
||||
sdk: org.kde.Sdk
|
||||
```
|
||||
|
||||
As you can see, the manifest is still straight-forward and relatively intuitive. The build system is different, and the runtime and SDK point to KDE instead of the Freedesktop, but the structure and requirements are basically the same.
|
||||
|
||||
Because it’s a GUI application, however, there are some new options required. First, it needs an icon so that when it’s listed in the Activities or Application menu, it looks nice and recognizable. Kblocks includes an icon in its sources, but the names of files exported by a Flatpak must be prefixed using the application ID (such as `org.kde.Kblocks.desktop`). The easiest way to do this is to rename the file directly in the application source, which Flatpak can do for you as long as you include this directive in your manifest:
|
||||
|
||||
|
||||
```
|
||||
rename-icon: kblocks
|
||||
```
|
||||
|
||||
Another unique trait of GUI applications is that they often require integration with common desktop services, like the graphics server (X11 or Wayland) itself, a sound server such as [PulseAudio][7], and the Inter-Process Communication (IPC) subsystem.
|
||||
|
||||
In the case of Kblocks, the requirements are:
|
||||
|
||||
|
||||
```
|
||||
finish-args:
|
||||
- --share=ipc
|
||||
- --socket=x11
|
||||
- --socket=wayland
|
||||
- --socket=pulseaudio
|
||||
- --device=dri
|
||||
- --filesystem=xdg-config/kdeglobals:ro
|
||||
```
|
||||
|
||||
Here’s the final, complete manifest, using URLs for the sources so you can try this on your own system easily:
|
||||
|
||||
|
||||
```
|
||||
command: kblocks
|
||||
finish-args:
|
||||
- --share=ipc
|
||||
- --socket=x11
|
||||
- --socket=wayland
|
||||
- --socket=pulseaudio
|
||||
- --device=dri
|
||||
- --filesystem=xdg-config/kdeglobals:ro
|
||||
id: org.kde.kblocks
|
||||
modules:
|
||||
- buildsystem: cmake-ninja
|
||||
name: libkdegames
|
||||
sources:
|
||||
- sha256: 83456cec44502a1f79c0be00c983090e32fd8aea5fec1461fbfbd37b5f8866ac
|
||||
type: archive
|
||||
url: https://download.kde.org/stable/applications/19.08.2/src/libkdegames-19.08.2.tar.xz
|
||||
- buildsystem: cmake-ninja
|
||||
name: kblocks
|
||||
sources:
|
||||
- sha256: 8b52c949e2d446a4ccf81b09818fc90234f2f55d8722c385491ee67e1f2abf93
|
||||
type: archive
|
||||
url: https://download.kde.org/stable/applications/19.08.2/src/kblocks-19.08.2.tar.xz
|
||||
rename-icon: kblocks
|
||||
runtime: org.kde.Platform
|
||||
runtime-version: '5.13'
|
||||
sdk: org.kde.Sdk
|
||||
```
|
||||
|
||||
To build the application, you must have the KDE Platform and SDK Flatpaks (version 5.13 as of this writing) installed. Once the application has been built, you can run it using the `--run` method, but to see the application icon, you must install it.
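
As a sketch, assuming you saved the manifest as `org.kde.Kblocks.yaml` (the file name is your choice) and built into `build-dir`, the test invocation mirrors the GNU Hello example:

```
$ flatpak-builder --run build-dir org.kde.Kblocks.yaml kblocks
```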
|
||||
|
||||
#### Distributing and installing a Flatpak you have built
|
||||
|
||||
Distributing Flatpaks happens through repositories.
|
||||
|
||||
You can list your apps on [Flathub.org][8], a community website meant as a _technically_ decentralised (but central in spirit) location for Flatpaks. To submit your Flatpak, [place your manifest into a Git repository][9] and [submit a pull request on GitHub][10].
|
||||
|
||||
Alternatively, you can create your own repository using the `flatpak build-export` command.
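
A minimal sketch of that workflow, with the repository directory and remote name (`my-repo` here) chosen purely for illustration:

```
$ flatpak build-export my-repo build-dir
$ flatpak remote-add --user --no-gpg-verify my-repo my-repo
$ flatpak install --user my-repo org.gnu.Hello
```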
|
||||
|
||||
You can also just install locally:
|
||||
|
||||
|
||||
```
|
||||
$ flatpak-builder --force-clean --install build-dir org.kde.Kblocks.yaml
|
||||
```
|
||||
|
||||
Once installed, open your Activities or Applications menu and search for Kblocks.
|
||||
|
||||
![The Activities menu in GNOME][11]
|
||||
|
||||
### Learning more
|
||||
|
||||
The [Flatpak documentation site][12] has a good walkthrough on building your first Flatpak. It’s worth reading even if you’ve followed along with this article. Besides that, the docs provide details on what Platforms and SDKs are available.
|
||||
|
||||
For those who enjoy learning from examples, there are manifests for _every application_ available on [Flathub][13].
|
||||
|
||||
The resources for building and using Flatpaks are plentiful, and Flatpak, along with containers and sandboxed apps, is arguably [the future][14], so get familiar with them, start integrating them with your Jenkins pipelines, and enjoy easy and universal Linux app packaging.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/10/how-build-flatpak-packaging
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/flatpak-lead-image.png?itok=J93RG_fi
|
||||
[2]: http://notabug.org
|
||||
[3]: http://savannah.nongnu.org/
|
||||
[4]: http://debian.org
|
||||
[5]: https://opensource.com/article/19/7/introduction-gnu-autotools
|
||||
[6]: https://www.redhat.com/sysadmin/yaml-tips
|
||||
[7]: https://opensource.com/article/17/1/linux-plays-sound
|
||||
[8]: http://flathub.org
|
||||
[9]: https://opensource.com/resources/what-is-git
|
||||
[10]: https://opensource.com/life/16/3/submit-github-pull-request
|
||||
[11]: https://opensource.com/sites/default/files/gnome-activities-kblocks.jpg (The Activities menu in GNOME)
|
||||
[12]: http://docs.flatpak.org/en/latest/introduction.html
|
||||
[13]: https://github.com/flathub
|
||||
[14]: https://silverblue.fedoraproject.org/
|
@ -0,0 +1,272 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to program with Bash: Syntax and tools)
|
||||
[#]: via: (https://opensource.com/article/19/10/programming-bash-part-1)
|
||||
[#]: author: (David Both https://opensource.com/users/dboth)
|
||||
|
||||
How to program with Bash: Syntax and tools
|
||||
======
|
||||
Learn basic Bash programming syntax and tools, as well as how to use
|
||||
variables and control operators, in the first article in this three-part
|
||||
series.
|
||||
![bash logo on green background][1]
|
||||
|
||||
A shell is the command interpreter for the operating system. Bash is my favorite shell, but every Linux shell interprets the commands typed by the user or sysadmin into a form the operating system can use. When the results are returned to the shell program, it sends them to STDOUT which, by default, [displays them in the terminal][2]. All of the shells I am familiar with are also programming languages.
|
||||
|
||||
Features like tab completion, command-line recall and editing, and shortcuts like aliases all contribute to Bash's value as a powerful shell. Its default command-line editing mode uses Emacs-style keybindings, but one of my favorite Bash features is that I can change it to Vi mode to use editing commands that are already part of my muscle memory.
|
||||
|
||||
However, if you think of Bash solely as a shell, you miss much of its true power. While researching my three-volume [Linux self-study course][3] (on which this series of articles is based), I learned things about Bash that I'd never known in over 20 years of working with Linux. Some of these new bits of knowledge relate to its use as a programming language. Bash is a powerful programming language, one perfectly designed for use on the command line and in shell scripts.
|
||||
|
||||
This three-part series explores using Bash as a command-line interface (CLI) programming language. This first article looks at some simple command-line programming with Bash, variables, and control operators. The other articles explore types of Bash files; string, numeric, and miscellaneous logical operators that provide execution-flow control logic; different types of shell expansions; and the **for**, **while**, and **until** loops that enable repetitive operations. They will also look at some commands that simplify and support the use of these tools.
|
||||
|
||||
### The shell
|
||||
|
||||
|
||||
|
||||
Bash stands for Bourne Again Shell because the Bash shell is [based upon][4] the older Bourne shell that was written by Stephen Bourne in 1977. Many [other shells][5] are available, but these are the four I encounter most frequently:
|
||||
|
||||
* **csh:** The C shell for programmers who like the syntax of the C language
|
||||
* **ksh:** The Korn shell, written by David Korn and popular with Unix users
|
||||
* **tcsh:** A version of csh with more ease-of-use features
|
||||
* **zsh:** The Z shell, which combines many features of other popular shells
|
||||
|
||||
|
||||
|
||||
All shells have built-in commands that supplement or replace the ones provided by the core utilities. Open the shell's man page and find the "BUILT-INS" section to see the commands it provides.
|
||||
|
||||
Each shell has its own personality and syntax. Some will work better for you than others. I have used the C shell, the Korn shell, and the Z shell. I still like the Bash shell more than any of them. Use the one that works best for you, although that might require you to try some of the others. Fortunately, it's quite easy to change shells.
|
||||
|
||||
All of these shells are programming languages, as well as command interpreters. Here's a quick tour of some programming constructs and tools that are integral parts of Bash.
|
||||
|
||||
### Bash as a programming language
|
||||
|
||||
Most sysadmins have used Bash to issue commands that are usually fairly simple and straightforward. But Bash can go beyond entering single commands, and many sysadmins create simple command-line programs to perform a series of tasks. These programs are common tools that can save time and effort.
|
||||
|
||||
My objective when writing CLI programs is to save time and effort (i.e., to be the lazy sysadmin). CLI programs support this by listing several commands in a specific sequence that execute one after another, so you do not need to watch the progress of one command and type in the next command when the first finishes. You can go do other things and not have to continually monitor the progress of each command.
|
||||
|
||||
### What is "a program"?
|
||||
|
||||
The Free On-line Dictionary of Computing ([FOLDOC][6]) defines a program as: "The instructions executed by a computer, as opposed to the physical device on which they run." Princeton University's [WordNet][7] defines a program as: "…a sequence of instructions that a computer can interpret and execute…" [Wikipedia][8] also has a good entry about computer programs.
|
||||
|
||||
Therefore, a program can consist of one or more instructions that perform a specific, related task. A computer program instruction is also called a program statement. For sysadmins, a program is usually a sequence of shell commands. All the shells available for Linux, at least the ones I am familiar with, have at least a basic form of programming capability, and Bash, the default shell for most Linux distributions, is no exception.
|
||||
|
||||
While this series uses Bash (because it is so ubiquitous), if you use a different shell, the general programming concepts will be the same, although the constructs and syntax may differ somewhat. Some shells may support some features that others do not, but they all provide some programming capability. Shell programs can be stored in a file for repeated use, or they may be created on the command line as needed.
|
||||
|
||||
### Simple CLI programs
|
||||
|
||||
The simplest command-line programs are one or two consecutive program statements, which may be related or not, that are entered on the command line before the **Enter** key is pressed. The second statement in a program, if there is one, might be dependent upon the actions of the first, but it does not need to be.
|
||||
|
||||
There is also one bit of syntactical punctuation that needs to be clearly stated. When entering a single command on the command line, pressing the **Enter** key terminates the command with an implicit semicolon (**;**). When used in a CLI shell program entered as a single line on the command line, the semicolon must be used to terminate each statement and separate it from the next one. The last statement in a CLI shell program can use an explicit or implicit semicolon.
|
||||
|
||||
### Some basic syntax
|
||||
|
||||
The following examples will clarify this syntax. This program consists of a single command with an explicit terminator:
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 ~]$ echo "Hello world." ;
|
||||
Hello world.
|
||||
```
|
||||
|
||||
That may not seem like much of a program, but it is the first program I encounter with every new programming language I learn. The syntax may be a bit different for each language, but the result is the same.
|
||||
|
||||
Let's expand a little on this trivial but ubiquitous program. Your results will be different from mine because I have done other experiments, while you may have only the default directories and files that are created in the account home directory the first time you log into an account via the GUI desktop.
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 ~]$ echo "My home directory." ; ls ;
|
||||
My home directory.
|
||||
chapter25 TestFile1.Linux dmesg2.txt Downloads newfile.txt softlink1 testdir6
|
||||
chapter26 TestFile1.mac dmesg3.txt file005 Pictures Templates testdir
|
||||
TestFile1 Desktop dmesg.txt link3 Public testdir Videos
|
||||
TestFile1.dos dmesg1.txt Documents Music random.txt testdir1
|
||||
```
|
||||
|
||||
That makes a bit more sense. The results are related, but the individual program statements are independent of each other. Notice that I like to put spaces before and after the semicolon because it makes the code a bit easier to read. Try that little CLI program again without an explicit semicolon at the end:
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 ~]$ echo "My home directory." ; ls
|
||||
```
|
||||
|
||||
There is no difference in the output.
|
||||
|
||||
### Something about variables
|
||||
|
||||
Like all programming languages, the Bash shell can deal with variables. A variable is a symbolic name that refers to a specific location in memory that contains a value of some sort. The value of a variable is changeable, i.e., it is variable.
|
||||
|
||||
Bash does not type variables like C and related languages, defining them as integers, floating points, or string types. In Bash, all variables are strings. A string that is an integer can be used in integer arithmetic, which is the only type of math that Bash is capable of doing. If more complex math is required, the [**bc** command][9] can be used in CLI programs and scripts.
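
As a quick illustration, you can pipe an expression to **bc** and set the number of decimal places with its `scale` variable:

```
[student@studentvm1 ~]$ echo "scale=4 ; 7/9" | bc
.7777
```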
|
||||
|
||||
Variables are assigned values and can be used to refer to those values in CLI programs and scripts. The value of a variable is set using its name, _not_ preceded by a **$** sign. The assignment **VAR=10** sets the value of the variable VAR to 10. To print the value of the variable, you can use the statement **echo $VAR**. Start with text (i.e., non-numeric) variables.
|
||||
|
||||
Bash variables become part of the shell environment until they are unset.
|
||||
|
||||
Check the initial value of a variable that has not been assigned; it should be null. Then assign a value to the variable and print it to verify its value. You can do all of this in a single CLI program:
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 ~]$ echo $MyVar ; MyVar="Hello World" ; echo $MyVar ;
|
||||
|
||||
Hello World
|
||||
[student@studentvm1 ~]$
|
||||
```
|
||||
|
||||
_Note: The syntax of variable assignment is very strict. There must be no spaces on either side of the equal (**=**) sign in the assignment statement._
|
||||
|
||||
The empty line indicates that the initial value of **MyVar** is null. Changing and setting the value of a variable are done the same way. This example shows both the original and the new value.
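
A quick demonstration of why the note above matters: with spaces around the equal sign, Bash parses the variable name as a command and complains.

```
[student@studentvm1 ~]$ MyVar = "Hello World"
bash: MyVar: command not found
```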
|
||||
|
||||
As mentioned, Bash can perform integer arithmetic calculations, which is useful for calculating a reference to the location of an element in an array or doing simple math problems. It is not suitable for scientific computing or anything that requires decimals, such as financial calculations. There are much better tools for those types of calculations.
|
||||
|
||||
Here's a simple calculation:
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 ~]$ Var1="7" ; Var2="9" ; echo "Result = $((Var1*Var2))"
|
||||
Result = 63
|
||||
```
|
||||
|
||||
What happens when you perform a math operation that results in a floating-point number?
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 ~]$ Var1="7" ; Var2="9" ; echo "Result = $((Var1/Var2))"
|
||||
Result = 0
|
||||
[student@studentvm1 ~]$ Var1="7" ; Var2="9" ; echo "Result = $((Var2/Var1))"
|
||||
Result = 1
|
||||
[student@studentvm1 ~]$
|
||||
```
|
||||
|
||||
The result is truncated to an integer: Bash performs only integer division and discards the fractional part, so 7/9 yields 0 and 9/7 yields 1. Notice that the calculation was performed as part of the **echo** statement. The math is performed before the enclosing **echo** command due to the Bash order of precedence. For details, see the Bash man page and search for "precedence."
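
If you need to know what was discarded, the modulo operator (**%**) returns the remainder of the division; a small example:

```
[student@studentvm1 ~]$ Var1="7" ; Var2="9" ; echo "Quotient = $((Var1/Var2)) ; Remainder = $((Var1%Var2))"
Quotient = 0 ; Remainder = 7
```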
|
||||
|
||||
### Control operators
|
||||
|
||||
Shell control operators are syntactical operators that make it easy to create some interesting command-line programs. The simplest form of CLI program is just stringing several commands together in a sequence on the command line:
|
||||
|
||||
|
||||
```
|
||||
command1 ; command2 ; command3 ; command4 ; . . . ; etc. ;
|
||||
```
|
||||
|
||||
Those commands all run without a problem so long as no errors occur. But what happens when an error occurs? You can anticipate and allow for errors using the built-in **&&** and **||** Bash control operators. These two control operators provide some flow control and enable you to alter the sequence of code execution. The semicolon is also considered to be a Bash control operator, as is the newline character.
|
||||
|
||||
The **&&** operator simply says, "if command1 is successful, then run command2. If command1 fails for any reason, then command2 is skipped." That syntax looks like this:
|
||||
|
||||
|
||||
```
|
||||
command1 && command2
|
||||
```
|
||||
|
||||
Now, look at some commands that will create a new directory and—if it's successful—make it the present working directory (PWD). Ensure that your home directory (**~**) is the PWD. Try this first in **/root**, a directory that you do not have access to:
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 ~]$ Dir=/root/testdir ; mkdir $Dir/ && cd $Dir
|
||||
mkdir: cannot create directory '/root/testdir/': Permission denied
|
||||
[student@studentvm1 ~]$
|
||||
```
|
||||
|
||||
The error was emitted by the **mkdir** command. You did not receive a second error because the **&&** control operator sensed the non-zero return code from the failed directory creation and skipped the **cd** command. Using the **&&** control operator prevents the **cd** command from running because there was an error in creating the directory. This type of command-line program flow control can prevent errors from compounding and making a real mess of things. But it's time to get a little more complicated.
|
||||
|
||||
The **||** control operator allows you to add another program statement that executes when the initial program statement returns a code greater than zero. The basic syntax looks like this:
|
||||
|
||||
|
||||
```
|
||||
command1 || command2
|
||||
```
|
||||
|
||||
This syntax reads, "If command1 fails, execute command2." That implies that if command1 succeeds, command2 is skipped. Try this by attempting to create a new directory:
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 ~]$ Dir=/root/testdir ; mkdir $Dir || echo "$Dir was not created."
|
||||
mkdir: cannot create directory '/root/testdir': Permission denied
|
||||
/root/testdir was not created.
|
||||
[student@studentvm1 ~]$
|
||||
```
|
||||
|
||||
This is exactly what you would expect. Because the new directory could not be created, the first command failed, which resulted in the execution of the second command.
|
||||
|
||||
Combining these two operators provides the best of both. The control operator syntax using some flow control takes this general form when the **&&** and **||** control operators are used:
|
||||
|
||||
|
||||
```
|
||||
preceding commands ; command1 && command2 || command3 ; following commands
|
||||
```
|
||||
|
||||
This syntax can be stated like so: "If command1 exits with a return code of 0, then execute command2, otherwise execute command3." (Strictly speaking, command3 also runs if command2 itself fails, because **||** tests the return code of whichever command ran last.) Try it:
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 ~]$ Dir=/root/testdir ; mkdir $Dir && cd $Dir || echo "$Dir was not created."
|
||||
mkdir: cannot create directory '/root/testdir': Permission denied
|
||||
/root/testdir was not created.
|
||||
[student@studentvm1 ~]$
|
||||
```
|
||||
|
||||
Now try the last command again using your home directory instead of the **/root** directory. You will have permission to create this directory:
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 ~]$ Dir=~/testdir ; mkdir $Dir && cd $Dir || echo "$Dir was not created."
|
||||
[student@studentvm1 testdir]$
|
||||
```
|
||||
|
||||
The control operator syntax, like **command1 && command2**, works because every command sends a return code (RC) to the shell that indicates if it completed successfully or whether there was some type of failure during execution. By convention, an RC of zero (0) indicates success, and any positive number indicates some type of failure. Some of the tools sysadmins use just return a one (1) to indicate a failure, but many use other codes to indicate the type of failure that occurred.
|
||||
|
||||
The Bash shell variable **$?** contains the RC from the last command. This RC can be checked very easily by a script, the next command in a list of commands, or even the sysadmin directly. Start by running a simple command and immediately checking the RC. The RC will always be for the last command that ran before you looked at it.
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 testdir]$ ll ; echo "RC = $?"
|
||||
total 1264
|
||||
drwxrwxr-x 2 student student 4096 Mar 2 08:21 chapter25
|
||||
drwxrwxr-x 2 student student 4096 Mar 21 15:27 chapter26
|
||||
-rwxr-xr-x 1 student student 92 Mar 20 15:53 TestFile1
|
||||
<snip>
|
||||
drwxrwxr-x. 2 student student 663552 Feb 21 14:12 testdir
|
||||
drwxr-xr-x. 2 student student 4096 Dec 22 13:15 Videos
|
||||
RC = 0
|
||||
[student@studentvm1 testdir]$
|
||||
```
|
||||
|
||||
The RC, in this case, is zero, which means the command completed successfully. Now try the same command on root's home directory, a directory you do not have permissions for:
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 testdir]$ ll /root ; echo "RC = $?"
|
||||
ls: cannot open directory '/root': Permission denied
|
||||
RC = 2
|
||||
[student@studentvm1 testdir]$
|
||||
```
|
||||
|
||||
In this case, the RC is two; this means permission was denied because a non-root user tried to access a directory reserved for root. The control operators use these RCs to enable you to alter the sequence of program execution.
|
||||
|
||||
### Summary
|
||||
|
||||
This article looked at Bash as a programming language and explored its basic syntax as well as some basic tools. It showed how to print data to STDOUT and how to use variables and control operators. The next article in this series looks at some of the many Bash logical operators that control the flow of instruction execution.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/10/programming-bash-part-1
|
||||
|
||||
作者:[David Both][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/dboth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U (bash logo on green background)
|
||||
[2]: https://opensource.com/article/18/10/linux-data-streams
|
||||
[3]: http://www.both.org/?page_id=1183
|
||||
[4]: https://opensource.com/19/9/command-line-heroes-bash
|
||||
[5]: https://en.wikipedia.org/wiki/Comparison_of_command_shells
|
||||
[6]: http://foldoc.org/program
|
||||
[7]: https://wordnet.princeton.edu/
|
||||
[8]: https://en.wikipedia.org/wiki/Computer_program
|
||||
[9]: https://www.gnu.org/software/bc/manual/html_mono/bc.html
|
@ -0,0 +1,101 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Pylint: Making your Python code consistent)
|
||||
[#]: via: (https://opensource.com/article/19/10/python-pylint-introduction)
|
||||
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
|
||||
|
||||
Pylint: Making your Python code consistent
|
||||
======
|
||||
Pylint is your friend when you want to avoid arguing about code
|
||||
complexity.
|
||||
![OpenStack source code \(Python\) in VIM][1]
|
||||
|
||||
Pylint is a higher-level Python style enforcer. While [flake8][2] and [black][3] take care of "local" style (where the newlines occur, how comments are formatted) and find issues like commented-out code or bad practices in log formatting, Pylint checks the bigger picture.
|
||||
|
||||
Pylint is extremely aggressive by default. It will offer strong opinions on everything from checking if declared interfaces are actually implemented to opportunities to refactor duplicate code, which can be a lot to a new user. One way of introducing it gently to a project, or a team, is to start by turning _all_ checkers off, and then enabling checkers one by one. This is especially useful if you already use flake8, black, and [mypy][4]: Pylint has quite a few checkers that overlap in functionality.
|
||||
|
||||
However, one of the things unique to Pylint is the ability to enforce higher-level issues: for example, number of lines in a function, or number of methods in a class.
|
||||
|
||||
These numbers might be different from project to project and can depend on the development team's preferences. However, once the team comes to an agreement about the parameters, it is useful to _enforce_ those parameters using an automated tool. This is where Pylint shines.
|
||||
|
||||
### Configuring Pylint
|
||||
|
||||
In order to start with an empty configuration, start your `.pylintrc` with
|
||||
|
||||
|
||||
```
|
||||
[MESSAGES CONTROL]
|
||||
|
||||
disable=all
|
||||
```
|
||||
|
||||
This disables all Pylint messages. Since many of them are redundant, this makes sense. In Pylint, a `message` is a specific kind of warning.
|
||||
|
||||
You can check that all messages have been turned off by running `pylint`:
|
||||
|
||||
|
||||
```
|
||||
$ pylint <my package>
|
||||
```
|
||||
|
||||
In general, it is not a great idea to add parameters to the `pylint` command line: the best place to configure `pylint` is the `.pylintrc`. In order to have it do _something_ useful, we need to enable some messages.
|
||||
|
||||
In order to enable messages, add them to your `.pylintrc`, under the `[MESSAGES CONTROL]` section:
|
||||
|
||||
|
||||
```
|
||||
enable=<message>,
|
||||
|
||||
...
|
||||
```
|
||||
|
||||
Enable the messages (what Pylint calls its different kinds of warnings) that look useful to you. Some of my favorites include `too-many-lines`, `too-many-arguments`, and `too-many-branches`. All of those limit the complexity of modules or functions and serve as an objective check for code complexity, without a human nitpicker needed.
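
Putting those together, a minimal sketch of a `.pylintrc` that enables just these three messages might look like this:

```
[MESSAGES CONTROL]

disable=all

enable=too-many-lines,
  too-many-arguments,
  too-many-branches
```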
|
||||
|
||||
A _checker_ is a source of _messages_: every message belongs to exactly one checker. Many of the most useful messages are under the [design checker][5]. The default numbers are usually good, but tweaking the maximums is straightforward: we can add a section called `DESIGN` to the `.pylintrc`.
|
||||
|
||||
|
||||
```
|
||||
[DESIGN]
|
||||
|
||||
max-args=7
|
||||
|
||||
max-locals=15
|
||||
```
|
||||
|
||||
Another good source of useful messages is the `refactoring` checker. Some of my favorite messages to enable there are `consider-using-dict-comprehension`, `stop-iteration-return` (which looks for generators that use `raise StopIteration` when `return` is the correct way to stop the iteration), and `chained-comparison`, which suggests using syntax like `1 <= x < 5` rather than the less obvious `1 <= x and x < 5`.
|
||||
|
||||
Finally, an expensive checker, in terms of performance, but highly useful, is `similarities`. It is designed to enforce "Don't Repeat Yourself" (the DRY principle) by explicitly looking for copy-paste between different parts of the code. It only has one message to enable: `duplicate-code`. The default "minimum similarity lines" is set to `4`. It is possible to set it to a different value using the `.pylintrc`.
|
||||
|
||||
|
||||
```
|
||||
[SIMILARITIES]
|
||||
|
||||
min-similarity-lines=3
|
||||
```
|
||||
|
||||
### Pylint makes code reviews easy
|
||||
|
||||
If you are sick of code reviews where you point out that a class is too complicated, or that two different functions are basically the same, add Pylint to your [Continuous Integration][6] configuration, and only have the arguments about complexity guidelines for your project _once_.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/10/python-pylint-introduction
|
||||
|
||||
作者:[Moshe Zadka][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/moshez
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openstack_python_vim_2.jpg?itok=4fza48WU (OpenStack source code (Python) in VIM)
|
||||
[2]: https://opensource.com/article/19/5/python-flake8
|
||||
[3]: https://opensource.com/article/19/5/python-black
|
||||
[4]: https://opensource.com/article/19/5/python-mypy
|
||||
[5]: https://pylint.readthedocs.io/en/latest/technical_reference/features.html#design-checker
|
||||
[6]: https://opensource.com/business/15/7/six-continuous-integration-tools
|
185
sources/tech/20191021 Transition to Nftables.md
Normal file
@ -0,0 +1,185 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Transition to Nftables)
|
||||
[#]: via: (https://opensourceforu.com/2019/10/transition-to-nftables/)
|
||||
[#]: author: (Vijay Marcel D https://opensourceforu.com/author/vijay-marcel/)
|
||||
|
||||
Transition to Nftables
|
||||
======
|
||||
|
||||
[![][1]][2]
|
||||
|
||||
_Every major distribution in the open source world is moving towards nftables as the default firewall. In short, the venerable iptables is now dead. This article is a tutorial on how to build a firewall with nftables._
|
||||
|
||||
Currently, there is an iptables-nft backend that is compatible with nftables, but soon even this will not be available. Also, as noted by Red Hat developers, it may sometimes translate the rules incorrectly. Rather than rely on an iptables-to-nftables converter, we need to know how to build our own nftables rules. In nftables, all the address families come under one ruleset. Nftables runs in the user space, unlike iptables, where every module is in the kernel. It also needs fewer kernel updates, and it comes with new features such as maps, families and dictionaries.
|
||||
|
||||
**Address families**
|
||||
Address families determine the types of packets that are processed. There are six address families in nftables and they are:
|
||||
|
||||
* ip
|
||||
* ipv6
|
||||
* inet
|
||||
* arp
|
||||
* bridge
|
||||
* netdev
|
||||
|
||||
|
||||
|
||||
In nftables, the ipv4 and ipv6 protocols are combined into one single family called inet. So we do not need to specify two rules – one for ipv4 and another for ipv6. If no address family is specified, it will default to ip protocol, i.e., ipv4. Our area of interest lies in the inet family, since most home users will use either ipv4 or ipv6 protocols (see Figure 1).
|
||||
|
||||
**Nftables**
|
||||
A typical nftables ruleset contains three parts: tables, chains and rules.
|
||||
Tables are containers for chains and rules. They are identified by their address families and their names. Chains contain the rules needed for the _inet/arp/bridge/netdev_ protocols and are of three types: filter, NAT and route. Nftables rules can be loaded from a script, or they can be typed into a terminal and then saved as a ruleset. For home users, the default chain will be filter. The inet family contains the following hooks:
|
||||
|
||||
* Input
|
||||
* Output
|
||||
* Forward
|
||||
* Pre-routing
|
||||
* Post-routing
|
||||
|
||||
|
||||
|
||||
**To script or not to script?**
|
||||
One of the biggest questions is whether we can use a firewall script or not. The answer is: it’s your choice. Here’s some advice – if you have hundreds of rules in your firewall, then it is best to use a script, but if you are a typical home user, then you can type the commands in the terminal and then load your rule-set. Each option has its own advantages and disadvantages. In this article, we will type them in the terminal to build our firewall.
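
If you later choose the script route, a minimal sketch of an _/etc/nftables.conf_ might look like this (the rules shown are placeholders, not a complete firewall):

```
#!/usr/sbin/nft -f

flush ruleset

table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        iifname lo accept
        ct state { established, related } accept
    }
}
```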
|
||||
|
||||
Nftables uses a program called nft to add, create, list, delete and load rules. Make sure nftables is installed along with conntrackd and netfilter-persistent, and remove iptables, using the following command:
|
||||
|
||||
```
|
||||
apt-get install nftables conntrackd netfilter-persistent
|
||||
apt-get purge iptables
|
||||
```
|
||||
|
||||
The _nft_ command needs to be run as root, or with sudo. Use the following commands to list the ruleset, flush the ruleset, delete a table, and load a script, respectively.
|
||||
|
||||
```
|
||||
nft list ruleset
|
||||
nft flush ruleset
|
||||
nft delete table inet filter
|
||||
/usr/sbin/nft -f /etc/nftables.conf
|
||||
```
|
||||
|
||||
**Input policy**
|
||||
The firewall will contain three parts – input, forward and output – just like in iptables. In the terminal, type the following commands for the input firewall. Make sure you have flushed your rule-set before you begin. Our default policy will be to drop everything. We will use the inet family in the firewall. Add the following rules as root or use sudo:
|
||||
|
||||
```
|
||||
nft add table inet filter
|
||||
nft add chain inet filter input { type filter hook input priority 0 \; counter \; policy drop \; }
|
||||
```
|
||||
|
||||
You may have noticed something called _priority 0_. The priority determines the order in which chains attached to the same hook are traversed: the lower the number (negative integers included), the earlier the chain is processed. The filter chain conventionally uses priority 0. You can check the nftables wiki page to see the standard priorities for each hook.
|
||||
To know the network interfaces in your computer, run the following command:
|
||||
|
||||
```
|
||||
ip link show
|
||||
```
|
||||
|
||||
It will show the installed network interfaces: the local loopback interface and your Ethernet port or wireless port. Your Ethernet port's name looks something like _enpXsY_, where X and Y are numbers, and the same goes for your wireless port. We have to allow the local host, and only allow established incoming connections from the Internet.
|
||||
Nftables has a feature called verdict statements, which determine what happens to a matched packet. The verdict statements are _accept_, _drop_, _queue_, _jump_, _goto_, _continue_ and _return_. Since the firewall is a simple one, we will only _accept_ or _drop_ packets (Figure 2).
|
||||
|
||||
```
|
||||
nft add rule inet filter input iifname lo accept
|
||||
nft add rule inet filter input iifname enpXsY ct state established, related accept
|
||||
```
|
||||
|
||||
Next, we have to add rules to protect us from stealth scans. Not all stealth scans are malicious, but most of them are. We have to protect the network from such scans. In each rule below, the first set lists the TCP flags to be tested, and the second set lists the combination that, if matched, causes the packet to be dropped.
|
||||
|
||||
```
|
||||
nft add rule inet filter input iifname enpXsY tcp flags \& \(syn\|fin\) == \(syn\|fin\) drop
|
||||
nft add rule inet filter input iifname enpXsY tcp flags \& \(syn\|rst\) == \(syn\|rst\) drop
|
||||
nft add rule inet filter input iifname enpXsY tcp flags \& \(fin\|rst\) == \(fin\|rst\) drop
|
||||
nft add rule inet filter input iifname enpXsY tcp flags \& \(ack\|fin\) == fin drop
|
||||
nft add rule inet filter input iifname enpXsY tcp flags \& \(ack\|psh\) == psh drop
|
||||
nft add rule inet filter input iifname enpXsY tcp flags \& \(ack\|urg\) == urg drop
|
||||
```
|
||||
|
||||
Remember, we are typing these commands in the terminal. So we have to add a backslash before some special characters, to make sure the terminal interprets it as it should. If you are using a script, then this isn’t required.
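
Alternatively, you can single-quote the whole expression so the shell passes it through untouched; the same first rule, as a sketch:

```
nft add rule inet filter input iifname enpXsY 'tcp flags & (syn|fin) == (syn|fin) drop'
```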
|
||||
|
||||
**A word of caution regarding ICMP**
|
||||
The Internet Control Message Protocol (ICMP) is a diagnostic tool and so should not be dropped outright. Any attempt to fully block ICMP is unwise as it will also stop giving error messages to us. Enable only the most important control messages such as echo-request, echo-reply, destination-unreachable and time-exceeded, and reject the rest. Echo-request and echo-reply are part of ping. In the input, we only allow echo reply and in the output, we only allow the echo-request.
|
||||
|
||||
```
|
||||
nft add rule inet filter input iifname enpXsY icmp type { echo-reply, destination-unreachable, time-exceeded } limit rate 1/second accept
|
||||
nft add rule inet filter input iifname enpXsY ip protocol icmp drop
|
||||
```
|
||||
|
||||
Finally, we are logging and dropping all the invalid packets.
|
||||
|
||||
```
|
||||
nft add rule inet filter input iifname enpXsY ct state invalid log flags all level info prefix \"Invalid-Input: \"
|
||||
nft add rule inet filter input iifname enpXsY ct state invalid drop
|
||||
```
|
||||
|
||||
**Forward and output policy**
|
||||
In both the forward and output policies, we will drop packets by default and only accept those that are established connections.
|
||||
|
||||
```
|
||||
nft add chain inet filter forward { type filter hook forward priority 0 \; counter \; policy drop \; }
|
||||
nft add rule inet filter forward ct state established, related accept
|
||||
nft add rule inet filter forward ct state invalid drop
|
||||
nft add chain inet filter output { type filter hook output priority 0 \; counter \; policy drop \; }
|
||||
```
|
||||
|
||||
A typical desktop user needs only ports 80 and 443 to be allowed to access the Internet. Finally, allow the acceptable ICMP protocols and drop the invalid packets while logging them.
|
||||
|
||||
```
|
||||
nft add rule inet filter output oifname enpXsY tcp dport { 80, 443 } ct state new, established accept
|
||||
nft add rule inet filter output oifname enpXsY icmp type { echo-request, destination-unreachable, time-exceeded } limit rate 1/second accept
|
||||
nft add rule inet filter output oifname enpXsY ip protocol icmp drop
|
||||
nft add rule inet filter output oifname enpXsY ct state invalid log flags all level info prefix \"Invalid-Output: \"
|
||||
nft add rule inet filter output oifname enpXsY ct state invalid drop
|
||||
```
|
||||
|
||||
Now we have to save our ruleset, otherwise it will be lost when we reboot. To do so, run the following command (note that the redirection into /etc itself needs root privileges, so run it from a root shell or via tee):
|
||||
|
||||
```
|
||||
sudo nft list ruleset > /etc/nftables.conf
|
||||
```
|
||||
|
||||
We now have to load nftables at boot; for that, enable the nftables service in systemd:
|
||||
|
||||
```
|
||||
sudo systemctl enable nftables
|
||||
```
|
||||
|
||||
Next, edit the nftables unit file to remove the ExecStop option, to avoid flushing the ruleset at every boot. The file is usually located at /etc/systemd/system/sysinit.target.wants/nftables.service. Now restart nftables:
|
||||
|
||||
```
|
||||
sudo systemctl restart nftables
|
||||
```
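
As an aside, rather than editing the shipped unit file in place, a cleaner option on systemd-based systems is a drop-in override that blanks out the option (a sketch):

```
sudo systemctl edit nftables
```

Then, in the editor that opens, add these two lines and save:

```
[Service]
ExecStop=
```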
|
||||
|
||||
**Logging in rsyslog**
|
||||
When you log the dropped packets, they go straight to _syslog_, which makes reading your log file quite difficult. It is better to redirect your firewall logs to a separate file. Create a directory called nftables in
|
||||
_/var/log_ and in it, create two files called _input.log_ and _output.log_ to store the input and output logs, respectively. Make sure rsyslog is installed in your system. Now go to _/etc/rsyslog.d_ and create a file called _nftables.conf_ with the following contents:
|
||||
|
||||
```
|
||||
:msg,regex,"Invalid-Input: " -/var/log/nftables/input.log
|
||||
:msg,regex,"Invalid-Output: " -/var/log/nftables/output.log
|
||||
& stop
|
||||
```
|
||||
|
||||
Now we have to make sure the log is manageable. For that, create another file in _/etc/logrotate.d_ called nftables with the following code:
|
||||
|
||||
```
|
||||
/var/log/nftables/* {
    rotate 5
    daily
    maxsize 50M
    missingok
    notifempty
    delaycompress
    compress
    postrotate
        invoke-rc.d rsyslog rotate > /dev/null
    endscript
}
|
||||
```
|
||||
|
||||
Restart rsyslog so the logging changes take effect. You can now check your ruleset. If you feel typing each command in the terminal is bothersome, you can use a script to load the nftables firewall. I hope this article is useful in protecting your system.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensourceforu.com/2019/10/transition-to-nftables/
|
||||
|
||||
作者:[Vijay Marcel D][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensourceforu.com/author/vijay-marcel/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2017/01/REHfirewall-1.jpg?resize=696%2C481&ssl=1 (REHfirewall)
|
||||
[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2017/01/REHfirewall-1.jpg?fit=900%2C622&ssl=1
|
@ -0,0 +1,261 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Beginner’s Guide to Handle Various Update Related Errors in Ubuntu)
|
||||
[#]: via: (https://itsfoss.com/ubuntu-update-error/)
|
||||
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
|
||||
|
||||
Beginner’s Guide to Handle Various Update Related Errors in Ubuntu
|
||||
======
|
||||
|
||||
_**Who hasn’t come across an error while doing an update in Ubuntu? Update errors are common and plenty in Ubuntu and other Linux distributions based on Ubuntu. Here are some common Ubuntu update errors and their fixes.**_
|
||||
|
||||
This article is part of the Ubuntu beginner series that explains the know-how of Ubuntu so that a new user can understand things better.
|
||||
|
||||
In an earlier article, I discussed [how to update Ubuntu][1]. In this tutorial, I’ll discuss some common errors you may encounter while updating [Ubuntu][2]. It usually happens because you tried to add software or repositories on your own and that probably caused an issue.
|
||||
|
||||
There is no need to panic if you see errors while updating your system. The errors are common and the fixes are easy. You'll learn how to fix those common update errors.
|
||||
|
||||
_**Before you begin, I highly advise reading these two articles to have a better understanding of the repository concept in Ubuntu.**_
|
||||
|
||||
![Understand Ubuntu repositories][3]
|
||||
|
||||
|
||||
|
||||
###### **Understand Ubuntu repositories**
|
||||
|
||||
Learn what are various repositories in Ubuntu and how they enable you to install software in your system.
|
||||
|
||||
[Read More][4]
|
||||
|
||||
![Understanding PPA in Ubuntu][5]
|
||||
|
||||
|
||||
|
||||
###### **Understanding PPA in Ubuntu**
|
||||
|
||||
Further improve your concept of repositories and package handling in Ubuntu with this detailed guide on PPA.
|
||||
|
||||
[Read More][6]
|
||||
|
||||
### Error 0: Failed to download repository information
|
||||
|
||||
Many Ubuntu desktop users update their system through the graphical software updater tool. You are notified that updates are available for your system and you can click one button to start downloading and installing the updates.
|
||||
|
||||
Well, that’s what usually happens. But sometimes you’ll see an error like this:
|
||||
|
||||
![][7]
|
||||
|
||||
_**Failed to download repository information. Check your internet connection.**_
|
||||
|
||||
That’s a weird error because your internet connection is most likely working just fine and it still says to check the internet connection.
|
||||
|
||||
Did you note that I called it ‘error 0’? It’s because it’s not an error in itself. I mean, most probably, it has nothing to do with the internet connection. But there is no useful information other than this misleading error message.
|
||||
|
||||
If you see this error message and your internet connection is working fine, it’s time to put on your detective hat and [use your grey cells][8] (as [Hercule Poirot][9] would say).
|
||||
|
||||
You’ll have to use the command line here. You can [use Ctrl+Alt+T keyboard shortcut to open the terminal in Ubuntu][10]. In the terminal, use this command:
|
||||
|
||||
```
|
||||
sudo apt update
|
||||
```
|
||||
|
||||
Let the command finish. Observe the last three or four lines of its output. That will give you the real reason why the update fails. Here's an example:
|
||||
|
||||
![][11]
|
||||
|
||||
The rest of this tutorial shows how to handle the errors you just saw in the last few lines of the update command's output.
|
||||
|
||||
### Error 1: Problem With MergeList
|
||||
|
||||
When you run update in terminal, you may see an error “[problem with MergeList][12]” like below:
|
||||
|
||||
```
|
||||
E:Encountered a section with no Package: header,
|
||||
E:Problem with MergeList /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_precise_universe_binary-i386_Packages,
|
||||
E:The package lists or status file could not be parsed or opened.’
|
||||
```
|
||||
|
||||
For some reason, the files in the /var/lib/apt/lists directory got corrupted. You can delete all the files in this directory and run the update again to regenerate everything afresh. Use the following commands one by one:
|
||||
|
||||
```
|
||||
sudo rm -r /var/lib/apt/lists/*
|
||||
sudo apt-get clean && sudo apt-get update
|
||||
```
|
||||
|
||||
Your problem should be fixed.
|
||||
|
||||
### Error 2: Hash Sum mismatch
|
||||
|
||||
If you find an error that talks about [Hash Sum mismatch][13], the fix is the same as the one in the previous error.
|
||||
|
||||
```
|
||||
W:Failed to fetch bzip2:/var/lib/apt/lists/partial/in.archive.ubuntu.com_ubuntu_dists_oneiric_restricted_binary-i386_Packages Hash Sum mismatch,
|
||||
W:Failed to fetch bzip2:/var/lib/apt/lists/partial/in.archive.ubuntu.com_ubuntu_dists_oneiric_multiverse_binary-i386_Packages Hash Sum mismatch,
|
||||
E:Some index files failed to download. They have been ignored, or old ones used instead
|
||||
```
|
||||
|
||||
The error occurs possibly because of a mismatched metadata cache between the server and your system. You can use the following commands to fix it:
|
||||
|
||||
```
|
||||
sudo rm -rf /var/lib/apt/lists/*
|
||||
sudo apt update
|
||||
```
|
||||
|
||||
### Error 3: Failed to fetch with error 404 not found
|
||||
|
||||
If you try adding a PPA repository that is not available for your current [Ubuntu version][14], you’ll see that it throws a 404 not found error.
|
||||
|
||||
```
|
||||
W: Failed to fetch http://ppa.launchpad.net/venerix/pkg/ubuntu/dists/raring/main/binary-i386/Packages 404 Not Found
|
||||
E: Some index files failed to download. They have been ignored, or old ones used instead.
|
||||
```
|
||||
|
||||
You added a PPA hoping to install an application but it is not available for your Ubuntu version and you are now stuck with the update error. This is why you should check beforehand if a PPA is available for your Ubuntu version or not. I have discussed how to check the PPA availability in the detailed [PPA guide][6].
|
||||
|
||||
Anyway, the fix here is that you remove the troublesome PPA from your list of repositories. Note the PPA name from the error message. Go to _Software & Updates_ tool:
|
||||
|
||||
![Open Software & Updates][15]
|
||||
|
||||
In here, move to _Other Software_ tab and look for that PPA. Uncheck the box to [remove the PPA][16] from your system.
|
||||
|
||||
![Remove PPA Using Software & Updates In Ubuntu][17]
|
||||
|
||||
Your software list will be updated when you do that. Now if you run the update again, you shouldn’t see the error.
|
||||
|
||||
### Error 4: Failed to download package files error
|
||||
|
||||
A similar error is the **[failed to download package files error][18]**, like this:
|
||||
|
||||
![][19]
|
||||
|
||||
In this case, a newer version of the software is available but has not yet propagated to all the mirrors. This is easily fixed by changing the software source to the Main server. Please read this article for more details on the [failed to download package error][18].
|
||||
|
||||
Go to _Software & Updates_ and change the download server to the Main server there:
|
||||
|
||||
![][20]
|
||||
|
||||
### Error 5: GPG error: The following signatures couldn’t be verified
|
||||
|
||||
Adding a PPA may also result in the following [GPG error: The following signatures couldn’t be verified][21] when you try to run an update in terminal:
|
||||
|
||||
```
|
||||
W: GPG error: http://repo.mate-desktop.org saucy InRelease: The following signatures couldn’t be verified because the public key is not available: NO_PUBKEY 68980A0EA10B4DE8
|
||||
```
|
||||
|
||||
All you need to do is to fetch this public key in the system. Get the key number from the message. In the above message, the key is 68980A0EA10B4DE8.
|
||||
|
||||
This key can be used in the following manner:
|
||||
|
||||
```
|
||||
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 68980A0EA10B4DE8
|
||||
```
|
||||
|
||||
Once the key has been added, run the update again and it should be fine.
|
||||
|
||||
### Error 6: BADSIG error
|
||||
|
||||
Another signature related Ubuntu update error is [BADSIG error][22] which looks something like this:
|
||||
|
||||
```
|
||||
W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://extras.ubuntu.com precise Release: The following signatures were invalid: BADSIG 16126D3A3E5C1192 Ubuntu Extras Archive Automatic Signing Key
|
||||
W: GPG error: http://ppa.launchpad.net precise Release:
|
||||
The following signatures were invalid: BADSIG 4C1CBC1B69B0E2F4 Launchpad PPA for Jonathan French W: Failed to fetch http://extras.ubuntu.com/ubuntu/dists/precise/Release
|
||||
```
|
||||
|
||||
All the repositories are signed with GPG keys and, for some reason, your system considers the signatures invalid. You'll need to update the signature keys. The easiest way to do that is to regenerate the apt package lists (along with their signature keys) so that they contain the correct keys.
|
||||
|
||||
Use the following commands one by one in the terminal:
|
||||
|
||||
```
|
||||
cd /var/lib/apt
|
||||
sudo mv lists oldlist
|
||||
sudo mkdir -p lists/partial
|
||||
sudo apt-get clean
|
||||
sudo apt-get update
|
||||
```
|
||||
|
||||
### Error 7: Partial upgrade error
|
||||
|
||||
Running updates in terminal may throw this partial upgrade error:
|
||||
|
||||
![][23]
|
||||
|
||||
```
|
||||
Not all updates can be installed
|
||||
Run a partial upgrade, to install as many updates as possible
|
||||
```
|
||||
|
||||
Run the following command in terminal to fix this error:
|
||||
|
||||
```
|
||||
sudo apt-get install -f
|
||||
```
|
||||
|
||||
### Error 8: Could not get lock /var/cache/apt/archives/lock
|
||||
|
||||
This error happens when another program is using APT. Suppose you are installing something from Ubuntu Software Center and, at the same time, trying to run apt in the terminal.
|
||||
|
||||
```
|
||||
E: Could not get lock /var/cache/apt/archives/lock – open (11: Resource temporarily unavailable)
|
||||
E: Unable to lock directory /var/cache/apt/archives/
|
||||
```
|
||||
|
||||
Check if some other program might be using apt. It could be a command running in the terminal, Software Center, Software Updater, Software & Updates, or any other software that deals with installing and removing applications.
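
If you are not sure which process is holding the lock, you can ask directly (a sketch, assuming the `lsof` utility is installed):

```
sudo lsof /var/cache/apt/archives/lock
```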
|
||||
|
||||
If you can close other such programs, close them. If there is a process in progress, wait for it to finish.
|
||||
|
||||
If you cannot find any such programs, use the following [command to kill all such running processes][24]:
|
||||
|
||||
```
|
||||
sudo killall apt apt-get
|
||||
```
|
||||
|
||||
This is a tricky problem and if the problem still persists, please read this detailed tutorial on [fixing the unable to lock the administration directory error in Ubuntu][25].
|
||||
|
||||
_**Any other update error you encountered?**_
|
||||
|
||||
That compiles the list of frequent Ubuntu update errors you may encounter. I hope this helps you to get rid of these errors.
|
||||
|
||||
Have you encountered any other update error in Ubuntu recently that hasn’t been covered here? Do mention it in comments and I’ll try to do a quick tutorial on it.
|
||||
|
||||
--------------------------------------------------------------------------------

via: https://itsfoss.com/ubuntu-update-error/

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/update-ubuntu/
[2]: https://ubuntu.com/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/ubuntu-repositories.png?ssl=1
[4]: https://itsfoss.com/ubuntu-repositories/
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/12/what-is-ppa.png?ssl=1
[6]: https://itsfoss.com/ppa-guide/
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2013/04/Failed-to-download-repository-information-Ubuntu-13.04.png?ssl=1
[8]: https://idioms.thefreedictionary.com/little+grey+cells
[9]: https://en.wikipedia.org/wiki/Hercule_Poirot
[10]: https://itsfoss.com/ubuntu-shortcuts/
[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2013/11/Ubuntu-Update-error.jpeg?ssl=1
[12]: https://itsfoss.com/how-to-fix-problem-with-mergelist/
[13]: https://itsfoss.com/solve-ubuntu-error-failed-to-download-repository-information-check-your-internet-connection/
[14]: https://itsfoss.com/how-to-know-ubuntu-unity-version/
[15]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/05/software-updates-ubuntu-gnome.jpeg?ssl=1
[16]: https://itsfoss.com/how-to-remove-or-delete-ppas-quick-tip/
[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/remove_ppa_using_software_updates_in_ubuntu.jpg?ssl=1
[18]: https://itsfoss.com/fix-failed-download-package-files-error-ubuntu/
[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2014/09/Ubuntu_Update_error.jpeg?ssl=1
[20]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2014/09/Change_server_Ubuntu.jpeg?ssl=1
[21]: https://itsfoss.com/solve-gpg-error-signatures-verified-ubuntu/
[22]: https://itsfoss.com/solve-badsig-error-quick-tip/
[23]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2013/09/Partial_Upgrade_error_Elementary_OS_Luna.png?ssl=1
[24]: https://itsfoss.com/how-to-find-the-process-id-of-a-program-and-kill-it-quick-tip/
[25]: https://itsfoss.com/could-not-get-lock-error/

@ -0,0 +1,192 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Get the Size of a Directory in Linux)
[#]: via: (https://www.2daygeek.com/find-get-size-of-directory-folder-linux-disk-usage-du-command/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)

How to Get the Size of a Directory in Linux
======

You may have noticed that the size of a directory shows as only 4KB when you use the **[ls command][1]** to list directory contents in Linux.

Is that the right size? If not, what is it, and how do you get the size of a directory or folder in Linux?

That 4KB is the default size used to store the directory's meta information on the disk, not the size of its contents.

There are some applications on Linux that can **[get the actual size of a directory][2]**, but the disk usage (du) command is the one most widely used by Linux administrators.

I will show you how to get the size of a folder with various options.

### What's du Command?

The **[du command][3]** stands for `Disk Usage`. It's a standard Unix program used to estimate file space usage in the present working directory.

It summarizes disk usage recursively to report the size of a directory and its sub-directories.

As I said, a directory's size shows as only 4KB when you use the ls command. See the output below.

```
$ ls -lh | grep ^d

drwxr-xr-x  3 daygeek daygeek 4.0K Aug  2 13:57 Bank_Details
drwxr-xr-x  2 daygeek daygeek 4.0K Mar 15  2019 daygeek
drwxr-xr-x  6 daygeek daygeek 4.0K Feb 16  2019 drive-2daygeek
drwxr-xr-x 13 daygeek daygeek 4.0K Jan  6  2019 drive-mageshm
drwxr-xr-x 15 daygeek daygeek 4.0K Sep 29 21:32 Thanu_Photos
```
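
For contrast, asking du for a summary of one of those directories reports the size of its contents; the 756K figure here comes from the summary output shown later in this article:

```
$ du -sh Bank_Details
756K    Bank_Details
```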

### 1) How to Check Only the Size of the Parent Directory on Linux

Use the du command format below to get the total size of a given directory. In this example, we are going to get the total size of the **“/home/daygeek/Documents”** directory.

```
$ du -hs /home/daygeek/Documents
or
$ du -h --max-depth=0 /home/daygeek/Documents/

20G     /home/daygeek/Documents
```
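
If you need the result in exact bytes instead of human-readable units, GNU du can print the apparent size in bytes; a quick variant of the same command:

```
# -b is the GNU du shorthand for --apparent-size --block-size=1
$ du -sb /home/daygeek/Documents
```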

**Details:**

  * du – the command itself
  * -h – print sizes in human-readable format (e.g., 1K 234M 2G)
  * -s – display only a total for each argument
  * --max-depth=N – print a directory's total only if it is N or fewer levels below the starting argument

### 2) How to Get the Size of Each Directory on Linux

Use the du command format below to get the total size of each directory, including sub-directories.

In this example, we are going to get the total size of the **“/home/daygeek/Documents”** directory and its sub-directories.

```
$ du -h /home/daygeek/Documents/ | sort -rh | head -20

20G     /home/daygeek/Documents/
9.6G    /home/daygeek/Documents/drive-2daygeek
6.3G    /home/daygeek/Documents/Thanu_Photos
5.3G    /home/daygeek/Documents/Thanu_Photos/Camera
5.3G    /home/daygeek/Documents/drive-2daygeek/Thanu-videos
3.2G    /home/daygeek/Documents/drive-mageshm
2.3G    /home/daygeek/Documents/drive-2daygeek/Thanu-Photos
2.2G    /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month
916M    /home/daygeek/Documents/drive-mageshm/Tanisha
454M    /home/daygeek/Documents/drive-mageshm/2g-backup
415M    /home/daygeek/Documents/Thanu_Photos/WhatsApp Video
300M    /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/Jan-2017
288M    /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/Oct-2017
226M    /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/Sep-2017
219M    /home/daygeek/Documents/Thanu_Photos/WhatsApp Documents
213M    /home/daygeek/Documents/drive-mageshm/photos
163M    /home/daygeek/Documents/Thanu_Photos/WhatsApp Video/Sent
161M    /home/daygeek/Documents/Thanu_Photos/WhatsApp Images
154M    /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/June-2017
150M    /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/Nov-2016
```

### 3) How to Get a Summary of Each Directory on Linux

Use the du command format below to get only the summary for each directory.

```
$ du -hs /home/daygeek/Documents/* | sort -rh | head -10

9.6G    /home/daygeek/Documents/drive-2daygeek
6.3G    /home/daygeek/Documents/Thanu_Photos
3.2G    /home/daygeek/Documents/drive-mageshm
756K    /home/daygeek/Documents/Bank_Details
272K    /home/daygeek/Documents/user-friendly-zorin-os-15-has-been-released-TouchInterface1.png
172K    /home/daygeek/Documents/user-friendly-zorin-os-15-has-been-released-NightLight.png
164K    /home/daygeek/Documents/ConfigServer Security and Firewall (csf) Cheat Sheet.pdf
132K    /home/daygeek/Documents/user-friendly-zorin-os-15-has-been-released-Todo.png
112K    /home/daygeek/Documents/user-friendly-zorin-os-15-has-been-released-ZorinAutoTheme.png
96K     /home/daygeek/Documents/distro-info.xlsx
```

### 4) How to Display the Size of Each Directory and Exclude Sub-Directories on Linux

Use the du command format below to display the total size of each directory, excluding sub-directories.

```
$ du -hS /home/daygeek/Documents/ | sort -rh | head -20

5.3G    /home/daygeek/Documents/Thanu_Photos/Camera
5.3G    /home/daygeek/Documents/drive-2daygeek/Thanu-videos
2.3G    /home/daygeek/Documents/drive-2daygeek/Thanu-Photos
1.5G    /home/daygeek/Documents/drive-mageshm
831M    /home/daygeek/Documents/drive-mageshm/Tanisha
454M    /home/daygeek/Documents/drive-mageshm/2g-backup
300M    /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/Jan-2017
288M    /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/Oct-2017
253M    /home/daygeek/Documents/Thanu_Photos/WhatsApp Video
226M    /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/Sep-2017
219M    /home/daygeek/Documents/Thanu_Photos/WhatsApp Documents
213M    /home/daygeek/Documents/drive-mageshm/photos
163M    /home/daygeek/Documents/Thanu_Photos/WhatsApp Video/Sent
154M    /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/June-2017
150M    /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/Nov-2016
127M    /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/Dec-2016
100M    /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/Oct-2016
94M     /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/Nov-2017
92M     /home/daygeek/Documents/Thanu_Photos/WhatsApp Images
90M     /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/Dec-2017
```

### 5) How to Get Only the Size of First-Level Sub-Directories on Linux

If you want to get the size of the first-level sub-directories of a given directory, including their own sub-directories, use the command format below.

```
$ du -h --max-depth=1 /home/daygeek/Documents/

3.2G    /home/daygeek/Documents/drive-mageshm
4.0K    /home/daygeek/Documents/daygeek
756K    /home/daygeek/Documents/Bank_Details
9.6G    /home/daygeek/Documents/drive-2daygeek
6.3G    /home/daygeek/Documents/Thanu_Photos
20G     /home/daygeek/Documents/
```

### 6) How to Get the Grand Total in the du Command Output

If you want a grand total in the du command output, add the `c` option as shown below; it appends a `total` line.

```
$ du -hsc /home/daygeek/Documents/* | sort -rh | head -10

20G     total
9.6G    /home/daygeek/Documents/drive-2daygeek
6.3G    /home/daygeek/Documents/Thanu_Photos
3.2G    /home/daygeek/Documents/drive-mageshm
756K    /home/daygeek/Documents/Bank_Details
272K    /home/daygeek/Documents/user-friendly-zorin-os-15-has-been-released-TouchInterface1.png
172K    /home/daygeek/Documents/user-friendly-zorin-os-15-has-been-released-NightLight.png
164K    /home/daygeek/Documents/ConfigServer Security and Firewall (csf) Cheat Sheet.pdf
132K    /home/daygeek/Documents/user-friendly-zorin-os-15-has-been-released-Todo.png
112K    /home/daygeek/Documents/user-friendly-zorin-os-15-has-been-released-ZorinAutoTheme.png
```
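
One more option worth knowing: GNU du can skip paths matching a shell pattern with `--exclude`. A quick sketch; the pattern here is just an illustrative example:

```
# Total size of Documents, ignoring everything under Thanu_Photos (example pattern)
$ du -hs --exclude='Thanu_Photos*' /home/daygeek/Documents/
```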

--------------------------------------------------------------------------------

via: https://www.2daygeek.com/find-get-size-of-directory-folder-linux-disk-usage-du-command/

作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/linux-unix-ls-command-display-directory-contents/
[2]: https://www.2daygeek.com/how-to-get-find-size-of-directory-folder-linux/
[3]: https://www.2daygeek.com/linux-check-disk-usage-files-directories-size-du-command/

@ -7,22 +7,22 @@

[#]: via: (https://nicolasparada.netlify.com/posts/go-messenger-oauth/)
[#]: author: (Nicolás Parada https://nicolasparada.netlify.com/)

Building a Messenger App: OAuth
构建一个即时消息应用(二):OAuth
======

[Previous part: Schema][1].
[上一篇:模式](https://linux.cn/article-11396-1.html),[原文][1]。

In this post we start the backend by adding social login.
在这篇帖子中,我们将会通过为应用添加社交登录功能来开始后端的开发。

This is how it works: the user clicks on a link that redirects them to the GitHub authorization page. The user grants access to their info and gets redirected back, logged in. The next time they try to log in, they won't be asked to grant permission again; it is remembered, so the login flow is as fast as a single click.
社交登录的工作方式十分简单:用户点击链接,然后重定向到 GitHub 授权页面。当用户授予我们对他的个人信息的访问权限之后,就会重定向回登录页面。下一次尝试登录时,系统将不会再次请求授权,也就是说,我们的应用已经记住了这个用户,使得整个登录流程快得就像单击一下那么简单。

Internally, the story is more complex though. First we need to register a new [OAuth app on GitHub][2].
如果进一步考虑其内部实现的话,过程就会变得复杂起来。首先,我们需要注册一个新的 [GitHub OAuth 应用][2]。

The important part is the callback URL. Set it to `http://localhost:3000/api/oauth/github/callback`. In development we are on localhost, so when you ship the app to production, register a new app with the correct callback URL.
这一步中,比较重要的是回调 URL。我们将它设置为 `http://localhost:3000/api/oauth/github/callback`,这是因为在开发过程中,我们总是在本地主机上工作。一旦你要将应用交付生产,请使用正确的回调 URL 注册一个新的应用。

This will give you a client id and a secret key. Don't share them with anyone 👀
注册以后,你将会收到“客户端 id”和“安全密钥”。安全起见,请不要与任何人分享它们 👀

With that out of the way, let's start writing some code. Create a `main.go` file:
这些都处理好之后,让我们开始写一些代码吧。现在,创建一个 `main.go` 文件:

```
package main
@ -139,7 +139,7 @@ func intEnv(key string, fallbackValue int) int {
}
```

Install dependencies:
安装依赖项:

```
go get -u github.com/gorilla/securecookie
@ -151,28 +151,26 @@ go get -u github.com/matryer/way
go get -u golang.org/x/oauth2
```

We use a `.env` file to save secret keys and other configurations. Create it with at least this content:
我们将会使用 `.env` 文件来保存密钥和其他配置。请创建这个文件,并保证里面至少包含以下内容:

```
GITHUB_CLIENT_ID=your_github_client_id
GITHUB_CLIENT_SECRET=your_github_client_secret
```

The other environment variables we use are:
我们还要用到的其他环境变量有:

  * `PORT`: The port in which the server runs. Defaults to `3000`.
  * `ORIGIN`: Your domain. Defaults to `http://localhost:3000/`. The port can also be extracted from this.
  * `DATABASE_URL`: The Cockroach address. Defaults to `postgresql://root@127.0.0.1:26257/messenger?sslmode=disable`.
  * `HASH_KEY`: Key to sign cookies. Yeah, we'll use signed cookies for security.
  * `JWT_KEY`: Key to sign JSON web tokens.
  * `PORT`:服务器运行的端口,默认值是 `3000`。
  * `ORIGIN`:你的域名,默认值是 `http://localhost:3000/`。端口也可以从中提取。
  * `DATABASE_URL`:Cockroach 数据库的地址,默认值是 `postgresql://root@127.0.0.1:26257/messenger?sslmode=disable`。
  * `HASH_KEY`:用于为 cookie 签名的密钥。没错,我们会使用签名的 cookie 来确保安全。
  * `JWT_KEY`:用于签署 JSON Web 令牌(JSON Web Token)的密钥。

因为代码中已经设定了默认值,所以你也不用把它们写到 `.env` 文件中。

在读取配置并连接到数据库之后,我们会创建一个 OAuth 配置。我们会使用 `ORIGIN` 来构建回调 URL(与我们在 GitHub 页面上注册的一致)。数据范围(scope)设置为 “read:user”,这会允许我们读取公开的用户信息,这里我们只需要用户名和头像。然后初始化 cookie 和 JWT 签名器,定义一些端点并启动服务器。

Because they have default values, you don't need to write them in the `.env` file.

After reading the configuration and connecting to the database, we create an OAuth config. We use the origin to build the callback URL (the same we registered on the GitHub page). And we set the scope to "read:user". This will give us permission to read the public user info. That's because we just need their username and avatar. Then we initialize the cookie and JWT signers. Define some endpoints and start the server.

Before implementing those HTTP handlers, let's write a couple of functions to send HTTP responses.
在实现 HTTP 处理程序之前,让我们编写一些函数来发送 HTTP 响应。

```
func respond(w http.ResponseWriter, v interface{}, statusCode int) {
@ -192,11 +190,11 @@ func respondError(w http.ResponseWriter, err error) {
}
```

The first one sends JSON, and the second one logs the error to the console and returns a `500 Internal Server Error`.
第一个函数用来发送 JSON,而第二个函数将错误记录到控制台,并返回一个 `500 Internal Server Error` 错误信息。

### OAuth Start
### OAuth 开始

So, the user clicks on a link that says "Access with GitHub"... That link points to this endpoint, `/api/oauth/github`, which will redirect the user to GitHub.
所以,用户点击写着 “Access with GitHub” 的链接。该链接指向 `/api/oauth/github`,这将会把用户重定向到 GitHub。

```
func githubOAuthStart(w http.ResponseWriter, r *http.Request) {
@ -222,11 +220,11 @@ func githubOAuthStart(w http.ResponseWriter, r *http.Request) {
}
```

OAuth2 uses a mechanism to prevent CSRF attacks, so it requires a "state". We use nanoid to create a random string and use that as the state. We save it as a cookie too.
OAuth2 使用一种机制来防止 CSRF 攻击,因此它需要一个“状态”(state)。我们使用 nanoid 来创建一个随机字符串,并用它作为状态。我们也把它保存为一个 cookie。

### OAuth Callback
### OAuth 回调

Once the user grants access to their info on the GitHub page, they will be redirected to this endpoint. The URL will come with the state and a code in the query string: `/api/oauth/github/callback?state=&code=`
一旦用户授权我们访问他的个人信息,他将会被重定向到这个端点。这个 URL 的查询字符串中将会包含状态(state)和授权码(code):`/api/oauth/github/callback?state=&code=`

```
const jwtLifetime = time.Hour * 24 * 14
@ -341,19 +339,19 @@ func githubOAuthCallback(w http.ResponseWriter, r *http.Request) {
}
```

First we try to decode the cookie with the state we saved before, and compare it with the state that comes in the query string. In case they don't match, we return a `418 I'm a teapot` error.
首先,我们会尝试对之前保存了状态的 cookie 进行解码,并将其与查询字符串中的状态进行比较。如果它们不匹配,我们会返回一个 `418 I'm a teapot`(我是一个茶壶)错误。

Then we exchange the code for a token. This token is used to create an HTTP client to make requests to the GitHub API. So we do a GET request to `https://api.github.com/user`. This endpoint will give us the current authenticated user info in JSON format. We decode it to get the user ID, login (username) and avatar URL.
接着,我们用授权码换取一个令牌。这个令牌被用于创建 HTTP 客户端,来向 GitHub API 发出请求。最终我们会向 `https://api.github.com/user` 发送一个 GET 请求,这个端点将会以 JSON 格式返回当前通过身份验证的用户信息。我们将其解码,从中获取用户的 ID、登录名(用户名)和头像 URL。

Then we try to find a user with that GitHub ID in the database. If none is found, we create one using that data.
然后我们会尝试在数据库中查找具有该 GitHub ID 的用户。如果没有找到,就使用该数据创建一个新用户。

Then, with the newly created user, we issue a JSON web token with the user ID as Subject and redirect to the frontend with the token, along with the expiration date in the query string.
之后,对于新创建的用户,我们会签发一个以用户 ID 为主题(subject)的 JSON Web 令牌,并携带该令牌重定向到前端,查询字符串中一并包含该令牌的到期日。

The web app will be covered in another post, but the URL you are being redirected to is `/callback?token=&expires_at=`. There we'll have some JavaScript to extract the token and expiration date from the URL, and do a GET request to `/api/auth_user` with the token in the `Authorization` header in the form of `Bearer token_here` to get the authenticated user and save it to localStorage.
Web 应用部分将在另一篇帖子中介绍,这里你被重定向到的 URL 是 `/callback?token=&expires_at=`。在那里,我们将用 JavaScript 从 URL 中提取令牌和到期日,并把令牌以 `Bearer token_here` 的形式放进 `Authorization` 标头中,向 `/api/auth_user` 发起 GET 请求,来获取通过身份验证的用户并将其保存到 localStorage。
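
下面用 curl 对这个端点做一个快速验证(仅为示意;`token_here` 是上文的占位符,请替换为回调实际返回的令牌):

```
# token_here 为占位符,替换为 /callback?token=... 中拿到的真实令牌
curl -H "Authorization: Bearer token_here" http://localhost:3000/api/auth_user
```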

### Guard Middleware
### Guard 中间件

To get the current authenticated user we use a middleware. That's because in future posts we'll have more endpoints that require authentication, and a middleware allows us to share that functionality.
为了获取当前通过身份验证的用户,我们设计了 Guard 中间件。这是因为在接下来的文章中,我们会有更多需要身份认证的端点,而中间件可以让我们共享这一功能。

```
type ContextKey struct {
@ -388,9 +386,9 @@ func guard(handler http.HandlerFunc) http.HandlerFunc {
}
```

First we try to read the token from the `Authorization` header or from a `token` field in the URL query string. If none is found, we return a `401 Unauthorized` error. Then we decode the claims in the token and use the Subject as the current authenticated user ID.
首先,我们尝试从 `Authorization` 标头或者 URL 查询字符串中的 `token` 字段读取令牌。如果没有找到,我们返回 `401 Unauthorized`(未授权)错误。然后我们对令牌中的声明(claims)进行解码,并使用其主题(subject)作为当前通过身份验证的用户 ID。

Now, we can wrap any `http.HandlerFunc` that needs authentication with this middleware, and we'll have the authenticated user ID in the context.
现在,我们可以用这个中间件来封装任何需要身份认证的 `http.HandlerFunc`,这样在处理函数的上下文中就能拿到通过身份验证的用户 ID。

```
var guarded = guard(func(w http.ResponseWriter, r *http.Request) {
@ -398,7 +396,7 @@ var guarded = guard(func(w http.ResponseWriter, r *http.Request) {
})
```

### Get Authenticated User
### 获取认证用户

```
func getAuthUser(w http.ResponseWriter, r *http.Request) {
@ -422,13 +420,13 @@ func getAuthUser(w http.ResponseWriter, r *http.Request) {
}
```

We use the guard middleware to get the current authenticated user ID and do a query to the database.
我们使用 Guard 中间件来获取当前经过身份认证的用户 ID 并查询数据库。

* * *

That will cover the OAuth process on the backend. In the next part we'll see how to start conversations with other users.
这一部分涵盖了后端的 OAuth 流程。在下一篇帖子中,我们将会看到如何开始与其他用户的对话。

[Source Code][3]
[源代码][3]

--------------------------------------------------------------------------------

@ -0,0 +1,69 @@

[#]: collector: (lujun9972)
[#]: translator: (lnrCoder)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (DevSecOps pipelines and tools: What you need to know)
[#]: via: (https://opensource.com/article/19/10/devsecops-pipeline-and-tools)
[#]: author: (Sagar Nangare https://opensource.com/users/sagarnangare)

你需要知道的 DevSecOps 流程及工具
======

DevSecOps 对 DevOps 进行了改进,以确保安全性仍然是该流程的一个重要部分。

![An intersection of pipes.][1]

到目前为止,DevOps 在 IT 世界中已广为人知,但它并非完美无缺。试想一下,你已经在一个项目的现代应用程序交付中实施了所有 DevOps 工程实践,并已经到达开发流程的末尾,但是渗透测试团队(内部或外部)检测到安全漏洞并提出了报告。现在,你必须重新启动所有流程,并要求开发人员修复该漏洞。

在基于 DevOps 的软件开发生命周期(SDLC)中,这并不太繁琐,但它确实会浪费时间并影响交付进度。如果从 SDLC 初期就集成了安全性,你可能早已跟踪到了该漏洞,并在开发流程中就消除了它。但是,如上述情形那样,将安全性推到开发流程的最后,会导致更长的开发生命周期。

这就是引入 DevSecOps 的原因,它以自动化的方式巩固了整个软件交付周期。

在现代 DevOps 方法中,组织广泛使用容器来托管应用程序,我们看到 [Kubernetes][2] 和 [Istio][3] 使用得较多。但是,这些工具都有其自身的漏洞。例如,云原生计算基金会(CNCF)最近完成了一项 [Kubernetes 安全审计][4],发现了几个问题。DevOps 流程中使用的所有工具在流程运行时都需要进行安全检查,DevSecOps 会推动管理员监视工具的存储库以获取升级和补丁。

### 什么是 DevSecOps?

与 DevOps 一样,DevSecOps 是开发人员和 IT 运营团队在开发和部署软件应用程序时所遵循的一种思维方式或文化。它将主动的、自动化的安全审计以及渗透测试集成到敏捷应用程序开发之中。

要使用 [DevSecOps][5],你需要:

  * 从 SDLC 一开始就引入安全性概念,以最大程度地减少软件代码中的漏洞。
  * 确保每个人(包括开发人员和 IT 运营团队)共同承担在其任务中遵循安全实践的责任。
  * 在 DevOps 工作流程的开始就集成安全控件、工具和流程,这将在软件交付的每个阶段启用自动安全检查。

DevOps 一直致力于在开发和发布流程中纳入安全性以及质量保证(QA)、数据库管理和其他所有方面。而 DevSecOps 是该流程的一个演进,以确保安全性永远不会被遗忘,而是成为该流程的重要组成部分。

### 了解 DevSecOps 流程

典型的 DevOps 流程分为不同的阶段;典型的 SDLC 流程包括计划、编码、构建、测试、发布和部署等阶段。在 DevSecOps 中,每个阶段都会应用特定的安全检查。

  * **计划:**执行安全性分析并创建测试计划,以确定在何处、如何以及何时进行测试的方案。
  * **编码:**部署整理(lint)工具和 Git 控件来保护密码和 API 密钥。
  * **构建:**在构建代码时,结合使用静态应用程序安全测试(SAST)工具来跟踪代码中的缺陷,然后再部署到生产环境中。这些工具针对特定的编程语言。
  * **测试:**在运行时使用动态应用程序安全测试(DAST)工具来测试你的应用程序。这些工具可以检测与用户身份验证、授权、SQL 注入以及 API 相关端点有关的错误。
  * **发布:**在发布应用程序之前,使用安全分析工具来进行全面的渗透测试和漏洞扫描。
  * **部署:**在运行时完成上述测试后,将安全的版本发送到生产环境中进行最终部署。

### DevSecOps 工具

SDLC 的每个阶段都有对应的工具可用。有些是商业产品,但大多数是开源的。在下一篇文章中,我将更多地讨论在流程的不同阶段使用的工具。

随着基于现代 IT 基础设施的企业所面临的安全威胁日益复杂,DevSecOps 将发挥越来越关键的作用。然而,DevSecOps 流程需要随着时间的推移不断改进,而不是指望一次性实施所有安全变更,这样才能消除回退或交付失败的可能性。

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/10/devsecops-pipeline-and-tools

作者:[Sagar Nangare][a]
选题:[lujun9972][b]
译者:[lnrCoder](https://github.com/lnrCoder)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/sagarnangare
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW-Internet_construction_9401467_520x292_0512_dc.png?itok=RPkPPtDe (An intersection of pipes.)
[2]: https://opensource.com/resources/what-is-kubernetes
[3]: https://opensource.com/article/18/9/what-istio
[4]: https://www.cncf.io/blog/2019/08/06/open-sourcing-the-kubernetes-security-audit/
[5]: https://resources.whitesourcesoftware.com/blog-whitesource/devsecops

@ -0,0 +1,99 @@

[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Object-Oriented Programming and Essential State)
[#]: via: (https://theartofmachinery.com/2019/10/13/oop_and_essential_state.html)
[#]: author: (Simon Arneaud https://theartofmachinery.com)

面向对象编程和根本状态
======

早在 2015 年,Brian Will 就撰写了一篇颇具挑衅性的博客文章:[面向对象编程:一个灾难故事][1]。他随后发布了一个名为[面向对象编程很糟糕][2]的视频,该视频更加详细。我建议你花些时间观看视频,下面是我的简短摘要:

OOP 的柏拉图式理想是一堆相互解耦的对象,它们彼此之间发送无状态的消息。没有人真的像这样构建软件,Brian 指出这甚至没有意义:对象需要知道向哪个对象发送消息,这意味着它们需要相互引用。视频的大部分内容讲的是人们试图将对象耦合起来以实现控制流,同时又假装它们在设计上是解耦的。

总的来说,他的想法与我自己的 OOP 经验产生了共鸣:对象本身没有问题,但是我从来没有对以“面向对象”的方式构建程序控制流感到满意,而试图使代码“正确地”面向对象似乎总是在制造不必要的复杂性。

有一件事我认为他没有完全解释清楚。他直截了当地说“封装没有作用”,但在脚注中补充了“在细粒度的代码级别”,并继续承认对象有时可以奏效,封装在库和文件级别也可以奏效。但是他没有确切解释为什么有时奏效、有时不奏效,以及如何/在何处划清界限。有人可能会说这使他的“OOP 不好”的说法有缺陷,但是我认为他的观点是正确的,而这条界限就在根本状态(essential state)和偶发状态(accidental state)之间。

如果你以前从未听说过“根本”和“偶发”这两个术语的这种用法,你应该读一读 Fred Brooks 的经典文章[《没有银弹》][3]。(顺便说一句,他写过许多有关构建软件系统的精彩文章。)我以前也写过[关于根本复杂性和偶发复杂性的文章][4],这里是一个简短的摘要:软件是复杂的,部分原因是我们希望软件解决混乱的现实世界问题,我们将这称为“根本复杂性”。“偶发复杂性”是所有其他的复杂性,它们存在是因为我们在尝试用硅和金属来解决与硅和金属无关的问题。例如,对于大多数程序而言,用于内存管理、在内存与磁盘之间传输数据或解析文本格式的代码,都是“偶发复杂性”。

假设你正在构建一个支持多个频道的聊天应用,消息可以随时到达任何频道。有些频道特别有趣,当有新消息传入时,用户希望得到通知;其他频道被静音:消息会被存储,但用户不会受到打扰。你需要跟踪每个频道的用户首选设置。

一种实现方法是在频道和频道设置之间使用映射(map,也称为哈希表、字典或关联数组)。注意,映射正是 Brian Will 所说的可以作为对象奏效的抽象数据类型(ADT)。

如果我们有一个调试器,并查看内存中的这个映射对象,我们会看到什么?我们当然会找到频道 ID 和频道设置数据(或者至少是指向它们的指针)。但是我们还会找到其他数据。如果这个映射是用红黑树实现的,我们会看到带有红/黑标签和指向其他节点的指针的树节点对象。与频道相关的数据是根本状态,而树节点是偶发状态。不过,请注意以下几点:该映射有效地封装了它的偶发状态,你可以用 AVL 树实现的另一个映射替换它,你的聊天程序仍然可以正常工作。另一方面,映射并没有封装根本状态(仅通过 `get()` 和 `set()` 方法访问数据并不是封装)。事实上,映射对根本状态是尽可能不可知的:你可以使用基本相同的映射数据结构来存储与频道或通知无关的其他映射。

这就是映射 ADT 如此成功的原因:它封装了偶发状态,并与根本状态解耦。如果你仔细想想,Brian 描述的封装问题都是试图封装根本状态的问题,而他所描述的封装的好处,则是封装偶发状态所带来的好处。

要使整个软件系统都达到这一理想相当困难,但扩展开来,我认为它看起来像这样:

  * 没有全局的可变状态
  * 偶发状态被封装起来(在对象、模块或任何其他形式中)
  * 无状态的偶发复杂性被封装在单独的函数中,与数据解耦
  * 使用诸如依赖注入之类的技巧使输入和输出变得明确
  * 组件被完全拥有,并从易于识别的位置进行控制

其中有些原则违背了我很久以来的本能。例如,假设你有一个数据库查询函数,在数据库连接的处理被隐藏在该函数内部、唯一的参数是查询参数时,接口看起来会简单得多。但是,当你用这样的函数构建软件系统时,协调数据库的使用实际上变得更加复杂。组件不仅各自以自己的方式做事,而且还试图把自己所做的事情隐藏为“实现细节”。数据库查询需要数据库连接这一事实从来都不是实现细节。如果某些东西无法被隐藏,那么把它显露出来才是更合理的做法。

我对把面向对象编程和函数式编程对立起来持谨慎态度,但我认为看看来自函数式编程的 OOP 的对立面是很有趣的:OOP 试图封装一切,包括无法封装的根本复杂性,而纯函数式编程往往会把一切都变得明确,包括一些偶发复杂性。在大多数时候这没什么问题,但有时候(比如[在纯函数式语言中构建自我引用的数据结构][5]),这样的设计更多是为了纯粹性,而不是为了简单(这就是为什么 [Haskell 包含了一些“逃生出口”(escape hatches)][6])。我之前写过一篇关于这种中间立场的文章,即所谓的[“弱纯性”(weak purity)][7]。

Brian 发现封装在更大的规模上奏效,有几个原因。一个是,由于尺寸的关系,较大的组件更有可能包含偶发状态。另一个是,“偶发”与否取决于你要解决的问题。从聊天程序用户的角度来看,“偶发复杂性”是与消息、频道和用户等无关的任何事物。但是,当你把问题分解为子问题时,更多的事物会变成“根本”的。例如,在解决“构建一个聊天应用”问题时,可以说频道名称和频道 ID 之间的映射是偶发复杂性;而在解决“实现 `getChannelIdByName()` 函数”这个子问题时,它就是根本复杂性。因此,封装对子组件的作用不如对父组件的作用大。

顺便说一句,在视频的结尾,Brian Will 想知道是否有语言支持那种_无法_访问其所在作用域的匿名函数。[D][8] 语言可以。D 中的匿名 lambda 通常是闭包,但是如果你想要的话,也可以声明匿名的无状态函数:

```
import std.stdio;

void main()
{
    int x = 41;

    // Value from immediately executed lambda
    auto v1 = () {
        return x + 1;
    }();
    writeln(v1);

    // Same thing
    auto v2 = delegate() {
        return x + 1;
    }();
    writeln(v2);

    // Plain functions aren't closures
    auto v3 = function() {
        // Can't access x
        // Can't access any mutable global state either if also marked pure
        return 42;
    }();
    writeln(v3);
}
```

--------------------------------------------------------------------------------

via: https://theartofmachinery.com/2019/10/13/oop_and_essential_state.html

作者:[Simon Arneaud][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://theartofmachinery.com
[b]: https://github.com/lujun9972
[1]: https://medium.com/@brianwill/object-oriented-programming-a-personal-disaster-1b044c2383ab
[2]: https://www.youtube.com/watch?v=QM1iUe6IofM
[3]: http://www.cs.nott.ac.uk/~pszcah/G51ISS/Documents/NoSilverBullet.html
[4]: https://theartofmachinery.com/2017/06/25/compression_complexity_and_software.html
[5]: https://wiki.haskell.org/Tying_the_Knot
[6]: https://en.wikibooks.org/wiki/Haskell/Mutable_objects#The_ST_monad
[7]: https://theartofmachinery.com/2016/03/28/dirtying_pure_functions_can_be_useful.html
[8]: https://dlang.org