
Merge remote-tracking branch 'LCTT/master'

This commit is contained in:
Xingyu Wang 2019-11-04 11:50:17 +08:00
commit 89dfe533b9
12 changed files with 1029 additions and 30 deletions

View File

@@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11535-1.html)
[#]: subject: (How to Enable EPEL Repository on CentOS 8 and RHEL 8 Server)
[#]: via: (https://www.linuxtechi.com/enable-epel-repo-centos8-rhel8-server/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
@@ -10,23 +10,21 @@
如何在 CentOS 8 和 RHEL 8 服务器上启用 EPEL 仓库
======
**EPEL** 代表 “Extra Packages for Enterprise Linux”它是一个免费的开源附加软件包仓库,可用于 **CentOS****RHEL** 服务器。顾名思义EPEL 仓库提供了额外的软件包,它们在 [CentOS 8][1]和 [RHEL 8][2] 的默认软件包仓库中不可用。
EPEL 代表 “Extra Packages for Enterprise Linux”它是一个自由开源的附加软件包仓库,可用于 CentOS 和 RHEL 服务器。顾名思义EPEL 仓库提供了额外的软件包,这些软件在 [CentOS 8][1] 和 [RHEL 8][2] 的默认软件包仓库中不可用。
在本文中,我们将演示如何在 CentOS 8 和 RHEL 8 服务器上启用和使用 epel 存储库。
在本文中,我们将演示如何在 CentOS 8 和 RHEL 8 服务器上启用和使用 EPEL 仓库。
[![EPEL-Repo-CentOS8-RHEL8][3]][4]
![](https://img.linux.net.cn/data/attachment/album/201911/04/113307wz4y3lnczzlxzn2j.jpg)
### EPEL 仓库的先决条件
* Minimal CentOS 8 和 RHEL 8 服务器
* 最小化安装的 CentOS 8 和 RHEL 8 服务器
* root 或 sudo 管理员权限
* 网络连接
### 在 RHEL 8.x 服务器上安装并启用 EPEL 仓库
登录或 SSH 到你的 RHEL 8.x 服务器并执行以下 dnf 命令来安装 EPEL rpm 包,
登录或 SSH 到你的 RHEL 8.x 服务器并执行以下 `dnf` 命令来安装 EPEL rpm 包,
```
[root@linuxtechi ~]# dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm -y
@@ -34,9 +32,9 @@
上面命令的输出将如下所示,
![dnf-install-epel-repo-rehl8][3]
![dnf-install-epel-repo-rehl8][5]
epel rpm 包成功安装后,它将自动启用并配置其 yum/dnf 仓库。运行以下 dnf 或 yum 命令,以验证是否启用了 EPEL 仓库,
EPEL rpm 包成功安装后,它将自动启用并配置其 yum/dnf 仓库。运行以下 `dnf``yum` 命令,以验证是否启用了 EPEL 仓库,
```
[root@linuxtechi ~]# dnf repolist epel
@@ -44,11 +42,11 @@ epel rpm 包成功安装后,它将自动启用并配置其 yum/dnf 仓库。
[root@linuxtechi ~]# dnf repolist epel -v
```
![epel-repolist-rhel8][3]
![epel-repolist-rhel8][6]
### 在 CentOS 8.x 服务器上安装并启用 EPEL 仓库
登录或 SSH 到你的 CentOS 8 服务器,并执行以下 dnf 或 yum 命令来安装 “**epel-release**” rpm 软件包。在 CentOS 8 服务器中epel rpm 在其默认软件包仓库中。
登录或 SSH 到你的 CentOS 8 服务器,并执行以下 `dnf``yum` 命令来安装 `epel-release` rpm 软件包。在 CentOS 8 服务器中EPEL rpm 在其默认软件包仓库中。
```
[root@linuxtechi ~]# dnf install epel-release -y
@@ -56,7 +54,7 @@ epel rpm 包成功安装后,它将自动启用并配置其 yum/dnf 仓库。
[root@linuxtechi ~]# yum install epel-release -y
```
执行以下命令来验证 CentOS 8 服务器上 epel 仓库的状态,
执行以下命令来验证 CentOS 8 服务器上 EPEL 仓库的状态,
```
[root@linuxtechi ~]# dnf repolist epel
@@ -82,11 +80,11 @@ Total packages: 1,977
[root@linuxtechi ~]#
```
以上命令的输出说明我们已经成功启用了epel 仓库。 让我们在 EPEL 仓库上执行一些基本操作。
以上命令的输出说明我们已经成功启用了 EPEL 仓库。让我们在 EPEL 仓库上执行一些基本操作。
### 列出 epel 仓库种所有可用包
### 列出 EPEL 仓库中所有可用包
如果要列出 epel 仓库中的所有的软件包,请运行以下 dnf 命令,
如果要列出 EPEL 仓库中的所有软件包,请运行以下 `dnf` 命令,
```
[root@linuxtechi ~]# dnf repository-packages epel list
@@ -116,9 +114,9 @@ zvbi-fonts.noarch 0.2.35-9.el8 epel
[root@linuxtechi ~]#
```
### 从 epel 仓库中搜索软件包
### 从 EPEL 仓库中搜索软件包
假设我们要搜索 epel 仓库中的 Zabbix 包,请执行以下 dnf 命令,
假设我们要搜索 EPEL 仓库中的 Zabbix 包,请执行以下 `dnf` 命令,
```
[root@linuxtechi ~]# dnf repository-packages epel list | grep -i zabbix
@@ -126,21 +124,23 @@ zvbi-fonts.noarch 0.2.35-9.el8 epel
上面命令的输出类似下面这样,
![epel-repo-search-package-centos8][3]
![epel-repo-search-package-centos8][7]
### 从 epel 仓库安装软件包
### 从 EPEL 仓库安装软件包
假设我们要从 epel 仓库安装 htop 包,运行以下 dnf 命令,
假设我们要从 EPEL 仓库安装 htop 包,运行以下 `dnf` 命令,
语法:
# dnf enablerepo=”epel” install <pkg_name>
```
# dnf --enablerepo="epel" install <包名>
```
```
[root@linuxtechi ~]# dnf --enablerepo="epel" install htop -y
```
**注意:**如果我们在上面的命令中未指定 “**enablerepo=epel**”,那么它将在所有可用的软件包仓库中查找 htop 包。
注意:如果我们在上面的命令中未指定 `--enablerepo=epel`,那么它将在所有可用的软件包仓库中查找 htop 包。
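作为补充,下面给出一个简单的验证思路(示例输出仅为示意,具体版本号因系统而异):先用 `dnf list` 查看 htop 包来自哪个仓库,确认它确实由 EPEL 提供:

```
[root@linuxtechi ~]# dnf list htop
Available Packages
htop.x86_64          2.2.0-6.el8          epel
```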
本文就是这些内容了,我希望上面的步骤能帮助你在 CentOS 8 和 RHEL 8 服务器上启用并配置 EPEL 仓库,请在下面的评论栏分享你的评论和反馈。
@@ -151,7 +151,7 @@ via: https://www.linuxtechi.com/enable-epel-repo-centos8-rhel8-server/
作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@@ -161,3 +161,6 @@ via: https://www.linuxtechi.com/enable-epel-repo-centos8-rhel8-server/
[2]: https://www.linuxtechi.com/install-configure-kvm-on-rhel-8/
[3]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/10/EPEL-Repo-CentOS8-RHEL8.jpg
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/10/dnf-install-epel-repo-rehl8.jpg
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/10/epel-repolist-rhel8.jpg
[7]: https://www.linuxtechi.com/wp-content/uploads/2019/10/epel-repo-search-package-centos8.jpg

View File

@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (laingke)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@@ -89,7 +89,7 @@ via: https://opensource.com/article/19/10/kubernetes-complex-business-problem
作者:[Scott McCarty][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[laingke](https://github.com/laingke)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@@ -0,0 +1,104 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Big Four carriers want to rule IoT by simplifying it)
[#]: via: (https://www.networkworld.com/article/3449820/big-four-carriers-want-to-rule-iot-by-simplifying-it.html)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
Big Four carriers want to rule IoT by simplifying it
======
A look at some of the pros and cons of IoT services from AT&T, Sprint, T-Mobile and Verizon
The [Internet of Things][1] promises a transformative impact on a wide range of industries, but along with that promise comes an enormous new level of complexity for the network and those in charge of maintaining it. For the major mobile data carriers in the U.S., that fact suggests an opportunity.
The core of the carriers' appeal for IoT users is simplicity. Opting for Verizon or AT&T instead of in-house connectivity removes a huge amount of the work involved in pulling an IoT implementation together.
Operationally, it's the same story. The carrier is handling the network management and security functionality, and everything involved in the connectivity piece is available through a centralized management console.
The carriers' approach to the IoT market is two-pronged, in that they sell connectivity services directly to end-users as well as selling connectivity wholesale to device makers. For example, one customer might buy a bunch of sensors directly from Verizon, while another might buy equipment from a specialist manufacturer that contracts with Verizon to provide connectivity.
There are, experts agree, numerous advantages to simply handing off the wireless networking of an IoT project to a major carrier. Licensed networks are largely free of interference: the carriers own the exclusive rights to the RF spectrum being used in a designated area, so no one else is allowed to use it without risking the wrath of the FCC. In contrast, a company using unlicensed technologies like Wi-Fi might be competing for the same spectrum area with half a dozen other organizations.
It's also better secured than most unlicensed technologies, or at least easier to secure, according to Shawn Chandler, former chair of the IEEE's IoT [smart cities][4] working group. Buying connectivity services that will have to be managed and secured in-house can be a lot more work than letting one of the carriers take care of it.
“If youre going to use mesh networks and RF networks,” he said, “then the enterprise is looking at [buying] a full security solution.”
There are, of course, downsides as well. Plenty of businesses with a lot of institutional experience on the networking side are going to have trust issues with handing over control of mission-critical networks to a third party, said 451 Research vice president Christian Renaud.
“For someone to come in over the top with, 'Oh, we'll manage everything for you,'” he said, might draw a response along the lines of, “Wait, what?” from the networking staff. Carriers promise a lot of visibility into the logical relationships between endpoints, edge modules, and the cloud, but the actual topology of the network itself may be abstracted out.
And despite a generally higher level of security, carrier networks aren't completely bulletproof. Several research teams have demonstrated attack techniques that, although unlikely to be seen in the wild, at least have the potential to compromise modern LTE networks. An example: researchers at Ruhr-University Bochum in 2018 [published a paper detailing potential attack vectors][5] that could allow a bad actor to target unencrypted metadata, which details users connected to a given mobile node, in order to spoof DNS requests.
Nevertheless, carriers are set to play a crucially important part in the future evolution of enterprise IoT, and each of the big four U.S. carriers has a robust suite of offerings.
### T-Mobile
T-Mobile's focus is on asset tracking, smart city technology, smart buildings and vehicular fleet management, which makes sense, given that those areas are a natural fit for carrier-based IoT. All except smart buildings require a large geographical coverage area, and the ability to bring a large number of diverse endpoints from diverse sources onto the network is a strength.
The company also runs the CONNECT partner program, aimed at makers of IoT solutions who want to use T-Mobile's network for connectivity. It offers the option to sell hardware, software or specialist IoT platforms through the main T-Mobile for Business program, as well as, of course, close technical integration with T-Mobile's network.
Finally, T-Mobile offers the option of using [narrow-band IoT technology, or NB-IoT][6]. This refers to the practice of using a small slice of the network's spectrum to provide low-throughput connectivity to a large number of devices at the same time. It's purpose-built for IoT, and although it won't work for something like streaming video, where a lot of data has to be moved quickly, it's well-suited to an asset tracking system that merely has to send brief status reports. The company even sells five-dollar systems-on-a-chip in bulk for organizations that want to integrate existing hardware or sensors into T-Mobile's network.
### AT&T
Like the rest of the big four, AT&T does business both by selling its own IoT services, most of them under the umbrella of the Multi-Network Connect (MNC) platform, a single-pane-of-glass offering designed to streamline the management of many types of IoT products, and by partnering with an array of hardware and other product makers who want to use the company's network.
Along with NB-IoT, AT&T provides LTE-M connectivity, a similar but slightly more capable IoT-focused network technology that adds voice support and more throughput to the NB-IoT playbook. David Allen, director of advanced product development at AT&T's advanced mobility and enterprise solutions division, said that LTE-M and NB-IoT are powerful tools in the company's IoT arsenal.
“These are small slivers of spectrum that offer an instant national footprint,” he said.
MNC is advertised as a broad-based platform that can bring together input from nearly any type of licensed network, from 2G up through satellite, and even integrate with other connectivity management platforms, so a company that uses multiple operators could bring them all under the roof of MNC.
### Verizon
Verizon's IoT platform, and the focus of its efforts to do business in the IoT realm, is Thingspace, which is similar to AT&T's MNC in many respects. The company also offers both NB-IoT and LTE-M for flexible IoT-specific connectivity options, as well as support for traditional SIM-based networking. As with the rest of the big four, Verizon also sells connectivity services to third parties.
While the company said that it doesn't break down its IoT business into third-party/first-party sales, Verizon says it has had success in several verticals, including telematics for the energy and healthcare industries. The first use case involves using current sensors on the grid and smart meters at the home to study sustainability and track usage more closely. The second involves working on remote monitoring of patient data, and the company said it will have announcements around that in the future.
While the focus is obviously on connectivity, Verizon also does something slightly unusual for the carrier IoT market by selling a one-size-fits-most sensor of its own creation, called the Critical Asset Sensor. This is a small sensor module that contains acceleration, temperature, pressure, light, humidity and shock sensors, along with GPS and network connectivity, so that it can fit a huge variety of IoT use cases. The idea is that they can be bought in bulk for an IoT implementation direct from Verizon, obviating the need to deal with a separate sensor vendor.
### Sprint
Sprint's IoT offerings are partially provided under the umbrella of the company's IoT Factory store, and the emphasis has been on various types of sensor-based service, including restaurant and food-service storage temperatures, smart building solutions for offices and other commercial property, as well as fleet management for terrestrial and marine vehicles.
Most of these are offered through Sprint via partnerships with vertical specialists in those areas, like Apptricity, CU Trak, M2M in Motion and Rently, among many others.
The company also has a dedicated IoT platform offering called Curiosity IoT, which leans on [Arm's][7] platform security and connectivity management for basic functionality, but it promises most of the same functionality as the other Big Four vendors' platforms. It provides a single pane of glass that integrates management and monitoring for every sensor on the network and shapes data into a standardized format for analysis on the back end.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3449820/big-four-carriers-want-to-rule-iot-by-simplifying-it.html
作者:[Jon Gold][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
[2]: https://www.networkworld.com/newsletters/signup.html
[3]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[4]: https://www.networkworld.com/article/3411561/report-smart-city-iot-isnt-smart-enough-yet.html
[5]: https://alter-attack.net/media/breaking_lte_on_layer_two.pdf
[6]: https://www.networkworld.com/article/3227206/faq-what-is-nb-iot.html
[7]: https://www.networkworld.com/article/3294781/arm-flexes-flexibility-with-pelion-iot-announcement.html
[8]: https://www.facebook.com/NetworkWorld/
[9]: https://www.linkedin.com/company/network-world

View File

@@ -0,0 +1,63 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Micron finally delivers its answer to Optane)
[#]: via: (https://www.networkworld.com/article/3449576/micron-finally-delivers-its-answer-to-optane.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Micron finally delivers its answer to Optane
======
New drive offers DRAM-like performance and is targeted at analytics and transaction workloads.
Micron Technology partnered with Intel back in 2015 to develop 3D XPoint, a new type of memory that has the storage capability of NAND flash but speed almost equal to DRAM. However, the two companies parted ways in 2018 before either of them could bring a product to market. They had completed the first generation, agreed to work on the second generation together, and decided to part after that and do their own thing for the third generation.
Intel released its product under the [Optane][1] brand name. Now Micron is hitting the market with its own product under the QuantX brand. At its Insight 2019 show in San Francisco, Micron unveiled the X100, a new solid-state drive the company claims is the fastest in the world.
On paper, this thing is fast:
* Up to 2.5 million IOPS, which it claims is the fastest in the world.
* More than 9GB per second bandwidth for read, write, and mixed workloads, which it claims is three times faster than comparable NAND drives.
* Read-write latency of less than 8 microseconds, which it claims is 11 times better than NAND-based SSDs.
Micron sees the X100 serving data to the world's most demanding analytics and transactional applications, “a role that's befitting the world's fastest drive,” it said in a statement.
The company also launched the Micron 7300, an NVMe SSD for data center use with capacities from 400GB to 8TB, depending on the form factor. It comes in U.2 and M.2 form factors, the latter being the PCI Express stick that is the size of a stick of gum and mounts on the motherboard.
Also released is the Micron 5300, a SATA drive with capacities from 240GB to nearly 8TB. This drive is the first to use 96-layer 3D TLC NAND, hence its high capacity. It can deliver random read performance of up to 95K IOPS and random write IOPS of 75K.
Micron also announced it had acquired FWDNXT, an AI startup that develops deep learning solutions. Micron says it's integrating the compute, memory, tools, and software from FWDNXT into a “comprehensive AI development platform,” which it calls the Micron Deep Learning Accelerator (DLA).
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3449576/micron-finally-delivers-its-answer-to-optane.html
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3387117/intel-formally-launches-optane-for-data-center-memory-caching.html
[2]: https://www.networkworld.com/article/3285652/storage/backup-vs-archive-why-its-important-to-know-the-difference.html
[3]: https://www.networkworld.com/article/3328488/backup-systems-and-services/how-to-pick-an-off-site-data-backup-method.html
[4]: https://www.networkworld.com/article/3315156/storage/tape-vs-disk-storage-why-isnt-tape-dead-yet.html
[5]: https://www.networkworld.com/article/3302804/storage/the-correct-levels-of-backup-save-time-bandwidth-space.html
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world

View File

@@ -0,0 +1,85 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Product vs. project in open source)
[#]: via: (https://opensource.com/article/19/11/product-vs-project)
[#]: author: (Mike Bursell https://opensource.com/users/mikecamel)
Product vs. project in open source
======
What's the difference between an open source product and an open source
project? Not all open source is created (and maintained) equal.
![Bees on a hive, connected by dots][1]
Open source is a good thing. Open source is a particularly good thing for security. I've written about this before (notably in [_Disbelieving the many eyes hypothesis_][2] and [_The commonwealth of open source_][3]), and I'm going to keep writing about it. In this article, however, I want to talk a little more about a feature of open source that is arguably both a possible disadvantage and a benefit: the difference between a project and a product. I'll come down firmly on one side (spoiler alert: for organisations, it's "product"), but I'd like to start with a little disclaimer. I am employed by Red Hat, and we are a company that makes money from supporting open source. I believe this is a good thing, and I approve of the model that we use, but I wanted to flag any potential bias early in the article.
The main reason that open source is good for security is that you can see what's going on when there's a problem, and you have a chance to fix it. Or, more realistically, unless you're a security professional with particular expertise in the open source project in which the problem arises, somebody _else_ has a chance to fix it. We hope that there are sufficient security folks with the required expertise to fix security problems and vulnerabilities in software projects about which we care.
It's a little more complex than that, however. As an organisation, there are two main ways to consume open source:
* As a **project**: you take the code, choose which version to use, compile it yourself, test it, and then manage it.
* As a **product**: a vendor takes the project, chooses which version to package, compiles it, tests it, and then sells support for the package, typically including docs, patching, and updates.
Now, there's no denying that consuming a project "raw" gives you more options. You can track the latest version, compiling and testing as you go, and you can take security patches more quickly than the product version may supply them, selecting those that seem most appropriate for your business and use cases. On the whole, this seems like a good thing. There are, however, downsides that are specific to security. These include:
1. Some security fixes come with an [embargo][4], to which only a small number of organisations (typically the vendors) have access. Although you may get access to fixes at the same time as the wider ecosystem, you will need to check and test them (unless you blindly apply them—don't do that), which will already have been performed by the vendors.
2. The _huge_ temptation to make changes to the code that don't necessarily—or immediately—make it into the upstream project means that you are likely to be running a fork of the code. Even if you _do_ manage to get these upstream in time, during the period that you're running the changes but they're not upstream, you run a major risk that any security patches will not be immediately applicable to your version. (This is, of course, true for non-security patches, but security patches are typically more urgent.) One option, of course, if you believe that your version is likely to be consumed by others, is to make an _official_ fork of the project and try to encourage a community to grow around that; but in the end, you will still have to decide whether to support the _new_ version internally or externally. (A sketch of this fork-maintenance workflow appears after this list.)
3. Unless you ensure that _all_ instances of the software are running the same version in your deployment, any back-porting of security fixes to older versions will require you to invest in security expertise equal (or close to equal) to that of the people who created the fix in the first place. In this case, you are giving up the "commonwealth" benefit of open source, as you need to pay experts who duplicate the skills of the community.
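As a concrete sketch of the fork-maintenance burden described in point 2, the day-to-day workflow tends to look something like the following; the repository URLs, branch names, and commit reference are purely illustrative, not taken from any real project:

```
# Clone the internal fork and track the upstream project.
git clone https://git.example.com/acme/widget.git
cd widget
git remote add upstream https://github.com/upstream/widget.git

# Keep internal changes on a branch based on a known upstream release.
git fetch upstream
git checkout -b internal-changes upstream/v2.4.0

# When an urgent security fix lands upstream, carry it over by hand;
# if the internal code has diverged, the cherry-pick may not apply cleanly.
git cherry-pick <upstream-security-commit>
```

Every step after the clone is work that the vendor of a productised version would otherwise be doing for you.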
What you are basically doing, by choosing to deploy a _project_ rather than a _product_, is taking the decision to do _internal productisation_ of the project. You lose not only the commonwealth benefit of security fixes but also the significant _economies of scale_ that are intrinsic to the vendor-supported product model. There may also be _economies of scope_ that you miss: many vendors will have multiple products that they support and will be able to apply security expertise across those products in ways that may not be possible for an organisation whose core focus is not on product support.
These economies are reflected in another possible benefit to the commonwealth of using a vendor: The very fact that multiple customers are consuming their products means that vendors have an incentive and a revenue stream to spend on security fixes and general features. There are other types of fixes and improvements on which they may apply resources, but the relative scarcity of skilled security experts means that the [principle of comparative advantage][5] suggests that they should be in the best position to apply them for the benefit of the wider community.[1][6]
What if a vendor you use to provide a productised version of an open source project goes bust or decides to drop support for that product? Well, this is a problem in the world of proprietary software as well, of course. But in the case of proprietary software, there are three likely outcomes:
* You now have no access to the software source, and therefore no way to make improvements.
* You _are_ provided access to the software source, but it is not available to the wider world, and therefore you are on your own.
* _Everyone_ is provided with the software source, but no existing community exists to improve it, and it either dies or takes significant time for a community to build around it.
In the case of open source, however, if the vendor you have chosen goes out of business, there is always the option to use another vendor, encourage a new vendor to take it on, productise it yourself (and supply it to other organisations), or, if the worst comes to the worst, take the internal productisation route while you search for a scalable long-term solution.
In the modern open source world, we (the community) have gotten quite good at managing these options, as the growth of open source consortia[2][7] shows. In a consortium, groups of organisations and individuals cluster around a software project or a set of related projects to encourage community growth, alignment around feature and functionality additions, general security work, and productisation for use cases that may as yet be ill-defined, all the while trying to exploit the economies of scale and scope outlined above. An example of this would be the Linux Foundation's [Confidential Computing Consortium][8], to which the [Enarx project][9] aims to be contributed.
Choosing to consume open source software as a product instead of as a project involves some trade-offs, but, from a security point of view at least, the economics for organisations are fairly clear: unless you are in a position to employ ample security experts, products are most likely to suit your needs.
* * *
1\. Note: I'm not an economist, but I believe that this holds in this case. Happy to have comments explaining why I'm wrong (if I am…).
2\. "Consortiums" if you _really_ must.
* * *
_This article was originally published on [Alice, Eve, and Bob][10] and is reprinted with the author's permission._
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/11/product-vs-project
作者:[Mike Bursell][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mikecamel
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_bees_network.png?itok=NFNRQpJi (Bees on a hive, connected by dots)
[2]: https://opensource.com/article/17/10/many-eyes
[3]: https://opensource.com/article/17/11/commonwealth-open-source
[4]: https://aliceevebob.com/2018/01/09/meltdown-and-spectre-thinking-about-embargoes-and-disclosures/
[5]: https://en.wikipedia.org/wiki/Comparative_advantage
[6]: tmp.ov8Yhb4jS4#1
[7]: tmp.ov8Yhb4jS4#2
[8]: https://confidentialcomputing.io/
[9]: https://enarx.io/
[10]: https://aliceevebob.com/2019/10/15/of-projects-products-and-security-community/

View File

@@ -0,0 +1,166 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Retro computing with FPGAs and MiSTer)
[#]: via: (https://opensource.com/article/19/11/fpga-mister)
[#]: author: (Sarah Thornton https://opensource.com/users/sarah-thornton)
Retro computing with FPGAs and MiSTer
======
Field-programmable gate arrays are used in devices like smartphones,
medical devices, aircraft, and—here—emulating an old-school Amiga.
![Mesh networking connected dots][1]
Another weekend rolls around, and I can spend some time working on my passion projects, including working with single-board computers, playing with emulators, and general tinkering with a soldering iron. Earlier this year, I wrote about [resurrecting the Commodore Amiga on the Raspberry Pi][2]. A colleague referred to our shared obsession with old technology as a "[passion for preserving our digital culture][3]."
In my travels in the world of "digital archeology," I heard about a new way to emulate old systems by using [field-programmable gate arrays][4] (FPGAs). I was intrigued by the concept, so I dedicated a weekend to learn more. Specifically, I wanted to know if I could use an FPGA to emulate a Commodore Amiga.
### What is an FPGA?
When you build a circuit board, everything is literally etched in silicon. You can change the software that runs on it, but the physical circuit is immutable. So if you want to add a new component to it or modify it later, you are limited by the physical nature of the hardware. With an FPGA, you can program the hardware to simulate new components or change existing ones. This is achieved through programmable logic gates (hence the name). This provides a lot of flexibility for Internet-of-Things (IoT) devices, as they can be changed later to meet new requirements.
![Terasic DE10-Nano][5]
FPGAs are used in many devices today, including smartphones, medical devices, motor vehicles, and aircraft. Because FPGAs can be easily modified and generally have low power requirements, these devices are everywhere! They are also inexpensive to manufacture and can be configured for multiple uses.
The Commodore Amiga was designed with chips that had specific uses and fun names. For example, "Gary" was a gate array that later became "Fat Gary" when "he" was upgraded on the A3000 and A4000. "Bridgette" was an integrated bus buffer, and the delightful "Amber" was a "flicker fixer" on the A3000. The ability to simulate these chips with programmable gates makes an ideal platform for Amiga emulation.
When you use an emulator, you are tricking an application into using software to find the architecture it expects. The primary limitations are the accuracy of the emulation and the sequential nature of how the commands are processed through the CPU. With an FPGA, you can teach the hardware what chips are in play, and software can talk to each chip as if it were native and in parallel. It also means applications can thread as if they were running on the original hardware. This makes FPGAs especially good for emulating old systems.
### Introducing the MiSTer project
The board I have been working with is [Terasic][6]'s [DE10-Nano][7]. Out of the box, this device is excellent for learning how FPGAs work and gives you access to tools to get you started.
![Terasic DE10-Nano][8]
The [MiSTer project][9] is built on top of this board and employs daughter boards to provide memory expansion, SDRAM, and improved I/O, all built on a Linux-based distribution. To use it as a platform for emulation, it's expanded through the use of "cores" that define the architecture the board will emulate.
Once you have flashed the device with the MiSTer distro, you can load a "core," which is a combination of a definition for the chips you want to use and the associated menus to manage the emulated system.
![Terasic DE10-Nano][10]
Compared to a Raspberry Pi running emulation software, these cores provide a more native experience for emulation, and often apps that don't run perfectly on software-based emulators will run fine on a MiSTer.
### How to get started
There are excellent resources online to help get you started. The first stop is the [documentation][11] on MiSTer's [GitHub page][12], which has step-by-step instructions on putting everything together. If you prefer a visual walkthrough of the board, check out [this video][13] from the [Retro Man Cave][14] YouTube channel. For more information on configuring the [Minimig][15] (short for mini Amiga) core to load disks or using Amiga's classic [Workbench][16] and [WHDLoad][17], check out this great [tutorial][18] from [Phil's Computer Lab][19] on YouTube.
### Cores
MiSTer has cores available for a multitude of systems; my main interest is in Amiga emulation, which is provided by the Minimig core. I'm also interested in the Commodore 64 and PET and the BBC microcomputer, which I used at college. I also have a soft spot for playing [Space Invaders on the Commodore PET][20], which I will admit (many years later!) was the real reason I booked time in the college computer lab at the end of the week.
Once a core is loaded, you can interact with it through a connected keyboard and by pressing F12 to access the "core" menu. To access a shell, you can log in by using the F9 key, which presents you with a login prompt. You will need a [kickstart ROM][21] (the equivalent of a PC's BIOS) to get your Amiga running. You can obtain these from [Cloanto][22], which sells the [Amiga Forever][23] kickstart that contains the ROMs required to boot a system as well as games, demos, and hard drive files that can be used on your MiSTer. Store the kickstart ROM in the root of your SD card and name it "KICK.ROM".
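A minimal shell sketch of that last step; the mount point and the source ROM filename here are illustrative, since your licensed ROM will have its own name:

```
# Copy a licensed kickstart ROM to the root of the MiSTer SD card,
# using the KICK.ROM filename the core looks for.
cp kick34005.A500 /media/sdcard/KICK.ROM
```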
On my MiSTer board, I can run Amiga demos that don't run on my Raspberry Pi, even though my Pi has much more memory available. The emulation is more accurate and runs more efficiently. Through the expansion board, I can even use old hardware, such as an original Commodore monitor and Amiga joysticks.
### Source code
All code for the MiSTer project is available in its [GitHub repo][12]. You have access to the cores as well as the main MiSTer setup, associated scripts, and menu files. These are actively updated, and there is a solid community actively developing, bug fixing, and improving all contributions, so check back regularly for updates. The repo has a wealth of information available to help get you up and running.
### Security considerations
With the flexibility of customization comes the potential for [security vulnerabilities][24]. All MiSTer installs come with a preset password on the root account, so one of the first things you want to do is to change the password. If you are using the device to build a cabinet for a game and you have given the device access to your network, it can be exploited using the default login credentials, and that can lead to giving a third party access to your network.
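A minimal sketch of that first step, assuming the board is reachable on your network at 192.168.1.50 (the address is illustrative):

```
# Log in to the MiSTer over SSH as root, then replace the preset password.
ssh root@192.168.1.50
passwd   # prompts for the new root password
```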
For non-MiSTer projects, FPGAs expose the ability for one process to be able to listen in on another process, so limiting access to the device should be one of the first things you do. When you build your application, you should isolate processes to prevent unwanted access. This is especially important if you intend to deploy your board where access is open to other users or with shared applications.
### Find more information
There is a lot of information about this type of project online. Here are some of the resources you may find helpful.
#### Community
* [MiSTer wiki][9]
* [Setup guide][11]
* [Internet connections on supporting cores][25]
* [Discussion forums][26]
* [MiSTer add-ons][27] (public Facebook group)
#### Daughter boards
* [SDRAM board][28]
* [I/O board][29]
* [RTC board][30]
* [USB hub][31]
#### Videos and walkthroughs
* [Exploring the MiSTer and DE-10 Nano FPGA][32]: Is this the future of retro?
* [FPGA emulation MiSTer project on the Terasic DE10-Nano][33]
* [Amiga OS 3.1 on FPGA—DE10-Nano running MisTer][34]
#### Where to buy the hardware
##### MiSTer project
* [DE10-Nano][35] (Amazon)
* [Ultimate Mister][36]
* [MiSTer Add-ons][37]
##### Other FPGAs
* [TinyFPGA BX—ICE40 FPGA development board with USB][38] (Adafruit)
* [Terasic][6], makers of the DE10-Nano and other high-performance FPGAs
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/11/fpga-mister
作者:[Sarah Thornton][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/sarah-thornton
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/mesh_networking_dots_connected.png?itok=ovINTRR3 (Mesh networking connected dots)
[2]: https://opensource.com/article/19/3/amiga-raspberry-pi
[3]: https://www.linkedin.com/pulse/passion-preserving-digital-culture-%C3%B8ivind-ekeberg/
[4]: https://en.wikipedia.org/wiki/Field-programmable_gate_array
[5]: https://opensource.com/sites/default/files/uploads/image5_0.jpg (Terasic DE10-Nano)
[6]: https://www.terasic.com.tw/en/
[7]: https://www.terasic.com.tw/cgi-bin/page/archive.pl?Language=English&CategoryNo=165&No=1046
[8]: https://opensource.com/sites/default/files/uploads/image2_0.jpg (Terasic DE10-Nano)
[9]: https://github.com/MiSTer-devel/Main_MiSTer/wiki
[10]: https://opensource.com/sites/default/files/uploads/image1_0.jpg (Terasic DE10-Nano)
[11]: https://github.com/MiSTer-devel/Main_MiSTer/wiki/Setup-Guide
[12]: https://github.com/MiSTer-devel
[13]: https://www.youtube.com/watch?v=e5yPbzD-W-I&t=2s
[14]: https://www.youtube.com/channel/UCLEoyoOKZK0idGqSc6Pi23w
[15]: https://github.com/MiSTer-devel/Minimig-AGA_MiSTer
[16]: https://en.wikipedia.org/wiki/Workbench_%28AmigaOS%29
[17]: https://en.wikipedia.org/wiki/WHDLoad
[18]: https://www.youtube.com/watch?v=VFespp1adI0
[19]: https://www.youtube.com/channel/UCj9IJ2QvygoBJKSOnUgXIRA
[20]: https://www.youtube.com/watch?v=hqs6gIZbpxo
[21]: https://en.wikipedia.org/wiki/Kickstart_(Amiga)
[22]: https://cloanto.com/
[23]: https://www.amigaforever.com/
[24]: https://www.helpnetsecurity.com/2019/06/03/vulnerability-in-fpgas/
[25]: https://github.com/MiSTer-devel/Main_MiSTer/wiki/Internet-and-console-connection-from-supported-cores
[26]: http://www.atari-forum.com/viewforum.php?f=117
[27]: https://www.facebook.com/groups/251655042432052/
[28]: https://github.com/MiSTer-devel/Main_MiSTer/wiki/SDRAM-Board
[29]: https://github.com/MiSTer-devel/Main_MiSTer/wiki/IO-Board
[30]: https://github.com/MiSTer-devel/Main_MiSTer/wiki/RTC-board
[31]: https://github.com/MiSTer-devel/Main_MiSTer/wiki/USB-Hub-daughter-board
[32]: https://www.youtube.com/watch?v=e5yPbzD-W-I
[33]: https://www.youtube.com/watch?v=1jb8YPXc8DA
[34]: https://www.youtube.com/watch?v=tAz8VRAv7ig
[35]: https://www.amazon.com/Terasic-Technologies-P0496-DE10-Nano-Kit/dp/B07B89YHSB/
[36]: https://ultimatemister.com/
[37]: https://misteraddons.com/
[38]: https://www.adafruit.com/product/4038

View File

@@ -0,0 +1,56 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (6 remarkable features of the new United Nations open source initiative)
[#]: via: (https://opensource.com/article/19/11/united-nations-goes-open-source)
[#]: author: (Frank Karlitschek https://opensource.com/users/frankkarlitschek)
6 remarkable features of the new United Nations open source initiative
======
What does it mean when the UN goes open source?
![Globe up in the clouds][1]
Three months ago, the United Nations asked me to join a new advisory board to help them develop their open source strategy and policy. I'm honored to have the opportunity to work together with a group of established experts in open source licensing and policy areas.
The United Nations wants to make technology, software, and intellectual property available to everyone, including developing countries. Open source and free software are great tools to achieve this goal since open source is all about empowering people and global collaboration while protecting the personal data and privacy of users. So, the United Nations and the open source community share the same values.
This new open source strategy and policy is being developed by the [United Nations Technology Innovation Labs][2] (UNTIL). Last month, we had our first in-person meeting in Helsinki at the UNTIL offices. I find this initiative remarkable for several reasons:
* **Sharing:** The United Nations wants to have a positive impact on everyone on this planet. For that goal, it is important that software, data, and services are available for everyone independent of their language, budget, education, or other factors. Open source is perfect to guarantee that result.
* **Contributing:** It should be possible for everyone to contribute to the software, data, and services of the United Nations. The goal is to not depend on a single software vendor alone, but instead, build a bigger ecosystem that drives innovation together.
* **Empowering:** Open source makes it possible for underdeveloped countries and regions to foster local companies and expertise by building on top of existing open source software—standing on the shoulders of giants.
* **Sustainability:** Open source guarantees more sustainable software, data, and services by not relying on a single entity to support, maintain, and develop it. Open source helps to avoid a single point of failure by creating an equal playing field for everyone.
* **Security:** Open source software is more secure than proprietary software because the code can be constantly reviewed and audited. This fact is especially important for security-sensitive applications that require [transparency and openness][3].
* **Decentralization:** An open source strategy enables decentralized hosting of software and data. This fact makes it possible to be compliant with all data protection and privacy regulations and enables a more free and open internet.
We discussed that a fair business model like the one from Nextcloud should be encouraged and recommended. Specifically, we discussed that 100% of the code should be placed under an [OSI-approved open source license][4]. There should be no open core, proprietary extensions, dual licensing, or other limited-access components to ensure that everyone is on the same playing field.
I'm excited to have the opportunity to advise the United Nations in this matter, and I hope to have a positive influence on the future of IT, especially in developing countries.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/11/united-nations-goes-open-source
作者:[Frank Karlitschek][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/frankkarlitschek
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cloud-globe.png?itok=_drXt4Tn (Globe up in the clouds)
[2]: https://until.un.org
[3]: https://until.un.org/content/governance
[4]: https://opensource.org/licenses

View File

@@ -0,0 +1,66 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Can Data Scientists be Replaced by Automation?)
[#]: via: (https://opensourceforu.com/2019/11/can-data-scientists-be-replaced-by-automation/)
[#]: author: (Preet Gandhi https://opensourceforu.com/author/preet-gandhi/)
Can Data Scientists be Replaced by Automation?
======
[![][1]][2]
_The advent of AI, automation and smart bots triggers the question: Is it possible that data scientists will become redundant in the future? Are they indispensable? The ideal approach appears to be automation complementing the work data scientists do. This would better utilise the tremendous data being generated throughout the world every day._
Data scientists are currently very much in demand. But there is the question about whether they can automate themselves out of their jobs. Can artificial intelligence replace data scientists? If so, up to what extent can their tasks be automated? Gartner recently reported that 40 per cent of data science tasks will be automated by 2020. So what kind of skills can be efficiently handled by automation? All this speculation adds fuel to the ongoing Man vs Machine debate.
Data scientists need a strong mathematical mind, quantitative skills, computer programming skills and business acumen to make decisions. They need to gather large unstructured data and transform it into results and insights, which can be understood by laymen or business executives. The whole process is highly customised, depending on the type of application domain. Some degree of human interaction will always be needed due to the subjective nature of the process, and what percentage of the task is automated depends on the specific use case and is open to debate. To understand how much or what parts can be automated, we need to have a deep understanding of the process.
Data scientists are expensive to hire and there is a shortage of this skill in the industry as it's a relatively new field. Many companies try to look for alternative solutions. Several AI algorithms have now been developed, which can analyse data and provide insights similar to a data scientist. The algorithm has to provide the data output and make accurate predictions, which can be done by using Natural Language Processing (NLP).
NLP can be used to communicate with AI in the same way that laymen interact with data scientists to put forth their demands. For example, IBM Watson has NLP facilities which interact with business intelligence (BI) tools to perform data science tasks. Microsoft's Cortana also has a powerful BI tool, and users can process Big Data sets by just speaking to it. All these are simple forms of automation which are widely available already. Data engineering tasks such as cleansing, normalisation, skewness removal, transformation, etc, as well as modelling methods like champion model selection, feature selection, algorithm selection, fitness metric selection, etc, are tasks for which automated tools are currently available in the market.
Automation in data science will squeeze some manual labour out of the workflow instead of completely replacing the data scientists. Low-level functions can be efficiently handled by AI systems. There are many technologies to do this. The Alteryx Designer tool automatically generates customised REST APIs and Docker images around machine learning models during the promotion and deployment stage.
Designer workflows can also be set up to automatically retrain machine learning models, using fresh data, and then to automatically redeploy them. Data integration, model building, and optimising model hyper parameters are areas where automation can be helpful. Data integration combines data from multiple sources to provide a uniform data set. Automation here can pull trusted data from multiple sources for a data scientist to analyse. Collecting data, searching for patterns and making predictions are required for model building, which can be automated as machines can collect data to find patterns.
Machines are getting smarter every day due to the integration of AI principles that help them learn from the types of patterns they were historically trying to detect. An added advantage here is that machines will not make the kinds of errors that humans do.
Automation has its own set of limitations, however. It can only go so far. Artificial intelligence can automate data engineering and machine learning processes, but AI can't automate itself. Data wrangling (data munging) consists of manually converting raw data to an easily consumable form. The process still requires human judgment to turn raw data into insights that make sense for an organisation, and take all of an organisation's complexities into account. Even unsupervised learning is not entirely automated. Data scientists still prepare sets, clean them, specify which algorithms to use, and interpret the findings. Data visualisation, most of the time, needs a human as the findings to be presented to laymen have to be highly customised, depending on the technical knowledge of the audience. A machine can't possibly be trained to do that.
Low-level visualisations can be automated, but human intelligence would be required to interpret and explain the data. It will also be needed to write the AI algorithms that can handle mundane visualisation tasks. Moreover, intangibles like human curiosity, intuition or the desire to create/validate experiments can't be simulated by AI. This aspect of data science probably won't ever be handled by AI in the near future, as the technology hasn't evolved to that extent.
While thinking about automation, we should also consider the quality of the output. Here, output means the validity or relevance of the insights. With automation, the quantity and throughput of data science artefacts will increase, but that doesn't translate to an increase in quality. The process of extracting insights and applying them within the context of particular data-driven applications is still inherently a creative, exploratory process that demands human judgment. To get a deeper understanding of the data, feature engineering is an essential part of the process. It allows us to make maximum use of the data available to us. Automating feature engineering is really difficult, as it requires human domain knowledge and a real-world understanding, which is tough for a machine to acquire. Even if AI is used, it can't provide the same level of feedback that a human expert in that domain can. While automation can help identify patterns in an organisation, machines cannot truly understand what data means for an organisation and its relationships between different, unconnected operations.
You can't teach a machine to be creative. After getting results from a pipeline, a data scientist can seek further domain knowledge in order to add value and improve the pipeline. Collaborating alongside marketing, sales and engineering teams, data scientists then implement and deploy solutions based on these findings to improve the model. It's an iterative process, and after each iteration, the creativity with which data scientists plan on adding to the next phase is what differentiates them from bots. The interactions and conversations driving these initiatives, which are fuelled by abstract, creative thinking, surpass the capabilities of any modern-day machine.
Current data scientists shouldn't be worried about losing their jobs to computers due to automation, as they are an amalgamation of thought leaders, coders and statisticians. A successful data science project will always need a strong team of humans to work together and collaborate to synergistically solve a problem. AI will have a tough time collaborating, which is essential in order to transform data to actionable data. Even if automation is used to some extent, a data scientist will always have to manually validate the results of a pipeline in order to make sure it makes sense in the real world. Automation can be thought of as a supplementary tool which will help scale data science and make the work more efficient. Bots can handle lower-level tasks and leave the problem-solving tasks to human experts. The combination of automation with human problem-solving will actually empower, rather than threaten, the jobs of data scientists, as bots will be like assistants to the former.
Automation can never completely replace a data scientist, because no amount of advanced AI can emulate the most important quality a skilful data scientist must possess: intuition.
[Preet Gandhi][4]
The author is an avid Big Data and data science enthusiast. You can contact her at [gandhipreet1995@gmail.com][5].
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/11/can-data-scientists-be-replaced-by-automation/
作者:[Preet Gandhi][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/preet-gandhi/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Data-Scientist-automation.jpg?resize=696%2C458&ssl=1 (Data Scientist automation)
[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Data-Scientist-automation.jpg?fit=727%2C478&ssl=1
[3]: https://secure.gravatar.com/avatar/4603e91c8ba6455d0d817c912a8985bf?s=100&r=g
[4]: https://opensourceforu.com/author/preet-gandhi/
[5]: mailto:gandhipreet1995@gmail.com

View File

@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (lnrCoder)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@@ -204,7 +204,7 @@ via: https://www.2daygeek.com/linux-find-top-memory-consuming-processes/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[lnrCoder](https://github.com/lnrCoder)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@@ -0,0 +1,254 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Awk one-liners and scripts to help you sort text files)
[#]: via: (https://opensource.com/article/19/11/how-sort-awk)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Awk one-liners and scripts to help you sort text files
======
Awk is a powerful tool for doing tasks that might otherwise be left to
other common utilities, including sort.
![Green graph of measurements][1]
Awk is the ubiquitous Unix command for scanning and processing text containing predictable patterns. However, because it features functions, it's also justifiably called a programming language.
Confusingly, there is more than one awk. (Or, if you believe there can be only one, then there are several clones.) There's **awk**, the original program written by Aho, Weinberger, and Kernighan, and then there's **nawk**, **mawk**, and the GNU version, **gawk**. The GNU version of awk is a highly portable, free software version of the utility with several unique features, so this article is about GNU awk.
While its official name is gawk, on GNU+Linux systems it's aliased to awk and serves as the default version of that command. On other systems that don't ship with GNU awk, you must install it and refer to it as gawk, rather than awk. This article uses the terms awk and gawk interchangeably.
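If you're not sure which awk a given system provides, asking for its version is a quick check. The output below is only an illustration; each implementation answers differently:

```
$ awk --version
GNU Awk 5.0.1, API: 2.0
```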
Being both a command and a programming language makes awk a powerful tool for tasks that might otherwise be left to **sort**, **cut**, **uniq**, and other common utilities. Luckily, there's lots of room in open source for redundancy, so if you're faced with the question of whether or not to use awk, the answer is probably a solid "maybe."
The beauty of awk's flexibility is that if you've already committed to using awk for a task, then you can probably stay in awk no matter what comes up along the way. This includes the eternal need to sort data in a way other than the order it was delivered to you.
### Sample set
Before exploring awk's sorting methods, generate a sample dataset to use. Keep it simple so that you don't get distracted by edge cases and unintended complexity. This is the sample set this article uses:
```
Aptenodytes;forsteri;Miller,JF;1778;Emperor
Pygoscelis;papua;Wagler;1832;Gentoo
Eudyptula;minor;Bonaparte;1867;Little Blue
Spheniscus;demersus;Brisson;1760;African
Megadyptes;antipodes;Milne-Edwards;1880;Yellow-eyed
Eudyptes;chrysocome;Viellot;1816;Southern Rockhopper
Torvaldis;linux;Ewing,L;1996;Tux
```
It's a small dataset, but it offers a good variety of data types:
* A genus and species name, which are associated with one another but considered separate
* A surname, sometimes with first initials after a comma
* An integer representing a date
* An arbitrary term
  * All fields separated by semicolons
Depending on your educational background, you may consider this a 2D array or a table or just a line-delimited collection of data. How you think of it is up to you, because awk doesn't expect anything more than text. It's up to you to tell awk how you want to parse it.
### The sort cheat
If you just want to sort a text dataset by a specific, definable field (think of a "cell" in a spreadsheet), then you can use the [sort command][2].
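For example, assuming the sample set has been saved as **penguins.list** (the file name used throughout this article), GNU sort can order it by its second field in one line; **\--field-separator** and **\--key** are the long forms of sort's **-t** and **-k** options:
```
$ sort --field-separator=';' --key=2 penguins.list
Megadyptes;antipodes;Milne-Edwards;1880;Yellow-eyed
Eudyptes;chrysocome;Viellot;1816;Southern Rockhopper
Spheniscus;demersus;Brisson;1760;African
Aptenodytes;forsteri;Miller,JF;1778;Emperor
Torvaldis;linux;Ewing,L;1996;Tux
Eudyptula;minor;Bonaparte;1867;Little Blue
Pygoscelis;papua;Wagler;1832;Gentoo
```
The rest of this article is about getting the same result, and much finer control, from awk itself.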
### Fields and records
Regardless of the format of your input, you must find patterns in it so that you can focus on the parts of the data that are important to you. In this example, the data is delimited by two factors: lines and fields. Each new line represents a new _record_, as you would likely see in a spreadsheet or database dump. Within each line, there are distinct _fields_ (think of them as cells in a spreadsheet) that are separated by semicolons (;).
Awk processes one record at a time, so while you're structuring the instructions you will give to awk, you can focus on just one line. Establish what you want to do with one line, then test it (either mentally or with awk) on the next line and a few more. You'll end up with a good hypothesis on what your awk script must do in order to provide you with the data structure you want.
In this case, it's easy to see that each field is separated by a semicolon. For simplicity's sake, assume you want to sort the list by the very first field of each line.
Before you can sort, you must be able to focus awk on just the first field of each line, so that's the first step. The syntax of an awk command in a terminal is **awk**, followed by relevant options, followed by your awk command, and ending with the file of data you want to process.
```
$ awk --field-separator=";" '{print $1;}' penguins.list
Aptenodytes
Pygoscelis
Eudyptula
Spheniscus
Megadyptes
Eudyptes
Torvaldis
```
Because the field separator is a character that has special meaning to the Bash shell, you must enclose the semicolon in quotes or precede it with a backslash. This command is useful only to prove that you can focus on a specific field. You can try the same command using the number of another field to view the contents of another "column" of your data:
```
$ awk --field-separator=";" '{print $3;}' penguins.list
Miller,JF
Wagler
Bonaparte
Brisson
Milne-Edwards
Viellot
Ewing,L
```
Nothing has been sorted yet, but this is good groundwork.
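As an aside, **\--field-separator** has a traditional short form, **-F**, which does the same job with less typing:
```
$ awk -F';' '{print $1;}' penguins.list
```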
### Scripting
Awk is more than just a command; it's a programming language with indices and arrays and functions. That's significant because it means you can grab a list of fields you want to sort by, store the list in memory, process it, and then print the resulting data. For a complex series of actions such as this, it's easier to work in a text file, so create a new file called **sorter.awk** and enter this text:
```
#!/usr/bin/awk -f
BEGIN {
        FS=";";
}
```
This establishes the file as an awk script that executes the lines contained in the file.
The **BEGIN** statement is a special setup function provided by awk for tasks that need to occur only once. Defining the built-in variable **FS**, which stands for _field separator_ and is the same value you set in your awk command with **\--field-separator**, only needs to happen once, so it's included in the **BEGIN** statement.
#### Arrays in awk
You already know how to gather the values of a specific field by using the **$** notation along with the field number, but in this case, you need to store it in an array rather than print it to the terminal. This is done with an awk array. The important thing about an awk array is that it contains keys and values. Imagine an array about this article; it would look something like this: **author:"seth",title:"How to sort with awk",length:1200**. Elements like **author** and **title** and **length** are keys, with the following contents being values.
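Here is that idea as a minimal, runnable sketch; the keys and values are just the ones from the imaginary example above:
```
$ awk 'BEGIN {
    article["author"] = "seth";
    article["title"]  = "How to sort with awk";
    article["length"] = 1200;
    print article["title"];
}'
How to sort with awk
```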
The advantage to this in the context of sorting is that you can assign any field as the key and any record as the value, and then use the built-in awk function **asorti()** (sort by index) to sort by the key. For now, assume arbitrarily that you _only_ want to sort by the second field.
Awk statements _not_ preceded by the special keywords **BEGIN** or **END** are loops that happen at each record. This is the part of the script that scans the data for patterns and processes it accordingly. Each time awk turns its attention to a record, statements in **{}** (unless preceded by **BEGIN** or **END**) are executed.
To add a key and value to an array, create a variable (in this example script, I call it **ARRAY**, which isn't terribly original, but very clear) containing an array, and then assign it a key in brackets and a value with an equals sign (**=**).
```
{   # store each record in the array, keyed by its second field
    ARRAY[$2] = $0;
}
```
In this statement, the contents of the second field (**$2**) are used as the key, and the whole current record (**$0**) is used as the value.
### The asorti() function
In addition to arrays, awk has several basic functions that you can use as quick and easy solutions for common tasks. One of the functions introduced in GNU awk, **asorti()**, provides the ability to sort an array by key (or _index_) or value.
You can only sort the array once it has been populated, meaning that this action must not occur with every new record but only at the final stage of your script. For this purpose, awk provides the special **END** keyword. The inverse of **BEGIN**, an **END** statement happens only once and only after all records have been scanned.
Add this to your script:
```
END {
    asorti(ARRAY,SARRAY);
    # get length
    j = length(SARRAY);
   
    for (i = 1; i <= j; i++) {
        printf("%s %s\n", SARRAY[i],ARRAY[SARRAY[i]])
    }
}
```
The **asorti()** function takes the contents of **ARRAY**, sorts it by index, and places the results in a new array called **SARRAY** (an arbitrary name I invented for this article, meaning _Sorted ARRAY_).
Next, the variable **j** (another arbitrary name) is assigned the results of the **length()** function, which counts the number of items in **SARRAY**.
Finally, use a **for** loop to iterate through each item in **SARRAY** using the **printf()** function to print each key, followed by the corresponding value of that key in **ARRAY**.
### Running the script
To run your awk script, make it executable:
```
$ chmod +x sorter.awk
```
And then run it against the **penguins.list** sample data:
```
$ ./sorter.awk penguins.list
antipodes Megadyptes;antipodes;Milne-Edwards;1880;Yellow-eyed
chrysocome Eudyptes;chrysocome;Viellot;1816;Southern Rockhopper
demersus Spheniscus;demersus;Brisson;1760;African
forsteri Aptenodytes;forsteri;Miller,JF;1778;Emperor
linux Torvaldis;linux;Ewing,L;1996;Tux
minor Eudyptula;minor;Bonaparte;1867;Little Blue
papua Pygoscelis;papua;Wagler;1832;Gentoo
```
As you can see, the data is sorted by the second field.
This is a little restrictive. It would be better to have the flexibility to choose at runtime which field you want to use as your sorting key so you could use this script on any dataset and get meaningful results.
### Adding command options
You can pass a value into an awk script from the command line with the **-v** option and refer to it inside the script as a variable (here called **var**). Change your script so that your iterative clause uses **var** when creating your array:
```
{ # store each record, keyed by the field chosen at runtime
    ARRAY[$var] = $0;
}
```
Try running the script so that it sorts by the third field by passing the **-v var=3** option when you execute it:
```
$ ./sorter.awk -v var=3 penguins.list
Bonaparte Eudyptula;minor;Bonaparte;1867;Little Blue
Brisson Spheniscus;demersus;Brisson;1760;African
Ewing,L Torvaldis;linux;Ewing,L;1996;Tux
Miller,JF Aptenodytes;forsteri;Miller,JF;1778;Emperor
Milne-Edwards Megadyptes;antipodes;Milne-Edwards;1880;Yellow-eyed
Viellot Eudyptes;chrysocome;Viellot;1816;Southern Rockhopper
Wagler Pygoscelis;papua;Wagler;1832;Gentoo
```
### Fixes
This article has demonstrated how to sort data in pure GNU awk. The script can be improved, so if it's useful to you, spend some time researching [awk functions][3] in gawk's man page and customizing the script for better output.
Here is the complete script so far:
```
#!/usr/bin/awk -f
# GPLv3 appears here
# usage: ./sorter.awk -v var=NUM FILE
BEGIN { FS=";"; }
{ # store each record, keyed by the field chosen at runtime
    ARRAY[$var] = $0;
}
END {
    asorti(ARRAY,SARRAY);
    # get length
    j = length(SARRAY);
   
    for (i = 1; i <= j; i++) {
        printf("%s %s\n", SARRAY[i],ARRAY[SARRAY[i]])
    }
}
```
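One possible refinement, offered here as a sketch rather than as part of the original script: give **var** a default so the **-v** option becomes optional, use the count that **asorti()** returns, and print only the record itself instead of repeating the key:
```
#!/usr/bin/awk -f
# usage: ./sorter.awk [-v var=NUM] FILE
BEGIN { FS=";"; if (!var) var = 2; }   # fall back to the second field
{ ARRAY[$var] = $0; }                  # store each record, keyed by the chosen field
END {
    n = asorti(ARRAY, SARRAY);         # asorti() also returns the element count
    for (i = 1; i <= n; i++)
        print ARRAY[SARRAY[i]];
}
```
Note that **asorti()** compares indices as strings by default, so sorting by the year field is lexical rather than numeric; gawk's optional third argument to **asorti()** (for example, **"@ind_num_asc"**) switches the comparison mode.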
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/11/how-sort-awk
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_lead-steps-measure.png?itok=DG7rFZPk (Green graph of measurements)
[2]: https://opensource.com/article/19/10/get-sorted-sort
[3]: https://www.gnu.org/software/gawk/manual/html_node/Built_002din.html#Built_002din

View File

@ -0,0 +1,107 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Keyboard Shortcuts to Speed Up Your Work in Linux)
[#]: via: (https://opensourceforu.com/2019/11/keyboard-shortcuts-to-speed-up-your-work-in-linux/)
[#]: author: (S Sathyanarayanan https://opensourceforu.com/author/s-sathyanarayanan/)
Keyboard Shortcuts to Speed Up Your Work in Linux
======
[![Google Keyboard][1]][2]
_Manipulating the mouse, keyboard and menus takes up a lot of our time, which could be saved by using keyboard shortcuts. These not only save time, but also make the computer user more efficient._
Did you realise that switching from the keyboard to the mouse while typing takes up to two seconds each time? If a person works for eight hours every day, switching from the keyboard to the mouse once a minute, and there are around 240 working days in a year, the amount of time wasted (as per calculations done by Brainscape) is:
_[2 wasted seconds/min] x [480 minutes per day] x 240 working days per year = 64 wasted hours per year_
This is equal to eight working days lost per year (64 of the roughly 1,920 hours worked in that time), so learning keyboard shortcuts can increase productivity by about 3.3 per cent (_<https://www.brainscape.com/blog/2011/08/keyboard-shortcuts-economy/>_).
Keyboard shortcuts provide a quicker way to do a task that would otherwise take multiple steps with the mouse and/or the menu. Figure 1 lists a few of the most frequently used shortcuts in Ubuntu 18.04 and in Web browsers. I am omitting very well-known shortcuts such as copy and paste, as well as ones that are rarely used. Readers can refer to online resources for a comprehensive list. Note that the Windows key is known as the Super key in Linux.
**General shortcuts**
A list of general shortcuts is given below.
[![][3]][4]
**Print Screen and video recording of the screen**
The following shortcuts can be used to print the screen or take a video recording of the screen.
[![][5]][6]

**Switching between applications**
The shortcut keys listed here can be used to switch between applications.
[![][7]][8]
**Tile windows**
The windows can be tiled in different ways using the shortcuts given below.
[![][9]][10]
**Browser shortcuts**
The most frequently used shortcuts for browsers are listed here. Most of the shortcuts are common to the Chrome/Firefox browsers.
**Key combination** | **Action**
---|---
Ctrl + T | Opens a new tab.
Ctrl + Shift + T | Opens the most recently closed tab.
Ctrl + D | Adds a new bookmark.
Ctrl + W | Closes the browser tab.
Alt + D | Positions the cursor in the browsers address bar.
F5 or Ctrl-R | Refreshes a page.
Ctrl + Shift + Del | Clears private data and history.
Ctrl + N | Opens a new window.
Home | Scrolls to the top of the page.
End | Scrolls to the bottom of the page.
Ctrl + J | Opens the Downloads folder (in Chrome).
F11 | Full-screen view (toggle effect).
**Terminal shortcuts**
Here is a list of terminal shortcuts.
[![][11]][12]

You can also configure your own custom shortcuts in Ubuntu (a scripted alternative is sketched after these steps), as follows:
* Click on Settings in Ubuntu Dash.
* Select the Devices tab in the left menu of the Settings window.
* Select the Keyboard tab in the Devices menu.
* The + button is displayed at the bottom of the right panel. Click on the + sign to open the custom shortcut dialogue box and configure a new shortcut.
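On a stock Ubuntu 18.04 (GNOME) desktop, the same custom shortcut can also be created from a script with gsettings. The schema and paths below are the ones GNOME conventionally uses for custom keybindings, and the shortcut itself (launching a terminal on Super+T) is only an example, so treat this as a sketch to adapt:
```
# Register one custom keybinding slot (this overwrites any existing list).
gsettings set org.gnome.settings-daemon.plugins.media-keys custom-keybindings \
  "['/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/']"

# Fill in the slot: a name, the command to run, and the key combination.
KB=org.gnome.settings-daemon.plugins.media-keys.custom-keybinding:/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/
gsettings set "$KB" name 'Open Terminal'
gsettings set "$KB" command 'gnome-terminal'
gsettings set "$KB" binding '<Super>t'
```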
Learning the shortcuts mentioned in this article can save a lot of time and make you more productive.
**Reference**
_Cohen, Andrew. How keyboard shortcuts could revive Americas economy; [www.brainscape.com][13]. [Online] Brainscape, 26 May 2017; <https://www.brainscape.com/blog/2011/08/keyboard-shortcuts-economy/>_
![Avatar][14]
[S Sathyanarayanan][15]
The author is currently working with Sri Sathya Sai University for Human Excellence, Gulbarga. He has more than 25 years of experience in systems management and in teaching IT courses. He is an enthusiastic promoter of FOSS and can be reached at [sathyanarayanan.brn@gmail.com][16].
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/11/keyboard-shortcuts-to-speed-up-your-work-in-linux/
作者:[S Sathyanarayanan][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/s-sathyanarayanan/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/12/Google-Keyboard.jpg?resize=696%2C418&ssl=1 (Google Keyboard)
[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/12/Google-Keyboard.jpg?fit=750%2C450&ssl=1
[3]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/1.png?resize=350%2C319&ssl=1
[4]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/1.png?ssl=1
[5]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/NW.png?resize=350%2C326&ssl=1
[6]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/NW.png?ssl=1
[7]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/2.png?resize=350%2C264&ssl=1
[8]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/2.png?ssl=1
[9]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/3.png?resize=350%2C186&ssl=1
[10]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/3.png?ssl=1
[11]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/11/7.png?resize=350%2C250&ssl=1
[12]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/11/7.png?ssl=1
[13]: http://www.brainscape.com
[14]: https://secure.gravatar.com/avatar/736684a2707f2ed7ae72675edf7bb3ee?s=100&r=g
[15]: https://opensourceforu.com/author/s-sathyanarayanan/
[16]: mailto:sathyanarayanan.brn@gmail.com

View File

@ -0,0 +1,95 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Update a Fedora Linux System [Beginners Tutorial])
[#]: via: (https://itsfoss.com/update-fedora/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
How To Update a Fedora Linux System [Beginners Tutorial]
======
_**This quick tutorial shows various ways to update a Fedora Linux install.**_
So, the other day, I installed the [newly released Fedora 31][1]. Ill be honest with you, it was my first time with a [non-Ubuntu distribution][2].
The first thing I did after installing Fedora was to try to install some software. I opened the software center, only to find that it was broken: I couldn't install any application from it.
When I discussed it with the team, Abhishek advised me to update the system first. I did that and poof! everything was back to normal. After updating the [Fedora][3] system, the software center worked as it should.
Sometimes we just ignore the updates and keep troubleshooting the issue we face. But no matter how big or small the issue is, you should keep your system up to date to avoid such problems in the first place.
In this article, Ill show you various possible methods to update your Fedora Linux system.
* [Update Fedora using software center][4]
* [Update Fedora using command line][5]
* [Update Fedora from system settings][6]
Keep in mind that updating Fedora means installing the security patches, kernel updates and software updates. If you want to update from one version of Fedora to another, that is called a version upgrade and you can [read about the Fedora version upgrade procedure here][7].
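For reference, a version upgrade is typically done with dnf's system-upgrade plugin; the release number below is only an example, and the linked article covers the full procedure:
```
sudo dnf install dnf-plugin-system-upgrade        # one-time plugin install
sudo dnf system-upgrade download --releasever=31  # fetch the new release
sudo dnf system-upgrade reboot                    # reboot and apply it
```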
### Updating Fedora From The Software Center
![Software Center][8]
You will most likely be notified that you have some system updates to look at; clicking on that notification will launch the software center.
All you have to do is hit Update and verify the root password to start updating.
In case you did not get a notification for the available updates, you can simply launch the software center and head to the “Updates” tab. Now, you just need to proceed with the updates listed.
### Updating Fedora Using The Terminal
If you cannot load the software center for some reason, you can always use the dnf package manager commands to easily update your system.
Simply launch the terminal and type in the following command to start updating (you should be prompted to verify the root password):
```
sudo dnf upgrade
```
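A couple of related dnf options may also come in handy here:
```
dnf check-update            # list pending updates (exit code 100 if any exist)
sudo dnf upgrade --refresh  # force fresh repository metadata, then upgrade
sudo dnf upgrade --security # apply security-related updates only
```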
**dnf update vs dnf upgrade**

Youll find that there are two dnf commands available: dnf update and dnf upgrade.
Both commands do the same job, which is to install all the updates provided by Fedora.
Then why are there both dnf update and dnf upgrade, and which one should you use?
Well, dnf update is basically an alias of dnf upgrade. While dnf update still works, good practice is to use dnf upgrade because that is the real command.
### Updating Fedora From System Settings
![][9]
If nothing else works (or if youre already in the System settings for some reason), navigate your way to the “Details” option at the bottom of your settings.
This should show the details of your OS and hardware, along with a “Check for Updates” button as shown in the image above. You just need to click on it and provide the root/admin password to proceed to install the available updates.
**Wrapping Up**
As explained above, it is quite easy to update your Fedora installation. Youve got three methods to choose from, so you have nothing to worry about.
If you notice any issue in following the instructions mentioned above, feel free to let me know in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/update-fedora/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/fedora-31-release/
[2]: https://itsfoss.com/non-ubuntu-beginner-linux/
[3]: https://getfedora.org/
[4]: tmp.Lqr0HBqAd9#software-center
[5]: tmp.Lqr0HBqAd9#command-line
[6]: tmp.Lqr0HBqAd9#system-settings
[7]: https://itsfoss.com/upgrade-fedora-version/
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/11/software-center.png?ssl=1
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/11/system-settings-fedora-1.png?ssl=1