Merge remote-tracking branch 'LCTT/master'

This commit is contained in:
Xingyu.Wang 2019-04-22 22:17:40 +08:00
commit 97c1a75a58
25 changed files with 2855 additions and 152 deletions

View File

@ -1,38 +1,37 @@
[#]: collector: (lujun9972)
[#]: translator: (zgj1024)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10765-1.html)
[#]: subject: (HTTPie A Modern Command Line HTTP Client For Curl And Wget Alternative)
[#]: via: (https://www.2daygeek.com/httpie-curl-wget-alternative-http-client-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
HTTPie替代 Curl 和 Wget 的现代 HTTP 命令行客户端
======
大多数时间我们会使用 `curl` 命令或是 `wget` 命令下载文件或者做其他事。
我们以前曾写过 [最佳命令行下载管理器][1] 的文章。你可以点击相应的 URL 链接来浏览这些文章。
* [aria2 Linux 下的多协议命令行下载工具][2]
* [Axel Linux 下的轻量级命令行下载加速器][3]
* [Wget Linux 下的标准命令行下载工具][4]
* [curl Linux 下的实用的命令行下载工具][5]
今天我们将讨论同样的话题,这个实用程序名为 HTTPie。
它是一个现代的 HTTP 命令行客户端,也是 `curl` 和 `wget` 命令的最佳替代品。
### 什么是 HTTPie
HTTPie (发音是 aitch-tee-tee-pie) 是一个 HTTP 命令行客户端。
HTTPie 工具是现代的 HTTP 命令行客户端,它能通过命令行界面与 Web 服务进行交互。
它提供一个简单的 `http` 命令,允许使用简单而自然的语法发送任意的 HTTP 请求,并会显示彩色的输出。
HTTPie 能用于测试、调试及与 HTTP 服务器交互。
### 主要特点
@ -40,50 +39,52 @@ HTTPie 能用于测试、debugging及与 HTTP 服务器交互。
* 格式化的及彩色化的终端输出
* 内置 JSON 支持
* 表单和文件上传
* HTTPS、代理和认证
* 任意请求数据
* 自定义头部
* 持久化会话
* 类似 `wget` 的下载
* 支持 Python 2.7 和 3.x
### 在 Linux 下如何安装 HTTPie
大部分 Linux 发行版都提供了系统包管理器,可以用它来安装。
Fedora 系统,使用 [DNF 命令][6] 来安装 HTTPie。
```
$ sudo dnf install httpie
```
Debian/Ubuntu 系统,使用 [APT-GET 命令][7] 或 [APT 命令][8] 来安装 HTTPie。
```
$ sudo apt install httpie
```
基于 Arch Linux 的系统,使用 [Pacman 命令][9] 来安装 HTTPie。
```
$ sudo pacman -S httpie
```
RHEL/CentOS 的系统,使用 [YUM 命令][10] 来安装 HTTPie。
```
$ sudo yum install httpie
```
openSUSE Leap 系统,使用 [Zypper 命令][11] 来安装 HTTPie。
```
$ sudo zypper install httpie
```
### 用法
#### 如何使用 HTTPie 请求 URL
HTTPie 的基本用法是将网站的 URL 作为参数。
```
# http 2daygeek.com
@ -99,9 +100,9 @@ Transfer-Encoding: chunked
Vary: Accept-Encoding
```
#### 如何使用 HTTPie 下载文件
你可以使用带 `--download` 参数的 HTTPie 命令下载文件,类似于 `wget` 命令。
```
# http --download https://www.2daygeek.com/wp-content/uploads/2019/04/Anbox-Easy-Way-To-Run-Android-Apps-On-Linux.png
@ -148,10 +149,11 @@ Vary: Accept-Encoding
Downloading 31.31 kB to "Anbox-1.png"
Done. 31.31 kB in 0.01551s (1.97 MB/s)
```
#### 如何使用 HTTPie 恢复部分下载?
你可以使用带 `-c` 参数的 HTTPie 继续下载。
```
# http --download --continue https://speed.hetzner.de/100MB.bin -o 100MB.bin
HTTP/1.1 206 Partial Content
@ -169,24 +171,24 @@ Downloading 100.00 MB to "100MB.bin"
| 24.14 % 24.14 MB 1.12 MB/s 0:01:07 ETA^C
```
你可以根据下面的输出验证是否是同一个文件:
```
[email protected]:/var/log# ls -lhtr 100MB.bin
-rw-r--r-- 1 root root 25M Apr 9 01:33 100MB.bin
```
#### 如何使用 HTTPie 上传文件?
你可以通过使用带有小于号 `<` 的 HTTPie 命令上传文件
```
$ http https://transfer.sh < Anbox-1.png
```
#### 如何使用带有重定向符号 > 下载文件?
你可以使用带有重定向 `>` 符号的 HTTPie 命令下载文件。
```
# http https://www.2daygeek.com/wp-content/uploads/2019/03/How-To-Install-And-Enable-Flatpak-Support-On-Linux-1.png > Flatpak.png
@ -195,7 +197,7 @@ $ http https://transfer.sh < Anbox-1.png
-rw-r--r-- 1 root root 47K Apr 9 01:44 Flatpak.png
```
#### 发送一个 HTTP GET 请求?
你可以在请求中使用 HTTP GET 方法。GET 方法会使用给定的 URI 从给定的服务器检索信息。
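下面给出一个发送 GET 请求的简单示例(这是本文之外补充的演示示例,这里用公共测试服务 httpbin.org 代替实际站点):

```
$ http GET httpbin.org/get
```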
@ -214,7 +216,7 @@ Transfer-Encoding: chunked
Vary: Accept-Encoding
```
#### 提交表单?
使用以下格式提交表单。POST 请求用于向服务器发送数据,例如客户信息、文件上传等,就像使用 HTML 表单那样。
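例如(以下是补充的演示示例,httpbin.org 和字段名均为假设值;`-f`/`--form` 选项让 HTTPie 以表单编码提交数据):

```
$ http -f POST httpbin.org/post name='Magesh' email='user@example.com'
```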
@ -261,24 +263,24 @@ Server: Apache/2.4.29 (Ubuntu)
Vary: Accept-Encoding
```
#### HTTP 认证?
当前支持的身份认证方案是基本认证(Basic)和摘要验证(Digest)。
基本认证
```
$ http -a username:password example.org
```
摘要验证
```
$ http -A digest -a username:password example.org
```
提示输入密码:
```
$ http -a username example.org
```
@ -289,8 +291,8 @@ via: https://www.2daygeek.com/httpie-curl-wget-alternative-http-client-linux/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[zgj1024](https://github.com/zgj1024)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -306,4 +308,4 @@ via: https://www.2daygeek.com/httpie-curl-wget-alternative-http-client-linux/
[8]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
[9]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
[10]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
[11]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/

View File

@ -0,0 +1,130 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cisco Talos details exceptionally dangerous DNS hijacking attack)
[#]: via: (https://www.networkworld.com/article/3389747/cisco-talos-details-exceptionally-dangerous-dns-hijacking-attack.html#tk.rss_all)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
Cisco Talos details exceptionally dangerous DNS hijacking attack
======
Cisco Talos says state-sponsored attackers are battering DNS to gain access to sensitive networks and systems
![Peshkova / Getty][1]
Security experts at Cisco Talos have released a [report detailing][2] what it calls the “first known case of a domain name registry organization that was compromised for cyber espionage operations.”
Talos calls the ongoing cyber threat campaign “Sea Turtle” and says that state-sponsored attackers are abusing DNS to harvest credentials and gain access to sensitive networks and systems in a way that victims are unable to detect, an approach that displays unique knowledge of how to manipulate DNS, Talos stated.
**More about DNS:**
* [DNS in the cloud: Why and why not][3]
* [DNS over HTTPS seeks to make internet use more private][4]
* [How to protect your infrastructure from DNS cache poisoning][5]
* [ICANN housecleaning revokes old DNS security key][6]
By obtaining control of victims' DNS, the attackers can change or falsify any data on the Internet and illicitly modify DNS name records to point users to actor-controlled servers; users visiting those sites would never know, Talos reported.
DNS, routinely known as the Internet's phonebook, is part of the global internet infrastructure that translates between familiar names and the numbers computers need to access a website or send an email.
### Threat to DNS could spread
At this point Talos says Sea Turtle isn't compromising organizations in the U.S.
“While this incident is limited to targeting primarily national security organizations in the Middle East and North Africa, and we do not want to overstate the consequences of this specific campaign, we are concerned that the success of this operation will lead to actors more broadly attacking the global DNS system,” Talos stated.
Talos reports that the ongoing operation likely began as early as January 2017 and has continued through the first quarter of 2019. “Our investigation revealed that approximately 40 different organizations across 13 different countries were compromised during this campaign,” Talos stated. “We assess with high confidence that this activity is being carried out by an advanced, state-sponsored actor that seeks to obtain persistent access to sensitive networks and systems.”
**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][7] ]**
Talos says the attackers directing the Sea Turtle campaign show signs of being highly sophisticated and have continued their attacks despite public reports of their activities. In most cases, threat actors stop or slow down their activities once their campaigns are publicly revealed, suggesting the Sea Turtle actors are unusually brazen and may be difficult to deter going forward, Talos stated.
In January the Department of Homeland Security (DHS) [issued an alert][8] about this activity, warning that an attacker could redirect user traffic and obtain valid encryption certificates for an organization's domain names.
At that time the DHS's [Cybersecurity and Infrastructure Security Agency][9] said in its [Emergency Directive][9] that it was tracking a series of incidents targeting DNS infrastructure. CISA wrote that it “is aware of multiple executive branch agency domains that were impacted by the tampering campaign and has notified the agencies that maintain them.”
### DNS hijacking
CISA said that attackers have managed to intercept and redirect web and mail traffic and could target other networked services. The agency said the attacks start with compromising user credentials of an account that can make changes to DNS records. Then the attacker alters DNS records, like Address, Mail Exchanger, or Name Server records, replacing the legitimate address of the services with an address the attacker controls.
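For illustration (these queries are mine, not from the CISA directive, and example.com stands in for a real target domain), the record types in question can be inspected with standard `dig` queries:

```
$ dig +short A example.com    # Address record
$ dig +short MX example.com   # Mail Exchanger record
$ dig +short NS example.com   # Name Server record
```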
To achieve their nefarious goals, Talos stated the Sea Turtle accomplices:
* Use DNS hijacking through the use of actor-controlled name servers.
* Are aggressive in their pursuit targeting DNS registries and a number of registrars, including those that manage country-code top-level domains (ccTLD).
* Use Let's Encrypt, Comodo, Sectigo, and self-signed certificates in their man-in-the-middle (MitM) servers to gain the initial round of credentials.
* Steal victim organizations' legitimate SSL certificates and use them on actor-controlled servers.
Such actions also distinguish Sea Turtle from an earlier DNS exploit known as DNSpionage, which [Talos reported][10] on in November 2018.
Talos noted “with high confidence” that these operations are distinctly different and independent from the operations performed by [DNSpionage.][11]
In that report, Talos said a DNSpionage campaign utilized two fake, malicious websites containing job postings that were used to compromise targets via malicious Microsoft Office documents with embedded macros. The malware supported HTTP and DNS communication with the attackers.
In a separate DNSpionage campaign, the attackers used the same IP address to redirect the DNS of legitimate .gov and private company domains. During each DNS compromise, the actor carefully generated Let's Encrypt certificates for the redirected domains. Let's Encrypt provides X.509 certificates for [Transport Layer Security (TLS)][12] free of charge to the user, Talos said.
The Sea Turtle campaign gained initial access either by exploiting known vulnerabilities or by sending spear-phishing emails. Talos said it believes the attackers have exploited multiple known common vulnerabilities and exposures (CVEs) to either gain initial access or to move laterally within an affected organization. Talos research further shows that the known exploits used by Sea Turtle include:
* CVE-2009-1151: PHP code injection vulnerability affecting phpMyAdmin
* CVE-2014-6271: RCE affecting the GNU Bash system, specifically via SMTP (this was part of the Shellshock CVEs)
* CVE-2017-3881: RCE by an unauthenticated user with elevated privileges on Cisco switches
* CVE-2017-6736: Remote code execution (RCE) for the Cisco Integrated Services Router 2811
* CVE-2017-12617: RCE affecting Apache web servers running Tomcat
* CVE-2018-0296: Directory traversal allowing unauthorized access to Cisco Adaptive Security Appliances (ASAs) and firewalls
* CVE-2018-7600: RCE for websites built with Drupal, aka “Drupalgeddon”
“As with any initial access involving a sophisticated actor, we believe this list of CVEs to be incomplete,” Talos stated. “The actor in question can leverage known vulnerabilities as they encounter a new threat surface. This list only represents the observed behavior of the actor, not their complete capabilities.”
Talos says that the Sea Turtle campaign continues to be highly successful for several reasons. “First, the actors employ a unique approach to gain access to the targeted networks. Most traditional security products such as IDS and IPS systems are not designed to monitor and log DNS requests,” Talos stated. “The threat actors were able to achieve this level of success because the DNS domain space system added security into the equation as an afterthought. Had more ccTLDs implemented security features such as registrar locks, attackers would be unable to redirect the targeted domains.”
Talos said the attackers also used previously undisclosed techniques such as certificate impersonation. “This technique was successful in part because the SSL certificates were created to provide confidentiality, not integrity. The attackers stole organizations' SSL certificates associated with security appliances such as [Cisco's Adaptive Security Appliance] to obtain VPN credentials, allowing the actors to gain access to the targeted network and have long-term persistent access,” Talos stated.
### Cisco Talos DNS attack mitigation strategy
To protect against Sea Turtle, Cisco recommends:
* Use a registry lock service, which will require an out-of-band message before any changes can occur to an organization's DNS record.
* If your registrar does not offer a registry-lock service, Talos recommends implementing multi-factor authentication, such as DUO, to access your organization's DNS records.
* If you suspect you were targeted by this type of intrusion, Talos recommends instituting a network-wide password reset, preferably from a computer on a trusted network.
* Apply patches, especially on internet-facing machines. Network administrators can monitor passive DNS records on their domains to check for abnormalities (a minimal sketch of such a check follows this list).
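As a sketch of that kind of monitoring (my example, not a Talos tool; the domain, baseline name servers, and alert address are all placeholders):

```
#!/bin/sh
# Alert if a domain's NS records drift from a known-good baseline.
DOMAIN="example.com"
BASELINE="ns1.example-dns.net. ns2.example-dns.net."
CURRENT=$(dig +short NS "$DOMAIN" | sort | paste -sd ' ' -)
if [ "$CURRENT" != "$BASELINE" ]; then
    echo "WARNING: NS records for $DOMAIN changed to: $CURRENT" \
        | mail -s "DNS alert for $DOMAIN" ops@example.com
fi
```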
Join the Network World communities on [Facebook][13] and [LinkedIn][14] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3389747/cisco-talos-details-exceptionally-dangerous-dns-hijacking-attack.html#tk.rss_all
作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/man-in-boat-surrounded-by-sharks_risk_fear_decision_attack_threat_by-peshkova-getty-100786972-large.jpg
[2]: https://blog.talosintelligence.com/2019/04/seaturtle.html
[3]: https://www.networkworld.com/article/3273891/hybrid-cloud/dns-in-the-cloud-why-and-why-not.html
[4]: https://www.networkworld.com/article/3322023/internet/dns-over-https-seeks-to-make-internet-use-more-private.html
[5]: https://www.networkworld.com/article/3298160/internet/how-to-protect-your-infrastructure-from-dns-cache-poisoning.html
[6]: https://www.networkworld.com/article/3331606/security/icann-housecleaning-revokes-old-dns-security-key.html
[7]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[8]: https://www.networkworld.com/article/3336201/batten-down-the-dns-hatches-as-attackers-strike-feds.html
[9]: https://cyber.dhs.gov/ed/19-01/
[10]: https://blog.talosintelligence.com/2018/11/dnspionage-campaign-targets-middle-east.html
[11]: https://krebsonsecurity.com/tag/dnspionage/
[12]: https://www.networkworld.com/article/2303073/lan-wan-what-is-transport-layer-security-protocol.html
[13]: https://www.facebook.com/NetworkWorld/
[14]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,69 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Clearing up confusion between edge and cloud)
[#]: via: (https://www.networkworld.com/article/3389364/clearing-up-confusion-between-edge-and-cloud.html#tk.rss_all)
[#]: author: (Anne Taylor https://www.networkworld.com/author/Anne-Taylor/)
Clearing up confusion between edge and cloud
======
The benefits of edge computing are not just hype; however, that doesn't mean you should throw cloud computing initiatives to the wind.
![iStock][1]
Edge computing and cloud computing are sometimes discussed as if they're mutually exclusive approaches to network infrastructure. While they may function in different ways, utilizing one does not preclude the use of the other.
Indeed, [Futurum Research][2] found that, among companies that have deployed edge projects, only 15% intend to separate these efforts from their cloud computing initiatives — largely for security or compartmentalization reasons.
So then, what's the difference, and how do edge and cloud work together?
**Location, location, location**
Moving data and processing to the cloud, as opposed to on-premises data centers, has enabled the business to move faster, more efficiently, less expensively — and in many cases, more securely.
Yet cloud computing is not without challenges, particularly:
* Users will abandon a graphics-heavy website if it doesn't load quickly. So, imagine the lag for compute-heavy processing associated with artificial intelligence or machine learning functions.
* The strength of network connectivity is crucial for large data sets. As enterprises increasingly generate data, particularly with the adoption of Internet of Things (IoT), traditional cloud connections will be insufficient.
To make up for the lack of speed and connectivity with cloud, processing for mission-critical applications will need to occur closer to the data source. Maybe that's a robot on the factory floor, digital signage at a retail store, or an MRI machine in a hospital. That's edge computing, which reduces the distance the data must travel and thereby boosts the performance and reliability of applications and services.
**One doesn't supersede the other**
That said, the benefits gained by edge computing don't negate the need for cloud. In many cases, IT will now become a decision-maker in terms of best usage for each. For example, edge might make sense for devices running processing-power-hungry apps such as IoT, artificial intelligence, and machine learning. And cloud will work for apps where time isn't necessarily of the essence, like inventory or big-data projects.
> “By being able to triage the types of data processing on the edge versus that heading to the cloud, we can keep both systems running smoothly, keeping our customers and employees safe and happy,” [writes Daniel Newman][3], principal analyst for Futurum Research.
And in reality, edge will require cloud. “To enable digital transformation, you have to build out the edge computing side and connect it with the cloud,” [Tony Antoun][4], senior vice president of edge and digital at GE Digital, told _Automation World_. “It's a journey from the edge to the cloud and back, and the cycle keeps continuing. You need both to enrich and boost the business and take advantage of different points within this virtual lifecycle.”
**Ensuring resiliency of cloud and edge**
Both edge and cloud computing require careful consideration of the underlying processing power. Connectivity and availability, no matter the application, are always critical measures.
But especially for the edge, it will be important to have a resilient architecture. Companies should focus on ensuring security, redundancy, connectivity, and remote management capabilities.
Discover how your edge and cloud computing environments can coexist at [APC.com][5].
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3389364/clearing-up-confusion-between-edge-and-cloud.html#tk.rss_all
作者:[Anne Taylor][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Anne-Taylor/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/istock-612507606-100793995-large.jpg
[2]: https://futurumresearch.com/edge-computing-from-edge-to-enterprise/
[3]: https://futurumresearch.com/edge-computing-data-centers/
[4]: https://www.automationworld.com/article/technologies/cloud-computing/its-not-edge-vs-cloud-its-both
[5]: https://www.apc.com/us/en/solutions/business-solutions/edge-computing.jsp

View File

@ -0,0 +1,58 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Startup MemVerge combines DRAM and Optane into massive memory pool)
[#]: via: (https://www.networkworld.com/article/3389358/startup-memverge-combines-dram-and-optane-into-massive-memory-pool.html#tk.rss_all)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Startup MemVerge combines DRAM and Optane into massive memory pool
======
MemVerge bridges two technologies that are already a bridge.
![monsitj / Getty Images][1]
A startup called MemVerge has announced software to combine regular DRAM with Intel's Optane DIMM persistent memory into a single clustered storage pool, without requiring any changes to applications.
MemVerge has been working with Intel in developing this new hardware platform for close to two years. It offers what it calls a Memory-Converged Infrastructure (MCI) to allow existing apps to use Optane DC persistent memory. It's architected to integrate seamlessly with existing applications.
**[ Read also:[Mass data fragmentation requires a storage rethink][2] ]**
Optane memory is designed to sit between high-speed memory and [solid-state drives][3] (SSDs) and acts as a cache for the SSD, since it has speed comparable to DRAM but SSD persistence. With Intel's new Xeon Scalable processors, this can make up to 4.5TB of memory available to a processor.
Optane runs in one of two modes: Memory Mode and App Direct Mode. In Memory Mode, the Optane memory functions like regular memory and is not persistent. In App Direct Mode, it functions as the SSD cache, but apps don't natively support it. They need to be tweaked to function properly in Optane memory.
As it was explained to me, apps aren't designed for persistent memory, where the data is already in memory on powerup rather than having to be loaded from storage. The app has to know that memory doesn't go away and that it does not need to shuffle data back and forth between storage and memory. Therefore, apps don't natively work in persistent memory.
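For context, this is roughly how the two Optane modes are provisioned today with Intel's stock `ipmctl` and `ndctl` utilities (a sketch for illustration only; MemVerge's own tooling isn't public, and exact flags may vary by version):

```
# Memory Mode: Optane capacity behaves as ordinary volatile RAM,
# with DRAM acting as a cache in front of it.
$ sudo ipmctl create -goal MemoryMode=100

# App Direct Mode: expose the DIMMs as persistent memory that
# applications must be modified to address explicitly.
$ sudo ipmctl create -goal PersistentMemoryType=AppDirect
$ sudo ndctl create-namespace --mode=fsdax
```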
### Why didn't Intel think of this?
All of which raises a question I can't get answered, at least not immediately: Why didn't Intel think of this when it created Optane in the first place?
MemVerge has what it calls Distributed Memory Objects (DMO) hypervisor technology to provide a logical convergence layer to run data-intensive workloads at memory speed with guaranteed data consistency across multiple systems. This allows Optane memory to process and derive insights from the enormous amounts of data in real time.
That's because MemVerge's technology makes random access as fast as sequential access. Normally, random access is slower than sequential access because of all the jumping around, versus reading one sequential file. But MemVerge can handle many small files as fast as it handles one large file.
MemVerge itself is actually software, with a single API for both DRAM and Optane. It's also available via a hyperconverged server appliance that comes with 2 Cascade Lake processors, up to 512 GB DRAM, 6TB of Optane memory, and 360TB of NVMe physical storage capacity.
However, all of this is still vapor. MemVerge doesn't expect to ship a beta product until at least June.
Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3389358/startup-memverge-combines-dram-and-optane-into-massive-memory-pool.html#tk.rss_all
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/big_data_center_server_racks_storage_binary_analytics_by_monsitj_gettyimages-951389152_3x2-100787358-large.jpg
[2]: https://www.networkworld.com/article/3323580/mass-data-fragmentation-requires-a-storage-rethink.html
[3]: https://www.networkworld.com/article/3326058/what-is-an-ssd.html
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,119 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Want to the know future of IoT? Ask the developers!)
[#]: via: (https://www.networkworld.com/article/3389877/want-to-the-know-future-of-iot-ask-the-developers.html#tk.rss_all)
[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
Want to know the future of IoT? Ask the developers!
======
A new survey of IoT developers reveals that connectivity, performance, and standards are growing areas of concern as IoT projects hit production.
![Avgust01 / Getty Images][1]
It may be a cliché that software developers rule the world, but if you want to know the future of an important technology, it pays to look at what the developers are doing. With that in mind, there are some real, on-the-ground insights for the entire internet of things (IoT) community to be gained in a new [survey of more than 1,700 IoT developers][2] (pdf) conducted by the [Eclipse Foundation][3].
### IoT connectivity concerns
Perhaps not surprisingly, security topped the list of concerns, easily outpacing other IoT worries. But that's where things begin to get interesting. More than a fifth (21%) of IoT developers cited connectivity as a challenge, followed by data collection and analysis (19%), performance (18%), privacy (18%), and standards (16%).
Connectivity rose to second place after being the number three IoT concern for developers last year. Worries over security and data collection and analysis, meanwhile, actually declined slightly year over year. (Concerns over performance, privacy, and standards also increased significantly from last year.)
**[ Learn more:[Download a PDF bundle of five essential articles about IoT in the enterprise][4] ]**
“If you look at the list of developers' top concerns with IoT in the survey,” said [Mike Milinkovich][5], executive director of the Eclipse Foundation, via email, “I think connectivity, performance, and standards stand out — those are speaking to the fact that the IoT projects are getting real, that they're getting out of sandboxes and into production.”
“With connectivity in IoT,” Milinkovich continued, “everything seems straightforward until you have a sensor in a corner somewhere — narrowband or broadband — and physical constraints make it hard to connect."
He also cited a proliferation of incompatible technologies that is driving developer concerns over connectivity.
![][6]
### IoT standards and interoperability
Milinkovich also addressed one of [my personal IoT bugaboos: interoperability][7]. “Standards is a proxy for interoperability” among products from different vendors, he explained, which is an “elusive goal” in industrial IoT (IIoT).
**[[Learn Java from beginning concepts to advanced design patterns in this comprehensive 12-part course!][8] ]**
“IIoT is about breaking down the proprietary silos and re-tooling the infrastructure that's been in our factories and logistics for many years using OSS standards and implementations — standard sets of protocols as opposed to vendor-specific protocols,” he said.
That becomes a big issue when you're deploying applications in the field and different manufacturers are using different protocols or non-standard extensions to existing protocols and the machines can't talk to each other.
**[ Also read:[Interoperability is the key to IoT success][7] ]**
“This ties back to the requirement of not just having open standards, but more robust implementations of those standards in open source stacks,” Milinkovich said. “To keep maturing, the market needs not just standards, but out-of-the-box interoperability between devices.”
“Performance is another production-grade concern,” he said. “When you're in development, you think you know the bottlenecks, but then you discover the real-world issues when you push to production.”
### Cloudy developments for IoT
The survey also revealed that in some ways, IoT is very much aligned with the larger technology community. For example, IoT use of public and hybrid cloud architectures continues to grow. Amazon Web Services (AWS) (34%), Microsoft Azure (23%), and Google Cloud Platform (20%) are the leading IoT cloud providers, just as they are throughout the industry. If anything, AWS's lead may be smaller in the IoT space than it is in other areas, though reliable cloud-provider market share figures are notoriously hard to come by.
But Milinkovich sees industrial IoT as “a massive opportunity for hybrid cloud” because many industrial IoT users are very concerned about minimizing latency with their factory data, what he calls “their gold.” He sees factories moving towards hybrid cloud environments, leveraging “modern infrastructure technology like Kubernetes, and building around open protocols like HTTP and MQTT while getting rid of the older proprietary protocols.”
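As a tiny illustration of the kind of open-protocol plumbing Milinkovich describes (my example, using the stock Mosquitto MQTT clients; the broker host and topic are made up):

```
# A constrained device publishes a sensor reading to an MQTT broker...
$ mosquitto_pub -h broker.example.com -t factory/line1/temp -m '{"celsius": 71.3}'

# ...while a gateway or cloud service subscribes to every line's readings.
$ mosquitto_sub -h broker.example.com -t 'factory/+/temp'
```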
### How IoT development is different
In some ways, the IoT development world doesn't seem much different than wider software development. For example, the top IoT programming languages mirror [the popularity of those languages][9] over all, with C and Java ruling the roost. (C led the way on constrained devices, while Java was the top choice for gateway and edge nodes, as well as the IoT cloud.)
![][10]
But Milinkovich noted that when developing for embedded or constrained devices, the programmer's interface to a device could be through any number of esoteric hardware connectors.
“You're doing development using emulators and simulators, and it's an inherently different and more complex interaction between your dev environment and the target for your application,” he said. “Sometimes hardware and software are developed in tandem, which makes it even more complicated.”
For example, he explained, building an IoT solution may bring in web developers working on front ends using JavaScript and Angular, while backend cloud developers control cloud infrastructure and embedded developers focus on building software to run on constrained devices.
No wonder IoT developers have so many things to worry about.
**More about IoT:**
* [What is the IoT? How the internet of things works][11]
* [What is edge computing and how its changing the network][12]
* [Most powerful Internet of Things companies][13]
* [10 Hot IoT startups to watch][14]
* [The 6 ways to make money in IoT][15]
* [What is digital twin technology? [and why it matters]][16]
* [Blockchain, service-centric networking key to IoT success][17]
* [Getting grounded in IoT networking and security][4]
* [Building IoT-ready networks must become a priority][18]
* [What is the Industrial IoT? [And why the stakes are so high]][19]
Join the Network World communities on [Facebook][20] and [LinkedIn][21] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3389877/want-to-the-know-future-of-iot-ask-the-developers.html#tk.rss_all
作者:[Fredric Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Fredric-Paul/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/iot_internet_of_things_mobile_connections_by_avgust01_gettyimages-1055659210_2400x1600-100788447-large.jpg
[2]: https://drive.google.com/file/d/17WEobD5Etfw5JnoKC1g4IME_XCtPNGGc/view
[3]: https://www.eclipse.org/
[4]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
[5]: https://blogs.eclipse.org/post/mike-milinkovich/measuring-industrial-iot%E2%80%99s-evolution
[6]: https://images.idgesg.net/images/article/2019/04/top-developer-concerns-2019-eclipse-foundation-100793974-large.jpg
[7]: https://www.networkworld.com/article/3204529/interoperability-is-the-key-to-iot-success.html
[8]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fjava
[9]: https://blog.newrelic.com/technology/popular-programming-languages-2018/
[10]: https://images.idgesg.net/images/article/2019/04/top-iot-programming-languages-eclipse-foundation-100793973-large.jpg
[11]: https://www.networkworld.com/article/3207535/internet-of-things/what-is-the-iot-how-the-internet-of-things-works.html
[12]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
[13]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html
[14]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html
[15]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html
[16]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html
[17]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
[18]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
[19]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
[20]: https://www.facebook.com/NetworkWorld/
[21]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,64 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: ('Fiber-in-air' 5G network research gets funding)
[#]: via: (https://www.networkworld.com/article/3389881/extreme-5g-network-research-gets-funding.html#tk.rss_all)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
'Fiber-in-air' 5G network research gets funding
======
A consortium of tech companies and universities plans to aggressively investigate the exploitation of D-Band to develop a new variant of 5G infrastructure.
![Peshkova / Getty Images][1]
Wireless transmission at data rates of around 45 Gbps could one day be commonplace, some engineers say. “Fiber-in-air” is how the latest variant of 5G infrastructure is being described. To get there, a Britain-funded consortium of chip makers, universities, and others intends to aggressively investigate the exploitation of D-Band. That part of the radio spectrum is at 151-174.8 GHz in millimeter wavelengths (mm-wave) and hasn't been used before.
The researchers intend to do it by riffing on a now roughly 70-year-old gun-like electron-sending device that can trace its roots back through the annals of radio history: The Traveling Wave Tube, or TWT, an electron gun-magnet-combo that was used in the development of television and still brings space images back to Earth.
**[ Also read:[The time of 5G is almost here][2] ]**
D-Band, the spectrum the researchers want to use, has the advantage that it's wide, so theoretically it should be good for fast, copious data rates. The problem with it, though, and the reason it hasn't thus far been used, is that it's subject to monkey-wrenching from atmospheric conditions such as rain, explains IQE, a semiconductor wafer and materials producer involved in the project, in a [press release][3]. The team says attenuation is fixable, though. Their solution is the now-aging TWTs.
The group, which includes BT, Filtronic, Glasgow University, Intel, Nokia Bell Labs, Optocap, and Teledyne e2v, has secured funding of the equivalent of $1.12 million USD from the U.K.'s [Engineering and Physical Sciences Research Council (EPSRC)][4]. That's the principal public funding body for engineering science research there.
### Tapping the power of TWTs
The DLINK system, as the team calls it, will use a high-power vacuum TWT with a special, newly developed tunneling diode and a modulator. Two bands of 10 GHz each will deliver the throughput, [explains Lancaster University on its website][5]. The tubes are, in fact, special amplifiers that produce 10 watts. That's 10 times what an equivalent solid-state solution would likely produce at the same spot in the band, they say. Energy is basically sent from the electron beam to an electric field generated by the input signal.
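A rough back-of-the-envelope check (my arithmetic, not the consortium's): pushing roughly 45 Gbps through two 10 GHz bands implies a spectral efficiency of about 45 ÷ 20 ≈ 2.25 bits per second per hertz, which is modest by the standards of modern modulation schemes, so the target looks plausible if the attenuation problem is solved.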
Despite TWTs being around for eons, “no D-band TWTs are available in the market.” The development of one is key to these fiber-in-air speeds, the researchers say.
They will include “unprecedented data rate and transmission distance,” IQE writes.
The TWT device, although used extensively in space wireless communications since its invention in the 1950s, is overlooked as a significant contributor to global communications systems, say a group of French researchers working separately from this project, who recently argue that TWTs should be given more recognition.
TWTs are “the unsung heroes of space exploration,” the Aix-Marseille Université researchers say in [an article on publisher Springer's website][6]. Springer is promoting the group's 2019-published [paper][7] in the European Physical Journal H in which they delve into the history of the simple electron gun and magnet device.
“Its role in the history of wireless communications and in the space conquest is significant, but largely ignored,” they write in their paper.
They will be pleased to hear it maybe isn't going away anytime soon.
Join the Network World communities on [Facebook][8] and [LinkedIn][9] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3389881/extreme-5g-network-research-gets-funding.html#tk.rss_all
作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/abstract_data_coding_matrix_structure_network_connections_by_peshkova_gettyimages-897683944_2400x1600-100788487-large.jpg
[2]: https://www.networkworld.com/article/3354477/mobile-world-congress-the-time-of-5g-is-almost-here.html
[3]: https://www.iqep.com/media/2019/03/iqe-partners-in-key-wireless-communications-project-for-5g-infrastructure-(1)/
[4]: https://epsrc.ukri.org/
[5]: http://wp.lancs.ac.uk/dlink/
[6]: https://www.springer.com/gp/about-springer/media/research-news/all-english-research-news/traveling-wave-tubes--the-unsung-heroes-of-space-exploration/16578434
[7]: https://link.springer.com/article/10.1140%2Fepjh%2Fe2018-90023-1
[8]: https://www.facebook.com/NetworkWorld/
[9]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,76 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cisco warns WLAN controller, 9000 series router and IOS/XE users to patch urgent security holes)
[#]: via: (https://www.networkworld.com/article/3390159/cisco-warns-wlan-controller-9000-series-router-and-iosxe-users-to-patch-urgent-security-holes.html#tk.rss_all)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
Cisco warns WLAN controller, 9000 series router and IOS/XE users to patch urgent security holes
======
Cisco says unpatched vulnerabilities could lead to DoS attacks, arbitrary code execution, and takeover of devices.
![Woolzian / Getty Images][1]
Cisco this week issued 31 security advisories but directed customer attention to “critical” patches for its IOS and IOS XE Software Cluster Management and IOS software for Cisco ASR 9000 Series routers. A number of other vulnerabilities also need attention if customers are running Cisco Wireless LAN Controllers.
The [first critical patch][2] has to do with a vulnerability in the Cisco Cluster Management Protocol (CMP) processing code in Cisco IOS and Cisco IOS XE Software that could allow an unauthenticated, remote attacker to send malformed CMP-specific Telnet options while establishing a Telnet session with an affected Cisco device configured to accept Telnet connections. An exploit could allow an attacker to execute arbitrary code and obtain full control of the device or cause a reload of the affected device, Cisco said.
**[ Also see[What to consider when deploying a next generation firewall][3]. | Get regularly scheduled insights by [signing up for Network World newsletters][4]. ]**
The problem has a Common Vulnerability Scoring System number of 9.8 out of 10.
According to Cisco, the Cluster Management Protocol utilizes Telnet internally as a signaling and command protocol between cluster members. The vulnerability is due to the combination of two factors:
* The failure to restrict the use of CMP-specific Telnet options only to internal, local communications between cluster members and instead accept and process such options over any Telnet connection to an affected device
* The incorrect processing of malformed CMP-specific Telnet options.
Cisco says the vulnerability can be exploited during Telnet session negotiation over either IPv4 or IPv6. This vulnerability can only be exploited through a Telnet session established _to_ the device; sending the malformed options on Telnet sessions _through_ the device will not trigger the vulnerability.
The company says there are no workarounds for this problem, but disabling Telnet as an allowed protocol for incoming connections would eliminate the exploit vector. Cisco recommends disabling Telnet and using SSH instead. Information on how to do both can be found on the [Cisco Guide to Harden Cisco IOS Devices][5]. For patch information [go here][6].
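For reference, replacing Telnet with SSH on an IOS device follows the well-known pattern below (a sketch only; the domain name and credentials are placeholders, and the hardening guide linked above is the authoritative reference):

```
! Generate keys, enable SSHv2, and restrict the vty lines to SSH.
configure terminal
 ip domain-name example.com
 crypto key generate rsa modulus 2048
 ip ssh version 2
 username admin privilege 15 secret S0meStr0ngSecret
 line vty 0 4
  transport input ssh
  login local
 end
```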
The second critical patch involves a vulnerability in the sysadmin virtual machine (VM) on Cisco's ASR 9000 carrier-class routers running Cisco IOS XR 64-bit Software that could let an unauthenticated, remote attacker access internal applications running on the sysadmin VM, Cisco said in the [advisory][7]. This CVSS also has a 9.8 rating.
**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][8] ]**
Cisco said the vulnerability is due to incorrect isolation of the secondary management interface from internal sysadmin applications. An attacker could exploit this vulnerability by connecting to one of the listening internal applications. A successful exploit could result in unstable conditions, including both denial of service (DoS) and remote unauthenticated access to the device, Cisco stated.
Cisco has released [free software updates][6] that address the vulnerability described in this advisory.
Lastly, Cisco wrote that [multiple vulnerabilities][9] in the administrative GUI configuration feature of Cisco Wireless LAN Controller (WLC) Software could let an authenticated, remote attacker cause the device to reload unexpectedly during device configuration when the administrator is using this GUI, causing a DoS condition on an affected device. The attacker would need to have valid administrator credentials on the device for this exploit to work, Cisco stated.
“These vulnerabilities are due to incomplete input validation for unexpected configuration options that the attacker could submit while accessing the GUI configuration menus. An attacker could exploit these vulnerabilities by authenticating to the device and submitting crafted user input when using the administrative GUI configuration feature,” Cisco stated.
“These vulnerabilities have a Security Impact Rating (SIR) of High because they could be exploited when the software fix for the Cisco Wireless LAN Controller Cross-Site Request Forgery Vulnerability is not in place,” Cisco stated. “In that case, an unauthenticated attacker who first exploits the cross-site request forgery vulnerability could perform arbitrary commands with the privileges of the administrator user by exploiting the vulnerabilities described in this advisory.”
Cisco has released [software updates][10] that address these vulnerabilities and said that there are no workarounds.
Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3390159/cisco-warns-wlan-controller-9000-series-router-and-iosxe-users-to-patch-urgent-security-holes.html#tk.rss_all
作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/compromised_data_security_breach_vulnerability_by_woolzian_gettyimages-475563052_2400x1600-100788413-large.jpg
[2]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20170317-cmp
[3]: https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: http://www.cisco.com/c/en/us/support/docs/ip/access-lists/13608-21.html
[6]: https://www.cisco.com/c/en/us/about/legal/cloud-and-software/end_user_license_agreement.html
[7]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190417-asr9k-exr
[8]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[9]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190417-wlc-iapp
[10]: https://www.cisco.com/c/en/us/support/web/tsd-cisco-worldwide-contacts.html
[11]: https://www.facebook.com/NetworkWorld/
[12]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,58 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Fujitsu completes design of exascale supercomputer, promises to productize it)
[#]: via: (https://www.networkworld.com/article/3389748/fujitsu-completes-design-of-exascale-supercomputer-promises-to-productize-it.html#tk.rss_all)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Fujitsu completes design of exascale supercomputer, promises to productize it
======
Fujitsu hopes to be the first to offer exascale supercomputing using Arm processors.
![Riken Advanced Institute for Computational Science][1]
Fujitsu and Japanese research institute Riken announced that the design for the post-K supercomputer, to be launched in 2021, is complete and that they will productize the design for sale later this year.
The K supercomputer was a massive system, built by Fujitsu and housed at the Riken Advanced Institute for Computational Science campus in Kobe, Japan, with more than 80,000 nodes and using Sparc64 VIIIfx processors, a derivative of the Sun Microsystems Sparc processor developed under a license agreement that pre-dated Oracle buying out Sun in 2010.
**[ Also read:[10 of the world's fastest supercomputers][2] ]**
It was ranked as the top supercomputer when it was launched in June 2011 with a computation speed of over 8 petaflops. And in November 2011, K became the first computer to top 10 petaflops. It was eventually surpassed as the world's fastest supercomputer by IBM's Sequoia, but even now, eight years later, it's still in the top 20 of supercomputers in the world.
### What's in the Post-K supercomputer?
The new system, dubbed “Post-K,” will feature an Arm-based processor called A64FX, a high-performance CPU developed by Fujitsu, designed for exascale systems. The chip is based on the Armv8 design, which is popular in smartphones, with 48 cores plus four “assistant” cores and the ability to access up to 32GB of memory per chip.
A64FX is the first CPU to adopt the Scalable Vector Extension (SVE), an instruction set specifically designed for Arm-based supercomputers. Fujitsu claims A64FX will offer a peak double precision (64-bit) floating point operations performance of over 2.7 teraflops per chip. The system will have one CPU per node and 384 nodes per rack. That comes out to one petaflop per rack.
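As a quick sanity check on those numbers: 2.7 teraflops per chip × 1 chip per node × 384 nodes per rack ≈ 1,037 teraflops, or just over a petaflop per rack, which matches the figure above.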
Contrast that with Summit, the top supercomputer in the world, built by IBM for the Oak Ridge National Laboratory using IBM Power9 processors and Nvidia GPUs. A Summit rack has a peak compute of 864 teraflops.
Let me put it another way: IBM's Power processor and Nvidia's Tesla are about to get pwned by a derivative of the chip in your iPhone.
**[[Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][3] ]**
Fujitsu will productize the Post-K design and sell it as the successor to the Fujitsu Supercomputer PrimeHPC FX100. The company said it is also considering measures such as developing an entry-level model that will be easy to deploy, or supplying these technologies to other vendors.
Post-K will be installed in the Riken Center for Computational Science (R-CCS), where the K computer is currently located. The system will be one of the first exascale supercomputers in the world, although the U.S. and China are certainly gunning to be first if only for bragging rights.
Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3389748/fujitsu-completes-design-of-exascale-supercomputer-promises-to-productize-it.html#tk.rss_all
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/06/riken_advanced_institute_for_computational_science_k-computer_supercomputer_1200x800-100762135-large.jpg
[2]: https://www.networkworld.com/article/3236875/embargo-10-of-the-worlds-fastest-supercomputers.html
[3]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,73 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Most data center workers happy with their jobs -- despite the heavy demands)
[#]: via: (https://www.networkworld.com/article/3389359/most-data-center-workers-happy-with-their-jobs-despite-the-heavy-demands.html#tk.rss_all)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Most data center workers happy with their jobs -- despite the heavy demands
======
An Informa Engage and Data Center Knowledge survey finds data center workers are content with their jobs, so much so they would encourage their children to go into that line of work.
![Thinkstock][1]
A [survey conducted by Informa Engage and Data Center Knowledge][2] finds data center workers overall are content with their jobs, so much so that they would encourage their children to go into that line of work despite the heavy demands on their time and brainpower.
Overall satisfaction is pretty good, with 72% of respondents generally agreeing with the statement “I love my current job,” while a third strongly agreed. And 75% agreed with the statement, “If my child, niece or nephew asked, I'd recommend getting into IT.”
**[ Also read:[20 hot jobs ambitious IT pros should shoot for][3] ]**
And there is a feeling of significance among data center workers, with 88% saying they feel they are very important to the success of their employer.
That's despite some challenges, not the least of which is a skills and certification shortage. Survey respondents cite a lack of skills as the biggest area of concern. Only 56% felt they had the training necessary to do their job, and 74% said they had been in the IT industry for more than a decade.
The industry offers certification programs (every major IT hardware provider has them), but 61% said they have not completed or renewed certificates in the past 12 months. There are several reasons why.
A third (34%) said it was due to a lack of a training budget at their organization, while 24% cited a lack of time, 16% said management doesnt see a need for training, and 16% cited no training plans within their workplace.
That doesn't surprise me, since tech is one of the most open industries in the world, where you can find training or educational materials and teach yourself. It's already established that [many coders are self-taught][4], including industry giants Bill Gates, Steve Wozniak, John Carmack, and Jack Dorsey.
**[[Looking to upgrade your career in tech? This comprehensive online course teaches you how.][5] ]**
### Data center workers' salaries
Data center workers can't complain about the pay. Well, most can't, as 50% make $100,000 per year or more, but 11% make less than $40,000. Two-thirds of those surveyed are in the U.S., so those on the low end might be outside the country.
There was one notable discrepancy. Steve Brown, managing director of London-based Datacenter People, noted that software engineers get paid a lot better than the hardware people.
“The software engineering side of the data center is comparable to the highest-earning professions,” Brown said in the report. “On the physical infrastructure — the mechanical/electrical side — it's not quite the case. It's more equivalent to mid-level management.”
### Data center professionals still predominantly male
The least surprising finding? Nine out of 10 survey respondents were male. The industry is bending over backwards to fix the gender imbalance, but so far nothing has changed.
The conclusion of the report is a bit ominous, but I also think it is wrong:
> “As data center infrastructure completes its transition to a cloud computing model, and software moves into containers and microservices, the remaining, treasured leaders of the data center workforce — people who acquired their skills in the 20th century — may find themselves with nothing recognizable they can manage and no-one to lead. We may be shocked when the crisis finally hits, but we won't be able to say we weren't warned.”
How many times do I have to say it, [the data center is not going away][6].
Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3389359/most-data-center-workers-happy-with-their-jobs-despite-the-heavy-demands.html#tk.rss_all
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/02/data_center_thinkstock_879720438-100749725-large.jpg
[2]: https://informa.tradepub.com/c/pubRD.mpl?sr=oc&_t=oc:&qf=w_dats04&ch=datacenterkids
[3]: https://www.networkworld.com/article/3276025/20-hot-jobs-ambitious-it-pros-should-shoot-for.html
[4]: https://www.networkworld.com/article/3046178/survey-finds-most-coders-are-self-taught.html
[5]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fupgrading-your-technology-career
[6]: https://www.networkworld.com/article/3289509/two-studies-show-the-data-center-is-thriving-instead-of-dying.html
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world

View File

@ -1,104 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (jdh8383)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The shell scripting trap)
[#]: via: (https://arp242.net/weblog/shell-scripting-trap.html)
[#]: author: (Martin Tournoij https://arp242.net/)
The shell scripting trap
======
Shell scripting is great. It is amazingly simple to create something very useful. Even a simple no-brainer command such as:
```
# Official way of naming Go-related things:
$ grep -i ^go /usr/share/dict/american-english /usr/share/dict/british /usr/share/dict/british-english /usr/share/dict/catala /usr/share/dict/catalan /usr/share/dict/cracklib-small /usr/share/dict/finnish /usr/share/dict/french /usr/share/dict/german /usr/share/dict/italian /usr/share/dict/ngerman /usr/share/dict/ogerman /usr/share/dict/spanish /usr/share/dict/usa /usr/share/dict/words | cut -d: -f2 | sort -R | head -n1
goldfish
```
Takes several lines of code and a lot more brainpower in many programming languages. For example in Ruby:
```
puts(Dir['/usr/share/dict/*-english'].map do |f|
  File.open(f)
    .readlines
    .select { |l| l[0..1].downcase == 'go' }
end.flatten.sample.chomp)
```
The Ruby version isn't that long, or even especially complicated. But the shell script version was so simple that I didn't even need to actually test it to make sure it is correct, whereas I did have to test the Ruby version to ensure I didn't make a mistake. It's also twice as long and looks a lot more dense.
This is why people use shell scripts: it's so easy to make something useful. Here's another example:
```
curl https://nl.wikipedia.org/wiki/Lijst_van_Nederlandse_gemeenten |
grep '^<li><a href=' |
sed -r 's|<li><a href="/wiki/.+" title=".+">(.+)</a>.*</li>|\1|' |
grep -Ev '(^Tabel van|^Lijst van|Nederland)'
```
This gets a list of all Dutch municipalities. I actually wrote this as a quick one-shot script to populate a database years ago, but it still works fine today, and it took me a minimum of effort to make it. Doing this in e.g. Ruby would take a lot more effort.
But there's a downside: as your script grows, it will become increasingly harder to maintain, but you also don't really want to rewrite it in something else, as you've already spent so much time on the shell script version.
This is what I call the shell script trap, which is a special case of the [sunk cost fallacy][1].
And many scripts do grow beyond their originally intended size, and often you will spend a lot more time than you should on “fixing that one bug”, or “adding just one small feature”. Rinse, repeat.
If you had written it in Python or Ruby or another similar language from the start, you would have spent some more time writing the original version, but would have spent much less time maintaining it, while almost certainly having fewer bugs.
Take my [packman.vim][2] script for example. It started out as a simple `for` loop over all directories and a `git pull` and has grown from there. At about 200 lines it's hardly the most complex script, but had I written it in Go as I originally planned, then it would have been much easier to add support for printing out the status or cloning new repos from a config file. It would also be almost trivial to add support for parallel clones, which is hard (though not impossible) to do correctly in a shell script. In hindsight, I would have saved time, and gotten a better result to boot.
I regret writing most shell scripts I've written for similar reasons, and my 2018 New Year's pledge will be to not write any more.
#### Appendix: the problems
And to be clear, shell scripting does come with some real limitations. Some examples:
* Dealing with filenames that contain spaces or other special characters requires careful attention to detail. The vast majority of scripts get this wrong, even when written by experienced authors who care about such things (e.g. me), because it's so easy to do it wrong. [Adding quotes is not enough][3].
* There are many “right” and “wrong” ways to do things. Should you use `which` or `command`? Should you use `$@` or `$*`, and should that be quoted? Should you use `cmd $arg` or `cmd "$arg"`? etc. etc.
* You cannot store any NULL bytes (0x00) in variables; it is very hard to make shell scripts deal with binary data.
* While you can make something very useful very quickly, implementing more complex algorithms can be very painful if not nigh-impossible even when using the ksh/zsh/bash extensions. My ad-hoc HTML parsing in the example above was okay for a quick one-off script, but you really don't want to do things like that in a production script.
* It can be hard to write shell scripts that work well on all platforms. `/bin/sh` could be `dash` or `bash`, and they will behave differently. External tools such as `grep`, `sed`, etc. may or may not support certain flags. Are you sure that your script works on all versions (past, present, and future) of Linux, macOS, and Windows equally well?
* Debugging shell scripts can be hard, especially as the syntax can get fairly obscure quite fast, and not everyone is equally well versed in shell scripting.
* Error handling can be tricky (check `$?` or `set -e`), and doing something more advanced beyond “an error occurred” is practically impossible.
* Undefined variables are not an error unless you use `set -u`, leading to “fun stuff” like `rm -r ~/$undefined` deleting a user's home dir ([not a theoretical problem][4]); the sketch after this list shows this failure mode alongside the quoting issue.
* Everything is a string. Some shells add arrays, which work, but the syntax is obscure and ugly. Numeric computations with fractions remain tricky and rely on external tools such as `bc` or `dc` (`$(( .. ))` expansion only works for integers).
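To make two of these pitfalls concrete, here is a minimal sketch; the filename and variable are made up for illustration:

```
#!/bin/sh
# Pitfall: unquoted expansions split on whitespace.
file="my report.txt"
touch "$file"
ls $file      # wrong: ls is asked to list "my" and "report.txt"
ls "$file"    # right: the quotes keep the name intact

# Pitfall: undefined variables silently expand to nothing.
echo "/home/user/$undefined"  # prints "/home/user/" -- imagine rm -r here
set -u                        # from here on, the same expansion is a fatal error
echo "/home/user/$undefined"
```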
**Feedback**
You can mail me at [martin@arp242.net][5] or [create a GitHub issue][6] for feedback, questions, etc.
--------------------------------------------------------------------------------
via: https://arp242.net/weblog/shell-scripting-trap.html
作者:[Martin Tournoij][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://arp242.net/
[b]: https://github.com/lujun9972
[1]: https://youarenotsosmart.com/2011/03/25/the-sunk-cost-fallacy/
[2]: https://github.com/Carpetsmoker/packman.vim
[3]: https://dwheeler.com/essays/filenames-in-shell.html
[4]: https://github.com/ValveSoftware/steam-for-linux/issues/3671
[5]: mailto:martin@arp242.net
[6]: https://github.com/Carpetsmoker/arp242.net/issues/new

View File

@ -0,0 +1,34 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Move Data to the Cloud with Azure Data Migration)
[#]: via: (https://www.linux.com/blog/move-data-cloud-azure-data-migration)
[#]: author: (InfoWorld https://www.linux.com/users/infoworld)
Move Data to the Cloud with Azure Data Migration
======
Despite more than a decade of cloud migration, there's still a vast amount of data running on-premises. That's not surprising since data migrations, even between similar systems, are complex, slow, and add risk to your day-to-day operations. Moving to the cloud adds additional management overhead, raising questions of network connectivity and bandwidth, as well as the variable costs associated with running cloud databases.
Read more at: [InfoWorld][1]
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/move-data-cloud-azure-data-migration
作者:[InfoWorld][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/users/infoworld
[b]: https://github.com/lujun9972
[1]: https://www.infoworld.com/article/3388312/move-data-to-the-cloud-with-azure-data-migration.html

View File

@ -0,0 +1,132 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to use Ansible to document procedures)
[#]: via: (https://opensource.com/article/19/4/ansible-procedures)
[#]: author: (Marco Bravo https://opensource.com/users/marcobravo/users/shawnhcorey/users/marcobravo)
How to use Ansible to document procedures
======
In Ansible, the documentation is the playbook, so the documentation
naturally evolves alongside the code.
![][1]
> "Documentation is a love letter that you write to your future self." —[Damian Conway][2]
I use [Ansible][3] as my personal notebook for documenting coding procedures—both the ones I use often and the ones I rarely use. This process facilitates my work and reduces the time it takes to do repetitive tasks, the ones where specific commands in a certain sequence are executed to accomplish a specific result.
By documenting with Ansible, I don't need to memorize all the parameters for each command or all the steps involved with a specific procedure, and it's easy to share the details with my teammates.
Traditional approaches for documentation, like wikis or shared drives, are useful for general documents, but inevitably they become outdated and can't keep pace with the rapid changes in infrastructure and environments. For specific procedures, it's better to document directly into the code using a tool like Ansible.
### Ansible's advantages
Before we begin, let's recap some basic Ansible concepts: a _playbook_ is a high-level organization of procedures using plays; _plays_ are specific procedures for a group of hosts; _tasks_ are specific actions; _modules_ are units of code; and _inventory_ is a list of managed nodes.
Ansible's great advantage is that the documentation is the playbook itself, so it evolves with and is contained inside the code. This is not only useful; it's also practical because, more than just documenting solutions with Ansible, you're also coding a playbook that permits you to write your procedures and commands, reproduce them, and automate them. This way, you can look back in six months and be able to quickly understand and execute them again.
It's true that this way of resolving problems could take more time at first, but it will definitely save a lot of time in the long term. By being courageous and disciplined enough to adopt these new habits, you will improve your skills in each iteration.
Following are some other important elements and support tools that will facilitate your process.
### Use source code control
> "First do it, then do it right, then do it better." —[Addy Osmani][4]
When working with Ansible playbooks, it's very important to implement a playbook-as-code strategy. A good way to accomplish this is to use a source code control repository that will permit you to start with a simple solution and iterate to improve it.
A source code control repository provides many advantages as you collaborate with other developers, restore previous versions, and back up your work. But in creating documentation, its main advantages are that you get traceability about what you are doing and can iterate around small changes to improve your work.
The most popular source control system is [Git][5], but there are [others][6] like [Subversion][7], [Bazaar][8], [BitKeeper][9], and [Mercurial][10].
### Keep idempotency in mind
In infrastructure automation, idempotency means to reach a specific end state that remains the same, no matter how many times the process is executed. So when you are preparing to automate your procedures, keep the desired result in mind and write scripts and commands that will achieve them consistently.
This concept exists in most Ansible modules because after you specify the desired final state, Ansible will accomplish it. For instance, there are modules for creating filesystems, modifying iptables, and managing cron entries. All of these modules are idempotent by default, so you should give them preference.
If you are using some of the lower-level modules, like command or shell, or developing your own modules, be careful to write code that will be idempotent and safe to repeat many times to get the same result.
The idempotency concept is important when you prepare procedures for automation because it permits you to evaluate several scenarios and incorporate the ones that will make your code safer and create an abstraction level that points to the desired result.
### Test it!
Testing your deployment workflow creates fewer surprises when your code arrives in production. Ansible's belief that you shouldn't need another framework to validate basic things in your infrastructure is true. But your focus should be on application testing, not infrastructure testing.
Ansible's documentation offers several [testing strategies for your procedures][11]. For testing Ansible playbooks, you can use [Molecule][12], which is designed to aid in the development and testing of Ansible roles. Molecule supports testing with multiple instances, operating systems/distributions, virtualization providers, test frameworks, and testing scenarios. This means Molecule will run through all the testing steps: linting verifications, checking playbook syntax, building Docker environments, running playbooks against Docker environments, running the playbook again to verify idempotence, and cleaning everything up afterward. [Testing Ansible roles with Molecule][13] is a good introduction to Molecule.
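As a rough sketch of that workflow (the role name is made up, and exact subcommands and flags vary between Molecule releases, so treat this as illustrative rather than definitive):

```
# Install Molecule and the Docker SDK it uses for container-based tests
pip install molecule docker

# Scaffold a new role with a default test scenario
molecule init role myrole

# Lint, create instances, converge, check idempotence, then destroy
molecule test
```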
### Run it!
Running Ansible playbooks can create logs that are formatted in an unfriendly and difficult-to-read way. In those cases, the Ansible Run Analysis (ARA) is a great complementary tool for running Ansible playbooks, as it provides an intuitive interface to browse them. Read [Analyzing Ansible runs using ARA][14] for more information.
Remember to protect your passwords and other sensitive information with [Ansible Vault][15]. Vault can encrypt binary files, **group_vars**, **host_vars**, **include_vars**, and **var_files**. But this encrypted data is exposed when you run a playbook in **-v** (verbose) mode, so it's a good idea to combine it with the keyword **no_log** set to **true** to hide any task's information, as it indicates that the value of the argument should not be logged or displayed.
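A typical Vault round trip looks something like the following; `secrets.yml` and `site.yml` are hypothetical file names used for illustration:

```
# Encrypt a variables file in place (prompts for a vault password)
ansible-vault encrypt secrets.yml

# Inspect or change it later without leaving a decrypted copy around
ansible-vault view secrets.yml
ansible-vault edit secrets.yml

# Supply the password when running a playbook that uses the file
ansible-playbook site.yml --ask-vault-pass
```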
### A basic example
Do you need to connect to a server to produce a report file and copy the file to another server? Or do you need a lot of specific parameters to connect? Maybe you're not sure where to store the parameters. Or are your procedures taking a long time because you need to collect all the parameters from several sources?
Suppose you have a network topology with some restrictions and you need to copy a file from a server that you can access (**server1**) to another server that is managed by a third party (**server2**). The parameters to connect are:
```
Source server: server1
Target server: server2
Port: 2202
User: transfers
SSH Key: transfers_key
File to copy: file.log
Remote directory: /logs/server1/
```
In this scenario, you need to connect to **server1** and copy the file using these parameters. You can accomplish this using a one-line command:
```
ssh server1 "scp -P 2202 -oUser=transfers -i ~/.ssh/transfers_key file.log server2:/logs/server1/"
```
Now your playbook can do the procedure: the host names, port, user, key, and paths all live in one documented, rerunnable place.
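Running it then collapses to a single, self-documenting command; the playbook name `copy-report.yml` is hypothetical:

```
# Execute the documented procedure; -v prints each task as it runs
ansible-playbook -v copy-report.yml
```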
### Useful combinations
If you produce a lot of Ansible playbooks, you can organize all your procedures with other tools like [AWX][16] (Ansible Works Project), which provides a web-based user interface, a REST API, and a task engine built on top of Ansible so that users can better control their Ansible project use in IT environments.
Other interesting combinations are Ansible with [Rundeck][17], which provides procedures as self-service jobs, and [Jenkins][18] for continuous integration and continuous delivery processes.
### Conclusion
I hope that these tips for using Ansible will help you improve your automation processes, coding, and documentation. If you have more interest, dive in and learn more. And I would like to hear your ideas or questions, so please share them in the comments below.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/ansible-procedures
作者:[Marco Bravo][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/marcobravo/users/shawnhcorey/users/marcobravo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/document_free_access_cut_security.png?itok=ocvCv8G2
[2]: https://en.wikipedia.org/wiki/Damian_Conway
[3]: https://www.ansible.com/
[4]: https://addyosmani.com/
[5]: https://git-scm.com/
[6]: https://en.wikipedia.org/wiki/Comparison_of_version_control_software
[7]: https://subversion.apache.org/
[8]: https://bazaar.canonical.com/en/
[9]: https://www.bitkeeper.org/
[10]: https://www.mercurial-scm.org/
[11]: https://docs.ansible.com/ansible/latest/reference_appendices/test_strategies.html
[12]: https://molecule.readthedocs.io/en/latest/
[13]: https://opensource.com/article/18/12/testing-ansible-roles-molecule
[14]: https://opensource.com/article/18/5/analyzing-ansible-runs-using-ara
[15]: https://docs.ansible.com/ansible/latest/user_guide/vault.html
[16]: https://github.com/ansible/awx
[17]: https://www.rundeck.com/ansible
[18]: https://www.redhat.com/en/blog/integrating-ansible-jenkins-cicd-process

View File

@ -0,0 +1,99 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Managing RAID arrays with mdadm)
[#]: via: (https://fedoramagazine.org/managing-raid-arrays-with-mdadm/)
[#]: author: (Gregory Bartholomew https://fedoramagazine.org/author/glb/)
Managing RAID arrays with mdadm
======
![][1]
Mdadm stands for Multiple Disk and Device Administration. It is a command line tool that can be used to manage software [RAID][2] arrays on your Linux PC. This article outlines the basics you need to get started with it.
The following five commands allow you to make use of mdadm's most basic features:
1. **Create a RAID array**:
```
# mdadm --create /dev/md/test --homehost=any --metadata=1.0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
```
2. **Assemble (and start) a RAID array**:
```
# mdadm --assemble /dev/md/test /dev/sda1 /dev/sdb1
```
3. **Stop a RAID array**:
```
# mdadm --stop /dev/md/test
```
4. **Delete a RAID array**:
```
# mdadm --zero-superblock /dev/sda1 /dev/sdb1
```
5. **Check the status of all assembled RAID arrays**:
```
# cat /proc/mdstat
```
#### Notes on features
##### `mdadm --create`
The _create_ command shown above includes the following four parameters in addition to the create parameter itself and the device names:
1. **homehost**: By default, mdadm stores your computer's name as an attribute of the RAID array. If your computer name does not match the stored name, the array will not automatically assemble. This feature is useful in server clusters that share hard drives, because file system corruption usually occurs if multiple servers attempt to access the same drive at the same time. The name _any_ is reserved and disables the _homehost_ restriction.
2. **metadata**: _mdadm_ reserves a small portion of each RAID device to store information about the RAID array itself. The _metadata_ parameter specifies the format and location of the information. The value _1.0_ indicates to use version-1 formatting and store the metadata at the end of the device.
3. **level**: The _level_ parameter specifies how the data should be distributed among the underlying devices. Level _1_ indicates each device should contain a complete copy of all the data. This level is also known as [disk mirroring][3].
4. **raid-devices**: The _raid-devices_ parameter specifies the number of devices that will be used to create the RAID array.
By using _level=1_ (mirroring) in combination with _metadata=1.0_ (store the metadata at the end of the device), you create a RAID1 array whose underlying devices appear normal if accessed without the aid of the mdadm driver. This is useful in the case of disaster recovery, because you can access the device even if the new system doesn't support mdadm arrays. It's also useful in case a program needs _read-only_ access to the underlying device before mdadm is available. For example, the [UEFI][4] firmware in a computer may need to read the bootloader from the [ESP][5] before mdadm is started.
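As a small illustration of that recovery scenario, you could mount one member of the mirror directly; the device and mount point here are examples, and mounting read-only avoids the out-of-sync problem described below:

```
# Read a RAID1 member on a system without mdadm; keep it read-only
mount -o ro /dev/sda1 /mnt
```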
##### `mdadm --assemble`
The _assemble_ command above fails if a member device is missing or corrupt. To force the RAID array to assemble and start when one of its members is missing, use the following command:
```
# mdadm --assemble --run /dev/md/test /dev/sda1
```
#### Other important notes
Avoid writing directly to any devices that underlie an mdadm RAID1 array. That causes the devices to become out of sync, and mdadm won't know that they are out of sync. If you access a RAID1 array with a device that's been modified out-of-band, you can cause file system corruption. If you modify a RAID1 device out-of-band and need to force the array to re-synchronize, delete the mdadm metadata from the device to be overwritten and then re-add it to the array as demonstrated below:
```
# mdadm --zero-superblock /dev/sdb1
# mdadm --assemble --run /dev/md/test /dev/sda1
# mdadm /dev/md/test --add /dev/sdb1
```
These commands completely overwrite the contents of sdb1 with the contents of sda1.
To specify any RAID arrays to automatically activate when your computer starts, create an _/etc/mdadm.conf_ configuration file.
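Assuming the arrays you want at boot are currently assembled, one common way to populate that file is to capture their existing definitions (a sketch; mdadm.conf supports more options than this):

```
# Append the definitions of all currently assembled arrays
# so they are activated automatically at boot
mdadm --detail --scan >> /etc/mdadm.conf
```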
For the most up-to-date and detailed information, check the man pages:
```
$ man mdadm
$ man mdadm.conf
```
The next article in this series will show a step-by-step guide on how to convert an existing single-disk Linux installation to a mirrored-disk installation that will continue running even if one of its hard drives suddenly stops working!
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/managing-raid-arrays-with-mdadm/
作者:[Gregory Bartholomew][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/glb/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/mdadm-816x345.jpg
[2]: https://en.wikipedia.org/wiki/RAID
[3]: https://en.wikipedia.org/wiki/Disk_mirroring
[4]: https://en.wikipedia.org/wiki/Unified_Extensible_Firmware_Interface
[5]: https://en.wikipedia.org/wiki/EFI_system_partition

View File

@ -0,0 +1,119 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Electronics designed in 5 different countries with open hardware)
[#]: via: (https://opensource.com/article/19/4/hardware-international)
[#]: author: (Michael Weinberg https://opensource.com/users/mweinberg)
Electronics designed in 5 different countries with open hardware
======
This month's open source hardware column looks at certified open
hardware from five countries that may surprise you.
![Gadgets and open hardware][1]
The Open Source Hardware Association's [Hardware Registry][2] lists hardware from 29 different countries on five continents, demonstrating the broad, international footprint of certified open source hardware.
![Open source hardware map][3]
In some ways, this international reach shouldn't be a surprise. Like many other open source communities, the open source hardware community is built on top of the internet, not grounded in any specific geographical location. The focus on documentation, sharing, and openness makes it easy for people in different places with different backgrounds to connect and work together to develop new hardware. Even the community-developed open source hardware [definition][4] has been translated into 11 languages from the original English.
Even if you're familiar with the international nature of open source hardware, it can still be refreshing to step back and remember what it means in practice. While it may not surprise you that there are many certifications from the United States, Germany, and India, some of the other countries boasting certifications might be a bit less expected. Let's look at six such projects from five of those countries.
### Bulgaria
Bulgaria may have the highest per-capita open source hardware certification rate of any country on earth. That distinction is mostly due to the work of two companies: [ANAVI Technology][5] and [Olimex][6].
ANAVI focuses mostly on IoT projects built on top of the Raspberry Pi and ESP8266. The concept of "creator contribution" means that these projects can be certified open source even though they are built upon non-open bases. That is because all of ANAVI's work to develop the hardware on top of these platforms (ANAVI's "creator contribution") has been open sourced in compliance with the certification requirements.
The [ANAVI Light pHAT][7] was the first piece of Bulgarian hardware to be certified by OSHWA. The Light pHAT makes it easy to add a 12V RGB LED strip to a Raspberry Pi.
![ANAVI-Light-pHAT][8]
[ANAVI-Light-pHAT][9]
Olimex's first OSHWA certification was for the [ESP32-PRO][10], a highly connectable IoT board built around an ESP32 microcontroller.
![Olimex ESP32-PRO][11]
[Olimex ESP32-PRO][12]
### China
While most people know China is a hotbed for hardware development, fewer realize that it is also the home to a thriving _open source_ hardware culture. One of the reasons is the tireless advocacy of Naomi Wu (also known as [SexyCyborg][13]). It is fitting that the first piece of certified hardware from China is one she helped develop: the [sino:bit][14]. The sino:bit is designed to help introduce students to programming and includes China-specific features like an LED matrix big enough to represent Chinese characters.
![sino:bit][15]
[sino:bit][16]
### Mexico
Mexico has also produced a range of certified open source hardware. A recent certification is the [Meow Meow][17], a capacitive touch interface from [Electronic Cats][18]. Meow Meow makes it easy to use a wide range of objects—bananas are always a favorite—as controllers for your computer.
![Meow Meow][19]
[Meow Meow][20]
### Saudi Arabia
Saudi Arabia jumped into open source hardware earlier this year with the [M1 Rover][21]. The robot is an unmanned vehicle that you can build (and build upon). It is compatible with a number of different packages designed for specific purposes, so you can customize it for a wide range of applications.
![M1-Rover ][22]
[M1-Rover][23]
### Sri Lanka
This project from Sri Lanka is part of a larger effort to improve traffic flow in urban areas. The team behind the [Traffic Wave Disruptor][24] read research about how many traffic jams are caused by drivers slamming on their brakes when they drive too close to the car in front of them, producing a ripple of rapid braking on the road behind them. This stop/start effect can be avoided if cars maintain a consistent, optimal distance from one another. If you reduce the stop/start pattern, you also reduce the number of traffic jams.
![Traffic Wave Disruptor][25]
[Traffic Wave Disruptor][26]
But how can drivers know if they are keeping an optimal distance? The prototype Traffic Wave Disruptor aims to give drivers feedback when they fail to keep optimal spacing. Wider adoption could help increase traffic flow without building new highways or reducing the number of cars using them.
* * *
You may have noticed that all the hardware featured here is based on electronics. In next month's open source hardware column, we will take a look at open source hardware for the outdoors, away from batteries and plugs. Until then, [certify][27] your open source hardware project (especially if your country is not yet on the registry). It might be featured in a future column.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/hardware-international
作者:[Michael Weinberg][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mweinberg
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openhardwaretools_0.png?itok=NUIvc-R1 (Gadgets and open hardware)
[2]: https://certification.oshwa.org/list.html
[3]: https://opensource.com/sites/default/files/uploads/opensourcehardwaremap.jpg (Open source hardware map)
[4]: https://www.oshwa.org/definition/
[5]: http://anavi.technology/
[6]: https://www.olimex.com/
[7]: https://certification.oshwa.org/bg000001.html
[8]: https://opensource.com/sites/default/files/uploads/anavi-light-phat.png (ANAVI-Light-pHAT)
[9]: http://anavi.technology/#products
[10]: https://certification.oshwa.org/bg000010.html
[11]: https://opensource.com/sites/default/files/uploads/olimex-esp32-pro.png (Olimex ESP32-PRO)
[12]: https://www.olimex.com/Products/IoT/ESP32/ESP32-PRO/open-source-hardware
[13]: https://www.youtube.com/channel/UCh_ugKacslKhsGGdXP0cRRA
[14]: https://certification.oshwa.org/cn000001.html
[15]: https://opensource.com/sites/default/files/uploads/sinobit.png (sino:bit)
[16]: https://github.com/sinobitorg/hardware
[17]: https://certification.oshwa.org/mx000003.html
[18]: https://electroniccats.com/
[19]: https://opensource.com/sites/default/files/uploads/meowmeow.png (Meow Meow)
[20]: https://electroniccats.com/producto/meowmeow/
[21]: https://certification.oshwa.org/sa000001.html
[22]: https://opensource.com/sites/default/files/uploads/m1-rover.png (M1-Rover )
[23]: https://www.hackster.io/AhmedAzouz/m1-rover-362c05
[24]: https://certification.oshwa.org/lk000001.html
[25]: https://opensource.com/sites/default/files/uploads/traffic-wave-disruptor.png (Traffic Wave Disruptor)
[26]: https://github.com/Aightm8/Traffic-wave-disruptor
[27]: https://certification.oshwa.org/

View File

@ -0,0 +1,120 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to organize with Calculist: Ideas, events, and more)
[#]: via: (https://opensource.com/article/19/4/organize-calculist)
[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt)
How to organize with Calculist: Ideas, events, and more
======
Give structure to your ideas and plans with Calculist, an open source
web app for creating outlines.
![Team checklist][1]
Thoughts. Ideas. Plans. We all have a few of them. Often, more than a few. And all of us want to make some or all of them a reality.
Far too often, however, those thoughts and ideas and plans are a jumble inside our heads. They refuse to take a discernable shape, preferring instead to rattle around here, there, and everywhere in our brains.
One solution to that problem is to put everything into [an outline][2]. An outline can be a great way to organize what you need to organize and give it the shape you need to take it to the next step.
A number of people I know rely on a popular web-based tool called WorkFlowy for their outlining needs. If you prefer your applications (including web ones) to be open source, you'll want to take a look at [Calculist][3].
The brainchild of [Dan Allison][4], Calculist is billed as _the thinking tool for problem solvers_. It does much of what WorkFlowy does, and it has a few features that its rival is missing.
Let's take a look at using Calculist to organize your ideas (and more).
### Getting started
If you have a server, you can try to [install Calculist][5] on it. If, like me, you don't have a server or just don't have the technical chops, you can turn to the [hosted version][6] of Calculist.
[Sign up][7] for a no-cost account, then log in. Once you've done that, you're ready to go.
### Creating a basic outline
What you use Calculist for really depends on your needs. I use Calculist to create outlines for articles and essays, to create lists of various sorts, and to plan projects. Regardless of what I'm doing, every outline I create follows the same pattern.
To get started, click the **New List** button. This creates a blank outline (which Calculist calls a _list_).
![Create a new list in Calculist][8]
The outline is a blank slate waiting for you to fill it up. Give the outline a name, then press Enter. When you do that, Calculist adds the first blank line for your outline. Use that as your starting point.
![A new outline in Calculist][9]
Add a new line by pressing Enter. To indent a line, press the Tab key while on that line. If you need to create a hierarchy, you can indent lines as far as you need to indent them. Press Shift+Tab to outdent a line.
Keep adding lines until you have a completed outline. Calculist saves your work every few seconds, so you don't need to worry about that.
![Calculist outline][10]
### Editing an outline
Outlines are fluid. They morph. They grow and shrink. Individual items in an outline change. Calculist makes it easy for you to adapt and make those changes.
You already know how to add an item to an outline. If you don't, go back a few paragraphs for a refresher. To edit text, click on an item and start typing. Don't double-click (more on this in a few moments). If you accidentally double-click on an item, press Esc on your keyboard and all will be well.
Sometimes you need to move an item somewhere else in the outline. Do that by clicking and holding the bullet for that item. Drag the item and drop it wherever you want it. Anything indented below the item moves with it.
At the moment, Calculist doesn't support adding notes or comments to an item in an outline. A simple workaround I use is to add a line indented one level deeper than the item where I want to add the note. That's not the most elegant solution, but it works.
### Let your keyboard do the walking
Not everyone likes to use their mouse to perform actions in an application. Like a good desktop application, you're not at the mercy of your mouse when you use Calculist. It has many keyboard shortcuts that you can use to move around your outlines and manipulate them.
The keyboard shortcuts I mentioned a few paragraphs ago are just the beginning. There are a couple of dozen keyboard shortcuts that you can use.
For example, you can focus on a single portion of an outline by pressing Ctrl+Right Arrow key. To get back to the full outline, press Ctrl+Left Arrow key. There are also shortcuts for moving up and down in your outline, expanding and collapsing lists, and deleting items.
You can view the list of shortcuts by clicking on your user name in the upper-right corner of the Calculist window and clicking **Preferences**. You can also find a list of [keyboard shortcuts][11] in the Calculist GitHub repository.
If you need or want to, you can change the shortcuts on the **Preferences** page. Click on the shortcut you want to change—you can, for example, change the shortcut for zooming in on an item to Ctrl+0.
### The power of commands
Calculist's keyboard shortcuts are useful, but they're only the beginning. The application has a command mode that enables you to perform basic actions and do some interesting and complex tasks.
To use a command, double-click an item in your outline or press Ctrl+Enter while on it. The item turns black. Type a letter or two, and a list of commands displays. Scroll down to find the command you want to use, then press Enter. There's also a [list of commands][12] in the Calculist GitHub repository.
![Calculist commands][13]
The commands are quite comprehensive. While in command mode, you can, for example, delete an item in an outline or delete an entire outline. You can import or export outlines, sort and group items in an outline, or change the application's theme or font.
### Final thoughts
I've found that Calculist is a quick, easy, and flexible way to create and view outlines. It works equally well on my laptop and my phone, and it packs not only the features I regularly use but many others (including support for [LaTeX math expressions][14] and a [table/spreadsheet mode][15]) that more advanced users will find useful.
That said, Calculist isn't for everyone. If you prefer your outlines on the desktop, then check out [TreeLine][16], [Leo][17], or [Emacs org-mode][18].
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/organize-calculist
作者:[Scott Nesbitt ][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/scottnesbitt
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_todo_clock_time_team.png?itok=1z528Q0y (Team checklist)
[2]: https://en.wikipedia.org/wiki/Outline_(list)
[3]: https://calculist.io/
[4]: https://danallison.github.io/
[5]: https://github.com/calculist/calculist-web
[6]: https://app.calculist.io/
[7]: https://app.calculist.io/join
[8]: https://opensource.com/sites/default/files/uploads/calculist-new-list.png (Create a new list in Calculist)
[9]: https://opensource.com/sites/default/files/uploads/calculist-getting-started.png (A new outline in Calculist)
[10]: https://opensource.com/sites/default/files/uploads/calculist-outline.png (Calculist outline)
[11]: https://github.com/calculist/calculist/wiki/Keyboard-Shortcuts
[12]: https://github.com/calculist/calculist/wiki/Command-Mode
[13]: https://opensource.com/sites/default/files/uploads/calculist-commands.png (Calculist commands)
[14]: https://github.com/calculist/calculist/wiki/LaTeX-Expressions
[15]: https://github.com/calculist/calculist/issues/32
[16]: https://opensource.com/article/18/1/creating-outlines-treeline
[17]: http://www.leoeditor.com/
[18]: https://orgmode.org/

View File

@ -0,0 +1,196 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Level up command-line playgrounds with WebAssembly)
[#]: via: (https://opensource.com/article/19/4/command-line-playgrounds-webassembly)
[#]: author: (Robert Aboukhalil https://opensource.com/users/robertaboukhalil)
Level up command-line playgrounds with WebAssembly
======
WebAssembly is a powerful tool for bringing command line utilities to
the web and giving people the chance to tinker with tools.
![Various programming languages in use][1]
[WebAssembly][2] (Wasm) is a new low-level language designed with the web in mind. Its main goal is to enable developers to compile code written in other languages—such as C, C++, and Rust—into WebAssembly and run that code in the browser. In an environment where JavaScript has traditionally been the only option, WebAssembly is an appealing counterpart, and it enables portability along with the promise for near-native runtimes. WebAssembly has also already been used to port lots of tools to the web, including [desktop applications][3], [games][4], and even [data science tools written in Python][5]!
Another application of WebAssembly is command line playgrounds, where users are free to play with a simulated version of a command line tool. In this article, we'll explore a concrete example of leveraging WebAssembly for this purpose, specifically to port the tool **[jq][6]**—which is normally confined to the command line—to run directly in the browser.
If you haven't heard, jq is a very powerful command line tool for querying, modifying, and wrangling JSON objects on the command line.
### Why WebAssembly?
Aside from WebAssembly, there are two other approaches we can take to build a jq playground:
1. **Set up a sandboxed environment** on your server that executes queries and returns the result to the user via API calls. Although this means your users get to play with the real thing, the thought of hosting, securing, and sanitizing user inputs for such an application is worrisome. Aside from security, the other concern is responsiveness; the additional round trips to the server can introduce noticeable latencies and negatively impact the user experience.
2. **Simulate the command line environment using JavaScript** , where you define a series of steps that the user can take. Although this approach is more secure than option 1, it involves _a lot_ more work, as you need to rewrite the logic of the tool in JavaScript. This method is also limiting: when I'm learning a new tool, I'm not just interested in the "happy path"; I want to break things!
These two solutions are not ideal because we have to choose between security and a meaningful learning experience. Ideally, we could simply run the command line tool directly in the browser, with no servers and no simulations. Lucky for us, WebAssembly is just the solution we need to achieve that.
### Set up your environment
In this article, we'll use the [Emscripten tool][7] to port jq from C to WebAssembly. Conveniently, it provides us with drop-in replacements for the most common C/C++ build tools, including gcc, make, and configure.
Instead of [installing Emscripten from scratch][8] (the build process can take a long time), we'll use a Docker image I put together that comes prepackaged with everything you'll need for this article (and beyond!).
Let's start by pulling the image and creating a container from it:
```
# Fetch docker image containing Emscripten
docker pull robertaboukhalil/emsdk:1.38.26
# Create container from that image
docker run -dt --name wasm robertaboukhalil/emsdk:1.38.26
# Enter the container
docker exec -it wasm bash
# Make sure we can run emcc, Emscripten's wrapper around gcc
emcc --version
```
If you see the Emscripten version on the screen, you're good to go!
### Porting jq to WebAssembly
Next, let's clone the jq repository:
```
git clone https://github.com/stedolan/jq.git
cd jq
git checkout 9fa2e51
```
Note that we're checking out a specific commit, just in case the jq code changes significantly after this article is published.
Before we compile jq to WebAssembly, let's first consider how we would normally compile jq to binary for use on the command line.
From the [README file][9], here is what we need to build jq to binary (don't type this in yet):
```
# Fetch jq dependencies
git submodule update --init
# Generate ./configure file
autoreconf -fi
# Run ./configure
./configure \
  --with-oniguruma=builtin \
  --disable-maintainer-mode
# Build jq executable
make LDFLAGS=-all-static
```
Instead, to compile jq to WebAssembly, we'll leverage Emscripten's drop-in replacements for the configure and make build tools (note the differences here from the previous entry: **emconfigure** and **emmake** in the Run and Build statements, respectively):
```
# Fetch jq dependencies
git submodule update --init
# Generate ./configure file
autoreconf -fi
# Run ./configure
emconfigure ./configure \
  --with-oniguruma=builtin \
  --disable-maintainer-mode
# Build jq executable
emmake make LDFLAGS=-all-static
```
If you type the commands above inside the Wasm container we created earlier, you'll notice that emconfigure and emmake will make sure jq is compiled using emcc instead of gcc (Emscripten also has a g++ replacement called em++).
So far, this was surprisingly easy: we just prepended a handful of commands with Emscripten tools and ported a codebase—comprising tens of thousands of lines—from C to WebAssembly. Note that it won't always be this easy, especially for more complex codebases and graphical applications, but that's for [another article][10].
Another advantage of Emscripten is that it can generate some JavaScript glue code for us that handles initializing the WebAssembly module, calling C functions from JavaScript, and even providing a [virtual filesystem][11].
Let's generate that glue code from the executable file jq that emmake outputs:
```
# But first, rename the jq executable to a .o file; otherwise,
# emcc complains that the "file has an unknown suffix"
mv jq jq.o
# Generate .js and .wasm files from jq.o
# Disable errors on undefined symbols to avoid warnings about llvm_fma_f64
emcc jq.o -o jq.js \
-s ERROR_ON_UNDEFINED_SYMBOLS=0
```
To make sure it works, let's try an example from the [jq tutorial][12] directly on the command line:
```
# Output the description of the latest commit on the jq repo
$ curl -s "https://api.github.com/repos/stedolan/jq/commits?per_page=5" | \
node jq.js '.[0].commit.message'
"Restore cfunction arity in builtins/0\n\nCount arguments up-front at definition/invocation instead of doing it at\nbind time, which comes after generating builtins/0 since e843a4f"
```
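Because the generated glue code forwards standard input and output (which the pipe above already relies on), any local JSON works the same way. A quick smoke test, assuming `jq.js` and `jq.wasm` sit in the current directory:

```
# Pipe a JSON literal through the WebAssembly build of jq
echo '{"name": "jq", "target": "wasm"}' | node jq.js '.name'
```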
And just like that, we are now ready to run jq in the browser!
### The result
Using the output of emcc above, we can put together a user interface that calls jq on a JSON blob the user provides. This is the approach I took to build [jqkungfu][13] (source code [available on GitHub][14]):
![jqkungfu screenshot][15]
jqkungfu, a playground built by compiling jq to WebAssembly
Although there are similar web apps that let you execute arbitrary jq queries in the browser, they are generally implemented as server-side applications that execute user queries in a sandbox (option #1 above).
Instead, by compiling jq from C to WebAssembly, we get the best of both worlds: the flexibility of the server and the security of the browser. Specifically, the benefits are:
1. **Flexibility**: Users can "choose their own adventure" and use the app with fewer limitations
2. **Speed**: Once the Wasm module is loaded, executing queries is extremely fast because all the magic happens in the browser
3. **Security**: No backend means we don't have to worry about our servers being compromised or used to mine Bitcoins
4. **Convenience**: Since we don't need a backend, jqkungfu is simply hosted as static files on a cloud storage platform
### Conclusion
WebAssembly is a powerful tool for bringing existing command line utilities to the web. When included as part of a tutorial, such playgrounds can become powerful teaching tools. They can even allow your users to test-drive your tool before they bother installing it.
If you want to dive further into WebAssembly and learn how to build applications like jqkungfu (or games like Pacman!), check out my book [_Level up with WebAssembly_][16].
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/command-line-playgrounds-webassembly
作者:[Robert Aboukhalil][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/robertaboukhalil
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming_language_c.png?itok=mPwqDAD9 (Various programming languages in use)
[2]: https://webassembly.org/
[3]: https://www.figma.com/blog/webassembly-cut-figmas-load-time-by-3x/
[4]: http://www.continuation-labs.com/projects/d3wasm/
[5]: https://hacks.mozilla.org/2019/03/iodide-an-experimental-tool-for-scientific-communicatiodide-for-scientific-communication-exploration-on-the-web/
[6]: https://stedolan.github.io/jq/
[7]: https://emscripten.org/
[8]: https://emscripten.org/docs/getting_started/downloads.html
[9]: https://github.com/stedolan/jq/blob/9fa2e51099c55af56e3e541dc4b399f11de74abe/README.md
[10]: https://medium.com/@robaboukhalil/porting-games-to-the-web-with-webassembly-70d598e1a3ec?sk=20c835664031227eae5690b8a12514f0
[11]: https://emscripten.org/docs/porting/files/file_systems_overview.html
[12]: https://stedolan.github.io/jq/tutorial/
[13]: http://jqkungfu.com
[14]: https://github.com/robertaboukhalil/jqkungfu/
[15]: https://opensource.com/sites/default/files/uploads/jqkungfu.gif (jqkungfu screenshot)
[16]: http://levelupwasm.com/

View File

@ -0,0 +1,167 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Simplifying organizational change: A guide for the perplexed)
[#]: via: (https://opensource.com/open-organization/19/4/simplifying-change)
[#]: author: (Jen Kelchner https://opensource.com/users/jenkelchner)
Simplifying organizational change: A guide for the perplexed
======
Here's a 4-step, open process for making change easier—both for you and
your organization.
![][1]
Most organizational leaders have encountered a certain paralysis around efforts to implement culture change—perhaps because of perceived difficulty or the time necessary for realizing our work. But change is only as difficult as we choose to make it. In order to lead successful change efforts, we must simplify our understanding and approach to change.
Change isn't something rare. We live everyday life in a continuous state of change—from grappling with the speed of innovation to simply interacting with the environment around us. Quite simply, _change is how we process, disseminate, and adopt new information._ And whether you're leading a team or an organization—or are simply breathing—you'll benefit from a more focused, simplified approach to change. Here's a process that can save you time and reduce frustration.
### Three interactions with change
Everyone interacts with change in different ways. Those differences are based on who we are, our own unique experiences, and our core beliefs. In fact, [only 5% of decision making involves conscious processing][2]. Even when you don't _think_ you're making a decision, you are actually making a decision (that is, to not take action).
So you see, two actors are at play in situations involving change. The first is the human decision maker. The second is the information _coming to_ the decision maker. Both are present in three sets of interactions at varying stages in the decision-making process.
#### **Engaging change**
First, we must understand that uncertainty is really the result of "new information" we must process. We must accept where we are, at that moment, while waiting for additional information. Engaging with change requires us to trust—at the very least, ourselves and our capacity to manage—as new information continues to arrive. Everyone will respond to new information differently, and those responses are based on multiple factors: general hardwiring, unconscious needs that need to be met to feel safe, and so on. How do you feel safe in periods of uncertainty? Are you routine driven? Do you need details or need to assess risk? Are you good with figuring it out on the fly? Or does safety feel like creating something brand new?
#### **Navigating change**
"Navigating" doesn't necessarily mean "going around" something safely. It's knowing how to "get through it." Navigating change truly requires "all hands on deck" in order to keep everything intact and moving forward as we encounter each oncoming wave of new information. Everyone around you has something to contribute to the process of navigation; leverage them for “smooth sailing."
#### **Adopting change**
Only a small set of members in your organization will be truly comfortable with adopting change. But that committed and confident minority can spread the fire of change and help you grow some innovative ideas within your organization. Consider taking advantage of what researchers call "[the pendulum effect][3]," which holds that a group as small as 5% of an organization's population can influence a crowd's direction (the other 95% will follow along without realizing it). Moreover, [scientists at Rensselaer Polytechnic Institute have found][4] that when just 10% of a population holds an unshakable belief, that belief will always be adopted by a majority. Findings from this cognitive study have implications for the spread of innovations and movements within a collective group of people. Opportunities for mass adoption are directly related to your influence with the external parties around you.
### A useful matrix to guide culture change
So far, we've identified three "interactions" every person, team, or department will experience with change: "engaging," "navigating," and "adopting." When we examine the work of _implementing_ change in the broader context of an organization (any kind), we can also identify _three relationships_ that drive the success of each interaction: "people," "capacity," and "information."
Here's a brief list of considerations you should make—at every moment and with every relationship—to help you build roadmaps thoughtfully.
#### **Engaging—People**
Organizational success comes from the overlap of awareness and action of the "I" and the "We."
* _Individuals (I)_ are aware of and engage based on their [natural response strength][5].
* _Teams (We)_ are aware of and balance their responsibilities based on the Individual strengths by initiative.
* _Leaders (I/We)_ leverage insight based on knowing their (I) and the collective (We).
#### **Engaging—Capacity**
"Capacity" applies to skills, processes, and culture that is clearly structured, documented, and accessible with your organization. It is the “space” within which you operate and achieve solutions.
* _Current state_ awareness allows you to use what and who you have available and accessible through your known operational capacity.
* _Future state_ needs will show you what is required of you to learn, _or stretch_ , in order to bridge any gaps; essentially, you will design the recoding of your organization.
#### **Engaging—Information**
* _Access to information_ is readily available to all based on appropriate needs within protocols.
* _Communication flows_ easily and is reciprocated at all levels.
* _Communication flow_ is timely and transparent.
#### **Navigating—People**
* Balancing responses from both individuals and the collective will impact your outcomes.
* Balance the _I_ with the _We_. This allows for responses to co-exist in a seamless, collaborative way—which fuels every project.
#### **Navigating—Capacity**
* _Skills_ : Assuring a continuous state of assessment and learning through various modalities allows you to navigate with ease as each person graduates their understanding in preparation for the next iteration of change.
* _Culture:_ Be clear on goals and mission with a supported ecosystem in which your teams can operate by contributing their best efforts when working together.
* _Processes:_ Review existing processes and let go of anything that prevents you from evolving. Open practices and methodologies do allow for a higher rate of adaptability and decision making.
* _Utilize Talent:_ Discover who is already in your organization and how you can leverage their talent in new ways. Go beyond your known teams and seek out sources of new perspectives.
#### **Navigating—Information**
* Be clear on your mission.
  * Be very clear on your desired endgame, so everyone knows what you are navigating toward (without clearly defined and posted directions, it's easy to waste time, money, and effort, resulting in missed targets).
#### **Adopting—People**
* _Behaviors_ have a critical impact on influence and adoption.
* For _internal adoption_ , consider the [pendulum of thought][3] swung by the committed few.
#### **Adopting—Capacity**
* _Sustainability:_ Leverage people who are more routine and legacy-oriented to help stabilize and embed your new initiatives.
  * This allows your innovators and co-creators to move into the next phase of development and begin solving problems, while other team members perform follow-through efforts.
#### **Adopting—Information**
* Be open and transparent with your external communication.
* Lead the way in _what_ you do and _how_ you do it to create a tidal wave of change.
* Remember that mass adoption has a tipping point of 10%.
[**Download a one-page guide to this model on GitHub.**][6]
---
### Four steps to simplify change
You now understand what change is and how you are processing it. You've seen how you and your organization can reframe various interactions with it. Now, let's examine the four steps to simplify how you interact with and implement change as an individual, team leader, or organizational executive.
#### **1\. Understand change**
Change is receiving and processing new information and determining how to respond and participate with it (think personal or organizational operating system). Change is a _reciprocal_ action between yourself and incoming new information (think system interface). Change is an evolutionary process that happens in layers and stages in a continuous cycle (think data processing, bug fixes, and program iterations).
#### **2\. Know your people**
Change is personal, and responses vary by context. People's responses to change are not indicators of the speed of adoption. Knowing how your people and your teams interact with change allows you to balance and optimize your efforts to solve problems, build solutions, and sustain implementations. Are they change makers, fast followers, innovators, stabilizers? When you know how you, _or others_, process change, you can leverage your risk mitigators to sweep for potential pitfalls and make your routine-minded folks responsible for implementation follow-through.
#### **3\. Know your capacity**
Your capacity to implement widespread change will depend on your culture, your processes, and decision-making models. Get familiar with your operational capacity and guardrails (process and policy).
#### **4\. Prepare for interaction**
Each interaction uses your people, capacity (operational), and information flow. Working with the stages of change is not always a linear process and may overlap at certain points along the way. Understand that [_people_ feed all engagement, navigation, and adoption actions][7].
Humans are built for adaptation to our environments. Yes, any kind of change can be scary at first. But it need not involve some major new implementation with a large, looming deadline that throws you off. Knowing that you can take a simplified approach to change, hopefully, you're able to engage new information with ease. Using this approach over time—and integrating it as habit—allows for both the _I_ and the _We_ to experience continuous cycles of change without the tensions of old.
_Want to learn more about simplifying change?[View additional resources on GitHub][8]._
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/19/4/simplifying-change
作者:[Jen Kelchner][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jenkelchner
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/GOV_2dot0.png?itok=bKJ41T85
[2]: http://www.simplifyinginterfaces.com/2008/08/01/95-percent-of-brain-activity-is-beyond-our-conscious-awareness/
[3]: http://www.leeds.ac.uk/news/article/397/sheep_in_human_clothing__scientists_reveal_our_flock_mentality
[4]: https://news.rpi.edu/luwakkey/2902
[5]: https://opensource.com/open-organization/18/7/transformation-beyond-digital-2
[6]: https://github.com/jenkelchner/simplifying-change/blob/master/Visual_%20Simplifying%20Change%20(1).pdf
[7]: https://opensource.com/open-organization/17/7/digital-transformation-people-1
[8]: https://github.com/jenkelchner/simplifying-change

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Ubuntu 19.04 Disco Dingo Has Arrived: Downloads Available Now!)
[#]: via: (https://itsfoss.com/ubuntu-19-04-release/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Ubuntu 19.04 Disco Dingo Has Arrived: Downloads Available Now!
======
It's time to disco! Why? Well, Ubuntu 19.04 Disco Dingo is here and finally available to download. Although we are already aware of the [new features in Ubuntu 19.04][1], I will mention a few important things below and also point you to the official links to download it and get started.
### Ubuntu 19.04: What You Need To Know
Here are a few things you should know about Ubuntu 19.04 Disco Dingo release.
#### Ubuntu 19.04 is not an LTS Release
Unlike Ubuntu 18.04 LTS, this will not be [supported for 10 years][2]. Instead, the non-LTS 19.04 will be supported for **9 months until January 2020.**
So, if you have a production environment, we do not recommend upgrading it right away. For example, if you have a server that runs on Ubuntu 18.04 LTS, it may not be a good idea to upgrade it to 19.04 just because it is an exciting release.
However, users who want the latest and greatest on their machines can try it out.
![][3]
#### Ubuntu 19.04 is a sweet update for NVIDIA GPU Owners
_Martin Wimpress_ (from Canonical) mentioned that Ubuntu 19.04 is a particularly big deal for NVIDIA GPU owners in the final release notes of Ubuntu MATE 19.04 (one of the Ubuntu flavors) on [GitHub][4].
In other words, while installing the proprietary graphics driver, it now selects the best driver compatible with your particular GPU model.
#### Ubuntu 19.04 Features
Even though we have already discussed the [best features of Ubuntu 19.04][1] Disco Dingo, it is worth mentioning that I'm excited about the desktop update (GNOME 3.32) and the Linux kernel (5.0) that come as major changes in this release.
#### Upgrading from Ubuntu 18.10 to 19.04
If you have Ubuntu 18.10 installed, you should upgrade it for obvious reasons. 18.10 will reach its end of life in July 2019, so we recommend you upgrade to 19.04.
To do that, you can simply head to the “**Software and Updates**” settings and then navigate to the “**Updates**” tab.
Now, change the option for **Notify me of a new Ubuntu version** to “_For any new version_”.
When you run the update manager now, you should see that Ubuntu 19.04 is available now.
![][5]
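If you prefer the terminal, the same upgrade can typically be done from the command line. This is a minimal sketch, assuming the update-manager-core package is installed and `Prompt=normal` is set in /etc/update-manager/release-upgrades (the equivalent of the “For any new version” option):

```
# Bring the current system fully up to date first
sudo apt update && sudo apt full-upgrade

# Start the release upgrade to the next available version
sudo do-release-upgrade
```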
#### Upgrading from Ubuntu 18.04 to 19.04
It is not recommended to upgrade directly from 18.04 to 19.04, because you would have to update the OS to 18.10 first and then proceed to get 19.04 on board.
Instead, you can simply download the official ISO image of Ubuntu 19.04 and then re-install Ubuntu on your system.
### Ubuntu 19.04: Downloads Available for all flavors
As per the [release notes][6], Ubuntu 19.04 is available to download now. You can get the torrent or the ISO file on its official release download page.
[Download Ubuntu 19.04][7]
If you need a different desktop environment or need something specific, you should check out the official flavors of Ubuntu available:
* [Ubuntu MATE][8]
* [Kubuntu][9]
* [Lubuntu][10]
* [Ubuntu Budgie][11]
* [Ubuntu Studio][12]
* [Xubuntu][13]
Some of the above-mentioned Ubuntu flavors haven't put the 19.04 release on their download pages yet. But you can [still find the ISOs on Ubuntu's release notes webpage][6]. Personally, I use Ubuntu with the GNOME desktop. You can choose whatever you like.
**Wrapping Up**
What do you think about Ubuntu 19.04 Disco Dingo? Are the new features exciting enough? Have you tried it yet? Let us know in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/ubuntu-19-04-release/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/ubuntu-19-04-release-features/
[2]: https://itsfoss.com/ubuntu-18-04-ten-year-support/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/11/ubuntu-19-04-Disco-Dingo-default-wallpaper.jpg?resize=800%2C450&ssl=1
[4]: https://github.com/ubuntu-mate/ubuntu-mate.org/blob/master/blog/20190418-ubuntu-mate-disco-final-release.md
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/ubuntu-19-04-upgrade-available.jpg?ssl=1
[6]: https://wiki.ubuntu.com/DiscoDingo/ReleaseNotes
[7]: https://www.ubuntu.com/download/desktop
[8]: https://ubuntu-mate.org/download/
[9]: https://kubuntu.org/getkubuntu/
[10]: https://lubuntu.me/cosmic-released/
[11]: https://ubuntubudgie.org/downloads
[12]: https://ubuntustudio.org/2019/04/ubuntu-studio-19-04-released/
[13]: https://xubuntu.org/download/

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (4 cool new projects to try in COPR for April 2019)
[#]: via: (https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-april-2019/)
[#]: author: (Dominik Turecek https://fedoramagazine.org/author/dturecek/)
4 cool new projects to try in COPR for April 2019
======
![][1]
COPR is a [collection][2] of personal repositories for software that isn't carried in Fedora. Some software doesn't conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn't supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.
Here's a set of new and interesting projects in COPR.
### Joplin
[Joplin][3] is a note-taking and to-do app. Notes are written in the Markdown format, and organized by sorting them into various notebooks and using tags.
Joplin can import notes from any Markdown source or notes exported from Evernote. In addition to the desktop app, there's an Android version with the ability to synchronize notes between devices using Nextcloud, Dropbox, or other cloud services. Finally, there's a browser extension for Chrome and Firefox to save web pages and screenshots.
![][4]
#### Installation instructions
The [repo][5] currently provides Joplin for Fedora 29 and 30, and for EPEL 7. To install Joplin, use these commands [with sudo][6]:
```
sudo dnf copr enable taw/joplin
sudo dnf install joplin
```
### Fzy
[Fzy][7] is a command-line utility for fuzzy string searching. It reads from standard input, sorts the lines based on which is most likely the sought-after text, and then prints the selected line. In addition to the command line, fzy can also be used within Vim. You can try fzy in this online [demo][8].
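As an illustration, a common shell pattern is to pipe a list of candidates into fzy and use its selection in another command. This is a small sketch; the file list and editor are just examples:

```
# Fuzzy-pick a file from the current directory tree and open it in vim
vim "$(find . -type f | fzy)"
```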
#### Installation instructions
The [repo][9] currently provides fzy for Fedora 29, 30, and Rawhide, and other distributions. To install fzy, use these commands:
```
sudo dnf copr enable lehrenfried/fzy
sudo dnf install fzy
```
### Fondo
Fondo is a program for browsing photographs from the [unsplash.com][10] website. It has a simple interface that allows you to look for pictures in one of several themes, or all of them at once. You can then set a picture you have found as your wallpaper with a single click, or share it.
![][11]
#### Installation instructions
The [repo][12] currently provides Fondo for Fedora 29, 30, and Rawhide. To install Fondo, use these commands:
```
sudo dnf copr enable atim/fondo
sudo dnf install fondo
```
### YACReader
[YACReader][13] is a digital comic book reader that supports many comic and image formats, such as _cbz_, _cbr_, _pdf_, and others. YACReader keeps track of reading progress and can download comics information from [Comic Vine][14]. It also comes with a YACReader Library for organizing and browsing your comic book collection.
![][15]
#### Installation instructions
The [repo][16] currently provides YACReader for Fedora 29, 30, and Rawhide. To install YACReader, use these commands:
```
sudo dnf copr enable atim/yacreader
sudo dnf install yacreader
```
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-april-2019/
作者:[Dominik Turecek][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/dturecek/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2017/08/4-copr-945x400.jpg
[2]: https://copr.fedorainfracloud.org/
[3]: https://joplin.cozic.net/
[4]: https://fedoramagazine.org/wp-content/uploads/2019/04/joplin.png
[5]: https://copr.fedorainfracloud.org/coprs/taw/joplin/
[6]: https://fedoramagazine.org/howto-use-sudo/
[7]: https://github.com/jhawthorn/fzy
[8]: https://jhawthorn.github.io/fzy-demo/
[9]: https://copr.fedorainfracloud.org/coprs/lehrenfried/fzy/
[10]: https://unsplash.com/
[11]: https://fedoramagazine.org/wp-content/uploads/2019/04/fondo.png
[12]: https://copr.fedorainfracloud.org/coprs/atim/fondo/
[13]: https://www.yacreader.com/
[14]: https://comicvine.gamespot.com/
[15]: https://fedoramagazine.org/wp-content/uploads/2019/04/yacreader.png
[16]: https://copr.fedorainfracloud.org/coprs/atim/yacreader/

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Building scalable social media sentiment analysis services in Python)
[#]: via: (https://opensource.com/article/19/4/social-media-sentiment-analysis-python-scalable)
[#]: author: (Michael McCune https://opensource.com/users/elmiko/users/jschlessman)
Building scalable social media sentiment analysis services in Python
======
Learn how you can use spaCy, vaderSentiment, Flask, and Python to add
sentiment analysis capabilities to your work.
![Tall building with windows][1]
The [first part][2] of this series provided some background on how sentiment analysis works. Now let's investigate how to add these capabilities to your designs.
### Exploring spaCy and vaderSentiment in Python
#### Prerequisites
* A terminal shell
* Python language binaries (version 3.4+) in your shell
* The **pip** command for installing Python packages
* (optional) A [Python Virtualenv][3] to keep your work isolated from the system
#### Configure your environment
Before you begin writing code, you will need to set up the Python environment by installing the [spaCy][4] and [vaderSentiment][5] packages and downloading a language model to assist your analysis. Thankfully, most of this is relatively easy to do from the command line.
In your shell, type the following command to install the spaCy and vaderSentiment packages:
```
pip install spacy vaderSentiment
```
After the command completes, install a language model that spaCy can use for text analysis. The following command will use the spaCy module to download and install the English language [model][6]:
```
python -m spacy download en_core_web_sm
```
With these libraries and models installed, you are now ready to begin coding.
#### Do a simple text analysis
Use the [Python interpreter interactive mode][7] to write some code that will analyze a single text fragment. Begin by starting the Python environment:
```
$ python
Python 3.6.8 (default, Jan 31 2019, 09:38:34)
[GCC 8.2.1 20181215 (Red Hat 8.2.1-6)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
```
_(Your Python interpreter version print might look different than this.)_
1. Import the necessary modules:

```
>>> import spacy
>>> from vaderSentiment import vaderSentiment
```

2. Load the English language model from spaCy:

```
>>> english = spacy.load("en_core_web_sm")
```

3. Process a piece of text. This example shows a very simple sentence that we expect to return a slightly positive sentiment:

```
>>> result = english("I like to eat applesauce with sugar and cinnamon.")
```

4. Gather the sentences from the processed result. SpaCy has identified and processed the entities within the phrase; this step generates sentiment for each sentence (even though there is only one sentence in this example):

```
>>> sentences = [str(s) for s in result.sents]
```

5. Create an analyzer using vaderSentiment:

```
>>> analyzer = vaderSentiment.SentimentIntensityAnalyzer()
```

6. Perform the sentiment analysis on the sentences:

```
>>> sentiment = [analyzer.polarity_scores(str(s)) for s in sentences]
```
The sentiment variable now contains the polarity scores for the example sentence. Print out the value to see how it analyzed the sentence.
```
>>> print(sentiment)
[{'neg': 0.0, 'neu': 0.737, 'pos': 0.263, 'compound': 0.3612}]
```
What does this structure mean?
On the surface, this is an array with a single dictionary object; had there been multiple sentences, there would be a dictionary for each one. There are four keys in the dictionary that correspond to different types of sentiment. The **neg** key represents negative sentiment, of which none has been reported in this text, as evidenced by the **0.0** value. The **neu** key represents neutral sentiment, which has received a fairly high score of **0.737** (with a maximum of **1.0**). The **pos** key represents positive sentiment, which has a moderate score of **0.263**. Last, the **compound** key represents an overall score for the text; this can range from negative to positive, with the value **0.3612** representing a sentiment on the positive side.
To see how these values might change, you can run a small experiment using the code you already entered. The following block demonstrates an evaluation of sentiment scores on a similar sentence.
```
>>> result = english("I love applesauce!")
>>> sentences = [str(s) for s in result.sents]
>>> sentiment = [analyzer.polarity_scores(str(s)) for s in sentences]
>>> print(sentiment)
[{'neg': 0.0, 'neu': 0.182, 'pos': 0.818, 'compound': 0.6696}]
```
You can see that by changing the example sentence to something overwhelmingly positive, the sentiment values have changed dramatically.
### Building a sentiment analysis service
Now that you have assembled the basic building blocks for doing sentiment analysis, let's turn that knowledge into a simple service.
For this demonstration, you will create a [RESTful][8] HTTP server using the Python [Flask package][9]. This service will accept text data in English and return the sentiment analysis. Please note that this example service is for learning the technologies involved and not something to put into production.
#### Prerequisites
* A terminal shell
* The Python language binaries (version 3.4+) in your shell.
* The **pip** command for installing Python packages
* The **curl** command
* A text editor
* (optional) A [Python Virtualenv][3] to keep your work isolated from the system
#### Configure your environment
This environment is nearly identical to the one in the previous section. The only difference is the addition of the Flask package to Python.
1. Install the necessary dependencies:

```
pip install spacy vaderSentiment flask
```

2. Install the English language model for spaCy:

```
python -m spacy download en_core_web_sm
```
#### Create the application file
Open your editor and create a file named **app.py**. Add the following contents to it _(don't worry, we will review every line)_:
```
import flask
import spacy
import vaderSentiment.vaderSentiment as vader

app = flask.Flask(__name__)
analyzer = vader.SentimentIntensityAnalyzer()
english = spacy.load("en_core_web_sm")


def get_sentiments(text):
    result = english(text)
    sentences = [str(sent) for sent in result.sents]
    sentiments = [analyzer.polarity_scores(str(s)) for s in sentences]
    return sentiments


@app.route("/", methods=["POST", "GET"])
def index():
    if flask.request.method == "GET":
        return "To access this service send a POST request to this URL with" \
               " the text you want analyzed in the body."
    body = flask.request.data.decode("utf-8")
    sentiments = get_sentiments(body)
    return flask.json.dumps(sentiments)
```
Although this is not an overly large source file, it is quite dense. Let's walk through the pieces of this application and describe what they are doing.
```
import flask
import spacy
import vaderSentiment.vaderSentiment as vader
```
The first three lines bring in the packages needed for performing the language analysis and the HTTP framework.
```
app = flask.Flask(__name__)
analyzer = vader.SentimentIntensityAnalyzer()
english = spacy.load("en_core_web_sm")
```
The next three lines create a few global variables. The first variable, **app** , is the main entry point that Flask uses for creating HTTP routes. The second variable, **analyzer** , is the same type used in the previous example, and it will be used to generate the sentiment scores. The last variable, **english** , is also the same type used in the previous example, and it will be used to annotate and tokenize the initial text input.
You might be wondering why these variables have been declared globally. In the case of the **app** variable, this is standard procedure for many Flask applications. But, in the case of the **analyzer** and **english** variables, the decision to make them global is based on the load times associated with the classes involved. Although the load time might appear minor, when it's run in the context of an HTTP server, these delays can negatively impact performance.
```
def get_sentiments(text):
    result = english(text)
    sentences = [str(sent) for sent in result.sents]
    sentiments = [analyzer.polarity_scores(str(s)) for s in sentences]
    return sentiments
```
The next piece is the heart of the service: a function for generating sentiment values from a string of text. You can see that the operations in this function correspond to the commands you ran in the Python interpreter earlier. Here they're wrapped in a function definition, with the source text passed in as the **text** variable and the **sentiments** variable returned to the caller.
```
@app.route("/", methods=["POST", "GET"])
def index():
    if flask.request.method == "GET":
        return "To access this service send a POST request to this URL with" \
               " the text you want analyzed in the body."
    body = flask.request.data.decode("utf-8")
    sentiments = get_sentiments(body)
    return flask.json.dumps(sentiments)
```
The last function in the source file contains the logic that will instruct Flask how to configure the HTTP server for the service. It starts with a line that will associate an HTTP route **/** with the request methods **POST** and **GET**.
After the function definition line, the **if** clause will detect if the request method is **GET**. If a user sends this request to the service, the following line will return a text message instructing how to access the server. This is largely included as a convenience to end users.
The next line uses the **flask.request** object to acquire the body of the request, which should contain the text string to be processed. The **decode** function will convert the array of bytes into a usable, formatted string. The decoded text message is now passed to the **get_sentiments** function to generate the sentiment scores. Last, the scores are returned to the user through the HTTP framework.
You should now save the file, if you have not done so already, and return to the shell.
#### Run the sentiment service
With everything in place, running the service is quite simple with Flask's built-in debugging server. To start the service, enter the following command from the same directory as your source file:
```
FLASK_APP=app.py flask run
```
You will now see some output from the server in your shell, and the server will be running. To test that the server is running, you will need to open a second shell and use the **curl** command.
First, check to see that the instruction message is printed by entering this command:
```
curl http://localhost:5000
```
You should see the instruction message:
```
To access this service send a POST request to this URL with the text you want analyzed in the body.
```
Next, send a test message to see the sentiment analysis by running the following command:
```
curl http://localhost:5000 --header "Content-Type: application/json" --data "I love applesauce!"
```
The response you get from the server should be similar to the following:
```
[{"compound": 0.6696, "neg": 0.0, "neu": 0.182, "pos": 0.818}]
```
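Because the **get_sentiments** function splits the input into sentences before scoring, posting text that contains more than one sentence should return one score dictionary per sentence. For example (the exact scores will depend on your input):

```
curl http://localhost:5000 --header "Content-Type: application/json" --data "I love applesauce! Rainy days are the worst."
```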
Congratulations! You have now implemented a RESTful HTTP sentiment analysis service. You can find a link to a [reference implementation of this service and all the code from this article on GitHub][10].
### Continue exploring
Now that you have an understanding of the principles and mechanics behind natural language processing and sentiment analysis, here are some ways to further your discovery of this topic.
#### Create a streaming sentiment analyzer on OpenShift
While creating local applications to explore sentiment analysis is a convenient first step, having the ability to deploy your applications for wider usage is a powerful next step. By following the instructions and code in this [workshop from Radanalytics.io][11], you will learn how to create a sentiment analyzer that can be containerized and deployed to a Kubernetes platform. You will also see how Apache Kafka is used as a framework for event-driven messaging and how Apache Spark can be used as a distributed computing platform for sentiment analysis.
#### Discover live data with the Twitter API
Although the [Radanalytics.io][12] lab generated synthetic tweets to stream, you are not limited to synthetic data. In fact, anyone with a Twitter account can access the Twitter streaming API and perform sentiment analysis on tweets with the [Tweepy Python][13] package.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/social-media-sentiment-analysis-python-scalable
作者:[Michael McCune ][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/elmiko/users/jschlessman
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/windows_building_sky_scale.jpg?itok=mH6CAX29 (Tall building with windows)
[2]: https://opensource.com/article/19/4/social-media-sentiment-analysis-python-part-1
[3]: https://virtualenv.pypa.io/en/stable/
[4]: https://pypi.org/project/spacy/
[5]: https://pypi.org/project/vaderSentiment/
[6]: https://spacy.io/models
[7]: https://docs.python.org/3.6/tutorial/interpreter.html
[8]: https://en.wikipedia.org/wiki/Representational_state_transfer
[9]: http://flask.pocoo.org/
[10]: https://github.com/elmiko/social-moments-service
[11]: https://github.com/radanalyticsio/streaming-lab
[12]: http://Radanalytics.io
[13]: https://github.com/tweepy/tweepy

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Getting started with social media sentiment analysis in Python)
[#]: via: (https://opensource.com/article/19/4/social-media-sentiment-analysis-python)
[#]: author: (Michael McCune https://opensource.com/users/elmiko/users/jschlessman)
Getting started with social media sentiment analysis in Python
======
Learn the basics of natural language processing and explore two useful
Python packages.
![Raspberry Pi and Python][1]
Natural language processing (NLP) is a type of machine learning that addresses the correlation between spoken/written languages and computer-aided analysis of those languages. We experience numerous innovations from NLP in our daily lives, from writing assistance and suggestions to real-time speech translation and interpretation.
This article examines one specific area of NLP: sentiment analysis, with an emphasis on determining the positive, negative, or neutral nature of the input language. This part will explain the background behind NLP and sentiment analysis and explore two open source Python packages. [Part 2][2] will demonstrate how to begin building your own scalable sentiment analysis services.
When learning sentiment analysis, it is helpful to have an understanding of NLP in general. This article won't dig into the mathematical guts, rather our goal is to clarify key concepts in NLP that are crucial to incorporating these methods into your solutions in practical ways.
### Natural language and text data
A reasonable place to begin is defining: "What is natural language?" It is the means by which we, as humans, communicate with one another. The primary modalities for communication are verbal and text. We can take this a step further and focus solely on text communication; after all, living in an age of pervasive Siri, Alexa, etc., we know speech is a group of computations away from text.
### Data landscape and challenges
Limiting ourselves to textual data, what can we say about language and text? First, language, particularly English, is fraught with exceptions to rules, plurality of meanings, and contextual differences that can confuse even a human interpreter, let alone a computational one. In elementary school, we learn articles of speech and punctuation, and from speaking our native language, we acquire intuition about which words have less significance when searching for meaning. Examples of the latter would be articles of speech such as "a," "the," and "or," which in NLP are referred to as _stop words_ , since traditionally an NLP algorithm's search for meaning stops when reaching one of these words in a sequence.
Since our goal is to automate the classification of text as belonging to a sentiment class, we need a way to work with text data in a computational fashion. Therefore, we must consider how to represent text data to a machine. As we know, the rules for utilizing and interpreting language are complicated, and the size and structure of input text can vary greatly. We'll need to transform the text data into numeric data, the form of choice for machines and math. This transformation falls under the area of _feature extraction_.
Upon extracting numeric representations of input text data, one refinement might be, given an input body of text, to determine a set of quantitative statistics for the articles of speech listed above and perhaps classify documents based on them. For example, a glut of adverbs might make a copywriter bristle, or excessive use of stop words might be helpful in identifying term papers with content padding. Admittedly, this may not have much bearing on our goal of sentiment analysis.
### Bag of words
When you assess a text statement as positive or negative, what are some contextual clues you use to assess its polarity (i.e., whether the text has positive, negative, or neutral sentiment)? One way is connotative adjectives: something called "disgusting" is viewed as negative, but if the same thing were called "beautiful," you would judge it as positive. Colloquialisms, by definition, give a sense of familiarity and often positivity, whereas curse words could be a sign of hostility. Text data can also include emojis, which carry inherent sentiments.
Understanding the polarity influence of individual words provides a basis for the [_bag-of-words_][3] (BoW) model of text. It considers a set of words or vocabulary and extracts measures about the presence of those words in the input text. The vocabulary is formed by considering text where the polarity is known, referred to as _labeled training data_. Features are extracted from this set of labeled data, then the relationships between the features are analyzed and labels are associated with the data.
The name "bag of words" illustrates what it utilizes: namely, individual words without consideration of spatial locality or context. A vocabulary typically is built from all words appearing in the training set, which tends to be pruned afterward. Stop words, if not cleaned prior to training, are removed due to their high frequency and low contextual utility. Rarely used words can also be removed, given the lack of information they provide for general input cases.
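To make the mechanics concrete, here is a minimal, library-free sketch of the idea: tokenize the text, prune a (toy) stop-word list, and count what remains. Real implementations use far larger vocabularies and stop-word lists:

```
from collections import Counter

# Toy stop-word list; real NLP libraries ship far larger ones
STOP_WORDS = {"a", "the", "or", "is", "and", "of"}

def bag_of_words(text):
    """Lowercase the text, strip punctuation, drop stop words, count the rest."""
    tokens = [word.strip(".,!?").lower() for word in text.split()]
    return Counter(t for t in tokens if t and t not in STOP_WORDS)

print(bag_of_words("The rain is disgusting, but the garden is beautiful."))
# Counter({'rain': 1, 'disgusting': 1, 'but': 1, 'garden': 1, 'beautiful': 1})
```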
It is important to note, however, that you can (and should) go further and consider the appearance of words beyond their use in an individual instance of training data, or what is called [_term frequency_][4] (TF). You should also consider the counts of a word through all instances of input data; typically the infrequency of words among all documents is notable, which is called the [_inverse document frequency_][5] (IDF). These metrics are bound to be mentioned in other articles and software packages on this subject, so having an awareness of them can only help.
BoW is useful in a number of document classification applications; however, in the case of sentiment analysis, things can be gamed when the lack of contextual awareness is leveraged. Consider the following sentences:
* We are not enjoying this war.
* I loathe rainy days, good thing today is sunny.
* This is not a matter of life and death.
The sentiment of these phrases is questionable for human interpreters, and by strictly focusing on instances of individual vocabulary words, it's difficult for a machine interpreter as well.
Groupings of words, called _n-grams_, can also be considered in NLP. A bigram considers groups of two adjacent words instead of (or in addition to) the single BoW. This should alleviate situations such as "not enjoying" above, but it will remain open to gaming due to its loss of contextual awareness. Furthermore, in the second sentence above, the sentiment context of the second half of the sentence could be perceived as negating the first half. Thus, spatial locality of contextual clues also can be lost in this approach.

Complicating matters from a pragmatic perspective is the sparsity of features extracted from a given input text. For a thorough and large vocabulary, a count is maintained for each word, which can be considered an integer vector. Most documents will have a large number of zero counts in their vectors, which adds unnecessary space and time complexity to operations. While a number of clever approaches have been proposed for reducing this complexity, it remains an issue.
### Word embeddings
Word embeddings are a distributed representation that allows words with a similar meaning to have a similar representation. This is based on using a real-valued vector to represent words in connection with the company they keep, as it were. The focus is on the manner that words are used, as opposed to simply their existence. In addition, a huge pragmatic benefit of word embeddings is their focus on dense vectors; by moving away from a word-counting model with commensurate amounts of zero-valued vector elements, word embeddings provide a more efficient computational paradigm with respect to both time and storage.
Following are two prominent word embedding approaches.
#### Word2vec
The first of these word embeddings, [Word2vec][6], was developed at Google. You'll probably see this embedding method mentioned as you go deeper in your study of NLP and sentiment analysis. It utilizes either a _continuous bag of words_ (CBOW) or a _continuous skip-gram_ model. In CBOW, a word's context is learned during training based on the words surrounding it. Continuous skip-gram learns the words that tend to surround a given word. Although this is more than what you'll probably need to tackle, if you're ever faced with having to generate your own word embeddings, the author of Word2vec advocates the CBOW method for speed and assessment of frequent words, while the skip-gram approach is better suited for embeddings where rare words are more important.
#### GloVe
The second word embedding, [_Global Vectors for Word Representation_][7] (GloVe), was developed at Stanford. It's an extension to the Word2vec method that attempts to combine the information gained through classical global text statistical feature extraction with the local contextual information determined by Word2vec. In practice, GloVe has outperformed Word2vec for some applications, while falling short of Word2vec's performance in others. Ultimately, the targeted dataset for your word embedding will dictate which method is optimal; as such, it's good to know the existence and high-level mechanics of each, as you'll likely come across them.
#### Creating and using word embeddings
Finally, it's useful to know how to obtain word embeddings; in part 2, you'll see that we are standing on the shoulders of giants, as it were, by leveraging the substantial work of others in the community. This is one method of acquiring a word embedding: namely, using an existing trained and proven model. Indeed, myriad models exist for English and other languages, and it's possible that one does what your application needs out of the box!
If not, the opposite end of the spectrum in terms of development effort is training your own standalone model without consideration of your application. In essence, you would acquire substantial amounts of labeled training data and likely use one of the approaches above to train a model. Even then, you are still only at the point of acquiring understanding of your input-text data; you then need to develop a model specific for your application (e.g., analyzing sentiment valence in software version-control messages) which, in turn, requires its own time and effort.
You also could train a word embedding on data specific to your application; while this could reduce time and effort, the word embedding would be application-specific, which would reduce reusability.
### Available tooling options
You may wonder how you'll ever get to a point of having a solution for your problem, given the intensive time and computing power needed. Indeed, the complexities of developing solid models can be daunting; however, there is good news: there are already many proven models, tools, and software libraries available that may provide much of what you need. We will focus on [Python][8], which conveniently has a plethora of tooling in place for these applications.
#### SpaCy
[SpaCy][9] provides a number of language models for parsing input text data and extracting features. It is highly optimized and touted as the fastest library of its kind. Best of all, it's open source! SpaCy performs tokenization, parts-of-speech classification, and dependency annotation. It contains word embedding models for performing this and other feature extraction operations for over 46 languages. You will see how it can be used for text analysis and feature extraction in the second article in this series.
#### vaderSentiment
The [vaderSentiment][10] package provides a measure of positive, negative, and neutral sentiment. As the [original paper][11]'s title ("VADER: A Parsimonious Rule-based Model for Sentiment Analysis of Social Media Text") indicates, the models were developed and tuned specifically for social media text data. VADER was trained on a thorough set of human-labeled data, which included common emoticons, UTF-8 encoded emojis, and colloquial terms and abbreviations (e.g., meh, lol, sux).
For given input text data, vaderSentiment returns a 3-tuple of polarity score percentages. It also provides a single scoring measure, referred to as _vaderSentiment's compound metric_. This is a real-valued measurement within the range **[-1, 1]** wherein sentiment is considered positive for values greater than **0.05** , negative for values less than **-0.05** , and neutral otherwise.
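In code, mapping the compound metric to a label is a small helper. This is a sketch based on the thresholds above, not part of the vaderSentiment API:

```
def classify(compound):
    # Thresholds per the VADER authors: >= 0.05 positive, <= -0.05 negative
    if compound >= 0.05:
        return "positive"
    if compound <= -0.05:
        return "negative"
    return "neutral"

print(classify(0.6696))  # positive
```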
In [part 2][2], you will learn how to use these tools to add sentiment analysis capabilities to your designs.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/social-media-sentiment-analysis-python
作者:[Michael McCune ][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/elmiko/users/jschlessman
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/getting_started_with_python.png?itok=MFEKm3gl (Raspberry Pi and Python)
[2]: https://opensource.com/article/19/4/social-media-sentiment-analysis-python-part-2
[3]: https://en.wikipedia.org/wiki/Bag-of-words_model
[4]: https://en.wikipedia.org/wiki/Tf%E2%80%93idf#Term_frequency
[5]: https://en.wikipedia.org/wiki/Tf%E2%80%93idf#Inverse_document_frequency
[6]: https://en.wikipedia.org/wiki/Word2vec
[7]: https://en.wikipedia.org/wiki/GloVe_(machine_learning)
[8]: https://www.python.org/
[9]: https://pypi.org/project/spacy/
[10]: https://pypi.org/project/vaderSentiment/
[11]: http://comp.social.gatech.edu/papers/icwsm14.vader.hutto.pdf

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (This is how System76 does open hardware)
[#]: via: (https://opensource.com/article/19/4/system76-hardware)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
This is how System76 does open hardware
======
What sets the new Thelio line of desktops apart from the rest.
![metrics and data shown on a computer screen][1]
Most people know very little about the hardware in their computers. As a long-time Linux user, I've had my share of frustration while getting my wireless cards, video cards, displays, and other hardware working with my chosen distribution. Proprietary hardware often makes it difficult to determine why an Ethernet controller, wireless controller, or mouse performs differently than we expect. As Linux distributions have matured, this has become less of a problem, but we still see some quirks with touchpads and other peripherals, especially when we don't know much—if anything—about our underlying hardware.
Companies like [System76][2] aim to take these types of problems out of the Linux user experience. System76 manufactures a line of Linux laptops, desktops, and servers, and even offers its own Linux distro, [Pop!_OS][3], as an option for buyers. Recently, I had the privilege of visiting System76's plant in Denver for [the unveiling][4] of [Thelio][5], its new desktop product line.
### About Thelio
System76 says Thelio's open hardware daughterboard, named Thelio Io after the fifth moon of Jupiter, is one thing that makes the computer unique in the marketplace. Thelio Io is certified [OSHWA #us000145][6] and has four SATA ports for storage and an embedded controller for fan and power button control. Thelio Io SAS is certified [OSHWA #us000146][7] and has four U.2 ports for storage and no embedded controller. During a demonstration, System76 showed how these components adjust fans throughout the chassis to optimize the unit's performance.
The controller also runs the power button and the LED ring around the button, which glows at 100% brightness when it is pressed. This provides both tactile and visual confirmation that the unit is being powered on. While the computer is in use, the button is set to 35% brightness, and when it's in suspend mode, it pulses between 2.35% and 25%. When the computer is off, the LED remains dimly lit so that it's easy to find the power control in a dark room.
Thelio's embedded controller is a low-power [ATmega32U4][8] microchip, and the controller's setup can be prototyped with an Arduino Micro. The number of Thelio Io boards changes depending on which Thelio model you purchase.
Thelio is also perhaps the best-designed computer case and system I have ever seen. You'll probably agree if you have ever skinned your knuckles trying to operate inside a typical PC case. I have done this a number of times, and I have the scars to prove it.
### Why open hardware?
The boards were designed in [KiCAD][9], and you can access all of Thelio's design files under GPL on [GitHub][10]. So, why would a company that competes with other PC manufacturers design a unique interface then license it openly? It's because the company recognizes the value of open design and the ability to share and adjust an I/O board to your needs, even if you're a competitor in the marketplace.
![Don Watkins speaks with System76 CEO Carl Richell at the Thelio launch event.][11]
Don Watkins speaks with System76 CEO Carl Richell at the [Thelio launch event][12].
I asked [Carl Richell][13], System76's founder and CEO, whether the company is concerned that openly licensing its hardware designs means someone could take its unique design and use it to drive System76 out of business. He said:
> Open hardware benefits all of us. It's how we further advance technology and make it more available to everyone. We welcome anyone who wishes to improve on Thelio's design to do so. Opening the hardware not only helps advance improvements of our computers more quickly, but it also empowers our customers to truly own 100% of their device. Our goal is to remove as much proprietary functioning as we can, while still producing a competitive Linux computer for customers.
>
> We've been working with the Linux community for over 13 years to create a flawless and digestible experience on all of our laptops, desktops, and servers. Our long tenure serving the Linux community, providing our customers with a high level of service, and our personability are what makes System76 unique.
I also asked Carl why open hardware makes sense for System76 and the PC business in general. He replied:
> System76 was founded on the idea that technology should be open and accessible to everyone. We're not yet at the point where we can create a computer that is 100% open source, but with open hardware, we're one large, essential step closer to reaching that point.
>
> We live in an era where technology has become a utility. Computers are tools for people at every level of education and across many industries. With everyone's needs specific, each person has their own ideas on how they might improve the computer and its software as their primary tool. Having our computers open allows these ideas to come to fruition, which in turn makes the technology a more powerful tool. In an open environment, we constantly get to iterate a better PC. And that's kind of cool.
We wrapped up our conversation by talking about System76's roadmap, which includes open hardware mini PCs and, eventually, laptops. Existing mini PCs and laptops sold under the System76 brand are manufactured by other vendors and are not based on open hardware (although their Linux software is, of course, open source).
Designing and supporting open hardware is a game-changer in the PC business, and it is what sets System76's new Thelio line of desktop computers apart.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/system76-hardware
作者:[Don Watkins ][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_data_dashboard_system_computer_analytics.png?itok=oxAeIEI- (metrics and data shown on a computer screen)
[2]: https://system76.com/
[3]: https://opensource.com/article/18/1/behind-scenes-popos-linux
[4]: /article/18/11/system76-thelio-desktop-computer
[5]: https://system76.com/desktops
[6]: https://certification.oshwa.org/us000145.html
[7]: https://certification.oshwa.org/us000146.html
[8]: https://www.microchip.com/wwwproducts/ATmega32u4
[9]: http://kicad-pcb.org/
[10]: https://github.com/system76/thelio-io
[11]: https://opensource.com/sites/default/files/uploads/don_system76_ceo.jpg (Don Watkins speaks with System76 CEO Carl Richell at the Thelio launch event.)
[12]: https://trevgstudios.smugmug.com/System76/121418-Thelio-Press-Event/i-FKWFxFv
[13]: https://www.linkedin.com/in/carl-richell-9435781

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (New Features Coming to Debian 10 Buster Release)
[#]: via: (https://itsfoss.com/new-features-coming-to-debian-10-buster-release/)
[#]: author: (Shirish https://itsfoss.com/author/shirish/)
New Features Coming to Debian 10 Buster Release
======
Debian 10 Buster is nearing its release. The first release candidate is already out, and we should see the final release, hopefully, in a few weeks.
If you are excited about this major new release, let me tell you what's in it for you.
### Debian 10 Buster Release Schedule
There is no set release date for [Debian 10 Buster][1]. Why is that so? Unlike other distributions, [Debian][2] doesn't do time-based releases. It instead focuses on fixing release-critical bugs. Release-critical bugs are bugs which either have security issues ([CVEs][3]) or some other critical issues which prevent Debian from releasing.
Debian has three parts in its archive, called main, contrib, and non-free. Of the three, Debian developers and release managers are most concerned that the packages which form the bedrock of the distribution, i.e. main, are rock stable. So they make sure that there aren't any major functional or security issues. Packages are also given priority values such as Essential, Required, Important, Standard, Optional, and Extra. More on this in a later Debian article.
This is necessary because Debian is used as a server in many different environments, and people have come to depend on it. Debian also looks at upgrade cycles to make sure nothing breaks; it asks people to test, see if anything breaks while upgrading, and inform Debian of the same.
This commitment to stability is one of the [many reasons why I love to use Debian][4].
### Whats new in Debian 10 Buster Release
Here are a few visual and under the hood changes in the upcoming major release of Debian.
#### New theme and wallpaper
The Debian theme for Buster is called [FuturePrototype][5] and can be seen below:
![Debian Buster FuturePrototype Theme][6]
#### 1\. GNOME Desktop 3.30
The GNOME desktop, which was at version 3.22 in Debian Stretch, is updated to 3.30 in Buster. Some of the new packages included in this GNOME desktop release are gnome-todo, tracker (instead of tracker-gui), and a dependency on gstreamer1.0-packagekit, so there is automatic codec installation for playing movies, etc. The big move has been that all packages have migrated from libgtk2+ to libgtk3+.
#### 2\. Linux Kernel 4.19.0-4
Debian uses LTS kernel versions, so you can expect much better hardware support and a long, 5-year maintenance and support cycle from Debian. From kernel 4.9.0-3 we have come to 4.19.0-4.
```
$ uname -r
4.19.0-4-amd64
```
#### 3\. OpenJDK 11.0
For a long time Debian was stuck on OpenJDK 8.0. Now, in Debian Buster, we have moved to OpenJDK 11.0 and have a team which will take care of new versions.
#### 4\. AppArmor Enabled by Default
In Debian Buster, [AppArmor][7] will be enabled by default. While this is a good thing, system administrators will need to take care to enable the correct policies. This is only the first step; probably a lot of scripts will need fixing to be as useful as envisioned for the user.
#### 5\. Nodejs 10.15.2
For a long time Debian had Node.js 4.8 in the repo. In this cycle, Debian has moved to Node.js 10.15.2. In fact, Debian Buster has many JavaScript libraries such as yarnpkg (an npm alternative) and many others.
Of course, you can [install the latest Node.js in Debian][8] from the project's repository, but it's good to see a newer version in the Debian repository.
#### 6\. nftables replaces iptables
Debian Buster provides nftables as a full replacement for iptables, which means better and easier syntax, better support for dual-stack IPv4/IPv6 firewalls, and more.
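As a rough sketch of the syntax difference, here is the same rule (allow incoming SSH) expressed both ways; note that a single nftables `inet` table covers both IPv4 and IPv6:

```
# iptables: allow incoming SSH
iptables -A INPUT -p tcp --dport 22 -j ACCEPT

# nftables equivalent
nft add table inet filter
nft add chain inet filter input '{ type filter hook input priority 0; }'
nft add rule inet filter input tcp dport 22 accept
```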
#### 7\. Support for many ARM64 and ARMHF SBC boards
There has been a constant stream of new SBC boards which Debian supports. The latest among these are pine64_plus and pinebook for ARM64, and Firefly-RK3288 and u-boot-rockchip for ARMHF, as well as the Odroid HC1/HC2, SolidRun Cubox-i Dual/Quad (1.5som), SolidRun Cubox-i Dual/Quad (1.5som+emmc), and Cubietruck Plus boards. There is also support for the Rock 64, Banana Pi M2 Berry, Pine A64 LTS board, and Olimex A64 Teres-1, as well as the Raspberry Pi 1, Zero, and Pi 3. Support will be out of the box for RISC-V systems as well.
#### 8\. Python 2 is dead, long live Python 3
Python 2 will be [deprecated][9] on January 1, 2020 by python.org. While Debian does have Python 2.7, efforts are underway to move all packages to Python 3 and remove Python 2 from the repo. This may happen either at the Buster release or in a future point release, but it is imminent. So Python developers are encouraged to make their code base compatible with Python 3. At the moment of writing, both python2 and python3 are supported in Debian Buster.
#### 9\. Mailman 3
Mailman 3 is finally available in Debian. [Mailman][10] has been subdivided into components; to install the whole stack, install the mailman3-full package to get all the components.
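On Buster, that should be as simple as the following (a sketch, assuming the package name above):

```
sudo apt install mailman3-full
```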
#### 10\. Any existing PostgreSQL databases will need to be reindexed
Due to updates in glibc locale data, the way information is sorted in text indexes will change, hence it would be beneficial to reindex the data so that no data corruption arises in the near future.
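One straightforward way to do this after the upgrade is PostgreSQL's own reindexdb tool; a minimal sketch, run as the database superuser:

```
# Rebuild all indexes in every database of the cluster
sudo -u postgres reindexdb --all
```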
#### 11\. Bash 5.0 by Default
You have probably already read about the [new features in Bash 5.0][11]; this version is already in Debian.
#### 12\. Debian implementing /usr/merge
An excellent freedesktop [primer][12] on what /usr/merge brings has already been shared. A couple of things to note, though. While Debian would like to do the whole transition, there is a possibility that, due to unforeseen circumstances, some binaries may not be in a position to make the change. One point to note: /var and /etc will be left alone, so people who are using containers or the cloud do not have to worry too much :)
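You can check whether a given system is already /usr-merged by looking at the top-level directories; on a merged system they are symlinks into /usr (illustrative output, details will vary):

```
$ ls -ld /bin /sbin /lib
lrwxrwxrwx 1 root root 7 ... /bin -> usr/bin
lrwxrwxrwx 1 root root 8 ... /sbin -> usr/sbin
lrwxrwxrwx 1 root root 7 ... /lib -> usr/lib
```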
#### 13\. Secure-boot support
With Buster RC1, Debian now has secure-boot support. This means you should be able to install Debian easily on machines which have the Secure Boot bit turned on. No need to disable or work around Secure Boot anymore :)
#### 14\. Calamares live installer for Debian Live images
For its Buster live images, Debian introduces the [Calamares installer][13] instead of the plain old debian-installer. While debian-installer has many more features than Calamares, for newbies Calamares provides a fresh alternative for installation. Some screenshots from the installation process:
![Calamares Partitioning Stage][14]
As can be seen, it is pretty easy to install Debian under Calamares; there are only 5 stages to go through before you have Debian installed.
### Download Debian 10 Live Images (only for testing)
Don't use it on production machines just yet. Try it on a test machine or a virtual machine.
You can get Debian 64-bit and 32-bit images from the Debian Live [directory][15]. If you want the 64-bit version, look into the 64-bit directory; if you want the 32-bit version, look into the 32-bit directory.
[Debian 10 Buster Live Images][15]
If you upgrade from the existing stable release and something breaks, see if the issue is already reported against the [upgrade-reports][16] pseudo-package using [reportbug][17]. If the bug has not been reported in the package, then report it and share as much information as you can.
**In Conclusion**
Thousands of packages have been updated, and it is virtually impossible to list them all, but I have tried to list some of the major changes that you can look for in Debian Buster. What do you think of it?
--------------------------------------------------------------------------------
via: https://itsfoss.com/new-features-coming-to-debian-10-buster-release/
Author: [Shirish][a]
Topic selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]: https://itsfoss.com/author/shirish/
[b]: https://github.com/lujun9972
[1]: https://wiki.debian.org/DebianBuster
[2]: https://www.debian.org/
[3]: https://en.wikipedia.org/wiki/Common_Vulnerabilities_and_Exposures
[4]: https://itsfoss.com/reasons-why-i-love-debian/
[5]: https://wiki.debian.org/DebianArt/Themes/futurePrototype
[6]: https://itsfoss.com/wp-content/uploads/2019/04/debian-buster-theme-800x450.png
[7]: https://wiki.debian.org/AppArmor
[8]: https://itsfoss.com/install-nodejs-ubuntu/
[9]: https://www.python.org/dev/peps/pep-0373/
[10]: https://www.list.org/
[11]: https://itsfoss.com/bash-5-release/
[12]: https://www.freedesktop.org/wiki/Software/systemd/TheCaseForTheUsrMerge/
[13]: https://calamares.io/about/
[14]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/calamares-partitioning-wizard.jpg?fit=800%2C538&ssl=1
[15]: https://cdimage.debian.org/cdimage/weekly-live-builds/
[16]: https://bugs.debian.org/cgi-bin/pkgreport.cgi?pkg=upgrade-reports;dist=unstable
[17]: https://itsfoss.com/bug-report-debian/

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Monitor Disk I/O Activity Using iotop And iostat Commands In Linux?)
[#]: via: (https://www.2daygeek.com/monitor-disk-io-activity-using-iotop-iostat-command-in-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
How To Monitor Disk I/O Activity Using iotop And iostat Commands In Linux?
======
Do you know what tools we can use for troubleshooting or monitoring real-time disk activity in Linux?
If **[Linux system performance][1]** slows down, we may use the **[top command][2]** to check the system performance.
It is used to check which processes are consuming high resources on a server; it is common knowledge for most Linux administrators and widely used in the real world.
If you don't see much difference in the process output, you still have the option to check other things.
I would advise you to check the `wa` status in the top output, because most of the time server performance is degraded due to heavy I/O reads and writes on the hard disk.
If it is high or fluctuating, that could be the cause, so we need to check the I/O activity on the hard drive.
We can monitor disk I/O statistics for all disks and file systems on a Linux system using the `iotop` and `iostat` commands.
### What Is iotop?
iotop is a top-like utility for displaying real-time disk activity.
iotop watches I/O usage information output by the Linux kernel and displays a table of current I/O usage by processes or threads on the system.
It displays the I/O bandwidth read and written by each process/thread. It also displays the percentage of time the thread/process spent while swapping in and while waiting on I/O.
Total DISK READ and Total DISK WRITE values represent total read and write bandwidth between processes and kernel threads on the one side and kernel block device subsystem on the other.
Actual DISK READ and Actual DISK WRITE values represent corresponding bandwidths for actual disk I/O between kernel block device subsystem and underlying hardware (HDD, SSD, etc.).
### How To Install iotop In Linux?
We can easily install it with the help of the package manager, since the package is available in all Linux distributions' repositories.
For **`Fedora`** systems, use **[DNF Command][3]** to install iotop.
```
$ sudo dnf install iotop
```
For **`Debian/Ubuntu`** systems, use **[APT-GET Command][4]** or **[APT Command][5]** to install iotop.
```
$ sudo apt install iotop
```
For **`Arch Linux`** based systems, use **[Pacman Command][6]** to install iotop.
```
$ sudo pacman -S iotop
```
For **`RHEL/CentOS`** systems, use **[YUM Command][7]** to install iotop.
```
$ sudo yum install iotop
```
For **`openSUSE Leap`** systems, use **[Zypper Command][8]** to install iotop.
```
$ sudo zypper install iotop
```
### How To Monitor Disk I/O Activity/Statistics In Linux Using iotop Command?
There are many options available in the iotop command to check various statistics about disk I/O.
Run the iotop command without any arguments to see each process's or thread's current I/O usage.
```
# iotop
```
[![][10]][10]
If you would like to check which processes are actually doing I/O, then run the iotop command with the `-o` or `--only` option.
```
# iotop --only
```
[![][11]][11]
**Details:**
* **`IO:`** It shows I/O utilization for each process, which includes disk and swap.
* **`SWAPIN:`** It shows only the swap usage of each process.
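If you want to capture this information non-interactively, for example to log I/O spikes over time, the same view can be produced in batch mode; a small sketch using documented iotop options, with a log path that is just an example:

```
# iotop -b -o -t -q -n 3 >> /var/log/iotop-samples.log
```

Here `-b` enables batch (non-interactive) mode, `-o` limits the output to processes actually doing I/O, `-t` prefixes each line with a timestamp, `-q` suppresses repeated header lines, and `-n 3` takes three samples and exits.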
### What Is iostat?
iostat is used to report Central Processing Unit (CPU) statistics and input/output statistics for devices and partitions.
The iostat command is used for monitoring system input/output device loading by observing the time the devices are active in relation to their average transfer rates.
The iostat command generates reports that can be used to change system configuration to better balance the input/output load between physical disks.
All statistics are reported each time the iostat command is run. The report consists of a CPU header row followed by a row of CPU statistics.
On multiprocessor systems, CPU statistics are calculated system-wide as averages among all processors. A device header row is displayed followed by a line of statistics for each device that is configured.
The iostat command generates two types of reports, the CPU Utilization report and the Device Utilization report.
### How To Install iostat In Linux?
The iostat tool is part of the sysstat package, so we can easily install it with the help of the package manager, since the package is available in all Linux distributions' repositories.
For **`Fedora`** systems, use **[DNF Command][3]** to install sysstat.
```
$ sudo dnf install sysstat
```
For **`Debian/Ubuntu`** systems, use **[APT-GET Command][4]** or **[APT Command][5]** to install sysstat.
```
$ sudo apt install sysstat
```
For **`Arch Linux`** based systems, use **[Pacman Command][6]** to install sysstat.
```
$ sudo pacman -S sysstat
```
For **`RHEL/CentOS`** systems, use **[YUM Command][7]** to install sysstat.
```
$ sudo yum install sysstat
```
For **`openSUSE Leap`** systems, use **[Zypper Command][8]** to install sysstat.
```
$ sudo zypper install sysstat
```
### How To Monitor Disk I/O Activity/Statistics In Linux Using iostat Command?
There are many options available in the iostat command to check various statistics about disk I/O and CPU.
Run the iostat command without any arguments to see the complete statistics of the system.
```
# iostat
Linux 4.19.32-1-MANJARO (daygeek-Y700) Thursday 18 April 2019 _x86_64_ (8 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
29.45 0.02 16.47 0.12 0.00 53.94
Device tps kB_read/s kB_wrtn/s kB_dscd/s kB_read kB_wrtn kB_dscd
nvme0n1 6.68 126.95 124.97 0.00 58420014 57507206 0
sda 0.18 6.77 80.24 0.00 3115036 36924764 0
loop0 0.00 0.00 0.00 0.00 2160 0 0
loop1 0.00 0.00 0.00 0.00 1093 0 0
loop2 0.00 0.00 0.00 0.00 1077 0 0
```
Run the iostat command with the `-d` option to see I/O statistics for all the devices.
```
# iostat -d
Linux 4.19.32-1-MANJARO (daygeek-Y700) Thursday 18 April 2019 _x86_64_ (8 CPU)
Device tps kB_read/s kB_wrtn/s kB_dscd/s kB_read kB_wrtn kB_dscd
nvme0n1 6.68 126.95 124.97 0.00 58420030 57509090 0
sda 0.18 6.77 80.24 0.00 3115292 36924764 0
loop0 0.00 0.00 0.00 0.00 2160 0 0
loop1 0.00 0.00 0.00 0.00 1093 0 0
loop2 0.00 0.00 0.00 0.00 1077 0 0
```
Run the iostat command with the `-p` option to see I/O statistics for all the devices and their partitions.
```
# iostat -p
Linux 4.19.32-1-MANJARO (daygeek-Y700) Thursday 18 April 2019 _x86_64_ (8 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
29.42 0.02 16.45 0.12 0.00 53.99
Device tps kB_read/s kB_wrtn/s kB_dscd/s kB_read kB_wrtn kB_dscd
nvme0n1 6.68 126.94 124.96 0.00 58420062 57512278 0
nvme0n1p1 6.40 124.46 118.36 0.00 57279753 54474898 0
nvme0n1p2 0.27 2.47 6.60 0.00 1138069 3037380 0
sda 0.18 6.77 80.23 0.00 3116060 36924764 0
sda1 0.00 0.01 0.00 0.00 3224 0 0
sda2 0.18 6.76 80.23 0.00 3111508 36924764 0
loop0 0.00 0.00 0.00 0.00 2160 0 0
loop1 0.00 0.00 0.00 0.00 1093 0 0
loop2 0.00 0.00 0.00 0.00 1077 0 0
```
Run the iostat command with the `-x` option to see detailed I/O statistics for all the devices.
```
# iostat -x
Linux 4.19.32-1-MANJARO (daygeek-Y700) Thursday 18 April 2019 _x86_64_ (8 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
29.41 0.02 16.45 0.12 0.00 54.00
Device r/s rkB/s rrqm/s %rrqm r_await rareq-sz w/s wkB/s wrqm/s %wrqm w_await wareq-sz d/s dkB/s drqm/s %drqm d_await dareq-sz aqu-sz %util
nvme0n1 2.45 126.93 0.60 19.74 0.40 51.74 4.23 124.96 5.12 54.76 3.16 29.54 0.00 0.00 0.00 0.00 0.00 0.00 0.31 30.28
sda 0.06 6.77 0.00 0.00 8.34 119.20 0.12 80.23 19.94 99.40 31.84 670.73 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.13
loop0 0.00 0.00 0.00 0.00 0.08 19.64 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
loop1 0.00 0.00 0.00 0.00 0.40 12.86 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
loop2 0.00 0.00 0.00 0.00 0.38 19.58 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
```
Run the iostat command with the `-p [Device_Name]` option to see I/O statistics of a particular device and its partitions.
```
# iostat -p [Device_Name]
# iostat -p sda
Linux 4.19.32-1-MANJARO (daygeek-Y700) Thursday 18 April 2019 _x86_64_ (8 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
29.38 0.02 16.43 0.12 0.00 54.05
Device tps kB_read/s kB_wrtn/s kB_dscd/s kB_read kB_wrtn kB_dscd
sda 0.18 6.77 80.21 0.00 3117468 36924764 0
sda2 0.18 6.76 80.21 0.00 3112916 36924764 0
sda1 0.00 0.01 0.00 0.00 3224 0 0
```
Run the iostat command with the `-m` option to see I/O statistics in `MB` instead of `KB` for all the devices. By default it shows the output in KB.
```
# iostat -m
Linux 4.19.32-1-MANJARO (daygeek-Y700) Thursday 18 April 2019 _x86_64_ (8 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
29.36 0.02 16.41 0.12 0.00 54.09
Device tps MB_read/s MB_wrtn/s MB_dscd/s MB_read MB_wrtn MB_dscd
nvme0n1 6.68 0.12 0.12 0.00 57050 56176 0
sda 0.18 0.01 0.08 0.00 3045 36059 0
loop0 0.00 0.00 0.00 0.00 2 0 0
loop1 0.00 0.00 0.00 0.00 1 0 0
loop2 0.00 0.00 0.00 0.00 1 0 0
```
To run the iostat command at a certain interval, use the following format. In this example, we are going to capture a total of two reports at a five-second interval.
```
# iostat [Interval] [Number Of Reports]
# iostat 5 2
Linux 4.19.32-1-MANJARO (daygeek-Y700) Thursday 18 April 2019 _x86_64_ (8 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
29.35 0.02 16.41 0.12 0.00 54.10
Device tps kB_read/s kB_wrtn/s kB_dscd/s kB_read kB_wrtn kB_dscd
nvme0n1 6.68 126.89 124.95 0.00 58420116 57525344 0
sda 0.18 6.77 80.20 0.00 3118492 36924764 0
loop0 0.00 0.00 0.00 0.00 2160 0 0
loop1 0.00 0.00 0.00 0.00 1093 0 0
loop2 0.00 0.00 0.00 0.00 1077 0 0
avg-cpu: %user %nice %system %iowait %steal %idle
3.71 0.00 2.51 0.05 0.00 93.73
Device tps kB_read/s kB_wrtn/s kB_dscd/s kB_read kB_wrtn kB_dscd
nvme0n1 19.00 0.20 311.40 0.00 1 1557 0
sda 0.20 25.60 0.00 0.00 128 0 0
loop0 0.00 0.00 0.00 0.00 0 0 0
loop1 0.00 0.00 0.00 0.00 0 0 0
loop2 0.00 0.00 0.00 0.00 0 0 0
```
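These options can also be combined. For example, the following prints the extended statistics in MB every five seconds until interrupted:

```
# iostat -xm 5
```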
Run the iostat command with the `-N` option to see the LVM disk I/O statistics report.
```
# iostat -N
Linux 4.15.0-47-generic (Ubuntu18.2daygeek.com) Thursday 18 April 2019 _x86_64_ (2 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
0.38 0.07 0.18 0.26 0.00 99.12
Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 3.60 57.07 69.06 968729 1172340
sdb 0.02 0.33 0.00 5680 0
sdc 0.01 0.12 0.00 2108 0
2g-2gvol1 0.00 0.07 0.00 1204 0
```
Run the nfsiostat command to see the I/O statistics for the Network File System (NFS).
```
# nfsiostat
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/monitor-disk-io-activity-using-iotop-iostat-command-in-linux/
Author: [Magesh Maruthamuthu][a]
Topic selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/category/monitoring-tools/
[2]: https://www.2daygeek.com/linux-top-command-linux-system-performance-monitoring-tool/
[3]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
[4]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
[5]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
[6]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
[7]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
[8]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
[10]: https://www.2daygeek.com/wp-content/uploads/2015/03/monitor-disk-io-activity-using-iotop-iostat-command-in-linux-1.jpg
[11]: https://www.2daygeek.com/wp-content/uploads/2015/03/monitor-disk-io-activity-using-iotop-iostat-command-in-linux-2.jpg

[#]: collector: (lujun9972)
[#]: translator: (jdh8383)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The shell scripting trap)
[#]: via: (https://arp242.net/weblog/shell-scripting-trap.html)
[#]: author: (Martin Tournoij https://arp242.net/)
The shell scripting trap
======
Shell scripts are great; you can write something useful with very little effort. Even a silly command like the one below:
```
# Pick a name from the words starting with 'go':
$ grep -i ^go /usr/share/dict/american-english /usr/share/dict/british /usr/share/dict/british-english /usr/share/dict/catala /usr/share/dict/catalan /usr/share/dict/cracklib-small /usr/share/dict/finnish /usr/share/dict/french /usr/share/dict/german /usr/share/dict/italian /usr/share/dict/ngerman /usr/share/dict/ogerman /usr/share/dict/spanish /usr/share/dict/usa /usr/share/dict/words | cut -d: -f2 | sort -R | head -n1
goldfish
```
In another programming language this takes more mental effort and multiple lines of code; for example, in Ruby:
```
puts(Dir['/usr/share/dict/*-english'].map do |f|
File.open(f)
.readlines
.select { |l| l[0..1].downcase == 'go' }
end.flatten.sample.chomp)
```
The Ruby version isn't that long, and it isn't complex either. But the shell version is so simple that I can be confident it is correct without even testing it, whereas I can't be sure the Ruby version won't fail without testing it. And it is twice as long and looks more complex.
This is why people use shell scripts: they're simple yet useful. Here is another example:
```
curl https://nl.wikipedia.org/wiki/Lijst_van_Nederlandse_gemeenten |
grep '^<li><a href=' |
sed -r 's|<li><a href="/wiki/.+" title=".+">(.+)</a>.*</li>|\1|' |
grep -Ev '(^Tabel van|^Lijst van|Nederland)'
```
This script fetches the list of Dutch municipalities from Wikipedia. I wrote this ad-hoc script years ago to quickly bootstrap a database, and it still works fine today; writing it took hardly any effort. Doing the same in Ruby would have been much more work.
Now for the downsides of shell. As the amount of code grows, your script becomes harder and harder to maintain, but you don't feel like rewriting it in another language either, because you have already spent so much time on the shell version.
I call this the "shell script trap", a special case of the [sunk cost fallacy][1] (the sunk cost fallacy is a concept from economics; roughly, it means being unable to let go of costs that have already been invested and may be wasted).
Many scripts do grow beyond their expected size, and you often end up spending far too much time "fixing that one bug" or "adding that one small feature". It's a vicious circle.
If you had written the program in Python, Ruby, or a similar language from the start, you might have spent more time on the first version, but maintenance would be much easier afterwards, and there would certainly be fewer bugs.
Take my [packman.vim][2] script as an example. It started out as a simple `for` loop over directories plus a `git pull`, but it kept growing and is now about 200 lines of code. That is certainly not the most complex script ever, but if I had written it in Go from the start as planned, adding features like "print status" or "clone new git repositories from a config file" would have been much easier, and supporting "parallel clones" would have been almost trivial, whereas it is hard (though not impossible) in a shell script. In hindsight, I would have saved time and gotten a better result.
For similar reasons I regret writing many of these shell scripts, and my 2018 New Year's resolution is to not make the same mistake again.
#### Appendix: a list of the problems
It should be pointed out that shell programming does have some real limitations. Here are some examples (the quoting and `set -u` pitfalls are sketched in the code right after this list):
  * Dealing with filenames that contain "spaces" or other "special" characters requires constant attention to detail. The vast majority of scripts get this wrong, even those written by experienced authors (such as me), because it is so easy to get wrong; [just adding quotes is not enough][3].
  * There are many "right" and "wrong" ways to do things. Should you use `which` or `command`? Should it be `$@` or `$*`, and should they be quoted? Is it `cmd $arg` or `cmd "$arg"`? And so on and so forth.
  * You cannot store NUL bytes (0x00) in variables; shell scripts are bad at handling binary data.
  * While you can write something useful very quickly, implementing more complex algorithms is painful, even with ksh/zsh/bash extensions. My HTML-parsing script above was fine as an ad-hoc hack, but you really don't want to use that sort of script in production.
  * It is hard to write shell scripts that work across platforms. `/bin/sh` may be `dash` or `bash`, and different shells behave differently. External tools such as `grep`, `sed`, etc. may or may not support the same flags. Can you be sure that your script works on all versions (past, present, and future) of Linux, macOS, and Windows?
  * Debugging shell scripts can be hard, especially since the syntax can quickly become obscure, and not everyone is familiar with the idioms of shell programming.
  * Error handling is tricky (checking `$?` or using `set -e`), and debugging anything more complex than "a small oops" is close to impossible.
  * Unless you use `set -u`, undefined variables do not raise errors, which leads to "fun stuff" like `rm -r ~/$undefined` deleting the user's entire home directory ([see this tragedy on GitHub][4]).
  * Everything is a string. Some shells have arrays; they work, but the syntax is ugly and obscure. Arithmetic with fractional numbers is still hard and relies on external tools such as `bc` or `dc` (`$(( .. ))` only handles integers).
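To make the quoting and `set -u` points above concrete, here is a minimal, runnable sketch; the filename and variable name are made up for illustration:

```
#!/bin/sh
# Pitfall: word splitting. Unquoted, $f would expand to TWO arguments
# ("my" and "file.txt"); quoting "$f" passes exactly one.
f='my file.txt'
touch -- "$f"
rm -- "$f"        # '--' also guards against names starting with '-'

# Pitfall: silently-empty variables. Without 'set -u' the commented-out
# line below would expand to 'rm -r ~/' and wipe the home directory;
# with 'set -u' the script aborts with an error instead.
set -u
# rm -r ~/"$undefined"
```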
**Feedback**
You can send feedback, ask questions, etc. by emailing [martin@arp242.net][5] or by [creating an issue on GitHub][6].
--------------------------------------------------------------------------------
via: https://arp242.net/weblog/shell-scripting-trap.html
Author: [Martin Tournoij][a]
Topic selected by: [lujun9972][b]
Translated by: [jdh8383](https://github.com/jdh8383)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]: https://arp242.net/
[b]: https://github.com/lujun9972
[1]: https://youarenotsosmart.com/2011/03/25/the-sunk-cost-fallacy/
[2]: https://github.com/Carpetsmoker/packman.vim
[3]: https://dwheeler.com/essays/filenames-in-shell.html
[4]: https://github.com/ValveSoftware/steam-for-linux/issues/3671
[5]: mailto:martin@arp242.net
[6]: https://github.com/Carpetsmoker/arp242.net/issues/new