Merge remote-tracking branch 'LCTT/master'

This commit is contained in:
Xingyu Wang 2019-09-25 21:18:48 +08:00
commit b47afc11bd
21 changed files with 2958 additions and 502 deletions

View File

@ -0,0 +1,252 @@
[#]: collector: (lujun9972)
[#]: translator: (laingke)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11387-1.html)
[#]: subject: (Linux commands for measuring disk activity)
[#]: via: (https://www.networkworld.com/article/3330497/linux/linux-commands-for-measuring-disk-activity.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
用于测量磁盘活动的 Linux 命令
======
> Linux 发行版提供了几个度量磁盘活动的有用命令。让我们了解一下其中的几个。
![](https://images.idgesg.net/images/article/2018/12/tape-measure-100782593-large.jpg)
Linux 系统提供了一套方便的命令,帮助你查看磁盘有多忙,而不仅仅是磁盘有多满。在本文中,我们将研究五个非常有用的命令,用于查看磁盘活动。其中两个命令(`iostat` 和 `ioping`)可能需要安装到你的系统中,而且这两个命令都要求使用 sudo 特权。这五个命令都提供了查看磁盘活动的有用方法。
这些命令中最简单、最直观的一个可能是 `dstat` 了。
### dstat
尽管 `dstat` 命令以字母 “d” 开头,但它提供的统计信息远远不止磁盘活动。如果你只想查看磁盘活动,可以使用 `-d` 选项。如下所示,你将得到一个磁盘读/写测量值的连续列表,直到使用 `CTRL-c` 停止显示为止。注意,在第一个报告信息之后,显示中的每个后续行将在接下来的时间间隔内报告磁盘活动,缺省值仅为一秒。
```
$ dstat -d
-dsk/total-
read writ
949B 73k
65k 0 <== first second
0 24k <== second second
0 16k
0 0 ^C
```
在 `-d` 选项后面加上一个数字,可以将报告间隔设置为该秒数。
```
$ dstat -d 10
-dsk/total-
read writ
949B 73k
 65k   81M <== first ten seconds
   0   21k <== second ten seconds
0 9011B ^C
```
请注意,报告的数据可能以许多不同的单位显示——例如 M(Mb)、K(Kb)和 B(字节)。
如果没有选项,`dstat` 命令还将显示许多其他信息——指示 CPU 如何使用时间、显示网络和分页活动、报告中断和上下文切换。
```
$ dstat
You did not select any stats, using -cdngy by default.
--total-cpu-usage-- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai stl| read writ| recv send| in out | int csw
0 0 100 0 0| 949B 73k| 0 0 | 0 3B| 38 65
0 0 100 0 0| 0 0 | 218B 932B| 0 0 | 53 68
0 1 99 0 0| 0 16k| 64B 468B| 0 0 | 64 81 ^C
```
`dstat` 命令提供了关于整个 Linux 系统性能的宝贵见解。这个灵活而强大的命令集合了 `vmstat`、`netstat`、`iostat` 和 `ifstat` 等较旧工具的功能,几乎可以取代它们。要深入了解 `dstat` 命令可以提供的其它信息,请参阅这篇关于 [dstat][1] 命令的文章。
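作为一个示意性的例子(以下选项在常见的 `dstat` 版本中都可用,具体请以你系统上的手册页为准),下面的命令只监视 `sda` 这一块磁盘,每 5 秒报告一次,并在每行前面加上时间戳;其中的输出数值仅为示意:

```
$ dstat -td -D sda 5
----system---- --dsk/sda--
     time     | read  writ
25-09 21:15:00| 949B   73k
25-09 21:15:05|   0    24k
```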
### iostat
`iostat` 命令通过观察设备活动的时间与其平均传输速率之间的关系,帮助监视系统输入/输出设备的加载情况。它有时用于评估磁盘之间的活动平衡。
```
$ iostat
Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
0.07 0.01 0.03 0.05 0.00 99.85
Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn
loop0 0.00 0.00 0.00 1048 0
loop1 0.00 0.00 0.00 365 0
loop2 0.00 0.00 0.00 1056 0
loop3 0.00 0.01 0.00 16169 0
loop4 0.00 0.00 0.00 413 0
loop5 0.00 0.00 0.00 1184 0
loop6 0.00 0.00 0.00 1062 0
loop7 0.00 0.00 0.00 5261 0
sda 1.06 0.89 72.66 2837453 232735080
sdb 0.00 0.02 0.00 48669 40
loop8 0.00 0.00 0.00 1053 0
loop9 0.01 0.01 0.00 18949 0
loop10 0.00 0.00 0.00 56 0
loop11 0.00 0.00 0.00 7090 0
loop12 0.00 0.00 0.00 1160 0
loop13 0.00 0.00 0.00 108 0
loop14 0.00 0.00 0.00 3572 0
loop15 0.01 0.01 0.00 20026 0
loop16 0.00 0.00 0.00 24 0
```
当然,当你只想关注磁盘时,Linux 回环设备上的所有统计信息都会使结果显得杂乱无章。不过,该命令也确实提供了 `-p` 选项,该选项使你可以仅查看磁盘,如以下命令所示。
```
$ iostat -p sda
Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
0.07 0.01 0.03 0.05 0.00 99.85
Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 1.06 0.89 72.54 2843737 232815784
sda1 1.04 0.88 72.54 2821733 232815784
```
请注意,`tps` 是指每秒的传输量。
你还可以让 `iostat` 提供重复的报告。在下面的示例中,我们使用 `-d` 选项每五秒钟进行一次测量。
```
$ iostat -p sda -d 5
Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU)
Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 1.06 0.89 72.51 2843749 232834048
sda1 1.04 0.88 72.51 2821745 232834048
Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 0.80 0.00 11.20 0 56
sda1 0.80 0.00 11.20 0 56
```
如果你希望省略第一个报告(即自启动以来的统计信息),请在命令中添加 `-y` 选项。
```
$ iostat -p sda -d 5 -y
Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU)
Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 0.80 0.00 11.20 0 56
sda1 0.80 0.00 11.20 0 56
```
接下来,我们看第二个磁盘驱动器。
```
$ iostat -p sdb
Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
0.07 0.01 0.03 0.05 0.00 99.85
Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sdb 0.00 0.02 0.00 48669 40
sdb2 0.00 0.00 0.00 4861 40
sdb1 0.00 0.01 0.00 35344 0
```
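如果你需要比 `tps` 和吞吐量更深入的指标(例如设备利用率 `%util` 和平均等待时间 `await`),可以试试 `iostat` 的 `-x` 扩展统计选项。下面是一个示例命令(输出列较多,这里从略,各列含义请参见 `man iostat`):

```
$ iostat -xd sda 5
```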
### iotop
`iotop` 命令是类似 `top` 的实用程序,用于查看磁盘 I/O。它收集 Linux 内核提供的 I/O 使用信息,以便你了解哪些进程在磁盘 I/O 方面的要求最高。在下面的示例中,循环时间被设置为 5 秒。显示将自动更新,覆盖前面的输出。
```
$ sudo iotop -d 5
Total DISK READ: 0.00 B/s | Total DISK WRITE: 1585.31 B/s
Current DISK READ: 0.00 B/s | Current DISK WRITE: 12.39 K/s
TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
32492 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.12 % [kworker/u8:1-ev~_power_efficient]
208 be/3 root 0.00 B/s 1585.31 B/s 0.00 % 0.11 % [jbd2/sda1-8]
1 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % init splash
2 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kthreadd]
3 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [rcu_gp]
4 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [rcu_par_gp]
8 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [mm_percpu_wq]
```
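一个小技巧:缺省情况下,`iotop` 会列出所有进程,包括当前没有任何磁盘活动的进程。如果只想看到实际正在执行 I/O 的进程,可以加上 `-o`(即 `--only`)选项:

```
$ sudo iotop -o -d 5
```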
### ioping
`ioping` 命令是一种完全不同的工具,但是它可以报告磁盘延迟——也就是磁盘响应请求需要多长时间,而这有助于诊断磁盘问题。
```
$ sudo ioping /dev/sda1
4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=1 time=960.2 us (warmup)
4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=2 time=841.5 us
4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=3 time=831.0 us
4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=4 time=1.17 ms
^C
--- /dev/sda1 (block device 111.8 GiB) ioping statistics ---
3 requests completed in 2.84 ms, 12 KiB read, 1.05 k iops, 4.12 MiB/s
generated 4 requests in 3.37 s, 16 KiB, 1 iops, 4.75 KiB/s
min/avg/max/mdev = 831.0 us / 947.9 us / 1.17 ms / 158.0 us
```
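`ioping` 缺省会一直运行,直到被 `CTRL-c` 中断。你也可以用 `-c` 选项限定请求次数,并把目标换成一个目录,以测量文件系统(而非上面示例中的裸设备)的延迟。例如(目录仅为示例,可换成任何你有写权限的位置):

```
$ ioping -c 10 /var/tmp
```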
### atop
`atop` 命令像 `top` 命令一样,提供了大量有关系统性能的信息,其中包括一些磁盘活动的统计信息。
```
ATOP - butterfly 2018/12/26 17:24:19 37d3h13m------ 10ed
PRC | sys 0.03s | user 0.01s | #proc 179 | #zombie 0 | #exit 6 |
CPU | sys 1% | user 0% | irq 0% | idle 199% | wait 0% |
cpu | sys 1% | user 0% | irq 0% | idle 99% | cpu000 w 0% |
CPL | avg1 0.00 | avg5 0.00 | avg15 0.00 | csw 677 | intr 470 |
MEM | tot 5.8G | free 223.4M | cache 4.6G | buff 253.2M | slab 394.4M |
SWP | tot 2.0G | free 2.0G | | vmcom 1.9G | vmlim 4.9G |
DSK | sda | busy 0% | read 0 | write 7 | avio 1.14 ms |
NET | transport | tcpi 4 | tcpo 8 | udpi 1 | udpo 0 |
NET | network | ipi 10 | ipo 7 | ipfrw 0 | deliv 10 |
NET | enp0s25 0% | pcki 10 | pcko 8 | si 1 Kbps | so 3 Kbps |
PID SYSCPU USRCPU VGROW RGROW ST EXC THR S CPUNR CPU CMD 1/1
3357 0.01s 0.00s 672K 824K -- - 1 R 0 0% atop
3359 0.01s 0.00s 0K 0K NE 0 0 E - 0% <ps>
3361 0.00s 0.01s 0K 0K NE 0 0 E - 0% <ps>
3363 0.01s 0.00s 0K 0K NE 0 0 E - 0% <ps>
31357 0.00s 0.00s 0K 0K -- - 1 S 1 0% bash
3364 0.00s 0.00s 8032K 756K N- - 1 S 1 0% sleep
2931 0.00s 0.00s 0K 0K -- - 1 I 1 0% kworker/u8:2-e
3356 0.00s 0.00s 0K 0K -E 0 0 E - 0% <sleep>
3360 0.00s 0.00s 0K 0K NE 0 0 E - 0% <sleep>
3362 0.00s 0.00s 0K 0K NE 0 0 E - 0% <sleep>
```
如果你*只*想查看磁盘统计信息,则可以使用以下命令轻松进行管理:
```
$ atop | grep DSK
DSK | sda | busy 0% | read 122901 | write 3318e3 | avio 0.67 ms |
DSK | sdb | busy 0% | read 1168 | write 103 | avio 0.73 ms |
DSK | sda | busy 2% | read 0 | write 92 | avio 2.39 ms |
DSK | sda | busy 2% | read 0 | write 94 | avio 2.47 ms |
DSK | sda | busy 2% | read 0 | write 99 | avio 2.26 ms |
DSK | sda | busy 2% | read 0 | write 94 | avio 2.43 ms |
DSK | sda | busy 2% | read 0 | write 94 | avio 2.43 ms |
DSK | sda | busy 2% | read 0 | write 92 | avio 2.43 ms |
^C
```
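`atop` 缺省每 10 秒刷新一次。如果想改变采样间隔或限定采样次数,可以在命令行上依次给出间隔(秒)和次数。例如,下面的命令每 2 秒采样一次、共 5 次后退出:

```
$ atop 2 5 | grep DSK
```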
### 了解磁盘 I/O
Linux 提供了足够多的命令,可以让你很好地了解磁盘的工作强度,并帮助你关注潜在的问题或性能下降。希望这些命令中的某一个能在你需要质疑磁盘性能时给你提示。偶尔使用这些命令,将有助于确保在你需要检查磁盘时,能够一眼发现特别繁忙或缓慢的磁盘。
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3330497/linux/linux-commands-for-measuring-disk-activity.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[laingke](https://github.com/laingke)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3291616/linux/examining-linux-system-performance-with-dstat.html
[2]: https://www.facebook.com/NetworkWorld/
[3]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,61 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Global Tech Giants Form Presto Foundation to Tackle Distributed Data Processing at Scale)
[#]: via: (https://opensourceforu.com/2019/09/global-tech-giants-form-presto-foundation-to-tackle-distributed-data-processing-at-scale/)
[#]: author: (Longjam Dineshwori https://opensourceforu.com/author/dineshwori-longjam/)
Global Tech Giants Form Presto Foundation to Tackle Distributed Data Processing at Scale
======
* _**The Foundation aims to make the database search engine “the fastest and most reliable SQL engine for massively distributed data processing.”**_
* _**Presto's architecture allows users to query a variety of data sources and move at scale and speed.**_
![Facebook][1]
Facebook, Uber, Twitter and Alibaba have joined hands to form a foundation to help Presto, a database search engine and processing tool, scale and diversify its community.
Presto will now be hosted under the Linux Foundation, the U.S.-based non-profit organization announced on Monday.
The newly established Presto Foundation will operate under a community governance model with representation from each of the founding members. It aims to make the engine “the fastest and most reliable SQL engine for massively distributed data processing.”
“The Linux Foundation is excited to work with the Presto community, collaborating to solve the increasing problem of massive distributed data processing at internet scale,” said Michael Dolan, VP of Strategic Programs at the Linux Foundation.
**Presto can run on large clusters of machines**
Presto was developed at Facebook in 2012 as a high-performance distributed SQL query engine for large scale data analytics. Presto's architecture allows users to query a variety of data sources such as Hadoop, S3, Alluxio, MySQL, PostgreSQL, Kafka, and MongoDB, and to move at scale and speed.
It can query data where it is stored without needing to move the data to a separate system. Its in-memory and distributed query processing results in query latencies of seconds to minutes.
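As a rough sketch of what that looks like in practice (the server address, catalog, schema, and table names below are all hypothetical), a single query issued through the Presto CLI can join data living in two different systems, such as Hive and MySQL:

```
$ presto --server presto.example.com:8080 --catalog hive --schema web \
    --execute "SELECT u.name, count(*) AS views
               FROM page_views v
               JOIN mysql.crm.users u ON v.user_id = u.id
               GROUP BY u.name"
```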
“Presto has been designed for high performance exabyte-scale data processing on a large number of machines. Its flexible design allows processing data from a wide variety of data sources. From day one Presto has been designed with efficiency, scalability and reliability in mind, and it has been improved over the years to take on additional use cases at Facebook, such as batch and other application specific interactive use cases,” said Nezih Yigitbasi, Engineering Manager of Presto at Facebook.
Presto is being used by over a thousand Facebook employees for running several million queries and processing petabytes of data per day, according to Kathy Kam, Head of Open Source at Facebook.
**Expanding community for the benefit of all**
Facebook released the source code of Presto to developers in 2013 in the hope that other companies would help to drive the future direction of the project.
“It turns out many other companies were interested and so under The Linux Foundation, we believe the project can engage others and grow the community for the benefit of all,” said Kathy Kam.
Uber's data platform architecture uses Presto to extract critical insights from aggregated data. “Uber is honoured to partner with the Linux Foundation and major contributors from the tech community to bring the Presto Foundation to life. Our goal is to help create an open and collaborative community in which Presto developers can thrive,” asserted Brian Hsieh, Head of Open Source at Uber.
Liang Lin, Senior Director of Alibaba OLAP products, believes that the collaboration would eventually benefit the community as well as Alibaba and its customers.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/09/global-tech-giants-form-presto-foundation-to-tackle-distributed-data-processing-at-scale/
作者:[Longjam Dineshwori][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/dineshwori-longjam/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2016/06/Facebook-Like.jpg?resize=350%2C213&ssl=1

View File

@ -0,0 +1,67 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Deloitte Launches New Tool for Tracking the Trajectory of Open Source Technologies)
[#]: via: (https://opensourceforu.com/2019/09/deloitte-launches-new-tool-for-tracking-the-trajectory-of-open-source-technologies/)
[#]: author: (Longjam Dineshwori https://opensourceforu.com/author/dineshwori-longjam/)
Deloitte Launches New Tool for Tracking the Trajectory of Open Source Technologies
======
* _**Called Open Source Compass, the new open source analysis tool provides insights into 15 emergent technology domains**_
* _**It can help software engineers in identifying potential platforms for prototyping, experimentation and scaled innovation.**_
Deloitte has launched a first-of-its-kind public data visualization tool, called Open Source Compass (OSC), which is intended to help C-suite leaders, product managers and software engineers understand the trajectory of open source development and emerging technologies.
Deloitte collaborated with César Hidalgo, who holds a Chair at the University of Toulouse's Artificial and Natural Intelligence Toulouse Institute (ANITI) and co-founded Datawheel, to design and develop the tool.
The tool enables users to search technology domains, projects, programming languages and locations of interest, explore emerging trends, run comparisons, and share and download data.
“Open source software has been around since the early days of the internet and has incited a completely new kind of collaboration and productivity — especially in the realm of emerging technology,” said Bill Briggs, chief technology officer, Deloitte Consulting LLP.
“Deloitte's Open Source Compass can help provide insights that allow organizations to be more deliberate in their approach to innovation, while connecting to a pool of burgeoning talent,” he added.
**Free and open to the public**
Open Source Compass will provide insights into 15 emergent technology domains, including cyber security, virtual/augmented reality, serverless computing and machine learning, to name a few.
The site will offer a view into systemic trends on how the domains are evolving. The open source platform will also explore geographic trends based on project development, authors and knowledge sharing across cities and countries. It will also track how certain programming languages are being used and how fast they are growing. Free and open to the public, the site will enable users to query technology domains of interest, run their own comparisons and share or download data.
**The benefits of using Open Source Compass**
OSC analyzes data from the largest open source development platform which brings together over 36 million developers from around the world. OSC visualizes the scale and reach of emerging technology domains — over 100 million repositories/projects — in areas including blockchain, machine learning and the Internet of Things (IoT).
Some of the key benefits of Deloitte's new open source analysis tool include:
* Exploring which specific open source projects are growing or stagnating in domains like machine learning.
* Identifying potential platforms for prototyping, experimentation and scaled innovation.
* Scouting for tech talent in specific technology domains and locations.
* Detecting and assessing technology risks.
* Understanding what programming languages are gaining or losing ground to inform training and recruitment.
According to Ragu Gurumurthy, global chief innovation officer for Deloitte Consulting LLP, Open Source Compass can address different organizational needs for different types of users based on their priorities.
He explained, “A CTO could explore the latest project developments in machine learning to help drive experimentation, while a learning and development leader can find the most popular programming language for robotics that could then be taught as a new skill in an internal course offering.”
Datawheel is an award-winning company specializing in the creation of data visualization solutions. “Making sense of large streams of data is one of the most pressing challenges of our day,” said Hidalgo.
“In Open Source Compass, we used our latest technologies to create a platform that turns opaque and difficult to understand streams of data into simple and easy to understand visualizations,” he commented.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/09/deloitte-launches-new-tool-for-tracking-the-trajectory-of-open-source-technologies/
作者:[Longjam Dineshwori][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/dineshwori-longjam/
[b]: https://github.com/lujun9972

View File

@ -0,0 +1,112 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How DevOps professionals can become security champions)
[#]: via: (https://opensource.com/article/19/9/devops-security-champions)
[#]: author: (Jessica Repka https://opensource.com/users/jrepka)
How DevOps professionals can become security champions
======
Breaking down silos and becoming a champion for security will help you,
your career, and your organization.
![A lock on the side of a building][1]
Security is a misunderstood element in DevOps. Some see it as outside of DevOps' purview, while others find it important (and overlooked) enough to recommend moving to [DevSecOps][2]. No matter your perspective on where it belongs, it's clear that security affects everyone.
Each year, the [statistics on hacking][3] become more alarming. For example, there's a hacker attack every 39 seconds, which can lead to stolen records, identities, and proprietary projects you're writing for your company. It can take months (and possibly forever) for your security team to discover the who, what, where, or when behind a hack.
What are operations professionals to do about these dire problems? I say it is time for us to become part of the solution by becoming security champions.
### Silos and turf wars
Over my years of working side-by-side with my local IT security (ITSEC) teams, I've noticed a great many things. A big one is that tension is very common between DevOps and security. This tension almost always stems from the security team's efforts to protect against vulnerabilities (e.g., by setting rules or disabling things) that interrupt DevOps' work and hinder their ability to deploy apps quickly.
You've seen it, I've seen it, everyone you meet in the field has at least one story about it. A small set of grudges turns into a burned bridge that takes time to repair—or the groups begin a small turf war, and the resulting silos make achieving DevOps unlikely.
### Get a new perspective
To try to break down these silos and end the turf wars, I talk to at least one person on each security team to learn about the ins and outs of daily security operations in our organization. I started doing this out of general curiosity, but I've continued because it always gives me a valuable new perspective. For example, I've learned that for every deployment that's stopped due to failed security, the ITSEC team is feverishly trying to patch 10 other problems it sees. Their brashness and quickness to react are due to the limited time they have to fix something before it becomes a large problem.
Consider the immense amount of knowledge it takes to find, analyze, and undo what has been done. Or to figure out what the DevOps team is doing—without background information—then replicate and test it. And to do all of this with their usual greatly understaffed security team.
This is the daily life of your security team, and your DevOps team is not seeing it. ITSEC's daily work can mean overtime hours and overwork to make sure that the company, its teams, and the proprietary work its teams are producing are secure.
### Ways to be a security champion
This is where being your own security champion can help. This means—for everything you work on—you must take a good, hard look at all the ways someone could log into it and what could be taken from it.
Help your security team help you. Introduce tools into your pipelines to integrate what you know will work with what they know will work. Start with small things, such as reading up on Common Vulnerabilities and Exposures (CVEs) and adding scanning functions to your [CI/CD][4] pipelines. For everything you build, there is an open source scanning tool, and adding small open source tools (such as the ones below) can go the extra mile in the long run; a minimal pipeline sketch follows the tool lists below.
**Container scanning tools:**
* [Anchore Engine][5]
* [Clair][6]
* [Vuls][7]
* [OpenSCAP][8]
**Code scanning tools:**
* [OWASP SonarQube][9]
* [Find Security Bugs][10]
* [Google Hacking Diggity Project][11]
**Kubernetes security tools:**
* [Project Calico][12]
* [Kube-hunter][13]
* [NeuVector][14]
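As a minimal sketch of what adding a scanning step can look like (the image name is only an example, and this assumes you have a running Anchore Engine service that the anchore-cli client is configured to talk to), a pipeline job could boil down to a few commands:

```
# Submit an image for analysis, wait for the analysis to finish,
# then list every known vulnerability found in it
$ anchore-cli image add docker.io/library/debian:latest
$ anchore-cli image wait docker.io/library/debian:latest
$ anchore-cli image vuln docker.io/library/debian:latest all
```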
### Keep your DevOps hat on
Learning about new technology and how to create new things with it is part of the job if you're in a DevOps-related role. Security is no different. Here's my list of ways to keep up to date on the security front while keeping your DevOps hat on.
* Read one article each week about something related to security in whatever you're working on.
* Look at the [CVE][15] website weekly to see what's new.
* Try doing a hackathon. Some companies do this once a month; check out the [Beginner Hack 1.0][16] site if yours doesn't and you'd like to learn more.
* Try to attend at least one security conference a year with a member of your security team to see things from their side.
### Be a champion for good
There are several reasons you should become your own security champion. The first and foremost is to further your knowledge and advance your career. The second reason is to help other teams, foster new relationships, and break down the silos that harm your organization. Creating friendships across your organization has multiple benefits, including setting a good example of bridging teams and encouraging people to work together. You will also foster sharing knowledge throughout the organization and provide everyone with a new lease on security and greater internal cooperation.
Overall, being a security champion will lead you to be a champion for good across your organization.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/devops-security-champions
作者:[Jessica Repka][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jrepka
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_3reasons.png?itok=k6F3-BqA (A lock on the side of a building)
[2]: https://opensource.com/article/19/1/what-devsecops
[3]: https://hostingtribunal.com/blog/hacking-statistics/
[4]: https://opensource.com/article/18/8/what-cicd
[5]: https://github.com/anchore/anchore-engine
[6]: https://github.com/coreos/clair
[7]: https://vuls.io/
[8]: https://www.open-scap.org/
[9]: https://github.com/OWASP/sonarqube
[10]: https://find-sec-bugs.github.io/
[11]: https://resources.bishopfox.com/resources/tools/google-hacking-diggity/
[12]: https://www.projectcalico.org/
[13]: https://github.com/aquasecurity/kube-hunter
[14]: https://github.com/neuvector/neuvector-helm
[15]: https://cve.mitre.org/
[16]: https://www.hackerearth.com/challenges/hackathon/beginner-hack-10/

View File

@ -0,0 +1,69 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Java still relevant, Linux desktop, and more industry trends)
[#]: via: (https://opensource.com/article/19/9/java-relevant-and-more-industry-trends)
[#]: author: (Tim Hildred https://opensource.com/users/thildred)
Java still relevant, Linux desktop, and more industry trends
======
A weekly look at open source community and industry trends.
![Person standing in front of a giant computer screen with numbers, data][1]
As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.
## [Is Java still relevant?][2]
> Mike Milinkovich, executive director of the Eclipse Foundation, which oversees Java Enterprise Edition (now Jakarta EE), also believes Java itself is going to evolve to support these technologies. “I think that there's going to be changes to Java that go from the JVM all the way up,” said Milinkovich. “So any new features in the JVM which will help integrate the JVM with Docker containers and be able to do a better job of instrumenting Docker containers within Kubernetes is definitely going to be a big help. So we are going to be looking for Java SE to evolve in that direction.”
**The impact**: A completely open source release of Java Enterprise Edition as Jakarta EE lays the groundwork for years of Java development to come. Some of Java's relevance comes from the mind-boggling sums that have been spent developing in it and the years of experience that software developers have in solving problems with it. Combine that with the innovation in the ecosystem (for example, see [Quarkus][3], or GraalVM), and the answer has to be "yes."
## [GraalVM: The holy graal of polyglot JVM?][4]
> While most of the hype around GraalVM has been around compiling JVM projects to native, we found plenty of value in its Polyglot APIs. GraalVM is a compelling and already fully useable alternative to Nashorn, though the migration path is still a little rocky, mostly due to a lack of documentation. Hopefully this post helps others find their way off of Nashorn and on to the holy graal.
**The impact**: One of the best things that can happen with an open source project is if users start raving about some novel application of the technology that isn't even the headline use case. "Yeah yeah, sounds great but we don't even turn that thing on... this other piece though!"
## [Call me crazy, but Windows 11 could run on Linux][5]
> Microsoft has already been doing some of the needed work. [Windows Subsystem for Linux][6] (WSL) developers have been working on mapping Linux API calls to Windows, and vice versa. With the first version of WSL, Microsoft connected the dots between Windows-native libraries and programs and Linux. At the time, [Carmen Crincoli tweeted][7]: “2017 is finally the year of Linux on the Desktop. It's just that the Desktop is Windows.” Who is Carmen Crincoli? Microsoft's manager of partnerships with storage and independent hardware vendors.
**The impact**: [Project Hieroglyph][8] builds on the premise that "a good science fiction work posits one vision for the future... that is built on a foundation of realism [that]... invites us to consider the complex ways our choices and interactions contribute to generating the future." Could Microsoft's choices and interactions with the broader open source community lead to a sci-fi future? Stay tuned!
## [Python is eating the world: How one developer's side project became the hottest programming language on the planet][9]
> There are also questions over whether the makeup of bodies overseeing the development of the language — Python core developers and the Python Steering Council — could better reflect the diverse user base of Python users in 2019.
>
> "I would like to see better representation across all the diverse metrics, not just in terms of gender balance, but also race and everything else," says Wijaya.
>
> "At PyCon I spoke to [PyLadies][10] members from India and Africa. They commented that, 'When we hear about Python or PyLadies, we think about people in North America or Canada, where in reality there are big user bases in other parts of the world. Why aren't we seeing more of them?' I think it makes so much sense. So I definitely would like to see that happening, and I think we all need to do our part."
**The impact**: In these troubled times who doesn't want to hear about a benevolent dictator turning the reins of their project over to the people who are using it the most?
_I hope you enjoyed this list of what stood out to me from last week and come back next Monday for more open source community, market, and industry trends._
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/java-relevant-and-more-industry-trends
作者:[Tim Hildred][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/thildred
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://sdtimes.com/java/is-java-still-relevant/
[3]: https://github.com/quarkusio/quarkus
[4]: https://www.transposit.com/blog/2019.01.02-graalvm-holy/?c=hn
[5]: https://www.computerworld.com/article/3438856/call-me-crazy-but-windows-11-could-run-on-linux.html#tk.rss_operatingsystems
[6]: https://blogs.msdn.microsoft.com/wsl/
[7]: https://twitter.com/CarmenCrincoli/status/862714516257226752
[8]: https://hieroglyph.asu.edu/2016/04/what-is-the-purpose-of-science-fiction-stories/
[9]: https://www.techrepublic.com/article/python-is-eating-the-world-how-one-developers-side-project-became-the-hottest-programming-language-on-the-planet/
[10]: https://www.pyladies.com/

View File

@ -0,0 +1,143 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Great Open Source Divide: ICE, Hippocratic License and the Controversy)
[#]: via: (https://itsfoss.com/hippocratic-license/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
The Great Open Source Divide: ICE, Hippocratic License and the Controversy
======
_**Coraline Ada Ehmke has created the “Hippocratic License”, which aims to “add ethics to open source projects”. But this seems to be just the beginning of a controversy, as the “Hippocratic License” may not be open source at all.**_
Coraline Ada Ehmke, better known for her [Contributor Covenant][1], has modified the MIT open source license into the Hippocratic License, which adds a couple of conditions to the existing MIT license. Before you learn what it is, let me give you the context on why it's been created in the first place.
### No Tech for ICE
![No Tech For ICE | Image Credit Science for All][2]
The Immigration and Customs Enforcement agency of the US government, [ICE][3], has been condemned by human rights groups and activists for the inhumane practice of separating children from their parents at the US-Mexico border under the new strict immigration policy.
Some techies have been vocal against the actions of ICE, and they don't want ICE to use the tech projects they work on, as that helps ICE in one way or another.
The “[No Tech for ICE][4]” movement has been going on for some time but it got highlighted once again this week when an engineer named [Seth Vargo took down his open source project after finding ICE was using it][5] through Chef.
The project was called [Chef Sugar][6], a Ruby library for simplifying work with [Chef][7], a platform for configuration management. ICE is one of the clients for Chef. The project withdrawal momentarily impacted Chef and its clients. Chef swiftly fixed the problem by uploading the Chef Sugar project on its own GitHub repository.
Despite the trouble it caused for a number of companies using Chef worldwide, Vargo made a point. The pressure tactic worked and after [initial resistance][8], Chef caved in and [agreed to not renew its contract with ICE][9].
Now, Chef Sugar is an open source project and its developer cannot stop people from forking it and continuing to use it. And that's where [Coraline Ada Ehmke][10] came up with a new licensing model called the Hippocratic License.
### What is Hippocratic License?
![][11]
To enable more developers to forbid unethical organizations like ICE from using their open source projects, Coraline Ada Ehmke introduced a new license called the “Hippocratic License”.
The term Hippocratic relates to the ancient Greek physician [Hippocrates][12]. The [Hippocratic oath][13] is an ethical oath (historically taken by physicians), and one of its crucial parts is “I will abstain from all intentional wrong-doing and harm”. This part of the oath is known as “Primum non nocere” or “First do no harm”.
The entire terminology is significant. The license is called the Hippocratic License and is hosted on a domain called [firstdonoharm.dev][14], and the idea is to enable developers not to be part of intentional wrong-doing.
The [Hippocratic License][14] is based on the popular [MIT open source license][15]. It adds this additional and crucial condition:
> The software may not be used by individuals, corporations, governments, or other groups for systems or activities that actively and knowingly endanger, harm, or otherwise threaten the physical, mental, economic, or general well-being of underprivileged individuals or groups.
### Is Hippocratic license really an open source license?
No, it is not. That's what the [Open Source Initiative][16] (OSI) says. OSI is the community-recognized body for reviewing and approving licenses as Open Source Definition conformant.
> The intro to the Hippocratic Licence might lead some to believe
> the license is an Open Source Software licence, and software distributed under the Hippocratic Licence is Open Source Software.
>
> As neither is true, we ask you to please modify the language to remove confusion.
>
> — OpenSourceInitiative (@OpenSourceOrg) [September 23, 2019][17]
Coraline first [thanked][18] OSI for pointing it out, but then went on to attack it as an “open source problem”.
> This is the problem: the current structure of open source specifically prohibits us from protecting our labor from use by organizations like ICE.
>
> That's not a license problem. That's an Open Source™ problem. <https://t.co/XEyu5VNUMJ>
>
> — Coraline Ada Ehmke (@CoralineAda) [September 23, 2019][19]
Coraline clearly doesn't accept that OSI (Open Source Initiative) and [FSF][20] (Free Software Foundation) have the authority on the matter of defining open source and free software.
> OSI and FSF are not the real arbiters of what is Open Source and what is Free Software.
>
> We are.
>
> — Coraline Ada Ehmke (@CoralineAda) [September 22, 2019][21]
So if OSI and FSF, the organizations created for the sole purpose of defining open source and free software, are not the authority on this subject, then who is? The “we” in “we are” of Coraline's statement is ambiguous. Does “we” represent the people who agree with Coraline's view, or does “we” mean the entire open source community? If it's the latter, then Coraline doesn't represent or speak for every person in the open source community.
### Does it solve the problem or does it create more problems? Can open source be neutral?
> Developers are (finally) becoming more aware of the impact that their work has on the world, and in particular on underprivileged people.
>
> It's late to come to that realization, but not TOO LATE to do something about it.
>
> The lesson here is that TECH IS NOT NEUTRAL.
>
> — Coraline Ada Ehmke (@CoralineAda) [September 23, 2019][22]
Everything looks good from an idealistic point of view at the first glance. It seems like this new license will solve the problem of evil people using open source projects.
But I see a problem here and that problem is the perception of evil. What you consider evil depends on your point of view.
A number of “No Tech for ICE” supporting techies are also supporters of ANTIFA. [ANTIFA has been indulging in physical violence from time to time][23]. What if a bunch of cis white men, who consider [far-left organizations like ANTIFA][24] evil, stopped them from using their open source projects? What if [Richard Stallman comes back from his forced retirement][25] and starts selecting people who can use GNU projects based on whether or not they agree with his views?
The license condition also says “knowingly endanger, harm, or otherwise threaten the physical, mental, economic, or general well-being of underprivileged individuals or groups”.
So this entire clause applies only to “underprivileged individuals or groups”, not to others? So the others don't get the same rights anymore? This should not come as a surprise, because Coraline is the same person who took extreme measures to harm the economic well-being of a developer ([Coraline disagreed with his views][26]) by doing everything in her capacity to get him fired from his job.
Until these concerns are addressed, the Hippocratic License will unfortunately remain a “hypocrite license”.
Where will this end? How many open source projects will be forked between sparring groups of different ideologies? Why should the rest of the world suffer from American domestic politics? Can we not leave open source undivided?
Your views are welcome. Please note that abusive comments wont be published.
If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][27].
--------------------------------------------------------------------------------
via: https://itsfoss.com/hippocratic-license/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://www.contributor-covenant.org/
[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/no-tech-for-ice.jpg?resize=800%2C340&ssl=1
[3]: https://en.wikipedia.org/wiki/U.S._Immigration_and_Customs_Enforcement
[4]: https://notechforice.com/
[5]: https://www.zdnet.com/article/developer-takes-down-ruby-library-after-he-finds-out-ice-was-using-it/
[6]: https://github.com/sethvargo/chef-sugar
[7]: https://www.chef.io/
[8]: https://blog.chef.io/2019/09/19/chefs-position-on-customer-engagement-in-the-public-and-private-sectors/
[9]: https://www.vice.com/en_us/article/qvg3q5/chef-not-renewing-ice-immigration-customs-enforcement-contract-after-code-deleting-protest
[10]: https://where.coraline.codes/
[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/hippocratic-license.png?ssl=1
[12]: https://en.wikipedia.org/wiki/Hippocrates
[13]: https://en.wikipedia.org/wiki/Hippocratic_Oath
[14]: https://firstdonoharm.dev/
[15]: https://opensource.org/licenses/MIT
[16]: https://opensource.org/
[17]: https://twitter.com/OpenSourceOrg/status/1176229398929977344?ref_src=twsrc%5Etfw
[18]: https://twitter.com/CoralineAda/status/1176246765676302336
[19]: https://twitter.com/CoralineAda/status/1176262778459496454?ref_src=twsrc%5Etfw
[20]: https://www.fsf.org/
[21]: https://twitter.com/CoralineAda/status/1175878569169432582?ref_src=twsrc%5Etfw
[22]: https://twitter.com/CoralineAda/status/1176207120133447680?ref_src=twsrc%5Etfw
[23]: https://www.aol.com/article/news/2017/05/04/what-is-antifa-controversial-far-left-group-defends-use-of-violence/22067671/?guccounter=1&guce_referrer=aHR0cHM6Ly9lbi53aWtpcGVkaWEub3JnLw&guce_referrer_sig=AQAAAHYUcIrnC8zD4UX-W4N2Vshf-QVSVDTwNXlTNmy4gbUJUb9smDm7W9Bf1IelnBGz5x0QAdI-O3Zhm9obQjZcORvHjvp3J8tUgEbdlpKNef-jk1rTH8BTZOP7YJule2n7wbIc4wDHPMFjrZUsMx-kypQYVCpkjtEDltAHHo-73ZD_
[24]: https://www.bbc.com/news/world-us-canada-40930831
[25]: https://itsfoss.com/richard-stallman-controversy/
[26]: https://itsfoss.com/linux-code-of-conduct/
[27]: https://reddit.com/r/linuxusersgroup

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,170 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to start developing with .NET)
[#]: via: (https://opensource.com/article/19/9/getting-started-net)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
How to start developing with .NET
======
Learn the basics to get up and running with the .NET development
platform.
![Coding on a computer][1]
The .NET framework was released in 2000 by Microsoft. An open source implementation of the platform, [Mono][2], was the center of controversy in the early 2000s because Microsoft held several patents for .NET technology and could have used those patents to end Mono implementations. Fortunately, in 2014, Microsoft declared that the .NET development platform would be open source under the MIT license from then on, and in 2016, Microsoft purchased Xamarin, the company that produces Mono.
Both .NET and Mono have grown into cross-platform programming environments for C#, F#, GTK#, Visual Basic, Vala, and more. Applications created with .NET and Mono have been delivered to Linux, BSD, Windows, MacOS, Android, and even some gaming consoles. You can use either .NET or Mono to develop .NET applications. Both are open source, and both have active and vibrant communities. This article focuses on getting started with Microsoft's implementation of the .NET environment.
### How to install .NET
The .NET downloads are divided into packages: one containing just a .NET runtime, and the other a .NET software development kit (SDK) containing the .NET Core and runtime. Depending on your platform, there may be several variants of even these packages, accounting for architecture and OS version. To start developing with .NET, you must [install the SDK][3]. This gives you the [dotnet][4] terminal or PowerShell command, which you can use to create and build projects.
#### Linux
To install .NET on Linux, first, add the Microsoft Linux software repository to your computer.
On Fedora:
```
$ sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
$ sudo wget -q -O /etc/yum.repos.d/microsoft-prod.repo https://packages.microsoft.com/config/fedora/27/prod.repo
```
On Ubuntu:
```
$ wget -q https://packages.microsoft.com/config/ubuntu/19.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb
$ sudo dpkg -i packages-microsoft-prod.deb
```
Next, install the SDK using your package manager, replacing **<X.Y>** with the current version of the .NET release:
On Fedora:
```
$ sudo dnf install dotnet-sdk-<X.Y>
```
On Ubuntu:
```
$ sudo apt install apt-transport-https
$ sudo apt update
$ sudo apt install dotnet-sdk-<X.Y>
```
Once all the packages are downloaded and installed, confirm the installation by opening a terminal and typing:
```
$ dotnet --version
X.Y.Z
```
#### Windows
If you're on Microsoft Windows, you probably already have the .NET runtime installed. However, to develop .NET applications, you must also install the .NET Core SDK.
First, [download the installer][3]. To keep your options open, download .NET Core for cross-platform development (the .NET Framework is Windows-only). Once the **.exe** file is downloaded, double-click it to launch the installation wizard, and click through the two-step install process: accept the license and allow the install to proceed.
![Installing dotnet on Windows][5]
Afterward, open PowerShell from your Application menu in the lower-left corner. In PowerShell, type a test command:
```
PS C:\Users\osdc> dotnet
```
If you see information about a dotnet installation, .NET has been installed correctly.
#### MacOS
If you're on an Apple Mac, [download the Mac installer][3], which comes in the form of a **.pkg** package. Download and double-click on the **.pkg** file and click through the installer. You may need to grant permission for the installer since the package is not from the App Store.
Once all packages are downloaded and installed, confirm the installation by opening a terminal and typing:
```
$ dotnet --version
X.Y.Z
```
### Hello .NET
A sample "hello world" application written in .NET is provided with the **dotnet** command. Or, more accurately, the command provides the sample application.
First, create a project directory and the required code infrastructure using the **dotnet** command with the **new** and **console** options to create a new console-only application. Use the **-o** option to specify a project name:
```
$ dotnet new console -o hellodotnet
```
This creates a directory called **hellodotnet** in your current directory. Change into your project directory and have a look around:
```
$ cd hellodotnet
$ dir
hellodotnet.csproj  obj  Program.cs
```
The file **Program.cs** is a minimal C# file containing a simple Hello World application. Open it in a text editor to view it. Microsoft's Visual Studio Code is a cross-platform, open source application built with dotnet in mind, and while it's not a bad text editor, it also collects a lot of data about its user (and grants itself permission to do so in the license applied to its binary distribution). If you want to try out Visual Studio Code, consider using [VSCodium][6], a distribution of Visual Studio Code that's built from the MIT-licensed source code _without_ the telemetry (read the [documentation][7] for options to disable other forms of tracking in even this build). Alternatively, just use your existing favorite text editor or IDE.
The boilerplate code in a new console application is:
```
using System;
namespace hellodotnet
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
        }
    }
}
```
To run the program, use the **dotnet run** command:
```
$ dotnet run
Hello World!
```
That's the basic workflow of .NET and the **dotnet** command. The full [C# guide for .NET][8] is available, and everything there is relevant to .NET. For examples of .NET in action, follow [Alex Bunardzic][9]'s mutation testing articles here on opensource.com.
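From there (these are standard **dotnet** subcommands, mentioned only as a pointer for further exploration), you can compile the same project and publish a redistributable release build of it:

```
$ dotnet build
$ dotnet publish -c Release
```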
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/getting-started-net
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_laptop_hack_work.png?itok=aSpcWkcl (Coding on a computer)
[2]: https://www.monodevelop.com/
[3]: https://dotnet.microsoft.com/download
[4]: https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet?tabs=netcore21
[5]: https://opensource.com/sites/default/files/uploads/dotnet-windows-install.jpg (Installing dotnet on Windows)
[6]: https://vscodium.com/
[7]: https://github.com/VSCodium/vscodium/blob/master/DOCS.md
[8]: https://docs.microsoft.com/en-us/dotnet/csharp/tutorials/intro-to-csharp/
[9]: https://opensource.com/users/alex-bunardzic (View user profile.)

View File

@ -0,0 +1,332 @@
[#]: collector: (lujun9972)
[#]: translator: (GraveAccent)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Getting started with data science using Python)
[#]: via: (https://opensource.com/article/19/9/get-started-data-science-python)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Getting started with data science using Python
======
Doing data science with Python offers limitless potential for you to
parse, interpret, and structure data in meaningful and enlightening
ways.
![Metrics and a graph illustration][1]
Data science is an exciting new field in computing that's built around analyzing, visualizing, correlating, and interpreting the boundless amounts of information our computers are collecting about the world. Of course, calling it a "new" field is a little disingenuous because the discipline is a derivative of statistics, data analysis, and plain old obsessive scientific observation.
But data science is a formalized branch of these disciplines, with processes and tools all its own, and it can be broadly applied across disciplines (such as visual effects) that had never produced big dumps of unmanageable data before. Data science is a new opportunity to take a fresh look at data from oceanography, meteorology, geography, cartography, biology, medicine and health, and entertainment industries and gain a better understanding of patterns, influences, and causality.
Like other big and seemingly all-inclusive fields, it can be intimidating to know where to start exploring data science. There are a lot of resources out there to help data scientists use their favorite programming languages to accomplish their goals, and that includes one of the most popular programming languages out there: Python. Using the [Pandas][2], [Matplotlib][3], and [Seaborn][4] libraries, you can learn the basic toolset of data science.
If you're not familiar with the basics of Python yet, read my [introduction to Python][5] before continuing.
### Creating a Python virtual environment
Programmers sometimes forget which libraries they have installed on their development machine, and this can lead them to ship code that worked on their computer but fails on all others for lack of a library. Python has a system designed to avoid this manner of unpleasant surprise: the virtual environment. A virtual environment intentionally ignores all the Python libraries you have installed, effectively forcing you to begin development with nothing more than stock Python.
To activate a virtual environment with **venv**, invent a name for your environment (I'll use **example**) and create it with:
```
$ python3 -m venv example
```
Source the **activate** file in the environment's **bin** directory to activate it:
```
$ source ./example/bin/activate
(example) $
```
You are now "in" your virtual environment, a clean slate where you can build custom solutions to problems—with the added burden of consciously needing to install required libraries.
### Installing Pandas and NumPy
The first libraries you must install in your new environment are Pandas and NumPy. These libraries are common in data science, so this won't be the last time you'll install them. They're also not the only libraries you'll ever need in data science, but they're a good start.
Pandas is an open source, BSD-licensed library that makes it easy to process data structures for analysis. It depends on NumPy, a scientific library that provides multi-dimensional arrays, linear algebra, Fourier transforms, and much more. Install both using **pip3**:
```
(example) $ pip3 install pandas
```
Installing Pandas also installs NumPy, so you don't need to specify both. Once you have installed them to your virtual environment once, the installation packages are cached so that when you install them again, you don't have to download them from the internet.
Those are the only libraries you need for now. Next, you need some sample data.
### Generating a sample dataset
Data science is all about data, and luckily there are lots of free and open datasets available from scientific, computing, and government organizations. While these datasets are a great resource for education, they have a lot more data than necessary for this simple example. You can create a sample and manageable dataset quickly with Python:
```
#!/usr/bin/env python3
import random

def rgb():
    # return one random channel value, normalized to the 0-1 range
    return random.randint(0,255)/255

FILE = open('sample.csv','w')
FILE.write('"red","green","blue"')
for COUNT in range(10):
    FILE.write('\n{:0.2f},{:0.2f},{:0.2f}'.format(rgb(),rgb(),rgb()))
FILE.close()  # flush the buffered rows to disk
```
This produces a file called **sample.csv**, consisting of randomly generated floats representing, in this example, RGB values (a commonly tracked value, among hundreds, in visual effects). You can use a CSV file as a data source for Pandas.
### Ingesting data with Pandas
One of Pandas' basic features is its ability to ingest data and process it without the programmer writing new functions just to parse input. If you're used to applications that do that automatically, this might not seem like it's very special—but imagine opening a CSV in [LibreOffice][6] and having to write formulas to split the values at each comma. Pandas shields you from low-level operations like that. Here's some simple code to ingest and print out a file of comma-separated values:
```
#!/usr/bin/env python3
from pandas import read_csv, DataFrame
import pandas as pd
FILE = open('sample.csv','r')
DATAFRAME = pd.read_csv(FILE)
print(DATAFRAME)
```
The first few lines import components of the Pandas library. The Pandas library is extensive, so you'll refer to its documentation frequently when looking for functions beyond the basic ones in this article.
Next, a variable **FILE** is created by opening the **sample.csv** file you created. That variable is used by the Pandas module **read_csv** (imported in the second line) to create a _dataframe_. In Pandas, a dataframe is a two-dimensional array, commonly thought of as a table. Once your data is in a dataframe, you can manipulate it by column and row, query it for ranges, and do a lot more. The sample code, for now, just prints the dataframe to the terminal.
Run the code. Your output will differ slightly from this sample output because the numbers are randomly generated, but the format is the same:
```
(example) $ python3 ./parse.py
    red  green  blue
0  0.31   0.96  0.47
1  0.95   0.17  0.64
2  0.00   0.23  0.59
3  0.22   0.16  0.42
4  0.53   0.52  0.18
5  0.76   0.80  0.28
6  0.68   0.69  0.46
7  0.75   0.52  0.27
8  0.53   0.76  0.96
9  0.01   0.81  0.79
```
Assume you need only the red values from your dataset. You can do this by declaring your dataframe's column names and selectively printing only the column you're interested in:
```
from pandas import read_csv, DataFrame
import pandas as pd
FILE = open('sample.csv','r')
DATAFRAME = pd.read_csv(FILE)
# define columns
DATAFRAME.columns = [ 'red','green','blue' ]
print(DATAFRAME['red'])
```
Run the code now, and you get just the red column:
```
(example) $ python3 ./parse.py
0    0.31
1    0.95
2    0.00
3    0.22
4    0.53
5    0.76
6    0.68
7    0.75
8    0.53
9    0.01
Name: red, dtype: float64
```
Manipulating tables of data is a great way to get used to how data can be parsed with Pandas. There are many more ways to select data from a dataframe, and the more you experiment, the more natural it becomes.
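As a small taste of that experimentation (a sketch that assumes the same **sample.csv** and column names used above), you can filter rows by value or ask Pandas for quick summary statistics:

```
#!/usr/bin/env python3
import pandas as pd

FILE = open('sample.csv','r')
DATAFRAME = pd.read_csv(FILE)
DATAFRAME.columns = [ 'red','green','blue' ]

# select only the rows where the red channel is brighter than 50%
print(DATAFRAME[DATAFRAME['red'] > 0.5])

# count, mean, min, max, and quartiles for every column
print(DATAFRAME.describe())
```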
### Visualizing your data
It's no secret that many humans prefer to visualize information. It's the reason charts and graphs are staples of meetings with upper management and why "infographics" are popular in the news business. Part of a data scientist's job is to help others understand large samples of data, and there are libraries to help with this task. Combining Pandas with a visualization library can produce visual interpretations of your data. One popular open source library for visualization is [Seaborn][7], which is based on the open source [Matplotlib][3].
#### Installing Seaborn and Matplotlib
Your Python virtual environment doesn't yet have Seaborn and Matplotlib, so install them with pip3. Seaborn also installs Matplotlib along with many other libraries:
```
(example) $ pip3 install seaborn
```
For Matplotlib to display graphics, you must also install [PyGObject][8] and [Pycairo][9]. This involves compiling code, which pip3 can do for you as long as you have the necessary header files and libraries installed. Your Python virtual environment has no awareness of these support libraries, so you can execute the installation command inside or outside the environment.
On Fedora and CentOS:
```
(example) $ sudo dnf install -y gcc zlib-devel bzip2 bzip2-devel readline-devel \
sqlite sqlite-devel openssl-devel tk-devel git python3-cairo-devel \
cairo-gobject-devel gobject-introspection-devel
```
On Ubuntu and Debian:
```
(example) $ sudo apt install -y libgirepository1.0-dev build-essential \
libbz2-dev libreadline-dev libssl-dev zlib1g-dev libsqlite3-dev wget \
curl llvm libncurses5-dev libncursesw5-dev xz-utils tk-dev libcairo2-dev
```
Once they are installed, you can install the GUI components needed by Matplotlib:
```
(example) $ pip3 install PyGObject pycairo
```
### Displaying a graph with Seaborn and Matplotlib
Open a file called **visualize.py** in your favorite text editor. To create a line graph visualization of your data, you must first import the necessary Python modules: the Pandas modules you used in the previous code examples:
```
#!/usr/bin/env python3
from pandas import read_csv, DataFrame
import pandas as pd
```
Next, import Seaborn, Matplotlib, and several components of Matplotlib so you can configure the graphics you produce:
```
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
from matplotlib import rcParams
```
Matplotlib can export its output to many formats, including PDF, SVG, or just a GUI window on your desktop. For this example, it makes sense to send your output to the desktop, so you must set the Matplotlib backend to GTK3Agg. If you're not using Linux, you may need to use the TkAgg backend instead.
After setting the backend for the GUI window, set the size of the window and the Seaborn preset style:
```
matplotlib.use('GTK3Agg')
rcParams['figure.figsize'] = 11,8
sns.set_style('darkgrid')
```
Now that your display is configured, the code is familiar. Ingest your **sample.csv** file with Pandas, and define the columns of your dataframe:
```
FILE = open('sample.csv','r')
DATAFRAME = pd.read_csv(FILE)
DATAFRAME.columns = [ 'red','green','blue' ]
```
With the data in a useful format, you can plot it out in a graph. Use each column as input for a plot, then use **plt.show()** to draw the graph in a GUI window. The **plt.legend()** function associates the column header with each line on your graph (the **loc** parameter places the legend outside the chart rather than over it):
```
for i in DATAFRAME.columns:
    DATAFRAME[i].plot()
plt.legend(bbox_to_anchor=(1, 1), loc=2, borderaxespad=1)
plt.show()
```
Run the code to display the results.
![First data visualization][10]
Your graph accurately displays all the information contained in your CSV file: values are on the Y-axis, index numbers are on the X-axis, and the lines of the graph are identified so that you know what they represent. However, since this code is tracking color values (at least, it's pretending to), the colors of the lines are not just non-intuitive, but counterintuitive. If you never need to analyze color data, you may never run into this problem, but you're sure to run into something analogous. When visualizing data, you must consider the best way to present it to prevent the viewer from extrapolating false information from what you're presenting.
To fix this problem (and show off some of the customization available), the following code assigns each plotted line a specific color:
```
import matplotlib
from pandas import read_csv, DataFrame
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib import rcParams
matplotlib.use('GTK3Agg')
rcParams['figure.figsize'] = 11,8
sns.set_style('whitegrid')
FILE = open('sample.csv','r')
DATAFRAME = pd.read_csv(FILE)
DATAFRAME.columns = [ 'red','green','blue' ]
plt.plot(DATAFRAME['red'],'r-')
plt.plot(DATAFRAME['green'],'g-')
plt.plot(DATAFRAME['blue'],'b-')
plt.plot(DATAFRAME['red'],'ro')
plt.plot(DATAFRAME['green'],'go')
plt.plot(DATAFRAME['blue'],'bo')
plt.show()
```
This uses special Matplotlib notation to create two plots per column. The initial plot of each column is assigned a color (**r** for red, **g** for green, and **b** for blue). These are built-in Matplotlib settings. The **-** notation indicates a solid line (a double dash, such as **r--**, creates a dashed line). A second plot is created for each column with the same colors but using **o** to denote dots or nodes. To demonstrate built-in Seaborn themes, change the value of **sns.set_style** to **whitegrid**.
![Improved data visualization][11]
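As an aside, Matplotlib also accepts a combined format string, so each pair of plots above could be collapsed into a single call per column. This is an optional variation, not part of the original listing:
```
plt.plot(DATAFRAME['red'], 'ro-')    # red dots connected by a solid line
plt.plot(DATAFRAME['green'], 'g--')  # green dashed line, no markers
plt.plot(DATAFRAME['blue'], 'bo')    # blue dots only, no connecting line
plt.show()
```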
### Deactivating your virtual environment
When you're finished exploring Pandas and plotting, you can deactivate your Python virtual environment with the **deactivate** command:
```
(example) $ deactivate
$
```
When you want to get back to it, just reactivate it as you did at the start of this article. You'll have to reinstall your modules when you reactivate your virtual environment, but they'll be installed from cache rather than downloaded from the internet, so you don't have to be online.
### Endless possibilities
The true power of Pandas, Matplotlib, Seaborn, and data science is the endless potential for you to parse, interpret, and structure data in a meaningful and enlightening way. Your next step is to explore simple datasets with the new tools you've learned in this article. There's a lot more to Matplotlib and Seaborn than just line graphs, so try creating a bar graph or a pie chart or something else entirely.
The possibilities are limitless once you understand your toolset and have some idea of how to correlate your data. Data science is a new way to find stories hidden within data; let open source be your medium.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/get-started-data-science-python
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_graph_stats_blue.png?itok=OKCc_60D (Metrics and a graph illustration)
[2]: https://pandas.pydata.org/
[3]: https://matplotlib.org/
[4]: https://seaborn.pydata.org/index.html
[5]: https://opensource.com/article/17/10/python-101
[6]: http://libreoffice.org
[7]: https://seaborn.pydata.org/
[8]: https://pygobject.readthedocs.io/en/latest/getting_started.html
[9]: https://pycairo.readthedocs.io/en/latest/
[10]: https://opensource.com/sites/default/files/uploads/seaborn-matplotlib-graph_0.png (First data visualization)
[11]: https://opensource.com/sites/default/files/uploads/seaborn-matplotlib-graph_1.png (Improved data visualization)

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Installation Guide of Manjaro 18.1 (KDE Edition) with Screenshots)
[#]: via: (https://www.linuxtechi.com/install-manjaro-18-1-kde-edition-screenshots/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
Installation Guide of Manjaro 18.1 (KDE Edition) with Screenshots
======
Within a year of releasing **Manjaro 18.0** (**Illyria**), the team has come out with its next big release, **Manjaro 18.1**, codenamed "**Juhraya**". The team has also published an official announcement saying that Juhraya comes packed with a lot of improvements and bug fixes.
### New Features in Manjaro 18.1
Some of the new features and enhancements in Manjaro 18.1 are listed below:
  * Option to choose between LibreOffice or FreeOffice
* New Matcha theme for Xfce edition
* Redesigned messaging system in KDE edition
  * Support for Snap and Flatpak packages using the "bauh" tool
### Minimum System Requirements for Manjaro 18.1
* 1 GB RAM
  * 1 GHz processor
* Around 30 GB Hard disk space
* Internet Connection
* Bootable Media (USB/DVD)
### Step by Step Guide to Install Manjaro 18.1 (KDE Edition)
To start installing Manjaro 18.1 (KDE Edition) on your system, please follow the steps outlined below:
### Step 1) Download Manjaro 18.1 ISO
Before installing, you need to download the latest copy of Manjaro 18.1 from its official download page located **[here][1]**. Since this guide covers the KDE edition, we chose to download the KDE version, but the installation process is the same for all desktop environments, including the Xfce, KDE and Gnome editions.
### Step 2) Create a USB Bootable Disk
Once you have successfully downloaded the ISO file from the Manjaro downloads page, it is time to create a bootable USB disk. Write the downloaded ISO file to a USB drive to make it bootable, then change your boot settings to boot from USB and restart your system.
### Step 3) Manjaro Live Installation Environment
When the system restarts, it will automatically detect the USB drive and start booting into the Manjaro live installation screen.
[![Boot-Manjaro-18-1-kde-installation][2]][3]
Next, use the arrow keys to choose "**Boot: Manjaro x86_64 kde**" and hit Enter to launch the Manjaro installer.
### Step 4) Choose Launch Installer
Next, the Manjaro installer will be launched. If you are connected to the internet, Manjaro will automatically detect your location and time zone. Click "**Launch Installer**" to start installing Manjaro 18.1 KDE edition on your system.
[![Choose-Launch-Installaer-Manjaro18-1-kde][2]][4]
### Step 5) Choose Your Language
Next, the installer will ask you to choose your preferred language.
[![Choose-Language-Manjaro18-1-Kde-Installation][2]][5]
Select your desired language and click “Next”
### Step 6) Choose Your time zone and region
In the next screen, select your desired time zone and region and click “Next” to continue
[![Select-Location-During-Manjaro18-1-KDE-Installation][2]][6]
### Step 7) Choose Keyboard layout
In the next screen, select your preferred keyboard layout and click “Next” to continue.
[![Select-Keyboard-Layout-Manjaro18-1-kde-installation][2]][7]
### Step 8) Choose Partition Type
This is a very critical step in the installation process. It will allow you to choose between:
* Erase Disk
* Manual Partitioning
* Install Alongside
* Replace a Partition
If you are installing Manjaro 18.1 in a VM (virtual machine), then you won't be able to see the last two options.
If you are new to Manjaro Linux, I would suggest you go with the first option (**Erase Disk**); it will automatically create the required partitions for you. If you want to create custom partitions, choose the second option, "**Manual Partitioning**"; as its name suggests, it allows you to create your own custom partitions.
In this tutorial, I will be creating custom partitions by selecting the "Manual Partitioning" option.
[![Manual-Partition-Manjaro18-1-KDE][2]][8]
Choose the second option and click “Next” to continue.
As you can see, I have a 40 GB hard disk, so I will create the following partitions on it:
  * /boot – 2 GB (ext4 file system)
  * / – 10 GB (ext4 file system)
  * /home – 22 GB (ext4 file system)
  * /opt – 4 GB (ext4 file system)
  * Swap – 2 GB
When we click on Next in the above window, we will get the following screen; choose to create a **new partition table**.
[![Create-Partition-Table-Manjaro18-1-Installation][2]][9]
Click on OK.
Now choose the free space and then click on **create** to set up the first partition, /boot, of size 2 GB.
[![boot-partition-manjaro-18-1-installation][2]][10]
Click on OK to proceed further. In the next window, again choose free space and then click on create to set up the second partition, /, of size 10 GB.
[![slash-root-partition-manjaro18-1-installation][2]][11]
Similarly, create the next partition, /home, of size 22 GB.
[![home-partition-manjaro18-1-installation][2]][12]
So far, we have created three primary partitions; now create the next partition as an extended partition.
[![Extended-Partition-Manjaro18-1-installation][2]][13]
Click on OK to proceed further.
Create the /opt and swap partitions of size 4 GB and 2 GB respectively as logical partitions.
[![opt-partition-manjaro-18-1-installation][2]][14]
[![swap-partition-manjaro18-1-installation][2]][15]
Once you are done creating all the partitions, click on Next.
[![choose-next-after-partition-creation][2]][16]
### Step 9) Provide User Information
In the next screen, you need to provide user information, including your name, username, password, computer name, etc.
[![User-creation-details-manjaro18-1-installation][2]][17]
Click “Next” to continue with the installation after providing all the information.
In the next screen, you will be prompted to choose the office suite, so make a choice that suits your installation.
[![Office-Suite-Selection-Manjaro18-1][2]][18]
Click on Next to proceed further.
### Step 10) Summary Information
Before the actual installation is done, the installer will show you all the details you've chosen, including the language, time zone, keyboard layout and partitioning information. Click "**Install**" to proceed with the installation process.
[![Summary-manjaro18-1-installation][2]][19]
### Step 11) Install Manjaro 18.1 KDE Edition
Now the actual installation process begins, and once it is completed, restart the system to log in to Manjaro 18.1 KDE edition.
[![Manjaro18-1-Installation-Progress][2]][20]
[![Restart-Manjaro-18-1-after-installation][2]][21]
### Step 12) Log in after successful installation
After the restart, we will get the following login screen; use the user credentials that we created during the installation.
[![Login-screen-after-manjaro-18-1-installation][2]][22]
Click on Login.
[![KDE-Desktop-Screen-Manjaro-18-1][2]][23]
That's it! You've successfully installed Manjaro 18.1 KDE edition on your system; explore all its exciting features. Please post your feedback and suggestions in the comments section below.
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/install-manjaro-18-1-kde-edition-screenshots/
作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[1]: https://manjaro.org/download/official/kde/
[2]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Boot-Manjaro-18-1-kde-installation.jpg
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Choose-Launch-Installaer-Manjaro18-1-kde.jpg
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Choose-Language-Manjaro18-1-Kde-Installation.jpg
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Select-Location-During-Manjaro18-1-KDE-Installation.jpg
[7]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Select-Keyboard-Layout-Manjaro18-1-kde-installation.jpg
[8]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Manual-Partition-Manjaro18-1-KDE.jpg
[9]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Create-Partition-Table-Manjaro18-1-Installation.jpg
[10]: https://www.linuxtechi.com/wp-content/uploads/2019/09/boot-partition-manjaro-18-1-installation.jpg
[11]: https://www.linuxtechi.com/wp-content/uploads/2019/09/slash-root-partition-manjaro18-1-installation.jpg
[12]: https://www.linuxtechi.com/wp-content/uploads/2019/09/home-partition-manjaro18-1-installation.jpg
[13]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Extended-Partition-Manjaro18-1-installation.jpg
[14]: https://www.linuxtechi.com/wp-content/uploads/2019/09/opt-partition-manjaro-18-1-installation.jpg
[15]: https://www.linuxtechi.com/wp-content/uploads/2019/09/swap-partition-manjaro18-1-installation.jpg
[16]: https://www.linuxtechi.com/wp-content/uploads/2019/09/choose-next-after-partition-creation.jpg
[17]: https://www.linuxtechi.com/wp-content/uploads/2019/09/User-creation-details-manjaro18-1-installation.jpg
[18]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Office-Suite-Selection-Manjaro18-1.jpg
[19]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Summary-manjaro18-1-installation.jpg
[20]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Manjaro18-1-Installation-Progress.jpg
[21]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Restart-Manjaro-18-1-after-installation.jpg
[22]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Login-screen-after-manjaro-18-1-installation.jpg
[23]: https://www.linuxtechi.com/wp-content/uploads/2019/09/KDE-Desktop-Screen-Manjaro-18-1.jpg

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Introduction to the Linux chgrp and newgrp commands)
[#]: via: (https://opensource.com/article/19/9/linux-chgrp-and-newgrp-commands)
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss)
Introduction to the Linux chgrp and newgrp commands
======
The chgrp and newgrp commands help you manage files that need to
maintain group ownership.
![Penguins walking on the beach ][1]
In a recent article, I introduced the [**chown** command][2], which is used for modifying ownership of files on systems. Recall that ownership is the combination of the user and group assigned to an object. The **chgrp** and **newgrp** commands provide additional help for managing files that need to maintain group ownership.
### Using chgrp
The **chgrp** command simply changes the group ownership of a file. It is the same as the **chown :<group>** command. You can use:
```
$ chown :alan mynotes
```
or:
```
$ chgrp alan mynotes
```
#### Recursive
A few additional arguments to chgrp can be useful at both the command line and in a script. Just like many other Linux commands, chgrp has a recursive argument, **-R**. You will need this to operate on a directory and its contents recursively, as I'll demonstrate below. I added the **-v** (**verbose**) argument so chgrp tells me what it is doing:
```
$ ls -l . conf
.:
drwxrwxr-x 2 alan alan 4096 Aug  5 15:33 conf
conf:
-rw-rw-r-- 1 alan alan 0 Aug  5 15:33 conf.xml
# chgrp -vR delta conf
changed group of 'conf/conf.xml' from alan to delta
changed group of 'conf' from alan to delta
```
#### Reference
A reference file (**\--reference=RFILE**) can be used when changing the group on files to match a certain configuration or when you don't know the group, as might be the case when running a script. You can duplicate another file's group (**RFILE**), referred to as a reference file. For example, to undo the changes made above (recall that a dot [**.**] refers to the present working directory):
```
$ chgrp -vR --reference=. conf
```
#### Report changes
Most commands have arguments for controlling their output. The most common is **-v** to enable verbose, and the chgrp command has a verbose mode. It also has a **-c** (**\--changes**) argument, which instructs chgrp to report only when a change is made. Chgrp will still report other things, such as if an operation is not permitted.
The argument **-f** (**\--silent**, **\--quiet**) is used to suppress most error messages. I will use this argument and **-c** in the next section so it will show only actual changes.
#### Preserve root
The root (**/**) of the Linux filesystem should be treated with great respect. If a command mistake is made at this level, the consequences can be terrible and leave a system completely useless, particularly when you are running a recursive command that will make any kind of change, or worse, deletions. The chgrp command has an argument that can be used to protect and preserve the root: **\--preserve-root**. If this argument is used with a recursive chgrp command on the root, nothing will happen and a message will appear instead:
```
[root@localhost /]# chgrp -cfR --preserve-root alan /
chgrp: it is dangerous to operate recursively on '/'
chgrp: use --no-preserve-root to override this failsafe
```
The option has no effect when it's not used in conjunction with recursive. However, if the command is run by the root user, the group of **/** will change, but not that of other files or directories within it:
```
[alan@localhost /]$ chgrp -c --preserve-root alan /
chgrp: changing group of '/': Operation not permitted
[root@localhost /]# chgrp -c --preserve-root alan /
changed group of '/' from root to alan
```
Surprisingly, it seems, this is not the default behavior. The option **\--no-preserve-root** is the default. If you run the command above without the "preserve" option, it will default to "no preserve" mode and possibly change the group ownership of files that shouldn't be changed:
```
[alan@localhost /]$ chgrp -cfR alan /
changed group of '/dev/pts/0' from tty to alan
changed group of '/dev/tty2' from tty to alan
changed group of '/var/spool/mail/alan' from mail to alan
```
### About newgrp
The **newgrp** command allows a user to override the current primary group. newgrp can be handy when you are working in a directory where all files must have the same group ownership. Suppose you have a directory called _share_ on your intranet server where different teams store marketing photos. The group is **share**. As different users place files into the directory, the files' primary groups might become mixed up. Whenever new files are added, you can run **chgrp** to correct any mix-ups by setting the group to **share**:
```
$ cd share
$ ls -l
-rw-r--r--. 1 alan share 0 Aug  7 15:35 pic13
-rw-r--r--. 1 alan alan 0 Aug  7 15:35 pic1
-rw-r--r--. 1 susan delta 0 Aug  7 15:35 pic2
-rw-r--r--. 1 james gamma 0 Aug  7 15:35 pic3
-rw-rw-r--. 1 bill contract  0 Aug  7 15:36 pic4
```
I covered **setgid** mode in my article on the [**chmod** command][3]. This would be one way to solve this problem. But, suppose the setgid bit was not set for some reason. The newgrp command is useful in this situation. Before any users put files into the _share_ directory, they can run the command **newgrp share**. This switches their primary group to **share** so all files they put into the directory will automatically have the group **share**, rather than the user's primary group. Once they are finished, users can switch back to their regular primary group with (for example):
```
$ newgrp alan
```
### Conclusion
It is important to understand how to manage users, groups, and permissions. It is also good to know a few alternative ways to work around problems you might encounter since not all environments are set up the same way.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/linux-chgrp-and-newgrp-commands
作者:[Alan Formy-Duval][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/community-penguins-osdc-lead.png?itok=BmqsAF4A (Penguins walking on the beach )
[2]: https://opensource.com/article/19/8/linux-chown-command
[3]: https://opensource.com/article/19/8/linux-chmod-command

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Mutation testing by example: How to leverage failure)
[#]: via: (https://opensource.com/article/19/9/mutation-testing-example-tdd)
[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzic)
Mutation testing by example: How to leverage failure
======
Use planned failure to ensure your code meets expected outcomes and
follow along with the .NET xUnit.net testing framework.
![failure sign at a party, celebrating failure][1]
In my article _[Mutation testing is the evolution of TDD][2]_, I exposed the power of iteration to guarantee a solution when a measurable test is available. In that article, an iterative approach helped to determine how to implement code that calculates the square root of a given number.
I also demonstrated that the most effective method is to find a measurable goal or test, then start iterating with best guesses. The first guess at the correct answer will most likely fail, as expected, so the failed guess needs to be refined. The refined guess must be validated against the measurable goal or test. Based on the result, the guess is either validated or must be further refined.
In this model, the only way to learn how to reach the solution is to fail repeatedly. It sounds counterintuitive, but amazingly, it works.
Following in the footsteps of that analysis, this article examines the best way to use a DevOps approach when building a solution containing some dependencies. The first step is to write a test that can be expected to fail.
### The problem with dependencies is that you can't depend on them
The problem with dependencies, as Michael Nygard wittily expresses in _[Architecture without an end state][3]_, is a huge topic better left for another article. Here, you'll look into potential pitfalls that dependencies tend to bring to a project and how to leverage test-driven development (TDD) to avoid those pitfalls.
First, pose a real-life challenge, then see how it can be solved using TDD.
### Who let the cat out?
![Cat standing on a roof][4]
In Agile development environments, it's helpful to start building the solution by defining the desired outcomes. Typically, the desired outcomes are described in a [_user story_][5]:
> _Using my home automation system (HAS),
> I want to control when the cat can go outside,
> because I want to keep the cat safe overnight._
Now that you have a user story, you need to elaborate on it by providing some functional requirements (that is, by specifying the _acceptance criteria_). Start with the simplest of scenarios described in pseudo-code:
> _Scenario #1: Disable cat trap door during nighttime_
>
> * Given that the clock detects that it is nighttime
> * When the clock notifies the HAS
> * Then HAS disables the Internet of Things (IoT)-capable cat trap door
>
### Decompose the system
The system you are building (the HAS) needs to be _decomposed_, broken down to its dependencies, before you can start working on it. The first thing you must do is identify any dependencies (if you're lucky, your system has no dependencies, which would make it easy to build, but then it arguably wouldn't be a very useful system).
From the simple scenario above, you can see that the desired business outcome (automatically controlling a cat door) depends on detecting nighttime. This dependency hinges upon the clock. But the clock is not capable of determining whether it is daylight or nighttime. It's up to you to supply that logic.
Another dependency in the system you're building is the ability to automatically access the cat door and enable or disable it. That dependency most likely hinges upon an API provided by the IoT-capable cat door.
### Fail fast toward dependency management
To satisfy one dependency, we will build the logic that determines whether the current time is daylight or nighttime. In the spirit of TDD, we will start with a small failure.
Refer to my [previous article][2] for detailed instructions on how to set up the development environment and scaffolds required for this exercise. We will be reusing the same .NET environment and relying on the [xUnit.net][6] framework.
Next, create a new project called HAS (for "home automation system") and create a file called **UnitTest1.cs**. In this file, write the first failing unit test. In this unit test, describe your expectations. For example, when the system runs, if the time is 7pm, then the component responsible for deciding whether it's daylight or nighttime returns the value "Nighttime."
Here is the unit test that describes that expectation:
```
using System;
using Xunit;
namespace unittest
{
   public class UnitTest1
   {
        DayOrNightUtility dayOrNightUtility = new DayOrNightUtility();
       [Fact]
       public void Given7pmReturnNighttime()
       {
           var expected = "Nighttime";
           var actual = dayOrNightUtility.GetDayOrNight();
           Assert.Equal(expected, actual);
       }
   }
}
```
By this point, you may be familiar with the shape and form of a unit test. A quick refresher: describe the expectation by giving the unit test a descriptive name, **Given7pmReturnNighttime**, in this example. Then in the body of the unit test, a variable named **expected** is created, and it is assigned the expected value (in this case, the value "Nighttime"). Following that, a variable named **actual** is assigned the actual value (available after the component or service processes the time of day).
Finally, it checks whether the expectation has been met by asserting that the expected and actual values are equal: **Assert.Equal(expected, actual)**.
You can also see in the above listing a component or service called **dayOrNightUtility**. This module is capable of receiving the message **GetDayOrNight** and is supposed to return the value of the type **string**.
Again, in the spirit of TDD, the component or service being described hasn't been built yet (it is merely being described with the intention to prescribe it later). Building it is driven by the described expectations.
Create a new file in the **app** folder and give it the name **DayOrNightUtility.cs**. Add the following C# code to that file and save it:
```
using System;
namespace app {
   public class DayOrNightUtility {
       public string GetDayOrNight() {
           string dayOrNight = "Undetermined";
           return dayOrNight;
       }
   }
}
```
Now go to the command line, change directory to the **unittests** folder, and run the test:
```
[Xunit.net 00:00:02.33] unittest.UnitTest1.Given7pmReturnNighttime [FAIL]
Failed unittest.UnitTest1.Given7pmReturnNighttime
[...]
```
Congratulations, you have written the first failing unit test. The unit test was expecting **DayOrNightUtility** to return string value "Nighttime" but instead, it received the string value "Undetermined."
### Fix the failing unit test
A quick and dirty way to fix the failing test is to replace the value "Undetermined" with the value "Nighttime" and save the change:
```
using System;
namespace app {
   public class DayOrNightUtility {
       public string GetDayOrNight() {
           string dayOrNight = "Nighttime";
           return dayOrNight;
       }
   }
}
```
Now when we run the test, it passes:
```
Starting test execution, please wait...
Total tests: 1. Passed: 1. Failed: 0. Skipped: 0.
Test Run Successful.
Test execution time: 2.6470 Seconds
```
However, hardcoding the values is basically cheating, so it's better to endow **DayOrNightUtility** with some intelligence. Modify the **GetDayOrNight** method to include some time-calculation logic:
```
public string GetDayOrNight() {
    string dayOrNight = "Daylight";
    DateTime time = DateTime.Now; // read the current system time
    if(time.Hour < 7) {
        dayOrNight = "Nighttime";
    }
    return dayOrNight;
}
```
The method now gets the current time from the system and compares the **Hour** value to see if it is less than 7am. If it is, the logic transforms the **dayOrNight** string value from "Daylight" to "Nighttime." The unit test now passes.
### The start of a test-driven solution
We now have the beginnings of a base case unit test and a viable solution for our time dependency. There are more than a few cases still to work through.
In the next article, I'll demonstrate how to test for daylight hours and how to leverage failure along the way.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/mutation-testing-example-tdd
作者:[Alex Bunardzic][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alex-bunardzic
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_failure_celebrate.png?itok=LbvDAEZF (failure sign at a party, celebrating failure)
[2]: https://opensource.com/article/19/8/mutation-testing-evolution-tdd
[3]: https://www.infoq.com/presentations/Architecture-Without-an-End-State/
[4]: https://opensource.com/sites/default/files/uploads/cat.png (Cat standing on a roof)
[5]: https://www.agilealliance.org/glossary/user-stories
[6]: https://xunit.net/

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A human approach to reskilling in the age of AI)
[#]: via: (https://opensource.com/open-organization/19/9/claiming-human-age-of-AI)
[#]: author: (Jen Kelchner https://opensource.com/users/jenkelchner)
A human approach to reskilling in the age of AI
======
Investing in learning agility and core capabilities is as important for
the individual worker as it is for the decision-making executive.
Thinking openly can get us there.
![Person on top of a mountain, arm raise][1]
[The age of AI is upon us][2]. Emerging technologies give humans some relief from routine tasks and allow us to get back to the creative, adaptable creatures many of us prefer being.
So a shift to developing _human_ skills in the workplace should be a critical focus for organizations. In this part of my series on learning agility, we'll take a look at some reasons for a sense of urgency over reskilling our workforce and reconnecting to our humanness.
### The clock is ticking
If you don't believe AI conversations affect you, then I suggest reviewing this 2018 McKinsey Report on [reskilling in the age of automation][3], which provides some interesting statistics. Here are a few applicable nuggets:
* 62% of executives believe they need to **retrain or replace more than a quarter** of their workforce **by 2023** due to advancing digitization
* The **US and Europe face a larger threat** on reskilling than the rest of the world
* 70% of execs in companies with more than $500 million in annual revenue state this **will affect more than 25%** of their employees
No matter where you fall on an organizational chart, automation (and digitalization more generally) is an important topic for you—because the need for reskilling that it introduces will most likely affect you.
But what does this reskilling conversation have to do with core capability development?
To answer _that_ question, let's take a look at a few statistics curated in a [2019 LinkedIn Global Talent Report][4].
When surveyed on the topic of ~~soft skills~~ core human capabilities, global companies had this to say:
* **92%** agree that they matter as much or more than "hard skills"
* **80%** said these skills are increasingly important to company success
* Only **41%** have a formal process to identify these skills
Before panicking at the thought of what these stats could mean to you or your company, let's actually dig into these core capabilities that you already have but may need to brush up on and strengthen.
### Core human capabilities
_What the heck does all this have to do with learning agility_, you may be asking, _and why should I care_?
I recommend catching up with this introduction to [learning agility][5]. There, I define learning agility as "the capacity for adapting to situations and applying knowledge from prior experience—even when you don't know what to do [...], a willingness to learn from all your experiences and then apply that knowledge to tackle new challenges in new situations." In that piece, we also discussed reasons why characteristics associated with learning agility are among the most sought after skills on the planet today.
Too often, [these skills go by the name "soft skills."][6] Explanations usually go something like this: "hard skills" are more like engineering- or science-based skills and, well, "non-peopley" related things. But what many call "soft skills" are really _human skills_—core capabilities anyone can cultivate. As leaders, we need to continue to change the narrative concerning these core capabilities (for many reasons, not least of which is the fact that the distinction frequently re-entrenches a [gender bias][7], as if skills somehow fit on a spectrum from "soft to hard.")
For two decades, I've heard decision makers choose not to invest in people or leadership development because "there isn't money in soft skills" and "there's no way to track the ROI" on developing them. Fortunately, we're moving out of this tragic mindset, as leaders recognize how digital transformation has reshaped how we connect, build community, and organize for work. Perhaps this has something to do with increasingly pervasive reports (and blowups) we see across ecosystems regarding [toxic work culture][8] or broken leadership styles. Top consulting firms doing [global talent surveys][9] continue to identify crucial breakdowns in talent development pointing right back to our topic at hand.
We all have access to these capabilities, but often we've lacked examples to learn by or have had little training on how to put them to work. Let's look at the list of the most-needed human skills right now, shall we?
Topping the leaderboard moving into 2020:
* Communication
* Relationship building
* Emotional intelligence (EQ)
* Critical thinking and problem-solving (CQ)
* [Learning agility][5] and adaptability quotient (AQ)
* Creativity
If we were to take the items on this list and generalize them into three categories of importance for the future of work, it would look like:
1. Emotional Quotient
2. Adaptability Quotient
3. Creativity Quotient
Some of us have been conditioned to think we're "not creative" because the term "creativity" refers only to things like art, design, or music. However, in this case, "creativity" means the ability to combine ideas, things, techniques, or approaches in new ways—and it's [crucial to innovation][10]. Solving problems in new ways is the [most important skill][11] companies look for when trying to solve their skill-gap problems. (_Spoiler alert: This is learning agility!_) Obviously, our generalized list ignores many nuances (not to mention additional skills we might develop in our people and organizations as contexts shift); however, this is a really great place to start.
### Where do we go from here?
In order to accommodate the demands of tomorrow's organizations, we must:
* look at retraining and reskilling from early education models to organizational talent development programs, and
* adjust our organizational culture and internal frameworks to support being human and innovative.
This means exploring [open principles][12], agile methodologies, collaborative work models, and continuous states of learning across all aspects of your organization. Digital transformation and reskilling on core capabilities leaves no one—and _no department_—behind.
In our next installment, we'll begin digging into these core capabilities and examine the five dimensions of learning agility with simple ways to apply them.
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/19/9/claiming-human-age-of-AI
作者:[Jen Kelchner][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jenkelchner
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/developer_mountain_cloud_top_strong_win.jpg?itok=axK3EX-q (Person on top of a mountain, arm raise)
[2]: https://appinventiv.com/blog/ai-technology-trends/
[3]: https://www.mckinsey.com/featured-insights/future-of-work/retraining-and-reskilling-workers-in-the-age-of-automation
[4]: https://app.box.com/s/c5scskbsz9q6lb0hqb7euqeb4fr8m0bl/file/388525098383
[5]: https://opensource.com/open-organization/19/8/introduction-learning-agility
[6]: https://enterprisersproject.com/article/2019/9/6-soft-skills-for-ai-age
[7]: https://enterprisersproject.com/article/2019/8/why-soft-skills-core-to-IT
[8]: https://ldr21.com/how-ubers-workplace-crisis-can-save-your-organization-money/
[9]: https://www.inc.com/scott-mautz/new-deloitte-study-of-10455-millennials-says-employers-are-failing-to-help-young-people-develop-4-crucial-skills.html
[10]: https://velites.nl/en/2018/11/12/creative-quotient/
[11]: https://learning.linkedin.com/blog/top-skills/why-creativity-is-the-most-important-skill-in-the-world
[12]: https://opensource.com/open-organization/resources/open-org-definition

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (An advanced look at Python interfaces using zope.interface)
[#]: via: (https://opensource.com/article/19/9/zopeinterface-python-package)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
An advanced look at Python interfaces using zope.interface
======
Zope.interface helps declare what interfaces exist, which objects
provide them, and how to query for that information.
![Snake charmer cartoon with a yellow snake and a blue snake][1]
The **zope.interface** library is a way to overcome ambiguity in Python interface design. Let's take a look at it.
### Implicit interfaces are not zen
The [Zen of Python][2] is loose enough and contradicts itself enough that you can prove anything from it. Let's meditate upon one of its most famous principles: "Explicit is better than implicit."
One thing that traditionally has been implicit in Python is the expected interface. Functions have been documented to expect a "file-like object" or a "sequence." But what is a file-like object? Does it support **.writelines**? What about **.seek**? What is a "sequence"? Does it support step-slicing, such as **a[1:10:2]**?
Originally, Python's answer was the so-called "duck-typing," taken from the phrase "if it walks like a duck and quacks like a duck, it's probably a duck." In other words, "try it and see," which is possibly the most implicit you could possibly get.
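As a small illustration of how implicit this gets, consider a function that only documents its expectation; the names here are made up for the example:
```
import io

def save_log(log_file):
    """log_file should be a "file-like object", but which methods must it have?"""
    log_file.write("starting\n")
    log_file.writelines(["step 1\n", "step 2\n"])  # breaks if the object lacks .writelines

save_log(io.StringIO())  # happens to work: StringIO supports both methods
```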
In order to make those things explicit, you need a way to express expected interfaces. One of the first big systems written in Python was the [Zope][3] web framework, and it needed those things desperately to make it obvious what rendering code, for example, expected from a "user-like object."
Enter **zope.interface**, which is developed by Zope but published as a separate Python package. **Zope.interface** helps declare what interfaces exist, which objects provide them, and how to query for that information.
Imagine writing a simple 2D game that needs various things to support a "sprite" interface; e.g., indicate a bounding box, but also indicate when the object intersects with a box. Unlike some other languages, in Python, attribute access as part of the public interface is a common practice, instead of implementing getters and setters. The bounding box should be an attribute, not a method.
A method that renders the list of sprites might look like:
```
def render_sprites(render_surface, sprites):
    """
    sprites should be a list of objects complying with the Sprite interface:
    * An attribute "bounding_box", containing the bounding box.
    * A method called "intersects", that accepts a box and returns
      True or False
    """
    pass # some code that would actually render
```
The game will have many functions that deal with sprites. In each of them, you would have to specify the expected contract in a docstring.
Additionally, some functions might expect a more sophisticated sprite object, maybe one that has a Z-order. We would have to keep track of which methods expect a Sprite object, and which expect a SpriteWithZ object.
Wouldn't it be nice to be able to make what a sprite is explicit and obvious so that methods could declare "I need a sprite" and have that interface strictly defined? Enter **zope.interface**.
```
from zope import interface
class ISprite(interface.Interface):
    bounding_box = interface.Attribute(
        "The bounding box"
    )
    def intersects(box):
        "Does this intersect with a box"
```
This code looks a bit strange at first glance. The methods do not include a **self** parameter, which is usually standard practice, and the interface declares an **Attribute**. This is the way to declare interfaces in **zope.interface**. It looks strange because most people are not used to strictly declaring interfaces.
The reason for this practice is that the interface shows how the method will be called, not how it is defined. Because interfaces are not superclasses, they can be used to declare data attributes.
One possible implementation of the interface can be with a circular sprite:
```
import attr
from zope.interface import implementer

@implementer(ISprite)
@attr.s(auto_attribs=True)
class CircleSprite:
    x: float
    y: float
    radius: float

    @property
    def bounding_box(self):
        return (
            self.x - self.radius,
            self.y - self.radius,
            self.x + self.radius,
            self.y + self.radius,
        )

    def intersects(self, box):
        # A box intersects a circle if and only if
        # at least one corner is inside the circle.
        top_left, bottom_right = box[:2], box[2:]
        for choose_x_from in (top_left, bottom_right):
            for choose_y_from in (top_left, bottom_right):
                x = choose_x_from[0]
                y = choose_y_from[1]
                if (((x - self.x) ** 2 + (y - self.y) ** 2) <=
                    self.radius ** 2):
                    return True
        return False
```
This _explicitly_ declares that the **CircleSprite** class implements the interface. It even enables us to verify that the class implements it properly:
```
from zope.interface import verify
def test_implementation():
    sprite = CircleSprite(x=0, y=0, radius=1)
    verify.verifyObject(ISprite, sprite)
```
This is something that can be run by **pytest**, **nose**, or another test runner, and it will verify that the sprite created complies with the interface. The test is often partial: it will not test anything only mentioned in the documentation, and it will not even test that the methods can be called without exceptions! However, it does check that the right methods and attributes exist. This is a nice addition to the unit test suite and—at a minimum—prevents simple misspellings from passing the tests.
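**Zope.interface** also answers the querying side mentioned at the start: at runtime, you can ask whether an object provides an interface. A minimal sketch, reusing the classes defined above:
```
from zope.interface import providedBy

sprite = CircleSprite(x=0, y=0, radius=1)
print(ISprite.providedBy(sprite))  # True, thanks to the @implementer declaration
print(list(providedBy(sprite)))    # lists every interface the object provides
```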
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/zopeinterface-python-package
作者:[Moshe Zadka][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/getting_started_with_python.png?itok=MFEKm3gl (Snake charmer cartoon with a yellow snake and a blue snake)
[2]: https://en.wikipedia.org/wiki/Zen_of_Python
[3]: http://zope.org

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (CodeReady Containers: complex solutions on OpenShift + Fedora)
[#]: via: (https://fedoramagazine.org/codeready-containers-complex-solutions-on-openshift-fedora/)
[#]: author: (Marc Chisinevski https://fedoramagazine.org/author/mchisine/)
CodeReady Containers: complex solutions on OpenShift + Fedora
======
![][1]
Want to experiment with (complex) solutions on [OpenShift][2] 4.1+? CodeReady Containers (CRC) on a physical Fedora server is a great choice. It lets you:
  * Configure the RAM available to CRC / OpenShift (this is key, as we'll deploy Machine Learning, Change Data Capture, Process Automation, and other solutions with significant memory requirements)
* Avoid installing anything on your laptop
* Standardize (on Fedora 30) so that you get the same results every time
Start by installing CRC and Ansible Agnostic Deployer (AgnosticD) on a Fedora 30 physical server. Then, you'll use AgnosticD to deploy Open Data Hub on the OpenShift 4.1 environment created by CRC. Let's get started!
### Set up CodeReady Containers
```
$ dnf config-manager --set-enabled fedora
$ su -c 'dnf -y install git wget tar qemu-kvm libvirt NetworkManager jq libselinux-python'
$ sudo systemctl enable --now libvirtd
```
Let's also add a user.
```
$ sudo adduser demouser
$ sudo passwd demouser
$ sudo usermod -aG wheel demouser
```
Download and extract CodeReady Containers:
```
$ su demouser
$ cd /home/demouser
$ wget https://mirror.openshift.com/pub/openshift-v4/clients/crc/1.0.0-beta.3/crc-linux-amd64.tar.xz
$ tar -xvf crc-linux-amd64.tar.xz
$ cd crc-linux-1.0.0-beta.3-amd64/
$ sudo cp ./crc /usr/bin
```
Set the memory available to CRC according to what you have on your physical server. For example, on a physical server with around 100 GB of RAM, you can allocate 80 GB to CRC as follows:
```
$ crc config set memory 81920
$ crc setup
```
You'll need your pull secret from <https://cloud.redhat.com/openshift/install/metal/user-provisioned>.
```
$ crc start
```
That's it! You can now log in to your OpenShift environment:
```
eval $(crc oc-env) && oc login -u kubeadmin -p <password> https://api.crc.testing:6443
```
### Set up Ansible Agnostic Deployer
[github.com/redhat-cop/agnosticd][3] is a fully automated two-phase deployer. Let's deploy it!
```
$ su demouser
$ cd /home/demouser
$ git clone https://github.com/redhat-cop/agnosticd.git
$ cd agnosticd/ansible
$ python -m pip install --upgrade --trusted-host files.pythonhosted.org -r requirements.txt
$ python3 -m pip install --upgrade --trusted-host files.pythonhosted.org -r requirements.txt
$ pip3 install kubernetes
$ pip3 install openshift
$ pip install kubernetes
$ pip install openshift
```
### Set up Open Data Hub on CodeReady Containers
[Open Data Hub][4] is a machine-learning-as-a-service platform built on OpenShift and Kafka/Strimzi. It integrates a collection of open source projects.
First, create an Ansible inventory file with the following content.
```
$ cat inventory
127.0.0.1 ansible_connection=local
```
Set up the WORKLOAD environment variable so that Ansible Agnostic Deployer knows that we want to deploy Open Data Hub.
```
$ export WORKLOAD="ocp4-workload-open-data-hub"
$ sudo cp /usr/local/bin/ansible-playbook /usr/bin/ansible-playbook
```
We are only deploying one Open Data Hub project, so set _user_count_ to 1. You can set up workshops for many students by increasing _user_count_.
An OpenShift project (with Open Data Hub in our case) will be created for each student.
```
$ eval $(crc oc-env) && oc login -u kubeadmin -p <password> https://api.crc.testing:6443
$ ansible-playbook -i inventory ./configs/ocp-workloads/ocp-workload.yml -e"ocp_workload=${WORKLOAD}" -e"ACTION=create" -e"user_count=1" -e"ocp_username=kubeadmin" -e"ansible_become_pass=<password>" -e"silent=False"
$ oc project open-data-hub-user1
$ oc get route
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
jupyterhub jupyterhub-open-data-hub-user1.apps-crc.testing jupyterhub 8080-tcp edge/Redirect None
```
On your laptop, add _jupyterhub-open-data-hub-user1.apps-crc.testing_ to your _/etc/hosts_ file. For example:
```
127.0.0.1 localhost fedora30 console-openshift-console.apps-crc.testing oauth-openshift.apps-crc.testing mapit-app-management.apps-crc.testing mapit-spring-pipeline-demo.apps-crc.testing jupyterhub-open-data-hub-user1.apps-crc.testing
```
On your laptop:
```
$ sudo ssh marc@fedora30 -L 443:jupyterhub-open-data-hub-user1.apps-crc.testing:443
```
You can now browse to [https://jupyterhub-open-data-hub-user1.apps-crc.testing][5].
Now that we have Open Data Hub ready, you could deploy something interesting on it. For example, you could deploy IBM's Qiskit open source framework for quantum computing. For more information, refer to video no. 9 in [this YouTube playlist][6] and the [GitHub repo here][7].
You could also deploy plenty of other useful tools for Process Automation, Change Data Capture, Camel Integration, and 3scale API Management. You don't have to wait for articles on these, though. Step-by-step short videos are already [available on YouTube][6].
The corresponding step-by-step instructions are [also on YouTube][6]. You can also follow along with this article using the [GitHub repo][8].
* * *
_Photo by _[_Marta Markes_][9]_ on _[_Unsplash_][10]_._
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/codeready-containers-complex-solutions-on-openshift-fedora/
作者:[Marc Chisinevski][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/mchisine/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/09/codeready-containers-816x345.jpg
[2]: https://fedoramagazine.org/run-openshift-locally-minishift/
[3]: https://github.com/redhat-cop/agnosticd
[4]: https://opendatahub.io/
[5]: https://jupyterhub-open-data-hub-user1.apps-crc.testing/
[6]: https://www.youtube.com/playlist?list=PLg1pvyPzFye2UtQjZTSjoXhFdqkGK6exw
[7]: https://github.com/marcredhat/crcdemos/blob/master/IBMQuantum-qiskit
[8]: https://github.com/marcredhat/crcdemos/tree/master/fedora
[9]: https://unsplash.com/@vnevremeni?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[10]: https://unsplash.com/s/photos/container?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Integrate online documents editors, into a Python web app using ONLYOFFICE)
[#]: via: (https://opensourceforu.com/2019/09/integrate-online-documents-editors-into-a-python-web-app-using-onlyoffice/)
[#]: author: (Aashima Sharma https://opensourceforu.com/author/aashima-sharma/)
Integrate online document editors into a Python web app using ONLYOFFICE
======
[![][1]][2]
_[ONLYOFFICE][3] is an open-source collaborative office suite distributed under the terms of the GNU AGPL v.3 license. It contains three editors, for text documents, spreadsheets, and presentations, and features the following:_
  * Viewing, editing and co-editing .docx, .xlsx, .pptx files. OOXML as a core format ensures high compatibility with Microsoft Word, Excel and PowerPoint files.
  * Editing other popular formats (.odt, .rtf, .txt, .html, .ods, .csv, .odp) with inner conversion to OOXML.
* Familiar tabbed interface.
* Collaboration tools: two co-editing modes (fast and strict), track changes, comments and integrated chat.
* Flexible access rights management: full access, read only, review, form filling and comment.
* Building your own add-ons using the API.
  * Availability in 250 languages, including hieroglyphic alphabets.
There is also an API that allows developers to integrate ONLYOFFICE editors into their own websites and apps, written in any programming language, and to set up and manage the editors.
To integrate ONLYOFFICE editors, we will need an integration app connecting the editors (ONLYOFFICE Document Server) and your service. To use editors within your interface, it should grant to ONLYOFFICE the following permissions :
* Adding and executing custom code.
* Anonymous access for downloading and saving files. It means that the editors only communicate with your service on the server side without involving any user authorization data from the client side (browser cookies).
* Adding new buttons to UI (for example, “Open in ONLYOFFICE”, “Edit in ONLYOFFICE”).
  * Opening a new page where ONLYOFFICE can execute the script to add an editor.
* Ability to specify Document Server connection settings.
There are several cases of successful integration with popular collaboration solutions such as Nextcloud, ownCloud, Alfresco, Confluence and SharePoint, via official ready-to-use connectors offered by ONLYOFFICE.
One of the most notable integration cases is the integration of ONLYOFFICE editors with its own open-source collaboration platform written in C#. This platform features document and project management, CRM, an email aggregator, a calendar, a user database, blogs, forums, polls, a wiki, and an instant messenger.
Integrating online editors with CRM and Projects modules, you can:
* Attach documents to CRM opportunities and cases, or to project tasks and discussions, or even create a separate folder with documents, spreadsheets, and presentations related to the project.
* Create new docs, sheets, and presentations right in CRM or in the Project module.
* Open and edit attached documents, or download and delete them.
* Import contacts to your CRM in bulk from a CSV file as well as export the customer database as a CSV file.
In the Mail module, you can attach files stored in the Documents module or insert a link to the needed document into the message body. When ONLYOFFICE users receive a message with an attached document, they are able to: download the attachment, view the file in the browser, open the file for editing or save it to the Documents module. As mentioned above, if the format differs from OOXML, the file will be automatically converted to .docx/.xlsx/.pptx and its copy will be saved in the original format as well.
In this article, you will see the integration process of ONLYOFFICE into a Document Management System (DMS) written in Python, one of the most popular programming languages. The following steps will show you how to create all the necessary elements to make work and collaboration on documents possible within the DMS interface: viewing, editing, co-editing, and saving files, as well as user access management. This may serve as an example of integration into your own Python app.
**1\. What you will need**
Let's start off by creating the key components of the integration process: [_ONLYOFFICE Document Server_][4] and a DMS written in Python.
1.1 To install ONLYOFFICE Document Server you can choose from multiple installation options: compile the source code available on GitHub, use .deb or .rpm packages or the Docker image.
We recommend installing Document Server and all the necessary dependencies with only one command using the Docker image. Please note that this method requires the latest Docker version installed.
```
docker run -itd -p 80:80 onlyoffice/documentserver-de
```
1.2 We need to develop the DMS in Python. If you have one already, please check whether it meets the following conditions:
* Has a list of files you need to open for viewing/editing
* Allows downloading files
For the app, we will use the Bottle framework. We will install it in the working directory using the following command:
```
pip install bottle
```
Then we create the app's code, _main.py_, and the template, _index.tpl_.
We add the following code into the _main.py_ file:
```
from bottle import route, run, template, get, static_file # connecting the framework and the necessary components

@route('/') # setting up routing for requests for /
def index():
    return template('index.tpl') # showing template in response to request

run(host="localhost", port=8080) # running the application on port 8080
```
Once we run the app, an empty page will be rendered at <http://localhost:8080>.
In order for the Document Server to be able to create new docs, add default files, and form a list of their names in the template, we should create a folder _files_ and put three files (.docx, .xlsx, and .pptx) in there.
To read these file names, we use the _listdir_ function:
```
from os import listdir
```
Now let's create a variable for all the file names from the _files_ folder:
```
sample_files = [f for f in listdir('files')]
```
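The list comprehension above picks up every entry in the folder. As a small defensive refinement (an assumption of this write-up, not part of the original example), you could keep only the three formats the sample app handles:

```
from os import listdir
from os.path import splitext

# hypothetical refinement: ignore anything that is not .docx/.xlsx/.pptx
SUPPORTED_EXTENSIONS = {'.docx', '.xlsx', '.pptx'}
sample_files = [f for f in listdir('files')
                if splitext(f)[1] in SUPPORTED_EXTENSIONS]
```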
To use this variable in the template, we need to pass it through the _template_ method:
```
def index():
    return template('index.tpl', sample_files=sample_files)
```

Here's this variable in the template:

```
% for file in sample_files:
<div>
    <span>{{file}}</span>
</div>
% end
```
We restart the application to see the list of filenames on the page.
Here's the method to make these files available to all the app users:
```
@get("/files/<filepath:re:.*\.*>")
def show_sample_files(filepath):
return static_file(filepath, root="files")
```
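Before moving on, it may help to see everything in one place. Here is a minimal sketch of _main.py_ at this stage, assembled from the snippets above (the _files_ folder and port 8080 come from the article; the exact arrangement is just one possibility):

```
from os import listdir
from bottle import route, run, template, get, static_file

# names of the demo .docx/.xlsx/.pptx files shipped in the "files" folder
sample_files = [f for f in listdir('files')]

@route('/')
def index():
    # pass the file list into the template so it can render a row per file
    return template('index.tpl', sample_files=sample_files)

@get(r"/files/<filepath:re:.*\.*>")
def show_sample_files(filepath):
    # serve the sample documents so that the editors can download them
    return static_file(filepath, root="files")

run(host="localhost", port=8080)
```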
**2\. How to view docs in ONLYOFFICE within the Python App**
Once all the components are ready, let's add functions to make the editors operational within the app interface.
The first option enables users to open and view docs. Connect the document editors API in the template:
```
<script type="text/javascript" src="editor_url/web-apps/apps/api/documents/api.js"></script>
```
_editor_url_ is a link to the document editors.
A button to open each file for viewing:
```
<button onclick="view('files/{{file}}')">view</button>
```
Now we need to add a div with an _id_, in which the document editor will be opened:
```
<div id="editor"></div>
```
To open the editor, we have to call a function:
```
<script>
function view(filename) {
    var filetype;
    if (/docx$/.exec(filename)) {
        filetype = "text"
    }
    if (/xlsx$/.exec(filename)) {
        filetype = "spreadsheet"
    }
    if (/pptx$/.exec(filename)) {
        filetype = "presentation"
    }
    new DocsAPI.DocEditor("editor",
        {
            documentType: filetype,
            document: {
                url: "host_url" + '/' + filename,
                title: filename
            },
            editorConfig: {mode: 'view'}
        });
}
</script>
```
There are two arguments for the DocEditor function: the id of the element where the editor will be opened and a JSON object with the editor's settings.
In this example, the following mandatory parameters are used:
  * _documentType_ is identified by the file format (.docx, .xlsx, and .pptx for text documents, spreadsheets, and presentations, respectively)
  * _document.url_ is the link to the file you are going to open.
  * _editorConfig.mode_ is the mode the editor opens in (here, _view_).
We can also add a _title_ that will be displayed in the editors.
So, now we have everything needed to view docs in our Python app.
**3\. How to edit docs in ONLYOFFICE within the Python App**
First of all, add the “Edit” button:
```
<button onclick="edit('files/{{file}}')">edit</button>
```
Then create a new function that will open files for editing. It is similar to the view function.
Now we have three functions:
```
<script>
var editor;

function view(filename) {
    if (editor) {
        editor.destroyEditor()
    }
    editor = new DocsAPI.DocEditor("editor",
        {
            documentType: get_file_type(filename),
            document: {
                url: "host_url" + '/' + filename,
                title: filename
            },
            editorConfig: {mode: 'view'}
        });
}

function edit(filename) {
    if (editor) {
        editor.destroyEditor()
    }
    editor = new DocsAPI.DocEditor("editor",
        {
            documentType: get_file_type(filename),
            document: {
                url: "host_url" + '/' + filename,
                title: filename
            }
        });
}

function get_file_type(filename) {
    if (/docx$/.exec(filename)) {
        return "text"
    }
    if (/xlsx$/.exec(filename)) {
        return "spreadsheet"
    }
    if (/pptx$/.exec(filename)) {
        return "presentation"
    }
}
</script>
```
_destroyEditor_ is called to close an open editor.
As you might notice, the _editorConfig_ parameter is absent from the _edit()_ function, because it has the value _{“mode”: “edit”}_ by default.
Now we have everything needed to edit docs in your Python app.
**4\. How to co-edit docs in ONLYOFFICE within the Python App**
Co-editing is implemented by using the same _document.key_ for the same document in the editors' settings. Without this key, the editors will create a new editing session each time you open the file.
Set unique keys for each doc to make users connect to the same editing session for co-editing. The format of the key should be: _filename + “_key”_. The next step is to add it to all of the configs where _document_ is present.
```
document: {
    url: "host_url" + '/' + filepath,
    title: filename,
    key: filename + '_key'
},
```
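Note that a key derived only from the file name never changes, so the Document Server may keep returning a cached session even after the document's contents change. One common refinement (a hypothetical Python sketch under that assumption, not part of the article's example) is to derive the key from the file name plus the file's last-modified time on the server side and pass it into the template:

```
import os
import hashlib

def document_key(filepath):
    # hypothetical helper: the key changes whenever the file changes,
    # so each saved version gets a fresh editing session
    mtime = os.path.getmtime(filepath)
    raw = '{}_{}'.format(filepath, mtime)
    # the editors expect a short identifier, so hash the raw value
    return hashlib.md5(raw.encode('utf-8')).hexdigest()
```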
**5\. How to save docs in ONLYOFFICE within the Python App**
Every time we change and save the file, ONLYOFFICE stores all its versions. Let's look closely at how it works. After we close the editor, the Document Server builds the file version to be saved and sends a request to the _callbackUrl_ address. This request contains _document.key_ and a link to the just-built file.
_document.key_ is used to find the old version of the file and replace it with the new one. As we do not have any database here, we just send the file name as a _callbackUrl_ parameter.
Specify the _callbackUrl_ parameter in _editorConfig.callbackUrl_ and add it to the _edit()_ method:
```
function edit(filename) {
    const filepath = 'files/' + filename;
    if (editor) {
        editor.destroyEditor()
    }
    editor = new DocsAPI.DocEditor("editor",
        {
            documentType: get_file_type(filepath),
            document: {
                url: "host_url" + '/' + filepath,
                title: filename,
                key: filename + '_key'
            },
            editorConfig: {
                mode: 'edit',
                callbackUrl: "host_url" + '/callback' + '?filename=' + filename // add the file name as a request parameter
            }
        });
}
```
Write a method that will save the file after getting the POST request to the _/callback_ address:
```
@post("/callback") # processing post requests for /callback
def callback():
if request.json['status'] == 2:
file = requests.get(request.json['url']).content
with open('files/' + request.query['filename'], 'wb') as f:
f.write(file)
return "{\"error\":0}"
```
_status 2_ indicates the built file.
When we close the editor, the new version of the file will be saved to storage.
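The handler above simply overwrites the stored copy. If you also want to keep earlier revisions on your side (a variation sketched here under that assumption, not something the article requires), you could move the current copy aside before writing the new one:

```
import os
import shutil
import requests
from bottle import post, request

@post("/callback")
def callback():
    if request.json['status'] == 2:  # the Document Server has built the final file
        path = os.path.join('files', request.query['filename'])
        if os.path.exists(path):
            # keep the previous revision under a timestamped name
            stamp = str(int(os.path.getmtime(path)))
            shutil.copy2(path, path + '.' + stamp)
        body = requests.get(request.json['url']).content
        with open(path, 'wb') as f:
            f.write(body)
    return "{\"error\":0}"
```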
**6\. How to manage users in ONLYOFFICE within the Python App**
If there are users in your app, and you need to see who exactly is editing a doc, write their identifiers (id and name) in the editors' configuration.
Add the ability to select a user in the interface:
```
<select id="user_selector" onchange="pick_user()">
<option value="1" selected="selected">JD</option>
<option value="2">Turk</option>
<option value="3">Elliot</option>
<option value="4">Carla</option>
</select>
```
If you add a call to the function _pick_user()_ at the beginning of the _<script>_ tag, it will initialize, in the function itself, the variables responsible for the user id and name.
```
function pick_user() {
    const user_selector = document.getElementById("user_selector");
    this.current_user_name = user_selector.options[user_selector.selectedIndex].text;
    this.current_user_id = user_selector.options[user_selector.selectedIndex].value;
}
```
Make use of _editorConfig.user.id_ and _editorConfig.user.name_ to configure the user settings. Add these parameters to the editor's configuration in the file-editing function.
```
function edit(filename) {
    const filepath = 'files/' + filename;
    if (editor) {
        editor.destroyEditor()
    }
    editor = new DocsAPI.DocEditor("editor",
        {
            documentType: get_file_type(filepath),
            document: {
                url: "host_url" + '/' + filepath,
                title: filename
            },
            editorConfig: {
                mode: 'edit',
                callbackUrl: "host_url" + '/callback' + '?filename=' + filename,
                user: {
                    id: this.current_user_id,
                    name: this.current_user_name
                }
            }
        });
}
```
Using this approach, you can integrate ONLYOFFICE editors into your app written in Python and get all the necessary tools for working and collaborating on docs. For more integration examples (Java, Node.js, PHP, Ruby), please refer to the official [_API documentation_][5].
**By: Maria Pashkina**
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/09/integrate-online-documents-editors-into-a-python-web-app-using-onlyoffice/
作者:[Aashima Sharma][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/aashima-sharma/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/09/Typist-composing-text-in-laptop.jpg?resize=696%2C420&ssl=1 (Typist composing text in laptop)
[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/09/Typist-composing-text-in-laptop.jpg?fit=900%2C543&ssl=1
[3]: https://www.onlyoffice.com/en/
[4]: https://www.onlyoffice.com/en/developer-edition.aspx
[5]: https://api.onlyoffice.com/editors/basic

View File

@ -0,0 +1,188 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Mutation testing by example: Failure as experimentation)
[#]: via: (https://opensource.com/article/19/9/mutation-testing-example-failure-experimentation)
[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzichttps://opensource.com/users/jocunddew)
Mutation testing by example: Failure as experimentation
======
Develop the logic for an automated cat door that opens during daylight
hours and locks during the night, and follow along with the .NET
xUnit.net testing framework.
![Digital hand surrounding by objects, bike, light bulb, graphs][1]
In the [first article][2] in this series, I demonstrated how to use planned failure to ensure expected outcomes in your code. In this second article, I'll continue developing my example project—an automated cat door that opens during daylight hours and locks during the night.
As a reminder, you can follow along using the .NET xUnit.net testing framework by following the [instructions here][3].
### What about the daylight hours?
Recall that test-driven development (TDD) centers on a healthy amount of unit tests.
The first article implemented logic that fulfills the expectations of the **Given7pmReturnNighttime** unit test. But you're not done yet. Now you need to describe the expectations of what happens when the current time is greater than 7am. Here is the new unit test, called **Given7amReturnDaylight**:
```
       [Fact]
       public void Given7amReturnDaylight()
       {
           var expected = "Daylight";
           var actual = dayOrNightUtility.GetDayOrNight();
           Assert.Equal(expected, actual);
       }
```
The new unit test now fails (it is very desirable to fail as early as possible!):
```
Starting test execution, please wait...
[Xunit.net 00:00:01.23] unittest.UnitTest1.Given7amReturnDaylight [FAIL]
Failed unittest.UnitTest1.Given7amReturnDaylight
[...]
```
It was expecting to receive the string value "Daylight" but instead received the string value "Nighttime."
### Analyze the failed test case
Upon closer inspection, it seems that the code has trapped itself. It turns out that the implementation of the **GetDayOrNight** method is not testable!
Take a look at the core challenge we have gotten ourselves into:
1. **GetDayOrNight relies on hidden input.**
The value of **dayOrNight** is dependent upon the hidden input (it obtains the value for the time of day from the built-in system clock).
2. **GetDayOrNight contains non-deterministic behavior.**
The value of the time of day obtained from the system clock is non-deterministic. It depends on the point in time when you run the code, which we must consider unpredictable.
3. **Low quality of the GetDayOrNight API.**
This API is tightly coupled to the concrete data source (system **DateTime**).
4. **GetDayOrNight violates the single responsibility principle.**
You have implemented a method that consumes and processes information at the same time. It is a good practice that a method should be responsible for performing a single duty.
5. **GetDayOrNight has more than one reason to change.**
It is possible to imagine a scenario where the internal source of time may change. Also, it is quite easy to imagine that the processing logic will change. These disparate reasons for changing must be isolated from each other.
6. **The API signature of GetDayOrNight is not sufficient when it comes to trying to understand its behavior.**
It is very desirable to be able to understand what type of behavior to expect from an API by simply looking at its signature.
7. **GetDayOrNight depends on global shared mutable state.**
Shared mutable state is to be avoided at all costs!
8. **The behavior of the GetDayOrNight method cannot be predicted even after reading the source code.**
That is a scary proposition. It should always be very clear from reading the source code what kind of behavior can be predicted once the system is operational.
### The principles behind what failed
Whenever you're faced with an engineering problem, it is advisable to use the time-tested strategy of _divide and conquer_. In this case, following the principle of _separation of concerns_ is the way to go.
> **separation of concerns** (**SoC**) is a design principle for separating a computer program into distinct sections, so that each section addresses a separate concern. A concern is a set of information that affects the code of a computer program. A concern can be as general as the details of the hardware the code is being optimized for, or as specific as the name of a class to instantiate. A program that embodies SoC well is called a modular program.
>
> ([source][4])
The **GetDayOrNight** method should be concerned only with deciding whether the date and time value means daylight or nighttime. It should not be concerned with finding the source of that value. That concern should be left to the calling client.
You must leave it to the calling client to take care of obtaining the current time. This approach aligns with another valuable engineering principle—_inversion of control_. Martin Fowler explores this concept in [detail, here][5].
> One important characteristic of a framework is that the methods defined by the user to tailor the framework will often be called from within the framework itself, rather than from the user's application code. The framework often plays the role of the main program in coordinating and sequencing application activity. This inversion of control gives frameworks the power to serve as extensible skeletons. The methods supplied by the user tailor the generic algorithms defined in the framework for a particular application.
>
> \-- [Ralph Johnson and Brian Foote][6]
### Refactoring the test case
So the code needs refactoring. Get rid of the dependency on the internal clock (the **DateTime** system utility):
```
DateTime time = new DateTime();
```
Delete the above line (which should be line 7 in your file). Refactor your code further by adding an input parameter **DateTime time** to the **GetDayOrNight** method.
Here's the refactored class **DayOrNightUtility.cs**:
```
using System;
namespace app {
   public class DayOrNightUtility {
       public string GetDayOrNight(DateTime time) {
           string dayOrNight = "Nighttime";
           if(time.Hour >= 7 && time.Hour < 19) {
               dayOrNight = "Daylight";
           }
           return dayOrNight;
       }
   }
}
```
Refactoring the code requires the unit tests to change. You need to prepare values for the **nightHour** and the **dayHour** and pass those values into the **GetDayOrNight** method. Here are the refactored unit tests:
```
using System;
using Xunit;
using app;
namespace unittest
{
   public class UnitTest1
   {
       DayOrNightUtility dayOrNightUtility = new DayOrNightUtility();
       DateTime nightHour = new DateTime(2019, 08, 03, 19, 00, 00);
       DateTime dayHour = new DateTime(2019, 08, 03, 07, 00, 00);
       [Fact]
       public void Given7pmReturnNighttime()
       {
           var expected = "Nighttime";
           var actual = dayOrNightUtility.GetDayOrNight(nightHour);
           Assert.Equal(expected, actual);
       }
       [Fact]
       public void Given7amReturnDaylight()
       {
           var expected = "Daylight";
           var actual = dayOrNightUtility.GetDayOrNight(dayHour);
           Assert.Equal(expected, actual);
       }
   }
}
```
### Lessons learned
Before moving forward with this simple scenario, take a look back and review the lessons in this exercise.
It is easy to create a trap inadvertently by implementing code that is untestable. On the surface, such code may appear to be functioning correctly. However, following test-driven development (TDD) practice—describing the expectations first and only then prescribing the implementation—revealed serious problems in the code.
This shows that TDD is the ideal methodology for ensuring code does not get too messy. TDD points out problem areas, such as the absence of single responsibility and the presence of hidden inputs. Also, TDD assists in removing non-deterministic code and replacing it with fully testable code that behaves deterministically.
Finally, TDD helped deliver code that is easy to read and logic that's easy to follow.
In the next article in this series, I'll demonstrate how to use the logic created during this exercise to implement functioning code and how further testing can make it even better.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/mutation-testing-example-failure-experimentation
作者:[Alex Bunardzic][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alex-bunardzichttps://opensource.com/users/jocunddew
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003588_01_rd3os.combacktoschoolseriesk12_rh_021x_0.png?itok=fvorN0e- (Digital hand surrounding by objects, bike, light bulb, graphs)
[2]: https://opensource.com/article/19/9/mutation-testing-example-part-1-how-leverage-failure
[3]: https://opensource.com/article/19/8/mutation-testing-evolution-tdd
[4]: https://en.wikipedia.org/wiki/Separation_of_concerns
[5]: https://martinfowler.com/bliki/InversionOfControl.html
[6]: http://www.laputan.org/drc/drc.html

View File

@ -0,0 +1,118 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Essential Accessories for Intel NUC Mini PC)
[#]: via: (https://itsfoss.com/intel-nuc-essential-accessories/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Essential Accessories for Intel NUC Mini PC
======
I bought a [barebone Intel NUC mini PC][1] a few weeks back. I [installed Linux on it][2] and I am totally enjoying it. This tiny fanless gadget replaces the bulky tower of a desktop computer.
The Intel NUC mostly comes in barebone format, which means it doesn't have any RAM or hard disk, and obviously no operating system. Many [Linux-based mini PCs][3] customize the Intel NUC and sell them to end users by adding disk, RAM, and an operating system.
Needless to say, it doesn't come with a keyboard, mouse, or screen, just like most other desktop computers out there.
[Intel NUC][4] is an excellent device, and if you are looking to buy a desktop computer, I highly recommend it. If you are considering getting an Intel NUC, here are a few accessories you should have in order to start using it as your computer.
### Essential Intel NUC accessories
![][5]
_The Amazon links in the article are affiliate links. Please read our [affiliate policy][6]._
#### The peripheral devices: monitor, keyboard and mouse
This is a no-brainer. You need a screen, keyboard, and mouse to use a computer. You'll need a monitor with an HDMI connection and a USB or wireless keyboard and mouse. If you have these things already, you are good to go.
If you are looking for recommendations, I suggest an LG IPS LED monitor. I have two of the 22-inch models and I am happy with the sharp visuals they provide.
These monitors have a simple stand that doesn't move. If you want a monitor that can move up and down and rotate into portrait mode, try [HP EliteDisplay monitors][7].
![HP EliteDisplay Monitor][8]
I connect all three monitors at the same time in a multi-monitor setup. One monitor connects to the HDMI port. Two monitors connect to the thunderbolt port via a [thunderbolt-to-HDMI splitter from Club 3D][9].
You may also opt for an ultrawide monitor. I don't have personal experience with them.
#### A/C power cord
This will be a surprise for you: when you get your NUC, you'll notice that though it has a power adapter, it's not complete with the plug.
![][10]
Since different countries have different plug points, Intel decided to simply drop it from the NUC kit. I am using the power cord of an old dead laptop, but if you don't have one, chances are that you will have to get one for yourself.
#### RAM
The Intel NUC has two RAM slots and it can support up to 32 GB of RAM. Since I have the Core i3 processor, I opted for [8 GB DDR4 RAM from Crucial][11] that costs around $33.
![][12]
8 GB of RAM is fine for most cases, but if you have a Core i7 processor, you may opt for [16 GB of RAM][13] that costs almost $67. You can double that up and get the maximum 32 GB. The choice is all yours.
#### Hard disk [Important]
The Intel NUC supports both a 2.5″ drive and an M.2 SSD, and you can use both at the same time to get more storage.
The 2.5″ slot can hold both an SSD and an HDD. I strongly recommend opting for an SSD because it's way faster than an HDD. A [480 GB 2.5″ SSD][14] costs $60, which is a fair price in my opinion.
![][15]
The 2.5″ drive is limited to the standard SATA interface speed of 6 Gb/s. The M.2 slot can be faster, depending on whether you choose an NVMe SSD or not. NVMe (non-volatile memory express) SSDs are up to four times faster than normal SSDs (also called SATA SSDs), but they may also be slightly more expensive than SATA M.2 SSDs.
While buying an M.2 SSD, check the product image. It should be mentioned on the image of the disk itself whether it's an NVMe or a SATA SSD. [Samsung EVO is a cost-effective NVMe M.2 SSD][16] that you may consider.
![Make sure that your are buying the faster NVMe M2 SSD][17]
A SATA SSD has the same speed in both the M.2 slot and the 2.5″ slot. This is why, if you don't want to opt for the expensive NVMe SSD, I suggest you go for the 2.5″ SATA SSD and keep the M.2 slot free for future upgrades.
#### Other supporting accessories
You'll need an HDMI cable to connect your monitor. If you are buying a new monitor, you should usually get a cable with it.
You may need a screwdriver if you are going to use the M.2 slot. The Intel NUC is an excellent device, and you can unscrew the bottom panel just by rotating its four feet with your hands. You'll have to open the device in order to install the RAM and disk.
![Intel NUC with Security Cable | Image Credit Intel][18]
The NUC also has an anti-theft key lock hole that you can use with security cables. Keeping computers secured with cables is a recommended security practice in a business environment. Investing a [few dollars in a security cable][19] could save you hundreds of dollars.
**What accessories do you use?**
That's it for the Intel NUC accessories I use and suggest. How about you? If you own a NUC, which accessories do you use and recommend to other NUC users?
--------------------------------------------------------------------------------
via: https://itsfoss.com/intel-nuc-essential-accessories/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://www.amazon.com/Intel-NUC-Mainstream-Kit-NUC8i3BEH/dp/B07GX4X4PW?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B07GX4X4PW (barebone Intel NUC mini PC)
[2]: https://itsfoss.com/install-linux-on-intel-nuc/
[3]: https://itsfoss.com/linux-based-mini-pc/
[4]: https://www.intel.in/content/www/in/en/products/boards-kits/nuc.html
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/intel-nuc-accessories.png?ssl=1
[6]: https://itsfoss.com/affiliate-policy/
[7]: https://www.amazon.com/HP-EliteDisplay-21-5-Inch-1FH45AA-ABA/dp/B075L4VKQF?SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B075L4VKQF (HP EliteDisplay monitors)
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/hp-elitedisplay-monitor.png?ssl=1
[9]: https://www.amazon.com/Club3D-CSV-1546-USB-C-Multi-Monitor-Splitter/dp/B06Y2FX13G?SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B06Y2FX13G (thunderbolt to HDMI splitter from Club 3D)
[10]: https://itsfoss.com/wp-content/uploads/2019/09/ac-power-cord-3-pongs.webp
[11]: https://www.amazon.com/Crucial-Single-PC4-19200-SODIMM-260-Pin/dp/B01BIWKP58?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B01BIWKP58 (8GB DDR4 RAM from Crucial)
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/crucial-ram.jpg?ssl=1
[13]: https://www.amazon.com/Crucial-Single-PC4-19200-SODIMM-260-Pin/dp/B019FRBHZ0?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B019FRBHZ0 (16 GB RAM)
[14]: https://www.amazon.com/Green-480GB-Internal-SSD-WDS480G2G0A/dp/B01M3POPK3?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B01M3POPK3 (480 GB 2.5)
[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/wd-green-ssd.png?ssl=1
[16]: https://www.amazon.com/Samsung-970-EVO-500GB-MZ-V7E500BW/dp/B07BN4NJ2J?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B07BN4NJ2J (Samsung EVO is a cost effective NVMe M.2 SSD)
[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/samsung-evo-nvme.jpg?ssl=1
[18]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/intel-nuc-security-cable.jpg?ssl=1
[19]: https://www.amazon.com/Kensington-Combination-Laptops-Devices-K64673AM/dp/B005J7Y99W?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B005J7Y99W (few dollars in the security cable)

View File

@ -1,252 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (laingke)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Linux commands for measuring disk activity)
[#]: via: (https://www.networkworld.com/article/3330497/linux/linux-commands-for-measuring-disk-activity.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
用于测量磁盘活动的 Linux 命令
======
![](https://images.idgesg.net/images/article/2018/12/tape-measure-100782593-large.jpg)
Linux 系统提供了一套方便的命令帮助您查看磁盘有多忙而不仅仅是磁盘有多满。在本文中我们将研究五个非常有用的命令用于查看磁盘活动。其中两个命令iostat 和 ioping可能必须添加到您的系统中这两个相同的命令要求您使用 sudo 特权,但是这五个命令都提供了查看磁盘活动的有用方法。
这些命令中最简单、最明显的一个可能是 **dstat** 了。
### dtstat
尽管 **dstat** 命令以字母 "d" 开头,但它提供的统计信息远远不止磁盘活动。如果您只想查看磁盘活动,可以使用 **-d** 选项。如下所示,您将得到一个磁盘读/写测量值的连续列表,直到使用 a ^c 停止显示为止。注意,在第一个报告之后,显示中的每个后续行将在接下来的时间间隔内报告磁盘活动,缺省值仅为一秒。
```
$ dstat -d
-dsk/total-
read writ
949B 73k
65k 0 <== first second
0 24k <== second second
0 16k
0 0 ^C
```
在 -d 选项后面包含一个数字将把间隔设置为其秒数。
```
$ dstat -d 10
-dsk/total-
read writ
949B 73k
65k 81M <== first five seconds
0 21k <== second five second
0 9011B ^C
```
请注意报告的数据可能以许多不同的单位显示——例如M (megabytes), k (kilobytes), and B (bytes).
如果没有选项dstat 命令还将显示许多其他信息——指示 CPU 如何使用时间、显示网络和分页活动、报告中断和上下文切换。
```
$ dstat
You did not select any stats, using -cdngy by default.
--total-cpu-usage-- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai stl| read writ| recv send| in out | int csw
0 0 100 0 0| 949B 73k| 0 0 | 0 3B| 38 65
0 0 100 0 0| 0 0 | 218B 932B| 0 0 | 53 68
0 1 99 0 0| 0 16k| 64B 468B| 0 0 | 64 81 ^C
```
dstat 命令提供了关于整个 Linux 系统性能的有价值的见解,几乎可以用它灵活而功能强大的命令来代替 vmstatnetstatiostat 和 ifstat 等较旧的工具集合,该命令结合了这些旧工具的功能。要深入了解 dstat 命令可以提供的其它信息,请参阅这篇关于 [dstat][1] 命令的文章。
### iostat
iostat 命令通过观察设备活动的时间与其平均传输速率之间的关系,帮助监视系统输入/输出设备的加载情况。它有时用于评估磁盘之间的活动平衡。
```
$ iostat
Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
0.07 0.01 0.03 0.05 0.00 99.85
Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn
loop0 0.00 0.00 0.00 1048 0
loop1 0.00 0.00 0.00 365 0
loop2 0.00 0.00 0.00 1056 0
loop3 0.00 0.01 0.00 16169 0
loop4 0.00 0.00 0.00 413 0
loop5 0.00 0.00 0.00 1184 0
loop6 0.00 0.00 0.00 1062 0
loop7 0.00 0.00 0.00 5261 0
sda 1.06 0.89 72.66 2837453 232735080
sdb 0.00 0.02 0.00 48669 40
loop8 0.00 0.00 0.00 1053 0
loop9 0.01 0.01 0.00 18949 0
loop10 0.00 0.00 0.00 56 0
loop11 0.00 0.00 0.00 7090 0
loop12 0.00 0.00 0.00 1160 0
loop13 0.00 0.00 0.00 108 0
loop14 0.00 0.00 0.00 3572 0
loop15 0.01 0.01 0.00 20026 0
loop16 0.00 0.00 0.00 24 0
```
当然当您只想关注磁盘时Linux loop 设备上提供的所有统计信息都会使结果显得杂乱无章。但是,该命令也确实提供了 **-p** 选项,该选项使您可以仅查看磁盘——如以下命令所示。
```
$ iostat -p sda
Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
0.07 0.01 0.03 0.05 0.00 99.85
Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 1.06 0.89 72.54 2843737 232815784
sda1 1.04 0.88 72.54 2821733 232815784
```
请注意 **tps** 是指每秒的传输量。
您还可以让 iostat 提供重复的报告。在下面的示例中,我们使用 **-d** 选项每五秒钟进行一次测量。
```
$ iostat -p sda -d 5
Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU)
Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 1.06 0.89 72.51 2843749 232834048
sda1 1.04 0.88 72.51 2821745 232834048
Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 0.80 0.00 11.20 0 56
sda1 0.80 0.00 11.20 0 56
```
如果您希望省略第一个(自启动以来的统计信息)报告,请在命令中添加 **-y**。
```
$ iostat -p sda -d 5 -y
Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU)
Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 0.80 0.00 11.20 0 56
sda1 0.80 0.00 11.20 0 56
```
接下来,我们看第二个磁盘驱动器。
```
$ iostat -p sdb
Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
0.07 0.01 0.03 0.05 0.00 99.85
Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sdb 0.00 0.02 0.00 48669 40
sdb2 0.00 0.00 0.00 4861 40
sdb1 0.00 0.01 0.00 35344 0
```
### iotop
**iotop** 命令是类似 top 的实用程序,用于查看磁盘 I/O。它收集 Linux 内核提供的 I/O 使用信息,以便您了解哪些进程在磁盘 I/O 方面的要求最高。在下面的示例中循环时间被设置为5秒。显示将自动更新覆盖前面的输出。
```
$ sudo iotop -d 5
Total DISK READ: 0.00 B/s | Total DISK WRITE: 1585.31 B/s
Current DISK READ: 0.00 B/s | Current DISK WRITE: 12.39 K/s
TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
32492 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.12 % [kworker/u8:1-ev~_power_efficient]
208 be/3 root 0.00 B/s 1585.31 B/s 0.00 % 0.11 % [jbd2/sda1-8]
1 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % init splash
2 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kthreadd]
3 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [rcu_gp]
4 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [rcu_par_gp]
8 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [mm_percpu_wq]
```
### ioping
**ioping** 命令是一种完全不同的工具,但是它可以报告磁盘延迟——也就是磁盘响应请求需要多长时间,而这有助于诊断磁盘问题。
```
$ sudo ioping /dev/sda1
4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=1 time=960.2 us (warmup)
4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=2 time=841.5 us
4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=3 time=831.0 us
4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=4 time=1.17 ms
^C
--- /dev/sda1 (block device 111.8 GiB) ioping statistics ---
3 requests completed in 2.84 ms, 12 KiB read, 1.05 k iops, 4.12 MiB/s
generated 4 requests in 3.37 s, 16 KiB, 1 iops, 4.75 KiB/s
min/avg/max/mdev = 831.0 us / 947.9 us / 1.17 ms / 158.0 us
```
### atop
**atop** 命令,像 **top** 一样提供了大量有关系统性能的信息,包括有关磁盘活动的一些统计信息。
```
ATOP - butterfly 2018/12/26 17:24:19 37d3h13m------ 10ed
PRC | sys 0.03s | user 0.01s | #proc 179 | #zombie 0 | #exit 6 |
CPU | sys 1% | user 0% | irq 0% | idle 199% | wait 0% |
cpu | sys 1% | user 0% | irq 0% | idle 99% | cpu000 w 0% |
CPL | avg1 0.00 | avg5 0.00 | avg15 0.00 | csw 677 | intr 470 |
MEM | tot 5.8G | free 223.4M | cache 4.6G | buff 253.2M | slab 394.4M |
SWP | tot 2.0G | free 2.0G | | vmcom 1.9G | vmlim 4.9G |
DSK | sda | busy 0% | read 0 | write 7 | avio 1.14 ms |
NET | transport | tcpi 4 | tcpo stall 8 | udpi 1 | udpo 0swout 2255 |
NET | network | ipi 10 | ipo 7 | ipfrw 0 | deliv 60.67 ms |
NET | enp0s25 0% | pcki 10 | pcko 8 | si 1 Kbps | so 3 Kbp0.73 ms |
PID SYSCPU USRCPU VGROW RGROW ST EXC THR S CPUNR CPU CMD 1/1673e4 |
3357 0.01s 0.00s 672K 824K -- - 1 R 0 0% atop
3359 0.01s 0.00s 0K 0K NE 0 0 E - 0% <ps>
3361 0.00s 0.01s 0K 0K NE 0 0 E - 0% <ps>
3363 0.01s 0.00s 0K 0K NE 0 0 E - 0% <ps>
31357 0.00s 0.00s 0K 0K -- - 1 S 1 0% bash
3364 0.00s 0.00s 8032K 756K N- - 1 S 1 0% sleep
2931 0.00s 0.00s 0K 0K -- - 1 I 1 0% kworker/u8:2-e
3356 0.00s 0.00s 0K 0K -E 0 0 E - 0% <sleep>
3360 0.00s 0.00s 0K 0K NE 0 0 E - 0% <sleep>
3362 0.00s 0.00s 0K 0K NE 0 0 E - 0% <sleep>
```
如果您 _只_ 想查看磁盘统计信息,则可以使用以下命令轻松进行管理:
```
$ atop | grep DSK
$ atop | grep DSK
DSK | sda | busy 0% | read 122901 | write 3318e3 | avio 0.67 ms |
DSK | sdb | busy 0% | read 1168 | write 103 | avio 0.73 ms |
DSK | sda | busy 2% | read 0 | write 92 | avio 2.39 ms |
DSK | sda | busy 2% | read 0 | write 94 | avio 2.47 ms |
DSK | sda | busy 2% | read 0 | write 99 | avio 2.26 ms |
DSK | sda | busy 2% | read 0 | write 94 | avio 2.43 ms |
DSK | sda | busy 2% | read 0 | write 94 | avio 2.43 ms |
DSK | sda | busy 2% | read 0 | write 92 | avio 2.43 ms |
^C
```
### 了解磁盘 I/O
Linux 提供了足够的命令,可以让您很好地了解磁盘的工作强度,并帮助您关注潜在的问题或慢速。希望这些命令中的一个可以告诉您何时需要质疑磁盘性能。偶尔使用这些命令将有助于确保当您需要检查磁盘,特别是忙碌或缓慢的磁盘时可以显而易见地发现它们。
加入 [Facebook][2] 和 [LinkedIn][3] 上的 Network World 社区,对最重要的话题发表评论。
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3330497/linux/linux-commands-for-measuring-disk-activity.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[laingke](https://github.com/laingke)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3291616/linux/examining-linux-system-performance-with-dstat.html
[2]: https://www.facebook.com/NetworkWorld/
[3]: https://www.linkedin.com/company/network-world

View File

@ -7,51 +7,53 @@
[#]: via: (https://www.linuxtechi.com/setup-multinode-elastic-stack-cluster-rhel8-centos8/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
How to Setup Multi Node Elastic Stack Cluster on RHEL 8 / CentOS 8
如何在 RHEL 8 / CentOS 8 上建立多节点 Elastic Stack 集群
======
Elastic stack widely known as **ELK stack**, it is a group of opensource products like **Elasticsearch**, **Logstash** and **Kibana**. Elastic Stack is developed and maintained by Elastic company. Using elastic stack, one can feed systems logs to Logstash, it is a data collection engine which accept the logs or data from all the sources and normalize logs and then it forwards the logs to Elasticsearch for **analyzing**, **indexing**, **searching** and **storing** and finally using Kibana one can represent the visualize data, using Kibana we can also create interactive graphs and diagram based on users queries.
Elastic Stack 俗称 **ELK Stack**,是一组开源产品,如 **Elasticsearch**、**Logstash** 和 **Kibana**。Elastic Stack 由 Elastic 公司开发和维护。使用 Elastic Stack可以将系统日志发送到 Logstash这是一个数据收集引擎它接受来自几乎任何来源的日志或数据并对日志进行格式化然后将日志转发到 Elasticsearch用于**分析**、**索引**、**搜索**和**存储**;最后,可以使用 Kibana 将数据可视化,我们还可以基于用户的查询用 Kibana 创建交互式图表。
[![Elastic-Stack-Cluster-RHEL8-CentOS8][1]][2]
In this article we will demonstrate how to setup multi node elastic stack cluster on RHEL 8 / CentOS 8 servers. Following are details for my Elastic Stack Cluster:
在本文中,我们将演示如何在 RHEL 8 / CentOS 8 服务器上设置多节点 Elastic Stack 集群。以下是我的 Elastic Stack 集群的详细信息:
### Elasticsearch:
* Three Servers with Minimal RHEL 8 / CentOS 8
  * IPs & Hostname 192.168.56.40 (elasticsearch1.linuxtechi.local), 192.168.56.50 (elasticsearch2.linuxtechi.local), 192.168.56.60 (elasticsearch3.linuxtechi.local)
  * 三台服务器,最小化安装 RHEL 8 / CentOS 8
  * IP 和主机名 192.168.56.40 (elasticsearch1.linuxtechi.local)、192.168.56.50 (elasticsearch2.linuxtechi.local)、192.168.56.60 (elasticsearch3.linuxtechi.local)
### Logstash:
* Two Servers with minimal RHEL 8 / CentOS 8
  * IPs & Hostname 192.168.56.20 (logstash1.linuxtechi.local), 192.168.56.30 (logstash2.linuxtechi.local)
  * 两台服务器,最小化安装 RHEL 8 / CentOS 8
  * IP 和主机名 192.168.56.20 (logstash1.linuxtechi.local)、192.168.56.30 (logstash2.linuxtechi.local)
### Kibana:
* One Server with minimal RHEL 8 / CentOS 8
* Hostname kibana.linuxtechi.local
* 一台服务器,最小化安装 RHEL 8 / CentOS 8
* 主机名 kibana.linuxtechi.local
* IP 192.168.56.10
### Filebeat:
* One Server with minimal CentOS 7
  * IP & hostname 192.168.56.70 (web-server)
  * 一台服务器,最小化安装 CentOS 7
  * IP 和主机名 192.168.56.70 (web-server)
Lets start with Elasticsearch cluster setup,
让我们从设置 Elasticsearch 集群开始,
#### Setup 3 node Elasticsearch cluster
#### 设置 3 个节点的 Elasticsearch 集群
As I have already stated that I have kept nodes for Elasticsearch cluster, login to each node, set the hostname and configure yum/dnf repositories.
正如我之前所说,我已经为 Elasticsearch 集群准备好了节点。登录到每个节点,设置主机名并配置 yum/dnf 仓库。
Use the below hostnamectl command to set the hostname on respective nodes,
使用命令 hostnamectl 设置各个节点上的主机名,
```
[root@linuxtechi ~]# hostnamectl set-hostname "elasticsearch1.linuxtechi.local"
@ -65,11 +67,11 @@ Use the below hostnamectl command to set the hostname on respective nodes,
[root@linuxtechi ~]#
```
For CentOS 8 System we dont need to configure any OS package repository and for RHEL 8 Server, if you have valid subscription and then subscribed it with Red Hat for getting package repository.  In Case you want to configure local yum/dnf repository for OS packages then refer the below url:
对于 CentOS 8 系统,我们不需要配置任何操作系统软件包仓库;对于 RHEL 8 服务器,如果你有有效的订阅,将其订阅到红帽即可获得软件包仓库。如果你想为操作系统软件包配置本地 yum/dnf 仓库,请参考以下链接:
[How to Setup Local Yum/DNF Repository on RHEL 8 Server Using DVD or ISO File][3]
Configure Elasticsearch package repository on all the nodes, create a file elastic.repo  file under /etc/yum.repos.d/ folder with the following content
在所有节点上配置 Elasticsearch 软件包仓库,在 /etc/yum.repos.d/ 文件夹下创建一个包含以下内容的 elastic.repo 文件:
```
~]# vi /etc/yum.repos.d/elastic.repo
@ -83,15 +85,15 @@ autorefresh=1
type=rpm-md
```
save & exit the file
保存并退出文件
Use below rpm command on all three nodes to import Elastics public signing key
在所有三个节点上使用 rpm 命令导入 Elastic 公共签名密钥
```
~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
```
Add the following lines in /etc/hosts file on all three nodes,
在所有三个节点的 /etc/hosts 文件中添加以下行:
```
192.168.56.40 elasticsearch1.linuxtechi.local
@ -99,7 +101,7 @@ Add the following lines in /etc/hosts file on all three nodes,
192.168.56.60 elasticsearch3.linuxtechi.local
```
Install Java on all three Nodes using yum / dnf command,
使用 yum/dnf 命令在所有三个节点上安装 Java
```
[root@linuxtechi ~]# dnf install java-openjdk -y
@ -107,7 +109,7 @@ Install Java on all three Nodes using yum / dnf command,
[root@linuxtechi ~]# dnf install java-openjdk -y
```
Install Elasticsearch using beneath dnf command on all three nodes,
使用 dnf 命令在所有三个节点上安装 Elasticsearch
```
[root@linuxtechi ~]# dnf install elasticsearch -y
@ -115,7 +117,7 @@ Install Elasticsearch using beneath dnf command on all three nodes,
[root@linuxtechi ~]# dnf install elasticsearch -y
```
**Note:** In case OS firewall is enabled and running in each Elasticsearch node then allow following ports using beneath firewall-cmd command,
**注意:** 如果操作系统防火墙已启用并在每个 Elasticsearch 节点上运行,则使用下面的 firewall-cmd 命令开放以下端口:
```
~]# firewall-cmd --permanent --add-port=9300/tcp
@ -123,7 +125,7 @@ Install Elasticsearch using beneath dnf command on all three nodes,
~]# firewall-cmd --reload
```
Configure Elasticsearch, edit the file “**/etc/elasticsearch/elasticsearch.yml**” on all the three nodes and add the followings,
配置 Elasticsearch在所有三个节点上编辑文件 **/etc/elasticsearch/elasticsearch.yml** 并加入以下内容:
```
~]# vim /etc/elasticsearch/elasticsearch.yml
@ -137,9 +139,9 @@ cluster.initial_master_nodes: ["elasticsearch1.linuxtechi.local", "elasticsearch
……………………………………………
```
**Note:** on Each node, add the correct hostname in node.name parameter and ip address in network.host parameter and other parameters will remain the same.
**注意:** 在每个节点上,在 node.name 中填写正确的主机名,在 network.host 中填写正确的 IP 地址,其他参数将保持不变。
Now Start and enable the Elasticsearch service on all three nodes using following systemctl command,
现在使用 systemctl 命令在所有三个节点上启动并启用 Elasticsearch 服务,
```
~]# systemctl daemon-reload
@ -147,7 +149,7 @@ Now Start and enable the Elasticsearch service on all three nodes using followin
~]# systemctl start elasticsearch.service
```
Use below ss command to verify whether elasticsearch node is start listening on 9200 port,
使用下面的 ss 命令验证 elasticsearch 节点是否开始监听 9200 端口:
```
[root@linuxtechi ~]# ss -tunlp | grep 9200
@ -155,33 +157,33 @@ tcp LISTEN 0 128 [::ffff:192.168.56.40]:9200 *:*
[root@linuxtechi ~]#
```
Use following curl commands to verify the Elasticsearch cluster status
使用以下 curl 命令验证 Elasticsearch 集群状态:
```
[root@linuxtechi ~]# curl http://elasticsearch1.linuxtechi.local:9200
[root@linuxtechi ~]# curl -X GET http://elasticsearch2.linuxtechi.local:9200/_cluster/health?pretty
```
Output above command would be something like below,
命令的输出如下所示,
![Elasticsearch-cluster-status-rhel8][1]
Above output confirms that we have successfully created 3 node Elasticsearch cluster and status of cluster is also green.
以上输出表明我们已经成功创建了 3 节点的 Elasticsearch 集群,集群的状态也是绿色的。
**Note:** If you want to modify JVM heap size then you have edit the file “**/etc/elasticsearch/jvm.options**” and change the below parameters that suits to your environment,
**注意:** 如果你想修改 JVM 堆大小,则需要编辑文件 “**/etc/elasticsearch/jvm.options**”,并根据你的环境更改以下参数:
* -Xms1g
* -Xmx1g
Now lets move to Logstash nodes,
现在让我们转到 Logstash 节点,
#### Install and Configure Logstash
#### 安装和配置 Logstash
Perform the following steps on both Logstash nodes,
在两个 Logstash 节点上执行以下步骤,
Login to both the nodes set the hostname using following hostnamectl command,
登录到两个节点,使用 hostnamectl 命令设置主机名:
```
[root@linuxtechi ~]# hostnamectl set-hostname "logstash1.linuxtechi.local"
@ -192,7 +194,7 @@ Login to both the nodes set the hostname using following hostnamectl command,
[root@linuxtechi ~]#
```
Add the following entries in /etc/hosts file in both logstash nodes
在两个 logstash 节点的 /etc/hosts 文件中添加以下条目
```
~]# vi /etc/hosts
@ -201,9 +203,10 @@ Add the following entries in /etc/hosts file in both logstash nodes
192.168.56.60 elasticsearch3.linuxtechi.local
```
Save and exit the file
保存并退出文件
Configure Logstash repository on both the nodes, create a file **logstash.repo** under the folder /etc/yum.repos.d/ with following content,
在两个节点上配置 Logstash 仓库,在 /etc/yum.repos.d/ 文件夹下创建一个包含以下内容的文件 **logstash.repo**
```
~]# vi /etc/yum.repos.d/logstash.repo
@ -217,35 +220,35 @@ autorefresh=1
type=rpm-md
```
Save and exit the file, run the following rpm command to import the signing key
保存并退出文件,运行 rpm 命令导入签名密钥
```
~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
```
Install Java OpenJDK on both the nodes using following dnf command,
使用 dnf 命令在两个节点上安装 Java OpenJDK
```
~]# dnf install java-openjdk -y
```
Run the following dnf command from both the nodes to install logstash,
在两个节点上运行以下 dnf 命令来安装 logstash
```
[root@linuxtechi ~]# dnf install logstash -y
[root@linuxtechi ~]# dnf install logstash -y
```
Now configure logstash, perform below steps on both logstash nodes,
现在配置 logstash在两个 logstash 节点上执行以下步骤,
Create a logstash conf file, for that first we have copy sample logstash file under /etc/logstash/conf.d/
创建一个 logstash 配置文件,为此,首先我们要将 logstash 示例文件复制到 /etc/logstash/conf.d/ 下:
```
# cd /etc/logstash/
# cp logstash-sample.conf conf.d/logstash.conf
```
Edit conf file and update the following content,
编辑 conf 文件并更新以下内容,
```
# vi conf.d/logstash.conf
@ -266,23 +269,22 @@ output {
}
```
Under output section, in hosts parameter specify FQDN of all three Elasticsearch nodes, other parameters leave as it is.
在 output 部分,在 hosts 参数中指定所有三个 Elasticsearch 节点的 FQDN其他参数保持不变。
Allow logstash port “5044” in OS firewall using following firewall-cmd command,
使用下面的 firewall-cmd 命令在操作系统防火墙中开放 logstash 的 5044 端口:
```
~ # firewall-cmd --permanent --add-port=5044/tcp
~ # firewall-cmd --reload
```
Now start and enable Logstash service, run the following systemctl commands on both the nodes
现在,在两个节点上运行以下 systemctl 命令,启动并启用 Logstash 服务:
```
~]# systemctl start logstash
~]# systemctl enable logstash
```
Use below ss command to verify whether logstash service start listening on 5044,
使用 ss 命令验证 logstash 服务是否开始监听 5044 端口,
```
[root@linuxtechi ~]# ss -tunlp | grep 5044
@ -290,11 +292,11 @@ tcp LISTEN 0 128 *:5044 *:*
[root@linuxtechi ~]#
```
Above output confirms that logstash has been installed and configured successfully. Lets move to Kibana installation.
以上输出表明 logstash 已成功安装和配置。让我们转到 Kibana 安装。
#### Install and Configure Kibana
#### 安装和配置 Kibana
Login to Kibana node, set the hostname with **hostnamectl** command,
登录 Kibana 节点,使用 **hostnamectl** 命令设置主机名,
```
[root@linuxtechi ~]# hostnamectl set-hostname "kibana.linuxtechi.local"
@ -302,7 +304,8 @@ Login to Kibana node, set the hostname with **hostnamectl** command,
[root@linuxtechi ~]#
```
Edit /etc/hosts file and add the following lines
编辑 /etc/hosts 文件并添加以下行
```
192.168.56.40 elasticsearch1.linuxtechi.local
@ -310,7 +313,7 @@ Edit /etc/hosts file and add the following lines
192.168.56.60 elasticsearch3.linuxtechi.local
```
Setup the Kibana repository using following,
使用以下命令设置 Kibana 存储库,
```
[root@linuxtechi ~]# vi /etc/yum.repos.d/kibana.repo
@ -326,13 +329,13 @@ type=rpm-md
[root@linuxtechi ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
```
Execute below dnf command to install kibana,
执行以下 dnf 命令安装 kibana
```
[root@linuxtechi ~]# yum install kibana -y
```
Configure Kibana by editing the file “**/etc/kibana/kibana.yml**”
通过编辑 “**/etc/kibana/kibana.yml**” 文件,配置 Kibana
```
[root@linuxtechi ~]# vim /etc/kibana/kibana.yml
@ -343,14 +346,15 @@ elasticsearch.hosts: ["http://elasticsearch1.linuxtechi.local:9200", "http://ela
…………
```
Start and enable kibana service
启动并启用 kibana 服务:
```
[root@linuxtechi ~]# systemctl start kibana
[root@linuxtechi ~]# systemctl enable kibana
```
Allow Kibana port 5601 in OS firewall,
在系统防火墙上允许 Kibana 端口 5601
```
[root@linuxtechi ~]# firewall-cmd --permanent --add-port=5601/tcp
@ -359,22 +363,25 @@ success
success
[root@linuxtechi ~]#
```
Access Kibana portal / GUI using the following URL:
使用以下 URL 访问 Kibana 界面
<http://kibana.linuxtechi.local:5601>
[![Kibana-Dashboard-rhel8][1]][4]
From dashboard, we can also check our Elastic Stack cluster status
从面板上,我们还可以检查 Elastic Stack 集群的状态:
[![Stack-Monitoring-Overview-RHEL8][1]][5]
This confirms that we have successfully setup multi node Elastic Stack cluster on RHEL 8 / CentOS 8.
Now lets send some logs to logstash nodes via filebeat from other Linux servers, In my case I have one CentOS 7 Server, I will push all important logs of this server to logstash via filebeat.
这证明我们已经在 RHEL 8 / CentOS 8 上成功地安装并设置了多节点 Elastic Stack 集群。
Login to CentOS 7 server and install filebeat package using following rpm command,
现在,让我们通过 filebeat 从其他 Linux 服务器发送一些日志到 logstash 节点。在我的例子中,我有一台 CentOS 7 服务器,我将通过 filebeat 将该服务器的所有重要日志推送到 logstash。
登录到 CentOS 7 服务器,使用以下 rpm 命令安装 filebeat 包:
```
[root@linuxtechi ~]# rpm -ivh https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.3.1-x86_64.rpm
@ -385,16 +392,18 @@ Updating / installing...
[root@linuxtechi ~]#
```
Edit the /etc/hosts file and add the following entries,
编辑 /etc/hosts 文件并添加以下内容,
```
192.168.56.20 logstash1.linuxtechi.local
192.168.56.30 logstash2.linuxtechi.local
```
Now configure the filebeat so that it can send logs to logstash nodes using load balancing technique, edit the file “**/etc/filebeat/filebeat.yml**” and add the following parameters,
Under the **filebeat.inputs:** section change **enabled: false** to **enabled: true** and under the “**paths**” parameter specify the location log files that we can send to logstash, In output Elasticsearch section comment out “**output.elasticsearch**” and **host** parameter. In Logstash output section, remove the comments for “**output.logstash:**” and “**hosts:**” and add the both logstash nodes in hosts parameters and also “**loadbalance: true**”.
现在配置 filebeat以便它可以使用负载均衡技术向 logstash 节点发送日志。编辑文件 **/etc/filebeat/filebeat.yml**,并添加以下参数:
在 “**filebeat.inputs:**” 部分,将 “**enabled: false**” 更改为 “**enabled: true**”,并在 “**paths**” 参数下指定要发送到 logstash 的日志文件的位置;在 Elasticsearch 输出部分,注释掉 “**output.elasticsearch**” 和 “**host**” 参数;在 Logstash 输出部分,去掉 “**output.logstash:**” 和 “**hosts:**” 的注释,在 hosts 参数中添加两个 logstash 节点,并加上 “**loadbalance: true**”。
```
[root@linuxtechi ~]# vi /etc/filebeat/filebeat.yml
@ -416,40 +425,43 @@ output.logstash:
………………………………………
```
Start and enable filebeat service using beneath systemctl commands,
使用下面的两个 systemctl 命令启动并启用 filebeat 服务:
```
[root@linuxtechi ~]# systemctl start filebeat
[root@linuxtechi ~]# systemctl enable filebeat
```
Now go to Kibana GUI, verify whether new indices are visible or not,
现在转到 Kibana 用户界面,验证新索引是否可见,
Choose Management option from Left side bar and then click on Index Management under Elasticsearch,
从左侧栏中选择管理选项,然后单击 Elasticsearch 下的索引管理,
[![Elasticsearch-index-management-Kibana][1]][6]
As we can see above, indices are visible now, lets create index pattern,
正如我们上面看到的,索引现在是可见的,让我们来创建索引模式。
Click on “Index Patterns” from Kibana Section, it will prompt us to create a new pattern, click on “**Create Index Pattern**” and specify the pattern name as “**filebeat**”
点击 Kibana 部分的 “索引模式”,它将提示我们创建一个新模式;点击 “**Create Index Pattern**”,并将模式名称指定为 “**filebeat**”
[![Define-Index-Pattern-Kibana-RHEL8][1]][7]
Click on Next Step
Choose “**Timestamp**” as time filter for index pattern and then click on “Create index pattern”
点击下一步。
选择 **Timestamp** 作为索引模式的时间过滤器,然后单击 “Create index pattern”
[![Time-Filter-Index-Pattern-Kibana-RHEL8][1]][8]
[![filebeat-index-pattern-overview-Kibana][1]][9]
Now Click on Discover to see real time filebeat index pattern,
现在单击 “Discover”,查看实时的 filebeat 索引模式:
[![Discover-Kibana-REHL8][1]][10]
This confirms that Filebeat agent has been configured successfully and we are able to see real time logs on Kibana dashboard.
Thats all from this article, please dont hesitate to share your feedback and comments in case these steps help you to setup multi node Elastic Stack Cluster on RHEL 8 / CentOS 8 system.
这表明 Filebeat 代理已配置成功,我们能够在 Kibana 仪表盘上看到实时日志。
以上就是本文的全部内容。如果这些步骤帮助你在 RHEL 8 / CentOS 8 系统上成功设置了多节点 Elastic Stack 集群,请不要犹豫,分享你的反馈和意见。
--------------------------------------------------------------------------------

View File

@ -0,0 +1,170 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to start developing with .NET)
[#]: via: (https://opensource.com/article/19/9/getting-started-net)
[#]: author: (Seth Kenlon https://opensource.com/users/sethhttps://opensource.com/users/alex-bunardzichttps://opensource.com/users/alex-bunardzic)
如何开始使用 .NET 进行开发
======
了解 .NET 开发平台启动和运行的基础知识。
![Coding on a computer][1]
.NET 框架由 Microsoft 于 2000 年发布。该平台的开源实现 [Mono][2] 在 21 世纪初成为了争议的焦点,因为微软拥有 .NET 技术的多项专利,并且可能使用这些专利来结束 Mono。幸运的是在 2014 年,微软宣布 .NET 开发平台从此成为 MIT 许可下的开源平台,并在 2016 年收购了开发 Mono 的 Xamarin 公司。
.NET 和 Mono 已经同时可用于 C#、F#、GTK+、Visual Basic、Vala 等的跨平台编程环境。使用 .NET 和 Mono 创建的程序已经应用于 Linux、BSD、Windows、MacOS、Android甚至一些游戏机。你可以使用 .NET 或 Mono 来开发 .NET 应用。它们都是开源的,并且都有活跃和充满活力的社区。本文重点介绍 Microsoft .NET 环境实现。
### 如何安装 .NET
.NET 的下载分为多个包:一个仅包含 .NET 运行时,另一个是包含了运行时的 .NET Core SDK。根据架构和操作系统版本这些包可能有多个版本。要开始使用 .NET 进行开发,你必须[安装 SDK][3]。它为你提供了 [dotnet][4] 终端或 PowerShell 命令,你可以使用它们来创建和生成项目。
#### Linux
要在 Linux 上安装 .NET首先将 Microsoft Linux 软件仓库添加到你的计算机。
在 Fedora 上:
```
$ sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
$ sudo wget -q -O /etc/yum.repos.d/microsoft-prod.repo https://packages.microsoft.com/config/fedora/27/prod.repo
```
在 Ubuntu 上:
```
$ wget -q https://packages.microsoft.com/config/ubuntu/19.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb
$ sudo dpkg -i packages-microsoft-prod.deb
```
接下来,使用包管理器安装 SDK将 **<X.Y>** 替换为当前的 .NET 版本号:
在 Fedora 上:
```
$ sudo dnf install dotnet-sdk-<X.Y>
```
在 Ubuntu 上:
```
$ sudo apt install apt-transport-https
$ sudo apt update
$ sudo apt install dotnet-sdk-<X.Y>
```
下载并安装所有包后,打开终端并输入下面命令确认安装:
```
$ dotnet --version
X.Y.Z
```
#### Windows
如果你使用的是 Microsoft Windows那么你可能已经安装了 .NET 运行时。但是,要开发 .NET 应用,你还必须安装 .NET Core SDK。
首先,[下载安装程序][3]。请认准下载 .NET Core 进行跨平台开发(.NET Framework 仅适用于 Windows。下载 **.exe** 文件后,双击该文件启动安装向导,然后单击两下进行安装:接受许可证并允许安装继续。
![Installing dotnet on Windows][5]
然后,从左下角的“应用程序”菜单中打开 PowerShell。在 PowerShell 中,输入测试命令:
```
PS C:\Users\osdc> dotnet
```
如果你看到有关 dotnet 安装的信息,那么说明 .NET 已正确安装。
#### MacOS
如果你使用的是 Apple Mac请下载 **.pkg** 形式的 [Mac 安装程序][3]。下载并双击 **.pkg** 文件,然后单击安装程序。你可能需要授予安装程序权限,因为该软件包并非来自 App Store。
下载并安装所有软件包后,请打开终端并输入以下命令来确认安装:
```
$ dotnet --version
X.Y.Z
```
### Hello .NET
**dotnet** 命令提供了一个用 .NET 编写的 “hello world” 示例程序。或者,更准确地说,该命令提供了示例应用。
首先,使用 **dotnet** 命令以及 **new****console** 参数创建一个控制台应用的项目目录及所需的代码基础结构。使用 **-o** 选项指定项目名称:
```
$ dotnet new console -o hellodotnet
```
这将在当前目录中创建一个名为 **hellodotnet** 的目录。进入你的项目目录并看一下:
```
$ cd hellodotnet
$ dir
hellodotnet.csproj  obj  Program.cs
```
**Program.cs** 是一个空的 C# 文件,它包含了一个简单的 Hello World 程序。在文本编辑器中打开浏览一下。微软的 Visual Studio Code 是一个使用 dotnet 编写的跨平台的开源应用,虽然它不是一个糟糕的文本编辑器,但它会收集用户的大量数据(在它的二进制发行版的许可证中授予了自己权限)。如果要尝试使用 Visual Studio Code请考虑使用 [VSCodium][6],它是使用 Visual Studio Code 的 MIT 许可的源码构建的版本而_没有_远程收集请阅读[文档][7]来禁止此构建中的其他形式追踪)。或者,只需使用现有的你最喜欢的文本编辑器或 IDE。
新控制台应用中的样板代码为:
```
using System;
namespace hellodotnet
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
        }
    }
}
```
要运行该程序,请使用 **dotnet run** 命令:
```
$ dotnet run
Hello World!
```
这是 .NET 和 **dotnet** 命令的基本工作流程。这里有完整的 [.NET C# 指南][8],并且都是与 .NET 相关的内容。关于 .NET 实战示例,请关注 [Alex Bunardzic][9] 在 opensource.com 中的变异测试文章。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/getting-started-net
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/sethhttps://opensource.com/users/alex-bunardzichttps://opensource.com/users/alex-bunardzic
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_laptop_hack_work.png?itok=aSpcWkcl (Coding on a computer)
[2]: https://www.monodevelop.com/
[3]: https://dotnet.microsoft.com/download
[4]: https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet?tabs=netcore21
[5]: https://opensource.com/sites/default/files/uploads/dotnet-windows-install.jpg (Installing dotnet on Windows)
[6]: https://vscodium.com/
[7]: https://github.com/VSCodium/vscodium/blob/master/DOCS.md
[8]: https://docs.microsoft.com/en-us/dotnet/csharp/tutorials/intro-to-csharp/
[9]: https://opensource.com/users/alex-bunardzic (View user profile.)