Merge remote-tracking branch 'LCTT/master'

This commit is contained in:
Xingyu Wang 2019-08-08 18:08:27 +08:00
commit 4b13d8b800
18 changed files with 1876 additions and 416 deletions


@ -0,0 +1,130 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11200-1.html)
[#]: subject: (How to detect automatically generated emails)
[#]: via: (https://arp242.net/weblog/autoreply.html)
[#]: author: (Martin Tournoij https://arp242.net/)
How to detect automatically generated emails
======
![](https://img.linux.net.cn/data/attachment/album/201908/08/003503fw0w0pzx2ue6a6a6.jpg)
When you send automated replies from an email system, you need to take care not to reply to automatically generated emails. At best, you'll get useless delivery-failure messages; more likely, you'll end up with an infinite email loop and a world of mess.
It turns out that reliably detecting automatically generated emails isn't always easy. Here are some observations, based on a detector I wrote for this purpose and a scan of about 100,000 emails with it (a large personal archive plus a company archive).
### The Auto-Submitted header
Defined in [RFC 3834][1].
This is the “official” standard way to indicate that your message is an auto-reply. If the `Auto-Submitted` header is present with a value other than `no`, you should **not** send a reply.
### The X-Auto-Response-Suppress header
Defined [by Microsoft][2].
This header is used by Microsoft Exchange, Outlook, and a few other products. Many newsletters and such also set it. If `X-Auto-Response-Suppress` contains `DR` (“Suppress delivery reports”), `AutoReply` (“Suppress auto-reply messages other than OOF notifications”), or `All`, you should **not** send a reply.
### The List-Id and List-Unsubscribe headers
Defined in [RFC 2919][3].
You usually don't want to send auto-replies to mailing lists or newsletters. Pretty much all mailing lists and most newsletters set at least one of these headers. If either of them is present, you should **not** send a reply. The header's value doesn't matter.
### The Feedback-ID header
Defined [by Google][4].
Gmail uses this header to identify newsletters and to generate statistics and reports for their owners. If this header is present, you should **not** send a reply. The value doesn't matter.
### Non-standard ways
The methods above are well defined (even if some are non-standard). Unfortunately, some email systems don't use any of them :-( Here are some additional measures.
#### The Precedence header
Not really defined anywhere; [RFC 2076][5] discourages its use (but the header is commonly encountered).
Note that checking for the mere presence of this header is not recommended, since some emails use `normal` and some other (rare) values (though this is uncommon).
My recommendation is to **not** send a reply if the value case-insensitively matches `bulk`, `auto_reply`, or `list`.
#### Other uncommon headers
These are some other (uncommon) headers I've encountered. If one of them is set, I recommend **not** sending an auto-reply. Most of these emails also set one of the headers above, but some don't (which is uncommon).
  * `X-MSFBL`: couldn't really find a definition (a Microsoft header?), but I only have automatically generated emails with this header.
  * `X-Loop`: not really defined anywhere, and somewhat rare, but it does turn up. It's usually set to the address that shouldn't receive the email, but `X-Loop: yes` is also encountered.
  * `X-Autoreply`: fairly rare, and always seems to have a value of `yes`.
#### Email addresses
Check whether the `From` or `Reply-To` header contains `noreply`, `no-reply`, or `no_reply` (regular expression: `^no.?reply@`).
#### HTML part only
If an email has only an HTML part and no text part, that's a good indication it's an automatically generated email or newsletter. Pretty much all mail clients set a text part.
#### Delivery failure messages
Many delivery failure messages don't really indicate that they're failures. Some ways to check:
  * `From` contains `mailer-daemon` or `Mail Delivery Subsystem`.
#### Specific mail-library fingerprints
Many mail libraries leave some sort of fingerprint, and most regular mail clients override it with their own data. Checking for these seems to work fairly reliably.
  * `X-Mailer: Microsoft CDO for Windows 2000`: set by some Microsoft software; I could only find it in automatically generated emails. Yes, it was still in use in 2015.
  * The `Message-ID` header contains `.JavaMail.`: I found a few (5 in 50k) regular messages with this, but not many; the vast majority (thousands) were newsletters, order confirmations, and the like.
  * `X-Mailer` starts with `PHP`: this should catch both `X-Mailer: PHP/5.5.0` and `X-Mailer: PHPmailer XXX XXX`. Same story as JavaMail.
  * `X-Library` is present; only [Indy][6] seems to set this.
  * `X-Mailer` starts with `wdcollect`: set by some Plesk emails.
  * `X-Mailer` starts with `MIME-tools`.
### A final precaution: limit the number of replies
Even if you follow all of the advice above, you may still run into an email program that evades all these detections. This can be very dangerous, since email systems that simply “reply if there is email” risk causing infinite email loops.
For this reason, I recommend logging the emails you send automatically and rate-limiting this to at most a few emails over a span of minutes. This breaks the loop chain.
We use one email per five minutes, but a less strict setting will probably work well too.
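A sketch of such a limiter, assuming a simple in-memory timestamp list (the one-per-five-minutes default mirrors the setting mentioned above; the class and method names are hypothetical):

```python
import time

class ReplyRateLimiter:
    """Allow at most `max_sends` auto-replies per `window` seconds."""

    def __init__(self, max_sends=1, window=300):
        self.max_sends = max_sends
        self.window = window
        self.sent = []  # timestamps of recent auto-replies

    def allow(self, now=None):
        now = time.time() if now is None else now
        # Drop timestamps that have fallen outside the window.
        self.sent = [t for t in self.sent if now - t < self.window]
        if len(self.sent) >= self.max_sends:
            return False  # over the limit: break a potential mail loop
        self.sent.append(now)
        return True
```

Call `allow()` before each automated send and silently drop the reply when it returns `False`.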
### What headers to set on your auto-reply
The specifics depend on what kind of emails you send. This is what we use for auto-reply emails:
```
Auto-Submitted: auto-replied
X-Auto-Response-Suppress: All
Precedence: auto_reply
```
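For instance, with Python's standard `email` library, stamping these headers onto an outgoing auto-reply could look like this (a sketch; the addresses, subject, and body text are placeholders):

```python
from email.message import EmailMessage

reply = EmailMessage()
reply["From"] = "support@example.com"   # placeholder address
reply["To"] = "sender@example.com"      # placeholder address
reply["Subject"] = "Re: your message"
# Mark the message as an auto-reply so well-behaved systems won't answer it.
reply["Auto-Submitted"] = "auto-replied"
reply["X-Auto-Response-Suppress"] = "All"
reply["Precedence"] = "auto_reply"
reply.set_content("Thanks, we received your message and will get back to you.")
```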
### Feedback
You can email me at [martin@arp242.net][7] or [create a GitHub issue][8] with feedback, questions, etc.
--------------------------------------------------------------------------------
via: https://arp242.net/weblog/autoreply.html
Author: [Martin Tournoij][a]
Collector: [lujun9972][b]
Translator: [wxy](https://github.com/wxy)
Proofreader: [wxy](https://github.com/wxy)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://arp242.net/
[b]: https://github.com/lujun9972
[1]: http://tools.ietf.org/html/rfc3834
[2]: https://msdn.microsoft.com/en-us/library/ee219609(v=EXCHG.80).aspx
[3]: https://tools.ietf.org/html/rfc2919
[4]: https://support.google.com/mail/answer/6254652?hl=en
[5]: http://www.faqs.org/rfcs/rfc2076.html
[6]: http://www.indyproject.org/index.en.aspx
[7]: mailto:martin@arp242.net
[8]: https://github.com/Carpetsmoker/arp242.net/issues/new


@ -0,0 +1,73 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Microsoft finds Russia-backed attacks that exploit IoT devices)
[#]: via: (https://www.networkworld.com/article/3430356/microsoft-finds-russia-backed-attacks-that-exploit-iot-devices.html)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
Microsoft finds Russia-backed attacks that exploit IoT devices
======
Microsoft says default passwords, unpatched devices, poor inventory of IoT gear led to exploits against companies by Russia's STRONTIUM hacking group.
![Zmeel / Getty Images][1]
The STRONTIUM hacking group, which has been strongly linked by security researchers to Russia's GRU military intelligence agency, was responsible for an [IoT][2]-based attack on unnamed Microsoft customers, the company said in a blog post issued Monday by its security response center.
Microsoft [said in the blog][3] that the attack, which it discovered in April, targeted three specific IoT devices (a VoIP phone, a video decoder, and a printer; the company declined to specify the brands) and used them to gain access to unspecified corporate networks. Two of the devices were compromised because nobody had changed the manufacturer's default password, and the other one hadn't had the latest security patch applied.
**More on IoT:**
  * [Most powerful Internet of Things companies][5]
* [10 Hot IoT startups to watch][6]
* [The 6 ways to make money in IoT][7]
* [What is digital twin technology? [and why it matters]][8]
* [Blockchain, service-centric networking key to IoT success][9]
* [Getting grounded in IoT networking and security][10]
* [Building IoT-ready networks must become a priority][11]
* [What is the Industrial IoT? [And why the stakes are so high]][12]
Devices compromised in this way acted as back doors to secured networks, allowing the attackers to freely scan those networks for further vulnerabilities, access additional systems, and gain more and more information. The attackers were also seen investigating administrative groups on compromised networks, in an attempt to gain still more access, as well as analyzing local subnet traffic for additional data.
STRONTIUM, which has also been referred to as Fancy Bear, Pawn Storm, Sofacy and APT28, is thought to be behind a host of malicious cyber-activity undertaken on behalf of the Russian government, including the 2016 hack of the Democratic National Committee, attacks on the World Anti-Doping Agency, the targeting of journalists investigating the shoot-down of Malaysia Airlines Flight 17 over Ukraine, sending death threats to the wives of U.S. military personnel under a false flag and much more.
According to an indictment released in July 2018 by the office of Special Counsel Robert Mueller, the architects of the STRONTIUM attacks are a group of Russian military officers, all of whom are wanted by the FBI in connection with those crimes.
Microsoft notifies customers it discovers have been attacked by nation-states, and it has delivered about 1,400 such notifications related to STRONTIUM over the past 12 months. Most of those (four in five) went to organizations in the government, military, defense, IT, medicine, education, and engineering sectors; the remainder went to NGOs, think tanks, and other “politically affiliated organizations,” Microsoft said.
The heart of the vulnerability, according to the Microsoft team, was a lack of full awareness by institutions of all the devices running on their networks. They recommended, among other things, cataloguing all IoT devices running in a corporate environment, implementing custom security policies for each device, walling off IoT devices on their own separate networks wherever practical, and performing regular patch and configuration audits on IoT gadgets.
Join the Network World communities on [Facebook][14] and [LinkedIn][15] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3430356/microsoft-finds-russia-backed-attacks-that-exploit-iot-devices.html
Author: [Jon Gold][a]
Collector: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/07/cso_russian_hammer_and_sickle_binary_code_by_zmeel_gettyimages-927363118_2400x1600-100801412-large.jpg
[2]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
[3]: https://msrc-blog.microsoft.com/2019/08/05/corporate-iot-a-path-to-intrusion/
[4]: https://www.networkworld.com/article/3207535/internet-of-things/what-is-the-iot-how-the-internet-of-things-works.html
[5]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html
[6]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html
[7]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html
[8]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html
[9]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
[10]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
[11]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
[12]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
[13]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[14]: https://www.facebook.com/NetworkWorld/
[15]: https://www.linkedin.com/company/network-world


@ -0,0 +1,62 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Intel pulls the plug on Omni-Path networking fabric architecture)
[#]: via: (https://www.networkworld.com/article/3429614/intel-pulls-the-plug-on-omni-path-networking-fabric-architecture.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Intel pulls the plug on Omni-Path networking fabric architecture
======
Omni-Path Architecture, which Intel had high hopes for in the high-performance computing (HPC) space, has been scrapped after one generation.
![Gerd Altmann \(CC0\)][1]
Intel's battle to gain ground in the high-performance computing (HPC) market isn't going so well. The Omni-Path Architecture it had pinned its hopes on has been scrapped after one generation.
An Intel spokesman confirmed to me that the company will no longer offer Intel OmniPath Architecture 200 (OPA200) products to customers, but it will continue to encourage customers, OEMs, and partners to use OPA100 in new designs. 
“We are continuing to sell, maintain, and support OPA100. We actually announced some new features for OPA100 back at International Supercomputing in June,” the spokesperson said via email.
**[ Learn [who's developing quantum computers][2]. ]**
Intel said it continues to invest in connectivity solutions for its customers and that the recent [acquisition of Barefoot Networks][3] is an example of Intels strategy of supporting end-to-end cloud networking and infrastructure. It would not say if Barefoots technology would be the replacement for OPA.
While Intel owns the supercomputing market, it has not been so lucky with the HPC fabric, the network that connects CPUs and memory for faster data sharing. Market leader Mellanox, with its High Data Rate (HDR) InfiniBand framework, rules the roost, and now [Mellanox is about to be acquired][4] by Intel's biggest nemesis, Nvidia.
Technically, Intel was a bit behind Mellanox. OPA100 is 100 Gbits, and OPA200 was intended to be 200 Gbits, but Mellanox was already at 200 Gbits and is set to introduce 400-Gbit products later this year.
Analyst Jim McGregor isn't surprised. “They have a history where if they don't get high uptick on something and don't think it's of value, they'll kill it. A lot of times when they go through management changes, they look at how to optimize. Paul [Otellini] did this extensively. I would expect Bob [Swan, the newly minted CEO] to do that and say these things aren't worth our investment,” he said.
The recent [sale of the 5G unit to Apple][5] is another example of Swan cleaning house. McGregor notes Apple was hounding them to invest more in 5G and at the same time tried to hire people away.
The writing was on the wall for OPA as far back as March when Intel introduced Compute Express Link (CXL), a cache coherent accelerator interconnect that basically does what OPA does. At the time, people were wondering where this left OPA. Now they know.
The problem once again is that Intel is swimming upstream. CXL competes with Cache Coherent Interconnect for Accelerators (CCIX) and OpenCAPI, the former championed by basically all of its competition and the latter promoted by IBM.
All are built on PCI Express (PCIe) and bring features such as cache coherence to PCIe, which it does not have natively. Both CXL and CCIX can run on top of PCIe and co-exist with it. The trick is that the host and the accelerator must have matching support. A host with CCIX can only work with a CCIX device; there is no mixing them.
As I said, CCIX support is basically everybody but Intel: ARM, AMD, IBM, Marvell, Qualcomm, and Xilinx are just a few of its backers. CXL includes Intel, Hewlett Packard Enterprise, and Dell EMC. The sane thing to do would be to merge the two standards, take the best of both, and make one standard. But anyone who remembers the HD-DVD/Blu-ray battle of the last decade knows how likely that is.
Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3429614/intel-pulls-the-plug-on-omni-path-networking-fabric-architecture.html
Author: [Andy Patrizio][a]
Collector: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/07/digital_transformation_finger_tap_causes_waves_of_interconnected_digital_ripples_by_gerd_altmann_cc0_via_pixabay_1200x800-100765086-large.jpg
[2]: https://www.networkworld.com/article/3275385/who-s-developing-quantum-computers.html
[3]: https://www.idgconnect.com/news/1502086/intel-delves-deeper-network-barefoot-networks
[4]: https://www.networkworld.com/article/3356444/nvidia-grabs-mellanox-out-from-under-intels-nose.html
[5]: https://www.computerworld.com/article/3411922/what-you-need-to-know-about-apples-1b-intel-5g-modem-investment.html
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world


@ -0,0 +1,62 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Is Perl going extinct?)
[#]: via: (https://opensource.com/article/19/8/command-line-heroes-perl)
[#]: author: (Matthew Broberg https://opensource.com/users/mbbroberghttps://opensource.com/users/mbbroberghttps://opensource.com/users/shawnhcoreyhttps://opensource.com/users/sethhttps://opensource.com/users/roger78)
Is Perl going extinct?
======
Command Line Heroes explores the meteoric rise of Perl, its fall from
the spotlight, and what's next in the programming language's lifecycle.
![Listen to the Command Line Heroes Podcast][1]
Is there an [endangered species][2] list for programming languages? If there is, [Command Line Heroes][3] suggests that Perl is somewhere between vulnerable and critically endangered. The dominant language of the 1990s is the focus of this week's podcast (Season 3, Episode 4), which explores its highs and lows since the language was introduced over 30 years ago.
### The timeline
1991 was the year that changed everything. Tim Berners-Lee released the World Wide Web. The internet had been connecting computers for 20 years, but the web connected people in brand new ways. An entire new frontier of web-based development opened.
Last week's episode explored [how JavaScript was born][4] and launched the browser wars. Before that language dominated the web, Perl was incredibly popular. It was open source, general purpose, and ran on nearly every Unix-like platform. Perl also offered a set of familiar practices that any sysadmin would appreciate.
### What happened?
So, if Perl was doing so well in the '90s, why did it start to sink? The dot-com bubble burst in 2000, and the first heady rush of web development was about to give way to a slicker, faster, different generation. Python became a favorite for first-time developers, much like Perl used to be an attractive first language that stole newbies away from FORTRAN or C.
Perl was regarded highly because it allowed developers to solve a problem in many ways, but that feature later became known as a bug. [Python's push toward a single right answer][5] ended up being where many people wanted to go. Conor Myhrvold wrote in _Fast Company_ how Python became more attractive and [what Perl might have done to keep up][6]. For these reasons and myriad others, like the delay of Perl 6, it lost momentum.
### Lifecycle management
Overall, I'm comfortable with the idea that some languages don't make it. [BASIC][7] isn't exactly on the software bootcamp hit list these days. But maybe Perl isn't on the same trajectory and could be best-in-class for a more specific type of problem around glue code for system challenges.
I love how Command Line Heroes host [Saron Yitbarek][8] summarizes it at the end of the podcast episode:
> "Languages have lifecycles. When new languages emerge, exquisitely adapted to new realities, an option like Perl might occupy a smaller, more niche area. But that's not a bad thing. Our languages should expand and shrink their communities as our needs change. Perl was a crucial player in the early history of web development—and it stays with us in all kinds of ways that become obvious with a little history and a look at the big picture."
Learning about Perl's rise and search for a new niche makes me wonder which of the new languages we're developing today will still be around in 30 years.
Command Line Heroes will cover programming languages for all of season 3. [Subscribe so you don't miss a single one][3]. I would love to hear your thoughts in the comments below.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/command-line-heroes-perl
Author: [Matthew Broberg][a]
Collector: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://opensource.com/users/mbbroberghttps://opensource.com/users/mbbroberghttps://opensource.com/users/shawnhcoreyhttps://opensource.com/users/sethhttps://opensource.com/users/roger78
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command-line-heroes-520x292.png?itok=s_F6YEoS (Listen to the Command Line Heroes Podcast)
[2]: https://www.nationalgeographic.org/media/endangered/
[3]: https://www.redhat.com/en/command-line-heroes
[4]: https://opensource.com/article/19/7/command-line-heroes-javascript
[5]: https://opensource.com/article/19/6/command-line-heroes-python
[6]: https://www.fastcompany.com/3026446/the-fall-of-perl-the-webs-most-promising-language
[7]: https://opensource.com/19/7/command-line-heroes-ruby-basic
[8]: https://twitter.com/saronyitbarek


@ -0,0 +1,249 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Intro to Corteza, an open source alternative to Salesforce)
[#]: via: (https://opensource.com/article/19/8/corteza-open-source-alternative-salesforce)
[#]: author: (Denis Arh https://opensource.com/users/darhhttps://opensource.com/users/scottnesbitthttps://opensource.com/users/jason-bakerhttps://opensource.com/users/jason-baker)
Intro to Corteza, an open source alternative to Salesforce
======
Learn how to get Corteza up and running, and how to use it.
![Team communication, chat][1]
[Corteza][2] is an open source, self-hosted digital work platform for growing an organization's productivity, enabling its relationships, and protecting its work and the privacy of those involved. The project was developed entirely in the public domain by [Crust Technology][3]. It has four core features: customer relationship management, a low-code development platform, messaging, and a unified workspace. This article will also explain how to get started with Corteza on the command line.
### Customer relationship management
[Corteza CRM][4] is a feature-rich open source CRM platform that gives organizations a 360-degree overview of leads and accounts. It's flexible, can easily be tailored to any organization, and includes a powerful automation module to automate processes.
![Corteza CRM screenshot][5]
### Low-code development platform
[Corteza Low Code][6] is an open source [low-code development platform][7] and alternative to Salesforce Lightning. It has an intuitive drag-and-drop builder and allows users to create and deploy record-based business applications with ease. Corteza CRM is built on Corteza Low Code.
![Corteza Low Code screenshot][8]
### Messaging
[Corteza Messaging][9] is an alternative to Salesforce Chatter and Slack. It is a secure, high-performance collaboration platform that allows teams to collaborate more efficiently and communicate safely with other organizations or customers. It integrates tightly with Corteza CRM and Corteza Low Code.
![Corteza Messaging screenshot][10]
### Unified workspace
[Corteza One][11] is a unified workspace to access and run third-party web and Corteza applications. Centralized access management from a single console enables administrative control over who can see or access applications.
![Corteza One screenshot][12]
## Set up Corteza
You can set up and run the Corteza platform with a set of simple command-line commands.
### Set up Docker
If Docker is already set up on the machine where you want to use Corteza, you can skip this section. (If you are using a Docker version below 18.0, I strongly encourage you to update.)
If you're not sure whether you have Docker, open your console or terminal and enter:
```
$> docker -v
```
If the response is "command not found," download and install a Docker community edition for [desktop][13] or [server or cloud][14] that fits your environment.
### Configure Corteza locally
By using Docker's command-line interface (CLI) utility **docker-compose** (which simplifies work with containers), Corteza's setup is as effortless as possible.
The following script provides the absolute minimum configuration to set up a local version of Corteza. If you prefer, you can [open this file on GitHub][15]. Note that this setup does not use persistent storage; you will need to set up container volumes for that.
```
version: '2.0'
services:
  db:
    image: percona:8.0
    environment:
      MYSQL_DATABASE:      corteza
      MYSQL_USER:          corteza
      MYSQL_PASSWORD:      oscom-tutorial
      MYSQL_ROOT_PASSWORD: supertopsecret
  server:
    image: cortezaproject/corteza-server:latest
    # Map internal 80 port (where Corteza API is listening)
    # to external port 10080. If you change this, make sure you change API_BASEURL setting below
    ports: [ "10080:80" ]
    environment:
      # Tell corteza-server where can it be reached from the outside
      VIRTUAL_HOST: localhost:10080
      # Serving the app from the localhost port 20080 is not very usual setup,
      # this will override settings auto-discovery procedure (provision) and
      # use custom values for frontend URL base
      PROVISION_SETTINGS_AUTH_FRONTEND_URL_BASE: http://localhost:20080
      # Database connection, make sure username, password, and database matches values in the db service
      DB_DSN: corteza:oscom-tutorial@tcp(db:3306)/corteza?collation=utf8mb4_general_ci
  webapp:
    image: cortezaproject/corteza-webapp:latest
    # Map internal 80 port (where we serve the web application)
    # to external port 20080.
    ports: [ "20080:80" ]
    environment:
      # Where API can be found
      API_BASEURL: localhost:10080
      # We're using one service for the API
      MONOLITH_API: 1
```
Run the services by entering:
```
docker-compose up
```
You'll see a stream of log lines announcing the database container initialization. Meanwhile, Corteza server will try (and retry) to connect to the database. If you change the database configuration (i.e., username, database, password), you'll get some errors.
When Corteza server connects, it initializes "store" (for uploaded files), and the settings-discovery process will try to auto-discover as much as possible. (You can help by setting the **VIRTUAL_HOST** and **PROVISION_SETTINGS_AUTH_FRONTEND_URL_BASE** variables just right for your environment.)
When you see "Starting HTTP server with REST API," Corteza server is ready for use.
#### Troubleshooting
If you misconfigure **VIRTUAL_HOST**, **API_BASEURL**, or **PROVISION_SETTINGS_AUTH_FRONTEND_URL_BASE**, your setup will most likely be unusable. The easiest fix is bringing all services down (**docker-compose down**) and back up (**docker-compose up**) again, but this will delete all data. See "Splitting services" below if you want to make it work without this purge-and-restart approach.
### Log in and explore
Open <http://localhost:20080> in your browser, and give Corteza a try.
First, you'll see the login screen. Follow the sign-up link and register. Corteza auto-promotes the first user to the administrator role. You can explore the administration area and the messaging and low-code tools with the support of the [user][16] and [administrator][17] guides.
### Run in the background
If you're not familiar with **docker-compose**, you can bring the services up with the **-d** flag to run them in the background. You can still access the service logs with the **docker-compose logs** command if you want.
## Expose Corteza to your internal network and the world
If you want to use Corteza in production and with other users, take a look at Corteza's [simple][18] and [advanced][19] deployment setup examples.
### Establish data persistency
If you use one of the simple or advanced examples, you can persist your data by uncommenting one of the volume line pairs.
If you want to store data on your local filesystem, you might need to pay special attention to file permissions. Review the logs when starting up the services for any related errors.
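As a sketch, persisting the database in the minimal compose file above could look like the fragment below. The MySQL data directory (`/var/lib/mysql`) is the Percona image's standard location; the server-side mount point shown is a hypothetical placeholder, so check the simple/advanced examples for the path your version actually uses.

```yaml
version: '2.0'
services:
  db:
    image: percona:8.0
    volumes:
      # Named volume so the MySQL data survives `docker-compose down`.
      - dbdata:/var/lib/mysql
  server:
    image: cortezaproject/corteza-server:latest
    volumes:
      # Hypothetical mount point for the uploaded-files "store"; see the
      # simple/advanced deployment examples for the real path.
      - serverstore:/data
volumes:
  dbdata:
  serverstore:
```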
### Proxy requests to containers
Both the server and web-app containers listen on port 80 by default. If you want to expose them to the outside world, you need to map them to different outside ports, which is not very user-friendly. We don't want to tell users to access Corteza on (for example) **corteza.example.org:31337**; they should reach it directly on **corteza.example.org**, with the API served from **api.corteza.example.org**.
The easiest way to do this is with another Docker image: **[jwilder/nginx-proxy][20]**. You can find a [configuration example][21] in Corteza's docs. When you spin up an **nginx-proxy** container, it listens for Docker events (e.g., when a container starts or stops), reads the value of each container's **VIRTUAL_HOST** variable, and creates an [Nginx][22] configuration that proxies requests for that domain to the container.
### Secure HTTP
Corteza server speaks only plain HTTP (and HTTP 2.0). It does not do any SSL termination, so you must set up the (reverse) proxy that handles HTTPS traffic and redirects it to internal HTTP ports.
If you use **jwilder/nginx-proxy** as your front end, you can use another image, **[jrcs/letsencrypt-nginx-proxy-companion][23]**, to take care of SSL certificates from Let's Encrypt. It listens for Docker events (in a similar way to **nginx-proxy**) and reads the **LETSENCRYPT_HOST** variable.
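Wired together, the two proxy images could be declared roughly like this (a sketch based on the two projects' documented usage, not Corteza's official example; the domain and email address are placeholders):

```yaml
version: '2.0'
services:
  proxy:
    image: jwilder/nginx-proxy
    ports: [ "80:80", "443:443" ]
    volumes:
      # The proxy watches Docker events through the Docker socket.
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - certs:/etc/nginx/certs:ro
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    volumes_from: [ proxy ]
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - certs:/etc/nginx/certs:rw
  webapp:
    image: cortezaproject/corteza-webapp:latest
    environment:
      # nginx-proxy routes requests for this domain to the container;
      # the companion requests a certificate for it.
      VIRTUAL_HOST: corteza.example.org
      LETSENCRYPT_HOST: corteza.example.org
      LETSENCRYPT_EMAIL: admin@example.org  # placeholder address
volumes:
  certs:
```

See the configuration example in Corteza's docs for a complete, tested version.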
### Configuration
An ENV file holds further configuration values; see the [docs][24] for details. There are two levels of configuration: ENV variables, and settings stored in the database.
### Authentication
Along with its internal authentication capabilities (with usernames and encrypted passwords stored in the database), Corteza supports registration and authentication with GitHub, Google, Facebook, LinkedIn, or any other [OpenID Connect][25] (OIDC)-compatible identity provider.
You can add any [external OIDC provider][26] using auto-discovery or by explicitly setting your key and secret. (Note that GitHub, Google, Facebook, and LinkedIn require you to register an application and provide a redirection link.)
### Splitting services
If you expect more load or want to separate services to fine-tune your containers, follow the [advanced deployment][19] example, which has more of a microservice-type architecture. It still uses a single database, but it can split into three parts.
### Other types of setup
In the future, Corteza will be available for different distributions, appliances, container types, and more.
If you have special requirements, you can always build Corteza as a backend service and the web applications [from source][27].
## Using Corteza
Once you have Corteza up and running, you can start using all its features. Here is a list of recommended actions.
### Log in
Open Corteza at your link (for example, **corteza.example.org**). You'll see the login screen.
![Corteza Login screenshot][28]
As mentioned above, Corteza auto-promotes the first user to the administrator role. If you haven't yet, explore the administration area, messaging, and low-code tools.
### Other tasks to get started
Here are some other tasks you'll want to do when you're setting up Corteza for your organization.
* Share the login link with others who will work in your Corteza instance so that they can sign up.
* Create roles, if needed, and assign users to the right roles. By default, only admins can do this.
* Corteza CRM has a complete list of modules. You can enter the CRM admin page to fine-tune the CRM to your needs or just start using it with the defaults.
* Enter Corteza Low Code and create a new low-code app from scratch.
* Create public and private channels in Corteza Messaging for your team. (For example a public "General" or a private "Sales" channel.)
## For more information
If you or your users have any questions—or would like to contribute—please join the [Corteza Community][29]. After you log in, please introduce yourself in the #Welcome channel.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/corteza-open-source-alternative-salesforce
Author: [Denis Arh][a]
Collector: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://opensource.com/users/darhhttps://opensource.com/users/scottnesbitthttps://opensource.com/users/jason-bakerhttps://opensource.com/users/jason-baker
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_team_mobile_desktop.png?itok=d7sRtKfQ (Team communication, chat)
[2]: https://www.cortezaproject.org/
[3]: https://www.crust.tech/
[4]: https://cortezaproject.org/technology/core/corteza-crm/
[5]: https://opensource.com/sites/default/files/uploads/screenshot-corteza-crm-1.png (Corteza CRM screenshot)
[6]: https://cortezaproject.org/technology/core/corteza-low-code/
[7]: https://en.wikipedia.org/wiki/Low-code_development_platform
[8]: https://opensource.com/sites/default/files/uploads/screenshot-corteza-low-code-2.png (Corteza Low Code screenshot)
[9]: https://cortezaproject.org/technology/core/corteza-messaging/
[10]: https://opensource.com/sites/default/files/uploads/screenshot-corteza-messaging-1.png (Corteza Messaging screenshot)
[11]: https://cortezaproject.org/technology/core/corteza-one/
[12]: https://opensource.com/sites/default/files/uploads/screenshot-cortezaone-unifiedworkspace.png (Corteza One screenshot)
[13]: https://www.docker.com/products/docker-desktop
[14]: https://hub.docker.com/search?offering=community&type=edition
[15]: https://github.com/cortezaproject/corteza-docs/blob/master/deploy/docker-compose/example-2019-07-os.com/docker-compose.yml
[16]: https://cortezaproject.org/documentation/user-guides/
[17]: https://cortezaproject.org/documentation/administrator-guides/
[18]: https://github.com/cortezaproject/corteza-docs/blob/master/deploy/docker-compose/simple.md
[19]: https://github.com/cortezaproject/corteza-docs/blob/master/deploy/docker-compose/advanced.md
[20]: https://github.com/jwilder/nginx-proxy
[21]: https://github.com/cortezaproject/corteza-docs/blob/master/deploy/docker-compose/nginx-proxy/docker-compose.yml
[22]: http://nginx.org/
[23]: https://github.com/JrCs/docker-letsencrypt-nginx-proxy-companion
[24]: https://github.com/cortezaproject/corteza-docs/tree/master/config
[25]: https://en.wikipedia.org/wiki/OpenID_Connect
[26]: https://github.com/cortezaproject/corteza-docs/blob/master/config/ExternalAuth.md
[27]: https://github.com/cortezaproject
[28]: https://opensource.com/sites/default/files/uploads/screenshot-corteza-login.png (Corteza Login screenshot)
[29]: https://latest.cortezaproject.org/

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why fear of failure is a silent DevOps virus)
[#]: via: (https://opensource.com/article/19/8/why-fear-failure-silent-devops-virus)
[#]: author: (Willy-Peter Schaub, Justin Kearns https://opensource.com/users/wpschaub)
Why fear of failure is a silent DevOps virus
======
In the context of software development, fail fast is innocuous to
DevOps.
![gears and lightbulb to represent innovation][1]
Do you recognize the following scenario? I do, because a manager once stifled my passion and innovation to the point I was anxious to make decisions, take risks, and focus on what's important: "_uncovering better ways of developing software by doing it and helping others do it_" ([Agile Manifesto, 2001][2]).
> **Developer:** "_The UX hypothesis failed. Users did not respond well to the new navigation experience, resulting in 80% of users switching back to the classic navigation._"
>
> **Manager:** "_This is really bad! How is this possible? We need to fix this because I'm not sure that our stakeholders want to hear about your failure._"
Here is a different, more powerful response.
> **Leader:** "What are the probable causes for our hypothesis failing and how can we improve the user experience? Let us analyze and share our remediation plan with our stakeholders."
It is all about the tone that centers on a constructive and blameless mindset.
There are various types of fear that paralyze people, who then infect their team. Some are fearful that nothing is ever enough; they push themselves to do more and more, view feedback as unfavorable, and often burn themselves out. They work hard, not smart—delivering volume, not value.
Others fear being judged, compare themselves with others, and shy away from accountability. They seldom share their knowledge, passion, or work; instead of vibrant collaboration, they find themselves wandering through a ghost ship filled with skeletons and foul fish.
> _"The only thing we have to fear is fear itself."_ Franklin D. Roosevelt
_Fear of failure_ is rife in many organizations, especially those that have embarked on a digital transformation journey. It's caused by the undesirability of failure, knowledge of repercussions, and lack of appetite for validated learning.
This is a strange phenomenon because when we look at the Manifesto for Agile Software Development, we notice references to "customer collaboration" and "responding to change." Lean thinking promotes principles such as "optimize the whole," "eliminate waste," "create knowledge," "build quality in," and "respect people." Also, two of the [Kanban principles][3] mention "visualize work" and "continuous improvement."
I have a hypothesis:
> "_I believe an organization will embrace failure if we elaborate the benefit of failure in the context of software engineering to all stakeholders._"
Let's look at a traditional software development lifecycle that strives for quality, is based on strict processes, and is sensitive to failure. We design, develop, and test all features in sequence. The solution is released to the customer when QA and security give us the "thumbs up," followed by a happy user (success) or unhappy user (failure).
![Traditional software development lifecycle][4]
With the traditional model, we have one opportunity to fail or succeed. This is probably an effective model if we are building a sensitive product such as a multimillion-dollar rocket or aircraft. Context is important!
Now let's look at a more modern software development lifecycle that strives for quality and _embraces failure_. We design, build, and test and release a limited release to our users for preview. We get feedback. If the user is happy (success), we move to the next feature. If the user is unhappy (failure), we either improve or scrap the feature based on validated feedback.
![Modern software development lifecycle][5]
Note that we have a minimum of one opportunity to fail per feature, giving us a minimum of 10 opportunities to improve our product, based on validated user feedback, before we release the same product. Essentially, this modern approach is a repetition of the traditional approach, but it's broken down into smaller release cycles. We cannot reduce the effort to design, develop, and test our features, but we can learn and improve the process. You can take this software engineering process a few steps further.
* **Continuous delivery** (CD) aims to deliver software in short cycles and releases features reliably, one at a time, at a click of a button by the business or user.
* **Test-driven development** (TDD) is a software engineering approach that creates many debates among stakeholders in business, development, and quality assurance. It relies on short and repetitive development cycles, each one crafting test cases based on requirements, failing, and developing software to pass the test cases.
* [**Hypothesis-driven development**][6] (HDD) is based on a series of experiments to validate or disprove a hypothesis in a complex problem domain where we have unknown unknowns. When the hypothesis fails, we pivot. When it passes, we focus on the next experiment. Like TDD, it is based on very short repetitions to explore and react on validated learning.
Yes, I am confident that failure is not bad. In fact, it is an enabler for innovation and continuous learning. It is important to emphasize that we need to embrace failure in the form of _fail fast_, which means that we slice our product into small units of work that can be developed and delivered as value, quickly and reliably. When we fail, the waste and impact must be minimized, and the validated learning maximized.
To avoid the fear of failure among engineers, all stakeholders in an organization need to trust the engineering process and embrace failure. The best antidote is leaders who are supportive and inspiring, along with having a collective blameless mindset to plan, prioritize, build, release, and support. We should not be reckless or oblivious of the impact of failure, especially when it impacts investors or livelihoods.
We cannot be afraid of failure when we're developing software. Otherwise, we will stifle innovation and evolution, which in turn suffocates the union of people, process, and continuous delivery of value, key ingredients of DevOps as defined by [Donovan Brown][7]:
> _"DevOps is the union of people, process, and products to enable continuous delivery of value to our end users."_
* * *
_Special thanks to Anton Teterine and Alex Bunardzic for sharing their perspectives on fear._
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/why-fear-failure-silent-devops-virus
作者:[Willy-Peter Schaub, Justin Kearns][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/wpschaub
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/innovation_lightbulb_gears_devops_ansible.png?itok=TSbmp3_M (gears and lightbulb to represent innovation)
[2]: https://agilemanifesto.org/
[3]: https://www.agilealliance.org/glossary/kanban
[4]: https://opensource.com/sites/default/files/uploads/waterfall.png (Traditional software development lifecycle)
[5]: https://opensource.com/sites/default/files/uploads/agile_1.png (Modern software development lifecycle)
[6]: https://opensource.com/article/19/6/why-hypothesis-driven-development-devops
[7]: http://donovanbrown.com/post/what-is-devops

translating by zyk2290
Two great uses for the cp command: Bash shortcuts
============================================================
### Here's how to streamline the backup and synchronize functions of the cp command.
![Two great uses for the cp command: Bash shortcuts ](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC)
> Image by: [Internet Archive Book Images][6]. Modified by Opensource.com. CC BY-SA 4.0
Last July, I wrote about [two great uses for the cp command][7]: making a backup of a file, and synchronizing a secondary copy of a folder.
Having discovered these great utilities, I find that they are more verbose than necessary, so I created shortcuts to them in my Bash shell startup script. I thought I'd share these shortcuts in case they are useful to others or could offer inspiration to Bash users who haven't quite taken on aliases or shell functions.
### Updating a second copy of a folder Bash alias
The general pattern for updating a second copy of a folder with cp is:
```
cp -r -u -v SOURCE-FOLDER DESTINATION-DIRECTORY
```
I can easily remember the -r option because I use it often when copying folders around. I can probably, with some more effort, remember -v, and with even more effort, -u (is it “update” or “synchronize” or…).
Or I can just use the [alias capability in Bash][8] to convert the cp command and options to something more memorable, like this:
```
alias sync='cp -r -u -v'
```
```
sync Pictures /media/me/4388-E5FE
```
Not sure if you already have a sync alias defined? You can list all your currently defined aliases by typing the word alias at the command prompt in your terminal window.
Like this so much you just want to start using it right away? Open a terminal window and type:
```
echo "alias sync='cp -r -u -v'" >> ~/.bash_aliases
```
```
me@mymachine~$ alias
alias alert='notify-send --urgency=low -i "$([ $? = 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e '\''s/^\s*[0-9]\+\s*//;s/[;&|]\s*alert$//'\'')"'
alias egrep='egrep --color=auto'
alias fgrep='fgrep --color=auto'
alias grep='grep --color=auto'
alias gvm='sdk'
alias l='ls -CF'
alias la='ls -A'
alias ll='ls -alF'
alias ls='ls --color=auto'
alias sync='cp -r -u -v'
me@mymachine:~$
```
### Making versioned backups Bash function
The general pattern for making a backup of a file with cp is:
```
cp --force --backup=numbered WORKING-FILE BACKED-UP-FILE
```
Besides remembering the options to the cp command, we also need to remember to repeat the WORKING-FILE name a second time. But why repeat ourselves when [a Bash function][9] can take care of that overhead for us, like this:
Again, you can save this to your .bash_aliases file in your home directory.
```
function backup {
    if [ $# -ne 1 ]; then
        echo "Usage: backup filename"
    elif [ -f "$1" ] ; then
        echo "cp --force --backup=numbered $1 $1"
        cp --force --backup=numbered "$1" "$1"
    else
        echo "backup: $1 is not a file"
    fi
}
```
The first if statement checks to make sure that only one argument is provided to the function; otherwise, it prints the correct usage with the echo command.
The elif statement checks to make sure the argument provided is a file, and if so, it (verbosely) uses the second echo to print the cp command to be used and then executes it.
If the single argument is not a file, the third echo prints an error message to that effect.
In my home directory, if I execute the backup command so defined on the file checkCounts.sql, I see that backup creates a file called checkCounts.sql.~1~. If I execute it once more, I see a new file checkCounts.sql.~2~.
Success! As planned, I can go on editing checkCounts.sql, but if I take a snapshot of it every so often with backup, I can return to the most recent snapshot should I run into trouble.
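If you'd like to watch the numbered backups appear without risking real files, the sequence above can be reproduced in a scratch directory. This relies on GNU coreutils, whose cp documents a special case: when the force and backup options are given and the source and destination name the same regular file, cp makes the backup first:

```
# Demonstrate numbered backups in a throwaway directory.
cd "$(mktemp -d)"
echo 'SELECT count(*) FROM checks;' > checkCounts.sql

cp --force --backup=numbered checkCounts.sql checkCounts.sql
cp --force --backup=numbered checkCounts.sql checkCounts.sql

ls -1
# checkCounts.sql
# checkCounts.sql.~1~
# checkCounts.sql.~2~
```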
At some point, it's better to start using git for version control, but backup as defined above is a nice cheap tool when you need to create snapshots but you're not ready for git.
### Conclusion
In my last article, I promised you that repetitive tasks can often be easily streamlined through the use of shell scripts, shell functions, and shell aliases.
Here I've shown concrete examples of the use of shell aliases and shell functions to streamline the synchronize and backup functionality of the cp command. If you'd like to learn more about this, check out the two articles cited above: [How to save keystrokes at the command line with alias][10] and [Shell scripting: An introduction to the shift method and custom functions][11], written by my colleagues Greg and Seth, respectively.
### About the author
[![](https://opensource.com/sites/default/files/styles/profile_pictures/public/clh_portrait2.jpg?itok=V1V-YAtY)][13] Chris Hermansen 
 Engaged in computing since graduating from the University of British Columbia in 1978, I have been a full-time Linux user since 2005 and a full-time Solaris, SunOS and UNIX System V user before that. On the technical side of things, I have spent a great deal of my career doing data analysis; especially spatial data analysis. I have a substantial amount of programming experience in relation to data analysis, using awk, Python, PostgreSQL, PostGIS and lately Groovy. I have also built a few... [more about Chris Hermansen][14]
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/1/two-great-uses-cp-command-update
作者:[Chris Hermansen][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/clhermansen
[1]:https://opensource.com/users/clhermansen
[2]:https://opensource.com/users/clhermansen
[3]:https://opensource.com/user/37806/feed
[4]:https://opensource.com/article/18/1/two-great-uses-cp-command-update?rate=J_7R7wSPbukG9y8jrqZt3EqANfYtVAwZzzpopYiH3C8
[5]:https://opensource.com/article/18/1/two-great-uses-cp-command-update#comments
[6]:https://www.flickr.com/photos/internetarchivebookimages/14803082483/in/photolist-oy6EG4-pZR3NZ-i6r3NW-e1tJSX-boBtf7-oeYc7U-o6jFKK-9jNtc3-idt2G9-i7NG1m-ouKjXe-owqviF-92xFBg-ow9e4s-gVVXJN-i1K8Pw-4jybMo-i1rsBr-ouo58Y-ouPRzz-8cGJHK-85Evdk-cru4Ly-rcDWiP-gnaC5B-pAFsuf-hRFPcZ-odvBMz-hRCE7b-mZN3Kt-odHU5a-73dpPp-hUaaAi-owvUMK-otbp7Q-ouySkB-hYAgmJ-owo4UZ-giHgqu-giHpNc-idd9uQ-osAhcf-7vxk63-7vwN65-fQejmk-pTcLgA-otZcmj-fj1aSX-hRzHQk-oyeZfR
[7]:https://opensource.com/article/17/7/two-great-uses-cp-command
[8]:https://opensource.com/article/17/5/introduction-alias-command-line-tool
[9]:https://opensource.com/article/17/1/shell-scripting-shift-method-custom-functions
[10]:https://opensource.com/article/17/5/introduction-alias-command-line-tool
[11]:https://opensource.com/article/17/1/shell-scripting-shift-method-custom-functions
[12]:https://opensource.com/tags/linux
[13]:https://opensource.com/users/clhermansen
[14]:https://opensource.com/users/clhermansen

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to detect automatically generated emails)
[#]: via: (https://arp242.net/weblog/autoreply.html)
[#]: author: (Martin Tournoij https://arp242.net/)
How to detect automatically generated emails
======
When you send out an auto-reply from an email system you want to take care to not send replies to automatically generated emails. At best, you will get a useless delivery failure. At worst, you will get an infinite email loop and a world of chaos.
Turns out that reliably detecting automatically generated emails is not always easy. Here are my observations based on writing a detector for this and scanning about 100,000 emails with it (extensive personal archive and company archive).
### Auto-submitted header
Defined in [RFC 3834][1].
This is the official standard way to indicate your message is an auto-reply. You should **not** send a reply if `Auto-Submitted` is present and has a value other than `no`.
### X-Auto-Response-Suppress header
Defined [by Microsoft][2].
This header is used by Microsoft Exchange, Outlook, and perhaps some other products. Many newsletters and such also set this. You should **not** send a reply if `X-Auto-Response-Suppress` contains `DR` (“Suppress delivery reports”), `AutoReply` (“Suppress auto-reply messages other than OOF notifications”), or `All`.
### List-Id and List-Unsubscribe headers
Defined in [RFC 2919][3].
You usually don't want to send auto-replies to mailing lists or newsletters. Pretty much all mailing lists and most newsletters set at least one of these headers. You should **not** send a reply if either of these headers is present. The value is unimportant.
### Feedback-ID header
Defined [by Google][4].
Gmail uses this header to identify mail newsletters, and uses it to generate statistics/reports for owners of those newsletters. You should **not** send a reply if this header is present; the value is unimportant.
### Non-standard ways
The above methods are well-defined and clear (even though some are non-standard). Unfortunately, some email systems do not use any of them :-( Here are some additional measures.
#### Precedence header
Not really defined anywhere; mentioned in [RFC 2076][5], where its use is discouraged (but this header is commonly encountered).
Note that checking only for the existence of this field is not recommended, as some mails use `normal` and some other (obscure) values (though this is not very common).
My recommendation is to **not** send a reply if the value case-insensitively matches `bulk`, `auto_reply`, or `list`.
#### Other obscure headers
A collection of other (somewhat obscure) headers I've encountered. I would recommend **not** sending an auto-reply if one of these is set. Most mails also set one of the above headers, but some don't (it's not very common).
* `X-MSFBL`; can't really find a definition (Microsoft header?), but I only have auto-generated mails with this header.
* `X-Loop`; not really defined anywhere, and somewhat rare, but sometimes it's set. It's most often set to the address that should not get emails, but `X-Loop: yes` is also encountered.
* `X-Autoreply`; fairly rare, and always seems to have a value of `yes`.
#### Email address
Check if the `From` or `Reply-To` headers contain `noreply`, `no-reply`, or `no_reply` (regex: `^no.?reply@`).
#### HTML only
If an email only has an HTML part, but no text part, it's a good indication this is an auto-generated mail or newsletter. Pretty much all mail clients also set a text part.
#### Delivery failures
Many delivery failure messages don't really indicate that they're failures. Some ways to check this:
* `From` contains `mailer-daemon` or `Mail Delivery Subsystem`
Many mail libraries leave some sort of footprint, and most regular mail clients override this with their own data. Checking for this seems to work fairly well.
* `X-Mailer: Microsoft CDO for Windows 2000`. Set by some MS software; I can only find it on autogenerated mails. Yes, it's still used in 2015.
* `Message-ID` header contains `.JavaMail.` I've found a few (5 in 50k) regular messages with this, but not many; the vast majority (thousands) of messages are newsletters, order confirmations, etc.
* `^X-Mailer` starts with `PHP`. This should catch both `X-Mailer: PHP/5.5.0` and `X-Mailer: PHPmailer blah blah`. The same as `JavaMail` applies.
* `X-Library` presence; only [Indy][6] seems to set this.
* `X-Mailer` starts with `wdcollect`. Set by some Plesk mails.
* `X-Mailer` starts with `MIME-tools`.
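As a rough illustration of how the header checks above fit together, here is a small shell sketch. This is not the author's actual detector; it covers only the simpler header tests, and real code should use a proper mail parser rather than grep:

```
#!/bin/sh
# Illustrative sketch only: returns 0 (true) if the message on stdin
# looks auto-generated, meaning an auto-reply should NOT be sent.
is_autogenerated() {
    # Headers end at the first blank line.
    hdrs=$(sed '/^[[:space:]]*$/q')

    # Auto-Submitted present with any value other than "no".
    if printf '%s\n' "$hdrs" | grep -qi '^Auto-Submitted:' &&
       ! printf '%s\n' "$hdrs" | grep -qiE '^Auto-Submitted:[[:space:]]*no[[:space:]]*$'
    then
        return 0
    fi
    # X-Auto-Response-Suppress containing DR, AutoReply, or All.
    printf '%s\n' "$hdrs" | grep -qiE '^X-Auto-Response-Suppress:.*(DR|AutoReply|All)' && return 0
    # Mailing-list and newsletter headers: presence alone is enough.
    printf '%s\n' "$hdrs" | grep -qiE '^(List-Id|List-Unsubscribe|Feedback-ID):' && return 0
    # Precedence: bulk, auto_reply, or list (case-insensitive).
    printf '%s\n' "$hdrs" | grep -qiE '^Precedence:[[:space:]]*(bulk|auto_reply|list)' && return 0
    # noreply-style sender addresses.
    printf '%s\n' "$hdrs" | grep -qiE '^(From|Reply-To):.*no.?reply@' && return 0
    return 1
}
```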
### Final precaution: limit the number of replies
Even when following all of the above advice, you may still encounter an email program that will slip through. This can be very dangerous, as email systems that simply `IF email THEN send_email` have the potential to cause infinite email loops.
For this reason, I recommend keeping track of which emails you've sent an autoreply to and rate limiting this to at most n emails in n minutes. This will break the back-and-forth chain.
We use one email per five minutes, but something less strict will probably also work well.
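A minimal way to sketch such a rate limit is one timestamp file per sender. The state directory and the five-minute window below are illustrative choices, not details of the article's actual system:

```
#!/bin/sh
# Illustrative sketch: allow at most one auto-reply per sender
# address per 300 seconds. The state directory is a made-up location.
STATE_DIR="${STATE_DIR:-/tmp/autoreply-state}"

may_autoreply() {
    sender=$1
    mkdir -p "$STATE_DIR"
    # Per-sender timestamp file, with a filesystem-safe name.
    stamp="$STATE_DIR/$(printf '%s' "$sender" | tr -c 'A-Za-z0-9@.-' '_')"
    now=$(date +%s)
    if [ -f "$stamp" ] && [ $(( now - $(cat "$stamp") )) -lt 300 ]; then
        return 1   # replied too recently; stay silent to break any loop
    fi
    printf '%s\n' "$now" > "$stamp"
    return 0
}
```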
### What you need to set on your auto-response
The specifics for this will vary depending on what sort of mails you're sending. This is what we use for auto-reply mails:
```
Auto-Submitted: auto-replied
X-Auto-Response-Suppress: All
Precedence: auto_reply
```
### Feedback
You can mail me at [martin@arp242.net][7] or [create a GitHub issue][8] for feedback, questions, etc.
--------------------------------------------------------------------------------
via: https://arp242.net/weblog/autoreply.html
作者:[Martin Tournoij][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://arp242.net/
[b]: https://github.com/lujun9972
[1]: http://tools.ietf.org/html/rfc3834
[2]: https://msdn.microsoft.com/en-us/library/ee219609(v=EXCHG.80).aspx
[3]: https://tools.ietf.org/html/rfc2919
[4]: https://support.google.com/mail/answer/6254652?hl=en
[5]: http://www.faqs.org/rfcs/rfc2076.html
[6]: http://www.indyproject.org/index.en.aspx
[7]: mailto:martin@arp242.net
[8]: https://github.com/Carpetsmoker/arp242.net/issues/new

[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (GameMode A Tool To Improve Gaming Performance On Linux)
[#]: via: (https://www.ostechnix.com/gamemode-a-tool-to-improve-gaming-performance-on-linux/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)
GameMode A Tool To Improve Gaming Performance On Linux
======
![Gamemmode improve gaming performance on Linux][1]
Ask some Linux users why they still stick with a Windows dual boot, and the answer will probably be "Games!" It's true! Luckily, open source gaming platforms like [**Lutris**][2] and the proprietary gaming platform **Steam** have brought many games to Linux and improved the Linux gaming experience significantly over the years. Today, I stumbled upon yet another Linux gaming-related open source tool named **GameMode**, which allows users to improve gaming performance on Linux.
GameMode is basically a daemon/lib combo that lets games optimise Linux system performance on demand. I thought GameMode was a kind of tool that would kill resource-hungry tools running in the background, but it is different. What it actually does is instruct the CPU to **automatically run in performance mode when playing games**, which helps Linux users get the best possible performance out of their games.
GameMode improves gaming performance significantly by requesting that a set of optimisations be temporarily applied to the host OS while playing games. Currently, it includes support for the following optimisations:
* CPU governor,
* I/O priority,
* Process niceness,
* Kernel scheduler (SCHED_ISO),
* Screensaver inhibiting,
* GPU performance mode (NVIDIA and AMD), GPU overclocking (NVIDIA),
* Custom scripts.
GameMode is a free and open source system tool developed by [**Feral Interactive**][3], a world-leading publisher of games.
### Install GameMode
GameMode is available for many Linux distributions.
On Arch Linux and its variants, you can install it from [**AUR**][4] using any AUR helper programs, for example [**Yay**][5].
```
$ yay -S gamemode
```
On Debian, Ubuntu, Linux Mint and other Deb-based systems:
```
$ sudo apt install gamemode
```
If GameMode is not available for your system, you can manually compile and install it from source as described in the Development section of its GitHub page.
### Activate GameMode support to improve Gaming Performance on Linux
Here is the list of games with GameMode integration; these games don't need any additional configuration to activate GameMode support.
* Rise of the Tomb Raider
* Total War Saga: Thrones of Britannia
* Total War: WARHAMMER II
* DiRT 4
* Total War: Three Kingdoms
Simply run these games and GameMode support will be enabled automatically.
There is also an [**extension**][6] available to integrate GameMode support with GNOME Shell. It indicates in the top panel when GameMode is active.
For other games, you may need to manually request GameMode support like below.
```
gamemoderun ./game
```
I am not fond of games and I haven't played any games for years, so I can't share any actual benchmarks.
However, I found a short video tutorial on YouTube about enabling GameMode support for Lutris games. It is a good starting point for those who want to try GameMode for the first time.
<https://youtu.be/4gyRyYfyGJw>
By looking at the comments on the video, I can say that GameMode has indeed improved gaming performance on Linux.
For more details, refer to the [**GameMode GitHub repository**][7].
* * *
**Related read:**
* [**GameHub An Unified Library To Put All Games Under One Roof**][8]
* [**How To Run MS-DOS Games And Programs In Linux**][9]
* * *
Have you used the GameMode tool? Did it really improve the gaming performance on your Linux box? Share your thoughts in the comment section below.
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/gamemode-a-tool-to-improve-gaming-performance-on-linux/
作者:[sk][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/07/Gamemode-720x340.png
[2]: https://www.ostechnix.com/manage-games-using-lutris-linux/
[3]: http://www.feralinteractive.com/en/
[4]: https://aur.archlinux.org/packages/gamemode/
[5]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
[6]: https://github.com/gicmo/gamemode-extension
[7]: https://github.com/FeralInteractive/gamemode
[8]: https://www.ostechnix.com/gamehub-an-unified-library-to-put-all-games-under-one-roof/
[9]: https://www.ostechnix.com/how-to-run-ms-dos-games-and-programs-in-linux/


[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (3 tools for doing presentations from the command line)
[#]: via: (https://opensource.com/article/19/8/command-line-presentation-tools)
[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt)
3 tools for doing presentations from the command line
======
mdp, tpp, and sent may not win you any design awards, but they'll give
you basic slides that you can run from a terminal.
![Files in a folder][1]
Tired of creating and displaying presentation slides using [LibreOffice Impress][2] or various slightly geeky [tools][3] and [frameworks][4]? Instead, consider running the slides for your next talk from a terminal window.
Using the terminal to present slides sounds strange, but it isn't. Maybe you want to embrace your inner geek a bit more. Perhaps you want your audience to focus on your ideas rather than your slides. Maybe you're a devotee of the [Takahashi method][5]. Whatever your reasons for turning to the terminal, there's a (presentation) tool for you.
Let's take a look at three of them.
### mdp
Seeing as how I'm something of a Markdown person, I took [mdp][6] for a spin the moment I heard about it.
You create your slides in a text editor, prettying the text up with Markdown. mdp recognizes most Markdown formatting—from headings and lists to code blocks to character formatting and URLs.
You can also add a [Pandoc metadata block][7], which can contain your name, the title of your presentation, and the date you're giving your talk. That adds the title to the top of every slide and your name and the date to the bottom.
Your slides are in a single text file. To let mdp know where a slide starts, add a line of dashes after each slide.
Here's a very simple example:
```
%title: Presentation Title
%author: Your Name
%date: YYYY-MM-DD
-> # Slide 1 <-
Intro slide
--------------------------------------------------
-> # Slide 2 <-
==============
* Item 1
* Item 2
* Item 3
-------------------------------------------------
-> # Slide 3 <-
This one with a numbered list
1. Item 1
2. Item 2
3. Item 3
-------------------------------------------------
-> # Conclusion <-
mdp supports *other* **formatting**, too. Give it a try!
```
See the **->** and **<-** surrounding the titles of each slide? Any text between those characters is centered in a terminal window.
Run your slideshow by typing **mdp slides.md** (or whatever you named your file) in a terminal window. Here's what the example slides I cobbled together look like:
![Example mdp slide][8]
Cycle through them by pressing the arrow keys or the spacebar on your keyboard.
### tpp
[tpp][9] is another simple, text-based presentation tool. It eschews Markdown for its own formatting. While the formatting is simple, it's very concise and offers a couple of interesting—and useful—surprises.
You use dashes to indicate most of the formatting. You can add a metadata block at the top of your slide file to create the title slide for your presentation. Denote headings by typing **\--heading** followed by the heading's text. Center text on a slide by typing **\--center** and then the text.
To create a new slide, type:
```
---
--newpage
```
Here's an example of some basic slides:
```
--title Presentation Title
--date YYYY-MM-DD
--author Your Name
---
--newpage
--heading Slide 1
  * Item 1
---
--newpage
--heading Slide 2
  * Item 1
  * Item 2
---
--newpage
--heading Slide 3
  * Item 1
  * Item 2
  * Item 3
```
Here's what they look like in a terminal window:
![tpp slide example][10]
Move through your slides by pressing the arrow keys on your keyboard.
What about those interesting and useful surprises I mentioned earlier? You can add a splash of color to the text on a slide by typing **--color** and then the name of the color you want to use—for example, **red**. Below that, add the text whose color you want to change, like this:
```
--color red
Some text
```
If you have a terminal command that you want to include on a slide, wrap it between **--beginoutput** and **--endoutput**. Taking that a step further, you can simulate typing the command by putting it between **--beginshelloutput** and **--endshelloutput**. Here's an example:
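Based on that description, a slide using these blocks might look something like the following sketch (the **ls -l** command is just a stand-in; inside the shell-output block, a line beginning with **$** is the command tpp simulates typing):

```
---
--newpage
--heading Showing a Command
--beginshelloutput
$ ls -l
--endshelloutput
```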
![Typing a command on a slide with tpp][11]
### Sent
[Sent][12] isn't strictly a command-line presentation tool. You run it from the command line, but it opens an X11 window containing your slides.
Sent is built around the Takahashi method for presenting that I mentioned at the beginning of this article. The core idea behind the Takahashi method is to have one or two keywords in large type on a slide. The keywords distill the idea you're trying to get across at that point in your presentation.
As with mdp and tpp, you craft your slides in [plain text][13] in a text editor. Sent doesn't use markup, and there are no special characters to indicate where a new slide begins. Sent assumes each new paragraph is a slide.
You're not limited to using text. Sent also supports images. To add an image to a slide, type **@** followed by the name of the image—for example, **@mySummerVacation.jpg**.
Here's an excerpt from a slide file:
```
On Writing Evergreen Articles
Evergreen?
8 Keys to Good Evergreen Articles
@images/typewriter.jpg
Be Passionate
Get Your Hands Dirty
Focus
```
Fire up your slides by typing **sent filename** in a terminal window. The X11 window that opens goes into full-screen mode and displays text in as large a font as possible. Any images in your slides are centered in the window.
![Example Sent slide][14]
### Drawbacks of these tools
You're not going to win any design awards for the slides you create with mdp, tpp, or sent. They're plain. They're utilitarian. But, as I pointed out at the beginning of this article, the slides you create and present with those tools can help your audience focus on what you're saying rather than your visuals.
If you use mdp or tpp, you need to do some fiddling with your terminal emulator's settings to get the fonts and sizes right. Out of the box, the fonts might be too small—as you see in the screen captures above. If your terminal emulator supports profiles, create one for presentations with the font you want to use at the size you want. Then go full-screen.
Neither mdp, tpp, nor sent will appeal to everyone. That's fine. There's no one presentation tool to rule them all, no matter what some people say. But if you need, or just want, to go back to basics, these three tools are good options.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/command-line-presentation-tools
作者:[Scott Nesbitt][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/scottnesbitt
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/files_documents_paper_folder.png?itok=eIJWac15 (Files in a folder)
[2]: https://www.libreoffice.org/discover/impress/
[3]: https://opensource.com/article/18/5/markdown-slide-generators
[4]: https://opensource.com/article/18/2/how-create-slides-emacs-org-mode-and-revealjs
[5]: https://presentationzen.blogs.com/presentationzen/2005/09/living_large_ta.html
[6]: https://github.com/visit1985/mdp
[7]: https://pandoc.org/MANUAL.html#metadata-blocks
[8]: https://opensource.com/sites/default/files/uploads/mdp-slides.png (Example mdp slide)
[9]: https://synflood.at/tpp.html
[10]: https://opensource.com/sites/default/files/uploads/tpp-example.png (tpp slide example)
[11]: https://opensource.com/sites/default/files/uploads/tpp-code_1.gif (Typing a command on a slide with tpp)
[12]: https://tools.suckless.org/sent/
[13]: https://plaintextproject.online
[14]: https://opensource.com/sites/default/files/uploads/sent-example.png (Example Sent slide)


@ -0,0 +1,104 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Avoiding burnout: 4 considerations for a more energetic organization)
[#]: via: (https://opensource.com/open-organization/19/8/energy-fatigue-burnout)
[#]: author: (Ron McFarland https://opensource.com/users/ron-mcfarland)
Avoiding burnout: 4 considerations for a more energetic organization
======
Being open requires a certain kind of energy—and if you don't take time
to replenish that energy, your organization will suffer.
![Light bulb][1]
In both personal and organizational life, energy levels are important. This is no less true of open organizations. Consider this: When you're tired, you'll have trouble _adapting_ when challenges arise. When your energy is low, you'll have trouble _collaborating_ with others. When you're feeling fatigued, building and energizing an open organization _community_ is difficult.
At a time when [the World Health Organization (WHO) has recently formalized its definition for burnout][2], this issue seems more important than ever. In this article, I'll address the important issue of managing daily energy—both at the personal and organizational levels.
### Energy management
Having spent most of my career in overseas sales and sales training, I've learned that both individual and company energy levels are very important. In the early part of my career, I was traveling approximately every two months. Most trips were two to four weeks long. When teaching during those trips, I'd find myself standing for six hours every day as I presented selling techniques to salespeople. It was exhausting.
But I loved the work, and I knew I had to do something to keep my energy up. At a physical level, I knew I had to strengthen my legs and my heart, as well as change my diet to be able to sleep deeper at odd times of the day. Interestingly, I was not alone, as I heard many jet lag and exhaustion complaints in airport business lounges regularly. But I had to think about energy at an emotional level, too. I knew that the outside retail sales people I was training had to take rejection every day; they had to be able to control their levels of emotion and energy to be productive.
Whether for companies or individuals, exhaustion is counter-productive, resulting in errors, frustration, and poor performance. This could be even more true in open organizations, as many participants are contributing for a wide range of reasons outside of basic monetary rewards. Many organizations emphasize time-management, stress-management, crisis-management, and resource-management—but I've never heard of a company that has adopted and introduced an "energy management" program.
Why not?
In my case, in order to keep both myself and the sales people I was training energized and productive, I had to address the issue of energy. So I started studying and reading about the subject. Eventually I found the book [_The Way We're Working Isn't Working_][3] by Tony Schwartz, and still consider it the most helpful. Schwartz went on to develop [The Energy Project][4], "a consulting firm that helps individuals and organizations grow and transform so together they can prosper, add more value in the world, and do less harm." It addresses this issue of fatigue head on.
Schwartz distinguishes four separate and distinct categories of energy: physical, emotional, mental, and spiritual. Let me explain each and add some comments along the way.
### Four categories of energy
#### Physical health
First, to have adequate energy every day, you've got to take care of your physical health. That means you _must_ address issues like nutrition, posture, sleep, and exercise (I found the book _Farewell to Fatigue_ by Donald Norfolk especially helpful for learning how to get over jet lag quickly.) To use an automotive term: You should always have a good "power to weight ratio." In other words, you must increase the driveline power/efficiency or reduce the overall weight of the vehicle to improve performance.
I decided to work on both, keeping my legs strong and my weight down. I started jogging four to five times per week and improved my diet, which really helped me get a lot more out of my days and reduce fatigue.
#### Emotional well-being
Our energy grows when we feel appreciated by others. When the environment is exciting, we are motivated—which also leads to added energy. Conversely, when we don't feel that others appreciate us—or we find ourselves in a dull, even toxic environment—our energy levels suffer. In [_Every Word Has Power_][5], author Yvonne Oswald writes that the words we use and hear can make a big difference to our energy levels and well-being. She writes that simply reframing our everyday speech can help us boost energy levels. Instead of speaking excessively with the word "not" ("Don't strike out," "Don't double-fault,"), we should form sentences that paint a positive (energy-creating) image or picture in our minds. ("Hit a home run," "Serve an ace"). The former statements _consume_ our energy; the latter _enhance_ it.
Regarding energy generation in a working environment, the book [_The Progress Principle_ by Teresa Amabile][6] taught me an important insight. Amabile's research shows that generating a feeling of upward progress is the most powerful way to increase energy in an organization. A feeling of progress, learning, developing, and moving upward is extremely important. The opposite is true for declining organizations. I've consulted a lot of companies over the years, and when I'm seeking to get a feeling for an organization's energy level, all I have to do is ask one question: "What is your company's project or product development program and its future prospects?" If I can't get a clear, detailed answer to that question throughout the organization, I know that people are just biding their time and aren't energized.
#### Mental clarity
Our minds consume more energy than any other part of our body, just like computers that consume a lot of electricity. That is why we're so tired after studying for many hours or taking long tests. We didn't move around much, but we're tired just the same because we consumed energy through concentration and mental activity. When our mental energy is low, our ability to focus intensely, prioritize, and think creatively suffers. We therefore must know when to relax and re-energize our brains.
Additionally, consider mental energy and the concept of "multi-tasking." Some people proudly report that they're good at multi-tasking—but they shouldn't be, as it wastes mental energy. I am not saying we shouldn't listen to music while driving the car, but when it comes to cognitive mental processing, we can only do one thing at a time. The concept of "multi-tasking" is actually a misnomer in the context of deep thinking. Human brains process information sequentially (one issue at a time). We simply _cannot_ conduct two thinking tasks at the same time. Instead, what we're _really_ doing is task "switching," a back-and-forth movement between tasks that has a significant mental energy cost for the value gained. First and foremost, it is less efficient. Also, it consumes a great deal of energy. It's best to start a task and completely finish it before moving on to the next one, reducing shut-down and recall waste.
Just like overworking a muscle, mental clarity, strength, and energy can be used up. Therefore, mental recovery is important. Many people have found "power naps" or meditation extremely helpful. Going into a relaxed mode for only 20 minutes can be very helpful. It is just enough time to build your mental energy, but not long enough to make you groggy.
#### Spiritual significance
Spiritual significance creates energy, and it comes from the feeling of serving a mission beyond financial rewards. Put simply, it is doing something we consider important (an internal desire or goal). In the case of my sales training career, for example, I had a purpose of building the _careers_ of salespeople who had very little formal education. Teaching in countless developing countries really offered an inspiring purpose for me. At that time, this dimension of my work provided "spiritual significance." It generated a great deal of energy in me. As I grow older, I'm finding this spiritual significance becoming extremely important while financial rewards start to weaken. I'm sure this is true for many people who are contributing to open organizations. This contribution is where their energy comes from.
Not just for me, not just for open organizations, but for _people in general_, I've found these four types of energy very important—so much so that I've prepared and delivered [presentations][7] on them.
### Energy across four zones
In _[The Way We're Working Isn't Working][3],_ Tony Schwartz mentions four zones that people operate in, which I've diagrammed in Figure 1.
![][8]
Here we see four quadrants useful in evaluating a working environment. From left to right we see an assessment of the environment, with a negative environment (energy-consuming) on the left and a positive environment (energizing) on the right. From top to bottom, then, is an assessment of the organization's (or individual's) energy level. Obviously, you want to be in the upper-right zone as much as possible and out of the bottom-left zone as much as possible.
To move from the Survival Zone (left) to the Performance Zone (right), Schwartz suggests you can manage your attitude by looking through three "lenses":
* **Reflection lens.** Ask two very simple questions: 1. What are the facts? and 2. What assumptions am I making? Stand back, observe the environment, and give yourself a chance to consider other, more positive ways of thinking about the situation.
* **Reverse lens.** How do you think other people (particularly the person creating the negative environment) feel about the situation? Try to find ways to see or feel that person's perspective.
* **Long lens.** Even after learning and confirming the facts and considering other people's perspectives, if you still feel like you're in the Survival Zone or Burnout Zone, consider the future. Imagine how you can make the best of a bad situation and move above it. No matter how badly you feel right now, how can you learn from this experience and excel in the future? Try to see value in the experience.
Do yourself a favor when you wake up in the morning. Evaluate yourself on the four energy types I mentioned in this article. Rate yourself from 1 to 10 on your current "morning physical health energy level." Then, do the same thing for your "emotional well-being energy level," "mental clarity energy level" and "spiritual significance energy level." That will give you some idea of what you'll have to work on to get the most out of both your day and life.
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/19/8/energy-fatigue-burnout
作者:[Ron McFarland][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ron-mcfarland
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bulb-light-energy-power-idea.png?itok=zTEEmTZB (Light bulb)
[2]: https://icd.who.int/browse11/l-m/en#/http://id.who.int/icd/entity/129180281
[3]: https://www.simonandschuster.com/books/The-Way-Were-Working-Isnt-Working/Tony-Schwartz/9781451610260
[4]: https://theenergyproject.com/team/tony-schwartz/
[5]: https://www.simonandschuster.com/books/Every-Word-Has-Power/Yvonne-Oswald/9781582701813
[6]: http://progressprinciple.com/books/single/the_progress_principle
[7]: https://www.slideshare.net/RonMcFarland1/increasing-company-energy?qid=e3c251fc-be71-4185-b216-3c7e5b48f7b1&v=&b=&from_search=2
[8]: https://opensource.com/sites/default/files/images/open-org/energy_zones.png


@ -0,0 +1,99 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Unboxing the Raspberry Pi 4)
[#]: via: (https://opensource.com/article/19/8/unboxing-raspberry-pi-4)
[#]: author: (Anderson Silva https://opensource.com/users/ansilva)
Unboxing the Raspberry Pi 4
======
The Raspberry Pi 4 delivers impressive performance gains over its
predecessors, and the Starter Kit makes it easy to get it up and running
quickly.
![Raspberry Pi 4 board, posterized filter][1]
When the Raspberry Pi 4 was [announced at the end of June][2], I wasted no time. I ordered two Raspberry Pi 4 Starter Kits the same day from [CanaKit][3]. The 1GB RAM version was available right away, but the 4GB version wouldn't ship until July 19th. Since I wanted to try both, I ordered them to be shipped together.
![CanaKit's Raspberry Pi 4 Starter Kit and official accessories][4]
Here's what I found when I unboxed my Raspberry Pi 4.
### Power supply
The Raspberry Pi 4 uses a USB-C connector for its power supply. Even though USB-C cables are very common now, your Pi 4 [may not like your USB-C cable][5] (at least with these first editions of the Raspberry Pi 4). So, unless you know exactly what you are doing, I recommend ordering the Starter Kit, which comes with an official Raspberry Pi charger. In case you would rather try whatever you have on hand, the device's input reads 100-240V ~ 50/60Hz 0.5A, and the output says 5.1V --- 3.0A.
![Raspberry Pi USB-C charger][6]
### Keyboard and mouse
The official keyboard and mouse are [sold separately][7] from the Starter Kit, and at $25 total, they aren't really cheap, given you're paying only $35 to $55 for a proper computer. But the Raspberry Pi logo is printed on this keyboard (instead of the Windows logo), and there is something compelling about having an appropriate appearance. The keyboard is also a USB hub, so it allows you to plug in even more devices. I plugged in my [YubiKey][8] security key, and it works very nicely. I would classify the keyboard and mouse as a "nice to have" versus a "must-have." Your regular keyboard and mouse should work fine.
![Official Raspberry Pi keyboard \(with YubiKey plugged in\) and mouse.][9]
![Raspberry Pi logo on the keyboard][10]
### Micro-HDMI cable
Something that may have caught some folks by surprise is that, unlike the Raspberry Pi Zero that comes with a Mini-HDMI port, the Raspberry Pi 4 comes with a Micro-HDMI. They are not the same thing! So, even though you may have a suitable USB-C cable/power adaptor, mouse, and keyboard on hand, there is a pretty good chance you will need a Micro-HDMI-to-HDMI cable (or an adapter) to plug your new Raspberry Pi into a display.
### The case
Cases for the Raspberry Pi have been around for years and are probably one of the very first "official" peripherals the Raspberry Pi Foundation sold. Some people like them; others don't. I think putting a Pi in a case makes it easier to carry it around and avoid static electricity and bent pins.
On the other hand, keeping your Pi covered can overheat the board. This CanaKit Starter Kit also comes with a heatsink for the processor, which might help, as the newer Pis are already [known for running pretty hot][11].
![Raspberry Pi 4 case][12]
### Raspbian and NOOBS
The other item that comes with the Starter Kit is a microSD card with the correct version of the [NOOBS][13] operating system for the Raspberry Pi 4 pre-installed. (I got version 3.1.1, released June 24, 2019). If you're using a Raspberry Pi for the first time and are not sure where to start, this could save you a lot of time. The microSD card in the Starter Kit is 32GB.
After you insert the microSD card and connect all the cables, just start up the Pi, boot into NOOBS, pick the Raspbian distribution, and wait while it installs.
![Raspberry Pi 4 with 4GB of RAM][14]
I noticed a couple of improvements while installing the latest Raspbian. (Forgive me if they've been around for a while—I haven't done a fresh install on a Pi since the 3 came out.) One is that Raspbian will ask you to set up a password for your account at first boot after installation, and the other is that it will run a software update (assuming you have network connectivity). These are great improvements to help keep your Raspberry Pi a little more secure. I would love to see the option to encrypt the microSD card at installation … maybe someday?
![Running Raspbian updates at first boot][15]
![Raspberry Pi 4 setup][16]
It runs very smoothly!
### Wrapping up
Although CanaKit isn't the only authorized Raspberry Pi retailer in the US, I found its Starter Kit to provide great value for the price.
So far, I am very impressed with the performance gains in the Raspberry Pi 4. I'm planning to try spending an entire workday using it as my only computer, and I'll write a follow-up article soon about how far I can go. Stay tuned!
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/unboxing-raspberry-pi-4
作者:[Anderson Silva][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ansilva
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberrypi4_board_hardware.jpg?itok=KnFU7NvR (Raspberry Pi 4 board, posterized filter)
[2]: https://opensource.com/article/19/6/raspberry-pi-4
[3]: https://www.canakit.com/raspberry-pi-4-starter-kit.html
[4]: https://opensource.com/sites/default/files/uploads/raspberrypi4_canakit.jpg (CanaKit's Raspberry Pi 4 Starter Kit and official accessories)
[5]: https://www.techrepublic.com/article/your-new-raspberry-pi-4-wont-power-on-usb-c-cable-problem-now-officially-confirmed/
[6]: https://opensource.com/sites/default/files/uploads/raspberrypi_usb-c_charger.jpg (Raspberry Pi USB-C charger)
[7]: https://www.canakit.com/official-raspberry-pi-keyboard-mouse.html?defpid=4476
[8]: https://www.yubico.com/products/yubikey-hardware/
[9]: https://opensource.com/sites/default/files/uploads/raspberrypi_keyboardmouse.jpg (Official Raspberry Pi keyboard (with YubiKey plugged in) and mouse.)
[10]: https://opensource.com/sites/default/files/uploads/raspberrypi_keyboardlogo.jpg (Raspberry Pi logo on the keyboard)
[11]: https://www.theregister.co.uk/2019/07/22/raspberry_pi_4_too_hot_to_handle/
[12]: https://opensource.com/sites/default/files/uploads/raspberrypi4_case.jpg (Raspberry Pi 4 case)
[13]: https://www.raspberrypi.org/downloads/noobs/
[14]: https://opensource.com/sites/default/files/uploads/raspberrypi4_ram.jpg (Raspberry Pi 4 with 4GB of RAM)
[15]: https://opensource.com/sites/default/files/uploads/raspberrypi4_rasbpianupdate.jpg (Running Raspbian updates at first boot)
[16]: https://opensource.com/sites/default/files/uploads/raspberrypi_setup.jpg (Raspberry Pi 4 setup)


@ -0,0 +1,203 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Navigating the Bash shell with pushd and popd)
[#]: via: (https://opensource.com/article/19/8/navigating-bash-shell-pushd-popd)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Navigating the Bash shell with pushd and popd
======
Pushd and popd are the fastest navigational commands you've never heard
of.
![bash logo on green background][1]
The **pushd** and **popd** commands are built-in features of the Bash shell to help you "bookmark" directories for quick navigation between locations on your hard drive. You might already feel that the terminal is an impossibly fast way to navigate your computer; in just a few key presses, you can go anywhere on your hard drive, attached storage, or network share. But that speed can break down when you find yourself going back and forth between directories, or when you get "lost" within your filesystem. Those are precisely the problems **pushd** and **popd** can help you solve.
### pushd
At its most basic, **pushd** is a lot like **cd**. It takes you from one directory to another. Assume you have a directory called **one**, which contains a subdirectory called **two**, which contains a subdirectory called **three**, and so on. If your current working directory is **one**, then you can move to **two** or **three** or anywhere with the **cd** command:
```
$ pwd
one
$ cd two/three
$ pwd
three
```
You can do the same with **pushd**:
```
$ pwd
one
$ pushd two/three
~/one/two/three ~/one
$ pwd
three
```
The end result of **pushd** is the same as **cd**, but there's an additional intermediate result: **pushd** echoes your destination directory and your point of origin. This is your _directory stack_, and it is what makes **pushd** unique.
### Stacks
A stack, in computer terminology, refers to a collection of elements. In the context of this command, the elements are directories you have recently visited by using the **pushd** command. You can think of it as a history or a breadcrumb trail.
You can move all over your filesystem with **pushd**; each time, your previous and new locations are added to the stack:
```
$ pushd four
~/one/two/three/four ~/one/two/three ~/one
$ pushd five
~/one/two/three/four/five ~/one/two/three/four ~/one/two/three ~/one
```
### Navigating the stack
Once you've built up a stack, you can use it as a collection of bookmarks or fast-travel waypoints. For instance, assume that during a session you're doing a lot of work within the **~/one/two/three/four/five** directory structure of this example. You know you've been to **one** recently, but you can't remember where it's located in your **pushd** stack. You can view your stack with the **+0** (that's a plus sign followed by a zero) argument, which tells **pushd** not to change to any directory in your stack, but also prompts **pushd** to echo your current stack:
```
$ pushd +0
~/one/two/three/four ~/one/two/three ~/one ~/one/two/three/four/five
```
The first entry in your stack is your current location. You can confirm that with **pwd** as usual:
```
$ pwd
~/one/two/three/four
```
Starting at 0 (your current location and the first entry of your stack), the _second_ element in your stack is **~/one**, which is your desired destination. You can move forward in your stack using the **+2** option:
```
$ pushd +2
~/one ~/one/two/three/four/five ~/one/two/three/four ~/one/two/three
$ pwd
~/one
```
This changes your working directory to **~/one** and also shifts the stack so that your new location is at the front.
You can also move backward in your stack. For instance, to quickly get to **~/one/two/three** given the example output, you can move back by one, keeping in mind that **pushd** starts with 0:
```
$ pushd -0
~/one/two/three ~/one ~/one/two/three/four/five ~/one/two/three/four
```
### Adding to the stack
You can continue to navigate your stack in this way, and it will remain a static listing of your recently visited directories. If you want to add a directory, just provide the directory's path. If a directory is new to the stack, it's added to the list just as you'd expect:
```
$ pushd /tmp
/tmp ~/one/two/three ~/one ~/one/two/three/four/five ~/one/two/three/four
```
But if it already exists in the stack, it's added a second time:
```
$ pushd ~/one
~/one /tmp ~/one/two/three ~/one ~/one/two/three/four/five ~/one/two/three/four
```
While the stack is often used as a list of directories you want quick access to, it is really a true history of where you've been. If you don't want a directory added redundantly to the stack, you must use the **+N** and **-N** notation.
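As a quick sketch of the difference (using a throwaway directory tree rather than the article's **~/one** example), rotating with **+N** revisits a directory already on the stack without pushing a duplicate:

```shell
# Revisit a directory already on the stack by rotating with +N,
# instead of pushing a duplicate entry onto it.
base=$(mktemp -d)
mkdir -p "$base/one/two/three"
cd "$base/one"
pushd /tmp                   # stack: /tmp  $base/one
pushd "$base/one/two/three"  # stack: .../three  /tmp  $base/one
pushd +2                     # rotate: back to $base/one, still 3 entries
dirs -l                      # prints the full-path stack
```

Had the last step been `pushd "$base/one"` instead, the stack would have grown to four entries, two of them identical.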
### Removing directories from the stack
Your stack is, obviously, not immutable. You can add to it with **pushd** or remove items from it with **popd**.
For instance, assume you have just used **pushd** to add **~/one** to your stack, making **~/one** your current working directory. To remove the first (or "zeroeth," if you prefer) element:
```
$ pwd
~/one
$ popd +0
/tmp ~/one/two/three ~/one ~/one/two/three/four/five ~/one/two/three/four
$ pwd
~/one
```
Of course, you can remove any element, starting your count at 0:
```
$ pwd
~/one
$ popd +2
/tmp ~/one/two/three ~/one/two/three/four/five ~/one/two/three/four
$ pwd
~/one
```
You can also use **popd** from the back of your stack, again starting with 0. For example, to remove the final directory from your stack:
```
$ popd -0
/tmp ~/one/two/three ~/one/two/three/four/five
```
When used like this, **popd** does not change your working directory. It only manipulates your stack.
### Navigating with popd
The default behavior of **popd**, given no arguments, is to remove the first (zeroeth) item from your stack and make the next item your current working directory.
This is most useful as a quick-change command, when you are, for instance, working in two different directories and just need to duck away for a moment to some other location. You don't have to think about your directory stack if you don't need an elaborate history:
```
$ pwd
~/one
$ pushd ~/one/two/three/four/five
$ popd
$ pwd
~/one
```
You're also not required to use **pushd** and **popd** in rapid succession. If you use **pushd** to visit a different location, then get distracted for three hours chasing down a bug or doing research, you'll find your directory stack patiently waiting (unless you've ended your terminal session):
```
$ pwd
~/one
$ pushd /tmp
$ cd {/etc,/var,/usr}; sleep 2001
[...]
$ popd
$ pwd
~/one
```
### Pushd and popd in the real world
The **pushd** and **popd** commands are surprisingly useful. Once you learn them, you'll find excuses to put them to good use, and you'll get familiar with the concept of the directory stack. Getting comfortable with **pushd** was what helped me understand **git stash**, which is entirely unrelated to **pushd** but similar in conceptual intangibility.
Using **pushd** and **popd** in shell scripts can be tempting, but generally, it's probably best to avoid them. They aren't portable outside of Bash and Zsh, and they can be obtuse when you're re-reading a script (**pushd +3** is less clear than **cd $HOME/$DIR/$TMP** or similar).
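When a script does need to run a few commands elsewhere, one portable pattern is to confine the directory change to a subshell (a minimal sketch, with `/tmp` as a stand-in location):

```shell
# The cd happens inside a subshell, so the script's own working
# directory is unchanged once the parenthesized group finishes.
start=$(pwd)
(
  cd /tmp || exit 1
  # ... commands that must run in /tmp ...
)
[ "$(pwd)" = "$start" ] && echo "still in the original directory"
```

This works in any POSIX shell, and a reader can see at a glance exactly where the directory change begins and ends.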
Aside from these warnings, if you're a regular Bash or Zsh user, then you can and should try **pushd** and **popd**.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/navigating-bash-shell-pushd-popd
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U (bash logo on green background)

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Trace code in Fedora with bpftrace)
[#]: via: (https://fedoramagazine.org/trace-code-in-fedora-with-bpftrace/)
[#]: author: (Augusto Caringi https://fedoramagazine.org/author/acaringi/)
Trace code in Fedora with bpftrace
======
![][1]
bpftrace is [a new eBPF-based tracing tool][2] that was first included in Fedora 28. It was developed by Brendan Gregg, Alastair Robertson and Matheus Marchini with the help of a loosely-knit team of hackers across the Net. A tracing tool lets you analyze what a system is doing behind the curtain. It tells you which functions in code are being called, with which arguments, how many times, and so on.
This article covers some basics about bpftrace, and how it works. Read on for more information and some useful examples.
## eBPF (extended Berkeley Packet Filter)
[eBPF][3] is a tiny virtual machine, or a virtual CPU to be more precise, in the Linux Kernel. eBPF can load and run small programs in a safe and controlled way in kernel space. This makes it safer to use, even in production systems. This virtual machine has its own instruction set architecture ([ISA][4]) resembling a subset of modern processor architectures. The ISA makes it easy to translate those programs to the real hardware. The kernel performs just-in-time translation to native code for the main architectures to improve performance.
The eBPF virtual machine allows the kernel to be extended programmatically. Nowadays several kernel subsystems take advantage of this new powerful Linux Kernel capability. Examples include networking, seccomp, tracing, and more. The main idea is to attach eBPF programs into specific code points, and thereby extend the original kernel behavior.
eBPF machine language is very powerful. But writing code directly in it is extremely painful, because its a low level language. This is where bpftrace comes in. It provides a high-level language to write eBPF tracing scripts. The tool then translates these scripts to eBPF with the help of clang/LLVM libraries, and attaches them to the specified code points.
## Installation and quick start
To install bpftrace, run the following command in a terminal [using][5] _[sudo][5]_:
```
$ sudo dnf install bpftrace
```
Try it out with a “hello world” example:
```
$ sudo bpftrace -e 'BEGIN { printf("hello world\n"); }'
```
Note that you must run _bpftrace_ as _root_ due to the privileges required. Use the _-e_ option to specify a program, and to construct the so-called “one-liners.” This example only prints _hello world_, and then waits for you to press **Ctrl+C**.
_BEGIN_ is a special probe name that fires only once at the beginning of execution. Every action inside the curly braces _{ }_ fires whenever the probe is hit — in this case, its just a _printf_.
Lets jump now to a more useful example:
```
$ sudo bpftrace -e 't:syscalls:sys_enter_execve { printf("%s called %s\n", comm, str(args->filename)); }'
```
This example prints the parent process name _(comm)_ and the name of every new process being created in the system. _t:syscalls:sys_enter_execve_ is a kernel tracepoint. Its a shorthand for _tracepoint:syscalls:sys_enter_execve_, but both forms can be used. The next section shows you how to list all available tracepoints.
_comm_ is a bpftrace builtin that represents the process name. _filename_ is a field of the _t:syscalls:sys_enter_execve_ tracepoint. You can access these fields through the _args_ builtin.
All available fields of the tracepoint can be listed with this command:
```
bpftrace -lv "t:syscalls:sys_enter_execve"
```
## Example usage
### Listing probes
A central concept for _bpftrace_ is **probe points**. Probe points are instrumentation points in code (kernel or userspace) where eBPF programs can be attached. They fit into the following categories:
* _kprobe_ kernel function start
* _kretprobe_ kernel function return
* _uprobe_ user-level function start
* _uretprobe_ user-level function return
* _tracepoint_ kernel static tracepoints
* _usdt_ user-level static tracepoints
* _profile_ timed sampling
* _interval_ timed output
* _software_ kernel software events
* _hardware_ processor-level events
All available _kprobe/kretprobe_, _tracepoints_, _software_ and _hardware_ probes can be listed with this command:
```
$ sudo bpftrace -l
```
The _uprobe/uretprobe_ and _usdt_ probes are userspace probes specific to a given executable. To use them, use the special syntax shown later in this article.
The _profile_ and _interval_ probes fire at fixed time intervals. Fixed time intervals are not covered in this article.
### Counting system calls
**Maps** are special BPF data types that store counts, statistics, and histograms. You can use maps to summarize how many times each syscall is being called:
```
$ sudo bpftrace -e 't:syscalls:sys_enter_* { @[probe] = count(); }'
```
Some probe types allow wildcards to match multiple probes. You can also specify multiple attach points for an action block using a comma-separated list. In this example, the action block attaches to all tracepoints whose names start with `t:syscalls:sys_enter_`, which means all available syscalls.
The bpftrace builtin function _count()_ counts the number of times this function is called. _@[]_ represents a map (an associative array). The key of this map is _probe_, which is another bpftrace builtin that represents the full probe name.
Here, the same action block is attached to every syscall. Then, each time a syscall is called, the corresponding entry in the map is incremented. When the program terminates, it automatically prints out all declared maps.
This example counts syscalls globally. It's also possible to filter for a specific process by _PID_ using the bpftrace filter syntax:
```
$ sudo bpftrace -e 't:syscalls:sys_enter_* / pid == 1234 / { @[probe] = count(); }'
```
### Write bytes by process
Using these concepts, lets analyze how many bytes each process is writing:
```
$ sudo bpftrace -e 't:syscalls:sys_exit_write /args->ret > 0/ { @[comm] = sum(args->ret); }'
```
_bpftrace_ attaches the action block to the write syscall return probe (_t:syscalls:sys_exit_write_). Then, it uses a filter to discard the negative values, which are error codes (_/args->ret > 0/_).
The map key _comm_ represents the process name that called the syscall. The _sum()_ builtin function accumulates the number of bytes written for each map entry or process. _args_ is a bpftrace builtin to access tracepoints arguments and return values. Finally, if successful, the _write_ syscall returns the number of written bytes. _args->ret_ provides access to the bytes.
### Read size distribution by process (histogram)
_bpftrace_ supports the creation of histograms. Lets analyze one example that creates a histogram of the _read_ size distribution by process:
```
$ sudo bpftrace -e 't:syscalls:sys_exit_read { @[comm] = hist(args->ret); }'
```
Histograms are BPF maps, so they must always be attributed to a map (_@_). In this example, the map key is _comm_.
The example makes _bpftrace_ generate one histogram for every process that calls the _read_ syscall. To generate just one global histogram, attribute the _hist()_ function just to _@_ (without any key).
bpftrace automatically prints out declared histograms when the program terminates. The value the histogram is based on is the number of bytes read, found through _args->ret_.
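The global variant mentioned above, with _hist()_ attributed to the bare _@_, would look like this (a sketch based on the same tracepoint; like the other one-liners, it must be run as root on a system with bpftrace installed):

```
$ sudo bpftrace -e 't:syscalls:sys_exit_read { @ = hist(args->ret); }'
```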
### Tracing userspace programs
You can also trace userspace programs with _uprobes/uretprobes_ and _USDT_ (User-level Statically Defined Tracing). The next example uses a _uretprobe_, which probes to the end of a user-level function. It gets the command lines issued in every _bash_ running in the system:
```
$ sudo bpftrace -e 'uretprobe:/bin/bash:readline { printf("readline: \"%s\"\n", str(retval)); }'
```
To list all available _uprobes/uretprobes_ of the _bash_ executable, run this command:
```
$ sudo bpftrace -l "uprobe:/bin/bash"
```
_uprobe_ instruments the beginning of a user-level function's execution, and _uretprobe_ instruments the end (its return). _readline()_ is a function of _/bin/bash_, and it returns the typed command line. _retval_ is the return value of the instrumented function, and can only be accessed on _uretprobe_.
When using _uprobes_, you can access arguments with _arg0..argN_. A _str()_ call is necessary to turn a _char *_ pointer into a _string_.
## Shipped scripts
There are many useful scripts shipped with the bpftrace package. You can find them in the _/usr/share/bpftrace/tools/_ directory.
Among them, you can find:
* _killsnoop.bt_ Trace signals issued by the kill() syscall.
* _tcpconnect.bt_ Trace all TCP network connections.
* _pidpersec.bt_ Count new processes (via fork) per second.
* _opensnoop.bt_ Trace open() syscalls.
* _vfsstat.bt_ Count some VFS calls, with per-second summaries.
You can use the scripts directly. For example:
```
$ sudo /usr/share/bpftrace/tools/killsnoop.bt
```
You can also study these scripts as you create new tools.
## Links
* bpftrace reference guide <https://github.com/iovisor/bpftrace/blob/master/docs/reference_guide.md>
* bpftrace (DTrace 2.0) for Linux 2018 <http://www.brendangregg.com/blog/2018-10-08/dtrace-for-linux-2018.html>
* BPF: the universal in-kernel virtual machine <https://lwn.net/Articles/599755/>
* Linux Extended BPF (eBPF) Tracing Tools <http://www.brendangregg.com/ebpf.html>
* Dive into BPF: a list of reading material [https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf][6]
* * *
_Photo by _[_Roman Romashov_][7]_ on _[_Unsplash_][8]_._
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/trace-code-in-fedora-with-bpftrace/
作者:[Augusto Caringi][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/acaringi/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/08/bpftrace-816x345.jpg
[2]: https://github.com/iovisor/bpftrace
[3]: https://lwn.net/Articles/740157/
[4]: https://github.com/iovisor/bpf-docs/blob/master/eBPF.md
[5]: https://fedoramagazine.org/howto-use-sudo/
[6]: https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/
[7]: https://unsplash.com/@wehavemegapixels?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[8]: https://unsplash.com/search/photos/trace?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Find Out How Long Does it Take To Boot Your Linux System)
[#]: via: (https://itsfoss.com/check-boot-time-linux/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Find Out How Long It Takes To Boot Your Linux System
======
When you power on your system, you wait for the manufacturer's logo to come up, perhaps a few messages on the screen (booting in insecure mode), the [Grub][1] screen, the operating system loading screen, and finally the login screen.
Did you check how long it took? Perhaps not. Unless you really need to know, you won't bother with the boot time details.
But what if you are curious to know how long your Linux system takes to boot? Running a stopwatch is one way to find out, but in Linux, you have better and easier ways to find your system's startup time.
### Checking boot time in Linux with systemd-analyze
![][2]
Like it or not, [systemd][3] is running on most of the popular Linux distributions. systemd has a number of utilities to manage your Linux system. One of those utilities is systemd-analyze.
The systemd-analyze command gives you details of how many services ran at the last startup and how long they took.
If you run the following command in the terminal:
```
systemd-analyze
```
You'll get the total boot time along with the time taken by the firmware, boot loader, kernel, and userspace:
```
Startup finished in 7.275s (firmware) + 13.136s (loader) + 2.803s (kernel) + 12.488s (userspace) = 35.704s
graphical.target reached after 12.408s in userspace
```
As you can see in the output above, it took about 35 seconds for my system to reach the screen where I could enter my password. I am using the Dell XPS Ubuntu edition. It uses SSD storage, and despite that, it takes this much time to start.
Not that impressive, is it? Why don't you share your system's boot time? Let's compare.
You can further break down the boot time per unit with the following command:
```
systemd-analyze blame
```
This will produce a large output with all the services listed in descending order of time taken.
```
7.347s plymouth-quit-wait.service
6.198s NetworkManager-wait-online.service
3.602s plymouth-start.service
3.271s plymouth-read-write.service
2.120s apparmor.service
1.503s [email protected]
1.213s motd-news.service
908ms snapd.service
861ms keyboard-setup.service
739ms fwupd.service
702ms bolt.service
672ms dev-nvme0n1p3.device
608ms systemd-backlight@backlight:intel_backlight.service
539ms snap-core-7270.mount
504ms snap-midori-451.mount
463ms snap-screencloud-1.mount
446ms snapd.seeded.service
440ms snap-gtk\x2dcommon\x2dthemes-1313.mount
420ms snap-core18-1066.mount
416ms snap-scrcpy-133.mount
412ms snap-gnome\x2dcharacters-296.mount
```
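Output like the above also lends itself to quick ad-hoc analysis. As a sketch (the unit names and times below are sample data, not from a real system), here is one way to total the time attributed to snap-related units with awk, converting seconds to milliseconds along the way:

```shell
# Sum the time of all snap-related units from "systemd-analyze blame"
# style output. Sample data is fed via a here-document; on a real
# system, pipe the live output in instead: systemd-analyze blame | awk '...'
snap_total=$(awk '$2 ~ /^snap/ {
    t = $1
    if (t ~ /ms$/)     { sub(/ms$/, "", t); total += t }        # already in ms
    else if (t ~ /s$/) { sub(/s$/,  "", t); total += t * 1000 } # s -> ms
}
END { printf "snap units: %.0f ms", total }' <<'EOF'
908ms snapd.service
539ms snap-core-7270.mount
446ms snapd.seeded.service
EOF
)
echo "$snap_total"
```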
#### Bonus Tip: Improving boot time
If you look at this output, you can see that both Network Manager and [plymouth][4] take up a big chunk of the boot time.
Plymouth is responsible for the boot splash screen you see before the login screen in Ubuntu and other distributions. Network Manager is responsible for the internet connection, and its wait-online service may be turned off to speed up the boot time. Don't worry: once you log in, your Wi-Fi will work normally. You can disable it like this:
```
sudo systemctl disable NetworkManager-wait-online.service
```
If you want to revert the change, you can use this command:
```
sudo systemctl enable NetworkManager-wait-online.service
```
Now, please dont go disabling various services on your own without knowing what it is used for. It may have dangerous consequences.
_**Now that you know how to check the boot time of your Linux system, why not share your systems boot time in the comment section?**_
--------------------------------------------------------------------------------
via: https://itsfoss.com/check-boot-time-linux/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://www.gnu.org/software/grub/
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/linux-boot-time.jpg?resize=800%2C450&ssl=1
[3]: https://en.wikipedia.org/wiki/Systemd
[4]: https://wiki.archlinux.org/index.php/Plymouth
[5]: https://itsfoss.com/how-to-fix-no-unity-no-launcher-no-dash-in-ubuntu-12-10-quick-tip/

Two great uses for the cp command: Bash shortcuts
============================================================
### This article is about making backups and synchronization with the cp command more efficient.
![Two great uses for the cp command: Bash shortcuts ](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC)
> Image from: [Internet Archive Book Images][6]. Modified by Opensource.com. CC BY-SA 4.0
Last July, I wrote an article about [two great uses for the cp command][7]: making a backup of a file, and keeping a synchronized backup of a folder.
As great as these tools are, typing the commands in full every time is tedious. To solve that, I created some Bash shortcuts in my Bash startup file. I'd like to share them here so that you can use them whenever you need them, or so they can give some ideas to users who haven't yet discovered Bash's alias and function features.
### Updating a copy of a folder with a Bash alias
To update a copy of a folder with cp, the command usually used is:
```
cp -r -u -v SOURCE-FOLDER DESTINATION-DIRECTORY
```
Since I use cp to copy folders so often, the -r option comes to mind naturally. With a bit more thought, I can also remember -v, and with a bit more still, -u (does that stand for "update"? "synchronize"? something else?).
Alternatively, you can use [Bash's alias feature][8] to turn the cp command and its options into a single, easier-to-remember word, like this:
```
alias sync='cp -r -u -v'
```
Now, for instance, syncing my Pictures folder to a USB drive is as simple as:
```
sync Pictures /media/me/4388-E5FE
```
If you're not sure whether the name sync is already in use, you can type alias by itself at the terminal to list all the aliases currently defined.
Like it? Want to use it right away? Then open a terminal and type:
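The alias list only shows aliases; it's also worth checking whether the name already resolves to a program on your PATH. (As it happens, sync is the name of a coreutils utility that flushes filesystem buffers to disk, so this alias will shadow it in interactive shells; that's usually harmless, but worth knowing.)

```shell
# Show what "sync" resolves to before (or after) defining the alias.
# "command -v" reports the first match on PATH; aliases are not
# expanded in non-interactive scripts, so the real program is still
# reachable there.
command -v sync
```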
```
echo "alias sync='cp -r -u -v'" >> ~/.bash_aliases
```
Then, in a new terminal session, run alias to confirm that it's defined:
```
me@mymachine~$ alias
alias alert='notify-send --urgency=low -i "$([ $? = 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e '\''s/^\s*[0-9]\+\s*//;s/[;&|]\s*alert$//'\'')"'
alias egrep='egrep --color=auto'
alias fgrep='fgrep --color=auto'
alias grep='grep --color=auto'
alias gvm='sdk'
alias l='ls -CF'
alias la='ls -A'
alias ll='ls -alF'
alias ls='ls --color=auto'
alias sync='cp -r -u -v'
me@mymachine:~$
```
### Numbering backups with a Bash function
To back up a file with cp, the command usually used is:
```
cp --force --backup=numbered WORKING-FILE BACKED-UP-FILE
```
Not only do we have to remember all the cp options, we also have to retype the name of WORKING-FILE. But why keep repeating all of that when [Bash's function feature][9] can do the work for us? Like this:
Once again, you can save the following into the .bash_aliases file in your home directory:
```
function backup {
    # Note: inside a function, $0 expands to the shell's name, not
    # "backup", so the messages name the function explicitly.
    if [ $# -ne 1 ]; then
        echo "Usage: backup filename"
    elif [ -f "$1" ] ; then
        echo "cp --force --backup=numbered $1 $1"
        cp --force --backup=numbered "$1" "$1"
    else
        echo "backup: $1 is not a file"
    fi
}
```
The first if statement checks that exactly one argument has been given; otherwise, it prints the correct usage with the echo command.
The elif statement checks that the argument is a file; if it is, the second echo prints the cp command that is about to be used (with all options spelled out in full) and then executes it.
If the argument is not a file, the third echo prints an error message.
In my home directory, after I run the backup command on checkCounts.sql, I find a new file named checkCounts.sql.~1~; if I run it once more, another file named checkCounts.sql.~2~ appears.
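You can reproduce that behavior in a scratch directory. (A sketch; it relies on a documented GNU cp special case: when --force and --backup are given and source and destination are the same regular file, cp makes the backup instead of refusing to copy a file onto itself.)

```shell
# Demonstrate numbered backups in a throwaway directory (GNU cp).
cd "$(mktemp -d)"
echo "version 1" > checkCounts.sql
cp --force --backup=numbered checkCounts.sql checkCounts.sql  # creates checkCounts.sql.~1~
echo "version 2" > checkCounts.sql
cp --force --backup=numbered checkCounts.sql checkCounts.sql  # creates checkCounts.sql.~2~
ls -1
```

Afterwards, the directory contains checkCounts.sql plus the numbered copies checkCounts.sql.~1~ and checkCounts.sql.~2~.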
It worked! Just as intended, I can go on editing checkCounts.sql, and if I take snapshots of it fairly often with this command, I can fall back to the most recent version whenever I run into trouble.
At some point in the future, switching to git for version control would probably be a good idea. But a simple, handy tool like backup above is exactly what you need when you want snapshots but aren't ready for git yet.
### Conclusion
In future articles, I promise to look at more ways of using scripts and the shell's function and alias features to streamline routine, mechanical tasks and improve productivity.
In this article, I've shown how the shell's function and alias features can simplify syncing and backing up files with the cp command. If you'd like to know more, read [how to save keystrokes with command aliases][10] and [the introduction to shell scripting's shift method and custom functions][11], written by my colleagues Greg and Seth.
### About the author
[![](https://opensource.com/sites/default/files/styles/profile_pictures/public/clh_portrait2.jpg?itok=V1V-YAtY)][13] Chris Hermansen 
Engaged in computing since graduating from the University of British Columbia in 1978. A full-time Linux user since 2005, and before that a user of Solaris, SunOS, and UNIX System V. On the technical side, I have spent much of my career doing data analysis, especially spatial data analysis, and have extensive experience with programming languages such as awk, Python, PostgreSQL, PostGIS, and, more recently, Groovy. I have also built a few other things. [More about Chris Hermansen][14].
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/1/two-great-uses-cp-command-update
作者:[Chris Hermansen][a]
译者:[zyk2290](https://github.com/zyk2290)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/clhermansen
[1]:https://opensource.com/users/clhermansen
[2]:https://opensource.com/users/clhermansen
[3]:https://opensource.com/user/37806/feed
[4]:https://opensource.com/article/18/1/two-great-uses-cp-command-update?rate=J_7R7wSPbukG9y8jrqZt3EqANfYtVAwZzzpopYiH3C8
[5]:https://opensource.com/article/18/1/two-great-uses-cp-command-update#comments
[6]:https://www.flickr.com/photos/internetarchivebookimages/14803082483/in/photolist-oy6EG4-pZR3NZ-i6r3NW-e1tJSX-boBtf7-oeYc7U-o6jFKK-9jNtc3-idt2G9-i7NG1m-ouKjXe-owqviF-92xFBg-ow9e4s-gVVXJN-i1K8Pw-4jybMo-i1rsBr-ouo58Y-ouPRzz-8cGJHK-85Evdk-cru4Ly-rcDWiP-gnaC5B-pAFsuf-hRFPcZ-odvBMz-hRCE7b-mZN3Kt-odHU5a-73dpPp-hUaaAi-owvUMK-otbp7Q-ouySkB-hYAgmJ-owo4UZ-giHgqu-giHpNc-idd9uQ-osAhcf-7vxk63-7vwN65-fQejmk-pTcLgA-otZcmj-fj1aSX-hRzHQk-oyeZfR
[7]:https://opensource.com/article/17/7/two-great-uses-cp-command
[8]:https://opensource.com/article/17/5/introduction-alias-command-line-tool
[9]:https://opensource.com/article/17/1/shell-scripting-shift-method-custom-functions
[10]:https://opensource.com/article/17/5/introduction-alias-command-line-tool
[11]:https://opensource.com/article/17/1/shell-scripting-shift-method-custom-functions
[12]:https://opensource.com/tags/linux
[13]:https://opensource.com/users/clhermansen
[14]:https://opensource.com/users/clhermansen

[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (GameMode A Tool To Improve Gaming Performance On Linux)
[#]: via: (https://www.ostechnix.com/gamemode-a-tool-to-improve-gaming-performance-on-linux/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)
GameMode - A Tool To Improve Gaming Performance On Linux
======
![Gamemmode improve gaming performance on Linux][1]
Ask some Linux users why they still dual-boot with Windows, and the answer is likely to be: "Games!" And it's true! Fortunately, open source gaming platforms like [**Lutris**][2] and the proprietary platform **Steam** have brought many games to Linux and have significantly improved the Linux gaming experience in recent years. Today, I stumbled upon another gaming-related open source tool for Linux named **GameMode**, which lets users improve gaming performance on Linux.
GameMode is basically a daemon/lib combo that optimizes a Linux system's gaming performance on demand. I assumed GameMode was a tool that kills resource-hungry processes running in the background, but it isn't. It actually just lets the CPU **automatically run in performance mode** and helps Linux users get the best possible performance out of their games.
While you're playing, GameMode significantly improves gaming performance by requesting that a set of optimizations be temporarily applied to the host machine. Currently, it supports the following optimizations:
* CPU governor,
  * I/O priority,
  * Process niceness,
  * Kernel scheduler (SCHED_ISO),
  * Screensaver inhibiting,
  * GPU performance mode (NVIDIA and AMD) and GPU overclocking (NVIDIA),
  * Custom scripts.
GameMode is a free and open source system tool developed by [**Feral Interactive**][3], a world-leading game publisher.
### Installing GameMode
GameMode is available for many Linux distributions.
On Arch Linux and its variants, you can install it from the [**AUR**][4] using any AUR helper program, such as [**Yay**][5]:
```
$ yay -S gamemode
```
On Debian, Ubuntu, Linux Mint, and other Deb-based systems:
```
$ sudo apt install gamemode
```
If GameMode isn't available for your system, you can compile and install it manually from source, as described under the development section of its GitHub page.
### Activating GameMode support to improve gaming performance on Linux
The following games ship with integrated GameMode support, so no additional configuration is needed to activate GameMode for them:
* Rise of the Tomb Raider
* Total War Saga: Thrones of Britannia
* Total War: Warhammer 2
* Dirt 4
* Total War: Three Kingdoms
Just run these games, and GameMode support is enabled automatically.
There is also an [extension][6] that integrates GameMode with the GNOME Shell. It indicates in the top bar when GameMode is active.
For other games, you may need to request GameMode support manually, like this:
```
gamemoderun ./game
```
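For games launched through Steam, GameMode's README documents the equivalent approach of setting the game's Steam launch options so that the game is wrapped with gamemoderun:

```
gamemoderun %command%
```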
I don't play games, and I haven't played any for many years, so I can't share any actual benchmarks.
However, I found a short video tutorial on YouTube about enabling GameMode support for Lutris games. It's a good starting point for anyone who wants to try GameMode for the first time.
<https://youtu.be/4gyRyYfyGJw>
Judging by the comments on the video, I can say that GameMode really does improve gaming performance on Linux.
For more details, refer to the [**GameMode GitHub repository**][7].
* * *
**Related reading:**
* [**GameHub - A unified library to put all your games under one roof**][8]
* [**How to run MS-DOS games and programs in Linux**][9]
* * *
Have you used GameMode? Does it really improve gaming performance on Linux? Share your thoughts in the comment section below.
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/gamemode-a-tool-to-improve-gaming-performance-on-linux/
作者:[sk][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/07/Gamemode-720x340.png
[2]: https://www.ostechnix.com/manage-games-using-lutris-linux/
[3]: http://www.feralinteractive.com/en/
[4]: https://aur.archlinux.org/packages/gamemode/
[5]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
[6]: https://github.com/gicmo/gamemode-extension
[7]: https://github.com/FeralInteractive/gamemode
[8]: https://www.ostechnix.com/gamehub-an-unified-library-to-put-all-games-under-one-roof/
[9]: https://www.ostechnix.com/how-to-run-ms-dos-games-and-programs-in-linux/