mirror of https://github.com/LCTT/TranslateProject.git synced 2025-03-30 02:40:11 +08:00

Merge remote-tracking branch 'LCTT/master' into 20160325-Network-automation-with-Ansible

This commit is contained in:
Xingyu.Wang 2018-08-29 15:57:42 +08:00
commit edbeacbb2d
55 changed files with 4952 additions and 1616 deletions
published
sources
talk
tech
20180131 What I Learned from Programming Interviews.md
20180529 How To Add Additional IP (Secondary IP) In Ubuntu System.md
20180717 Getting started with Etcher.io.md
20180720 How to Install 2048 Game in Ubuntu and Other Linux Distributions.md
20180730 How to use VS Code for your Python projects.md
20180803 10 Popular Windows Apps That Are Also Available on Linux.md
20180803 Why I still love Alpine for email at the Linux terminal.md
20180808 5 applications to manage your to-do list on Fedora.md
20180808 5 open source role-playing games for Linux.md
20180810 6 Reasons Why Linux Users Switch to BSD.md
20180813 MPV Player- A Minimalist Video Player for Linux.md
20180821 A Collection Of More Useful Unix Utilities.md
20180821 How I recorded user behaviour on my competitor-s websites.md
20180822 What is a Makefile and how does it work.md
20180823 An introduction to pipes and named pipes in Linux.md
20180823 Getting started with Sensu monitoring.md
20180823 How To Easily And Safely Manage Cron Jobs In Linux.md
20180823 How to publish a WordPress blog to a static GitLab Pages site.md
20180824 5 cool music player apps.md
20180824 Add free books to your eReader- Formatting tips.md
20180824 How to install software from the Linux command line.md
20180824 Steam Makes it Easier to Play Windows Games on Linux.md
20180824 What Stable Kernel Should I Use.md
20180824 [Solved] -sub process usr bin dpkg returned an error code 1- Error in Ubuntu.md
20180826 How to capture and analyze packets with tcpdump command on Linux.md
20180827 4 tips for better tmux sessions.md
20180827 A sysadmin-s guide to containers.md
20180827 An introduction to diffs and patches.md
20180827 Solve -error- failed to commit transaction (conflicting files)- In Arch Linux.md
20180827 Top 10 Raspberry Pi blogs to follow.md
20180828 A Cat Clone With Syntax Highlighting And Git Integration.md
20180828 An Introduction to Quantum Computing with Open Source Cirq Framework.md
20180828 How to Play Windows-only Games on Linux with Steam Play.md
translated/tech

View File

@ -1,11 +1,13 @@
尝试,学习,修改:新 IT 领导者的代码
尝试、学习、修改:新 IT 领导者的代码
=====
> 随着创新步伐的加快,长期规划变得越来越困难。让我们重新思考一下我们对变化的反应方式。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ship_wheel_gear_devops_kubernetes.png?itok=xm4a74Kv)
几乎每一天,新的技术发展都在威胁破坏,甚至是那些最复杂,最完善的商业计划。组织经常发现自己正在努力适应新的环境,这导致了他们对未来规划的转变。
几乎每一天,新的技术发展都可能会动摇那些甚至最复杂、最完善的商业计划。组织经常发现自己需要不断努力适应新的环境,这导致了他们对未来规划的转变。
根据 CompTIA 2017 年的[研究][1],目前只有 34% 的公司正在制定超过 12 个月的 IT 架构计划。从长期计划转变的一个原因是:商业环境变化如此之快,以至于几乎不可能进一步规划未来。[CIO.com 说道][1],“如果你的公司正试图制定一项将持续五到十年的计划,那就忘了它。”
我听过来自世界各地无数客户和合作伙伴的类似声明:技术创新正以一种前所未有的速度发生着。
@ -13,21 +15,21 @@
### 计划是怎么死的
正如我在 Open Organization开源组织中写的那样,传统经营组织针对工业经济进行了优化。他们采用等级结构和严格规定的流程,以实现地位竞争优势。要取得成功,他们必须确定他们想要实现的战略地位。然后,他们必须制定并规划实现目标的计划,并以最有效的方式执行这些计划,通过协调活动和推动合规性。
正如我在<ruby>开放式组织<rt>The Open Organization</rt></ruby>中写的那样,传统经营组织针对工业经济进行了优化。他们采用等级结构和严格规定的流程,以实现地位竞争优势。要取得成功,他们必须确定他们想要实现的战略地位。然后,他们必须制定并规划实现目标的计划,并以最有效的方式执行这些计划,通过协调活动和推动合规性。
管理层的职责是优化这一过程:计划,规定,执行。包括:让我们想象一个有竞争力的优势地位;让我们来配置组织以最终到达那里;然后让我们通过确保组织的所有方面都遵守规定来推动执行。这就是我所说的“机械管理”,对于不同时期来说它都是一个出色的解决方案。
管理层的职责是优化这一过程:计划、规定、执行。包括:让我们想象一个有竞争力的优势地位;让我们来配置组织以最终达成目标;然后让我们通过确保组织的所有方面都遵守规定来推动执行。这就是我所说的“机械管理”,对于不同时期来说它都是一个出色的解决方案。
在当今动荡不定的世界中,我们预测和定义战略位置的能力正在下降,因为变化的速度,新变量的引入速度正在加速。传统的,长期的,战略性规划和执行不像以前那么有效。
在当今动荡不定的世界中,我们预测和定义战略位置的能力正在下降,因为变化的速度,新变量的引入速度正在加速。传统的、长期的、战略性规划和执行不像以前那么有效。
如果长期规划变得如此困难,那么规定必要的行为就更具有挑战性。并且衡量对计划的合规性几乎是不可能的。
这一切都极大地影响了人们的工作方式。与过去传统经营组织中的工人不同,他们为自己能够重复行动而感到自豪,几乎没有变化和舒适的确定性 -- 今天的工人在充满模糊性的环境中运作。他们的工作需要更大的创造力,直觉和批判性判断 -- 有更大的要求是背离过去的“正常”,适应当今的新情况。
这一切都极大地影响了人们的工作方式。与过去传统经营组织中的工人不同,他们为自己能够重复行动而感到自豪,几乎没有变化和舒适的确定性 —— 今天的工人在充满模糊性的环境中运作。他们的工作需要更大的创造力、直觉和批判性判断 —— 更多的需要背离过去的“常规”,以适应当今的新情况。
以这种新方式工作对于价值创造变得更加重要。我们的管理系统必须专注于构建结构、系统和流程,以帮助创建积极主动的工人,他们能够以快速和敏捷的方式进行创新和行动。
我们需要提出一个不同的解决方案来优化组织,以适应不同的经济时代,从自下而上而不是自上而下开始。我们需要替换过去的三步骤 -- 计划,规定,执行,以一种更适应当今动荡天气的方法来取得成功 -- 尝试,学习,修改。
我们需要提出一个不同的解决方案来优化组织,以适应不同的经济时代,从自下而上而不是自上而下开始。我们需要替换过去的三步骤 —— 计划、规定、执行,以一种更适应当今动荡天气的方法来取得成功 —— 尝试、学习、修改。
### 尝试,学习,修改
### 尝试、学习、修改
因为环境变化如此之快,而且几乎没有任何预警,并且因为我们需要采取的步骤不再提前计划,我们需要培养鼓励创造性试错的环境,而不是坚持对五年计划的忠诚。以下是以这种方式开始工作的一些启示:
@ -46,7 +48,7 @@ via: https://opensource.com/open-organization/18/3/try-learn-modify
作者:[Jim Whitehurst][a]
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,223 @@
逐层拼接云原生栈
======
> 看着我们在纽约的办公大楼,我们发现了一种观察不断变化的云原生领域的完美方式。
在 Packet我们的工作价值<ruby>基础设施<rt>infrastructure</rt></ruby>自动化)是非常基础的。因此,我们花费大量的时间来研究我们之上所有生态系统中的参与者和趋势 —— 以及之下的极少数!
当你在任何生态系统的汪洋大海中徜徉时,很容易困惑或迷失方向。我知道这是事实,因为当我去年进入 Packet 工作时,从 Bryn Mawr 获得的英语学位,并没有让我完全得到一个 [Kubernetes][Kubernetes] 的认证。:)
由于它超快的演进和巨大的影响,云原生生态系统打破了先例。似乎每眨一次眼睛,之前全新的技术(更不用说所有相关的理念了)就变得有意义……或至少有趣了。和其他许多人一样,我依据无处不在的 [CNCF][CNCF] 的“[云原生蓝图][1]”作为我了解这个领域的参考标准。尽管如此,如果有一个定义这个生态系统的元素,那它一定是贡献和引领它们的人。
所以,在 12 月份一个很冷的下午,当我们走回办公室时,我们偶然发现了一个给投资人解释“云原生”的创新方式,当我们谈到从 [Aporeto][Aporeto] 中区分 [Cilium][Cilium] 的细微差别时,以及为什么从 [CoreDNS][CoreDNS] 和 [Spiffe][Spiffe] 到 [Digital Rebar][Digital Rebar] 和 [Fission][Fission] 的所有这些都这么有趣时,他的眼里充满了兴趣。
在新世贸中心的影子里,向我们位于 13 层的狭窄办公室望去,我们突然想到一个把我们带到那个神奇世界的好主意:为什么不把它画出来呢?(LCTT 译注:“rabbit hole”有多种含义,此处采用“爱丽丝梦游仙境”中的“兔子洞”含义。)
![][2]
于是我们开始了把云原生栈逐层拼接起来的旅程。让我们一起探索它,给你一个“仅限今日有效”的福利。(LCTT 译注:意即云原生领域变化很快,可能本文/本图中所述很快过时。)
[查看高清大图][3](25Mb),或给我们发邮件索取副本。
### 从最底层开始
当我们开始下笔的时候,我们希望首先亮出的是我们每天都在打交道的那一部分:硬件,但我们知道那对用户却是基本上不可见的。就像任何投资于下一个伟大的(通常是私有的)东西的秘密实验室一样,我们认为地下室是其最好的地点。
从大家公认的像 Intel、AMD 和华为(传言他们雇佣的工程师接近 80000 名)这样的巨头,到像 [Mellanox][Mellanox] 这样的细分市场参与者,硬件生态系统现在非常火。事实上,随着数十亿美元投入去攻克新的 offload(LCTT 译注:offload 泛指以前由软件及 CPU 来完成的工作,现在通过硬件来完成,以提升速度并降低 CPU 负载的做法)、GPU、定制协处理器,我们可能正在进入硬件的黄金时代。
著名的软件先驱[艾伦·凯][Alan Kay](Alan Kay)在 25 年前说过:“真正认真对待软件的人应该自己创造硬件”。说得不错,Alan!
### 云即资本
就像我们的 CEO Zac Smith 多次跟我说的:一切都是钱的事。不仅要制造它,还要消费它!在云中,数十亿美元的投入才能让数据中心出现计算机,这样才能让开发者消费它。换句话说(根本没云,它只是别人的电脑而已):
![][4]
我们认为,对于“银行”(即能让云运转起来的借款人或投资人)来说最好的位置就是一楼。因此我们将大堂改造成银行家的咖啡馆,以便为所有的创业者提供幸运之轮。
![][5]
### 连通和动力
如果金钱是润滑油,那么消耗大量燃料的引擎就是数据中心供应商和连接它们的网络。我们称他们为“连通”和“动力”。
从像 [Equinix][Equinix] 这样处于核心地位的接入商和像 [Vapor.io][Vapor.io] 这样的接入新贵,到 [Verizon][Verizon]、[Crown Castle][Crown Castle] 和其它接入商铺设在地下(或海底)的“管道”,这是我们所有的栈都依赖但很少有人能看到的一部分。
因为我们花费大量的时间去研究数据中心和连通性,需要注意的一件事情是,这一部分的变化非常快,尤其是在 5G 正式商用时,某些负载开始不再那么依赖中心化的基础设施了。
边缘计算即将到来!:-)
![][6]
### 嗨,它就是基础设施!
居于“连通”和“动力”之上的这一层,我们爱称为“处理器层”。这是奇迹发生的地方 —— 我们将来自下层的创新和实物投资转变成一个 API 终端的某些东西。
由于这是纽约的一个大楼,我们让在这里的云供应商处于纽约的中心。这就是为什么你会看到([Digital Ocean][Digital Ocean] 系的)鲨鱼 Sammy 和对 “meet me” 房间里面的 Google 标志的致意的原因了。
正如你所见,这个场景是非常写实的。它是由多层机架堆叠起来的。尽管我们爱 EWR1 的设备经理(Michael Pedrazzini),我们仍在努力尽可能减少这种体力劳动。毕竟布线专业的博士学位是很难拿到的。
![][7]
### 供给
再上一层,在基础设施层之上是供给层。这是我们最喜欢的地方之一,它以前被我们称为<ruby>配置管理<rt>config management</rt></ruby>。但是现在,一开始就到处都是<ruby>不可变基础设施<rt>immutable infrastructure</rt></ruby>和自动化:[Terraform][Terraform]、[Ansible][Ansible]、[Quay.io][Quay.io] 等等类似的东西。你可以看出,软件正在一层层向上渗透整个栈,对吗?
Kelsey Hightower 最近写道:“呆在无聊的基础设施中是一个让人兴奋的时刻”。我不认为这说的是物理部分(虽然我们认为它非常让人兴奋),但是由于软件持续侵入到栈的所有层,那必将是一个疯狂的旅程。
![][8]
### 操作系统
供应就绪后,我们来到操作系统层。在这里你可以看到我们打趣一些我们最喜欢的同事:注意上面 Brian Redbeard 那超众的瑜珈姿势。:)
Packet 为客户提供了 11 种主要的操作系统可供选择,包括一些你在图中看到的:[Ubuntu][Ubuntu]、[CoreOS][CoreOS]、[FreeBSD][FreeBSD]、[Suse][Suse] 和各种 [Red Hat][Red Hat] 系的发行版。我们看到越来越多的人们在这一层上加入了他们自己的看法:从定制内核和用于不可变部署的<ruby>黄金镜像<rt>golden images</rt></ruby>(LCTT 译注:golden image 指定型的镜像或模板,一般是经过一些定制,并做快照和版本控制,由此可拷贝出大量与此镜像一致的开发、测试或部署环境,也有人称作 master image),到像 [NixOS][NixOS] 和 [LinuxKit][LinuxKit] 这样的项目。
![][9]
### 运行时
为了有趣些,我们将<ruby>运行时<rt>runtime</rt></ruby>放在了体育馆内,并为 CoreOS 赞助的 [rkt][rkt] 和 [Docker][Docker] 的容器化举行了一次比赛。而无论如何赢家都是 CNCF
我们认为快速演进的存储生态系统应该是一些可上锁的储物柜。关于存储部分有趣的地方在于许多的新玩家尝试去解决持久性的挑战问题,以及性能和灵活性问题。就像他们说的:存储很简单。
![][10]
### 编排
在过去的这一年里,编排层全是 Kubernetes 了,因此我们选取了其中一位著名的布道者(Kelsey Hightower),并在这个古怪的会议场景中给他一个特写。在我们的团队中有一些 [Nomad][Nomad](LCTT 译注:一个管理机器集群并在集群上运行应用程序的工具)的忠实粉丝,并且如果抛开 Docker 和它的工具集的影响,就无从谈起云原生。
虽然负载编排应用程序在我们栈中的地位非常高,我们看到的各种各样的证据表明,这些强大的工具开始去深入到栈中,以帮助用户利用 GPU 和其它特定硬件的优势。请继续关注 —— 我们正处于容器化革命的早期阶段!
![][11]
### 平台
这是栈中我们喜欢的层之一,因为每个平台都有如此多的工具帮助用户去完成他们想要做的事情(顺便说一下,不是去运行容器,而是运行应用程序)。从 [Rancher][Rancher] 和 [Kontena][Kontena],到 [Tectonic][Tectonic] 和 [Redshift][Redshift],再到像 [Cycle.io][Cycle.io] 和 [Flynn.io][Flynn.io] 这样完全不同的方法 —— 我们看到这些项目如何以不同的方式为用户提供服务,总是激动不已。
关键点:这些平台是帮助用户转化云原生生态系统中各种各样的快速变化的部分。很高兴能看到他们各自带来的东西!
![][12]
### 安全
当说到安全时,今年真是很忙的一年!我们尝试去展示一些很著名的攻击,并说明随着工作负载变得更加分散和更加可迁移(当然,同时攻击者也变得更加智能),这些各式各样的工具是如何去帮助保护我们的。
我们看到面向不可信环境(如 Aporeto)和低级安全(Cilium)的强劲动向,以及像 [Tigera][Tigera] 这样尝试在网络级别上的可信方法。不管你的方法如何,记住这一点:安全无止境。:0
![][13]
### 应用程序
如何去表示海量的、无限的应用程序生态系统?在这个案例中,很容易:我们在纽约,选我们最喜欢的。;) 从 [Postgres][Postgres] “房间里的大象” 和 [Timescale][Timescale] 时钟,到鬼鬼祟祟的 [ScyllaDB][ScyllaDB] 垃圾桶和那个悠闲的 [Travis][Travis] 哥们 —— 我们把这个片子拼到一起很有趣。
让我们感到很惊奇的一件事情是:很少有人注意到那个复印屁股的家伙。我想现在复印机已经不常见了吧?
![][14]
### 可观测性
由于我们的工作负载开始到处移动,规模也越来越大,这里没有一件事情能够像一个非常好用的 [Grafana][Grafana] 仪表盘或方便的 [Datadog][Datadog] 代理那样让人欣慰了。由于复杂度的提升,[SRE][SRE] 时代开始越来越多地依赖监控告警和其它智能事件去帮我们感知发生的事件,出现了越来越多的自我修复的基础设施和应用程序。
在未来的几个月或几年中,我们将看到什么样的面孔进入这一领域……或许是一些人工智能、区块链、机器学习支撑的仪表盘?:-)
![][15]
### 流量管理
人们往往认为互联网“只是能工作而已”,但事实上,我们很惊讶于它居然能如此工作。我的意思是,就这些大规模的、不同的网络间的松散连接 —— 你不是在开玩笑吧?
能够把所有的这些独立的网络拼接到一起的一个原因是流量管理、DNS 和类似的东西。随着规模越来越大,这些让互联网变得更快、更安全、同时更具弹性。我们尤其高兴的是看到像 [Fly.io][Fly.io] 和 [NS1][NS1] 这样的新贵与优秀的老牌玩家进行竞争,最后的结果是整个生态系统都得以提升。让竞争来的更激烈吧!
![][16]
### 用户
如果没有非常棒的用户,技术栈还有什么用呢?确实,他们享受了大量的创新,但在云原生的世界里,他们所做的远不止消费这么简单:他们也创造并贡献了很多。从像 Kubernetes 这样的大量的贡献者到越来越多的(但同样重要)更多方面,而我们都是其中的非常棒的一份子。
在我们屋顶上有许多悠闲的用户,比如 [Ticketmaster][Ticketmaster] 和[《纽约时报》][New York Times],而不仅仅是新贵:这些组织拥抱了部署和管理应用程序的方法的变革,并且他们的用户正在享受变革带来的回报。
![][17]
### 同样重要的,成熟的监管!
在以前的生态系统中,基金会扮演了一个非常被动的“幕后”角色。而 CNCF 不是!他们的目标(构建一个健壮的云原生生态系统),勇立潮流之先 —— 他们不仅已迎头赶上还一路领先。
从坚实的治理和经过深思熟虑的项目组,到像 CNCF 蓝图、CNCF 跨云 CI、Kubernetes 认证和讲师团这样的拓展 —— CNCF 已不再是“仅仅”受欢迎的 [KubeCon + CloudNativeCon][KCCNC] 了。
--------------------------------------------------------------------------------
via: https://www.packet.net/blog/splicing-the-cloud-native-stack/
作者:[Zoe Allen][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[qhwdw](https://github.com/qhwdw)
校对:[wxy](https://github.com/wxy), [pityonline](https://github.com/pityonline)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.packet.net/about/zoe-allen/
[1]: https://landscape.cncf.io/landscape=cloud
[2]: https://assets.packet.net/media/images/PIFg-30.vesey.street.ny.jpg
[3]: https://www.dropbox.com/s/ujxk3mw6qyhmway/Packet_Cloud_Native_Building_Stack.jpg?dl=0
[4]: https://assets.packet.net/media/images/3vVx-there.is.no.cloud.jpg
[5]: https://assets.packet.net/media/images/X0b9-the.bank.jpg
[6]: https://assets.packet.net/media/images/2Etm-ping.and.power.jpg
[7]: https://assets.packet.net/media/images/C800-infrastructure.jpg
[8]: https://assets.packet.net/media/images/0V4O-provisioning.jpg
[9]: https://assets.packet.net/media/images/eMYp-operating.system.jpg
[10]: https://assets.packet.net/media/images/9BII-run.time.jpg
[11]: https://assets.packet.net/media/images/njak-orchestration.jpg
[12]: https://assets.packet.net/media/images/1QUS-platforms.jpg
[13]: https://assets.packet.net/media/images/TeS9-security.jpg
[14]: https://assets.packet.net/media/images/SFgF-apps.jpg
[15]: https://assets.packet.net/media/images/SXoj-observability.jpg
[16]: https://assets.packet.net/media/images/tKhf-traffic.management.jpg
[17]: https://assets.packet.net/media/images/7cpe-users.jpg
[Kubernetes]: https://kubernetes.io/
[CNCF]: https://www.cncf.io/
[Aporeto]: https://www.aporeto.com/
[Cilium]: https://cilium.io/
[CoreDNS]: https://coredns.io/
[Spiffe]: https://spiffe.io/
[Digital Rebar]: http://rebar.digital/
[Fission]: https://fission.io/
[Mellanox]: http://www.mellanox.com/
[Alan Kay]: https://en.wikipedia.org/wiki/Alan_Kay
[Equinix]: https://www.equinix.com/
[Vapor.io]: https://www.vapor.io/
[Verizon]: https://www.verizon.com/
[Crown Castle]: http://www.crowncastle.com/
[Digital Ocean]: https://www.digitalocean.com/
[Terraform]: https://www.terraform.io/
[Ansible]: https://www.ansible.com/
[Quay.io]: https://quay.io/
[Ubuntu]: https://www.ubuntu.com/
[CoreOS]: https://coreos.com/
[FreeBSD]: https://www.freebsd.org/
[Suse]: https://www.suse.com/
[Red Hat]: https://www.redhat.com/
[NixOS]: https://nixos.org/
[LinuxKit]: https://github.com/linuxkit/linuxkit
[rkt]: https://coreos.com/rkt/
[Docker]: https://www.docker.com/
[Nomad]: https://www.nomadproject.io/
[Rancher]: https://rancher.com/
[Kontena]: https://kontena.io/
[Tectonic]: https://coreos.com/tectonic/
[Redshift]: https://aws.amazon.com/redshift/
[Cycle.io]: https://cycle.io/
[Flynn.io]: https://flynn.io/
[Tigera]: https://www.tigera.io/
[Postgres]: https://www.postgresql.org/
[Timescale]: https://www.timescale.com/
[ScyllaDB]: https://www.scylladb.com/
[Travis]: https://travis-ci.com/
[Grafana]: https://grafana.com/
[Datadog]: https://www.datadoghq.com/
[SRE]: https://en.wikipedia.org/wiki/Site_Reliability_Engineering
[Fly.io]: https://fly.io/
[NS1]: https://ns1.com/
[Ticketmaster]: https://www.ticketmaster.com/
[New York Times]: https://www.nytimes.com/
[KCCNC]: https://www.cncf.io/community/kubecon-cloudnativecon-events/

View File

@ -0,0 +1,154 @@
如何确定你的 Linux 发行版中有没有某个软件包
======
![](https://www.ostechnix.com/wp-content/uploads/2018/06/Whohas-720x340.png)
有时,你可能会想知道如何在你的 Linux 发行版上寻找一个特定的软件包。或者,你仅仅只是想知道安装在你的 Linux 上的软件包有什么版本。如果这就是你想知道的信息,你今天走运了。我正好知道一个小工具能帮你抓到上述信息,下面隆重推荐 —— Whohas:这是一个命令行工具,它能一次查询好几个软件包列表,以检查你的软件包是否存在。目前,whohas 支持 Arch、Debian、Fedora、Gentoo、Mandriva、openSUSE、Slackware、Source Mage、Ubuntu、FreeBSD、NetBSD、OpenBSD(LCTT 译注:*BSD 不是 Linux)、Fink、MacPorts 和 Cygwin。使用这个小工具,软件包的维护者能轻而易举从别的 Linux 发行版里找到 ebuilds、pkgbuilds 等等类似的包定义文件。
Whohas 是用 Perl 语言开发的自由、开源的工具。
### 在你的 Linux 中寻找一个特定的包
#### 安装 Whohas
Whohas 在 Debian、Ubuntu、Linux Mint 的默认软件仓库里提供。如果你正在使用某种基于 DEB 的系统,你可以用如下命令安装:
```
$ sudo apt-get install whohas
```
对基于 Arch 的系统,[AUR][1] 里就有提供 whohas。你能使用任何的 AUR 助手程序来安装。
使用 [Packer][2]
```
$ packer -S whohas
```
或使用 [Trizen][3]:
```
$ trizen -S whohas
```
使用 [Yay][4]:
```
$ yay -S whohas
```
使用 [Yaourt][5]
```
$ yaourt -S whohas
```
在别的 Linux 发行版上,从[这里][6]下载源代码并手工编译安装。
#### 使用方法
Whohas 的主要目标是想让你知道:
* 哪个 Linux 发布版提供了用户依赖的包。
* 对于各个 Linux 发行版,指定的软件包是什么版本,或者在这个 Linux 发行版的各个不同版本上,指定的软件包是什么版本。
让我们试试看上面的的功能,比如说,哪个 Linux 发行版里有 vim 这个软件?我们可以运行如下命令:
```
$ whohas vim
```
这个命令将会显示所有包含可安装的 vim 的 Linux 发行版的信息,包括软件包的大小、仓库地址和下载 URL。
![][8]
你甚至可以通过管道将输出的结果按照发行版的字母排序,只需加入 `sort` 命令即可。
```
$ whohas vim | sort
```
请注意上述命令将会显示所有以 vim 开头的软件包,包括 vim-spell、vimcommander、vimpager 等等。你可以继续使用 Linux 的 `grep` 命令在 “vim” 的前后加上空格来缩小你的搜索范围,直到满意为止。
```
$ whohas vim | sort | grep " vim"
$ whohas vim | sort | grep "vim "
$ whohas vim | sort | grep " vim "
```
所有将空格放在包名字前面的搜索将会显示以包名字结尾的包。所有将空格放在包名字后面的搜索将会显示以包名字开头的包。前后都有空格将会严格匹配。
又或者,你就使用 `--strict` 来严格限制结果。
```
$ whohas --strict vim
```
有时,你想知道一个包在不在一个特定的 Linux 发行版里。例如,你想知道 vim 是否在 Arch Linux 里,请运行:
```
$ whohas vim | grep "^Arch"
```
(LCTT 译注:在结果里搜索以 Arch 开头的 Linux 发行版)
Linux 发行版的命名缩写为:'archlinux'、'cygwin'、'debian'、'fedora'、 'fink'、'freebsd'、'gentoo'、'mandriva'、'macports'、'netbsd'、'openbsd'、'opensuse'、'slackware'、'sourcemage' 和 'ubuntu'。
你也可以用 `-d` 选项来得到同样的结果。
```
$ whohas -d archlinux vim
```
这个命令将仅在 Arch Linux 发行版下搜索 vim 包。
如果要在多个 Linux 发行版下搜索,如 'archlinux'、'ubuntu',请使用如下命令。
```
$ whohas -d archlinux,ubuntu vim
```
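你也可以把上面的选项组合起来使用。下面的组合用法是基于前文选项的一个合理推测(并非手册原文),实际效果请以 `man whohas` 为准:

```
# 仅在 Arch 和 Ubuntu 的软件包列表中严格匹配包名 vim
$ whohas --strict -d archlinux,ubuntu vim
```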
你甚至可以用 `whohas` 来查找哪个发行版有 whohas 包。
```
$ whohas whohas
```
更详细的信息,请参照手册。
```
$ man whohas
```
#### 最后的话
当然,任何一个 Linux 发行版的包管理器都能轻松地在对应的软件仓库里找到自己管理的包。不过,whohas 帮你整合并比较了在不同的 Linux 发行版下指定的软件包信息,这样你能轻易地在不同平台之间进行比较。试一下 whohas,你一定不会失望的。
好了,今天就到这里吧,希望前面讲的对你有用,下次我还会带来更多好东西!!
欧耶!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/find-if-a-package-is-available-for-your-linux-distribution/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[DavidChenLiang](https://github.com/davidchenliang)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://aur.archlinux.org/packages/whohas/
[2]:https://www.ostechnix.com/install-packer-arch-linux-2/
[3]:https://www.ostechnix.com/trizen-lightweight-aur-package-manager-arch-based-systems/
[4]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
[5]:https://www.ostechnix.com/install-yaourt-arch-linux/
[6]:http://www.philippwesche.org/200811/whohas/intro.html
[8]:http://www.ostechnix.com/wp-content/uploads/2018/06/whohas-1.png

View File

@ -1,11 +1,11 @@
逃离 Google,重获自由(与君共勉)
======
原名How I Fully Quit Google (And You Can, Too)
> 寻求挣脱科技巨头的一次开创性尝试
在过去的六个月里,难以想象我到底经历了些什么。艰难的、时间密集的、启发性的探索,为的只是完全摒弃一家公司 —— Google谷歌—— 的产品。本该是件简简单单的任务,但真要去做,花费在研究和测试上的又何止几个小时。但我成功了。现在,我已经不需要 Google 了,作为西方世界中极其少数的群体中的一份子,不再使用世界上最有价值的两家科技公司的产品(是的,我也不用 [Facebook脸书][6])。
![](https://cdn-images-1.medium.com/max/2000/1*BWtEYNsmqON6zdURcLa6hg.png)
在过去的六个月里,难以想象我到底经历了些什么。艰难的、耗时的、开创性的探索,为的只是完全摒弃一家公司 —— Google谷歌—— 的产品。本该是件简简单单的任务,但真要去做,花费在研究和测试上的又何止几个小时。但我成功了。现在,我已经不需要 Google 了,作为西方世界中极其少数的群体中的一份子,不再使用世界上最有价值的两家科技公司的产品(是的,我也不用 [Facebook脸书][6])。
本篇指南将向你展示我逃离 Google 生态的始末,以及根据本人的研究和个人需求所选择的替代方案。我不是技术方面的专家,或者说程序员,但作为记者,我的工作要求我对安全和隐私的问题保持警惕。
@ -17,17 +17,17 @@
Google 很快就从仅提供检索服务转向提供其它服务,其中许多都是我欣然拥抱的服务。早在 2005 年,当时你们可能还只能[通过邀请][7]加入 Gmail 的时候我就已经是早期使用者了。Gmail 采用了线程对话、归档、标签,毫无疑问是我使用过的最好的电子邮件服务。当 Google 在 2006 年推出其日历工具时那种对操作的改进绝对是革命性的。针对不同日历使用不同的颜色进行编排、检索事件、以及发送可共享的邀请操作极其简单。2007 年推出的 Google Docs 同样令人惊叹。在我的第一份全职工作期间,我还促成我们团队使用支持多人同时编辑的 Google 电子表格、文档和演示文稿来完成日常工作。
和许多人一样,我也是 Google 开疆拓土过程中的受害者。从搜索引擎到电子邮件、文档、分析、再到照片,许多其它服务都建立在彼此之上,相互勾连。Google 从一家发布实用产品的公司转变成诱困用户的公司,与此同时将整个互联网转变为牟利和数据采集的机器。Google 在我们的数字生活中几乎无处不在,这种程度的存在远非其他公司可以比拟。与之相比,使用其他科技巨头的产品想要抽身就相对容易。对于 Apple(苹果),你要么身处 iWorld 之中,要么是局外人。亚马逊亦是如此,甚至连 Facebook 也不过是拥有少数的几个平台,不用 Facebook 更多的是[心理挑战][8],实际上并没有多么困难。
然而Google 无处不在。无论是笔记本电脑、智能手机或者平板电脑,我猜其中至少会有那么一个 Google 的应用程序。Google 就是搜索(引擎)、地图、电子邮件、浏览器和大多数智能手机操作系统的代名词。甚至还有些应用有赖于其提供的“[服务][9]”和分析,比方说 Uber 便需要采用 Google Maps 来运营其乘车服务。
然而Google 无处不在。无论是笔记本电脑、智能手机或者平板电脑,我猜其中至少会有那么一个 Google 的应用程序。Google 就是搜索(引擎)、地图、电子邮件、浏览器和大多数智能手机操作系统的代名词。甚至还有些应用有赖于其提供的“[服务][9]”和分析,比方说 Uber 便需要采用 Google 地图来运营其乘车服务。
Google 现在俨然已是许多语言中的单词,但彰显其超然全球统治地位的方面显然不止于此。可以说只要你不是极其注重个人隐私,那其庞大而成套的工具几乎没有多少众所周知或广泛使用的替代品。这恰好也是大家选择 Google 的原因,在很多方面它能更好地替代现有的产品。但现在,使我们难以割舍的主要原因其实是 Google 已经成为了默认选择,或者说由于其主导地位导致替代品无法对我们构成足够的吸引。
事实上,替代方案是存在的,这些年自 Edward Snowden爱德华·斯诺登披露 Google 涉事 [Prism棱镜][10]以来,又陆续涌现了许多替代品。我从去年年底开始着手这个项目。经过六个月的研究、测评以及大量的尝试和失败,我终于找到了所有我正在使用的 Google 产品对应的注重个人隐私的替代品。令我感到吃惊的是,其中的一些替代品比 Google 的做的还要好。
事实上,替代方案是存在的,这些年自<ruby>爱德华·斯诺登<rt>Edward Snowden</rt></ruby>披露 Google 涉事 <ruby>[棱镜][10]<rt>Prism</rt></ruby>以来,又陆续涌现了许多替代品。我从去年年底开始着手这个项目。经过六个月的研究、测评以及大量的尝试和失败,我终于找到了所有我正在使用的 Google 产品对应的注重个人隐私的替代品。令我感到吃惊的是,其中的一些替代品比 Google 的做的还要好。
### 一些注意事项
过程中需要面临的几个挑战之一便是,大多数的替代方案,特别是那些注重隐私空间的开源替代方案,确实对用户不太友好。我不是技术人员,但是自己有一个网站,了解如何管理 Wordpress可以排除一些基本的故障但我用不来命令行也做不来任何需要编码的事。
这个过程中需要面临的几个挑战之一便是,大多数的替代方案,特别是那些注重隐私空间的开源替代方案,确实对用户不太友好。我不是技术人员,但是自己有一个网站,了解如何管理 Wordpress可以排除一些基本的故障但我用不来命令行也做不来任何需要编码的事。
提供的这些替代方案中的大多数,即便不能完整替代 Google 产品的功能,但至少可以轻松上手。不过有些还是需要你有自己的 Web 主机或服务器的。
@ -39,7 +39,7 @@ Google 现在俨然已是许多语言中的单词,但彰显其超然全球统
[DuckDuckGo][12] 和 [startpage][13] 都是以保护个人隐私为中心的搜索引擎,不收集任何搜索数据。我用这两个搜索引擎来负责之前用 Google 搜索的所有需求。
其它的替代方案实际上并不多Google 坐拥全球 74% 的市场份额时,剩下的那些主要是因为中国的封锁。不过还有 Ask.com以及 Bing……
其它的替代方案实际上并不多Google 坐拥全球 74% 的市场份额时,剩下的那些主要是因为中国的原因。不过还有 Ask.com以及 Bing……
#### Chrome
@ -129,11 +129,11 @@ Google 现在俨然已是许多语言中的单词,但彰显其超然全球统
有些确实更好Jitsi Meet 运行更顺畅,需要的带宽更少,并且比 Hangouts 跨平台支持好。Firefox 比 Chrome 更稳定占用的内存更少。Fastmail 的日历具有更好的时区集成。
还有些旗鼓相当。ProtonMail 具有 Gmail 的大部分功能,但缺少一些好用的集成,例如我之前使用的 Boomerang 邮件日程功能。还缺少联系人界面,但我正在使用 Nextcloud。说到 Nextcloud,它非常适合托管文件、联系人,还包含了一个漂亮的笔记工具(以及诸多其它插件)。但它没有 Google Docs 丰富的多人编辑功能。在我的预算中,还没有找到可行的替代方案。虽然还有 Collabora Office,但这需要升级我的服务器,这对我来说不能算切实可行。
还有一些取决于位置。在一些国家(如印度尼西亚),MAPS.ME 实际上比 Google 地图更好用,而在另一些国家(包括美国)就差了许多。
还有些要求用户牺牲一些特性或功能。Piwik 是一个穷人版的 Google Analytics,缺乏前者的许多详细报告和搜索功能。DuckDuckGo 适用于一般搜索,但是在特定的搜索方面还存在问题,当我搜索非英文内容时,它和 startpage 时常都会检索失败。
### 最后,我不再心念 Google
@ -141,7 +141,7 @@ Google 现在俨然已是许多语言中的单词,但彰显其超然全球统
如果我们别无选择,只能使用 Google 的产品,那我们便失去了作为消费者的最后一丝力量。
我希望 GoogleFacebookApple 和其他科技巨头在对待用户时不要这么理所当然不要试图强迫我们进入其无所不包的生态系统。我也期待新选手能够出现并与之竞争就像以前一样Google 的新搜索工具可以与当时的行业巨头 Altavista 和 Yahoo 竞争,或者说 Facebook 的社交网络能够与 MySpace 和 Friendster 竞争。Google 给出了更好的搜索方案,使互联网变得更加美好。有选择是个好事,可移植也是。
我希望 Google、Facebook、Apple 和其他科技巨头在对待用户时不要这么理所当然不要试图强迫我们进入其无所不包的生态系统。我也期待新选手能够出现并与之竞争就像以前一样Google 的新搜索工具可以与当时的行业巨头 Altavista 和 Yahoo 竞争,或者说 Facebook 的社交网络能够与 MySpace 和 Friendster 竞争。Google 给出了更好的搜索方案,使互联网变得更加美好。有选择是个好事,可移植也是。
如今,我们很少有人哪怕只是尝试其它产品,因为我们已经习惯了 Google。我们不再更改邮箱地址因为这太难了。我们甚至不尝试使用 Facebook 以外的替代品,因为我们所有的朋友都在 Facebook 上。这些我明白。

View File

@ -1,25 +1,27 @@
开源网络工作: 创新与机遇的温床
开源网络方面的职位:创新与机遇的温床
======
> 诸如容器、边缘计算这样的技术焦点领域大红大紫,对在这一领域能够整合、协作、创新的开发者和系统管理员们的需求在日益增进。
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/os-jobs-networking.jpg?itok=PgUzydn-)
随着全球经济更加靠近数字化未来,每个垂直行业的公司和组织都在紧抓如何进一步在业务与运营上整合与部署技术。虽然 IT 企业在很大程度上遥遥领先,但是他们的经验与教训已经应用在了各行各业。尽管全国失业率为 4.1%,但科技专业人员的整体失业率在 4 月份仅为 1.9%,开源工作的未来看起来尤其光明。我在开源网络领域工作,并且目睹着创新和机遇正在改变世界交流的方式。
曾经是个发展缓慢的行业,现在由网络运营商、供应商、系统集成商和开发者所组成的网络生态系统正在采用开源软件,并且正在向商用硬件上运行的虚拟化和软件定义网络上转移。事实上,接近 70% 的全球移动用户由[低频网络][1]运营商成员所代表。该网络运营商成员致力于协调构成开放网络栈和相邻技术的项目。
曾经是个发展缓慢的行业,现在由网络运营商、供应商、系统集成商和开发者所组成的网络生态系统正在采用开源软件,并且正在向商用硬件上运行的虚拟化和软件定义网络上转移。事实上,接近 70% 的全球移动用户由 [LF Networking(LF 网络基金会)][1] 的运营商成员所占据。该组织致力于协调构成开放网络栈和相邻技术的项目。
### 技能需求
这一领域的开发者和系统管理员采用云原生和 DevOps 的方法开发新的使用案例,应对最紧迫的行业挑战。诸如容器、边缘计算等焦点领域大红大紫,并且在这一领域能够整合、协作、创新的开发者和系统管理员们的需求在日益增进。
开源软件与 Linux 使这一切成为可能,根据最近出版的[ 2018开源软件工作报告][2],高达 80% 的招聘经理寻找会 Linux 技能的应聘者,**而 46% 希望在网络领域招聘人才,可以说“网络技术”在他们的招聘决策中起到了至关重要的作用。**
开源软件与 Linux 使这一切成为可能,根据最近出版的 [2018开源软件工作报告][2],高达 80% 的招聘经理寻找会 Linux 技能的应聘者,**而 46% 希望在网络领域招聘人才,可以说“网络技术”在他们的招聘决策中起到了至关重要的作用。**
开发人员相当抢手72% 的招聘经理都在找他们,其次是 DevOps 开发者59%工程师57%和系统管理员49%)。报告同时指出,对容器技能需求的惊人的增长符合了我们在网络领域所见到的即创建云本地虚拟功能CNFs和在[ XCI倡议 ][3]的 OPNFV 中持续集成/持续部署方法的增长
开发人员相当抢手72% 的招聘经理都在找他们,其次是 DevOps 开发者59%工程师57%和系统管理员49%)。报告同时指出,对容器技能需求的惊人的增长符合我们在网络领域所见到的即云本地虚拟功能CNF的创建和持续集成持续部署方式的激增就如在 OPNFV 中的 [XCI 倡议][3] 一样
### 开始吧
对于求职者来说,好消息是有着大量的关于开源软件的内容,包括免费的[Linux 入门课程][4]。好的工作需要有多项证书,因此我鼓励你探索更多领域,去寻求培训的机会。计算机网络方面,在[OPNFV][5]上查看最新的培训课程或者是[ONAP][6]项目,也可以选择这门[开源网络技术简介][7]课程。
对于求职者来说,好消息是有着大量的关于开源软件的内容,包括免费的 [Linux 入门课程][4]。好的工作需要有多项证书,因此我鼓励你探索更多领域,去寻求培训的机会。计算机网络方面,在 [OPNFV][5] 上查看最新的培训课程或者是 [ONAP][6] 项目,也可以选择这门[开源网络技术简介][7]课程。
如果你还没有做好这些,下载 [2018开源软件工作报告][2] 以获得更多见解,在广阔的开放源码技术世界中规划你的课程,去寻找另一边等待你的令人兴奋的职业!
如果你还没有做好这些,下载 [2018 开源软件工作报告][2] 以获得更多见解,在广阔的开放源码技术世界中规划你的课程,去寻找另一边等待你的令人兴奋的职业!
点击这里[下载完整的开源软件工作报告][8]并且[了解更多关于 Linux 的认证][9]。
@ -29,8 +31,8 @@ via: https://www.linux.com/blog/os-jobs-report/2018/7/open-source-networking-job
作者:[Brandon Wick][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/LuuMing)
校对:[校对者ID](https://github.com/校对者ID)
译者:[LuuMing](https://github.com/LuuMing)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -3,32 +3,32 @@ Fedora 下的图像创建程序
![](https://fedoramagazine.org/wp-content/uploads/2018/08/creatingimages-816x346.png)
感觉有创意吗Fedora 有很多程序可以帮助你的创造力。从数字绘图、矢量到像素艺术,每个人都可以在这个周末得到创意。本文重点介绍了 Fedora 下创建很棒图像的程序。
创意吗Fedora 有很多程序可以帮助你的创造力。从数字绘图、矢量到像素艺术,每个人都可以在这个周末发挥创意。本文重点介绍了 Fedora 下创建很棒图像的程序。
### 矢量图形Inkscape
[Inkscape][1] 是一个众所周知受人喜爱的开源矢量图形编辑器。SVG 是 Inkscape 的主要文件格式因此你所有的图形都可以伸缩Inkscape 已存在多年,所以有一个坚实的社区和[大量教程和其他资源][2]用于入门
[Inkscape][1] 是一个众所周知的、受人喜爱的开源矢量图形编辑器。SVG 是 Inkscape 的主要文件格式,因此你所有的图形都可以任意伸缩Inkscape 已存在多年,所以有一个坚实的社区和用于入门的[大量教程和其他资源][2]。
作为矢量图形编辑器Inkscape 更适合于简单的插图(例如简单的漫画风格)。然而,使用矢量模糊,一些艺术家创造了一些[令人惊奇的矢量图][3]。
![][4]
从 Fedora Workstation 中的软件应用安装 Inkscape或在终端中使用以下命令
```
sudo dnf install inkscape
```
### 数字绘图Krita 和 Mypaint
[Krita][5] 是一个流行的图像创建程序用于数字绘图、光栅插图和纹理。此外Krita 是一个活跃的项目,拥有一个充满活力的社区 - 所以[有很多教程用于入门] [6]。Krita 有多个画笔引擎,带弹出调色板的 UI用于创建无缝图案的环绕模式、滤镜、图层等等。
[Krita][5] 是一个流行的图像创建程序用于数字绘图、光栅插图和纹理。此外Krita 是一个活跃的项目,拥有一个充满活力的社区 —— 所以[有用于入门的很多教程][6]。Krita 有多个画笔引擎、带有弹出调色板的 UI、用于创建无缝图案的环绕模式、滤镜、图层等等。
![][7]
从 Fedora Workstation 中的软件应用安装 Krita或在终端中使用以下命令
```
sudo dnf install krita
```
[Mypaint][8] 是另一款适用于 Fedora 的令人惊奇的数字绘图程序。像 Krita 一样,它有多个画笔和使用图层的能力。
@ -36,14 +36,14 @@ sudo dnf install krita
![][9]
从 Fedora Workstation 中的软件应用安装 Mypaint或在终端中使用以下命令
```
sudo dnf install mypaint
```
### 像素艺术Libresprite
[Libresprite][10] 是一个专为创建像素艺术和像素动画而设计的程序。它支持一系列颜色模式,并可导出为多种格式(包括动画 GIF)。此外,Libresprite 还有用于创建像素艺术的绘图工具:多边形工具、轮廓和着色工具。
![][11]
@ -57,7 +57,7 @@ via: https://fedoramagazine.org/image-creation-applications-fedora/
作者:[Ryan Lerch][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,69 @@
L1 终端错误漏洞(L1TF)如何影响 Linux 系统
======
> L1 终端错误(L1TF)影响英特尔处理器和 Linux 操作系统。让我们了解一下这个漏洞是什么,以及 Linux 用户需要为它做点什么。
![](https://images.idgesg.net/images/article/2018/08/l1tf-copy-100768129-large.jpg)
昨天(LCTT 译注:本文发表于 2018/8/15),在英特尔、微软和红帽的安全建议中宣布了一个新发现的、影响英特尔处理器(进而影响 Linux)的漏洞,称为 L1TF 或“<ruby>L1 终端错误<rt>L1 Terminal Fault</rt></ruby>”,引起了 Linux 用户和管理员的注意。究竟这个漏洞是什么,谁应该担心它?
### L1TF、L1 Terminal Fault 和 Foreshadow
处理器漏洞被称作 L1TF、L1 Terminal Fault 和 Foreshadow。研究人员在 1 月份发现了这个问题并向英特尔报告,称其为“Foreshadow”。它类似于过去发现的漏洞(例如 Spectre)。
此漏洞是特定于英特尔的。其他处理器不受影响。与其他一些漏洞一样,它之所以存在,是因为设计时为了优化内核处理速度,但允许其他进程访问数据。
**[另请阅读:[22 个必要的 Linux 安全命令][1]]**
已为此问题分配了三个 CVE:
* CVE-2018-3615:英特尔<ruby>软件保护扩展<rt>Software Guard Extensions</rt></ruby>(英特尔 SGX)
* CVE-2018-3620:操作系统和<ruby>系统管理模式<rt>System Management Mode</rt></ruby>(SMM)
* CVE-2018-3646:虚拟化的影响
英特尔发言人就此问题发表了这一声明:
> “L1 Terminal Fault 通过今年早些时候发布的微代码更新得到解决,再加上从今天开始提供的操作系统和虚拟机管理程序软件的相应更新。我们在网上提供了更多信息,并继续鼓励每个人更新操作系统,因为这是得到保护的最佳方式之一。我们要感谢 imec-DistriNet、KU Leuven、以色列理工学院、密歇根大学、阿德莱德大学和 Data61 的研究人员,以及我们的行业合作伙伴,他们帮助我们识别和解决了这个问题。”
### L1TF 会影响你的 Linux 系统吗?
简短的回答是“可能不会”。如果你因为在今年 1 月爆出的 [Spectre 和 Meltdown 漏洞][2]修补过系统,那你应该是安全的。与 Spectre 和 Meltdown 一样,英特尔声称真实世界中还没有系统受到影响的报告或者检测到。他们还表示,这些变化不太可能在单个系统上产生明显的性能影响,但它们可能对使用虚拟化操作系统的数据中心产生大的影响。
即使如此,仍然推荐频繁地打补丁。要检查你当前的内核版本,使用 `uname -r` 命令:
```
$ uname -r
4.18.0-041800-generic
```
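除了检查内核版本,包含缓解补丁的内核还会通过 sysfs 直接报告 L1TF 的缓解状态。下面是一个检查方式的示意(该文件仅在带有相应补丁的内核上存在,具体输出因系统而异,这里只是一种可能的结果):

```
# 查询内核报告的 L1TF 缓解状态;文件不存在通常说明内核还没有相应补丁
$ cat /sys/devices/system/cpu/vulnerabilities/l1tf
Mitigation: PTE Inversion; VMX: conditional cache flushes, SMT vulnerable
```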
### 更多资源
请查看如下资源以了解 L1TF 的更多细节,以及为什么会出现这个漏洞:
- [L1TF explained in ~3 minutes (Red Hat)][5]
- [L1TF explained in under 11 minutes (Red Hat)][6]
- [Technical deep dive][7]
- [Red Hat explanation of L1TF][8]
- [Ubuntu updates for L1TF][9]
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3298157/linux/linux-and-l1tf.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[1]: https://www.networkworld.com/article/3272286/open-source-tools/22-essential-security-commands-for-linux.html
[2]: https://www.networkworld.com/article/3245813/security/meltdown-and-spectre-exploits-cutting-through-the-fud.html
[3]: https://www.facebook.com/NetworkWorld/
[4]: https://www.linkedin.com/company/network-world
[5]: https://www.youtube.com/watch?v=kBOsVt0iXE4&feature=youtu.be
[6]: https://www.youtube.com/watch?v=kqg8_KH2OIQ
[7]: https://www.redhat.com/en/blog/deeper-look-l1-terminal-fault-aka-foreshadow
[8]: https://access.redhat.com/security/vulnerabilities/L1TF
[9]: https://blog.ubuntu.com/2018/08/14/ubuntu-updates-for-l1-terminal-fault-vulnerabilities

View File

@ -106,7 +106,7 @@ function func_return_value {
}
```
上面的函数向调用者返回 10。让我们执行这个函数
上面的函数向调用者返回 `10`。让我们执行这个函数:
```
$ func_return_value
@ -119,7 +119,7 @@ $ echo "Value returned by function is: $?"
Value returned by function is: 10
```
**提示**:在 Bash 中使用 `$?` 去获取函数的返回值
**提示**:在 Bash 中使用 `$?` 去获取函数的返回值
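作为补充,下面是一个可以直接运行的完整小示例(函数体是根据上下文推测补全的,仅作演示):

```
#!/bin/bash
# 定义一个向调用者返回 10 的函数(函数体为根据上文推测的写法)
function func_return_value {
    return 10
}

func_return_value
# $? 保存上一条命令(这里就是该函数)的返回值
echo "Value returned by function is: $?"
```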
### 函数技巧

View File

@ -0,0 +1,154 @@
How to Install and Use FreeDOS on VirtualBox
======
This step-by-step guide shows you how to install FreeDOS on VirtualBox in Linux.
### Installing FreeDOS on VirtualBox in Linux
<https://www.youtube.com/embed/p1MegqzFAqA?enablejsapi=1&autoplay=0&cc_load_policy=0&iv_load_policy=1&loop=0&modestbranding=1&rel=0&showinfo=0&fs=1&playsinline=0&autohide=2&theme=dark&color=red&controls=2&>
In November of 2017, I [interviewed Jim Hall][1] about the history behind the [FreeDOS project][2]. Today, I'm going to tell you how to install and use FreeDOS. Please note: I will be using [VirtualBox][3] 5.2.14 on [Solus][4].
Note: I used Solus as the host operating system for this tutorial because it is very easy to set up. One thing you should keep in mind is that the Solus Software Center contains two versions of VirtualBox: `virtualbox` and `virtualbox-current`. Solus gives you the option to use the linux-lts kernel and the linux-current kernel. `virtualbox` is modified for linux-lts and `virtualbox-current` is for linux-current.
#### Step 1 Create New Virtual Machine
![][5]
Once you open VirtualBox, press the “New” button to create a new virtual machine. You can name it whatever you want, I just use “FreeDOS”. You can use the label to specify what version of FreeDOS you are installing. You also need to select the type and version of the operating system you will be installing. Select “Other” and “DOS”.
#### Step 2 Select Memory Size
![][6]
The next dialog box will ask you how much of the host computer's memory you want to make available to FreeDOS. The default is 32MB. Don't change it. Back in the day, this would be a huge amount of RAM for a DOS machine. If you need to, you can increase it later by right-clicking on the virtual machine you created for FreeDOS and selecting Settings -> System.
![][7]
#### Step 3 Create Virtual Hard Disk
![][8]
Next, you will be asked to create a virtual hard drive where FreeDOS and its files will be stored. Since you haven't created one yet, just click “Create”.
The next dialog box will ask you what hard disk file type you want to use. This default (VirtualBox Disk Image) works just fine. Click “Next”.
The next question you will encounter is how you want the virtual disk to act. Do you want it to start small and gradually grow to its full size as you create files and install programs? Then choose dynamically allocated. If you prefer that the virtual hard drive (vhd) is created at full size, then choose fixed size. Dynamically allocated is nice if you don't plan to use the whole vhd or if you don't have very much free space on your hard drive. (Keep in mind that while the size of a dynamically allocated vhd increases as you add files, it will not drop when you remove files.) I prefer dynamically allocated, but you can choose the option that serves your needs best and click “Next”.
![][9]
Now, you can choose the size and location of the vhd. 500 MB should be plenty of space. Remember, most of the programs you will be using will be text-based, thus fairly small. Once you make your adjustments, click “Create”.
#### Step 4 Attach .iso file
Before we continue, you will need to [download][10] the FreeDOS .iso file. You will need to choose the CDROM “standard” installer.
![][11]
Once the file has been downloaded, return to VirtualBox. Select your virtual machine and open the settings. You can do this by either right-clicking on the virtual machine and selecting “Settings” or highlight the virtual machine and click the “Settings” button.
Now, click the “Storage” tab. Under “Storage Devices”, select the CD icon. (It should say “Empty” next to it.) In the “Attributes” panel on the right, click on the CD icon and select the location of the .iso file you just downloaded.
Note: Typically, after you install an operating system on VirtualBox, you can delete the original .iso file. Not with FreeDOS. You need the .iso file if you want to install applications via the FreeDOS package manager. I generally keep the .iso file attached to the virtual machine in case I want to install something. If you do that, you have to make sure that you tell FreeDOS you want to boot from the hard drive each time you boot it up because it defaults to the attached CD/iso. If you forget to attach the .iso, don't worry. You can do so by selecting “Devices” at the top of your FreeDOS virtual machine window. The .iso files are listed under “Optical Drives”.
#### Step 5 Install FreeDOS
![][12]
Now that we've completed all of the preparations, let's install FreeDOS.
First, you need to be aware of a bug in the most recent version of VirtualBox. If you start the virtual machine that we just created and select “Install to harddisk” when the FreeDOS welcome screen appears, you will see an unending, scrolling mass of machine code. I've only run into this issue recently, and it affects both the Linux and Windows versions of VirtualBox. (I know first hand.)
To get around this, you need to make a simple edit. When you see the FreeDOS welcome screen, press Tab. (Make sure that the “Install to harddrive” option is selected.) Type the word `raw` after “fdboot.img” and hit Enter. The FreeDOS installer will then start.
![][13]
The first part of the installer will handle formatting your virtual drive. Once formatting is completed, the installer will reboot. When the FreeDOS welcome screen appears again, you will have to re-enter the `raw` command you used earlier.
Make sure that you select “Yes” on all of the questions in the installer. One important question that doesn't have a “Yes” or “No” answer is: “What FreeDOS packages do you want to install?” The two options are “Base packages” or “Full installation”. Base packages are for those who want a DOS experience most like the original MS-DOS. The Full installation includes a bunch of tools and utilities to improve DOS.
At the end of the installation, you will be given the option to reboot or stay on DOS. Select “reboot”.
#### Step 6 Setup Networking
Unlike the original DOS, FreeDOS can access the internet. You can install new packages and update the ones you have already installed. In order to use networking, you need to install several applications in FreeDOS.
![][14]
First, boot into your newly created FreeDOS virtual machine. At the FreeDOS selection screen, select “Boot from System harddrive”.
![][15]
Now, to access the FreeDOS package manager, type `fdimples`. You can navigate around the package manager with the arrow keys and select categories or packages with the space bar. From the “Networking” category, you need to select `fdnet`. The FreeDOS Project also recommends installing `mtcp` and `wget`. Hit “Tab” several times until “OK” is selected and press “Enter”. Once the installation is complete, type `reboot` and hit enter. After the system reboots, boot to your system drive. If the network installation was successful, you will see several new messages at the terminal listing your network information.
![][16]
##### Note
Sometimes the default VirtualBox setup doesn't work. If that happens, close your FreeDOS VirtualBox window. Right-click your virtual machine from the main VirtualBox screen and select “Settings”. The default VirtualBox network setting is “NAT”. Change it to “Bridged Adapter” and retry installing the FreeDOS packages. It should work now.
#### Step 7 Basic Usage of FreeDOS
##### Common Commands
Now that you have installed FreeDOS, let's look at a few basic commands. If you have ever used the Command Prompt on Windows, you will be familiar with some of these commands.
* `DIR` display the contents of the current directory
* `CD` change the directory you are currently in
* `COPY OLD.TXT NEW.TXT` copy files
* `TYPE TEST.TXT` display content of file
* `DEL TEST.TXT` delete file
* `XCOPY DIR NEWDIR` copy directory and all of its contents
* `EDIT TEST.TXT` edit a file
* `MKDIR NEWDIR` create a new directory
* `CLS` clear the screen
You can find more basic DOS commands on the web or the [handy cheat sheet][17] created by Jim Hall.
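To see how a few of these commands fit together, here is a short sample session (the directory and file names below are invented for illustration):

```
C:\> MKDIR NOTES
C:\> CD NOTES
C:\NOTES> EDIT TODO.TXT
C:\NOTES> TYPE TODO.TXT
C:\NOTES> COPY TODO.TXT BACKUP.TXT
C:\NOTES> DIR
C:\NOTES> CLS
```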
##### Running a Program
Running a program on FreeDOS is fairly easy. When you install an application with the `fdimples` package manager, be sure to note where the .EXE file of the application is located. This is shown in the application's details. To run the application, you generally need to navigate to the application folder and type the application's name.
For example, FreeDOS has an editor named `FED` that you can install. After installing it, all you need to do is navigate to `C:\FED` and type `FED`.
Sometimes a program, such as Pico, is stored in the `\bin` folder. These programs can be called up from any folder.
Games usually have an .EXE program or two that you have to run before you can play the game. These setup files usually fix sound, video, or control issues.
If you run into problems that this tutorial didn't cover, don't forget to visit the [home of FreeDOS][2]. They have a wiki and several other support options.
Have you ever used FreeDOS? What tutorials would you like to see in the future? Please let us know in the comments below.
If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][18].
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-freedos/
作者:[John Paul][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/john/
[1]:https://itsfoss.com/interview-freedos-jim-hall/
[2]:http://www.freedos.org/
[3]:https://www.virtualbox.org/
[4]:https://solus-project.com/home/
[5]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/freedos-tutorial-1.jpg
[6]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/freedos-tutorial-2.jpg
[7]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/freedos-tutorial-3.jpg
[8]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/freedos-tutorial-4.jpg
[9]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/freedos-tutorial-6.jpg
[10]:http://www.freedos.org/download/
[11]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/freedos-tutorial-7.jpg
[12]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/freedos-tutorial-8.png
[13]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/freedos-tutorial-9.png
[14]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/freedos-tutorial-10.png
[15]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/freedos-tutorial-11.png
[16]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/freedos-tutorial-12.png
[17]:https://opensource.com/article/18/6/freedos-commands-cheat-sheet
[18]:http://reddit.com/r/linuxusersgroup

View File

@ -0,0 +1,104 @@
15 command-line aliases to save you time
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_keyboard_laptop_development_blue.png?itok=IfckxN48)
Linux command-line aliases are great for helping you work more efficiently. Better still, some are included by default in your installed Linux distro.
This is an example of a command-line alias in Fedora 27:
![](https://opensource.com/sites/default/files/uploads/default.png)
The command `alias` shows the list of existing aliases. Setting an alias is as simple as typing:
`alias new_name="command"`
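An alias set this way lasts only for the current shell session. To keep an alias across logins, append it to your shell's startup file. A minimal sketch, assuming Bash and `~/.bashrc`:

```
# make the alias permanent (Bash assumed; other shells use different startup files)
echo 'alias update="sudo yum update -y"' >> ~/.bashrc
source ~/.bashrc    # reload so the alias is available in the current session too
```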
Here are 15 command-line aliases that will save you time:
1. To install any utility/application:
`alias install="sudo yum install -y"`
Here, `sudo` and `-y` are optional as per the user's preferences:
![install alias.png][2]
2. To update the system:
`alias update="sudo yum update -y"`
3. To upgrade the system:
`alias upgrade="sudo yum upgrade -y"`
4. To change to the root user:
`alias root="sudo su -"`
5. To change to "user," where "user" is set as your username:
`alias user="su user"`
6. To display all network interfaces, their state, and IP addresses:
`alias myip="ip -br -c a"`
7. To `ssh` to the server `myserver`:
`alias myserver="ssh user@my_server_ip"`
8. To list all processes in the system:
`alias process="ps -aux"`
9. To check the status of any system service:
`alias sstatus="sudo systemctl status"`
10. To restart any system service:
`alias srestart="sudo systemctl restart"`
11. To kill any process by its name:
`alias kill="sudo pkill"`
![kill process alias.png][4]
12. To display the total used and free memory of the system:
`alias mem="free -h"`
13. To display the CPU architecture, number of CPUs, threads, etc. of the system:
`alias cpu="lscpu"`
14. To display the total disk size of the system:
`alias disk="df -h"`
15. To display the current system Linux distro (for CentOS, Fedora, and Red Hat):
`alias os="cat /etc/redhat-release"`
![system_details alias.png][6]
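Since `/etc/redhat-release` exists only on Red Hat-family distros, a more portable variant (my own addition, assuming a distro that ships the standard `/etc/os-release` file) is:

```
# works on most modern distros, not just the Red Hat family
alias os="cat /etc/os-release"
```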
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/time-saving-command-line-aliases
作者:[Aarchit Modi][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/architmodi
[2]:https://opensource.com/sites/default/files/uploads/install.png (install alias.png)
[4]:https://opensource.com/sites/default/files/uploads/kill.png (kill process alias.png)
[6]:https://opensource.com/sites/default/files/uploads/system_details.png (system_details alias.png)

View File

@ -1,142 +0,0 @@
Translating by DavidChenLiang
What I Learned from Programming Interviews
============================================================
![](https://cdn-images-1.medium.com/max/1250/1*DXPdaGPM4oM6p5nSkup7IQ.jpeg)
Whiteboard programming interviews
In 2017, I went to the [Grace Hopper Celebration][1] of women in computing. It's the largest gathering of this kind, with 17,000 women attending last year.
This conference has a huge career fair where companies interview attendees. Some even get offers. Walking around the area, I noticed that some people looked stressed and worried. I overheard conversations, and some talked about how they didn't do well in the interview.
I approached a group of people that I overheard and gave them advice. I considered some of the advice I gave to be basic, such as “it's okay to think of the naive solution first.” But people were surprised by most of the advice I gave them.
I wanted to help more people with this. I gathered a list of tips that worked for me and published a [podcast episode][2] about them. They're also the topic of this post.
I've had many programming interviews both for internships and full-time jobs. When I was in college studying Computer Science, there was a career fair every fall semester where the first round of interviews took place. I have failed at the first and final rounds of interviews. After each interview, I reflected on what I could've done better and had mock interviews with friends who gave me feedback.
Whether we find a job through a job portal, networking, or university recruiting, part of the process involves doing a technical interview.
In recent years weve seen different interview formats emerge:
* Pair programming with an engineer
* Online quiz and online coding
* Whiteboard interviews
I'll focus on the whiteboard interview because it's the one that I have experienced. I've had many interviews. Some of them have gone well, while others haven't.
### What I did wrong
First, I want to go over the things I did wrong in my interviews. This helps see the problems and what to improve.
When an interviewer gave me a technical problem, I immediately went to the whiteboard and started trying to solve it.  _Without saying a word._
I made two mistakes here:
#### Not clarifying information that is crucial to solve a problem
For example, are we only working with numbers or also strings? Are we supporting multiple data types? If you don't ask questions before you start working on a question, your interviewer can get the impression that you won't ask questions before you start working on a project at their company. This is an important skill to have in the workplace. It is not like school anymore. You don't get an assignment with all the steps detailed for you. You have to find out what those are and define them.
#### Thinking without writing or communicating
Often times I stood there thinking without writing. When I was doing a mock interview with a friend, he told me that he knew I was thinking because we had worked together. To a stranger, it can seem that I'm clueless, or that I'm thinking. It is also important not to rush on a solution right away. Take some time to brainstorm ideas. Sometimes the interviewer will gladly participate in this. After all, that's how it is at work meetings.
### Coming up with a solution
Before you begin writing code, it helps if you come up with the algorithm first. Don't start writing code and hope that you'll solve the problem as you write.
This is what has worked for me:
1. Brainstorm
2. Coding
3. Error handling
4. Testing
#### 1\. Brainstorm
For me, it helps to visualize first what the problem is through a series of examples. If it's a problem related to trees, I would start with the null case, one node, two nodes, three nodes. This can help you generalize a solution.
On the whiteboard, write down a list of the things the algorithm needs to do. This way, you can find bugs and issues before writing any code. Just keep track of the time. I made a mistake once where I spent too much time asking clarifying questions and brainstorming, and I barely had time to write the code. The downside of this is that your interviewer doesn't get to see how you code. You can also come off as if you're trying to avoid the coding portion. It helps to wear a wrist watch, or if there's a clock in the room, look at it occasionally. Sometimes the interviewer will tell you, “I think we have the necessary information, let's start coding it.”
#### 2\. Coding and code walkthrough
If you don't have the solution right away, it always helps to point out the obvious naive solution. While you're explaining this, you should be thinking of how to improve it. When you state the obvious, indicate why it is not the best solution. For this it helps to be familiar with big O notation. It is okay to go over 2–3 solutions first. The interviewer sometimes guides you by saying, “Can we do better?” This can sometimes mean they are looking for a more efficient solution.
#### 3\. Error handling
While you're coding, point out that you're leaving a code comment for error handling. Once an interviewer said, “That's a good point. How would you handle it? Would you throw an exception? Or return a specific value?” This can make for a good short discussion about code quality. Mention a few error cases. Other times, the interviewer might say that you can assume that the parameters you're getting already passed a validation. However, it is still important to bring this up to show that you are aware of error cases and quality.
#### 4\. Testing
After you have finished coding the solution, re-use the examples from brainstorming to walk through your code and make sure it works. For example you can say, “Let's go over the example of a tree with one node, two nodes.”
After you finish this, the interviewer sometimes asks you how you would test your code, and what your test cases would be. I recommend that you organize your test cases in different categories.
Some examples are:
1. Performance
2. Error cases
3. Positive expected cases
For performance, think about extreme quantities. For example, if the problem is about lists, mention that you would have a case with a large list and a really small list. If it's about numbers, you'll test the maximum integer number and the smallest. I recommend reading about testing software to get more ideas. My favorite book on this is [How We Test Software at Microsoft][3].
For error cases, think about what is expected to fail and list those.
For positive expected cases, it helps to think of what the user requirements are. What are the cases that this solution is meant to solve? Those are the positive test cases.
### “Do you have any questions for me?”
Almost always there will be a few minutes dedicated at the end for you to ask questions. I recommend that you write down the questions you would ask your interviewer before the interview. Don't say, “I don't have any questions.” Even if you feel the interview didn't go well, or you're not super passionate about the company, there's always something you can ask. It can be about what the person likes and hates most about his or her job. Or it can be something related to the person's work, or technologies and practices used at the company. Don't feel discouraged to ask something even if you feel you didn't do well.
### Applying for a job
As for searching and applying for a job, I've been told that you should only apply to a place that you would be truly passionate to work for. They say pick a company that you love, or a product that you enjoy using, and see if you can work there.
I don't recommend that you always do this. You can rule out many good options this way, especially if you're looking for an internship or an entry-level job.
You can focus on other goals instead. What do I want to get more experience in? Is it cloud computing, web development, or artificial intelligence? When you talk to companies at the career fair, find out if their job openings are in this area. You might find a really good position at a company or a non-profit that wasn't in your list.
#### Switching teams
After a year and a half at my first team, I decided that it was time to explore something different. I found a team I liked and had 4 rounds of interviews. I didn't do well.
I didn't practice anything, not even simply writing on a whiteboard. My logic had been, if I have been working at the company for almost 2 years, why would I need to practice? I was wrong about this. I struggled to write a solution on the whiteboard. Things like my writing being too small and running out of space by not starting at the top left all contributed to not passing.
I hadn't brushed up on data structures and algorithms. If I had, I would've been more confident. Even if you've been working at a company as a Software Engineer, before you do a round of interviews with another team, I strongly recommend you go through practice problems on a whiteboard.
As for finding a team, if you are looking to switch teams at your company, it helps to talk informally with members of that team. For this, I found that almost everyone is willing to have lunch with you. People are mostly available at noon too, so there is low risk of lack of availability and meeting conflicts. This is an informal way to find out what the team is working on, and see what the personalities of your potential team members are like. You can learn many things from lunch meetings that can help you in the formal interviews.
It is important to know that at the end of the day, you are interviewing for a specific team. Even if you do really well, you might not get an offer because you are not a culture fit. That's part of why I try to meet different people in the team first, but this is not always possible. Don't get discouraged by a rejection, keep your options open, and practice.
This content is from the [“Programming interviews”][4] episode on [The Women in Tech Show: Technical Interviews with Prominent Women in Tech][5].
--------------------------------------------------------------------------------
作者简介:
Software Engineer II at Microsoft Research, opinions are my own, host of www.thewomenintechshow.com
------------
via: https://medium.freecodecamp.org/what-i-learned-from-programming-interviews-29ba49c9b851
作者:[Edaena Salinas][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://medium.freecodecamp.org/@edaenas
[1]:https://anitab.org/event/2017-grace-hopper-celebration-women-computing/
[2]:https://thewomenintechshow.com/2017/12/18/programming-interviews/
[3]:https://www.amazon.com/How-We-Test-Software-Microsoft/dp/0735624259
[4]:https://thewomenintechshow.com/2017/12/18/programming-interviews/
[5]:https://thewomenintechshow.com/

View File

@ -1,3 +1,6 @@
Translating by MjSeven
How To Add Additional IP (Secondary IP) In Ubuntu System
======
Linux admins should be aware of this because it's a routine task. Many of you may wonder why we need to add more than one IP address to a server, and why we need to add it to a single network card. Am I right?

View File

@ -1,97 +0,0 @@
translating---geekpi
Getting started with Etcher.io
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/community-penguins-osdc-lead.png?itok=BmqsAF4A)
Bootable USB drives are a great way to try out a new Linux distribution to see if you like it before you install. While some Linux distributions, like [Fedora][1], make it easy to create bootable media, most others provide the ISOs or image files and leave the media creation decisions up to the user. There's always the option to use `dd` to create media on the command line—but let's face it, even for the most experienced user, that's still a pain. There are other utilities—like UnetBootIn, Disk Utility on MacOS, and Win32DiskImager on Windows—that create bootable USBs.
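For comparison, that `dd` route typically looks something like the sketch below. The device name is an example only; always verify the target (e.g., with `lsblk`) first, because `dd` will overwrite whatever you point it at:

```
# write an ISO image to a USB stick (replace /dev/sdX with your actual device!)
$ sudo dd if=distro.iso of=/dev/sdX bs=4M status=progress && sync
```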
### Installing Etcher
About 18 months ago, I came upon [Etcher.io][2] , a great open source project that allows easy and foolproof media creation on Linux, Windows, or MacOS. Etcher.io has become my "go-to" application for creating bootable media for Linux. I can easily download ISO or IMG files and burn them to flash drives and SD cards. It's an open source project licensed under [Apache 2.0][3] , and the [source code][4] is available on GitHub.
Go to the [Etcher.io][5] website and click on the download link for your operating system—32- or 64-bit Linux, 32- or 64-bit Windows, or MacOS.
![](https://opensource.com/sites/default/files/uploads/etcher_1.png)
Etcher provides great instructions in its GitHub repository for adding Etcher to your collection of Linux utilities.
If you are on Debian or Ubuntu, add the Etcher Debian repository:
```
$echo "deb https://dl.bintray.com/resin-io/debian stable etcher" | sudo tee
/etc/apt/sources.list.d/etcher.list
Trust Bintray.com GPG key
$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 379CE192D401AB61
```
Then update your system and install:
```
$ sudo apt-get update
$ sudo apt-get install etcher-electron
```
If you are using Fedora or Red Hat Enterprise Linux, add the Etcher RPM repository:
```
$ sudo wget https://bintray.com/resin-io/redhat/rpm -O /etc/yum.repos.d/bintray-resin-io-redhat.repo
```
Update and install using either:
```
$ sudo yum install -y etcher-electron
```
or:
```
$ sudo dnf install -y etcher-electron
```
### Creating bootable drives
In addition to creating bootable images for Ubuntu, EndlessOS, and other flavors of Linux, I have used Etcher to [create SD card images][6] for the Raspberry Pi. Here's how to create bootable media.
First, download to your computer the ISO or image you want to use. Then, launch Etcher and insert your USB or SD card into the computer.
![](https://opensource.com/sites/default/files/uploads/etcher_2.png)
Click on **Select Image**. In this example, I want to create a bootable USB drive to install Ubermix on a new computer. Once I have selected my Ubermix image file and inserted my USB drive into the computer, Etcher.io "sees" the drive, and I can begin the process of installing Ubermix on my USB.
![](https://opensource.com/sites/default/files/uploads/etcher_3.png)
Once I click on **Flash**, the installation process begins. The time required depends on the image's size. After the image is installed on the drive, the software verifies the installation; at the end, a banner announces my media creation is complete.
If you need [help with Etcher][7], contact the community through its [Discourse][8] forum. Etcher is very easy to use, and it has replaced all my other media creation tools because none of them do the job as easily or as well as Etcher.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/7/getting-started-etcherio
作者:[Don Watkins][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/don-watkins
[1]:https://getfedora.org/en_GB/workstation/download/
[2]:http://etcher.io
[3]:https://github.com/resin-io/etcher/blob/master/LICENSE
[4]:https://github.com/resin-io/etcher
[5]:https://etcher.io/
[6]:https://www.raspberrypi.org/magpi/pi-sd-etcher/
[7]:https://github.com/resin-io/etcher/blob/master/SUPPORT.md
[8]:https://forums.resin.io/c/etcher

View File

@@ -1,92 +0,0 @@
translating---geekpi
How to Install 2048 Game in Ubuntu and Other Linux Distributions
======
**Popular mobile puzzle game 2048 can also be played on Ubuntu and Linux distributions. Heck! You can even play 2048 in the Linux terminal. Don't blame me if your productivity goes down because of this addictive game.**
Back in 2014, 2048 was one of the most popular games on iOS and Android. This highly addictive game got so popular that it got a [browser version][1], desktop version as well as a terminal version on Linux.
<https://giphy.com/embed/wT8XEi5gckwJW>
This tiny game is played by moving the tiles up and down, left and right. The aim of this puzzle game is to reach 2048 by combining tiles with matching numbers. So 2+2 becomes 4, 4+4 becomes 8, and so on. It may sound simple and boring, but trust me, it's a hell of an addictive game.
### Play 2048 in Linux [GUI]
There are several implementations of the 2048 game available for Ubuntu and other Linux distributions. You can simply search for it in the Software Center and you'll find a few of them there.
There is a [Qt-based][2] 2048 game that you can install on Ubuntu and other Debian and Ubuntu-based Linux distributions. You can install it using the command below:
```
sudo apt install 2048-qt
```
Once installed, you can find the game in the menu and start it. You can move the numbers using the arrow keys. Your highest score is saved as well.
![2048 Game in Ubuntu Linux][3]
### Play 2048 in Linux terminal
The popularity of 2048 brought it to the terminal. If this surprises you, you should know that there are plenty of [awesome terminal games in Linux][4] and 2048 is certainly one of them.
Now, there are a few ways you can play 2048 in the Linux terminal. I'll mention two of them here.
#### 1\. term2048 Snap application
There is a [snap application][5] called [term2048][6] that you can install in any [Snap supported Linux distribution][7].
If you have Snap enabled, just use this command to install term2048:
```
sudo snap install term2048
```
Ubuntu users can also find this game in the Software Center and install it from there.
![2048 Terminal Game in Linux][8]
Once installed, you can use the command `term2048` to run the game. It looks something like this:
![2048 Terminal game][9]
You can move using the arrow keys.
#### 2\. Bash script for 2048 terminal game
This game is actually a shell script which you can run in any Linux terminal. Download the game/script from Github:
[Download Bash2048][10]
Extract the downloaded file. Go into the extracted directory and you'll see a shell script named 2048.sh. Just run the shell script. The game will start immediately. You can move the tiles using the arrow keys.
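For example, the whole process might look like this (the archive and directory names are assumptions based on GitHub's default download naming):
```
$ unzip bash2048-master.zip    # extract the downloaded archive
$ cd bash2048-master
$ bash 2048.sh                 # start the game
```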
![Linux Terminal game 2048][11]
#### What games do you play on Linux?
If you like playing games in Linux terminal, you should also try the [classic Snake game in Linux terminal][12].
Which games do you regularly play in Linux? Do you also play games in terminal? If yes, which is your favorite terminal game?
--------------------------------------------------------------------------------
via: https://itsfoss.com/2048-game/
作者:[Abhishek Prakash][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[1]:http://gabrielecirulli.github.io/2048/
[2]:https://www.qt.io/
[3]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/2048-qt-ubuntu.jpeg
[4]:https://itsfoss.com/best-command-line-games-linux/
[5]:https://itsfoss.com/use-snap-packages-ubuntu-16-04/
[6]:https://snapcraft.io/term2048
[7]:https://itsfoss.com/install-snap-linux/
[8]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/term2048-game.png
[9]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/term2048.jpg
[10]:https://github.com/mydzor/bash2048
[11]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/2048-bash-terminal.png
[12]:https://itsfoss.com/nsnake-play-classic-snake-game-linux-terminal/ (nSnake: Play The Classic Snake Game In Linux Terminal)

View File

@@ -1,160 +0,0 @@
How to use VS Code for your Python projects
======
![](https://fedoramagazine.org/wp-content/uploads/2018/07/pythonvscode-816x345.jpg)
Visual Studio Code, or VS Code, is an open source code editor that also includes tools for building and debugging an application. With the Python extension enabled, VS Code becomes a great working environment for any Python developer. This article shows you which extensions are useful, and how to configure VS Code to get the most out of it.
If you don't have it installed, check out our previous article, [Using Visual Studio Code on Fedora][1]:
[Using Visual Studio Code on Fedora ](https://fedoramagazine.org/using-visual-studio-code-fedora/)
### Install the VS Code Python extension
First, to make VS Code Python friendly, install the Python extension from the marketplace.
![][2]
Once the Python extension is installed, you can configure it.
VS Code manages its configuration inside JSON files. Two files are used:
* One for the global settings that apply to all projects
* One for project specific settings
Press **Ctrl+,** (comma) to open the global settings.
#### Setup the Python Path
You can configure VS Code to automatically select the best Python interpreter for each of your projects. To do this, configure the `python.pythonPath` key in the global settings.
```
// Place your settings in this file to overwrite default and user settings.
{
"python.pythonPath":"${workspaceRoot}/.venv/bin/python",
}
```
This sets VS Code to use the Python interpreter located in the project root directory under the .venv virtual environment directory.
#### Use environment variables
By default, VS Code uses environment variables defined in a .env file in the project root directory. This is useful for setting environment variables like:
```
PYTHONWARNINGS="once"
```
That setting ensures that warnings are displayed when your program is running.
To change this default, set the `python.envFile` configuration key as follows:
```
"python.envFile": "${workspaceFolder}/.env",
```
### Code Linting
The Python extension also supports different code linters (pep8, flake8, pylint). To enable your favorite linter, or the one used by the project youre working on, you need to set a few configuration items.
By default pylint is enabled. But for this example, configure flake8:
```
"python.linting.pylintEnabled": false,
"python.linting.flake8Path": "${workspaceRoot}/.venv/bin/flake8",
"python.linting.flake8Enabled": true,
"python.linting.flake8Args": ["--max-line-length=90"],
```
After enabling the linter, your code is underlined to show where it doesn't meet criteria enforced by the linter. Note that for this example to work, you need to install flake8 in the virtual environment of the project.
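If you haven't created the project's virtual environment yet, setting it up and installing the linter into it might look like this (a minimal sketch, assuming the project root as the working directory):
```
$ python3 -m venv .venv
$ .venv/bin/pip install flake8
```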
![][3]
### Code Formatting
VS Code also lets you configure automatic code formatting. The extension currently supports autopep8, black and yapf. Heres how to configure black.
```
"python.formatting.provider": "black",
"python.formatting.blackPath": "${workspaceRoot}/.venv/bin/black"
"python.formatting.blackArgs": ["--line-length=90"],
"editor.formatOnSave": true,
```
If you don't want the editor to format your file on save, set the option to false and use **Ctrl+Shift+I** to format the current document. Note that for this example to work, you need to install black in the virtual environment of the project.
### Running Tasks
Another great feature of VS Code is that it can run tasks. These tasks are also defined in a JSON file saved in the project root directory.
#### Run a development flask server
In this example, youll create a task to run a Flask development server. Create a new Build using the basic template that can run an external command:
![][4]
Edit the tasks.json file as follows to create a new task that runs the Flask development server:
```
{
// See https://go.microsoft.com/fwlink/?LinkId=733558
// for the documentation about the tasks.json format
"version": "2.0.0",
"tasks": [
{
"label": "Run Debug Server",
"type": "shell",
"command": "${workspaceRoot}/.venv/bin/flask run -h 0.0.0.0 -p 5000",
"group": {
"kind": "build",
"isDefault": true
}
}
]
}
```
The Flask development server uses an environment variable to get the entrypoint of the application. Use the .env file to declare these variables. For example:
```
FLASK_APP=wsgi.py
FLASK_DEBUG=True
```
Now you can execute the task using **Ctrl+Shift+B**.
### Unit tests
VS Code also has the unit test runners pytest, unittest, and nosetest integrated out of the box. After you enable a test runner, VS Code discovers the unit tests and lets you run them individually, by test suite, or simply run all the tests.
For example, to enable pytest:
```
"python.unitTest.pyTestEnabled": true,
"python.unitTest.pyTestPath": "${workspaceRoot}/.venv/bin/pytest",
```
Note that for this example to work, you need to install pytest in the virtual environment of the project.
![][5]
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/vscode-python-howto/
作者:[Clément Verna][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://fedoramagazine.org
[1]:https://fedoramagazine.org/using-visual-studio-code-fedora/
[2]:https://fedoramagazine.org/wp-content/uploads/2018/07/Peek-2018-07-27-09-44.gif
[3]:https://fedoramagazine.org/wp-content/uploads/2018/07/Peek-2018-07-27-12-05.gif
[4]:https://fedoramagazine.org/wp-content/uploads/2018/07/Peek-2018-07-27-13-26.gif
[5]:https://fedoramagazine.org/wp-content/uploads/2018/07/Peek-2018-07-27-15-33.gif

View File

@@ -1,3 +1,5 @@
translating----geekpi
10 Popular Windows Apps That Are Also Available on Linux
======

View File

@@ -1,77 +0,0 @@
translating---geekpi
Why I still love Alpine for email at the Linux terminal
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_paper_envelope_document.png?itok=uPj_kouJ)
Maybe you can relate to this story: You try out a program and really love it. Over the years, new programs are developed that can do the same things and more, maybe even better. You try them out, and they are great too—but you keep coming back to the first program.
That is the story of my relationship with [Alpine Mail][1]. So I decided to write a little article praising my de facto favorite mail program.
![alpine_main_menu.png][3]
The main menu screen of the Alpine email client
In the mid-90's, I discovered the [GNU/Linux][4] operating system. Because I had never seen a Unix-like system before, I read a lot of documentation and books and tried a lot of programs to find my way through this fascinating OS.
After a while, [Pine][5] became my favorite mail client, followed by its successor, Alpine. I found it intuitive and easy to use—you can always see the possible commands or options at the bottom, so navigation is easy to learn quickly, and Alpine comes with very good help.
Getting started is easy.
Most distributions include Alpine, so it can be installed via the package manager (see the example below). Once Alpine is running, just press **S** (or navigate the bar to the setup line) and you will be directed to the categories you can configure. At the bottom, you can use the shortcut keys for commands you can do right away. For commands that don't fit in there, press **O** (`Other Commands`).
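For instance, installation typically looks like this (assuming the package is named `alpine` in your distribution's official repositories):
```
$ sudo apt install alpine    # Debian/Ubuntu
$ sudo dnf install alpine    # Fedora
```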
Press **C** to enter the configuration dialog. When you scroll down the list, it becomes clear that you can make Alpine behave as you want. If you have only one mail account, simply navigate the bar to the line you want to change, press **C** (`Change Value`), and type in the values:
![alpine_setup_configuration.png][7]
The Alpine setup configuration screen
Note how the SMTP and IMAP servers are entered, as this is not the same as in mail clients with assistants and pre-filled fields. If you just enter the server/SSL/user like this:
`imap.myprovider.com:993/ssl/user=max@example.com`
Alpine will ask you if "Inbox" should be used (yes) and put curly brackets around the server part. When you're done, press **E** (`Exit Setup`) and commit your changes by pressing **Y** (yes). Back in the main menu, you can then move to the folder list and the Inbox to see if you have mail (you will be prompted for your password). You can now navigate using **`>`** and **`<`**.
![navigating_the_message_index.png][9]
Navigating the message index in Alpine
To compose an email, simply navigate to the corresponding menu entry and write. Note that the options at the bottom change depending on the line you are on. **`^T`** (**Ctrl**+**T**) can stand for `To Addressbook` or `To Files`. To attach files, just navigate to `Attchmt:` and press either **Ctrl**+**T** to go to a file browser, or **Ctrl**+**J** to enter a path.
Send the mail with `^X`.
![composing_an_email_in_alpine.png][11]
Composing an email in Alpine
### Why Alpine?
Of course, every user's personal preferences and needs are different. If you need a more "office-like" solution, an app like Evolution or Thunderbird might be a better choice.
But for me, Alpine (and Pine) are dinosaurs in the software world. You can manage your mail in a comfortable way—no more and no less. It is available for many operating systems (even [Termux for Android][12]). And because the configuration is stored in a plain text file (`.pinerc`), you can simply copy it to a device and it works.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/love-alpine
作者:[Heiko Ossowski][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/hossow
[1]:https://en.wikipedia.org/wiki/Alpine_(email_client)
[2]:/file/405641
[3]:https://opensource.com/sites/default/files/uploads/alpine_main_menu.png (alpine_main_menu.png)
[4]:https://www.gnu.org/gnu/linux-and-gnu.en.html
[5]:https://en.wikipedia.org/wiki/Pine_(email_client)
[6]:/file/405646
[7]:https://opensource.com/sites/default/files/uploads/alpine_setup_configuration.png (alpine_setup_configuration.png)
[8]:/file/405651
[9]:https://opensource.com/sites/default/files/uploads/navigating_the_message_index.png (navigating_the_message_index.png)
[10]:/file/405656
[11]:https://opensource.com/sites/default/files/uploads/composing_an_email_in_alpine.png (composing_an_email_in_alpine.png)
[12]:https://termux.com/

View File

@@ -1,3 +1,5 @@
translating---geekpi
5 applications to manage your to-do list on Fedora
======

View File

@@ -1,106 +0,0 @@
translated by hopefully2333
5 open source role-playing games for Linux
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/dice_tabletop_board_gaming_game.jpg?itok=y93eW7HN)
Gaming has traditionally been one of Linux's weak points. That has changed somewhat in recent years thanks to Steam, GOG, and other efforts to bring commercial games to multiple operating systems, but those games are often not open source. Sure, the games can be played on an open source operating system, but that is not good enough for an open source purist.
So, can someone who only uses free and open source software find games that are polished enough to present a solid gaming experience without compromising their open source ideals? Absolutely. While open source games are unlikely ever to rival some of the AAA commercial games developed with massive budgets, there are plenty of open source games, in many genres, that are fun to play and can be installed from the repositories of most major Linux distributions. Even if a particular game is not packaged for a particular distribution, it is usually easy to download the game from the project's website in order to install and play it.
This article looks at role-playing games. I have already written about [arcade-style games][1], [board & card games][2], [puzzle games][3], and [racing & flying games][4]. In the final article in this series, I plan to cover strategy and simulation games.
### Endless Sky
![](https://opensource.com/sites/default/files/uploads/endless_sky.png)
[Endless Sky][5] is an open source clone of the [Escape Velocity][6] series from Ambrosia Software. Players captain a spaceship and travel between worlds delivering trade goods or passengers, taking on other missions along the way, or they can turn to piracy and steal from cargo ships. The game lets the player decide how they want to experience the game, and the extremely large map of solar systems is theirs to explore as they see fit. Endless Sky is one of those games that defies normal genre classifications, but this action, role-playing, space simulation, trading game is well worth checking out.
To install Endless Sky, run the following command:
On Fedora: `dnf install endless-sky`
On Debian/Ubuntu: `apt install endless-sky`
### FreeDink
![](https://opensource.com/sites/default/files/uploads/freedink.png)
[FreeDink][7] is the open source version of [Dink Smallwood][8], an action role-playing game released by RTSoft in 1997. Dink Smallwood became freeware in 1999, and the source code was released in 2003. In 2008 the game's data files, minus a few sound files, were also released under an open license. FreeDink replaces those sound files with alternatives to provide a complete game. Gameplay is similar to Nintendo's [The Legend of Zelda][9] series. The player's character, the eponymous Dink Smallwood, explores an over-world map filled with hidden items and caves as he moves from one quest to another. Due to its age, FreeDink is not going to stand up to modern commercial games, but it is still a fun game with an amusing story. The game can be expanded by using [D-Mods][10], which are add-on modules that provide additional quests, but the D-Mods do vary greatly in complexity, quality, and age-appropriateness; the main game is suitable for teenagers, but some of the add-ons are for adult audiences.
To install FreeDink, run the following command:
On Fedora: `dnf install freedink`
On Debian/Ubuntu: `apt install freedink`
### ManaPlus
![](https://opensource.com/sites/default/files/uploads/manaplus.png)
Technically not a game in itself, [ManaPlus][11] is a client for accessing various massive multi-player online role-playing games. [The Mana World][12] and [Evol Online][13] are two of the open source games available, but other servers are out there. The games feature 2D sprite graphics reminiscent of Super Nintendo games. While none of the games supported by ManaPlus are as popular as some of the commercial alternatives, they do have interesting worlds and at least a few players are online most of the time. Players are unlikely to run into massive groups of other players, but there are usually enough people around to make the games [MMORPG][14]s, not single-player games that require a connection to a server. The Mana World and Evol Online developers have joined together for future development, but for now, The Mana World's legacy server and Evol Online offer different experiences.
To install ManaPlus, run the following command:
On Fedora: `dnf install manaplus`
On Debian/Ubuntu: `apt install manaplus`
### Minetest
![](https://opensource.com/sites/default/files/uploads/minetest.png)
Explore and build in an open-ended world with [Minetest][15], a clone of Minecraft. Just like the game it is based on, Minetest provides an open-ended world where players can explore and build whatever they wish. Minetest provides a wide variety of block types and tools, making it a good alternative to Minecraft for anyone wanting a more open alternative. Beyond what comes with the basic game, Minetest can be extended with [add-on modules][16], which add even more options.
To install Minetest, run the following command:
On Fedora: `dnf install minetest`
On Debian/Ubuntu: `apt install minetest`
### NetHack
![](https://opensource.com/sites/default/files/uploads/nethack.png)
[NetHack][17] is a classic [Roguelike][18] role-playing game. Players explore a multi-level dungeon as one of several different character races, classes, and alignments. The object of the game is to retrieve the Amulet of Yendor. Players begin on the first level of the dungeon and try to work their way towards the bottom, with each level being randomly generated, which makes for a unique game experience each time. While this game features either ASCII graphics or basic tile graphics, the depth of game-play more than makes up for the primitive graphics. Players who want less primitive graphics might want to check out [Vulture for NetHack][19], which offers better graphics along with sound effects and background music.
To install NetHack, run the following command:
On Fedora: `dnf install nethack`
On Debian/Ubuntu: `apt install nethack-x11` or `apt install nethack-console`
Did I miss one of your favorite open source role-playing games? Share it in the comments below.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/role-playing-games-linux
作者:[Joshua Allen Holm][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/holmja
[1]:https://opensource.com/article/18/1/arcade-games-linux
[2]:https://opensource.com/article/18/3/card-board-games-linux
[3]:https://opensource.com/article/18/6/puzzle-games-linux
[4]:https://opensource.com/article/18/7/racing-flying-games-linux
[5]:https://endless-sky.github.io/
[6]:https://en.wikipedia.org/wiki/Escape_Velocity_(video_game)
[7]:http://www.gnu.org/software/freedink/
[8]:http://www.rtsoft.com/pages/dink.php
[9]:https://en.wikipedia.org/wiki/The_Legend_of_Zelda
[10]:http://www.dinknetwork.com/files/category_dmod/
[11]:http://manaplus.org/
[12]:http://www.themanaworld.org/
[13]:http://evolonline.org/
[14]:https://en.wikipedia.org/wiki/Massively_multiplayer_online_role-playing_game
[15]:https://www.minetest.net/
[16]:https://wiki.minetest.net/Mods
[17]:https://www.nethack.org/
[18]:https://en.wikipedia.org/wiki/Roguelike
[19]:http://www.darkarts.co.za/vulture-for-nethack

View File

@@ -1,3 +1,4 @@
LuuMing translating
6 Reasons Why Linux Users Switch to BSD
======
Thus far I have written several articles about [BSD][1] for It's FOSS. There is always at least one person in the comments asking "Why bother with BSD?" I figured that the best way to respond was to write an article on the topic.

View File

@@ -1,112 +0,0 @@
MPV Player: A Minimalist Video Player for Linux
======
MPV is an open source, cross platform video player that comes with a minimalist GUI and feature rich command line version.
VLC is probably the best video player for Linux or any other operating system. I have been using VLC for years and it is still my favorite.
However, lately, I am more inclined towards minimalist applications with a clean UI. This is how I came across MPV. I loved it so much that I added it to the list of [best Ubuntu applications][1].
[MPV][2] is an open source video player available for Linux, Windows, macOS, BSD and Android. It is actually a fork of [MPlayer][3].
The graphical user interface is sleek and minimalist.
![MPV Player Interface in Linux][4]
MPV Player
### MPV Features
MPV has all the features required from a standard video player. You can play a variety of videos and control the playback with usual shortcuts.
* Minimalist GUI with only the necessary controls.
* Video codecs support.
* High quality video output and GPU video decoding.
* Supports subtitles.
* Can play YouTube and other streaming videos through the command line.
* CLI version of MPV can be embedded in web and other applications.
Though MPV player has a minimal UI with limited options, don't underestimate its capabilities. Its main power lies in the command line version.
Just type the command `mpv --list-options` and you'll see that it provides 447 different kinds of options. But this article is not about utilizing the advanced settings of MPV. Let's see how good it is as a regular desktop video player.
### Installing MPV in Linux
MPV is a popular application and it should be found in the default repositories of most Linux distributions. Just look for it in the Software Center application.
I can confirm that it is available in Ubuntu's Software Center. You can install it from there or simply use the following command:
```
sudo apt install mpv
```
You can find installation instructions for other platforms on [MPV website][5].
### Using MPV Video Player
Once installed, you can open a video file with MPV by right-clicking and choosing MPV.
![MPV Player Interface][6]
MPV Player Interface
The interface has only a control panel that is only visible when you hover your mouse on the player. As you can see, the control panel provides you the option to pause/play, change track, change audio track, subtitles and switch to full screen.
MPV's default window size depends on the quality of the video you are playing. A 240p video opens in a small window, while a 1080p video results in an almost full-screen window on a Full-HD display. You can always double-click on the player to make it full screen, irrespective of the video size.
#### The subtitle struggle
If your video has a subtitle file, MPV will [automatically play subtitles][7], and you can choose to disable them. However, if you want to use an external subtitle file, that option is not directly available from the player.
You can rename the additional subtitle file to exactly match the name of the video file and keep it in the same folder as the video file. MPV should then play your subtitles.
An easier option to play external subtitles is to simply drag and drop the subtitle file into the player.
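If you are comfortable with the command line, mpv's `--sub-file` option loads an external subtitle file directly; a quick sketch with placeholder file names:
```
$ mpv --sub-file=movie.srt movie.mp4
```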
#### Playing YouTube and other online video content
To play online videos, youll have to use the command line version of MPV.
Open a terminal and use it in the following fashion:
```
mpv <URL_of_Video>
```
![Playing YouTube videos on Linux desktop using MPV][8]
Playing YouTube videos with MPV
I didn't find playing YouTube videos in the MPV player a pleasant experience. It kept buffering, and that was utterly frustrating.
#### Should you use MPV player?
That depends on you. If you like to experiment with applications, you should give MPV a go. Otherwise, the default video player and VLC are always good enough.
Earlier when I wrote about [Sayonara][9], I wasn't sure if people would like an obscure music player over the popular ones, but it was loved by It's FOSS readers.
Try MPV and see if it is something you would like to use as your default video player.
If you liked MPV but want slightly more features on the graphical interface, I suggest using [GNOME MPV Player][10].
Have you used MPV video player? How was your experience with it? What you liked or disliked about it? Do share your views in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/mpv-video-player/
作者:[Abhishek Prakash][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[1]:https://itsfoss.com/best-ubuntu-apps/
[2]:https://mpv.io/
[3]:http://www.mplayerhq.hu/design7/news.html
[4]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/mpv-player.jpg
[5]:https://mpv.io/installation/
[6]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/mpv-player-interface.png
[7]:https://itsfoss.com/how-to-play-movie-with-subtitles-on-samsung-tv-via-usb/
[8]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/play-youtube-videos-on-mpv-player.jpeg
[9]:https://itsfoss.com/sayonara-music-player/
[10]:https://gnome-mpv.github.io/

View File

@@ -1,323 +0,0 @@
pinewall translating
A Collection Of More Useful Unix Utilities
======
![](https://www.ostechnix.com/wp-content/uploads/2017/08/Moreutils-720x340.png)
We all know about the **GNU core utilities** that come pre-installed with all Unix-like operating systems. These are the basic file, shell, and text manipulation utilities of the GNU operating system. The GNU core utilities contain commands such as cat, ls, rm, mkdir, rmdir, touch, tail, and wc for performing day-to-day operations. Beyond these, there are other useful collections of Unix utilities that are not included by default in Unix-like operating systems. Meet **moreutils**, a growing collection of more useful Unix utilities. The moreutils can be installed on GNU/Linux and various Unix flavours such as FreeBSD, OpenBSD, and Mac OS.
As of writing this guide, Moreutils provides the following utilities:
* **chronic** Runs a command quietly unless it fails.
* **combine** Combine the lines in two files using boolean operations.
* **errno** Look up errno names and descriptions.
* **ifdata** Get network interface info without parsing ifconfig output.
* **ifne** Run a program if the standard input is not empty.
* **isutf8** Check if a file or standard input is utf-8.
* **lckdo** Execute a program with a lock held.
* **mispipe** Pipe two commands, returning the exit status of the first.
* **parallel** Run multiple jobs at once.
* **pee** Tee standard input to pipes.
* **sponge** Soak up standard input and write to a file.
* **ts** Timestamp standard input.
* **vidir** Edit a directory in your text editor.
* **vipe** Insert a text editor into a pipe.
* **zrun** Automatically uncompress arguments to command.
### Install moreutils on Linux
The moreutils package is available in many Linux distributions, so you can install it using your distribution's package manager.
On **Arch Linux** and derivatives such as **Antergos** and **Manjaro Linux**, run the following command to install moreutils.
```
$ sudo pacman -S moreutils
```
On **Fedora**:
```
$ sudo dnf install moreutils
```
On **RHEL**, **CentOS**, and **Scientific Linux**:
```
$ sudo yum install epel-release
$ sudo yum install moreutils
```
On **Debian**, **Ubuntu**, and **Linux Mint**:
```
$ sudo apt-get install moreutils
```
### Moreutils A Collection Of More Useful Unix Utilities
Let us see the usage details of some moreutils tools.
##### The “Combine” utility
As the name implies, the **Combine** utility of moreutils package combines the sets of lines from two files using boolean operations such as “and”, “not”, “or”, “xor”.
* **and** Outputs lines that are in file1 if they are also present in file2.
* **not** Outputs lines that are in file1 but not in file2.
* **or** Outputs lines that are in file1 or file2.
* **xor** Outputs lines that are in either file1 or file2, but not in both files.
Let me show you an example, so you can understand exactly what this utility does. I have two files, namely **file1** and **file2**. Here are the contents of those two files.
```
$ cat file1
is
was
were
where
there
$ cat file2
is
were
there
```
Now, let me combine them using the "and" boolean operation.
```
$ combine file1 and file2
is
were
there
```
As you see in the above example, the "and" boolean operator outputs lines that are in file1 if they are also present in file2. To put this more clearly, it displays the common lines (i.e., is, were, there) which are present in both files.
Let us now use the "not" operator and see the result.
```
$ combine file1 not file2
was
where
```
As you see in the above output, the “not” operator displays the lines that are only in file1, but not in file2.
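For completeness, the "xor" operator keeps only the lines unique to either file. With the same two sample files, the output should look like this:
```
$ combine file1 xor file2
was
where
```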
##### The “ifdata” utility
The “ifdata” utility can be used to check for the existence of a network interface, to get information about the network interface, such as its IP address. Unlike the built-in commands such as “ifconfig” or “ip”, ifdata has simple to parse output that is designed to be easily used by a shell script.
To display IP address details of a network interface, say wlp9s0, run:
```
$ ifdata -p wlp9s0
192.168.43.192 255.255.255.0 192.168.43.255 1500
```
To display the netmask only, run:
```
$ ifdata -pn wlp9s0
255.255.255.0
```
To check the hardware address of a NIC:
```
$ ifdata -ph wlp9s0
A0:15:46:90:12:3E
```
To check if a NIC exists or not, use the "-pe" flag.
```
$ ifdata -pe wlp9s0
yes
```
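Because the output is this simple, ifdata drops straight into shell scripts. Here is a minimal sketch (the interface name wlp9s0 is just the example used above):
```
#!/bin/sh
# Abort if the interface does not exist
if [ "$(ifdata -pe wlp9s0)" != "yes" ]; then
    echo "wlp9s0 not found" >&2
    exit 1
fi
# Otherwise print its IP details
echo "wlp9s0 info: $(ifdata -p wlp9s0)"
```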
##### The “Pee” command
It is somewhat similar to the "tee" command.
Let us see an example of “tee” command usage.
```
$ echo "Welcome to OSTechNIx" | tee file1 file2
Welcome to OSTechNIx
```
The above command will create two files, namely **file1** and **file2**, append the line "Welcome to OSTechNIx" to both files, and finally print the message "Welcome to OSTechNIx" in your Terminal.
The "Pee" command performs a similar function, but slightly differs from the "tee" command. Look at the following command:
```
$ echo "Welcome to OSTechNIx" | pee cat cat
Welcome to OSTechNIx
Welcome to OSTechNIx
```
As you see in the above output, the two instances of the "cat" command receive the output from the "echo" command and display it twice in the Terminal.
##### The “Sponge” utility
This is yet another useful utility from moreutils package. **Sponge** reads standard input and writes it out to the specified file. Unlike a shell redirect, sponge soaks up all its input before writing the output file.
Have a look at the contents of the following text file.
```
$ cat file1
I
You
Me
We
Us
```
As you see, the file contains some random lines that are not in alphabetical order. You want to sort the contents in alphabetical order. What would you do?
```
$ sort file1 > file1_sorted
```
Correct, isn't it? Of course! As you see in the above command, I have sorted the contents of **file1** in alphabetical order and saved them in a new file called **"file1_sorted"**. But you can do the same without creating a new file (i.e., file1_sorted) by using the "sponge" command as shown below.
```
$ sort file1 | sponge file1
```
Now, check if the contents are sorted in alphabetical order.
```
$ cat file1
I
Me
Us
We
You
```
See? We don't need to create a new file. That's very useful in scripting. And the good thing is that sponge preserves the permissions of the output file if it already exists.
##### The “ts” utility
As the name says, “ts” command adds a timestamp to the beginning of each line of input.
Look at the following command's output:
```
$ ping -c 2 localhost
PING localhost(localhost.localdomain (::1)) 56 data bytes
64 bytes from localhost.localdomain (::1): icmp_seq=1 ttl=64 time=0.055 ms
64 bytes from localhost.localdomain (::1): icmp_seq=2 ttl=64 time=0.079 ms
--- localhost ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1018ms
rtt min/avg/max/mdev = 0.055/0.067/0.079/0.012 ms
```
Now, run the same command with the "ts" utility as shown below.
```
$ ping -c 2 localhost | ts
Aug 21 13:32:28 PING localhost(localhost (::1)) 56 data bytes
Aug 21 13:32:28 64 bytes from localhost (::1): icmp_seq=1 ttl=64 time=0.063 ms
Aug 21 13:32:28 64 bytes from localhost (::1): icmp_seq=2 ttl=64 time=0.113 ms
Aug 21 13:32:28
Aug 21 13:32:28 --- localhost ping statistics ---
Aug 21 13:32:28 2 packets transmitted, 2 received, 0% packet loss, time 4ms
Aug 21 13:32:28 rtt min/avg/max/mdev = 0.063/0.088/0.113/0.025 ms
```
As you see in the above output, ts adds a timestamp at the beginning of each line. Here is another example.
```
$ ls -l | ts
Aug 21 13:34:25 total 120
Aug 21 13:34:25 drwxr-xr-x 2 sk users 12288 Aug 20 20:05 Desktop
Aug 21 13:34:25 drwxr-xr-x 2 sk users 4096 Aug 10 18:44 Documents
Aug 21 13:34:25 drwxr-xr-x 24 sk users 12288 Aug 21 13:06 Downloads
[...]
```
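ts also accepts an optional strftime-style format string as its argument; for example, something like this (the printed timestamp is illustrative):
```
$ echo "hello" | ts '%Y-%m-%d %H:%M:%S'
2018-08-21 13:40:12 hello
```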
##### The “Vidir” utility
The “Vidir” utility allows you to edit the contents of a specified directory in **vi** editor (Or, whatever you have in **$EDITOR** ). If no directory is specified, it will edit your current working directory.
The following command edits the contents of the directory called “Desktop”.
```
$ vidir Desktop/
```
![vidir][2]
The above command will open the specified directory in your **vi** editor. Each item in the directory being edited will have a number. You can now edit the files the way you do in the vi editor. For example, delete lines to remove files from the directory, or edit filenames to rename files.
You can edit sub-directories as well. The following command edits the current working directory along with its sub-directories.
```
$ find | vidir -
```
Please note the “-” at the end of the command. If “-” is specified as the directory to edit, it reads a list of filenames from stdin and displays those for editing.
If you want to edit only the files in the current working directory, you can use the following command:
```
$ find -type f | vidir -
```
Want to edit specific file types, say .png files? Then you would use:
```
$ vidir *.png
```
This command edits only the .png files in the current directory.
##### The “Vipe” Utility
The “vipe” command allows you to run your default editor in the middle of a Unix pipeline and edit the data that is being piped between programs.
The following command opens the vi editor (my default editor, of course), allows you to edit the input of the "echo" command (i.e., Welcome to OSTechNIx), and displays the final result.
```
$ echo "Welcome to OSTechNIx" | vipe
Hello World
```
As you see in the above output, I passed the input "Welcome to OSTechNIx" to the vi editor, edited it to "Hello World", and displayed the final output.
And that's all for now. I have covered only a few utilities; the moreutils package has more. I already mentioned the utilities it currently includes in the introductory section. You can read the man pages for greater detail on the above commands. For example, to know more about the "vidir" command, run:
```
$ man vidir
```
Hope this helps. I will be back soon with another interesting and useful guide. If you find our articles helpful, please share them on your social and professional networks and support OSTechNix.
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/moreutils-collection-useful-unix-utilities/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[2]:http://www.ostechnix.com/wp-content/uploads/2017/08/sk@sk_001-1.png

View File

@@ -0,0 +1,117 @@
How I recorded user behaviour on my competitors websites
======
### Update
Google's team has tracked down my test site, most likely using the source code I shared, and de-indexed the whole domain.
Last time [I publicly exposed a flaw][1], Google issued a [manual penalty][2] and devalued a single offending page. This time, there is no notice in Search Console. The site is completely removed from their index without any notification.
I've received a lot of criticism about the way I've handled this. Many are suggesting the right way is to approach Google directly with security flaws like this instead of writing about it publicly. Others are suggesting I acted unethically, or even illegally, by running this test. I think it should be obvious that if I intended to exploit this method I wouldn't write about it. With so much risk and so little gain, is this even worth doing in practice? Of course not. I'd be more concerned about those who do unethical things and don't write about it.
### My wish list:
a) Manipulating the back button in Chrome shouldn't be possible in 2018
b) Websites that employ this tactic should be detected and penalised by Google's algorithms
c) If still found in Google's results, such pages should be labelled with a "this page may be harmful" notice.
### Heres what I did:
1. User lands on my page (referrer: google)
2. When they hit the "back" button in Chrome, JS sends them to my copy of the SERP
3. A click on any competitor takes them to my mirror of the competitor's site (noindex)
4. Now I generate heatmaps and scrollmaps, and record screen interactions and typing.
![][3]
![script][4]
![][5]
![][6]
Interestingly, only about 50% of users found anything suspicious, partly due to the fact that I used https on all my pages, which is one of the main [trust factors on the web][7].
Many users are just happy to see the “padlock” in their browser.
At this point I was able to:
* Generate heatmaps (clicks, moves, scroll depth)
* Record actual sessions (mouse movement, clicks, typing)
I gasped when I realised I could actually **capture all form submissions and send them to my own email**.
Note: I never actually tried that.
Yikes!
### Wouldn't a website doing this be penalised?
You would think so.
I had this implemented for a **very brief period of time** (and for ethical reasons took it down almost immediately, realising that this may cause trouble). After that I changed the topic of the page completely and moved the test to one of my disposable domains, where it **remained** for five years and ranked really well, though for completely different search terms with rather low search volumes. Its new purpose was to mess with conspiracy theory people.
### Alternative Technique
You don't have to spoof Google SERPs to generate competitors' heatmaps; you can simply A/B test your landing page vs. your clone of theirs through paid traffic (e.g. social media). Is the A/B testing version of this ethically OK? I don't know, but it may get you in legal trouble depending on where you live.
### What did I learn?
Users seldom read home page "fluff" and often look for things like testimonials, case studies, pricing levels and staff profiles / company information in their search for credibility and trust. One of my upcoming tests will be to combine the home page with "about us", "testimonials", "case studies" and "packages". This would give users all they really want on a single page.
### Reader Suggestions
“I wouldve thrown in an exit pop-up to let users know what theyd just been subjected to.”
<https://twitter.com/marcnashaat/status/1031915003224309760>
### From Hacker News
> Howdy, former Matasano pentester here.
> FWIW, I would probably have done something similar to them before I'd worked in the security industry. It's an easy mistake to make, because it's one you make by default: intellectual curiosity doesn't absolve you from legal judgement, and people on the internet tend to flip out if you do something illegal and say anything but "You're right, I was mistaken. I've learned my lesson."
>
> To the author: The reason you pattern-matched into the blackhat category instead of whitehat/grayhat (grayhat?) category is that in the security industry, whenever we discover a vuln, we PoC it and then write it up in the report and tell them immediately. The report typically includes background info, reproduction steps, and recommended actions. The whole thing is typically clinical and detached.
>
> Most notably, the PoC is usually as simple as possible. alert(1) suffices to demonstrate XSS, for example, rather than implementing a fully-working cookie swipe. The latter is more fun, but the former is more impactful.
>
> One interesting idea would've been to create a fake competitor — e.g. "VirtualBagel: Just download your bagels and enjoy." Once it's ranking on Google, run this same experiment and see if you could rank higher.
>
> That experiment would demonstrate two things: (1) the history vulnerability exists, and (2) it's possible for someone to clone a competitor and outrank them with this vulnerability, thereby raising it from sev:low to sev:hi.
>
> So to be clear, the crux of the issue was running the exploit on a live site without their blessing.
>
> But again, don't worry too much. I would have made similar errors without formal training. It's easy for everyone to say "Oh well, it's obvious," but when you feel like you have good intent, it's not obvious at all.
>
> I remind everyone that RTM once ran afoul of the law due to similar intellectual curiosity. (In fairness, his experiment exploded half the internet, but still.)
Source: <https://news.ycombinator.com/item?id=17826106>
### About the author
[Dan Petrovic][9]
Dan Petrovic, the managing director of DEJAN, is Australia's best-known name in the field of search engine optimisation. Dan is a web author, innovator and a highly regarded search industry event speaker.
--------------------------------------------------------------------------------
via: https://dejanseo.com.au/competitor-hack/
作者:[Dan Petrovic][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://dejanseo.com.au/dan-petrovic/
[1]:https://dejanseo.com.au/hijack/
[2]:https://dejanseo.com.au/google-against-content-scrapers/
[3]:https://dejanseo.com.au/wp-content/uploads/2018/08/step-1.png
[4]:https://dejanseo.com.au/wp-content/uploads/2018/08/script.gif
[5]:https://dejanseo.com.au/wp-content/uploads/2018/08/step-2.png
[6]:https://dejanseo.com.au/wp-content/uploads/2018/08/step-3.png
[7]:https://dejanseo.com.au/trust/
[8]:https://secure.gravatar.com/avatar/9068275e6d3863b7dc11f7dff0974ced?s=100&d=mm&r=g
[9]:https://dejanseo.com.au/dan-petrovic/ (Dan Petrovic)
[10]:https://dejanseo.com.au/author/admin/ (More posts by Dan Petrovic)

View File

@@ -0,0 +1,322 @@
What is a Makefile and how does it work?
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_liberate%20docs_1109ay.png?itok=xQOLreya)
If you want to run or update a task when certain files are updated, the `make` utility can come in handy. The `make` utility requires a file, `Makefile` (or `makefile`), which defines a set of tasks to be executed. You may have used `make` to compile a program from source code. Most open source projects use `make` to compile a final executable binary, which can then be installed using `make install`.
In this article, we'll explore `make` and `Makefile` using basic and advanced examples. Before you start, ensure that `make` is installed in your system.
### Basic examples
Let's start by printing the classic "Hello World" on the terminal. Create an empty directory `myproject` containing a file `Makefile` with this content:
```
say_hello:
        echo "Hello World"
```
Now run the file by typing `make` inside the directory `myproject`. The output will be:
```
$ make
echo "Hello World"
Hello World
```
In the example above, `say_hello` behaves like a function name, as in any programming language. This is called the target. The prerequisites or dependencies follow the target. For the sake of simplicity, we have not defined any prerequisites in this example. The command `echo "Hello World"` is called the recipe. The recipe uses prerequisites to make a target. The target, prerequisites, and recipes together make a rule.
To summarize, below is the syntax of a typical rule:
```
target: prerequisites
<TAB> recipe
```
As an example, a target might be a binary file that depends on prerequisites (source files). On the other hand, a prerequisite can also be a target that depends on other dependencies:
```
final_target: sub_target final_target.c
        Recipe_to_create_final_target
sub_target: sub_target.c
        Recipe_to_create_sub_target
```
It is not necessary for the target to be a file; it could be just a name for the recipe, as in our example. We call these "phony targets."
Going back to the example above, when `make` was executed, the entire command `echo "Hello World"` was displayed, followed by actual command output. We often don't want that. To suppress echoing the actual command, we need to start `echo` with `@`:
```
say_hello:
        @echo "Hello World"
```
Now try to run `make` again. The output should display only this:
```
$ make
Hello World
```
Let's add a few more phony targets, `generate` and `clean`, to the `Makefile`:
```
say_hello:
        @echo "Hello World"
generate:
        @echo "Creating empty text files..."
        touch file-{1..10}.txt
clean:
        @echo "Cleaning up..."
        rm *.txt
```
If we try to run `make` after the changes, only the target `say_hello` will be executed. That's because only the first target in the makefile is the default target. Often called the default goal, this is the reason you will see `all` as the first target in most projects. It is the responsibility of `all` to call other targets. We can override this behavior using a special variable called `.DEFAULT_GOAL`.
Let's include that at the beginning of our makefile:
```
.DEFAULT_GOAL := generate
```
This will run the target `generate` as the default:
```
$ make
Creating empty text files...
touch file-{1..10}.txt
```
Note that `.DEFAULT_GOAL` can run only one target at a time. This is why most makefiles include `all` as a target that can call as many targets as needed.
Let's include the phony target `all` and remove `.DEFAULT_GOAL`:
```
all: say_hello generate
say_hello:
        @echo "Hello World"
generate:
        @echo "Creating empty text files..."
        touch file-{1..10}.txt
clean:
        @echo "Cleaning up..."
        rm *.txt
```
Before running `make`, let's include another special phony target, `.PHONY`, where we define all the targets that are not files. `make` will run its recipe regardless of whether a file with that name exists or what its last modification time is. Here is the complete makefile:
```
.PHONY: all say_hello generate clean
all: say_hello generate
say_hello:
        @echo "Hello World"
generate:
        @echo "Creating empty text files..."
        touch file-{1..10}.txt
clean:
        @echo "Cleaning up..."
        rm *.txt
```
Running `make` should call `say_hello` and `generate`:
```
$ make
Hello World
Creating empty text files...
touch file-{1..10}.txt
```
It is a good practice not to call `clean` in `all` or to put it as the first target. `clean` should be called manually, by passing it as the first argument to `make`, when cleaning is needed:
```
$ make clean
Cleaning up...
rm *.txt
```
Now that you have an idea of how a basic makefile works and how to write a simple makefile, let's look at some more advanced examples.
### Advanced examples
#### Variables
In the above example, most target and prerequisite values are hard-coded, but in real projects, these are replaced with variables and patterns.
The simplest way to define a variable in a makefile is to use the `=` operator. For example, to assign the command `gcc` to a variable `CC`:
```
CC = gcc
```
This is also called a recursively expanded variable, and it is used in a rule as shown below:
```
hello: hello.c
    ${CC} hello.c -o hello
```
As you may have guessed, the recipe expands as below when it is passed to the terminal:
```
gcc hello.c -o hello
```
Both `${CC}` and `$(CC)` are valid references to call `gcc`. But if one tries to reassign a variable to itself, it will cause an infinite loop. Let's verify this:
```
CC = gcc
CC = ${CC}
all:
    @echo ${CC}
```
Running `make` will result in:
```
$ make
Makefile:8: *** Recursive variable 'CC' references itself (eventually).  Stop.
```
To avoid this scenario, we can use the `:=` operator (this is also called a simply expanded variable). We should have no problem running the makefile below:
```
CC := gcc
CC := ${CC}
all:
    @echo ${CC}
```
#### Patterns and functions
The following makefile can compile all C programs by using variables, patterns, and functions. Let's explore it line by line:
```
# Usage:
# make        # compile all binary
# make clean  # remove ALL binaries and objects
.PHONY = all clean
CC = gcc                        # compiler to use
LINKERFLAG = -lm
SRCS := $(wildcard *.c)
BINS := $(SRCS:%.c=%)
all: ${BINS}
%: %.o
        @echo "Checking.."
        ${CC} ${LINKERFLAG} $< -o $@
%.o: %.c
        @echo "Creating object.."
        ${CC} -c $<
clean:
        @echo "Cleaning up..."
        rm -rvf *.o ${BINS}
```
* Lines starting with `#` are comments.
* Line `.PHONY = all clean` defines phony targets `all` and `clean`.
* Variable `LINKERFLAG` defines flags to be used with `gcc` in a recipe.
* `SRCS := $(wildcard *.c)`: `$(wildcard pattern)` is one of the functions for filenames. In this case, all files with the `.c` extension will be stored in a variable `SRCS`.
* `BINS := $(SRCS:%.c=%)`: This is called a substitution reference. In this case, if `SRCS` has the values `'foo.c bar.c'`, `BINS` will have `'foo bar'`.
* Line `all: ${BINS}`: The phony target `all` calls the values in `${BINS}` as individual targets.
* Rule:
```
%: %.o
  @echo "Checking.."
  ${CC} ${LINKERFLAG} $< -o $@
```
Let's look at an example to understand this rule. Suppose `foo` is one of the values in `${BINS}`. Then `%` will match `foo` (`%` can match any target name). Below is the rule in its expanded form:
```
foo: foo.o
  @echo "Checking.."
  gcc -lm foo.o -o foo
```
As shown, `%` is replaced by `foo`, and `$<` is replaced by `foo.o`. `$<` matches the prerequisite, and `$@` matches the target. This rule will be called for every value in `${BINS}`.
* Rule:
```
%.o: %.c
  @echo "Creating object.."
  ${CC} -c $<
```
Every prerequisite in the previous rule is considered a target for this rule. Below is the rule in its expanded form:
```
foo.o: foo.c
  @echo "Creating object.."
  gcc -c foo.c
```
* Finally, we remove all binaries and object files in target `clean`.
Below is a rewrite of the above makefile, assuming it is placed in a directory containing a single file `foo.c`:
```
# Usage:
# make        # compile all binary
# make clean  # remove ALL binaries and objects
.PHONY = all clean
CC = gcc                        # compiler to use
LINKERFLAG = -lm
SRCS := foo.c
BINS := foo
all: foo
foo: foo.o
        @echo "Checking.."
        gcc -lm foo.o -o foo
foo.o: foo.c
        @echo "Creating object.."
        gcc -c foo.c
clean:
        @echo "Cleaning up..."
        rm -rvf foo.o foo
```
For more on makefiles, refer to the [GNU Make manual][1], which offers a complete reference and examples.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/what-how-makefile
作者:[Sachin Patil][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/psachin
[1]:https://www.gnu.org/software/make/manual/make.pdf

View File

@@ -0,0 +1,60 @@
translating---geekpi
An introduction to pipes and named pipes in Linux
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW-Internet_construction_9401467_520x292_0512_dc.png?itok=RPkPPtDe)
In Linux, the `pipe` command lets you send the output of one command to another. Piping, as the term suggests, can redirect the standard output, input, or error of one process to another for further processing.
The syntax for the `pipe` or `unnamed pipe` command is the `|` character between any two commands:
`Command-1 | Command-2 | …| Command-N`
Here, the pipe cannot be accessed via another session; it is created temporarily to accommodate the execution of `Command-1` and redirect the standard output. It is deleted after successful execution.
![](https://opensource.com/sites/default/files/uploads/pipe.png)
In the example above, `contents.txt` contains a list of all files in a particular directory—specifically, the output of the `ls -al` command. We first grep the filenames with the "file" keyword from `contents.txt` by piping (as shown), so the output of the `cat` command is provided as the input for the `grep` command. Next, we add piping to execute the `awk` command, which displays the 9th column from the filtered output from the `grep` command. We can also count the number of rows in `contents.txt` using the `wc -l` command.
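Reconstructed as commands, the pipeline described above might look like this (a sketch; `contents.txt` and the "file" keyword are taken from the example):
```
$ cat contents.txt | grep "file" | awk '{print $9}'
$ cat contents.txt | wc -l
```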
A named pipe can last as long as the system is up and running, or until it is deleted. It is a special file that follows the [FIFO][1] (first in, first out) mechanism. It can be used just like a normal file; i.e., you can write to it, read from it, and open or close it. To create a named pipe, the command is:
```
mkfifo <pipe-name>
```
This creates a named pipe file that can be used even over multiple shell sessions.
Another way to create a FIFO named pipe is to use this command:
```
mknod <pipe-name> p
```
To redirect the standard output of any command to another process, use the `>` symbol. To redirect the standard input of any command, use the `<` symbol.
![](https://opensource.com/sites/default/files/uploads/redirection.png)
As shown above, the output of the `ls -al` command is redirected to `contents.txt` and inserted in the file. Similarly, the input for the `tail` command is provided as `contents.txt` via the `<` symbol.
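In command form, the redirection described above could be written as (a sketch):
```
$ ls -al > contents.txt
$ tail < contents.txt
```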
![](https://opensource.com/sites/default/files/uploads/create-named-pipe.png)
![](https://opensource.com/sites/default/files/uploads/verify-output.png)
Here, we have created a named pipe, `my-named-pipe`, and redirected the output of the `ls -al` command into the named pipe. We can then open a new shell session and `cat` the contents of the named pipe, which shows the output of the `ls -al` command, as previously supplied. Notice the size of the named pipe is zero and it has a designation of "p".
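A minimal two-session walkthrough, using the same pipe name:
```
# Terminal 1: create the pipe and write to it
# (the write blocks until a reader opens the pipe)
$ mkfifo my-named-pipe
$ ls -al > my-named-pipe

# Terminal 2: read from the pipe
$ cat my-named-pipe
```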
So, next time you're working with commands at the Linux terminal and find yourself moving data between commands, hopefully a pipe will make the process quick and easy.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/introduction-pipes-linux
作者:[Archit Modi][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/architmodi
[1]:https://en.wikipedia.org/wiki/FIFO_(computing_and_electronics)

View File

@ -0,0 +1,290 @@
Getting started with Sensu monitoring
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003601_05_mech_osyearbook2016_cloud_cc.png?itok=XSV7yR9e)
Sensu is an open source infrastructure and application monitoring solution that monitors servers, services, and application health, and sends alerts and notifications with third-party integration. Written in Ruby, Sensu can use either [RabbitMQ][1] or [Redis][2] to handle messages. It uses Redis to store data.
If you want to monitor your cloud infrastructure in a simple and efficient manner, Sensu is a good option. It can be integrated with many of the modern DevOps stacks your organization may already be using, such as [Slack][3], [HipChat][4], or [IRC][5], and it can even send mobile/pager alerts with [PagerDuty][6].
Sensu's [modular architecture][7] means every component can be installed on the same server or on completely separate machines.
### Architecture
Sensu's main communication mechanism is the Transport. Every Sensu component must connect to the Transport in order to send messages to each other. Transport can use either RabbitMQ (recommended in production) or Redis.
Sensu Server processes event data and takes action. It registers clients and processes check results and monitoring events using filters, mutators, and handlers. The server publishes check definitions to the clients, and the Sensu API provides RESTful access to monitoring data and core functionality.
[Sensu Client][8] executes checks scheduled by Sensu Server, as well as local check definitions. Sensu uses a data store (Redis) to keep all the persistent data. Finally, [Uchiwa][9] is the web interface used to communicate with the Sensu API.
![sensu_system.png][11]
### Installing Sensu
#### Prerequisites
* One Linux installation to act as the server node (I used CentOS 7 for this article)
* One or more Linux machines to monitor (clients)
#### Server side
Sensu requires Redis to be installed. To install Redis, enable the EPEL repository:
```
$ sudo yum install epel-release -y
```
Then install Redis:
```
$ sudo yum install redis -y
```
Modify `/etc/redis.conf` to disable protected mode, listen on every interface, and set a password:
```
$ sudo sed -i 's/^protected-mode yes/protected-mode no/g' /etc/redis.conf
$ sudo sed -i 's/^bind 127.0.0.1/bind 0.0.0.0/g' /etc/redis.conf
$ sudo sed -i 's/^# requirepass foobared/requirepass password123/g' /etc/redis.conf
```
Enable and start Redis service:
```
$ sudo systemctl enable redis
$ sudo systemctl start redis
```
Redis is now installed and ready to be used by Sensu.
Now let's install Sensu.
First, configure the Sensu repository and install the packages:
```
$ sudo tee /etc/yum.repos.d/sensu.repo << EOF
[sensu]
name=sensu
baseurl=https://sensu.global.ssl.fastly.net/yum/\$releasever/\$basearch/
gpgcheck=0
enabled=1
EOF
$ sudo yum install sensu uchiwa -y
```
Let's create the bare minimum configuration files for Sensu:
```
$ sudo tee /etc/sensu/conf.d/api.json << EOF
{
  "api": {
        "host": "127.0.0.1",
        "port": 4567
  }
}
EOF
```
This configures `sensu-api` to listen on localhost, port 4567. Next, configure the Redis and transport connection details:
```
$ sudo tee /etc/sensu/conf.d/redis.json << EOF
{
  "redis": {
        "host": "<IP of server>",
        "port": 6379,
        "password": "password123"
  }
}
EOF
$ sudo tee /etc/sensu/conf.d/transport.json << EOF
{
  "transport": {
        "name": "redis"
  }
}
EOF
```
In these two files, we configure Sensu to use Redis as the transport mechanism and the address where Redis will listen. Clients need to connect directly to the transport mechanism. These two files will be required on each client machine.
```
$ sudo tee /etc/sensu/uchiwa.json << EOF
{
   "sensu": [
        {
        "name": "sensu",
        "host": "127.0.0.1",
        "port": 4567
        }
   ],
   "uchiwa": {
        "host": "0.0.0.0",
        "port": 3000
   }
}
EOF
```
In this file, we configure Uchiwa to listen on every interface (0.0.0.0) on Port 3000. We also configure Uchiwa to use `sensu-api` (already configured).
For security reasons, change the owner of the configuration files you just created:
```
$ sudo chown -R sensu:sensu /etc/sensu
```
Enable and start the Sensu services:
```
$ sudo systemctl enable sensu-server sensu-api sensu-client
$ sudo systemctl start sensu-server sensu-api sensu-client
$ sudo systemctl enable uchiwa
$ sudo systemctl start uchiwa
```
Try accessing the Uchiwa website: `http://<IP of server>:3000`
For production environments, it's recommended to run a cluster of RabbitMQ as the Transport instead of Redis (a Redis cluster can be used in production too), and to run more than one instance of Sensu Server and API for load balancing and high availability.
Sensu is now installed. Now let's configure the clients.
#### Client side
To add a new client, you will need to enable Sensu repository on the client machines by creating the file `/etc/yum.repos.d/sensu.repo`.
```
$ sudo tee /etc/yum.repos.d/sensu.repo << EOF
[sensu]
name=sensu
baseurl=https://sensu.global.ssl.fastly.net/yum/\$releasever/\$basearch/
gpgcheck=0
enabled=1
EOF
```
With the repository enabled, install the Sensu package:
```
$ sudo yum install sensu -y
```
To configure `sensu-client`, create the same `redis.json` and `transport.json` created in the server machine, as well as the `client.json` configuration file:
```
$ sudo tee /etc/sensu/conf.d/client.json << EOF
{
  "client": {
        "name": "rhel-client",
        "environment": "development",
        "subscriptions": [
        "frontend"
        ]
  }
}
EOF
```
In the `name` field, specify a name to identify this client (typically the hostname). The `environment` field can help you filter, and `subscriptions` defines which monitoring checks will be executed by the client.
Finally, enable and start the services and check in Uchiwa, as the new client will register automatically:
```
$ sudo systemctl enable sensu-client
$ sudo systemctl start sensu-client
```
### Sensu checks
Sensu checks have two components: a plugin and a definition.
Sensu is compatible with the [Nagios check plugin specification][12], so any check for Nagios can be used without modification. Checks are executable files and are run by the Sensu client.
Check definitions let Sensu know how, where, and when to run the plugin.
#### Client side
Let's install one check plugin on the client machine. Remember, this plugin will be executed on the clients.
Enable EPEL and install `nagios-plugins-http` :
```
$ sudo yum install -y epel-release
$ sudo yum install -y nagios-plugins-http
```
Now let's explore the plugin by executing it manually. Try checking the status of a web server running on the client machine. It should fail, as we don't have a web server running:
```
$ /usr/lib64/nagios/plugins/check_http -I 127.0.0.1
connect to address 127.0.0.1 and port 80: Connection refused
HTTP CRITICAL - Unable to open TCP socket
```
It failed, as expected. Check the return code of the execution:
```
$ echo $?
2
```
The Nagios check plugin specification defines four return codes for the plugin execution:
| **Plugin return code** | **State** |
|------------------------|-----------|
| 0 | OK |
| 1 | WARNING |
| 2 | CRITICAL |
| 3 | UNKNOWN |
With this information, we can now create the check definition on the server.
#### Server side
On the server machine, create the file `/etc/sensu/conf.d/check_http.json`:
```
{
  "checks": {
    "check_http": {
      "command": "/usr/lib64/nagios/plugins/check_http -I 127.0.0.1",
      "interval": 10,
      "subscribers": [
        "frontend"
      ]
    }
  }
}
```
In the `command` field, use the command we tested before. `interval` tells Sensu how frequently, in seconds, this check should be executed. Finally, `subscribers` defines the clients where the check will be executed.
Restart both `sensu-api` and `sensu-server` and confirm that the new check is available in Uchiwa:
```
$ sudo systemctl restart sensu-api sensu-server
```
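As a quick sanity check, you can also query the Sensu API directly to confirm the check definition was loaded (assuming the API listens on localhost port 4567, as configured earlier):
```
$ curl -s http://127.0.0.1:4567/checks
```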
### What's next?
Sensu is a powerful tool, and this article covers just a glimpse of what it can do. See the [documentation][13] to learn more, and visit the Sensu site to learn more about the [Sensu community][14].
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/getting-started-sensu-monitoring-solution
作者:[Michael Zamot][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/mzamot
[1]:https://www.rabbitmq.com/
[2]:https://redis.io/topics/config
[3]:https://slack.com/
[4]:https://en.wikipedia.org/wiki/HipChat
[5]:http://www.irc.org/
[6]:https://www.pagerduty.com/
[7]:https://docs.sensu.io/sensu-core/1.4/overview/architecture/
[8]:https://docs.sensu.io/sensu-core/1.4/installation/install-sensu-client/
[9]:https://uchiwa.io/#/
[10]:/file/406576
[11]:https://opensource.com/sites/default/files/uploads/sensu_system.png (sensu_system.png)
[12]:https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/4/en/pluginapi.html
[13]:https://docs.sensu.io/
[14]:https://sensu.io/community

View File

@ -0,0 +1,131 @@
How To Easily And Safely Manage Cron Jobs In Linux
======
![](https://www.ostechnix.com/wp-content/uploads/2018/08/Crontab-UI-720x340.jpg)
When it comes to scheduling tasks in Linux, which utility comes to mind first? Yeah, you guessed it right. **Cron!** The cron utility helps you schedule commands/tasks at specific times in Unix-like operating systems. We already published a [**beginner's guide to Cron jobs**][1]. I have a few years of experience in Linux, so setting up cron jobs is no big deal for me. But it is not a piece of cake for newbies. They may unknowingly make small mistakes while editing the plain-text crontab and bring down all their cron jobs. If you think you might mess up your cron jobs, there is a good alternative: say hello to **Crontab UI**, a web-based tool to easily and safely manage cron jobs in Unix-like operating systems.
You don't need to manually edit the crontab file to create, delete, or manage cron jobs. Everything can be done via a web browser with a couple of mouse clicks. Crontab UI allows you to easily create, edit, pause, delete, and back up cron jobs, and even import, export, and deploy jobs on other machines without much hassle. Error logs, mailing, and hooks are also supported. It is free, open source, and written in NodeJS.
### Installing Crontab UI
Installing Crontab UI is just a one-line command. Make sure you have npm installed; if you haven't installed it yet, install it from your distribution's repositories first.
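For example (package names are assumptions and may vary by distribution):
```
$ sudo apt install npm    # Debian/Ubuntu
$ sudo dnf install npm    # Fedora
```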
Next, run the following command to install Crontab UI.
```
$ npm install -g crontab-ui
```
It's that simple. Let us go ahead and see how to manage cron jobs using Crontab UI.
### Easily And Safely Manage Cron Jobs In Linux
To launch Crontab UI, simply run:
```
$ crontab-ui
```
You will see the following output:
```
Node version: 10.8.0
Crontab UI is running at http://127.0.0.1:8000
```
Now, open your web browser and navigate to **<http://127.0.0.1:8000>**. Make sure port 8000 is allowed in your firewall/router.
Please note that you can only access the Crontab UI web dashboard from the local system itself.
If you want to run Crontab UI with your system's IP and a custom port (so you can access it from any remote system in the network), use the following command instead:
```
$ HOST=0.0.0.0 PORT=9000 crontab-ui
Node version: 10.8.0
Crontab UI is running at http://0.0.0.0:9000
```
Now, Crontab UI can be accessed from any system in the network using the URL **http://<IP-Address>:9000**.
This is how the Crontab UI dashboard looks.
![](https://www.ostechnix.com/wp-content/uploads/2018/08/crontab-ui-dashboard.png)
As you can see in the above screenshot, the Crontab UI dashboard is very simple. All options are self-explanatory.
To exit Crontab UI, press **CTRL+C**.
**Create, edit, run, stop, delete a cron job**
To create a new cron job, click on the “New” button. Enter your cron job details and click Save.
1. Name the cron job. It is optional.
2. The full command you want to run.
3. Choose the schedule time. You can either choose a quick schedule (such as Startup, Hourly, Daily, Weekly, Monthly, or Yearly) or set the exact time to run the command. After you choose the schedule time, the syntax of the cron job will be shown in the **Jobs** field.
4. Choose whether you want to enable error logging for the particular job.
Here is my sample cron job.
![](https://www.ostechnix.com/wp-content/uploads/2018/08/create-new-cron-job.png)
As you can see, I have set up a cron job to clear the pacman cache every month.
Similarly, you can create as many jobs as you want. You will see all cron jobs in the dashboard.
![](https://www.ostechnix.com/wp-content/uploads/2018/08/crontab-ui-dashboard-1.png)
If you want to change any parameter in a cron job, just click the **Edit** button below the job and modify the parameters as you wish. To run a job immediately, click the **Run** button. To stop a job, click the **Stop** button. You can view the log details of any job by clicking the **Log** button. If a job is no longer required, simply press the **Delete** button.
**Backup cron jobs**
To back up all cron jobs, press the **Backup** button on the main dashboard and choose OK to confirm the backup.
![](https://www.ostechnix.com/wp-content/uploads/2018/08/backup-cron-jobs.png)
You can use this backup in case you mess up the contents of the crontab file.
**Import/Export cron jobs to other systems**
Another notable feature of Crontab UI is that you can import, export, and deploy cron jobs to other systems. If you have multiple systems on your network that require the same cron jobs, just press the **Export** button and choose the location to save the file. All contents of the crontab file will be saved in a file named **crontab.db**.
Here is the contents of the crontab.db file.
```
$ cat Downloads/crontab.db
{"name":"Remove Pacman Cache","command":"rm -rf /var/cache/pacman","schedule":"@monthly","stopped":false,"timestamp":"Thu Aug 23 2018 10:34:19 GMT+0000 (Coordinated Universal Time)","logging":"true","mailing":{},"created":1535020459093,"_id":"lcVc1nSdaceqS1ut"}
```
Then you can transfer the entire crontab.db file to another system and import it there. You don't need to manually create cron jobs on every system; just create them on one system, then export and import them to the rest of the systems on the network.
**Get the contents from or save to existing crontab file**
Chances are you may have already created some cron jobs using the **crontab** command. If so, you can retrieve the contents of the existing crontab file by clicking the **“Get from crontab”** button in the main dashboard.
![](https://www.ostechnix.com/wp-content/uploads/2018/08/get-from-crontab.png)
Similarly, you can save jobs newly created with the Crontab UI utility to the existing crontab file on your system. To do so, just click the **Save to crontab** option in the dashboard.
See? Managing cron jobs is not that complicated. Any newbie user can easily maintain any number of jobs without much hassle using Crontab UI. Give it a try and let us know what you think about this tool. I am all ears!
And that's all for now. Hope this was useful. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-easily-and-safely-manage-cron-jobs-in-linux/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.ostechnix.com/a-beginners-guide-to-cron-jobs/

View File

@ -0,0 +1,90 @@
How to publish a WordPress blog to a static GitLab Pages site
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/web-design-monitor-website.png?itok=yUK7_qR0)
A long time ago, I set up a WordPress blog for a family member. There are lots of options these days, but back then there were few decent choices if you needed a web-based CMS with a WYSIWYG editor. An unfortunate side effect of things working well is that the blog has generated a lot of content over time. That means I was also regularly updating WordPress to protect against the exploits that are constantly popping up.
So I decided to convince the family member that switching to [Hugo][1] would be relatively easy, and the blog could then be hosted on [GitLab][2]. But trying to extract all that content and convert it to [Markdown][3] turned into a huge hassle. There were automated scripts that got me 95% there, but nothing worked perfectly. Manually updating all the posts was not something I wanted to do, so eventually, I gave up trying to move the blog.
Recently, I started thinking about this again and realized there was a solution I hadn't considered: I could continue maintaining the WordPress server but set it up to publish a static mirror and serve that with [GitLab Pages][4] (or [GitHub Pages][5] if you like). This would allow me to automate [Let's Encrypt][6] certificate renewals as well as eliminate the security concerns associated with hosting a WordPress site. This would, however, mean comments would stop working, but that feels like a minor loss in this case because the blog did not garner many comments.
Here's the solution I came up with, which so far seems to be working well:
* Host the WordPress site at a URL that is not linked to or from anywhere else, to reduce the odds of it being exploited. In this example, we'll use <http://private.localconspiracy.com> (even though this site is actually built with Pelican).
* [Set up hosting on GitLab Pages][7] for the public URL <https://www.localconspiracy.com>.
* Add a [cron job][8] that checks whether the last-built date differs between the two URLs; if the build dates differ, mirror the WordPress version.
* After mirroring with `wget`, update all links from the "private" version to the "public" version.
* Do a `git push` to publish the new content.
These are the two scripts I use:
`check-diff.sh` (called by cron every 15 minutes)
```
#!/bin/bash
ORIGINDATE="$(curl -v --silent http://private.localconspiracy.com/feed/ 2>&1|grep lastBuildDate)"
PUBDATE="$(curl -v --silent https://www.localconspiracy.com/feed/ 2>&1|grep lastBuildDate)"
if [ "$ORIGINDATE" !=  "$PUBDATE" ]
then
  /home/doc/repos/localconspiracy/mirror.sh
fi
```
`mirror.sh`:
```
#!/bin/sh
cd /home/doc/repos/localconspiracy
wget \
--mirror \
--convert-links  \
--adjust-extension \
--page-requisites  \
--retry-connrefused  \
--exclude-directories=comments \
--execute robots=off \
http://private.localconspiracy.com
git rm -rf public/*
mv private.localconspiracy.com/* public/.
rmdir private.localconspiracy.com
find ./public/ -type f -exec sed -i -e 's|http://private.localconspiracy|https://www.localconspiracy|g' {} \;
find ./public/ -type f -exec sed -i -e 's|http://www.localconspiracy|https://www.localconspiracy|g' {} \;
git add public/*
git commit -m "new snapshot"
git push origin master
```
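The crontab entry that drives `check-diff.sh` every 15 minutes might look like this (a sketch using the path from the scripts above):
```
*/15 * * * * /home/doc/repos/localconspiracy/check-diff.sh
```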
That's it! Now, when the blog is changed, within 15 minutes the site is mirrored to a static version and pushed up to the repo, where it will be reflected in GitLab Pages.
This concept could be extended a little further if you wanted to [run WordPress locally][9]. In that case, you would not need a server to host your WordPress blog; you could just run it on your local machine. In that scenario, there's no chance of your blog getting exploited. As long as you can run `wget` against it locally, you could use the approach outlined above to have a WordPress site hosted on GitLab Pages.
_This article was originally posted at [Local Conspiracy][10]. Reposted with permission._
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/publish-wordpress-static-gitlab-pages-site
作者:[Christopher Aedo][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/docaedo
[1]:https://gohugo.io/
[2]:https://gitlab.com/
[3]:https://en.wikipedia.org/wiki/Markdown
[4]:https://docs.gitlab.com/ee/user/project/pages/
[5]:https://pages.github.com/
[6]:https://letsencrypt.org/
[7]:https://about.gitlab.com/2016/04/07/gitlab-pages-setup/
[8]:https://en.wikipedia.org/wiki/Cron
[9]:https://codex.wordpress.org/Installing_WordPress_Locally_on_Your_Mac_With_MAMP
[10]:https://localconspiracy.com/2018/08/wp-on-gitlab.html

View File

@ -0,0 +1,108 @@
5 cool music player apps
======
![](https://fedoramagazine.org/wp-content/uploads/2018/08/5-cool-music-apps-816x345.jpg)
Do you like music? Then Fedora may have just what you're looking for. This article introduces different music player apps that run on Fedora. You're covered whether you have an extensive music library, a small one, or none at all. Here are four graphical applications and one terminal-based music player that will have you jamming.
### Quod Libet
Quod Libet is a complete manager for your large audio library. If you have an extensive audio library that you would like not just to listen to but also manage, Quod Libet might be a good choice for you.
![][1]
Quod Libet can import music from multiple locations on your disk, and allows you to edit tags of the audio files — so everything is under your control. As a bonus, there are various plugins available for anything from a simple equalizer to a [last.fm][2] sync. You can also search and play music directly from [Soundcloud][3].
Quod Libet works great on HiDPI screens, and is available as an RPM in Fedora or on [Flathub][4] in case you run [Silverblue][5]. Install it using Gnome Software or the command line:
```
$ sudo dnf install quodlibet
```
### Audacious
If you like a simple music player that could even look like the legendary Winamp, Audacious might be a good choice for you.
![][6]
Audacious probably won't manage all your music at once, but it works great if you like to organize your music as files. You can also export and import playlists without reorganizing the music files themselves.
As a bonus, you can make it look like Winamp. To make it look the same as in the screenshot above, go to Settings / Appearance, select Winamp Classic Interface at the top, and choose the Refugee skin right below. And Bob's your uncle!
Audacious is available as an RPM in Fedora, and can be installed using the Gnome Software app or the following command on the terminal:
```
$ sudo dnf install audacious
```
### Lollypop
Lollypop is a music player that provides great integration with GNOME. If you enjoy how GNOME looks and would like a music player that's nicely integrated, Lollypop could be for you.
![][7]
Apart from nice visual integration with the GNOME Shell, it works nicely on HiDPI screens and supports a dark theme.
As a bonus, Lollypop has an integrated cover art downloader, and a so-called Party Mode (the note button at the top-right corner) that selects and plays music automatically for you. It also integrates with online services such as [last.fm][2] or [libre.fm][8].
Available as an RPM in Fedora and on [Flathub][4] for your [Silverblue][5] workstation, it can be installed using the Gnome Software app or the terminal:
```
$ sudo dnf install lollypop
```
### Gradio
What if you don't own any music but still like to listen to it? Or you simply love radio? Then Gradio is here for you.
![][9]
Gradio is a simple radio player that allows you to search and play internet radio stations. You can find them by country, language, or simply using search. As a bonus, its visually integrated into GNOME Shell, works great with HiDPI screens, and has an option for a dark theme.
Gradio is available on [Flathub][4], which works with both Fedora Workstation and [Silverblue][5]. Install it using the Gnome Software app.
### sox
Do you like using the terminal instead, and listening to some music while you work? You don't have to leave the terminal, thanks to sox.
![][10]
sox is a very simple, terminal-based music player. All you need to do is to run a command such as:
```
$ play file.mp3
```
…and sox will play it for you. Apart from individual audio files, sox also supports playlists in the m3u format.
As a bonus, because sox is a terminal-based application, you can run it over ssh. Do you have a home server with speakers attached to it? Or do you want to play music from a different computer? Try using it together with [tmux][11], so you can keep listening even when the session closes.
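For example, starting playback on a remote machine so it survives a disconnect might look like this (a sketch; the hostname and file path are placeholders):
```
$ ssh user@homeserver            # log into the machine with the speakers
$ tmux new -s music              # start a tmux session
$ play ~/music/favorites.m3u     # detach with Ctrl+b d and keep listening
```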
sox is available in Fedora as an RPM. Install it by running:
```
$ sudo dnf install sox
```
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/5-cool-music-player-apps/
作者:[Adam Šamalík][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://fedoramagazine.org/author/asamalik/
[1]:https://fedoramagazine.org/wp-content/uploads/2018/08/qodlibet-300x217.png
[2]:https://last.fm
[3]:https://soundcloud.com/
[4]:https://flathub.org/home
[5]:https://teamsilverblue.org/
[6]:https://fedoramagazine.org/wp-content/uploads/2018/08/audacious-300x136.png
[7]:https://fedoramagazine.org/wp-content/uploads/2018/08/lollypop-300x172.png
[8]:https://libre.fm
[9]:https://fedoramagazine.org/wp-content/uploads/2018/08/gradio.png
[10]:https://fedoramagazine.org/wp-content/uploads/2018/08/sox-300x179.png
[11]:https://fedoramagazine.org/use-tmux-more-powerful-terminal/

View File

@ -0,0 +1,183 @@
Add free books to your eReader: Formatting tips
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/books_library_reading_list_colorful.jpg?itok=jJtnyniB)
In my recent article, [A handy way to add free books to your eReader][1], I explained how to convert the plaintext indexes at [Project Gutenberg][2] to HTML and then EPUBs. But as one commenter noted, there is a problem in older indexes, where individual books are not always separated by an extra newline character.
I saw quite vividly the extent of the problem when I was working on the index for 2007, where you see things like this:
```
Audio: The General Epistle of James                                      22931
Audio: The Epistle to the Hebrews                                        22930
Audio: The Epistle of Philemon                                           22929
Sacrifice, by Stephen French Whitman                                     22928
The Atlantic Monthly, Volume 18, No. 105, July 1866, by Various          22927
The Continental Monthly, Vol. 6, No 3,  September 1864, by Various       22926
The Story of Young Abraham Lincoln, by Wayne Whipple                     22925
Pathfinder, by Alan Douglas                                              22924
  [Subtitle: or, The Missing Tenderfoot]
Pieni helmivyo, by Various                                               22923
  [Subtitle: Suomen runoja koulunuorisolle]
  [Editor: J. Waananen]  [Language: Finnish]
The Posy Ring, by Various                                                22922
```
My first reaction was, "Well, how bad can it be to just add newlines where needed?" The answer: "Really bad." After days of working this way and stopping only when the cramps in my hand became too annoying, I decided to revisit the problem. I thought I might need to do multiple Find-Replace passes, maybe keyed on things like `[Language: Finnish]` or maybe just the `]` bracket, but this seemed almost as laborious as the manual method.
Then I noticed a particular feature: For most instances where a newline was needed, a newline character was immediately followed by the capital letter of the next title. For lines where there was still more information about the book, the newline was followed by spaces. So I tried this: In the Find text box in [KWrite][3] (remember, we're using regex), I put:
```
(\n[A-Z])
```
and in Replace, I put:
```
\n\1
```
For every match inside the parentheses, I added a preceding newline, retaining whatever the capital letter was. This worked extremely well. The few instances where it failed involved book titles beginning with a number or with quotes. I fixed these manually, but I could have put this:
```
(\n[0-9])
```
in Find and run Replace All again. Later, I also tried it with the quotes—this requires a backslash, like this:
```
(\n\") and (\n\')
```
One side effect is that a number of the listings were separated by three newline characters. Not an issue for XHTML, but easily fixed by putting in Find:
```
\n\n\n
```
and in Replace:
```
\n\n
```
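If you would rather script these passes than run them in KWrite, the same substitutions can be done from the shell with perl in slurp mode (a sketch; the filename is an example):
```
# -0777 slurps the whole file so the regex can see across newlines
perl -0777 -pi -e 's/\n([A-Z])/\n\n$1/g' GUTINDEX.2007   # add the missing newlines
perl -0777 -pi -e 's/\n\n\n/\n\n/g' GUTINDEX.2007        # collapse triple newlines
```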
To review the process with the new features:
1. Remove the preamble and other text you dont want
2. Add extra newlines with the method shown above
3. Convert three consecutive newlines to two (optional)
4. Add the appropriate HTML tags at the beginning and end
5. Create the links based on finding `(\d\d\d\d\d)`, replacing with `<a href="http://www.gutenberg.org/ebooks/\1">\1</a>`
6. Add paragraph tags by finding `\n\n` and replacing with `</p>\n\n<p>`
7. Add a `</p>` just before the `</body>` tag at the end
8. Fix the headers, preceding each with `<h3>` and changing the `</p>` to `</h3>` (the older indexes have only a single header)
9. Save the file with an `.xhtml` suffix, then import to [Sigil][4] to make your EPUB.
The next issue that comes up is when the eBook numbers include only four digits. This is a problem since there are many four-digit numbers in the listings, many of which are dates. The answer comes from modifying our strategy in point 5 in the above listing.
In Find, put:
`(\d\d\d\d)\n`
and in Replace, put:
`<a href="http://www.gutenberg.org/ebooks/\1">\1</a>\n`
Notice that the `\n` is outside the parentheses; therefore, we need to add it at the end of the new replacement. Now we see another problem resulting from this new method: Some of the eBook numbers are followed by C (copyrighted). So we need to do another pass in Find:
`(\d\d\d\d)C\n`
and in Replace:
`<a href="http://www.gutenberg.org/ebooks/\1">\1</a>C\n`
I noticed that as of the 2002 index, the lack of extra newlines between listings was no longer a problem, and this continued all the way to the very first index, so steps 2 and 3 became unnecessary.
I've now taken the process all the way back to the beginning, GUTINDEX.1996, and it works the whole way. At one point, three-digit eBook numbers appear, so you must begin to Find:
`(\d\d\d)\n` and then `(\d\d\d)C\n`
Then later:
`(\d\d)\n` and then `(\d\d)C\n`
And finally:
`(\d)\n`
The only glitch was in one book, eBook number 2, where the date "1798" was snagged by the three-digit search. At this point, I now have eBooks of the entire Gutenberg catalog, not counting new books presently being added.
### Troubleshooting and a bonus
I strongly advise you to test your XHTML files by trying to load them in a browser. Your browser should tell you if your XHTML is not properly formatted, in which case the file won't show in your browser window. Two particular problems I found, having initially ignored my own advice, resulted from improper characters. I copied the link specification tags from my first article. If you do that, you will find that the typewriter quotes are substituted with typographic (curly) quotes. Fixing this was just a matter of doing a Find/Replace.
Second, there are a number of ampersands (&) in the listings, and these need to be replaced by `&amp;` for the browser to make sense of them. Some recent listings also use the Unicode non-breaking space, and these should be replaced with a regular space. (Hint: Copy one, put it in Find, put a regular space in Replace, then Replace All.)
Finally, there may be some accented characters lurking, and the browser feedback should help locate them. Example: Ibáñez needed to be `Ib&aacute;&ntilde;ez`.
And now the bonus: Once your XHTML is well-formed, you can use your browser to comb Project Gutenberg just like on your e-reader. I also found that [Calibre][5] would not make the links properly until the quotes were fixed.
Finally, here is a template for a separate web page you can place on your system to easily link to each year's listing. Make sure you fix the locations for your personal directory structure and filenames. Also, make sure all these quotes are typewriter quotes, not curly quotes.
```
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
"http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>GutIndexes</title>
</head>
<body leftmargin="100">
<h2>GutIndexes</h2>
<font size="5">
<table cellpadding="20"><tr>
<td><a href="/home/gregp/Documents/GUTINDEX.1996.xhtml">1996</a></td>
<td><a href="/home/gregp/Documents/GUTINDEX.1997.xhtml">1997</a></td>
<td><a href="/home/gregp/Documents/GUTINDEX.1998.xhtml">1998</a></td>
<td><a href="/home/gregp/Documents/GUTINDEX.1999.xhtml">1999</a></td>
<td><a href="/home/gregp/Documents/GUTINDEX.2000.xhtml">2000</a></td></tr>
<tr><td><a href="/home/gregp/Documents/GUTINDEX.2001.xhtml">2001</a></td>
<td><a href="/home/gregp/Documents/GUTINDEX.2002.xhtml">2002</a></td>
<td><a href="/home/gregp/Documents/GUTINDEX.2003.xhtml">2003</a></td>
<td><a href="/home/gregp/Documents/GUTINDEX.2004.xhtml">2004</a></td>
<td><a href="/home/gregp/Documents/GUTINDEX.2005.xhtml">2005</a></td></tr>
<tr><td><a href="/home/gregp/Documents/GUTINDEX.2006.xhtml">2006</a></td>
<td><a href="/home/gregp/Documents/GUTINDEX.2007.xhtml">2007</a></td>
<td><a href="/home/gregp/Documents/GUTINDEX.2008.xhtml">2008</a></td>
<td><a href="/home/gregp/Documents/GUTINDEX.2009.xhtml">2009</a></td>
<td><a href="/home/gregp/Documents/GUTINDEX.2010.xhtml">2010</a></td></tr>
<tr><td><a href="/home/gregp/Documents/GUTINDEX.2011.xhtml">2011</a></td>
<td><a href="/home/gregp/Documents/GUTINDEX.2012.xhtml">2012</a></td>
<td><a href="/home/gregp/Documents/GUTINDEX.2013.xhtml">2013</a></td>
<td><a href="/home/gregp/Documents/GUTINDEX.2014.xhtml">2014</a></td>
<td><a href="/home/gregp/Documents/GUTINDEX.2015.xhtml">2015</a></td></tr>
<tr><td><a href="/home/gregp/Documents/GUTINDEX.2016.xhtml">2016</a></td>
<td><a href="/home/gregp/Documents/GUTINDEX.2017.xhtml">2017</a></td>
<td><a href="/home/gregp/Documents/GUTINDEX.2018.xhtml">2018</a></td>
</tr>
</table>
</font>
</body>
</html>
```
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/more-books-your-ereader
作者:[Greg Pittman][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/greg-p
[1]:https://opensource.com/article/18/4/browse-project-gutenberg-library
[2]:https://www.gutenberg.org/
[3]:https://www.kde.org/applications/utilities/kwrite/
[4]:https://sigil-ebook.com/
[5]:https://calibre-ebook.com/

View File

@ -0,0 +1,106 @@
How to install software from the Linux command line
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/suitcase_container_bag.png?itok=q40lKCBY)
If you use Linux for any amount of time, you'll soon learn there are many different ways to do the same thing. This includes installing applications on a Linux machine via the command line. I have been a Linux user for roughly 25 years, and time and time again I find myself going back to the command line to install my apps.
The most common method of installing apps from the command line is through software repositories (a place where software is stored) using what's called a package manager. All Linux apps are distributed as packages, which are nothing more than files associated with a package management system. Every Linux distribution comes with a package management system, but they are not all the same.
### What is a package management system?
A package management system comprises sets of tools and file formats that are used together to install, update, and uninstall Linux apps. The two most common package management systems are from Red Hat and Debian. Red Hat, CentOS, and Fedora all use the `rpm` system (.rpm files), while Debian, Ubuntu, and Mint use `dpkg` (.deb files). Gentoo Linux uses a system called Portage, and Arch Linux uses `pacman`, whose packages are plain tarballs (.tar files). The primary difference between these systems is how they install and maintain apps.
You might be wondering what's inside an `.rpm`, `.deb`, or `.tar` file. You might be surprised to learn that all are nothing more than plain old archive files (like `.zip`) that contain an application's code, instructions on how to install it, dependencies (what other apps it may depend on), and where its configuration files should be placed. The software that reads and executes all of those instructions is called a package manager.
### Debian, Ubuntu, Mint, and others
Debian, Ubuntu, Mint, and other Debian-based distributions all use `.deb` files and the `dpkg` package management system. There are two ways to install apps via this system. You can use the `apt` application to install from a repository, or you can use the `dpkg` app to install apps from `.deb` files. Let's take a look at how to do both.
Installing apps using `apt` is as easy as:
```
$ sudo apt install app_name
```
Uninstalling an app via `apt` is also super easy:
```
$ sudo apt remove app_name
```
To upgrade your installed apps, you'll first need to update the app repository:
```
$ sudo apt update
```
Once finished, you can update any apps that need updating with the following:
```
$ sudo apt upgrade
```
What if you want to update only a single app? No problem.
```
$ sudo apt install --only-upgrade app_name
```
Finally, let's say the app you want to install is not available in the Debian repository, but it is available as a `.deb` download.
```
$ sudo dpkg -i app_name.deb
```
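If `dpkg` reports unmet dependencies, `apt` can usually pull them in afterward:
```
$ sudo apt install -f
```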
### Red Hat, CentOS, and Fedora
Red Hat, by default, uses several package management systems. These systems, while using their own terminology, are still very similar to each other and to the one used in Debian. For example, we can use either the `yum` or `dnf` manager to install apps.
```
$ sudo yum install app_name
$ sudo dnf install app_name
```
Apps in the `.rpm` format can also be installed with the `rpm` command.
```
$ sudo rpm -i app_name.rpm
```
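Unlike `rpm -i`, pointing `dnf` at a local file also resolves the package's dependencies from the configured repositories:
```
$ sudo dnf install ./app_name.rpm
```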
Removing unwanted applications is just as easy.
```
$ sudo yum remove app_name
$ sudo dnf remove app_name
```
Updating apps is similarly easy.
```
$ sudo yum update
$ sudo dnf upgrade --refresh
```
As you can see, installing, uninstalling, and updating Linux apps from the command line isn't hard at all. In fact, once you get used to it, you'll find it's faster than using desktop GUI-based management tools!
For more information on installing apps from the command line, please visit the Debian [Apt wiki][1], the [Yum cheat sheet][2], and the [DNF wiki][3].
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/how-install-software-linux-command-line
作者:[Patrick H.Mullins][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/pmullins
[1]:https://wiki.debian.org/Apt
[2]:https://access.redhat.com/articles/yum-cheat-sheet
[3]:https://fedoraproject.org/wiki/DNF?rd=Dnf

View File

@ -0,0 +1,72 @@
Steam Makes it Easier to Play Windows Games on Linux
======
![Steam Wallpaper][1]
It's no secret that the [Linux gaming][2] library offers only a fraction of what the Windows library offers. In fact, many people wouldn't even consider [switching to Linux][3] simply because most of the games they want to play aren't available on the platform.
At the time of writing this article, Linux has just over 5,000 games available on Steam, compared to the library's almost 27,000 total games. Now, 5,000 games may be a lot, but it isn't 27,000 games, that's for sure.
And though almost every new indie game seems to launch with a Linux release, we are still left without a way to play many [Triple-A][4] titles. For me, though there are many titles I would love the opportunity to play, this has never been a make-or-break problem, since I primarily play indie and [retro games][5] anyway and almost all of my favorite titles are available on Linux.
### Meet Proton: a WINE Fork by Steam
Now, that problem is a thing of the past, since this week Valve [announced][6] a new update to Steam Play that adds a forked version of Wine, called Proton, to the Linux and Mac Steam clients. Yes, the tool is open source, and Valve has made the source code available on [GitHub][7]. The feature is still in beta, though, so you must opt into the beta Steam client in order to take advantage of this functionality.
#### With Proton, more Windows games are available for Linux on Steam
What does that actually mean for us Linux users? In short, it means that both Linux and Mac computers can now play all 27,000 of those games without needing to configure something like [PlayOnLinux][8] or [Lutris][9] to do so! Which, let me tell you, can be quite the headache at times.
The more complicated answer is that it sounds too good to be true for a reason. Though, in theory, you can play literally every Windows game on Linux this way, only a short list of games is officially supported at launch, including DOOM, Final Fantasy VI, Tekken 7, Star Wars: Battlefront 2, and several more.
#### You can play all Windows games on Linux (in theory)
Though the list has only about 30 games thus far, you can force Steam to install and play any game through Proton by marking the “Enable Steam Play for all titles” checkbox. But don't get your hopes too high. Valve does not guarantee the stability and performance you may be hoping for, so keep your expectations reasonable.
![Steam Play][10]
#### Experiencing Proton: Not as bad as I expected
For example, I installed a few moderately taxing games to put Proton through its paces. One of them was The Elder Scrolls IV: Oblivion, and in the two hours I played the game, it only crashed once, and that was almost immediately after an autosave point during the tutorial.
I have an Nvidia GTX 1050 Ti, so I was able to play the game at 1080p with high settings, and I didn't see a single problem outside of that one crash. The only negative feedback I really have is that the framerate was not nearly as high as it would have been in a native game. I got above 60 frames per second 90% of the time, but I admit it could have been better.
Every other game that I have installed and launched has also worked flawlessly, granted I haven't played any of them for an extended amount of time yet. Some games I installed include The Forest, Dead Rising 4, H1Z1, and Assassin's Creed II (can you tell I like horror games?).
#### Why is Steam (still) betting on Linux?
Now, this is all fine and dandy, but why did this happen? Why would Valve spend the time, money, and resources needed to implement something like this? I like to think they did so because they value the Linux community, but if I am honest, I don't believe we had anything to do with it.
If I had to put money on it, I would say Valve has developed Proton because they haven't given up on [Steam machines][11] yet. And since [Steam OS][12] is running on Linux, it is in their best interest financially to invest in something like this. The more games available on Steam OS, the more people might be willing to buy a Steam Machine.
Maybe I am wrong, but I bet this means we will see a new wave of Steam machines coming in the not-so-distant future. Maybe we will see them in one year, or perhaps we won't see them for another five; who knows!
Either way, all I know is that I am beyond excited to finally play the games from my Steam library that I have slowly accumulated over the years from all of the Humble Bundles, promo codes, and random times I bought a game on sale just in case I wanted to try to get it running in Lutris.
#### Excited for more gaming on Linux?
What do you think? Are you excited about this, or are you afraid fewer developers will create native Linux games because there is almost no need to now? Does Valve love the Linux community, or do they love money? Let us know what you think in the comment section below, and check back in for more FOSS content like this.
--------------------------------------------------------------------------------
via: https://itsfoss.com/steam-play-proton/
作者:[Phillip Prado][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://itsfoss.com/author/phillip/
[1]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/steam-wallpaper.jpeg
[2]:https://itsfoss.com/linux-gaming-guide/
[3]:https://itsfoss.com/reasons-switch-linux-windows-xp/
[4]:https://itsfoss.com/triplea-game-review/
[5]:https://itsfoss.com/play-retro-games-linux/
[6]:https://steamcommunity.com/games/221410
[7]:https://github.com/ValveSoftware/Proton/
[8]:https://www.playonlinux.com/en/
[9]:https://lutris.net/
[10]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/SteamProton.jpg
[11]:https://store.steampowered.com/sale/steam_machines
[12]:https://itsfoss.com/valve-annouces-linux-based-gaming-operating-system-steamos/

View File

@ -0,0 +1,139 @@
What Stable Kernel Should I Use?
======
I get a lot of questions from people asking me what stable kernel they should be using for their product/device/laptop/server/etc. Especially given the now-extended length of time that some kernels are being supported by me and others, this isn't always an obvious thing to determine. So this post is an attempt to write down my opinions on the matter. Of course, you are free to use whatever kernel version you want, but here's what I recommend.
As always, the opinions written here are my own; I speak for no one but myself.
### What kernel to pick
Here's my short list of what kernel you should use, ranked from best to worst options. I'll go into the details of all of these below, but if you just want the summary, here it is:
Hierarchy of what kernel to use, from best solution to worst:
* Supported kernel from your favorite Linux distribution
* Latest stable release
* Latest LTS release
* Older LTS release that is still being maintained
What kernel to never use:
* Unmaintained kernel release
To put numbers to the above: as of August 24, 2018, the front page of kernel.org looks like this:
![][1]
So, based on the above list that would mean that:
* 4.18.5 is the latest stable release
* 4.14.67 is the latest LTS release
* 4.9.124, 4.4.152, and 3.16.57 are the older LTS releases that are still being maintained
* 4.17.19 and 3.18.119 are “End of Life” kernels that have had a release in the past 60 days, and as such stick around on the kernel.org site for those who still might want to use them.
Quite easy, right?
Ok, now for some justification for all of this:
### Distribution kernels
The best solution for almost all Linux users is to just use the kernel from your favorite Linux distribution. Personally, I prefer the community-based Linux distributions that constantly roll along with the latest updated kernel, which is supported by that developer community. Distributions in this category are Fedora, openSUSE, Arch, Gentoo, CoreOS, and others.
All of these distributions use the latest stable upstream kernel release and make sure that any needed bugfixes are applied on a regular basis. That is one of the most solid and best kernels you can use when it comes to having the latest fixes ([remember, all fixes are security fixes][2]).
There are some community distributions that take a bit longer to move to a new kernel release, but eventually get there and support the kernel they currently have quite well. Those are also great to use, and examples of these are Debian and Ubuntu.
Just because I did not list your favorite distro here does not mean its kernel is not good. Look on the web site for the distro and make sure that the kernel package is constantly updated with the latest security patches, and all should be well.
Lots of people seem to like the old, “traditional” model of a distribution and use RHEL, SLES, CentOS, or the “LTS” Ubuntu release. Those distros pick a specific kernel version and then camp out on it for years, if not decades. They do loads of work backporting the latest bugfixes and sometimes new features to these kernels, all in a quixotic quest to keep the version number from ever changing, despite having many thousands of changes on top of that older kernel version. This work is truly a thankless job, and the developers assigned to these tasks do some wonderful work in order to achieve these goals. If you never want to see your kernel version number change, then use these distributions. They usually cost some money to use, but the support you get from these companies is worth it when something goes wrong.
So again, the best kernel you can use is one that someone else supports, and you can turn to for help. Use that support, usually you are already paying for it (for the enterprise distributions), and those companies know what they are doing.
But if you do not want to trust someone else to manage your kernel for you, or you have hardware that a distribution does not support, then you want to run the latest stable release:
### Latest stable release
This kernel is the latest one from the Linux kernel developer community that they declare as “stable”. About every three months, the community releases a new stable kernel that contains all of the newest hardware support, the latest performance improvements, as well as the latest bugfixes for all parts of the kernel. Over the next three months, bugfixes that go into the next kernel release are backported into this stable release, so that any users of this kernel are sure to get them as soon as possible.
This is usually the kernel that most community distributions use as well, so you can be sure it is tested and has a large audience of users. Also, the kernel community (all 4000+ developers) are willing to help support users of this release, as it is the latest one that they made.
After 3 months, a new kernel is released and you should move to it to ensure that you stay up to date, as support for this kernel is usually dropped a few weeks after the newer release happens.
If you have new hardware that was purchased after the last LTS release came out, you are almost guaranteed to have to run this kernel in order for it to be supported. So for desktops or new servers, this is usually the recommended kernel to be running.
### Latest LTS release
If your hardware relies on a vendor's out-of-tree patch in order to work properly (like almost all embedded devices these days), then the next best kernel to use is the latest LTS release. That release gets all of the latest kernel fixes that go into the stable releases where applicable, and lots of users test and use it.
Note, no new features and almost no new hardware support is ever added to these kernels, so if you need to use a new device, it is better to use the latest stable release, not this release.
This release is also common for users who do not like to worry about “major” upgrades happening every three months. So they stick to this release and upgrade every year instead, which is a fine practice to follow.
The downside of using this release is that you do not get the performance improvements that happen in newer kernels, except when you update to the next LTS kernel, potentially a year in the future. That could be significant for some workloads, so be very aware of this.
Also, if you have problems with this kernel release, the first thing that any developer you report the issue to will ask is, “Does the latest stable release have this problem?” So be aware that support might not be as easy to get as with the latest stable releases.
Now, if you are stuck with a large patchset and cannot update to a new LTS kernel once a year, perhaps you want the older LTS releases:
### Older LTS release
These releases have traditionally been supported by the community for two years, sometimes longer when a major distribution relies on them (like Debian or SLES). However, in the past year, thanks to a lot of support and investment in testing and infrastructure from Google, Linaro, Linaro member companies, [kernelci.org][3], and others, these kernels are starting to be supported for much longer.
Here are the latest LTS releases and how long they will be supported, as shown at [kernel.org/category/releases.html][4] on August 24, 2018:
![][5]
The reason that Google and other companies want to have these kernels live longer is due to the crazy (some will say broken) development model of almost all SoC chips these days. Those devices start their development lifecycle a few years before the chip is released; however, that code is never merged upstream, resulting in a brand-new chip being released based on a two-year-old kernel. These SoC trees usually have over 2 million lines added to them, making them something that I have started calling “Linux-like” kernels.
If the LTS releases stop happening after 2 years, then support from the community instantly stops, and no one ends up doing bugfixes for them. This results in millions of very insecure devices floating around in the world, not something that is good for any ecosystem.
Because of this dependency, these companies now require new devices to constantly update to the latest LTS releases as they happen for their specific release branch (i.e., every 4.9.y release that happens). One example of this is the Android kernel requirements for new devices: the “O” and now “P” releases specified the minimum kernel version allowed, and Android security releases might start to require those “.y” releases to happen more frequently on devices.
I will note that some manufacturers are already doing this today. Sony is one great example, updating to the latest 4.4.y release on many of their new phones for their quarterly security release. Another good example is the small company Essential, which has been tracking the 4.4.y releases faster than anyone else I know of.
There is one huge caveat when using a kernel like this. The number of security fixes that get backported is not as great as with the latest LTS release, because devices that use these older LTS kernels traditionally follow a much more restricted usage model. These kernels are not to be used in any type of “general computing” model where you have untrusted users or virtual machines, as the ability to backport some of the recent Spectre-type fixes to older releases is greatly reduced, if present at all in some branches.
So again, only use older LTS releases in a device that you fully control, or lock down with a very strong security model (like Android enforces using SELinux and application isolation). Never use these releases on a server with untrusted users, programs, or virtual machines.
Also, support from the community for these older LTS releases is greatly reduced even compared to the normal LTS releases, if available at all. If you use these kernels, you really are on your own, and need to be able to support the kernel yourself, or rely on your SoC vendor to provide that support for you (note that almost none of them do provide that support, so beware…)
### Unmaintained kernel release
Surprisingly, many companies do just grab a random kernel release, slap it into their product and proceed to ship it in hundreds of thousands of units without a second thought. One crazy example of this would be the Lego Mindstorm systems that shipped a random -rc release of a kernel in their device for some unknown reason. A -rc release is a development release that not even the Linux kernel developers feel is ready for everyone to use just yet, let alone millions of users.
You are of course free to do this if you want, but note that you really are on your own here. The community cannot support you, as no one is watching all kernel versions for specific issues, so you will have to rely on in-house support for everything that could go wrong. For some companies and systems that could be just fine, but be aware of the “hidden” cost this might incur if you do not plan for it up front.
### Summary
So, here's a short list of different types of devices and what I would recommend for their kernels:
* Laptop / Desktop: Latest stable release
* Server: Latest stable release or latest LTS release
* Embedded device: Latest LTS release or older LTS release if the security model used is very strong and tight.
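If you are not sure where your current kernel falls in this list, a quick check from the shell can help. This is a minimal sketch; the finger banner is a convenience endpoint kernel.org provides that lists the current releases:
```
# Show the kernel version you are currently running
uname -r

# List the current mainline, stable, and longterm releases published by kernel.org
curl -s https://www.kernel.org/finger_banner
```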
And as for me, what do I run on my machines? My laptops run the latest development kernel (i.e. Linus's development tree) plus whatever kernel changes I am currently working on, and my servers run the latest stable release. So despite being in charge of the LTS releases, I don't run them myself, except in testing systems. I rely on the development and latest stable releases to ensure that my machines are running the fastest and most secure releases that we know how to create at this point in time.
--------------------------------------------------------------------------------
via: http://kroah.com/log/blog/2018/08/24/what-stable-kernel-should-i-use/
作者:[Greg Kroah-Hartman][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://kroah.com
[1]:https://s3.amazonaws.com/kroah.com/images/kernel.org_2018_08_24.png
[2]:http://kroah.com/log/blog/2018/02/05/linux-kernel-release-model/
[3]:https://kernelci.org/
[4]:https://www.kernel.org/category/releases.html
[5]:https://s3.amazonaws.com/kroah.com/images/kernel.org_releases_2018_08_24.png
@ -0,0 +1,116 @@
[Solved] “sub process usr bin dpkg returned an error code 1” Error in Ubuntu
======
If you are encountering “sub process usr bin dpkg returned an error code 1” while installing software on Ubuntu Linux, here is how you can fix it.
One of the common issues in Ubuntu and other Debian-based distributions is broken packages. You try to update the system or install a new package, and you encounter an error like “Sub-process /usr/bin/dpkg returned an error code”.
That's what happened to me the other day. I was trying to install a radio application in Ubuntu when it threw me this error:
```
Unpacking python-gst-1.0 (1.6.2-1build1) ...
Selecting previously unselected package radiotray.
Preparing to unpack .../radiotray_0.7.3-5ubuntu1_all.deb ...
Unpacking radiotray (0.7.3-5ubuntu1) ...
Processing triggers for man-db (2.7.5-1) ...
Processing triggers for desktop-file-utils (0.22-1ubuntu5.2) ...
Processing triggers for bamfdaemon (0.5.3~bzr0+16.04.20180209-0ubuntu1) ...
Rebuilding /usr/share/applications/bamf-2.index...
Processing triggers for gnome-menus (3.13.3-6ubuntu3.1) ...
Processing triggers for mime-support (3.59ubuntu1) ...
Setting up polar-bookshelf (1.0.0-beta56) ...
ln: failed to create symbolic link '/usr/local/bin/polar-bookshelf': No such file or directory
dpkg: error processing package polar-bookshelf (--configure):
subprocess installed post-installation script returned error exit status 1
Setting up python-appindicator (12.10.1+16.04.20170215-0ubuntu1) ...
Setting up python-gst-1.0 (1.6.2-1build1) ...
Setting up radiotray (0.7.3-5ubuntu1) ...
Errors were encountered while processing:
polar-bookshelf
E: Sub-process /usr/bin/dpkg returned an error code (1)
```
The last three lines are of the utmost importance here.
```
Errors were encountered while processing:
polar-bookshelf
E: Sub-process /usr/bin/dpkg returned an error code (1)
```
It tells me that the package polar-bookshelf is causing the issue. Knowing which package is at fault is crucial to how you fix this error.
### Fixing Sub-process /usr/bin/dpkg returned an error code (1)
![Fix update errors in Ubuntu Linux][1]
Let's try to fix this broken package error. I'll show several methods that you can try one by one. The initial ones are easy to use and practically no-brainers.
After trying each of the methods discussed here, run sudo apt update and then try to install a new package or upgrade again.
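For example, after each method you can run a quick verification cycle like this before moving on (a minimal sketch):
```
sudo apt update          # refresh the package lists
sudo apt install -f      # try to finish any half-configured installs
sudo apt upgrade         # confirm the system can upgrade cleanly
```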
#### Method 1: Reconfigure Package Database
The first method you can try is to reconfigure the package database. The database may have become corrupted while a package was being installed, and reconfiguring it often fixes the problem.
```
sudo dpkg --configure -a
```
#### Method 2: Use force install
If a package installation was interrupted previously, you may try to do a force install.
```
sudo apt-get install -f
```
#### Method 3: Try removing the troublesome package
If losing the package is not an issue for you, you may try to remove it manually (substitute the actual package name, polar-bookshelf in my case). Please don't do this for Linux kernels (packages starting with linux-).
```
sudo apt remove <package_name>
```
#### Method 4: Remove post info files of the troublesome package
This should be your last resort. You can try removing the files associated with the package in question from /var/lib/dpkg/info.
**You need to know a little about basic Linux commands to figure out what's happening and how to apply the same approach to your problem.**
In my case, I had an issue with polar-bookshelf. So I looked for the files associated with it:
```
ls -l /var/lib/dpkg/info | grep -i polar-bookshelf
-rw-r--r-- 1 root root 2324811 Aug 14 19:29 polar-bookshelf.list
-rw-r--r-- 1 root root 2822824 Aug 10 04:28 polar-bookshelf.md5sums
-rwxr-xr-x 1 root root 113 Aug 10 04:28 polar-bookshelf.postinst
-rwxr-xr-x 1 root root 84 Aug 10 04:28 polar-bookshelf.postrm
```
Now all I needed to do was to remove these files:
```
sudo mv /var/lib/dpkg/info/polar-bookshelf.* /tmp
```
Run sudo apt update, and then you should be able to install software as usual.
#### Which method worked for you (if it worked)?
I hope this quick article helps you in fixing the E: Sub-process /usr/bin/dpkg returned an error code (1) error.
If it did work for you, which method was it? Did you manage to fix this error with some other method? If yes, please share that to help others with this issue.
--------------------------------------------------------------------------------
via: https://itsfoss.com/dpkg-returned-an-error-code-1/
作者:[Abhishek Prakash][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[1]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/fix-common-update-errors-ubuntu.jpeg
@ -0,0 +1,417 @@
How to capture and analyze packets with tcpdump command on Linux
======
tcpdump is a well-known command-line **packet analyzer** tool. Using the tcpdump command, we can capture live TCP/IP packets, and these packets can also be saved to a file for later analysis. tcpdump comes in very handy when troubleshooting at the network level.
![](https://www.linuxtechi.com/wp-content/uploads/2018/08/tcpdump-command-examples-linux.jpg)
tcpdump is available in most Linux distributions; on Debian-based Linux, it can be installed using the apt command:
```
# apt install tcpdump -y
```
On RPM-based Linux distributions, tcpdump can be installed using the yum command below:
```
# yum install tcpdump -y
```
When we run the tcpdump command without any options, it captures packets on all interfaces. To stop or cancel the tcpdump command, type **Ctrl+C**. In this tutorial we will discuss how to capture and analyze packets using different practical examples.
### Example:1) Capturing packets from a specific interface
When we run the tcpdump command without any options, it captures packets on all interfaces, so to capture packets from a specific interface, use the option **-i** followed by the interface name.
Syntax :
```
# tcpdump -i {interface-name}
```
Let's assume I want to capture packets from the interface “enp0s3”.
The output would be something like below:
```
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
06:43:22.905890 IP compute-0-1.example.com.ssh > 169.144.0.1.39374: Flags [P.], seq 21952160:21952540, ack 13537, win 291, options [nop,nop,TS val 26164373 ecr 6580205], length 380
06:43:22.906045 IP compute-0-1.example.com.ssh > 169.144.0.1.39374: Flags [P.], seq 21952540:21952760, ack 13537, win 291, options [nop,nop,TS val 26164373 ecr 6580205], length 220
06:43:22.906150 IP compute-0-1.example.com.ssh > 169.144.0.1.39374: Flags [P.], seq 21952760:21952980, ack 13537, win 291, options [nop,nop,TS val 26164373 ecr 6580205], length 220
06:43:22.906291 IP 169.144.0.1.39374 > compute-0-1.example.com.ssh: Flags [.], ack 21952980, win 13094, options [nop,nop,TS val 6580205 ecr 26164373], length 0
06:43:22.906303 IP 169.144.0.1.39374 > compute-0-1.example.com.ssh: Flags [P.], seq 13537:13609, ack 21952980, win 13094, options [nop,nop,TS val 6580205 ecr 26164373], length 72
06:43:22.906322 IP compute-0-1.example.com.ssh > 169.144.0.1.39374: Flags [P.], seq 21952980:21953200, ack 13537, win 291, options [nop,nop,TS val 26164373 ecr 6580205], length 220
^C
109930 packets captured
110065 packets received by filter
133 packets dropped by kernel
[root@compute-0-1 ~]#
```
### Example:2) Capturing a specific number of packets from a specific interface
Let's assume we want to capture 12 packets from a specific interface like “enp0s3”. This can be easily achieved using the options **-c {number} -i {interface-name}**:
```
[root@compute-0-1 ~]# tcpdump -c 12 -i enp0s3
```
The above command will generate output something like below:
[![N-Number-Packsets-tcpdump-interface][1]][2]
### Example:3) Display all the available Interfaces for tcpdump
Use the **-D** option to display all the interfaces available to the tcpdump command:
```
[root@compute-0-1 ~]# tcpdump -D
1.enp0s3
2.enp0s8
3.ovs-system
4.br-int
5.br-tun
6.nflog (Linux netfilter log (NFLOG) interface)
7.nfqueue (Linux netfilter queue (NFQUEUE) interface)
8.usbmon1 (USB bus number 1)
9.usbmon2 (USB bus number 2)
10.qbra692e993-28
11.qvoa692e993-28
12.qvba692e993-28
13.tapa692e993-28
14.vxlan_sys_4789
15.any (Pseudo-device that captures on all interfaces)
16.lo [Loopback]
[root@compute-0-1 ~]#
```
I am running the tcpdump command on one of my OpenStack compute nodes; that's why the output shows a number of interfaces, including tap interfaces, bridges, and a vxlan interface.
### Example:4) Capturing packets with human readable timestamp (-tttt option)
By default, there is no proper human-readable timestamp in tcpdump output. If you want to associate a human-readable timestamp with each captured packet, use the **-tttt** option, as shown below:
```
[root@compute-0-1 ~]# tcpdump -c 8 -tttt -i enp0s3
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
2018-08-25 23:23:36.954883 IP compute-0-1.example.com.ssh > 169.144.0.1.39406: Flags [P.], seq 1449206247:1449206435, ack 3062020950, win 291, options [nop,nop,TS val 86178422 ecr 21583714], length 188
2018-08-25 23:23:36.955046 IP 169.144.0.1.39406 > compute-0-1.example.com.ssh: Flags [.], ack 188, win 13585, options [nop,nop,TS val 21583717 ecr 86178422], length 0
2018-08-25 23:23:37.140097 IP controller0.example.com.amqp > compute-0-1.example.com.57818: Flags [P.], seq 814607956:814607964, ack 2387094506, win 252, options [nop,nop,TS val 86172228 ecr 86176695], length 8
2018-08-25 23:23:37.140175 IP compute-0-1.example.com.57818 > controller0.example.com.amqp: Flags [.], ack 8, win 237, options [nop,nop,TS val 86178607 ecr 86172228], length 0
2018-08-25 23:23:37.355238 IP compute-0-1.example.com.57836 > controller0.example.com.amqp: Flags [P.], seq 1080415080:1080417400, ack 1690909362, win 237, options [nop,nop,TS val 86178822 ecr 86163054], length 2320
2018-08-25 23:23:37.357119 IP controller0.example.com.amqp > compute-0-1.example.com.57836: Flags [.], ack 2320, win 1432, options [nop,nop,TS val 86172448 ecr 86178822], length 0
2018-08-25 23:23:37.357545 IP controller0.example.com.amqp > compute-0-1.example.com.57836: Flags [P.], seq 1:22, ack 2320, win 1432, options [nop,nop,TS val 86172449 ecr 86178822], length 21
2018-08-25 23:23:37.357572 IP compute-0-1.example.com.57836 > controller0.example.com.amqp: Flags [.], ack 22, win 237, options [nop,nop,TS val 86178825 ecr 86172449], length 0
8 packets captured
134 packets received by filter
69 packets dropped by kernel
[root@compute-0-1 ~]#
```
### Example:5) Capturing and saving packets to a file (-w option)
Use the **-w** option in the tcpdump command to save captured TCP/IP packets to a file, so that we can analyze those packets later.
Syntax :
```
# tcpdump -w file_name.pcap -i {interface-name}
```
Note: The extension of the file must be **.pcap**
Let's assume I want to save the packets captured on interface **enp0s3** to a file named **enp0s3-26082018.pcap**:
```
[root@compute-0-1 ~]# tcpdump -w enp0s3-26082018.pcap -i enp0s3
```
The above command will generate output something like below:
```
[root@compute-0-1 ~]# tcpdump -w enp0s3-26082018.pcap -i enp0s3
tcpdump: listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
^C841 packets captured
845 packets received by filter
0 packets dropped by kernel
[root@compute-0-1 ~]# ls
anaconda-ks.cfg enp0s3-26082018.pcap
[root@compute-0-1 ~]#
```
Capturing and saving packets whose size is **greater** than **N bytes**:
```
[root@compute-0-1 ~]# tcpdump -w enp0s3-26082018-2.pcap greater 1024
```
Capturing and saving packets whose size is **less** than **N bytes**:
```
[root@compute-0-1 ~]# tcpdump -w enp0s3-26082018-3.pcap less 1024
```
### Example:6) Reading packets from the saved file ( -r option)
In the above example we saved the captured packets to a file. We can read those packets from the file using the option **-r**, as shown below:
```
[root@compute-0-1 ~]# tcpdump -r enp0s3-26082018.pcap
```
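Filter expressions can also be applied while reading from a saved file. For example, to show only the SSH traffic from the capture saved earlier (assuming the same file name):
```
[root@compute-0-1 ~]# tcpdump -r enp0s3-26082018.pcap port 22
```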
Reading the packets with human-readable timestamps:
```
[root@compute-0-1 ~]# tcpdump -tttt -r enp0s3-26082018.pcap
reading from file enp0s3-26082018.pcap, link-type EN10MB (Ethernet)
2018-08-25 22:03:17.249648 IP compute-0-1.example.com.ssh > 169.144.0.1.39406: Flags [P.], seq 1426167803:1426167927, ack 3061962134, win 291, options
[nop,nop,TS val 81358717 ecr 20378789], length 124
2018-08-25 22:03:17.249840 IP 169.144.0.1.39406 > compute-0-1.example.com.ssh: Flags [.], ack 124, win 564, options [nop,nop,TS val 20378791 ecr 81358
717], length 0
2018-08-25 22:03:17.454559 IP controller0.example.com.amqp > compute-0-1.example.com.57836: Flags [.], ack 1079416895, win 1432, options [nop,nop,TS v
al 81352560 ecr 81353913], length 0
2018-08-25 22:03:17.454642 IP compute-0-1.example.com.57836 > controller0.example.com.amqp: Flags [.], ack 1, win 237, options [nop,nop,TS val 8135892
2 ecr 81317504], length 0
2018-08-25 22:03:17.646945 IP compute-0-1.example.com.57788 > controller0.example.com.amqp: Flags [.], seq 106760587:106762035, ack 688390730, win 237
, options [nop,nop,TS val 81359114 ecr 81350901], length 1448
2018-08-25 22:03:17.647043 IP compute-0-1.example.com.57788 > controller0.example.com.amqp: Flags [P.], seq 1448:1956, ack 1, win 237, options [nop,no
p,TS val 81359114 ecr 81350901], length 508
2018-08-25 22:03:17.647502 IP controller0.example.com.amqp > compute-0-1.example.com.57788: Flags [.], ack 1956, win 1432, options [nop,nop,TS val 813
52753 ecr 81359114], length 0
.........................................................................................................................
```
### Example:7) Capturing only IP address packets on a specific Interface (-n option)
The **-n** option prevents tcpdump from resolving addresses to hostnames, so the capture on the given interface is displayed with plain IP addresses. An example is shown below:
```
[root@compute-0-1 ~]# tcpdump -n -i enp0s3
```
The output of the above command would be something like below:
```
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
22:22:28.537904 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 1433301395:1433301583, ack 3061976250, win 291, options [nop,nop,TS val 82510005 ecr 20666610], length 188
22:22:28.538173 IP 169.144.0.1.39406 > 169.144.0.20.ssh: Flags [.], ack 188, win 9086, options [nop,nop,TS val 20666613 ecr 82510005], length 0
22:22:28.538573 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 188:552, ack 1, win 291, options [nop,nop,TS val 82510006 ecr 20666613], length 364
22:22:28.538736 IP 169.144.0.1.39406 > 169.144.0.20.ssh: Flags [.], ack 552, win 9086, options [nop,nop,TS val 20666613 ecr 82510006], length 0
22:22:28.538874 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 552:892, ack 1, win 291, options [nop,nop,TS val 82510006 ecr 20666613], length 340
22:22:28.539042 IP 169.144.0.1.39406 > 169.144.0.20.ssh: Flags [.], ack 892, win 9086, options [nop,nop,TS val 20666613 ecr 82510006], length 0
22:22:28.539178 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 892:1232, ack 1, win 291, options [nop,nop,TS val 82510006 ecr 20666613], length 340
22:22:28.539282 IP 169.144.0.1.39406 > 169.144.0.20.ssh: Flags [.], ack 1232, win 9086, options [nop,nop,TS val 20666614 ecr 82510006], length 0
22:22:28.539479 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 1232:1572, ack 1, win 291, options [nop,nop,TS val 82510006 ecr 20666614], length 340
22:22:28.539595 IP 169.144.0.1.39406 > 169.144.0.20.ssh: Flags [.], ack 1572, win 9086, options [nop,nop,TS val 20666614 ecr 82510006], length 0
22:22:28.539760 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 1572:1912, ack 1, win 291, options [nop,nop,TS val 82510007 ecr 20666614], length 340
.........................................................................
```
You can also capture a fixed number of such packets by combining the -c and -n options in the tcpdump command:
```
[root@compute-0-1 ~]# tcpdump -c 25 -n -i enp0s3
```
### Example:8) Capturing only TCP packets on a specific interface
With the tcpdump command, we can capture only TCP packets using the **tcp** filter:
```
[root@compute-0-1 ~]# tcpdump -i enp0s3 tcp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
22:36:54.521053 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 1433336467:1433336655, ack 3061986618, win 291, options [nop,nop,TS val 83375988 ecr 20883106], length 188
22:36:54.521474 IP 169.144.0.1.39406 > 169.144.0.20.ssh: Flags [.], ack 188, win 9086, options [nop,nop,TS val 20883109 ecr 83375988], length 0
22:36:54.522214 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 188:552, ack 1, win 291, options [nop,nop,TS val 83375989 ecr 20883109], length 364
22:36:54.522508 IP 169.144.0.1.39406 > 169.144.0.20.ssh: Flags [.], ack 552, win 9086, options [nop,nop,TS val 20883109 ecr 83375989], length 0
22:36:54.522867 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 552:892, ack 1, win 291, options [nop,nop,TS val 83375990 ecr 20883109], length 340
22:36:54.523006 IP 169.144.0.1.39406 > 169.144.0.20.ssh: Flags [.], ack 892, win 9086, options [nop,nop,TS val 20883109 ecr 83375990], length 0
22:36:54.523304 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 892:1232, ack 1, win 291, options [nop,nop,TS val 83375990 ecr 20883109], length 340
22:36:54.523461 IP 169.144.0.1.39406 > 169.144.0.20.ssh: Flags [.], ack 1232, win 9086, options [nop,nop,TS val 20883110 ecr 83375990], length 0
22:36:54.523604 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 1232:1572, ack 1, win 291, options [nop,nop,TS val 83375991 ecr 20883110], length 340
...................................................................................................................................................
```
### Example:9) Capturing packets from a specific port on a specific interface
Using the tcpdump command, we can capture packets for a specific port (e.g. 22) on a specific interface such as enp0s3.
Syntax :
```
# tcpdump -i {interface-name} port {Port_Number}
```
```
[root@compute-0-1 ~]# tcpdump -i enp0s3 port 22
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
22:54:45.032412 IP compute-0-1.example.com.ssh > 169.144.0.1.39406: Flags [P.], seq 1435010787:1435010975, ack 3061993834, win 291, options [nop,nop,TS val 84446499 ecr 21150734], length 188
22:54:45.032631 IP 169.144.0.1.39406 > compute-0-1.example.com.ssh: Flags [.], ack 188, win 9131, options [nop,nop,TS val 21150737 ecr 84446499], length 0
22:54:55.037926 IP compute-0-1.example.com.ssh > 169.144.0.1.39406: Flags [P.], seq 188:576, ack 1, win 291, options [nop,nop,TS val 84456505 ecr 21150737], length 388
22:54:55.038106 IP 169.144.0.1.39406 > compute-0-1.example.com.ssh: Flags [.], ack 576, win 9154, options [nop,nop,TS val 21153238 ecr 84456505], length 0
22:54:55.038286 IP compute-0-1.example.com.ssh > 169.144.0.1.39406: Flags [P.], seq 576:940, ack 1, win 291, options [nop,nop,TS val 84456505 ecr 21153238], length 364
22:54:55.038564 IP 169.144.0.1.39406 > compute-0-1.example.com.ssh: Flags [.], ack 940, win 9177, options [nop,nop,TS val 21153238 ecr 84456505], length 0
22:54:55.038708 IP compute-0-1.example.com.ssh > 169.144.0.1.39406: Flags [P.], seq 940:1304, ack 1, win 291, options [nop,nop,TS val 84456506 ecr 21153238], length 364
............................................................................................................................
[root@compute-0-1 ~]#
```
### Example:10) Capturing the packets from a Specific Source IP on a Specific Interface
Using the **src** keyword followed by an **IP address** in the tcpdump command, we can capture packets from a specific source IP.
Syntax :
```
# tcpdump -n -i {interface-name} src {ip-address}
```
An example is shown below:
```
[root@compute-0-1 ~]# tcpdump -n -i enp0s3 src 169.144.0.10
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
23:03:45.912733 IP 169.144.0.10.amqp > 169.144.0.20.57800: Flags [.], ack 526623844, win 243, options [nop,nop,TS val 84981008 ecr 84982372], length 0
23:03:46.136757 IP 169.144.0.10.amqp > 169.144.0.20.57796: Flags [.], ack 2535995970, win 252, options [nop,nop,TS val 84981232 ecr 84982596], length 0
23:03:46.153398 IP 169.144.0.10.amqp > 169.144.0.20.57798: Flags [.], ack 3623063621, win 243, options [nop,nop,TS val 84981248 ecr 84982612], length 0
23:03:46.361160 IP 169.144.0.10.amqp > 169.144.0.20.57802: Flags [.], ack 2140263945, win 252, options [nop,nop,TS val 84981456 ecr 84982821], length 0
23:03:46.376926 IP 169.144.0.10.amqp > 169.144.0.20.57808: Flags [.], ack 175946224, win 252, options [nop,nop,TS val 84981472 ecr 84982836], length 0
23:03:46.505242 IP 169.144.0.10.amqp > 169.144.0.20.57810: Flags [.], ack 1016089556, win 252, options [nop,nop,TS val 84981600 ecr 84982965], length 0
23:03:46.616994 IP 169.144.0.10.amqp > 169.144.0.20.57812: Flags [.], ack 832263835, win 252, options [nop,nop,TS val 84981712 ecr 84983076], length 0
23:03:46.809344 IP 169.144.0.10.amqp > 169.144.0.20.57814: Flags [.], ack 2781799939, win 252, options [nop,nop,TS val 84981904 ecr 84983268], length 0
23:03:46.809485 IP 169.144.0.10.amqp > 169.144.0.20.57816: Flags [.], ack 1662816815, win 252, options [nop,nop,TS val 84981904 ecr 84983268], length 0
23:03:47.033301 IP 169.144.0.10.amqp > 169.144.0.20.57818: Flags [.], ack 2387094362, win 252, options [nop,nop,TS val 84982128 ecr 84983492], length 0
^C
10 packets captured
12 packets received by filter
0 packets dropped by kernel
[root@compute-0-1 ~]#
```
### Example:11) Capturing packets from a specific destination IP on a specific Interface
Syntax :
```
# tcpdump -n -i {interface-name} dst {IP-address}
```
```
[root@compute-0-1 ~]# tcpdump -n -i enp0s3 dst 169.144.0.1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
23:10:43.520967 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 1439564171:1439564359, ack 3062005550, win 291, options [nop,nop,TS val 85404988 ecr 21390356], length 188
23:10:43.521441 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 188:408, ack 1, win 291, options [nop,nop,TS val 85404988 ecr 21390359], length 220
23:10:43.521719 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 408:604, ack 1, win 291, options [nop,nop,TS val 85404989 ecr 21390359], length 196
23:10:43.521993 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 604:800, ack 1, win 291, options [nop,nop,TS val 85404989 ecr 21390359], length 196
23:10:43.522157 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 800:996, ack 1, win 291, options [nop,nop,TS val 85404989 ecr 21390359], length 196
23:10:43.522346 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 996:1192, ack 1, win 291, options [nop,nop,TS val 85404989 ecr 21390359], length 196
.........................................................................................
```
### Example:12) Capturing TCP packet communication between two Hosts
Let's assume I want to capture TCP packets between the two hosts 169.144.0.1 and 169.144.0.20. An example is shown below:
```
[root@compute-0-1 ~]# tcpdump -w two-host-tcp-comm.pcap -i enp0s3 tcp and \(host 169.144.0.1 or host 169.144.0.20\)
```
Capturing only the SSH packet flow between two hosts using the tcpdump command:
```
[root@compute-0-1 ~]# tcpdump -w ssh-comm-two-hosts.pcap -i enp0s3 src 169.144.0.1 and port 22 and dst 169.144.0.20 and port 22
```
### Example:13) Capturing the udp network packets (to & fro) between two hosts
Syntax :
```
# tcpdump -w {file-name.pcap} -s {snap-length} -i {interface-name} udp and \(host {ip-address-1} and host {ip-address-2}\)
```
```
[root@compute-0-1 ~]# tcpdump -w two-host-comm.pcap -s 1000 -i enp0s3 udp and \(host 169.144.0.10 and host 169.144.0.20\)
```
### Example:14) Capturing packets in HEX and ASCII Format
Using the tcpdump command, we can capture TCP/IP packets in both ASCII and HEX format.
To capture the packets in ASCII format, use the **-A** option, as shown below:
```
[root@compute-0-1 ~]# tcpdump -c 10 -A -i enp0s3
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
00:37:10.520060 IP compute-0-1.example.com.ssh > 169.144.0.1.39406: Flags [P.], seq 1452637331:1452637519, ack 3062125586, win 333, options [nop,nop,TS val 90591987 ecr 22687106], length 188
E...[root@compute-0-1 @...............V.|...T....MT......
.fR..Z-....b.:..Z5...{.'p....]."}...Z..9.?......."root@compute-0-1 <.....V..C.....{,...OKP.2.*...`..-sS..1S...........:.O[.....{G..%ze.Pn.T..N.... ....qB..5...n.....`...:=...[..0....k.....S.:..5!.9..G....!-..'..
00:37:10.520319 IP 169.144.0.1.39406 > compute-0-1.example.com.ssh: Flags [.], ack 188, win 13930, options [nop,nop,TS val 22687109 ecr 90591987], length 0
root@compute-0-1 @.|+..............T.V.}O..6j.d.....
.Z-..fR.
00:37:11.687543 IP controller0.example.com.amqp > compute-0-1.example.com.57800: Flags [.], ack 526624548, win 243, options [nop,nop,TS val 90586768 ecr 90588146], length 0
root@compute-0-1 @.!L...
.....(..g....c.$...........
.f>..fC.
00:37:11.687612 IP compute-0-1.example.com.57800 > controller0.example.com.amqp: Flags [.], ack 1, win 237, options [nop,nop,TS val 90593155 ecr 90551716], length 0
root@compute-0-1 @..........
...(.c.$g.......Se.....
.fW..e..
..................................................................................................................................................
```
To capture the packets in both HEX and ASCII format, use the **-XX** option:
```
[root@compute-0-1 ~]# tcpdump -c 10 -XX -i enp0s3
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
00:39:15.124363 IP compute-0-1.example.com.ssh > 169.144.0.1.39406: Flags [P.], seq 1452640859:1452641047, ack 3062126346, win 333, options [nop,nop,TS val 90716591 ecr 22718257], length 188
0x0000: 0a00 2700 0000 0800 27f4 f935 0800 4510 ..'.....'..5..E.
0x0010: 00f0 5bc6 4000 4006 8afc a990 0014 a990 ..[root@compute-0-1 @.........
0x0020: 0001 0016 99ee 5695 8a5b b684 570a 8018 ......V..[..W...
0x0030: 014d 5418 0000 0101 080a 0568 39af 015a .MT........h9..Z
0x0040: a731 adb7 58b6 1a0f 2006 df67 c9b6 4479 .1..X......g..Dy
0x0050: 19fd 2c3d 2042 3313 35b9 a160 fa87 d42c ..,=.B3.5..`...,
0x0060: 89a9 3d7d dfbf 980d 2596 4f2a 99ba c92a ..=}....%.O*...*
0x0070: 3e1e 7bf7 3af2 a5cc ee4f 10bc 7dfc 630d >.{.:....O..}.c.
0x0080: 898a 0e16 6825 56c7 b683 1de4 3526 ff04 ....h%V.....5&..
0x0090: 68d1 4f7d babd 27ba 84ae c5d3 750b 01bd h.O}..'.....u...
0x00a0: 9c43 e10a 33a6 8df2 a9f0 c052 c7ed 2ff5 .C..3......R../.
0x00b0: bfb1 ce84 edfc c141 6dad fa19 0702 62a7 .......Am.....b.
0x00c0: 306c db6b 2eea 824e eea5 acd7 f92e 6de3 0l.k...N......m.
0x00d0: 85d0 222d f8bf 9051 2c37 93c8 506d 5cb5 .."-...Q,7..Pm\.
0x00e0: 3b4a 2a80 d027 49f2 c996 d2d9 a9eb c1c4 ;J*..'I.........
0x00f0: 7719 c615 8486 d84c e42d 0ba3 698c w......L.-..i.
00:39:15.124648 IP 169.144.0.1.39406 > compute-0-1.example.com.ssh: Flags [.], ack 188, win 13971, options [nop,nop,TS val 22718260 ecr 90716591], length 0
0x0000: 0800 27f4 f935 0a00 2700 0000 0800 4510 ..'..5..'.....E.
0x0010: 0034 6b70 4000 4006 7c0e a990 0001 a990 root@compute-0-1 @.|.......
0x0020: 0014 99ee 0016 b684 570a 5695 8b17 8010 ........W.V.....
0x0030: 3693 7c0e 0000 0101 080a 015a a734 0568 6.|........Z.4.h
0x0040: 39af
.......................................................................
```
That's all for this article. I hope you now have an idea of how to capture and analyze TCP/IP packets using the tcpdump command. Please do share your feedback and comments.
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/capture-analyze-packets-tcpdump-command-linux/
作者:[Pradeep Kumar][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.linuxtechi.com/author/pradeep/
[1]:https://www.linuxtechi.com/wp-content/uploads/2018/08/N-Number-Packsets-tcpdump-interface-1024x422.jpg
[2]:https://www.linuxtechi.com/wp-content/uploads/2018/08/N-Number-Packsets-tcpdump-interface.jpg
@ -0,0 +1,89 @@
translating by lujun9972
4 tips for better tmux sessions
======
![](https://fedoramagazine.org/wp-content/uploads/2018/08/tmux-4-tips-816x345.jpg)
The tmux utility, a terminal multiplexer, lets you treat your terminal as a multi-paned window into your system. You can arrange the configuration, run different processes in each pane, and generally make better use of your screen. We introduced some readers to this powerful tool [in this earlier article][1]. Here are some tips that will help you get more out of tmux if you're getting started.
This article assumes your current prefix key is Ctrl+b. If you've remapped that prefix, simply substitute your prefix in its place.
### Set your terminal to automatically use tmux
One of the biggest benefits of tmux is being able to disconnect and reconnect to sessions at will. This makes remote login sessions more powerful. Have you ever lost a connection and wished you could get back to the work you were doing on the remote system? With tmux, this problem is solved.
However, you may sometimes find yourself doing work on a remote system and realize you didn't start a session. One way to avoid this is to have tmux start or attach every time you log in to a system with an interactive shell.
Add this to your remote system's ~/.bash_profile file:
```
if [ -z "$TMUX" ]; then
tmux attach -t default || tmux new -s default
fi
```
Then log out of the remote system and log back in with SSH. You'll find you're in a tmux session named default. This session will be regenerated at next login if you exit it. But more importantly, if you detach from it as normal, your work is waiting for you next time you log in — especially useful if your connection is interrupted.
Of course you can add this to your local system as well. Note that terminals inside most GUIs won't use the default session automatically, because they aren't login shells. While you can change that behavior, it may result in nesting that makes the session less usable, so proceed with caution.
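If you only want this behavior for remote logins, one option is to additionally test for an SSH session (a sketch; sshd sets $SSH_TTY for interactive SSH sessions):
```
if [ -z "$TMUX" ] && [ -n "$SSH_TTY" ]; then
    tmux attach -t default || tmux new -s default
fi
```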
### Use zoom to focus on a single process
While the point of tmux is to offer multiple windows, panes, and processes in a single session, sometimes you need to focus. If you're in a process and need more space, or want to focus on a single task, the zoom command works well. It expands the current pane to take up the entire current window space.
Zoom can be useful in other situations too. For instance, imagine youre using a terminal window in a graphical desktop. Panes can make it harder to copy and paste multiple lines from inside your tmux session. If you zoom the pane, you can do a clean copy/paste of multiple lines of data with ease.
To zoom into the current pane, hit Ctrl+b, z. When you're finished with the zoom function, hit the same key combo to unzoom the pane.
### Bind some useful commands
By default tmux has numerous commands available. But it's helpful to have some of the more common operations bound to keys you can easily remember. Here are some examples you can add to your ~/.tmux.conf file to make sessions more enjoyable:
```
bind r source-file ~/.tmux.conf \; display "Reloaded config"
```
This command rereads the commands and bindings in your config file. Once you add this binding, exit any tmux sessions and then restart one. Now after you make any other future changes, simply run Ctrl+b, r and the changes will be part of your existing session.
```
bind V split-window -h
bind H split-window
```
These commands make it easier to split the current window across a vertical axis (note that's Shift+V) or across a horizontal axis (Shift+H).
If you want to see how all keys are bound, use Ctrl+b, ? to see a list. You may see keys bound in copy-mode first, for when you're working with copy and paste inside tmux. The prefix mode bindings are where you'll see the ones you added above. Feel free to experiment with your own!
### Use powerline for great justice
[As reported in a previous Fedora Magazine article][2], the powerline utility is a fantastic addition to your shell. But it also has capabilities when used with tmux. Because tmux takes over the entire terminal space, the powerline window can provide more than just a better shell prompt.
[![Screenshot of tmux powerline in git folder](https://fedoramagazine.org/wp-content/uploads/2018/08/Screenshot-from-2018-08-25-19-36-53-1024x690.png)][3]
If you haven't already, follow the instructions in the [Magazine's powerline article][4] to install that utility. Then, install the addon [using sudo][5]:
```
sudo dnf install tmux-powerline
```
Now restart your session, and you'll see a spiffy new status line at the bottom. Depending on the terminal width, the default status line now shows your current session ID, open windows, system information, date and time, and hostname. If you change directory into a git-controlled project, you'll see the branch and color-coded status as well.
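If you are curious what your status line is actually set to, tmux can report its own option values, which is handy for comparing before and after installing the addon:
```
# Show the current global status-line settings
tmux show-options -g | grep ^status
```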
Of course, this status bar is highly configurable as well. Enjoy your new supercharged tmux session, and have fun experimenting with it.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/4-tips-better-tmux-sessions/
作者:[Paul W. Frields][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[lujun9972](https://github.com/lujun9972)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://fedoramagazine.org/author/pfrields/
[1]:https://fedoramagazine.org/use-tmux-more-powerful-terminal/
[2]:https://fedoramagazine.org/add-power-terminal-powerline/
[3]:https://fedoramagazine.org/wp-content/uploads/2018/08/Screenshot-from-2018-08-25-19-36-53.png
[4]:https://fedoramagazine.org/add-power-terminal-powerline/
[5]:https://fedoramagazine.org/howto-use-sudo/
@ -0,0 +1,59 @@
A sysadmin's guide to containers
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/toolbox-learn-draw-container-yearbook.png?itok=xDbwz1pP)
The term "containers" is heavily overused. Also, depending on the context, it can mean different things to different people.
Traditional Linux containers are really just ordinary processes on a Linux system. These groups of processes are isolated from other groups of processes using resource constraints (control groups [cgroups]), Linux security constraints (Unix permissions, capabilities, SELinux, AppArmor, seccomp, etc.), and namespaces (PID, network, mount, etc.).
If you boot a modern Linux system and take a look at any process with `cat /proc/PID/cgroup`, you see that the process is in a cgroup. If you look at `/proc/PID/status`, you see capabilities. If you look at `/proc/self/attr/current`, you see SELinux labels. If you look at `/proc/PID/ns`, you see the list of namespaces the process is in. So, if you define a container as a process with resource constraints, Linux security constraints, and namespaces, by definition every process on a Linux system is in a container. This is why we often say [Linux is containers, containers are Linux][1]. **Container runtimes** are tools that modify these resource constraints, security, and namespaces and launch the container.
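For example, you can inspect these attributes for your current shell process directly (a quick sketch using the /proc files mentioned above; the SELinux label is only meaningful on SELinux-enabled systems):
```
cat /proc/self/cgroup         # control groups this process belongs to
grep ^Cap /proc/self/status   # capability bitmasks (CapInh, CapPrm, CapEff, ...)
cat /proc/self/attr/current   # SELinux label of this process
ls -l /proc/self/ns           # namespaces this process is in
```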
Docker introduced the concept of a **container image** , which is a standard TAR file that combines:
* **Rootfs (container root filesystem):** A directory on the system that looks like the standard root (`/`) of the operating system. For example, a directory with `/usr`, `/var`, `/home`, etc.
* **JSON file (container configuration):** Specifies how to run the rootfs; for example, what **command** or **entrypoint** to run in the rootfs when the container starts; **environment variables** to set for the container; the container's **working directory** ; and a few other settings.
Docker "`tar`'s up" the rootfs and the JSON file to create the **base image**. This enables you to install additional content on the rootfs, create a new JSON file, and `tar` the difference between the original image and the new image with the updated JSON file. This creates a **layered image**.
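As an illustration, exporting an image shows this rootfs-plus-JSON structure. This is a sketch assuming the classic `docker save` tarball layout; exact file names vary by tool and version, and `myimage` is a hypothetical image name:
```
$ docker save myimage:latest -o myimage.tar
$ tar tf myimage.tar
manifest.json
3f4a9b.../VERSION
3f4a9b.../json
3f4a9b.../layer.tar
```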
The definition of a container image was eventually standardized by the [Open Container Initiative (OCI)][2] standards body as the [OCI Image Specification][3].
Tools used to create container images are called **container image builders**. Sometimes container engines perform this task, but several standalone tools are available that can build container images.
Docker took these container images ( **tarballs** ) and moved them to a web service from which they could be pulled, developed a protocol to pull them, and called the web service a **container registry**.
**Container engines** are programs that can pull container images from container registries and reassemble them onto **container storage**. Container engines also launch **container runtimes** (see below).
![](https://opensource.com/sites/default/files/linux_container_internals_2.0_-_hosts.png)
Container storage is usually a **copy-on-write** (COW) layered filesystem. When you pull down a container image from a container registry, you first need to untar the rootfs and place it on disk. If you have multiple layers that make up your image, each layer is downloaded and stored on a different layer on the COW filesystem. The COW filesystem allows each layer to be stored separately, which maximizes sharing for layered images. Container engines often support multiple types of container storage, including `overlay`, `devicemapper`, `btrfs`, `aufs`, and `zfs`.
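For example, most container engines can report which storage driver they are using; with the Docker CLI (assuming it is installed), one way is:
```
$ docker info --format '{{.Driver}}'
overlay2
```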
After the container engine downloads the container image to container storage, it needs to create a **container runtime configuration**. The runtime configuration combines input from the caller/user along with the content of the container image specification. For example, the caller might want to specify modifications to a running container's security, add additional environment variables, or mount volumes to the container.
The layout of the container runtime configuration and the exploded rootfs have also been standardized by the OCI standards body as the [OCI Runtime Specification][4].
Finally, the container engine launches a **container runtime** that reads the container runtime specification; modifies the Linux cgroups, Linux security constraints, and namespaces; and launches the container command to create the container's **PID 1**. At this point, the container engine can relay `stdin`/`stdout` back to the caller and control the container (e.g., stop, start, attach).
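As a concrete example, runc, the reference OCI runtime, can be driven directly without a container engine. This is a minimal sketch that assumes you have already populated a rootfs/ directory (for example, from an exported image):
```
$ mkdir -p mycontainer/rootfs && cd mycontainer
$ runc spec                   # generate a default config.json (the OCI runtime configuration)
$ sudo runc run mycontainer1  # launch it; the container command becomes PID 1 inside
```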
Note that many new container runtimes are being introduced to use different parts of Linux to isolate containers. People can now run containers using KVM separation (think mini virtual machines) or they can use other hypervisor strategies (like intercepting all system calls from processes in containers). Since we have a standard runtime specification, these tools can all be launched by the same container engines. Even Windows can use the OCI Runtime Specification for launching Windows containers.
At a much higher level are **container orchestrators.** Container orchestrators are tools used to coordinate the execution of containers on multiple different nodes. Container orchestrators talk to container engines to manage containers. Orchestrators tell the container engines to start containers and wire their networks together. Orchestrators can monitor the containers and launch additional containers as the load increases.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/sysadmins-guide-containers
作者:[Daniel J Walsh][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/rhatdan
[1]:https://www.redhat.com/en/blog/containers-are-linux
[2]:https://www.opencontainers.org/
[3]:https://github.com/opencontainers/image-spec/blob/master/spec.md
[4]:https://github.com/opencontainers/runtime-spec
@ -0,0 +1,112 @@
An introduction to diffs and patches
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/find-file-linux-code_magnifying_glass_zero.png?itok=E2HoPDg0)
If you've ever worked on a large codebase with a distributed development model, you've probably heard people say things like “Sue just sent a patch,” or “Rajiv is checking out the diff.” Maybe those terms were new to you and you wondered what they meant. Open source has had an impact here, as the main development model of large projects from the Apache web server to the Linux kernel has been “patch-based” throughout their lifetime. In fact, did you know that Apache's name originated from the set of patches that were collected and collated against the original [NCSA HTTPd server source code][1]?
You might think this is folklore, but an early [capture of the Apache website][2] claims that the name was derived from this original “patch” collection; hence **APA**t**CH**y server, which was then simplified to Apache.
But enough history trivia. What exactly are these patches and diffs that developers talk about?
First, for the sake of this article, let's assume that these two terms reference one and the same thing. “Diff” is simply short for “difference”; a Unix utility by the same name reveals the difference between files. We will look at a diff utility example below.
A “patch” refers to a specific collection of differences between files that can be applied to a source code tree using the Unix patch utility. So we can create diffs (or patches) using the diff tool and apply them to an unpatched version of that same source code using the patch tool. As an aside (and breaking my rule of no more history trivia), the word “patch” comes from the physical covering of punchcard holes to make software changes in the early computing days, when punchcards represented the program executed by the computer's processor. The image below, found on this [Wikipedia page][3] describing software patches, shows this original “patching” concept:
![](https://opensource.com/sites/default/files/uploads/360px-harvard_mark_i_program_tape.agr_.jpg)
Now that you have a basic understanding of patches and diffs, let's explore how software developers use these tools. If you haven't used a source code control system like [Git][4] or [Subversion][5], I will set the stage for how most non-trivial software projects are developed. If you think of the life of a software project as a set of actions along a timeline, you might visualize changes to the software—such as adding a feature or a function to a source code file or fixing a bug—appearing at different points on the timeline, with each discrete point representing the state of all the source code files at that time. We will call these points of change “commits,” using the same nomenclature that today's most popular source code control tool, Git, uses. When you want to see the difference between the source code before and after a certain commit, or between many commits, you can use a tool to show diffs, or differences.
If you are developing software using this same source code control tool, Git, you may have changes in your local system that you want to provide for others to potentially add as commits to their own tree. One way to provide local changes to others is to create a diff of your local tree's changes and send this “patch” to others who are working on the same source code. This lets others patch their tree and see the source code tree with your changes applied.
### Linux, Git, and GitHub
This model of sharing patch files is how the Linux kernel community operates regarding proposed changes today. If you look at the archives for any of the popular Linux kernel mailing lists—[LKML][6] is the primary one, but others include [linux-containers][7], [fs-devel][8], and [Netdev][9], to name a few—you'll find many developers posting patches that they wish to have others review, test, and possibly bring into the official Linux kernel Git tree at some point. It is outside the scope of this article to discuss Git, the source code control system written by Linus Torvalds, in more detail, but it's worth noting that Git enables this distributed development model, allowing patches to live separately from a main repository, pushing and pulling into different trees and following their specific development flows.
Before moving on, we can't ignore the most popular service in which patches and diffs are relevant: [GitHub][10]. Given its name, you can probably guess that GitHub is based on Git, but it offers a web- and API-based workflow around the Git tool for distributed open source project development. One of the main ways that patches are shared in GitHub is not via email, like the Linux kernel, but by creating a **pull request**. When you commit changes on your own copy of a source code tree, you can share those changes by creating a pull request against a commonly shared repository for that software project. GitHub is used by many active and popular open source projects today, such as [Kubernetes][11], [Docker][12], [the Container Network Interface (CNI)][13], [Istio][14], and many others. In the GitHub world, users tend to use the web-based interface to review the diffs or patches that comprise a pull request, but you can still access the raw patch files and use them at the command line with the patch utility.
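For example, GitHub exposes the raw patch for any pull request if you append `.patch` to its URL; the repository and PR number below are hypothetical:
```
$ curl -L https://github.com/example/project/pull/42.patch -o pr42.patch
$ patch -p1 < pr42.patch
```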
### Getting down to business
Now that we've covered patches and diffs and how they are used in popular open source communities and tools, let's look at a few examples.
The first example includes two copies of a source tree, and one has changes that we want to visualize using the diff utility. In our examples, we will look at “unified” diffs because that is the expected view for patches in most of the modern software development world. Check the diff manual page for more information on options and ways to produce differences. The original source code is located in sources-orig and our second, modified codebase is located in a directory named sources-fixed. To show the differences in a unified diff format in your terminal, use the following command:
```
$ diff -Naur sources-orig/ sources-fixed/
```
...which then shows the following diff command output:
```
diff -Naur sources-orig/officespace/interest.go sources-fixed/officespace/interest.go
--- sources-orig/officespace/interest.go        2018-08-10 16:39:11.000000000 -0400
+++ sources-fixed/officespace/interest.go       2018-08-10 16:39:40.000000000 -0400
@@ -11,15 +11,13 @@
   InterestRate float64
 }
+// compute the rounded interest for a transaction
 func computeInterest(acct *Account, t Transaction) float64 {
   interest := t.Amount * t.InterestRate
   roundedInterest := math.Floor(interest*100) / 100.0
   remainingInterest := interest - roundedInterest
-  // a little extra..
-  remainingInterest *= 1000
-
   // Save the remaining interest into an account we control:
   acct.Balance = acct.Balance + remainingInterest
```
The first few lines of the diff command output could use some explanation: The three `---` signs show the original filename; any lines that exist in the original file but not in the compared new file will be prefixed with a single `-` to note that this line was “subtracted” from the sources. The `+++` signs show the opposite: The compared new file and additions found in this file are marked with a single `+` symbol to show they were added in the new version of the file. Each “hunk” (that's what sections prefixed by `@@` are called) of the difference patch file has contextual line numbers that help the patch tool (or other processors) know where to apply this change. You can see from the “Office Space” movie reference function that we've corrected (by removing three lines) the greed of one of our software developers, who added a bit to the rounded-out interest calculation along with a comment to our function.
If you want someone else to test the changes from this tree, you could save this output from diff into a patch file:
```
$ diff -Naur sources-orig/ sources-fixed/ >myfixes.patch
```
Now you have a patch file, myfixes.patch, which can be shared with another developer to apply and test this set of changes. A fellow developer can apply the changes using the patch tool, given that their current working directory is in the base of the source code tree:
```
$ patch -p1 < ../myfixes.patch
patching file officespace/interest.go
```
Now your fellow developer's source tree is patched and ready to build and test the changes that were applied via the patch. What if this developer had made changes to interest.go separately? As long as the changes do not conflict directly—for example, change the same exact lines—the patch tool should be able to solve where to merge the changes in. As an example, an interest.go file with several other changes is used in the following example run of patch:
```
$ patch -p1 < ../myfixes.patch
patching file officespace/interest.go
Hunk #1 succeeded at 26 (offset 15 lines).
```
In this case, patch warns that the changes did not apply at the original location in the file, but were offset by 15 lines. If you have heavily changed files, patch may give up trying to find where the changes fit, but it does provide options (with requisite warnings in the documentation) for turning up the matching “fuzziness” (which are beyond the scope of this article).
If you are using Git and/or GitHub, you will probably not use the diff or patch tools as standalone tools. Git offers much of this functionality so you can use the built-in capabilities of working on a shared source tree with merging and pulling other developers' changes. One similar capability is to use git diff to provide the unified diff output in your local tree or between any two references (a commit identifier, the name of a tag or branch, and so on). You can even create a patch file that someone not using Git might find useful by simply piping the git diff output to a file, given that it uses the exact format of the diff command that patch can consume. Of course, GitHub takes these capabilities into a web-based user interface so you can view file changes on a pull request. In this view, you will note that it is effectively a unified diff view in your web browser, and GitHub allows you to download these changes as a raw patch file.
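For example, the following sketch produces a patch file from your latest commit that either tool can apply:
```
$ git diff HEAD~1 > myfixes.patch   # unified diff of the last commit
$ patch -p1 < myfixes.patch         # apply it with the classic patch tool...
$ git apply myfixes.patch           # ...or, equivalently, with Git's own applier
```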
### Summary
You've learned what a diff and a patch are, as well as the common Unix/Linux command line tools that interact with them. Unless you are a developer on a project still using a patch file-based development method—like the Linux kernel—you will consume these capabilities primarily through a source code control system like Git. But it's helpful to know the background and underpinnings of features many developers use daily through higher-level tools like GitHub. And who knows—they may come in handy someday when you need to work with patches from a mailing list in the Linux world.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/diffs-patches
作者:[Phil Estes][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/estesp
[1]:https://github.com/TooDumbForAName/ncsa-httpd
[2]:https://web.archive.org/web/19970615081902/http:/www.apache.org/info.html
[3]:https://en.wikipedia.org/wiki/Patch_(computing)
[4]:https://git-scm.com/
[5]:https://subversion.apache.org/
[6]:https://lkml.org/
[7]:https://lists.linuxfoundation.org/pipermail/containers/
[8]:https://patchwork.kernel.org/project/linux-fsdevel/list/
[9]:https://www.spinics.net/lists/netdev/
[10]:https://github.com/
[11]:https://kubernetes.io/
[12]:https://www.docker.com/
[13]:https://github.com/containernetworking/cni
[14]:https://istio.io/

View File

@ -0,0 +1,50 @@
translating by lujun9972
Solve "error: failed to commit transaction (conflicting files)" In Arch Linux
======
![](https://www.ostechnix.com/wp-content/uploads/2018/06/arch_linux_wallpaper-720x340.png)
It's been a month since I upgraded my Arch Linux desktop. Today, I tried to update my Arch Linux system and ran into an error that said **“error: failed to commit transaction (conflicting files) stfl: /usr/lib/libstfl.so.0 exists in filesystem”**. It seems one library (/usr/lib/libstfl.so.0) already exists on my filesystem, so pacman can't upgrade it. If you have run into the same error, here is a quick fix to resolve it.
### Solve “error: failed to commit transaction (conflicting files)” In Arch Linux
You have three options.
1. Simply ignore the problematic **stfl** library from being upgraded and try to update the system again. Refer to this guide to learn [**how to ignore a package from being upgraded**][1]; a minimal sketch follows below.
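As a minimal sketch (assuming the stock pacman configuration layout), a package can be held back with the `IgnorePkg` directive in `/etc/pacman.conf`:
```
# /etc/pacman.conf: skip the stfl package during upgrades
[options]
IgnorePkg = stfl
```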
2. Overwrite the package using command:
```
$ sudo pacman -Syu --overwrite /usr/lib/libstfl.so.0
```
3. Remove the stfl library file manually and try to upgrade the system again. Please make sure the file is not required by any important package, and check archlinux.org for any mention of this conflict. A quick ownership check is shown after the command below.
```
$ sudo rm /usr/lib/libstfl.so.0
```
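Before deleting anything (this check is my addition, not part of the original steps), you can ask pacman which package, if any, owns the file:
```
$ pacman -Qo /usr/lib/libstfl.so.0
```
If pacman reports that no package owns the file, it is a leftover and generally safe to remove.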
Now, try to update the system:
```
$ sudo pacman -Syu
```
I chose the third option and just deleted the file and upgraded my Arch Linux system. It works now!
Hope this helps. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-solve-error-failed-to-commit-transaction-conflicting-files-in-arch-linux/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[lujun9972](https://github.com/lujun9972)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.ostechnix.com/safely-ignore-package-upgraded-arch-linux/

View File

@ -0,0 +1,92 @@
Top 10 Raspberry Pi blogs to follow
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberry-pi-juggle.png?itok=oTgGGSRA)
There are plenty of great Raspberry Pi fan sites, tutorials, repositories, YouTube channels, and other resources on the web. Here are my top 10 favorite Raspberry Pi blogs, in no particular order.
### 1. Raspberry Pi Spy
Raspberry Pi fan Matt Hawkins has been writing a broad range of comprehensive and informative tutorials on his site, Raspberry Pi Spy, since the early days. I have learned a lot directly from this site, and Matt always seems to be the first to cover many topics. I have reached out for help many times in my first three years in the world of hacking and making with Raspberry Pi.
Fortunately for everyone, this early adopter site is still going strong. I hope to see it live on, giving new community members a helping hand when they need it.
### 2. Adafruit
Adafruit is one of the biggest names in hardware hacking. The company makes and sells beautiful hardware and provides excellent tutorials written by staff, community members, and even the wonderful Lady Ada herself.
As well as being a webshop, Adafruit also runs a blog, which is filled to the brim with great content from around the world. Check out the Raspberry Pi category, especially at the end of the work week, as [Friday is Pi Day][1] at Adafruit Towers.
### 3. Recantha's Raspberry Pi Pod
Mike Horne (Recantha) is a key Pi community member in the UK who runs the [CamJam and Potton Pi & Pint][2] (two Raspberry Jams in Cambridge) and [Pi Wars][3] (an annual Pi robotics competition). He gives advice to others setting up Jams and always has time to help beginners. With his co-organizer Tim Richardson, Horne developed the CamJam Edu Kit (a series of small and affordable kits for beginners to learn physical computing with Python).
On top of all this, he runs the Pi Pod, a blog full of anything and everything Pi-related from around the world. It's probably the most regularly updated Pi blog on this list, so it's a great way to keep your finger on the pulse of the Pi community.
### 4. Raspberry Pi blog
Not forgetting the official [Raspberry Pi Foundation][4], this blog covers a range of content from the Foundation's world of hardware, software, education, community, and charity and youth coding clubs. Big themes on the blog are digital making at home, empowerment through education, as well as official news on hardware releases and software updates.
The blog has been running [since 2011][5] and provides an [archive][6] of all 1800+ posts since that time. You can also follow [@raspberrypi_otd][7] on Twitter, which is a bot I created in [Python][8] (for an [Opensource.com tutorial][9], of course). The bot tweets links to blog posts from the current day in previous years from the Raspberry Pi blog archive.
### 5. RasPi.tv
Another seminal Raspberry Pi community member is Alex Eames, who got on board early on with his blog and YouTube channel, RasPi.tv. The site is packed with high-quality, well-produced video tutorials and written guides covering maker projects for all.
Alex makes a series of add-on boards and accessories for the Pi as [RasP.iO][10], including a handy GPIO port label, reference rulers, and more. His blog branches out into [Arduino][11], [WEMO][12], and other small boards too.
### 6. pyimagesearch
Though not strictly a Raspberry Pi blog (the "py" in the name is for "Python," not "Raspberry Pi"), this site features an extensive [Raspberry Pi category][13]. Adrian Rosebrock earned a PhD studying the fields of computer vision and machine learning. His blog aims to share the machine learning tricks he's picked up while studying and making his own computer vision projects.
If you want to learn about facial or object recognition using the Pi camera module, this is the place to be. Adrian's knowledge and practical application of deep learning and AI for image recognition is second to none—and he writes up his projects so that anyone can try.
### 7. Raspberry Pi Roundup
One of the UK's official Raspberry Pi resellers, The Pi Hut, maintains a blog curating the finds of the week. It's another great resource for keeping up with what's going on in the Pi world, and it's worth looking back through past issues.
### 8. Dave Akerman
A leading expert in high-altitude ballooning, Dave Akerman shares his knowledge and experience with balloon launches at minimal cost using Raspberry Pi. He publishes writeups of his launches with photos from the stratosphere and offers tips on how to launch a Pi balloon yourself.
Check out Dave's blog for amazing photography from near space.
### 9. Pimoroni
A world-renowned Raspberry Pi reseller based in Sheffield in the UK, Pimoroni made the famous [Pibow Rainbow case][14] and followed it up with a host of incredible custom add-on boards and accessories.
Pimoroni's blog is laid out as beautifully as its hardware design and branding, and it provides great content for makers and hobbyists at home. The blog accompanies their entertaining YouTube channel [Bilge Tank][15].
### 10. Stuff About Code
Martin O'Hanlon is a Pi community member-turned-Foundation employee who started out hacking Minecraft on the Pi for fun and recently joined the Foundation as a content writer. Luckily, Martin's new job hasn't stopped him from updating his blog and sharing useful tidbits with the world. As well as lots on Minecraft, you'll find posts on the Python libraries [Blue Dot][16] and [guizero][17], along with general Raspberry Pi tips.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/top-10-raspberry-pi-blogs-follow
作者:[Ben Nuttall][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/bennuttall
[1]:https://blog.adafruit.com/category/raspberry-pi/
[2]:https://camjam.me/?page_id=753
[3]:https://piwars.org/
[4]:https://www.raspberrypi-spy.co.uk/
[5]:https://www.raspberrypi.org/blog/first-post/
[6]:https://www.raspberrypi.org/blog/archive/
[7]:https://twitter.com/raspberrypi_otd
[8]:https://github.com/bennuttall/rpi-otd-bot/blob/master/src/bot.py
[9]:https://opensource.com/article/17/8/raspberry-pi-twitter-bot
[10]:https://rasp.io/
[11]:https://www.arduino.cc/
[12]:http://community.wemo.com/
[13]:https://www.pyimagesearch.com/category/raspberry-pi/
[14]:https://shop.pimoroni.com/products/pibow-for-raspberry-pi-3-b-plus
[15]:https://www.youtube.com/channel/UCuiDNTaTdPTGZZzHm0iriGQ
[16]:https://bluedot.readthedocs.io/en/latest/#
[17]:https://lawsie.github.io/guizero/

View File

@ -0,0 +1,167 @@
A Cat Clone With Syntax Highlighting And Git Integration
======
![](https://www.ostechnix.com/wp-content/uploads/2018/08/Bat-command-720x340.png)
In Unix-like systems, we use the **cat** command to print and concatenate files. Using the cat command, we can print the contents of a file to the standard output, concatenate several files into a target file, and append several files to a target file. Today, I stumbled upon a similar utility named **“Bat”**, a clone of the cat command with some additional cool features such as syntax highlighting, git integration, and automatic paging. In this brief guide, we will see how to install and use the Bat command in Linux.
### Installation
Bat is available in the default repositories of Arch Linux. So, you can install it using pacman on any Arch-based system.
```
$ sudo pacman -S bat
```
On Debian, Ubuntu, Linux Mint systems, download the **.deb** file from the [**Releases page**][1] and install it as shown below.
```
$ sudo apt install gdebi
$ sudo gdebi bat_0.5.0_amd64.deb
```
For other systems, you may need to compile and install from source. Make sure you have installed Rust 1.26 or higher.
Then, run the following command to install Bat:
```
$ cargo install bat
```
Alternatively, you can install it using [**Linuxbrew**][2] package manager.
```
$ brew install bat
```
### Bat command Usage
The bat command's usage is very similar to that of the cat command.
To create a new file using bat command, do:
```
$ bat > file.txt
```
To view the contents of a file using bat command, just do:
```
$ bat file.txt
```
You can also view multiple files at once:
```
$ bat file1.txt file2.txt
```
To append the contents of the multiple files in a single file:
```
$ bat file1.txt file2.txt file3.txt > document.txt
```
Like I already mentioned, apart from viewing and editing files, the bat command has some additional cool features.
The bat command supports **syntax highlighting** for a large number of programming and markup languages. For instance, look at the following example. I am going to display the contents of the **reverse.py** file using both the cat and bat commands.
![](https://www.ostechnix.com/wp-content/uploads/2018/08/bat-and-cat-command-output-comparison.png)
Did you notice the difference? The cat command shows the contents of the file in plain text, whereas the bat command shows the output with syntax highlighting and line numbers in a neat tabular format. Much better, isn't it?
If you want to display only the line numbers (without the tabular grid), use the **-n** flag.
```
$ bat -n reverse.py
```
**Sample output:**
![](https://www.ostechnix.com/wp-content/uploads/2018/08/bat-command-output-3.png)
Another notable feature of the bat command is its support for **automatic paging**. That means when the output of a file is too large for one screen, the bat command automatically pipes its own output to the **less** command, so you can view the output page by page.
Let me show you an example. When you view the contents of a file that spans multiple pages using the cat command, the prompt quickly jumps to the last page of the file, and you do not see the content at the beginning or in the middle.
Have a look at the following output:
![](https://www.ostechnix.com/wp-content/uploads/2018/08/cat-command-output.png)
As you can see, the cat command displays the last page of the file.
So, you may need to pipe the output of the cat command to the **less** command to view the file's contents page by page from the beginning.
```
$ cat reverse.py | less
```
Now, you can view the output page by page by hitting the ENTER key. However, this is not necessary if you use the bat command. The bat command automatically pages the output of any file that spans multiple screens.
```
$ bat reverse.py
```
**Sample output:**
![](https://www.ostechnix.com/wp-content/uploads/2018/08/bat-command-output-1.png)
Now hit the ENTER key to go to the next page.
The bat command also supports **Git integration**, so you can view/edit the files in your Git repository without much hassle. It communicates with Git to show modifications with respect to the index (see the left side bar).
![](https://www.ostechnix.com/wp-content/uploads/2018/08/bat-command-output-2.png)
**Customizing Bat**
If you don't like the default theme, you can change it. Bat has an option for that too.
To list the available themes, just run:
```
$ bat --list-themes
1337
DarkNeon
Default
GitHub
Monokai Extended
Monokai Extended Bright
Monokai Extended Light
Monokai Extended Origin
TwoDark
```
To use a different theme, for example TwoDark, run:
```
$ bat --theme=TwoDark file.txt
```
If you want to make the theme permanent, add `export BAT_THEME="TwoDark"` to your shell's startup file.
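For example, assuming you use bash (the startup file will differ for other shells), you could append it like this:
```
$ echo 'export BAT_THEME="TwoDark"' >> ~/.bashrc
```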
Bat also has an option to control the appearance of the output. To do so, use the `--style` option. To show only Git changes and line numbers, but no grid and no file header, use `--style=numbers,changes`.
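Putting that together, such an invocation might look like this:
```
$ bat --style=numbers,changes file.txt
```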
For more details, refer to the Bat project's GitHub repository (link at the end).
And, that's all for now. Hope this was useful. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/bat-a-cat-clone-with-syntax-highlighting-and-git-integration/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://github.com/sharkdp/bat/releases
[2]:https://www.ostechnix.com/linuxbrew-common-package-manager-linux-mac-os-x/

View File

@ -0,0 +1,228 @@
An Introduction to Quantum Computing with Open Source Cirq Framework
======
As the title suggests, this article is an effort to understand how far we have come in Quantum Computing and where we are headed in the field, in order to accelerate scientific and technological research, from an Open Source perspective with Cirq.
First, we will introduce you to the world of Quantum Computing. We will try our best to explain the basic idea behind it before we look into how Cirq will play a significant role in the future of Quantum Computing. Cirq, as you might have heard recently, has been breaking news in the field, and in this Open Science article, we will try to find out why.
<https://www.youtube.com/embed/WVv5OAR4Nik?enablejsapi=1&autoplay=0&cc_load_policy=0&iv_load_policy=1&loop=0&modestbranding=1&rel=0&showinfo=0&fs=1&playsinline=0&autohide=2&theme=dark&color=red&controls=2&>
Before we start with what Quantum Computing is, it is essential to get to know about the term Quantum, that is, a [subatomic particle][1] referring to the smallest known entity. The word [Quantum][2] is based on the Latin word Quantus, meaning, “how little”, as described in this short video:
<https://www.youtube.com/embed/-pUOxVsxu3o?enablejsapi=1&autoplay=0&cc_load_policy=0&iv_load_policy=1&loop=0&modestbranding=1&rel=0&showinfo=0&fs=1&playsinline=0&autohide=2&theme=dark&color=red&controls=2&>
It will be easier for us to understand Quantum Computing by comparing it first to Classical Computing. Classical Computing refers to how today's conventional computers are designed to work. The device on which you are reading this article right now can also be referred to as a Classical Computing device.
### Classical Computing
Classical Computing is just another way to describe how a conventional computer works. Computers work via a binary system, i.e., information is stored using either 1 or 0. Our Classical Computers cannot understand any other form.
In literal terms, inside the computer, a transistor can be either on (1) or off (0). Whatever information we provide as input is translated into 0s and 1s, so that the computer can understand and store it. Everything is represented only with the help of a combination of 0s and 1s.
<https://www.youtube.com/embed/Xpk67YzOn5w?enablejsapi=1&autoplay=0&cc_load_policy=0&iv_load_policy=1&loop=0&modestbranding=1&rel=0&showinfo=0&fs=1&playsinline=0&autohide=2&theme=dark&color=red&controls=2&>
### Quantum Computing
Quantum Computing, on the other hand, does not follow an “on or off” model like Classical Computing. Instead, it can simultaneously handle multiple states of information with the help of two phenomena called [superposition and entanglement][3], thus accelerating computing at a much faster rate and also facilitating greater productivity in information storage.
Please note that superposition and entanglement are [not the same phenomena][4].
<https://www.youtube.com/embed/jiXuVIEg10Q?enablejsapi=1&autoplay=0&cc_load_policy=0&iv_load_policy=1&loop=0&modestbranding=1&rel=0&showinfo=0&fs=1&playsinline=0&autohide=2&theme=dark&color=red&controls=2&>
![][5]
So, if we have bits in Classical Computing, then in the case of Quantum Computing, we would have qubits (or Quantum bits) instead. To know more about the vast difference between the two, check this [page][6], from which the above picture was obtained.
Quantum Computers are not going to replace our Classical Computers. But, there are certain humongous tasks that our Classical Computers will never be able to accomplish and that is when Quantum Computers would prove extremely resourceful. The following video describes the same in detail while also describing how Quantum Computers work:
<https://www.youtube.com/embed/JhHMJCUmq28?enablejsapi=1&autoplay=0&cc_load_policy=0&iv_load_policy=1&loop=0&modestbranding=1&rel=0&showinfo=0&fs=1&playsinline=0&autohide=2&theme=dark&color=red&controls=2&>
A comprehensive video on the progress in Quantum Computing so far:
<https://www.youtube.com/embed/CeuIop_j2bI?enablejsapi=1&autoplay=0&cc_load_policy=0&iv_load_policy=1&loop=0&modestbranding=1&rel=0&showinfo=0&fs=1&playsinline=0&autohide=2&theme=dark&color=red&controls=2&>
### Noisy Intermediate Scale Quantum
According to the very recently updated research paper (31st July 2018), the term “Noisy” refers to the inaccuracy that arises because imperfect control over qubits produces incorrect values. This inaccuracy is why there will be serious limitations on what Quantum devices can achieve in the near term.
“Intermediate Scale” refers to the size of Quantum Computers that will be available in the next few years, where the number of qubits can range from 50 to a few hundred. 50 qubits is a significant milestone because that's beyond what can be simulated by [brute force][7] using the most powerful existing digital [supercomputers][8]. Read more in the paper [here][9].
With the advent of Cirq, a lot is about to change.
### What is Cirq?
Cirq is a Python framework for creating, editing, and invoking the Noisy Intermediate Scale Quantum (NISQ) circuits we just talked about. In other words, Cirq helps address the challenge of improving accuracy and reducing noise in Quantum Computing.
Cirq does not necessarily require an actual Quantum Computer for execution. Cirq can also use a simulator-like interface to perform Quantum circuit simulations.
Cirq is gradually gaining pace, with one of its first users being [Zapata][10], formed last year by a [group of scientists][11] from Harvard University focused on Quantum Computing.
### Getting started with Cirq on Linux
The developers of the Open Source [Cirq library][12] recommend installing it in a [virtual Python environment][13] like [virtualenv][14]. The developers' installation guide for Linux can be found [here][15].
However, we successfully installed and tested Cirq directly for Python3 on an Ubuntu 16.04 system via the following steps:
#### Installing Cirq on Ubuntu
![Cirq Framework for Quantum Computing in Linux][16]
First, we would require pip or pip3 to install Cirq. [Pip][17] is a tool recommended for installing and managing Python packages.
For Python 3.x versions, Pip can be installed with:
```
sudo apt-get install python3-pip
```
Python3 packages can be installed via:
```
pip3 install <package-name>
```
We went ahead and installed the Cirq library with Pip3 for Python3:
```
pip3 install cirq
```
#### Enabling Plot and PDF generation (optional)
Optional system dependencies that cannot be installed with pip can be installed with:
```
sudo apt-get install python3-tk texlive-latex-base latexmk
```
* python3-tk is Python's own graphics library, which enables plotting functionality.
* texlive-latex-base and latexmk enable PDF writing functionality.
Later, we successfully tested Cirq with the following command and code:
```
python3 -c 'import cirq; print(cirq.google.Foxtail)'
```
We got the resulting output as:
![][18]
#### Configuring Pycharm IDE for Cirq
We also configured a Python IDE [PyCharm on Ubuntu][19] to test the same results:
Since we installed Cirq for Python3 on our Linux system, we set the path to the project interpreter in the IDE settings to be:
```
/usr/bin/python3
```
![][20]
In the output above, you can see that the path to the project interpreter we just set is shown along with the path to the test program file (test.py). An exit code of 0 shows that the program finished executing successfully without errors.
So, that's a ready-to-use IDE environment where you can import the Cirq library to start programming with Python and simulating Quantum circuits.
#### Get started with Cirq
A good place to start is the set of [examples][21] made available on Cirq's GitHub page.
The developers have included this [tutorial][22] on GitHub to get started with learning Cirq. If you are serious about learning Quantum Computing, they recommend an excellent book called [“Quantum Computation and Quantum Information” by Nielsen and Chuang][23].
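To give you a flavor of what Cirq code looks like, here is a minimal single-qubit sketch in the spirit of the examples in the Cirq repository (the API has been evolving quickly, so treat the exact calls as illustrative rather than definitive):
```
import cirq

# Pick a qubit on a 2D grid.
qubit = cirq.GridQubit(0, 0)

# Build a circuit: a square root of NOT gate, then a measurement.
circuit = cirq.Circuit.from_ops(
    cirq.X(qubit)**0.5,
    cirq.measure(qubit, key='m')
)
print(circuit)

# Run the circuit on Cirq's built-in simulator.
simulator = cirq.Simulator()
result = simulator.run(circuit, repetitions=20)
print(result)
```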
#### OpenFermion-Cirq
[OpenFermion][24] is an open source library for obtaining and manipulating representations of fermionic systems (including Quantum Chemistry) for simulation on Quantum Computers. Fermionic systems are related to the generation of [fermions][25], which, according to [particle physics][26], follow [Fermi-Dirac statistics][27].
OpenFermion has been hailed as [a great practice tool][28] for chemists and researchers involved with [Quantum Chemistry][29]. The main focus of Quantum Chemistry is the application of [Quantum Mechanics][30] in physical models and experiments of chemical systems. Quantum Chemistry is also referred to as [Molecular Quantum Mechanics][31].
The advent of Cirq has now made it possible for OpenFermion to extend its functionality by providing routines and tools for using Cirq to compile and compose circuits for Quantum simulation algorithms.
#### Google Bristlecone
On March 5, 2018, Google presented [Bristlecone][32], their new Quantum processor, at the annual [American Physical Society meeting][33] in Los Angeles. The [gate-based superconducting system][34] provides a test platform for research into [system error rates][35] and [scalability][36] of Google's [qubit technology][37], along with applications in Quantum [simulation][38], [optimization][39], and [machine learning][40].
In the near future, Google wants to make its 72-qubit Bristlecone Quantum processor [cloud accessible][41]. Bristlecone will gradually become capable of performing tasks that a Classical supercomputer could not complete in a reasonable amount of time.
Cirq would make it easier for researchers to directly write programs for Bristlecone on the cloud, serving as a very convenient interface for real-time Quantum programming and testing.
Cirq will allow us to:
* Fine-tune control over Quantum circuits,
* Specify [gate][42] behavior using native gates,
* Place gates appropriately on the device, and
* Schedule the timing of these gates.
### The Open Science Perspective on Cirq
As we all know, Cirq is Open Source on GitHub. With its addition, the Open Source scientific communities, especially those focused on Quantum research, can now collaborate efficiently to solve the current challenges in Quantum Computing by developing new ways to reduce error rates and improve accuracy in existing Quantum models.
Had Cirq not followed an Open Source model, things would definitely have been a lot more challenging. A great initiative would have been missed, and we would not have come one step closer in the field of Quantum Computing.
### Summary
To summarize, we first introduced you to the concept of Quantum Computing by comparing it to existing Classical Computing techniques, followed by a very important video on recent developments in Quantum Computing since last year. We then briefly discussed Noisy Intermediate Scale Quantum, which is what Cirq is specifically built for.
We saw how to install and test Cirq on an Ubuntu system. We also tested the installation for usability in an IDE environment, with some resources to get started learning the concept.
Finally, we also saw two examples of how Cirq would be an essential advantage in the development of research in Quantum Computing, namely OpenFermion and Bristlecone. We concluded the discussion by highlighting some thoughts on Cirq with an Open Science Perspective.
We hope we were able to introduce you to Quantum Computing with Cirq in an easy-to-understand manner. If you have any feedback on this article, please let us know in the comments section. Thank you for reading, and we look forward to seeing you in our next Open Science article.
--------------------------------------------------------------------------------
via: https://itsfoss.com/qunatum-computing-cirq-framework/
作者:[Avimanyu Bandyopadhyay][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/avimanyu/
[1]:https://en.wikipedia.org/wiki/Subatomic_particle
[2]:https://en.wikipedia.org/wiki/Quantum
[3]:https://www.clerro.com/guide/491/quantum-superposition-and-entanglement-explained
[4]:https://physics.stackexchange.com/questions/148131/can-quantum-entanglement-and-quantum-superposition-be-considered-the-same-phenom
[5]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/bit-vs-qubit.jpg
[6]:http://www.rfwireless-world.com/Terminology/Difference-between-Bit-and-Qubit.html
[7]:https://en.wikipedia.org/wiki/Proof_by_exhaustion
[8]:https://www.explainthatstuff.com/how-supercomputers-work.html
[9]:https://arxiv.org/abs/1801.00862
[10]:https://www.xconomy.com/san-francisco/2018/07/19/google-partners-with-zapata-on-open-source-quantum-computing-effort/
[11]:https://www.zapatacomputing.com/about/
[12]:https://github.com/quantumlib/Cirq
[13]:https://itsfoss.com/python-setup-linux/
[14]:https://virtualenv.pypa.io
[15]:https://cirq.readthedocs.io/en/latest/install.html#installing-on-linux
[16]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/cirq-framework-linux.jpeg
[17]:https://pypi.org/project/pip/
[18]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/cirq-test-output.jpg
[19]:https://itsfoss.com/install-pycharm-ubuntu/
[20]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/cirq-tested-on-pycharm.jpg
[21]:https://github.com/quantumlib/Cirq/tree/master/examples
[22]:https://github.com/quantumlib/Cirq/blob/master/docs/tutorial.md
[23]:http://mmrc.amss.cas.cn/tlb/201702/W020170224608149940643.pdf
[24]:http://openfermion.org
[25]:https://en.wikipedia.org/wiki/Fermion
[26]:https://en.wikipedia.org/wiki/Particle_physics
[27]:https://en.wikipedia.org/wiki/Fermi-Dirac_statistics
[28]:https://phys.org/news/2018-03-openfermion-tool-quantum-coding.html
[29]:https://en.wikipedia.org/wiki/Quantum_chemistry
[30]:https://en.wikipedia.org/wiki/Quantum_mechanics
[31]:https://ocw.mit.edu/courses/chemical-engineering/10-675j-computational-quantum-mechanics-of-molecular-and-extended-systems-fall-2004/lecture-notes/
[32]:https://techcrunch.com/2018/03/05/googles-new-bristlecone-processor-brings-it-one-step-closer-to-quantum-supremacy/
[33]:http://meetings.aps.org/Meeting/MAR18/Content/3475
[34]:https://en.wikipedia.org/wiki/Superconducting_quantum_computing
[35]:https://en.wikipedia.org/wiki/Quantum_error_correction
[36]:https://en.wikipedia.org/wiki/Scalability
[37]:https://research.googleblog.com/2015/03/a-step-closer-to-quantum-computation.html
[38]:https://research.googleblog.com/2017/10/announcing-openfermion-open-source.html
[39]:https://research.googleblog.com/2016/06/quantum-annealing-with-digital-twist.html
[40]:https://arxiv.org/abs/1802.06002
[41]:https://www.computerworld.com.au/article/644051/google-launches-quantum-framework-cirq-plans-bristlecone-cloud-move/
[42]:https://en.wikipedia.org/wiki/Logic_gate

View File

@ -0,0 +1,91 @@
How to Play Windows-only Games on Linux with Steam Play
======
The new experimental feature of Steam allows you to play Windows-only games on Linux. Here's how to use this feature in Steam right now.
You have heard the news. The game distribution platform [Steam is implementing a fork of WINE to allow you to play games that are available on Windows only][1]. This is definitely great news for us Linux users, who have long complained about the small number of games available for Linux.
This new feature is still in beta, but you can try it out and play Windows-only games on Linux right now. Let's see how to do that.
### Play Windows-only games in Linux with Steam Play
![Play Windows-only games on Linux][2]
You need to install Steam first. Steam is available for all major Linux distributions. I have written in detail about [installing Steam on Ubuntu][3], and you may refer to that article if you don't have Steam installed yet.
Once you have Steam installed and you have logged into your Steam account, it's time to see how to enable Windows games in the Steam Linux client.
#### Step 1: Go to Account Settings
Run Steam client. On the top left, click on Steam and then on Settings.
![Enable steam play beta on Linux][4]
#### Step 2: Opt in to the beta program
In the Settings, select Account from left side pane and then click on the CHANGE button under Beta participation.
![Enable beta feature in Steam Linux][5]
You should select Steam Beta Update here.
![Enable beta feature in Steam Linux][6]
Once you save the settings here, Steam will restart and download the new beta updates.
#### Step 3: Enable Steam Play beta
Once Steam has downloaded the new beta updates, it will be restarted. Now you are almost set.
Go to Settings once again. You'll now see a new option, Steam Play, in the left side pane. Click on it and check the boxes:
* Enable Steam Play for supported titles (You can play the whitelisted Windows-only games)
* Enable Steam Play for all titles (You can try to play all Windows-only games)
![Play Windows games on Linux using Steam Play][7]
I don't remember if Steam restarts again at this point, but that's a minor detail. You should now see the option to install Windows-only games on Linux.
For example, I have Age of Empires in my Steam library, which is normally not available on Linux. But after I enabled Steam Play beta for all Windows titles, it now gives me the option to install Age of Empires on Linux.
![Install Windows-only games on Linux using Steam][8]
Windows-only games can now be installed on Linux
### Things to know about Steam Play beta feature
There are a few things you should know and keep in mind about using Windows-only games on Linux with Steam Play beta.
* At present, [only 27 Windows-games are whitelisted][9] for Steam Play. These whitelisted games work seamlessly on Linux.
* You can try any Windows game with Steam Play beta, but it might not work all the time. Some games will crash sometimes, while some games might not run at all.
* While in beta, you won't see the Windows-only games available for Linux in the Steam store. You'll have to either try the game on your own or refer to [this community-maintained list][10] to see the compatibility status of a given Windows game. You can also contribute to the list by filling out [this form][11].
* If you have games downloaded on Windows via Steam, you can save some download data by [sharing Steam game files between Linux and Windows][12].
I hope this tutorial helped you in running Windows-only games on Linux. Which game(s) are you looking forward to playing on Linux?
--------------------------------------------------------------------------------
via: https://itsfoss.com/steam-play/
作者:[Abhishek Prakash][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[1]:https://itsfoss.com/steam-play-proton/
[2]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/play-windows-games-on-linux-featured.jpeg
[3]:https://itsfoss.com/install-steam-ubuntu-linux/
[4]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/enable-steam-play-beta.jpeg
[5]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/enable-steam-play-beta-2.jpeg
[6]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/enable-steam-play-beta-3.jpeg
[7]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/enable-steam-play-beta-4.jpeg
[8]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/install-windows-games-linux.jpeg
[9]:https://steamcommunity.com/games/221410
[10]:https://docs.google.com/spreadsheets/d/1DcZZQ4HL_Ol969UbXJmFG8TzOHNnHoj8Q1f8DIFe8-8/htmlview?sle=true#
[11]:https://docs.google.com/forms/d/e/1FAIpQLSeefaYQduMST_lg0IsYxZko8tHLKe2vtVZLFaPNycyhY4bidQ/viewform
[12]:https://itsfoss.com/share-steam-files-linux-windows/

View File

@ -0,0 +1,146 @@
我从编程面试中学到的
============================================================
![](https://cdn-images-1.medium.com/max/1250/1*DXPdaGPM4oM6p5nSkup7IQ.jpeg)
聊聊白板编程面试
在2017年我参加了[Grace Hopper Celebration][1]计算机行业中的女性这一活动。这个活动是这类科技活动中最大的一个。共有17,000名女性IT工作者参加。
这个会议有个大型的配套招聘会会上有招聘公司来面试会议参加者。有些人甚至现场拿到offer。我在现场晃荡了一下注意到一些应聘者看上去非常紧张忧虑。我还隐隐听到应聘者之间的谈话其中一些人谈到在面试中做的并不好。
我走近我听到谈话的那群人并和她们聊了起来并给了一些面试上的小建议。我想我的建议还是比较偏基本的,如“(在面试时)一开始给出个能工作的解决方案也还说的过去”之类的,但是当她们听到我的一些其他的建议时还是颇为吃惊。
为了能帮到更多像她们一样的白板面试者,我收集了一些过去对我有用的小点子,这些小点子我已经发表在了[podcast episode][2]上。它们也是这篇文章的主题。
为了实习生职位和全职工作,我做过很多次的面试。当我还在大学主修计算机科学时,学校每个秋季学期都有招聘会,第一轮招聘会在校园里举行。(我在第一和最后一轮都搞砸过。)不过,每次面试后,我都会反思哪些方面我能做的更好,我还会和朋友们做模拟面试,这样我就能从他们那儿得到更多的面试反馈。
不管我们是通过工作中介、人际网络,还是学校招聘来找工作,招聘流程中都会涉及到技术面试。
近年来,我注意到了一些新的不同的面试形式出现了:
* 与招聘方的一位工程师结对编程
* 网络在线测试及在线编码
* 白板编程LCTT 译者注:这种形式应该不新了)
我将重点谈谈白板面试,这种形式我经历的最多。我有过很多次面试,有些挺不错的,有些被我搞砸了。
#### 我做错的地方
首先,我想回顾一下我做的不好的地方。知错能改,善莫大焉。
当面试官提出一个要我解决的问题时,我立即马上立刻开始在白板上写代码,_什么都不问。_
这里我犯了两个错误:
#### 没有澄清对解决问题有关键作用的信息
比如,我们是否只用处理数字或者字符串?我们要支持多种数据类型吗?如果你在开始解题前不去问这些问题的话,你的面试官会有一种不好的印象:这个人在我们公司的话,他不会在开始项目工作之前不问清楚到底要做什么。而这恰恰是在工作场合很重要的一个工作习惯。公司可不像学校,你在开始工作前可不会得到写有所有详细步骤的作业说明。你得靠自己找到这些步骤并自己定义他们。
#### 只会默默思考,不去记录想法或和面试官沟通
在面试中,很多时候我也会傻傻站在那思考,什么都不写。我和一个朋友模拟面试的时候,他告诉我,因为他曾经和我一起工作过,所以他知道我是在思考;但如果他是个陌生的面试官,他会觉得我正站在那冥思苦想、毫无头绪。不要急匆匆地直奔解题而去,这一点很重要。花点时间多想想各种解题的可能性,有时候面试官也会乐意和你一起探索解题的步骤。不管怎样,这就是在一家公司开工作会议的普遍方式:大家各抒己见,一起讨论如何解决问题。
### 想到一个解题方法
在你开始写代码之前,如果你能总结一下要使用到的算法就太棒了。不要上来就写代码并认为你的代码肯定能解决问题。
这是对我管用的步骤:
1. 头脑风暴
2. 写代码
3. 处理错误路径
4. 测试
#### 1\. 头脑风暴
对我来说,我会首先通过一些例子来视觉化我要解决的问题。比如说如果这个问题和数据结构中的树有关,我就会从树底层的空节点开始思考,如何处理一个节点的情况呢?两个节点呢?三个节点呢?这能帮助你从具体例子里抽象出你的解决方案。
在白板上先写下你的算法要做的事情列表。这样做,你往往能在开始写代码前就发现 bug 和缺陷,不过你可得掌握好时间。我犯过的一个错误是,我花了过多的时间在澄清问题和头脑风暴上,最后几乎没有留下时间写代码,这样你的面试官可能没有机会看到你在白板上写代码,这可太糟了。你可以带块手表,或者房间里有钟的话,你也可以抬头看看时间。有些时候面试官会提醒你你已经得到了所有的信息,这时你就不要再问别的了,例如:“我想我们已经把所有需要的信息都澄清了,让我们开始写代码实现吧。”
#### 2\. 开始写代码,一气呵成
如果你还没有得到问题的完美解决方法,从最原始的解法开始总是可以的。当你在向面试官解释最显而易见的解法时,你要想想怎么去完善它,并指明这种做法是最原始的、未经优化的。(请熟悉算法中的 O() 概念,这对面试非常有用。)在向面试官提交前,请仔细检查你的解决方案两三遍。面试官有时会给你些提示,“还有更好的方法吗?”,这意味着面试官在提示你还有更优化的解决方案。
#### 3\. 错误处理
当你在编码时,对你想做错误处理的代码行做个注释。当面试官说“很好,这里你想到了错误处理。你想怎么处理呢?抛出异常还是返回错误码?”时,这将给你一个机会去引出关于代码质量的一番讨论。当然,这种地方提出几个就够了。有时,面试官为了节省编码的时间,会告诉你可以假设外界输入的参数都已经通过了校验。不管怎样,你都要展现你对错误处理和代码质量重要性的认识。
#### 4\. 测试
在编码完成后,用你在前面头脑风暴中写的用例,在你脑子里“跑”一下你的代码,确定万无一失。例如你可以说:“让我用前面写下的树的例子来跑一下我的代码,如果是一个节点是什么结果,如果是两个节点是什么结果……”
在你结束之后,面试官有时会问你将会怎么测试你的代码,会涉及什么样的测试用例。我建议你用下面不同的分类来组织你的测试用例:
一些分类可以为:
1. 性能
2. 错误用例
3. 期望的正常用例
对于性能测试,要考虑极端数量下的情况。例如,如果问题是关于列表的,你可以说你将会用一个非常大的列表以及一个非常小的列表来测试;如果和数字有关,你将会测试系统中的最大整数和最小整数。我建议读一些有关软件测试的书来得到更多的知识,在这个领域我最喜欢的书是 [How We Test Software at Microsoft][3]。
对于错误用例,想一下什么是期望的错误情况并一一写下。
对于正向期望用例,想想用户需求是什么?你的解决方案要解决什么问题?这些都可以成为正向期望用例。
### “你还有什么要问我的吗?”
面试最后总是会留几分钟给你问问题。我建议你在面试前写下你想问的问题。千万别说“我没什么问题了”,就算你觉得面试砸了,或者你对这家公司不怎么感兴趣,你也总有些东西可以问问。你甚至可以问面试官他最喜欢自己工作的哪一点,最讨厌哪一点;或者你可以问问面试官的工作具体是什么,在用什么技术和实践。不要因为觉得自己在面试中做得不好而心灰意冷,从而什么问题都不问。
### 申请一份工作
关于找工作申请工作,有人曾经告诉我,你应该去找你真正有激情工作的地方。去找一家你喜欢的公司,或者你喜欢使用的产品,看看你能不能去那儿工作。
我个人并不推荐你用上述的方法去找工作。你会排除很多很好的公司,特别是你是在找实习工作或者入门级的职位时。
你也可以集中在其他的一些目标上,如:我想从这份工作里得到哪方面的更多经验?这份工作是关于云计算、Web 开发,还是人工智能?在招聘会上与招聘公司沟通时,看看他们的工作岗位有没有涉及这些领域的。你可能会在一家并不在你“想去公司列表”上的公司或非盈利机构里,找到你想找的职位。
#### 换组
在这家公司里的第一个组里呆了一年半以后我觉得是时候去探索一下不同的东西了。我找到了一个我喜欢的组并进行了4轮面试。结果我搞砸了。
我什么都没有准备,甚至都没在白板上练练手。我当时的逻辑是,我都已经在一家公司干了快 2 年了,我还需要练什么?我完全错了,我在接下来的白板面试中跌跌撞撞:我的板书写得太小,而且因为没有从最左上角开始写代码,我的代码大大超出了一个白板的空间,这些都导致了白板面试的失败。
我在面试前也没有刷过数据结构和算法题。如果我做了的话,我将会在面试中更有信心。就算你已经在一家公司担任了软件工程师,在你去另外一个组面试前,我强烈建议你在一块白板上演练一下如何写代码。
对于换项目组这件事,如果你是在公司内部换组的话,事先能同那个组的人非正式聊聊会很有帮助。对于这一点,我发现几乎每个人都很乐于和你一起吃个午饭。人一般都会在中午有空,约不到人或者别人正好有会议冲突的风险会很低。这是一种非正式的途径来了解你想去的组正在干什么,以及这个组成员个性是怎么样的。相信我, 你能从一次午餐中得到很多信息,这可会对你的正式面试帮助不小。
非常重要的一点是,你在面试一个特定的组时,就算你在面试中表现得很好,也很可能因为文化不契合的原因拿不到 offer。这也是为什么我一开始就想去见见组里不同的人的原因虽然有时这也不太可能。我希望你不要被一次拒绝所击倒请保持开放的心态选择新的机会并多多练习。
以上内容来自["Programming interviews"][4] 章节,选自 [The Women in Tech Show: Technical Interviews with Prominent Women in Tech][5]
--------------------------------------------------------------------------------
作者简介:
微软研究院Software Engineer II, www.thewomenintechshow.com站长所有观点都只代表本人意见。
------------
via: https://medium.freecodecamp.org/what-i-learned-from-programming-interviews-29ba49c9b851
作者:[Edaena Salinas][a]
译者:[DavidChenLiang](https://github.com/DavidChenLiang)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://medium.freecodecamp.org/@edaenas
[1]:https://anitab.org/event/2017-grace-hopper-celebration-women-computing/
[2]:https://thewomenintechshow.com/2017/12/18/programming-interviews/
[3]:https://www.amazon.com/How-We-Test-Software-Microsoft/dp/0735624259
[4]:https://thewomenintechshow.com/2017/12/18/programming-interviews/
[5]:https://thewomenintechshow.com/

View File

@ -1,176 +0,0 @@
逐层拼接云原生栈
======
> 看着我们在纽约的办公大楼,我们发现了一种观察不断变化的云原生领域的完美方式。
在 Packet我们的工作价值基础设施自动化是非常基础的。因此我们花费大量的时间来研究我们之上所有生态系统中的参与者和趋势 —— 以及之下的极少数!
当你在任何生态系统的汪洋大海中徜徉时,很容易困惑或迷失方向。我知道这是事实,因为当我去年进入 Packet 工作时,从 Bryn Mawr 获得的英语学位,并没有让我完全得到一个 Kubernetes 的认证。 :)
由于它超快的演进和巨大的影响,云原生生态系统打破了先例。似乎每眨一次眼睛,之前全新的技术(更不用说所有相关的理念了)就变得有意义 ... 或至少有趣了。和其他许多人一样,我依据无处不在的 CNCF 的 “[云原生蓝图][1]” 作为我去了解这个空间的参考标准。尽管如此,如果有一个定义这个生态系统的元素,那它一定是贡献和控制它们的人。
所以,在 12 月份一个很冷的下午,当我们走回办公室时,我们偶然发现了一个给投资人解释“云原生”的创新方式,当我们谈到从 Aporeto 中区分 Cilium 的细微差别时,以及为什么从 CoreDNS 和 Spiffe 到 Digital Rebar 和 Fission 的所有这些都这么有趣时,他的眼睛中充满了兴趣。
在新世贸中心的阴影下,看到我们位于 13 层的狭窄办公室我突然想到一个把我们带到那个神奇世界的好主意为什么不把它画出来呢LCTT 译注“rabbit hole” 有多种含义,此处采用“爱丽丝梦游仙境”中的“兔子洞”含义。)
![][2]
于是我们开始了把云原生栈逐层拼接起来的旅程。让我们一起探索它给你一个“仅限今日有效”的福利。LCTT 译注:意即云原生领域变化很快,可能本文/本图中所述很快过时。)
[[查看高清大图][3]] 25Mb或给我们发邮件索取副本。
### 从最底层开始
当我们开始下笔的时候,我们知道,我们希望首先亮出的是我们每天都与之交互的而对用户却是基本上不可见的那一部分:硬件。就像任何投资于下一个伟大的(通常是私有的)东西的秘密实验室一样,我们认为地下室是其最好的地点。
从大家公认的像 Intel、AMD 和华为(传言他们雇佣的工程师接近 80000 名)这样的巨头,到像 Mellanox 这样的细分市场参与者,硬件生态系统现在非常火。事实上,随着数十亿美元投入去攻克新的 offloadLCTT 译注:网卡的 offload 特性是将本来该操作系统进行的一些诸如数据包分片、重组等处理任务放到网卡硬件中去做,降低系统 CPU 消耗的同时提高处理的性能、GPU、定制协处理器我们可能正在进入硬件的黄金时代。
著名的软件先驱<ruby>阿伦凯<rt>Alan Kay</rt></ruby> 在 25 年前说过:“对软件非常认真的人都应该去制造他自己的硬件” ,为阿伦凯打 call
### 云即资本
就像我们的 CEO Zac Smith 多次告诉我:所有都是钱的问题。不仅要制造它,还要消费它!在云中,数十亿美元的投入才能让数据中心出现计算机,这样才能让开发者使用软件去消费它。换句话说(根本没云,它只是别人的电脑而已):
![][4]
我们认为,对于“银行”(即能让云运转起来的借款人或投资人)来说最好的位置就是一楼。因此我们将大堂改造成银行家的咖啡馆,以便为所有的创业者提供幸运之轮。
![][5]
### 连通和动力
如果金钱是燃料,那么消耗大量燃料的引擎就是数据中心供应商和连接它们的网络。我们称他们为“动力”和“连通”。
从像 Equinix 这样处于核心的和像 Vapor.io 这样的接入新贵,到 Verizon、Crown Castle 和其它的处于地下(或海底)的“管道”,这是我们所有的栈都依赖但很少有人能看到的一部分。
因为我们花费大量的时间去研究数据中心和连通性,需要注意的一件事情是,这一部分的变化非常快,尤其是在 5G 正式商用时,某些负载开始不再那么依赖中心化的基础设施了。
接入即将到来! :-)
![][6]
### 嗨,它就是基础设施!
居于“连接”和“动力”之上的这一层,我们爱称为“处理器们”。这是奇迹发生的地方 —— 我们将来自下层的创新和实物投资转变成一个 API 尽头的某些东西。
由于这是纽约的一个大楼我们让在这里的云供应商处于纽约的中心。这就是为什么你会看到Digital Ocean 系的)鲨鱼 Sammy以及 Google 出现在会客室中的原因了。
正如你所见,这个场景是非常写实的。它就是一垛一垛堆起来的。尽管我们爱 EWR1 的设备经理Michael Pedrazzini我们努力去尽可能减少这种体力劳动。毕竟布线专业的博士学位是很难拿到的。
![][7]
### 供给
再上一层,在基础设施层之上是供给层。这是我们最喜欢的地方之一,它以前被我们称为“配置管理”。但是现在到处都是一开始就是<ruby>不可变基础设施<rt>immutable infrastructure</rt></ruby>和自动化Terraform、Ansible、Quay.io 等等类似的东西。你可以看出软件是按它的方式来工作的,对吗?
Kelsey Hightower 最近写道“呆在无聊的基础设施中是一个让人兴奋的时刻”,我不认为它说的是物理部分(虽然我们认为它非常让人兴奋),但是由于软件持续侵入到栈的所有层,那必将是一个疯狂的旅程。
![][8]
### 操作系统
供应就绪后,我们来到操作系统层。在这里你可以看到我们打趣一些我们最喜欢的同事:注意上面 Brian Redbeard 的瑜珈姿势。:)
Packet 为我们的客户提供了 11 种主要的操作系统可供选择包括一些你在图中看到的Ubuntu、CoreOS、FreeBSD、Suse、和各种 Red Hat 的作品。我们看到越来越多的人们在这一层上有了他们自己的看法:从定制的内核和用于不可变部署中的惯用发行版光盘,到像 NixOS 和 LinuxKit 这样的项目。
![][9]
### 运行时
为了有趣些,我们将运行时放在了体育馆内,并为 CoreOS 赞助的 rkt 和 Docker 的容器化举行了一次比赛。而无论如何赢家都是 CNCF
我们认为快速演进的存储生态系统应该是一些可上锁的储物柜。关于存储部分有趣的地方在于许多的新玩家尝试去解决持久性的挑战问题,以及性能和灵活性问题。就像他们说的:存储很简单。
![][10]
### 编排
在过去的这些年里,编排层全是 Kubernetes 了因此我们选取了其中一位著名的布道者Kelsey Hightower并在这个古怪的会议场景中给他一个特写。在我们的团队中有一些 Nomad LCTT 译注:一个管理机器集群并在集群上运行应用程序的工具)的忠实粉丝,并且如果抛开 Docker 和它的工具集的影响,就无从谈起云原生。
虽然负载编排应用程序在我们栈中的地位非常高,我们看到的各种各样的证据表明,这些强大的工具开始去深入到栈中,以帮助用户利用 GPU 和其它特定硬件的优势。请继续关注 —— 我们正处于容器化革命的早期阶段!
![][11]
### 平台
这是栈中我们喜欢的层之一,因为每个平台都有如此多的工具帮助用户去完成他们想要做的事情(顺便说一下,不是去运行容器,而是运行应用程序)。从 Rancher 和 Kontena到 Tectonic 和 Redshift 都是像 Cycle.io 和 Flynn.io 一样是完全不同的方法 —— 我们看到这些项目如何以不同的方式为用户提供服务,总是激动不已。
关键点:这些平台是帮助去转化各种各样的快速变化的云原生生态系统给用户。很高兴能看到他们每个人带来的东西!
![][12]
### 安全
当说到安全时,今年真是很忙的一年!我们尝试去展示一些很著名的攻击,并说明随着工作负载变得更加分散和更加可迁移(当然,同时攻击者也变得更加智能),这些各式各样的工具是如何去帮助保护我们的。
我们看到一个用于不可信环境(如 Aporeto和低级安全Cilium的强大动作以及尝试在网络级别上的像 Tigera 这样的可信方法。不管你的方法如何,记住这一点:安全无止境。:0
![][13]
### 应用程序
如何去表示海量的、无限的应用程序生态系统?在这个案例中,很容易:我们在纽约,选我们最喜欢的。;) 从 Postgres “房间里的大象” 和 Timescale 时钟,到暗藏的 ScyllaDB 垃圾桶和无所事事的《特拉维斯兄弟》—— 我们把这个片子拼到一起很有趣。
让我们感到很惊奇的一件事情是:很少有人注意到那个复印他的屁股的家伙。我想现在复印机已经不常见了吧?
![][14]
### 可观测性
由于我们的工作负载开始到处移动,规模也越来越大,这里没有一件事情能够像一个非常好用的 Grafana 仪表盘、或方便的 Datadog 代理让人更加欣慰了。由于复杂度的提升“SRE” 一代开始越来越多地依赖警报和其它智能事件去帮我们感知发生的事件,出现越来越多的自我修复的基础设施和应用程序。
在未来的几个月或几年中,我们将看到什么样的公司进入这一领域 … 或许是一些人工智能、区块链、机器学习支撑的仪表盘?:-)
![][15]
### 流量管理
人们往往认为互联网“就该这样工作”,但事实上,我们很惊讶于它能工作。我的意思是,这是大规模的、不同的网络间的松散连接 —— 你不是在开玩笑吧?
能够把所有的这些独立的网络拼接到一起的一个原因是流量管理、DNS 和类似的东西。随着规模越来越大,这些让互联网变得更快、更安全、同时更具弹性。我们尤其高兴的是看到像 Fly.io 和 NS1 这样的新贵与优秀的老牌玩家进行竞争,最后的结果是整个生态系统都得以提升。让竞争来的更激烈吧!
![][16]
### 用户
如果没有非常棒的用户,技术栈还有什么用呢?确实,他们享受了大量的创新,但在云原生的世界里,他们所做的远不止消费这么简单:他们创立并贡献了很多。从像 Kubernetes 这样的大量的贡献者到越来越多的(但同样重要)更多方面,而我们都是其中的非常棒的一份子。
在我们屋顶的客厅上的许多用户,比如 Ticketmaster 和《纽约时报》,而不仅仅是新贵:这些组织采用了一种新的方式去部署和管理他们的应用程序,并且他们自己的用户正在收获回报。
![][17]
### 同样重要的,成熟的监管!
在以前的生态系统中,基金会扮演了一个非常被动的“幕后”角色。而 CNCF 不是!他们的目标(构建一个健壮的云原生生态系统),勇立潮流之先 —— 他们不仅已迎头赶上还一路领先。
从坚实的治理和经过深思熟虑的项目组,到提出像 CNCF 这样的蓝图CNCF 横跨云 CI、Kubernetes 认证、和演讲者委员会 —— CNCF 已不再是 “仅仅” 受欢迎的 KubeCon + CloudNativeCon 了。
--------------------------------------------------------------------------------
via: https://www.packet.net/blog/splicing-the-cloud-native-stack/
作者:[Zoe Allen][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[qhwdw](https://github.com/qhwdw)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.packet.net/about/zoe-allen/
[1]:https://landscape.cncf.io/landscape=cloud
[2]:https://assets.packet.net/media/images/PIFg-30.vesey.street.ny.jpg
[3]:https://www.dropbox.com/s/ujxk3mw6qyhmway/Packet_Cloud_Native_Building_Stack.jpg?dl=0
[4]:https://assets.packet.net/media/images/3vVx-there.is.no.cloud.jpg
[5]:https://assets.packet.net/media/images/X0b9-the.bank.jpg
[6]:https://assets.packet.net/media/images/2Etm-ping.and.power.jpg
[7]:https://assets.packet.net/media/images/C800-infrastructure.jpg
[8]:https://assets.packet.net/media/images/0V4O-provisioning.jpg
[9]:https://assets.packet.net/media/images/eMYp-operating.system.jpg
[10]:https://assets.packet.net/media/images/9BII-run.time.jpg
[11]:https://assets.packet.net/media/images/njak-orchestration.jpg
[12]:https://assets.packet.net/media/images/1QUS-platforms.jpg
[13]:https://assets.packet.net/media/images/TeS9-security.jpg
[14]:https://assets.packet.net/media/images/SFgF-apps.jpg
[15]:https://assets.packet.net/media/images/SXoj-observability.jpg
[16]:https://assets.packet.net/media/images/tKhf-traffic.management.jpg
[17]:https://assets.packet.net/media/images/7cpe-users.jpg

View File

@ -1,195 +0,0 @@
如何确定你的Linux发行版中有没有某个软件包
======
![](https://www.ostechnix.com/wp-content/uploads/2018/06/Whohas-720x340.png)
有时,你可能会想知道如何在你的 Linux 发行版上寻找一个特定的软件包,或者你仅仅只是想知道安装在你的 Linux 上的软件包是什么版本。如果这就是你想知道的信息,你今天走运了。我正好知道一个小工具能帮你抓到上述信息,下面隆重推荐 “Whohas”这是一个命令行工具它能一次查询好几个软件包列表以检查你的软件包是否存在。目前 whohas 支持 Arch、Debian、Fedora、Gentoo、Mandriva、openSUSE、Slackware、Source Mage、Ubuntu、FreeBSD、NetBSD、OpenBSDLCTT 译注:*BSD 不是 Linux、Fink、MacPorts 和 Cygwin。使用这个小工具软件包的维护者能轻而易举地从别的 Linux 发行版里找到 ebuilds、pkgbuilds 等类似的包定义文件。
'Whohas'是用Perl语言开发的免费开源的工具。
### 在你的 Linux 中寻找一个特定的包
**安装 Whohas**
Whohas 在 Debian、Ubuntu、Linux Mint 的默认软件仓库里提供。如果你正在使用某种基于 DEB 的系统,你可以用如下命令安装:
```
$ sudo apt-get install whohas
```
对基于 Arch 的系统,[**AUR**][1] 里就有提供 whohas。你能使用任何的 AUR 助手程序来安装。
使用 [**Packer**][2]:
```
$ packer -S whohas
```
或使用[**Trizen**][3]:
```
$ trizen -S whohas
```
使用[**Yay**][4]:
```
$ yay -S whohas
```
使用[**Yaourt**][5]:
```
$ yaourt -S whohas
```
在别的 Linux 发行版上,可以从[**这里**][6]下载源代码并手工编译安装。
**使用方法**
Whohas的主要目标是想让你知道
* 哪个 Linux 发行版提供了用户依赖的包。
* 对于各个Linux发行版指定的软件包是什么版本或者在这个Linux发行版的各个不同版本上指定的软件包是什么版本。
让我们试试看上面的功能,比如说,哪个 Linux 发行版里有 **vim** 这个软件?我们可以运行如下命令:
```
$ whohas vim
```
这个命令将会显示所有包含可安装的 vim 的 Linux 发行版的信息,包括包的大小、仓库地址和下载 URL。
![][8]
你甚至可以通过管道将输出的结果按照发行版的字母顺序排序,只需加入 sort 命令即可。
```
$ whohas vim | sort
```
请注意,上述命令将会显示所有以 **vim** 开头的软件包,包括 vim-spell、vimcommander、vimpager 等等。你可以继续使用 Linux 的 grep 命令,在 vim 的前后加上空格来缩小你的搜索范围,直到满意为止。
```
$ whohas vim | sort | grep " vim"
$ whohas vim | sort | grep "vim "
$ whohas vim | sort | grep " vim "
```
将空格放在包名字前面的搜索,将会显示以该名字开头的包;将空格放在包名字后面的搜索,将会显示以该名字结尾的包;前后都有空格则会严格匹配。
又或者,你可以使用 '--strict' 选项来严格限制结果。
```
$ whohas --strict vim
```
有时,你想知道一个包在不在一个特定的 Linux 发行版里。例如,想知道 vim 是否在 Arch Linux 里,请运行:
```
$ whohas vim | grep "^Arch"
```
LCTT 译注:在结果里搜索以 Arch 开头的行)
Linux 发行版的命名缩写为:'archlinux'、'cygwin'、'debian'、'fedora'、'fink'、'freebsd'、'gentoo'、'mandriva'、'macports'、'netbsd'、'openbsd'、'opensuse'、'slackware'、'sourcemage' 和 'ubuntu'。
你也可以用**-d**选项来得到同样的结果。
```
$ whohas -d archlinux vim
```
这个命令将在仅仅Arch Linux发行版下搜索vim包。
如果要在多个 Linux 发行版(如 'archlinux'、'ubuntu')下搜索,请使用如下命令:
```
$ whohas -d archlinux,ubuntu vim
```
你甚至可以用whohas来查找哪个发行版有'whohas'包。
```
$ whohas whohas
```
更详细的信息,请参照手册。
```
$ man whohas
```
**最后的话**
当然,任何一个 Linux 发行版的包管理器都能轻松地在对应的软件仓库里找到自己管理的包。不过whohas 帮你整合并比较了不同 Linux 发行版下指定软件包的信息,这样你就能轻易地在平台之间进行比较。试一下 whohas你一定不会失望的。
好了,今天就到这里吧,希望前面讲的对你有用,下次我还会带来更多好东西!!
欧耶!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/find-if-a-package-is-available-for-your-linux-distribution/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[DavidChenLiang](https://github.com/davidchenliang)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://aur.archlinux.org/packages/whohas/
[2]:https://www.ostechnix.com/install-packer-arch-linux-2/
[3]:https://www.ostechnix.com/trizen-lightweight-aur-package-manager-arch-based-systems/
[4]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
[5]:https://www.ostechnix.com/install-yaourt-arch-linux/
[6]:http://www.philippwesche.org/200811/whohas/intro.html
[8]:http://www.ostechnix.com/wp-content/uploads/2018/06/whohas-1.png

View File

@ -1,32 +1,36 @@
Translating by DavidChenLiang
How To View Detailed Information About A Package In Linux
如何在 Linux 上检查一个包package的详细信息
======
This is know topic and we can write so many articles because most of the time we would stick with package managers for many reasons.
Each distribution clones has their own package manager, each has comes with their unique features that allow users to perform many actions such as installing new software packages, removing unnecessary software packages, updating the existing software packages, searching for specific software packages, and updating the system to latest available version, etc.
Whoever is sticking with command-line most of the time they would preferring the CLI based package managers. The major CLI package managers for Linux are Yum, Dnf, Rpm,Apt, Apt-Get, Deb, pacman and zypper.
我们可以就这个已经被广泛讨论的话题写出大量的文档,但大多数情况下,因为各种各样的原因,我们都愿意让包管理器package manager来帮我们做这些事情。
**Suggested Read :**
每个 Linux 发行版都有自己的包管理器,并且各自有不同的特性,这些特性包括允许用户执行安装新软件包、删除无用的软件包、更新现存的软件包、搜索某些具体的软件包,以及更新整个系统到最新版本之类的操作。
习惯于命令行的用户大多数时间都会使用基于命令行方式的包管理器。对于 Linux 而言,这些基于命令行的包管理器有 Yum、Dnf、Rpm、Apt、Apt-Get、Deb、pacman 和 zypper。
**推荐阅读**
**(#)** [List of Command line Package Managers For Linux & Usage][1]
**(#)** [A Graphical frontend tool for Linux Package Manager][2]
**(#)** [How To Search If A Package Is Available On Your Linux Distribution Or Not][3]
**(#)** [How To Add, Enable And Disable A Repository By Using The DNF/YUM Config Manager Command On Linux][4]
As a system administrator you should aware of from where packages are coming, which repository, version of the package, size of the package, release, package source url, license info, etc,.
This will help you to understand the package usage in simple way since its coming with package summary & Description. Run the below commands based on your distribution to get detailed information about given package.
作为一个系统管理员,你应该熟知以下事实:安装的包来自何方、具体来自哪个软件仓库、包的具体版本、包的大小、发行版的版本、包的源 URL、包的许可证信息等等。
### [YUM Command][5] : View Package Information On RHEL & CentOS Systems
YUM stands for Yellowdog Updater, Modified is an open-source command-line front-end package-management utility for RPM based systems such as Red Hat Enterprise Linux (RHEL) and CentOS.
这篇短文将用尽可能简单的方式帮你理解包管理器的用法,这些用法正是来自随包自带的总结和描述文件。按你所使用的 Linux 发行版的不同,运行下面相应的命令,你就能得到你所使用的发行版下的包的详细信息。
### [YUM 命令][5] : 在RHEL和CentOS系统上获得包的信息
YUM英文直译是“黄狗更新器修改版”是一个开源的、基于命令行的包管理器前端实用工具。它被用在基于 RPM 的系统上,例如 RHEL 和 CentOS。
Yum 是用于在官方发行版仓库以及其他第三方仓库下获取、安装、删除、查询和管理 RPM 包的主要工具。
Yum is the primary tool for getting, installing, deleting, querying, and managing RPM packages from distribution repositories, as well as other third-party repositories.
```
# yum info python
# yum info python LCTT 译注:用 yum info 获取 python 包的信息)
Loaded plugins: fastestmirror, security
Loading mirror speeds from cached hostfile
* epel: epel.mirror.constant.com
@ -60,11 +64,13 @@ Description : Python is an interpreted, interactive, object-oriented programming
```
### YUMDB Command : View Package Information On RHEL & CentOS Systems
### YUMDB 命令: 查看RHEL和CentOS系统上的包信息
yumdb info 这个命令提供与 yum info 相类似的信息,不过它还额外提供了诸如包校验值、包类型、用户信息(由何人安装)等内容。从 yum 3.2.26 版本开始yum 开始在 rpm 数据库之外储存额外的信息(下文输出中的 user 表示该包由用户安装,而 dep 则说明该包是作为依赖被安装的)。
Yumdb info provides information similar to yum info but additionally it provides package checksum data, type, user info (who installed the package). Since yum 3.2.26 yum has started storing additional information outside of the rpmdatabase (where user indicates it was installed by the user, and dep means it was brought in as a dependency).
```
# yumdb info python
# yumdb info python LCTT 译注:用 yumdb info 来获取 python 包的信息)
Loaded plugins: fastestmirror
python-2.6.6-66.el6_8.x86_64
changed_by = 4294967295
@ -81,11 +87,13 @@ python-2.6.6-66.el6_8.x86_64
```
### [RPM Command][6] : View Package Information On RHEL/CentOS/Fedora Systems
### [RPM 命令][6] : 在RHEL/CentOS/Fedora系统上查看包的信息
RPM英文直译为红帽包管理器是一个用在 RedHat 及其变种发行版(如 RHEL、CentOS、Fedora、openSUSE、Mageia上的功能强大的命令行包管理工具。它能让你轻松地安装、升级、删除、查询以及校验你的系统或服务器上的软件。RPM 文件以 .rpm 结尾。RPM 包由它所依赖的软件库以及其他依赖构成,它不会与系统上已经安装的包冲突。
RPM stands for Red Hat Package Manager is a powerful, command line Package Management utility for Red Hat based system such as (RHEL, CentOS, Fedora, openSUSE & Mageia) distributions. The utility allow you to install, upgrade, remove, query & verify the software on your Linux system/server. RPM files comes with .rpm extension. RPM package built with required libraries and dependency which will not conflicts other packages were installed on your system.
```
# rpm -qi nano
# rpm -qi nano LCTT 译注:用 rpm -qi 查询 nano 包的具体信息)
Name : nano Relocations: (not relocatable)
Version : 2.0.9 Vendor: CentOS
Release : 7.el6 Build Date: Fri 12 Nov 2010 02:18:36 AM EST
@ -101,11 +109,13 @@ GNU nano is a small and friendly text editor.
```
### [DNF Command][7] : View Package Information On Fedora System
### [DNF 命令][7] : 在 Fedora 系统上查看包信息
DNF 意指“时髦版的 Yum”。我们也可以认为 DNF 是下一代的 YUM 包管理器Yum 的一个分支),它在后台使用了 hawkey/libsolv 库。Aleš Kozumplík 从 Fedora 18 开始开发 DNF最终在 Fedora 22 上正式发布。DNF 命令用来在 Fedora 22 及以后的系统上安装、更新、搜索以及删除包,它能自动解决包安装过程中的依赖问题,使安装过程顺畅无忧。
DNF stands for Dandified yum. We can tell DNF, the next generation of yum package manager (Fork of Yum) using hawkey/libsolv library for backend. Aleš Kozumplík started working on DNF since Fedora 18 and its implemented/launched in Fedora 22 finally. Dnf command is used to install, update, search & remove packages on Fedora 22 and later system. It automatically resolve dependencies and make it smooth package installation without any trouble.
```
$ dnf info tilix
$ dnf info tilix LCTT 译注:用 dnf info 查看 tilix 的包信息)
Last metadata expiration check: 27 days, 10:00:23 ago on Wed 04 Oct 2017 06:43:27 AM IST.
Installed Packages
Name : tilix
@ -139,11 +149,13 @@ Description : Tilix is a tiling terminal emulator with the following features:
```
### [Zypper Command][8] : View Package Information On openSUSE System
### [Zypper 命令][8] : 在openSUSE系统上查看包信息
Zypper 是一个使用 libzypp 库的命令行包管理器。Zypper 提供诸如软件仓库访问、依赖解决、软件包安装等功能。
Zypper is a command line package manager which makes use of libzypp. Zypper provides functions like repository access, dependency solving, package installation, etc.
```
$ zypper info nano
$ zypper info nano LCTT 译注:用 zypper info 查询 nano 的信息)
Loading repository data...
Reading installed packages...
@ -167,11 +179,12 @@ Description :
```
### [pacman Command][9] : View Package Information On Arch Linux & Manjaro Systems
### [pacman 命令][9] : 在 Arch Linux 及 Manjaro 系统上查看包信息
Pacman 意为包管理器实用工具package manager utility。pacman 是一个用于安装、构建、删除、管理 Arch Linux 上的包的命令行工具。它使用 libalpmArch Linux Package Management (ALPM) 库)作为后端来完成所有功能。
Pacman stands for package manager utility. pacman is a simple command-line utility to install, build, remove and manage Arch Linux packages. Pacman uses libalpm (Arch Linux Package Management (ALPM) library) as a back-end to perform all the actions.
```
$ pacman -Qi bash
$ pacman -Qi bash LCTT 译注:用 pacman -Qi 来查询 bash
Name : bash
Version : 4.4.012-2
Description : The GNU Bourne Again shell
@ -203,11 +216,14 @@ Validated By : Signature
```
### [Apt-Cache Command][10] : View Package Information On Debian/Ubuntu/Mint Systems
### [Apt-Cache 命令][10] : 在 Debian/Ubuntu/Mint 系统上查看包信息
apt-cache 命令能显示 APT 内部数据库中的大量信息。这些信息是从 sources.list 中列出的不同软件源中搜集而来的,因此从某种意义上说,这些信息也可以被认为是某种缓存。
这些信息搜集工作是在运行apt update命令时执行的。
The apt-cache command can display much of the information stored in APTs internal database. This information is a sort of cache since it is gathered from the different sources listed in the sources.list file. This happens during the apt update operation.
```
$ sudo apt-cache show apache2
$ sudo apt-cache show apache2 LCTT 译注:用管理员权限查询 apache2 的信息)
Package: apache2
Priority: optional
Section: web
@ -244,11 +260,13 @@ Task: lamp-server, mythbuntu-frontend, mythbuntu-desktop, mythbuntu-backend-slav
```
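作为补充,`apt-cache policy` 可以显示某个包的已安装版本、候选版本及其来源仓库(示例,假设系统中配置了 apache2 所在的软件源):

```
$ apt-cache policy apache2   # 查看 apache2 的已安装版本、候选版本及来源
```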
### [APT 命令][11]:在 Debian/Ubuntu/Mint 系统上查看包信息
APT 意为高级打包工具Advanced Packaging Tool就像 DNF 取代 YUM 一样APT 是 apt-get 的替代品。它是一个功能丰富的命令行工具,把 apt-cache、apt-search、dpkg、apt-cdrom、apt-config、apt-key 等命令的功能整合到了一个命令APT并加入了一些独有的特性。例如我们可以方便地通过 APT 直接安装 .deb 包,而 apt-get 做不到这一点。正是由于 apt-get 中这些一直未能补全的功能apt-get 才被 APT 所取代。
```
$ apt show nano LCTT 译注:用 apt show 查看 nano 的信息)
Package: nano
Version: 2.8.6-3
Priority: standard
@ -290,11 +308,13 @@ Description: small, friendly text editor inspired by Pico
```
### [dpkg 命令][12]:在 Debian/Ubuntu/Mint 系统上查看包信息
dpkg 意为 Debian 包管理器Debian package manager。它是用于在 Debian 系统上安装、构建、移除以及管理 Debian 软件包的命令行工具dpkg-deb 和 dpkg-query 等其他工具则以 dpkg 为基础来实现各自的功能。如今,大多数管理员使用 Apt、Apt-Get 及 Aptitude 这些更主流、更用户友好的前端来轻松而稳健地管理软件包,不过在必要时我们仍然需要用 dpkg 来完成某些软件安装任务。
```
$ dpkg -s python LCTT 译注:用 dpkg -s 查询 python 的信息)
Package: python
Status: install ok installed
Priority: optional
@ -324,9 +344,11 @@ Original-Maintainer: Matthias Klose
```
我们也可以使用 dpkg 的 `-p` 选项,它提供和 `dpkg -s` 相类似的信息,但还额外提供了包的校验值和包类型。
```
$ dpkg -p python3 LCTT 译注:用 dpkg -p 查看 python3 的信息)
Package: python3
Priority: important
Section: python
@ -357,11 +379,13 @@ Supported: 9m
```
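dpkg 还有两个查询时很有用的选项:`-L` 列出包安装的文件,`-S` 反向查询文件所属的包(示例,假设 python 包已安装):

```
$ dpkg -L python            # 列出 python 包安装到系统中的所有文件
$ dpkg -S /usr/bin/python   # 反向查询:该文件属于哪个包
```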
### Aptitude 命令:在 Debian/Ubuntu/Mint 系统上查看包信息
aptitude 是 Debian GNU/Linux 包管理系统的一个基于文本的界面。它允许用户查看软件包列表,并完成诸如安装、升级、删除包之类的包管理任务。这些操作既可以在其可视化界面中执行,也可以通过命令行执行。
```
$ aptitude show htop LCTT 译注:用 aptitude show 查看 htop 的信息)
Package: htop
Version: 2.0.2-1
State: installed
@ -388,7 +412,7 @@ via: https://www.2daygeek.com/how-to-view-detailed-information-about-a-package-i
作者:[Prakash Subramanian][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[DavidChenLiang](https://github.com/davidchenliang)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,95 @@
Etcher.io 入门
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/community-penguins-osdc-lead.png?itok=BmqsAF4A)
可启动 USB 盘是尝试新的 Linux 发行版的很好的方式,以便在安装之前看看你是否喜欢它。虽然一些 Linux 发行版(如 [Fedora][1])可以轻松创建可启动媒体,但大多数其他发行版只提供 ISO 或镜像文件,把创建启动媒体的工作留给用户。用户总是可以选择在命令行上用 `dd` 创建媒体,但说实话,即使对于最有经验的用户来说,这也很痛苦。还有其他程序,如 UNetbootin、Mac 上的 Disk Utility 和 Windows 上的 Win32DiskImager它们都可以创建可启动的 USB。
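作为对比,下面是用 `dd` 创建可启动 USB 盘的典型命令(仅作示意:`ubuntu.iso` 是假设的镜像文件名,`/dev/sdX` 是假设的 U 盘设备名,务必先用 `lsblk` 确认设备,写错设备会毁掉上面的数据):

```
$ lsblk                          # 先确认 U 盘对应的设备名
$ sudo dd if=ubuntu.iso of=/dev/sdX bs=4M status=progress && sync
```

可见它既不直观,也没有任何防呆措施,这正是 Etcher 这类工具的价值所在。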
### 安装 Etcher
大约 18 个月前,我遇到了 [Etcher.io][2],这是一个很棒的开源项目,可以在 Linux、Windows 或 MacOS 上轻松简单地创建媒体。Etcher.io 已成为我为 Linux 创建可启动媒体的“首选”程序。我可以轻松下载 ISO 或 IMG 文件并将其刻录到闪存和 SD 卡。这是一个 [Apache 2.0][3] 许可证下的开源项目,[源代码][4] 可在 GitHub 上获得。
进入 [Etcher.io][5] 网站然后单击适用于你的操作系统32 位或 64 位 Linux、32 位或 64 位 Windows 或 MacOS的下载链接。
![](https://opensource.com/sites/default/files/uploads/etcher_1.png)
Etcher 在 GitHub 仓库中提供了很好的指导,用于将 Etcher 添加到你的 Linux 实用程序集合中。
如果你使用的是 Debian 或 Ubuntu请添加 Etcher Debian 仓库:
```
$echo "deb https://dl.bintray.com/resin-io/debian stable etcher" | sudo tee
/etc/apt/sources.list.d/etcher.list
信任 Bintray.com GPG 密钥
$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 379CE192D401AB61
```
然后更新你的系统并安装:
```
$ sudo apt-get update
$ sudo apt-get install etcher-electron
```
如果你使用的是 Fedora 或 Red Hat Enterprise Linux请添加 Etcher RPM 仓库:
```
$ sudo wget https://bintray.com/resin-io/redhat/rpm -O /etc/yum.repos.d/bintray-resin-io-redhat.repo
```
使用以下任一方式更新和安装:
```
$ sudo yum install -y etcher-electron
```
或者:
```
$ sudo dnf install -y etcher-electron
```
### 创建可启动盘
除了为 Ubuntu、EndlessOS 和其他版本的 Linux 创建可启动镜像之外,我还使用 Etcher [创建 SD 卡镜像][6]用于树莓派。以下是如何创建可启动媒体。
首先,将要使用的 ISO 或镜像下载到计算机。然后,启动 Etcher 并将 USB 或 SD 卡插入计算机。
![](https://opensource.com/sites/default/files/uploads/etcher_2.png)
单击 **Select Image**。在本例中,我想创建一个可启动的 USB 盘,以便在新计算机上安装 Ubermix。在我选择了 Ubermix 镜像文件并将 USB 盘插入计算机后Etcher.io “看到”了驱动器,我就可以开始在 USB 盘上安装 Ubermix 了。
![](https://opensource.com/sites/default/files/uploads/etcher_3.png)
在我点击 **Flash** 后,安装就开始了。所需时间取决于镜像的大小。在驱动器上安装镜像后,软件会验证安装。最后,一条提示宣布我的媒体创建已经完成。
如果你需要 [Etcher 的帮助][7],请通过其 [Discourse][8] 论坛联系社区。Etcher 非常易于使用,它已经取代了我所有其他的媒体创建工具,因为没有一个能像 Etcher 那样轻松地完成工作。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/7/getting-started-etcherio
作者:[Don Watkins][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/don-watkins
[1]:https://getfedora.org/en_GB/workstation/download/
[2]:http://etcher.io
[3]:https://github.com/resin-io/etcher/blob/master/LICENSE
[4]:https://github.com/resin-io/etcher
[5]:https://etcher.io/
[6]:https://www.raspberrypi.org/magpi/pi-sd-etcher/
[7]:https://github.com/resin-io/etcher/blob/master/SUPPORT.md
[8]:https://forums.resin.io/c/etcher

View File

@ -0,0 +1,90 @@
如何在 Ubuntu 和其他 Linux 发行版中安装 2048 游戏
======
**流行的移动益智游戏 2048 也可以在 Ubuntu 和 Linux 发行版上玩。啊!你甚至可以在 Linux 终端上玩 2048。如果你的生产率因为这个让人上瘾的游戏下降请不要怪我。**
早在 2014 年2048 就是 iOS 和 Android 上最受欢迎的游戏之一。这款令人上瘾的游戏非常受欢迎,它在 Linux 上有[浏览器版][1]、桌面版和终端版。
<https://giphy.com/embed/wT8XEi5gckwJW>
通过向上、向下、向左、向右移动滑块来玩这个小游戏。这个益智游戏的目标是组合数字相同的滑块,最终得到数字 2048因此 2+2 变成 4、4+4 变成 8依此类推。这可能听起来简单而无聊但相信我这是一个令人上瘾的游戏。
### 在 Linux 中玩 2048 [GUI]
Ubuntu 和其他 Linux 中有好几款 2048 游戏,你可以在软件中心里搜索,在那里能找到一些。
有一个[基于 Qt][2] 的 2048 游戏,你可以在 Ubuntu 以及其他基于 Debian/Ubuntu 的 Linux 发行版上安装。你可以使用以下命令安装它:
```
sudo apt install 2048-qt
```
安装完成后,你可以在菜单中找到该游戏并启动它。你可以使用箭头键移动数字。你的最高分也会保存。
![2048 Game in Ubuntu Linux][3]
### 在 Linux 终端玩 2048
2048 的流行将它带到了终端。如果这让你感到惊讶,你应该知道 Linux 中有很多[很棒的终端游戏][4],而 2048 肯定就是其中之一。
现在,有几种方法可以在 Linux 终端中玩 2048我在这里介绍其中两种。
#### 1\. term2048 Snap 程序
有一个名为 [term2048][6] 的 [snap 程序][5],可以安装在任何[支持 Snap 的 Linux 发行版][7]中。
如果你启用了 Snap只需使用此命令安装 term2048
```
sudo snap install term2048
```
Ubuntu 用户也可以在软件中心找到这个游戏并从那里安装它。
![2048 Terminal Game in Linux][8]
安装后,你可以使用命令 `term2048` 来运行游戏。它看起来像这样:
![2048 Terminal game][9]
你可以使用箭头键移动。
#### 2\. 2048 游戏的 Bash 脚本
这个游戏实际上是一个 shell 脚本,你可以在任何 Linux 终端上运行它。从 GitHub 下载该游戏脚本:
[下载 Bash2048][10]
解压下载的文件。进入解压后的目录,你将看到名为 `2048.sh` 的 shell 脚本。只需运行该脚本,游戏将立即开始,你可以使用箭头键移动滑块。整个流程大致见下面的示例。
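以下命令仅作示意(假设从 GitHub 打包下载 master 分支,实际文件名和解压目录名以下载结果为准):

```
$ wget https://github.com/mydzor/bash2048/archive/master.zip
$ unzip master.zip
$ cd bash2048-master
$ bash 2048.sh
```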
![Linux Terminal game 2048][11]
#### 你在 Linux 上玩什么游戏?
如果你喜欢在 Linux 终端上玩游戏,你也应该尝试 [Linux 终端中的经典 Snake 游戏][12]。
你经常在 Linux 中玩哪些游戏?你也在终端中玩游戏吗?如果是的话,哪个是你最喜欢的终端游戏?
--------------------------------------------------------------------------------
via: https://itsfoss.com/2048-game/
作者:[Abhishek Prakash][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[1]:http://gabrielecirulli.github.io/2048/
[2]:https://www.qt.io/
[3]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/2048-qt-ubuntu.jpeg
[4]:https://itsfoss.com/best-command-line-games-linux/
[5]:https://itsfoss.com/use-snap-packages-ubuntu-16-04/
[6]:https://snapcraft.io/term2048
[7]:https://itsfoss.com/install-snap-linux/
[8]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/term2048-game.png
[9]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/term2048.jpg
[10]:https://github.com/mydzor/bash2048
[11]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/2048-bash-terminal.png
[12]:https://itsfoss.com/nsnake-play-classic-snake-game-linux-terminal/ (nSnake: Play The Classic Snake Game In Linux Terminal)

View File

@ -0,0 +1,152 @@
使用 VS Code 进行 Python 编程
======
![](https://fedoramagazine.org/wp-content/uploads/2018/07/pythonvscode-816x345.jpg)
Visual Studio Code简称 VS Code是一个开源的文本编辑器包含用于构建和调试应用程序的工具。安装并启用 Python 扩展后VS Code 可以配置成 Python 开发的理想工作环境。本文将介绍一些有用的 VS Code 设置,并配置它们以充分提高 Python 开发效率。
如果你的计算机上还没有安装 VS Code可以参考文章 [Using Visual Studio Code on Fedora](https://fedoramagazine.org/using-visual-studio-code-fedora/) 来安装。
### 在 VS Code 中安装 Python 扩展
首先,为了更方便地在 VS Code 中进行 Python 开发,需要从 VS Code 扩展商店中安装 Python 扩展。
![][2]
Python 扩展安装完成后,就可以开始配置 Python 扩展了。
VS Code 通过两个 JSON 文件管理设置:
* 一个文件用于 VS Code 的全局设置,作用于所有的项目
* 另一个文件用于工作区设置,作用于单独的项目
可以用快捷键 **Ctrl+,** (逗号)打开全局设置,也可以通过 **文件 -> 首选项 -> 设置** 来打开。
#### 设置 Python 路径
你可以在全局设置中配置 python.pythonPath使 VS Code 自动为每个项目选择最适合的 Python 解释器。
```
// 将设置放在此处以覆盖默认设置和用户设置。
// Python 的路径;你可以将此设置修改为完整路径,以使用自定义版本的 Python。
{
"python.pythonPath":"${workspaceRoot}/.venv/bin/python",
}
```
这样VS Code 将使用项目根目录下虚拟环境 .venv 中的 Python 解释器。
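如果项目里还没有 `.venv` 虚拟环境,可以先在项目根目录创建一个(示例命令,假设系统中装有 Python 3

```
$ python3 -m venv .venv        # 在项目根目录创建名为 .venv 的虚拟环境
$ source .venv/bin/activate    # 激活该虚拟环境,之后在其中安装依赖
```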
#### 使用环境变量
默认情况下VS Code 使用项目根目录下的 .env 文件中定义的环境变量。 这对于设置环境变量很有用,如:
```
PYTHONWARNINGS="once"
```
可使程序在运行时显示警告。
可以通过设置 python.envFile 来加载其他的默认环境变量文件:
```
// 包含环境变量定义的文件的绝对路径。
"python.envFile": "${workspaceFolder}/.env",
```
### 代码分析
Python 扩展还支持不同的代码分析工具pep8、flake8、pylint。要启用你喜欢的或者正在进行的项目所使用的分析工具只需要进行一些简单的配置。
扩展默认情况下使用 pylint 进行代码分析。你可以这样配置以使用 flake8 进行分析:
```
"python.linting.pylintEnabled": false,
"python.linting.flake8Path": "${workspaceRoot}/.venv/bin/flake8",
"python.linting.flake8Enabled": true,
"python.linting.flake8Args": ["--max-line-length=90"],
```
启用代码分析后,分析器会在不符合要求的位置加上波浪线,鼠标置于该位置,将弹窗提示其原因。注意,项目的虚拟环境中需要安装有 flake8此示例方能有效。
![][3]
### 格式化代码
可以配置 VS Code 使其自动格式化代码。目前支持 autopep8、black 和 yapf。下面的设置将启用 “black” 模式。
```
// 格式化工具。可选项包括 'autopep8'、'black' 和 'yapf'。
"python.formatting.provider": "black",
"python.formatting.blackPath": "${workspaceRoot}/.venv/bin/black"
"python.formatting.blackArgs": ["--line-length=90"],
"editor.formatOnSave": true,
```
如果不需要编辑器在保存时自动格式化代码,可以将 editor.formatOnSave 设置为 false 并手动使用快捷键 **Ctrl + Shift + I** 格式化当前文档中的代码。 注意,项目的虚拟环境中需要安装有 black此示例方能有效。
### 运行任务
VS Code 的一个重要特点是它可以运行任务。需要运行的任务保存在项目根目录中的 JSON 文件中。
#### 运行 flask 开发服务
这个例子将创建一个任务来运行 Flask 开发服务器。 使用一个可以运行外部命令的基本模板来创建新的工程:
![][4]
编辑如下所示的 tasks.json 文件,创建新任务来运行 Flask 开发服务:
```
{
// See https://go.microsoft.com/fwlink/?LinkId=733558
// for the documentation about the tasks.json format
"version": "2.0.0",
"tasks": [
{
"label": "Run Debug Server",
"type": "shell",
"command": "${workspaceRoot}/.venv/bin/flask run -h 0.0.0.0 -p 5000",
"group": {
"kind": "build",
"isDefault": true
}
}
]
}
```
Flask 开发服务使用环境变量来获取应用程序的入口点。 如 **使用环境变量** 一节所说,可以在 .env 文件中声明这些变量:
```
FLASK_APP=wsgi.py
FLASK_DEBUG=True
```
这样就可以使用快捷键 **Ctrl + Shift + B** 来执行任务了。
### 单元测试
VS Code 还支持单元测试框架 pytest、unittest 和 nosetest。启用测试框架后可以在 VS Code 中单独运行搜索到的单元测试,通过测试套件运行测试,或者运行所有的测试。
例如,可以这样启用 pytest 测试框架:
```
"python.unitTest.pyTestEnabled": true,
"python.unitTest.pyTestPath": "${workspaceRoot}/.venv/bin/pytest",
```
注意,项目的虚拟环境中需要安装有 pytest此示例方能有效。
![][5]
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/vscode-python-howto/
作者:[Clément Verna][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[idea2act](https://github.com/idea2act)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://fedoramagazine.org
[1]:https://fedoramagazine.org/using-visual-studio-code-fedora/
[2]:https://fedoramagazine.org/wp-content/uploads/2018/07/Peek-2018-07-27-09-44.gif
[3]:https://fedoramagazine.org/wp-content/uploads/2018/07/Peek-2018-07-27-12-05.gif
[4]:https://fedoramagazine.org/wp-content/uploads/2018/07/Peek-2018-07-27-13-26.gif
[5]:https://fedoramagazine.org/wp-content/uploads/2018/07/Peek-2018-07-27-15-33.gif

View File

@ -0,0 +1,75 @@
为什么我仍然喜欢在 Linux 终端中用 Alpine 发送电子邮件
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_paper_envelope_document.png?itok=uPj_kouJ)
也许你有过这样的经历:你试用了一个程序,并且很喜欢它。多年后,有新的程序被开发出来,它可以做同样的事情,甚至做得更多、更好。你试了试,它们也很棒 - 但你还是会继续使用第一个程序。
这是我与 [Alpine Mail][1] 之间关系的故事,所以我决定写一篇文章来赞美我最喜欢的邮件程序。
![alpine_main_menu.png][3]
Alpine 邮件客户端的主菜单屏幕
在 90 年代中期,我发现了 [GNU/Linux][4] 操作系统。因为我之前从未见过类 Unix 的系统,所以我阅读了大量的文档和书籍,并尝试了很多程序来熟悉这个迷人的系统。
不久之后,[Pine][5] 成了我最喜欢的邮件客户端,其后是它的继任者 Alpine。我发现它直观且易于使用你始终可以在底部看到可能的命令或选项因此很容易快速上手而且 Alpine 提供了很好的帮助。
入门很容易。
大多数发行版都包含 Alpine可以通过包管理器安装见下面的示例命令。安装并启动后只需按下 **S**(或移动到“设置”),你就会看到可以配置的类别。在底部,你可以看到能立即执行的命令的快捷键;对于其他命令,按下 **O**`其他命令`)。
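安装命令大致如下(示例,分别对应 Debian/Ubuntu 和 Fedora 系发行版,假设其仓库中提供 alpine 包):

```
$ sudo apt install alpine   # Debian/Ubuntu
$ sudo dnf install alpine   # Fedora
```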
按下 **C** 进入配置对话框。当你向下滚动列表时,很明显你可以让 Alpine 如你所希望的那样运行。如果你只有一个邮件帐户,只需移动到你想要更改的行,按下 **C**(“更改值”),然后输入值:
![alpine_setup_configuration.png][7]
Alpine 设置配置屏幕
请注意如何输入 SMTP 和 IMAP 服务器,因为这与带有助手和预填字段的邮件客户端不同。如果你像这样输入“服务器/SSL/用户”
`imap.myprovider.com:993/ssl/user=max@example.com`
Alpine 会询问你是否使用“收件箱”(是)并在服务器两边加上大括号。完成后,按下 **E**`退出设置`)并按下 **Y**(是)提交更改。回到主菜单,然后你可以移动到文件夹列表和收件箱以查看是否有邮件(系统将提示你输入密码)。你现在可以使用 **`>`** 和 **`<`** 进行移动。
![navigating_the_message_index.png][9]
在 Apline 中浏览消息索引
要撰写邮件,只需移动到相应的菜单并编写即可。请注意,底部的选项会根据你所在的行而变化。**`^T`****Ctrl** \+ **T**)可代表 `To Addressbook` 或 `To Files`。要附加文件,只需移动到 `Attchmt:` 行,然后按 **Ctrl** \+ **T** 转到文件浏览器,或按 **Ctrl** \+ **J** 输入路径。
按 **`^X`****Ctrl** \+ **X**)发送邮件。
![composing_an_email_in_alpine.png][11]
在 Alpine 中撰写电子邮件
### 为何选择 Alpine
当然,每个用户的个人偏好和需求都是不同的。如果你需要一个更像 “office” 的解决方案,像 Evolution 或 Thunderbird 这样的应用可能是更好的选择。
但对我来说Alpine和 Pine是软件界的恐龙。你可以以舒适的方式管理邮件 - 不多也不少。它适用于许多操作系统(甚至 [Termux for Android][12])。并且因为配置存储在纯文本文件(`.pinerc`)中,所以你只需将其复制到设备即可。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/love-alpine
作者:[Heiko Ossowski][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/hossow
[1]:https://en.wikipedia.org/wiki/Alpine_(email_client)
[2]:/file/405641
[3]:https://opensource.com/sites/default/files/uploads/alpine_main_menu.png (alpine_main_menu.png)
[4]:https://www.gnu.org/gnu/linux-and-gnu.en.html
[5]:https://en.wikipedia.org/wiki/Pine_(email_client)
[6]:/file/405646
[7]:https://opensource.com/sites/default/files/uploads/alpine_setup_configuration.png (alpine_setup_configuration.png)
[8]:/file/405651
[9]:https://opensource.com/sites/default/files/uploads/navigating_the_message_index.png (navigating_the_message_index.png)
[10]:/file/405656
[11]:https://opensource.com/sites/default/files/uploads/composing_an_email_in_alpine.png (composing_an_email_in_alpine.png)
[12]:https://termux.com/

View File

@ -0,0 +1,107 @@
五个 Linux 上的开源角色扮演游戏
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/dice_tabletop_board_gaming_game.jpg?itok=y93eW7HN)
游戏是 Linux 的传统弱点之一。得益于 Steam、GOG 以及其他将商业游戏移植到多个操作系统的努力Linux 的这个弱点在近几年有所改观,但这些游戏通常都不是开源的。当然,这些游戏可以在开源系统上运行,但对于开源的纯粹主义者来说这还不够好。
那么,有没有一款能让只使用免费和开源软件的人在不影响他们开源理念的情况下也能享受到可靠游戏体验的精致游戏呢?
当然有啦!虽然开源游戏不太可能和拥有大量开发预算的 3A 级大作相媲美,但许多类型的开源游戏也很有趣,而且它们可以直接从大多数主要 Linux 发行版的仓库中安装。即使某个游戏没有被打包进仓库,你也可以很容易地从游戏官网下载、安装并运行它。
这篇文章着眼于角色扮演游戏。我之前已经写过[街机游戏][1]、[棋牌游戏][2]、[益智游戏][3]以及[赛车和飞行游戏][4],在本系列的最后一篇文章中,我打算介绍战略游戏和模拟游戏。
### Endless Sky
![](https://opensource.com/sites/default/files/uploads/endless_sky.png)
[Endless Sky][5] 是 Ambrosia Software 的 [Escape Velocity][6] 系列的开源克隆。玩家驾驶一艘宇宙飞船,在不同的世界之间旅行,运送货物和乘客,并在沿途承接其他任务;或者玩家也可以变成海盗,从其他货船上抢夺货物。这个游戏让玩家自己决定想要怎样的体验,庞大的恒星系地图也非常值得探索。Endless Sky 是那些无法归入常规游戏分类的游戏之一,但这个兼具动作、角色扮演、太空模拟和交易四种类型的游戏非常值得一试。
如果要安装 Endless Sky ,请运行下面的命令:
在 Fedora 上: `dnf install endless-sky`
在 Debian/Ubuntu 上: `apt install endless-sky`
### FreeDink
![](https://opensource.com/sites/default/files/uploads/freedink.png)
FreeDink 是 Dink Smallwood 的开源版本,后者是 RTSoft 在 1997 年发售的一款动作角色扮演游戏。Dink Smallwood 在 1999 年变为免费游戏,并在 2003 年公布了源代码;在 2008 年,游戏数据中除了少部分声音文件之外也都以开源协议发布。FreeDink 用替代的声音文件补上了缺少的那部分,提供了一个完整的游戏。游戏的玩法类似于任天堂的塞尔达传说系列:玩家控制的角色和 Dink Smallwood 同名,他在任务地点之间穿行时,探索一个充满隐藏物品和隐藏洞穴的世界地图。由于年代久远FreeDink 无法和现代商业游戏相抗衡,但它仍然是一个有着有趣故事的好游戏。游戏可以通过 D-Mods 进行扩展D-Mods 是提供额外任务的附加模块,但它们在复杂性、质量和适龄性上差异很大。游戏主要适合青少年,但也有部分附加模块适用于成年玩家。
要安装 FreeDink ,请运行下面的命令:
在 Fedora 上: `dnf install freedink`
在 Debian/Ubuntu 上: `apt install freedink`
### ManaPlus
![](https://opensource.com/sites/default/files/uploads/manaplus.png)
从技术上讲ManaPlus 本身并不是一个游戏,它是一个用来访问各种大型多人在线角色扮演游戏的客户端。[The Mana World][12] 和 [Evol Online][13] 是两款可以通过 ManaPlus 访问的开源游戏,不过游戏服务器并不包含在客户端里。游戏的 2D 精灵图像让人想起超级任天堂时代,虽然 ManaPlus 支持的游戏没有一款能像商业 [MMORPG][14] 那样受欢迎,但它们都有一个有趣的世界,并且绝大部分时间里至少有一小部分玩家在线。玩家不太可能遇到很多其他人,但通常都有足够的玩家一起冒险,让它更像一个 MMORPG而不是单机游戏。The Mana World 和 Evol Online 的开发者已联合起来进行未来的开发但就目前而言The Mana World 的旧服务器和 Evol Online 提供的是不同的游戏体验。
要安装 ManaPlus请运行下面的命令
在 Fedora 上: `dnf install manaplus`
在 Debian/Ubuntu 上: `apt install manaplus`
### Minetest
![](https://opensource.com/sites/default/files/uploads/minetest.png)
使用 [Minetest][15],你可以在一个开放式世界里探索和创造。Minetest 是 Minecraft 的克隆,就像它所参照的游戏一样,它提供了一个开放的世界,玩家可以在其中探索并创造他们想要的一切。Minetest 提供了各种各样的方块和工具,对于想要比 Minecraft 更加开放的游戏的人来说,它是一个很好的替代品。除了基本的游戏之外Minetest 还可以通过[额外的模块][16]进行扩展,增加更多的选项。
如果要安装 Minetest ,请运行下面的命令:
在 Fedora 上: `dnf install minetest`
在 Debian/Ubuntu 上: `apt install minetest`
### NetHack
![](https://opensource.com/sites/default/files/uploads/nethack.png)
[NetHack][17] 是一款经典的 [Roguelike][18] 角色扮演游戏玩家可以从不同的种族、职业和阵营中进行选择去探索这个多层的地下城。这个游戏的目的就是找回 Yendor 的护身符:玩家从地下城的第一层开始,尝试逐层向下深入。每一层都是随机生成的,这样每次游戏都能获得不同的体验。虽然这个游戏只有 ASCII 图形和基本的界面,但游戏玩法的深度足以弥补画面的不足。如果玩家想要更好一些的画面,可以看看 [Vulture for NetHack][19],它可以提供更好的图像、声音和背景音乐。
如果要安装 NetHack ,请运行下面的命令:
在 Fedora 上: `dnf install nethack`
在 Debian/Ubuntu 上: `apt install nethack-x11` 或 `apt install nethack-console`
我有错过了你最喜欢的角色扮演游戏吗?请在下面的评论区分享出来。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/role-playing-games-linux
作者:[Joshua Allen Holm][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[hopefully2333](https://github.com/hopefully2333)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/holmja
[1]:https://opensource.com/article/18/1/arcade-games-linux
[2]:https://opensource.com/article/18/3/card-board-games-linux
[3]:https://opensource.com/article/18/6/puzzle-games-linux
[4]:https://opensource.com/article/18/7/racing-flying-games-linux
[5]:https://endless-sky.github.io/
[6]:https://en.wikipedia.org/wiki/Escape_Velocity_(video_game)
[7]:http://www.gnu.org/software/freedink/
[8]:http://www.rtsoft.com/pages/dink.php
[9]:https://en.wikipedia.org/wiki/The_Legend_of_Zelda
[10]:http://www.dinknetwork.com/files/category_dmod/
[11]:http://manaplus.org/
[12]:http://www.themanaworld.org/
[13]:http://evolonline.org/
[14]:https://en.wikipedia.org/wiki/Massively_multiplayer_online_role-playing_game
[15]:https://www.minetest.net/
[16]:https://wiki.minetest.net/Mods
[17]:https://www.nethack.org/
[18]:https://en.wikipedia.org/wiki/Roguelike
[19]:http://www.darkarts.co.za/vulture-for-nethack

View File

@ -0,0 +1,112 @@
MPV 播放器Linux 下的极简视频播放器
======
MPV 是一个开源的、跨平台的视频播放器,带有极简的 GUI 界面以及丰富的命令行控制功能。
VLC 可能是 Linux 或者其他平台下最好的视频播放器。我已经使用 VLC 很多年了,它现在仍是我最喜欢的播放器。
不过最近,我倾向于使用简洁界面的极简应用。这也是我偶然发现 MPV 的原因。我太喜欢这个软件,并把它加入了 [Ubuntu 最佳应用][1]列表里。
[MPV][2] 是一个开源的视频播放器,有 Linux、Windows、MacOS、BSD 以及 Android 等平台下的版本。它实际上是从 [MPlayer][3] 分支出来的。
它的图形界面只有必须的元素而且非常整洁。
![MPV 播放器在 Linux 下的界面][4]
MPV 播放器
### MPV 的功能
MPV 有标准播放器该有的所有功能。你可以播放各种视频,以及通过常用快捷键来控制播放。
* 极简图形界面以及必须的控件。
* 自带视频解码器。
* 高质量视频输出以及支持 GPU 硬件视频解码。
* 支持字幕。
* 可以通过命令行播放 YouTube 等流媒体视频。
* 命令行模式的 MPV 可以嵌入到网页或其他应用中。
尽管 MPV 播放器只有极简的界面以及有限的选项,但请不要怀疑它的功能。它主要的能力都来自命令行版本。
只需要输入命令 `mpv --list-options`,你就会看到它所提供的 447 个不同的选项。但是本文不会介绍 MPV 的高级应用。让我们看看作为一个普通的桌面视频播放器,它能有多么优秀。
### 在 Linux 上安装 MPV
MPV 是一个常用应用,大多数 Linux 发行版的默认仓库里都收录了它。在软件中心里搜索一下就可以找到。
我可以确认在 Ubuntu 的软件中心里能找到。你可以在里面选择安装,或者通过下面的命令安装:
```
sudo apt install mpv
```
你可以在 [MPV 网站][5]上查看其他平台的安装指引。
### 使用 MPV 视频播放器
在安装完成以后,你可以通过鼠标右键点击视频文件,然后在列表里选择 MPV 来播放。
![MPV 播放器界面][6]
MPV 播放器界面
整个界面只有一个控制面板,只有在鼠标移动到播放窗口上才会显示出来。控制面板上有播放/暂停,选择视频轨道,切换音轨,字幕以及全屏等选项。
MPV 的默认大小取决于你所播放视频的画质。比如一个 240p 的视频,播放窗口会比较小,而在全高清显示器上播放 1080p 视频时,会几乎占满整个屏幕。不管视频大小,你总是可以在播放窗口上双击鼠标切换成全屏。
#### 字幕问题
如果你的视频带有字幕MPV 会[自动加载字幕][7],你也可以选择关闭。不过,如果你想使用其他外挂字幕文件,不能直接在播放器界面上操作。
你可以将额外的字幕文件名改成和视频文件一样并且将它们放在同一个目录下。MPV 会加载你的字幕文件。
更简单的播放外挂字幕的方式是,用鼠标选中文件拖到播放窗口里放开。
#### 播放 YouTube 或其他在线视频
要播放在线视频,你只能使用命令行模式的 MPV。
打开终端窗口,然后用类似下面的方式来播放:
```
mpv <URL_of_Video>
```
![在 Linux 桌面上使用 MPV 播放 YouTube 视频][8]
在 Linux 桌面上使用 MPV 播放 YouTube 视频
用 MPV 播放 YouTube 视频的体验不怎么好。它总是不停地缓冲,有点烦。
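如果带宽有限,一个可能的缓解办法是让 MPV 选择较低的清晰度(示例,假设系统中装有 MPV 所依赖的 youtube-dl格式表达式沿用 youtube-dl 的语法):

```
mpv --ytdl-format='best[height<=480]' <URL_of_Video>
```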
#### 是否需要安装 MPV 播放器?
这个看你自己。如果你想体验各种应用,大可以试试 MPV。否则默认的视频播放器或者 VLC 就足够了。
我在早些时候写关于 [Sayonara][9] 的文章时,并不确定大家会不会喜欢一个相对不常用的音乐播放器,但是 It's FOSS 的读者觉得它很好。
试一下 MPV然后看看你会不会将它作为你的默认视频播放器。
如果你喜欢 MPV但又觉得它的图形界面需要更多功能我推荐你使用 [GNOME MPV 播放器][10]。
你用过 MPV 视频播放器吗?体验怎么样?喜欢还是不喜欢?欢迎在下面的评论区留言。
--------------------------------------------------------------------------------
via: https://itsfoss.com/mpv-video-player/
作者:[Abhishek Prakash][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[zpl1025](https://github.com/zpl1025)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[1]:https://itsfoss.com/best-ubuntu-apps/
[2]:https://mpv.io/
[3]:http://www.mplayerhq.hu/design7/news.html
[4]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/mpv-player.jpg
[5]:https://mpv.io/installation/
[6]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/mpv-player-interface.png
[7]:https://itsfoss.com/how-to-play-movie-with-subtitles-on-samsung-tv-via-usb/
[8]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/play-youtube-videos-on-mpv-player.jpeg
[9]:https://itsfoss.com/sayonara-music-player/
[10]:https://gnome-mpv.github.io/

View File

@ -1,53 +0,0 @@
L1 终端错误漏洞如何影响 Linux 系统
======
![](https://images.idgesg.net/images/article/2018/08/l1tf-copy-100768129-large.jpg)
昨天在英特尔、微软和红帽的安全公告中宣布的一个新发现的漏洞引起了 Linux 用户和管理员的注意它影响英特尔处理器进而影响 Linux被称为 L1TF 或 “L1 Terminal Fault”。究竟这个漏洞是什么谁又应该担心它
### L1TF、L1 Terminal Fault 和 Foreshadow
这个处理器漏洞有 L1TF、L1 Terminal Fault 和 Foreshadow 等几个名称。研究人员在 1 月份发现了这个问题,并向英特尔报告,称其为 “Foreshadow”。它类似于过去发现的漏洞例如 Spectre。
此漏洞是英特尔处理器特有的,其他处理器不受影响。与其他一些漏洞一样,它之所以存在,是因为设计上为了优化内核处理速度而允许其他进程访问数据。
**[另请阅读:[22 个必要的 Linux 安全命令][1]]**
已为此问题分配了三个 CVE
* CVE-2018-3615英特尔软件保护扩展英特尔 SGX
* CVE-2018-3620操作系统和系统管理模式SMM
* CVE-2018-3646虚拟化的影响
英特尔发言人就此问题发表了这一声明_“L1 Terminal Fault 通过今年早些时候发布的微代码更新得到解决,再加上从今天开始提供的操作系统和虚拟机管理程序软件的相应更新。我们在网上提供了更多信息,并继续鼓励每个人更新系统,因为这是受到保护的最佳方式之一。我们要感谢 imec-DistriNet、KU Leuven、以色列理工学院密歇根大学阿德莱德大学和 Data61 的研究人员以及我们的行业合作伙伴他们帮助我们识别和解决了这个问题。“_
### L1TF 会影响你的 Linux 系统吗?
简短的回答是“可能不会”。如果你因为在今年 1 月爆出的[ Spectre 和 Meltdown 漏洞][2]修补过系统,那你应该是安全的。与 Spectre 和 Meltdown 一样,英特尔声称真实世界中还没有系统受到影响的报告或者检测到。他们还表示,这些变化不太可能在单个系统上产生明显的性能影响,但它们可能对使用虚拟化操作系统的数据中心产生大的影响。
即使如此,仍然推荐频繁地打补丁。要检查你当前的内核版本,使用 **uname -r** 命令:
```
$ uname -r
4.18.0-041800-generic
```
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3298157/linux/linux-and-l1tf.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
[1]:https://www.networkworld.com/article/3272286/open-source-tools/22-essential-security-commands-for-linux.html
[2]:https://www.networkworld.com/article/3245813/security/meltdown-and-spectre-exploits-cutting-through-the-fud.html
[3]:https://www.facebook.com/NetworkWorld/
[4]:https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,313 @@
打包更多有用的 Unix 实用程序
======
![](https://www.ostechnix.com/wp-content/uploads/2017/08/Moreutils-720x340.png)
我们都了解 **<ruby>GNU 核心实用程序<rt>GNU Core Utilities</rt></ruby>**,所有类 Unix 操作系统都预装了它们。它们是 GNU 操作系统中与文件、Shell 和文本处理相关的基础实用工具。GNU 核心实用程序包括很多日常操作命令,例如 `cat`、`ls`、`rm`、`mkdir`、`rmdir`、`touch`、`tail` 和 `wc` 等。除了这些实用程序,还有更多有用的实用程序没有预装在类 Unix 操作系统中,它们汇集起来构成了 `moreutils` 这个日益增长的集合。`moreutils` 可以在 GNU/Linux 和包括 FreeBSD、OpenBSD 及 Mac OS 在内的多种类 Unix 操作系统上安装。
截至编写这份指南时,`moreutils` 提供如下实用程序:
* `chronic` 运行程序并忽略正常运行的输出
* `combine` 使用布尔操作合并文件
* `errno` 查询 errno 名称及描述
* `ifdata` 获取网络接口信息,无需解析 `ifconfig` 的结果
* `ifne` 在标准输入非空的情况下运行程序
* `isutf8` 检查文件或标准输入是否采用 UTF-8 编码
* `lckdo` 运行程序时考虑文件锁
* `mispipe` 使用管道连接两个命令,返回第一个命令的退出状态
* `parallel` 同时运行多个任务
* `pee` 将标准输入传递给多个管道
* `sponge` 整合标准输入并写入文件
* `ts` 为标准输入增加时间戳信息
* `vidir` 使用你默认的文本编辑器操作目录文件
* `vipe` 在管道中插入文本编辑器进行编辑
* `zrun` 自动解压缩参数并将其传递给命令
### 在 Linux 上安装 moreutils
由于 `moreutils` 已经被打包到多种 Linux 发行版中,你可以使用发行版对应的软件包管理器安装 `moreutils`
**Arch Linux** 或衍生的 **Antergos****Manjaro Linux** 上,运行如下命令安装 `moreutils`:
```
$ sudo pacman -S moreutils
```
**Fedora** 上,运行:
```
$ sudo dnf install moreutils
```
**RHEL****CentOS** 和 **Scientific Linux** 上,运行:
```
$ sudo yum install epel-release
$ sudo yum install moreutils
```
**Debian****Ubuntu** 和 **Linux Mint** 上,运行:
```
$ sudo apt-get install moreutils
```
### Moreutils 打包更多有用的 Unix 实用程序
让我们看一下几个 `moreutils` 工具的用法细节。
##### combine 实用程序
正如 `combine` 名称所示moreutils 中的这个实用程序可以使用包括 `and``not``or` 和 `xor` 在内的布尔操作,合并两个文件中的行。
* `and` 输出 `file1``file2` 都包含的行。
* `not` 输出 `file1` 包含但 `file2` 不包含的行。
* `or` 输出 `file1``file2` 包含的行。
* `xor` 输出仅被 `file1` 或 `file2` 之一包含的行。
下面举例说明,方便你理解该实用程序的功能。这里有两个文件,文件名分别为 `file1``file2`,其内容如下:
```
$ cat file1
is
was
were
where
there
$ cat file2
is
were
there
```
下面,我使用 `and` 布尔操作合并这两个文件。
```
$ combine file1 and file2
is
were
there
```
从上例的输出中可以看出,`and` 布尔操作只输出那些 `file1` 和 `file2` 都包含的行;更具体地说,命令输出为两个文件共有的行,即 is、were 和 there。
下面我们换成 `not` 操作,观察一下输出。
```
$ combine file1 not file2
was
where
```
从上面的输出中可以看出,`not` 操作输出 `file1` 包含但 `file2` 不包含的行。
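类似地,`xor` 操作输出只出现在其中一个文件里的行。对上面的示例文件而言,由于 `file2` 的内容是 `file1` 的子集,结果恰好与 `not` 相同:

```
$ combine file1 xor file2
was
where
```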
##### ifdata 实用程序
`ifdata` 实用程序可用于检查网络接口是否存在,也可用于获取网络接口的信息,例如 IP 地址等。与预装的 `ifconfig``ip` 命令不同,`ifdata` 的输出更容易解析,这种设计的初衷是便于在 Shell 脚本中使用。
如果希望查看某个接口的 IP 地址,不妨以 `wlp9s0` 为例,运行如下命令:
```
$ ifdata -p wlp9s0
192.168.43.192 255.255.255.0 192.168.43.255 1500
```
如果只查看掩码信息,运行如下命令:
```
$ ifdata -pn wlp9s0
255.255.255.0
```
如果查看网络接口的物理地址,运行如下命令:
```
$ ifdata -ph wlp9s0
A0:15:46:90:12:3E
```
如果判断接口是否存在,可以使用 `-pe` 参数:
```
$ ifdata -pe wlp9s0
yes
```
##### pee 命令
该命令某种程度上类似于 `tee` 命令。
我们先用一个例子看一下 `tee` 的用法。
```
$ echo "Welcome to OSTechNIx" | tee file1 file2
Welcome to OSTechNIx
```
上述命令首先创建两个文件,名为 `file1``file2`;接着,将 “Welcome to OSTechNix” 行分别附加到两个文件中;最后,在终端中打印输出 “Welcome to OSTechNix”。
`pee` 命令提供类似的功能,但与 `tee` 又稍微有些差异。查看下面的例子:
```
$ echo "Welcome to OSTechNIx" | pee cat cat
Welcome to OSTechNIx
Welcome to OSTechNIx
```
从上面的命令输出中可以看出,有两个 `cat` 命令实例获取 `echo` 命令的输出并执行,因而终端中出现两个同样的输出。
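再看一个稍微实际一点的例子:把同一份输入分别交给两个不同的命令处理(示例;各管道的输出顺序可能略有差异):

```
$ echo "Welcome to OSTechNIx" | pee 'tr a-z A-Z' 'wc -c'
WELCOME TO OSTECHNIX
21
```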
##### sponge 实用程序
这是 `moreutils` 软件包中的另一个有用的实用程序。`sponge` 读取标准输入并写入到指定的文件中。与 Shell 中的重定向不同,`sponge` 接收到完整输入后再写入输出文件。
查看下面这个文本文件的内容:
```
$ cat file1
I
You
Me
We
Us
```
可见,文件包含了一些无序的行;更具体地说,这些行“没有”按照字母顺序排序。如果希望将其内容按照字母顺序排序,你会怎么做呢?
```
$ sort file1 > file1_sorted
```
这样做没错,对吧?当然没错!在上面的命令中,我将 `file1` 文件内容按照字母顺序排序,将排序后的内容保存在 `file1_sorted` 文件中。但如果使用 `sponge` 命令,你可以在不创建新文件(即 `file1_sorted`)的情况下完成同样的任务,命令如下:
```
$ sort file1 | sponge file1
```
那么,让我们检查一下文件内容是否已经按照字母顺序排序:
```
$ cat file1
I
Me
Us
We
You
```
看到了吧?并不需要创建新文件。在脚本编程中,这非常有用。另一个好消息是,如果待写入的文件已经存在,`sponge` 会保持其<ruby>权限信息<rt>permissions</rt></ruby>不变。
##### ts 实用程序
正如名称所示,`ts` 命令在每一行输出的行首增加<ruby>时间戳<rt>timestamp</rt></ruby>
查看如下命令的输出:
```
$ ping -c 2 localhost
PING localhost(localhost.localdomain (::1)) 56 data bytes
64 bytes from localhost.localdomain (::1): icmp_seq=1 ttl=64 time=0.055 ms
64 bytes from localhost.localdomain (::1): icmp_seq=2 ttl=64 time=0.079 ms
--- localhost ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1018ms
rtt min/avg/max/mdev = 0.055/0.067/0.079/0.012 ms
```
下面,结合 `ts` 实用程序运行同样的命令:
```
$ ping -c 2 localhost | ts
Aug 21 13:32:28 PING localhost(localhost (::1)) 56 data bytes
Aug 21 13:32:28 64 bytes from localhost (::1): icmp_seq=1 ttl=64 time=0.063 ms
Aug 21 13:32:28 64 bytes from localhost (::1): icmp_seq=2 ttl=64 time=0.113 ms
Aug 21 13:32:28
Aug 21 13:32:28 --- localhost ping statistics ---
Aug 21 13:32:28 2 packets transmitted, 2 received, 0% packet loss, time 4ms
Aug 21 13:32:28 rtt min/avg/max/mdev = 0.063/0.088/0.113/0.025 ms
```
对比输出可以看出,`ts` 在每一行行首增加了时间戳。下面给出另一个例子:
```
$ ls -l | ts
Aug 21 13:34:25 total 120
Aug 21 13:34:25 drwxr-xr-x 2 sk users 12288 Aug 20 20:05 Desktop
Aug 21 13:34:25 drwxr-xr-x 2 sk users 4096 Aug 10 18:44 Documents
Aug 21 13:34:25 drwxr-xr-x 24 sk users 12288 Aug 21 13:06 Downloads
[...]
```
##### vidir 实用程序
`vidir` 实用程序可以让你使用 `vi` 编辑器(或其它 `$EDITOR` 环境变量指定的编辑器)编辑指定目录的内容。如果没有指定目录,`vidir` 会默认编辑你当前的目录。
下面的命令编辑 `Desktop` 目录的内容:
```
$ vidir Desktop/
```
![vidir][2]
上述命令使用 `vi` 编辑器打开了指定的目录,其中目录内的文件都会对应一个数字。下面你可以按照 `vi` 的操作方式来编辑目录中的这些文件:例如,删除行意味着删除目录中对应的文件,修改行中字符串意味着对文件进行重命名。
你也可以编辑子目录。下面的命令会编辑当前目录及所有子目录:
```
$ find | vidir -
```
请注意命令结尾的 `-`。如果 `-` 被指定为待编辑的目录,`vidir` 会从标准输入读取一系列文件名,列出它们让你进行编辑。
如果你只想编辑当前目录下的文件,可以使用如下命令:
```
$ find -type f | vidir -
```
只想编辑特定类型的文件,例如 `.PNG` 文件?你可以使用如下命令:
```
$ vidir *.png
```
这时命令只会编辑当前目录下以 `.PNG` 为后缀的文件。
##### vipe 实用程序
`vipe` 命令可以让你使用默认编辑器接收 Unix 管道输入,编辑之后使用管道输出供下一个程序使用。
执行下面的命令会打开 `vi` 编辑器(当然是我默认使用的编辑器),你可以编辑 `echo` 命令的管道输入(即 “Welcome to OSTechNix”最后将编辑过的内容输出到终端中。
```
$ echo "Welcome to OSTechNIx" | vipe
Hello World
```
从上面的输出可以看出我通过管道将“Welcome to OSTechNix”输入到 `vi` 编辑器中将内容编辑为“Hello World”最后显示该内容。
好了,就介绍这么多吧。我只介绍了一小部分实用程序,而 `moreutils` 包含更多有用的实用程序。我在文章开始的时候已经列出目前 `moreutils` 软件包内包含的实用程序,你可以通过 `man` 帮助页面获取更多相关命令的细节信息。举个例子,如果你想了解 `vidir` 命令,请运行:
```
$ man vidir
```
希望这些内容对你有所帮助。我还将继续分享其它有趣且实用的指南,如果你认为这些内容对你有所帮助,请分享到社交网络或专业圈子,也欢迎你支持 OSTechNix 项目。
干杯!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/moreutils-collection-useful-unix-utilities/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[pinewall](https://github.com/pinewall)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[2]:http://www.ostechnix.com/wp-content/uploads/2017/08/sk@sk_001-1.png