Merge branch 'master' of https://github.com/LCTT/TranslateProject into translating

This commit is contained in:
geekpi 2019-11-13 08:53:58 +08:00
commit ea290a0e14
14 changed files with 1669 additions and 302 deletions

View File

@ -0,0 +1,63 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11565-1.html)
[#]: subject: (Why containers and Kubernetes have the potential to run almost anything)
[#]: via: (https://opensource.com/article/19/6/kubernetes-potential-run-anything)
[#]: author: (Scott McCarty https://opensource.com/users/fatherlinux)
为什么容器和 Kubernetes 有潜力运行一切
======
> 不仅可以部署简单的应用程序,还可以用 Kubernetes 运维器应对第 2 天运维。
![](https://img.linux.net.cn/data/attachment/album/201911/12/011140mp75sd0ynppd77da.jpg)
在我的第一篇文章 [为什么说 Kubernetes 是一辆翻斗车][2] 中,我谈到了 Kubernetes 如何在定义、分享和运行应用程序方面很出色,类似于翻斗车在移动垃圾方面很出色。在第二篇中,[如何跨越 Kubernetes 学习曲线][3],我解释了 Kubernetes 的学习曲线实际上与运行任何生产环境中的应用程序的学习曲线相同,这确实比学习所有传统组件要容易(如负载均衡器、路由器、防火墙、交换机、集群软件、集群文件系统等)。这就是 DevOps开发人员和运维人员之间的合作用于指定事物在生产环境中的运行方式这意味着双方都需要学习。在第三篇 [Kubernetes 基础:首先学习如何使用][4] 中,我重新设计了 Kubernetes 的学习框架,重点是驾驶翻斗车而不是制造或装备翻斗车。在第四篇文章 [帮助你驾驭 Kubernetes 的 4 个工具][5] 中,我分享了我喜爱的工具,这些工具可帮助你在 Kubernetes 中构建应用程序(驾驶翻斗车)。
在这最后一篇文章中,我会分享我对在 Kubernetes 上运行应用程序的未来如此兴奋的原因。
从一开始Kubernetes 就能够很好地运行基于 Web 的容器化工作负载。Web 服务器、Java 和相关的应用程序服务器PHP、Python 等之类的工作负载都可以正常工作。该平台处理诸如 DNS、负载均衡和 SSH由 `kubectl exec` 取代)之类的支持服务。在我的职业生涯的大部分时间里,这些都是我在生产环境中运行的工作负载,因此,我立即意识到了使用 Kubernetes 运行生产环境工作负载的强大力量,这超越了 DevOps也超越了敏捷。即使我们几乎不改变文化习惯也可以提高效率。调试和退役变得非常容易而这对于传统 IT 来说是极为困难的。因此从早期开始Kubernetes 就用一种单一的配置语言Kube YAML/JSON为我提供了对生产环境工作负载进行建模所需的所有基本原语。
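作为一个简单的示意(其中的 Pod 名称是假设的),上文提到的“`kubectl exec` 取代 SSH”大致是这样使用的

```shell
# 列出正在运行的 Pod实际名称以你的集群为准
kubectl get pods

# 在名为 my-web-server-7d4b9假设的 Pod 中打开一个交互式 shell
# 作用类似于以前对服务器进行 SSH 登录(假设镜像中带有 /bin/sh
kubectl exec -it my-web-server-7d4b9 -- /bin/sh

# 也可以只执行一条命令而不进入交互会话:
kubectl exec my-web-server-7d4b9 -- cat /etc/os-release
```

这些命令需要一个可用的 Kubernetes 集群和相应的 `kubectl` 上下文才能运行,这里仅作演示。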
但是,如果你需要运行具有复制功能的多主 MySQL会发生什么情况使用 Galera 的冗余数据呢?你如何进行快照和备份?那么像 SAP 这样复杂的工作负载呢?使用 Kubernetes简单的应用程序Web 服务器等)的第 0 天(部署)相当简单,但是复杂工作负载的第 2 天运维并没有得到解决。这并不是说,具有复杂工作负载的第 2 天运维要比传统 IT 难解决,而是使用 Kubernetes 并没有使它们变得更容易。每个用户都得自己想出天才的办法来解决这些问题,这基本上是当今的现状。在过去的五年中,我遇到的第一类问题就是复杂工作负载的第 2 天运维。LCTT 译注:在软件生命周期中,第 0 天是指软件的设计阶段;第 1 天是指软件的开发和部署阶段;第 2 天是指生产环境中的软件运维阶段。)
值得庆幸的是,随着 Kubernetes <ruby>运维器<rt>Operator</rt></ruby>的出现,这种情况正在改变。随着运维器的出现,我们现在有了一个框架,可以将第 2 天的运维知识汇总到平台中。现在,我们可以应用我在 [Kubernetes 基础:首先学习如何使用][4] 中描述的相同的“定义状态、实际状态”的方法,来定义、自动化和维护各种各样的系统管理任务。
LCTT 译注: Operator 是 Kubernetes 中的一种可以完成运维工程师的特定工作的组件,业界大多没有翻译这个名词,此处仿运维工程师例首倡翻译为“运维器”。)
我经常将运维器称为“系统管理机器人”,因为它们实质上是把第 2 天的一堆运维知识整理成了代码,这些知识是<ruby>主题专家<rt>Subject Matter Expert</rt></ruby>SME例如数据库管理员或系统管理员针对某种工作负载类型数据库、Web 服务器等)所掌握的,通常会记录在 Wiki 中的某个地方。这些知识放在 Wiki 中的问题是,为了将该知识应用于解决问题,我们需要:
1. 生成事件,通常监控系统会发现故障,然后我们创建故障单
2. SME 人员必须对此问题进行调查,即使这是我们之前见过几百万次的问题
3. SME 人员必须执行该知识(执行备份/还原、配置 Galera 或事务复制等)
通过运维器,所有这些 SME 知识都可以嵌入到单独的容器镜像中,该镜像在有实际工作负载之前就已部署。我们部署运维器容器,然后由运维器部署和管理一个或多个工作负载实例。然后,我们使用“运维器生命周期管理器”Katacoda 教程)之类的方法来管理运维器。
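下面是一个假想的示例其中的运维器、API 组和字段名均为虚构),演示了部署运维器之后,用户如何用一个自定义资源来声明工作负载的“定义状态”,把备份等第 2 天运维任务交给运维器处理:

```yaml
# 假想的自定义资源:由(虚构的)MySQL 运维器监视并调谐
apiVersion: example.com/v1        # 虚构的 API 组
kind: MySQLCluster                # 虚构的资源类型
metadata:
  name: my-db
spec:
  replicas: 3                     # 定义状态:三节点复制集群
  version: "8.0"
  backup:
    schedule: "0 2 * * *"         # 第 2 天运维:每天 02:00 自动备份
    retention: 7                  # 保留 7 份备份
```

运维器会持续对比这份“定义状态”与集群中的“实际状态”,并自动执行扩容、备份、故障恢复等原本需要 SME 手工完成的操作。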
因此,随着我们进一步使用 Kubernetes我们不仅简化了应用程序的部署而且简化了整个生命周期的管理。运维器还为我们提供了工具可以管理具有深层配置要求群集、复制、修复、备份/还原)的非常复杂的有状态应用程序。而且,最好的地方是,构建容器的人员可能是做第 2 天运维的主题专家,因此现在他们可以将这些知识嵌入到操作环境中。
### 本系列的总结
Kubernetes 的未来是光明的,就像之前的虚拟化一样,工作负载的扩展是不可避免的。学习如何驾驭 Kubernetes 可能是开发人员或系统管理员可以对自己的职业发展做出的最大投资。随着工作负载的增多,职业机会也将增加。因此,去驾驶这辆令人惊叹的、[在移动垃圾时非常优雅的翻斗车][2] 吧……
你可能想在 Twitter 上关注我,我在 [@fatherlinux][6] 上分享有关此主题的很多内容。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/6/kubernetes-potential-run-anything
作者:[Scott McCarty][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/fatherlinux
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_progress_cycle_momentum_arrow.png?itok=q-ZFa_Eh (arrows cycle symbol for failing faster)
[2]: https://opensource.com/article/19/6/kubernetes-dump-truck
[3]: https://opensource.com/article/19/6/kubernetes-learning-curve
[4]: https://opensource.com/article/19/6/kubernetes-basics
[5]: https://opensource.com/article/19/6/tools-drive-kubernetes
[6]: https://twitter.com/fatherlinux


@ -0,0 +1,57 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11564-1.html)
[#]: subject: (Cloning a MAC address to bypass a captive portal)
[#]: via: (https://fedoramagazine.org/cloning-a-mac-address-to-bypass-a-captive-portal/)
[#]: author: (Esteban Wilson https://fedoramagazine.org/author/swilson/)
克隆 MAC 地址来绕过强制门户
======
![][1]
如果你曾经在家和办公室之外连接到 WiFi那么通常会看到一个门户页面。它可能会要求你接受服务条款或其他协议才能访问。但是当你无法通过这类门户进行连接时会发生什么本文向你展示了如何在 Fedora 上使用 NetworkManager 在某些故障情况下让你仍然可以访问互联网。
### 强制门户如何工作
强制门户是新设备连接到网络时显示的网页。当用户首次访问互联网时,门户网站会捕获所有网页请求并将其重定向到单个门户页面。
然后,页面要求用户采取一些措施,通常是同意使用政策。用户同意后,他们可以向 RADIUS 或其他类型的身份验证系统进行身份验证。简而言之,强制门户根据设备的 MAC 地址和终端用户接受条款来注册和授权设备。MAC 地址是附加到任何网络接口的[基于硬件的值][2],例如 WiFi 芯片或卡。)
有时设备无法加载强制门户来进行身份验证和授权以使用 WiFi 接入。这种情况的例子包括移动设备和游戏机Switch、PlayStation 等)。当连接到互联网时,它们通常不会打开强制门户页面。连接到酒店或公共 WiFi 接入点时,你可能会看到这种情况。
不过,你可以在 Fedora 上使用 NetworkManager 来解决这些问题。Fedora 可以使你临时克隆要连接的设备的 MAC 地址,并代表该设备通过强制门户进行身份验证。你需要得到连接设备的 MAC 地址。通常,它被打印在设备上的某个地方并贴上标签。它是一个六字节的十六进制值,因此看起来类似 `4A:1A:4C:B0:38:1F`。通常,你也可以通过设备的内置菜单找到它。
### 使用 NetworkManager 克隆
首先,打开 `nm-connection-editor`,或通过“设置”打开 WiFi 设置。然后,你可以使用 NetworkManager 进行克隆:
* 对于以太网:选择已连接的以太网连接。然后选择 “Ethernet” 选项卡。记录或复制当前的 MAC 地址。在 “<ruby>克隆 MAC 地址<rt>Cloned MAC address</rt></ruby>” 字段中输入游戏机或其他设备的 MAC 地址。
* 对于 WiFi选择 WiFi 配置名。然后选择 “WiFi” 选项卡。记录或复制当前的 MAC 地址。在 “<ruby>克隆 MAC 地址<rt>Cloned MAC address</rt></ruby>” 字段中输入游戏机或其他设备的 MAC 地址。
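如果你更喜欢命令行,也可以用 `nmcli` 完成同样的克隆操作。下面是一个示例(其中的连接名 “hotel-wifi”、网卡名 `wlp2s0` 和 MAC 地址均为假设):

```shell
# 查看网卡当前的原始 MAC 地址wlp2s0 为假设的网卡名)
nmcli device show wlp2s0 | grep HWADDR

# 把游戏机的 MAC 地址克隆到 WiFi 连接配置上
nmcli connection modify hotel-wifi 802-11-wireless.cloned-mac-address 4A:1A:4C:B0:38:1F
# (以太网连接则使用 802-3-ethernet.cloned-mac-address 属性)

# 重新激活连接,使克隆生效
nmcli connection up hotel-wifi

# 完成门户认证后,清空克隆设置以恢复原始 MAC 地址
nmcli connection modify hotel-wifi 802-11-wireless.cloned-mac-address ""
nmcli connection up hotel-wifi
```

这些命令需要在装有 NetworkManager 的系统上以足够的权限运行,此处仅作演示。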
### 启动所需的设备
当 Fedora 系统使用该以太网或 WiFi 配置连接时,克隆的 MAC 地址将用于请求 IP 地址,并加载强制门户。输入所需的凭据和/或选择用户协议。该 MAC 地址将获得授权。
现在,断开 WiFi 或以太网配置的连接,然后将 Fedora 系统的 MAC 地址更改回其原始值。然后启动游戏机或其他设备。该设备现在应该可以访问互联网了,因为它的网络接口已通过你的 Fedora 系统进行了授权。
不过,这不是 NetworkManager 全部能做的。例如,请参阅[随机化系统硬件地址][3],来获得更好的隐私保护。
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/cloning-a-mac-address-to-bypass-a-captive-portal/
作者:[Esteban Wilson][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/swilson/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/10/clone-mac-nm-816x345.jpg
[2]: https://en.wikipedia.org/wiki/MAC_address
[3]: https://linux.cn/article-10028-1.html


@ -0,0 +1,90 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11562-1.html)
[#]: subject: (Confirmed! Microsoft Edge Will be Available on Linux)
[#]: via: (https://itsfoss.com/microsoft-edge-linux/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
确认了!微软 Edge 浏览器将发布 Linux 版
======
![](https://img.linux.net.cn/data/attachment/album/201911/11/164600uv7yrbe7gtkxi4xg.jpg)
> 微软正在全面重制其 Edge Web 浏览器,它将基于开源 [Chromium][2] 浏览器。微软还要将新的 Edge 浏览器带到 Linux 桌面上,但是 Linux 版本可能会有所延迟。
微软的 Internet Explorer 曾经一度统治了浏览器市场,但在过去的十年中,它将统治地位丢给了谷歌的 Chrome。
微软试图通过创造 Edge 浏览器来找回失去的位置Edge 是一种使用 EdgeHTML 和 [Chakra 引擎][6]构建的全新 Web 浏览器。它与 Microsoft 的数字助手 [Cortana][7] 和 Windows 10 紧密集成。
但是,它仍然无法夺回冠军位置,截至目前,它处于[桌面浏览器使用份额的第四位][8]。
最近,微软决定通过基于[开源 Chromium 项目][9]重新对 Edge 进行大修。谷歌的 Chrome 浏览器也是基于 Chromium 的。[Chromium 还可以作为独立的 Web 浏览器使用][2],某些 Linux 发行版将其用作默认的 Web 浏览器。
### Linux 上新的微软 Edge Web 浏览器
经过最初的犹豫和不确定性之后,微软似乎最终决定把新的 Edge 浏览器引入到 Linux。
在其年度开发者大会 [Microsoft Ignite][10] 上,[关于 Edge 浏览器的演讲][11]提到了它未来将进入 Linux。
![微软确认 Edge 未来将进入 Linux 中][12]
新的 Edge 浏览器将于 2020 年 1 月 15 日发布,但我认为 Linux 版本会推迟。
### 微软 Edge 进入 Linux 真的重要吗?
微软 Edge 进入 Linux 有什么大不了的吗?毕竟,我们并不缺少[可用于 Linux 的 Web 浏览器][13]。
我认为这与 “微软 Linux 竞争”(如果有这样的事情)有关。微软为 Linux特别是 Linux 桌面)做的任何事情,都会成为新闻。
我还认为 Linux 上的 Edge 对于微软和 Linux 用户都有好处。原因如下。
#### 对于微软有什么用?
当谷歌在 2008 年推出其 Chrome 浏览器时,没有人想到它会在短短几年内占领市场。但是,一家搜索引擎公司为什么要在一个“免费的 Web 浏览器”上投入如此多的精力呢?
答案是:谷歌是一家搜索引擎公司,它希望有更多的人使用其搜索引擎和其他服务,以便从广告服务中获得收入。在 Chrome 中,谷歌是默认的搜索引擎。在 Firefox 和 Safari 等其他浏览器上,谷歌支付了数亿美元来成为默认搜索引擎。如果没有 Chrome谷歌就必须完全依赖其他浏览器。
微软也有一个名为 Bing 的搜索引擎。Internet Explorer 和 Edge 使用 Bing 作为默认搜索引擎。如果更多用户使用 Edge它可以增加将更多用户带到 Bing 的机会。而微软显然希望拥有更多的 Bing 用户。
#### 对 Linux 用户有什么用?
对于 Linux 桌面用户,我看到有两个好处。借助 Edge你可以在 Linux 上使用某些微软特有的产品。例如,微软的流式游戏服务 [xCloud][14] 可能仅能在 Edge 浏览器上使用。另一个好处是提升 [Linux 上的 Netflix 体验][15]。当然,你可以在 Linux 上使用 Chrome 或 [Firefox 观看 Netflix][16],但可能无法获得全高清或超高清流。
据我所知,[全高清和超高清 Netflix 流仅在微软 Edge 上可用][17]。这意味着你可以使用 Linux 上的 Edge 以高清格式享受 Netflix。
### 你怎么看?
你对微软 Edge 进入 Linux 有什么感觉?当 Linux 版本可用时,你会使用吗?请在下面的评论部分中分享你的观点。
--------------------------------------------------------------------------------
via: https://itsfoss.com/microsoft-edge-linux/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/11/microsoft_edge_logo_transparent.png?ssl=1
[2]: https://itsfoss.com/install-chromium-ubuntu/
[3]: https://twitter.com/hashtag/opensource?src=hash&ref_src=twsrc%5Etfw
[4]: https://t.co/Co5Xj3dKIQ
[5]: https://twitter.com/abhishek_foss/status/844666818665025537?ref_src=twsrc%5Etfw
[6]: https://itsfoss.com/microsoft-chakra-core/
[7]: https://www.microsoft.com/en-in/windows/cortana
[8]: https://en.wikipedia.org/wiki/Usage_share_of_web_browsers
[9]: https://www.chromium.org/Home
[10]: https://www.microsoft.com/en-us/ignite
[11]: https://myignite.techcommunity.microsoft.com/sessions/79341?source=sessions
[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/11/Microsoft_Edge_Linux.jpg?ssl=1
[13]: https://itsfoss.com/open-source-browsers-linux/
[14]: https://www.pocket-lint.com/games/news/147429-what-is-xbox-project-xcloud-cloud-gaming-service-price-release-date-devices
[15]: https://itsfoss.com/watch-netflix-in-ubuntu-linux/
[16]: https://itsfoss.com/netflix-firefox-linux/
[17]: https://help.netflix.com/en/node/23742


@ -0,0 +1,61 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (My Linux story: Learning Linux in the 90s)
[#]: via: (https://opensource.com/article/19/11/learning-linux-90s)
[#]: author: (Mike Harris https://opensource.com/users/mharris)
My Linux story: Learning Linux in the 90s
======
This is the story of how I learned Linux before the age of WiFi, when
distributions came in the form of a CD.
![Sky with clouds and grass][1]
Most people probably don't remember where they, the computing industry, or the everyday world were in 1996. But I remember that year very clearly. I was a sophomore in high school in the middle of Kansas, and it was the start of my journey into free and open source software (FOSS).
I'm getting ahead of myself here. I was interested in computers even before 1996. I was born and raised on my family's first Apple ][e, followed many years later by the IBM Personal System/2. (Yes, there were definitely some generational skips along the way.) The IBM PS/2 had a very exciting feature: a 1200 baud Hayes modem.
I don't remember how, but early on, I got the phone number of a local [BBS][2]. Once I dialed into it, I could get a list of other BBSes in the local area, and my adventure into networked computing began.
In 1995, the people [lucky enough][3] to have a home internet connection spent less than 30 minutes a month using it. That internet was nothing like our modern services that operate over satellite, fiber, CATV coax, or any version of copper lines. Most homes dialed in with a modem, which tied up their phone line. (This was also long before cellphones were pervasive, and most people had just one home phone line.) I don't think there were many independent internet service providers (ISPs) back then, although that may have depended upon where you were located, so most people got service from a handful of big names, including America Online, CompuServe, and Prodigy.
And the service you did get was very slow; even at dial-up's peak evolution at 56K, you could only expect a real-world maximum of about 3.5 KB/s. If you wanted to try Linux, downloading a 200MB to 800MB ISO image or (more realistically) a disk image set was a dedication to time, determination, and lack of phone usage.
I went with the easier route: In 1996, I ordered a "tri-Linux" CD set from a major Linux distributor. These tri-Linux disks provided three distributions; mine included Debian 1.1 (the first stable release of Debian), Red Hat Linux 3.0.3, and Slackware 3.1 (nicknamed Slackware '96). As I recall, the discs were purchased from an online store called [Linux Systems Labs][4]. The online store doesn't exist now, but in the 90s and early 00s, such distributors were common. And so were multi-disc sets of Linux. This one's from 1998 but gives you an idea of what they involved:
![A tri-linux CD set][5]
![A tri-linux CD set][6]
On a fateful day in the summer of 1996, while living in a new and relatively rural city in Kansas, I made my first attempt at installing and working with Linux. Throughout the summer of '96, I tried all three distributions on that tri-Linux CD set. They all ran beautifully on my mom's older Pentium 75MHz computer.
I ended up choosing [Slackware][7] 3.1 as my preferred distribution, probably more because of the terminal's appearance than the other, more important reasons one should consider before deciding on a distribution.
I was up and running. I was connecting to an "off-brand" ISP (a local provider in the area), dialing in on my family's second phone line (ordered to accommodate all my internet use). I was in heaven. I had a dual-boot (Microsoft Windows 95 and Slackware 3.1) computer that worked wonderfully. I was still dialing into the BBSes that I knew and loved and playing online BBS games like Trade Wars, Usurper, and Legend of the Red Dragon.
I can remember spending days upon days of time in #Linux on EFNet (IRC), helping other users answer their Linux questions and interacting with the moderation crew.
More than 20 years after taking my first swing at using the Linux OS at home, I am now entering my fifth year as a consultant for Red Hat, still using Linux (now Fedora) as my daily driver, and still on IRC helping people looking to use Linux.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/11/learning-linux-90s
作者:[Mike Harris][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mharris
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus-cloud.png?itok=vz0PIDDS (Sky with clouds and grass)
[2]: https://en.wikipedia.org/wiki/Bulletin_board_system
[3]: https://en.wikipedia.org/wiki/Global_Internet_usage#Internet_users
[4]: https://web.archive.org/web/19961221003003/http://lsl.com/
[5]: https://opensource.com/sites/default/files/20191026_142009.jpg (A tri-linux CD set)
[6]: https://opensource.com/sites/default/files/20191026_142020.jpg (A tri-linux CD set)
[7]: http://slackware.com


@ -0,0 +1,45 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (My first open source contribution: Talk about your pull request)
[#]: via: (https://opensource.com/article/19/11/first-open-source-contribution-communicate-pull-request)
[#]: author: (Galen Corey https://opensource.com/users/galenemco)
My first open source contribution: Talk about your pull request
======
I finally heard back from the project and my code was merged.
![speech bubble that says tell me more][1]
Previously, I wrote about [keeping your code relevant][2] when making a contribution to an open source project. Now, you finally click **Create pull request**. You're elated, you're done.
At first, I didn't even care whether my code would get merged or not. I had done my part. I knew I could do it. The future lit up with the many pull requests that I would make to open source projects.
But of course, I did want my code to become a part of my chosen project, and soon I found myself googling, "How long does it take for an open source pull request to get merged?" The results weren't especially conclusive. Due to the nature of open source (the fact that anyone can participate in it), processes for maintaining projects vary widely. But I found a tweet somewhere that confidently said: "If you don't hear back in two months, you should reach out to the maintainers."
Well, two months came and went, and I heard nothing. I also did not reach out to the maintainers, since talking to people and asking them to critique your work is scary. But I wasn't overly concerned. I told myself that two months was probably an average, so I put it in the back of my mind.
At four months, there was still no response. I opted for the passive approach again. I decided not to try to get in touch with the maintainers, but my reasoning this time was more negative. I started to wonder if some of my earlier assumptions about how actively the project was maintained were wrong—maybe no one was keeping up with incoming pull requests. Or maybe they didn't look at pull requests from random people. I put the issue in the back of my mind again, this time with less hope of ever seeing a result.
I had nearly given up hope entirely and forgotten about the whole thing when, six months after I made my original pull request, I finally heard back. After making a few small changes that they requested, my code was approved and merged. My fifth mistake was giving up on my contribution when I did not hear back and failing to be communicative about my work.
Don't be afraid to communicate about your pull request. Doing so could mean something as simple as adding a comment to your issue that says, "Hey, I'm working on this!" And don't give up hope just because you don't get a response for a while. The amount of time that it takes will vary based on who is maintaining the project and how much time they have to devote to maintaining it.
This story has a happy ending. My code was merged. I hope that by sharing some parts of the experience that tripped me up on my first open source journey, I can smooth the path for some of you who want to explore open source for the first time.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/11/first-open-source-contribution-communicate-pull-request
作者:[Galen Corey][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/galenemco
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSCD_MPL3_520x292_FINAL.png?itok=cp6TbjVI (speech bubble that says tell me more)
[2]: https://opensource.com/article/19/10/my-first-open-source-contribution-relevant-code


@ -0,0 +1,209 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How universities are using open source to attract students)
[#]: via: (https://opensource.com/article/19/11/open-source-universities)
[#]: author: (Joshua Pearce https://opensource.com/users/jmpearce)
How universities are using open source to attract students
======
Many universities have begun new initiatives to attract students that
are excited about technical freedom and open source.
![Open education][1]
Michigan Tech just launched [opensource.mtu.edu][2], a virtual one-stop free shop for all things open source on campus. According to the university's news publication, _[Tech Today][3]_:
> "With the [majority of big companies now contributing to open source projects][4] it is clearly a major trend. [All [major] supercomputers][5] (including our own supercomputer: [Superior][6]), 90% of cloud servers, 82% of smartphones, and 62% of embedded systems run on open source operating systems. More than 70% of internet of things devices also use open source software. 90% of the Fortune Global 500 pay for the open source Linux operating system from Red Hat, a company that makes billions of dollars a year for the service they provide on top of the product that can be downloaded for free."
The publication also says that "the open source hardware movement is [roughly 15 years][7] behind its software counterpart," but it appears to be catching up quickly. Given their mandate to "attract students that are excited about technical freedom and open source," many universities have started a new front in the battle for educational supremacy.
Unlike conventional warfare, this is a battle that benefits the public. The more universities share using the open source paradigm, the faster technology moves forward with all of its concomitant benefits. The resources available through [opensource.mtu.edu][2] include:
* [Thousands of free and open access articles in their Digital Commons][8].
* Free data, including housing the [Free Inactive Patent Search][9], a tool to help find inactive patents that have fallen into the public domain.
* Free open source courses like [FOSS101][10]: Essentials of Free and Open Source Software, which teaches Linux commands and the Git revision control system, or [Open source 3D printing][11], which teaches OpenSCAD, FreeCAD, Blender, Arduino, and RepRap 3D printing.
* Student organizations like the [Open Source Hardware Enterprise][12], which is dedicated to the development and availability of open source hardware, and the [Open Source Club][13], which develops open source software.
* Free software, including the [Astrophysics Source Code Library (ASCL)][14] open repository, which now lists over 2,000 codes and the [Psychology Experiment Building Language (PEBL)][15] software for psychological testing used in laboratories and by clinicians around the world.
* Free hardware, including hundreds of digitally manufactured designs and dozens of complex machines for everything from [plastic recycling systems][16] to [open source lab equipment][17].
Michigan Tech is hardly alone with major initiatives across a broad swath of academia. Open access databases like [Academia][18], [OSF preprints][19], [ResearchGate][20], [PrePrints][21], and [Science Open][22] swell with millions of free, open access, peer-reviewed articles. The Center for Open Science supports the [Open Science Framework][23], which is a "free and open source project management tool that supports researchers throughout their entire project" lifecycle, including storing Gigabytes of data:
![Open Source Framework \(OSF\) workflow.][24]
_Source: [OSF][25]_
You can choose from a wide variety of course options at other institutions as well, and are generally able to take these courses at your own pace:
* Rochester Institute of Technology students can [earn a minor in free and open source software][26] and free culture.
* Many of the world's most renowned colleges and universities offer free courses to self-learners through [OpenCourseWare (OCW)][27]. None of the courses offered through OCW award credit, though. For that, you need to pay.
* Schools like [MIT][28], the University of Notre Dame, Yale, Carnegie Mellon, Delft, Stanford, Johns Hopkins, University of California Berkeley and the Open University (among many more) offer free academic content, such as syllabi, lecture notes, assignments, and examinations.
Many universities also contribute to free and open source software (FOSS) and free and open source hardware (FOSH). In fact, many universities—including American International University West Africa, Brandeis University, Indiana University, and the University of Southern Queensland—are [Open Source Initiative (OSI) Affiliates][29]. The University of Texas even has [formal policies][30] in place for contributing to open source.
### Universities using open source in higher education
In addition, the vast majority of universities use FOSS. [PortalProgramas][31] ranked Tufts University as the top higher education user of FOSS. Even more representative is [Apereo][32], which is a network of universities actively supporting the use of open source in higher education. This network includes a long list of [member institutions][33]:
* American Public University System  
* Beijing Open-mindness Technology Co., Ltd.  
* Blindside Networks  
* Boston University Questrom School of Business  
* Brigham Young University  
* Brock University  
* Brown University  
* California Community Colleges Technology Center
* California State University, Sacramento  
* Cirrus Identity  
* Claremont Colleges  
* Clark County School District  
* Duke University  
* Edalex  
* Educational Service Unit Coordinating Council  
* ELAN e.V.  
* Entornos de Formación S.L (EDF)  
* ETH Zürich  
* Gert Sibande TVET College  
* HEC Montreal  
* Hosei University  
* Hotelschool the Hague  
* IlliniCloud  
* Instructional Media & Magic  
* JISC  
* Kyoto University  
* LAMP
* Learning Experiences  
* Longsight. Inc.  
* MPL, Ltda.  
* Nagoya University  
* New York University  
* North-West University  
* Oakland University  
* OPENCOLLAB
* Oxford University  
* Pepperdine University  
* Princeton University  
* Rice University  
* Roger Williams University  
* Rutgers University  
* Sinclair Community College  
* SWITCH  
* Texas State University, San Marcos  
* Unicon  
* Universidad Politecnica de Valencia  
* Universidad Publica de Navarra  
* Universitat de Lleida  
* Universite de Rennes 1  
* Universite de Valenciennes  
* University of Amsterdam  
* University of California, Berkeley  
* University of Cape Town  
* University of Edinburgh  
* University of Illinois  
* University of Kansas  
* University of Manchester  
* University of Michigan  
* University of North Carolina, Chapel Hill  
* University of Notre Dame  
* University of South Africa UNISA  
* University of Virginia  
* University of Wisconsin-Madison  
* University of Witwatersrand  
* Western University  
* Whitman College
 
Another popular organization is [Kuali][34], which is a nonprofit that produces open source administrative software for higher education institutions. Their members include:
* Boston University
* California State University, Office of the Chancellor
* Colorado State University
* Cornell University
* Drexel University
* Indiana University
* Marist College
* Massachusetts Institute of Technology
* Michigan State University
* North-West University, South Africa
* Research Foundation of The City University of New York
* Stevens Institute of Technology
* Strathmore University
* Tufts University
* University Corporation for Atmospheric Research
* Universidad del Sagrado Corazon
* University of Arizona
* University of California, Davis
* University of California, Irvine
* University of Connecticut
* University of Hawaii
* University of Illinois
* University of Maryland, Baltimore
* University of Maryland, College Park
* University of Toronto
* West Virginia University
Didn't see your favorite university on the list? If that school has been involved in open source, please leave a comment below telling me what your school is doing in open source. If you want to see your favorite school on the list and they aren't doing much in open source, you can encourage them by sending a letter asking the program heads to:
* Institutionalize sharing their research open access in their own Digital Commons and/or use one of the many free repositories.
* Share research data on the Open Science Framework.
* Provide OCW and/or offer courses and programs specifically focused on open source.
* Start and/or expand their use of FOSS and FOSH on campus, and/or join <https://kuali.org/membership> or <https://www.apereo.org/content/apereo-membership>.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/11/open-source-universities
作者:[Joshua Pearce][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jmpearce
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_OER_520x292_FINAL.png?itok=DBCJ4H1s (Open education)
[2]: https://opensource.mtu.edu/
[3]: https://www.mtu.edu/ttoday/?issue=20191022
[4]: https://opensource.com/business/16/5/2016-future-open-source-survey
[5]: https://www.zdnet.com/article/supercomputers-all-linux-all-the-time/
[6]: https://hpc.mtu.edu/
[7]: https://www.mdpi.com/2411-5134/3/3/44
[8]: https://digitalcommons.mtu.edu/
[9]: https://opensource.com/article/17/1/making-us-patent-system-useful-again
[10]: https://mtu.instructure.com/courses/1147020
[11]: https://opensource.com/article/19/2/3d-printing-course
[12]: http://openhardware.eit.mtu.edu/
[13]: http://mtuopensource.club/
[14]: https://ascl.net/
[15]: http://pebl.sourceforge.net/
[16]: https://www.appropedia.org/Recyclebot
[17]: https://www.appropedia.org/Open-source_Lab
[18]: https://www.academia.edu/
[19]: https://cos.io/our-products/osf-preprints/
[20]: https://www.researchgate.net/
[21]: https://www.preprints.org/
[22]: https://www.scienceopen.com/
[23]: https://osf.io/
[24]: https://opensource.com/sites/default/files/uploads/osf_workflow_-_hero.original600_copy_0.png
[25]: https://cdn.cos.io/media/images/OSF_workflow_-_hero.original.png
[26]: http://www.rit.edu/news/story.php?id=50590
[27]: https://learn.org/articles/25_Colleges_and_Universities_Ranked_by_Their_OpenCourseWare.html
[28]: https://ocw.mit.edu/index.htm
[29]: https://opensource.org/affiliates
[30]: https://it.utexas.edu/policies/releasing-software-open-source
[31]: http://www.portalprogramas.com/en/open-source-universities-ranking/about
[32]: https://www.apereo.org/
[33]: https://www.apereo.org/content/apereo-member-organizations
[34]: https://www.kuali.org/


@ -1,241 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Schedule and Automate tasks in Linux using Cron Jobs)
[#]: via: (https://www.linuxtechi.com/schedule-automate-tasks-linux-cron-jobs/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
How to Schedule and Automate tasks in Linux using Cron Jobs
======
Sometimes, you may have tasks that need to be performed on a regular basis or at certain predefined intervals, such as backing up databases, updating the system, or performing periodic reboots. Scheduled tasks like these are referred to as **cron jobs**. Cron jobs are used for **automation of tasks**, and they help simplify the execution of repetitive and sometimes mundane tasks. **Cron** is a daemon that allows you to schedule these jobs, which are then carried out at specified intervals. In this tutorial, you will learn how to schedule jobs using cron.
[![Schedule -tasks-in-Linux-using cron][1]][2]
### The Crontab file
A crontab file, also known as a **cron table**, is a simple text file that contains rules or commands that specify the time interval of execution of a task. There are two categories of crontab files:
**1)  System-wide crontab file**
These are usually used by Linux services & critical applications requiring root privileges. The system crontab file is located at **/etc/crontab** and can only be accessed and edited by the root user. It's usually used for the configuration of system-wide daemons. The crontab file looks as shown:
[![etc-crontab-linux][1]][3]
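For reference, a typical **/etc/crontab** looks roughly like the sketch below (exact contents vary by distro; note the extra user field, which per-user crontabs do not have):

```
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin

# m  h    dom mon dow  user   command
17   *    *   *   *    root   cd / && run-parts /etc/cron.hourly
25   6    *   *   *    root   run-parts /etc/cron.daily
```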
**2) User-created crontab files**
Linux users can also create their own cron jobs with the help of the crontab command. The cron jobs created will run as the user who created them.
All cron jobs are stored in /var/spool/cron (on RHEL and CentOS distros) or /var/spool/cron/crontabs (on Debian and Ubuntu distros), and each crontab file is named after the user who created it.
The **cron daemon** runs silently in the background, checking the **/etc/crontab** file and the **/var/spool/cron** and **/etc/cron.d/** directories.
The **crontab** command is used for editing cron files. Let us take a look at the anatomy of a crontab file.
### The anatomy of a crontab file
Before we go further, it's important that we first explore what a crontab file looks like. The basic syntax for a crontab entry comprises 5 time-and-date fields, represented below by asterisks, followed by the command to be carried out.
*    *    *    *    *    command
This format can also be represented as shown below:
m h d moy dow command
OR
m h d moy dow /path/to/script
Let's expound on each field:
* **m**: The minute, specified from 0 to 59
* **h**: The hour, specified from 0 to 23
* **d**: The day of the month, specified from 1 to 31
* **moy**: The month of the year, specified from 1 to 12
* **dow**: The day of the week, specified from 0 to 6, where 0 = Sunday
* **command**: The command to be executed, e.g. a backup command, a reboot, or a copy
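Putting the fields together, a complete crontab entry that runs a backup script at 2:30 am every Monday would look like this (a sketch; the script path is an illustrative placeholder):

```
# m   h   d   moy  dow   command
  30  2   *   *    1     /usr/local/bin/backup.sh
```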
### Managing cron jobs
Having looked at the anatomy of a crontab file, let's see how you can create, edit, and delete cron jobs.
**Creating cron jobs**
To create or edit a cron job as the root user, run the command
# crontab -e
To create a cron job or schedule a task as another user, use the syntax
# crontab -u username -e
For instance, to run a cron job as user Pradeep, issue the command:
# crontab -u Pradeep -e
If there is no preexisting crontab file, you will get a blank text document. If a crontab file already exists, the -e option lets you edit it.
**Listing crontab files**
To view the cron jobs that have been created, simply pass the -l option as shown
# crontab -l
**Deleting a crontab entry**
To delete a single cron job, run crontab -e, delete the line of the cron job you want to remove, and save the file.
To remove all cron jobs, run the command:
# crontab -r
That said, let's have a look at the different ways you can schedule tasks.
### Crontab examples for scheduling tasks
Scripts executed by cron typically begin with a shebang header, as shown:
#!/bin/bash
This indicates the shell the script runs under, which, in this case, is the bash shell.
Next, specify the interval at which you want to schedule the task, using the crontab fields we covered earlier.
To reboot a system daily at 12:30 pm, use the syntax:
30  12 *  *  * /sbin/reboot
To schedule the reboot at 4:00 am use the syntax:
0  4  *  *  *  /sbin/reboot
**NOTE:** The asterisk * is used to match all possible values for a field.
To run a script twice every day, for example, 4:00 am and 4:00 pm, use the syntax.
0  4,16  *  *  *  /path/to/script
To schedule a cron job to run every Friday at 5:00 pm, use the syntax:
0  17  *  *  Fri  /path/to/script
OR
0  17  *  *  5  /path/to/script
If you wish to run your cron job every 30 minutes then use:
*/30  *  *  *  * /path/to/script
To schedule a cron job to run every 5 hours, use:
0  */5  *  *  *  /path/to/script
To run a script on selected days, for example at 6:00 pm on Wednesday and Friday, execute:
0  18  *  *  wed,fri  /path/to/script
To run multiple tasks from a single cron job, separate the tasks with a semicolon, for example:
*  *  *  *  *  /path/to/script1 ; /path/to/script2
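If the command list grows beyond a couple of tasks, a cleaner pattern is to wrap them in a single script and point one cron entry at it. A minimal sketch follows; the script path and the task commands are illustrative placeholders:

```shell
# Sketch of a wrapper script, e.g. /usr/local/bin/nightly.sh, driven by
# one crontab entry such as:  0 1 * * * /usr/local/bin/nightly.sh
nightly() {
    set -e                          # abort the run if any task fails
    echo "running backup task"      # stand-in for /path/to/script1
    echo "running cleanup task"     # stand-in for /path/to/script2
}

nightly
```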
### Using special strings to save time on writing cron jobs
Some cron jobs can easily be configured using special strings that correspond to certain time intervals. For example:
1) @hourly corresponds to 0 * * * *
It executes a task in the first minute of every hour.
@hourly /path/to/script
2) @daily is equivalent to 0 0 * * *
It executes a task in the first minute of every day (midnight). It comes in handy when executing daily jobs.
@daily /path/to/script
3) @weekly is equivalent to 0 0 * * 0
It executes a cron job in the first minute of every week, where the week begins on Sunday.
@weekly /path/to/script
4) @monthly is equivalent to 0 0 1 * *
It carries out a task in the first minute of the first day of the month.
@monthly /path/to/script
5) @yearly corresponds to 0 0 1 1 *
It executes a task in the first minute of every year and is useful for sending New Year greetings 🙂
@yearly /path/to/script
### Crontab Restrictions
As a Linux user, you can control who has the right to use the crontab command via the **/etc/cron.deny** and **/etc/cron.allow** files. By default, only the /etc/cron.deny file exists, and it does not contain any entries. To restrict a user from using the crontab utility, simply add the user's username to the file. When a user listed in this file tries to run the crontab command, they will encounter the error below.
![restricted-cron-user][1]
To allow the user to continue using the crontab utility,  simply remove the username from the /etc/cron.deny file.
If /etc/cron.allow file is present, then only the users listed in the file can access and use the crontab utility.
If neither file exists, then only the root user will have privileges to use the crontab command.
### Backing up crontab entries
It's always advisable to back up your crontab entries. To do so, redirect the output of crontab -l to a file; the same file can later be handed back to the crontab command to restore the entries.
# crontab -l > /path/to/file.txt
For example,
```
# crontab -l > /home/james/backup.txt
```
**Checking cron logs**
Cron logs are stored in the /var/log/cron file on RHEL-based systems. To view the cron logs, run the command:
```
# cat /var/log/cron
```
![view-cron-log-files-linux][1]
To view live logs, use the tail command as shown:
```
# tail -f /var/log/cron
```
![view-live-cron-logs][1]
**Conclusion**
In this guide, you learned how to create cron jobs to automate repetitive tasks, how to back up your crontab entries, and how to view cron logs. We hope this article provided useful insights into cron jobs. Please don't hesitate to share your feedback and comments.
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/schedule-automate-tasks-linux-cron-jobs/
作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/11/Schedule-tasks-in-Linux-using-cron.jpg
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/11/etc-crontab-linux.png


@ -0,0 +1,164 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (7 Best Open Source Tools that will help in AI Technology)
[#]: via: (https://opensourceforu.com/2019/11/7-best-open-source-tools-that-will-help-in-ai-technology/)
[#]: author: (Nitin Garg https://opensourceforu.com/author/nitin-garg/)
7 Best Open Source Tools that will help in AI Technology
======
[![][1]][2]
_Artificial intelligence is an exceptional, forward-looking technology. In this progressive era, it is capturing the attention of multinational organizations. Popular names in the industry like Google, IBM, Facebook, Amazon, and Microsoft are constantly investing in this new-age technology._
Artificial intelligence helps you anticipate business needs and take research and development to another level. This advanced technology is becoming an integral part of research and development in organizations, offering ultra-intelligent solutions. It helps you maintain accuracy and increase productivity with better results.
Open-source AI tools and technologies are capturing the attention of every industry by delivering frequent and accurate results. These tools help you analyse your performance while giving you a boost to generate greater revenue.
Without further ado, here we have listed some of the best open-source tools to help you understand artificial intelligence better.
**1\. TensorFlow**
TensorFlow is an open-source machine learning framework for artificial intelligence. It was developed to conduct machine learning and deep learning for research and production. TensorFlow lets developers create dataflow graphs: structures that describe how data moves through a network of processing nodes, where the data passed between nodes takes the form of multidimensional arrays, or tensors.
TensorFlow is an exceptional tool that offers countless advantages.
* Simplifies numeric computation
* Offers flexibility across multiple models
* Improves business efficiency
* Highly portable
* Automatic differentiation capabilities
**2\. Apache SystemML**
Apache SystemML is a popular open-source machine learning platform created by IBM that offers a favourable workspace for big data. It runs efficiently on Apache Spark and automatically scales your data while determining whether your code should run on the driver or on an Apache Spark cluster. Its features make it stand out in the industry:
* Algorithm customization
* Multiple execution modes
* Automatic optimization
It also supports deep learning, enabling developers to implement machine learning code and optimize it more effectively.
**3\. OpenNN**
OpenNN is an open-source artificial intelligence neural network library for advanced analytics. It helps you develop robust models in C++ and Python, and contains algorithms and utilities for machine learning tasks like forecasting and classification. It also covers regression and association, providing high performance and driving technological evolution in the industry.
It has numerous useful features, such as:
* Digital Assistance
* Predictive Analysis
* Fast Performance
* Virtual Personal Assistance
* Speech Recognition
* Advanced Analytics
It helps you design advanced solutions that implement data mining methods for fruitful results.
**4\. Caffe**
Caffe (Convolutional Architecture for Fast Feature Embedding) is an open-source deep learning framework that prizes speed, modularity, and expressiveness. Caffe was originally developed at the Berkeley Vision and Learning Center at the University of California, Berkeley; it is written in C++ with a Python interface and runs smoothly on Linux, macOS, and Windows.
Some of the key features of Caffe that help in AI technology:
1. Expressive Architecture
2. Extensive Code
3. Large Community
4. Active Development
5. Speedy Performance
It helps you drive innovation and stimulate growth. Make full use of this tool to get the desired results.
**5\. Torch**
Torch is an open-source machine learning library that helps you simplify complex tasks like serialization and object-oriented programming by offering many convenient functions. It offers the utmost flexibility and speed for machine learning projects. Torch is written in the scripting language Lua with an underlying C implementation, and it is used in many organizations and research labs.
Torch has countless advantages, such as:
* Fast and effective GPU support
* Linear algebra routines
* Support for the iOS and Android platforms
* Numeric optimization routines
* N-dimensional arrays
**6\. Accord.NET**
Accord.NET is a renowned free, open-source AI development framework. It is a set of libraries for audio and image processing written in C#. From computer vision to computer audition, signal processing, and statistics applications, it helps you build everything, including for commercial use. It comes with a comprehensive set of sample applications for getting up and running quickly, and an extensive range of libraries.
You can develop advanced apps with Accord.NET using attention-grabbing features like:
* Statistical Analysis
* Data Ingestions
* Adaptive
* Deep Learning
* Second-order neural network learning algorithms
* Digital assistance and multi-language support
* Speech recognition
**7\. Scikit-Learn**
Scikit-learn is a popular open-source tool for AI work. It is a valuable library for machine learning in Python, with efficient tools for machine learning and statistical modelling, including classification, clustering, regression, and dimensionality reduction.
Let's look at more of scikit-learn's features:
* Cross-validation
* Clustering and Classification
* Manifold Learning
* Machine Learning
* Virtual process Automation
* Workflow Automation
From preprocessing to model selection, scikit-learn helps you take care of everything. It simplifies the complete pipeline from data mining to data analysis.
**Final Thoughts**
These are some of the popular open-source AI tools that provide a comprehensive range of features. Before developing a new-age application, one must select one of these tools and work accordingly. They provide advanced artificial intelligence solutions that keep recent trends in mind.
Artificial intelligence is used globally and is marking its presence all around the world. With applications like Amazon Alexa and Siri, AI is providing customers with the ultimate user experience, offering significant benefits and capturing users' attention. Across industries like healthcare, banking, finance, and e-commerce, artificial intelligence is contributing to growth and productivity while saving a lot of time and effort.
Select any one of these open-source tools for a better user experience and unbelievable results. It will help you grow and get better results in terms of quality and security.
![Avatar][3]
[Nitin Garg][4]
The author is the CEO and co-founder of BR Softech, a [business intelligence software company][5]. He likes to share his opinions on the IT industry via blogs, and is interested in writing about the latest and most advanced IT technologies, including IoT, VR and AR app development, and web and app development services. He also offers consultancy services for RPA, Big Data, and Cyber Security.
[![][6]][7]
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/11/7-best-open-source-tools-that-will-help-in-ai-technology/
作者:[Nitin Garg][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/nitin-garg/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2018/05/Artificial-Intelligence_EB-June-17.jpg?resize=696%2C464&ssl=1 (Artificial Intelligence_EB June 17)
[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2018/05/Artificial-Intelligence_EB-June-17.jpg?fit=1000%2C667&ssl=1
[3]: https://secure.gravatar.com/avatar/d4e6964b80590824b981f06a451aa9e6?s=100&r=g
[4]: https://opensourceforu.com/author/nitin-garg/
[5]: https://www.brsoftech.com/bi-consulting-services.html
[6]: https://opensourceforu.com/wp-content/uploads/2019/11/assoc.png
[7]: https://feedburner.google.com/fb/a/mailverify?uri=LinuxForYou&loc=en_US


@ -0,0 +1,129 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Managing software and services with Cockpit)
[#]: via: (https://fedoramagazine.org/managing-software-and-services-with-cockpit/)
[#]: author: (Shaun Assam https://fedoramagazine.org/author/sassam/)
Managing software and services with Cockpit
======
![][1]
The Cockpit series continues to focus on tools users and administrators can use to perform everyday tasks within the web user interface. So far we've covered [introducing the user interface][2], [storage][3] and [network management][4], and [user accounts][5]. This article will highlight how Cockpit handles software and services.
The Applications and Software Updates menu options are available through Cockpit's PackageKit feature. To install it from the command line, run:
```
sudo dnf install cockpit-packagekit
```
For [Fedora Silverblue][6], [Fedora CoreOS][7], and other ostree-based operating systems, install the _cockpit-ostree_ package and reboot the system:
```
sudo rpm-ostree install cockpit-ostree; sudo systemctl reboot
```
### Software updates
On the main screen, Cockpit notifies the user whether the system is updated, or if any updates are available. Click the **Updates Available** link on the main screen, or **Software Updates** in the menu options, to open the updates page.
#### RPM-based updates
The top of the screen displays general information such as the number of updates and the number of security-only updates. It also shows when the system was last checked for updates, along with a button to perform the check; this button is equivalent to the command **sudo dnf check-update**.
Below is the **Available Updates** section, which lists the packages requiring updates. Furthermore, each package displays the name, version, and best of all, the severity of the update. Clicking a package in the list provides additional information such as the CVE, the Bugzilla ID, and a brief description of the update. For details about the CVE and related bugs, click their respective links.
Also, one of the best features about Software Updates is the option to only install security updates. Distinguishing which updates to perform makes it simple for those who may not need, or want, the latest and greatest software installed. Of course, one can always use [Red Hat Enterprise Linux][8] or [CentOS][9] for machines requiring long-term support.
The example below demonstrates how Cockpit applies RPM-based updates.
![][10]
#### OSTree-based updates
The popular article [What is Silverblue][11] states:
> OSTree is used by rpm-ostree, a hybrid package/image based system… It atomically replicates a base OS and allows the user to “layer” the traditional RPM on top of the base OS if needed.
Because of this setup, Cockpit uses a snapshot-like layout for these operating systems. As seen in the demo below, the top of the screen displays the repository (_fedora_), the base OS image, and a button to **Check for Updates**.
Clicking the repository name (_fedora_ in the demo below) opens the **Change Repository** screen. From here one can **Add New Repository**, or click the pencil icon to edit an existing repository. Editing provides the option to delete the repository, or **Add Another Key**. To add a new repository, enter the name and URL. Also, select whether or not to **Use trusted GPG key**.
There are three categories that provide details of its respective image: Tree, Packages, and Signature. **Tree** displays basic information such as the operating system, version of the image, how long ago it was released, and the origin of the image. **Packages** displays a list of installed packages within that image. **Signature** verifies the integrity of the image such as the author, date, RSA key ID, and status.
The current, or running, image displays a green check-mark beside it. If something happens, or an update causes an issue, click the **Roll Back and Reboot** button. This restores the system to a previous image.
![][12]
### Applications
The **Applications** screen displays a list of add-ons available for Cockpit. This makes it easy to find and install the plugins required by the user. At the time of this article, some of the options include the 389 Directory Service, Fleet Commander, and Subscription Manager. The demo below shows a complete list of available Cockpit add-ons.
Also, each item displays the name, a brief description, and a button to install, or remove, the add-on. Furthermore, clicking the item displays more information (if available). To refresh the list, click the icon at the top-right corner.
![][13]
### Subscription Management
Subscription managers allow admins to attach subscriptions to the machine. Subscriptions also give admins control over user access to content and packages; one example is the well-known [Red Hat subscription model][14]. This feature works in tandem with the **subscription-manager** command.
The Subscriptions add-on can be installed via Cockpits Applications menu option. It can also be installed from the command-line with:
```
sudo dnf install cockpit-subscriptions
```
To begin, click **Subscriptions** in the main menu. If the machine is currently unregistered, it opens the **Register System** screen. Next, select the URL. You can choose **Default**, which uses Red Hats subscription server, or enter a **Custom URL**. Enter the **Login**, **Password**, **Activation Key**, and **Organization** ID. Finally, to complete the process, click the **Register** button.
The main page for Subscriptions shows whether the machine is registered, the System Purpose, and a list of installed products.
![][15]
### Services
To start, click the **Services** menu option. Because Cockpit uses _[systemd][16]_, we get the options to view **System Services**, **Targets**, **Sockets**, **Timers**, and **Paths**. Cockpit also provides an intuitive interface to help users search and find the service they want to configure. Services can also be filtered by their state: **All**, **Enabled**, **Disabled**, or **Static**. Below this is the list of services. Each row displays the service name, description, state, and automatic startup behavior.
For example, lets take _bluetooth.service_. Typing _bluetooth_ in the search bar automatically displays the service. Now, select the service to view the details of that service. The page displays the status and path of the service file. It also displays information in the service file such as the requirements and conflicts. Finally, at the bottom of the page, are the logs pertaining to that service.
Also, users can quickly start and stop a service by toggling the switch beside the service name. The three dots to the right of that switch expand the options to **Enable**, **Disable**, or **Mask/Unmask** the service.
To learn more about _systemd_, check out the series in the Fedora Magazine starting with [What is an init system?][17]
![][18]
In the next article well explore the security features available in Cockpit.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/managing-software-and-services-with-cockpit/
作者:[Shaun Assam][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/sassam/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/11/cockpit-sw-services-816x345.jpg
[2]: https://fedoramagazine.org/cockpit-and-the-evolution-of-the-web-user-interface/
[3]: https://fedoramagazine.org/performing-storage-management-tasks-in-cockpit/
[4]: https://fedoramagazine.org/managing-network-interfaces-and-firewalld-in-cockpit/
[5]: https://fedoramagazine.org/managing-user-accounts-with-cockpit/
[6]: https://silverblue.fedoraproject.org/
[7]: https://getfedora.org/en/coreos/
[8]: https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux?intcmp=701f2000001OEGhAAO
[9]: https://www.centos.org/
[10]: https://fedoramagazine.org/wp-content/uploads/2019/11/cockpit-software-updates-rpm.gif
[11]: https://fedoramagazine.org/what-is-silverblue/
[12]: https://fedoramagazine.org/wp-content/uploads/2019/11/cockpit-software-updates-ostree.gif
[13]: https://fedoramagazine.org/wp-content/uploads/2019/11/cockpit-applications.gif
[14]: https://www.redhat.com/en/about/value-of-subscription
[15]: https://fedoramagazine.org/wp-content/uploads/2019/11/cockpit-subscriptions.gif
[16]: https://fedoramagazine.org/series/systemd-series/
[17]: https://fedoramagazine.org/what-is-an-init-system/
[18]: https://fedoramagazine.org/wp-content/uploads/2019/11/cockpit-services.gif


@ -0,0 +1,214 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Create Affinity and Anti-Affinity Policy in OpenStack)
[#]: via: (https://www.linuxtechi.com/create-affinity-anti-affinity-policy-openstack/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
How to Create Affinity and Anti-Affinity Policy in OpenStack
======
In organizations where **OpenStack** is used heavily, application and database teams may require that their application and database instances be launched either on the same **compute node** (hypervisor) or on different compute nodes.
[![OpenStack-VMs-Affinity-AntiAffinity-Policy][1]][2]
In OpenStack, this requirement is fulfilled via **server groups** with **affinity** and **anti-affinity** policies. A server group is used to control the affinity and anti-affinity rules for scheduling OpenStack instances.
When we provision virtual machines in an affinity server group, all of them are launched on the same compute node. When VMs are provisioned in an anti-affinity server group, they are all launched on different compute nodes. In this article we will demonstrate how to create OpenStack server groups with affinity and anti-affinity rules.
Let's first verify whether your OpenStack setup supports affinity and anti-affinity policies; execute the following grep command on your controller nodes:
```
# grep -i "scheduler_default_filters" /etc/nova/nova.conf
```
The output should look something like this:
![Affinity-AntiAffinity-Filter-Nova-Conf-OpenStack][1]
As we can see, the affinity and anti-affinity filters are enabled. If they are not, add these filters to the "**scheduler_default_filters**" parameter in the **/etc/nova/nova.conf** file on the controller nodes.
```
# vi /etc/nova/nova.conf
………………
scheduler_default_filters=xx,xxx,xxx,xxxxx,xxxx,xxx,xxx,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,xx,xxx,xxxx,xx
………………
```
Save and exit the file.
To make the above changes take effect, restart the following services:
```
# systemctl restart openstack-nova-scheduler
# systemctl restart openstack-nova-conductor
```
Now let's create OpenStack server groups with affinity and anti-affinity policies.
### Server Group with Affinity Policy
To create a server group named "app" with the affinity policy, execute the following openstack command from the controller node.
**Syntax:**
# openstack server group create --policy affinity &lt;Server-Group-Name&gt;
Or
# nova server-group-create &lt;Server-Group-Name&gt; affinity
**Note:** Before executing any openstack commands, please make sure you source your project credential file; in my case the credential file is "**openrc**".
Example:
```
# source openrc
# openstack server group create --policy affinity app
```
### Server Group with Anti-Affinity Policy
To create a server group with the anti-affinity policy, execute the following openstack command from the controller node. I am assuming the server group name is "database".
**Syntax:**
# openstack server group create --policy anti-affinity &lt;Server-Group-Name&gt;
Or
# nova server-group-create &lt;Server-Group-Name&gt; anti-affinity
Example:
```
# source openrc
# openstack server group create --policy anti-affinity database
```
### List Server Groups ID and Policies
Execute either the nova command or the openstack command to get the server group IDs and their policies:
```
# nova server-group-list | grep -Ei "Policies|database"
Or
# openstack server group list --long | grep -Ei "Policies|app|database"
```
The output will look something like this:
![Server-Group-Policies-OpenStack][1]
### [Launch Virtual Machines (VMs)][3] with Affinity Policy
Let's assume we want to launch 4 VMs with the affinity policy; run the following "**openstack server create**" command.
**Syntax:**
# openstack server create --image &lt;img-name&gt; --flavor &lt;id-or-flavor-name&gt; --security-group &lt;security-group-name&gt; --nic net-id=&lt;network-id&gt; --hint group=&lt;Server-Group-ID&gt; --max &lt;number-of-vms&gt; &lt;VM-Name&gt;
**Example:**
```
# openstack server create --image Cirros --flavor m1.small --security-group default --nic net-id=37b9ab9a-f198-4db1-a5d6-5789b05bfb4c --hint group="a9847c7f-b7c2-4751-9c9a-03b117e704ff" --max 4 affinity-test
```
The output of the above command:
![OpenStack-Server-create-with-hint-option][1]
Let's verify whether the VMs were launched on the same compute node; run the following command:
```
# openstack server list --long -c Name -c Status -c Host -c "Power State" | grep -i affinity-test
```
![Affinity-VMs-Status-OpenStack][1]
This confirms that our affinity policy is working fine, as all the VMs were launched on the same compute node.
Now let's test the anti-affinity policy.
### Launch Virtual Machines (VMs) with Anti-Affinity Policy
For the anti-affinity policy we will launch 4 VMs; in the openstack server create command above, we need to substitute the anti-affinity server group ID. In our case we will use the database server group ID.
Run the following openstack command to launch 4 VMs on different compute nodes with the anti-affinity policy:
```
# openstack server create --image Cirros --flavor m1.small --security-group default --nic net-id=37b9ab9a-f198-4db1-a5d6-5789b05bfb4c --hint group="498fd41b-8a8a-497a-afd8-bc361da2d74e" --max 4 anti-affinity-test
```
Output
![Openstack-server-create-anti-affinity-hint-option][1]
Use the openstack command below to verify whether the VMs were launched on different compute nodes:
```
# openstack server list --long -c Name -c Status -c Host -c "Power State" | grep -i anti-affinity-test
```
![Anti-Affinity-VMs-Status-OpenStack][1]
The above output confirms that our anti-affinity policy is also working fine.
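To check this without eyeballing the table, you can count the distinct compute hosts in the server listing. Below is a small sketch; the `count_hosts` helper and the sample listing are illustrative, and in practice you would pipe the `openstack server list --long` output into it:

```shell
# count_hosts: count distinct compute hosts in `openstack server list --long
# -c Name -c Status -c Host -c "Power State"` output, for rows matching a pattern.
count_hosts() {
    awk -F'|' -v p="$1" '$0 ~ p { gsub(/ /, "", $4); if (!($4 in seen)) { seen[$4] = 1; n++ } } END { print n + 0 }'
}

# Sample listing for the four anti-affinity VMs (host names are made up):
listing='| anti-affinity-test-1 | ACTIVE | compute-1 | Running |
| anti-affinity-test-2 | ACTIVE | compute-2 | Running |
| anti-affinity-test-3 | ACTIVE | compute-3 | Running |
| anti-affinity-test-4 | ACTIVE | compute-4 | Running |'

printf '%s\n' "$listing" | count_hosts anti-affinity-test   # should print 4
```

If the count equals the number of VMs, every instance landed on a different hypervisor, which is exactly what the anti-affinity policy promises.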
**Note:** The default server group quota is 10 for every tenant, which means we can launch at most 10 VMs inside a server group.
Use the command below to view the server group quota for a specific tenant; replace the tenant ID with one that suits your setup:
```
# openstack quota show f6852d73eaee497a8a640757fe02b785 | grep -i server_group
| server_group_members | 10 |
| server_groups | 10 |
#
```
To update the server group quota, execute the following commands:
```
# nova quota-update --server-group-members 15 f6852d73eaee497a8a640757fe02b785
# nova quota-update --server-groups 15 f6852d73eaee497a8a640757fe02b785
```
Now re-run the openstack quota command to verify server group quota
```
# openstack quota show f6852d73eaee497a8a640757fe02b785 | grep -i server_group
| server_group_members | 15 |
| server_groups | 15 |
#
```
That's all. We have successfully updated the server group quota for the tenant. This concludes the article; please don't hesitate to share it with your technical friends.
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/create-affinity-anti-affinity-policy-openstack/
作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/11/OpenStack-VMs-Affinity-AntiAffinity-Policy.jpg
[3]: https://www.linuxtechi.com/create-delete-virtual-machine-command-line-openstack/


@ -0,0 +1,200 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Bash Script to Monitor Disk Space Usage on Multiple Remote Linux Systems With eMail Alert)
[#]: via: (https://www.2daygeek.com/linux-bash-script-to-monitor-disk-space-usage-on-multiple-remote-linux-systems-send-email/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
Bash Script to Monitor Disk Space Usage on Multiple Remote Linux Systems With eMail Alert
======
Some time ago, we wrote a **[Bash script to monitor disk space usage on a Linux][1]** system with an email alert.
That script works on a single machine; you have to place it on each machine you want to monitor.
If you want disk space usage alerts on multiple computers at the same time, that script does not help you.
So we have written this new **[shell script][2]** to achieve this.
To do so, you need a JUMP server (centralized server) that can communicate with any other computer without a password.
This means that password-less SSH authentication must be set up as a prerequisite.
When the prerequisite is complete, run the script on the JUMP server.
Finally add a **[cronjob][3]** to completely automate this process.
Three shell scripts are included in this article; choose the one that suits your needs.
### 1) Bash Script-1: Bash Script to Check Disk Space Usage on Multiple Remote Linux Systems and Print Output on Terminal
This **[bash script][4]** checks the disk space usage on the given remote machines and prints any filesystem that exceeds the specified threshold to the terminal.
In this example, we set the threshold to 80% for testing purposes; you can adjust this limit to suit your needs.
The later scripts also send this report by email; replace our email ID with yours to receive those alerts.
```
# vi /opt/scripts/disk-usage-multiple.sh
#!/bin/sh
output1=/tmp/disk-usage.out
echo "---------------------------------------------------------------------------"
echo "HostName Filesystem Size Used Avail Use% Mounted on"
echo "---------------------------------------------------------------------------"
for server in `cat /opt/scripts/servers.txt`
do
output=`ssh $server df -Ph | tail -n +2 | sed s/%//g | awk '{ if($5 > 80) print $0;}'`
echo "$server: $output" >> $output1
done
cat $output1 | grep G | column -t
rm $output1
```
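The loop above reads target host names from `/opt/scripts/servers.txt`, one per line. A minimal example of that file (the host names are placeholders):

```
server01
server02
server03
```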
Once you have added the above script to a file, run it:
```
# sh /opt/scripts/disk-usage-multiple.sh
```
You get an output like the one below.
```
------------------------------------------------------------------------------------------------
HostName Filesystem Size Used Avail Use% Mounted on
------------------------------------------------------------------------------------------------
server01: /dev/mapper/vg_root-lv_red 5.0G 4.3G 784M 85 /var/log/httpd
server02: /dev/mapper/vg_root-lv_var 5.8G 4.5G 1.1G 81 /var
server03: /dev/mapper/vg01-LogVol01 5.7G 4.5G 1003M 82 /usr
server04: /dev/mapper/vg01-LogVol04 4.9G 3.9G 711M 85 /usr
server05: /dev/mapper/vg_root-lv_u01 74G 56G 15G 80 /u01
```
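To see what the `tail | sed | awk` pipeline inside the loop actually does, you can feed it canned `df -Ph` output instead of a live `ssh` session. This is a sketch with made-up filesystem data; only rows above the 80% threshold survive:

```shell
# Sketch: the script's filter pipeline run against canned df -Ph output.
df_sample() {
cat <<'EOF'
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/vg_root-lv_var 5.8G  4.5G  1.1G  81% /var
/dev/mapper/vg_root-lv_tmp 2.0G  0.5G  1.5G  25% /tmp
EOF
}

# tail drops the header, sed strips the % sign, awk keeps rows with Use% > 80
df_sample | tail -n +2 | sed s/%//g | awk '{ if($5 > 80) print $0;}'
# prints only the /var row (81 > 80); the /tmp row (25) is filtered out
```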
### 2) Shell Script-2: Shell Script to Monitor Disk Space Usage on Multiple Remote Linux Systems With eMail Alerts
This shell script checks the disk space usage on the given remote machines and sends the report via email as plain text once a system reaches the specified threshold.
```
# vi /opt/scripts/disk-usage-multiple-1.sh
#!/bin/sh
SUBJECT="Disk Usage Report on "`date`""
MESSAGE="/tmp/disk-usage.out"
MESSAGE1="/tmp/disk-usage-1.out"
TO="[email protected]"
echo "---------------------------------------------------------------------------------------------------" >> $MESSAGE1
echo "HostName Filesystem Size Used Avail Use% Mounted on" >> $MESSAGE1
echo "---------------------------------------------------------------------------------------------------" >> $MESSAGE1
for server in `cat /opt/scripts/servers.txt`
do
output=`ssh $server df -Ph | tail -n +2 | sed s/%//g | awk '{ if($5 > 80) print $0;}'`
echo "$server: $output" >> $MESSAGE
done
cat $MESSAGE | grep G | column -t >> $MESSAGE1
mail -s "$SUBJECT" "$TO" < $MESSAGE1
rm $MESSAGE
rm $MESSAGE1
```
Once you have added the above script to a file, run it:
```
# sh /opt/scripts/disk-usage-multiple-1.sh
```
You get an output like the one below.
```
------------------------------------------------------------------------------------------------
HostName Filesystem Size Used Avail Use% Mounted on
------------------------------------------------------------------------------------------------
server01: /dev/mapper/vg_root-lv_red 5.0G 4.3G 784M 85 /var/log/httpd
server02: /dev/mapper/vg_root-lv_var 5.8G 4.5G 1.1G 81 /var
server03: /dev/mapper/vg01-LogVol01 5.7G 4.5G 1003M 82 /usr
server04: /dev/mapper/vg01-LogVol04 4.9G 3.9G 711M 85 /usr
server05: /dev/mapper/vg_root-lv_u01 74G 56G 15G 80 /u01
```
Finally, add a cronjob to automate this. The entry below runs the script every 10 minutes.
```
# crontab -e
*/10 * * * * /bin/bash /opt/scripts/disk-usage-multiple-1.sh
```
### 3) Bash Script-3: Bash Script to Monitor Disk Space Usage on Multiple Remote Linux Systems With eMail Alerts
This shell script checks the disk space usage on the given remote machines and sends the report via email as a CSV attachment if a system reaches the specified threshold.
```
# vi /opt/scripts/disk-usage-multiple-2.sh
#!/bin/sh
MESSAGE="/tmp/disk-usage.out"
MESSAGE2="/tmp/disk-usage-1.csv"
echo "Server Name, Filesystem, Size, Used, Avail, Use%, Mounted on" > $MESSAGE2
for server in `cat /opt/scripts/servers-disk-usage.txt`
do
output1=`ssh $server df -Ph | tail -n +2 | sed s/%//g | awk '{ if($5 > 80) print $0;}'`
echo "$server $output1" >> $MESSAGE
done
cat $MESSAGE | grep G | column -t | while read output;
do
Sname=$(echo $output | awk '{print $1}')
Fsystem=$(echo $output | awk '{print $2}')
Size=$(echo $output | awk '{print $3}')
Used=$(echo $output | awk '{print $4}')
Avail=$(echo $output | awk '{print $5}')
Use=$(echo $output | awk '{print $6}')
Mnt=$(echo $output | awk '{print $7}')
echo "$Sname,$Fsystem,$Size,$Used,$Avail,$Use,$Mnt" >> $MESSAGE2
done
echo "Disk Usage Report for `date +"%B %Y"`" | mailx -s "Disk Usage Report on `date`" -a /tmp/disk-usage-1.csv [email protected]
rm $MESSAGE
rm $MESSAGE2
```
Once you have added the above script to a file, run it:
```
# sh /opt/scripts/disk-usage-multiple-2.sh
```
The report arrives as a CSV attachment to the email, with the same columns as the terminal output shown earlier.
Finally, add a cronjob to automate this. The entry below runs the script every 10 minutes.
```
# crontab -e
*/10 * * * * /bin/bash /opt/scripts/disk-usage-multiple-2.sh
```
**Note:** Because the script is scheduled to run once every 10 minutes, you will receive an email alert every 10 minutes.
For example, if a filesystem crosses the given limit 18 minutes in, you will receive the email alert on the next cycle, i.e., at around the 20-minute mark (the second 10-minute cycle).
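If one email every 10 minutes is too noisy, a small state file can limit you to one alert per incident, re-arming once usage drops back below the threshold. This is our own sketch on top of the scripts above (the marker path and function names are not from the original article); `send_alert` just echoes here, but in practice it would wrap the `mail` command:

```shell
# Sketch: send at most one alert per incident by tracking state in a marker file.
STATE=/tmp/disk-alert.state
rm -f "$STATE"                      # start from a clean state for the demo

send_alert() { echo "ALERT: $1"; }  # stand-in for the mail command

check_and_alert() {
    usage="$1"
    if [ "$usage" -gt 80 ]; then
        if [ ! -f "$STATE" ]; then
            send_alert "disk usage at ${usage}%"
            touch "$STATE"          # remember we already alerted
        fi
    else
        rm -f "$STATE"              # condition cleared; re-arm the alert
    fi
}

check_and_alert 85   # first breach -> alerts
check_and_alert 85   # still high   -> silent
check_and_alert 40   # recovered    -> re-arms
check_and_alert 85   # new breach   -> alerts again
# the demo prints exactly two ALERT lines (first and last call)
```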
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/linux-bash-script-to-monitor-disk-space-usage-on-multiple-remote-linux-systems-send-email/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/linux-shell-script-to-monitor-disk-space-usage-and-send-email/
[2]: https://www.2daygeek.com/category/shell-script/
[3]: https://www.2daygeek.com/crontab-cronjob-to-schedule-jobs-in-linux/
[4]: https://www.2daygeek.com/category/bash-script/


@ -1,61 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cloning a MAC address to bypass a captive portal)
[#]: via: (https://fedoramagazine.org/cloning-a-mac-address-to-bypass-a-captive-portal/)
[#]: author: (Esteban Wilson https://fedoramagazine.org/author/swilson/)
克隆 MAC 地址来绕过强制门户
======
![][1]
如果你曾经在家庭或办公室之外的地方连接 WiFi那么通常会看到一个门户页面。它可能会要求你接受服务条款或其他协议才能访问。但是当你无法通过这类门户进行连接时会发生什么本文向你展示了如何在 Fedora 上使用 NetworkManager 处理某些故障情况,以便你仍然可以访问互联网。
### 强制门户如何工作
强制门户是新设备连接到网络时显示的网页。当用户首次访问互联网时,门户网站会捕获所有网页请求并将其重定向到单个门户页面。
然后,页面要求用户采取一些措施,通常是同意使用政策。用户同意后,他们可以向 RADIUS 或其他类型的身份验证系统进行身份验证。简而言之,强制门户根据设备的 MAC 地址和终端用户接受条款来注册和授权设备。MAC 地址是附加到任何网络接口(例如 WiFi 芯片或卡)的[基于硬件的值][2]。
有时设备无法加载强制门户来进行身份验证和授权,以使用 WiFI 接入。这种情况的例子包括移动设备和游戏机SwitchPlaystation 等)。当连接到互联网时,它们通常不会打开动强制门户页面。连接到酒店或公共 WiFi 接入点时,你可能会看到这种情况。
不过,你可以在 Fedora 上使用 NetworkManager 来解决这些问题。Fedora 使你可以临时克隆连接设备的 MAC 地址,并代表该设备通过强制门户进行身份验证。你需要得到连接设备的 MAC 地址。通常,它被打印在设备上的某个地方并贴上标签。它是一个六字节的十六进制值,因此看起来类似 _4A:1A:4C:B0:38:1F_。通常,你也可以通过设备的内置菜单找到它。
### 使用 NetworkManager 克隆
首先,打开 _**nm-connection-editor**_,或通过“设置”打开 WiFi 设置。然后,你可以使用 NetworkManager 进行克隆:
* 对于以太网:选择已连接的以太网连接,然后选择 _Ethernet_ 选项卡。记录或复制当前的 MAC 地址,并在 _Cloned MAC address_ 字段中输入游戏机或其他设备的 MAC 地址。
* 对于 WiFi选择 WiFi 配置名,然后选择 _WiFi_ 选项卡。记录或复制当前的 MAC 地址,并在 _Cloned MAC address_ 字段中输入游戏机或其他设备的 MAC 地址。
### 启动所需的设备
当 Fedora 系统通过该以太网或 WiFi 配置连接时,克隆的 MAC 地址将用于请求 IP 地址并加载强制门户。输入所需的凭据和/或选择用户协议之后,该 MAC 地址将获得授权。
现在,断开 WiFi 或以太网配置连接,然后将 Fedora 系统的 MAC 地址改回其原始值。再启动游戏机或其他设备,它现在应该可以访问互联网了,因为它的网络接口已经通过你的 Fedora 系统获得了授权。
不过,这不是 NetworkManager 全部能做的。例如,请参阅[随机化系统硬件地址][3],来获得更好的隐私保护。
> [使用 NetworkManager 随机化你的 MAC 地址][3]
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/cloning-a-mac-address-to-bypass-a-captive-portal/
作者:[Esteban Wilson][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/swilson/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/10/clone-mac-nm-816x345.jpg
[2]: https://en.wikipedia.org/wiki/MAC_address
[3]: https://fedoramagazine.org/randomize-mac-address-nm/


@ -0,0 +1,290 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Schedule and Automate tasks in Linux using Cron Jobs)
[#]: via: (https://www.linuxtechi.com/schedule-automate-tasks-linux-cron-jobs/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
如何使用 cron 任务在 Linux 中计划和自动化任务
======
有时,你可能需要定期执行任务或以预定的时间间隔执行任务。这些任务包括备份数据库、更新系统、执行定期重新引导等。这些任务称为 “cron 任务”。cron 任务用于“自动执行的任务”它有助于简化重复的、有时是乏味的任务的执行。cron 是一个守护进程,可让你调度这些任务,然后按指定的时间间隔执行这些任务。在本教程中,你将学习如何使用 cron 来调度任务。
![Schedule -tasks-in-Linux-using cron][2]
### crontab 文件
crontab 即 “cron table”是一个简单的文本文件其中包含指定任务执行时间间隔的规则或命令。crontab 文件分为两类:
1系统范围的 crontab 文件
这些通常由需要 root 特权的 Linux 服务及关键应用程序使用。系统 crontab 文件位于 `/etc/crontab` 中,并且只能由 root 用户访问和编辑,通常用于配置系统范围的守护进程。该 crontab 文件看起来类似如下所示:
![etc-crontab-linux][3]
2用户创建的 crontab 文件
Linux 用户还可以在 `crontab` 命令的帮助下创建自己的 cron 任务。创建的 cron 任务将以创建它们的用户身份运行。
所有 cron 任务都存储在 `/var/spool/cron`(对于 RHEL 和 CentOS 发行版)和 `/var/spool/cron/crontabs`(对于 Debian 和 Ubuntu 发行版中cron 任务以创建该文件的用户的用户名列出。
cron 守护进程在后台静默地检查 `/etc/crontab` 文件以及 `/var/spool/cron` 和 `/etc/cron.d/` 目录。
`crontab` 命令用于编辑 cron 文件。让我们看一下 crontab 文件的结构。
### crontab 文件剖析
在继续之前我们首先来了解一下 crontab 文件的格式。crontab 文件的基本语法包括 5 个时间字段(下面以星号表示),后跟要执行的命令:
```
*    *    *    *    *    command
```
此格式也可以表示如下:
```
m h d moy dow command
m h d moy dow /path/to/script
```
让我们来解释一下每个条目:

* `m`:代表分钟,范围是 0 到 59
* `h`:代表小时,范围是 0 到 23
* `d`:代表一个月中的某天,范围是 1 到 31
* `moy`:代表一年中的月份,范围是 1 到 12
* `dow`:代表星期几,范围是 0 到 6其中 0 代表星期日
* `command`:要执行的命令,例如备份命令、重新启动命令和复制命令等
### 管理 cron 任务
看完 crontab 文件的结构之后,让我们看看如何创建、编辑和删除 cron 任务。
#### 创建 cron 任务
要以 root 用户身份创建或编辑 cron 任务,请运行以下命令:
```
# crontab -e
```
要为另一个用户创建或安排 cron 任务,请使用以下语法:
```
# crontab -u username -e
```
例如,要以 Pradeep 用户身份运行 cron 任务,请发出以下命令:
```
# crontab -u Pradeep -e
```
如果该 crontab 文件尚不存在,那么你将打开一个空白文本文档;如果该文件已经存在,则 `-e` 选项会让你编辑它。
#### 列出 crontab 文件
要查看已创建的 cron 任务,只需传递 `-l` 选项:
```
# crontab -l
```
#### 删除 crontab 文件
要删除 cron 任务,只需运行 `crontab -e` 并删除所需的 cron 任务行,然后保存该文件。
要删除所有的 cron 任务,请运行以下命令:
```
# crontab -r
```
然后,让我们看一下安排任务的不同方式。
### crontab 安排任务示例
如下所示,所有 cron 任务脚本都以<ruby>释伴<rt>shebang</rt></ruby>标头开始。
```
#!/bin/bash
```
这表示你正在使用的 shell在这种情况下即 bash shell。
接下来,使用我们之前指定的 cron 任务条目指定要安排任务的时间间隔。
要每天下午 12:30 重新引导系统,请使用以下语法:
```
30  12 *  *  * /sbin/reboot
```
要安排在凌晨 4:00 重启,请使用以下语法:
```
0  4  *  *  *  /sbin/reboot
```
注:星号 `*` 用于匹配该字段的所有可能取值。
要每天两次运行脚本(例如,凌晨 4:00 和下午 4:00请使用以下语法
```
0  4,16  *  *  *  /path/to/script
```
要安排 cron 任务在每个星期五下午 5:00 运行,请使用以下语法:
```
0  17  *  *  Fri  /path/to/script
```
也可以用数字表示星期几5 代表星期五):
```
0  17  *  *  5  /path/to/script
```
如果你希望每 30 分钟运行一次 cron 任务,请使用:
```
*/30  *  *  *  * /path/to/script
```
要安排 cron 任务每 5 小时运行一次(在每个第 5 个小时的第 0 分钟),请运行:
```
0  */5  *  *  *  /path/to/script
```
要在选定的日期(例如,星期三和星期五的下午 6:00运行脚本请执行以下操作
```
0  18  *  *  wed,fri  /path/to/script
```
要使用单个 cron 任务运行多个命令,请使用分号分隔任务,例如:
```
*  *  *  *  *  /path/to/script1 ; /path/to/script2
```
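注意,分号表示顺序执行:即使前一条命令失败,后一条命令仍会运行;如果希望仅在前一条命令成功时才执行下一条,可以改用 `&&`。下面的小示例(演示用命令,并非本文中的脚本)展示了两者的区别:

```shell
# 分号:无论前一条命令成败,后一条都会执行
sh -c 'false ; echo "分号:仍然运行"'
# &&前一条命令false失败后一条不会执行
sh -c 'false && echo "&&:不会运行"' || true
echo "演示结束"
```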
### 使用特殊字符串节省编写 cron 任务的时间
某些 cron 任务可以使用对应于特定时间间隔的特殊字符串轻松配置。例如,
1`@hourly` 时间戳等效于 `0 * * * *`
它将在每小时的第一分钟执行一次任务。
```
@hourly /path/to/script
```
2`@daily` 时间戳等效于 `0 0 * * *`
它在每天的第一分钟(午夜)执行任务。它可以在执行日常工作时派上用场。
```
@daily /path/to/script
```
3`@weekly` 时间戳等效于 `0 0 * * 0`
它在每周第一天(星期日)的第一分钟执行 cron 任务。
```
@weekly /path/to/script
```
4`@monthly` 时间戳等效于 `0 0 1 * *`
它在每月第一天的第一分钟执行任务。
```
@monthly /path/to/script
```
5`@yearly` 时间戳等效于 `0 0 1 1 *`
它在每年的第一分钟执行任务,并且对发送新年问候很有用。
```
@yearly /path/to/script
```
### 限制 crontab
作为 Linux 用户,你可以控制谁有权使用 `crontab` 命令,这可以通过 `/etc/cron.deny` 和 `/etc/cron.allow` 文件来实现。默认情况下,只有一个 `/etc/cron.deny` 文件,并且不包含任何条目。要限制某个用户使用 `crontab` 实用程序,只需将其用户名添加到该文件中即可。用户被添加到该文件后,再尝试运行 `crontab` 命令时将遇到以下错误:
![restricted-cron-user][4]
要允许用户继续使用 `crontab` 实用程序,只需从 `/etc/cron.deny` 文件中删除用户名即可。
如果存在 `/etc/cron.allow` 文件,则仅文件中列出的用户可以访问和使用 `crontab` 实用程序。
如果两个文件都不存在,则只有 root 用户具有使用 `crontab` 命令的特权。
### 备份 crontab 条目
始终建议你备份 crontab 条目。为此,请使用如下语法:
```
# crontab -l > /path/to/file.txt
```
例如:
```
# crontab -l > /home/james/backup.txt
```
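需要恢复时,把备份文件重新加载即可(注意:这会用文件内容覆盖当前的 crontab

```
# crontab /home/james/backup.txt
```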
### 检查 cron 日志
cron 日志存储在 `/var/log/cron` 文件中。要查看 cron 日志,请运行以下命令:
```
# cat /var/log/cron
```
![view-cron-log-files-linux][5]
要实时查看日志,请使用 `tail` 命令,如下所示:
```
# tail -f /var/log/cron
```
![view-live-cron-logs][6]
### 总结
在本指南中,你学习了如何创建 cron 任务以自动执行重复性任务,以及如何备份和查看 cron 日志。我们希望本文提供了有关 cron 任务的有用见解。请随时分享你的反馈和意见。
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/schedule-automate-tasks-linux-cron-jobs/
作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/11/Schedule-tasks-in-Linux-using-cron.jpg
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/11/etc-crontab-linux.png
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/11/restricted-cron-user.png
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/11/view-cron-log-files-linux.png
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/11/view-live-cron-logs.png


@ -0,0 +1,147 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to manage music tags using metaflac)
[#]: via: (https://opensource.com/article/19/11/metaflac-fix-music-tags)
[#]: author: (Chris Hermansen https://opensource.com/users/clhermansen)
如何使用 metaflac 管理音乐标签
======
使用这个强大的开源工具可以在命令行中纠正音乐标签错误。
![website design image][1]
我将 CD 翻录到电脑已经有很长一段时间了。在此期间,我用过几种不同的翻录工具,观察到每种工具在标记上似乎有不同的做法,特别是在保存哪些音乐元数据上。所谓“观察”,是指音乐播放器似乎按照奇怪的顺序对专辑进行排序,它们会将一个目录中的曲目分为两张专辑,或者产生其他令人沮丧的烦恼。
我还看到有些标签非常模糊,许多音乐播放器和标签编辑器没有显示它们。即使这样,在某些极端情况下,它们仍可以使用这些标签来分类或显示音乐,例如播放器将所有包含 XYZ 标签的音乐文件与不包含该标签的所有文件分离到不同的专辑中。
那么,如果标记应用和音乐播放器没有显示“奇怪”的标记,但是它们受到了某种影响,你该怎么办?
### Metaflac 来拯救!
我一直想要熟悉 **[metaflac][2]**,它是一款开源的命令行 [FLAC][3] 文件元数据编辑器FLAC 是我选择的开源音乐文件格式。并不是说 [EasyTAG][4] 这样出色的标签编辑软件有什么问题,但我想起了“如果你手上有把锤子……”这句老话(译注:原文是“如果你手上有把锤子,那么所有的东西看起来都像钉子”,意指人们惯于用熟悉的方式解决问题,而不管合不合适)。另外,从实际的角度来看,一台运行 [Armbian][5] 和 [MPD][6]、音乐存储在本地、运行精简的、仅用于音乐的无头环境的小型专用服务器,可以满足我家庭和办公室立体声音乐的需求,因此命令行元数据管理工具将非常有用。
下面的截图显示了我的长期翻录程序产生的典型问题Putumayo 的哥伦比亚音乐汇编显示为两张单独的专辑,一张包含单首曲目,另一张包含其余 11 首:
![Album with incorrect tags][7]
我使用 metaflac 为目录中包含这些曲目的所有 FLAC 文件生成了所有标签的列表:
```
rm -f tags.txt
for f in *.flac; do
        echo "$f" >> tags.txt
        metaflac --export-tags-to=tags.tmp "$f"
        cat tags.tmp >> tags.txt
        rm tags.tmp
done
```
我将其保存为可执行的 shell 脚本(请参阅我的同事 [David Both][8] 关于 Bash shell 脚本的精彩系列专栏文章,[特别是关于循环的这一篇][9])。基本上,我在这里做的是创建一个 _tags.txt_ 文件,其中包含文件名(**echo** 命令输出),后面是它的所有标签,然后是下一个文件名,依此类推。这是结果的前几行:
```
A Guapi.flac
TITLE=A Guapi
ARTIST=Grupo Bahia
ALBUMARTIST=Various Artists
ALBUM=Putumayo Presents: Colombia
DATE=2001
TRACKTOTAL=12
GENRE=Latin Salsa
MUSICBRAINZ_ALBUMARTISTID=89ad4ac3-39f7-470e-963a-56509c546377
MUSICBRAINZ_ALBUMID=6e096386-1655-4781-967d-f4e32defb0a3
MUSICBRAINZ_ARTISTID=2993268d-feb6-4759-b497-a3ef76936671
DISCID=900a920c
ARTISTSORT=Grupo Bahia
MUSICBRAINZ_DISCID=RwEPU0UpVVR9iMP_nJexZjc_JCc-
COMPILATION=1
MUSICBRAINZ_TRACKID=8a067685-8707-48ff-9040-6a4df4d5b0ff
ALBUMARTISTSORT=50 de Joselito, Los
Cumbia Del Caribe.flac
```
经过一番调查,结果发现我是同时翻录的这批 Putumayo CD而当时使用的软件似乎给除了一个文件之外的所有文件都加上了 MUSICBRAINZ_ 标签。是 bug 么大概吧。我在六张专辑中都看到了这个问题。此外关于有时不寻常的排序ALBUMARTISTSORT 标签把西班牙语的 “Los” 移到了名称的最后面(逗号之后)。
我使用了一个简单的 **awk** 脚本来列出 _tags.txt_ 中报告的所有标签:
```
awk -F= 'index($0,"=") > 0 {print $1}' tags.txt | sort -u
```
这会使用 **=** 作为字段分隔符将所有行拆分为字段,并打印包含等号的行的第一个字段;然后将结果通过带 **-u** 标志的 **sort** 传递,从而消除输出中的所有重复项(请参阅我的同事 Seth Kenlon 的[关于 **sort** 程序的文章][10])。对于这个 _tags.txt_ 文件,输出为:
```
ALBUM
ALBUMARTIST
ALBUMARTISTSORT
ARTIST
ARTISTSORT
COMPILATION
DATE
DISCID
GENRE
MUSICBRAINZ_ALBUMARTISTID
MUSICBRAINZ_ALBUMID
MUSICBRAINZ_ARTISTID
MUSICBRAINZ_DISCID
MUSICBRAINZ_TRACKID
TITLE
TRACKTOTAL
```
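在此基础上,还可以统计每个标签名出现的次数,再与文件总数对比,从而快速找出只出现在部分文件上的标签。下面是一个小示例(输入数据是模拟 _tags.txt_ 内容的假设片段):

```shell
# 统计每个标签名出现的次数sample_tags 模拟 tags.txt 的片段)
sample_tags() {
cat <<'EOF'
A Guapi.flac
TITLE=A Guapi
MUSICBRAINZ_ALBUMID=6e096386
Cumbia Del Caribe.flac
TITLE=Cumbia Del Caribe
EOF
}

sample_tags | awk -F= 'index($0,"=") > 0 {print $1}' | sort | uniq -c | sort -rn
# 输出类似:
#       2 TITLE
#       1 MUSICBRAINZ_ALBUMID
```

出现次数少于文件总数的标签,就是只加在部分文件上的标签。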
研究一番后,我确认 MUSICBRAINZ_ 标签出现在除一个之外的所有 FLAC 文件上,因此我使用 metaflac 命令删除了这些标签:
```
for f in *.flac; do metaflac --remove-tag MUSICBRAINZ_ALBUMARTISTID "$f"; done
for f in *.flac; do metaflac --remove-tag MUSICBRAINZ_ALBUMID "$f"; done
for f in *.flac; do metaflac --remove-tag MUSICBRAINZ_ARTISTID "$f"; done
for f in *.flac; do metaflac --remove-tag MUSICBRAINZ_DISCID "$f"; done
for f in *.flac; do metaflac --remove-tag MUSICBRAINZ_TRACKID "$f"; done
```
完成后,我可以使用音乐播放器重建 MPD 数据库。结果如下:
![Album with correct tags][11]
完成了12 首曲目出现在了一张专辑中。
太好了,我很喜欢 metaflac。我希望我会更频繁地使用它因为我会试图去纠正最后一些我弄乱的音乐收藏标签。强烈推荐
### 关于音乐
我花了几个晚上在 CBC 音乐CBC 是加拿大的公共广播公司)上收听 Odario Williams 的节目 _After Dark_。感谢 Odario我听到了让我非常享受的 [Kevin Fox 的 _Songs for Cello and Voice_][12]。在这张专辑里,他翻唱了 Eurythmics 的歌曲 “[Sweet Dreams (Are Made of This)][13]”。
我购买了这张 CD现在它就在我的音乐服务器上还带有组织正确的标签
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/11/metaflac-fix-music-tags
作者:[Chris Hermansen][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/clhermansen
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/web-design-monitor-website.png?itok=yUK7_qR0 (website design image)
[2]: https://xiph.org/flac/documentation_tools_metaflac.html
[3]: https://xiph.org/flac/index.html
[4]: https://wiki.gnome.org/Apps/EasyTAG
[5]: https://www.armbian.com/
[6]: https://www.musicpd.org/
[7]: https://opensource.com/sites/default/files/uploads/music-tags1_before.png (Album with incorrect tags)
[8]: https://opensource.com/users/dboth
[9]: https://opensource.com/article/19/10/programming-bash-loops
[10]: https://opensource.com/article/19/10/get-sorted-sort
[11]: https://opensource.com/sites/default/files/uploads/music-tags2_after.png (Album with correct tags)
[12]: https://burlingtonpac.ca/events/kevin-fox/
[13]: https://www.youtube.com/watch?v=uyN66XI1zp4