Merge remote-tracking branch 'LCTT/master'

Xingyu Wang 2020-07-31 18:49:55 +08:00
commit a2b6b5cc0f
15 changed files with 1097 additions and 167 deletions


@@ -0,0 +1,73 @@
[#]: collector: (lujun9972)
[#]: translator: (JonnieWayy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12469-1.html)
[#]: subject: (What you need to know about Rust in 2020)
[#]: via: (https://opensource.com/article/20/1/rust-resources)
[#]: author: (Ryan Levick https://opensource.com/users/ryanlevick)
2020 年关于 Rust 你所需要知道的
======
> 尽管许多程序员长期以来一直将 Rust 用于业余爱好项目,但正如许多有关 Rust 的热门文章所解释的那样,该语言在 2019 年吸引了主要技术公司的支持。
![](https://img.linux.net.cn/data/attachment/album/202007/31/001101fkh88966ktvvee99.jpg)
一段时间以来,[Rust][2] 在诸如 Hacker News 之类的网站上引起了程序员的大量关注。尽管许多人一直喜欢在业余爱好项目中[使用该语言][3],但直到 2019 年,它才真正开始在业界流行起来。
在过去的一年中,包括[微软][4]、[Facebook][5] 和 [Intel][6] 在内的许多大公司都站出来支持 Rust,许多[较小的公司][7]也注意到了这一点。2016 年,作为欧洲最大的 Rust 大会 [RustFest][8] 的第一任主持人,除了 Mozilla 的人以外,我没见到任何一个人在工作中使用 Rust。三年后,似乎我在 RustFest 2019 上交流过的每个人都在不同的公司的日常工作中使用 Rust,无论是游戏开发人员、银行的后端工程师、开发者工具的创造者,还是其他的一些岗位。
在 2019 年,Opensource.com 也通过报道 Rust 日益增长的受欢迎程度而发挥了作用。如果你错过了它们,这里是过去一年里 Opensource.com 上关于 Rust 的热门文章。
### 《使用 rust-vmm 构建未来的虚拟化堆栈》
Amazon 的 [Firecracker][9] 是支持 AWS Lambda 和 Fargate 的虚拟化技术,它是完全使用 Rust 编写的。这项技术的作者之一 Andreea Florescu 在 《[使用 rust-vmm 构建未来的虚拟化堆栈][10]》中对 Firecracker 及其相关技术进行了深入探讨。
Firecracker 最初是 Google [CrosVM][11] 的一个分支,但是很快由于两个项目的不同需求而分化。尽管如此,这个项目与其他用 Rust 所编写的虚拟机管理器(VMM)之间仍有许多可以很好地共享的通用组件。考虑到这一点,[rust-vmm][12] 项目应运而生,让 Amazon、Google、Intel、Red Hat 以及其余开源社区可以相互共享通用的 Rust “crate”(即程序包),其中包括 KVM 接口(Linux 虚拟化 API)、Virtio 设备支持以及内核加载程序。
看到软件行业的一些巨头围绕用 Rust 编写的通用技术栈协同工作,实在是很神奇。鉴于在这个以及其他[使用 Rust 编写的技术栈][13]上的合作关系,到了 2020 年,如果看到更多这样的情况,我也不会感到惊讶。
### 《为何选择 Rust 作为你的下一门编程语言》
采用一门新语言,尤其是在有着建立已久的技术栈的大公司里,并非易事。我很高兴写了《[为何选择 Rust 作为你的下一门编程语言][14]》一文,文中讲述了微软是如何在许多其他有趣的编程语言没有被选择的情况下考虑采用 Rust 的。
选择编程语言涉及许多不同的标准——从技术上到组织上,甚至是情感上。其中一些标准比其他的更容易衡量。比方说,了解技术变更的成本(例如适应构建系统和构建新工具链)要比理解组织或情感问题(例如高效或快乐的开发人员将如何使用这种新语言)容易得多。此外,易于衡量的标准通常与成本相关,而难以衡量的标准通常以收益为导向。这通常会导致成本在决策过程中变得越来越重要,即使这不一定就是说成本要比收益更重要——只是成本更容易衡量。这使得公司不太可能采用新的语言。
然而Rust 最大的好处之一是很容易衡量其编写安全且高性能系统软件的能力。鉴于微软 70% 的安全漏洞是由于内存安全问题导致的,而 Rust 正是旨在防止这些问题的,而且这些问题每年都使公司付出了几十亿美元的代价,所以很容易衡量并理解采用这门语言的好处。
是否会在微软全面采用 Rust 尚待观察但是仅凭着相对于现有技术具有明显且可衡量的好处这一事实Rust 的未来一片光明。
### 2020 年的 Rust
尽管要达到 C++ 等语言的流行度还有很长的路要走,但 Rust 实际上已经开始在业界引起关注。我希望更多公司在 2020 年开始采用 Rust。Rust 社区现在必须着眼于欢迎开发人员和公司加入社区,同时确保将推动该语言发展到今天的一切都保留下来。
Rust 不仅仅是一个编译器和一组库,而是一群想要使系统编程变得容易、安全而且有趣的人。即将到来的这一年,对于 Rust 从业余爱好语言到软件行业所使用的主要语言之一的转型至关重要。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/rust-resources
作者:[Ryan Levick][a]
选题:[lujun9972][b]
译者:[JonnieWayy](https://github.com/JonnieWayy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ryanlevick
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/laptop_screen_desk_work_chat_text.png?itok=UXqIDRDD (Person using a laptop)
[2]: http://rust-lang.org/
[3]: https://insights.stackoverflow.com/survey/2019#technology-_-most-loved-dreaded-and-wanted-languages
[4]: https://youtu.be/o01QmYVluSw
[5]: https://youtu.be/kylqq8pEgRs
[6]: https://youtu.be/l9hM0h6IQDo
[7]: https://oxide.computer/blog/introducing-the-oxide-computer-company/
[8]: https://rustfest.eu
[9]: https://firecracker-microvm.github.io/
[10]: https://opensource.com/article/19/3/rust-virtual-machine
[11]: https://chromium.googlesource.com/chromiumos/platform/crosvm/
[12]: https://github.com/rust-vmm
[13]: https://bytecodealliance.org/
[14]: https://opensource.com/article/19/10/choose-rust-programming-language


@@ -0,0 +1,70 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Beginner-friendly Terminal-based Text Editor GNU Nano Version 5.0 Released)
[#]: via: (https://itsfoss.com/nano-5-release/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Beginner-friendly Terminal-based Text Editor GNU Nano Version 5.0 Released
======
_**Open source text editor GNU nano has reached the milestone of version 5.0. Take a look at what features this new release brings.**_
There are plenty of [terminal-based text editors available for Linux][1]. While editors like Emacs and Vim have a steep learning curve with a bunch of unusual keyboard shortcuts, GNU nano is considered easier to use.
Perhaps that's the reason why Nano is the default terminal-based text editor in Ubuntu and many other distributions. The upcoming [Fedora 33 release][2] is also going to set Nano as the default text editor in the terminal.
GNU nano 5.0 has just been released. Here are the new features it brings.
### New features in GNU nano 5.0
![][3]
Some of the main highlights of GNU nano 5.0 as mentioned in its [changelog][4] are:
* The indicator option will show a kind of scroll bar on the right-hand side of the screen to indicate where in the buffer the viewport is located and how much it covers.
* Lines can be tagged with Alt+Insert keys and you can jump to these tags with Alt+PageUp and Alt+PageDown keys.
* The Execute Command prompt is now directly accessible from the main menu.
* On terminals supporting at least 256 colors, there are new colors available.
* A new bookstyle mode in which any line that begins with whitespace is considered the start of a paragraph.
* Refreshing the screen with ^L now works in every menu. It also centers the line with the cursor.
* The bindable function curpos has been renamed to location, the long option tempfile has been renamed to saveonexit, and the short option -S is now a synonym of softwrap.
* Backup files will retain their group ownership (when possible).
* Data is synced to disk before “… lines written” is shown.
* Syntaxes for Markdown, Haskell, and Ada were added.
### Getting GNU nano 5.0
The current version of nano in Ubuntu 20.04 is 4.8, and it's unlikely that you'll get the new version anytime soon in this LTS release. If and when it is made available for Ubuntu, you should get it via the system updates.
Arch users should be getting it before everyone else, as always. Other distributions should also provide the new version, sooner or later.
If you are one of the few who likes [installing software from its source code][5], you can get it from its [download page][6].
If you are new to it, I highly recommend this [beginners guide to Nano editor][1].
How do you like the new release? Are you looking forward to using Nano 5?
--------------------------------------------------------------------------------
via: https://itsfoss.com/nano-5-release/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/nano-editor-guide/
[2]: https://itsfoss.com/fedora-33/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/09/Nano.png?ssl=1
[4]: https://www.nano-editor.org/news.php
[5]: https://itsfoss.com/install-software-from-source-code/
[6]: https://www.nano-editor.org/download.php


@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )


@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (JonnieWayy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )


@@ -0,0 +1,70 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (4 ways I contribute to open source as a Linux systems administrator)
[#]: via: (https://opensource.com/article/20/7/open-source-sysadmin)
[#]: author: (Elizabeth K. Joseph https://opensource.com/users/pleia2)
4 ways I contribute to open source as a Linux systems administrator
======
You don't have to be a coder to make valuable contributions to the open
source community; here are some other important roles you can fill as a
sysadmin.
![open source button on keyboard][1]
I recently participated in The Linux Foundation Open Source Summit North America, held virtually June 29-July 2, 2020. In the course of that event, I had the opportunity to speak with a fellow attendee about my career in Linux systems administration and how it had led me to a career focused on [open source][2]. Specifically, he asked, how does a systems administrator who doesn't do a lot of coding participate in open source projects?
That's a great question!
A lot of focus in open source projects is placed on the actual code, but there's a lot more to it than that. The following are some ways that I've been deeply involved in open source projects, without writing code.
### Improving documentation
I got my official start in open source by rewriting a quickstart guide for a project I used heavily. We spend most of our time using software in production and at scale. We routinely run into configuration gotchas and edge cases, and we often privately develop best practices for managing services effectively.
Inevitably, we run into things that aren't documented, that have out-of-date documentation, or whose documentation needs improvement. This is a great opportunity! The developers and documentation writers are often unaware of these issues, and you have the key to solving them. Typically it starts with a bug report to the documentation project, but if you know the answer, you can often submit a patch to the documentation to improve it.
### Contributing "recipes"
We often spend too much time reinventing the wheel when we're launching common services. I remember my early days of slogging through MySQL configuration files to figure out the best settings for the databases for a particular customer. Today, a lot of that has been simplified, allowing us to use Ansible playbooks, Puppet modules, and more to get a basic configuration going. This is a place where you can contribute! Whether it's an official "recipe" you contribute to the appropriate hub or a sample rundown of your configuration or architecture diagram of Logstash, sharing your expertise in the form of examples can be incredibly helpful to others who are facing the same configuration challenges.
### Hosting project resources
I spent part of my career as a full-time systems administrator, directly working on hosting project resources for OpenStack, an infrastructure that is fully open source—every config file and Puppet change is done through public code review and tracked in a public Git repository. There are several projects out there that host their infrastructures in an open source manner, many of which are listed on the [Open Source Infrastructure (#openinfra) homepage][3]. These range from KDE and Debian to the Apache Software Foundation. In these communities, external participants can submit improvements to the infrastructure as their time and expertise allow. Since a lot of this is peer-reviewed, it's also a nice opportunity to build your skills in areas you may not be strictly focused on at work.
I've also done work on specific projects where the need was not broadcasted but was clear once I joined the community. For instance, one of my Linux communities needed a place to host a development website environment so we could try out new plugins and features outside of our production environment. We also found that giving shell accounts to participants was a valuable way to make sure they were always connected to IRC and had a sandbox beyond their own desktop. I now manage two virtual servers for this project to address these needs and have built up my own little systems team inside the project, so I'm not the only administrator.
### Supporting your fellow users
As someone who is using software in production, your operational experience is essential to a thriving support outlet, so don't be shy. Participation in user forums, mailing lists, and chat may seem like something that only experts can do, but regardless of your level, you will always have more experience than someone who just started out. A newcomer to the space can help out with simple questions, and give the more experienced participants the energy to answer more complicated questions. The more experience you gain, the more involved you can get in the community.
### Be a better sysadmin by contributing
Whatever way you decide to participate, the value gained from contributing to open source projects as a [systems administrator][4] cannot be overstated. Your contributions will be noticed by members of the community and often result in opportunities to chat on the latest project podcast, sit for an interview on the project blog, or speak at an event. All of these things raise your profile in the project as someone who is knowledgeable about the technology. You can also point to your public expertise when you're interviewing for your next role; having a public track record of giving advice in a project where a company is looking for expertise is a huge vote in your favor.
Finally, I've also found participating in open source projects to be tremendously valuable on a personal level. I feel good about contributing to the community, and it's rewarding to know that your expertise is valuable to folks outside the walls of your organization.
Looking for a place to start? Find the communities behind the open source technology you already use and love. Or, if you're looking for a place to [write][5], you've found it here at Opensource.com.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/7/open-source-sysadmin
作者:[Elizabeth K. Joseph][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/pleia2
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/button_push_open_keyboard_file_organize.png?itok=KlAsk1gx (open source button on keyboard)
[2]: https://opensource.com/resources/what-open-source
[3]: https://opensourceinfra.org/
[4]: https://opensource.com/article/19/7/be-a-sysadmin
[5]: https://opensource.com/how-submit-article


@@ -0,0 +1,62 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Defining cloud native, expanding the ecosystem, and more industry trends)
[#]: via: (https://opensource.com/article/20/7/cloud-native-expanding-and-more-industry-trends)
[#]: author: (Tim Hildred https://opensource.com/users/thildred)
Defining cloud native, expanding the ecosystem, and more industry trends
======
A weekly look at open source community and industry trends.
![Person standing in front of a giant computer screen with numbers, data][1]
As part of my role as a principal communication strategist at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are three of my and their favorite articles from that update.
## [As cloud native computing rises, it's transforming culture as much as code][2]
> The time is right for the industry to coalesce around a set of common principles for cloud native computing as many organizations come to the realization that their initial forays into the cloud yielded limited returns. An International Data Corp. survey last year found that [80% of respondents had repatriated workloads back on-premises from public cloud environments][3] and, on average, expect to move half of their public cloud applications to private locations over the next two years.
**The impact**: The first run at the cloud involved a lot of "lift-and-shift" attempts to pick up workloads and drop them in the cloud. This second run will involve more work to figure out what and how to move, but should ultimately deliver more value as developers get more comfortable with what they can take for granted.
## [Why automating for cloud native infrastructures is a win for all involved][4]
> The holy grail in development is the creation and maintenance of secure applications that yield strong ROI and happy customers. But if this development isn't efficient, high-velocity, and scalable, that holy grail quickly becomes unattainable. If you've found yourself expecting more from your current infrastructure, it might be time to consider cloud native. Not only does it check all these boxes, but automating for cloud native infrastructures can improve efficiency and results.
**The impact**: I'd add to this that truly adopting a cloud-native approach is simply impossible without substantial automation; the number of moving pieces involved is just too high to keep in a human head.
## [Linkerd case studies: meeting security requirements, reducing latency, and migrating from Istio][5]
> Finally, Subspace shares its experience with Linkerd to deliver multiplayer gaming “at the speed of light.” Although it at first seemed counterintuitive to use a service mesh in an ultra-low-latency environment, Subspace has found a strategic use of Linkerd that actually reduces total latency—the service mesh is so lightweight that the minimal latency it adds is overshadowed by the latency it reduces through observability. In short, this unique use case of Linkerd gives Subspace a large net positive on operational outcomes. [Read the full user story][6].
**The impact**: I've heard this idea that you don't really reduce complexity in a system, you abstract it and change who it gets exposed to. Seems like a similar observation is being made about latency; if you choose carefully where you accept latency, you can reduce it elsewhere in the system as a result.
## [A top exec explains IBM's 'important pivot' to win over developers, startups, and partners as part of its plan to win the hybrid cloud market away from rivals like Microsoft][7]
> Big Blue is shifting to a new strategy focused on building an ecosystem of developers, partners, and startups. "Our services organization can't get to all clients. The only way to get to those clients is to activate an ecosystem."
**The impact**: More and more companies are embracing the idea that there are customer problems they just can't solve without help. Maybe that reduces the money that can be made from each individual customer as it expands the opportunities to engage more broadly into more problem spaces.
_I hope you enjoyed this list and come back next week for more open source community, market, and industry trends._
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/7/cloud-native-expanding-and-more-industry-trends
作者:[Tim Hildred][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/thildred
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://siliconangle.com/2020/07/18/cloud-native-computing-rises-transforming-culture-much-code/
[3]: https://www.networkworld.com/article/3400872/uptick-in-cloud-repatriation-fuels-rise-of-hybrid-cloud.html
[4]: https://thenewstack.io/why-automating-for-cloud-native-infrastructures-is-a-win-for-all-involved/
[5]: https://www.cncf.io/blog/2020/07/21/linkerd-case-studies-meeting-security-requirements-reducing-latency-and-migrating-from-istio/
[6]: https://buoyant.io/case-studies/subspace/
[7]: https://www.businessinsider.com/ibm-developers-tech-ecosystem-red-hat-hybrid-cloud-bob-lord-2020-7?r=AU&IR=T


@@ -0,0 +1,84 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Role Of SPDX In Open Source Software Supply Chain)
[#]: via: (https://www.linux.com/audience/developers/role-of-spdx-in-open-source-software-supply-chain/)
[#]: author: (Swapnil Bhartiya https://www.linux.com/author/swapnil/)
Role Of SPDX In Open Source Software Supply Chain
======
Kate Stewart is a Senior Director of Strategic Programs, responsible for the Open Compliance program at the Linux Foundation, which encompasses SPDX, OpenChain, and the related Automating Compliance Tooling projects. In this interview, we talk about the latest release and the role it's playing in the open source software supply chain.
*Here is a transcript of our interview.*
Swapnil Bhartiya: Hi, this is Swapnil Bhartiya, and today we have with us, once again, Kate Stewart, Senior Director of Strategic Programs at the Linux Foundation. So let's start with SPDX. Tell us, what's new going on with this specification?
Kate Stewart: Well, the SPDX specification released version 2.2 just a month ago, and what we've been doing with that is adding in a lot more features that people have been wanting for their use cases, more relationships, and then we've been working with the Japanese automotive makers who've been wanting to have a light version. So there's lots of really new technology sitting in the SPDX 2.2 spec. And I think we're at a stage right now where it's good enough and there are enough people using it that we want to probably take it to ISO. So we've been re-formatting the document, and we'll be starting to submit it into ISO so it can become an international specification. And that's happening.
Swapnil Bhartiya: Can you talk a bit about whether there is anything additional that was added to the 2.2 specification? Also, I would like to talk about some of the use cases, since you mentioned the automakers. But before that, I just want to talk about anything new in the specification itself.
Kate Stewart: So in the 2.2 specification, we've got a lot more relationships. People wanted to be able to handle some of the use cases that have come up from containers now, and so they wanted to be able to start to express that and specify it. We've also been working with the NTIA. Basically, they have software bill of materials (SBoM) working groups, and SPDX is one of the formats that's been adopted. And their framing group has wanted to see certain features so that we can specify known unknowns. So that's been added into the specification as well.
And then there's how you can actually capture notices, since that's something that people want to use. The licenses call for it and we didn't have a clean way of doing it, and so some of our tool vendors basically asked for this. Not just the vendors, I guess there are partners, there are open source projects that wanted to be able to capture this stuff. And so we needed to give them a way to help.
We're very much focused right now on making sure that SPDX can be useful in tools and that we can get the automation happening in the whole ecosystem. You know, be it when you build a binary to ship to someone or to test, you want to have your SBoM. When you've downloaded something from the internet, you want to have your SBoM. When you ship it out to your customer, you want to be able to be very explicit and clear about what's there, because you need to have that level of detail so that you can track any vulnerabilities.
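To make that idea concrete, here is a rough, hypothetical sketch (not part of the interview) of what a minimal SBoM in the SPDX tag:value format can look like, generated with plain Python. The field names follow the SPDX 2.x tag:value conventions; the package details, tool name, and document namespace are made-up placeholders.

```python
# Hypothetical sketch: emit a minimal SPDX 2.2 tag:value document for one package.
# Field names follow the SPDX tag:value format; all package data here is made up.
from datetime import datetime, timezone

def minimal_spdx(package_name: str, version: str, license_id: str) -> str:
    created = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    lines = [
        "SPDXVersion: SPDX-2.2",
        "DataLicense: CC0-1.0",
        "SPDXID: SPDXRef-DOCUMENT",
        f"DocumentName: {package_name}-{version}-sbom",
        # Placeholder namespace; a real document needs a unique URI here.
        f"DocumentNamespace: https://example.com/spdxdocs/{package_name}-{version}",
        "Creator: Tool: example-sbom-generator",
        f"Created: {created}",
        "",
        f"PackageName: {package_name}",
        "SPDXID: SPDXRef-Package",
        f"PackageVersion: {version}",
        "PackageDownloadLocation: NOASSERTION",
        "FilesAnalyzed: false",
        f"PackageLicenseConcluded: {license_id}",
        f"PackageLicenseDeclared: {license_id}",
        "PackageCopyrightText: NOASSERTION",
    ]
    return "\n".join(lines)

if __name__ == "__main__":
    print(minimal_spdx("hello-world", "1.0.0", "MIT"))
```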
Because right now about, I guess, 19… I think there was a stat from earlier in the year from one of the surveys. And I can dig it up for you if you'd like, but I think 99% of all the code that was scanned by Synopsys last year had open source in it, and 70% of that whole bill of materials was open source. Open source is everywhere. And what we need to do is be able to work with it and be able to adhere to the licenses, and transparency on the licenses is important, as is being able to actually know what you have, so you can remediate any vulnerabilities.
Swapnil Bhartiya: You mentioned a couple of things there. One was, you mentioned tooling. So I'm kind of curious, what sort of tooling is already there, whether open source or commercial, that works with SPDX documents?
Kate Stewart: Actually, I've got a document that basically lists all of these tools that we've been able to find, and more are popping up as the day goes by. We've got common tools. Like, some of the Linux Foundation projects are certainly working with it. Like FOSSology, for instance, is able to both consume and generate SPDX. So if you've got an SPDX document and you want to pull it in and cross-check it against your sources to make sure it's matching and no one's tampered with it, the FOSSology tool can let you do that pretty easily, and there's code out there that can generate it.
Free Software Foundation Europe has a lint tool in their REUSE project that will basically generate an SPDX document if you're using the IDs. I guess there's actually a whole bunch more. So like I say, I've got a document with a list of about 30 to 40, and obviously the SPDX tools are there. We've got a free online validator. So if someone gives you an SPDX document, you can paste it into this validator, and it'll tell you if it's a valid SPDX document or not. And we're looking into it.
I'm finding also some tools that are emerging, one of which is decodering, which we'll be bringing into the ACT umbrella soon, which is looking at transforming between SPDX and SWID tags, which is another format that's commonly in use. And so we have tooling emerging, and we're making sure that what we've got with SPDX is usable for tool developers, and we've got libraries right now for SPDX to help them in Java, Python, and Go. So hopefully we'll see more tools come in, and they'll be generating SPDX documents, and people will be able to share this stuff and make it automatic, which is what we need.
Another good tool, I can't forget this one, is Tern. What Tern does is, it's another tool that basically will sit there, decompose a container, and let you know the bill of materials inside that container. So you can do that there. And another one that's emerging that we'll hopefully see more of soon is something called OSS Review Toolkit that goes into your build flow. And so it goes in when you work with it in your system, and then as you're doing builds, you're generating your SBoMs and you're having accurate information recorded as you go.
As I said, all of this sort of thing should be in the background; it should not be a manual, time-intensive effort. When we started this project 10 years ago, it was, and we wanted to get it automated. And I think we're finally getting to the stage where it's going to be… There's enough tooling out there and there's enough of an ecosystem building that we'll get this automation to happen.
This is why getting the specification to ISO means it'll make it easier for people in procurement to specify that they want to see the input as an SPDX document to complement the product that they're being given, so that they can ingest it, manage it, and so forth. By being able to say it's an ISO standard, it makes things a lot easier in the procurement departments.
OpenChain recognized that we needed to do this, and so they went through and… OpenChain is actually the first specification we're taking through to ISO. But we're taking SPDX through as well, because once they say you need to follow the process, you also need a format. And so it's very logical to make it easy for people to work with this information.
Swapnil Bhartiya: And as you've worked with different players in different parts of the ecosystem, what are some of the pressing needs? Improved automation is one of those. What are some of the other pressing needs that you think the community has to work on?
Kate Stewart: So some of the other pressing needs that we need to be working on are more playbooks, more instructions, showing people how they can do things. You know, we figured it out: okay, here's how we can model it, here's how you can represent all these cases. This is all sort of known in certain people's heads, but we have not done a good job of expressing it to people so that it's approachable for them and they can do it.
One of the things that's kind of exciting right now is the NTIA is having this working group on these software bills of materials. It's coming from the security side, but there are various proofs of concept that are going on with it, one of which is a healthcare proof of concept. And so there's a group of about five to six device manufacturers, medical device manufacturers, that are generating SBoMs in SPDX and then handing them to hospitals to make sure they can ingest them.
And this level of bringing people up to the level where they feel like they can do these things, it's been really eye-opening to me. You know, how much we need to improve our handholding and improve the infrastructure to make it approachable. And this obviously motivates more people to get involved, from the vendors and commercial side as well as the open source side, but it wouldn't have happened, I think, to a large extent for SPDX without this open source and without the projects that have adopted it already.
Swapnil Bhartiya: Now, just from the educational awareness point of view, if there's an open source project, how can it easily create SBoM documents that use the SPDX specification with its releases and keep them synced?
Kate Stewart: That's exactly what we'd love to see. We'd love to see the upstream projects basically generate SPDX documents as they're going forward. So the first step is to use the SPDX license identifiers to make sure you understand what the licensing should be in each file, and ideally you can document it with these tags. But then there are three or four tools out there that actually scan them and will generate an SPDX document for you.
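As a rough illustration of the per-file tagging Kate describes here (and not one of the scanners she mentions), a source file simply carries a comment such as "SPDX-License-Identifier: GPL-3.0-or-later" near its top, and a scanner only needs to look for that tag. A minimal, hypothetical sketch in Python:

```python
# SPDX-License-Identifier: MIT
# Toy illustration, not one of the real tools mentioned above: walk a source tree
# and report which files carry an "SPDX-License-Identifier:" tag near the top.
import re
import sys
from pathlib import Path

TAG = re.compile(r"SPDX-License-Identifier:\s*([A-Za-z0-9 .()+-]+)")

def scan(root=".", max_lines=10):
    """Return ({path: license expression}, [paths without a tag])."""
    tagged, untagged = {}, []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            head = path.read_text(errors="ignore").splitlines()[:max_lines]
        except OSError:
            continue
        for line in head:
            match = TAG.search(line)
            if match:
                tagged[str(path)] = match.group(1).strip()
                break
        else:
            untagged.append(str(path))
    return tagged, untagged

if __name__ == "__main__":
    tagged, untagged = scan(sys.argv[1] if len(sys.argv) > 1 else ".")
    for name, license_expr in sorted(tagged.items()):
        print(f"{name}: {license_expr}")
    print(f"{len(untagged)} file(s) have no SPDX-License-Identifier tag")
```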
If you're working at the command line, the REUSE lint tool that I was mentioning from Free Software Foundation Europe will work very quickly with what you've got, and it'll also help you make sure you've got all your files tagged properly.
If you haven't done all the tagging exercise and you wonder [inaudible 00:09:40] what you've got, ScanCode works at the command line and it'll give you that information as well. And then if you want to start working in a larger system where you want to store results and look at things over time, and have some state behind it all so there'll be different versions of things over time, FOSSology will remember from one version to another and will help you create these [inaudible 00:10:01] off of bills of materials.
Swapnil Bhartiya: Can you talk about some of the new use cases that you're seeing now, which maybe you did not expect earlier and which also show how the whole community is actually growing?
Kate Stewart: Oh yeah. Well, when we started the project 10 years ago, we didn't understand containers. They weren't even on people's radar. And there's a lot of information sitting in containers. We've had some really good talks over the last couple of years that illustrate the problems. There was a report that was put out from the Linux Foundation by Armijn Hemel that goes into the details of what's going on in containers and some of the concerns.
So being able to get on top of automating what's going on and what's of concern inside a container, what you're shipping, and knowing you're not shipping more than you need to, and figuring out how we can improve these sorts of things, is certainly an area that was not initially thought about.
We've also seen a tremendous interest in what's going on in the IoT space. You need to really understand what's going on in your devices when they're being deployed in the field, and to know whether, effectively, a vulnerability is going to break it, or whether you can recover. Things like that. Over the last 10 years we've seen a tremendous spectrum of things we just didn't anticipate. And the nice thing about SPDX is, if you've got a use case that we're not able to represent and we can't tell you how to do it, just open an issue, and we'll start trying to figure it out and figure out whether we need to add fields in for you or things like that.
Swapnil Bhartiya: Kate, thank you so much for taking your time out and talking to me today about this project.
--------------------------------------------------------------------------------
via: https://www.linux.com/audience/developers/role-of-spdx-in-open-source-software-supply-chain/
作者:[Swapnil Bhartiya][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/author/swapnil/
[b]: https://github.com/lujun9972


@@ -0,0 +1,67 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (SODA Foundation: Autonomous data management framework for data mobility)
[#]: via: (https://www.linux.com/audience/developers/soda-foundation/)
[#]: author: (Swapnil Bhartiya https://www.linux.com/author/swapnil/)
SODA Foundation: Autonomous data management framework for data mobility
======
SODA Foundation is an open source project under the Linux Foundation that aims to establish an open, unified, and autonomous data management framework for data mobility from the edge, to core, to cloud. We talked to Steven Tan, SODA Foundation Chair, to learn more about the project.
_Here is a transcript of the interview:_
Swapnil Bhartiya: Hi, this is Swapnil Bhartiya, and today we have with us Steven Tan, chair of the SODA foundation. First of all, welcome to the show.
Steven Tan: Thank you.
Swapnil Bhartiya: Tell us a bit about what SODA is.
Steven Tan: The foundation is actually a collaboration among vendors and users to focus on data management for, how do you call it, autonomous data mesh management. And the point of this whole thing is how do we serve the users? Because a lot of our users are facing a lot of data challenges, and that's what this foundation is for: to get users and vendors together to help to address these data challenges.
Swapnil Bhartiya: What kind of data are we talking about?
Steven Tan: The data that we're talking about refers to anything like data protection, data governance, data replication, data copy management, and stuff like that. And also data integration: how to connect the different data silos and stuff.
Swapnil Bhartiya: Right. But are we talking about enterprise data or are we talking about consumer data? Like, there is a lot of data with Facebook, Google, and Gmail, and then there is a lot of enterprise data, which companies … Sorry, as an enterprise, I might put something on this cloud, or I can put it on that cloud. So can you please clarify what data we are talking about?
Steven Tan: Actually, the data that we're talking about is … It depends on the users. There are all kinds of data. Like, for example, in the keynote that I gave two days ago, the example I gave was from Toyota. So the Toyota use case is actually car data. Car data refers to things like the car sensor data, videos, map data, and stuff. And then we have users like China Unicom. I mean, they have enterprise companies going to the cloud and so on, so they've got all kinds of enterprise data over there. And then we also have other users like Yahoo Japan, and they have a website, so the data that you're talking about is web data, consumer data, and stuff like that. So it's across the board.
Swapnil Bhartiya: Oh, so it's not specific to an industry or any space or sector, okay. But why do you need it? What is the problem that you see in the market and in the current sphere that makes you say, hey, we should create something like that?
Steven Tan: So the problem that came up, I mean, the reason why all these companies came together, is that they are building data centers that range from small to big. But a lot of the challenges they have are the kind that are hard for a single project to address. It's not like a business where we have a specific problem and then we need this to be solved and so on; it's not like that. A lot of it is like, how do you connect the different pieces in the data center together?
So there's nothing like, no organization like that, that can help them solve this kind of problem. Like, how do you have, in order to address the data of … Or how do you address things like taking care of data protection and data privacy at the same time? And at the same time, you want to make sure that this data can be governed properly. So there isn't any single organization that can help to take care of this kind of stuff, so we're helping these users understand their problems, and then we come together and plan projects and roadmaps based on their problems and try to address them through these projects in the SODA Foundation.
Swapnil Bhartiya: And you gave an example of data from the cars and all these things. Does that also mean that open source has helped solve a lot of problems by breaking down a lot of silos, so that there's a lot of interaction between different silos which were earlier separated and isolated? Today, as you mentioned, we are living in a data-driven world. No matter what we do, all the way from the Ring, to what we are doing right now, talking to each other, to the product that we'll create in the end. But most of this data is living in its own silos. There may be a lot of value in that data which cannot be extracted because, one, it is locked into the silos. The second problem is that these days, data is kind of becoming the next oil. These companies are trying to capture all the data, irrespective of what value they see in that data today, and by leveraging machine learning and deep learning, they can in the future … So how do you look at that, and how is the SODA Foundation going to break those silos, without compromising on our privacy, yet allow companies … Because the fact is, as much as I prefer my privacy, I also want Google Maps to tell me the fastest route to where I want to go.
Steven Tan: Right. So I think there are certain, I mean, there are different levels of privacy that we're going to take care of. And in terms of, first of all, there are all kinds of … I mean, in terms of the different countries or different states or different provinces in different countries, there are different kinds of regulations and so on. So first of all, the data silos you talk about: yes, that's one of the key problems that we're trying to solve. How to connect all the different data silos so as to reduce fragmentation, and then try to minimize the so-called dark data that you're talking about, and then extract all the value over there. So that's one of the things that we try to get at here. I mean, we try to connect all the different pieces, like in the different … The data may be sitting at the edge, in the data center or different data centers, and in the cloud. We try to connect all these pieces together.
I mean, that's one of the first things that we tried to do. And then we tried to have data policies. I think this is a critical piece that a lot of the solutions out there don't address. You have data policies, but they may be data policies just for a single vendor solution, and once the data gets out of that solution, then it is out of control. So what we're trying to do here is say, how do you have data policies across different solutions, so no matter where the data is, it's governed the same way, consistently? That's the key. Then you can talk about how you can really protect the data in terms of privacy, or govern the data, or control the data. And in terms of the regions I mentioned, you know where the data is, and you know what kind of regulations need to be taken care of, and you apply them right there. That's how it should work.
Swapnil Bhartiya: When we look at the kind of scenario you talked about, I see it as two-fold. One is that there is a technology problem, and the second is a people problem. So is the SODA Foundation going to deal with both, or are you going to just deal with the technology aspect of it?
Steven Tan: The technology part that we talk about, we try to define in terms of the API and so on for all the data policies and so on, and try to get as many companies to support this as possible. And then the next thing that we try to do is actually try to work with standards organizations to try to make this into a standard. I mean, that's what we're trying to do here.
And then on the government aspects, there are certain organizations that we are talking to. Like, there's the CESI, the China Electronics Standardization Institute, that we're talking to that's trying to work things into their … Actually, I'm not sure about China, because, I mean, we don't know about their sphere of influence within the CESI and so on. And then for the industry standards, there's [inaudible 00:09:05] and so on; we're trying to work with them and trying to get it to work.
Swapnil Bhartiya: Can we talk about the ecosystem that you're trying to build around the SODA Foundation? One part would be the participants who are actually contributing either the code or the vision, and the other the user community who would actually be benefiting from it?
Steven Tan: So the ecosystem that we are trying to build, that's the core part, which is actually the framework. For the framework, I mean, this part will be more of the data vendors or the storage vendors that will be involved in trying to build this ecosystem. And then the outer part, what I call the outer part of the ecosystem, will be things like the platforms: things like Kubernetes, VMware, all these different vendors, and then the networking kind of stuff that you need to take care of, like the big data analytics and stuff.
And then for the users, actually, as you can see from the SODA end-user advisory committee, I mean, that's where most of our users are participating in the communication. Most of these users, I mean, they are from different regions and different countries and different industries. So we try to serve, I mean, whichever participant is interested, they can participate in this thing. But the main thing is that even though they may be from different industries, actually most of the issues that they have are still the same. So there are some commonalities among all these users.
Swapnil Bhartiya: We are in the middle of 2020; because of COVID-19, everything has slowed down and things have changed. What does your roadmap, what does your plan look like? The structure, the governance, and the plan for '21 or the end of the year?
Steven Tan: We are very, how do you call it, a very community-driven or community-focused kind of organization. We hold a lot of meetups and events and so on where we get together the users and the vendors and so on and the community in general. So with this COVID-19 thing, a lot of the plans have been upset. I mean, it's in chaos right now. So most of the things are, like what everybody is doing, moving online. So we are having some webinars and stuff; even as of right now, when we are talking, we have a mini summit going on with the Open Source Summit North America.
So for the rest of this year, most of our events will be online. We're going to have some webinars and some meetups; you can find them on our website. And the other plan that we have is that we just released the SODA Faroe release, which is the 1.0 release. And through the end of this year, we're going to have two more releases: the G release and the H release. The G release is going to be in September, and the H release is at the end of the year. And we're trying to engage our users with things like the POC testing for Faroe, because for each release that we have, we try to get them to do the testing, and that's their way of providing feedback to us, whether that works for them or how we can improve to make the code work for what they need.
Swapnil Bhartiya: Awesome. So thank you so much for taking your time out and explaining more about the SODA Foundation, and I look forward to talking to you again because I can see that you have a very exciting pipeline ahead. So thank you.
Steven Tan: Thank you, thank you very much.
--------------------------------------------------------------------------------
via: https://www.linux.com/audience/developers/soda-foundation/
作者:[Swapnil Bhartiya][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/author/swapnil/
[b]: https://github.com/lujun9972


@@ -1,94 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (BigBlueButton: Open Source Software for Online Teaching)
[#]: via: (https://itsfoss.com/bigbluebutton/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
BigBlueButton: Open Source Software for Online Teaching
======
_**Brief: BigBlueButton is an open-source tool for video conferencing tailored for online teaching. Let's take a look at what it offers.**_
In the year 2020, remote working from home is kind of the new normal. Of course, you cannot do everything remotely — but online teaching is something that's possible.
Even though a lot of teachers and school organizations aren't familiar with all the amazing tools available out there, some of the [best open-source video conferencing tools][1] are filling in the requirements to some extent.
Among the ones I mentioned for video calls, [BigBlueButton][2] caught my attention. Here, I'll give you an overview of what it offers.
### BigBlueButton: An Open Source Web Conferencing System for Online Teaching
![][3]
BigBlueButton is an open-source web conferencing solution that aims to make online learning easy.
It is completely free to use but it requires you to set it up on your own server to use it as a full-fledged online learning solution.
BigBlueButton offers a really good set of features. You can easily try the [demo instance][4] and set it up on your server for your school.
Before you get started, take a look at the features:
### Features of BigBlueButton
BigBlueButton provides a bunch of useful features tailored for teachers and schools for online classes. Here's what you get:
* Live whiteboard
* Public and private messaging options
* Webcam support
* Session recording support
* Emojis support
* Ability to group users for team collaboration
* Polling options available
* Screen sharing
* Multi-user support for whiteboard
* Ability to self-host it
* Provides an API for easy integration on web applications
In addition to the features offered, you will find an easy-to-use UI, i.e., [Greenlight][5] (the front-end interface for BigBlueButton), to set up when you configure it on your server.
You can try using the demo instance for casual usage to teach your students for free. However, considering the limitations (a 60-minute limit) of using the [demo instance][4] to try BigBlueButton, I'd suggest you host it on your own server to explore all the functionality that it offers.
To get more clarity on how the features work, you might want to take a look at one of their official tutorials:
### Installing BigBlueButton On Your Server
They offer [detailed documentation][6], which should come in handy for every developer. The easiest and quickest way of setting it up is by using the [bbb-install script][7], but you can also explore other options if that does not work out for you.
For starters, you need a server running Ubuntu 16.04 LTS at least. You should take a look at the [minimum requirements][8] before deploying a server for BigBlueButton.
You can explore more about the project in their [GitHub page][9].
[Try BigBlueButton][2]
If you're someone who's looking to set up a solution for online teaching, BigBlueButton is a great choice to explore.
It may not offer native smartphone apps — but you can surely access it using the web browser on your mobile device. Of course, it's better to use a laptop/computer to access an online teaching platform — but it works with mobile too.
What do you think about BigBlueButton for online teaching? Is there a better open-source project as an alternative to this? Let me know in the comments below!
--------------------------------------------------------------------------------
via: https://itsfoss.com/bigbluebutton/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/open-source-video-conferencing-tools/
[2]: https://bigbluebutton.org/
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/07/big-blue-button.png?ssl=1
[4]: http://demo.bigbluebutton.org/
[5]: https://bigbluebutton.org/2018/07/09/greenlight-2-0/
[6]: https://docs.bigbluebutton.org/
[7]: https://github.com/bigbluebutton/bbb-install
[8]: https://docs.bigbluebutton.org/2.2/install.html#minimum-server-requirements
[9]: https://github.com/bigbluebutton


@@ -0,0 +1,87 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Bypass your Linux firewall with SSH over HTTP)
[#]: via: (https://opensource.com/article/20/7/linux-shellhub)
[#]: author: (Domarys https://opensource.com/users/domarys)
Bypass your Linux firewall with SSH over HTTP
======
Remote work is here to stay; use this helpful open source solution to
quickly connect and access all your devices from anywhere.
![Terminal command prompt on orange background][1]
With the growth of connectivity and remote jobs, accessing remote computing resources becomes more important every day. But the requirements for providing external access to devices and hardware make this task complex and risky. Aiming to reduce this friction, [ShellHub][2] is a cloud server that allows universal access to those devices, from any external network.
ShellHub is an open source solution, licensed under Apache 2.0, that covers all those needs and allows users to connect and manage multiple devices through a single account. It was developed to facilitate developers' and programmers' tasks, making remote access to Linux devices possible for any hardware architecture.
Looking more closely, the ShellHub solution uses the HTTP transport layer to encapsulate the SSH protocol. This transport layer choice allows for seamless use on most networks as it is commonly available and accepted by most companies' firewall rules and policies.
These examples use ShellHub version 0.3.2, released on Jun 10, 2020.
### Using ShellHub
To access the platform, just go to [shellhub.io][3] and register yourself to create an account. Your registration data will help the development team to understand the user profile and provide more insight into how to improve the platform.
![ShellHub registration form][4]
Figure 1: Registration form available in [shellhub.io][5]
ShellHub's design has an intuitive and clean interface that makes all information and functionality available in the fastest way. After you've registered, you will be on the dashboard, ready to register your first device.
### Adding a device
To enable the connection of devices via ShellHub, you'll need to generate an identifier that will be used to authenticate your device when it connects to the server.
This identification must be configured inside the agent (the ShellHub client), which is either saved in the device along with the image or added as a Docker container.
By default, ShellHub uses Docker to run the agent, which is very convenient, as it provides frictionless addition of devices on the existing system, with Docker support being the only requirement. To add a device, you need to paste the command line, which is presented inside the ShellHub Cloud dialog (see Figure 2).
![Figure 2: Adding a device to the ShellHub Cloud][6]
By default, the device uses its MAC address as its hostname. Internally, the device is identified by its key, which is generated during the device registration to authenticate it with the server.
### Accessing devices
To access your devices, just go to View All Devices in the dashboard, or click on Devices in the left-side menu; these will list all your registered devices.
The device state can be easily seen on the page. The online ones show a green icon next to them and can be connected to by clicking on the terminal icon. You then enter the credentials and, finally, click the Connect button (see Figure 3).
![Figure 3: Accessing a device using the terminal on the web][7]
Another way to access your devices is from any SSH client like [PuTTY][8], [Termius][9], or even the Linux terminal. We can use the ShellHub Identification, called SSHID, as the destination address to connect (e.g., ssh [username@SSHID][10]). Figure 4 illustrates how we can connect to our machine using the Linux SSH client on the terminal.
![Figure 4: Connecting to a device using the Linux terminal][11]
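If you prefer to script the connection rather than type it, here is a minimal, hypothetical sketch using the third-party paramiko library. The SSHID, username, and password below are placeholders that you would replace with the values shown on your Devices page, exactly as in the ssh command-line example above.

```python
# Hypothetical sketch (not an official ShellHub example): connect to a device by
# passing the SSHID where a hostname normally goes, as with "ssh username@SSHID".
# Requires the third-party paramiko package: pip install paramiko
import paramiko

SSHID = "my-namespace.my-device"   # placeholder; copy the real SSHID from the dashboard
USERNAME = "linux-user"            # account that exists on the remote device
PASSWORD = "device-password"       # or switch to key-based authentication

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # demo only; verify host keys in production
client.connect(hostname=SSHID, username=USERNAME, password=PASSWORD)

_, stdout, _ = client.exec_command("uname -a")  # quick sanity check on the device
print(stdout.read().decode())
client.close()
```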
Whenever you log in to the ShellHub Cloud platform, you'll have access to all your registered devices on the dashboard so you can access them from everywhere, anytime. ShellHub adds simplicity to the process of keeping communications secure with your remote machines through an open source platform and in a transparent way.
Join ShellHub Community on [GitHub][2] or feel free to send your suggestions or feedback to the developers' team through [Gitter][12] or by emailing [contato@ossystems.com.br][13]. We love to receive contributions from community members!
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/7/linux-shellhub
作者:[Domarys][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/domarys
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/terminal_command_linux_desktop_code.jpg?itok=p5sQ6ODE (Terminal command prompt on orange background)
[2]: https://github.com/shellhub-io/shellhub
[3]: https://www.shellhub.io/
[4]: https://opensource.com/sites/default/files/uploads/shellhub_registration_form_0.png (ShellHub registration form)
[5]: https://opensource.com/article/20/7/www.shellhub.io
[6]: https://opensource.com/sites/default/files/figure2.gif
[7]: https://opensource.com/sites/default/files/figure3.gif
[8]: https://www.putty.org/
[9]: https://termius.com/
[10]: mailto:username@SSHID
[11]: https://opensource.com/sites/default/files/figure4.gif
[12]: https://gitter.im/shellhub-io/community?at=5e39ad8b3aca1e4c5f633e8f
[13]: mailto:contato@ossystems.com.br


@@ -0,0 +1,78 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How learning Linux introduced me to open source)
[#]: via: (https://opensource.com/article/20/7/open-source-learning)
[#]: author: (Manaswini Das https://opensource.com/users/manaswinidas)
How learning Linux introduced me to open source
======
An engineering student's open source internships and volunteer
contributions helped her land a full-time developer job.
![Woman sitting in front of her computer][1]
When I entered the engineering program as a freshman in college, I felt like a frivolous teenager. In my sophomore year, and in a fortunate stroke of serendipity, I joined [Zairza][2], a technical society for like-minded students who collaborated and built projects separate from the academic curriculum. It was right up my alley. Zairza provided me a safe space to learn and grow and discover my interests. There are different facets and roadways to development, and as a newbie, I didn't know where my interests lay.
I made the switch to Linux then because I heard it is good for development. Fortunately, I had Ubuntu on my system. At first, I found it obnoxious to use because I was used to Windows. But I slowly got the hang of it and fell in love with it over time. I started exploring development by trying to build apps using Android and creating data visualizations using Python. I built a Wikipedia Reader app using the [Wikipedia API][3], which I thoroughly enjoyed. I learned to use Git and put my projects on GitHub, which not only helped me showcase my projects but also enabled me to store them.
I kept juggling between Ubuntu and other Linux distributions. My machine wasn't able to handle Android Studio since it consumed a lot of RAM. I finally made a switch to Fedora in 2016, and I have not looked back since.
At the end of my sophomore year, I applied to [Rails Girls Summer of Code][4] with another member of Zairza, [Anisha Swain][5], where we contributed to [HospitalRun][6]. I didn't know much about the tech stack, but I tagged along with her. This experience introduced me to open source. As I learned more about it, I came to realize that open source is ubiquitous. The tools I had used for a long time, like Git, Linux, and even Fedora, were open source all the while. It was fascinating!
I made my first contribution when I participated in [Hacktoberfest][7] 2017. I started diving deep and contributing to projects on GitHub. Slowly, I began gaining confidence. All the communities were newcomer-friendly, and I no longer felt like a fish out of water.
In November 2017, I began learning about other open source programs like [Google Summer of Code][8] and [Outreachy][9]. I discovered that Outreachy runs twice a year and decided to apply for the December to March cohort. It was late to apply, but I wanted to participate. I chose to contribute to [Ceph][10] and built some data visualizations using JavaScript. The mentors were helpful and amiable. I wasn't able to get through the project but, to be honest, I didn't think I tried hard enough. So, I decided to participate in the next cohort and contribute to projects that piqued my interest.
I started looking for projects as soon as they were announced on the Outreachy website. I found a Django project under the [Open Humans Foundation][11] and started contributing. I wasn't familiar with Django, but I learned it on the go. I enjoyed every bit of it! I learned about [GraphQL][12], [Django][13], and APIs in general. Three months after I started making contributions, the project announced its new interns. To my utter surprise, I got through. I was overjoyed! I learned many new things throughout my internship, and my mentor, Mike Escalante, was very supportive and helpful. I would like to extend my heartfelt gratitude to the Open Humans Foundation for extending this opportunity to me. I also attended [PyCon India][14] in Hyderabad the same year. I had never attended a conference before; it felt great to meet other passionate Pythonistas, and I could feel the power of community.
At the end of 2018, when I was edging closer to the end of my engineering program, I started preparing for interviews. That was a roller-coaster ride. I wasn't able to get past the second technical round in most of them.
In the meantime, I participated in the [Processing Foundation's fellowship program][15], where I worked with two other fellows, [Nancy Chauhan][16] and Shaharyar Shamshi, on promoting software literacy and making Processing's tools accessible to the Indian community. I applied as a mentor to open source programs, including [GirlScript Summer of Code][17] (GSSoC). Despite being a first-timer mentor, I found it really rewarding.
I also delivered [a talk][18] on my Outreachy project at [DjangoCon Europe][19] in April 2019. It was my first talk and also my first time alone abroad! I got a chance to interact and connect with the larger Django community, and I'm still in touch with the Djangonaut friends I made there. In July 2019, I started a [PyLadies chapter in Bhubaneswar][20], India, which held its first meetup the same month.
I went on job interviews relentlessly. I felt despondent and useless at times, but I realized I was getting better at them. I learned about internship openings at Red Hat in June 2019. I applied, and after several rounds, I got one! I started interning with Red Hat at the end of July and started working full time in January 2020.
It's been a year since I joined Red Hat, and not a single day has gone by without me learning something. In the last year, I have mentored in various open source programs, including [Google Code-In][21], GSSoC, [Red Hat Open Source Contest][22], and [Mentors Without Borders][23]. I have also discovered that I love to attend and speak at conferences. So far, I have spoken at conferences including PyCon, DjangoCon, and [Git Commit Show][24] and local meetups including Rails Girls Sekondi, PyLadies Bangalore, and Women Techmakers Bhubaneswar.
This journey from a confused teenager to a confident learner has been fulfilling in every possible way. To any student reading this, I advise: never stop learning. Even in these unprecedented times, the world is still your oyster. Participating in open source internships and other programs is not a prerequisite to becoming a successful programmer. Everyone is unique. Open source programs help boost your confidence, but they are not a must-have. And, if you do participate, even if you don't complete anything, don't worry. Believe in yourself, and keep looking for new opportunities to learn. Keep feeding your curiosity—and don't forget to pat yourself on your back for your efforts. The tassel is going to be worth the hassle.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/7/open-source-learning
作者:[Manaswini Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/manaswinidas
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_women_computing_2.png?itok=JPlR5aCA (Woman sitting in front of her computer)
[2]: https://zairza.in/
[3]: https://www.mediawiki.org/wiki/API:Main_page
[4]: https://railsgirlssummerofcode.org/
[5]: https://github.com/Anisha1234
[6]: https://hospitalrun.io/
[7]: https://hacktoberfest.digitalocean.com/
[8]: https://summerofcode.withgoogle.com/
[9]: http://outreachy.org/
[10]: https://ceph.io/
[11]: http://openhumansfoundation.org/
[12]: https://graphql.org/
[13]: https://www.djangoproject.com/
[14]: https://in.pycon.org/2018/
[15]: https://medium.com/processing-foundation/meet-our-2019-fellows-9f13d4e4a68a
[16]: https://nancychauhan.in/
[17]: https://www.gssoc.tech/
[18]: https://www.youtube.com/watch?v=IJ3qMXBRUXo
[19]: https://2019.djangocon.eu/
[20]: https://twitter.com/pyladiesbbsr
[21]: https://codein.withgoogle.com/archive/
[22]: https://research.redhat.com/red-hat-open-source-contest/
[23]: https://www.mentorswithoutborders.net/
[24]: https://gitcommit.show/

View File

@ -0,0 +1,126 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (10 cheat sheets for Linux sysadmins)
[#]: via: (https://opensource.com/article/20/7/sysadmin-cheat-sheets)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
10 cheat sheets for Linux sysadmins
======
These quick reference guides make sysadmins' lives and daily tasks
significantly easier, and they're all freely available.
![People work on a computer server with devices][1]
When you're a systems administrator, you don't just have one job; you have ALL the jobs, and often each one is on-demand with little to no warning. Unless you do a task every day, you may not always have all the commands and options you need in mind when you need them. And that's why I love cheat sheets.
Cheat sheets help you avoid silly mistakes, they keep you from having to look through pages of documentation, and they keep you moving efficiently through your tasks. I've selected my favorite 10 cheat sheets for any sysadmin, regardless of experience level.
### Networking
Our [Linux networking][2] cheat sheet is like the Swiss Army knife of cheat sheets. It contains gentle reminders for the most common networking commands, including `nslookup`, `tcpdump`, `nmcli`, `netstat`, `traceroute`, and more. Most importantly, it uses `ip`, so you can finally stop defaulting to `ifconfig`!
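If `ifconfig` is still muscle memory for you, a few `ip` equivalents are worth keeping at hand. The interface name below is only an example; check yours first with `ip link`:

```
$ ip address show        # list addresses on all interfaces (replaces ifconfig -a)
$ ip route show          # display the routing table (replaces route -n)
$ ip link set eth0 up    # bring an interface up (eth0 is an example name)
$ ip monitor             # watch address and route changes in real time
```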
### Firewall
There are two groups of sysadmins—those who understand iptables and those who use iptables config files written by the first group. If you're a member of that first group, you can keep using your iptables configurations with or without [firewalld][3].
If you're a member of the second group, you can finally set aside your iptables anxiety and embrace the ease of firewalld. Go read [Secure your Linux network with firewall-cmd][4], and then download our [firewalld cheat sheet][5] to remember what you learned. Protecting your network ports has never been easier.
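To give you a sense of how approachable `firewall-cmd` is compared to raw iptables rules, here are a few common operations; the service and port values are just examples:

```
$ firewall-cmd --get-active-zones             # which zone applies to which interface
$ firewall-cmd --list-all                     # what the default zone currently allows
$ firewall-cmd --permanent --add-service=https
$ firewall-cmd --reload                       # apply permanent changes
$ firewall-cmd --add-port=8080/tcp            # runtime-only change, discarded on reload
```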
### SSH
Many sysadmins live in a [POSIX][6] shell, so it's no surprise that one of the most important tools on Linux is a remote shell they can run on someone else's computer. Anyone learning server administration usually gets acquainted with SSH pretty early, but many of us learn only the basics.
Sure, SSH can open an interactive shell on a remote machine, but there's a lot more to it than that. For instance, say you need a graphical login on a remote machine. The user of the remote host is either away from the keyboard or else can't seem to understand your instructions for enabling VNC. As long as you have SSH access, you can open the port for them:
```
$ ssh -L 5901:localhost:5901 <remote_host>
```
Learn about that, and more, with our [SSH cheat sheet][7].
### Linux users and permissions
Traditional user accounts in the style of mainframes and UNIX supercomputers have largely been replaced now by systems such as Samba, LDAP, and OpenShift. That doesn't change the need, however, for careful admin and services account management. For that, you still need to be familiar with commands like `useradd`, `usermod`, `chown`, `chmod`, `passwd`, `gpasswd`, `umask`, and others.
Keep my [users and permissions cheat sheet][8] handy, and you'll always have a sensible overview of tasks related to user management, and example commands demonstrating the correct syntax for whatever you need to do.
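As a reminder of the kind of syntax the cheat sheet collects, here are a few representative commands; the user, group, and directory names are placeholders:

```
$ useradd --create-home --groups wheel alice     # create a user with a home directory
$ passwd alice                                   # set or reset that user's password
$ chown -R alice:developers /srv/project         # hand a directory to a user and group
$ chmod -R 770 /srv/project                      # and lock everyone else out of it
```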
### Essential Linux commands
Not all sysadmins spend all their time in a terminal. Whether you simply prefer a desktop for your work or you're just starting out on Linux, sometimes it's nice to have a task-oriented reference for common terminal commands.
It's difficult to capture everything you might need for an interface designed for flexibility and improvisation, but my [common commands cheat sheet][9] is pretty comprehensive. Modeled after a typical day in the life of any technically-inclined desktop user, this cheat sheet covers navigating your computer with text, finding absolute paths to files, copying and renaming files, making directories, starting system services, and more.
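A few of the everyday tasks it covers look like this; the paths and service name are examples, and the service name in particular varies between distributions:

```
$ realpath notes.txt                  # find a file's absolute path
$ cp notes.txt notes-backup.txt       # copy a file
$ mv notes-backup.txt archive/        # move (rename) it into another directory
$ mkdir -p projects/website/assets    # create nested directories in one step
$ systemctl status sshd               # check on a system service
```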
### Git
At one point in the history of computers, revision control was something only developers needed. But that was then, and Git is now. Version control is an important tool for anyone looking to track changes to anything from Bash scripts to configuration files, documentation, and code. Git is applicable to everyone, including programmers, site reliability engineers (SREs), and even sysadmins.
Get our [Git cheat sheet][10] to learn the essentials, the basic workflow, and the most important Git flags.
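The basic workflow it walks through is short enough to sketch here; the file and branch names are examples only:

```
$ git init                                     # turn a directory of scripts into a repository
$ git add backup.sh
$ git commit -m "Add nightly backup script"
$ git status                                   # see what has changed since the last commit
$ git checkout -b fix-retention                # work on a change in its own branch
$ git commit -am "Keep 14 days of snapshots"
$ git checkout master
$ git merge fix-retention                      # bring the change back into the main branch
```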
### Curl
Curl isn't necessarily a tool specific to sysadmins; it's technically "just" a non-interactive web browser for the terminal. You might go days without using it. And yet, chances are strong that you'll find Curl useful for something you do on a daily basis, whether it's a way to quickly reference some information on a website, to troubleshoot a web host, or to verify an important API you either run or rely upon.
Curl is a command to transfer data to and from a server, and it supports protocols including HTTP, FTP, IMAP, LDAP, POP3, SCP, SFTP, SMB, SMTP, and more. It's a vital networking tool, so download our [cheat sheet][11] and start exploring Curl.
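A few everyday uses look like this; the URLs are placeholders:

```
$ curl -I https://example.com                       # fetch only the response headers
$ curl -L -o index.html https://example.com         # follow redirects and save the body
$ curl -X POST -H "Content-Type: application/json" \
       -d '{"status": "ok"}' https://example.com/api/health   # exercise an API endpoint
```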
### SELinux
Linux security policies are good by default, with strong separation between root and user privileges, but SELinux improves upon that using a labeling system. On a host configured with SELinux, every process and every file object (or directory, network port, device, and so on) has a label. SELinux provides a set of rules to control the access of a process label to an object (like a file) label.
Sometimes you need to adjust SELinux policies, or debug something that didn't get set properly upon install, or gain insight into current policies. Our [SELinux cheat sheet][12] can help.
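Here are a few of the commands you'll find on it; the paths are examples, and `ausearch` assumes the audit tools are installed:

```
$ getenforce                          # is SELinux enforcing, permissive, or disabled?
$ ls -Z /var/www/html                 # show SELinux labels on files
$ ps -eZ | grep httpd                 # show labels on running processes
$ restorecon -Rv /var/www/html        # restore default labels after moving files into place
$ ausearch -m avc -ts recent          # review recent denials
```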
### Kubectl
Whether you've moved into an open hybrid cloud, a closed cloud, or you're still investigating what such a move will take, you need to know Kubernetes. While the cloud does still need people to wrangle physical servers, your future as a sysadmin is definitely going to involve containers, and nothing does that better than Kubernetes.
While [OpenShift][13] provides a smooth "dashboard" experience for Kubernetes, sometimes a direct approach is necessary, which is exactly what `kubectl` provides. Next time you have to push containers around, make sure you have our [kubectl cheat sheet][14] on hand.
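A handful of `kubectl` commands come up constantly; the deployment, pod, and file names below are placeholders:

```
$ kubectl get pods --all-namespaces              # see what's running and where
$ kubectl apply -f deployment.yaml               # apply a manifest
$ kubectl rollout status deployment/my-app       # watch the rollout complete
$ kubectl describe pod my-app-6d5f9c7b8-x2k4j    # inspect a misbehaving pod
$ kubectl logs -f my-app-6d5f9c7b8-x2k4j         # and follow its logs
```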
### awk
Linux has seen a lot of innovation in recent years; there have been virtual machines, containers, new security models, new init systems, clouds, and much more. And yet some things never seem to change. In particular, sysadmins still need to parse and isolate information from log files and other endless streams of data, and there's still no tool better suited for the job than Aho, Weinberger, and Kernighan's classic `awk` command.
Of course, awk has come a long way since it was written way back in 1977, with new options and features to make it even easier to use. But if you don't use awk every day, all the options and syntax can be a little overwhelming. Download our [awk cheat sheet][15] for the executive summary of how GNU awk works.
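As a taste of the one-liners the cheat sheet condenses, here are a few run against a space-delimited web server log; adjust the field numbers for your own files:

```
$ awk '{print $1}' access.log        # print the first field (the client IP)
$ awk '$9 ~ /^5/' access.log         # show only lines whose ninth field is a 5xx status
$ awk '{count[$9]++} END {for (c in count) print c, count[c]}' access.log   # requests per status code
```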
### Bonus: Bash scripting
Cheat sheets are useful, but if you're looking for something a little more comprehensive, you can download our [Bash scripting book][16]. This guide teaches you how to combine all the commands you know from cheat sheets and experience into scripts, helping you build an arsenal of on-call, automated solutions to solve your everyday problems. It's packed with detailed explanations of how Bash works, how scripting is different from interactive commands, how to catch errors, and much more.
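For example, the defensive habits the book encourages boil down to a few lines you can adopt immediately. This is a generic sketch, not an excerpt from the book, and the backup path is only an example:

```
#!/usr/bin/env bash
# Stop on errors, unset variables, and failures anywhere in a pipeline
set -euo pipefail

# Say where a failure happened before exiting
trap 'echo "Error on line $LINENO" >&2' ERR

backup_dir="${1:-/var/backups}"
mkdir -p "$backup_dir"
tar -czf "$backup_dir/etc-$(date +%F).tar.gz" /etc
echo "Backup written to $backup_dir"
```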
### Enabling sysadmins
Are you a sysadmin?
Are you on your way to becoming one?
Are you curious about what sysadmins do all day?
If so, check out [Enable Sysadmin][17] for new articles from the industry's hardest working systems administrators about what they do and how Linux and open source make it all possible.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/7/sysadmin-cheat-sheets
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_linux11x_cc.png?itok=XMDOouJR (People work on a computer server with devices)
[2]: https://opensource.com/downloads/cheat-sheet-networking
[3]: https://firewalld.org/
[4]: https://www.redhat.com/sysadmin/secure-linux-network-firewall-cmd
[5]: https://opensource.com/downloads/firewall-cheat-sheet
[6]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[7]: https://opensource.com/downloads/advanced-ssh-cheat-sheet
[8]: https://opensource.com/downloads/linux-permissions-cheat-sheet
[9]: https://opensource.com/downloads/linux-common-commands-cheat-sheet
[10]: https://opensource.com/downloads/cheat-sheet-git
[11]: https://opensource.com/downloads/curl-command-cheat-sheet
[12]: https://opensource.com/downloads/cheat-sheet-selinux
[13]: https://opensource.com/tags/openshift
[14]: https://opensource.com/downloads/kubectl-cheat-sheet
[15]: https://opensource.com/downloads/cheat-sheet-awk-features
[16]: https://opensource.com/downloads/bash-scripting-ebook
[17]: http://redhat.com/sysadmin

View File

@ -0,0 +1,284 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Monitor systemd journals via email)
[#]: via: (https://opensource.com/article/20/7/systemd-journals-email)
[#]: author: (Kevin P. Fleming https://opensource.com/users/kpfleming)
Monitor systemd journals via email
======
Get a daily email with noteworthy output from your systemd journals with
journal-brief.
![Note taking hand writing][1]
Modern Linux systems often use systemd as their init system and manager for jobs and many other functions. Services managed by systemd generally send their output (of all forms: warnings, errors, informational messages, and more) to the systemd journal, not to traditional logging systems like syslog.
In addition to services, Linux systems often have many scheduled jobs (traditionally called cron jobs, even if the system doesn't use `cron` to run them), and these jobs may either send their output to the logging system or allow the job scheduler to capture the output and deliver it via email.
When managing multiple systems, you can install and configure a centralized log-capture system to monitor their behavior, but the complexity of centralized systems can make them hard to manage.
A simpler solution is to have each system directly send "interesting" output to the administrator(s) by email. For systems using systemd, this can be done using Tim Waugh's [journal-brief][2] tool. This tool _almost_ served my needs when I discovered it recently, so, in typical open source fashion, I contributed various patches to add email support to the project. Tim worked with me to get them merged, and now I can use the tool to monitor the 20-plus systems I manage as simply as possible.
Now, early each morning, I receive between 20 and 23 email messages: most of them contain a filtered view of each machine's entire systemd journal (with warnings or more serious messages), but a few are logs generated by scheduled ZFS snapshot-replication jobs that I use for backups. In this article, I'll show you how to set up similar messages.
### Install journal-brief
Although journal-brief is available in many Linux package repositories, the packaged versions will not include email support because that was just added recently. That means you'll need to install it from PyPI; I'll show you how to manually install it into a Python virtual environment to avoid interfering with other parts of the installed system. If you have a favorite tool for doing this, feel free to use it.
Choose a location for the virtual environment; in this article, I'll use `/opt/journal-brief` for simplicity.
Nearly all the commands in this tutorial must be executed with root permissions or the equivalent (noted by the `#` prompt). However, it is possible to install the software in a user-owned directory, grant that user permission to read from the journal, and install the necessary units as systemd `user` units, but that is not covered in this article.
Execute the following to create the virtual environment and install journal-brief and its dependencies:
```
$ python3 -m venv /opt/journal-brief
$ source /opt/journal-brief/bin/activate
$ pip install 'journal-brief>=1.1.7'
$ deactivate
```
In order, these commands will:
1. Create `/opt/journal-brief` and set up a Python 3.x virtual environment there
2. Activate the virtual environment so that subsequent Python commands will use it
3. Install journal-brief; note that the single-quotes are necessary to keep the shell from interpreting the `>` character as a redirection
4. Deactivate the virtual environment, returning the shell back to the original Python installation
Also, create some directories to store journal-brief configuration and state files with:
```
$ mkdir /etc/journal-brief
$ mkdir /var/lib/journal-brief
```
### Configure email requirements
While configuring email clients and servers is outside the scope of this article, for journal-brief to deliver email, you will need to have one of the two supported mechanisms configured and operational.
#### Option 1: The `mail` command
Many systems have a `mail` command that can be used to send (and read) email. If such a command is installed on your system, you can verify that it is configured properly by executing a command like:
```
$ echo "Message body" | mail --subject="Test message" {your email address here}
```
If the message arrives in your mailbox, you're ready to proceed using this type of mail delivery in journal-brief. If not, you can either troubleshoot and correct the configuration or use SMTP delivery.
To control the generated email messages' attributes (e.g., From address, To address, Subject) with the `mail` command method, you must use the command-line options in your system's mailer program: journal-brief will only construct a message's body and pipe it to the mailer.
#### Option 2: SMTP delivery
If you have an SMTP server available that can accept email and forward it to your mailbox, journal-brief can communicate directly with it. In addition to plain SMTP, journal-brief supports Transport Layer Security (TLS) connections and authentication, which means it can be used with many hosted email services (like Fastmail, Gmail, Pobox, and others). You will need to obtain a few pieces of information to configure this delivery mode:
* SMTP server hostname
* Port number to be used for message submission (it defaults to port 25, but port 587 is commonly used)
* TLS support (optional or required)
* Authentication information (username and password/token, if required)
When using this delivery mode, journal-brief will construct the entire message before submitting it to the SMTP server, so the From address, To address, and Subject will be supplied in journal-brief's configuration.
### Set up configuration and cursor files
Journal-brief uses YAML-formatted configuration files; it uses one file per desired combination of filtering parameters, delivery options, and output formats. For this article, these files are stored in `/etc/journal-brief`, but you can store them in any location you like.
In addition to the configuration files, journal-brief creates and manages **cursor** files, which allow it to keep track of the last message in its output. Using one cursor file for each configuration file ensures that no journal messages will be lost, in contrast to a time-based log-delivery system, which might miss messages if a scheduled delivery job can't run to completion. For this article, the cursor files will be stored in `/var/lib/journal-brief` (you can store the cursor files in any location you like, but make sure not to store them in any type of temporary filesystem, or they'll be lost).
Finally, journal-brief has extensive filtering and formatting capabilities; I'll describe only the most basic options, and you can learn more about its capabilities in the documentation for journal-brief and [systemd.journal-fields][3].
### Configure a daily email with interesting journal entries
This example will set up a daily email to a system administrator named Robin at `robin@domain.invalid` from a server named `storage`. Robin's mail provider offers SMTP message submission through port 587 on a server named `mail.server.invalid` but does not require authentication or TLS. The email will be sent from `storage-server@domain.invalid`, so Robin can easily filter the incoming messages or generate alerts from them.
Robin has the good fortune to live in Fiji, where the workday starts rather late (around 10:00am), so there's plenty of time every morning to read emails of interesting journal entries. This example will gather the entries and deliver them at 8:30am in the local time zone (Pacific/Fiji).
#### Step 1: Configure journal-brief
Create a text file at `/etc/journal-brief/daily-journal-email.yml` with these contents:
```
cursor-file: '/var/lib/journal-brief/daily-journal-email'
output:
  - 'short'
  - systemd
inclusions:
  - PRIORITY: 'warning'
email:
  suppress_empty: false
  smtp:
    to: '"Robin" <robin@domain.invalid>'
    from: '"Storage Server" <storage-server@domain.invalid>'
    subject: 'daily journal'
    host: 'mail.server.invalid'
    port: 587
```
This configuration causes journal-brief to:
* Store the cursor at the path configured as `cursor-file`
* Format journal entries using the `short` format (one line per entry) and provide a list of any systemd units that are in the `failed` state
* Include journal entries from _any_ service unit (even the Linux kernel) with a priority of `warning`, `error`, or `emergency`
* Send an email even if there are no matching journal entries, so Robin can be sure that the storage server is still operating and has connectivity
* Send the email using SMTP
You can test this configuration file by executing a journal-brief command:
```
$ journal-brief --conf /etc/journal-brief/daily-journal-email.yml
```
Journal-brief will scan the systemd journal for all new messages (yes, _all_ of the messages it has never seen before), identify any that match the priority filter, and format them into an email that it sends to Robin. If the storage server has been operational for months (or years) and the systemd journal has never been purged, this could produce a very large email message. In addition to Robin not appreciating such a large message, Robin's email provider may not be willing to accept it, so you can generate a shorter message by executing this command:
```
$ journal-brief -b --conf /etc/journal-brief/daily-journal-email.yml
```
Adding the `-b` argument tells journal-brief to inspect only the systemd journal entries from the most recent system boot and ignore any that are older.
After journal-brief sends the email to the SMTP server, it writes a string into the cursor file so that the next time it runs using the same cursor file, it will know where to start in the journal. If the process fails for any reason (e.g., journal entry gathering, entry formatting, or SMTP delivery), the cursor file will _not_ be updated, which means the next time it uses the cursor file, the entries that would have been in the failed email will be included in the next email instead.
#### Step 2: Set up the systemd service unit
Create a text file at `/etc/systemd/system/daily-journal-email.service` with:
```
[Unit]
Description=Send daily journal report
[Service]
ExecStart=/opt/journal-brief/bin/journal-brief --conf /etc/journal-brief/%N.yml
Type=oneshot
```
This service unit will run journal-brief and specify a configuration file with the same name as the unit file with the suffix removed, which is what `%N` supplies. Since this service will be started by a timer (see step 3), there is no need to enable or manually start it.
#### Step 3: Set up the systemd timer unit
Create a text file at `/etc/systemd/system/daily-journal-email.timer` with:
```
[Unit]
Description=Trigger daily journal email report
[Timer]
OnCalendar=*-*-* 08:30:00 Pacific/Fiji
[Install]
WantedBy=multi-user.target
```
This timer will start the `daily-journal-email` service unit (because its name matches the timer name) every day at 8:30am in the Pacific/Fiji time zone. If the time zone was not specified, the timer would trigger the service at 8:30am in the system time zone configured on the `storage` server.
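If you want to double-check how systemd will interpret that schedule (including the time zone) before enabling anything, recent systemd versions can evaluate calendar expressions for you; this is an optional sanity check, not something journal-brief requires:

```
$ systemd-analyze calendar "*-*-* 08:30:00 Pacific/Fiji"
```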
To make this timer start every time the system boots, it is `WantedBy` the multi-user target. To enable and start the timer:
```
$ systemctl enable daily-journal-email.timer
$ systemctl start daily-journal-email.timer
$ systemctl list-timers daily-journal-email.timer
```
The last command will display the timer's status, and the `NEXT` column will indicate the next time the timer will start the service.
To learn more about systemd timers and building schedules for them, read [_Use systemd timers instead of cronjobs_][6].
Now the configuration is complete, and Robin will receive a daily email of interesting journal entries.
### Monitor the output of a specific service
The `storage` server has some filesystems on solid-state storage devices (SSD) and runs Fedora Linux. Fedora has an `fstrim` service that is scheduled to run once per week (using a systemd timer, as in the example above). Robin would like to see the output generated by this service, even if it doesn't generate any warnings or errors. While this output will be included in the daily journal email, it will be intermingled with other journal entries, and Robin would prefer to have the output in its own email message.
#### Step 1: Configure journal-brief
Create a text file at `/etc/journal-brief/fstrim.yml` with:
```
cursor-file: '/var/lib/journal-brief/fstrim'
output: 'short'
inclusions:
  - _SYSTEMD_UNIT:
      - fstrim.service
email:
  suppress_empty: false
  smtp:
    to: '"Robin" <robin@domain.invalid>'
    from: '"Storage Server" <storage-server@domain.invalid>'
    subject: 'weekly fstrim'
    host: 'mail.server.invalid'
    port: 587
```
This configuration is similar to the previous example, except that it will include _all_ entries related to a systemd unit named `fstrim.service`, regardless of their priority levels, and will include _only_ entries related to that service.
#### Step 2: Modify the systemd service unit
Unlike in the previous example, you don't need to create a systemd service unit or timer, since they already exist. Instead, you want to add behavior to the existing service unit by using the systemd "drop-in file" mechanism (to avoid modifying the system-provided unit file).
First, ensure that the `EDITOR` environment variable is set to your preferred text editor (otherwise you'll get the default editor on your system), and execute:
```
$ systemctl edit fstrim.service
```
Note that this does not edit the existing service unit file; instead, it opens an editor session to create a drop-in file (located at `/etc/systemd/system/fstrim.service.d/override.conf`).
Paste these contents into the editor and save the file:
```
[Service]
ExecStopPost=/opt/journal-brief/bin/journal-brief --conf /etc/journal-brief/%N.yml
```
After you exit the editor, the systemd configuration will reload automatically (which is one benefit of using `systemctl edit` instead of creating the file directly). Like in the previous example, this drop-in uses `%N` to avoid duplicating the service name; this means that the drop-in contents can be applied to any service on the system, as long as the appropriate configuration file is created in `/etc/journal-brief`.
Using `ExecStopPost` will make journal-brief run after any attempt to run the `fstrim.service`, whether or not it's successful. This is quite useful, as the email will be generated even if the `fstrim.service` cannot be started (for example, if the `fstrim` command is missing or not executable).
Please note that this technique is primarily applicable to systemd services that run to completion before exiting (in other words, not background or daemon processes). If the `Type` in the `Service` section of the service's unit file is `forking`, then journal-brief will not execute until the specified service has stopped (either manually or by a system target change, like shutdown).
The configuration is complete; Robin will receive an email after every attempt to start the `fstrim` service; if the attempt is successful, then the email will include the output generated by the service.
### Monitor without extra effort
With this setup, you can monitor the health of your Linux systems that use systemd without needing to set up any centralized monitoring or logging tools. I find this monitoring method quite effective, as it draws my attention to unusual events on the servers I maintain without requiring any additional effort.
Special thanks to Tim Waugh for creating the journal-brief tool and being willing to accept a rather large patch to add direct email support rather than running journal-brief through cron.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/7/systemd-journals-email
作者:[Kevin P. Fleming][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/kpfleming
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/note-taking.jpeg?itok=fiF5EBEb (Note taking hand writing)
[2]: https://github.com/twaugh/journal-brief
[3]: https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html
[4]: mailto:robin@domain.invalid
[5]: mailto:storage-server@domain.invalid
[6]: https://opensource.com/article/20/7/systemd-timers

View File

@ -1,71 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (JonnieWayy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What you need to know about Rust in 2020)
[#]: via: (https://opensource.com/article/20/1/rust-resources)
[#]: author: (Ryan Levick https://opensource.com/users/ryanlevick)
2020 年关于 Rust 你所需要知道的
======
尽管许多程序员长期以来一直将 Rust 用于业余爱好项目,但正如 Opensource.com 上许多有关 Rust 的热门文章所解释的那样,该语言在 2019 年吸引了主要技术公司的支持。
![用笔记本电脑的人][1]
一段时间以来, [Rust][2] 在诸如 Hacker News 之类的网站上引起了程序员大量的关注。尽管许多人一直喜欢在业余爱好项目中[使用该语言][3],但直到 2019 年它才开始在工业界流行,直到那会儿情况才真正开始有所转变。
在过去的一年中,包括 [Microsoft][4]、 [Facebook][5] 和 [Intel][6] 在内的许多大公司都出来支持 Rust许多[较小的公司][7]也注意到了这一点。2016 年,作为欧洲最大的 Rust 大会 [RustFest][8] 的第一主持人,我没见到任何一个人工作中使用 Rust 但却不在 Mozilla 工作的。三年后,似乎我在 RustFest 2019 有所交流的每个人都将 Rust 用于其他公司的日常工作,无论是作为游戏开发人员、银行的后端工程师、开发者工具的创造者或是其他的一些岗位。
在 2019 年, Opensource.com 也通过报道 Rust 日益增长的受欢迎程度而发挥了作用。万一您错过了它们,这里是过去一年里 Opensource.com 上关于 Rust 的热门文章。
### 使用 rust-vmm 构建未来的虚拟化堆栈
Amazon 的 [Firecracker][9] 是支持 AWS Lambda 和 Fargate 的虚拟化技术,完全使用 Rust 编写。这项技术的作者之一 Andreea Florescu 在 [**《使用 rust-vmm 构建未来的虚拟化堆栈》**][10]中提供了对 Firecracker 及其相关技术的深刻见解。
Firecracker 最初是 Google [CrosVM][11] 的一个分支,但是很快由于两个项目的不同需求而分化。尽管如此,在这个项目与其他用 Rust 所编写的虚拟机管理器VMM之间仍有许多得到了很好共享的通用片段。考虑到这一点 [rust-vmm][12] 起初是以一种让 Amazon 和 Google Intel 和 Red Hat 以及其余开源社区去相互共享通用 Rust “crates” (即程序包)的方式开始的。其中包括 KVM 接口Linux 虚拟化 API、 Virtio 设备支持以及内核加载程序。
看到软件行业的一些巨头围绕用 Rust 编写的通用技术栈协同工作,实在是很神奇。鉴于这种和其他[使用 Rust 编写的技术堆栈][13]之间的伙伴关系,到了 2020 年,看到更多这样的情况我不会感到惊讶。
### 为何选择 Rust 作为你的下一门编程语言
采用一门新语言,尤其是在有着建立已久技术栈的大公司,并非易事。我很高兴写了[《为何选择 Rust 作为你的下一门编程语言》][14],书中讲述了 Microsoft 是如何在没有考虑其他这么多有趣的编程语言的情况下选择了采用 Rust。
选择编程语言涉及许多不同的标准——从技术上到组织上,甚至是情感上。 其中一些标准比其他的更容易衡量。比方说,了解技术变更的成本(例如调整构建系统和构建新工具)要比理解组织或情感问题(例如高效或快乐的开发人员将如何使用这种新语言)容易得多。 此外,易于衡量的标准通常与成本相关,而难以衡量的标准通常以收益为导向。 这通常会导致成本在决策过程中变得越来越重要,即使这不一定就是说成本要比收益更重要——只是成本更容易衡量。 这使得公司不太可能采用新的语言。
然而Rust 最大的好处之一是很容易衡量其编写安全且高性能系统软件的能力。鉴于 Microsoft 70% 的安全漏洞是由于 Rust 旨在防止的内存安全问题导致的,而且这些问题每年都使公司付出了几十亿美元的代价,很容易衡量并理解采用这门语言的好处。
是否会在 Microsoft 全面采用 Rust 尚待观察,但是仅凭着相对于现有技术具有明显且可衡量的好处这一事实, Rust 的未来一片光明。
### 2020 年的 Rust
尽管要达到 C++ 等语言的流行度还有很长的路要走。Rust 实际上已经开始在工业界引起关注。我希望更多公司在 2020 年开始采用 Rust。 Rust 社区现在必须着眼于欢迎开发人员和公司加入社区,同时确保将推动该语言发展到现在的一切都保留下来。
Rust 不仅仅是一个编译器和一组库,而是一群想要使系统编程变得容易、安全而且有趣的人。即将到来的这一年,对于 Rust 从业余爱好语言到软件行业所使用的主要语言之一的转型至关重要。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/rust-resources
作者:[Ryan Levick][a]
选题:[lujun9972][b]
译者:[JonnieWayy](https://github.com/JonnieWayy)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ryanlevick
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/laptop_screen_desk_work_chat_text.png?itok=UXqIDRDD (Person using a laptop)
[2]: http://rust-lang.org/
[3]: https://insights.stackoverflow.com/survey/2019#technology-_-most-loved-dreaded-and-wanted-languages
[4]: https://youtu.be/o01QmYVluSw
[5]: https://youtu.be/kylqq8pEgRs
[6]: https://youtu.be/l9hM0h6IQDo
[7]: https://oxide.computer/blog/introducing-the-oxide-computer-company/
[8]: https://rustfest.eu
[9]: https://firecracker-microvm.github.io/
[10]: https://opensource.com/article/19/3/rust-virtual-machine
[11]: https://chromium.googlesource.com/chromiumos/platform/crosvm/
[12]: https://github.com/rust-vmm
[13]: https://bytecodealliance.org/
[14]: https://opensource.com/article/19/10/choose-rust-programming-language

View File

@ -0,0 +1,94 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (BigBlueButton: Open Source Software for Online Teaching)
[#]: via: (https://itsfoss.com/bigbluebutton/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
BigBlueButton开源在线教学软件
======
_**简介BigBlueButton 是一个为在线教学量身定制的开源视频会议工具。让我们来看看它提供了什么。**_
在 2020 年,在家远程工作是一种新常态。当然,你不能远程完成所有事情,但是可以进行在线教学。
一些[最佳开源视频会议工具][1]在一定程度上满足了这方面的需求,只是许多老师和学校组织还不熟悉这些出色的工具。
在我提到的视频通话软件中,[BigBlueButton][2] 引起了我的注意。在这里,我将为你简单介绍。
### BigBlueButton用于在线教学的开源 Web 会议系统
![][3]
BigBlueButton 是一个开源的网络会议方案,它旨在简化在线学习。
它是完全免费的,但是需要你在自己的服务器上安装才能将其用作完整的在线学习方案。
BigBlueButton 提供了非常好的一组功能。你可以轻松地尝试[演示实例][4],并在学校的服务器上进行安装。
开始之前,请先了解以下功能:
### BigBlueButton 的功能
BigBlueButton 提供了一系列量身定制的针对教师和学校在线课堂的有用功能,你可以获得:
* 现场白板
* 给公共和私人发消息
* 支持网络摄像头
* 支持会话记录
* 支持表情符号
* 能够将用户分组以进行团队协作
* 支持投票
* 屏幕共享
* 支持多用户白板
* 能够自行托管
* 提供用于轻松集成到 Web 应用中的 API
除了上述功能外,你还会得到一个易于使用的用户界面,即 [Greenlight][5]BigBlueButton 的前端界面),当你在服务器上配置 BigBlueButton 时可以一并安装它。
你可以先用演示实例免费地给你的学生临时上课。但是,考虑到[演示实例][4]的局限性(限制为 60 分钟),建议你将 BigBlueButton 托管在自己的服务器上,以探索其提供的所有功能。
为了更清楚地了解这些功能是如何工作的,你可能需要看下它的官方教程。
### 在你的服务器上安装 BigBlueButton
他们提供了[详细文档][6],它对每个开发人员都会有用。安装它最简单、最快捷的方法是使用 [bbb-install 脚本][7],但是如果不成功,你也可以探索其他选项。
对于刚接触的人,你需要一台至少运行 Ubuntu 16.04 LTS 的服务器。在为 BigBlueButton 部署服务器之前,你应该查看[最低要求][8]。
你可以在它的 [GitHub 页面][9]中进一步了解该项目。
[Try BigBlueButton][2]
如果你想为在线教学安装解决方案,那么 BigBlueButton 是一个不错的选择。
它可能没有提供原生的智能手机应用,但你肯定可以用手机上的网络浏览器来访问它。当然,最好用一台笔记本电脑或台式机来访问在线教学平台,但它在移动设备上也可以使用。
你认为 BigBlueButton 的在线教学如何?有没有更好的开源项目可以替代?在下面的评论中让我知道!
--------------------------------------------------------------------------------
via: https://itsfoss.com/bigbluebutton/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/open-source-video-conferencing-tools/
[2]: https://bigbluebutton.org/
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/07/big-blue-button.png?ssl=1
[4]: http://demo.bigbluebutton.org/
[5]: https://bigbluebutton.org/2018/07/09/greenlight-2-0/
[6]: https://docs.bigbluebutton.org/
[7]: https://github.com/bigbluebutton/bbb-install
[8]: https://docs.bigbluebutton.org/2.2/install.html#minimum-server-requirements
[9]: https://github.com/bigbluebutton