Merge pull request #25610 from lkxed/2022-05-11-delete-outdated-articles

Delete outdated articles & update the dates in file names
This commit is contained in:
Xingyu.Wang 2022-05-11 22:58:41 +08:00 committed by GitHub
commit b73f3d01cd
12 changed files with 0 additions and 1346 deletions


@@ -1,121 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (10 ways big data and data science impacted the world in 2020)
[#]: via: (https://opensource.com/article/21/1/big-data)
[#]: author: (Lauren Maffeo https://opensource.com/users/lmaffeo)
10 ways big data and data science impacted the world in 2020
======
Learn how open source data science languages, libraries, and tools are
helping us understand our world better by reviewing 2020's top 10 data
science articles on Opensource.com.
![Looking at a map][1]
Big data's one of many domains where open source shines. From open source alternatives for Google Analytics to new features in MySQL, 2020 brought several ways for open source enthusiasts to learn big data skills.
Get up to speed on how open source data science languages, libraries, and tools help us understand our world better by reviewing the top 10 data science articles published on Opensource.com last year. 
### The 7 most popular ways to plot data in Python
Once upon a time, Matplotlib was the only way to make plots in Python. In recent years, Python's status as data science's de facto language changed that. We have a plethora of ways to plot data using Python today.
In this article, Shaun Taylor-Morgan walks through [seven ways to plot data in Python][2]. Don't worry if you're a Matplotlib user: It's covered, along with Seaborn, Plotly, and Bokeh. You'll find code and charts for each plotting library, plus some newcomers to the Python plotting field: Altair, Pygal, and pandas.
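For a sense of how little code a basic chart takes, here is a minimal Matplotlib sketch (my own illustration, not code from Shaun's article) that plots a small dataset and writes it to an image file; Seaborn, Plotly, and the other libraries offer comparably short equivalents:
```
# A minimal Matplotlib line plot, saved to a PNG instead of shown on screen.
import matplotlib
matplotlib.use("Agg")          # render without a display
import matplotlib.pyplot as plt

x = list(range(10))
y = [value ** 2 for value in x]

fig, ax = plt.subplots()
ax.plot(x, y, marker="o", label="y = x^2")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.legend()
fig.savefig("plot.png")        # write the chart to an image file
```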
### Transparent, open source alternative to Google Analytics
Many websites use Google Analytics to track their activity metrics. Its status as a de facto tool leaves some to wonder if open source options exist. In this [overview of Plausible Analytics][3], Marko Saric proves they do.
If you want to compare Google Analytics against open source options, you will find Marko's article helpful. It's especially great if you're a website admin trying to comply with new data collection regulations, such as GDPR.
If you want to learn more about Plausible, you'll find links to Plausible's code and roadmap on GitHub in Marko's article.
### 5 MySQL features you need to know
After MySQL 8.0 came out in April 2018, its release cycle changed to deliver new features four times per year. Despite the more frequent releases, many users don't know about [new MySQL features][4] that could save them hours of time.
In this March 2020 article, Dave Stokes shares five features that were new to MySQL. They include dual passwords, new shells, and better SQL support. But keep in mind that these updates are now close to a year old: There's a lot more to discover in MySQL since then!
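As a taste of one of those features, here is a hedged sketch of dual passwords driven from Python's mysql-connector driver; the host, credentials, and the `app` account are placeholders for the example, not anything from Dave's article:
```
# Rotate a password without breaking running clients (dual passwords, MySQL 8.0.14+).
# Host, credentials, and the 'app' account below are illustrative placeholders.
import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(host="localhost", user="root", password="root-secret")
cur = conn.cursor()

# Add a new password while the old one keeps working.
cur.execute("ALTER USER 'app'@'%' IDENTIFIED BY 'new-secret' RETAIN CURRENT PASSWORD")

# ...redeploy clients with the new credential, then retire the old password.
cur.execute("ALTER USER 'app'@'%' DISCARD OLD PASSWORD")

cur.close()
conn.close()
```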
### Using C and C++ for data science
Did you know that C and C++ are both strong options for data science projects? They're especially good choices to [run data science programs on the command line][5].
In this article, Cristiano L. Fontana uses [C99][6] and [C++11][7] to write a program that uses [Anscombe's quartet][8] dataset. The step-by-step instructions include reading data from a CSV file, interpolating data, and plotting results to an image file.
### Using Python to visualize COVID-19 projections
The COVID-19 pandemic brought an influx of data to the proverbial forefront. In this article, Anurag Gupta shows how to use Python to [project COVID-19 cases and deaths][9] across India.
Anurag walks through downloading and parsing data, selecting and plotting data for India, and creating an animated horizontal bar graph. If you're interested in the complete script, you'll find a link at the end of this article.
### How I use Python to map the global spread of COVID-19
If you want to [track the spread of COVID-19 globally][10], you can use Python, pandas, and Plotly to do it. In this article, Anurag Gupta explains how you can use them to clean and visualize raw data.
Using screenshots to help, Anurag shares how to load data into a pandas DataFrame; clean and modify the DataFrame; and visualize the spread in Plotly. The complete code yields a gorgeous graph, and the article ends with a link to download and run it.
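If you want a feel for that workflow without the full script, here is a toy sketch (made-up numbers, not the article's dataset) that loads data into a pandas DataFrame, tidies it, and hands it to Plotly:
```
# Toy stand-in for the workflow: load, clean, and plot with Plotly Express.
import pandas as pd
import plotly.express as px  # pip install plotly

raw = pd.DataFrame({
    "date": ["2020-04-01", "2020-04-02", "2020-04-03"],
    "country": ["India", "India", "India"],
    "confirmed": [1998, 2543, 3082],   # made-up figures for illustration
})

raw["date"] = pd.to_datetime(raw["date"])   # clean: parse dates
tidy = raw.sort_values("date")              # modify: order by date

fig = px.line(tidy, x="date", y="confirmed", color="country",
              title="Confirmed cases (toy data)")
fig.write_html("spread.html")   # open the file in a browser to explore the chart
```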
### 3 ways to use PostgreSQL commands
In this follow-up to his article on getting started with PostgreSQL, Greg Pittman shares how he uses PostgreSQL commands to [keep his grocery shopping list updated][11].
Whether you want to do per-item entry or bring order to complex tables, Greg explains how to create the commands you need. He also shows how to output your lists once you're ready to print them.
No matter how long your shopping list is, PostgreSQL commands—especially the WHERE clause—can bring ease to your life beyond programming.
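To illustrate the idea, here is a short sketch that runs such a WHERE query from Python with psycopg2; the database name and the `shopping_list` table layout are assumptions for the example, not Greg's actual schema:
```
# Filter a hypothetical grocery table with a WHERE clause, cheapest items first.
import psycopg2  # pip install psycopg2-binary

conn = psycopg2.connect(dbname="groceries", user="postgres",
                        password="secret", host="localhost")
cur = conn.cursor()

cur.execute(
    "SELECT item, quantity, price FROM shopping_list "
    "WHERE purchased = %s ORDER BY price",
    (False,),
)
for item, quantity, price in cur.fetchall():
    print(f"{quantity} x {item} at {price}")

cur.close()
conn.close()
```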
### Using Python and GNU Octave to plot data
Python is data science's language du jour, but how can you use it for specific tasks? In this article, Cristiano Fontana shares how to [write a program in Python and GNU Octave][12].
Cristiano walks through each step to read data from a CSV file, interpolate the data with a straight line, and plot the result to an image file. From printing output and reading data to plotting the outcome, Fontana's step-by-step guidelines explain the whole process in Python and GNU Octave.
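The Python half of that workflow fits in a few lines; the sketch below (file name and column layout assumed) reads a CSV, fits a straight line with NumPy, and saves the plot:
```
# Read x,y pairs from a CSV, fit a straight line, and save the result as an image.
import numpy as np
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

data = np.genfromtxt("data.csv", delimiter=",", names=["x", "y"], skip_header=1)
slope, intercept = np.polyfit(data["x"], data["y"], deg=1)

fig, ax = plt.subplots()
ax.scatter(data["x"], data["y"], label="data")
ax.plot(data["x"], slope * data["x"] + intercept,
        label=f"fit: y = {slope:.2f}x + {intercept:.2f}")
ax.legend()
fig.savefig("fit.png")
```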
### Fast data modeling with JavaScript
Want a way to [model data in a few minutes][13]? In this article, Szymon shares how to do it using less than 15 lines of JavaScript code.
It really is that simple: You merely need to create a class and use the defaultsDeep function in the [Lodash][14] JavaScript library. Szymon shows this process using screenshots and code samples.
It keeps your data in one place, avoids code repetition, and is fully customizable. If you want to try out the code in this article, Szymon links to it in CodeSandbox at the end.
### How to process real-time data with Apache tools
We process so much data today that storing data for analysis later might be impossible soon. Teams that handle failure prediction and other context-sensitive data need to get this information in real time, before it hits a database. Luckily, you can do this with Apache tools.
In this article, Simon Crosby explains how Apache Spark—a unified analytics engine—can [process large datasets][15] in real time at scale. For instance, "Spark Streaming breaks data into mini-batches that are each independently analyzed by a Spark model or some other system," he writes.
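To make the mini-batch idea concrete, here is a minimal PySpark Structured Streaming sketch (my own, not Simon's code) that counts rows from Spark's built-in test source in ten-second windows and prints each batch to the console:
```
# Count events per 10-second window from Spark's built-in "rate" test source.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

stream = spark.readStream.format("rate").option("rowsPerSecond", 5).load()

counts = stream.groupBy(F.window("timestamp", "10 seconds")).count()

query = (counts.writeStream
         .outputMode("complete")   # re-emit the full aggregate each micro-batch
         .format("console")
         .start())
query.awaitTermination()
```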
If Apache's not your thing, Simon presents other open source options. Flink, Beam, and Samza—along with Apache-licensed SwimOS and Hazelcast—are just a few of your choices.
### What do you want to know?
What would you like to know about big data and data science? Please share your suggestions for article topics in the comments. And if you have something interesting to share about data science, please consider [writing an article][16] for Opensource.com.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/1/big-data
作者:[Lauren Maffeo][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/lmaffeo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tips_map_guide_ebook_help_troubleshooting_lightbulb_520.png?itok=L0BQHgjr (Looking at a map)
[2]: https://opensource.com/article/20/4/plot-data-python
[3]: https://opensource.com/article/20/5/plausible-analytics
[4]: https://opensource.com/article/20/3/mysql-features
[5]: https://opensource.com/article/20/2/c-data-science
[6]: https://en.wikipedia.org/wiki/C99
[7]: https://en.wikipedia.org/wiki/C%2B%2B11
[8]: https://en.wikipedia.org/wiki/Anscombe%27s_quartet
[9]: https://opensource.com/article/20/4/python-data-covid-19
[10]: https://opensource.com/article/20/4/python-map-covid-19
[11]: https://opensource.com/article/20/2/postgresql-commands
[12]: https://opensource.com/article/20/2/python-gnu-octave-data-science
[13]: https://opensource.com/article/20/5/data-modeling-javascript
[14]: https://en.wikipedia.org/wiki/Lodash
[15]: https://opensource.com/article/20/2/real-time-data-processing
[16]: https://opensource.com/how-submit-article


@@ -1,123 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why KubeEdge is my favorite open source project of 2020)
[#]: via: (https://opensource.com/article/21/1/kubeedge)
[#]: author: (Mike Calizo https://opensource.com/users/mcalizo)
Why KubeEdge is my favorite open source project of 2020
======
KubeEdge is a workload framework for edge computing.
![Tips and gears turning][1]
I believe [edge computing][2], which "brings computation and data storage closer to the location where it is needed to improve response times and save bandwidth," is the next major phase of technology adoption. The widespread use of mobile devices and wearable gadgets and the availability of free city-wide WiFi in some areas create a lot of data that can provide many advantages if used properly. For example, this data can help people fight crime, learn about nearby activities and events, find the best sale price, avoid traffic, and so on.
[Gartner][3] says the rapid growth in mobile application adoption requires an edge infrastructure to use the data from these devices to further progress and improve quality of life. Some of the brightest minds are looking for ways to use the rich data generated from our mobile devices. Take the COVID-19 pandemic, for example. Edge computing can gather data that can help fight the spread of the virus. In the future, mobile devices might warn people about the potential for community infection by providing live updates to their devices based on processing and serving data collected from other devices (using artificial intelligence and machine learning).
In defining an edge-computing architecture, one thing is constant: The platform must be flexible and scalable to deploy a smart or intelligent application on it and in your core data center. As an open source advocate and user, this naturally triggers my interest in using open source technology to harness the power of edge computing.
This is why [KubeEdge][4], which delivers container orchestration to resource-constrained environments, is my favorite open source project of 2020. This extremely lightweight but fully compliant Kubernetes distribution was created to run cloud-native workloads in Internet of Things (IoT) devices at the network's edge.
![Edge computing architecture][5]
(Michael Calizo, [CC BY-SA 4.0][6])
### Challenges of collecting and consuming data
Having a rich data source does not mean anything if the data isn't used properly. This is the dilemma that edge computing is trying to solve. To be able to use data properly, the platform must be flexible enough to handle the demand required to collect, process, and serve data and make smart decisions about whether the data can be processed at the edge or must be processed in a regional or core data center.
The challenges when moving data from the edge location to a core data center include:
* Network reliability
* Security
* Resource constraints
* Autonomy
A Kubernetes platform on the edge, such as KubeEdge, meets these requirements, as it provides the scalability, flexibility, and security needed to perform data collection, processing, and serving. KubeEdge is open source, lightweight, and easy to deploy, has low resource requirements, and provides everything you need.
### KubeEdge's architecture
KubeEdge was [introduced in 2018][7] at KubeCon in Seattle. In 2019, it was accepted as a Cloud Native Computing Foundation (CNCF) sandbox project, which gives it wider public visibility and puts it on the way to becoming a full-fledged CNCF-sanctioned project.
![KubeEdge architecture][8]
(©2019 [The New Stack][9])
In a nutshell, KubeEdge has two main components or parts: Cloud and Edge.
#### Cloud
The Cloud part is where the Kubernetes Master components, the EdgeController, and the CloudHub reside.
* **CloudHub** is a communication interface module in the Cloud component. It acts as a caching mechanism to ensure changes in the Cloud part are sent to the Edge caching mechanism (EdgeHub).
* The **EdgeController** manages the edge nodes and performs reconciliation between edge nodes.
#### Edge
The Edge part is where edge nodes are found. The most important Edge components are:
* **EdgeHub** is a communication interface module to the Cloud component.
* **Edged** does the kubelet's job, including managing pod lifecycles and other related kubelet jobs on the nodes.
* **MetaManager** makes sure that all node-level metadata is persistent.
* **DeviceTwin** is responsible for syncing devices between the Cloud and the Edge components.
* **EventBus** handles the internal edge communications using Message Queuing Telemetry Transport (MQTT).
### Kubernetes for edge computing
Kubernetes has become the gold standard for orchestrating containerized workloads on premises and in public clouds. This is why I think KubeEdge is the perfect solution for using edge computing to reap the benefits of the data that mobile technology generates.
The KubeEdge architecture allows autonomy on an edge computing layer, which solves network latency and velocity problems. This enables you to manage and orchestrate containers in a core data center as well as manage millions of mobile devices through an autonomous edge computing layer. This is possible because of how KubeEdge uses a combination of the message bus (in the Cloud and Edge components) and the Edge component's data store to allow the edge node to be independent. Through caching, data is synchronized with the local datastore every time a handshake happens. Similar principles are applied to edge devices that require persistency.
KubeEdge handles machine-to-machine (M2M) communication differently from other edge platform solutions. KubeEdge uses [Eclipse Mosquitto][10], a popular open source MQTT broker from the Eclipse Foundation. Mosquitto enables WebSocket communication between the edge and the master nodes. Most importantly, Mosquitto allows developers to author custom logic and enable resource-constrained device communication at the edge.
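For readers new to MQTT, the sketch below shows generic publish/subscribe against a Mosquitto broker using the paho-mqtt Python client; the broker address and topic are placeholders, and this is not KubeEdge's internal EventBus API:
```
# Generic MQTT publish/subscribe against a Mosquitto broker (placeholder host/topic).
import paho.mqtt.client as mqtt  # pip install paho-mqtt

BROKER, TOPIC = "localhost", "sensors/temperature"

def on_message(client, userdata, message):
    print(f"{message.topic}: {message.payload.decode()}")

client = mqtt.Client()           # paho-mqtt 1.x constructor
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(TOPIC)
client.publish(TOPIC, "21.5")    # a device reporting a reading
client.loop_forever()
```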
**[Read next: [How to explain edge computing in plain terms][11]]**
Security is a must for M2M communication; it is the only way you can trust sensitive data sent through the web. Currently, KubeEdge supports Secure Production Identity Framework for Everyone ([SPIFFE][12]), ensuring that:
1. Only verifiable nodes can join the edge cluster.
2. Only verifiable workloads can run on the edge nodes.
3. Short-lived certificates are used with rotation policies.
### Where KubeEdge is heading
KubeEdge is in the very early stage of adoption, but it is gaining popularity due to its flexible approach to making edge computing communications secure, reliable, and autonomous so that they won't be affected by network latency.
KubeEdge is a flexible, vendor-neutral, lightweight, heterogeneous edge computing platform. This enables it to support use cases such as data analysis, video analytics, machine learning, and more. Because it is vendor-neutral, KubeEdge allows big cloud players to use it.
These are the reasons why KubeEdge is my favorite project of 2020. There is much more to come, and I expect to see more contributions from the community for wider adoption. I am excited about its future of enabling us to consume available data and use it for the greater good.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/1/kubeedge
作者:[Mike Calizo][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mcalizo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/gears_devops_learn_troubleshooting_lightbulb_tips_520.png?itok=HcN38NOk (Tips and gears turning)
[2]: https://en.wikipedia.org/wiki/Edge_computing
[3]: https://www.gartner.com/smarterwithgartner/what-edge-computing-means-for-infrastructure-and-operations-leaders/
[4]: https://kubeedge.io/en/
[5]: https://opensource.com/sites/default/files/uploads/edgecomputing.png (Edge computing architecture)
[6]: https://creativecommons.org/licenses/by-sa/4.0/
[7]: https://www.youtube.com/watch?v=nWFkxuRvZ7U&feature=youtu.be&t=1755
[8]: https://opensource.com/sites/default/files/uploads/kubeedge-architecture.png (KubeEdge architecture)
[9]: https://thenewstack.io/kubeedge-extends-the-power-of-kubernetes-to-the-edge/
[10]: https://mosquitto.org/
[11]: https://enterprisersproject.com/article/2019/7/edge-computing-explained-plain-english
[12]: https://spiffe.io/


@@ -1,180 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What's new with ownCloud in 2021?)
[#]: via: (https://opensource.com/article/21/2/owncloud)
[#]: author: (Martin Loschwitz https://opensource.com/users/martinloschwitzorg)
What's new with ownCloud in 2021?
======
The open source file sharing and syncing platform gets a total overhaul
based on Go and Vue.js and eliminates the need for a database.
![clouds in the sky with blue pattern][1]
The newest version of ownCloud, [ownCloud Infinite Scale][2] (OCIS), is a complete rewrite of the venerable open source enterprise file sharing and syncing software stack. It features a new backend written in Go, a frontend in Vue.js, and many changes, including eliminating the need for a database. This scalable, modular approach replaces ownCloud's PHP, database, and [POSIX][3] filesystem and promises up to 10 times better performance.
Traditionally, ownCloud was centered around the idea of having a POSIX-compatible filesystem to store data uploaded by users—different versions of the data and trash files, as well as configuration files and logs. By default, an ownCloud user's files were found in a path on their ownCloud instance, like `/var/www` or `/srv/www` (a web server's document root).
Every admin who has maintained an ownCloud instance knows that they grow massive; today, they usually start out much larger than ownCloud was originally designed for. One of the largest ownCloud instances is Australia's Academic and Research Network (AARNet), a company that stores more than 100,000 users' data.
### Let's 'Go' for microservices
ownCloud's developers determined that rewriting the codebase with [Go][4] could bring many advantages over PHP. Even when computer programs appear to be one monolithic piece of code, most are split into different components internally. The web servers that are usually deployed with ownCloud (such as Apache) are an excellent example. Internally, one function handles TCP/IP connections, another function might handle SSL, and yet another piece of code executes the requested PHP files and delivers the results to the end user. All of those events must happen in a certain order.
ownCloud's developers wanted the new version to serve multiple steps concurrently so that events can happen simultaneously. Software capable of handling requests in parallel doesn't have to wait around for one process to finish before the next can begin, so they can deliver results faster. Concurrency is one of the reasons Go is so popular in containerized micro-architecture applications.
With OCIS, ownCloud is adapting to an architecture centered around the principle of microservices. OCIS is split into three tiers: storage, core, and frontend. I'll look at each of these tiers, but the only thing that really matters to people is overall performance. Users don't think about software in tiers; they just want the software to work well and work quickly.
### Tier 1: Storage
The storage available to the system is ownCloud's lowest tier. Performance also brings scalability; large ownCloud instances must be able to cope with the load of thousands of clients and add additional disk space if the existing storage fills up.
Like so many other concepts today, object stores and scalable storage weren't available when ownCloud was designed. Administrators now are used to having more choices, so ownCloud permits outsourcing physical storage device handling to an external solution. While S3-based object storage, Samba-based storage, and POSIX-compatible filesystem options are still supported in OCIS, the preferred way to deploy it is with [EOS][5], CERN's open storage system.
#### EOS to the rescue
EOS is optimized for very low latency when accessing files. It provides disk-based storage to clients through the [XRootD][6] framework but also permits other protocols to access files. ownCloud uses EOS's HTTP protocol extension to talk to the storage solution (using the HTTPS protocol). EOS also allows almost "infinite" scalability. For instance, [CERN's EOS setup][7] includes more than 200PB of disk storage and continues to grow.
By choosing EOS, ownCloud eliminated several shortcomings of traditional storage solutions:
* EOS doesn't have a typical single point of failure.
* All relevant services are run redundantly, including the ability to scale out and add instances of all existing services.
* EOS promises to never run out of actual disk space and comes with built-in redundancy for stored data.
For large environments, ownCloud expects the administrator to deploy an EOS instance with OCIS. In exchange for the burden of maintaining a separate storage system, the admin gets the benefit of not having to worry about the OCIS instance's scalability and performance.
#### What about small setups?
This hints at ownCloud's assumed use case for OCIS: It's no longer a small business all-in-one server nor a small home server. ownCloud's strategy with OCIS targets large data centers. For small or home office setups, EOS is likely to be excessive and overly demanding for a single admin to manage. OCIS serves small setups through the [Reva][8] framework, which enables support for S3, Samba, and even POSIX-compatible filesystems. This is possible because EOS is not hardcoded into OCIS. Reva can't provide the same feature set as EOS, but it accomplishes most of the needs of end users and small installations.
### Tier 2: Core
OCIS's second tier is (due to Go) more of a collection of microservices than a singular core. Each one is responsible for handling a single task in the background (e.g., scanning for viruses). Basically, all of OCIS's functionality results from a specific microservice's work, like authenticating requests using OpenID Connect against an identity provider. In the end, that makes it a simple task to connect existing user directories—such as Active Directory Federation Services (ADFS), Azure AD, or Lightweight Directory Access Protocol (LDAP)—to ownCloud. For those that do not have an existing identity provider, ownCloud ships its own instance, effectively making ownCloud maintain its own user database.
### Tier 3: Frontend
OCIS's third tier, the frontend, is what the vendor calls ownCloud Web. It's a complete rewrite of the user interface and is based on the Vue.js JavaScript framework. Like the OCIS core, the web frontend is written based on microservices principles and hence allows better performance and scalability. The developers also used the opportunity to give the web interface a makeover; compared to previous ownCloud versions, the OCIS web interface looks smaller and slicker.
OCIS's developers did an impressive job complying with modern software design principles. The fundamental problem in building applications according to the microservices approach is making the environment's individual components communicate with each other. APIs can come to the rescue, but that means every micro component must have its own well-defined API interface.
Luckily, there are existing tools to take that burden off developers' shoulders, most notably [gRPC][9]. The idea behind gRPC is to have a set of predefined APIs that trigger actions in one component from within another.
### Other notable design changes
#### Tackling network traffic with Traefik
This new application design brings some challenges to the underlying network. OCIS's developers chose the [Traefik][10] framework to tackle them. Traefik automatically load-balances different instances of microservices, manages automated SSL encryption, and allows additional deployments of firewall rules.
The split between the backend and the frontend adds advantages to OCIS. In fact, the user's actions triggered through ownCloud Web are completely decoupled from the ownCloud engine performing the task in the backend. If a user manually starts a virus check on files stored in ownCloud, they don't have to wait for the check to finish. Instead, the check happens in the background, and the user sees the results after the check is completed. This is the principle of concurrency at work.
#### Extensions as microservices
Like other web services, ownCloud supports extending its capabilities through extensions. OCIS doesn't change this, but it promises to tackle a well-known problem, especially with community apps. Apps of unknown origin can cause trouble in the server, hamper updates, and negatively impact the server's overall performance.
OCIS's new, gRPC-based architecture makes it much easier to create extensions alongside existing microservices. Because the API is predefined by gRPC, developers merely need to create a microservice featuring the desired functionality that can be controlled by gRPC. Traefik, on a per-case basis, ensures that newly deployed add-ons are automatically added to the existing communication mesh.
#### Goodbye, MySQL!
ownCloud's switch to gRPC and microservices eliminates the need for a relational database. Instead, components that need to store metadata do it on their own. Due to Reva and the lack of a MySQL dependency, the complexity of running ownCloud in small environments is reduced considerably—an especially welcome bonus for maintainers of large-scale data centers, but nice for admins of any size installation.
### Getting OCIS up and running
ownCloud published a technical preview of OCIS 1.0 in December 2020, [shipping it][11] as a Docker container and binaries. More examples of getting it running are linked in the deployment section of its [GitHub repository][12].
#### Install with Docker
Getting OCIS up and running with Docker containers is easy, although things can get complicated if you're new to EOS. Docker images for OCIS are available on [Docker Hub][13]. Look for the Latest tag for the current master branch.
Any standard virtual machine from one of the big cloud providers or any entry-level server in a data center that uses a standard Linux distribution should be sufficient, provided the system has a container runtime installed.
Assuming you have Docker or Podman installed, the command to start OCIS is simple:
```
$ docker run --rm -ti -p 9200:9200 owncloud/ocis
```
That's it! OCIS is now waiting at your service on localhost port 9200. Open a web browser and navigate to `http://localhost:9200` to check it out.
The demo accounts and passwords are `einstein:relativity`, `marie:radioactivity`, and `richard:superfluidity`. Admin accounts are `moss:vista` and `admin:admin`. If OCIS runs on a server with a resolvable hostname, it can request an SSL certificate from Let's Encrypt using Traefik.
![OCIS contains no files at first login][14]
(Martin Loschwitz, [CC BY-SA 4.0][15])
![OCIS user management interface][16]
(Martin Loschwitz, [CC BY-SA 4.0][15])
#### Install with binary
As an alternative to Docker, there also is a pre-compiled binary available. Thanks to Go, users can [download the latest binaries][17] from the Master branch.
OCIS's binary edition expects `/var/tmp/ocis` as the default storage location, but you can change that in its configuration. You can start the OCIS server with:
```
$ ./ocis server
```
Here are some of the subcommands available through the `ocis` binary:
* `ocis health` runs a health check. A result greater than 0 indicates an error.
* `ocis list` prints all running OCIS extensions.
* `ocis run foo` starts a particular extension (`foo`, in this example).
* `ocis kill foo` stops a particular extension (`foo`, in this example).
* `ocis --help` prints a help message.
The project's GitHub repository contains full [documentation][11].
### Setting up EOS (it's complicated)
Following ownCloud's recommendations to deploy OCIS with EOS for large environments requires some additional steps. EOS not only adds required hardware and increases the whole environment's complexity, but it's also a slightly bigger task to set it up. CERN provides concise [EOS documentation][18] (linked from its [GitHub repository][19]), and ownCloud offers a [step-by-step guide][20].
In a nutshell, users have to get and start the EOS and OCIS containers; configure LDAP support; and stop the default home, users, and metadata storage services before restarting them with the EOS configuration. Last but not least, the accounts service needs to be set up to work with EOS. All of these steps are `docker-compose` commands documented in the GitHub repository. The Storage Backends page on EOS also provides information on verification, troubleshooting, and a command reference for the built-in EOS shell.
### Weighing risks and rewards
ownCloud Infinite Scale is easy to install, faster than ever before, and better prepared for scalability. The modular design, with microservices and APIs (even for its extensions), looks promising. ownCloud is embracing new technology and developing for the future. If you run ownCloud, or if you've been thinking of trying it, there's never been a better time. Keep in mind that this is still a technology preview and is on a rolling release published every three weeks, so please report any bugs you find.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/2/owncloud
作者:[Martin Loschwitz][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/martinloschwitzorg
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003601_05_mech_osyearbook2016_cloud_cc.png?itok=XSV7yR9e (clouds in the sky with blue pattern)
[2]: https://owncloud.com/infinite-scale/
[3]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[4]: https://golang.org/
[5]: https://eos-web.web.cern.ch/eos-web/
[6]: https://xrootd.slac.stanford.edu/
[7]: https://eos-web.web.cern.ch/eos-web/
[8]: https://reva.link/
[9]: https://en.wikipedia.org/wiki/GRPC
[10]: https://opensource.com/article/20/3/kubernetes-traefik
[11]: https://owncloud.github.io/ocis/getting-started/
[12]: https://github.com/owncloud/ocis
[13]: https://hub.docker.com/r/owncloud/ocis
[14]: https://opensource.com/sites/default/files/uploads/ocis5.png (OCIS contains no files at first login)
[15]: https://creativecommons.org/licenses/by-sa/4.0/
[16]: https://opensource.com/sites/default/files/uploads/ocis2.png (OCIS user management interface)
[17]: https://download.owncloud.com/ocis/ocis/
[18]: https://eos-docs.web.cern.ch/
[19]: https://github.com/cern-eos/eos
[20]: https://owncloud.github.io/ocis/storage-backends/eos/


@@ -1,148 +0,0 @@
[#]: subject: (What's new with Drupal in 2021?)
[#]: via: (https://opensource.com/article/21/4/drupal-updates)
[#]: author: (Shefali Shetty https://opensource.com/users/shefalishetty)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
What's new with Drupal in 2021?
======
Its newest initiatives include decoupled menus, automated updates, and
other usability-focused updates.
![Computer screen with files or windows open][1]
The success of open source projects is largely carried by the pillars of the community and group collaborations. Without putting a stake in the ground to achieve strategic initiatives, an open source project can lose focus. Open source strategic initiatives should aim at solving impactful problems through collaboration involving the project's stakeholders.
### The why and how of Drupal's strategic initiatives
As one of the leading open source projects, [Drupal][2]'s success largely thrives on implementing its proposed strategic initiatives. Drupal's focus on strategic initiatives since Drupal 7 brought huge architectural changes in Drupal 8, 9, and beyond, offering a platform for continuous innovation on the web and an easy upgrade path for end users.
The vision for Drupal's core strategic initiatives is determined by Dries Buytaert, Drupal project lead. These initiatives are backed by community collaboration and lead to significant developments driven by forces like:
* Collaboration with the core maintainers
* Survey data and usability studies
* A vision to build a leading open source digital experience platform
* Relevancy in the market by improving editorial, developer, and customer experiences
* Validation by broader community discussions and collaborations
Once initiatives are **proposed**, they move ahead to the **planned** initiatives stage, where each initiative is nurtured with detailed plans and goals by a strong team of contributors. When an initiative passes through this stage, it moves to the **active** initiatives stage. Here's where the initiatives take structure and come alive.
Some of the most successful Drupal 8 initiatives, like Twig and Bigpipe, did not follow the traditional process. However, following a thoughtfully planned process will avoid a lot of [bike-shedding][3].
### Popular past initiatives
In 2011, at DrupalCon Chicago, Dries announced that Drupal 8 would feature core initiatives that would cause big changes to Drupal's architecture. To support the transition, each initiative would have a few leads involved in decision-making and coordination with Dries. Some popular initiatives included:
* **Configuration Management Initiative (CMI):** This was the first key initiative announced at the 2011 DrupalCon. The idea was to offer site builders more powerful, flexible, and traceable configuration handling in Drupal 8 core. As planned, the Configuration Manager module is now a Drupal 8 core module that allows deploying configurations between different environments easily.
* **Web Services and Context Core Initiative:** This initiative aimed at embracing a modern web and turned Drupal into a first-class REST server with a first-class content management system (CMS) on top of it. The result? Drupal is now a competent REST server providing the ability to manage content entities through HTTP requests. This is part of why Drupal has been the leading CMS for decoupled experiences for several years.
  * **Layout Initiative:** This initiative's focus was on improving and simplifying the site-building experience for non-technical users, like site builders and content authors. This initiative came alive in Drupal 8 by introducing the Layout Discovery API (a Layout plugin API) in v8.4 and the Layout Builder module (a complete layout management solution) in v8.5 core.
* **Media Initiative:** The Media Initiative was proposed to launch a rich, intuitive, easy-to-use, API-based media solution with extensible media functionalities in the core. This resulted in bringing in the Media API (which manages various operations on media entities) and Media Library (a rich digital asset management tool) to Drupal 8 core.
* **Drupal 9 Readiness Initiative:** The focus of this initiative was to get Drupal 9 ready by June 3, 2020, so that Drupal 7 and 8 users had at least 18 months to upgrade. Since Drupal 9 is just a cleaned-up version of the last version of Drupal 8 (8.9), the idea was to update dependencies and remove any deprecated code. And as planned, Drupal 9 was successfully released on June 3, 2020. Drupal 8-compatible modules were ported to Drupal 9 faster than any major version upgrade in Drupal's history, with more than 90% of the top 1,000 modules already ported (and many of the remaining now obsolete).
### The new strategic initiatives
Fast-forward to 2021, where everything is virtual. DrupalCon North America will witness a first-of-its-kind "Initiative Days" event added to the traditional DrupalCon content. Previously, initiatives were proposed during the [Driesnote][4] session, but this time, initiatives are more interactive and detailed. DrupalCon North America 2021 participants can learn about an initiative and participate in building components and contributing back to the project.
#### The Decoupled Menus Initiative
Dries proposed the Decoupled Menus Initiative in his keynote speech during DrupalCon Global 2020. While this initiative's broader intent is to make Drupal the best decoupled CMS, to accomplish the larger goal, the project chose to work on decoupled menus as a first step because menus are used on every project and are not easy to implement in decoupled architectures.
The goals of this initiative are to build APIs, documentation, and examples that can:
* Give JavaScript front-end developers the best way to integrate Drupal-managed menus into their front ends.
* Provide site builders and content editors with an easy-to-use experience to build and update menus independently.
This is because, without web services for decoupled menus in Drupal core, JavaScript developers are often compelled to hard-code menu items. This makes it really hard for a non-developer to edit or remove a menu item without getting a developer involved. The developer needs to make the change, build the JavaScript code, and then deploy it to production. With the Decoupled Menus Initiative, the developer can eliminate all these steps and many lines of code by using Drupal's HTTP APIs and JavaScript-focused resources.
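As a rough illustration of what that buys a front end, here is a hedged Python sketch that fetches menu data over HTTP instead of hard-coding it; the endpoint path and JSON shape are assumptions for the example, not the initiative's final API:
```
# Fetch a menu from a Drupal site over HTTP and render its links.
# The endpoint path and response shape below are illustrative assumptions.
import requests

SITE = "https://example.com"
response = requests.get(f"{SITE}/api/menu_items/main", timeout=10)
response.raise_for_status()

for item in response.json():      # assumed: a list of {"title": ..., "url": ...} objects
    print(f'{item["title"]} -> {item["url"]}')
```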
The bigger idea is to establish patterns and a roadmap that can be adapted to solve other decoupled problems. At DrupalCon 2021, on the [Decoupled Menus Initiative day][5], April 13, you can both learn about where it stands and get involved by building custom menu components and contributing them back to the project.
#### The Easy Out-Of-The-Box Initiative
During DrupalCon 2019 in Amsterdam, CMS users were asked about their perceptions of their CMS. The research found that beginners did not favor Drupal as much as intermediate- and expert-level users. However, it was the opposite for other CMS users; they seemed to like their CMS less over time.
![CMS users' preferences][6]
([Driesnote, DrupalCon Global 2020][7])
Hence, the Easy Out-Of-The-Box Initiative's goal is to make Drupal easy to use, especially for non-technical users and beginners. It is an extension of the great work that has been done for Layouts, Media, and Claro. Layout Builder's low-code design flexibility, Media's robust management of audio-visual content, and Claro's modern and accessible administrative UI combine to empower less-technical users with the power Drupal has under the hood.
This initiative bundles all three of these features into one initiative and aims to provide a delightful user experience. The ease of use can help attract new and novice users to Drupal. On April 14, DrupalCon North America's [Easy Out-Of-The-Box Initiative day][8], the initiative leads will discuss the initiative and its current progress. Learn about how you can contribute to the project by building a better editorial experience.
#### Automated Updates Initiative
The results of a Drupal survey in 2020 revealed that automated updating was the most frequently requested feature. Updating a Drupal site manually can be tedious, expensive, and time-consuming. Luckily, the initiative team has been on this task since 2019, when the first prototype for the Automated Update System was developed as a [contributed module][9]. The focus of the initiative now is to bring this feature into Drupal core. As easy as it may sound, there's a lot more work that needs to go in to:
* Ensure site readiness for a safe update
  * Integrate Composer
* Verify updates with package signing
* Safely apply updates in a way that can be rolled back in case of errors
In its first incarnation, the focus is on Drupal Core patch releases and security updates, but the intent is to support the contributed module ecosystem as well.
The initiative intends to make it easier for small to midsized businesses that sometimes overlook the importance of updating their Drupal site or struggle with the manual process. The [Automated Updates Initiative day][10] is happening on April 15 at DrupalCon North America. You will get an opportunity to know more about this initiative and get involved in the project.
#### Drupal 10 Readiness Initiative
With the release of Drupal 10 not too far away (as early as June 2022), the community is gearing up to welcome a more modern version of Drupal. Drupal now integrates more third-party technologies than ever. Dependencies such as Symfony, jQuery, Guzzle, Composer, CKEditor, and more have their own release cycles that Drupal needs to align with.
![CMS Release Cycles][11]
([Driesnote, DrupalCon 2020][7])
The goal of the initiative is to get Drupal 10 ready, and this involves:
* Releasing Drupal 10 on time
* Getting compatible with the latest versions of the dependencies for security
* Deprecating the dependencies, libraries, modules, and themes that are no longer needed and removing them from Drupal 10 core.
At the [Drupal 10 Readiness Initiative day][12], April 16, you can learn about the tools you'll use to update your websites and modules from Drupal 9 to Drupal 10 efficiently. There are various things you can do to help make Drupal better. Content authors will get an opportunity to peek into the new CKEditor 5, its new features, and improved editing experience.
### Learn more at DrupalCon
Drupal is celebrating its 20th year and its evolution into more relevant, easier-to-adopt open source software. Leading an evolution is close to impossible without taking up strategic initiatives. Although the initial initiatives did not focus on offering great user experiences, today, ease of use and the out-of-the-box experience are among Drupal's most significant goals.
Our ambition is to create software that works for everyone. At every DrupalCon, the intent is to connect with the community that fosters the same belief, learn from each other, and ultimately, build a better Drupal.
[DrupalCon North America][13], hosted by the Drupal Association, is the largest Drupal event of the year. Drupal experts, enthusiasts, and users will unite online April 12-16, 2021, share lessons learned and best practices, and collaborate on creating better, more engaging digital experiences. PHP and JavaScript developers, designers, marketers, and anyone interested in a career in open source will be able to learn, connect, and build by attending DrupalCon.
The [Drupal Association][14] is the nonprofit organization focused on accelerating Drupal, fostering the Drupal community's growth, and supporting the project's vision to create a safe, secure, and open web for everyone. DrupalCon is the primary source of funding for the Drupal Association. Your support and attendance at DrupalCon make our work possible.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/drupal-updates
作者:[Shefali Shetty][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/shefalishetty
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_screen_windows_files.png?itok=kLTeQUbY (Computer screen with files or windows open)
[2]: https://www.drupal.org/
[3]: https://en.wikipedia.org/wiki/Law_of_triviality
[4]: https://events.drupal.org/global2020/program/driesnote
[5]: https://events.drupal.org/northamerica2021/decoupled-menus-day
[6]: https://opensource.com/sites/default/files/uploads/cms_preferences.png (CMS users' preferences)
[7]: https://youtu.be/RIeRpLgI1mM
[8]: https://events.drupal.org/northamerica2021/easy-out-box-day
[9]: http://drupal.org/project/automatic_updates/
[10]: https://events.drupal.org/northamerica2021/automatic-updates-day
[11]: https://opensource.com/sites/default/files/uploads/cms_releasecycles.png (CMS Release Cycles)
[12]: https://events.drupal.org/northamerica2021/drupal-10-readiness-day
[13]: https://events.drupal.org/northamerica2021?utm_source=replyio&utm_medium=email&utm_campaign=DCNA2021-20210318
[14]: https://www.drupal.org/association


@@ -1,145 +0,0 @@
[#]: subject: (Fedora Workstation 34 feature focus: Btrfs transparent compression)
[#]: via: (https://fedoramagazine.org/fedora-workstation-34-feature-focus-btrfs-transparent-compression/)
[#]: author: (nickavem https://fedoramagazine.org/author/nickavem/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Fedora Workstation 34 feature focus: Btrfs transparent compression
======
![][1]
Photo by [Patrick Lindenberg][2] on [Unsplash][3]
The release of Fedora 34 grows ever closer, and with that, some fun new features! A [previous feature focus][4] talked about some changes coming to GNOME version 40. This article is going to go a little further under the hood and talk about data compression and _transparent compression_ in _btrfs_. A term like that may sound scary at first, but less technical users need not be wary. This change is simple to grasp, and will help many Workstation users in several key areas.
### What is transparent compression exactly?
Transparent compression is complex, but at its core it is simple to understand: it makes files take up less space. It is somewhat like a compressed tar file or ZIP file. Transparent compression will dynamically optimize your file system's bits and bytes into a smaller, reversible format. This has many benefits that will be discussed in more depth later on; however, at its core, it makes files smaller. This may leave most computer users with a question: “I can't just read ZIP files. You need to decompress them. Am I going to need to constantly decompress things when I access them?” That is where the “transparent” part of this whole concept comes in.
Transparent compression makes a file smaller, but the final version is indistinguishable from the original by the human viewer. If you have ever worked with audio, video, or photography, you have probably heard of the terms “lossless” and “lossy”. Think of transparent compression like a lossless compressed PNG file. You want the image to look exactly like the original: small enough to be streamed over the web but still readable by a human. Transparent compression works similarly. Your file system will look and behave the same way as before (no ZIP files everywhere, no major speed reductions). Everything will look, feel, and behave the same. However, in the background it is taking up much less disk space. This is because Btrfs will dynamically compress and decompress your files for you. It's “transparent” because even with all this going on, you won't notice the difference.
> You can learn more about transparent compression at <https://btrfs.wiki.kernel.org/index.php/Compression>
### Transparent compression sounds cool, but also too good to be true…
I would be lying if I said transparent compression doesn't slow some things down. It adds extra CPU cycles to pretty much any I/O operation, and can affect performance in certain scenarios. However, Fedora is using the extremely efficient _zstd:1_ algorithm. [Several tests][5] show that relative to the other benefits, the downsides are negligible (as I mentioned in my explanation before). Better disk space usage is the greatest benefit. You may also see reduced write amplification (which can increase the lifespan of SSDs) and enhanced read/write performance.
Btrfs transparent compression is extremely performant, and chances are you won't even notice a difference when it's there.
### I'm convinced! How do I get this working?
In fresh installations of Fedora 34 and its [corresponding beta][6], it should be enabled by default. However, it is also straightforward to enable before and after an upgrade from Fedora 33. You can even enable it in Fedora 33, if you aren't ready to upgrade just yet.
1. (Optional) Back up any important data. The process itself is completely safe, but human error isn't.
2. To truly begin you will be editing your _[fstab][7]_. This file tells your computer what file systems exist where, and how they should be handled. You need to be cautious here, but only a few small changes will be made so don't be intimidated. On an installation of Fedora 33 with the default Btrfs layout the _/etc/fstab_ file will probably look something like this:
```
$ $EDITOR /etc/fstab
UUID=1234 /                       btrfs   subvol=root     0 0
UUID=1234 /boot                   ext4    defaults        1 2
UUID=1234         /boot/efi               vfat    umask=0077,shortname=winnt 0 2
UUID=1234 /home                   btrfs   subvol=home     0 0
```
NOTE: _While this guide builds around the standard partition layout, you may be an advanced enough user to partition things yourself. If so, you are probably also advanced enough to extrapolate the info given here onto your existing system. However, comments on this article are always open for any questions._
Disregard the _/boot_ and _/boot/efi_ directories as they aren't ([currently][8]) compressed. You will be adding the argument _compress=zstd:1_. This tells the computer that it should transparently compress any newly written files if they benefit from it. Add this option in the fourth column, which currently only contains the _subvol_ option for both /home and /:
```
UUID=1234 /                       btrfs   subvol=root,compress=zstd:1     0 0
UUID=1234 /boot                   ext4    defaults        1 2
UUID=1234         /boot/efi               vfat    umask=0077,shortname=winnt 0 2
UUID=1234 /home                   btrfs   subvol=home,compress=zstd:1     0 0
```
Once complete, simply save and exit (on the default _nano_ editor this is CTRL-X, SHIFT-Y, then ENTER).
3. Now that fstab has been edited, tell the computer to read it again. After this, it will make all the changes required:
```
$ sudo mount -o remount / /home/
```
Once you've done this, you officially have transparent compression enabled for all newly written files!
### Recommended: Retroactively compress old files
Chances are you already have many files on your computer. While the previous configuration _will_ compress all newly written files, those old files will not benefit. I recommend taking this next (but optional) step to receive the full benefits of transparent compression.
1. (Optional) Clean out any data you don't need (empty the trash, etc.). This will speed things up. However, it's not required.
2. Time to compress your data. One simple command can do this, but its form is dependent on your system. Fedora Workstation (and any other desktop spins using the DNF package manager) should use:
```
$ sudo btrfs filesystem defrag -czstd -rv / /home/
```
Fedora Silverblue users should use:
```
$ sudo btrfs filesystem defrag -czstd -rv / /var/home/
```
Silverblue users may take note of the immutability of some parts of the file system as described [here][9] as well as this [Bugzilla entry][10].
NOTE: _You may receive several warnings that say something like “Cannot compress: permission denied.” This is because some files, especially on Silverblue systems, cannot easily be modified by the user. This is a tiny subset of files. They will most likely compress on their own, in time, as the system upgrades._
Compression can take anywhere from a few minutes to an hour depending on how much data you have. Luckily, since all new writes are compressed, you can continue working while this process completes. Just remember it may partially slow down your work at hand and/or the process itself depending on your hardware.
Once this command completes you are officially fully compressed!
### How much file space is used, and how big are my files?
Due to the nature of transparent compression, utilities like _du_ will only report the exact, uncompressed file sizes, not the actual space the files take up on the disk. The [_compsize_][11] utility is the best way to see how much space your files are actually taking up on disk. An example of a _compsize_ command is:
```
$ sudo compsize -x / /home/
```
This example provides exact information on how the two locations, / and /home/, are currently and transparently compressed. If it is not already installed, this utility is available in the Fedora Linux repository.
### Conclusion
Transparent compression is a small but powerful change. It should benefit everyone from developers to sysadmins, from writers to artists, from hobbyists to gamers. It is one of many changes in Fedora 34. These changes will allow us to take further advantage of our hardware and of the powerful Fedora Linux operating system. I have only just touched the surface here. I encourage those of you with interest to begin at the [Fedora Project Wiki][12] and [Btrfs Wiki][13] to learn more!
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/fedora-workstation-34-feature-focus-btrfs-transparent-compression/
作者:[nickavem][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/nickavem/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/04/btrfs_compression-1-816x345.jpg
[2]: https://unsplash.com/@heapdump?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]: https://unsplash.com/s/photos/hdd-compare?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[4]: https://fedoramagazine.org/fedora-34-feature-focus-updated-activities-overview/
[5]: https://fedoraproject.org/wiki/Changes/BtrfsTransparentCompression#Simple_Analysis_of_btrfs_zstd_compression_level
[6]: https://fedoramagazine.org/announcing-fedora-34-beta/
[7]: https://en.wikipedia.org/wiki/Fstab
[8]: https://fedoraproject.org/wiki/Changes/BtrfsTransparentCompression#Q:_Will_.2Fboot_be_compressed.3F
[9]: https://docs.fedoraproject.org/en-US/fedora-silverblue/technical-information/#filesystem-layout
[10]: https://bugzilla.redhat.com/show_bug.cgi?id=1943850
[11]: https://github.com/kilobyte/compsize
[12]: https://fedoraproject.org/wiki/Changes/BtrfsTransparentCompression
[13]: https://btrfs.wiki.kernel.org/index.php/Compression


@@ -1,139 +0,0 @@
[#]: subject: (6 exciting new ShellHub features to look for in 2021)
[#]: via: (https://opensource.com/article/21/5/shellhub-new-features)
[#]: author: (Domarys https://opensource.com/users/domarys)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
6 exciting new ShellHub features to look for in 2021
======
ShellHub's community has been busy adding new features to the open
source remote-access tool.
![People work on a computer server with devices][1]
ShellHub is a cloud server that allows universal access to your networked devices from any external network. Using it prevents being blocked by firewalls or overly complex networks because [ShellHub][2] uses the HTTP protocol to encapsulate the SSH protocol. This transport layer allows seamless use on most networks, as it is commonly available and accepted by most companies' firewall rules and policies.
Best of all, ShellHub is open source (released under the Apache 2.0 license), facilitates developers' and programmers' remote tasks, and makes access to Linux devices possible on any hardware architecture.
For a full demo, please read my previous article, [_Bypass your Linux firewall with SSH over HTTP_][3]. In this follow-up article, I'll cover some of the developments and additions in the [0.7.0 release][4].
ShellHub offers a safe and quick way to access your devices from anywhere. It has a robust [community][5], whose contributions are essential to the tool's growth, new features, and improvements. I'll describe some of the updates that are (or will soon be) in the [tool's code][6] below.
### Namespace
The namespace enables you to create a set of devices to share with other ShellHub users. You can put as many devices as you want in a namespace, but a device registered in one namespace cannot belong to another.
You can access your namespace by using the top-right button on the Dashboard. There, you will find the namespace Tenant ID, which is used to register a device, and any other namespaces you have created. You can also create a new namespace and access namespace settings.
You can rename, delete, and invite other users to your namespace. Namespace user permissions work based on privilege, depending on user rank. (See [Privileges][7] for more information.)
![Namespace][8]
(Domarys, [CC BY-SA 4.0][9])
This feature is available in all editions. The difference is that in the open source version, you must use the terminal to issue commands:
```
./bin/add-namespace <namespace> <owner>
```
![Running namespace commands in the terminal][10]
(Domarys, [CC BY-SA 4.0][9])
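For example, a hypothetical invocation with made-up values for the namespace and its owner might look like this:
```
# create a namespace named "dev-team" owned by the user "admin" (example values)
./bin/add-namespace dev-team admin
```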
### Privileges
Privileges are an organization-level mode for authorizing actions in ShellHub. This ensures that only the owner has permission to perform potentially dangerous actions.
There are two privilege ranks:
* **ADM:** Only the namespace owner has administrator privileges to run an action. The admin can accept and reject devices; view and delete session recordings; create, change, or delete firewall rules; and invite users to the namespace.
* **USER:** A user must be invited by the owner. A user can access devices and any information in the namespace enabled by the owner but cannot remove devices, change firewall rules, or watch session recordings.
### Session recordings
This new feature records all actions in a ShellHub connection executed by a user or owner. Session recordings are available in the Dashboard in ShellHub Cloud and Enterprise versions.
![Session recordings][11]
(Domarys, [CC BY-SA 4.0][9])
The session recording feature is on by default. If you are the owner, you can change this in a namespace's Settings.
![Session recording settings][12]
(Domarys, [CC BY-SA 4.0][9])
Each session's page has details such as hostname, user, authentication, IP address, and session begin and end time. The device's user ID (UID) is available in Details.
### Firewall rules
![Firewall rules][13]
(Domarys, [CC BY-SA 4.0][9])
Firewall rules define network traffic permissions (or blocks) to ShellHub devices. This feature is available in the Cloud and Enterprise editions. These rules allow or prevent a device's connection to defined IPs, users, or hostnames. Rules can be set only by a namespace owner.
In addition to defining the rules, ShellHub enables an owner to set priorities, which block sets of locations or permit access to a location in a blocked set if necessary.
### Admin console
![Admin console][14]
(Domarys, [CC BY-SA 4.0][9])
ShellHub developed the admin console to facilitate user support. It offers an easy and clear interface for administrators of large teams to manage and check the activities executed in the ShellHub server. It's available in the Enterprise edition.
### Automatic access with public keys
![ShellHub public key][15]
(Domarys, [CC BY-SA 4.0][9])
Automatic connection using public keys is a new feature that will be released soon. It aims to simplify access for users with many different devices and credentials because using a public key makes access quicker and more secure.
The ShellHub server keeps public key information safe and uses the key only for logging into devices. It also does not have access to users' private keys or other sensitive information.
Automatic connection using public keys is one of the newest features added to ShellHub.
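ShellHub works with standard SSH key pairs, so if you do not already have one, you can generate it with the usual OpenSSH tooling and then register the public part in ShellHub. The key type and comment below are only examples; check ShellHub's documentation for the currently supported key types:
```
# generate a key pair; the private key stays on your machine
$ ssh-keygen -t rsa -b 4096 -C "shellhub-access"
# this is the public key you register in ShellHub
$ cat ~/.ssh/id_rsa.pub
```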
### Learn more
Stay up to date on this and other new features and updates on OS Systems' [Twitter][16], [LinkedIn][17], [GitHub][18], or [website][19].
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/5/shellhub-new-features
作者:[Domarys][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/domarys
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_linux11x_cc.png?itok=XMDOouJR (People work on a computer server with devices)
[2]: https://www.shellhub.io/
[3]: https://opensource.com/article/20/7/linux-shellhub
[4]: https://github.com/shellhub-io/shellhub/releases/tag/v0.7.0
[5]: https://www.shellhub.io/community
[6]: https://github.com/shellhub-io
[7]: tmp.jW5CEfWWTN#Privileges
[8]: https://opensource.com/sites/default/files/uploads/shellhub_3namespace.png (Namespace)
[9]: https://creativecommons.org/licenses/by-sa/4.0/
[10]: https://opensource.com/sites/default/files/uploads/shellhub_2terminal.png (Running namespace commands in the terminal)
[11]: https://opensource.com/sites/default/files/uploads/shellhub_1sessionrecordings.png (Session recordings)
[12]: https://opensource.com/sites/default/files/uploads/shellhub_6sessionrecording.png (Session recording settings)
[13]: https://opensource.com/sites/default/files/uploads/shellhub_5firewallrules.png (Firewall rules)
[14]: https://opensource.com/sites/default/files/uploads/shellhub_4admin.png (Admin console)
[15]: https://opensource.com/sites/default/files/pictures/public_key.png (ShellHub public key)
[16]: https://twitter.com/os_systems
[17]: https://www.linkedin.com/company/ossystems/
[18]: https://www.facebook.com/ossystems
[19]: https://www.ossystems.com.br/

View File

@ -1,168 +0,0 @@
[#]: subject: (Things to do after installing Fedora 34 Workstation)
[#]: via: (https://fedoramagazine.org/things-to-do-after-installing-fedora-34-workstation/)
[#]: author: (Arman Arisman https://fedoramagazine.org/author/armanwu/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Things to do after installing Fedora 34 Workstation
======
![][1]
Using a new operating system can be a lot of fun, but it can also be confusing at first, especially for new users who are not very familiar with computer systems. For those of you who are using Fedora for the first time and have successfully installed Fedora 34 Workstation, this article can serve as an initial guide. I'm sure you want to feel more at home with your fresh new Fedora. Here are several things to do after installing your Fedora 34 Workstation.
### System update
You may think that because you just installed the most recent version of Fedora 34 Workstation, your Fedora is already up to date. But you still have to make sure that your Fedora Linux has all the latest packages, because even after a new release, many packages continue to receive improvements. You can use the terminal or GNOME _Software_ to run the update.
If you want to update via the terminal, then you just have to open a terminal and type the following command.
```
$ sudo dnf update
```
But if you want to do it with GNOME _Software_, open the application by selecting _Activities_, then locating and selecting the _Software_ item in the taskbar at the bottom of the screen. When it opens, select the _Updates_ tab at the top. After that, just click the _Download_ button. An update may require a restart afterwards, and _Software_ will tell you if that is the case.
![GNOME Software location in the taskbar at the bottom of the screen][2]
_note: another way to select Activities is to press the super key on the keyboard. The super key is the button with the Windows logo on most keyboards._
![Gnome Software showing Updates][3]
### System settings
You can view and configure your device's system settings through _Settings_. These include items like network, keyboard, mouse, sound, displays, etc. You can run it by pressing the _super_ key on your keyboard, clicking _Show Applications_ in the taskbar at the bottom of the window, then selecting _Settings_. Configure it according to your needs.
![Settings menu showing Network selected][4]
### Additional repositories
Some packages you need may not be available in the official Fedora repository. You can add software repositories with the _dnf config-manager_ command. Please be careful if you want to add repositories other than the official Fedora repository.
The first thing to do is define the new repository by adding a file ending in _.repo_ to the _/etc/yum.repos.d/_ directory. The _dnf config-manager_ command can create that file for you. Run the following command in the terminal.
```
$ sudo dnf config-manager --add-repo /etc/yum.repos.d/file_name.repo
```
_note: replace file_name with the repository file name._
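In practice, _dnf config-manager --add-repo_ is more often pointed at a remote repository URL or a remote _.repo_ file; _dnf_ then saves a matching file under _/etc/yum.repos.d/_ for you. Here is a sketch with a placeholder URL:
```
$ sudo dnf config-manager --add-repo https://example.com/repos/example.repo
```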
Or you can use GNOME _Software_. Open it as described in the System Update section above. Now select the “hamburger” icon (three horizontal lines) on the top right and select _Software Repositories_. You can add the repository from there using the _Install_ option.
![GNOME Software showing location of Software Repositories menu][5]
Most people will enable RPM Fusion. It's a third-party repository. You can read about third-party repositories in [Fedora Docs][6].
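As a concrete example, the RPM Fusion project documents enabling its free and nonfree repositories by installing its release packages, roughly as shown below (check the RPM Fusion site for the current instructions before running this):
```
$ sudo dnf install \
    https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm \
    https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
```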
### Fastest mirror and Delta RPM
There are several things you can do to speed up your download times when using DNF to update your system. You can enable Fastest Mirror and Delta RPM. Edit _/etc/dnf/dnf.conf_ using a text editor, such as gedit or nano. Here's an example of opening the _dnf.conf_ file with _nano_ in a terminal.
```
$ sudo nano /etc/dnf/dnf.conf
```
Append the following lines to your _dnf.conf_ file.
```
fastestmirror=true
deltarpm=true
```
Press _ctrl+o_ to save the file, then _ctrl+x_ to exit _nano_.
### Multimedia plugins for audio and video
You may need some plugins for your multimedia needs. You can install multimedia plugins by running this command in a terminal.
```
$ sudo dnf group upgrade --with-optional Multimedia
```
Please pay attention to the regulations and standards in your country regarding multimedia codecs. You can read about this in [Fedora Docs][7].
### Tweaks and Extensions
Fedora 34 Workstation comes with GNOME as the default desktop environment. You can configure GNOME in various ways using Tweaks and Extensions, such as changing themes, changing the buttons shown in window title bars, and much more.
Open your terminal and run this command to install GNOME Tweaks.
```
$ sudo dnf install gnome-tweaks
```
And run this command to install GNOME Extensions.
```
$ sudo dnf install gnome-extensions-app
```
Open them the same way you searched for _GNOME Software_ above: select _Activities_ or press the _super_ key, then select _Show Applications_ to see the list of installed applications. You will find both applications in that list, and you can do the same thing any time you want to find an installed application. Then configure _Tweaks_ and _Extensions_ according to your preferences.
![GNOME Tweaks][8]
![GNOME Extensions][9]
### Install applications
When you first install Fedora, you will find several applications already installed. You can add other applications according to your needs with GNOME Software. Open GNOME Software as described earlier, find the application you want, select it, and then press the Install button.
![GNOME Software][10]
Or you can do it from the terminal. Here are the commands to find, install, and remove applications.
Command to search for available applications:
```
$ sudo dnf search application_name
```
The command to install the application:
```
$ sudo dnf install application_name
```
The command to remove an installed application:
```
$ sudo dnf remove application_name
```
_note: replace application_name with the name of the application._
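For instance, to look for and then install the GIMP image editor (used here purely as an illustration; any package in the repositories works the same way):
```
$ sudo dnf search gimp
$ sudo dnf install gimp
```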
You can search for installed applications by viewing them in _Show Applications_. Select _Activities_ or press the _super_ key and select _Show Applications_. Then you can select the application you want to run from the list.
![Installed application list][11]
### Conclusion
Fedora Workstation is an easy-to-use and customizable operating system. There are many things you can do after installing Fedora 34 Workstation according to your needs. This article is just a basic guide for your first steps before you have more fun with your Fedora Linux system. You can read [Fedora Docs][12] for more detailed information. I hope you enjoy using Fedora Linux.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/things-to-do-after-installing-fedora-34-workstation/
作者:[Arman Arisman][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/armanwu/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/07/FedoraMagz-Cover_ThingsToDo.png
[2]: https://fedoramagazine.org/wp-content/uploads/2021/07/GNOME_Software_location-1024x576.png
[3]: https://fedoramagazine.org/wp-content/uploads/2021/07/Software_Updates-1024x735.png
[4]: https://fedoramagazine.org/wp-content/uploads/2021/07/Settings-1024x764.png
[5]: https://fedoramagazine.org/wp-content/uploads/2021/07/Software_Hamburger_-1-1024x685.png
[6]: https://docs.fedoraproject.org/en-US/quick-docs/setup_rpmfusion/
[7]: https://docs.fedoraproject.org/en-US/quick-docs/assembly_installing-plugins-for-playing-movies-and-music/
[8]: https://fedoramagazine.org/wp-content/uploads/2021/07/Tweaks-1024x733.png
[9]: https://fedoramagazine.org/wp-content/uploads/2021/07/GNOME_Extensions.png
[10]: https://fedoramagazine.org/wp-content/uploads/2021/07/GNOME_Software-1-1024x687.png
[11]: https://fedoramagazine.org/wp-content/uploads/2021/07/Show_Application-1024x576.png
[12]: https://docs.fedoraproject.org/en-US/fedora/f34/

View File

@ -1,151 +0,0 @@
[#]: subject: (How to Install Fedora 34 Workstation [Step by Step])
[#]: via: (https://www.debugpoint.com/2021/07/install-fedora-34-workstation/)
[#]: author: (Arindam https://www.debugpoint.com/author/admin1/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
How to Install Fedora 34 Workstation [Step by Step]
======
In this absolute beginner's guide, we explain the steps required to
install the Fedora 34 Workstation edition (GNOME desktop environment).
This page covers the following topics
* [Fedora 34 Minimum system requirements][1]
* [Pre-Steps before installation][2]
* [Download and create LIVE USB][3]
* [Install Fedora 34][4]
[Fedora][5] is a Linux-based distribution that offers desktop and server flavors. It is a free and open-source Linux distribution sponsored by Red Hat and developed by community contributors. It works as an upstream distribution for Red Hat Enterprise Linux. Hence, with Fedora you get the latest Linux kernel and packages with cutting-edge features and applications.
Fedora desktop edition offers almost all popular desktop environments. A quick list of the desktop environments with official Fedora flavors is below.
* KDE Plasma
* GNOME
* Xfce
* LXDE
* LXQt
* i3 WM
* MATE
* Cinnamon (via repo)
This is why it is very popular, and many users choose Fedora over Ubuntu because you get a polished system with many packages pre-installed. Experienced users mostly prefer Fedora, but it is absolutely useful for beginners as well. If you are an Ubuntu user and want to jump ship to Fedora, you may want to check out our [Ubuntu to Fedora migration guide][6].
Fedora 34, which we are going to install in this post, brings some interesting changes: Linux kernel 5.11, Zstd compression when Btrfs is used, PipeWire as the default sound daemon, the GNOME 40 desktop, KDE Plasma 5.21, and many Wayland-related updates. For detailed coverage, visit our [Fedora 34 topics][7] to learn more.
### Fedora 34 Workstation system requirements
These are the minimum system requirements for installing Fedora in general.
* 2 GHz dual-core processor
* 4 GiB RAM (system memory)
* 20 GB of hard-drive space
* VGA capable of 1024×768 screen resolution
* Either a CD/DVD drive or a USB port for the installer media
* Internet access is not mandatory for installation
### Pre-Steps Before Installation
Before you start the installation, make sure of the following:
* If you are installing on a physical system, decide which partition you want to install Fedora to.
* If you are planning to dual boot with Windows or any other Linux system, make sure you decide which partition to install to as well.
* Take a backup of your personal data.
* Keep a LIVE USB with [Boot Repair][8] handy, in case something goes wrong.
SEE ALSO: [How to Upgrade to Fedora 34 from Fedora 33 Workstation (GUI and CLI Method)][9]
### Download and prepare LIVE USB
Download the Workstation edition from the link below. It points to torrents of the .ISO files, covering the Workstation edition and all of the other [Fedora 34 Spins][10] as well.
[fedora torrents][11]
After the download is complete, create a LIVE USB using any utility such as [Etcher][12]. Plug the USB drive into your system and change the BIOS settings to boot from it.
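If you prefer the command line to a GUI utility, _dd_ can also write the image. The ISO filename and target device below are placeholders only, so triple-check the device name (for example with _lsblk_) before running the command, because the target drive is wiped:
```
$ lsblk                                   # identify your USB stick, e.g. /dev/sdX
$ sudo dd if=Fedora-Workstation-Live-x86_64-34.iso of=/dev/sdX bs=8M status=progress oflag=direct
```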
### Steps to Install Fedora 34
1\. The LIVE Fedora installation media boots up to a LIVE desktop that gives you the option to install to a physical medium.
![Install to Hard Driver Option in LIVE Media][13]
2\. In the next screen, select your language and continue. Then click on Installation Destination to select which partition you would like to install to.
![Select Language][14]
![Installation Destination Select][15]
3\. In the installation destination screen, select the disk and choose Storage Configuration: Custom. Then click Done at the top.
![Select Disk][16]
4\. In the partitioning screen, choose your sizes for the root and boot partitions. For example, keep /boot at around 1 GB and assign the rest to the root (/) partition.
5\. For Fedora 34, it is better to use Btrfs for the root partition for better performance. Do not forget to set the mount point of the root partition to /.
![root partition][17]
![boot partition][18]
6\. When you are satisfied with your new file system layout, click Done. In the next screen, carefully verify the summary of changes that will be made to your disk, because these changes cannot be reverted once applied. Click Accept Changes when you are ready.
![Summary of Changes][19]
7\. Wait for the installation to complete. Once it is finished, click on Finish Installation and reboot the LIVE system.
![Installation complete][20]
So, that's about it. If all goes well, after the reboot you should be greeted by the Fedora 34 Workstation edition desktop with GNOME 40.
![Fedora 34 Desktop][21]
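If you want to double-check that the root filesystem really ended up on Btrfs as configured above, a quick terminal command is enough; _findmnt_ ships with util-linux and is available by default:
```
$ findmnt -n -o FSTYPE /
btrfs
```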
I hope this basic guide to installing Fedora 34 helps beginners and advanced users alike. If you run into a problem, such as with dual booting or any other installation error, let me know in the comment box below.
* * *
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/2021/07/install-fedora-34-workstation/
作者:[Arindam][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lujun9972
[1]: tmp.cwqzC2PPCj#min-requirement
[2]: tmp.cwqzC2PPCj#pre-steps
[3]: tmp.cwqzC2PPCj#download-create-USB
[4]: tmp.cwqzC2PPCj#install-fedora-34
[5]: https://getfedora.org/
[6]: https://www.debugpoint.com/2021/04/migrate-to-fedora-from-ubuntu/
[7]: https://www.debugpoint.com/tag/fedora-34
[8]: https://sourceforge.net/p/boot-repair/home/Home/
[9]: https://www.debugpoint.com/2021/04/upgrade-fedora-34-from-fedora-33/
[10]: https://www.debugpoint.com/2021/04/fedora-34-desktop-spins/
[11]: https://torrent.fedoraproject.org/
[12]: https://www.debugpoint.com/2021/01/etcher-bootable-usb-linux/
[13]: https://www.debugpoint.com/blog/wp-content/uploads/2021/07/Install-to-Hard-Driver-Option-in-LIVE-Media.jpeg
[14]: https://www.debugpoint.com/blog/wp-content/uploads/2021/07/Select-Language.jpeg
[15]: https://www.debugpoint.com/blog/wp-content/uploads/2021/07/Installation-Destination-Select.jpeg
[16]: https://www.debugpoint.com/blog/wp-content/uploads/2021/07/Select-Disk.jpeg
[17]: https://www.debugpoint.com/blog/wp-content/uploads/2021/07/root-partition-1024x532.jpeg
[18]: https://www.debugpoint.com/blog/wp-content/uploads/2021/07/boot-partition.jpeg
[19]: https://www.debugpoint.com/blog/wp-content/uploads/2021/07/Summary-of-Changes.jpeg
[20]: https://www.debugpoint.com/blog/wp-content/uploads/2021/07/Installation-complete-1024x526.jpeg
[21]: https://www.debugpoint.com/blog/wp-content/uploads/2021/04/Fedora-34-Desktop--1024x529.jpg

View File

@ -1,67 +0,0 @@
[#]: subject: "Fedora Linux earns recognition from the Digital Public Goods Alliance as a DPG!"
[#]: via: "https://fedoramagazine.org/fedora-linux-earns-recognition-from-the-digital-public-goods-alliance-as-a-dpg/"
[#]: author: "Justin W. FloryAlberto Rodriguez SanchezMatthew Miller https://fedoramagazine.org/author/jflory7/https://fedoramagazine.org/author/bt0dotninja/https://fedoramagazine.org/author/mattdm/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Fedora Linux earns recognition from the Digital Public Goods Alliance as a DPG!
======
![][1]
In the Fedora Project community, [we look at open source][2] as not only code that can change how we interact with computers, but also as a way for us to positively influence and shape the future. The more hands that help shape a project, the more ideas, viewpoints and experiences the project represents — that's truly what the spirit of open source is built from.
But its not just the global contributors to the Fedora Project who feel this way. August 2021 saw Fedora Linux recognized as a digital public good by the [Digital Public Goods Alliance (DPGA)][3], a significant achievement and a testament to the openness and inclusivity of the project.
We know that digital technologies can save lives, improve the well-being of billions, and contribute to a more sustainable future. We also know that in tackling those challenges, Open Source is uniquely positioned in the world of digital solutions by inherently welcoming different ideas and perspectives critical to lasting success.
But, we also know that many regions and countries around the world do not have access to those technologies. Open Source technologies can be the difference between achieving the [Sustainable Development Goals][4] (SDGs) by 2030 or missing the targets. Projects like Fedora Linux, which [represent much more than code itself][2], are the game-changers we need. Already, individuals, organizations, governments, and Open Source communities, including the Fedora Projects own, are working to make sure the potential of Open Source is realized and equipped to take on the monumental challenges being faced.
The Digital Public Goods Alliance is a multi-stakeholder initiative, endorsed by the United Nations Secretary-General. It works to accelerate the attainment of the SDGs in low- and middle-income countries by facilitating the discovery, development, use of, and investment in digital public goods (DPGs). DPGs are Open Source software, open data, open AI models, open standards, and open content that adhere to privacy and other applicable best practices, and do no harm. This definition, drawn from the UN Secretary-General's [2020 Roadmap for Digital Cooperation][5], serves as the foundation of the DPG Registry, an online repository for DPGs.
The DPG Registry was created to help increase the likelihood of discovery, and therefore use of, DPGs. Today, we are excited to share that Fedora Linux was added to the [DPG Registry][6]! Recognition as a DPG increases the visibility, support for, and prominence of open projects that have the potential to tackle global challenges. To become a digital public good, all projects are required to meet the [DPG Standard][7] to ensure they truly encapsulate Open Source principles. 
As an Open Source leader, Fedora Linux can make achieving the SDGs a reality through its role as a convener of many Open Source “upstream” communities. In addition to providing a fully-featured desktop, server, cloud, and container operating system, it also acts as a platform where different Open Source software and work come together. Fedora Linux by default only ships its releases with purely Open Source software packages and components. While third-party repositories are available for use with proprietary packages or closed components, Fedora Linux is a complete offering with some of the greatest innovations that Open Source has to offer. Collectively this means Fedora Linux can act as a gateway, empowering the creation of more and better solutions to better tackle the challenges they are trying to address.
The DPG designation also aligns with Fedora's fundamental foundations:
* **Freedom**: Fedora Linux was built as Free and Open Source Software from the beginning. Fedora Linux only ships and distributes Free Software from its default repositories. Fedora Linux already uses widely-accepted Open Source licenses.
* **Friends**: Fedora has an international community of hundreds spread across six continents. The Fedora Community is strong and well-positioned to scale as the upstream distribution of the world's most widely used enterprise flavor of Linux.
* **Features**: Fedora consistently delivers on innovation and features in Open Source. Fedora Linux 34 was a record-breaking release, with 63 new approved Changes in the last release.
* **First**: Fedora leverages its unique position and resources in the Free Software world to deliver on innovation. New ideas and features are tried out in the Fedora Community to discover what works, and what doesn't. We have many stories of both.
![][8]
Recognition as a digital public good is an honor and a great moment for us, as a community, to reaffirm our commitment to contributing to and growing the Open Source ecosystem.
This is a proud moment for each Fedora Community member because we are making a difference. Our work matters and has value in creating an equitable world; this is a fantastic and important feeling.
If you have an interest in learning more about the Digital Public Goods Alliance please reach out to [hello@digitalpublicgoods.net][9].
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/fedora-linux-earns-recognition-from-the-digital-public-goods-alliance-as-a-dpg/
作者:[Justin W. FloryAlberto Rodriguez SanchezMatthew Miller][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/jflory7/https://fedoramagazine.org/author/bt0dotninja/https://fedoramagazine.org/author/mattdm/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/09/DPG_recognition-816x345.jpg
[2]: https://docs.fedoraproject.org/en-US/project/
[3]: https://digitalpublicgoods.net/frequently-asked-questions/
[4]: https://sdgs.un.org/goals
[5]: https://www.un.org/en/content/digital-cooperation-roadmap/
[6]: http://digitalpublicgoods.net/registry/
[7]: http://digitalpublicgoods.net/standard/
[8]: https://lh6.googleusercontent.com/lzxUQ45O79-kK_LHsokEChsfMCyAz4fpTx1zEaj6sN_-IiJp5AVqpsISdcxvc8gFCU-HBv43lylwkqjItSm1X1rG_sl9is1ou9QbIUpJTGyzr4fQKWm_QujF55Uyi-hRrta1M9qB=s0
[9]: mailto:hello@digitalpublicgoods.net

View File

@ -1,104 +0,0 @@
[#]: subject: "10 open source career lessons from 2021"
[#]: via: "https://opensource.com/article/21/12/open-source-career-lessons"
[#]: author: "Lauren Maffeo https://opensource.com/users/lmaffeo"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
10 open source career lessons from 2021
======
Whether you're looking for a great open source note-taking app or want
inspiration for non-coding roles in tech, Opensource.com's authors
covered it all.
![Working from home at a laptop][1]
The ongoing pandemic kept 2021 far from normal, yet there were glimmers of hope through the uncertainty. In-person conferences slowly resumed, if smaller and with more masks than in years past. And the asynchronous essence of open source allowed many people to keep working on passion projects while growing their careers.
Accordingly, readers loved the past year's posts on all things work and career. Below, we've shared 10 of our most popular articles on these subjects in 2021. Whether you're looking for a great open source note-taking app or want inspiration for non-coding roles in tech, Opensource.com's authors covered it all.
### 3 open source tools that make Linux the ideal workstation
Is there anything Linux can't do? Seth Kenlon doesn't think so. In our [most popular career article][2] this year, Seth shares three office applications that run on Linux.
From macro support on LibreOffice to the spreadsheets in Gnumeric, open source enthusiasts looking for new tools can look outside the box. Many options in this article are minimalist, and Seth says that's a good thing. Big office suites can't solve every problem. Stepping back to consider your true needs and finding tools that meet them is the best choice.
### Use Joplin to find your notes faster
No one beats Kevin Sonney when it comes to productivity tips. His annual productivity series [had a twist][3] in 2021: Instead of covering specific apps, Sonney shared strategies and all-in-one solutions to help open sourcers work smarter.
A digital notes enthusiast, Kevin uses this piece to share why he chose Joplin to keep them all organized. Its search functionality, ability to sync between devices, and use of Markdown are just a few reasons why this note-taking app rules them all.
### 5 open source alternatives to Zoom
Zoom fatigue reached new heights in open source this year: Seth Kenlon's article on open source Zoom alternatives was [a 2021 favorite][4] across several categories.
Kenlon wrote this piece after attending a conference run on open source video conferencing software. If you want to use something other than Zoom, you have options. There's an open source tool for every unique need, from familiar favorites like Signal's group video call feature to solutions for classroom and conference presentations like BigBlueButton.
### Open source tools and tips for staying focused
Kevin Sonney's 2021 productivity series hit a pain point with readers. His tips to stay focused using open source tools [caught the eyes][5] of open sourcers who (like me) struggled to keep our attention on the tasks at hand this year.
This piece highlights Mater, a taskbar app that lets users set 25-minute timers before taking a break. It's an open source take on the Pomodoro Technique that helps users do deep work before taking strategic breaks. Kevin finds that using Mater for productivity sprints, combined with a buddy, helps keep him accountable. That's a lesson we can all take into 2022.
### My open source internship during a pandemic
Nearly two years into the pandemic, many people have started new jobs and internships remotely. In May 2020, Gerrod Ubben found his junior year of college cut short and learned that his summer internship at Red Hat would happen remotely. [This piece][6] shares his experience on Red Hat's Pulp team.
Gerrod did a lot of work updating Pulp's Python plugin, thanks to mentorship from several Red Hat engineers. He also worked with the Bandersnatch community to broaden their code so the Bandersnatch API could mirror Python content from sources including Pulp. If you've doubted what fully remote interns can do, this piece will put those doubts to rest.
### 4 tech jobs for people who don't code
Nithya Ruff is one of my open source heroes because she advocates for diverse contributions to open source beyond code. As a career techie who has always held non-coding roles, Dawn Parzych's piece [highlighting four of these positions][7] struck a familiar chord.
Whether you have a talent for technical writing or a desire to do data analysis, each of the four roles highlighted here brings its own value to tech. Lest you fear that all of them are too far from the code, developer relations made the list. This fairly new discipline puts developer needs first, and while coding isn't required for all roles, it's a huge plus.
### 16 efficient breakfasts of open source technologists from around the world
What's your favorite meal to start the day? That's what Jen Wike Huger asked us Opensource.com writers this past spring. [The answers][8] were diverse like we are, often reflecting where we live around the world.
From bacon, egg, and cheese bagels in New York City to copious cups of tea in England, 16 of us shared what we eat to start our days off right. I'm still trying to convince myself that coffee in itself is not a meal, but that's another article.
### My open source disaster recovery strategy for the home office
What's the worst that could happen? In Howard Fosdick's case, it's the risk that a home-based device might fail for remote employees. This article [walks readers through solutions][9] should the worst happen to you.
Howard is upfront that his strategies (which include defining high availability and confirming allowable downtime) might not work in all scenarios. Still, the tips he offers are customizable. The critical takeaway is to plan ahead. That way, if the worst happens, you've got a plan to tackle the challenge.
### 3 wishes for open source productivity in 2021
January 31, 2021, feels like a lifetime ago. That's when Kevin Sonney [shared some hopes][10] he had for productivity in open source this year.
To conclude his series on productivity, Kevin said he wanted open sourcers to be more mindful and inclusive. This includes a call to disconnect by turning off devices when we're not working. For my part, I tried to do this by keeping my phone in "Do Not Disturb" mode in another room during heads-down work. How did you stay productive this year?
### 15 unusual paths to tech
Is tech your second career? It is for many Opensource.com writers, as we learned when Jen Wike Huger asked which roles we held before taking the techie path. Janitor, papermaker, map editor, and musician are just a few past lives that came up.
[The complete list][11] is a fascinating read that confirms why open source is so special: Done well, it unites folks with diverse skillsets and experiences to build something great. It also confirms that it's never too late—and you're never too "out of place"— to jump into open source.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/12/open-source-career-lessons
作者:[Lauren Maffeo][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/lmaffeo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/wfh_work_home_laptop_work.png?itok=VFwToeMy (Working from home at a laptop)
[2]: https://opensource.com/article/21/2/linux-workday
[3]: https://opensource.com/article/21/1/notes-joplin
[4]: https://opensource.com/article/21/9/alternatives-zoom
[5]: https://opensource.com/article/21/1/stay-focused
[6]: https://opensource.com/article/21/2/python-pulp-internship
[7]: https://opensource.com/article/21/2/non-engineering-jobs-tech
[8]: https://opensource.com/article/21/5/breakfast
[9]: https://opensource.com/article/21/2/high-availability-home-office
[10]: https://opensource.com/article/21/1/productivity-wishlist
[11]: https://opensource.com/article/21/5/unusual-tech-career-paths