Remove outdated articles

Xingyu Wang 2022-02-01 22:17:43 +08:00
parent 01dcaad895
commit 793f5a496e
51 changed files with 0 additions and 9951 deletions


@@ -1,129 +0,0 @@
[#]: subject: "ONLYOFFICE Docs v7.0 Adds Online Forms, Password Protection, and More Improvements"
[#]: via: "https://news.itsfoss.com/onlyoffice-docs-7-release/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
ONLYOFFICE Docs v7.0 Adds Online Forms, Password Protection, and More Improvements
======
ONLYOFFICE is a popular open-source office suite available for desktop platforms (including Linux) as well as a web application.
If you have a [Nextcloud][1] or ownCloud instance, you may already have ONLYOFFICE installed to manage your documents.
Now, for its first major release in 2022, ONLYOFFICE v7.0 has been announced with a range of improvements and much-needed feature additions.
### ONLYOFFICE 7.0: What's New?
![][2]
Whether you work with its online editors or the desktop editors, the improvements should come in handy.
Let me highlight some of the key features here:
#### Fillable Online Forms
![][3]
The better the ability to collaborate, the more time we save. And, the ability to create and share a form online with friends and collaborators should make things easier.
To get started, you need to save the document as a standard PDF or as OFORM to be able to share it online for collaboration.
You get access to a variety of fields that include text, boxes, drop-down lists, and images. It should be a breeze to manage the form, customize it, and complete it with the help of collaborators.
To improve the collaboration experience, you can also group fields to fill them out quickly. The online fillable form can be accessed using mobile applications as well. You should update the Android/iOS applications to try it out.
#### Password Protection in Spreadsheets
![][4]
While we work with a lot of data in spreadsheets, it is also important to protect them from unauthorized access.
With ONLYOFFICE Docs v7.0, you can add password protection to individual sheets or the entire workbook.
#### Support for Query tables
For easy reporting and analysis, a new ability to open and save query tables has been added that helps you combine data from multiple tables.
#### New Transitions Tab and Animation for Presentations
![][5]
A separate Transitions tab was added to let you easily access and add/edit the available transitions for your presentation slides.
It should prove to be a quick task to choose between different transitions, and manage the settings.
You can't quite add animations to your presentations yet, but the groundwork has been laid, with full support planned for the next release.
#### Collaboration Improvements
![][4]
The release is not just limited to new features; there have been several improvements across the office suite.
The version history for spreadsheets received an update to save each draft as a version when the last user exits from the spreadsheet. Moreover, different colors should help identify versions for other users if you are co-editing a spreadsheet.
The comments system also received the ability to sort comments by date and author.
You should also find it easier to review changes by co-authors working in a single document.
#### Usability Improvements
![][6]
A new dark mode has been added for text documents to improve readability and reduce eye strain.
You can perform several quick actions using some of the new keyboard shortcuts by pressing “**Alt**” in any editor.
There are also new scaling options, with support for scaling up to 500%.
#### Other Improvements
In addition to the wider scaling range, you also get intermediate options like 125% and 175% to let you work with documents on different monitors.
Other essential improvements include:
* The ability to decide whether you want to open editors in a new tab or a new window
* Desktop editor integration with kDrive and Liferay
* New colour palette
* Mobile app improvements
* Hyperlink autocorrection
* New localization options
You can learn more about the changes in the [official changelog][7] or the [official announcement][8].
### Download ONLYOFFICE 7.0
You can head to its [official website][9] and download the free version (community edition). If you need more, you can opt for its premium offerings as well. If you can't find the latest version, it should be available soon.
The latest version should be available as DEB/RPM packages, a Docker image, a Snap, and 1-click applications for cloud platforms like Vultr and DigitalOcean.
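If you prefer the Docker route, deploying ONLYOFFICE Docs typically comes down to a single command; as a rough sketch (assuming the standard onlyoffice/documentserver image and default port mapping; check the official documentation for your setup):
```
# Pull and start ONLYOFFICE Docs (Document Server) on port 80
$ sudo docker run -i -t -d -p 80:80 onlyoffice/documentserver
```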
[ONLYOFFICE 7.0][9]
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/onlyoffice-docs-7-release/
Author: [Ankush Das][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/nextcloud/
[2]: https://i0.wp.com/i.ytimg.com/vi/hmGHs4v44Tk/hqdefault.jpg?w=780&ssl=1
[3]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjU3MSIgd2lkdGg9Ijc4MCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[4]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjM3MSIgd2lkdGg9Ijc4MCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[5]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjM2OSIgd2lkdGg9Ijc4MCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[6]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjM3MCIgd2lkdGg9Ijc4MCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[7]: https://github.com/ONLYOFFICE/DocumentServer/blob/master/CHANGELOG.md#641
[8]: https://www.onlyoffice.com/blog/2022/01/onlyoffice-docs-7-0/
[9]: https://www.onlyoffice.com/download-docs.aspx?from=default#docs-community


@@ -1,72 +0,0 @@
[#]: subject: "ProtonMail Now Protects You From Email Tracking"
[#]: via: "https://news.itsfoss.com/protonmail-tracking-protection/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
ProtonMail Now Protects You From Email Tracking
======
[ProtonMail][1] is an open-source email service that offers best-in-class privacy and security features. All of its client applications are open-source as well. You can use it for free and opt for premium upgrades if needed. Whether using it for free or with a subscription, ProtonMail has been an impressive option for privacy and open-source enthusiasts.
In fact, we use it for our team. And, it has been a good service so far!
Now, to make things better, ProtonMail [announced][2] a new feature that blocks hidden pixels in emails that often track your activity.
While they claim that it should make your email experience safer, what is it? And, what should you expect from it?
### Blocking Tracking Pixels in Emails
As of now, email tracking happens without the receiver's consent. Some of the newsletters you receive, marketing/promotional emails, or just about anything else might already contain a hidden tracking pixel that monitors your email activity.
Fret not; these tracking methods do not compromise your data or your email address. However, the trackers do monitor when you open an email, how many times you access it, and the IP address/location associated with it.
So, with this data, the sender can analyze a wide range of things.
While this can be useful for digital marketers, it can also give attackers more opportunities to lure you into a scam.
Unfortunately, there's no way to regulate it or ask for consent. Tracking pixels in emails are all over the place, and several trustworthy services make use of them as well.
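For illustration, a tracking pixel is usually nothing more than a tiny, invisible image whose URL is unique to you; a hypothetical snippet (the domain and ID below are made up) might look like this:
```
<!-- Hypothetical tracking pixel: a 1x1 invisible image with a unique ID -->
<img src="https://tracker.example.com/open?id=abc123" width="1" height="1" alt="" />
```
When your mail client fetches that image, the sender's server logs the time, the number of opens, and the IP address of the request.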
![][3]
ProtonMail comes to the rescue by blocking these tracking pixels in your email and hiding your IP address and location from third parties.
As you can see in the screenshot above, the email I received included one tracker.
This feature is enabled by default for every free and premium ProtonMail user.
When you click on the tracking protection icon on the web, here's what you would see:
![][4]
Some trackers cannot be identified easily and will appear as “Uncategorized Tracker”.
The presence of this feature makes ProtonMail an attractive, privacy-focused email offering. You may no longer need to opt for expensive solutions like [HEY][5] from Basecamp to get rid of email tracking.
[ProtonMail][1]
_What do you think about ProtonMail's new enhanced tracking protection feature? Let me know your thoughts in the comments down below._
**Disclaimer:** It's FOSS is an affiliate partner of ProtonMail. While this does not affect our news reporting stance, we get a small commission if you get a ProtonMail subscription from our link.
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/protonmail-tracking-protection/
Author: [Ankush Das][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/recommends/protonmail/
[2]: https://protonmail.com/blog/enhanced-tracking-protection/
[3]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjQ1MyIgd2lkdGg9Ijc4MCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[4]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjMzMyIgd2lkdGg9Ijc4MCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[5]: https://www.hey.com/


@@ -1,108 +0,0 @@
[#]: subject: "Heres Why Ksnip is My New Favorite Linux Screenshot Tool in 2022"
[#]: via: "https://news.itsfoss.com/ksnip-experience/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Here's Why Ksnip is My New Favorite Linux Screenshot Tool in 2022
======
So, I recently upgraded to a dual-monitor setup (1080p + 1440p).
While I was excited about the productivity boost of getting things done faster without constantly managing/minimizing active windows, I came across a few hiccups.
To my surprise, Flameshot refused to work. And, for the tutorials or articles I write, a screenshot tool that offers minor editing or annotation capabilities comes in handy.
If you have a similar requirement and are confused, the [GNOME Screenshot tool][1] is an option that works with multiple screens flawlessly.
However, it does not offer annotations. So, I would have to open the image separately in another image editor (or in Ksnip) to annotate it.
Instead, I decided to use Ksnip for both screenshots and annotations. Convenient, right? Yes!
Let me share my brief experience with Ksnip, and why I think you should try it as well!
### Using Ksnip for Screenshots on Linux
I installed Ksnip using the [Flatpak package][2] from [Flathub][3]. But, you can also find its Snap package on Snapcraft.
Packages including DEB/RPM and the AppImage file can be found in its [GitHub releases section][4].
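If you want to go the same route, the Flatpak install is a one-liner (the app ID below is the one used on the Flathub page linked above):
```
# Install Ksnip from Flathub and launch it
$ flatpak install flathub org.ksnip.ksnip
$ flatpak run org.ksnip.ksnip
```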
You should not have any issues installing it on any Linux distribution. I am currently using it on Pop!_OS 21.10.
![][5]
Ksnip supports system tray integration out-of-the-box. So, you should get quick access to the tool and its options, as shown in the screenshot above.
It lets you take a screenshot of both monitors combined using the Full-Screen option. In my case, the result is not pretty (considering I have two monitors with different resolutions), and the file takes up more than 9 MB.
In any case, I do not have a use case for such an option. So, I stick to taking screenshots of a rectangular area.
I created a custom shortcut to take a screenshot of an area (or rectangular region) to make it more convenient. Specifically, I mapped it to the middle-click button on my mouse. You can set your preferred shortcut if you want.
![][6]
Unfortunately, it does not feature a “delay” option in the system tray to initiate a screenshot after a time gap. But, you can add a delay by accessing the Ksnip editor and initiating a screenshot from within.
![][7]
Moving forward, it lets me accurately select a rectangular area across both monitors, which is exactly what I want.
![][8]
Now, these options alone let me take all kinds of screenshots.
Once the screenshot has been taken, Ksnip directly opens the editor to let you add annotations, save the image, or discard it.
With Flameshot, if I miss adding annotations while taking the screenshot, there's no built-in image editor to help me afterward. With Ksnip, I do not have to worry about adding annotations immediately; I can think it over and add them later if necessary.
![][9]
It also allows me to modify the annotations, even after I have saved the image to disk. What a nifty feature!
In addition to all these, you also get some key features like:
* The ability to pin the editor and use it as a widget on the screen for quick access to the Ksnip editor
* Ability to add watermarks
* Undo/redo
* Modify the canvas
* Scale/crop images
* Add numbers/stickers along with other annotations
* Adjust the transparency of the sniping area
* Imgur/script uploader
* Hotkey support
For my workflow, Ksnip is probably the [best screenshot tool for Linux][10], and I will be sticking with it for the foreseeable future!
[Ksnip (GitHub)][11]
_What do you think about my experience with Ksnip? Have you tried it as well? Let me know your thoughts in the comments!_
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/ksnip-experience/
Author: [Ankush Das][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/using-gnome-screenshot-tool/
[2]: https://itsfoss.com/flatpak-guide/
[3]: https://flathub.org/apps/details/org.ksnip.ksnip
[4]: https://github.com/ksnip/ksnip/releases/tag/v1.9.2
[5]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjYzMSIgd2lkdGg9Ijc4MCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[6]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjMxNSIgd2lkdGg9Ijc4MCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[7]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjIzMiIgd2lkdGg9Ijc4MCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[8]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjQ0MCIgd2lkdGg9Ijc4MCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[9]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjM2MSIgd2lkdGg9Ijc4MCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[10]: https://itsfoss.com/take-screenshot-linux/
[11]: https://github.com/ksnip/ksnip


@@ -1,368 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (16 Places to Buy a Pre-installed Linux Laptop Online)
[#]: via: (https://www.2daygeek.com/buy-linux-laptops-computers-online/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
16 Places to Buy a Pre-installed Linux Laptop Online
======
Linux runs on most hardware these days, but most retailers do not sell hardware with a Linux operating system pre-installed.
Gone are the days when users would only buy laptops with Windows pre-installed.
Over the years, developers have purchased many Linux laptops to work on major Linux applications related to Docker, Kubernetes, AI, cloud-native computing, and machine learning.
But nowadays, ordinary users are also eager to buy a Linux laptop instead of a Windows one, which encourages more vendors to offer Linux.
### Why Pre-installed Linux?
Nowadays, ordinary users have also started using Linux because of its open-source nature, security, and reliability.
But most retailers around the world do not sell computers with Linux pre-installed.
It can be difficult for Linux aspirants to find compatible hardware and drivers to get Linux installed.
So, we recommend buying a computer with Linux pre-installed instead of figuring out compatibility issues yourself.
Here we list 16 manufacturers/vendors (in no particular order) best known for computers preloaded with Linux.
### 1) Dell
Dell is a US multinational computer technology company that has been selling and distributing computers pre-installed with Ubuntu Linux for several years now.
It started in 2012 as a community project called Sputnik.
Strong community support turned the project into a product. Over the years, they launched the Dell XPS 13 Developer Edition (Sputnik 3) after fixing some major issues found in Sputnik 1 and Sputnik 2.
[![][1]][2]
They sell Red Hat Enterprise Linux and Ubuntu-based laptops for business users, developers, and sysadmins.
All systems are preloaded with Ubuntu, but a few of them are certified for Red Hat Enterprise Linux 7.5 and RHEL 8.
You can presumably install other distros as well if you want, but I haven't tried it.
The signature Linux products of Dell are **[XPS developer edition][3]**, **[Precision mobile workstation][4]** and Precision tower workstation.
* **Availability:** Worldwide
* **Product Details:** [Dell Linux Systems][5]
### 2) System 76
**[System76][6]** is an American computer manufacturer based in Denver, Colorado, specializing in the sale of notebooks, desktops, and servers.
Since 2003, System76 has been selling computers with the Linux operating system installed.
They developed a Linux distribution named Pop!_OS, based on Ubuntu and using the GNOME desktop environment, aimed at developers and professionals.
[![][1]][7]
The products are categorized mainly by portability, storage, graphics, and CPU performance.
The entry-level laptop model, the Galago Pro, costs around $950, while higher-end models such as the Adder WS and Serval WS cost around $2,000.
They offer desktops (Thelio variants) in the $800 to $2,600 range.
They also sell mini servers (Meerkat) starting from $500, and larger servers (Jackal, Ibex, and Starling) preloaded with Ubuntu starting from $3,000.
Their laptops ship with the open-source coreboot firmware, an alternative to proprietary BIOS firmware.
System76 ships its products to 60 countries around the world across Africa, Europe, Asia, North America, South America, Australia, and Zealandia.
* **Availability:** To 60 countries worldwide
* **Product Details:** [System76][8]
### 3) Purism
Purism is a US-based company that commenced its operation in 2014.
It manufactures the Librem personal computing devices with a focus on software freedom, computer security, and Internet privacy.
[![][1]][9]
Purism sells its products with PureOS installed, a Debian-based Linux distribution developed by Purism itself.
They sell multiple customized products such as laptops, tablets, smartphones, servers, and the Librem Key.
* **Availability:** Worldwide
* **Product Details:** [Purism][10]
### 4) Slimbook
**[Slimbook][11]**, based in Spain, commenced operation in 2015.
It is a Linux-friendly vendor that offers laptops, desktops, mini PCs, all-in-one PCs, and servers.
[![][1]][12]
It sells its products preloaded with a variety of Linux distributions, Windows, or both.
They were the first to sell computers with a KDE-based OS installed, which is ideal for Linux beginners since it is easy to use and easy to learn.
The laptop body is made of an aluminum-magnesium metal alloy.
* **Availability:** Worldwide
* **Product Details:** [Slimbook][13]
### 5) Tuxedo Computers
Tuxedo Computers, a Germany-based company, sells notebooks, desktops, and mini computers with Linux preloaded.
Their desktops start from around 480 EUR, mini computers from 430 EUR, and notebooks from around 815 EUR.
They offer both Intel and AMD processors, and the machines come with a 5-year warranty and lifetime support.
TUXEDO computers are individually built and fully Linux-compatible. They sell their products to most of Europe and the USA.
* **Availability:** Ships to many countries
* **Product Details:** [Tuxedo Computers][14]
### 6) ThinkPenguin
ThinkPenguin is a US-based company that started operation in 2008 to improve support for GNU/Linux and other free software operating systems.
They sell desktops, notebooks, network equipment, storage devices, printers, scanners, and other accessories that are compatible with Linux.
They provide warranties from 90 days to 3 years depending on the product.
* **Availability:** Worldwide
* **Product Details:** [ThinkPenguin][15]
### 7) Emperor Linux
EmperorLinux is a US-based company that has provided Linux laptops with full hardware support since 1999.
They offer Linux laptops with unique features, such as the Molecule RD3D using Sharp's ground-breaking auto-stereo 3D display, and Panasonic's ToughBook line of rugged and semi-rugged Linux laptops.
They also sell fully functional Linux tablets, such as the Raven tablet (based on the ThinkPad X series).
* **Availability:** USA (International shipping is available upon request).
* **Product Details:** [Emperor Linux][16]
### 8) ZaReason
ZaReason, based in the US, opened for business in 2007.
They mainly focus on R&D labs, businesses both small and large, universities, and home users.
They have a long track record of building hardware preloaded with different distros such as Debian, Fedora, Ubuntu, Kubuntu, Edubuntu, and Linux Mint.
[![][1]][17]
Customers can even choose a Linux distro other than those specified.
Their laptops range from $999 to $1,699, and their desktops and mini computers from $499 to $1,199.
They also sell a desktop built specifically for gamers (the Gamebox9400).
The default warranty is for one year; extending it to 3 years costs extra.
* **Availability:** USA and Canada
* **Product Details:** [ZaReason][18]
### 9) LAC Portland
LAC (Los Alamos Computers) Portland is a US-based company that has provided Linux computers configured and supported by GNU/Linux professionals since 2000.
They sell Lenovo desktops (ThinkCentre and ThinkStation) ranging from $845 to $2,215 and laptops (ThinkPad) ranging from $926 to $2,380.
They install and sell distros such as Ubuntu, Linux Mint, Debian, Fedora, CentOS, Scientific Linux, openSUSE, and FreeDOS.
They provide a five-year hardware and labor warranty with on-site support options backed worldwide by Lenovo.
* **Availability:** USA
* **Product Details:** [LAC Portland][19]
### 10) Entroware
Entroware is a UK-based company that has specialized in providing Ubuntu-based computing solutions and services tailored to customers' requirements since early 2014.
They sell Ubuntu- and Ubuntu MATE-powered desktops, laptops, and servers built with modern, high-quality components.
[![][1]][20]
They also sell mini computers and all-in-one computers.
Desktops range from $499 to $1,900, laptops from $740 to $1,900, and servers from $1,150 to $2,000.
They also sell accessories such as OS recovery drives and external hard drives.
The default warranty is for 3 years; they have three warranty plans, some of which may cost extra. They also provide software support.
Entroware currently ships to the UK, Republic of Ireland, France, Germany, Italy, and Spain.
* **Availability:** UK and other European countries (Republic of Ireland, France, Germany, Italy and Spain).
* **Product Details:** [Entroware][21]
### 11) Vikings
Vikings, based in Germany, sells libre-friendly hardware certified by the Free Software Foundation, preinstalled with Debian, Trisquel, or Parabola depending on customer requirements.
They sell desktops, laptops, servers, routers, mainboards, key generators, PCI cards, and USB sound adapters compatible with Linux.
The Linux laptops and desktops by Vikings come with coreboot or Libreboot.
Their desktops start from 895 EUR, laptops from 250 EUR, and servers from 990 EUR.
They use refurbished/used parts such as mainboards and CPUs, rigorously test all parts, and give a comprehensive guarantee for every part of the system.
Product warranties vary from 1 year to 3 years, with additional charges for longer coverage.
They ship to all parts of the world with very few exceptions, such as North Korea.
* **Availability:** Worldwide
* **Product Details:** [Viking][22]
### 12) Juno Computers
Juno Computers is a UK-based company whose machines come with elementary OS or Ubuntu pre-installed.
They provide an application known as Kronos, which allows quick and easy installation of commercial applications such as Chrome, Dropbox, Spotify, and Skype.
Their laptops range from $945/357 EUR to $999/933 EUR, and their mini PC costs around $549/490 EUR.
They provide a 1-year limited warranty covering all manufacturing problems.
Currently, they ship to the mainland USA, some Canadian provinces, and most of the world, including South Africa, Asia, and Europe.
* **Availability:** Worldwide
* **Product Details:** [Juno Computers][23]
### 13) Pine64
Pine64 is a US-based community platform that offers laptops (**[Pinebooks][24]**), phones (PinePhone), watches (PineTime), single-board computers, and other Linux-compatible accessories.
It commenced operation in 2016, focusing on ARM-powered devices.
[![][1]][25]
The laptops range from $100 to $200.
All single-board computers and accessories sold in the Pine Store carry a 30-day limited warranty against defects in materials and workmanship, and online support is provided through their forum.
They ship to most parts of the world; refer to the site's shipping policy for more details.
* **Availability:** Worldwide
* **Product Details:** [Pine64][26]
### 14) Libiquity
Libiquity is a US-based company that has invested in R&D and run its own personal computer brand since 2011.
They offer a laptop (the Taurinus X200) preloaded with Trisquel, and they also provide ProteanOS, a free/libre and open-source embedded operating system distribution endorsed by the Free Software Foundation.
The laptop starts from $375 and comes with a 1-year limited warranty. Currently, shipping is limited to the US.
* **Availability:** US
* **Product Details:** [Libiquity][27]
### 15) LinuxCertified
LinuxCertified, a US-based company, offers Lenovo desktops and laptops with Linux distros preinstalled.
The preloaded distros offered include Ubuntu, Fedora, openSUSE, CentOS, Red Hat Enterprise Linux, and Oracle Enterprise Linux.
Desktops (ThinkStation) range from $899 to $2,199, and laptops (Z1, LC series) likewise range from $899 to $2,199.
The product warranty is for one year, and they ship within the US.
* **Availability:** Worldwide
* **Product Details:** [LinuxCertified][28]
### 16) Star Labs
**[Star Labs][29]** was created by a group of Linux users who set out to build the ultimate Linux laptop for their own use.
It is based in the United Kingdom and sells laptops with Linux pre-installed.
[![][1]][30]
Star Labs offer a range of laptops designed and built specifically for Linux.
All of their laptops come with a choice of Ubuntu Linux, Linux Mint or Zorin OS pre-installed.
You are not limited to the above three distributions; you can install other Linux distros on their hardware, and they run flawlessly.
* **Availability:** Worldwide
* **Product Details:** [Star Labs][31]
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/buy-linux-laptops-computers-online/
Author: [Magesh Maruthamuthu][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]: https://www.2daygeek.com/wp-content/uploads/2020/01/dell-xps-13-developer-deition-2.png
[3]: https://www.linuxtechnews.com/dells-new-xps-13-developer-edition-is-powered-by-the-10th-generation/
[4]: https://www.linuxtechnews.com/dell-launches-three-new-dell-precision-developer-editions-laptops-preloaded-with-ubuntu-linux/
[5]: https://www.dell.com/en-us/work/shop/overview/cp/linuxsystems
[6]: https://www.linuxtechnews.com/system76-has-announced-new-gazelle-laptops/
[7]: https://www.2daygeek.com/wp-content/uploads/2020/01/system76-1.jpg
[8]: https://system76.com/laptops
[9]: https://www.2daygeek.com/wp-content/uploads/2020/01/librem-1.jpg
[10]: https://puri.sm/products/
[11]: https://www.linuxtechnews.com/slimbook-is-offering-a-new-laptop-called-slimbook-pro-x/
[12]: https://www.2daygeek.com/wp-content/uploads/2020/01/slimbook.jpg
[13]: https://slimbook.es/en/comparison-slimbook-pro-x-with-other-ultrabooks
[14]: https://www.tuxedocomputers.com/en/Linux-Hardware/Linux-Notebooks.tuxedo
[15]: https://www.thinkpenguin.com/catalog/notebook-computers-gnu-linux-2
[16]: http://www.emperorlinux.com/systems/
[17]: https://www.2daygeek.com/wp-content/uploads/2020/01/zareason-1.jpg
[18]: https://zareason.com/Laptops/
[19]: https://shop.lacpdx.com/laptops/
[20]: https://www.2daygeek.com/wp-content/uploads/2020/01/entroware.jpg
[21]: https://www.entroware.com/store/laptops
[22]: https://store.vikings.net/libre-friendly-hardware/x200-ryf-certfied
[23]: https://junocomputers.com/store/
[24]: https://www.linuxtechnews.com/pinebook-pro-199-linux-laptop-pre-orders-ansi-iso-keyboards/
[25]: https://www.2daygeek.com/wp-content/uploads/2020/01/Pinebook_Pro-photo-1.jpg
[26]: https://store.pine64.org/
[27]: https://shop.libiquity.com/
[28]: https://www.linuxcertified.com/linux_laptops.html
[29]: https://www.linuxtechnews.com/star-labs-offering-a-range-of-linux-laptops-with-zorin-os-15-pre-installed/
[30]: https://www.2daygeek.com/wp-content/uploads/2020/01/starlabs-1.jpg
[31]: https://earth.starlabs.systems/pages/laptops


@@ -1,199 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Develop GUI apps using Flutter on Fedora)
[#]: via: (https://fedoramagazine.org/develop-gui-apps-using-flutter-on-fedora/)
[#]: author: (Carmine Zaccagnino https://fedoramagazine.org/author/carzacc/)
Develop GUI apps using Flutter on Fedora
======
![][1]
When it comes to app development frameworks, Flutter is the latest and greatest. Google seems to be planning to take over the entire GUI app development world with Flutter, starting with mobile devices, which are already perfectly supported. Flutter allows you to develop cross-platform GUI apps for multiple targets — mobile, web, and desktop — from a single codebase.
This post will go through how to install the Flutter SDK and tools on Fedora, as well as how to use them both for mobile development and web/desktop development.
### Installing Flutter and Android SDKs on Fedora
To get started building apps with Flutter, you need to install
* the Android SDK;
* the Flutter SDK itself; and,
* optionally, an IDE and its Flutter plugins.
#### Installing the Android SDK
Flutter requires the installation of the Android SDK with the entire [Android Studio][2] suite of tools. Google provides a _tar.gz_ archive. The Android Studio executable can be found in the _android-studio/bin_ directory and is called _studio.sh_. To run it, open a terminal, _cd_ into the aforementioned directory, and then run:
```
$ ./studio.sh
```
#### Installing the Flutter SDK
Before you install Flutter you may want to consider what release channel you want to be on.
The _stable_ channel is least likely to give you a headache if you just want to build a mobile app using mainstream Flutter features.
On the other hand, you may want to use the latest features, especially for desktop and web app development. In that case, you might be better off installing either the latest version of the _beta_ or even the _dev_ channel.
Either way, you can switch between channels after you install using the _flutter channel_ command explained later in the article.
Head over to the [official SDK archive page][3] and download the latest installation bundle for the release channel most appropriate for your use case.
The installation bundle is simply an xz-compressed tarball (_.tar.xz_ extension). You can extract it wherever you want, provided that you add the _flutter/bin_ subdirectory to the _PATH_ environment variable.
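For example, assuming you extracted the tarball to your home directory, a line like the following in your _~/.bashrc_ makes the tools available (adjust the path to wherever you extracted it):
```
# Assumes the SDK was extracted to ~/flutter; adjust as needed
$ export PATH="$PATH:$HOME/flutter/bin"
```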
#### Installing the IDE plugins
To install the plugin for [Visual Studio Code][4], you need to search for _Flutter_ in the _Extensions_ tab. Installing it will also install the _Dart_ plugin.
The same will happen when you install the plugin for Android Studio by opening the _Settings_, then the _Plugins_ tab and installing the _Flutter_ plugin.
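As a side note, if you prefer the command line, VS Code can also install extensions non-interactively. The sketch below assumes the _code_ command is on your PATH and uses the marketplace ID the Flutter extension is published under:
```
# Installs the Flutter extension (pulls in the Dart extension as a dependency)
$ code --install-extension Dart-Code.flutter
```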
### Using the Flutter and Android CLI Tools on Fedora
Now that you've installed Flutter, here's how to use the CLI tool.
#### Upgrading and Maintaining Your Flutter Installations
The _flutter doctor_ command is used to check whether your installation and related tools are complete and don't require any further action.
For example, the output you may get from _flutter doctor_ right after installing on Fedora is:
```
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, v1.12.13+hotfix.5, on Linux, locale it_IT.UTF-8)
[!] Android toolchain - develop for Android devices (Android SDK version 29.0.2)
✗ Android licenses not accepted. To resolve this, run: flutter doctor --android-licenses
[!] Android Studio (version 3.5)
✗ Flutter plugin not installed; this adds Flutter specific functionality.
✗ Dart plugin not installed; this adds Dart specific functionality.
[!] Connected device
! No devices available
! Doctor found issues in 3 categories.
```
Of course the issue with the Android toolchain has to be resolved in order to build for Android. Run this command to accept the licenses:
```
$ flutter doctor --android-licenses
```
Use the _flutter channel_ command to switch channels after installation. It's just like switching branches in Git (and that's actually what it does under the hood). You use it in the following way:
```
$ flutter channel <channel_name>
```
…where you'd replace _<channel_name>_ with the release channel you want to switch to.
After doing that, or whenever you feel the need to do it, you need to update your installation. You might consider running this every once in a while or when a major update comes out if you follow Flutter news. Run this command:
```
$ flutter upgrade
```
#### Building for Mobile
You can build for Android very easily: the _flutter build_ command supports it by default, and it allows you to build both APKs and newfangled app bundles.
All you need to do is to create a project with _flutter create_, which will generate some code for an example app and the necessary _android_ and _ios_ folders.
When you're done coding you can either run:
* _flutter build apk_ or _flutter build appbundle_ to generate the necessary app files to distribute, or
* _flutter run_ to run the app on a connected device or emulator directly.
When you run the app on a phone or emulator with _flutter run_, you can press the _r_ key to trigger a _stateful hot reload_. This feature updates what's displayed on the phone or emulator to reflect the changes you've made to the code without requiring a full rebuild.
If you input a capital _R_ character to the debug console, you trigger a _hot restart_. This restart doesn't preserve state and is necessary for bigger changes to the app.
If you're using a GUI IDE, you can trigger a hot reload using the _bolt_ icon button and a hot restart with the typical _refresh_ button.
#### Building for the Desktop
To build apps for the desktop on Fedora, use the [flutter-desktop-embedding][5] repository. The _flutter create_ command doesn't have templates for desktop Linux apps yet. That repository contains examples of desktop apps and the files required to build on desktop, as well as examples of plugins for desktop apps.
To build or run apps for Linux, you also need to be on the _master_ release channel and enable Linux desktop app development. To do this, run:
```
$ flutter config --enable-linux-desktop
```
After that, you can use _flutter run_ to run the app on your development workstation directly, or run _flutter build linux_ to build a binary file in the _build/_ directory.
If those commands don't work, run this command in the project directory to generate the required files to build in the _linux/_ directory:
```
$ flutter create .
```
#### Building for the Web
Starting with Flutter 1.12, you can build web apps using Flutter with the mainline codebase, without having to use the _flutter_web_ forked libraries, but you have to be running on the _beta_ channel.
If you are (you can switch to it using _flutter channel beta_ and _flutter upgrade_ as we've seen earlier), you need to enable web development by running _flutter config --enable-web_.
After doing that, you can run _flutter run -d web_ and a local web server will be started from which you can access your app. The command returns the URL at which the server is listening, including the port number.
You can also run _flutter build web_ to build the static website files in the _build/_ directory.
If those commands don't work, run this command in the project directory to generate the required files to build in the _web/_ directory:
```
$ flutter create .
```
### Packages for Installing Flutter
Other distributions have packages or community repositories that make installing and updating software more straightforward and intuitive. However, at the time of writing, no such package exists for Flutter on Fedora. If you have experience packaging RPMs for Fedora, consider contributing to [this GitHub repository][6] for [this COPR package][7].
The next step is learning Flutter. You can do that in a number of ways:
* Read the good API reference documentation on the official site
* Watch some of the introductory video courses available online
* Read one of the many books out there today. _[Check out the author's bio for a suggestion! — Ed.]_
* * *
_Photo by [Randall Ruiz][8] on [Unsplash][9]._
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/develop-gui-apps-using-flutter-on-fedora/
Author: [Carmine Zaccagnino][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]: https://fedoramagazine.org/author/carzacc/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2020/01/flutter-816x345.jpg
[2]: https://developer.android.com/studio
[3]: https://flutter.dev/docs/development/tools/sdk/releases?tab=linux
[4]: https://fedoramagazine.org/using-visual-studio-code-fedora/
[5]: https://github.com/google/flutter-desktop-embedding
[6]: https://github.com/carzacc/flutter-copr
[7]: https://copr.fedorainfracloud.org/coprs/carzacc/flutter/
[8]: https://unsplash.com/@ruizra?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[9]: https://unsplash.com/s/photos/flutter?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText


@@ -1,122 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (PaperWM, the Tiling Window Manager for GNOME)
[#]: via: (https://itsfoss.com/paperwm/)
[#]: author: (John Paul https://itsfoss.com/author/john/)
PaperWM, the Tiling Window Manager for GNOME
======
Lately, tiling window managers have been gaining popularity even among regular desktop Linux users. Unfortunately, it can be difficult and time-consuming to install and set up a tiling window manager.
This is why projects like [Regolith][1] and PaperWM have come up to provide a tiling window experience with minimal effort.
We have already discussed [Regolith desktop][2] in detail. In this article, we'll check out PaperWM.
### What is PaperWM?
According to its GitHub repo, [PaperWM][3] is “an experimental [Gnome Shell extension][4] providing scrollable tiling of windows and per monitor workspaces. It's inspired by paper notebooks and tiling window managers.”
PaperWM puts all of your windows in a row, and you can switch between them very quickly. It's a little bit like having a long spool of paper in front of you that you can move back and forth.
This extension supports GNOME Shell 3.28 to 3.34. It also supports both X11 and Wayland. It is written in JavaScript.
![PaperWM Desktop][5]
### How to Install PaperWM?
To install the PaperWM extension, you will need to clone the GitHub repo. Use this command:
```
git clone 'https://github.com/paperwm/PaperWM.git' "${XDG_DATA_HOME:-$HOME/.local/share}/gnome-shell/extensions/paperwm@hedning:matrix.org"
```
Now all you have to do is run:
```
./install.sh
```
The installer will set up and enable PaperWM.
If you are an Ubuntu user, there are a couple of things that you will need to consider. There are currently three different versions of the Gnome desktop available with Ubuntu:
* ubuntu-desktop
* ubuntu-gnome-desktop
* vanilla-gnome-desktop
Ubuntu ships ubuntu-desktop by default, which includes the _desktop-icons_ package, and that package causes issues with PaperWM. The PaperWM devs recommend that you turn off the desktop-icons extension [using the GNOME Tweaks tool][6]. However, while this step does work in 19.10, they say that users have reported it not working in 19.04.
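If you'd rather do it from a terminal, something like the following should work depending on your GNOME version; note that the extension UUID below is my assumption based on Ubuntu's packaging, so verify it against the enabled-extensions list first:
```
# See which extension UUIDs are currently enabled
$ gsettings get org.gnome.shell enabled-extensions
# Disable the desktop icons extension (UUID assumed; check the list above)
$ gnome-extensions disable desktop-icons@csoriano       # GNOME 3.34+
$ gnome-shell-extension-tool -d desktop-icons@csoriano  # older GNOME releases
```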
According to the PaperWM devs, using _ubuntu-gnome-desktop_ produces the best out-of-the-box results. _vanilla-gnome-desktop_ has some keybindings that wreak havoc with PaperWM.
### How to Use PaperWM?
Like most tiling window managers, PaperWM uses the keyboard to control and manage the windows. PaperWM also supports mouse and touchpad controls. For example, if you are running Wayland, you can use a three-fingered swipe to navigate.
![PaperWM in action][8]
Here is a list of a few of the keybindings preset in PaperWM:
* Super + , or Super + . to activate the next or previous window
* Super + Left or Super + Right to activate the window to the left or right
* Super + Up or Super + Down to activate the window above or below
* Super + Tab or Alt + Tab to cycle through the most recently used windows
* Super + C to center the active window horizontally
* Super + R to resize the window (cycles through useful widths)
* Super + Shift + R to resize the window (cycles through useful heights)
* Super + Shift + F to toggle fullscreen
* Super + Return or Super + N to create a new window from the active application
* Super + Backspace to close the active window
The Super key is the Windows key on your keyboard. You can find the full list of keybindings on the PaperWM [GitHub page][9].
### Final Thoughts on PaperWM
As I have stated previously, I don't use tiling managers. However, this one has me thinking. I like the fact that you don't have to do a lot of configuring to get it working. Another big plus is that it is built on GNOME, which means that getting a tiling manager working on Ubuntu is fairly straightforward.
The only downside that I can see is that a system running a dedicated tiling window manager, like [Sway][10], would use fewer system resources and be faster overall.
What are your thoughts on the PaperWM GNOME extension? Please let us know in the comments below.
If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][11].
--------------------------------------------------------------------------------
via: https://itsfoss.com/paperwm/
Author: [John Paul][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://regolith-linux.org/
[2]: https://itsfoss.com/regolith-linux-desktop/
[3]: https://github.com/paperwm/PaperWM
[4]: https://itsfoss.com/gnome-shell-extensions/
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/paperwm-desktop.png?ssl=1
[6]: https://itsfoss.com/gnome-tweak-tool/
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/regolith-linux.png?fit=800%2C450&ssl=1
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/paperwm-desktop2.png?fit=800%2C450&ssl=1
[9]: https://github.com/paperwm/PaperWM#usage
[10]: https://itsfoss.com/sway-window-manager/
[11]: https://reddit.com/r/linuxusersgroup


@@ -1,270 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to restore a single-core computer with Linux)
[#]: via: (https://opensource.com/article/20/2/restore-old-computer-linux)
[#]: author: (Howard Fosdick https://opensource.com/users/howtech)
How to restore a single-core computer with Linux
======
Let's have some geeky fun refurbishing your prehistoric Pentium with Linux and open source.
![Two animated computers waving one missing an arm][1]
In a [previous article][2], I explained how I refurbish old dual-core computers ranging from roughly five to 15 years old. Properly restored, these machines can host a fully capable lightweight Linux distribution like [Mint/Xfce][3], [Xubuntu][4], or [Lubuntu][5] and perform everyday tasks. But what if you have a really old computer gathering dust in your attic or basement? Like a Pentium 4 desktop or Pentium M laptop? Yikes! Can you even do anything with a relic like that?
### Why restore a relic?
For starters, you might learn a bit about hardware and open source software by refurbishing it. And you could have some fun along the way. Whether you can make much use of it depends on your expectations.
A single-core computer can perform well for a specific purpose. For example, my friend created a dandy retro gaming box (like I describe below) that runs hundreds of Linux and old Windows and DOS games. His kids love it!
Another friend uses his Pentium 4 for running design spreadsheets in his workshop. He finds it convenient to have a dedicated machine tucked into a corner of his shop. He likes that he doesn't have to worry about heat or dust ruining an expensive modern computer.
My romance author acquaintance employs her Pentium M as a "novelist's workstation" lodged in her cozy attic hideaway. The laptop functions as her private word processor.
I've used old computers to teach beginners how to build and repair hardware. Old equipment makes the best testbed because it's expendable. If someone makes a mistake and fries a board, it doesn't much matter. (Contrast this to how you would feel if you wrecked your main computer!)
The web suggests many [other potential uses][6] for old Pentiums: security cam monitors, network-attached storage (NAS) servers, [SETI][7] boxes, torrent servers, anonymous [Tails][8] servers, Bitcoin miners, programming workstations, thin clients, terminal emulators, routers, file servers, and more. To me, many of these applications sound more like fun projects than practical uses for single-core computers. That doesn't mean they aren't worth your while; it's just that you want to be clear-eyed about any project you take on.
By current standards, P-4s and Ms are terribly [weak processors][9]. For example, using them for web surfing is problematic because webpage size and programming complexity have [grown exponentially][10]. And the open web is closing—increasingly, sites won't allow you access unless you let them run all those ads that can overwhelm old processors. (I'll discuss web surfing performance tricks later in this article.) Another shortcoming of old computers is their energy consumption. Better electricity-to-performance ratios often make newer computers more sensible. This is especially true when a [tablet or smartphone][11] can fulfill your needs.
Nevertheless, you can still have fun and learn a lot by tinkering with an old P-4 or M. They're great educational tools, they're expendable, and they can be useful in dedicated roles. Best of all, you can get them for free. I'll tell you how.
Still reading? Okay, let's have some geeky fun refurbishing your prehistoric Pentium.
### Understand hardware evolution
As a quick level-set, here are the common names for the P-4 and M class processors and their rough dates of manufacture:
**Desktops (2000-2008)**
* Pentium 4
* Pentium 4 HT (Hyper-Threading)
* Pentium 4 EE (Extreme Edition)
**Desktops (2005-2008)**
* Pentium D (early dual-core)
**Mobile (2002-2008)**
* Pentium M
* Pentium 4-M
* Mobile Pentium 4
* Mobile Pentium 4 HT
Sources: Wikipedia (for the [P-4][12], [P-M][13], and [processor][14] lists), [CPU World][15], [Revolvy][16].
Machines hosting these processors typically use either DDR2 or DDR memory. Dual-core processors entered the market in 2005 and displaced single-core CPUs within a few years. I'll assume you have some version of what's in the above table. Or you might have an equivalent [AMD][17] or [Celeron][18] processor from the same era.
The big draw of this old hardware is that you can get it for free. People consider it junk. They'll be only too glad to give you their castoffs. If you don't have a machine on hand, just ask your friends or family. Or drop by the local recycling center. Unless they have strict rules, they'll be happy to give you this old equipment. You can even advertise on [Craigslist][19], [Freecycle,][20] or [other reuse websites][21].
**A quick tip:** Grab more than one machine. With old hardware, you often need to cannibalize parts from several computers to build one good working one.
### Prepare the hardware
Before you can use your old computer, you must refurbish it. The steps to fixing it up are:
1. Clean it
2. Identify what hardware you have
3. Verify the hardware works
Start by opening up the box and cleaning out the dirt. Dust causes the heat that kills electronics. A can of compressed air helps.
Always keep yourself grounded when touching things so that you don't harm the electronics. And don't rub anything with a cleaning rag! Even a shock you can't feel can damage computer circuitry.
While you've got the box open, learn everything you can about your hardware. Write it all down, so you remember it later:
* Count the open memory slots, if any. Is the RAM DDR or DDR2 (or something else)?
* Read the hard drive label to learn its capacity and age. (It'll probably be an old IDE drive. You can identify IDE drives by their wide connector ribbons.)
* Check the optical drive label to see what kinds of discs it reads and/or writes, at what speed, and to what standard(s).
* Note other peripherals, add-in cards, or anything unusual.
Close and boot the machine into its boot-time [BIOS][22] panels. [This list][23] tells you what program function (PF) key to press to access those startup panels for your specific computer. Now you can complete your hardware identification by rounding out the details on your processor, memory, video memory, and more.
### Verify the hardware
Once you know what you've got, verify that it all works. Test:
* Memory
* Disk
* Motherboard
* Peripherals (optical drive, USB ports, sound, etc.)
Run any diagnostic tests in the computer's boot or BIOS panels. Free resource kits like [Hiren's BootCD][24] or the [Ultimate Boot CD][25] can round out your testing with any diagnostics your boot panels lack. These kits offer dozens of testing programs: all are free, but not all are open source. You can boot them off a live USB or DVD so that you don't have to install anything on the computer.
Be sure to run the "extended" or long tests for the memory and disk drive. Run tests overnight if you have to. Do this job right! If you miss a problem now, it could cause you big headaches later.
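As a sketch, if you boot a live Linux environment, the usual tools for these long tests look like the following; the device name is an example, so double-check it with _lsblk_ before running anything:
```
# Start the drive's built-in extended self-test (assumes the disk is /dev/sda)
$ sudo smartctl -t long /dev/sda
# Review the results once the test finishes
$ sudo smartctl -a /dev/sda
# Exercise 1 GB of RAM for one pass (requires the memtester package)
$ sudo memtester 1024M 1
```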
If you find a problem, refer to my _[Quick guide to fixing hardware][26]_ to solve common issues.
### Essential hardware upgrades
You'll want to make two key hardware upgrades. First, increase memory to the computer's maximum. (You can find the maximum for your computer with a quick web search for its specs.) The practical minimum to run many lightweight Linux distros is 1GB RAM; 2GB or more is ideal. While the maximum allowable memory varies by the machine, the great majority of these computers will upgrade to at least 2GB.
Second—if the desktop doesn't already have one—add a video card. This offloads graphics processing from the motherboard to the video card and increases the computer's video memory. Bumping up the VRAM from 32 or 64MB to 256MB or more greatly increases the range of applications an old computer can run. Especially if you want to run games.
Be sure the video card fits your computer's [video slot][27] (AGP, PCI, or PCI-Express) and has the right [cable connector][28] (VGA or DVI). You can issue a couple of [Linux line commands][29] to see how much VRAM your system has, or look in the BIOS boot panels.
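For instance, either of these commands (from the pciutils and lshw packages, respectively) will usually report the video memory on cards of this era:
```
# Show the VGA adapter and its memory regions
$ lspci -v | grep -A 10 VGA
# Or query the display adapter directly
$ sudo lshw -C display
```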
These two simple upgrade hacks—increasing memory and video power—take a marginal machine and make it _way_ more functional. Your goal is to build the most powerful P-4 or M ever. That way, you can squeeze the most performance from this aging design.
The good news is that with the old computers we're talking about, you can get any parts you need for free. Just cannibalize them from other discarded PC's.
### Select the software
Choosing the right software for a P-4 or M is critical. [Don't][30] use an [unsupported][31] Windows version just because it's already on the PC; malware might plague you if you do. A fresh install is mandatory.
Open source software is the way to go. [Many][32] Linux [distributions][33] are specifically designed for older computers. And with Linux, you can install, move, copy, and clone the operating system and its apps at will. This makes your job easier: You won't run into activation or licensing issues, and it's all free.
Which distribution should you pick? Assuming you have at least 2GB of memory, start your search by trying a _lightweight distribution_—these feature resource-stingy [desktop environments][34]. Xfce or LXQt are excellent desktop environment choices. Products that [consume more resources][35] or produce fancier graphics—like Unity, GNOME, KDE, MATE, and Cinnamon—won't perform well.
The lightweight Linux distros I've enjoyed success with are Mint/Xfce, Xubuntu, and Lubuntu. The first two use Xfce while Lubuntu employs LXQt. You can find [many other][36] excellent candidate distros beyond these three choices that I can vouch for.
Be sure to download the 32-bit versions of the operating systems; 64-bit versions don't make much sense unless a computer has at least 4GB of memory.
The lightweight Linux distros I've cited offer friendly menus and feature huge software repositories backed by active forums. They'll enable your old computer to do everything it's capable of. However, they won't run on every computer from the P-4 era. If one of these products runs on your computer and you like it, great! You've found your distro.
If your computer doesn't perform well with these selections, won't boot, or you have less than 2GB of memory, try an _ultralight distribution_. Ultralights reduce resource use by replacing desktop environments with [window managers][37] like Fluxbox, FLWM, IceWM, JWM, or Openbox. Window managers use fewer resources than desktop environments. The trade-off is that they're less flexible. As an example, you may have to dip into code to alter your desktop or taskbar icons.
My go-to ultralight distro is [Puppy Linux][38]. It comes in several variants that run well on Pentium 4's and M's with only 1GB of memory. Puppy's big draw is that it has versions designed specifically for older computers. This means you'll avoid the hassles you might run into with other distros. For example, Puppy versions run on old CPUs that don't support features like PAE or SSE3. They'll even help you run an older kernel or obsolete bootstrap program if your hardware requires it.
And Puppy runs _fast_ on limited-resource computers! It optimizes performance by loading the operating system entirely into memory to avoid slow disk access. It bundles a full range of apps that have been carefully selected to use minimal hardware resources.
Puppy is also user-friendly. Even a naive end user can use its simple menus and attractive desktop. But be advised—it takes expertise to install and configure the product. You might have to spend some time on Puppy's [forum][39] to get oriented. The forum is especially useful because many who post there work with old computers.
A fun alternative to Puppy is [Tiny Core][40] Linux. With Tiny Core, you install only the software components you want. So you build up your environment from the absolute minimum. This takes time but results in a lean, mean system. Tiny Core is perfect for creating a dedicated server. It's a great learning tool, too, so check out its [free eBook][41].
If you want a quick, no-hassles install, you might try [antiX][42]. It's Debian-based, offers a selection of lightweight interfaces, and runs well on machines with only a gigabyte of memory. I've had excellent results installing antiX on a variety of old PCs.
_**Caution:**_ Many distros casually claim that they run on "old computers" when they really mean that they run on _limited-resource computers_. There's a big difference. Old computers sometimes do not support all the CPU features required by newer operating systems. Avoid problems by selecting a Linux proven to run on your hardware.
Don't know if a distro will run on your box? Save yourself some time by posting a message on the distro's forum and asking for responses from folks using hardware like yours. You should receive some success stories. If nobody can say they've done what you're trying to do, I'd avoid that product.
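You can also check the hardware yourself from any Linux live session before committing to an install. Here is a minimal sketch using standard tools; note that the SSE3 flag appears in /proc/cpuinfo under the name "pni":

```
free -h                        # how much memory is installed?
grep -qw pae /proc/cpuinfo && echo "PAE supported"
grep -qw pni /proc/cpuinfo && echo "SSE3 supported (listed as 'pni')"
lscpu | grep -i 'model name'   # exact CPU model, handy for forum posts
```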
### How to use your refurbished computer
Will you be happy using your restored PC? It depends on what you expect.
People who use aging systems learn to leverage minimal resources. For example, they run resource-stingy programs like GNOME Office in place of LibreOffice. They forgo CPU-intense programs like emulators, graphics-heavy apps, video processing, and virtual machine hosting. They focus on one task at a time and don't expect much concurrency. And they know how to manage machine resources proactively.
Old hardware can perform well in dedicated situations. Earlier, I mentioned my friends who use their old computers for design spreadsheets and as a writer's workbench. And I wrote this article on my personal retro box—a Dell GX280 desktop with a Pentium 4 at 3.2GHz, with 2GB DDR-2 RAM and two 40GB IDE disks, dual-booting Puppy and antiX.
#### Create a retro game box
You can also create a fantastic retro game box. First, install an appropriate distro. Then install [Wine][43], a program designed to run Windows software on Linux. Now you'll be able to run nearly all your old Windows XP, ME/98/95, and 3.1 games. [DOSBox][44] supports tons more [free DOS games][45]. And Linux offers over a thousand more.
I've enjoyed nostalgic fun on a P-4 running antiX and all the old games I remember from years ago. Just be sure you've maxed out system memory and added a good video card for the best results.
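On a Debian-based distro such as antiX, the setup might look roughly like this (the package names are standard, but the game paths are just illustrative examples):

```
sudo apt update
sudo apt install wine dosbox   # Windows and DOS compatibility layers
wine ~/games/setup.exe         # install an old Windows game (example path)
dosbox ~/dosgames              # mount an example folder of DOS games as drive C:
```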
#### Access the web
The big challenge with old computers is web surfing. [This study][46] claims that average website size has increased 100% over a three-year period, while [this article][47] describes how bloated news sites have become. Videos, animation, images, trackers, ad requests—they all make websites slower than just a few years ago.
Worse, websites increasingly refuse you access unless you allow them to run their ads. This is a problem because the ads can overwhelm old CPUs. In fact, for most websites, the resources required to run ads and trackers are _way_ greater than those required for the actual website content.
Here are the performance tricks you need to know if you web surf with an older computer:
* Run the fastest, lightest browser possible. Chrome, Firefox, and Opera are probably the top mainstream offerings.
* Try alternative [minimalist browsers][48] to see if they can meet your needs: [Dillo][49], [NetSurf][50], [Dooble][51], [Lynx][52], [Links][53], or others.
* Actively manage your browser:
  * Don't open many browser tabs.
  * Manually start and stop processing in specific tabs.
* Block ads and trackers (one system-wide approach is sketched after this list):
  * Offload this chore to your virtual private network (VPN) if at all possible.
  * Otherwise, use a browser extension.
* Don't slow down your browser by installing add-ons or extensions beyond the minimum required.
* Disable autoplay for videos and Flash.
* Toggle JavaScript off and on.
* Ensure the browser renders text before graphics.
* Don't run background tasks while web surfing.
* Manually clear cookies to avoid page-access limits on some websites.
* Linux means you don't have to run real-time anti-malware (which consumes a CPU core on many Windows PCs).
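For that hosts-file approach to ad blocking, here's a sketch that works on any distro; the StevenBlack list is one popular community-maintained source, and you should review any blocklist before trusting it:

```
sudo cp /etc/hosts /etc/hosts.bak   # back up first
curl -fsSL https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts \
  | sudo tee -a /etc/hosts > /dev/null
```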
Employing some of these tricks, I happily use refurbished dual-core computers for all my web surfing. But with today's internet, I find single-core processors inadequate for anything beyond the occasional web lookup. In other words, they're acceptable for _web access_ but insufficient for _web surfing_. That's just my opinion. Yours may vary depending on your expectations and the nature of your web activity.
### Enjoy free educational fun
However you use your refurbished P-4 or M, you'll know a lot more about computer hardware and open source software than when you started. It won't cost you a penny, and you'll have some fun along the way!
Please share your own refurbishing experiences in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/restore-old-computer-linux
作者:[Howard Fosdick][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/howtech
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_other11x_cc.png?itok=I_kCDYj0 (Two animated computers waving one missing an arm)
[2]: http://opensource.com/article/19/7/how-make-old-computer-useful-again
[3]: http://linuxmint.com/
[4]: https://xubuntu.org/
[5]: http://lubuntu.me/
[6]: http://www.google.com/search?q=uses+for+a+pentium+IV
[7]: https://en.wikipedia.org/wiki/Search_for_extraterrestrial_intelligence
[8]: https://en.wikipedia.org/wiki/Tails_(operating_system)
[9]: http://www.cpubenchmark.net/low_end_cpus.html
[10]: http://www.digitaltrends.com/web/internet-is-getting-slower/
[11]: https://www.forbes.com/sites/christopherhelman/2013/09/07/how-much-energy-does-your-iphone-and-other-devices-use-and-what-to-do-about-it/#ba4918e2f702
[12]: https://en.wikipedia.org/wiki/Pentium_4
[13]: https://en.wikipedia.org/wiki/Pentium_M
[14]: https://en.wikipedia.org/wiki/List_of_Intel_Pentium_4_microprocessors
[15]: http://www.cpu-world.com/CPUs/Pentium_4/index.html
[16]: https://www.revolvy.com/page/List-of-Intel-Pentium-4-microprocessors?cr=1
[17]: https://en.wikipedia.org/wiki/List_of_AMD_microprocessors
[18]: https://en.wikipedia.org/wiki/Celeron
[19]: https://www.craigslist.org/about/sites
[20]: https://www.freecycle.org/
[21]: https://alternativeto.net/software/freecycle/
[22]: http://en.wikipedia.org/wiki/BIOS
[23]: http://www.disk-image.com/faq-bootmenu.htm
[24]: http://www.hirensbootcd.org/download/
[25]: http://www.ultimatebootcd.com/
[26]: http://www.rexxinfo.org/Quick_Guide/Quick_Guide_To_Fixing_Computer_Hardware
[27]: http://www.playtool.com/pages/vidslots/slots.html
[28]: https://silentpc.com/articles/video-connectors
[29]: https://www.cyberciti.biz/faq/howto-find-linux-vga-video-card-ram/
[30]: https://fusetg.com/dangers-running-unsupported-operating-system/
[31]: http://home.bt.com/tech-gadgets/computing/windows-7/windows-7-support-end-11364081315419
[32]: https://itsfoss.com/lightweight-linux-beginners/
[33]: https://fossbytes.com/best-lightweight-linux-distros/
[34]: https://en.wikipedia.org/wiki/Desktop_environment
[35]: http://www.phoronix.com/scan.php?page=article&item=ubu-1704-desktops&num=3
[36]: https://www.google.com/search?ei=TfIoXtG5OYmytAbl04z4Cw&q=best+lightweight+linux+distros+for+old+computers&oq=best+lightweight+linux+distros+for+old&gs_l=psy-ab.1.0.0i22i30l8j0i333.6806.8527..10541...2.2..0.159.1119.2j8......0....1..gws-wiz.......0i71j0.a6LTmaIXan0
[37]: https://en.wikipedia.org/wiki/X_window_manager
[38]: http://puppylinux.com/
[39]: http://murga-linux.com/puppy/
[40]: http://tinycorelinux.net/
[41]: http://tinycorelinux.net/book.html
[42]: http://antixlinux.com/
[43]: https://www.winehq.org/
[44]: https://en.wikipedia.org/wiki/DOSBox
[45]: https://www.dosgamesarchive.com/
[46]: https://www.digitaltrends.com/web/internet-is-getting-slower/
[47]: https://www.forbes.com/sites/kalevleetaru/2016/02/06/why-the-web-is-so-slow-and-what-it-tells-us-about-the-future-of-online-journalism/#34475c2072f4
[48]: http://en.wikipedia.org/wiki/Comparison_of_lightweight_web_browsers
[49]: http://www.dillo.org/
[50]: http://www.netsurf-browser.org/
[51]: http://textbrowser.github.io/dooble/
[52]: http://lynx.browser.org/
[53]: http://en.wikipedia.org/wiki/Links_%28web_browser%29

View File

@@ -1,133 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Create web user interfaces with Qt WebAssembly instead of JavaScript)
[#]: via: (https://opensource.com/article/20/2/wasm-python-webassembly)
[#]: author: (Stephan Avenwedde https://opensource.com/users/hansic99)
Create web user interfaces with Qt WebAssembly instead of JavaScript
======
Get hands-on with Wasm, PyQt, and Qt WebAssembly.
![Digital creative of a browser on the internet][1]
When I first heard about [WebAssembly][2] and the possibility of creating web user interfaces with Qt, just like I would in ordinary C++, I decided to take a deeper look at the technology.
My open source project [Pythonic][3] is completely Python-based (PyQt), and I use C++ at work; therefore, this minimal, straightforward WebAssembly tutorial uses Python on the backend and C++ Qt WebAssembly for the frontend. It is aimed at programmers who, like me, are not familiar with web development.
![Header Qt C++ frontend][4]
### TL;DR
```
git clone https://github.com/hANSIc99/wasm_qt_example
cd wasm_qt_example
python mysite.py
```
Then visit <http://127.0.0.1:7000> with your favorite browser.
### What is WebAssembly?
WebAssembly (often shortened to Wasm) is designed primarily to execute portable binary code in web applications to achieve high execution performance. It is intended to coexist with JavaScript, and both frameworks run in the same sandbox. [Recent performance benchmarks][5] showed that WebAssembly executes roughly 10% to 40% faster, depending on the browser, and given its novelty, we can still expect improvements. The downside of this great execution performance is that Wasm is becoming a preferred language for malware. Crypto miners especially benefit from its speed, and its binary format makes detection harder.
### Toolchain
There is a [getting started guide][6] on the Qt wiki. I recommend sticking exactly to the steps and versions mentioned in this guide. You may need to select your Qt version carefully, as different versions have different features (such as multi-threading), with improvements happening with each release.
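For reference, installing a pinned Emscripten toolchain looks roughly like this; the exact SDK names vary by emsdk release, so treat the version string below as an assumption and confirm it against `./emsdk list` and the Qt guide:

```
git clone https://github.com/emscripten-core/emsdk.git
cd emsdk
./emsdk list                        # find the SDK version the guide calls for
./emsdk install sdk-1.38.27-64bit   # assumed name; check the list output
./emsdk activate sdk-1.38.27-64bit
source ./emsdk_env.sh               # puts emcc and friends on the PATH
```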
To get executable WebAssembly code, simply pass your Qt C++ application through [Emscripten][7]. Emscripten provides the complete toolchain, and the build script couldn't be simpler:
```
#!/bin/sh
source ~/emsdk/emsdk_env.sh    # put the Emscripten toolchain on the PATH
~/Qt/5.13.1/wasm_32/bin/qmake  # generate a Makefile with the Wasm build of Qt
make                           # cross-compile the application to WebAssembly
```
Building takes roughly 10 times longer than with a standard C++ compiler like Clang or g++. The build script will output the following files:
* WASM_Client.js
* WASM_Client.wasm
* qtlogo.svg
* qtloader.js
* WASM_Client.html
* Makefile (intermediate)
The versions on my (Fedora 30) build system are:
* emsdk: 1.38.27
* Qt: 5.13.1
### Frontend
The frontend provides some functionality based on [WebSocket][8].
![Qt-made frontend in browser][9]
* **Send message to server:** Send a simple string message to the server with a WebSocket. You could also do this with a simple HTTP POST request.
* **Start/stop timer:** Create a WebSocket and start a timer on the server to send messages to the client at a regular interval.
* **Upload file:** Upload a file to the server, where the file is saved to the home directory (**~/**) of the user who runs the server.
If you adapt the code and face a compiling error like this:
```
error: static_assert failed due to
requirement bool(-1 == 1) "Required feature http for file
../../Qt/5.13.1/wasm_32/include/QtNetwork/qhttpmultipart.h not available."
QT_REQUIRE_CONFIG(http);
```
it means that the requested feature is not available for Qt Wasm.
### Backend
The server work is done by [Eventlet][10]. I chose Eventlet because it is lightweight and easy to use. Eventlet provides WebSocket functionality and supports threading.
![Decorated functions for WebSocket handling][11]
Inside the repository under **mysite/template**, there is a symbolic link to **WASM_Client.html** in the root path. The static content under **mysite/static** is also linked to the root path of the repository. If you adapt the code and do a recompile, you just have to restart Eventlet to update the content to the client.
Eventlet uses the Web Server Gateway Interface for Python (WSGI). The functions that provide the specific functionality are extended with decorators.
Please note that this is an absolute minimum server implementation. It doesn't implement any multi-user capabilities; every client is able to start/stop the timer, even for other clients.
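A quick way to verify the server is up and serving the frontend (assuming the default port from the TL;DR section):

```
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:7000/   # expect 200
```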
### Conclusion
Take this example code as a starting point to get familiar with WebAssembly without wasting time on minor issues. I make no claims of completeness or best-practice integration. I climbed a long learning curve before I got it running to my satisfaction, and I hope this gives you a brief look into this promising technology.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/wasm-python-webassembly
作者:[Stephan Avenwedde][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/hansic99
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_web_internet_website.png?itok=g5B_Bw62 (Digital creative of a browser on the internet)
[2]: https://webassembly.org/
[3]: https://github.com/hANSIc99/Pythonic
[4]: https://opensource.com/sites/default/files/uploads/cpp_qt.png (Header Qt C++ frontend)
[5]: https://pspdfkit.com/blog/2018/a-real-world-webassembly-benchmark/
[6]: https://wiki.qt.io/Qt_for_WebAssembly#Getting_Started
[7]: https://emscripten.org/docs/introducing_emscripten/index.html
[8]: https://en.wikipedia.org/wiki/WebSocket
[9]: https://opensource.com/sites/default/files/uploads/wasm_frontend.png (Qt-made frontend in browser)
[10]: https://eventlet.net/
[11]: https://opensource.com/sites/default/files/uploads/python_backend.png (Decorated functions for WebSocket handling)

View File

@@ -1,69 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (10 Grafana features you need to know for effective monitoring)
[#]: via: (https://opensource.com/article/20/2/grafana-features)
[#]: author: (Daniel Lee https://opensource.com/users/daniellee)
10 Grafana features you need to know for effective monitoring
======
Learn how to make the most of this open source dashboard tool.
![metrics and data shown on a computer screen][1]
The [Grafana][2] project [started in 2013][3] when [Torkel Ödegaard][4] decided to fork Kibana and turn it into a time-series and graph-focused dashboarding tool. His guiding vision: to make everything look more clean and elegant, with fewer things distracting you from the data.
More than 500,000 active installations later, Grafana dashboards are ubiquitous and instantly recognizable. (Even during a [SpaceX launch][5]!)
Whether you're a recent adopter or an experienced power user, you may not be familiar with all of the features that [Grafana Labs][6]—the company formed to accelerate the adoption of the Grafana project and to build a sustainable business around it—and the Grafana community at large have developed over the past 6+ years.
Here's a look at some of the most impactful:
1. **Dashboard templating**: One of the key features in Grafana, templating allows you to create dashboards that can be reused for lots of different use cases. Values aren't hard-coded with these templates, so for instance, if you have a production server and a test server, you can use the same dashboard for both. Templating allows you to drill down into your data, say, from all data to North America data, down to Texas data, and beyond. You can also share these dashboards across teams within your organization—or if you create a great dashboard template for a popular data source, you can contribute it to the whole community to customize and use.
2. **Provisioning**: While it's easy to click, drag, and drop to create a single dashboard, power users in need of many dashboards will want to automate the setup with a script. You can script anything in Grafana. For example, if you're spinning up a new Kubernetes cluster, you can also spin up a Grafana automatically with a script that would have the right server, IP address, and data sources preset and locked. It's also a way of getting control over a lot of dashboards.
3. **Annotations:** This feature, which shows up as a graph marker in Grafana, is useful for correlating data in case something goes wrong. You can create the annotations manually—just control-click on a graph and input some text—or you can fetch data from any data source. (Check out how Wikimedia uses annotations on its [public Grafana dashboard][7], and here is [another example][8] from the OpenHAB community.) A good example: if you automatically create annotations around releases and you start seeing a lot of errors a few hours after a new release, you can go back to your annotations and check whether the errors started at the same time as the release. This automation can be achieved using the Grafana HTTP API (see examples [here][9] and [here][10], plus the curl sketch after this list). Many of Grafana's largest customers use the HTTP API for a variety of tasks, particularly setting up databases and adding users. It's an alternative to provisioning for automation, and you can do more with it. For instance, the team at DigitalOcean used the API to integrate a [snapshot feature for reviewing dashboards][11].
4. **Kiosk mode and playlists:** If you want to display your Grafana dashboards on a TV monitor, you can use the playlist feature to pick the dashboards that you or your team need to look at through the course of the day and have them cycle through on the screen. The [kiosk mode][12] hides all the user interface elements that you don't need in view-only mode. Helpful hint: The [Grafana Kiosk][13] utility handles logging in, switching to kiosk mode, and opening a playlist—eliminating the pain of logging in on a TV that has no keyboard.
5. **Custom plugins:** Plugins allow you to extend Grafana with integrations with other tools, different visualizations, and more. Some of the most popular in the community are [Worldmap Panel][14] (for visualizing data on top of a map), [Zabbix][15] (an integration with Zabbix metrics), and [Influx Admin Panel][16] (which offers other functionality like creating databases or adding users). But they're only the tip of the iceberg. Just by writing a bit of code, you can get anything that produces a timestamp and a value visualized in Grafana. Plus, Grafana Enterprise customers have access to more plugins for integrations with Splunk, Datadog, New Relic, and others.
6. **Alerting and alert hooks:** If you're using Grafana alerting, you can have alerts sent through a number of different notifiers, including PagerDuty, SMS, email, or Slack. Alert hooks allow you to create different notifiers with a bit of code if you prefer some other channels of communication.
7. **Permissions and teams**: When organizations have one Grafana and multiple teams, they often want the ability to both keep things separate and share dashboards. Early on, the default in Grafana was that everybody could see everyone else's dashboards, and that was it. Later, Grafana introduced multi-tenant mode, in which you can switch organizations but can't share dashboards. Some people were using huge hacks to enable both, so Grafana decided to officially create an easier way to do this. Now you can create a team of users and then set permissions on folders, dashboards, and down to the data source level if you're using Grafana Enterprise.
8. **SQL data sources:** Grafana's native support for SQL helps you turn anything—not just metrics—in an SQL database into metric data that you can graph. Power users are using SQL data sources to do a whole bunch of interesting things, like creating business dashboards that "make sense for your boss's boss," as the team at Percona put it. Check out their [presentation at GrafanaCon][17].
9. **Monitoring your monitoring**: If you're serious about monitoring and you want to monitor your own monitoring, Grafana has its own Prometheus HTTP endpoint that Prometheus can scrape. It's quite simple to get dashboards and statistics. There's also an enterprise version in development that will offer Google Analytics-style easy access to data, such as how much CPU your Grafana is using or how long alerting is taking.
10. **Authentication**: Grafana supports different authentication styles, such as LDAP and OAuth, and allows you to map users to organizations. In Grafana Enterprise, you can also map users to teams: If your company has its own authentication system, Grafana allows you to map the teams in your internal systems to teams in Grafana. That way, you can automatically give people access to the dashboards designated for their teams.
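As a concrete taste of the HTTP API mentioned in item 3 above, posting a release annotation can be as simple as this sketch (the URL, the API-key variable, and the annotation text are placeholders for your own setup):

```
curl -X POST http://localhost:3000/api/annotations \
  -H "Authorization: Bearer $GRAFANA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"text": "Deployed release 1.2.3", "tags": ["release"]}'
```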
Want to take a deeper dive? Join the [Grafana community][18], check out the [how-to section][19], and share what you think.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/grafana-features
作者:[Daniel Lee][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/daniellee
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_data_dashboard_system_computer_analytics.png?itok=oxAeIEI- (metrics and data shown on a computer screen)
[2]: https://github.com/grafana/grafana
[3]: https://grafana.com/blog/2019/09/03/the-mostly-complete-history-of-grafana-ux/
[4]: https://grafana.com/author/torkel
[5]: https://youtu.be/ANv5UfZsvZQ?t=29
[6]: https://grafana.com/
[7]: https://grafana.wikimedia.org/d/000000143/navigation-timing?orgId=1&refresh=5m
[8]: https://community.openhab.org/t/howto-create-annotations-in-grafana-via-rules/48929
[9]: https://docs.microsoft.com/en-us/azure/devops/service-hooks/services/grafana?view=azure-devops
[10]: https://medium.com/contentsquare-engineering-blog/from-events-to-grafana-annotation-f35aafe8bd3d
[11]: https://youtu.be/kV3Ua6guynI
[12]: https://play.grafana.org/d/vmie2cmWz/bar-gauge?orgId=1&refresh=10s&kiosk
[13]: https://github.com/grafana/grafana-kiosk
[14]: https://grafana.com/grafana/plugins/grafana-worldmap-panel
[15]: https://grafana.com/grafana/plugins/alexanderzobnin-zabbix-app
[16]: https://grafana.com/grafana/plugins/natel-influx-admin-panel
[17]: https://www.youtube.com/watch?v=-xlchgoqkqY
[18]: https://community.grafana.com/
[19]: https://community.grafana.com/c/howto/6

View File

@@ -1,272 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (17 Cool Arduino Project Ideas for DIY Enthusiasts)
[#]: via: (https://itsfoss.com/cool-arduino-projects/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
17 Cool Arduino Project Ideas for DIY Enthusiasts
======
[Arduino][1] is an open-source electronics platform that combines both open source software and hardware to let people make interactive projects with ease. You can get Arduino-compatible [single board computers][2] and use them to make something useful.
In addition to the hardware, you will also need to know the [Arduino language][3] to use the [Arduino IDE][4] to successfully create something.
You can code using the web editor or use the Arduino IDE offline. Nevertheless, you can always refer to the [official resources][5] available to learn about Arduino.
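If you prefer the terminal, the official arduino-cli tool (an optional aside, not covered in the resources above) can compile and upload sketches as well. A rough sketch of the workflow for an UNO, assuming the board shows up on /dev/ttyACM0:

```
arduino-cli core install arduino:avr               # board support for AVR boards like the UNO
arduino-cli compile --fqbn arduino:avr:uno Blink   # build the sketch in ./Blink
arduino-cli upload -p /dev/ttyACM0 --fqbn arduino:avr:uno Blink
```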
Considering that you know the essentials, I will be mentioning some of the best (or interesting) Arduino projects. You can try to make them for yourself or modify them to come up with something of your own.
### Interesting Arduino project ideas for beginners, experts, everyone
![][6]
The following projects need a variety of additional hardware, so make sure to check out the official links to the projects (_originally featured on the [official Arduino Project Hub][7]_) to learn more about them.
Also, it is worth noting that they aren't in any particular ranking order, so feel free to try whatever sounds best to you.
#### 1\. LED Controller
Looking for simple Arduino projects? Here's one for you.
This is one of the easiest projects, letting you control LED lights. You do not have to opt for expensive LED products just to decorate your room (or for any other use case); you can simply make an LED controller and customize it to use however you want.
It requires using the [Arduino UNO board][8] and a couple more things (which also includes an Android phone). You can learn more about it in the link to the project below.
[LED Controller][9]
#### 2\. Hot Glue LED Matrix Lamp
![][10]
Another Arduino LED project for you. Since we are talking about using LEDs to decorate, you can also make an LED lamp that looks beautiful.
For this, you might want to make sure that you have a 3D printer. Next, you need an LED strip and **Arduino Nano R3** as the primary materials.
Once you've printed the case and assembled the lamp section, all you need to do is add the glue sticks and figure out the wiring. That may sound deceptively simple, so you can learn more about it on the official Arduino project site.
[LED Matrix Lamp][11]
#### 3\. Arduino Mega Chess
![][12]
Want to have a personal digital chessboard? Why not?
You'll need a TFT LCD touchscreen display and an [Arduino Mega 2560][13] board as the primary materials. If you have a 3D printer, you can create a pretty case for it and make changes accordingly.
Take a look at the original project for inspiration.
[Arduino Mega Chess][14]
#### 4\. Enough Already: Mute My TV
A very interesting project. I wouldn't argue for its usefulness, but if you're annoyed by certain celebrities (or personalities) on TV, you can simply mute their voice whenever they're about to speak.
Technically, it was tested with old tech back then (when you didn't really stream anything). You can watch the video above to get an idea and try to recreate it, or simply head to the link to read more about it.
[Mute My TV][15]
#### 5\. Robot Arm with Controller
![][16]
If you want to do something with the help of a robot while still having manual control over it, this robot arm with a controller is one of the most useful Arduino projects. It uses the [Arduino UNO board][8], if you're wondering.
You will have a robot arm (for which you can 3D-print a case to enhance its usability), and you can use it for a variety of use cases. For instance, you can pick up garbage with the robot arm, or do anything similar where you don't want to intervene directly.
[Robotic Arm With Controller][17]
#### 6\. Make Musical Instrument Using Arduino
I've seen a variety of musical instruments made using Arduino. You can explore the internet if you want something different from this.
You would need a [Pi Supply Flick gesture-sensing board][18] and an **Arduino UNO** to make it happen. It is indeed a cool Arduino project where you simply tap and wave your hands, and your gestures are converted to music. Also, it isn't tough to make, so you should have a lot of fun building it.
[Musical Instrument using Arduino][19]
#### 7\. Pet Trainer: The MuttMentor
An Arduino-based device that assists you to help train your pet sounds exciting!
For this, they're using the [Arduino Nano 33 BLE Sense][20] while utilizing TensorFlow to train a small neural network on the common actions your pet performs. Accordingly, the buzzer offers a reinforcing notification when your pet obeys your command.
This can have wide applications when tweaked as per your requirements. Check out the details below.
[The MuttMentor][21]
#### 8\. Basic Earthquake Detector
Normally, you depend on government officials to announce earthquake information (or warnings for it).
But with Arduino boards, you can simply build a basic earthquake detector and get transparent results for yourself without depending on the authorities. Click on the button below for the details that will help you make it.
[Basic Earthquake Detector][22]
#### 9\. Security Access Using RFID Reader
![][23]
As the project describes it: "_RFID tagging is an ID system that uses small radio frequency identification_."
So, in this project, you will be making an RFID reader using Arduino while pairing it with an [Adafruit NFC card][24] for security access. Check out the full details using the button below and let me know how it works for you.
[Security Access using RFID reader][25]
#### 10\. Smoke Detection using MQ-2 Gas Sensor
![][26]
This could potentially be one of the best Arduino projects out there. You don't need to spend a lot of money to equip your home with smoke detectors; you can manage with a DIY solution to some extent.
Of course, unless you want a complex failsafe setup along with your smoke detector, a basic inexpensive solution should do the trick. In either case, you can also find other applications for the smoke detector.
[Smoke Detector][27]
#### 11\. Arduino Based Amazon Echo using 1Sheeld
![][28]
In case you didn't know, [1Sheeld][29] basically replaces the need for a pile of add-on Arduino shields: you pair it with a smartphone, which then stands in for a variety of shields.
Using five such virtual shields, the original creator of this project made himself a DIY Amazon Echo. You can find all the relevant details, schematics, and code to make it happen.
[DIY Amazon Echo][30]
#### 12\. Audio Spectrum Visualizer
![][31]
Just want to make something cool? Well, here's an idea: an audio spectrum visualizer.
For this, you will need an Arduino Nano R3 and an LED display as the primary materials to get started. You can tweak the display as required and connect it to your headphone output or simply a line-out amplifier.
It is easily one of the cheapest Arduino projects you can try for fun.
[Audio Spectrum Visualizer][32]
#### 13\. Motion Following Motorized Camera
![][33]
Up for a challenge? If you are, this will be one of the coolest Arduino projects on our list.
Basically, this is meant to replace a home security camera that is limited to a fixed recording angle. You can turn that same camera into a motorized one that follows movement.
So, whenever it detects movement, it will change its angle to try to follow the object. You can read more about it to find out how to make it.
[Motion Following Motorized Camera][34]
#### 14\. Water Quality Monitoring System
![][35]
If you're concerned about your health in connection with the water you drink, you can try making this.
It requires an Arduino UNO and water quality sensors as the primary materials. To be honest, it is a useful Arduino project to go for. You can find everything you need to make it in the link below.
[Water Quality Monitoring System][36]
#### 15\. Punch Activated Arm Flamethrower
I would be very cautious about this one, but seriously, it is one of the best (and coolest) Arduino projects I've ever come across.
Of course, it counts as a fun project to see what bigger things you can pull off with Arduino, and here it is. In the project, the creator originally used the [SparkFun Arduino Pro Mini 328][37] along with an accelerometer as the primary materials.
[Punch Activated Flamethrower][38]
#### 16\. Polar Drawing Machine
![][39]
This isn't any ordinary plotter machine of the kind you might've seen people create using Arduino boards.
With this, you can draw some cool vector graphics images or bitmaps. It might sound like a bit of overkill, but it could also be fun to do something like this.
This could be a tricky project, so you can refer to the details on the link to go through it thoroughly.
[Polar Drawing Machine][40]
#### 17\. Home Automation
Technically, this is just a broad project idea because you can utilize the Arduino board to automate almost anything you want at your home.
Just like I mentioned, you can go for a security access device, maybe create something that automatically waters the plants or simply make an alarm system.
There are countless possibilities for what you can automate at your home. For reference, I've linked to an interesting home automation project below.
[Home Automation][41]
#### Bonus: Robot Cat (OpenCat)
![][42]
A programmable robotic cat for AI-enhanced services and STEM education. In this project, both Arduino and Raspberry Pi boards have been utilized.
You can also look at the [Raspberry Pi alternatives][2] if you want. This project needs a lot of work, so you will want to invest a good amount of time to make it work.
[OpenCat][43]
**Wrapping Up**
With the help of Arduino boards (coupled with other sensors and materials), you can complete a lot of projects with ease. Some of the projects I've listed above are suitable for beginners, and some are not. Feel free to take your time analyzing what you need and what the project will cost before proceeding.
Did I miss an interesting Arduino project that deserves a mention here? Let me know your thoughts in the comments.
--------------------------------------------------------------------------------
via: https://itsfoss.com/cool-arduino-projects/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://www.arduino.cc/
[2]: https://itsfoss.com/raspberry-pi-alternatives/
[3]: https://www.arduino.cc/reference/en/
[4]: https://www.arduino.cc/en/main/software
[5]: https://www.arduino.cc/en/Guide/HomePage
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/arduino-project-ideas.jpg?ssl=1
[7]: https://create.arduino.cc/projecthub
[8]: https://store.arduino.cc/usa/arduino-uno-rev3
[9]: https://create.arduino.cc/projecthub/mayooghgirish/arduino-bluetooth-basic-tutorial-d8b737?ref=platform&ref_id=424_trending___&offset=89
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/led-matrix-lamp.jpg?ssl=1
[11]: https://create.arduino.cc/projecthub/john-bradnam/hot-glue-led-matrix-lamp-42322b?ref=platform&ref_id=424_trending___&offset=42
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/arduino-chess-board.jpg?ssl=1
[13]: https://store.arduino.cc/usa/mega-2560-r3
[14]: https://create.arduino.cc/projecthub/Sergey_Urusov/arduino-mega-chess-d54383?ref=platform&ref_id=424_trending___&offset=95
[15]: https://makezine.com/2011/08/16/enough-already-the-arduino-solution-to-overexposed-celebs/
[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/robotic-arm-controller.jpg?ssl=1
[17]: https://create.arduino.cc/projecthub/H0meMadeGarbage/robot-arm-with-controller-2038df?ref=platform&ref_id=424_trending___&offset=13
[18]: https://uk.pi-supply.com/products/flick-hat-3d-tracking-gesture-hat-raspberry-pi
[19]: https://create.arduino.cc/projecthub/lanmiLab/make-musical-instrument-using-arduino-and-flick-large-e2890b?ref=platform&ref_id=424_trending___&offset=24
[20]: https://store.arduino.cc/usa/nano-33-ble-sense
[21]: https://create.arduino.cc/projecthub/whatsupdog/the-muttmentor-9d9753?ref=platform&ref_id=424_trending___&offset=44
[22]: https://www.instructables.com/id/Basic-Arduino-Earthquake-Detector/
[23]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/security-access-arduino.jpg?ssl=1
[24]: https://www.adafruit.com/product/359
[25]: https://create.arduino.cc/projecthub/Aritro/security-access-using-rfid-reader-f7c746?ref=platform&ref_id=424_trending___&offset=85
[26]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/smoke-detection-arduino.jpg?ssl=1
[27]: https://create.arduino.cc/projecthub/Aritro/smoke-detection-using-mq-2-gas-sensor-79c54a?ref=platform&ref_id=424_trending___&offset=89
[28]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/diy-amazon-echo.jpg?ssl=1
[29]: https://1sheeld.com/
[30]: https://create.arduino.cc/projecthub/ahmedismail3115/arduino-based-amazon-echo-using-1sheeld-84fa6f?ref=platform&ref_id=424_trending___&offset=91
[31]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/audio-spectrum-visualizer.jpg?ssl=1
[32]: https://create.arduino.cc/projecthub/Shajeeb/32-band-audio-spectrum-visualizer-analyzer-902f51?ref=platform&ref_id=424_trending___&offset=87
[33]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/motion-following-camera.jpg?ssl=1
[34]: https://create.arduino.cc/projecthub/lindsi8784/motion-following-motorized-camera-base-61afeb?ref=platform&ref_id=424_trending___&offset=86
[35]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/water-quality-monitoring.jpg?ssl=1
[36]: https://create.arduino.cc/projecthub/chanhj/water-quality-monitoring-system-ddcb43?ref=platform&ref_id=424_trending___&offset=93
[37]: https://www.sparkfun.com/products/11113
[38]: https://create.arduino.cc/projecthub/Advanced/punch-activated-arm-flamethrowers-real-firebending-95bb80
[39]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/polar-drawing-machine.jpg?ssl=1
[40]: https://create.arduino.cc/projecthub/ArduinoFT/polar-drawing-machine-f7a05c?ref=search&ref_id=drawing&offset=2
[41]: https://create.arduino.cc/projecthub/ahmedel-hinidy2014/home-management-system-control-your-home-from-a-website-076846?ref=search&ref_id=home%20automation&offset=4
[42]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/opencat.jpg?ssl=1
[43]: https://create.arduino.cc/projecthub/petoi/opencat-845129?ref=platform&ref_id=424_popular___&offset=8

View File

@@ -1,152 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Revive your RSS feed with Newsboat in the Linux terminal)
[#]: via: (https://opensource.com/article/20/2/newsboat)
[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt)
Revive your RSS feed with Newsboat in the Linux terminal
======
Newsboat is an excellent RSS reader, whether you need a basic set of
features or want your application to do a whole lot more.
![Boat on the ocean with Creative Commons sail][1]
Psst. Word on the web is that RSS died in 2013. That's when Google pulled the plug on Google Reader.
Don't believe everything that you hear. RSS is alive. It's well. It's still a great way to choose the information you want to read without algorithms making the decision for you. All you need is the [right feed reader][2].
Back in January, Opensource.com Correspondent [Kevin Sonney][3] introduced a nifty terminal RSS reader [called Newsboat][4]. In his article, Kevin scratched Newsboat's surface. I figured it was time to take a deeper dive into what Newsboat can do.
### Adding RSS feeds to Newsboat
As Kevin writes, "installing Newsboat is pretty easy since it is included with most distributions (and Homebrew on macOS)." You can, as Kevin also notes, import a [file containing RSS feeds][5] from another reader. If this is your first kick at the RSS can or it's been a while since you've used an RSS reader, chances are you don't have one of those files handy.
Not to worry. You just need to do some copying and pasting. Go to the folder **.newsboat** in your **/home** directory. Once you're there, open the file **urls** in a text editor. Then, go to the websites you want to read, find the links to their RSS feeds, and copy and paste them into the **urls** file.
![Newsboat urls file][6]
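If you'd rather do that from a shell, a heredoc works too; the feeds below are just example URLs, so substitute your own:

```
cat >> ~/.newsboat/urls << 'EOF'
https://opensource.com/feed
https://itsfoss.com/feed/
EOF
```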
Start Newsboat, and you're ready to get reading.
### Reading your feeds
As Kevin Sonney points out, you refresh your feeds by pressing the **r** or **R** keys on your keyboard. To read the articles from a feed, press **Enter** to open that feed and scroll down the list. Then, press **Enter** to read an item.
![Newsboat reading][7]
Return to the list of articles by pressing **q**. Press **q** again to return to your list of feeds.
Every so often, you might run into a feed that shows just part of an article. That can be annoying. To get the full article, press **o** to open it in your desktop's default web browser. On my desktop, for example, that's Firefox. You can change the browser Newsboat works with; I'll explain that below.
### Following links
Hyperlinking has been a staple of the web since its beginnings at CERN in the early 1990s. It's hard to find an article published online that doesn't contain at least a couple of links that point elsewhere.
Instead of leaving links embedded in an article or post, Newsboat gathers them into a numbered list at the end of the article or post.
![Hyperlinks in Newsboat][8]
To follow a link, press the number beside it. In the screenshot above, you'd press **4** to open the link to the homepage of one of the contributors to that article. The link, as you've probably guessed, opens in your default browser.
### Using Newsboat as a client for other feed readers
You might use a web-based feed reader, but might also want to read your RSS feeds in something a bit more minimal on your desktop. Newsboat can do that.
It works with several feed readers, including The Old Reader, Inoreader, Newsblur, Tiny Tiny RSS, FeedHQ, and the newsreader apps for [ownCloud][9] and [Nextcloud][10]. Before you can read feeds from any of them, you'll need to do a little work.
Go back to the **.newsboat** folder in your **/home** directory and create a file named **config**. Then add the settings that hook Newsboat into one of the RSS readers it supports. You can find more information about the specific settings for each reader in [Newsboat's documentation][11].
Here's an example of the settings I use to connect Newsboat with the newsreader app in my instance of Nextcloud:
```
urls-source "ocnews"
ocnews-url "https://my.nextcloud.instance"
ocnews-login "myUserName"
ocnews-password "NotTellingYouThat!"
```
I've tested this with Nextcloud, The Old Reader, Inoreader, and Newsblur. Newsboat worked seamlessly with all of them.
![Newsboat with The Old Reader][12]
### Other useful configuration tricks
You can really unleash Newsboat's power and flexibility by tapping into [its configuration options][13]. That includes changing text colors, the order Newsboat sorts feeds, where it saves articles, the length of time Newsboat keeps articles, and more.
Below are a few of the options I've added to my configuration file.
#### Change Newsboat's default browser
As I mentioned a few paragraphs back, Newsboat opens articles in your default graphical web browser. If you want to read feeds in a [text-only browser][14] like w3m or ELinks, add this to your Newsboat configuration file:
```
browser "/path/to/browser %u"
```
In my configuration file, I've set w3m up as my browser:
```
browser "/usr/bin/w3m %u"
```
![Newsboat with w3m][15]
#### Remove read articles
I like an uncluttered RSS feed. That means getting rid of articles I've already read. Add this setting to the configuration file to have Newsboat do that automatically:
```
show-read-feeds  no
```
#### Refresh feeds at launch
Life gets busy. Sometimes, I go a day or two without checking my RSS feeds. That means having to refresh them after I fire Newsboat up. Sure, I can press **r** or **R**, but why not have the application do it for me? I've added this setting to my configuration file to have Newsboat refresh all of my feeds when I launch it:
```
refresh-on-startup  yes
```
If you have a lot of feeds, it can take a while to refresh them. I have around 80 feeds, and it takes over a minute to get new content from all of them.
### Is that everything?
Not even close. In addition to all of its configuration options, Newsboat also has a number of command-line switches you can use when you fire it up. Read more about them in the [documentation][16].
On the surface, Newsboat is simple. But a lot of power and flexibility hides under its hood. That makes Newsboat an excellent RSS reader for anyone who needs a basic set of features or for someone who needs their RSS reader to do a whole lot more.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/newsboat
作者:[Scott Nesbitt][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/scottnesbitt
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/CreativeCommons_ideas_520x292_1112JS.png?itok=otei0vKb (Boat on the ocean with Creative Commons sail)
[2]: https://opensource.com/article/17/3/rss-feed-readers
[3]: https://opensource.com/users/ksonney
[4]: https://opensource.com/article/20/1/open-source-rss-feed-reader
[5]: https://en.wikipedia.org/wiki/OPML
[6]: https://opensource.com/sites/default/files/uploads/newsboat-urls-file.png (Newsboat urls file)
[7]: https://opensource.com/sites/default/files/uploads/newsboat-reading.png (Newsboat reading)
[8]: https://opensource.com/sites/default/files/uploads/newsboat-links.png (Hyperlinks in Newsboat)
[9]: https://github.com/owncloudarchive/news
[10]: https://github.com/nextcloud/news
[11]: https://newsboat.org/releases/2.18/docs/newsboat.html#_newsboat_as_a_client_for_newsreading_services
[12]: https://opensource.com/sites/default/files/uploads/newsboat-oldreader.png (Newsboat with The Old Reader)
[13]: https://newsboat.org/releases/2.18/docs/newsboat.html#_example_configuration
[14]: https://opensource.com/article/16/12/web-browsers-linux-command-line
[15]: https://opensource.com/sites/default/files/uploads/newsboat-read-with-w3m.png (Newsboat with w3m)
[16]: https://newsboat.org/releases/2.18/docs/newsboat.html

View File

@@ -1,288 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Level up your use of Helm on Kubernetes with Charts)
[#]: via: (https://opensource.com/article/20/3/helm-kubernetes-charts)
[#]: author: (Jessica Cherry https://opensource.com/users/jrepka)
Level up your use of Helm on Kubernetes with Charts
======
Configuring known apps using the Helm package manager.
![Ships at sea on the web][1]
Applications are complex collections of code and configuration with a lot of nuance in how they are installed. Like all open source software, they can be installed from source code, but most of the time users want to install something simply and consistently. That's why package managers, which manage the installation process, exist for nearly every operating system.
Similarly, Kubernetes depends on package management to simplify the installation process. In this article, we'll use the Helm package manager and its concept of stable charts to create a small application.
### What is Helm package manager?
[Helm][2] is a package manager for applications that are deployed to and run on Kubernetes. It is maintained by the [Cloud Native Computing Foundation][3] (CNCF) in collaboration with the largest companies using Kubernetes. Helm can be used as a command-line utility, which [I cover how to use here][4].
#### Installing Helm
Installing Helm is quick and easy on Linux and macOS. There are two ways to do this: you can go to the release [page][5], download your preferred version, untar the file, and move the Helm executable to **/usr/local/bin** or **/usr/bin**, whichever you use.
Alternatively, you can use your operating system's package manager (**dnf**, **snap**, **brew**, or otherwise) to install it. There are instructions for each OS on this [GitHub page][6].
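For example, two common package-manager routes look like this (assuming snap or Homebrew is already set up on your machine):

```
sudo snap install helm --classic   # Linux with snap support
brew install helm                  # macOS (or Linux with Homebrew)
```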
### What are Helm Charts?
We want to be able to install applications repeatably, but also to customize them for our environment. That's where Helm Charts come into play. Helm coordinates the deployment of applications using standardized templates called Charts. Charts are used to define, install, and upgrade your applications at any level of complexity.
> A _Chart_ is a Helm package. It contains all of the resource definitions necessary to run an application, tool, or service inside of a Kubernetes cluster. Think of it like the Kubernetes equivalent of a Homebrew formula, an Apt dpkg, or a Yum RPM file.
>
> [Using Helm][7]
Charts are quick to create, and I find them straightforward to maintain. If you have one that is accessible from a public version control site, you can publish it to the [stable repository][8] to give it greater visibility. In order for a Chart to be added to stable, it must meet a number of [technical requirements][9]. In the end, if it is considered properly maintained by the Helm maintainers, it can then be published to [Helm Hub][10].
Since we want to use the community-curated stable charts, we will make that easier by adding a shortcut: 
```
$ helm repo add stable https://kubernetes-charts.storage.googleapis.com
"stable" has been added to your repositories
```
### Running our first Helm Chart
Since I've already covered basic Helm usage in [this article][11], I'll focus here on how to edit and use charts. To follow along, you'll need Helm installed and access to a Kubernetes environment, like minikube (which you can walk through [here][12] or [here][13]).
To start, I'll pick one chart. Usually I use Jenkins as my example, and I would gladly do so here if its chart weren't so complex. This time, I'll use a basic chart and create a small wiki, using [MediaWiki and its chart][14].
So how do I get this chart? Helm makes that as easy as a pull.
By default, charts are compressed in a .tgz file, but we can unpack that file to customize our wiki by using the **--untar** flag.
```
$ helm pull stable/mediawiki --untar
$ ls
mediawiki/
$ cd mediawiki/
$ ls
Chart.yaml         README.md          requirements.lock  templates/
OWNERS             charts/            requirements.yaml  values.yaml
```
Now that we have this, we can begin customizing the chart.
### Editing your Helm Chart
When the file was untarred, a massive number of files came out. While that looks frightening, there is really only one file we should be working with: the **values.yaml** file.
Everything else that was unpacked is a set of template files holding the information for the basic application configuration, and all of those templates depend on what is configured in values.yaml. Most of these templates and chart files create service accounts in the cluster and the various required application configurations that you would usually put together if you were building this application on a regular server.
But on to the values.yaml file and what we should change in it. Open it in your favorite text editor or IDE. We see a [YAML][15] file with a ton of configuration. If we zoom in on just the container image section, we see its repository, registry, and tags among other details.
```
## Bitnami MediaWiki image version
## ref: <https://hub.docker.com/r/bitnami/mediawiki/tags/>
##
image:
  registry: docker.io
  repository: bitnami/mediawiki
  tag: 1.34.0-debian-10-r31
  ## Specify an imagePullPolicy
  ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
  ## ref: <http://kubernetes.io/docs/user-guide/images/\#pre-pulling-images>
  ##
  pullPolicy: IfNotPresent
  ## Optionally specify an array of imagePullSecrets.
  ## Secrets must be manually created in the namespace.
  ## ref: <https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/>
  ##
  # pullSecrets:
  #   - myRegistryKeySecretName
```
As you can see in the file, each configuration value is well-defined. Our pull policy is set to **IfNotPresent**, which tells Kubernetes not to re-pull the container image if it is already present on the node. If it's set to **Always**, the image will default to the latest version on every pull. I'll be using the default in this case, as in the past I have run into images breaking when they moved to the latest version without me expecting it (remember to version control your software, folks).
### Customizing our Helm Chart
So let's configure this values file with some basic changes and make it our own. I'll be changing some naming conventions, the wiki username, and the MediaWiki site name. _Note: This is another snippet from values.yaml. All of this customization happens in that one file._
```
## User of the application
## ref: <https://github.com/bitnami/bitnami-docker-mediawiki\#environment-variables>
##
mediawikiUser: cherrybomb
## Application password
## Defaults to a random 10-character alphanumeric string if not set
## ref: <https://github.com/bitnami/bitnami-docker-mediawiki\#environment-variables>
##
# mediawikiPassword:
## Admin email
## ref: <https://github.com/bitnami/bitnami-docker-mediawiki\#environment-variables>
##
mediawikiEmail: root@example.com
## Name for the wiki
## ref: <https://github.com/bitnami/bitnami-docker-mediawiki\#environment-variables>
##
mediawikiName: Jess's Home of Helm
```
After this, I'll make some small modifications to our database name and user account. I changed the defaults to "jess" so you can see where changes were made.
```
externalDatabase:
  ## Database host
  host:
  ## Database port
  port: 3306
  ## Database user
  user: jess_mediawiki
  ## Database password
  password:
  ## Database name
  database: jess_mediawiki
##
## MariaDB chart configuration
##
## <https://github.com/helm/charts/blob/master/stable/mariadb/values.yaml>
##
mariadb:
  ## Whether to deploy a mariadb server to satisfy the applications database requirements. To use an external database set this to false and configure the externalDatabase parameters
  enabled: true
  ## Disable MariaDB replication
  replication:
    enabled: false
  ## Create a database and a database user
  ## ref: <https://github.com/bitnami/bitnami-docker-mariadb/blob/master/README.md\#creating-a-database-user-on-first-run>
  ##
  db:
    name: jess_mediawiki
    user: jess_mediawiki
```
And finally, I'll add some ports in our load balancer to allow traffic from the local host. I'm running on minikube and find the **LoadBalancer** option works well.
```
service:
  ## Kubernetes svc type
  ## For minikube, set this to NodePort, elsewhere use LoadBalancer
  ##
  type: LoadBalancer
  ## Use serviceLoadBalancerIP to request a specific static IP,
  ## otherwise leave blank
  ##
  # loadBalancerIP:
  # HTTP Port
  port: 80
  # HTTPS Port
  ## Set this to any value (recommended: 443) to enable the https service port
  # httpsPort: 443
  ## Use nodePorts to request some specific ports when using NodePort
  ## nodePorts:
  ##   http: <to set explicitly, choose port between 30000-32767>
  ##   https: <to set explicitly, choose port between 30000-32767>
  ##
  # nodePorts:
  #  http: "30000"
  #  https: "30001"
  ## Enable client source IP preservation
  ## ref <http://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/\#preserving-the-client-source-ip>
  ##
  externalTrafficPolicy: Cluster
```
Now that we have made the configurations to allow traffic and create the database, we know that we can go ahead and deploy our chart.
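Incidentally, if you only need a handful of overrides, Helm can also apply them at install time with **--set** flags instead of editing values.yaml. A minimal sketch reusing a couple of the values above:

```
$ helm install jesswiki stable/mediawiki \
    --set mediawikiUser=cherrybomb \
    --set service.type=LoadBalancer
```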
### Deploy and enjoy!
Now that we have our custom version of the wiki, it's time to create a deployment. Before we get into that, let's first confirm that nothing else is installed with Helm, to make sure my cluster has resources available to run our wiki.
```
$ helm ls
NAME    NAMESPACE       REVISION        UPDATED STATUS  CHART   APP VERSION
```
There are no other deployments through Helm right now, so let's proceed with ours. 
```
$ helm install jesswiki -f values.yaml stable/mediawiki
NAME: jesswiki
LAST DEPLOYED: Thu Mar  5 12:35:31 2020
NAMESPACE: default
STATUS: deployed
REVISION: 2
NOTES:
1. Get the MediaWiki URL by running:
  NOTE: It may take a few minutes for the LoadBalancer IP to be available.
        Watch the status with: 'kubectl get svc --namespace default -w jesswiki-mediawiki'
  export SERVICE_IP=$(kubectl get svc --namespace default jesswiki-mediawiki --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")
  echo "Mediawiki URL: http://$SERVICE_IP/"
2. Get your MediaWiki login credentials by running:
    echo Username: user
    echo Password: $(kubectl get secret --namespace default jesswiki-mediawiki -o jsonpath="{.data.mediawiki-password}" | base64 --decode)
$
```
Perfect! Now we will navigate to the wiki, which is accessible at the cluster IP address. To confirm that address:
```
kubectl get svc --namespace default -w jesswiki-mediawiki
NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
jesswiki-mediawiki   LoadBalancer   10.103.180.70   <pending>     80:30220/TCP   17s
```
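On minikube, the EXTERNAL-IP can stay pending indefinitely, because nothing provisions load balancers for you. If that happens to you, either of these standard minikube commands will get you a reachable address (run `minikube tunnel` in a separate terminal; it assigns real external IPs to LoadBalancer services):
```
$ minikube service jesswiki-mediawiki --url
$ minikube tunnel
```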
Now that we have the IP, we can check to see if it's up:
![A working wiki installed through helm charts][17]
Now we have our new wiki up and running, and we can enjoy our new application with our personal edits. Use the command from the output above to get the password and start to fill in your wiki.
### Conclusion
Helm is a powerful package manager that makes installing and uninstalling applications on top of Kubernetes as simple as a single command. Charts add to the experience by giving us curated and tested templates to install applications with our unique customizations. Keep exploring what Helm and Charts have to offer and let me know what you do with them in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/3/helm-kubernetes-charts
作者:[Jessica Cherry][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jrepka
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/kubernetes_containers_ship_lead.png?itok=9EUnSwci (Ships at sea on the web)
[2]: https://www.google.com/url?q=https://helm.sh/&sa=D&ust=1583425787800000
[3]: https://www.google.com/url?q=https://www.cncf.io/&sa=D&ust=1583425787800000
[4]: https://www.google.com/url?q=https://opensource.com/article/20/2/kubectl-helm-commands&sa=D&ust=1583425787801000
[5]: https://www.google.com/url?q=https://github.com/helm/helm/releases/tag/v3.1.1&sa=D&ust=1583425787801000
[6]: https://www.google.com/url?q=https://github.com/helm/helm&sa=D&ust=1583425787802000
[7]: https://helm.sh/docs/intro/using_helm/
[8]: https://www.google.com/url?q=https://github.com/helm/charts&sa=D&ust=1583425787803000
[9]: https://github.com/helm/charts/blob/master/CONTRIBUTING.md#technical-requirements
[10]: https://www.google.com/url?q=https://hub.helm.sh/&sa=D&ust=1583425787803000
[11]: https://www.google.com/url?q=https://opensource.com/article/20/2/kubectl-helm-commands&sa=D&ust=1583425787803000
[12]: https://www.google.com/url?q=https://opensource.com/article/18/10/getting-started-minikube&sa=D&ust=1583425787804000
[13]: https://www.google.com/url?q=https://opensource.com/article/19/7/security-scanning-your-devops-pipeline&sa=D&ust=1583425787804000
[14]: https://www.google.com/url?q=https://github.com/helm/charts/tree/master/stable/mediawiki&sa=D&ust=1583425787805000
[15]: https://en.wikipedia.org/wiki/YAML
[17]: https://opensource.com/sites/default/files/uploads/lookitworked.png (A working wiki installed through helm charts)

View File

@ -1,85 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Open source alternative for multi-factor authentication: privacyIDEA)
[#]: via: (https://opensource.com/article/20/3/open-source-multi-factor-authentication)
[#]: author: (Cornelius Kölbel https://opensource.com/users/cornelius-k%C3%B6lbel)
Open source alternative for multi-factor authentication: privacyIDEA
======
As technology changes, so too will our need to adapt our authentication
mechanisms.
![Three closed doors][1]
Two-factor authentication, or multi-factor authentication (MFA), is not a topic only for nerds anymore. Many services on the internet provide it, and many end users demand it. While the average end user might only notice whether their preferred website offers MFA, there is more to it behind the scenes.
The two-factor market is changing, and changing rapidly. New authentication methods arise, classical vendors merge, and products disappear.
The end user might not be bothered at all, but organizations and companies that want to require multi-factor authentication for their users may wonder where to turn and which horse to bet on.
Companies like Secure Computing, Aladdin, SafeNet, Cryptocard, Gemalto, and Thales have provided authentication solutions for organizations for decades and have been involved in a round dance of [mergers and acquisitions][2] during the last ten years. And the user was the one who suffered. While the IT department thought it was rolling out reliable software from a successful vendor, a few years later, it was confronted with the product reaching end-of-life.
### How the cloud changes things
In 1986, RSA released RSA SecurID, a physical hardware token displaying magic numbers based on an unknown, proprietary algorithm. But, almost 20 years later, thanks to the Open Authentication Initiative, HOTP (RFC4226) and TOTP (RFC6238) were specified—originally for OTP hardware tokens.
SMS Passcode, which specialized in authenticating by sending text messages, was founded in 2005; no hardware token required. While other on-premises solutions kept the authentication server and the enrollment in a confined environment, with SMS Passcode, the authentication information (a secret text message) was transported via the mobile network to the user.
The iPhone 1 was released in 2007, and the Android phone quickly followed. DUO Security was founded in 2009 as a specific cloud MFA provider, with the smartphone acting as a second factor. Both vendors concentrated on a new second factor—the phone with a text message or the smartphone with an app—and they offered and used infrastructure that was not part of the company's network anymore.
Classical on-premises vendors started to move to the cloud, either by offering their new services or acquiring smaller vendors with cloud solutions, such as SafeNet's [acquisition of Cryptocard in 2012][3]. It seemed tempting for classical vendors to offer cloud services—no software updates on-premises, no support cases, unlimited scaling, and unlimited revenue.
Even the old top dog, RSA, now offers a "Cloud Authentication Service." And doesn't it make sense to put authentication services in the cloud? The data is hosted at cloud services like Azure, the identities are hosted in the cloud at Azure AD, so why not put authentication there with Azure MFA? This approach might make sense for companies with a complete cloud-centric approach, but it also probably locks you into one specific vendor.
The cloud seems to be a big topic for multi-factor authentication, too. But what if you want to stay on-premises?
### The state of multi-factor authentication technology
Multi-factor authentication has also come a long way since 1986, when RSA introduced its first OTP tokens. A few decades ago, well-paid consultants made a living by rolling out PKI concepts, since smartcard authentication needed a working certificate infrastructure.
After OTP keyfob tokens, smartphones with HOTP and TOTP apps, and even push notifications, the current state of the art in authentication seems to be FIDO2/WebAuthn. While U2F was specified by the FIDO Alliance alone, WebAuthn was specified by none other than the W3C, and the good news is that the base requirements have been integrated into all browsers except Internet Explorer.
However, applications still need to add a lot of code to support WebAuthn. On the other hand, WebAuthn allows for new authentication devices, like TPM chips in tablets, computers, and smartphones, or cheap and small hardware devices. Then again, U2F also looked good back in its day, and even it did not make a breakthrough. Will WebAuthn?
These are challenging times: currently, you probably cannot use WebAuthn, but in two years, you'll probably want to. Thus, you need a system that allows you to adapt your authentication mechanisms.
### Getting actual requirements
Flexibility is one of the first requirements when you are about to choose a multi-factor authentication solution. It will not work out to rely solely on text messages, on one single smartphone app, or only on WebAuthn tokens. The smartphone app may vanish; the WebAuthn devices might not be applicable in all situations.
Looking at the mergers and acquisitions, we learned that software going end-of-life or vendors ceasing their cloud services did happen and can happen again. And sometimes it is only the last few months that hurt, when the end of sales means that you cannot buy any new user licenses or onboard any new users! To get a lasting solution, you need to be independent of cloud services and vendor decisions. The safest way to do that is to go for an open source solution.
But when going for an open source solution, you want a reliable system: reliable meaning that you can be sure updates will not break things, that bugs will be fixed, and that there are people you can ask.
### An open source alternative: privacyIDEA
Concentrated experiences in the two-factor market since 2004 have been incorporated into the open source software alternative: [privacyIDEA][4].
privacyIDEA is an open source solution providing a wide variety of authentication technologies. It started with HOTP and TOTP tokens, but it also supports SMS, email, push notifications, SSH keys, X.509 certificates, Yubikeys, Nitrokeys, U2F, and a lot more. Support for WebAuthn is currently being added.
The modular structure of the token types (which are Python classes) allows new types to be added quickly, making it highly flexible in regard to authentication methods. It runs on-premises at a central location in your network. This way, you stay flexible, keep control over your network, and keep pace with the latest developments.
privacyIDEA comes with a mighty and flexible policy framework that allows you to adapt it to your needs. Its unique event handler modules enable you to fit privacyIDEA into your existing workflows or create new workflows that work best for your scenario. It also plays nicely with others, integrating with identity and authentication solutions like FreeRADIUS, simpleSAMLphp, Keycloak, and Shibboleth. This flexibility may be why organizations like the World Wide Web Consortium and companies like Axiad use privacyIDEA.
privacyIDEA is developed [on GitHub][5] and backed by a Germany-based company providing services and support worldwide.
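If you want to give privacyIDEA a quick try, it is a Python application published on PyPI. Here is a minimal sketch of a test installation in a virtual environment; for production, follow the project's own installation documentation instead:
```
$ python3 -m venv /opt/privacyidea
$ source /opt/privacyidea/bin/activate
$ pip install privacyidea
```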
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/3/open-source-multi-factor-authentication
作者:[Cornelius Kölbel][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/cornelius-k%C3%B6lbel
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDU_UnspokenBlockers_1110_A.png?itok=x8A9mqVA (Three closed doors)
[2]: https://netknights.it/en/consolidation-of-the-market-and-migrations/
[3]: https://www.infosecurity-magazine.com/news/safenet-acquires-cryptocard/
[4]: https://privacyidea.org
[5]: https://github.com/privacyidea/privacyidea

View File

@ -1,75 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to set up a remote school environment for kids with Linux)
[#]: via: (https://opensource.com/article/20/4/school-home-linux)
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss)
How to set up a remote school environment for kids with Linux
======
Repurpose an old computer to support the new home-schooler in your life.
![Image by Alan Formy-Duvall][1]
COVID-19 has suddenly thrown all of us into a new and challenging situation. Many of us are now working full-time from home, and for a lot of us (especially people who aren't used to working remotely), this is taking some getting used to.
Another group that is similarly challenged is our kids. They can't go to school or participate in their regular after-school activities. My daughter's elementary school closed its classrooms and is teaching through an online, web-based learning portal instead. And one of her favorite extracurricular activities—a coding school where she has been learning Scratch and just recently "graduated" to WoofJS—has also gone to an online-only format.
We are fortunate that so many of our children's activities can be done online now, as this is the only way they will be able to learn, share, and socialize for at least the next several months.
### Setting up a temporary homeschool environment
When our daughter's school went to an online-only format, we realized she needed a place and some tools to do her work. So we cleaned off her desk and cleared the toys from the floor around it to make an "office" for her. We also realized she would need a computer. While I could have shopped online and ordered a new computer (and spent at least several hundred dollars—if not more than $1,000—in the process), I chose an alternative and put an old, unused laptop back to work.
If you have an unused computer sitting around and are willing to do a bit of tech work, you, too, can set something up to get your kids online. Here's how I did it.
### The hardware
While my daughter already has her own small IT department (as I like to say), it consists of some gaming systems, a tablet, and a Chromebook. Even her Chromebook has just an 11.6" screen and a small keyboard, so none of her devices are really quite adequate for full-time school duty.
So we found ourselves in a pinch. She really needed a desktop-capable computer system with a decent-sized screen, a full keyboard, a good-quality microphone, a set of speakers, and a headphone jack. And having an external video connector helps if you decide one screen isn't enough.
I didn't have a spare desktop, but I did have a laptop: a Lenovo G550 with a Pentium Dual-Core T4500 2.3GHz processor and 4GB RAM. I replaced its aging 5400RPM spindle hard drive with a 240GB solid-state drive. The laptop has a 15.6" screen, which is much easier to view than the small screens on her other devices, and a comfortable, full-size keyboard. Its CPU scores a bit better in PassMark's benchmarks (913 vs. 674) than the 1.6GHz Intel Celeron N3060 Dual-Core in the Chromebook.
However, it is 10 years old, certainly on the edge of usability by today's standards. But, thanks to the efficiency of the Linux operating system, it gets the job done. I installed the latest version (v31) of [Fedora Workstation][2], but many other distributions will work just fine. If you really want to eke out every drop of performance, you could use one of the [lightweight Linux distributions][3]. The only area that required a little extra effort with Fedora was the wireless; I had to install the driver for the Broadcom WiFi hardware. But really, this was only a few extra steps and a restart, and it was good to go.
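For reference, the Broadcom driver on Fedora typically comes from the RPM Fusion nonfree repository. Roughly, the steps look like the sketch below; the exact package depends on your wireless chip, so check the RPM Fusion documentation:
```
$ sudo dnf install \
    https://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
$ sudo dnf install broadcom-wl
$ sudo reboot
```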
Linux supports all of the other hardware in the laptop. My daughter prefers a full-sized mouse over the touchpad, so I attached one. She likes the keyboard on this laptop, but if she wants an external keyboard, there are enough USB ports to hook one up.
It has a traditional 3.5mm audio jack, so she can use headphones. I recommend giving children decibel-limited headphones to protect their hearing.
Even though this laptop has a 15.6" widescreen display, I think having a second monitor gives the best experience. I have a spare that I might hook up to the external VGA connector.
### The software
My daughter's school set up an online learning portal. The benefit is that students just need a supported web browser to log on and get to work, and I thank the school for its efforts and choice of a vendor-agnostic solution. Most Linux distributions include the Mozilla Firefox web browser installed by default, and Linux provides a full operating system, so I can install any applications she might need. Fedora is also updated regularly (unlike the old Windows Vista that came with the laptop and is no longer supported).
![][4]
Scratch running on Fedora
Her extracurricular coding school is using the Zoom client. I'm happy to report that it was an easy [install with RPM][5] and works great on Fedora 31.
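The whole install amounts to downloading the RPM from the link above and handing it to dnf, which resolves the dependencies (the filename may differ depending on the version you download):
```
$ sudo dnf install ./zoom_x86_64.rpm
```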
### Success!
My daughter has no trouble using her new laptop. She likes the [GNOME desktop][6], particularly the fact that it "Looks like Dad's!" This is turning out to be a great experiment in practical (and under-pressure) use of a Linux desktop.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/4/school-home-linux
作者:[Alan Formy-Duval][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/homeschool.jpg?itok=vYEd9NON (Image by Alan Formy-Duvall)
[2]: https://getfedora.org/en/workstation/
[3]: https://opensource.com/article/19/6/linux-distros-to-try
[4]: https://opensource.com/sites/default/files/scratch.jpg
[5]: https://zoom.us/download?os=linux
[6]: https://www.gnome.org/

View File

@ -1,96 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (6 open source teaching tools for virtual classrooms)
[#]: via: (https://opensource.com/article/20/4/open-source-remote-teaching-tools)
[#]: author: (Mathias Hoffmann https://opensource.com/users/mhopensource)
6 open source teaching tools for virtual classrooms
======
Create podcasts, online lectures, tutorials, and other teaching
resources for learning at home with open source tools.
![Person reading a book and digital copy][1]
As schools and universities are shutting down around the globe due to COVID-19, many of us in academia are wondering how we can get up to speed and establish a stable workflow to get our podcasts, online lectures, and tutorials out there for our students.
Open source software (OSS) has a key role to play in this situation for many reasons, including:
* **Speed:** OSS can roll out quickly and in large numbers (e.g., to an army of teaching assistants for multiple tutorial sessions in big lectures) without licensing issues and in a decentralized manner.
* **Cost:** OSS does not cost anything upfront, which is important for financially stretched schools and universities that need solutions to complex challenges on very short notice.
With everything going online, we need new ways to engage with students. Here is a list of tools that I have found useful to share my own lectures. 
### Create podcasts, videos, or live streams with OBS
[Open Broadcast Studio (OBS)][2] is a professional, open source audio and video recording tool that allows you to record, stream instantly, and do much more. OBS is available for all major platforms (Windows, macOS, and Linux), so interoperability with your colleagues and their various devices is ensured.
Even if you're already using online conferencing software as a recording system, OBS can be a great backup solution. Since it records locally, you're protected against any network lags or disconnections. You also have complete control over your data, so many educational institutions may find it to be a more secure solution than some other options.
Compatibility is also an advantage: OBS stores recordings in a standard intermediate format (MKV), which can be converted to MP4 or other formats. Also, support for Nvidia graphics cards in OBS is great, as the company is one of the main sponsors of the OBS project. This allows you to make full use of your hardware and speed up the recording process.
### Video and sound editing
After you record your podcast or video, you may find that it needs editing. There are many reasons you may need to edit your audio or video. For example, many university online platforms restrict the size of files you can upload, so you may have to cut long videos. Or, the sound may be too quiet, or maybe it was too noisy when you recorded it, so you need to make adjustments to the audio.
Two of the open source apps to explore are [OpenShot][3] and [Shotcut][4]. Of the two, Shotcut is a more advanced program, which implies a slightly steeper learning curve. Both are cross-platform and have full support for hardware encoding with NVidia and other graphics cards, which will substantially lower processing time compared to CPU-only processing.
You can also extract a soundtrack in either program (although I have found it to be much faster with Shotcut) and export it to an audio-editing program. I find [Audacity][5], another open source, cross-platform (Mac, Linux, Windows) tool, to work extremely well.
My typical workflow looks something like this (a command-line sketch of the extract and remux steps follows the list):
* Import the recording into Shotcut
* Extract the audio, save it to an audio file
* Import it into Audacity, normalize and amplify the audio, maybe do some noise reduction
* Save the audio to a new file
* Import the new audio file into Shotcut, align it with the audio-free video, and cut appropriately
* Export into an MP4 video (this last step usually takes some time, so have a coffee…)
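For those who prefer a terminal, the extract and remux steps can also be done with ffmpeg; this is not part of my GUI workflow above, just a rough command-line equivalent:
```
# extract the audio track to a WAV file for editing in Audacity
$ ffmpeg -i lecture.mkv -vn audio.wav

# after editing, put the cleaned audio back without re-encoding the video
$ ffmpeg -i lecture.mkv -i audio-clean.wav -map 0:v -map 1:a -c:v copy -c:a aac lecture.mp4
```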
### Electronic blackboards
If you want to annotate your slides or develop ideas on an electronic blackboard, you need note-taking software and a device with a touchscreen or a graphics tablet. A great open source tool (developed with Swiss taxpayer funding) for blackboarding is [OpenBoard][6]. It is cross-platform; although it is officially only available for Linux on Ubuntu 16.04, you can install a [Flatpak][7] and it will work on any Linux flavor. It is really a nice tool; its only shortcoming is that annotating slides is not very good.
My main open source annotation and electronic blackboard tool is [Xournal++][8], which is available in some Linux distros' repos (e.g., Linux Mint) and otherwise via [Flathub][9]. Like all the tools mentioned earlier, it is also available on Mac and Windows. If you know of any other open source, cross-platform note-taking tools, please share them in the comments.
### Built-in solutions have their limits
You might wonder why you should bother with alternative recording software in the first place. After all, most modern operating systems have built-in screen recorders that will also capture audio. However, these built-in solutions have their limits. One key limitation is that you cannot usually capture more than one video source at a time (e.g., a webcam with your talking head and a set of slides plus a whiteboard from a graphics tablet).
The ability to use multiple video sources is very useful, though, since it can be dull for students to just listen to your voice and see your slides for extended periods. Face-to-face interactions—even if done virtually—help keep listeners' attention and make it easier for them to cope with imperfect recording quality and background noise. In addition, many of the built-in tools do not allow you to capture selected areas of the screen, and in general, you cannot change the resolution or the number of frames per second, which can be important for keeping your podcast's memory and bandwidth usage in check.
### Conclusion
When planning your online teaching, you will want to use a blend of audio, video, slides, and electronic blackboards to create an immersive experience even while students are learning remotely. Open source software offers advanced, effective tools for creating such online educational experiences.
* * *
_This article is based on "[Open source software for online teaching in the times of corona][10]" on Mathias Hoffman's blog and is reused with permission._
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/4/open-source-remote-teaching-tools
作者:[Mathias Hoffmann][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mhopensource
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/read_book_guide_tutorial_teacher_student_apaper.png?itok=_GOufk6N (Person reading a book and digital copy)
[2]: https://obsproject.com/
[3]: http://www.openshot.org/
[4]: http://www.shotcut.org/
[5]: https://www.audacityteam.org/
[6]: http://www.openboard.ch/
[7]: http://www.flathub.org
[8]: https://github.com/xournalpp/xournalpp
[9]: https://flathub.org/apps/details/com.github.xournalpp.xournalpp
[10]: http://mathiashoffmann.net/2020/03/22/open-source-software-for-online-teaching-in-the-times-of-corona

View File

@ -1,239 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Writing Java with Quarkus in VS Code)
[#]: via: (https://opensource.com/article/20/4/java-quarkus-vs-code)
[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh)
Writing Java with Quarkus in VS Code
======
In this tutorial, I'll walk you through how to rebuild, package, and
deploy cloud-native applications automatically with Quarkus.
![Person drinking a hat drink at the computer][1]
In the previous articles in this series about cloud-native [Java][2] applications, I shared [_6 requirements of cloud-native software_][3] and [_4 things cloud-native Java must provide_][4]. But now you might want to implement these advanced Java applications in your local machine without climbing a steep learning curve. In this article, I will walk through using the open source technologies [Quarkus][5] and [Visual Studio Code][6] (VS Code) to accelerate the development of both traditional cloud-native Java stacks and also serverless, reactive applications with easier and more familiar methods.
Quarkus is a Kubernetes-native Java stack tailored for GraalVM and OpenJDK HotSpot. It's crafted from best-of-breed Java libraries and standards with live coding, unified configuration, superfast startup, small memory footprint, and unified imperative and reactive development. VS Code is an open source integrated development environment (IDE) for editing code.
### Generate a Quarkus project
Begin by navigating to Quarkus' [Start coding][7] page to generate a Quarkus project that includes a RESTful endpoint. Leave all variables (i.e., Group, Artifact, Build Tool, Extensions) on the default settings, then click **Generate your application** at the top-right of the page. Note that the RESTEasy JAX-RS extension is preselected as default.
![Quarkus Generate application button][8]
The ZIP file will automatically download on your local machine. Extract the file with the following command:
```
$ unzip code-with-quarkus.zip
Archive: code-with-quarkus.zip
    creating: code-with-quarkus/
   inflating: code-with-quarkus/pom.xml
   ...
```
### Install VS Code
Download and install VS Code in your preferred way, whether that's [from the website][9] or through your package manager (dnf, apt, brew, etc). Once that's done, open the unzipped Quarkus project using VS Code's command-line tool:
```
$ cd code-with-quarkus/
$ code .
```
You will see the [Apache Maven][10] project structure with:
* **ExampleResource** exposed on **/hello**
* Associated JUnit test
* Accessible landing page via <http://localhost:8080>
* Dockerfiles for both [native compilation][11] and JVM HotSpot
* A unified application configuration file
Add Quarkus tools to your IDE through the VS Code's extension feature.
![Add Quarkus tools to VS Code IDE][12]
### Start coding
Run the application using Quarkus development mode. To run the application, you need:
* JDK 1.8+ installed with JAVA_HOME configured appropriately
* Apache Maven 3.6.3+
Move to the **code-with-quarkus** directory then type **mvn compile quarkus:dev** in VS Code's terminal.
![Run application][13]
You will see that the Java application is running well with:
* About one second to startup
* Live coding activated
* CDI and RESTEasy features enabled
When you access the endpoint via a web browser, you will see the returned text, **hello**.
!["Hello" return][14]
Now, you're ready to change the code! Move back to VS Code, then open the **ExampleResource.java** file in **src/main/java/org/acme**. Replace the return string with **"Welcome, Cloud-Native Java with Quarkus!"** Don't forget to **Save** the file.
![Editing the return][15]
Go back to the web browser and reload the page.
![New return][16]
_It's like magic!_ Behind the scenes, Quarkus rebuilt, packaged, and deployed the application for you automatically, and it only took half a second. This is one of the essential cloud-native Java runtime features for increasing development productivity.
![Quarkus output][17]
Continue running your cloud-native Java application in Quarkus.
### Integrate data transactions via Quarkus Tools
To add an in-memory database (H2) transaction capability, press **F1** then click on **Quarkus: Add extensions to the current project**.
![Adding extensions in Quarkus][18]
Enter **h2** in the search bar, then double-click on **JDBC Driver - H2** in the result.
![JDBC Driver - H2 Data extension][19]
Select the following three extensions, which will simplify your persistence code and return data in JSON format:
* Hibernate ORM with Panache
* JDBC Driver - H2
* RESTEasy JSON-B
Press **Enter** to add those dependencies.
![Add Quarkus extensions][20]
You should see the following in a new VS Code terminal:
![VS Code adding extensions][21]
You should also find the following pulled-in dependencies in **pom.xml**:
![dependencies in POM.xml][22]
### Create an Inventory entity
With your project in place, you can get to work defining the business logic.
The first step is to define the model (entity) of an Inventory object. Since Quarkus uses Hibernate ORM Panache, create an **Inventory.java** file in the **src.main.java.org.acme** directory, and paste the following code into it:
```
package org.acme;

import javax.persistence.Cacheable;
import javax.persistence.Entity;

import io.quarkus.hibernate.orm.panache.PanacheEntity;

@Entity
@Cacheable
public class Inventory extends PanacheEntity {

    public String itemId;
    public String location;
    public int quantity;
    public String link;

    public Inventory() {
    }
}
```
#### Define the RESTful endpoint of Inventory
Next, mirror the abstraction of service so that you can inject the Inventory service into various places (like a RESTful resource endpoint) in the future. Create an **InventoryResource.java** file in the **src.main.java.org.acme** directory and add this code to it:
```
package org.acme;

import java.util.List;

import javax.enterprise.context.ApplicationScoped;
import javax.ws.rs.Consumes;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;

// Exposed at http://localhost:8080/services/inventory
@Path("/services/inventory")
@ApplicationScoped
@Produces("application/json")
@Consumes("application/json")
public class InventoryResource {

    @GET
    public List<Inventory> getAll() {
        return Inventory.listAll();
    }
}
```
Don't forget to save these files. Go back to your web browser and access a new endpoint, <http://localhost:8080/services/inventory>. You will see:
![Inventory endpoint][25]
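You can hit the new endpoint from a terminal, too. With nothing inserted into the in-memory H2 database yet, expect an empty JSON array:
```
$ curl http://localhost:8080/services/inventory
[]
```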
### Wrapping up
If you have an issue or get an error when you implement this, you can find and reuse the [code in my GitHub repository][26].
If you want to learn more, Quarkus has some [practical and useful guides][27] that show how to develop advanced cloud-native Java applications using Quarkus extensions with event-driven programming, serverless development, and Kubernetes deployment.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/4/java-quarkus-vs-code
作者:[Daniel Oh][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/daniel-oh
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_tea_laptop_computer_work_desk.png?itok=D5yMx_Dr (Person drinking a hat drink at the computer)
[2]: https://opensource.com/resources/java
[3]: https://opensource.com/article/20/1/cloud-native-software
[4]: https://opensource.com/article/20/1/cloud-native-java
[5]: https://quarkus.io/
[6]: https://code.visualstudio.com/
[7]: https://code.quarkus.io/
[8]: https://opensource.com/sites/default/files/uploads/quarkus_generateapplication.png (Quarkus Generate application button)
[9]: https://code.visualstudio.com/download
[10]: https://maven.apache.org/
[11]: https://quarkus.io/guides/building-native-image
[12]: https://opensource.com/sites/default/files/uploads/add-quarkus-to-ide.png (Add Quarkus tools to VS Code IDE)
[13]: https://opensource.com/sites/default/files/uploads/run-application.png (Run application)
[14]: https://opensource.com/sites/default/files/uploads/endpoint-hello.png ("Hello" return)
[15]: https://opensource.com/sites/default/files/uploads/edit-return-code.png (Editing the return)
[16]: https://opensource.com/sites/default/files/uploads/new-return-code.png (New return)
[17]: https://opensource.com/sites/default/files/uploads/quarkus-magic.png (Quarkus output)
[18]: https://opensource.com/sites/default/files/uploads/quarkus-add-extensions.png (Adding extensions in Quarkus)
[19]: https://opensource.com/sites/default/files/uploads/jbdc-driver-h2-data.png (JDBC Driver - H2 Data extension)
[20]: https://opensource.com/sites/default/files/uploads/add-extensions.png (Add Quarkus extensions)
[21]: https://opensource.com/sites/default/files/uploads/vscode-adding-extensions.png (VS Code adding extensions)
[22]: https://opensource.com/sites/default/files/uploads/dependencies-pomxml.png (dependencies in POM.xml)
[25]: https://opensource.com/sites/default/files/uploads/inventory-endpoint.png (Inventory endpoint)
[26]: https://github.com/danieloh30/code-with-quarkus
[27]: https://quarkus.io/guides/

View File

@ -1,164 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to set up and run WordPress for your classroom)
[#]: via: (https://opensource.com/article/20/4/wordpress-virtual-machine)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
How to set up and run WordPress for your classroom
======
Follow these simple steps to customize WordPress for use in the
classroom using free open source software.
![Painting art on a computer screen][1]
There are many good reasons to set up WordPress for your classroom. As more schools switch to online classes, WordPress can become the go-to content management system. Teachers using WordPress can provide a number of different educational choices to differentiate instruction for their students. Blogging is an accessible way to create content that energizes student learning. Teachers can write short stories, poems, and provide picture galleries that function as story starters. Students can comment and those comments can be moderated by their teacher.
There are free options like [WordPress.com][2] and [Edublogs][3]. However, these free versions are limited, and you may want to explore all your options. You can install [Virtualbox][4] on any Windows, macOS, or Linux computer and run WordPress in a virtual environment, using either your own computer or a spare one you have access to.
On Linux, you can install Virtualbox from your package manager. For instance, on Debian, Elementary OS, or Ubuntu:
```
$ sudo apt install virtualbox
```
On Fedora:
```
$ sudo dnf install virtualbox
```
### Download a Wordpress image
Wordpress is easy to install, but server configuration and management can be difficult for the uninitiated. That's why there's [Turnkey Linux][5], a project dedicated to creating virtual machine images and containers of popular server software, preconfigured and ready to run. With Turnkey Linux, you just download a disk image containing the operating system and the software you want to run, and then import that image into Virtualbox.
To get started with Wordpress, download the **VM** virtual machine image from [turnkeylinux.org/wordpress][6] (in the **Builds** section). Make sure you download the image labeled **VM**, because that's the only format meant for Virtualbox.
### Import the image into Virtualbox
After installing Virtualbox, launch the application and import the virtual machine image into Virtualbox.
![][7]
Networking on the imported image is set to NAT by default. You will want to change the network settings to "bridged."
![Virtualbox menu][8]
After restarting the virtual machine, you are prompted to add passwords for MySQL, Adminer, and the WordPress **admin** user.
Then you see the network configuration console for the installation. Launch a web browser and navigate to the **web** address provided (in this example, it's 192.168.86.149).
![Console][9]
In a web browser, you see a login screen for your Wordpress installation. Click on the **Login** link.
![Wordpress welcome][10]
Enter **admin** as the username, followed by the password you created earlier. Click the **Login** link. On this first login as **admin**, you can choose a new password. Be sure to remember it!
![Login screen][11]
After logging in, you're presented with the WordPress Dashboard. The software will likely notify you, in the upper left corner of the window, that a new version of Wordpress exists. Update to the latest versions as prompted so your site is secure.
It's important to note that your Wordpress blog isn't visible to anyone on the Internet yet. It only exists in your local network: only people in your building who are connected to the same router or wifi access point as you can see your Wordpress site right now. The worldwide Internet can't get to it because you're behind a firewall (embedded in your router, and possibly also in your computer).
![Wordpress dashboard][12]
Following the upgrade, the application restarts, and you're ready to begin configuring WordPress to your liking.
![Wordpress configuration][13]
On the far left, there is a button to **Customize Your Site**.
There, you can choose the name of your site. You can accept the default theme, which is "Twenty Nineteen," or choose another. My favorite is "Twenty Ten," but browse through the available themes to find your personal favorite. WordPress comes with five free themes installed. You can download other free themes from the [WordPress.org][15] site or choose to purchase a premium theme.
When you click the **Customize Your Site** button, you're presented with new menu options. Select **Site Identity** and change the name of your site. You might use the name of your school or classroom. There's also room to choose a byline (the credit given to the author of a blog post). You can choose the colors for your site and where you will place menus and widgets. WordPress widgets add content and features to the sidebars of your site. Homepage settings are important, as they allow you to choose between a static page, which might have a description of your school or classroom, and having your blog entries displayed prominently. You can also add custom CSS.
![Turnkey theme][16]
You can edit your front page, add additional pages like "About," or add a blog post. You can also manage widgets, manage menus, turn comments on or off, or add a link to learn more about WordPress.
Customizing your site allows you to configure a number of options quickly and easily.
WordPress has dozens of widgets that you can place in different areas of your page. Widgets are independent sections of content that can be placed into specific areas provided by your theme. These areas are called sidebars.
### Adding content
After you have WordPress configured to your liking, you probably want to get busy creating content. The best way to do that is to head back to the WordPress Dashboard.
On the left side, near the top of the page, you see **Posts**. Select that link and a dropdown appears. Choose **Add New** to create your very first blog post.
![Add post dropdown][17]
Fill in your title in the top block and then move down to the body. It's like using a word processor. WordPress has all the tools you need to write. You can set the font size from _small_ to _huge_. You can start a paragraph with dropped capitals. The text and background color can be changed. Your posts can include quote blocks and embedded content. A wide variety of embedded content is supported so you can make your posts a dynamic multimedia experience.
![Wordpress classroom blog][18]
### Going online
So far, your Wordpress blog only exists on your local network. Anyone using the same router as you (your housemates or classroom) can see your Wordpress site by navigating to 192.168.86.149, but once you're away from that router, the site becomes inaccessible.
If you want to go online with your custom Wordpress site, you have to allow traffic through your router, and then direct that traffic to the computer running Virtualbox. If you've installed Virtualbox on a laptop, then your website would disappear any time you closed your laptop, which is why servers that never get shut down exist. But if this is just a fun lesson on how to run a Wordpress site, then having a website that's only available during class hours is fine.
If you have access to your router, then you can log into it and make the adjustments yourself. If you don't own or control your router, then you must talk to your systems administrator for access.
A _router_ is the box you got from your internet service provider. You might also call it your _modem_.
Every device is different, so there's no way for me to definitively tell you what you need to click on to adjust your settings. Generally, you access your home router through a web browser. Your router's address is often printed on the bottom of the router and begins with either 192.168 or 10.
Navigate to the router address and log in with the credentials you were provided when you got your internet service. It's often as simple as `admin` with a numeric password (sometimes this password is printed on the router, too). If you don't know the login, call your internet provider and ask for details.
Different routers use different terms for the same thing; keywords to look for are **Port forwarding**, **Virtual server**, and **Firewall**. Whatever your router calls it, you want to accept traffic coming to port 80 of your router and forward that traffic to the same port of your virtual machine's IP address (in this example, that is 192.168.86.149, but it could be different for you).
![Example router setting screen][19]
Now you're allowing traffic through the web port of your router's firewall. To view your Wordpress site over the Internet, get your worldwide IP address. You can get your global IP by going to the site [icanhazip.com][20]. Then go to a different computer, open a browser, and navigate to that IP address. As long as Virtualbox is running, you'll see your Wordpress site on the Internet. You can do this from anywhere in the world, because your site is on the Internet now.
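You don't even need a browser for that lookup; the same site answers on the command line:
```
$ curl icanhazip.com
```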
Most websites use a domain name so you don't have to remember global IP addresses. You can purchase a domain name from services like [webhosting.coop][21] or [gandi.net][22], or a temporary one from [freenom.com][23]. Mapping that to your Wordpress site, however, is out of scope for this article.
### Wordpress for everyone
[WordPress][24] is open source and is licensed under the [GNU Public License][25]. You are welcome to contribute to WordPress as either a [developer][26] or an enthusiast. WordPress is committed to being as inclusive and accessible as possible.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/4/wordpress-virtual-machine
作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/painting_computer_screen_art_design_creative.png?itok=LVAeQx3_ (Painting art on a computer screen)
[2]: https://wordpress.com/
[3]: https://edublogs.org/
[4]: https://www.virtualbox.org/
[5]: https://www.turnkeylinux.org
[6]: https://www.turnkeylinux.org/wordpress
[7]: https://opensource.com/sites/default/files/uploads/how_to_get_started_with_wp_in_the_classroom_1.png
[8]: https://opensource.com/sites/default/files/uploads/how_to_get_started_with_wp_in_the_classroom_2.png (Virtualbox menu)
[9]: https://opensource.com/sites/default/files/uploads/how_to_get_started_with_wp_in_the_classroom_3.png (Console)
[10]: https://opensource.com/sites/default/files/uploads/how_to_get_started_with_wp_in_the_classroom_4.png (Wordpress welcome)
[11]: https://opensource.com/sites/default/files/uploads/how_to_get_started_with_wp_in_the_classroom_5.png (Login screen)
[12]: https://opensource.com/sites/default/files/uploads/how_to_get_started_with_wp_in_the_classroom_6.png (Wordpress dashboard)
[13]: https://opensource.com/sites/default/files/uploads/how_to_get_started_with_wp_in_the_classroom_7.png (Wordpress configuration)
[15]: http://WordPress.org
[16]: https://opensource.com/sites/default/files/uploads/how_to_get_started_with_wp_in_the_classroom_8.png (Turnkey theme)
[17]: https://opensource.com/sites/default/files/uploads/how_to_get_started_with_wp_in_the_classroom_12.png (Add post dropdown)
[18]: https://opensource.com/sites/default/files/uploads/how_to_get_started_with_wp_in_the_classroom_13.png (Wordpress classroom blog)
[19]: https://opensource.com/sites/default/files/router-web.jpg (Example router setting screen)
[20]: http://icanhazip.com/
[21]: https://webhosting.coop/domain-names
[22]: https://www.gandi.net
[23]: http://freenom.com/
[24]: https://wordpress.org/
[25]: https://github.com/WordPress/WordPress/blob/master/license.txt
[26]: https://wordpress.org/five-for-the-future/

View File

@ -1,123 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Create interactive learning games for kids with open source)
[#]: via: (https://opensource.com/article/20/5/jclic-games-kids)
[#]: author: (Peter Cheer https://opensource.com/users/petercheer)
Create interactive learning games for kids with open source
======
Help your students learn by creating fun puzzles and games in JClic, an
easy Java-based app.
![Family learning and reading together at night in a room][1]
Schools are closed in many countries around the world to slow the spread of COVID-19. This has suddenly thrown many parents and teachers into homeschooling. Fortunately, there are plenty of educational resources on the internet to use or adapt, although their licenses vary. You can try searching for Creative Commons Open Educational Resources, but if you want to create your own materials, there are many options for that, too.
If you want to create digital educational activities with puzzles or tests, two easy-to-use, open source, cross-platform applications that fit the bill are eXeLearning and JClic. My earlier article on [eXeLearning][2] is a good introduction to that program, so here I'll look at [JClic][3]. It is an open source software project for creating various types of interactive activities such as associations, text-based activities, crosswords, and other puzzles with text, graphics, and multimedia elements.
Although it's been around since the 1990s, JClic never developed a large user base in the English-speaking world. It was created in Catalonia by the [Catalan Educational Telematic Network][4] (XTEC).
### About JClic
JClic is a Java-based application that's available in many Linux repositories and can be downloaded from [GitHub][5]. It runs on Linux, macOS, and Windows, but because it is a Java program, you must have a Java runtime environment [installed][6].
The program's interface has not really changed much over the years, even while features have been added or dropped, such as introducing HTML5 export functionality to replace Java Applet technology for web-based deployment. It hasn't needed to change much, though, because it's very effective at what it does.
### Creating a JClic project
Many teachers from many countries have used JClic to create interactive materials for a wide variety of ability levels, subjects, languages, and curricula. Some of these materials have been collected in a [downloadable activities library][7]. Although few activities are in English, you can get a sense of the possibilities JClic offers.
As JClic has a visual, point-and-click program interface, it is easy enough to learn that a new user can quickly concentrate on content creation. [Documentation][8] is available on GitHub.
The screenshots below are from one of the JClic projects I created to teach basic Excel skills to learners in Papua New Guinea.
A JClic project is created in its authoring tool and consists of the following four elements:
#### 1\. Metadata about the project
![JClic metadata][9]
#### 2\. A library of the graphical and other resources it uses
![JClic media][10]
#### 3\. A series of one or more activities
![JClic activities][11]
JClic can produce seven different activity types:
* Associations where the user discovers the relationships between two information sets
* Memory games where the user discovers pairs of identical elements or relations (which are hidden) between them
* Exploration activities involving the identification of information, based on a single information set
* Puzzles where the user reconstructs information that is initially presented in a disordered form; the activity can include graphics, text, sound, or a combination of them
* Written-response activities that are solved by writing text, either a single word or a sentence
* Text activities that are based on words, phrases, letters, and paragraphs of text that need to be completed, understood, corrected, or ordered; these activities can contain images and windows with active content
* Word searches and crosswords
Because of variants in the activities, there are 16 possible activity types.
#### 4\. A timeline to sequence the activities
![JClic timeline][12]
### Using JClic content
Projects can run in JClic's player (part of the Java application you used to create the project), or they can be exported to HTML5 so they can run in a web browser.
The one thing I don't like about JClic is that its default HTML5 export function assumes you'll be online when running a project. If you want a project to work offline as needed, you must download a compiled and minified HTML5 player from [GitHub][13] and place it in the same folder as your JClic project.
Next, open the **index.html** file in a text editor and replace this line:
```
<script type="text/javascript" src="https://clic.xtec.cat/dist/jclic.js/jclic.min.js"></script>
```
With:
```
<script type="text/javascript" src="jclic.min.js"></script>
```
Now the HTML5 version of your project runs in a web browser, whether the user is online or not.
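If you export projects often, that one-line edit is easy to script. Here's a sketch with sed (make a backup of index.html first):
```
$ sed -i 's|https://clic.xtec.cat/dist/jclic.js/jclic.min.js|jclic.min.js|' index.html
```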
JClic also provides a reports function that can store test scores in an ODBC-compliant database. I have not explored this feature, as my tests and puzzles are mostly used for self-assessment and to prompt reflection by the learner, rather than as part of a formal scheme, so the scores are not very important. If you would like to learn about it, there is [documentation][14] on running JClic Reports Server with Tomcat and MySQL (or [mariaDB][15]).
### Conclusion
JClic offers a wide range of activity types that provide plenty of room to be creative in designing content to fit your subject area and type of learner. JClic is a valuable addition for anyone who needs a quick and easy way to develop educational resources.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/5/jclic-games-kids
作者:[Peter Cheer][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/petercheer
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/family_learning_kids_night_reading.png?itok=6K7sJVb1 (Family learning and reading together at night in a room)
[2]: https://opensource.com/article/18/5/exelearning
[3]: https://clic.xtec.cat/legacy/en/jclic/index.html
[4]: https://clic.xtec.cat/legacy/en/index.html
[5]: https://github.com/projectestac/jclic
[6]: https://adoptopenjdk.net/installation.html
[7]: https://clic.xtec.cat/repo/
[8]: https://github.com/projectestac/jclic/wiki/JClic_Guide
[9]: https://opensource.com/sites/default/files/uploads/metadata.png (JClic metadata)
[10]: https://opensource.com/sites/default/files/uploads/media.png (JClic media)
[11]: https://opensource.com/sites/default/files/uploads/activities.png (JClic activities)
[12]: https://opensource.com/sites/default/files/uploads/sequence.png (JClic timeline)
[13]: http://projectestac.github.io/jclic.js/
[14]: https://github.com/projectestac/jclic/wiki/Jclic-Reports-Server-with-Tomcat-and-MySQL-on-Ubuntu
[15]: https://mariadb.org/

View File

@ -1,116 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (8 open source video games to play)
[#]: via: (https://opensource.com/article/20/5/open-source-fps-games)
[#]: author: (Aman Gaur https://opensource.com/users/amangaur)
8 open source video games to play
======
These games are fun and free to play, a way to connect with friends, and
an opportunity to make an old favorite even better.
![Gaming on a grid with penguin pawns][1]
Video games are a big business. That's great for the industry's longevity—not to mention for all the people working in programming and graphics. But it can take a lot of work, time, and money to keep up with all the latest gaming crazes. If you feel like playing a few quick rounds of a video game without investing in a new console or game franchise, then you'll be happy to know that there are plenty of open source combat games you can download, play, share, and even modify (if you're inclined to programming) for free.
First-person shooters (FPS) are one of the most popular categories of video games. They are centered around the perspective of the protagonist (the player), and they often offer weapon-based advancement. As you get better at the game, you survive longer, you get better weapons, and you increase your power. FPS games have a distinct look and feel, which is reflected in the category's name: players see everything—their weapons and the game world—in first person, as if they're looking through their player character's eyes.
If you want to give one a try, check out the following eight great open source FPS games.
### Xonotic
![Xonotic][2]
[Xonotic][3] is a fast-paced, arena-based FPS game and a popular one in the open source world, perhaps because it has never been a mainstream title. It offers a variety of weapons and enemies that are thrown right at you mercilessly from the start. Demanding quick action and response, it is an experience that will keep you on the edge of your seat. The game is available under the GPLv3+ license.
### Wolfenstein Enemy Territory
![Wolfenstein Enemy Territory][4]
Wolfenstein has been a major franchise in gaming for many years. If you are a fan of gore and glory, then you've probably already heard of this game (if not, you'll love it once you try it). [Wolfenstein Enemy Territory][5] is an early iteration of the popular World War II game. It became free to play in 2003, and its [source code][6] is provided under the GPLv3. To play, however, you must own the game data (or recreate it yourself) separately (which remains under its original EULA).
### Doom
![Doom][7]
[Doom][8] is a wildly popular game that was also an early example of games on Linux—way back in 1994. There are many iterations of the game, many of which have been released as open source. The game is about acquiring a teleportation device that's been captured by demons, so the violence, while gory, is low on realism. The source code for the game was provided under the GPL, but many versions require that you own the game for the game assets. There are dozens of ports and adaptations, including [Freedoom][9] (with free assets), [Dhewm3][10], [RBDoom-3-BFG][11], and many more. Try a few and pick your favorite!
### Smokin' Guns
![Smokin' Guns][12]
If you're a fan of the Old West and six-shooters, this FPS is for you. From cowboys to gunslingers and with a captivating background score, [Smokin' Guns][13] has it all. It's a semi-realistic simulation of the old spaghetti western. On your way through the game, you face multiple enemies and get multiple weapons, so there's always the promise of excitement and danger around the corner. The game is free and open source under the terms of the GPLv2.
### Nexuiz
![Nexuiz][14]
[Nexuiz][15] (classic) is another great FPS that's free to play on multiple platforms. The game is based on the Quake engine and has been made open source under the GNU GPLv2. The game offers multiple modes, including online, LAN party, and bot training. The game features sophisticated weapons and fast action. It's brutal and exciting, with an objective: kill as many opponents as possible before they get you.
Note that the open source version of Nexuiz is not the same as the version built on CryEngine3 that is sold on Steam.
### .kkrieger
![kkrieger][16]
[.kkrieger][17] was developed in 2004 by .theprodukkt, a German demogroup. The game was developed using a then-unreleased engine known as Werkkzeug. This game might feel a little slow to many, but it still offers an intense experience. The approaching enemies are slow, but their sheer number makes it confusing to know which one to take down first. It's an onslaught, and you have to shoot through layers of enemies before you reach the final boss. It was released in a rather raw form on [GitHub][18] by its creators under a BSD license with some public domain components.
### Warsow
![Warsow][19]
If you've ever played Borderlands 2, then imagine [Warsow][20] as an arena-style Borderlands. The game is built on a modernized Quake II engine, and its plot takes a simple approach: Kill as many opponents as possible. The team with the most kills wins. Despite its simplicity, it features amazing weaponry and lots of great trick moves, like circle jumping, bunny hopping, double jumping, ramp sliding, and so on. It makes for an engaging multiplayer session, and it's been recognized by multiple online leagues as a worthy game for their competitions. Get the source code from [GitHub][21] or install the game from your software repository.
### World of Padman
![World of Padman][22]
[The World of Padman][23] may be the last game on this list, but it's one of the most unique. Designed by PadWorld Entertainment, World of Padman takes a different twist graphically and introduces you to quirky and whimsical characters in a colorful (albeit cartoonishly violent) world. It's based on the ioquake3 engine, and its unique style and uproarious gameplay have earned it a featured place in multiple gaming magazines. You can download the source code from [GitHub][24].
### Give one a shot
A game that becomes open source can act as a template for something great, whether it's a wholly open source version of an old classic, a remix of a beloved game, or an entirely new platform built on an old reliable engine.
Open source gaming is important for many reasons: it provides users with a fun diversion, a way to connect with friends, and an opportunity for programmers and designers to hack within an existing framework. If titles like Doom weren't made open source, a little bit of video game history would be lost. Instead, it endures and has the opportunity to grow even more.
Try an open source game, and watch your six.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/5/open-source-fps-games
作者:[Aman Gaur][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/amangaur
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/game_pawn_grid_linux.png?itok=4gERzRkg (Gaming on a grid with penguin pawns)
[2]: https://opensource.com/sites/default/files/uploads/xonotic.jpg (Xonotic)
[3]: https://www.xonotic.org/download/
[4]: https://opensource.com/sites/default/files/uploads/wolfensteinenemyterritory.jpg (Wolfenstein Enemy Territory)
[5]: https://www.splashdamage.com/games/wolfenstein-enemy-territory/
[6]: https://github.com/id-Software/Enemy-Territory
[7]: https://opensource.com/sites/default/files/uploads/doom.jpg (Doom)
[8]: https://github.com/id-Software/DOOM
[9]: https://freedoom.github.io/
[10]: https://dhewm3.org/
[11]: https://github.com/RobertBeckebans/RBDOOM-3-BFG/
[12]: https://opensource.com/sites/default/files/uploads/smokinguns.jpg (Smokin' Guns)
[13]: https://www.smokin-guns.org/downloads
[14]: https://opensource.com/sites/default/files/uploads/nexuiz.jpg (Nexuiz)
[15]: https://sourceforge.net/projects/nexuiz/
[16]: https://opensource.com/sites/default/files/uploads/kkrieger.jpg (kkrieger)
[17]: https://web.archive.org/web/20120204065621/http://www.theprodukkt.com/kkrieger
[18]: https://github.com/farbrausch/fr_public
[19]: https://opensource.com/sites/default/files/uploads/warsow.jpg (Warsow)
[20]: https://www.warsow.net/download
[21]: https://github.com/Warsow
[22]: https://opensource.com/sites/default/files/uploads/padman.jpg (World of Padman)
[23]: https://worldofpadman.net/en/
[24]: https://github.com/PadWorld-Entertainment

View File

@ -1,201 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Tips and tricks for optimizing container builds)
[#]: via: (https://opensource.com/article/20/5/optimize-container-builds)
[#]: author: (Ravi Chandran https://opensource.com/users/ravichandran)
Tips and tricks for optimizing container builds
======
Try these techniques to minimize the number and length of your container
build iterations.
![Toolbox drawing of a container][1]
How many iterations does it take to get a container configuration just right? And how long does each iteration take? Well, if you answered "too many times and too long," then my experiences are similar to yours. On the surface, creating a configuration file seems like a straightforward exercise: implement the same steps in a configuration file that you would perform if you were installing the system by hand. Unfortunately, I've found that it usually doesn't quite work that way, and a few "tricks" are handy for such DevOps exercises.
In this article, I'll share some techniques I've found that help minimize the number and length of iterations. In addition, I'll outline a few good practices beyond the [standard ones][2].
In the [tutorial repository][3] from my previous article about [containerizing build systems][4], I've added a folder called **/tutorial2_docker_tricks** with an example covering some of the tricks that I'll walk through in this post. If you want to follow along and you have Git installed, you can pull it locally with:
```
$ git clone https://github.com/ravi-chandran/dockerize-tutorial
```
The tutorial has been tested with Docker Desktop Edition, although it should work with any compatible Linux container system (like [Podman][5]).
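If you would rather follow along with Podman, one hedged shortcut (not part of the tutorial itself, relying on Podman's Docker-compatible command-line interface) is to alias the command so the scripts below run unchanged:
```
# route the tutorial's docker commands through Podman instead
alias docker=podman
```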
### Save time on container image build iterations
If the Dockerfile involves downloading and installing a 5GB file, each iteration of **docker image build** could take a lot of time even with good network speeds. And forgetting to include one item to be installed can mean rebuilding all the layers after that point.
One way around that challenge is to use a local HTTP server to avoid downloading large files from the internet multiple times during **docker image build** iterations. To illustrate this by example, say you need to create a container image with Anaconda 3 under Ubuntu 18.04. The Anaconda 3 installer is a ~0.5GB file, so this will be the "large" file for this example.
Note that you don't want to use the **COPY** instruction, as it creates a new layer. You should also delete the large installer after using it to minimize the container image size. You could use [multi-stage builds][6], but I've found the following approach sufficient and quite effective.
The basic idea is to use a Python-based HTTP server locally to serve the large file(s) and have the Dockerfile **wget** the large file(s) from this local server. Let's explore the details of how to set this up effectively. As a reminder, you can access the [full example][7].
The necessary contents of the folder **tutorial2_docker_tricks/** in this example repository are:
```
tutorial2_docker_tricks/
├── build_docker_image.sh                   # builds the docker image
├── run_container.sh                        # instantiates a container from the image
├── install_anaconda.dockerfile             # Dockerfile for creating our target docker image
├── .dockerignore                           # used to ignore contents of the installer/ folder from the docker context
├── installer                               # folder with all our large files required for creating the docker image
│   └── Anaconda3-2019.10-Linux-x86_64.sh   # from https://repo.anaconda.com/archive/Anaconda3-2019.10-Linux-x86_64.sh
└── workdir                                 # example folder used as a volume in the running container
```
The key steps of the approach are:
* Place the large file(s) in the **installer/** folder. In this example, I have the large Anaconda installer file **Anaconda3-2019.10-Linux-x86_64.sh**. You won't find this file if you clone my [Git repository][8] because only you, as the container image creator, need this source file. The end users of the image don't. [Download the installer][9] to follow along with the example.
* Create the **.dockerignore** file and have it ignore the **installer/** folder to avoid Docker copying all the large files into the build context.
* In a terminal, **cd** into the **tutorial2_docker_tricks/** folder and execute the build script as **./build_docker_image.sh**.
* In **build_docker_image.sh**, start the Python HTTP server to serve any files from the **installer/** folder:
```
cd installer
python3 -m http.server --bind 10.0.2.15 8888 &
cd ..
```
* If you're wondering about the strange internet protocol (IP) address, I'm working with a VirtualBox Linux VM, and **10.0.2.15** shows up as the address of the Ethernet adapter when I run **ifconfig**. This IP seems to be the convention used by VirtualBox. If your setup is different, you'll need to update this IP address to match your environment and then update **build_docker_image.sh** and **install_anaconda.dockerfile**. The server's port number is set to **8888** for this example. Note that the IP and port numbers could be passed in as build arguments, but I've hard-coded them for brevity.
* Since the HTTP server is set to run in the background, stop the server near the end of the script with the **kill -9** command using an [elegant approach][10] I found:
```
kill -9 `ps -ef | grep http.server | grep 8888 | awk '{print $2}'`
```
* Note that this same **kill -9** is also used earlier in the script (before starting the HTTP server). In general, when I iterate on any build script that I might deliberately interrupt, this ensures a clean start of the HTTP server each time.
* In the [Dockerfile][11], there is a **RUN wget** instruction that downloads the Anaconda installer from the local HTTP server. It also deletes the installer file and cleans up after the installation. Most importantly, all these actions are performed within the same layer to keep the image size to a minimum:
```
# install Anaconda by downloading the installer via the local http server
ARG ANACONDA
RUN wget --no-proxy http://10.0.2.15:8888/${ANACONDA} -O ~/anaconda.sh \
    && /bin/bash ~/anaconda.sh -b -p /opt/conda \
    && rm ~/anaconda.sh \
    && rm -fr /var/lib/apt/lists/{apt,dpkg,cache,log} /tmp/* /var/tmp/*
```
* This file runs the wrapper script, **anaconda.sh**, and cleans up large files by removing them with **rm**.
* After the build is complete, you should see an image **anaconda_ubuntu1804:v1**. (You can list the images with **docker image ls**.)
* You can instantiate a container from this image using **./run_container.sh** at the terminal while in the folder **tutorial2_docker_tricks/**. You can verify that Anaconda is installed with:
```
$ ./run_container.sh
$ python --version
Python 3.7.5
$ conda --version
conda 4.8.0
$ anaconda --version
anaconda Command line client (version 1.7.2)
```
* You'll note that **run_container.sh** sets up a volume **workdir**. In this example repository, the folder **workdir/** is empty. This is a convention I use to set up a volume where I can have my Python and other scripts that are independent of the container image. (A minimal sketch of such a volume mount follows this list.)
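To make the volume convention concrete, here is a hedged sketch of what an invocation like **run_container.sh** boils down to (the exact flags in the repository's script may differ):
```
# mount the local workdir/ folder into the container at /workdir
docker container run --rm -it \
    -v "$(pwd)/workdir":/workdir \
    anaconda_ubuntu1804:v1 /bin/bash
```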
### Minimize container image size
Each **RUN** command is equivalent to executing a new shell, and each **RUN** command creates a layer. The naive approach of mimicking installation instructions with separate **RUN** commands may eventually break at one or more interdependent steps. If it happens to work, it will typically result in a larger image. Chaining multiple installation steps in one **RUN** command and including the **autoremove**, **autoclean**, and **rm** commands (as in the example below) is useful to minimize the size of each layer. Some of these steps may not be needed, depending on what's being installed. However, since these steps take an insignificant amount of time, I always throw them in for good measure at the end of **RUN** commands invoking **apt-get**:
```
RUN apt-get update \
    && DEBIAN_FRONTEND=noninteractive \
       apt-get -y --quiet --no-install-recommends install \
       # list of packages to be installed goes here \
    && apt-get -y autoremove \
    && apt-get clean autoclean \
    && rm -fr /var/lib/apt/lists/{apt,dpkg,cache,log} /tmp/* /var/tmp/*
```
Also, ensure that you have a **.dockerignore** file in place to ignore items that don't need to be sent to the Docker build context (such as the Anaconda installer file in the earlier example).
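For this example, the corresponding **.dockerignore** could be as small as the sketch below (the folder name matches the layout shown earlier):
```
# keep the large installer files out of the Docker build context
installer/
```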
### Organize the build tool I/O
For software build systems, the build inputs and outputs—all the scripts that configure and invoke the tools—should be outside the image and the eventually running container. The container itself should remain stateless so that different users will have identical results with it. I covered this extensively in my [previous article][4] but wanted to emphasize it because it's been a useful convention for my work. These inputs and outputs are best accessed by setting up container volumes.
I've had to use a container image that provides data in the form of source code and large pre-built binaries. As a software developer, I was expected to edit the code in the container. This was problematic, because containers are by default stateless: they don't save data within the container, because they're designed to be disposable. But I worked on it, and at the end of each day, I stopped the container and had to be careful not to remove it, because the state had to be maintained so I could continue work the next day. The disadvantage of this approach was that there would be a divergence of development state had there been more than one person working on the project. The value of having identical build systems across developers is somewhat lost with this approach.
### Generate output as non-root user
An important aspect of I/O concerns the ownership of the output files generated when running the tools in the container. By default, since Docker runs as **root**, the output files would be owned by **root**, which is unpleasant. You typically want to work as a non-root user. Changing the ownership after the build output is generated can be done with scripts, but it is an additional and unnecessary step. It's best to set the [**USER**][12] instruction in the Dockerfile at the earliest point possible:
```
ARG USERNAME
# other commands...
USER ${USERNAME}
```
The **USERNAME** can be passed in as a build argument (**--build-arg**) when executing the **docker image build**. You can see an example of this in the example [Dockerfile][11] and corresponding [build script][13].
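A hedged sketch of the full pattern, assuming a Debian-based image (the **useradd** flags and the default username are illustrative, not taken from the tutorial repository):
```
ARG USERNAME=builder
# create the user so that generated output files are not owned by root
RUN useradd --create-home --shell /bin/bash ${USERNAME}
USER ${USERNAME}
WORKDIR /home/${USERNAME}
```
You would then build with something like `docker image build --build-arg USERNAME="$(id -un)" ...` so that the user inside the container matches your own.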
Some portions of the tools may also need to be installed as a non-root user. So the sequence of installations in the Dockerfile may need to be different from the way it's done if you are installing manually and directly under Linux.
### Non-interactive installation
Interactivity is the opposite of container automation. I've found the
```
DEBIAN_FRONTEND=noninteractive apt-get -y --quiet --no-install-recommends
```
options for the **apt-get install** instruction (as in the example above) necessary to prevent the installer from opening dialog boxes. Note that these options should be used as part of the **RUN** instruction. **DEBIAN_FRONTEND=noninteractive** should not be set as an environment variable (**ENV**) in the Dockerfile because, as this [FAQ explains][14], it will be inherited by the containers.
### Log your build and run output
Debugging why a build failed is a common task, and logs are a great way to do this. Save a typescript (a transcript of the terminal session) of everything that happened during the container image build or container run session using the **tee** utility in a Bash script. In other words, add **|& tee $BASH_SOURCE.log** to the end of the **docker image build** and the **docker image run** commands in your scripts. See the examples in the [image build][13] and [container run][15] scripts.
What this **tee**-ing technique does is generate a file with the same name as the Bash script but with a **.log** extension appended to it so that you know which script it originated from. Everything you see printed to the terminal when running the script will get logged to this file with a similar name.
This is especially valuable for users of your container images to report issues to you when something doesn't work. You can ask them to send you the log file to help diagnose the issue. Many tools generate so much output that it easily overwhelms the default size of the terminal's buffer. Relying only on the terminal's buffer capacity to copy-paste error messages may not be sufficient for diagnosing issues because earlier errors may have been lost.
I've found this to be useful, even in the container image-building scripts, especially when using the Python-based HTTP server discussed above. The server generates so many lines during a download that it typically overwhelms the terminal's buffer.
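As a minimal sketch, the build command inside a script such as **build_docker_image.sh** might end up looking like this (the image tag is illustrative):
```
#!/bin/bash
# everything printed to the terminal also lands in build_docker_image.sh.log
docker image build -t anaconda_ubuntu1804:v1 \
    -f install_anaconda.dockerfile . |& tee "${BASH_SOURCE}.log"
```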
### Deal with proxies elegantly
In my work environment, proxies are required to reach the internet for downloading the resources in **RUN apt-get** and **RUN wget** commands. The proxies are typically inferred from the environment variables **http_proxy** or **https_proxy**. While **ENV** commands can be used to hard-code such proxy settings in the Dockerfile, there are multiple issues with using **ENV** for proxies directly.
If you are the only one who will ever build the container, then perhaps this will work. But the Dockerfile couldn't be used by someone else at a different location with a different proxy setting. Another issue is that the IT department could change the proxy at some point, resulting in a Dockerfile that won't work any longer. Furthermore, the Dockerfile is a precise document specifying a configuration-controlled system, and every change will be scrutinized by quality assurance.
One simple approach to avoid hard-coding the proxy is to pass your local proxy setting as a build argument in the **docker image build** command:
```
docker image build \
    --build-arg MY_PROXY=http://my_local_proxy.proxy.com:xx
```
And then, in the Dockerfile, set the environment variables based on the build argument. In the example shown here, you can still set a default proxy value that can be overridden by the build argument above:
```
# set a default proxy
ARG MY_PROXY=http://my_default_proxy.proxy.com:nn/
ENV http_proxy=$MY_PROXY
ENV https_proxy=$MY_PROXY
```
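With that in place, a hedged one-liner that forwards whatever proxy is already active in your shell (assuming **http_proxy** is set in your environment) would be:
```
docker image build --build-arg MY_PROXY="$http_proxy" -t my_image .
```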
### Summary
These techniques have helped me significantly reduce the time it takes to create container images and debug them when they go wrong. I continue to be on the lookout for additional best practices to add to my list. I hope you find the above techniques useful.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/5/optimize-container-builds
作者:[Ravi Chandran][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ravichandran
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/toolbox-learn-draw-container-yearbook.png?itok=xDbwz1pP (Toolbox drawing of a container)
[2]: https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
[3]: https://github.com/ravi-chandran/dockerize-tutorial
[4]: https://opensource.com/article/20/4/how-containerize-build-system
[5]: https://podman.io/getting-started/installation
[6]: https://docs.docker.com/develop/develop-images/multistage-build/
[7]: https://github.com/ravi-chandran/dockerize-tutorial/blob/master/tutorial2_docker_tricks/
[8]: https://github.com/ravi-chandran/dockerize-tutorial/
[9]: https://repo.anaconda.com/archive/Anaconda3-2019.10-Linux-x86_64.sh
[10]: https://stackoverflow.com/a/37214138
[11]: https://github.com/ravi-chandran/dockerize-tutorial/blob/master/tutorial2_docker_tricks/install_anaconda.dockerfile
[12]: https://docs.docker.com/engine/reference/builder/#user
[13]: https://github.com/ravi-chandran/dockerize-tutorial/blob/master/tutorial2_docker_tricks/build_docker_image.sh
[14]: https://docs.docker.com/engine/faq/
[15]: https://github.com/ravi-chandran/dockerize-tutorial/blob/master/tutorial2_docker_tricks/run_container.sh

View File

@ -1,232 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to examine processes running on Linux)
[#]: via: (https://www.networkworld.com/article/3543232/how-to-examine-processes-running-on-linux.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
How to examine processes running on Linux
======
There are quite a number of ways to look at running processes on Linux systems to see what's running, the resources that processes are using, how the system is affected by the load and how memory is being used. Each command gives you a different view, and the range of details is considerable. In this post, we'll run through a series of commands that can help you view process details in a number of different ways.
### ps
While the **ps** command is the most obvious command for examining processes, the arguments that you use when running **ps** will make a big difference in how much information will be provided. With no arguments, **ps** will only show processes associated with your current login session. Add a **-u** and you'll see extended details.
Here is a comparison:
```
nemo$ ps
PID TTY TIME CMD
45867 pts/1 00:00:00 bash
46140 pts/1 00:00:00 ps
nemo$ ps -u
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
nemo 45867 0.0 0.0 11232 5636 pts/1 Ss 19:04 0:00 -bash
nemo 46141 0.0 0.0 11700 3648 pts/1 R+ 19:16 0:00 ps -u
```
Using **ps -ef** will display details on all of the processes running on the system but **ps -eF** will add some additional details.
```
$ ps -ef | head -2
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 May10 ? 00:00:06 /sbin/init splash
$ ps -eF | head -2
UID PID PPID C SZ RSS PSR STIME TTY TIME CMD
root 1 0 0 42108 12524 0 May10 ? 00:00:06 /sbin/init splash
```
Both commands show who is running the process, the process and parent process IDs, process start time, accumulated run time and the task being run. The additional fields shown when you use **F** instead of **f** include:
* SZ: the process **size** in physical pages for the core image of the process
* RSS: the **resident set size** which shows how much memory is allocated to those parts of the process in RAM. It does not include memory that is swapped out, but does include memory from shared libraries as long as the pages from those libraries are currently in memory. It also includes stack and heap memory.
* PSR: the **processor** the process is using
##### ps -fU
You can list processes for a particular user with a command like "ps -ef | grep USERNAME", but with the **ps -fU** command, you're going to see considerably more data. This is because details of processes that are being run on the user's behalf are also included. In fact, nearly all of the processes shown have been kicked off by the system simply to support this user's online session. Nemo has only just logged in and is not yet running any commands or scripts.
```
$ ps -fU nemo
UID PID PPID C STIME TTY TIME CMD
nemo 45726 1 0 19:04 ? 00:00:00 /lib/systemd/systemd --user
nemo 45732 45726 0 19:04 ? 00:00:00 (sd-pam)
nemo 45738 45726 0 19:04 ? 00:00:00 /usr/bin/pulseaudio --daemon
nemo 45740 45726 0 19:04 ? 00:00:00 /usr/libexec/tracker-miner-f
nemo 45754 45726 0 19:04 ? 00:00:00 /usr/bin/dbus-daemon --sessi
nemo 45829 45726 0 19:04 ? 00:00:00 /usr/libexec/gvfsd
nemo 45856 45726 0 19:04 ? 00:00:00 /usr/libexec/gvfsd-fuse /run
nemo 45862 45706 0 19:04 ? 00:00:00 sshd: nemo@pts/1
nemo 45864 45726 0 19:04 ? 00:00:00 /usr/libexec/gvfs-udisks2-vo
nemo 45867 45862 0 19:04 pts/1 00:00:00 -bash
nemo 45878 45726 0 19:04 ? 00:00:00 /usr/libexec/gvfs-afc-volume
nemo 45883 45726 0 19:04 ? 00:00:00 /usr/libexec/gvfs-goa-volume
nemo 45887 45726 0 19:04 ? 00:00:00 /usr/libexec/goa-daemon
nemo 45895 45726 0 19:04 ? 00:00:00 /usr/libexec/gvfs-mtp-volume
nemo 45896 45726 0 19:04 ? 00:00:00 /usr/libexec/goa-identity-se
nemo 45903 45726 0 19:04 ? 00:00:00 /usr/libexec/gvfs-gphoto2-vo
nemo 45946 45726 0 19:04 ? 00:00:00 /usr/libexec/gvfsd-metadata
```
Note that the only process with an assigned TTY is Nemo's shell and that the parent of all of the other processes is **systemd**.
You can supply a comma-separated list of usernames instead of a single name. Just be prepared to be looking at quite a bit more data.
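For example (the usernames here are illustrative):
```
$ ps -fU nemo,shs
```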
#### top and ntop
The **top** and **ntop** commands will help when you want to get an idea of which processes are using the most resources, and they allow you to reorder your view depending on what criteria you want to use to rank the processes (e.g., highest CPU or memory use).
```
top - 11:51:27 up 1 day, 21:40, 1 user, load average: 0.08, 0.02, 0.01
Tasks: 211 total, 1 running, 210 sleeping, 0 stopped, 0 zombie
%Cpu(s): 5.0 us, 0.5 sy, 0.0 ni, 94.3 id, 0.2 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 5944.4 total, 3527.4 free, 565.1 used, 1851.9 buff/cache
MiB Swap: 2048.0 total, 2048.0 free, 0.0 used. 5084.3 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
999 root 20 0 394660 14380 10912 S 8.0 0.2 0:46.54 udisksd
65224 shs 20 0 314268 9824 8084 S 1.7 0.2 0:00.34 gvfs-ud+
2034 gdm 20 0 314264 9820 7992 S 1.3 0.2 0:06.25 gvfs-ud+
67909 root 20 0 0 0 0 I 0.3 0.0 0:00.09 kworker+
1 root 20 0 168432 12532 8564 S 0.0 0.2 0:09.93 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:00.02 kthreadd
```
Use **shift+m** to sort by memory use and **shift+p** to go back to sorting by CPU usage (the default).
#### /proc
A tremendous amount of information is available on running processes in the **/proc** directory. In fact, if you haven't visited **/proc** quite a few times, you might be astounded by the amount of detail available. Just keep in mind that **/proc** is a very different kind of file system. As an interface to kernel data, it provides a view of process details that are currently being used by the system.
Some of the more useful **/proc** files for viewing include **cmdline**, **environ**, **fd**, **limits** and **status**. The following views provide some samples of what you might see.
The **status** file shows the process that is running (bash), its status, the user and group ID for the person running bash, a full list of the groups the user is a member of and the process ID and parent process ID.
```
$ head -11 /proc/65333/status
Name: bash
Umask: 0002
State: S (sleeping)
Tgid: 65333
Ngid: 0
Pid: 65333
PPid: 65320
TracerPid: 0
Uid: 1000 1000 1000 1000
Gid: 1000 1000 1000 1000
FDSize: 256
Groups: 4 11 24 27 30 46 118 128 500 1000
...
```
The **cmdline** file shows the command line used to start the process.
```
$ cat /proc/65333/cmdline
-bash
```
The **environ** file shows the environment variables that are in effect.
```
$ cat environ
USER=shsLOGNAME=shsHOME=/home/shsPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/gamesSHELL=/bin/bashTERM=xtermXDG_SESSION_ID=626XDG_RUNTIME_DIR=/run/user/1000DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/busXDG_SESSION_TYPE=ttyXDG_SESSION_CLASS=userMOTD_SHOWN=pamLANG=en_US.UTF-8SSH_CLIENT=192.168.0.19 9385 22SSH_CONNECTION=192.168.0.19 9385 192.168.0.11 22SSH_TTY=/dev/pts/0$
```
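The entries in **environ** (like those in **cmdline**) are separated by NUL bytes rather than newlines, which is why the output runs together on screen. A quick way to make it readable is to translate the NULs (using the PID from the earlier examples):
```
$ tr '\0' '\n' < /proc/65333/environ
USER=shs
LOGNAME=shs
HOME=/home/shs
...
```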
The **fd** file shows the file descriptors. Note how they reflect the pseudo-tty that is being used (pts/0).
```
$ ls -l /proc/65333/fd
total 0
lrwx------ 1 shs shs 64 May 12 09:45 0 -> /dev/pts/0
lrwx------ 1 shs shs 64 May 12 09:45 1 -> /dev/pts/0
lrwx------ 1 shs shs 64 May 12 09:45 2 -> /dev/pts/0
lrwx------ 1 shs shs 64 May 12 09:56 255 -> /dev/pts/0
$ who
shs pts/0 2020-05-12 09:45 (192.168.0.19)
```
The **limits** file contains information about the limits imposed on the process.
```
$ cat limits
Limit Soft Limit Hard Limit Units
Max cpu time unlimited unlimited seconds
Max file size unlimited unlimited bytes
Max data size unlimited unlimited bytes
Max stack size 8388608 unlimited bytes
Max core file size 0 unlimited bytes
Max resident set unlimited unlimited bytes
Max processes 23554 23554 processes
Max open files 1024 1048576 files
Max locked memory 67108864 67108864 bytes
Max address space unlimited unlimited bytes
Max file locks unlimited unlimited locks
Max pending signals 23554 23554 signals
Max msgqueue size 819200 819200 bytes
Max nice priority 0 0
Max realtime priority 0 0
Max realtime timeout unlimited unlimited us
```
#### pmap
The **pmap** command takes you in an entirely different direction when it comes to memory use. It provides a detailed map of a process's memory usage. To make sense of this, you need to keep in mind that processes do not run entirely on their own. Instead, they make use of a wide range of system resources. The truncated **pmap** output below shows a portion of the memory map for a single user's bash login along with some memory usage totals at the bottom.
```
$ pmap -x 43120
43120: -bash
Address Kbytes RSS Dirty Mode Mapping
000055887655b000 180 180 0 r---- bash
0000558876588000 708 708 0 r-x-- bash
0000558876639000 220 148 0 r---- bash
0000558876670000 16 16 16 r---- bash
0000558876674000 36 36 36 rw--- bash
000055887667d000 40 28 28 rw--- [ anon ]
0000558876b96000 1328 1312 1312 rw--- [ anon ]
00007f0bd9a7e000 28 28 0 r---- libpthread-2.31.so
00007f0bd9a85000 68 68 0 r-x-- libpthread-2.31.so
00007f0bd9a96000 20 0 0 r---- libpthread-2.31.so
00007f0bd9a9b000 4 4 4 r---- libpthread-2.31.so
00007f0bd9a9c000 4 4 4 rw--- libpthread-2.31.so
00007f0bd9a9d000 16 4 4 rw--- [ anon ]
00007f0bd9aa1000 20 20 0 r---- libnss_systemd.so.2
00007f0bd9aa6000 148 148 0 r-x-- libnss_systemd.so.2
...
ffffffffff600000 4 0 0 --x-- [ anon ]
---------------- ------- ------- -------
total kB 11368 5664 1656
Kbytes: size of map in kilobytes
RSS: resident set size in kilobytes
Dirty: dirty pages (both shared and private) in kilobytes
```
Join the Network World communities on [Facebook][1] and [LinkedIn][2] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3543232/how-to-examine-processes-running-on-linux.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.facebook.com/NetworkWorld/
[2]: https://www.linkedin.com/company/network-world

View File

@ -1,452 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Fast data modeling with JavaScript)
[#]: via: (https://opensource.com/article/20/5/data-modeling-javascript)
[#]: author: (Szymon https://opensource.com/users/schodevio)
Fast data modeling with JavaScript
======
This tutorial showcases a method to model data in just a few minutes.
![Analytics: Charts and Graphs][1]
As a backend developer at [Railwaymen][2], a software house in Kraków, Poland, some of my tasks rely on models that manipulate and customize data retrieved from a database. When I wanted to improve my skills in frontend frameworks, I [chose Vue][3], and I thought it would be good to have a similar way to model data in a store. I started with some libraries that I found through [NPM][4], but they offered many more features than I needed.
So I decided to build my own solution, and I was very surprised that the base took less than 15 lines of code and is very flexible. I implemented this solution in an open source application that I developed called [Evally][5], a web app that helps businesses keep track of their employees' performance reviews and professional development. It reminds managers or HR representatives about employees' upcoming evaluations and gathers all of the data needed to assess their performance in the fairest way.
### Model and list
The only things you need to do are to create a class and use the defaultsDeep function in the [Lodash][6] JavaScript library:
```
_.defaultsDeep(object, [sources])
```
Arguments:
* `object (Object)`: The destination object
* `[sources] (...Object)`: The source objects
Returns:
* `(Object)`: Returns object
Here is how the [Lodash docs][7] describe this helper function:
> "Assigns recursively own and inherited enumerable string keyed properties of source objects to the destination object for all destination properties that resolve to undefined. Source objects are applied from left to right. Once a property is set, additional values of the same property are ignored."
For example:
```
_.defaultsDeep({ 'a': { 'b': 2 } }, { 'a': { 'b': 1, 'c': 3 } })
 // => { 'a': { 'b': 2, 'c': 3 } }
```
That's all! To try it out, create a file called **base.js** and import the defaultsDeep function from the Lodash package:
```
 // base.js
 import defaultsDeep from "lodash/defaultsDeep";
```
Next, create and export the Model class, where constructor will use the Lodash helper function to assign values to all passed attributes and initialize the attributes that were not received with default values:
```
 // base.js
 // ...
 export class Model {
   constructor(attributes = {}) {
     defaultsDeep(this, attributes, this.defaults);
   }
 }
```
Now, create your first real model, Employee, with attributes for firstName, lastName, position, and hiredAt, where position gets "Programmer" as its default value:
```
 // employee.js
 import { Model } from "./base.js";
 export class Employee extends Model {
   get defaults() {
     return {
       firstName: "",
       lastName: "",
       position: "Programmer",
       hiredAt: ""
     };
   }
 }
```
Next, begin creating employees:
```
// app.js
 import { Employee } from "./employee.js";
 const programmer = new Employee({
   firstName: "Will",
   lastName: "Smith"
 });
  // => Employee {
 //   firstName: "Will",
 //   lastName: "Smith",
 //   position: "Programmer",
 //   hiredAt: "",
 //   constructor: Object
 // }
 const techLeader = new Employee({
   firstName: "Charles",
   lastName: "Bartowski",
   position: "Tech Leader"
 });
  // => Employee {
 //   firstName: "Charles",
 //   lastName: "Bartowski",
 //   position: "Tech Leader",
 //   hiredAt: "",
 //   constructor: Object
 // }
```
You have two employees, and the first one's position is assigned from the defaults. Here's how multiple employees can be defined:
```
 // base.js
 // ...
 export class List {
   constructor(items = []) {
      this.models = items.map(item => new this.model(item));
   }
 }
```

```
 // employee.js
 import { Model, List } from "./base.js";
 // …
 export class EmployeesList extends List {
   get model() {
     return Employee;
   }
 }
```
The List class constructor maps an array of received items into an array of desired models. The only requirement is to provide a correct model class name:
```
 // app.js
 import { Employee, EmployeesList } from "./employee.js";
 // …
 const employees = new EmployeesList([
   {
     firstName: "Will",
     lastName: "Smith"
   },
   {
     firstName: "Charles",
     lastName: "Bartowski",
     position: "Tech Leader"
   }
 ]);
  // => EmployeesList {models: Array[2], constructor: Object}
  //  models: Array[2]
  //   0: Employee
  //     firstName: "Will"
  //     lastName: "Smith"
  //     position: "Programmer"
  //     hiredAt: ""
  //     <constructor>: "Employee"
  //   1: Employee
  //     firstName: "Charles"
  //     lastName: "Bartowski"
  //     position: "Tech Leader"
  //     hiredAt: ""
  //     <constructor>: "Employee"
  //   <constructor>: "EmployeesList"
```
### Ways to use this approach
This simple solution allows you to keep your data structure in one place and avoid code repetition. The [DRY][8] principle rocks! You can also customize your models as needed, such as in the following examples.
#### Custom getters
Do you need one attribute to be dependent on the others? No problem; you can do this by improving your Employee model:
```
// employee.js
 import { Model } from "./base.js";
 export class Employee extends Model {
   get defaults() {
     return {
       firstName: "",
       lastName: "",
       position: "Programmer",
       hiredAt: ""
     };
   }
   get fullName() {
     return [this.firstName, this.lastName].join(' ')
   }
 }
```

```
// app.js
 import { Employee, EmployeesList } from "./employee.js";
 // …
 console.log(techLeader.fullName);
  // => Charles Bartowski
```
Now you don't have to repeat the code to do something as simple as displaying the employee's full name.
#### Date formatting
Model is a good place to define other formats for given attributes. The best examples are dates:
```
// employee.js
 import { Model } from "./base.js";
 import moment from 'moment';
 export class Employee extends Model {
   get defaults() {
     return {
       firstName: "",
       lastName: "",
       position: "Programmer",
       hiredAt: ""
     };
   }
   get formattedHiredDate() {
     if (!this.hiredAt) return "---";
     return moment(this.hiredAt).format('MMMM DD, YYYY');
   }
 }
```

```
// app.js
 import { Employee, EmployeesList } from "./employee.js";
 // …
 techLeader.hiredAt = "2020-05-01";
 console.log(techLeader.formattedHiredDate);
  // => May 01, 2020
```
Another case related to dates (which I discovered developing the Evally app) is the ability to operate with different date formats. Here's an example that uses datepicker:
1. All employees fetched from the database have the hiredAt date in the format:
YEAR-MONTH-DAY, e.g., 2020-05-01
2. You need to display the hiredAt date in a more friendly format:
MONTH DAY, YEAR, e.g., May 01, 2020
3. A datepicker uses the format:
DAY-MONTH-YEAR, e.g., 01-05-2020
Resolve this issue with:
```
// employee.js
 import { Model } from "./base.js";
 import moment from 'moment';
 export class Employee extends Model {
   // …
   get formattedHiredDate() {
     if (!this.hiredAt) return "---";
     return moment(this.hiredAt).format('MMMM DD, YYYY');
   }
   get hiredDate() {
     return (
       this.hiredAt
         ? moment(this.hiredAt).format('DD-MM-YYYY')
         : ''
     );
   }
   set hiredDate(date) {
     const mDate = moment(date, 'DD-MM-YYYY');
 
     this.hiredAt = (
       mDate.isValid()
         ? mDate.format('YYYY-MM-DD')
         : ''
     );
   }
 }
```
This adds getter and setter functions to handle datepicker's functionality.
```
 // Get date from server
 techLeader.hiredAt = '2020-05-01';
 console.log(techLeader.formattedHiredDate);
  // => May 01, 2020
  // Datepicker gets date
  console.log(techLeader.hiredDate);
  // => 01-05-2020
  // Datepicker sets new date
  techLeader.hiredDate = '15-06-2020';
  // Display new date
  console.log(techLeader.formattedHiredDate);
  // => June 15, 2020
```
This makes it very simple to manage multiple date formats.
#### Storage for model-related information
Another use for a model class is storing general information related to the model, like paths for routing:
```
// employee.js
 import { Model } from "./base.js";
 import moment from 'moment';
 export class Employee extends Model {
   // …
   static get routes() {
     return {
       employeesPath: '/api/v1/employees',
        employeePath: id => `/api/v1/employees/${id}`
     }
   }
 }
```

```
 // Path for POST requests
 console.log(Employee.routes.employeesPath)
 // Path for GET request
 console.log(Employee.routes.employeePath(1))
```
### Customize the list of models
Don't forget about the List class, which you can customize as needed:
```
// employee.js
 import { Model, List } from "./base.js";
 // …
 export class EmployeesList extends List {
   get model() {
     return Employee;
   }
   findByFirstName(val) {
      return this.models.find(item => item.firstName === val);
   }
   filterByPosition(val) {
      return this.models.filter(item => item.position === val);
   }
 }
```

```
 console.log(employees.findByFirstName('Will'))
  // => Employee {
  //   firstName: "Will",
  //   lastName: "Smith",
  //   position: "Programmer",
  //   hiredAt: "",
  //   constructor: Object
  // }
  console.log(employees.filterByPosition('Tech Leader'))
  // => [Employee]
  //     0: Employee
  //       firstName: "Charles"
  //       lastName: "Bartowski"
  //       position: "Tech Leader"
  //       hiredAt: ""
  //       <constructor>: "Employee"
```
### Summary
This simple structure for data modeling in JavaScript should save you some development time. You can add new functions whenever you need them to keep your code cleaner and easier to maintain. All of this code is available in my [CodeSandbox][9], so try it out and let me know how it goes by leaving a comment below.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/5/data-modeling-javascript
作者:[Szymon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/schodevio
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/analytics-graphs-charts.png?itok=sersoqbV (Analytics: Charts and Graphs)
[2]: https://railwaymen.org/
[3]: https://blog.railwaymen.org/vue-vs-react-which-one-is-better-for-your-app-similarities-differences
[4]: https://www.npmjs.com/
[5]: https://github.com/railwaymen/evally
[6]: https://lodash.com/
[7]: https://lodash.com/docs/4.17.15
[8]: https://en.wikipedia.org/wiki/Don%27t_repeat_yourself
[9]: https://codesandbox.io/s/02jsdatamodels-1mhtb

View File

@ -1,97 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (4 Linux distributions for gaming)
[#]: via: (https://opensource.com/article/20/5/linux-gaming)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
4 Linux distributions for gaming
======
Linux offers plenty of great options for a work/play combo or a full
gaming console setup. Take our poll to tell us your favorite.
![Gaming with penguin pawns][1]
Gaming on Linux got a thorough kickstart in 2013 when Valve announced that their own SteamOS would be written on top of Linux. Since then, Linux users could realistically expect to play high-grade games that, in the past, required the purchase of a Windows computer or gaming console. The experience got off to a modest start, with just a few brave companies like CD Projekt Red, Deep Silver, Valve itself, and others putting the Linux penguin icon in their compatibility list, but eventually, even Gearbox and Square Enix were releasing their biggest titles on Linux. Today, [Valve's Proton project][2] helps ensure that even titles with no formal Linux release still work on SteamOS and other Linux distributions.
Valve didn't singlehandedly drag gaming into Linux, though. Well before Valve's initiative, there have been excellent independent games, blockbusters from id Software, and open source [gaming emulators][3] for Linux. Whether you want to play the latest releases or you want to relive classics from gaming history, Linux provides the only open source platform for your game rig. Here's an overview of what you might consider running on it.
### SteamOS
![Steam OS][4]
If you're looking for the full gaming PC experience—in which there's no difference between your desktop computer and a game console—then SteamOS is the obvious choice. On the one hand, there's nothing particularly special about SteamOS; it's essentially just [Debian Linux][5] with Steam set as the default startup application. When you boot your computer, Steam starts automatically, and you can interact with it using only your [Steam controller][6] or any [Xbox-style gamepad][7]. You can create the same configuration by installing Steam on any distribution and setting its "Big Picture mode" as a startup item.
However, SteamOS is ultimately specific to its purpose as a game console. While you can treat SteamOS as a normal desktop, the design choices of the distribution make it clear that it's intended as the frontend to a dedicated gaming machine. This isn't the distribution you're likely to use for your daily office or schoolwork. It's the "firmware" (except it's actually software) of a gaming console, first and foremost. When you're looking for a seamless, reliable, self-maintaining game console, build the machine of your dreams and install SteamOS.
### Lakka
![Lakka OS][8]
Similar in spirit to SteamOS, Lakka recreates the Playstation 3 interface, but for retro gaming. I installed Lakka on a Raspberry Pi Rev 1 using [Etcher][9] and was pleasantly surprised to find it ready for gaming upon bootup. Lakka loads to an interface that's eerily familiar to PS3 gamers, and, like a Playstation, you can control everything using just a [game controller][10].
Lakka focuses on retro gaming, meaning that, instead of Steam, it provides game emulators for old systems and engines. Provided you have ROM images, you can use the emulators to play games from Nintendo, Sega Genesis, Dreamcast, N64, or homebrew titles like [POWDER][11], [Warcraft Tower Defense][12], and others.
Lakka doesn't ship with any games, but it makes it easy for you to add games over SSH or Samba shares. Even if you've never used SSH or set up Samba (you've probably used it without knowing it), Lakka makes it easy to find your retro gaming system over your own network, so you can add games to it using whatever OS you have handy.
### Pop_OS!
![PopOS][13]
Not everyone is trying to build a game console—modern, retro, or otherwise. Sometimes, all you really want is a good computer with the ability to run games at top performance. [System76][14] maintains a desktop they call Pop_OS!, designed around the standard GNOME desktop with some custom additions. Pop_OS! doesn't do much by way of innovation, but it makes an impact in the way its designers maintain convenient defaults. For gamers, this includes easy access to Steam, Proton, WINE, game emulators, PlayOnLinux, automatic game controller recognition and configuration, and more. It's not far from its Ubuntu roots, but it has been refined just enough to make a noticeable difference.
When you're not playing games, Pop_OS! is also a wonderful productivity-focused desktop. It uses all of GNOME's built-in conveniences (such as the quick Activities menu overlay) to maximize efficiency, and adds useful modifications to bring the desktop closer to the universal expectation that's grown from decades of traditions founded in KDE Plasma, Finder, and Explorer. Pop_OS! is an intuitive and understated environment that helps you focus on whatever you're working on, until you break out the gaming gear, and then it makes sure you spend your time on entertainment instead of configuration.
### Drauger OS
![Drauger OS][15]
Situated somewhere between a dedicated gaming console and a plain old desktop is Drauger OS, with a simple interface designed to stay out of your way while also making it quick and easy to access the game applications you need. Drauger is still a young project, but it represents an interesting philosophy of computing and gaming—conserve every last resource for the task at hand. To that end, Drauger OS does away with the concept of a traditional desktop and instead provides a simplified control panel that lets you launch your game client (such as Steam, PlayOnLinux, Lutris, and so on), and configure services (such as your network) or launch an application. It's a little disorienting at first, especially because the control panel is designed to more or less disappear when in the background, but after an afternoon of interaction, you realize that the complexity of a full desktop is mostly unnecessary. The point of any computer is rarely its desktop. What you really care about is getting into an application as quickly and easily as possible, and then for that application to perform well.
The other side of this equation is performance. While having a drastically simplified desktop helps, Drauger OS attempts to maximize game performance by using a low-latency kernel. A kernel is the part of your operating system that communicates with external devices, such as game controllers and mice and keyboards, and even hard drives, memory, and video cards. An all-purpose kernel, such as the one that ships with most Linux distributions, gives more or less equal attention to all processes. A low-latency kernel can favor specific processes, including video and graphics, to ensure that calculations performed for important tasks are returned promptly, while mundane system tasks are assigned less importance. Drauger's Linux kernel is tuned for performance, so your games get top priority over all other processes.
### The Linux of your choice
![Pantheon OS][16]
Looking past the self-declared focal points of individual "gaming distributions," one Linux is ultimately much the same as the next. Amazingly, I play games even on my RHEL laptop, a distribution famous for its enterprise IT support, thanks to the [Flatpak Steam installer][17]. If you want to game on Linux in this decade, your question isn't how to do it but which system to use.
The easiest answer to which Linux to use is, ultimately, whatever Linux works best on your hardware: find a Linux distribution that boots, recognizes your computer hardware and your game controllers, and lets you play your games. Once you find that, install the games of your choice and get busy playing.
There are more great Linux distributions for gaming out there, including the [Fedora Games Spin][18], [RetroPie][19], [Clear Linux][20], [Manjaro][21], and so many more. What's your favorite? Tell us in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/5/linux-gaming
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/gaming_grid_penguin.png?itok=7Fv83mHR (Gaming with penguin pawns)
[2]: https://github.com/ValveSoftware/Proton
[3]: https://opensource.com/article/18/10/lutris-open-gaming-platform
[4]: https://opensource.com/sites/default/files/uploads/screenshot_from_2020-05-15_15-53-15_0.png (Steam OS)
[5]: http://debian.org
[6]: https://store.steampowered.com/app/353370/Steam_Controller/
[7]: https://www.logitechg.com/en-nz/products/gamepads/f710-wireless-gamepad.940-000119.html
[8]: https://opensource.com/sites/default/files/uploads/os-lakka_0.png (Lakka OS)
[9]: https://www.balena.io/etcher/
[10]: https://www.logitechg.com/en-nz/products/gamepads/f310-gamepad.940-000112.html
[11]: http://www.zincland.com/powder/index.php?pagename=about
[12]: https://ndswtd.wordpress.com/
[13]: https://opensource.com/sites/default/files/uploads/os-pop_os_0.jpg (PopOS)
[14]: https://system76.com/
[15]: https://opensource.com/sites/default/files/uploads/os-drauger_0.jpg (Drauger OS)
[16]: https://opensource.com/sites/default/files/uploads/os-pantheon_0.jpg (Pantheon OS)
[17]: https://flathub.org/apps/details/com.valvesoftware.Steam
[18]: https://labs.fedoraproject.org/en/games/
[19]: https://retropie.org.uk/
[20]: https://clearlinux.org/software/bundle/games
[21]: http://manjaro.org

View File

@ -1,246 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Getting Started With Nano Text Editor [Beginners Guide])
[#]: via: (https://itsfoss.com/nano-editor-guide/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Getting Started With Nano Text Editor [Beginners Guide]
======
[Nano][1] is the default [terminal-based text editor][2] in Ubuntu and many other Linux distributions. Though it is less complicated to use than the likes of [Vim][3] and [Emacs][4], that doesn't mean Nano cannot be overwhelming to use.
In this beginner's guide, I'll show you how to use the Nano text editor. I am also going to include a downloadable PDF cheat sheet at the end of the article so that you can refer to it for practicing and mastering Nano editor commands.
If you are just interested in a quick summary of Nano keyboard shortcuts, see the table in the next section.
Essential Nano keyboard shortcuts
**Shortcut** | **Description**
---|---
nano filename | Open file for editing in Nano
Arrow keys | Move cursor up, down, left and right
Ctrl+A, Ctrl+E | Move cursor to start and end of the line
Ctrl+Y/Ctrl+V | Move page up and down
Ctrl+_ | Move cursor to a certain location
Alt+A and then use arrow key | Set a marker and select text
Alt+6 | Copy the selected text
Ctrl+K | Cut the selected text
Ctrl+U | Paste the selected text
Ctrl+6 | Cancel the selection
Ctrl+K | Cut/delete entire line
Alt+U | Undo last action
Alt+E | Redo last action
Ctrl+W, Alt+W | Search for text, move to next match
Ctrl+\ | Search and replace
Ctrl+O | Save the modification
Ctrl+X | Exit the editor
### How to use Nano text editor
![][5]
I presume that you have the Nano editor installed on your system already. If not, please use your distribution's package manager to install it.
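On most distributions the package is simply called `nano`, so one of the following should work (shown for apt- and dnf-based systems):
```
sudo apt install nano    # Debian, Ubuntu and derivatives
sudo dnf install nano    # Fedora and derivatives
```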
#### Getting familiar with the Nano editor interface
If you've ever [used Vim][6] or Emacs, you'll notice that using Nano is a lot simpler. You can start writing or editing text straightaway.
Nano also shows the important keyboard shortcuts you need for editing at the bottom of the editor. This way, you won't get stuck at [exiting the editor like Vim][7].
The wider your terminal window, the more shortcuts it shows.
![Nano Editor Interface][8]
You should get familiar with the symbols in Nano.
* The caret symbol (^) means the Ctrl key
* The M character means the Alt key
When it says "^X Exit", it means to use the Ctrl+X keys to exit the editor. When it says "M-U Undo", it means to use the Alt+U keys to undo your last action.
#### Open or create a file for editing in Nano
You can open a file for editing in Nano like this:
```
nano my_file
```
If the file doesn't exist, Nano will still open the editor, and when you exit, you'll have the option of saving the text to my_file.
You may also open a new, unnamed file (like a new document) with Nano like this:
```
nano
```
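If you already know where you want to make a change, you can also tell Nano to open the file with the cursor at a specific line (and, optionally, column); stock Nano supports the + syntax shown below:

```
nano +25 my_file      # open my_file with the cursor on line 25
nano +25,10 my_file   # same, but place the cursor at column 10 as well
```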
#### Basic editing
You can start writing or modifying the text straightaway in Nano. There is no special insert mode or anything of that sort. It is almost like using a regular text editor, at least for writing and editing.
As soon as you modify anything in the file, you'll notice that the editor marks the file as modified at the top of the window.
![][9]
Nothing is saved to the file automatically unless you explicitly do so. When you exit the editor using the Ctrl+X keyboard shortcut, you'll be asked whether you want to save your modified text to the file or not.
#### Moving around in the editor
Mouse clicks don't work here. Use the arrow keys to move up, down, left, and right.
You can use the Home key or Ctrl+A to move to the beginning of a line and End key or Ctrl+E to move to the end of a line. Ctrl+Y/Page Up and Ctrl+V/Page Down keys can be used to scroll by pages.
If you want to go a specific location like last line, first line, to a certain text, use Ctrl+_ key combination. This will show you some options you can use at the bottom of the editor.
![Jump to a specific line in Nano][10]
#### Cut, copy and paste in Nano editor
If you don't want to spend too much time remembering the shortcuts, use the mouse.
Select text with the mouse and then use the right-click menu to copy it. You may also use the Ctrl+Shift+C [keyboard shortcut in Ubuntu][11] terminal. Similarly, you can right-click and select paste from the menu or use the Ctrl+Shift+V key combination.
**Nano specific shortcuts for copy and pasting**
Nano also provides its own shortcuts for cutting and pasting text, but those could become confusing for beginners.
Move your cursor to the beginning of the text you want to copy. Press Alt+A to set a marker. Now use the arrow keys to highlight the selection. Once you have selected the desired text, you can press Alt+6 to copy it or Ctrl+K to cut it. Use Ctrl+6 to cancel the selection.
Once you have copied or cut the selected text, you can use Ctrl+U to paste it.
![][12]
#### Delete text or lines in Nano
There is no dedicated option for deletion in Nano. You may use the Backspace or Delete key to delete one character at a time. Press them repeatedly or hold them to delete multiple characters.
You can also use Ctrl+K, which cuts the entire line. If you don't paste it anywhere, it's as good as deleting the line.
If you want to delete multiple lines, you may use Ctrl+K on all of them one by one.
Another option is to use the marker (Alt+A). Set the marker and move the arrow keys to select a portion of text. Use Ctrl+K to cut the text. There is no need to paste it, and the selected text is deleted (in a way).
#### Undo or redo your last action
Cut the wrong line? Pasted the wrong text selection? It's easy to make such silly mistakes, and it's just as easy to correct them.
You can undo and redo your last actions using:
* Alt+U: Undo
* Alt+E: Redo
You can repeat these key combinations to undo or redo multiple times.
#### Search and replace
If you want to search for certain text, use Ctrl+W, then enter the term you want to search for and press Enter. The cursor will move to the first match. To go to the next match, use the Alt+W keys.
![][13]
By default, the search is case-insensitive. You can also use regex for the search terms.
If you want to replace the searched term, use the Ctrl+\ keys, then enter the search term and press Enter. Next, it will ask for the term you want to replace the matched items with.
![][14]
The cursor will move to the first match, and Nano will ask for your confirmation before replacing the matched text. Use Y or N to confirm or deny, respectively. Either choice moves to the next match. You may also use A to replace all matches.
![][15]
#### Save your file while editing (without exiting)
In a graphical editor, you are probably used to saving your changes from time to time. In Nano, you can use Ctrl+O to save the changes you made to the file. It also works with a new, unnamed file.
![][16]
Nano actually shows this keyboard shortcut at the bottom, but it's not obvious. It says "^O Write Out", which actually means to use Ctrl+O (that's the letter O, not the number zero) to save your current work. Not everyone can figure that out.
In a graphical text editor, you probably use Ctrl+S to save your changes. Old habits die hard, but here they can cause trouble. If you accidentally press Ctrl+S out of habit, you'll notice that the terminal freezes and you can do nothing.
If that happens, press Ctrl+Q to unfreeze it; nothing is scarier than a frozen terminal and the prospect of losing your work.
#### Save and exit Nano editor
To exit the editor, press the Ctrl+X keys. When you do that, it will give you the option to save the file, discard the changes, or cancel the exit process.
![][17]
If you want to save the modified file as a new file (the "save as" function in usual editors), you can do that as well. When you press Ctrl+X to exit and then Y to save the changes, it asks which file it should save the changes to. You can change the file name at this point.
You'll need write permission on the file you are editing if you want to save your modifications to it.
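For example, editing a system file as a normal user lets you read and even modify the buffer, but saving will fail; the usual fix is to re-open the file with elevated privileges:

```
nano /etc/hosts         # opens, but Ctrl+O fails with a permission error
sudo nano /etc/hosts    # run Nano as root so that saving succeeds
```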
#### Forgot keyboard shortcut? Use help
Like any other terminal-based text editor, Nano relies heavily on keyboard shortcuts. Though it displays several useful shortcuts at the bottom of the editor, you cannot see all of them there.
It is not possible to remember all the shortcuts, especially in the beginning. What you can do is use the Ctrl+G keys to bring up the detailed help menu, which lists all the keyboard shortcuts.
![][18]
#### Always look at the bottom of the Nano editor
If you are using Nano, you'll notice that it displays important information at the bottom. This includes the keyboard shortcuts relevant to what you are doing at the moment, as well as the last action you performed.
![][19]
If you get too comfortable with Nano, you can get more screen space for editing by disabling the shortcuts displayed at the bottom. Use the Alt+X keys for that. I don't recommend doing it, to be honest. Pressing Alt+X again brings the shortcut display back.
### Download Nano cheatsheet [PDF]
There are a lot more shortcuts and editing options in Nano. I am not going to overwhelm you by mentioning them all.
Here's a quick summary of the important Nano keyboard shortcuts you should remember. The download link is under the image.
![][20]
[Download Nano Cheat Sheet (free PDF)][21]
You can download the cheat sheet, print it, and keep it at your desk. It will help you remember and master the shortcuts.
I hope you find this beginner's guide to the Nano text editor helpful. If you liked it, please share it on Reddit, [Hacker News][22], or in the various [Linux forums][23] you frequently visit.
I welcome your questions and suggestions.
--------------------------------------------------------------------------------
via: https://itsfoss.com/nano-editor-guide/
Author: [Abhishek Prakash][a]
Selected by: [lujun9972][b]
Translator: [Translator ID](https://github.com/译者ID)
Proofreader: [Proofreader ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://www.nano-editor.org/
[2]: https://itsfoss.com/command-line-text-editors-linux/
[3]: https://www.vim.org/
[4]: https://www.gnu.org/software/emacs/
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/05/nano-editor-guide.png?ssl=1
[6]: https://itsfoss.com/pro-vim-tips/
[7]: https://itsfoss.com/how-to-exit-vim/
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/nano-editor-interface.png?ssl=1
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/05/nano-modified-text.png?ssl=1
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/05/nano-editor-jump-to-line.png?ssl=1
[11]: https://itsfoss.com/ubuntu-shortcuts/
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/nano-editor-set-mark.png?ssl=1
[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/nano-search-text.png?ssl=1
[14]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/nano-editor-search-replace.png?ssl=1
[15]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/nano-editor-search-replace-confirm.png?ssl=1
[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/nano-editor-save-while-writing.png?ssl=1
[17]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/nano-editor-save-and-exit.png?ssl=1
[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/nano-editor-help-menu.png?ssl=1
[19]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/nano-editor-hints.png?ssl=1
[20]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/nano-cheatsheet.png?ssl=1
[21]: https://itsfoss.com/wp-content/uploads/2020/05/Nano-Cheat-Sheet.pdf
[22]: https://news.ycombinator.com/
[23]: https://itsfoss.community/

View File

@@ -1,356 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Control your computer time and date with systemd)
[#]: via: (https://opensource.com/article/20/6/time-date-systemd)
[#]: author: (David Both https://opensource.com/users/dboth)
Control your computer time and date with systemd
======
Keep your computer time in sync with NTP, Chrony, and systemd-timesyncd.
![Alarm clocks with different time][1]
Most people are concerned with time. We get up in time to perform our morning rituals and commute to work (a short trip for many of us these days), take a break for lunch, meet a project deadline, celebrate birthdays and holidays, catch a plane, and so much more.
Some of us are even _obsessed_ with time. My watch is solar-powered and obtains the exact time from the [National Institute of Standards and Technology][2] (NIST) in Fort Collins, Colorado, via the [WWVB][3] time signal radio station located there. The time signals are synced to the atomic clock, also located in Fort Collins. My Fitbit syncs up to my phone, which is synced to a [Network Time Protocol][4] (NTP) server, which is ultimately synced to the atomic clock.
### Why time is important to computers
There are many reasons our devices and computers need the exact time. For example, in banking, stock markets, and other financial businesses, transactions must be maintained in the proper order, and exact time sequences are critical for that.
Our phones, tablets, cars, GPS systems, and computers all require precise time and date settings. I want the clock on my computer desktop to be correct, so I can count on my local calendar application to pop up reminders at the correct time. The correct time also ensures SystemV cron jobs and systemd timers trigger at the correct time.
The correct time is also important for logging, so it is a bit easier to locate specific log entries based on the time. For one example, I once worked in DevOps (it was not called that at the time) for the State of North Carolina email system. We used to process more than 20 million emails per day. Following the trail of email through a series of servers or determining the exact sequence of events by using log files on geographically dispersed hosts can be much easier when the computers in question keep exact times.
### Multiple times
Linux hosts have two times to consider: system time and RTC time. RTC stands for real-time clock, which is a fancy and not particularly accurate name for the system hardware clock.
The hardware clock runs continuously, even when the computer is turned off, by using a battery on the system motherboard. The RTC's primary function is to keep the time when a connection to a time server is not available. In the dark ages of personal computers, there was no internet to connect to a time server, so the only time a computer had available was the internal clock. Operating systems had to rely on the RTC at boot time, and the user had to manually set the system time using the hardware BIOS configuration interface to ensure it was correct.
The hardware clock does not understand the concept of time zones; only the time is stored in the RTC, not the time zone nor an offset from UTC (Coordinated Universal Time, which is also known as GMT, or Greenwich Mean Time). You can set the RTC with a tool I will explore later in this article.
The system time is the time known by the operating system. It is the time you see on the GUI clock on your desktop, in the output from the `date` command, in timestamps for logs, and in file access, modify, and change times.
The [`rtc` man page][5] contains a more complete discussion of the RTC and system clocks and RTC's functionality.
### What about NTP?
Computers worldwide use the NTP (Network Time Protocol) to synchronize their time with internet standard reference clocks through a hierarchy of NTP servers. The primary time servers are at stratum 1, and they are connected directly to various national time services at stratum 0 via satellite, radio, or even modems over phone lines. The time services at stratum 0 may be an atomic clock, a radio receiver that is tuned to the signals broadcast by an atomic clock, or a GPS receiver using the highly accurate clock signals broadcast by GPS satellites.
To prevent time requests from time servers or clients lower in the hierarchy (i.e., with a higher stratum number) from overwhelming the primary reference servers, several thousand public NTP stratum 2 servers are open and available for all to use. Many organizations and users (including me) with large numbers of hosts that need an NTP server choose to set up their own time servers, so only one local host accesses the stratum 2 or 3 time servers. Then they configure the remaining hosts in the network to use the local time server. In the case of my home network, that is a stratum 3 server.
### NTP implementation options
The original NTP implementation is **ntpd**, and it has been joined by two newer ones, **chronyd** and **systemd-timesyncd**. All three keep the local host's time synchronized with an NTP time server. The systemd-timesyncd service is not as robust as chronyd, but it is sufficient for most purposes. It can perform large time jumps if the RTC is far out of sync, and it can adjust the system time gradually to stay in sync with the NTP server if the local system time drifts a bit. The systemd-timesyncd service cannot be used as a time server.
[Chrony][6] is an NTP implementation containing two programs: the chronyd daemon and a command-line interface called chronyc. As I explained in a [previous article][7], Chrony has some features that make it the best choice for many environments, chiefly:
* Chrony can synchronize to the time server much faster than the old ntpd service. This is good for laptops or desktops that do not run constantly.
* It can compensate for fluctuating clock frequencies, such as when a host hibernates or enters sleep mode, or when the clock speed varies due to frequency stepping that slows clock speeds when loads are low.
* It handles intermittent network connections and bandwidth saturation.
* It adjusts for network delays and latency.
* After the initial time sync, Chrony never stops the clock. This ensures stable and consistent time intervals for many system services and applications.
* Chrony can work even without a network connection. In this case, the local host or server can be updated manually.
* Chrony can act as an NTP server (a minimal server configuration sketch follows this list).
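Turning a host into that kind of local time server is mostly a matter of a few lines in /etc/chrony.conf. The sketch below is a minimal, hypothetical example; the pool name and the network range are assumptions you would replace with your own:

```
# /etc/chrony.conf: minimal sketch for a LAN time server (hypothetical values)
pool 2.fedora.pool.ntp.org iburst   # sync this host against public pool servers
allow 192.168.0.0/24                # serve time to hosts on the local network
local stratum 10                    # keep serving, at a high stratum, if the WAN link drops
```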
Just to be clear, NTP is a protocol that is implemented on a Linux host using either Chrony or the systemd-timesyncd.service.
The NTP and Chrony RPM packages are available in standard Fedora repositories, while systemd-timesyncd ships as part of the systemd RPM itself, which is installed by default with Fedora; the timesync service, however, is not enabled by default.
You can install all three and switch between them, but that is a pain and not worth the trouble. Modern releases of Fedora, CentOS, and RHEL have moved from NTP to Chrony as their default timekeeping implementation, and they also install systemd-timesyncd. I find that Chrony works well, provides a better interface than the NTP service, presents much more information, and increases control, which are all advantages for the sysadmin.
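For instance, once chronyd is running, its command-line client shows exactly how the local clock is being steered. These two read-only queries are safe to try:

```
chronyc tracking     # reference server, stratum, current offset, and drift rate
chronyc sources -v   # each configured time source, with reachability and offset
```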
### Disable other NTP services
It's possible an NTP service is already running on your host. If so, you need to disable it before switching to something else. I have been using chronyd, so I used the following commands to stop and disable it. Run the appropriate commands for whatever NTP daemon you are using on your host:
```
[root@testvm1 ~]# systemctl disable chronyd ; systemctl stop chronyd
Removed /etc/systemd/system/multi-user.target.wants/chronyd.service.
[root@testvm1 ~]#
```
Verify that it is both stopped and disabled:
```
[root@testvm1 ~]# systemctl status chronyd
● chronyd.service - NTP client/server
     Loaded: loaded (/usr/lib/systemd/system/chronyd.service; disabled; vendor preset: enabled)
     Active: inactive (dead)
       Docs: man:chronyd(8)
             man:chrony.conf(5)
[root@testvm1 ~]#
```
### Check the status before starting
The systemd timesync status indicates whether systemd has initiated an NTP service. Because you have not yet started the systemd NTP service, the `timesync-status` command returns no data:
```
[root@testvm1 ~]# timedatectl timesync-status
Failed to query server: Could not activate remote peer.
```
But a straight `status` request provides some important information. For example, the `timedatectl` command without an argument or options implies the `status` subcommand as default:
```
[root@testvm1 ~]# timedatectl status
           Local time: Fri 2020-05-15 08:43:10 EDT  
           Universal time: Fri 2020-05-15 12:43:10 UTC  
                 RTC time: Fri 2020-05-15 08:43:08      
                Time zone: America/New_York (EDT, -0400)
System clock synchronized: no                          
              NTP service: inactive                    
          RTC in local TZ: yes                    
Warning: The system is configured to read the RTC time in the local time zone.
         This mode cannot be fully supported. It will create various problems
         with time zone changes and daylight saving time adjustments. The RTC
         time is never updated, it relies on external facilities to maintain it.
         If at all possible, use RTC in UTC by calling
         'timedatectl set-local-rtc 0'.
[root@testvm1 ~]#
```
This returns the local time for your host, the UTC time, and the RTC time. It shows that the system time is set to the `America/New_York` time zone (`TZ`), the RTC is set to the time in the local time zone, and the NTP service is not active. The RTC time has started to drift a bit from the system time. This is normal with systems whose clocks have not been synchronized. The amount of drift on a host depends upon the amount of time since the system was last synced and the speed of the drift per unit of time.
There is also a warning message about using local time for the RTC—this relates to time-zone changes and daylight saving time adjustments. If the computer is off when changes need to be made, the RTC time will not change. This is not an issue in servers or other hosts that are powered on 24/7. Also, any service that provides NTP time synchronization will ensure the host is set to the proper time early in the startup process, so it will be correct before it is fully up and running.
### Set the time zone
Usually, you set a computer's time zone during the installation procedure and never need to change it. However, there are times it is necessary to change the time zone, and there are a couple of tools to help. Linux uses time-zone files to define the local time zone in use by the host. These binary files are located in the `/usr/share/zoneinfo` directory. The default for my time zone is defined by the link `/etc/localtime -> ../usr/share/zoneinfo/America/New_York`. But you don't need to know that to change the time zone.
But you do need to know the official time-zone name for your location. Say you want to change the time zone to Los Angeles:
```
[root@testvm2 ~]# timedatectl list-timezones | column
<SNIP>
America/La_Paz                  Europe/Budapest
America/Lima                    Europe/Chisinau
America/Los_Angeles             Europe/Copenhagen
America/Maceio                  Europe/Dublin
America/Managua                 Europe/Gibraltar
America/Manaus                  Europe/Helsinki
<SNIP>
```
Now you can set the time zone. I used the `date` command to verify the change, but you could also use `timedatectl`:
```
[root@testvm2 ~]# date
Tue 19 May 2020 04:47:49 PM EDT
[root@testvm2 ~]# timedatectl set-timezone America/Los_Angeles
[root@testvm2 ~]# date
Tue 19 May 2020 01:48:23 PM PDT
[root@testvm2 ~]#
```
You can now change your host's time zone back to your local one.
### systemd-timesyncd
The systemd-timesyncd daemon provides an NTP implementation that is easy to manage within a systemd context. It is installed by default in Fedora and Ubuntu and started by default in Ubuntu but not in Fedora. I am unsure about other distros; you can check yours with:
```
[root@testvm1 ~]# systemctl status systemd-timesyncd
```
### Configure systemd-timesyncd
The configuration file for systemd-timesyncd is `/etc/systemd/timesyncd.conf`. It is a simple file with fewer options included than the older NTP service and chronyd. Here are the complete contents of the default version of this file on my Fedora VM:
```
#  This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU Lesser General Public License as published by
#  the Free Software Foundation; either version 2.1 of the License, or
#  (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See timesyncd.conf(5) for details.
[Time]
#NTP=
#FallbackNTP=0.fedora.pool.ntp.org 1.fedora.pool.ntp.org 2.fedora.pool.ntp.org 3.fedora.pool.ntp.org
#RootDistanceMaxSec=5
#PollIntervalMinSec=32
#PollIntervalMaxSec=2048
```
The only section it contains besides comments is `[Time]`, and all the lines are commented out. These are the default values and do not need to be changed or uncommented (unless you have some reason to do so). If you do not have a specific NTP time server defined in the `NTP=` line, Fedora's default is to fall back on the Fedora pool of time servers. I like to add the time server on my network to this line:
```
NTP=myntpserver
```
### Start timesync
Starting and enabling systemd-timesyncd is just like any other service:
```
[root@testvm2 ~]# systemctl enable systemd-timesyncd.service
Created symlink /etc/systemd/system/dbus-org.freedesktop.timesync1.service → /usr/lib/systemd/system/systemd-timesyncd.service.
Created symlink /etc/systemd/system/sysinit.target.wants/systemd-timesyncd.service → /usr/lib/systemd/system/systemd-timesyncd.service.
[root@testvm2 ~]# systemctl start systemd-timesyncd.service
[root@testvm2 ~]#
```
### Set the hardware clock
Here's what one of my systems looked like after starting timesyncd:
```
[root@testvm2 systemd]# timedatectl
               Local time: Sat 2020-05-16 14:34:54 EDT  
           Universal time: Sat 2020-05-16 18:34:54 UTC  
                 RTC time: Sat 2020-05-16 14:34:53      
                Time zone: America/New_York (EDT, -0400)
System clock synchronized: yes                          
              NTP service: active                      
          RTC in local TZ: no    
```
The RTC time is around a second off from local time (EDT), and the discrepancy grows by a couple more seconds over the next few days. Because RTC does not have the concept of time zones, the `timedatectl` command must do a comparison to determine which time zone is a match. If the RTC time does not match local time exactly, it is not considered to be in the local time zone.
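You can also read the RTC directly and compare it against the system clock yourself; `hwclock` with the `--show` option prints the hardware clock's current value:

```
[root@testvm2 ~]# date ; hwclock --show
```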
In search of a bit more information, I checked the status of systemd-timesyncd.service and found:
```
[root@testvm2 systemd]# systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
     Loaded: loaded (/usr/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: disabled)
     Active: active (running) since Sat 2020-05-16 13:56:53 EDT; 18h ago
       Docs: man:systemd-timesyncd.service(8)
   Main PID: 822 (systemd-timesyn)
     Status: "Initial synchronization to time server 163.237.218.19:123 (2.fedora.pool.ntp.org)."
      Tasks: 2 (limit: 10365)
     Memory: 2.8M
        CPU: 476ms
     CGroup: /system.slice/systemd-timesyncd.service
             └─822 /usr/lib/systemd/systemd-timesyncd
May 16 09:57:24 testvm2.both.org systemd[1]: Starting Network Time Synchronization...
May 16 09:57:24 testvm2.both.org systemd-timesyncd[822]: System clock time unset or jumped backwards, restoring from recorded timestamp: Sat 2020-05-16 13:56:53 EDT
May 16 13:56:53 testvm2.both.org systemd[1]: Started Network Time Synchronization.
May 16 13:57:56 testvm2.both.org systemd-timesyncd[822]: Initial synchronization to time server 163.237.218.19:123 (2.fedora.pool.ntp.org).
[root@testvm2 systemd]#
```
Notice the log message that says the system clock time was unset or jumped backward. The timesync service sets the system time from a timestamp. Timestamps are maintained by the timesync daemon and are created at each successful time synchronization.
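On my Fedora system, that timestamp is kept as the modification time of a file under `/var/lib/systemd/timesync/`; treat the exact path as an assumption on other distributions, but you can inspect it like this:

```
[root@testvm2 ~]# ls -l /var/lib/systemd/timesync/clock
```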
The `timedatectl` command does not have the ability to set the value of the hardware clock from the system clock; it can only set the time and date from a value entered on the command line. However, you can set the RTC to the same value as the system time by using the `hwclock` command:
```
[root@testvm2 ~]# /sbin/hwclock --systohc --localtime
[root@testvm2 ~]# timedatectl
               Local time: Mon 2020-05-18 13:56:46 EDT  
           Universal time: Mon 2020-05-18 17:56:46 UTC  
                 RTC time: Mon 2020-05-18 13:56:46      
                Time zone: America/New_York (EDT, -0400)
System clock synchronized: yes                          
              NTP service: active                      
          RTC in local TZ: yes
```
The `--localtime` option ensures that the hardware clock is set to local time, not UTC.
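Given the warning `timedatectl` printed earlier, the cleaner long-term setup on a Linux-only host is to keep the RTC in UTC instead. A short sketch of that change:

```
[root@testvm2 ~]# hwclock --systohc --utc      # write the system time to the RTC as UTC
[root@testvm2 ~]# timedatectl set-local-rtc 0  # record that the RTC now holds UTC
```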
### Do you really need RTC?
Any NTP implementation will set the system clock during the startup sequence, so is the RTC necessary? Not really, as long as you have a network connection to a time server. However, many systems do not have full-time access to a network connection, so the hardware clock is useful: Linux can read it and use it to set the system time. This is a better solution than having to set the time by hand, even if the RTC might drift away from the actual time.
### Summary
This article explored the use of some systemd tools for managing date, time, and time zones. The systemd-timesyncd tool provides a decent NTP client that can keep time on a local host synchronized with an NTP server. However, systemd-timesyncd does not provide a server service, so if you need an NTP server on your network, you must use something else, such as Chrony, to act as a server.
I prefer to have a single implementation for any service in my network, so I use Chrony. If you do not need a local NTP server, or if you do not mind dealing with Chrony for the server and systemd-timesyncd for the client and you do not need Chrony's additional capabilities, then systemd-timesyncd is a serviceable choice for an NTP client.
There is another point I want to make: You do not have to use systemd tools for NTP implementation. You can use the old ntpd or Chrony or some other NTP implementation. systemd is composed of a large number of services; many of them are optional, so they can be disabled and something else used in its place. It is not the huge, monolithic monster that some make it out to be. It is OK to not like systemd or parts of it, but you should make an informed decision.
I don't dislike systemd's implementation of NTP, but I much prefer Chrony because it meets my needs better. And that is what Linux is all about.
### Resources
There is a great deal of information about systemd available on the internet, but much is terse, obtuse, or even misleading. In addition to the resources mentioned in this article, the following webpages offer more detailed and reliable information about systemd startup.
* The Fedora Project has a good, practical [guide to systemd][8]. It has pretty much everything you need to know in order to configure, manage, and maintain a Fedora computer using systemd.
* The Fedora Project also has a good [cheat sheet][9] that cross-references the old SystemV commands to comparable systemd ones.
* For detailed technical information about systemd and the reasons for creating it, check out [Freedesktop.org][10]'s [description of systemd][11].
* [Linux.com][12]'s "More systemd fun" offers more advanced systemd [information and tips][13].
There is also a series of deeply technical articles for Linux sysadmins by Lennart Poettering, the designer and primary developer of systemd. These articles were written between April 2010 and September 2011, but they are just as relevant now as they were then. Much of everything else good that has been written about systemd and its ecosystem is based on these papers.
* [Rethinking PID 1][14]
* [systemd for Administrators, Part I][15]
* [systemd for Administrators, Part II][16]
* [systemd for Administrators, Part III][17]
* [systemd for Administrators, Part IV][18]
* [systemd for Administrators, Part V][19]
* [systemd for Administrators, Part VI][20]
* [systemd for Administrators, Part VII][21]
* [systemd for Administrators, Part VIII][22]
* [systemd for Administrators, Part IX][23]
* [systemd for Administrators, Part X][24]
* [systemd for Administrators, Part XI][25]
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/6/time-date-systemd
Author: [David Both][a]
Selected by: [lujun9972][b]
Translator: [Translator ID](https://github.com/译者ID)
Proofreader: [Proofreader ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://opensource.com/users/dboth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/clocks_time.png?itok=_ID09GDk (Alarm clocks with different time)
[2]: https://en.wikipedia.org/wiki/National_Institute_of_Standards_and_Technology
[3]: https://en.wikipedia.org/wiki/WWVB
[4]: https://en.wikipedia.org/wiki/Network_Time_Protocol
[5]: https://linux.die.net/man/4/rtc
[6]: https://chrony.tuxfamily.org/
[7]: https://opensource.com/article/18/12/manage-ntp-chrony
[8]: https://docs.fedoraproject.org/en-US/quick-docs/understanding-and-administering-systemd/index.html
[9]: https://fedoraproject.org/wiki/SysVinit_to_Systemd_Cheatsheet
[10]: http://Freedesktop.org
[11]: http://www.freedesktop.org/wiki/Software/systemd
[12]: http://Linux.com
[13]: https://www.linux.com/training-tutorials/more-systemd-fun-blame-game-and-stopping-services-prejudice/
[14]: http://0pointer.de/blog/projects/systemd.html
[15]: http://0pointer.de/blog/projects/systemd-for-admins-1.html
[16]: http://0pointer.de/blog/projects/systemd-for-admins-2.html
[17]: http://0pointer.de/blog/projects/systemd-for-admins-3.html
[18]: http://0pointer.de/blog/projects/systemd-for-admins-4.html
[19]: http://0pointer.de/blog/projects/three-levels-of-off.html
[20]: http://0pointer.de/blog/projects/changing-roots
[21]: http://0pointer.de/blog/projects/blame-game.html
[22]: http://0pointer.de/blog/projects/the-new-configuration-files.html
[23]: http://0pointer.de/blog/projects/on-etc-sysinit.html
[24]: http://0pointer.de/blog/projects/instances.html
[25]: http://0pointer.de/blog/projects/inetd.html

View File

@@ -1,381 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Exploring Algol 68 in the 21st century)
[#]: via: (https://opensource.com/article/20/6/algol68)
[#]: author: (Chris Hermansen https://opensource.com/users/clhermansen)
Exploring Algol 68 in the 21st century
======
An in-depth look at a forgotten language and its modern applications.
![Old UNIX computer][1]
In the preface to his excellent textbook _Algol 68: A First and Second Course_, Andrew McGettrick writes:
> "This book originated from lectures first given at the University of Strathclyde in 1973-4 to first-year undergraduates, many of whom had no previous knowledge of programming. Many of the students were not taking computer science as their main subject but merely as a subsidiary subject. They, therefore, served as a suitable audience on whom to inflict lectures attempting to teach Algol 68 as a first programming language."
Perhaps this quote carries particular weight for me as I, too, was a first-year student in 1973-1974, though at a different institution—the University of British Columbia. Moreover, "back in those days," the introductory computer science course at UBC was taught in the second year using Waterloo FORTRAN with a bit of IBM 360 Assembler thrown in; nothing so exotic as Algol 68. In my case, I didn't encounter Algol 68 until my third year. Maybe this wait, along with experiences in other programming languages, contributed to my lifelong fascination with this underrated and wonderful programming language. And thanks to Marcel van der Veer, who has created [a very fine implementation of Algol 68][2] called Algol 68 Genie that is now in my distro's repositories, I've been able, at long last, to explore Algol 68 at my leisure. I should also mention that Marcel's book, [_Learning Algol 68 Genie_][3], is of great utility both for newcomers and as a refresher course in Algol 68.
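If you want to follow along, Algol 68 Genie is an easy install. The package name below is the Debian/Ubuntu one and is an assumption for other distributions:

```
$ sudo apt install algol68g                                 # package name may vary by distro
$ echo 'print(("Hello, Algol 68!", new line))' > hello.a68
$ a68g hello.a68
```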
Because I've been having so much fun rediscovering Algol 68, I thought I'd share some of my thoughts and impressions.
### What people say about Algol 68
If it's worth reading [the overview of Algol 68 on Wikipedia][4], then it's really worth reading this paragraph from the [_Revised Report on the Algorithmic Language Algol 68_][5]:
> "The original authors acknowledged with pleasure and thanks the wholehearted cooperation, support, interest, criticism, and violent objections from members of WG 2.1 and many other people interested in Algol."
"Criticism and violent objections"—wow! In fact, some committee members were so unhappy with the direction the committee was taking that they left and started their own language definition projects, at least partly as a protest against Algol 68. Niklaus Wirth, for example, fed up with the complexity of Algol 68, [went off to design Pascal][6]. And having written and supported a fair bit of Pascal code from about 1984 through 2000 or so, I am here to tell you that Pascal is about as far from Algol 68 as it's possible to get. Which, it seems to me, was Wirth's point.
Dennis Ritchie [gave a talk][7] at the second ACM History of Programming Languages conference in Cambridge, Massachusetts in 1993, in which he compares Bliss, Pascal, Algol 68, and C. In that talk, he made several interesting observations:
* All of the four languages are "based on this old, old model of machines that pick up things, do operations, and put them someplace else" and "are very much influenced by Algol 60 and FORTRAN."
* "When Steve Bourne (yes, the person who created the Bourne shell) came to Bell Labs with the Algol 68C compiler, he made it do the same things that C could do; it had Unix system call interfaces and so forth."
* "I think the language really did suffer from its definition in terms of acceptance. Nevertheless, it was really quite practical."
* "In some ways, Algol 68 is the most elegant of the languages I've been discussing. I think in some ways, it's even the most influential, although as a language in itself, it has nearly gone away."
There is much more opinion on Algol 68 still prominent on the Internet today. A lot of it is negative, but oh well! I suspect a great deal of it is not informed by actual use. One very interesting place to find coders just getting down to using the language (and many others, some marvelously obscure) is on [the Rosetta Code Wiki][8]. Go there and form your own opinion! Or follow me as I review what strikes me as great and not so great about Algol 68.
### What seems important and relevant to me about Algol 68
Algol 68, as a programming language, offers some distinctive and useful ideas that were innovative at the time and have shown up, to some degree or the other, in other languages since then.
#### Key design principles clearly explained in the Revised Report
The committee that designed Algol 68 was driven by a very clear set of principles:
* Completeness and clarity of description (aided by the use of two-level grammar, which provoked many negative opinions)
* Orthogonal design; that is, basic concepts defined in the language can be used anywhere that usage can be said to "make sense." As an example—every expression that can reasonably be expected to yield a value does, in fact, yield a value.
* Security by way of careful syntactical design (that two-level grammar again); most errors thought to be related to semantic concepts in other languages can be detected at compile time.
* Efficiency, in that programs should run efficiently (on the hardware of the day) without requiring significant efforts to optimize the generated code, and furthermore:
* No run time-type checking except in the unique case of types that present alternative configurations at run time (`united` types in Algol 68, similar to `union` types in C)
* Type-independent parsing (again, the two-level grammar at work here) and certainty that, in a finite number of steps, any input sequence can be evaluated as to whether it is a program or not
* Loop structures that encourage the use of well-known loop optimization strategies of the day
* A symbol set (with alternatives) that worked on the various different character sets available on computers at the time
I find it instructive to see the emphasis on very strong static typing (50 years ago!!) and the benefits that were expected to accrue, in contrast to today's universe of dynamically typed languages and languages with weak static typing that have helped spawn an entire industry of run-time testing. (OK, maybe that's not completely fair, but it contains a certain element of truth.)
#### Structures to group statements together without extra grouping constructs
In programs written in Algol 60 and Pascal, we see a lot of `begin` and `end` tokens; in C, C++, Java, and so forth, we see a lot of `{` and `}`. For example, the simple expression to calculate the absolute value `av` of an integer value `iv` can be written in either Algol 60 or Pascal as:
```
if iv < 0 then av := -iv else av := iv
```
If we wanted to set a Boolean value stating whether `iv` was negative, then we need to start inserting `begin` and `end`:
```
if iv < 0 then begin av := -iv; negative := true end else begin av := iv; negative := false end
```
Formally, Algol 68 uses boldface for tokens with special meaning like **if** or **then**, and uses italics for names of things like the _print_() procedure. This wasn't practical back in the day when many still used keypunches for coding, and it would still be a bit weird today. So Algol 68 implementations usually provided some method of marking special symbols (called _stropping_), leaving everything else unmarked. By default, Algol 68 Genie uses upper-case stropping, so symbols like **if** are coded as IF, and names of things can only be in lower case. Worth noting, however, is that it's completely OK to have a variable named "if", should that suit the purpose at hand. Anyway, in case any reader is inclined to copy/paste, I'm using the Genie convention in my code samples.
Moreover, Algol 68 has a closed syntax, which the Bourne shell and Bash have inherited.  So the previous line of code in Algol 68 Genie would be:
```
IF iv < 0 THEN av := -iv; negative := TRUE ELSE av := iv; negative := FALSE FI
```
The token `fi` closes off the preceding `if`, in case that's not obvious. Now, perhaps I'm the only person in the world who has ever written some Java that looks like this:
```
if (something)
    statement;
```
and then found myself inserting a call to `println` to debug that code:
```
if (something)
    statement;
    System.err.println(stuff);  /* not in the then-part of if!!! */
```
cluelessly forgetting to wrap the then-part in `{` … `}`. And of course, this isn't the end of the world, but when the insertion is something with less obvious results, well, let's just say I've spent a fair bit of time debugging this kind of thing over the years.
But that can't happen in Algol 68. Well, mostly, anyway. Algol 68 still needs `begin` … `end` for operator and procedure declarations. But `if` … `fi`, `do` … `od`, and `case` … `esac` (the Algol 68 switch statement) are all closed.
We see this same concept in Go today; an "if" statement looks like if … { … }; the { and } are required. And as I already mentioned, the Bourne shell and its descendants use similar constructs.
#### Almost every expression yields a value
Look at the expression `iv < 0` above; pretty obvious that yields a value, and most likely that value is Boolean (`true` or `false`). So no big deal there.
But an assignment statement also yields a value, namely, the left-hand side of the assignment statement after the assignment is completed.
A sequence of statements yields whatever the final statement (or expression) yields as a value.
An "if" statement yields either the value of the then-part or the else-part, depending on whether the expression following "if" yields `true` or `false`.
An example: think of using the C, Java… ternary operator for our absolute value calculation:
```
av = iv < 0 ? -iv : iv;
```
In Algol 68, we don't need an extra "ternary operator," as the "if" statement works just fine:
```
av := IF iv < 0 THEN -iv ELSE iv FI
```
This might be a good moment to mention that Algol 68 provides "brief" versions of symbols like `begin`, `end`, `if`, `then`, `else`, and so forth, using `(`, `|`, and `)`:
```
av := ( iv < 0 | -iv | iv )
```
has the same meaning as the previous expression.
One thing that surprised me when I first encountered it is that loops don't yield a value. But loops have a few differences that end up making sense once they are fully understood.
A loop in Algol 68 might look like this:
```
FOR lv FROM 1 BY 1 TO 1000 WHILE 2 * lv * lv < limit DO … OD
```
The variable `lv` here is the loop variable, implicitly declared by the `for` as an integer. Its scope is the entire `for` … `od`, and its value is retained from one iteration to the next. We can declare a regular variable in the `while` … `do` part, just like in an `if` … `then` part. Its scope is the `while` … `od` part, but its value is not retained from one iteration to the next. So, for example, if we want to accumulate the sum of the elements of an array, we must write:
```
INT sum := 0; FOR ai FROM LWB array TO UPB array DO sum +:= array[ai] OD
```
where the operators `lwb` and `upb` deliver the smallest and largest index values, respectively, defined for the array, and the `+:=` symbol has the same meaning as `+=` in C or Java.
If we wanted to return the sum as a value, we would write:
```
BEGIN INT sum := 0; FOR ai FROM LWB array TO UPB array DO sum +:= array[ai] OD; sum END
```
Of course, we could replace `begin` and `end` with `(` and `)` for brevity. This expression would be a reasonable implementation of a procedure (or operator) that returns the sum of the values of the elements of an array.
#### Orthogonality—the same expression will work almost anywhere
Look again at the expression `iv < 0` above.
Let's step back a bit and include a definition of `iv` and the acquisition of its value. Then the code might look like:
```
INT iv; read(iv); IF iv < 0 THEN … FI
```
However, we could just as well write:
```
IF INT iv; read(iv); iv < 0 THEN … FI
```
Here we can see orthogonality at work: the declaration and reading of the variable can occur between the `if` and the logical expression testing the variable, because the value delivered is just that of the final expression. Moreover, this works with Algol 68 semantics to provide an interesting difference: in the first case, the scope of `iv` is the code surrounding the "if" statement; in the second, the scope is just between the `if` and the `fi`. To my way of thinking, this option means that we should have fewer variables declared far away from where they are used, and the ones that remain really do have a "long life" in the code.
This has practical importance as well. Think, for example, of code that uses some kind of SQL interface to execute several scripts in a database and return the values for further analysis. Usually, in this case, the programmer needs to do a bit of work to set up the connection to the database, pass a query string to the execute command, and retrieve the results. Each instance requires declaring some variables to hold the connection, the query string, and the results. How nice it is when these variables can be declared locally to the results accumulation code! This also facilitates adding a new query-analysis step with a quick copy-paste. And yes, it's good to turn these code snippets into procedure calls, especially in a language that supports lambdas (anonymous procedures) so as to avoid obscuring the different analysis steps with repeated administrative steps. But having very locally-defined administrative variables facilitates the refactoring effort required.
Another great consequence of orthogonality is that we can have the equivalent of the ternary operator on the left-hand side of an assignment statement as well as on the right-hand side.
Let's suppose we're processing an input stream of signed integers, and we want to accumulate positive integers into gains and negative integers into losses. Then, the following Algol 68 code would work:
```
IF amount < 0 THEN losses +:= amount ELSE gains +:= amount FI
```
However, there's no need to repeat the `+:= amount` here; we can move it outside the `if` … `fi` as follows:
```
IF amount < 0 THEN losses ELSE gains FI +:= amount
```
This works because the "if" statement yields either the losses or gains expression as a result of the evaluation of the test, and that expression is incremented by amount. And of course, we can use the brief form, which, in my opinion at least, improves the readability in these short expressions:
```
(amount < 0 | losses | gains) +:= amount
```
How about a real example to show why this expression-oriented thing is so great?
Suppose you are writing a hash table facility. Two functions you will have to implement are "get the value associated with a given key" and "set the value associated with a given key".
In an expression-oriented language, those can be one function. Why? Because the "get" operation returns the location where the value is found, and then the "set" operation simply uses the "get" operation to set the value at that location. Let's assume we've created an operator called `valueat` that takes two arguments—the hash table itself and the key value. Then,
```
ht VALUEAT 42
```
will return the location of key 42 in the hash table ht and
```
ht VALUEAT 42 := "the meaning of everything"
```
will put the string "the meaning of everything" at location 42.
This reduces the amount of code required to support the application at hand, cuts down the number of pathways and edge cases that must be tested, and just generally adds wonderfulness to the users' and maintainers' lives.
There is a simple example of using procedures on the left-hand side of assignment statements to store values in a table on [RosettaCode][10].
#### Anonymous procedures (lambdas)
Everyone seems to want anonymous procedures (or "here" procedures, or lambdas) these days. Algol 68 provided that out of the box, and it's really, truly useful.
By way of example, imagine that you want to create a facility to read files with delimited fields and to give users a nice interaction pathway with those. Think of the fine job `awk` does on this, basically by abstracting away all the junk related to opening the file, reading the lines, splitting the lines into fields, and providing some useful collateral variables along the way, like current-line-number, number-of-fields-on-this-line, and so forth.
It turns out that's pretty easy to do in Algol 68 as well, where the task becomes to write a procedure that takes three arguments—the first being the input file name, the second being the field separator string, and the third being a procedure that handles each line.
The declaration of that procedure might look like this:
```
PROC each line =         # 1 #
        (STRING input file name, CHAR separator, PROC (STRING, [] STRING, INT) VOID process) # 2 #
VOID: BEGIN              # 3 #
    FILE inf;            # 4 #
    open(inf, input file name, stand in channel); # 5 #
    BOOL finished reading := FALSE;
    on logical file end (inf, (REF FILE f) BOOL: finished reading := TRUE); # 6 #
    INT linecount := 0;  # 7 #
    WHILE                # 8 #
        STRING line;
        get(inf,(line, new line));
        not finished reading
    DO                   # 9 #
        linecount +:= 1;
        FLEX [1:0] STRING fields := split(line, separator);
        process(line, fields, linecount)
    OD;
    close(inf)           # 10 #
END                      # 11 #
```
Here's what's going on above:
1. Comment 1 (the # 1 # above)—the declaration of the procedure `each line` (note that blanks can be inserted into the middle of names or numbers at will)
2. The parameters to `each line`—the `string` file name, the field separator `char`acter, and the `pro`cedure to be called to process each line, which itself takes a `string` (the line of input), an array of `string`s (the fields of the line), and an `int`eger (the line number), and which returns a `void` value
3. `each line` returns a `void` value, and the procedure body starts with a `begin`, allowing us to use several statements in its definition
4. Declare the input `file`
5. Associate the `standard input channel` with the `file`, whose name is given by `input file name` and open it (for reading)
6. Algol 68 handles end-of-file conditions a bit differently; here, we use the I/O event detection procedure `on logical file end` to set the flag `finished reading` that we can detect while processing the file
7. Create and initialize the line count (see the previous description of the nature of loops)
8. This `while` loop attempts to read the next line from the input file. If successful, it processes the line; otherwise, it exits
9. Processing the input line—increment the line count; create an array of strings corresponding to the fields of the line using the `split` procedure; call the supplied `process` procedure to consume the line, its fields and the line count
10. Remember to `close` the file
11. `end` of the procedure definition.
And we might use it like so, in order to build a lookup table (in conjunction with the hypothetical hash table facility mentioned in passing in the previous section):
```
# remapping definitions in remapping.csv file #
# new-reference|old-reference #
# 093M0770371|093X0012250 #
# 093M0770375|093X0012249 #
# 093M0770370|093X0012133 #
```

```
HASHTABLE ht := new hashtable;
each line("test.csv", "|", (STRING line, [] STRING fields, INT linecount) VOID: BEGIN
    STRING to map := fields[1], from map := fields[2];
    ht VALUEAT from map := to map
END);
```
Above, we see the call to `each line`. Of particular interest is the declaration of the "here" procedure, or lambda, that stows the lookup values into the hash table. From my perspective, the big lesson here is that lambdas are a consequence of Algol 68's orthogonality; I think that's pretty neat.
One of the things I plan to dig deeper into as I continue to explore Algol 68 is how much further I can take this functional form of expression. For example, I don't see why I can't build a list or a hash table element by element and yield the finished structure as the result of the looping procedure, so the above might look more like:
```
HASHTABLE ht := each delimited line as map entry("test.csv", "|",
        (STRING line, [] STRING fields, INT linecount) VOID: BEGIN
    STRING to map := fields[1], from map := fields[2];
    (from map, to map)
END);
```
### In conclusion
Why learn about old, dusty, and forgotten languages? Well, we all know about the recent interest in COBOL, but perhaps that's an outlier in the sense that there probably aren't a lot of mission-critical applications written in SNOBOL, Icon, APL, or even Algol 68. Certainly, there is George Santayana's guidance to bear in mind: ["Those who cannot remember the past are condemned to repeat it."][11]
For me, there are a few key reasons to up my game in Algol 68 (and probably in a few other languages that don't seem to be absolutely necessary to my daily efforts):
* Algol 68 was not defined as a reaction against some annoyances in an existing programming language; rather, according to the Revised Report:
* The committee (Working Group 2.1 on ALGOL of the International Federation for Information Processing) "expresses its belief in the value of a common programming language serving many people in many countries."
* "Algol 68 has not been designed as an expansion of Algol 60 but rather as a completely new language based on new insight into the essential, fundamental concepts of computing and a new description technique."
* Whether through positive contributions copied into other languages (`do` … `od` in the Bourne shell; += in C, Java, …) or negative reactions (Pascal and all its descendants, Ada), Algol 68 can claim to have influenced computing in profound ways.
* While Algol 68 is very much "a child of its time," being influenced by keypunches and line printers, small and diverse character sets, the wide variation in character and word sizes of computers in the 1960s and 1970s, and not explicitly incorporating object orientation or functional programming, its rather extraordinary orthogonality and expression-orientedness make up for these oddities and lacking in other useful ways.
* Perhaps the most practical reason is having the wonderful Algol 68 Genie interpreter installed and running on my desktop, allowing me to pursue this odd small hobby!
Perhaps I should return to Santayana for a final comment:
> ["Beauty as we feel it is something indescribable: what it is or what it means can never be said."][11]
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/6/algol68
Author: [Chris Hermansen][a]
Selected by: [lujun9972][b]
Translator: [Translator ID](https://github.com/译者ID)
Proofreader: [Proofreader ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://opensource.com/users/clhermansen
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/retro_old_unix_computer.png?itok=SYAb2xoW (Old UNIX computer)
[2]: https://jmvdveer.home.xs4all.nl/en.algol-68-genie.html
[3]: https://jmvdveer.home.xs4all.nl/en.download.learning-algol-68-genie-283.html
[4]: https://en.wikipedia.org/wiki/ALGOL_68
[5]: http://www.softwarepreservation.org/projects/ALGOL/report/Algol68_revised_report-AB.pdf
[6]: https://en.wikipedia.org/wiki/Pascal_(programming_language)
[7]: https://www.bell-labs.com/usr/dmr/www/hopl.html
[8]: http://rosettacode.org/wiki/Rosetta_Code
[10]: https://rosettacode.org/wiki/Associative_array/Creation#ALGOL_68
[11]: https://en.wikiquote.org/wiki/George_Santayana

View File

@@ -1,300 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Eliminate spam using SSL with an open source certification authority)
[#]: via: (https://opensource.com/article/20/6/secure-open-source-antispam)
[#]: author: (Victor Lopes https://opensource.com/users/victorlclopes)
Eliminate spam using SSL with an open source certification authority
======
Use a Let's Encrypt certificate with MailCleaner for STARTTLS and SSL.
Here's how.
![Chat via email][1]
[MailCleaner][2] is a feature-rich, open source antispam solution. Its virtual appliances (VMs) available for distribution come out-of-the-box with self-signed certificates for both the web interface and the MTA services.
This requires you to supply your own valid, publicly trusted certificate. Using a Let's Encrypt certificate is a great way to accomplish that because it's free, safe, and automated.
When requesting a Let's Encrypt certificate, the most important step is the hostname validation. If you don't know about it, consult the [documentation][3].
### Firewall requirements
First of all, you need to define which hostnames you will use, including your MX records, and they must point to the IP address you're using to publish your MailCleaner server.
If you choose to perform the validation using local port 80 on your MailCleaner box, you will have to include a few commands to temporarily stop the Apache service during the certificate request. That's why I recommend using an alternative port, which, in our examples, will be port TCP 8090.
You have a few options in this scenario:
**Option 1**: Create rules in your reverse proxy to forward Let's Encrypt validation requests to your MailCleaner server. You have to redirect every request sent to port TCP 80, whose destination hostname is your MailCleaner external FQDN, and the path starts with `/.well-known/acme-challenge/` to port TCP 8090 on your MailCleaner server.
**Option 2**: Using a NAT rule, for example, redirect the traffic sent to port TCP 80 to port TCP 8090 on your MailCleaner server.
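If your gateway happens to be a Linux box, Option 2 can be expressed as a single NAT rule. The following is only a rough sketch, not MailCleaner documentation: the WAN interface name (`eth0`) and the internal MailCleaner address (`192.168.1.50`) are hypothetical placeholders you must adapt to your own network.
```
# Hypothetical DNAT rule on a Linux gateway: send inbound HTTP
# to port 8090 on the internal MailCleaner server
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
  -j DNAT --to-destination 192.168.1.50:8090
```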
**Option 3**: Redirect/allow traffic sent to port TCP 80 to the actual port TCP 80 on your MailCleaner server, which is less secure, less flexible, and not recommended.
Alternatively, you could have the certificate request and the name validation performed somewhere else (like in your firewall) and create a routine for copying the cert files to your MailCleaner box. If you have a [pfSense][4] firewall with the [ACME][5] package, for example, you can try to merge the concepts within this article with this [how-to][6].
### Installing Certbot
[Certbot][7] is an open source tool for requesting and managing Let's Encrypt certificates.
To install Certbot on your MailCleaner server, log in as `root` (in the console or through SSH) and run:
```
$ wget https://dl.eff.org/certbot-auto
$ mv certbot-auto /usr/local/bin/certbot-auto
$ chown root /usr/local/bin/certbot-auto
$ chmod 0755 /usr/local/bin/certbot-auto
```
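Assuming the download succeeded, you can confirm the wrapper is usable by asking for its version (`--version` is a standard Certbot flag; note that on first run, certbot-auto may first bootstrap its own dependencies, as described below):
```
$ certbot-auto --version
```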
### Testing certificate name validation
If you're using an alternate port, you need to open it in the local firewall on your MailCleaner server:
```
iptables -A INPUT -p tcp -m tcp --dport 8090 -j ACCEPT
```
Note: MailCleaner keeps local firewall rules in its database and sets the `iptables` config every time the server loads. It's imperative that you add port 8090 to the firewall table inside MailCleaner's MySQL database; otherwise, every renewal process will fail. To learn how to do this, take a look at the section titled "Accessing MailCleaner's MySQL database" in the article _[How to install MailCleaner 2020.01][8]._
Now, let's try to issue our certificate using Let's Encrypt's staging (testing) server. Please replace the placeholder values below with your own email address and your MailCleaner server hostname(s).
**Option 1**: If you are using the alternative port 8090, use this command line:
```
$ certbot-auto certonly --standalone --preferred-challenges http \
  --http-01-port 8090 --email myemail@domain.com --no-eff-email \
  --agree-tos --staging -d myhostname.mydomain.com
```
If you have more than one name, just add them with "`-d`" at the end:
```
-d mx1.mydomain.com \
-d mx2.mydomain.com \
-d spam.mydomain.com
```
**Option 2**: If you are using local port 80, use this command line:
```
$ certbot-auto certonly --standalone --preferred-challenges http \
  --email myemail@domain.com --no-eff-email --agree-tos --staging \
  -d myhostname.mydomain.com \
  --pre-hook "/usr/mailcleaner/etc/init.d/apache stop" \
  --post-hook "/usr/mailcleaner/etc/init.d/apache start"
```
Note: After issuing this command, you will hit a bootstrapping routine identifying missing dependencies, mostly Python packages. Let it install the necessary software.
If everything went fine, you should see a result like this:
```
root@mailcleaner:~#
root@mailcleaner:~# certbot-auto certonly \
  --standalone --preferred-challenges http \
  --http-01-port 8090 --email victor@domain.com \
  --no-eff-email --agree-tos --staging \
  -d mail.example.com
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for mail.example.com
Waiting for verification...
Cleaning up challenges
IMPORTANT NOTES:
Your certificate and chain have been saved at:
/etc/letsencrypt/live/mail.example.com/fullchain.pem
Your key file has been saved at:
/etc/letsencrypt/live/mail.example.com/privkey.pem
[...]
root@mailcleaner:~#
```
If it didn't go well, keep in mind that most errors with this process are caused by Let's Encrypt servers not being able to reach your server. Check if your firewall configuration is really OK.
### Request your certificate
When the certificate issuing process is working correctly with the staging server, go ahead and request your certificate for production (removing the staging parameter):
```
certbot-auto certonly --standalone --preferred-challenges http --http-01-port 8090 --email myemail@domain.com --no-eff-email --agree-tos --force-renewal -d myhostname.mydomain.com
```
Note: Adapt the command line if you're not using the alternative port 8090. If that's the case, don't forget the pre-hook and post-hook.
The result screen is pretty similar. You will now have a valid certificate at the following path:
```
/etc/letsencrypt/live/myhostname.mydomain.com/
```
```
root@mailcleaner:~# ls /etc/letsencrypt/live/mail.example.com
cert.pem  chain.pem  fullchain.pem  privkey.pem  README
root@mailcleaner:~#
```
### Automate certificate assignment and renewal
The last piece of this puzzle is the great script provided by "GRahamJB" in this [MailCleaner forum topic][11]. You can download the script from [here][12].
Let's save this script in our server. Create the following file:
```
$ nano /usr/local/bin/set-certificates.pl
```
Then paste the contents of the script and save it (`Ctrl + X`). Finally, give the script permission to run:
```
$ chmod +x /usr/local/bin/set-certificates.pl
```
Now run the script to assign your certificate to the web interface and the MTA services:
```
root@mailcleaner:~# set-certificates.pl --set_web \
  --set_mta_in --set_mta_out \
  --key /etc/letsencrypt/live/mail.example.com/privkey.pem \
  --data /etc/letsencrypt/live/mail.example.com/cert.pem \
  --chain /etc/letsencrypt/live/mail.example.com/chain.pem
Stopping Apache: stopped.
Starting Apache: started.
Stopping Exim stage 1: stopped.
Starting Exim stage 1: started.
Stopping Exim stage 4: stopped.
Starting Exim stage 4: started.
root@mailcleaner:~#
```
Now that we know it works, schedule these commands to run weekly, using cron and Certbot's built-in renewal routine:
```
$ nano /etc/letsencrypt/renewal/yourhostname.yourdomain.com.conf
```
Check if the options look correct and add the following line at the end (the same set-certificates.pl you just ran, preceded by `renew_hook =`):
```
[...]
# Options used in the renewal process
[renewalparams]
authenticator = standalone
account = 9d670ed7c63c6238f90f042f852fc33e
pref_challs = http-01,
http01_port = 8090
server = https://acme-v02.api.letsencrypt.org/directory
# Set MailCleaner certs
renew_hook = set-certificates.pl --set_web --set_mta_in --set_mta_out --key /etc/letsencrypt/live/myhostname.mydomain.com/privkey.pem --data /etc/letsencrypt/live/myhostname.mydomain.com/cert.pem --chain /etc/letsencrypt/live/myhostname.mydomain.com/chain.pem
```
Note that the "`renew_hook = set-cert…`" command must be one single line. Save the file and run the following command to test it:
```
$ certbot-auto renew --force-renewal
```
If the renewal succeeds, you'll see a result similar to the one below. Note how our `renew_hook` command was called. The certificate has been updated in MailCleaner and the necessary services restarted.
```
root@mailcleaner:~# certbot-auto renew --force-renewal
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Processing /etc/letsencrypt/renewal/mail.example.com.conf
Plugins selected: Authenticator standalone, Installer None
Renewing an existing certificate
Running deploy-hook command: set-certificates.pl \
  --set_web --set_mta_in --set_mta_out \
  --key /etc/letsencrypt/live/mail.example.com/privkey.pem \
  --data /etc/letsencrypt/live/mail.example.com/cert.pem \
  --chain /etc/letsencrypt/live/mail.example.com/chain.pem
Output from deploy-hook command set-certificates.pl:
Stopping Apache: stopped.
Starting Apache: started.
Stopping Exim stage 1: stopped.
Starting Exim stage 1: started.
Stopping Exim stage 4: stopped.
Starting Exim stage 4: started.
new certificate deployed without reload, fullchain is
/etc/letsencrypt/live/mail.example.com/fullchain.pem
Congratulations, all renewals succeeded.
The following certs have been renewed:
/etc/letsencrypt/live/mail.example.com/fullchain.pem (success)
root@mailcleaner:~#
```
Now, let's add that renew command to cron:
```
$ crontab -e
```
Add the following line and save the file. This will make Certbot run every Sunday at 2:00am:
```
0 2 * * 7 /usr/local/bin/certbot-auto renew
```
If crontab doesn't open the way you expect, run `select-editor` to choose the editor you like (nano, for example). If you want to check the result, run `crontab -l`.
By default, Certbot will only renew the certificate if it has less than 30 days left before its expiry date. If the cert is not due to expire, Certbot will not renew it (nor call hooks, of course).
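If you want to exercise the renewal path without touching your production certificate, Certbot also ships a dry-run mode that tests the whole process against the staging server:
```
$ certbot-auto renew --dry-run
```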
### Testing results
If you access MailCleaner's web interface, you'll see that the SSL certificate is valid. And if you run the following command in your server, you can see that the certificate being presented on STARTTLS is the new Let's Encrypt cert you just set:
```
$ openssl s_client -connect localhost:25 -starttls smtp
```
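To check the certificate's subject, issuer, and validity dates non-interactively, you can also pipe the STARTTLS handshake into `openssl x509`; a minimal sketch:
```
$ openssl s_client -connect localhost:25 -starttls smtp </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates
```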
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/6/secure-open-source-antispam
Author: [Victor Lopes][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux.cn)
[a]: https://opensource.com/users/victorlclopes
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_chat_communication_message.png?itok=LKjiLnQu (Chat via email)
[2]: https://www.mailcleaner.org/
[3]: https://letsencrypt.org/docs/challenge-types
[4]: https://www.pfsense.org/
[5]: https://docs.netgate.com/pfsense/en/latest/certificates/acme-package.html
[6]: https://medium.com/@victorlclopes/copy-pfsense-acme-certificate-to-another-server-e42c611c47ec
[7]: https://certbot.eff.org/
[8]: https://medium.com/@victorlclopes/how-to-install-mailcleaner-2020-01-8319c83e11ee
[9]: mailto:myemail@domain.com
[10]: mailto:victor@domain.com
[11]: https://forum.mailcleaner.org/viewtopic.php?f=5&t=3035#p12532
[12]: https://gist.github.com/victorlclopes/f5aa081f1a9c76466aaf3f3dc5bd60b7

View File

@ -1,163 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Internet connection sharing with NetworkManager)
[#]: via: (https://fedoramagazine.org/internet-connection-sharing-networkmanager/)
[#]: author: (bengal https://fedoramagazine.org/author/bengal/)
Internet connection sharing with NetworkManager
======
![][1]
NetworkManager is the network configuration daemon used on Fedora and many other distributions. It provides a consistent way to configure network interfaces and other network-related aspects on a Linux machine. Among many other features, it provides Internet connection sharing functionality that can be very useful in different situations.
For example, suppose you are in a place without Wi-Fi and want to share your laptop's mobile data connection with friends. Or maybe you have a laptop with broken Wi-Fi and want to connect it via Ethernet cable to another laptop; in this way, the first laptop becomes able to reach the Internet and maybe download new Wi-Fi drivers.
In cases like these it is useful to share Internet connectivity with other devices. On smartphones this feature is called “Tethering” and allows sharing a cellular connection via Wi-Fi, Bluetooth or a USB cable.
This article shows how the connection sharing mode offered by NetworkManager can be set up easily; in addition, it explains how to configure some more advanced features for power users.
### How connection sharing works
The basic idea behind connection sharing is that there is an _upstream_ interface with Internet access and a _downstream_ interface that needs connectivity. These interfaces can be of a different type—for example, Wi-Fi and Ethernet.
If the upstream interface is connected to a LAN, it is possible to configure our computer to act as a _bridge_; a bridge is the software version of an Ethernet switch. In this way, you "extend" the LAN to the downstream network. However, this solution doesn't always play well with all interface types; moreover, it works only if the upstream network uses private addresses.
A more general approach consists of assigning a private IPv4 subnet to the downstream network and turning on routing between the two interfaces. In this case, NAT (Network Address Translation) is also necessary. The purpose of NAT is to modify the source of packets coming from the downstream network so that they look as if they originate from your computer.
It would be inconvenient to manually configure all the devices in the downstream network. Therefore, you need a DHCP server to assign addresses automatically and configure hosts to route all traffic through your computer. In addition, if the sharing happens through Wi-Fi, the wireless network adapter must be configured as an access point.
There are many tutorials out there explaining how to achieve this, with different degrees of difficulty. NetworkManager hides all this complexity and provides a _shared_ mode that makes this configuration quick and convenient.
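For the curious, here is a rough, hypothetical sketch of the manual steps that shared mode replaces, using the downstream interface name and subnet this article adopts later. It is shown only to make the mechanics concrete; do not run it alongside NetworkManager's own shared setup:
```
# Turn on routing between interfaces
sysctl -w net.ipv4.ip_forward=1
# NAT: masquerade traffic leaving the downstream subnet
iptables -t nat -A POSTROUTING -s 10.42.0.0/24 ! -d 10.42.0.0/24 -j MASQUERADE
# DHCP and DNS for downstream clients
dnsmasq --interface=enp1s0 --dhcp-range=10.42.0.10,10.42.0.100,12h
```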
### Configuring connection sharing
The configuration paradigm of NetworkManager is based on the concept of connection (or connection profile). A connection is a group of settings to apply on a network interface.
This article shows how to create and modify such connections using _nmcli_, the NetworkManager command line utility, and the GTK connection editor. If you prefer, other tools are available such as _nmtui_ (a text-based user interface), GNOME control center or the KDE network applet.
A reasonable prerequisite to share Internet access is to have it available in the first place; this implies that there is already a NetworkManager connection active. If you are reading this, you probably already have a working Internet connection. If not, see [this article][2] for a more comprehensive introduction to NetworkManager.
The rest of this article assumes you already have a Wi-Fi connection profile configured and that connectivity must be shared over an Ethernet interface _enp1s0_.
To enable sharing, create a connection for interface enp1s0 and set the ipv4.method property to _shared_ instead of the usual _auto_:
```
$ nmcli connection add type ethernet ifname enp1s0 ipv4.method shared con-name local
```
The shared IPv4 method does multiple things:
* enables IP forwarding for the interface;
* adds firewall rules and enables masquerading;
* starts dnsmasq as a DHCP and DNS server.
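If you want to confirm these effects after activation, the following standard commands (they are not part of NetworkManager itself) should show the forwarding flag, the masquerade rule, and the dnsmasq instance:
```
$ cat /proc/sys/net/ipv4/ip_forward
1
$ sudo iptables -t nat -S | grep -i masquerade
$ pgrep -a dnsmasq
```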
NetworkManager connection profiles, unless configured otherwise, are activated automatically. The new connection you have added should be already active in the device status:
```
$ nmcli device
DEVICE TYPE STATE CONNECTION
enp1s0 ethernet connected local
wlp4s0 wifi connected home-wifi
```
If that is not the case, activate the profile manually with _nmcli connection up local_.
### Changing the shared IP range
Now look at how NetworkManager configured the downstream interface enp1s0:
```
$ ip -o addr show enp1s0
8: enp1s0 inet 10.42.0.1/24 brd 10.42.0.255 ...
```
10.42.0.1/24 is the default address set by NetworkManager for a device in shared mode. Addresses in this range are also distributed via DHCP to other computers. If the range conflicts with other private networks in your environment, change it by modifying the _ipv4.addresses_ property:
```
$ nmcli connection modify local ipv4.addresses 192.168.42.1/24
```
Remember to activate the connection profile again after any change to apply the new values:
```
$ nmcli connection up local
$ ip -o addr show enp1s0
8: enp1s0 inet 192.168.42.1/24 brd 192.168.42.255 ...
```
If you prefer using a graphical tool to edit connections, install the _nm-connection-editor_ package. Launch the program and open the connection to edit; then select the _Shared to other computers_ method in the _IPv4 Settings_ tab. Finally, if you want to use a specific IP subnet, click _Add_ and insert an address and a netmask.
![][3]
### Adding custom dnsmasq options
In case you want to further extend the dnsmasq configuration, you can add new configuration snippets in _/etc/NetworkManager/dnsmasq-shared.d/_. For example, the following configuration:
```
dhcp-option=option:ntp-server,192.168.42.1
dhcp-host=52:54:00:a4:65:c8,192.168.42.170
```
tells dnsmasq to advertise an NTP server via DHCP. In addition, it assigns a static IP to a client with a given MAC address.
There are many other useful options in the dnsmasq manual page. However, remember that some of them may conflict with the rest of the configuration; so please use custom options only if you know what you are doing.
### Other useful tricks
If you want to set up sharing via Wi-Fi, you could create a connection in Access Point mode, manually configure the security, and then enable connection sharing. Actually, there is a quicker way, the hotspot mode:
```
$ nmcli device wifi hotspot [ifname $dev] [password $pw]
```
This does everything needed to create a functional access point with connection sharing. The interface and password options are optional; if they are not specified, _nmcli_ chooses the first Wi-Fi device available and generates a random password. Use the _nmcli device wifi show-password_ command to display information for the active hotspot; the output includes the password and a text-based QR code that you can scan with a phone:
![][4]
### What about IPv6?
Until now this article discussed sharing IPv4 connectivity. NetworkManager also supports sharing IPv6 connectivity through DHCP prefix delegation. Using prefix delegation, a computer can request additional IPv6 prefixes from the DHCP server. Those public routable addresses are assigned to local networks via Router Advertisements. Again, NetworkManager makes all this easier through the shared IPv6 mode:
```
$ nmcli connection modify local ipv6.method shared
```
Note that IPv6 sharing requires support from the Internet Service Provider, which should give out prefix delegations through DHCP. If the ISP doesn't provide delegations, IPv6 sharing will not work; in such a case, NM will report in the journal that no prefixes are available:
```
policy: ipv6-pd: none of 0 prefixes of wlp1s0 can be shared on enp1s0
```
Also, note that the Wi-Fi hotspot command described above only enables IPv4 sharing; if you want to also use IPv6 sharing you must edit the connection manually.
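For example, assuming the auto-created profile uses nmcli's default name of _Hotspot_ (check with _nmcli connection show_), enabling IPv6 sharing on it might look like:
```
$ nmcli connection modify Hotspot ipv6.method shared
$ nmcli connection up Hotspot
```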
### Conclusion
Remember, the next time you need to share your Internet connection, NetworkManager will make it easy for you.
If you have suggestions on how to improve this feature or any other feedback, please reach out to the NM community using the [mailing list][5], the [issue tracker][6] or joining the _#nm_ IRC channel on _freenode_.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/internet-connection-sharing-networkmanager/
Author: [bengal][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux.cn)
[a]: https://fedoramagazine.org/author/bengal/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2020/06/networkmanager-connection_sharing-816x345.png
[2]: https://www.redhat.com/sysadmin/becoming-friends-networkmanager
[3]: https://fedoramagazine.org/wp-content/uploads/2020/06/nmce.png
[4]: https://fedoramagazine.org/wp-content/uploads/2020/06/hotspot-password.png
[5]: https://mail.gnome.org/mailman/listinfo/networkmanager-list
[6]: https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/issues

View File

@ -1,203 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Get Your Work Done Faster With These To-Do List Apps on Linux Desktop)
[#]: via: (https://itsfoss.com/to-do-list-apps-linux/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Get Your Work Done Faster With These To-Do List Apps on Linux Desktop
======
Getting work done is super important. If you have a planned list of things to do, it makes your work easier. So, it's no surprise that we're talking about to-do list apps on Linux here.
Sure, you can easily utilize some of the [best note taking apps on Linux][1] for this purpose, but using a dedicated to-do app helps you stay focused on work.
You might be aware of some online services for that, but how about some [cool Linux apps][2] that you can use to create a to-do list? In this article, I'm going to highlight the best to-do list apps available for Linux.
### Best To-Do List Applications For Desktop Linux Users
![][3]
I have tested these apps on Pop!_OS. I have also tried to mention the installation steps for the mentioned apps, but you should check your distribution's package manager for details.
**Note:** The list is in no particular order of ranking
#### 1\. Planner
![][4]
Planner is probably the best to-do list app I've come across for Linux distributions.
The best thing is that it is a free and open-source project. It provides a beautiful user interface that aims to give you a meaningful user experience. In other words, it's simple yet attractive.
Not to forget, you get a gorgeous dark mode. As you can see in the screenshot above, you can also choose to add emojis to add some fun to your serious work tasks.
Overall, it looks clean while offering features like the ability to add repeating tasks, create separate folders/projects, and sync with [todoist][5].
#### How to install it?
If you're using [elementary OS][6], you can find it listed in the app center. In any case, they also offer a [Flatpak package on Flathub][7].
Unless you have Flatpak integration in your software center, you should follow our guide to [use Flatpak on Linux][8] to get it installed.
In case you want to explore the source code, take a look at its [GitHub page][9].
[Planner][10]
#### 2\. Go For It!
![][11]
Yet another impressive open-source to-do app for Linux, which is based on [todotxt][12]. Even though it isn't available for Ubuntu 20.04 (or later) at the time of writing this, you can still use it on machines with Ubuntu 19.10 or older.
In addition to the ability to add tasks, you can also specify the duration/interval of your break. So, with this to-do app, you will not just end up completing the tasks but also being productive without stressing out.
The user interface is plain and simple with no fancy features. We also have a separate article on [Go For It][13] if you'd like to know more about it.
You can also use it on your Android phone using the [Simpletask Dropbox app][14].
#### How to install it?
You can type the commands below to install it on any Ubuntu-based distro (prior to Ubuntu 20.04):
```
sudo add-apt-repository ppa:go-for-it-team/go-for-it-stable
sudo apt update
sudo apt install go-for-it
```
In case you want to install it on any other Linux distro, you can try the [Flatpak package on Flathub][15].
If you dont know about Flatpak — take a look at our [complete guide on using Flatpak][8]. To explore more about it, you can also head to their [GitHub page][16].
[Go For It!][16]
#### 3\. GNOME To Do
![][17]
If you're [using Ubuntu][18] or another Linux distribution with the GNOME desktop environment, you should already have it installed. Just search for "To Do" and you should find it.
It's a simple to-do app that presents the list in the form of cards, and you can have a separate set of tasks on every card. You can add a schedule to the tasks as well. It supports extensions with which you can enable support for todo.txt files and also integration with [todoist][5].
[GNOME To Do][19]
#### 4\. Taskwarrior [Terminal-based]
![][20]
[Taskwarrior][21] is a command-line based, open-source to-do list program and an impressive tool if you don't need a Graphical User Interface (GUI). It also provides cross-platform support (Windows and macOS).
It's quite easy to add and list tasks along with a due date, as shown in the screenshot above.
To make the most out of it, I would suggest following the [official documentation][22] to learn how to use it and the options/features it offers.
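As a quick taste of that workflow, adding and listing a task with a due date looks roughly like this (the task description is, of course, just a placeholder):
```
$ task add "Write the report" due:tomorrow
Created task 1.
$ task list
```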
#### How to install it?
You can find it in your respective package manager on any Linux distribution. To get it installed on Ubuntu, type the following in the terminal:
```
sudo apt install taskwarrior
```
For Manjaro Linux, you can simply install it through [pamac][23], which you usually use to [install software in Manjaro Linux][24].
In case of any other Linux distributions, you should head to its [official download page][25] and follow the instructions.
[Taskwarrior][21]
#### 5\. Task Coach
![][26]
Task Coach is yet another open-source to-do list app that offers quite a lot of essential features. You can add sub-tasks and descriptions to your tasks, add dates, notes, and a lot more. It also supports a tree view for the task lists you add and manage.
It's a good thing to see that it offers cross-platform support (Windows, macOS, and Android).
Overall, it's easy to use with tons of options, and it works well.
#### How to install it?
It offers both **.deb** and **.rpm** packages for Ubuntu and Fedora. In addition to that, you can also install it using PPA.
You can find all the necessary files and instructions from its [official download page][27].
You may notice an installation error for its dependencies on Ubuntu 20.04. But, I believe it should work fine on the previous Ubuntu releases.
In my case, it worked out fine for me when using the [AUR package][28] through Pamac on Manjaro Linux.
[Task Coach][29]
#### 6\. Todour
![][30]
A very simple open-source to-do list app that lets you utilize a todo.txt file as well. You may not get a lot of options to choose from, but you get a couple of useful settings to tweak.
It may not be the most actively developed to-do list app, but it does the work expected.
#### How to install Todour?
If you're using Manjaro Linux, you can utilize pamac to install Todour from the [AUR][28].
Unfortunately, it does not provide any **.deb** or **.rpm** package for Ubuntu/Fedora. So, you'll have to build it from source or just explore more about it on its [GitHub page][31].
[Todour][32]
### Wrapping Up
As an interesting mention, I'd like you to take a look at [TodoList][33], which is an applet for KDE-powered distributions. Among mainstream to-do list applications, [Remember The Milk is a rare one that provides a Linux client][34]. It is not open source, though.
I hope this list of to-do specific apps help you get things done on Linux.
Did I miss any of your favorite to-do list apps on Linux? Feel free to let me know what you think!
--------------------------------------------------------------------------------
via: https://itsfoss.com/to-do-list-apps-linux/
Author: [Ankush Das][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux.cn)
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/note-taking-apps-linux/
[2]: https://itsfoss.com/essential-linux-applications/
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/06/open-Source-to-do-list-apps.jpg?ssl=1
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/planner-screenshot.jpg?ssl=1
[5]: https://todoist.com
[6]: https://elementary.io
[7]: https://flathub.org/apps/details/com.github.alainm23.planner
[8]: https://itsfoss.com/flatpak-guide/
[9]: https://github.com/alainm23/planner
[10]: https://planner-todo.web.app/
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/01/go-for-it-reminders.jpg?ssl=1
[12]: http://todotxt.com
[13]: https://itsfoss.com/go-for-it-to-do-app-in-linux/
[14]: https://play.google.com/store/apps/details?id=nl.mpcjanssen.todotxtholo&hl=en
[15]: https://flathub.org/apps/details/de.manuel_kehl.go-for-it
[16]: https://github.com/JMoerman/Go-For-It
[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/to-do-gnome.jpg?ssl=1
[18]: https://itsfoss.com/getting-started-with-ubuntu/
[19]: https://wiki.gnome.org/Apps/Todo/Download
[20]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/06/taskwarrior.png?ssl=1
[21]: https://taskwarrior.org/
[22]: https://taskwarrior.org/docs/start.html
[23]: https://wiki.manjaro.org/index.php?title=Pamac
[24]: https://itsfoss.com/install-remove-software-manjaro/
[25]: https://taskwarrior.org/download/
[26]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/06/task-coach.png?ssl=1
[27]: https://www.taskcoach.org/download.html
[28]: https://itsfoss.com/aur-arch-linux/
[29]: https://www.taskcoach.org/index.html
[30]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/06/todour.png?ssl=1
[31]: https://github.com/SverrirValgeirsson/Todour
[32]: https://nerdur.com/todour-pl/
[33]: https://store.kde.org/p/1152230/
[34]: https://itsfoss.com/remember-the-milk-linux/

View File

@ -1,602 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Expand your Raspberry Pi with Arduino ports)
[#]: via: (https://opensource.com/article/20/7/arduino-raspberry-pi)
[#]: author: (Patrick Martins de Lima https://opensource.com/users/pattrickx)
Expand your Raspberry Pi with Arduino ports
======
For this project, explore Raspberry Pi port expansions using Java,
serial, and Arduino.
![Parts, modules, containers for software][1]
As members of the maker community, we are always looking for creative ways to use hardware and software. This time, [Patrick Lima][2] and I decided we wanted to expand the Raspberry Pi's ports using an Arduino board, so we could access more functionality and ports and add a layer of protection to the device. There are a lot of ways to use this setup, such as building a solar panel that follows the sun, a home weather station, joystick interaction, and more.
We decided to start by building a dashboard that allows the following serial port interactions:
* Control three LEDs to turn them on and off
* Control three LEDs to adjust their light intensity
* Identify which ports are being used
* Show input movements on a joystick
* Measure temperature
We also want to show all the interactions between ports, hardware, and sensors in a nice user interface (UI) like this:
![UI dashboard][3]
(Bruno Muniz, [CC BY-SA 4.0][4])
You can use the concepts in this article to build many different projects that use many different components. Your imagination is the limit!
### 1\. Get started
![Raspberry Pi and Arduino logos][5]
(Bruno Muniz, [CC BY-SA 4.0][4])
The first step is to expand the Raspberry Pi's ports to also use Arduino ports. This is possible using Linux ARM's native serial communication implementation, which enables you to use an Arduino's digital, analog, and Pulse Width Modulation (PWM) ports to run an application on the Raspberry Pi.
This project uses [TotalCross][6], an open source software development kit for building UIs for embedded devices, to execute external applications through the terminal and use the native serial communication. There are two classes you can use to achieve this: [Runtime.exec][7] and [PortConnector][8]. They represent different ways to execute these actions, so we will show how to use both in this tutorial, and you can decide which way is best for you.
To start this project, you need:
* 1 Raspberry Pi 3
* 1 Arduino Uno
* 3 LEDs
* 2 resistors between 1K and 2.2K ohms
* 1 push button
* 1 potentiometer between 1K and 50K ohms
* 1 protoboard (aka breadboard)
* Jumpers
### 2\. Set up the Arduino
Create a communication protocol to receive messages, process them, execute the request, and send a response between the Raspberry Pi and the Arduino. This is done on the Arduino.
#### 2.1 Define the message format
Every message received will have the following format:
* Indication of the function called
* Port used
* A char separator, if needed
* A value to be sent, if needed
* Indication of the message's end
The following table presents the list of characters with their respective functions, example values, and descriptions of the example. The choice of characters used in this example is arbitrary and can be changed anytime.
Characters | Function | Example | Description of the example
---|---|---|---
`*` | End of the instruction | - | -
`,` | Separator | - | -
`#` | Set mode | `#8,0*` | Set pin 8 to input mode
`<` | Set digital value | `<1,0*` | Set pin 1 low
`>` | Get digital value | `>13*` | Get the value of pin 13
`+` | Set PWM value | `+6,250*` | Set pin 6 to value 250
`-` | Get analog value | `-14*` | Get the value of pin A0
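Putting the pieces together: several instructions can be sent back to back in one string. A hypothetical exchange that configures pin 13 as an output, drives it high, and reads it back could look like the following (the `13:1` reply format comes from the `action()` function in the sketch below):
```
#13,1*    set pin 13 to output mode (1 = OUTPUT)
<13,1*    drive pin 13 high
>13*      read pin 13 back; the Arduino replies "13:1"
```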
#### 2.2 Source code
The following source code implements the communication protocol specified above. It must be sent to the Arduino, so it can interpret and execute messages' commands:
```
void setup() {
  Serial.begin(9600);
  Serial.println("Connected");
  Serial.println("Waiting command...");
}

void loop() {
  String text = "";
  char character;
  String pin = "";
  String value = "0";
  char separator = '.';
  char inst = '.';
  while (Serial.available()) { // verify RX is getting data
    delay(10);
    character = Serial.read();
    if (character == '*') {
      action(inst, pin, value);
      break;
    } else {
      text.concat(character);
    }
    if (character == ',') {
      separator = character;
    }
    if (inst == '.') {
      inst = character;        // the first character is the instruction
    } else if (separator != ',' && character != inst) {
      pin.concat(character);   // characters before the separator form the pin
    } else if (character != separator && character != inst) {
      value.concat(character); // characters after the separator form the value
    }
  }
}

void action(char instruction, String pin, String value) {
  if (instruction == '#') { // pinMode
    pinMode(pin.toInt(), value.toInt());
  }
  if (instruction == '<') { // digitalWrite
    digitalWrite(pin.toInt(), value.toInt());
  }
  if (instruction == '>') { // digitalRead
    String aux = pin + ':' + String(digitalRead(pin.toInt()));
    Serial.println(aux);
  }
  if (instruction == '+') { // analogWrite = PWM
    analogWrite(pin.toInt(), value.toInt());
  }
  if (instruction == '-') { // analogRead
    String aux = pin + ':' + String(analogRead(pin.toInt()));
    Serial.println(aux);
  }
}
```
#### 2.3 Build the electronics
Define what you need to test to check communication with the Arduino and ensure the inputs and outputs are responding as expected:
  * LEDs are connected with positive logic. Connect them to the GND pin through a resistor and drive them with digital I/O port 2 and PWM port 3.
* The button has a pull-down resistor connected to the digital port I/O 4, which sends a signal of 0 if not pressed and 1 if pressed.
* The potentiometer is connected with the central pin to the analog input A0 with one of the side pins on the positive and the other on the negative.
![Connecting the hardware][9]
(Bruno Muniz, [CC BY-SA 4.0][4])
#### 2.4 Test communications
Send the code in section 2.2 to the Arduino. Open the serial monitor and check the communication protocol by sending the commands below:
```
#2,1*<2,1*>2*
#3,1*+3,10*
#4,0*>4*
#14,0*-14*
```
This should be the result in the serial monitor:
![Testing communications in Arduino][10]
(Bruno Muniz, [CC BY-SA 4.0][4])
One LED on the device should be on at maximum intensity and the other at a lower intensity.
![LEDs lit on board][11]
(Bruno Muniz, [CC BY-SA 4.0][4])
Pressing the button and changing the position of the potentiometer when sending reading commands will display different values. For example, turn the potentiometer to the positive side and press the button. With the button still pressed, send the commands:
```
>4*
-14*
```
Two lines should appear:
![Testing communications in Arduino][12]
(Bruno Muniz, [CC BY-SA 4.0][4])
### 3\. Set up the Raspberry Pi
Use a Raspberry Pi to access the serial port via the terminal using the `cat` command to read the entries and the `echo` command to send the message.
#### 3.1 Do a serial test
Connect the Arduino to one of the USB ports on the Raspberry Pi, open the terminal, and execute this command:
```
cat /dev/ttyUSB0 9600
```
This will initiate the connection with the Arduino and display what is returned to the serial.
![Testing serial on Arduino][13]
(Bruno Muniz, [CC BY-SA 4.0][4])
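If `cat` prints nothing, the serial line may not be configured for the sketch's baud rate. A hedged sketch using the standard `stty` tool, assuming the device is `/dev/ttyUSB0`:
```
# Optionally set the line to 9600 baud, raw mode, before reading
stty -F /dev/ttyUSB0 9600 raw -echo
cat /dev/ttyUSB0
```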
To test sending commands, open a new terminal window (keeping the previous one open), and send this command:
```
`echo "command" > /dev/ttyUSB0 9600`
```
You can send the same commands used in section 2.4.
You should see feedback in the first terminal along with the same result you got in section 2.4:
![Testing serial on Arduino][14]
(Bruno Muniz, [CC BY-SA 4.0][4])
### 4\. Create the graphical user interface
The UI for this project will be simple, as the objective is just to show the port expansion using the serial connection. Another article will use TotalCross to create a high-quality GUI for this project and start the application backend (working with sensors), as shown in the dashboard image at the top of this article.
This first part uses two UI components: a Listbox and an Edit. These build a connection between the Raspberry Pi and the Arduino and test that everything is working as expected.
Simulate the terminal where you put the commands and watch for answers:
* Edit is used to send messages. Place it at the bottom with a FILL width that extends the component to the entire width of the screen.
* Listbox is used to show results, e.g., in the terminal. Add it at the TOP position, starting at the LEFT side, with a width equal to Edit and a FIT height to vertically occupy all space not filled by Edit.
```
package com.totalcross.sample.serial;
import totalcross.sys.Settings;
import totalcross.ui.Edit;
import totalcross.ui.ListBox;
import totalcross.ui.MainWindow;
import totalcross.ui.gfx.Color;
public class SerialSample extends MainWindow {
   ListBox Output;
   Edit Input;
   public SerialSample() {
       setUIStyle(Settings.MATERIAL_UI);
   }
   @Override
   public void initUI() {
       Input = new Edit();
       add(Input, LEFT, BOTTOM, FILL, PREFERRED);
       Output = new ListBox();
       Output.setBackForeColors(Color.BLACK, Color.WHITE);
       add(Output, LEFT, TOP, FILL, FIT);
   }
}
```
It should look like this:
![UI][16]
(Bruno Muniz, [CC BY-SA 4.0][4])
### 5\. Set up serial communication
As stated above, there are two ways to set up serial communication: Runtime.exec and PortConnector.
#### 5.1 Option 1: Use Runtime.exec
The `java.lang.Runtime` class allows the application to create a connection interface with the environment where it is running. It allows the program to use the Raspberry Pi's native serial communication.
Use the same commands you used in section 3.1, but now use the Edit component on the UI to send the commands to the device.
##### Read the serial
The application must constantly read the serial, and if a value is returned, add it to the Listbox using threads. Threads are a great way to work with processes in the background without blocking user interaction.
The following code creates a new process on this thread that executes the `cat` command, tests the serial, and starts an infinite loop to check if something new is received. If something is received, the value is added to the next line of the Listbox component. This process will continue to run as long as the application is running:
```
new Thread() {
   @Override
   public void run() {
       try {
           Process Runexec2 = Runtime.getRuntime().exec("cat /dev/ttyUSB0 9600\n");
           LineReader lineReader = new LineReader(Stream.asStream(Runexec2.getInputStream()));
           String input;

           while (true) {
               if ((input = lineReader.readLine()) != null) {
                   Output.add(input);
                   Output.selectLast();
                   Output.repaintNow();
               }
           }
       } catch (IOException ioe) {
           ioe.printStackTrace();
       }
   }
}.start();
```
##### Send commands
Sending commands is a simpler process. It happens whenever you press **Enter** on the Edit component.
To forward the commands to the device, as shown in section 3.1, you must instantiate a new terminal. For that, the Runtime class must execute a `sh` command on Linux:
```
try {
    Runexec = Runtime.getRuntime().exec("sh").getOutputStream();
} catch (IOException ioe) {
    ioe.printStackTrace();
}
```
After the user writes the command in Edit and presses **Enter**, the application triggers an event that executes the `echo` command with the value indicated in Edit:
```
Input.addKeyListener(new KeyListener() {
   @Override
   public void specialkeyPressed(KeyEvent e) {
       if (e.key == SpecialKeys.ENTER) {
           String s = Input.getText();
           Input.clear();
           try {
               Runexec.write(("echo \"" + s + "\" > /dev/ttyUSB0 9600\n").getBytes());
           } catch (IOException ioe) {
               ioe.printStackTrace();
           }
       }
   }
   @Override
   public void keyPressed(KeyEvent e) {} // auto-generated stub
   @Override
   public void actionkeyPressed(KeyEvent e) {} // auto-generated stub
});
```
Run the application on the Raspberry Pi with the Arduino connected and send the commands for testing. The result should be:
![Testing application running on Raspberry Pi][24]
(Bruno Muniz, [CC BY-SA 4.0][4])
##### Runtime.exec source code
Following is the source code with all parts explained. It includes the thread that will read the serial on line 31 and the `KeyListener` that will send the commands on line 55:
```
package com.totalcross.sample.serial;
import totalcross.ui.MainWindow;
import totalcross.ui.event.KeyEvent;
import totalcross.ui.event.KeyListener;
import totalcross.ui.gfx.Color;
import totalcross.ui.Edit;
import totalcross.ui.ListBox;
import java.io.IOException;
import java.io.OutputStream;
import totalcross.io.LineReader;
import totalcross.io.Stream;
import totalcross.sys.Settings;
import totalcross.sys.SpecialKeys;
public class SerialSample extends MainWindow {
   OutputStream Runexec;
   ListBox Output;
   public SerialSample() {
       setUIStyle(Settings.MATERIAL_UI);
   }
   @Override
   public void initUI() {
       Edit Input = new Edit();
       add(Input, LEFT, BOTTOM, FILL, PREFERRED);
       Output = new ListBox();
       Output.setBackForeColors(Color.BLACK, Color.WHITE);
       add(Output, LEFT, TOP, FILL, FIT);
       new Thread() {
           @Override
           public void run() {
               try {
                   Process Runexec2 = Runtime.getRuntime().exec("cat /dev/ttyUSB0 9600\n");
                   LineReader lineReader = new LineReader(Stream.asStream(Runexec2.getInputStream()));
                   String input;
                   while (true) {
                       if ((input = lineReader.readLine()) != null) {
                           Output.add(input);
                           Output.selectLast();
                           Output.repaintNow();
                       }
                   }
               } catch (IOException ioe) {
                   ioe.printStackTrace();
               }
           }
       }.start();
       try {
           Runexec = Runtime.getRuntime().exec("sh").getOutputStream();
       } catch (IOException ioe) {
           ioe.printStackTrace();
       }
       Input.addKeyListener(new KeyListener() {
           @Override
           public void specialkeyPressed(KeyEvent e) {
               if (e.key == SpecialKeys.ENTER) {
                   String s = Input.getText();
                   Input.clear();
                   try {
                       Runexec.write(("echo \"" + s + "\" > /dev/ttyUSB0 9600\n").getBytes());
                   } catch (IOException ioe) {
                       ioe.printStackTrace();
                   }
               }
           }
           @Override
           public void keyPressed(KeyEvent e) {
           }
           @Override
           public void actionkeyPressed(KeyEvent e) {
           }
       });
   }
}
```
#### 5.2 Option 2: Use PortConnector
PortConnector is specifically for working with serial communication. If you want to follow the original example, you can skip this section, as the intention here is to show another, easier way to work with serial.
Change the original source code to work with PortConnector:
```
package com.totalcross.sample.serial;
import totalcross.io.LineReader;
import totalcross.io.device.PortConnector;
import totalcross.sys.Settings;
import totalcross.sys.SpecialKeys;
import totalcross.ui.Edit;
import totalcross.ui.ListBox;
import totalcross.ui.MainWindow;
import totalcross.ui.event.KeyEvent;
import totalcross.ui.event.KeyListener;
import totalcross.ui.gfx.Color;
public class SerialSample extends MainWindow {
   PortConnector pc;
   ListBox Output;
   public SerialSample() {
       setUIStyle(Settings.MATERIAL_UI);
   }
   @Override
   public void initUI() {
       Edit Input = new Edit();
       add(Input, LEFT, BOTTOM, FILL, PREFERRED);
       Output = new ListBox();
       Output.setBackForeColors(Color.BLACK, Color.WHITE);
       add(Output, LEFT, TOP, FILL, FIT);
       new Thread() {
           @Override
           public void run() {
               try {
                   pc = new PortConnector(PortConnector.USB, 9600);
                   LineReader lineReader = new LineReader(pc);
                   String input;
                   while (true) {
                       if ((input = lineReader.readLine()) != null) {
                           Output.add(input);
                           Output.selectLast();
                           Output.repaintNow();
                       }
                   }
               } catch (totalcross.io.IOException ioe) {
                   ioe.printStackTrace();
               }
           }
       }.start();
       Input.addKeyListener(new KeyListener() {
           @Override
           public void specialkeyPressed(KeyEvent e) {
               if (e.key == SpecialKeys.ENTER) {
                   String s = Input.getText();
                   Input.clear();
                   try {
                       pc.writeBytes(s);
                   } catch (totalcross.io.IOException ioe) {
                       ioe.printStackTrace();
                   }
               }
           }
           @Override
           public void keyPressed(KeyEvent e) {
           }
           @Override
           public void actionkeyPressed(KeyEvent e) {
           }
      });
  }
}
```
You can find all the code in the [project's repository][26].
### 6\. Next steps
This article shows how to use Raspberry Pi serial ports with Java by using either the Runtime or PortConnector classes. You can also call external files in other languages and create countless other projects—like a water quality monitoring system for an aquarium with temperature measurement via the analog inputs, or a chicken brooder with temperature and humidity regulation and a servo motor to rotate the eggs.
A future article will use the PortConnector implementation (because it is focused on serial connection) to finish the communications with all sensors. It will also add a digital input and complete the UI.
Here are some references for more reading:
* [Get started with TotalCross][27]
* [TotalCross PortConnector class][8]
* [Running C++ applications with TotalCross][7]
* [VSCode TotalCross Project Extension plugin][28]
After you connect your Arduino and Raspberry Pi, please leave comments below with your results. We'd love to read them!
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/7/arduino-raspberry-pi
Author: [Patrick Martins de Lima][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux.cn)
[a]: https://opensource.com/users/pattrickx
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/containers_modules_networking_hardware_parts.png?itok=rPpVj92- (Parts, modules, containers for software)
[2]: https://github.com/pattrickx
[3]: https://opensource.com/sites/default/files/uploads/gui-dashboard.png (UI dashboard)
[4]: https://creativecommons.org/licenses/by-sa/4.0/
[5]: https://opensource.com/sites/default/files/uploads/raspberrypi_arduino.png (Raspberry Pi and Arduino logos)
[6]: https://totalcross.com/
[7]: https://learn.totalcross.com/documentation/guides/running-c++-applications-with-totalcross
[8]: https://rs.totalcross.com/doc/totalcross/io/device/PortConnector.html
[9]: https://opensource.com/sites/default/files/uploads/connecting-electronics.png (Connecting the hardware)
[10]: https://opensource.com/sites/default/files/uploads/communication-test-result.png (Testing communications in Arduino)
[11]: https://opensource.com/sites/default/files/uploads/leds.jpg (LEDs lit on board)
[12]: https://opensource.com/sites/default/files/uploads/communication-test-result2.png (Testing communications in Arduino)
[13]: https://opensource.com/sites/default/files/uploads/serial-test.png (Testing serial on Arduino)
[14]: https://opensource.com/sites/default/files/uploads/serial-test2.png (Testing serial on Arduino)
[15]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+color
[16]: https://opensource.com/sites/default/files/uploads/ui_0.png (UI)
[17]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+thread
[18]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+process
[19]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+runtime
[20]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
[21]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+ioexception
[22]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+keylistener
[23]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+keyevent
[24]: https://opensource.com/sites/default/files/uploads/test-commands.png (Testing application running on Raspberry Pi)
[25]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+outputstream
[26]: https://github.com/pattrickx/TotalCrossSerialCommunication
[27]: https://learn.totalcross.com/documentation/get-started/
[28]: https://marketplace.visualstudio.com/items?itemName=Italo.totalcross

View File

@ -1,123 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Tricks with Pseudorandom Number Generators)
[#]: via: (https://theartofmachinery.com/2020/07/18/prng_tricks.html)
[#]: author: (Simon Arneaud https://theartofmachinery.com)
Tricks with Pseudorandom Number Generators
======
Pseudorandom number generators (PRNGs) are often treated like a compromise: their output isn't as good as that of real random number generators, but they're cheap and easy to use on computer hardware. But a special feature of PRNGs is that they're _reproducible_ sources of random-looking data:
```
import std.random;
import std.stdio;
void main()
{
// Seed a PRNG and generate 10 pseudo-random numbers
auto rng = Random(42);
foreach (_; 0..10) write(uniform(0, 10, rng), ' ');
writeln();
// Reset the PRNG, and the same sequence is generated again
rng = Random(42);
foreach (_; 0..10) write(uniform(0, 10, rng), ' ');
writeln();
// Output:
// 2 7 6 4 6 5 0 4 0 3
// 2 7 6 4 6 5 0 4 0 3
}
```
This simple fact enables a few neat tricks.
A couple of famous examples come from the gaming industry. The classic example is the space trading game Elite, which was originally written for 8-bit BBC Micros in the early 80s. It was a totally revolutionary game, but just one thing that amazed fans was its complex universe of thousands of star systems. That was something you just didn't normally get in games written for machines with kilobytes of RAM total. The trick was to generate the universe with a PRNG seeded with a small value. There was no need to store the universe in memory because the game could regenerate each star system on demand, repeatedly and deterministically.
PRNGs are now widely exploited for recording games for replays. You don't need to record every frame of the game world if you can just record the PRNG seed and all the player actions. (Like most things in software, [actually implementing that can be surprisingly challenging][1].)
### Random mappings
In machine learning, you often need a mapping from things to highly dimensional random unit vectors (random vectors of length 1). Let's get more specific and say you're processing documents for topic/sentiment analysis or similarity. In this case you'll generate a random vector for each word in the dictionary. Then you can create a vector for each document by adding up the vectors for each word in it (with some kind of weighting scheme, in practice). Similar documents will end up with similar vectors, and you can use linear algebra tricks to uncover deeper patterns (read about [latent semantic analysis][2] if you're interested).
An obvious way to get a mapping between words and random vectors is to just initially generate a vector for each word, and create a hash table for looking them up later. Another way is to generate the random vectors on demand using a PRNG seeded by a hash of the word. Here's a toy example:
```
/+ dub.sdl:
name "prngvecdemo"
dependency "mir-random" version="~>2.2.14"
+/
// Demo of mapping words to random vectors with PRNGs
// Run me with "dub prngvecdemo.d"
import std.algorithm;
import std.stdio;
// Using the Mir numerical library https://www.libmir.org/
import mir.random.engine.xoshiro;
import mir.random.ndvariable;
enum kNumDims = 512;
alias RNG = Xoroshiro128Plus;
// D's built-in hash happens to be MurmurHash, but we just need it to be suitable for seeding the PRNG
static assert("".hashOf.sizeof == 8);
void main()
{
auto makeUnitVector = sphereVar!float();
auto doc = "a lot of words";
float[kNumDims] doc_vec, word_vec;
doc_vec[] = 0.0;
foreach (word; doc.splitter) // Not bothering with whitening or stop word filtering for this demo
{
// Create a PRNG seeded with the hash of the word
auto rng = RNG(word.hashOf);
// Generate a unit vector for the word using the PRNG
// We'll get the same vector every time we see the same word
makeUnitVector(rng, word_vec);
// Add it to the document vector (no weighting for simplicity)
doc_vec[] += word_vec[];
}
writeln(doc_vec);
}
```
This kind of trick isn't the answer to everything, but it has some uses. Obviously, it can be useful if you're working with more data than you have RAM (though you might still cache some of the generated data). Another use case is processing a large dataset with parallel workers. In the document example, you can get workers to "agree" on what the vector for each word should be, without data synchronisation, and without needing to do an initial pass over the data to build a dictionary of words. I've used this trick with experimental code, just because I was too lazy to add an extra stage to the data pipeline. In some applications, recomputing data on the fly can even be faster than fetching it from a very large lookup table.
### An ode to Xorshift
You might have noticed I used `Xoroshiro128Plus`, a variant of the Xorshift PRNG. The Mersenne Twister is a de facto standard PRNG in some computing fields, but I'm a bit of a fan of the Xorshift family. The basic Xorshift engines are fast and pretty good, and there are variants that are still fast and have excellent output quality. But the big advantage compared to the Mersenne Twister is the state size: the Mersenne Twister uses a pool of 2496 bytes of state, whereas most of the Xorshift PRNGs can fit into one or two machine words.
The small state size has a couple of advantages for this kind of "on demand" PRNG usage. One is that thoroughly initialising a big state from a small seed takes work (some people "warm up" a Mersenne Twister by throwing away several of the initial outputs, just to be sure). The other is that the small size makes these PRNGs cheap enough to use in places you wouldn't think of using a Mersenne Twister.
### Random data structures made reliable
Some data structures and algorithms use randomisation. An example is the treap, a binary search tree that uses randomised heap priorities for balancing. Treaps are much less popular than AVL trees or red-black trees, but they're easier to implement correctly because you end up with fewer edge cases. They're also good enough for most use cases. That makes them a good choice for application-specific "augmented" BSTs. But for the sake of argument, it's just a real example of a data structure that happens to use randomness as an implementation detail.
Randomisation comes with a major drawback: it's a pain when testing and debugging. Test failures aren't reproducible for debugging if real randomness is used. If you have any experience with testing, you'll have seen this, and you'll know it's a good idea to use a PRNG instead.
Using a global PRNG mostly works, but it couples the treaps through one shared PRNG. That accidental coupling can lead to test flakes if you're running several tests at once, unless you're careful to use one PRNG per thread and reset it for every test. Even then you can get Heisenbugs in your non-test code.
What about dependency injection? Making every treap method take a reference to a PRNG works, but it leaks the implementation detail throughout your code. You could make the treap take a reference to a PRNG in its constructor, but that means adding an extra pointer to the data structure. If you're going to do that, why not just make every treap embed its own 32-bit or 64-bit Xorshift PRNG? Embedding the PRNG in the treap makes it deterministic and reproducible in a way that's encapsulated and decoupled from everything else.
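Here's a rough sketch of what that looks like (my illustration, in the same spirit as the earlier D code, and simplified: insertion only, no deletion or duplicate handling). The 32 bits of PRNG state live inside the treap itself.
```
// A treap with an embedded PRNG: no global randomness, no coupling
import std.random;

struct Treap(T)
{
    static struct Node
    {
        T value;
        uint priority;
        Node* left, right;
    }

    Node* root;
    Xorshift32 rng; // embedded state: each treap owns its own randomness

    this(uint seed) { rng = Xorshift32(seed); }

    void insert(T value)
    {
        // Draw the node's heap priority from the treap's own PRNG
        uint priority = rng.front;
        rng.popFront();
        root = insertAt(root, value, priority);
    }

    private static Node* insertAt(Node* node, T value, uint priority)
    {
        if (node is null)
            return new Node(value, priority);
        if (value < node.value)
        {
            node.left = insertAt(node.left, value, priority);
            if (node.left.priority > node.priority)
                node = rotateRight(node); // restore the max-heap property
        }
        else
        {
            node.right = insertAt(node.right, value, priority);
            if (node.right.priority > node.priority)
                node = rotateLeft(node);
        }
        return node;
    }

    private static Node* rotateRight(Node* n)
    {
        auto l = n.left;
        n.left = l.right;
        l.right = n;
        return l;
    }

    private static Node* rotateLeft(Node* n)
    {
        auto r = n.right;
        n.right = r.left;
        r.left = n;
        return r;
    }
}

void main()
{
    auto treap = Treap!int(42);
    foreach (x; [5, 3, 8, 1])
        treap.insert(x);
    // Same seed, same insertions: the tree shape is identical on every run
}
```
Two treaps built with the same seed and the same insertions always end up with the same shape, so a failing test can be replayed exactly.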
--------------------------------------------------------------------------------
via: https://theartofmachinery.com/2020/07/18/prng_tricks.html
作者:[Simon Arneaud][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://theartofmachinery.com
[b]: https://github.com/lujun9972
[1]: https://technology.riotgames.com/news/determinism-league-legends-introduction
[2]: https://en.wikipedia.org/wiki/Latent_semantic_analysis

View File

@ -1,284 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Monitor systemd journals via email)
[#]: via: (https://opensource.com/article/20/7/systemd-journals-email)
[#]: author: (Kevin P. Fleming https://opensource.com/users/kpfleming)
Monitor systemd journals via email
======
Get a daily email with noteworthy output from your systemd journals with
journal-brief.
![Note taking hand writing][1]
Modern Linux systems often use systemd as their init system and manager for jobs and many other functions. Services managed by systemd generally send their output (of all forms: warnings, errors, informational messages, and more) to the systemd journal, not to traditional logging systems like syslog.
In addition to services, Linux systems often have many scheduled jobs (traditionally called cron jobs, even if the system doesn't use `cron` to run them), and these jobs may either send their output to the logging system or allow the job scheduler to capture the output and deliver it via email.
When managing multiple systems, you can install and configure a centralized log-capture system to monitor their behavior, but the complexity of centralized systems can make them hard to manage.
A simpler solution is to have each system directly send "interesting" output to the administrator(s) by email. For systems using systemd, this can be done using Tim Waugh's [journal-brief][2] tool. This tool _almost_ served my needs when I discovered it recently, so, in typical open source fashion, I contributed various patches to add email support to the project. Tim worked with me to get them merged, and now I can use the tool to monitor the 20-plus systems I manage as simply as possible.
Now, early each morning, I receive between 20 and 23 email messages: most of them contain a filtered view of each machine's entire systemd journal (with warnings or more serious messages), but a few are logs generated by scheduled ZFS snapshot-replication jobs that I use for backups. In this article, I'll show you how to set up similar messages.
### Install journal-brief
Although journal-brief is available in many Linux package repositories, the packaged versions will not include email support because that was just added recently. That means you'll need to install it from PyPI; I'll show you how to manually install it into a Python virtual environment to avoid interfering with other parts of the installed system. If you have a favorite tool for doing this, feel free to use it.
Choose a location for the virtual environment; in this article, I'll use `/opt/journal-brief` for simplicity.
Nearly all the commands in this tutorial must be executed with root permissions or the equivalent, even though they are shown with an ordinary `$` prompt. However, it is possible to install the software in a user-owned directory, grant that user permission to read from the journal, and install the necessary units as systemd `user` units, but that is not covered in this article.
Execute the following to create the virtual environment and install journal-brief and its dependencies:
```
$ python3 -m venv /opt/journal-brief
$ source /opt/journal-brief/bin/activate
$ pip install 'journal-brief>=1.1.7'
$ deactivate
```
In order, these commands will:
1. Create `/opt/journal-brief` and set up a Python 3.x virtual environment there
2. Activate the virtual environment so that subsequent Python commands will use it
3. Install journal-brief; note that the single-quotes are necessary to keep the shell from interpreting the `>` character as a redirection
4. Deactivate the virtual environment, returning the shell back to the original Python installation
Also, create some directories to store journal-brief configuration and state files with:
```
$ mkdir /etc/journal-brief
$ mkdir /var/lib/journal-brief
```
### Configure email requirements
While configuring email clients and servers is outside the scope of this article, for journal-brief to deliver email, you will need to have one of the two supported mechanisms configured and operational.
#### Option 1: The `mail` command
Many systems have a `mail` command that can be used to send (and read) email. If such a command is installed on your system, you can verify that it is configured properly by executing a command like:
```
$ echo "Message body" | mail --subject="Test message" {your email address here}
```
If the message arrives in your mailbox, you're ready to proceed using this type of mail delivery in journal-brief. If not, you can either troubleshoot and correct the configuration or use SMTP delivery.
To control the generated email messages' attributes (e.g., From address, To address, Subject) with the `mail` command method, you must use the command-line options in your system's mailer program: journal-brief will only construct a message's body and pipe it to the mailer.
#### Option 2: SMTP delivery
If you have an SMTP server available that can accept email and forward it to your mailbox, journal-brief can communicate directly with it. In addition to plain SMTP, journal-brief supports Transport Layer Security (TLS) connections and authentication, which means it can be used with many hosted email services (like Fastmail, Gmail, Pobox, and others). You will need to obtain a few pieces of information to configure this delivery mode:
* SMTP server hostname
* Port number to be used for message submission (it defaults to port 25, but port 587 is commonly used)
* TLS support (optional or required)
* Authentication information (username and password/token, if required)
When using this delivery mode, journal-brief will construct the entire message before submitting it to the SMTP server, so the From address, To address, and Subject will be supplied in journal-brief's configuration.
### Set up configuration and cursor files
Journal-brief uses YAML-formatted configuration files; it uses one file per desired combination of filtering parameters, delivery options, and output formats. For this article, these files are stored in `/etc/journal-brief`, but you can store them in any location you like.
In addition to the configuration files, journal-brief creates and manages **cursor** files, which allow it to keep track of the last message in its output. Using one cursor file for each configuration file ensures that no journal messages will be lost, in contrast to a time-based log-delivery system, which might miss messages if a scheduled delivery job can't run to completion. For this article, the cursor files will be stored in `/var/lib/journal-brief` (you can store the cursor files in any location you like, but make sure not to store them in any type of temporary filesystem, or they'll be lost).
Finally, journal-brief has extensive filtering and formatting capabilities; I'll describe only the most basic options, and you can learn more about its capabilities in the documentation for journal-brief and [systemd.journal-fields][3].
### Configure a daily email with interesting journal entries
This example will set up a daily email to a system administrator named Robin at `robin@domain.invalid` from a server named `storage`. Robin's mail provider offers SMTP message submission through port 587 on a server named `mail.server.invalid` but does not require authentication or TLS. The email will be sent from `storage-server@domain.invalid`, so Robin can easily filter the incoming messages or generate alerts from them.
Robin has the good fortune to live in Fiji, where the workday starts rather late (around 10:00am), so there's plenty of time every morning to read emails of interesting journal entries. This example will gather the entries and deliver them at 8:30am in the local time zone (Pacific/Fiji).
#### Step 1: Configure journal-brief
Create a text file at `/etc/journal-brief/daily-journal-email.yml` with these contents:
```
cursor-file: '/var/lib/journal-brief/daily-journal-email'
output:
  - 'short'
  - 'systemd'
inclusions:
  - PRIORITY: 'warning'
email:
  suppress_empty: false
  smtp:
    to: '"Robin" <robin@domain.invalid>'
    from: '"Storage Server" <storage-server@domain.invalid>'
    subject: 'daily journal'
    host: 'mail.server.invalid'
    port: 587
```
This configuration causes journal-brief to:
* Store the cursor at the path configured as `cursor-file`
* Format journal entries using the `short` format (one line per entry) and provide a list of any systemd units that are in the `failed` state
* Include journal entries from _any_ service unit (even the Linux kernel) with a priority of `warning`, `error`, or `emergency`
* Send an email even if there are no matching journal entries, so Robin can be sure that the storage server is still operating and has connectivity
* Send the email using SMTP
You can test this configuration file by executing a journal-brief command:
```
$ /opt/journal-brief/bin/journal-brief --conf /etc/journal-brief/daily-journal-email.yml
```
Journal-brief will scan the systemd journal for all new messages (yes, _all_ of the messages it has never seen before), identify any that match the priority filter, and format them into an email that it sends to Robin. If the storage server has been operational for months (or years) and the systemd journal has never been purged, this could produce a very large email message. In addition to Robin not appreciating such a large message, Robin's email provider may not be willing to accept it, so you can generate a shorter message by executing this command:
```
$ /opt/journal-brief/bin/journal-brief -b --conf /etc/journal-brief/daily-journal-email.yml
```
Adding the `-b` argument tells journal-brief to inspect only the systemd journal entries from the most recent system boot and ignore any that are older.
After journal-brief sends the email to the SMTP server, it writes a string into the cursor file so that the next time it runs using the same cursor file, it will know where to start in the journal. If the process fails for any reason (e.g., journal entry gathering, entry formatting, or SMTP delivery), the cursor file will _not_ be updated, which means the next time it uses the cursor file, the entries that would have been in the failed email will be included in the next email instead.
#### Step 2: Set up the systemd service unit
Create a text file at `/etc/systemd/system/daily-journal-email.service` with:
```
[Unit]
Description=Send daily journal report
[Service]
ExecStart=/opt/journal-brief/bin/journal-brief --conf /etc/journal-brief/%N.yml
Type=oneshot
```
This service unit will run journal-brief and specify a configuration file with the same name as the unit file with the suffix removed, which is what `%N` supplies. Since this service will be started by a timer (see step 3), there is no need to enable or manually start it.
#### Step 3: Set up the systemd timer unit
Create a text file at `/etc/systemd/system/daily-journal-email.timer` with:
```
[Unit]
Description=Trigger daily journal email report
[Timer]
OnCalendar=*-*-* 08:30:00 Pacific/Fiji
[Install]
WantedBy=multi-user.target
```
This timer will start the `daily-journal-email` service unit (because its name matches the timer name) every day at 8:30am in the Pacific/Fiji time zone. If the time zone was not specified, the timer would trigger the service at 8:30am in the system time zone configured on the `storage` server.
To make this timer start every time the system boots, it is wanted by the `multi-user.target` (via the `WantedBy` directive). To enable and start the timer:
```
$ systemctl enable daily-journal-email.timer
$ systemctl start daily-journal-email.timer
$ systemctl list-timers daily-journal-email.timer
```
The last command will display the timer's status, and the `NEXT` column will indicate the next time the timer will start the service.
To learn more about systemd timers and building schedules for them, read [_Use systemd timers instead of cronjobs_][6].
Now the configuration is complete, and Robin will receive a daily email of interesting journal entries.
### Monitor the output of a specific service
The `storage` server has some filesystems on solid-state storage devices (SSD) and runs Fedora Linux. Fedora has an `fstrim` service that is scheduled to run once per week (using a systemd timer, as in the example above). Robin would like to see the output generated by this service, even if it doesn't generate any warnings or errors. While this output will be included in the daily journal email, it will be intermingled with other journal entries, and Robin would prefer to have the output in its own email message.
#### Step 1: Configure journal-brief
Create a text file at `/etc/journal-brief/fstrim.yml` with:
```
cursor-file: '/var/lib/journal-brief/fstrim'
output: 'short'
inclusions:
  - _SYSTEMD_UNIT:
      - fstrim.service
email:
  suppress_empty: false
  smtp:
    to: '"Robin" <robin@domain.invalid>'
    from: '"Storage Server" <storage-server@domain.invalid>'
    subject: 'weekly fstrim'
    host: 'mail.server.invalid'
    port: 587
```
This configuration is similar to the previous example, except that it will include _all_ entries related to a systemd unit named `fstrim.service`, regardless of their priority levels, and will include _only_ entries related to that service.
#### Step 2: Modify the systemd service unit
Unlike in the previous example, you don't need to create a systemd service unit or timer, since they already exist. Instead, you want to add behavior to the existing service unit by using the systemd "drop-in file" mechanism (to avoid modifying the system-provided unit file).
First, ensure that the `EDITOR` environment variable is set to your preferred text editor (otherwise you'll get the default editor on your system), and execute:
```
$ systemctl edit fstrim.service
```
Note that this does not edit the existing service unit file; instead, it opens an editor session to create a drop-in file (located at `/etc/systemd/system/fstrim.service.d/override.conf`).
Paste these contents into the editor and save the file:
```
[Service]
ExecStopPost=/opt/journal-brief/bin/journal-brief --conf /etc/journal-brief/%N.yml
```
After you exit the editor, the systemd configuration will reload automatically (which is one benefit of using `systemctl edit` instead of creating the file directly). Like in the previous example, this drop-in uses `%N` to avoid duplicating the service name; this means that the drop-in contents can be applied to any service on the system, as long as the appropriate configuration file is created in `/etc/journal-brief`.
Using `ExecStopPost` will make journal-brief run after any attempt to run the `fstrim.service`, whether or not it's successful. This is quite useful, as the email will be generated even if the `fstrim.service` cannot be started (for example, if the `fstrim` command is missing or not executable).
Please note that this technique is primarily applicable to systemd services that run to completion before exiting (in other words, not background or daemon processes). If the `Type` in the `Service` section of the service's unit file is `forking`, then journal-brief will not execute until the specified service has stopped (either manually or by a system target change, like shutdown).
The configuration is complete; Robin will receive an email after every attempt to start the `fstrim` service; if the attempt is successful, then the email will include the output generated by the service.
### Monitor without extra effort
With this setup, you can monitor the health of your Linux systems that use systemd without needing to set up any centralized monitoring or logging tools. I find this monitoring method quite effective, as it draws my attention to unusual events on the servers I maintain without requiring any additional effort.
Special thanks to Tim Waugh for creating the journal-brief tool and being willing to accept a rather large patch to add direct email support rather than running journal-brief through cron.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/7/systemd-journals-email
作者:[Kevin P. Fleming][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/kpfleming
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/note-taking.jpeg?itok=fiF5EBEb (Note taking hand writing)
[2]: https://github.com/twaugh/journal-brief
[3]: https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html
[4]: mailto:robin@domain.invalid
[5]: mailto:storage-server@domain.invalid
[6]: https://opensource.com/article/20/7/systemd-timers

View File

@ -1,176 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Open ports and route traffic through your firewall)
[#]: via: (https://opensource.com/article/20/9/firewall)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Open ports and route traffic through your firewall
======
Safely and securely give outside parties access to your network.
![Traffic lights at night][1]
Ideally, most local networks are protected from the outside world. If you've ever tried installing a service, such as a web server or a [Nextcloud][2] instance at home, then you probably know from first-hand experience that, while the service is easy to reach from inside the network, it's unreachable over the worldwide web.
There are both technical and security reasons for this, but sometimes you want to open access to something within a local network to the outside world. This means you need to be able to route traffic from the internet into your local network—correctly and safely. In this article, I'll explain how.
### Local and public IP addresses
The first thing you need to understand is the difference between a local internet protocol (IP) address and a public IP address. Currently, most of the world (still) uses an addressing system called IPv4, which famously has a limited pool of numbers available to assign to networked electronic devices. In fact, there are more networked devices in the world than there are IPv4 addresses, and yet IPv4 continues to function. This is possible because of local addresses.
All local networks in the world use the _same_ address pools. For instance, my home router's local IP address is 192.168.1.1, and that's probably the same number your home router uses; yet when I navigate to 192.168.1.1, I reach _my_ router's login screen and not _your_ router's login screen. That's because your home router actually has two addresses: one public and one local, and the public one shields the local one from being detected by the internet, much less from being confused with someone else's 192.168.1.1.
![network of networks][3]
(Seth Kenlon, [CC BY-SA 4.0][4])
This, in fact, is why the internet is called the internet: it's a "web" of interconnected and otherwise self-contained networks. Each network, whether it's your workplace or your home or your school or a big data center or the "cloud" itself, is a collection of connected hosts that, in turn, communicate with a gateway (usually a router) that manages traffic from the internet and to the local network, as well as out of the local network to the internet.
This means that if you're trying to access a computer on a network that's not the network you're currently attached to, then knowing the local address of that computer does you no good. You need to know the _public_ address of the remote network's gateway. And that's not all. You also need permission to pass through that gateway into the remote network.
### Firewalls
Ideally, there are firewalls all around you, even now. You don't see them (hopefully), but they're there. As technology goes, firewalls have a fun name, but they're actually a little boring. A firewall is just a computer service (also called a "daemon"), a subsystem that runs in the background of most electronic devices. There are many daemons running on your computer, including the one listening for mouse or trackpad movements, for instance. A firewall is a daemon programmed to either accept or deny certain kinds of network traffic.
Firewalls are relatively small programs, so they are embedded in most modern devices. They're running on your mobile phone, on your router, and your computer. Firewalls are designed based on network protocols, and it's part of the specification of talking to other computers that a data packet sent over a network must announce specific pieces of information about itself (or be ignored). One thing that network data contains is a _port_ number, which is one of the primary things a firewall uses when accepting or denying traffic.
Websites, for instance, are hosted on web servers. When you want to view a website, your computer sends network data identifying itself as traffic destined for port 80 of the web host. The web server's firewall is programmed to accept incoming traffic destined for port 80, so it accepts your request (and the web server, in turn, sends you the web page in response). However, were you to send (whether by accident or by design) network data destined for port 22 of that web server, you'd likely be denied by the firewall (and possibly banned for some time).
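As a quick illustration (an example of mine with a placeholder host, not from the article), the destination port is just part of the request, and you can name it explicitly:
```
$ curl http://example.com:80/   # destined for port 80: the web host's firewall accepts this
$ curl http://example.com:22/   # destined for port 22: the firewall will likely drop or refuse it
```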
This can be a strange concept to understand because, like IP addresses, ports and firewalls don't really "exist" in the physical world. These are concepts defined in software. You can't open your computer or your router to physically inspect network ports, and you can't look at a number printed on a chip to find your IP address, and you can't douse your firewall in water to put it out. But now that you know these concepts exist, you know the hurdles involved in getting from one computer in one network to another on a different network.
Now it's time to get around those blockades.
### Your IP address
I assume you have control over your own network, and you're trying to open your own firewalls and route your own traffic to permit outside traffic into your network. First, you need your local and public IP addresses.
To find your local IP address, you can use the `ip addr` command on Linux:
```
$ ip addr show | grep "inet "
 inet 127.0.0.1/8 scope host lo
 inet 192.168.1.6/27 brd 192.168.1.31 scope [...]
```
In this example, my local IP address is 192.168.1.6. The other address (127.0.0.1) is a special "loopback" address that your computer uses to refer to itself from within itself.
To find your local IP address on macOS, you can use `ifconfig`:
```
$ ifconfig | grep "inet "
 inet 127.0.0.1 netmask 0xff000000
 inet 192.168.1.6 netmask 0xffffffe0 [...]
```
And on Windows, use `ipconfig`:
```
$ ipconfig
```
Get the public IP address of your router at [icanhazip.com][5]. On Linux, you can get this from a terminal with the [curl command][6]:
```
$ curl http://icanhazip.com
93.184.216.34
```
Keep these numbers handy for later.
### Directing traffic through a router
The first device that needs to be adjusted is the gateway device. This could be a big, physical server, or it could be a tiny router. Either way, the gateway is almost certainly performing network address translation (NAT), which is the process of accepting traffic and altering the destination IP address.
When you generate network traffic to view an external website, your computer must send that traffic to your local network's gateway because your computer has, essentially, no knowledge of the outside world. As far as your computer knows, the entire internet is just your network router, 192.168.1.1 (or whatever your router's address is). So, your computer sends everything to your gateway. It's the gateway's job to look at the traffic and determine where it's _actually_ headed, and then forward that data on to the real internet. When the gateway receives a response, it forwards the incoming data back to your computer.
If your gateway is a router, then to expose your computer to the outside world, you must designate a port in your router to represent your computer. This configures your router to accept traffic to a specific port and direct all of that traffic straight to your computer. Depending on the brand of router you use, this process goes by a few different names, including port forwarding or virtual server or sometimes even firewall settings.
Every device is different, so there's no way for me to tell you exactly what you need to click on to adjust your settings. Generally, you access your home router through a web browser. Your router's address is sometimes printed on the bottom of the router, and it begins with either 192.168 or 10.
Navigate to your router's address and log in with the credentials you were provided when you got your internet service. It's often as simple as `admin` with a numeric password (sometimes, this password is printed on the router, too). If you don't know the login, call your internet provider and ask for details.
In the graphical interface, redirect incoming traffic for one port to a port (the same one is usually easiest) of your computer's local IP address. In this example, I redirect incoming traffic destined for port 22 (used for SSH connections) of my home router to my desktop PC.
![Example of a router configuration][7]
(Seth Kenlon, [CC BY-SA 4.0][4])
You can redirect any port you want. For instance, if you're hosting a website on a spare computer, you can redirect traffic destined for port 80 of your router to port 80 of your website host.
### Directing traffic through a server
If your gateway is a physical server, you can direct traffic using [firewall-cmd][8]. Using the _rich rule_ option, you can have your server listen for an incoming request at a specific address (your public IP) and specific port (in this example, I use 22, which is the port used for SSH), and then direct that traffic to an IP address and port in the local network (your computer's local address).
```
$ firewall-cmd --permanent --zone=public \
    --add-rich-rule 'rule family="ipv4" destination address="93.184.216.34" forward-port port=22 protocol=tcp to-port=22 to-addr=192.168.1.6'
```
### Set your firewall
Most devices have firewalls, so you might find that traffic can't get through to your local computer even after you've forwarded ports and traffic. It's possible that there's a firewall blocking traffic even within your local network. Firewalls are designed to make your computer secure, so resist the urge to deactivate your firewall entirely (except for troubleshooting). Instead, you can selectively allow traffic.
The process of modifying your personal firewall differs according to your operating system.
On Linux, there are many services already defined. View the ones available:
```
$ sudo firewall-cmd --get-services
amanda-client amanda-k5-client bacula bacula-client
bgp bitcoin bitcoin-rpc ceph cfengine condor-collector
ctdb dhcp dhcpv6 dhcpv6-client dns elasticsearch
freeipa-ldaps ftp [...] ssh steam-streaming svdrp [...]
```
If the service you're trying to allow is listed, you can add it to your firewall:
```
$ sudo firewall-cmd --add-service ssh --permanent
```
If your service isn't listed, you can add the port you want to open manually:
```
$ sudo firewall-cmd --add-port 22/tcp --permanent
```
Opening a port in your firewall is specific to your current _zone_. For more information about firewalls, firewall-cmd, and ports, refer to my article [_Make Linux stronger with firewalls_][8], and download our [Firewall cheatsheet][9] for quick reference.
This step is only about opening a port in your computer so that traffic destined for it on a specific port is accepted. You don't need to redirect traffic because you've already done that at your gateway.
### Make the connection
You've set up your gateway and your local network to route traffic for you. Now, when someone outside your network navigates to your public IP address, destined for a specific port, they'll be redirected to your computer on the same port. It's up to you to monitor and safeguard your network, so use your new knowledge with care. Too many open ports can look like invitations to bad actors and bots, so only open what you intend to use. And most of all, have fun!
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/firewall
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/traffic-light-go.png?itok=nC_851ys (Traffic lights at night)
[2]: http://nextcloud.org
[3]: https://opensource.com/sites/default/files/uploads/network-of-networks.png (network of networks)
[4]: https://creativecommons.org/licenses/by-sa/4.0/
[5]: http://icanhazip.com
[6]: https://opensource.com/article/20/5/curl-cheat-sheet
[7]: https://opensource.com/sites/default/files/uploads/port-mapping.png (Example of a router configuration)
[8]: https://opensource.com/article/19/7/make-linux-stronger-firewalls
[9]: https://opensource.com/article/20/2/firewall-cheat-sheet

View File

@ -1,88 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Tux the Linux Penguin in its first video game, better DNS and firewall on Android, Gitops IDE goes open source, and more open source news)
[#]: via: (https://opensource.com/article/20/9/news-sept-8)
[#]: author: (Lauren Maffeo https://opensource.com/users/lmaffeo)
Tux the Linux Penguin in its first video game, better DNS and firewall on Android, Gitops IDE goes open source, and more open source news
======
Catch up on the biggest open source headlines from the past two weeks.
![][1]
In this week's edition of our open source news roundup, Gitpod open sources its IDE platform, BraveDNS launches an all-in-one platform, and more open source news.
### Engineers debut an open source-powered robot
Matthias Müller and Vladlen Koltun, two engineers at Intel, have shared their new robot to tackle computer vision tasks. [The robot][2], called "OpenBot", is powered by a smartphone, which acts as a camera and computing unit. 
The OpenBot prototype components cost $50. It's intended to be a low-cost alternative to commercially available radio-controlled models, with more computing power than educational models.
To use OpenBot, users can connect their smartphones to an electromechanical body. They can also use Bluetooth to connect their smartphone to a video game controller like an Xbox or PlayStation. 
Müller and Koltun say they want OpenBot to address two key issues in robotics: Scalability and accessibility. Its source code is still pending [on GitHub][3], although models for 3D-printing the case are up.
### Tux the Linux Penguin gets his video game dues
A new update to [a free and open source 3D kart racer][4] features an unlikely hero: Tux, the Linux penguin.
Born in the early aughts as a project called _TuxKart_, the game was renamed "Super Tux Kart" by Joerg Henrichs in 2006. Tux is the latest open source mascot to feature in the project: Blender's and GIMP's mascots are represented as well.
Along with adding Tux to the mix, Super Tux Kart Version 1.2 includes lots of updates. iOS users can create racing servers in-game, while the Android build now includes all official tracks. And since the game is open source [on four platforms][5], all players can make their own changes to submit for review.
### BraveDNS offers three services in one for Android users
It's notoriously tough for Android users to find a firewall, adblocker, and DNS-over-HTTPS client in one product. But if BraveDNS lives up to the hype, this free and open source tool offers all three in one. 
Self-described as “an [OpenSnitch][6]-inspired firewall and network monitor + a [pi-hole][7]-inspired DNS over HTTPS client with blocklists”, BraveDNS uses its own ad-, tracker-, and spyware-blocking DNS endpoint. Users who need features like custom blocklists and the ability to store DNS logs can use the tool's DNS resolver service as a paid option.
Along with a robust [list of firewall features][8], BraveDNS offers to backport support for dual-mode DNS and firewall execution to legacy Android versions. You'll need at least Android 8 Oreo to use the latest version of BraveDNS from their website or Google Play, but the developers pledge to make it compatible down to Android Marshmallow in the near future.
### Gitpod open sources its IDE platform
With projects like Theia, Xtext, and Open VSX under its belt, Gitpod has been a strong open source presence for 10 years. Now, Gitpod -- an IDE platform for GitHub projects -- is [officially open source][9] as well.
The move marks a big change for Gitpod, which had been closed to community development from the start. Founders Sven Efftinge and Johannes Landgraf shared that Gitpod now meets GitHub's open source criteria under the AGPL license. This allows Gitpod developers to collaborate on Kubernetes applications.
Along with Gitpod's open source status, they've expanded into software as well. Self-Hosted, a private cloud platform, is now available for free to unlimited users. Designed for DevOps teams to work on enterprise projects, Self-Hosted's features include collaboration tools, analytics, dashboards, and more.
In other news:
* [5 open source software applications for virtualization][10]
* [Building a heavy duty open source ventilator][11]
* [China looks at Gitee as an open source alternative to Microsoft's GitHub][12]
* [The future of American industry depends on open source tech][13]
Thanks, as always, to Opensource.com staff members and [Correspondents][14] for their help this week.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/news-sept-8
作者:[Lauren Maffeo][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/lmaffeo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/weekly_news_roundup_tv.png?itok=tibLvjBd
[2]: https://www.inceptivemind.com/openbot-open-source-low-cost-smartphone-powered-robot/15023/
[3]: https://github.com/intel-isl/OpenBot
[4]: https://hothardware.com/news/super-tux-kart-update
[5]: https://supertuxkart.net/Download
[6]: https://github.com/evilsocket/opensnitch
[7]: https://github.com/pi-hole/pi-hole
[8]: https://www.xda-developers.com/bravedns-open-source-dns-over-https-client-firewall-adblocker-android/
[9]: https://aithority.com/it-and-devops/gitpod-goes-open-source-with-its-ide-platform-launches-self-hosted-cloud-package/
[10]: https://searchservervirtualization.techtarget.com/tip/5-open-source-software-applications-for-virtualization
[11]: https://hackaday.com/2020/08/28/building-a-heavy-duty-open-source-ventilator/
[12]: https://www.scmp.com/abacus/tech/article/3099107/china-pins-its-hopes-gitee-open-source-alternative-microsofts-github
[13]: https://www.wired.com/story/opinon-the-future-of-american-industry-depends-on-open-source-tech/
[14]: https://opensource.com/correspondent-program

View File

@ -1,106 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Protect your network with open source tools)
[#]: via: (https://opensource.com/article/20/10/apache-security-tools)
[#]: author: (Chantale Benoit https://opensource.com/users/chantalebenoit)
Protect your network with open source tools
======
Apache Syncope and Metron can help you secure your network against
unauthorized access and data loss.
![A lock on the side of a building][1]
System integrity is essential, especially when you're charged with safeguarding other people's personal details on your network. It's critical that system administrators are familiar with security tools, whether their purview is a home, a small business, or an organization with hundreds or thousands of employees.
### How cybersecurity works
Cybersecurity involves securing networks against unauthorized access. However, there are many attack vectors out there that most people don't consider. The cliché of a lone hacker manually dueling with firewall rules until they gain access to a network is popular—but wildly inaccurate. Security breaches happen through automation, malware, phishing, ransomware, and more. You can't directly fight every attack as it happens, and you can't count on every computer user to exercise common sense. Therefore, you have to design a system that resists intrusion and protects users against outside attacks as much as it protects them from their own mistakes.
The advantage of open source security tools is that they keep vulnerabilities transparent. They give full visibility into their codebase and are supported by a global community of experts working together to create strong, tried-and-tested code.
With so many domains needing protection, there's no single cybersecurity solution that fits every situation, but here are two that you should consider.
### Apache Syncope
[Apache Syncope][2] is an open source system for managing digital identities in an enterprise environment. From focusing on identity lifecycle management and identity storage to provisioning engines and accessing management capabilities, Apache Syncope is a comprehensive identity management solution. It also provides monitoring and security features for third-party applications.
Apache Syncope synchronizes users, groups, and other objects. _Users_ represent the buildup of virtual identities and account information fragmented across external resources. _Groups_ are entities on external resources that support the concept, such as LDAP or Active Directory. _Objects_ are entities such as printers, services, and sensors. It also does full reconciliation and live synchronization from external resources with workflow-based approval.
#### Third-party applications
Apache Syncope also exposes a fully compliant [JAX-RS][3] 2.0 [RESTful][4] interface to enable third-party applications written in any programming language. These applications consume identity management services, such as:
* **Logic:** Syncope implements business logic that can be triggered through REST services and controls additional features such as notifications, reports, and auditing.
* **Provisioning:** It manages the internal and external representation of users, groups, and objects through workflow and specific connectors.
* **Workflow:** Syncope supports Activiti or Flowable [business process management (BPM)][5] workflow engines and allows defining new and custom workflows when needed.
* **Persistence:** It manages all data, such as users, groups, attributes, and resources, at a high level using a standard [JPA 2.0][6] approach. The data is further persisted to an underlying database, such as internal storage.
* **Security:** Syncope defines a fine-grained set of entitlements, which are granted to administrators and enable the implementation of delegated administration scenarios.
#### Syncope extensions
Apache Syncope's features can be enhanced with [extensions][7], which add a REST endpoint and manage the persistence of additional entities, tweak the provisioning layer, and add features to the user interface.
Some popular extensions include:
* **Swagger UI** works as a user interface for Syncope RESTful services.
* **SSO support** provides OpenID Connect and SAML 2.0 access to administrative or end-user web interfaces.
* **Apache Camel provisioning manager** delegates the execution of the provisioning process to a group of Apache Camel routes. It can be dynamically changed at the runtime through the REST interfaces or the administrative console, and modifications are also instantly available for processing.
* **Elasticsearch** provides an alternate internal search engine for users, groups, and objects through an external [Elasticsearch][8] cluster.
### Apache Metron
Security information and event management ([SIEM][9]) gives admins insights into the activities happening within their IT environment. It combines the concepts of security event management (SEM) with security information management (SIM) into one functionality. SIEM collects security data from network devices, servers, and domain controllers, then aggregates and analyzes the data to detect malicious threats and payloads.
[Apache Metron][10] is an advanced security analytics framework that detects cyber anomalies, such as phishing activity and malware infections. Further, it enables organizations to take corrective measures to counter the identified anomalies.
It also interprets and normalizes security events into standard JSON language, which makes it easier to analyze security events, such as:
* An employee flagging a suspicious email
* An authorized or unauthorized software download by an employee to a company device
* A security lapse due to a server outage
Apache Metron provides security alerts, labeling, and data enrichment. It can also store and index security events. Its four key capabilities are:
* **Security data lake:** Metron is a cost-effective way to store and combine a wide range of business and security data. The security data lake provides the amount of data required to power discovery analytics. It also provides a mechanism to search and query for operational analytics.
* **Pluggable framework:** It provides a rich set of parsers for common security data sources such as pcap, NetFlow, Zeek (formerly Bro), Snort, FireEye, and Sourcefire. You can also add custom parsers for new data sources, including enrichment services for more contextual information, to the raw streaming data. The pluggable framework provides extensions for threat-intel feeds and lets you customize security dashboards. Machine learning and other models can also be plugged into real-time streams and provide extensibility.
* **Threat detection platform:** It uses machine learning algorithms to detect anomalies in a system. It also helps analysts extract and reconstruct full packets to understand the attacker's identity, what data was leaked, and where the data was sent.
* **Incident response application:** This refers to evolved SIEM capabilities, including alerting, threat intel frameworks, and agents to ingest data sources. Incident response applications include packet replay utilities, evidence storage, and hunting services commonly used by security operations center analysts.
### Security matters
Incorporating open source security tools into your IT infrastructure is imperative to keep your organization safe and secure. Open source tools, like Syncope and Metron from Apache, can help you identify and counter security threats. Learn to use them well, file bugs as you find them, and help the open source community protect the world's data.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/10/apache-security-tools
作者:[Chantale Benoit][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/chantalebenoit
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_3reasons.png?itok=k6F3-BqA (A lock on the side of a building)
[2]: https://syncope.apache.org/
[3]: https://jax-rs.github.io/apidocs/2.0/
[4]: https://www.redhat.com/en/topics/api/what-is-a-rest-api
[5]: https://www.redhat.com/en/topics/automation/what-is-business-process-management
[6]: http://openjpa.apache.org/openjpa-2.0.0.html
[7]: http://syncope.apache.org/docs/2.1/reference-guide.html#extensions
[8]: https://opensource.com/life/16/6/overview-elastic-stack
[9]: https://en.wikipedia.org/wiki/Security_information_and_event_management
[10]: http://metron.apache.org/

View File

@ -1,104 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Top 5 open source alternatives to Google Analytics)
[#]: via: (https://opensource.com/article/18/1/top-5-open-source-analytics-tools)
[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt)
Top 5 open source alternatives to Google Analytics
======
These five versatile web analytics tools provide valuable insights on
your customers and site visitors while keeping you in control.
![Analytics: Charts and Graphs][1]
If you have a website or run an online business, collecting data on where your visitors or customers come from, where they land on your site, and where they leave _is vital._ Why? That information can help you better target your products and services, and beef up the pages that are turning people away.
To gather that kind of information, you need a web analytics tool.
Many businesses of all sizes use Google Analytics. But if you want to keep control of your data, you need a tool that _you_ can control. You won't get that from Google Analytics. Luckily, Google Analytics isn't the only game on the web.
Here are five open source alternatives to Google Analytics.
### Matomo
Let's start with the open source application that rivals Google Analytics for functionality: [Matomo][2] (formerly known as Piwik). Matomo does most of what Google Analytics does, and chances are it offers the features that you need.
Those features include metrics on the number of visitors hitting your site, data on where they come from (both on the web and geographically), the pages from which they leave, and the ability to track search engine referrals. Matomo also offers many reports, and you can customize the dashboard to view the metrics that you want to see.
To make your life easier, Matomo integrates with more than 65 content management, e-commerce, and online forum systems, including WordPress, Magento, Joomla, and vBulletin, using plugins. For any others, you can simply add a tracking code to a page on your site.
You can [test-drive][3] Matomo or use a [hosted version][4].
### Open Web Analytics
If there's a close second to Matomo in the open source web analytics stakes, it's [Open Web Analytics][5]. In fact, it includes key features that either rival Google Analytics or leave it in the dust.
In addition to the usual raft of analytics and reporting functions, Open Web Analytics tracks where on a page, and on what elements, visitors click; provides [heat maps][6] that show where on a page visitors interact the most; and even does e-commerce tracking.
Open Web Analytics has a [WordPress plugin][7] and can [integrate with MediaWiki][8] using a plugin. Or you can add a snippet of [JavaScript][9] or [PHP][10] code to your web pages to enable tracking.
Before you [download][11] the Open Web Analytics package, you can [give the demo a try][12] to see if it's right for you.
### AWStats
Web server log files provide a rich vein of information about visitors to your site, but tapping into that vein isn't always easy. That's where [AWStats][13] comes to the rescue. While it lacks the most modern look and feel, AWStats more than makes up for that with the breadth of data it can present.
That information includes the number of unique visitors, how long those visitors stay on the site, the operating system and web browsers they use, the size of a visitor's screen, and the search engines and search terms people use to find your site. AWStats can also tell you the number of times your site is bookmarked, track the pages where visitors enter and exit your site, and keep a tally of the most popular pages on your site.
These features only scratch the surface of AWStats's capabilities. It also works with FTP and email logs, as well as [syslog][14] files. AWStats can give you deep insight into what's happening on your website using data that stays under your control.
### Countly
[Countly][15] bills itself as a "secure web analytics" platform. While I can't vouch for its security, Countly does a solid job of collecting and presenting data about your site and its visitors.
Heavily targeting marketing organizations, Countly tracks data that is important to marketers. That information includes site visitors' transactions, as well as which campaigns and sources led visitors to your site. You can also create metrics that are specific to your business. Countly doesn't forgo basic web analytics; it also keeps track of the number of visitors on your site, where they're from, which pages they visited, and more.
You can use the hosted version of Countly or [grab the source code][16] from GitHub and self-host the application. And yes, there are [differences between the hosted and self-hosted versions][17] of Countly.
### Plausible
[Plausible][18] is a newer kid on the open source analytics tools block. It's lean, it's fast, and it only collects a small amount of information: the number of unique visitors, the top pages they visited, the number of page views, the bounce rate, and referrers. Plausible is simple and very focused.
What sets Plausible apart from its competitors is its heavy focus on privacy. The project's creators state that the tool doesn't collect or store any personal information about visitors to your website, which is particularly attractive if privacy is important to you. You can read more about that [here][19].
There's a [demo instance][20] that you can check out. After that, you can either [self-host][21] Plausible or sign up for a [paid, hosted account][22].
**Share your favorite open source web analytics tool with us in the comments.**
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/1/top-5-open-source-analytics-tools
作者:[Scott Nesbitt][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/scottnesbitt
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/analytics-graphs-charts.png?itok=sersoqbV (Analytics: Charts and Graphs)
[2]: https://matomo.org/
[3]: https://demo.matomo.org/index.php?module=CoreHome&action=index&idSite=3&period=day&date=yesterday
[4]: https://www.innocraft.cloud/
[5]: http://www.openwebanalytics.com/
[6]: http://en.wikipedia.org/wiki/Heat_map
[7]: https://github.com/padams/Open-Web-Analytics/wiki/WordPress-Integration
[8]: https://github.com/padams/Open-Web-Analytics/wiki/MediaWiki-Integration
[9]: https://github.com/padams/Open-Web-Analytics/wiki/Tracker
[10]: https://github.com/padams/Open-Web-Analytics/wiki/PHP-Invocation
[11]: https://github.com/padams/Open-Web-Analytics
[12]: http://demo.openwebanalytics.com/
[13]: http://www.awstats.org
[14]: https://en.wikipedia.org/wiki/Syslog
[15]: https://count.ly/web-analytics
[16]: https://github.com/Countly
[17]: https://count.ly/pricing#compare-editions
[18]: https://plausible.io
[19]: https://plausible.io/data-policy
[20]: https://plausible.io/plausible.io
[21]: https://plausible.io/self-hosted-web-analytics
[22]: https://plausible.io/register

View File

@ -1,78 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What's the difference between orchestration and automation?)
[#]: via: (https://opensource.com/article/20/11/orchestration-vs-automation)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
What's the difference between orchestration and automation?
======
Both terms imply that things happen without your direct intervention.
But the way you get to those results, and the tools you use to make them
happen, differ.
![doodles of arrows moving in different directions][1]
For the longest time, it seemed the only thing any sysadmin cared about was automation. Recently, though, the mantra seems to have changed from automation to orchestration, leading many puzzled admins to wonder: "What's the difference?"
The difference between automation and orchestration is primarily in intent and tooling. Technically, automation can be considered a subset of orchestration. While orchestration suggests many moving parts, automation usually refers to a singular task or a small number of strongly related tasks. Orchestration works at a higher level and is expected to make decisions based on changing conditions and requirements.
However, this view shouldn't be taken too literally because both terms—_automation_ and _orchestration_—do have implications when they're used. The results of both are functionally the same: things happen without your direct intervention. But the way you get to those results, and the tools you use to make them happen, are different, or at least the terms are used differently depending on what tools you've used.
For instance, automation usually involves scripting, often in Bash or Python or similar, and it often suggests scheduling something to happen at either a precise time or upon a specific event. However, orchestration often begins with an application that's purpose-built for a set of tasks that may happen irregularly, on demand, or as a result of any number of trigger events, and the exact results may even depend on a variety of conditions.
### Decisionmaking and IT orchestration
Automation suggests that a sysadmin has invented a system to cause a computer to do something that would normally have to be done manually. In automation, the sysadmin has already made most of the decisions on what needs to be done, and all the computer must do is execute a "recipe" of tasks.
Orchestration suggests that a sysadmin has set up a system to do something on its own based on a set of rules, parameters, and observations. In orchestration, the sysadmin knows the desired end result but leaves it up to the computer to decide what to do.
Consider Ansible and Bash. Bash is a popular shell and scripting language used by sysadmins to accomplish practically everything they do during a given workday. Automating with Bash is straightforward: Instead of typing commands into an interactive session, you type them into a text document and save the file as a shell script. Bash runs the shell script, executing each command in succession. There's room for some conditional decisionmaking, but usually, it's no more complex than simple if-then statements, each of which must be coded into the script.
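As a minimal sketch of that style of automation (the paths, archive name, and cron schedule here are hypothetical):

```
#!/usr/bin/env bash
# Nightly backup automation: archive a directory, then prune old archives.
SRC="$HOME/documents"   # hypothetical directory to back up
DEST="/var/backups"     # hypothetical destination

if ! tar -czf "$DEST/docs-$(date +%F).tar.gz" "$SRC"; then
    # Simple if-then decisionmaking, coded directly into the script
    echo "backup failed" >&2
    exit 1
fi

# Remove archives older than 30 days
find "$DEST" -name 'docs-*.tar.gz' -mtime +30 -delete
```

Scheduled from cron with an entry such as `0 2 * * * /usr/local/bin/backup.sh`, this is automation in its classic form: every decision is baked into the script ahead of time.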
Ansible, on the other hand, uses playbooks in which a sysadmin describes the desired state of the computer. It lists requirements that must be met before Ansible can consider the job done. When Ansible runs, it takes action based on the current state of the computer compared to the desired state, based on the computer's operating system, and so on. A playbook doesn't contain specific commands, instead leaving those decisions up to Ansible itself.
Of course, it's particularly revealing that Ansible is referred to as an automation—not an orchestration—tool. The difference can be subtle, and the terms definitely overlap.
### Orchestration and the cloud
Say you need to convert a file type that's regularly uploaded to your server by your users.
The manual solution would be to check a directory for uploaded content every morning, open the file, and then save it in a different format. This solution is slow, inefficient, and probably could happen only once every 24 hours because you're a busy person.
**[Read next: [How to explain orchestration][2]]**
You could automate the task. Were you to do that, you might write a PHP or a Node.js script to detect when a file has been uploaded. The script would perform the conversion and send an alert or make a log entry to confirm the conversion was successful. You could improve the script over time to allow users to interact with the upload and conversion process.
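The same idea, sketched in shell rather than PHP or Node.js (assuming the inotify-tools and ImageMagick packages are installed; the directory name is hypothetical):

```
#!/usr/bin/env bash
# Watch an upload directory and convert each newly written PNG to JPEG.
WATCH_DIR="/srv/uploads"   # hypothetical upload directory

inotifywait -m -e close_write --format '%w%f' "$WATCH_DIR" |
while read -r file; do
    case "$file" in
        *.png)
            # Convert, then make a log entry to confirm success
            convert "$file" "${file%.png}.jpg" && logger "converted $file"
            ;;
    esac
done
```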
Were you to orchestrate the process, you might instead start with an application. Your custom app would be designed to accept and convert files. You might run the application in a container on your cloud, and using OpenShift, you could launch additional instances of your app when the traffic or workload increases beyond a certain threshold.
### Learning automation and orchestration
There isn't just one discipline for automation or orchestration. These are broad practices that are applied to many different tasks across many different industries. The first step to learning, though, is to become proficient with the technology you're meant to orchestrate and automate. It's difficult to (safely) orchestrate the scaling of a series of web servers if you don't understand how a web server works, or what ports need to be open or closed, or what a port is. In practice, you may not be the person opening ports or configuring the server; you could be tasked with administering OpenShift without really knowing or caring what's inside a container. But basic concepts are important because they broadly apply to usability, troubleshooting, and security.
You also need to get familiar with the most common tools of the orchestration and automation world. Learn some [Bash][3], start using [Git][4] and design some [Git hooks][5], learn some Python, get comfortable with [YAML][6] and [Ansible][7], and try out Minikube, [OKD][8], and [OpenShift][9].
Orchestration and automation are important skills, both to make your work more efficient and as something to bring to your team. Invest in it today, and get twice as much done tomorrow.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/11/orchestration-vs-automation
Author: [Seth Kenlon][a]
Selected by: [lujun9972][b]
Translator: [Translator ID](https://github.com/译者ID)
Proofreader: [Proofreader ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China)
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/arrows_operation_direction_system_orchestrate.jpg?itok=NUgoZYY1 (doodles of arrows moving in different directions)
[2]: https://enterprisersproject.com/article/2020/8/orchestration-explained-plain-english
[3]: https://www.redhat.com/sysadmin/using-bash-automation
[4]: https://opensource.com/life/16/7/stumbling-git
[5]: https://opensource.com/life/16/8/how-construct-your-own-git-server-part-6
[6]: https://www.redhat.com/sysadmin/understanding-yaml-ansible
[7]: https://opensource.com/downloads/ansible-k8s-cheat-sheet
[8]: https://www.redhat.com/sysadmin/learn-openshift-minishift
[9]: http://openshift.io


@ -1,225 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Use your favorite open source apps on your Mac with MacPorts)
[#]: via: (https://opensource.com/article/20/11/macports)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Use your favorite open source apps on your Mac with MacPorts
======
MacPorts is an easy way to get open source applications and keep them
updated on macOS.
![Coffee and laptop][1]
"Package manager" is a generic name for software to install, upgrade, and uninstall applications. Commands like `dnf` or `apt` on Linux, or `pkg_add` on BSD, or even `pip` on Python and `luarocks` on Lua, make it trivial for users to add new applications to their system. Once you've tried it, you're likely to find it hard to live without, and it's a convenience every operating system ought to include. Not all do, but the open source community tends to ensure the best ideas in computing are propagated across all platforms.
There are several package managers designed just for macOS, and one of the oldest is the [MacPorts][2] project.
### Darwin and MacPorts
When Apple shifted to Unix at the turn of the century, it essentially built a Unix operating system called [Darwin][3]. Shortly thereafter, a group of resourceful hackers promptly began work on a project called OpenDarwin, with the intent of creating an independent branch of Darwin. They hoped that OpenDarwin and Apple developers could work on related codebases, borrowing from each other whenever it was useful. Unfortunately, OpenDarwin didn't gain traction within Apple and it eventually [came to an end][4]. However, the OpenDarwin package manager project, MacPorts, is alive and well and continues to provide great open source software for macOS.
MacOS already comes with a healthy set of default terminal commands, some borrowed from GNU, others from BSD, and still others written especially for Darwin. You can use MacPorts to add new commands and even graphical applications.
### Install MacPorts
Your macOS version dictates which MacPorts installer package you need. So first, get the version of macOS you're currently running:
```
$ sw_vers -productVersion
10.xx.y
```
MacPorts releases for recent macOS versions are available on [macports.org/install.php][5]. You can download an installer from the website, or just copy the link and download using the [curl][6] command:
```
$ curl https://distfiles.macports.org/MacPorts/MacPorts-2.6.3-10.14-Mojave.pkg \
  --output MacPorts-2.6.3-10.14-Mojave.pkg
```
Once you download the installer, you can double-click to install it or install it using a terminal:
```
$ sudo installer -verbose \
  -pkg MacPorts*.pkg \
  -tgt /
```
### Configure MacPorts
Once the package is installed, you must add the relevant paths to your system so that your terminal knows where to find your new MacPorts commands. Add the path to MacPorts, and add its manual pages to your `PATH` environment variable by adding this to `~/.bashrc`:
```
export PATH=/opt/local/bin:/opt/local/sbin:$PATH
export MANPATH=/opt/local/share/man:$MANPATH
```
Load your new environment:
```
$ source ~/.bashrc
```
Run an update so your MacPorts installation has access to the latest versions of software:
```
$ sudo port -v selfupdate
```
### Use MacPorts
Some package managers install pre-built software from a server onto your local system. This is called _binary installation_ because it installs code that's been compiled into an executable binary file. Other package managers, MacPorts among them, pull source code from a server, compile it into a binary executable on your computer, and install it into the correct directories. The end result is the same: you have the software you want.
The way they get there is different.
There are advantages to both methods. A binary install is quicker because the only transaction required is copying files from a server onto your computer. This is something [Homebrew][7] does with its "bottles," but there are sometimes issues with [non-relocatable][8] builds. Installing from source code means it's easy for you to modify how software is built and where it gets installed.
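Because MacPorts compiles from source, you can inspect and choose per-port build options, called _variants_, before installing. A quick sketch (`+<variant>` is a placeholder; run the first command to see what a given port actually offers):

```
# List the build options (variants) a port offers
$ port variants parallel

# Enable a variant at install time, substituting a real name from the list
$ sudo port install parallel +<variant>
```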
MacPorts provides the **port** command and calls its packages **ports** (terminology inherited from projects like NetBSD's [Pkgsrc][9] and FreeBSD's ports system). The typical MacPorts workflow is to search for an application and then install it.
#### Search for an application
If you know the specific command or application you need to install, search for it to ensure it's in the MacPorts tree:
```
$ sudo port search parallel
```
By default, `port` searches both the names and descriptions of packages. You can search on just the name field by adding the `--name` option:
```
$ sudo port search --name parallel
```
You can make your searches "fuzzy" with common shell wildcards. For instance, to search for `parallel` only at the start of a name field:
```
$ sudo port search --name --glob "parallel*"
```
#### List all ports
If you don't know what you're searching for and you want to see all the packages (or "ports" in MacPorts and BSD terminology) available, use the `list` subcommand:
```
$ sudo port list
```
The list is long but complete. You can, of course, redirect the output into a text file for reference or pipe it to `more` or `less` for closer examination:
```
$ sudo port list > all-ports.txt
$ sudo port list | less
```
#### Get information about a package
You can get all the important details about a package with the `info` subcommand:
```
$ sudo port info parallel
parallel @20200922 (sysutils)
Description:          Build and execute shell command lines from standard input in parallel
Homepage:             https://www.gnu.org/software/parallel/
Library Dependencies: perl5
Platforms:            darwin
License:              GPL-3+
Maintainers:          Email: [example@example.com][10]
```
This displays important metadata about each application, including a brief description of what it is and the project homepage, in case you need more information. It also lists dependencies, which are _other_ ports that must be on your system for a package to run correctly. Dependencies are resolved automatically by MacPorts, meaning that if you install, for example, the `parallel` package, MacPorts also installs `perl5` if it's not already on your system. Finally, it provides the license and port maintainer.
#### Install a package
When you're ready to install a package, use the `install` subcommand:
```
$ sudo port install parallel
```
It can take some time to compile the code depending on your CPU, the size of the code base, and the number of packages being installed, so be patient. It'll be worth it.
Once the installation is done, the new application is available immediately:
```
$ parallel echo ::: "hello" "world"
hello
world
```
Applications installed by MacPorts are placed into `/opt/local`.
#### View what is installed
Once a package has been installed on your system, you can see exactly what it placed on your drive using the `contents` subcommand:
```
$ sudo port contents parallel
/opt/local/bin/parallel
[...]
```
#### Clean up
Installing a package with MacPorts often leaves build files in your ports tree. These files are useful for debugging a failed install, but normally you don't need to keep them lying around. Purge these files from your system with the `port clean` command:
```
$ port clean parallel
```
#### Uninstall packages
Uninstall a package with the `port uninstall` command:
```
$ port uninstall parallel
```
### Open source package management
The MacPorts project is a remnant of an early movement to build upon the open source work that served as macOS's foundation. While that effort failed, there have been efforts to revive it as a project called [PureDarwin][11]. The push to open more of Apple's code is important work, and the byproducts of this effort are beneficial to everyone running macOS. If you're looking for an easy way to get open source applications on your Mac and a reliable way to keep them up to date, install and use MacPorts.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/11/macports
Author: [Seth Kenlon][a]
Selected by: [lujun9972][b]
Translator: [Translator ID](https://github.com/译者ID)
Proofreader: [Proofreader ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China)
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_cafe_brew_laptop_desktop.jpg?itok=G-n1o1-o (Coffee and laptop)
[2]: http://macports.org
[3]: https://en.wikipedia.org/wiki/Darwin_%28operating_system%29
[4]: https://web.archive.org/web/20070111155348/opendarwin.org/en/news/shutdown.html
[5]: https://www.macports.org/install.php
[6]: https://opensource.com/article/20/5/curl-cheat-sheet
[7]: https://opensource.com/article/20/6/homebrew-linux
[8]: https://discourse.brew.sh/t/why-do-bottles-need-to-be-in-home-linuxbrew-linuxbrew/4346/3
[9]: https://opensource.com/article/19/11/pkgsrc-netbsd-linux
[10]: mailto:example@example.com
[11]: http://www.puredarwin.org/


@ -1,99 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (My top 7 Rust commands for using Cargo)
[#]: via: (https://opensource.com/article/20/11/commands-rusts-cargo)
[#]: author: (Mike Bursell https://opensource.com/users/mikecamel)
My top 7 Rust commands for using Cargo
======
Spend some time investigating Rust's package manager, Cargo.
![Person drinking a hot drink at the computer][1]
I've been using Rust for a little over six months now. I'm far from an expert, but I have stumbled across many, many gotchas and learned many, many things along the way; things that I hope will be of use to those who are learning what is easily my favourite programming language.
This is the third article in my miniseries for Rust newbs like me. You can find my other excursions into Rust in:
* [My top 7 keywords in Rust][2]
* [My top 7 functions in Rust][3]
I plan to write more, and this article is about Rust's package manager, [Cargo][4]. I'm ashamed to admit that I don't use Cargo's power as widely as I should, but researching this article gave me a better view of its commands' capabilities. In fact, I wasn't even aware of some of the options available until I started looking in more detail.
For this list of my top seven Cargo commands, I'll assume you have basic familiarity with Cargo—that you have it installed, and you can create a package using `cargo new <package>`, for instance. I could have provided more commands (there are many options!), but here are my "lucky 7."
1. **cargo help <command>:** You can always find out more about a command with the `--help` option. The same goes for Cargo itself: `cargo --help` will give you a quick intro to what's out there. To get more information on a command (more like a man page), use `cargo help` followed by the command. For instance, `cargo help new` will give extended information about `cargo new`. This behaviour is pretty typical for command-line tools, particularly in the Linux/Unix world, but it's very expressively implemented for Cargo, and you can gain lots of quick information with it.
2. **cargo build --bin <target>:** What happens when you have multiple .rs files in your package, but you want to build just one of them? I have a package called `test-and-try` that I use for, well, testing and trying functionality, features, commands, and crates. It has around a dozen different files in it. By default, `cargo build` will try to build _all_ of them, and as they're often in various states of repair (some of them generating lots of warnings, some of them not even fully compiling), this can be a real pain. Instead, I place a section in my `Cargo.toml` file for each one like this:
    ```
    [[bin]]
    name = "warp-body"
    path = "src/warp-body.rs"
    ```
    I can then use `cargo build --bin warp-body` to build _just_ this file (and any dependencies). I can then run it with a similar command: `cargo run --bin warp-body`.
3. **cargo test:** I have an admission; I am not as assiduous about creating automatic tests in my Rust code as I ought to be. This is because I'm currently mainly writing proof of concept rather than production code, and also because I'm lazy. Maybe changing this behaviour should be a New Year's resolution, but when I _do_ get round to writing tests, Cargo is there to help me (as it is for you). All you need to do is add a line before the test code in your .rs file: `#[cfg(test)]` When you run `cargo test`, Cargo will "automagically" find these tests, run them, and tell you if you have problems. As with many of the commands here, you'll find much more information online, but it's particularly worth familiarising yourself with the basics of this capability in the relevant [Rust By Example section][5].
4. **cargo search <query>:** This is one of the commands that I didn't even know existed until I started researching this article—and which would have saved me so much time over the past few months if I'd known about it. It searches [Crates.io][6], Rust's repository of public (and _sometimes_ maintained) packages and tells you which ones may be relevant. (You can specify a different repository if you want, with the intuitively named `--registry` option.) I've recently been doing some work on network protocols for non-String data, so I've been working with Concise Binary Object Representation ([CBOR][7]). Here's what happens if I use `cargo search`:
![Cargo search output][8]
(Mike Bursell, [CC BY-SA 4.0][9])
This is great! I can, of course, also combine this command with tools like grep to narrow down the search yet further, like so: `cargo search cbor --limit 70 | grep serde`.
5. **cargo tree:** Spoiler alert: this one may scare you. You've probably noticed that when you first build a new package, or when you add a new dependency, or just do a `cargo clean` and then `cargo build`, you see a long list of crates printed out as Cargo pulls them down from the relevant repositories and compiles them. How can you tell ahead of time, however, what will be pulled down and what version it will be? More importantly, how can you know what other dependencies a new crate has pulled into your build? The answer is `cargo tree`. Just to warn you: For any marginally complex project, you can expect to have a _lot_ of dependencies. I tried `cargo tree | wc -l` to count the number of dependent crates for a smallish project I'm working on and got an answer of 350! I tried providing an example, but it didn't display well, so I recommend that you try it yourself—be prepared for lots of output!
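If the full tree is overwhelming, newer versions of Cargo can trim the output; check `cargo tree --help` first, since flag availability depends on your Cargo version:

```
$ cargo tree | wc -l       # count dependent crates, as described above
$ cargo tree --depth 1     # show only direct dependencies
$ cargo tree --duplicates  # show crates pulled in at multiple versions
```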
6. **cargo clippy:** If you try running this and it doesn't work, that's because I cheated a little with these last two commands: you may have to install them explicitly (depending on your setup). For this one, run `cargo install clippy`—you'll be glad you did. Clippy is Rust's linter; it goes through your code, looking at ways to reduce and declutter it by removing or changing commands. I try to run `cargo clippy` before every `git commit`—partly because the Git repositories I tend to commit to have automatic actions to reject files that need linting, and partly to keep my code generally more tidy. Here's an example:
![Cargo clippy output][10]
(Mike Bursell, [CC BY-SA 4.0][9])
Let's face it; this isn't a major issue (though clippy will find errors, too, if you run it on non-compiling code), but it's an easy fix, so you might as well deal with it—either by removing the code or prefixing the variable with an underscore. As I plan to use this variable later but haven't yet implemented the function to consume it, I will perform the latter fix.
7. **cargo readme:** While it's not the most earth-shattering of commands, this is another that is very useful (and that, as with `cargo clippy`, you may need to install explicitly). If you add the relevant lines to your .rs files, you can output README files from Cargo. For instance, I have the following lines at the beginning of my main.rs file:
![Cargo readme input][11]
(Mike Bursell, [CC BY-SA 4.0][9])
I'll leave the `cargo readme` command's output as an exercise for the reader, but it's interesting to me that the Licence (or "License," if you must) declaration is added. Use this to create simple documentation for your users and make them happy with minimal effort (always a good approach!).
I've just scratched the surface of Cargo's capabilities in this article; all the commands above are actually way more powerful than I described. I heartily recommend that you spend some time investigating Cargo and finding out how it can make your life better.
* * *
_This article was originally published on [Alice, Eve, and Bob][12] and is reprinted with the author's permission._
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/11/commands-rusts-cargo
Author: [Mike Bursell][a]
Selected by: [lujun9972][b]
Translator: [Translator ID](https://github.com/译者ID)
Proofreader: [Proofreader ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China)
[a]: https://opensource.com/users/mikecamel
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_tea_laptop_computer_work_desk.png?itok=D5yMx_Dr (Person drinking a hot drink at the computer)
[2]: https://opensource.com/article/20/10/keywords-rust
[3]: https://opensource.com/article/20/10/rust-functions
[4]: https://doc.rust-lang.org/cargo/
[5]: https://doc.rust-lang.org/stable/rust-by-example/testing/unit_testing.html
[6]: https://crates.io/
[7]: https://cbor.io/
[8]: https://opensource.com/sites/default/files/uploads/cargo-search-output-5.png (Cargo search output)
[9]: https://creativecommons.org/licenses/by-sa/4.0/
[10]: https://opensource.com/sites/default/files/uploads/cargo-clippy-output-1.png (Cargo clippy output)
[11]: https://opensource.com/sites/default/files/uploads/cargo-readme-input.png (Cargo readme input)
[12]: https://aliceevebob.com/2020/11/03/my-top-7-cargo-rust-commands/


@ -1,134 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cloud control vs local control: What to choose for your home automation)
[#]: via: (https://opensource.com/article/20/11/cloud-vs-local-home-automation)
[#]: author: (Steve Ovens https://opensource.com/users/stratusss)
Cloud control vs local control: What to choose for your home automation
======
Cloud may be more convenient, but local control gives you more privacy
and other options in your Home Assistant ecosystem.
![clouds in windows][1]
There are a lot of factors to consider when investing in a home automation ecosystem. In my first article in this series, I explained [why I picked Home Assistant][2], and in this article, I'll explain some of the foundational issues and technologies in home automation, which may influence how you approach and configure your Internet of Things (IoT) devices.
### Cloud connectivity
Most devices you can buy today are tied to some type of cloud service. While the cloud brings a certain level of convenience, it also opens a host of problems. For starters, there are privacy issues related to a company having access to your personal habits—when you are home, what shows you watch, what time you go to bed, etc. Although most people are not as concerned about these issues as I am, privacy should still be a consideration, even if it is a small one.
Cloud access also creates issues around being reliant on something outside your control. In 2019, Sonos came under fire for [remotely bricking][3] older smart speakers. Speakers usually continue to work for years after their warranty ends; in fact, they usually function until they physically break. There's also the case of Automatic, which produced a cloud-based car tracker. When it announced in May 2020 that it would be [shutting down][4] its services, it advised customers to "please discard your adapter by following standard electronic recycling procedures."
Being dependent on a third-party provider for critical functionality can come back to bite you. [IFTTT][5], a popular service for programming events based on external conditions, recently altered its free plan's [terms and conditions][6] to severely limit the number of events you can create—from an unlimited number to three. This is even though IFTTT charges device manufacturers for certification with its system, which allows products like [Meross smart bulbs][7] to proudly display their compatibility with IFTTT.
![Meross screenshot from Amazon][8]
(Amazon screenshot by Steve Ovens, [CC BY-SA 4.0][9])
Some of these decisions are purely financial, but there are more than a few anecdotal cases where a company blocks a person's access to a device they purchased simply because they [did not like what the user said][10] about them. How crazy is that?
Another consideration with cloud connectivity is a device's responsiveness if its signals must travel from your home to a cloud server (which may be halfway around the world) and then back to the device. This can lead to a two-second (or more) delay on any action. For some people, this is not a deal-breaker. For others, that delay is unbearable.
Finally, what happens if there is an internet outage? While most modern home internet connections are quite reliable, they do happen. [Some large][11], well-known cloud [service providers][12] have experienced outages this year. Are you OK trading convenience for possibly having your automations break and losing control of your smart devices for periods of time?
### Local control
There are several ways you can regain control over your smart devices. Commercially, you could try something like [Hubitat][13], which is a proprietary platform that emphasizes local control. I have no experience with these devices, as I don't like to rely on an intermediary.
In my home, I standardized on WiFi (although I may branch out to [Zigbee][14] in the future) and [Home Assistant][15]. Using WiFi means I need to buy or make my devices based on their compatibility with alternative open source firmware, such as [Tasmota][16] or [ESPHome][17]. I admit that neither of these options is "plug-and-play friendly" unless you buy devices from sources like [Shelly][18], which is very friendly to the community, or [CloudFree][19], which has Tasmota installed by default.
(As a small aside, I have both flashed my own devices and purchased them from CloudFree. There are some savings with the DIY approach, but I buy pre-flashed devices for my father's house because this eliminates a lot of hassle.)
I won't go into more detail about alternative firmware, how to flash it, and so on. I simply want to introduce you to the idea that there are options for local control.
### Achieving local control with MQTT
A local control device probably uses either a direct [API][20] call, where Home Assistant talks directly to the device, or Message Queuing Telemetry Transport ([MQTT][21]).
MQTT is one of the most widely used protocols for local IoT communication. I'll share some of the basics, but the Hook Up has an [in-depth video][22] you can watch if you want to learn more, and HiveMQ has an [entire series][23] on MQTT essentials.
MQTT uses three components for communication. The first, the **sender**, is the component that triggers the action. The second, the **broker**, is kind of like a bulletin board where messages are posted. The final component is the **device** that will perform the action. This process is called the _publish-subscribe_ model.
Say you have a button on the wall that you want to use to turn on the projector, lower the blinds, and turn on a fan. The button (sender) posts the _message_ **ON** to a specific section of the broker, called a _topic_. The topic might be something like `/livingroom/POWER`. The fan, the projector, and the blinds _subscribe_ to this topic. When the message **ON** is posted to the topic, all of the devices activate their respective functions, turning on the projector, lowering the blinds, and starting the fan.
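You can watch this publish-subscribe model from a terminal using the Mosquitto command-line clients, assuming the mosquitto-clients package is installed and a broker is reachable on localhost:

```
# Terminal 1: a "device" subscribes to the topic
$ mosquitto_sub -h localhost -t /livingroom/POWER

# Terminal 2: the wall button publishes its message
$ mosquitto_pub -h localhost -t /livingroom/POWER -m "ON"
```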
Unlike a message board, messages have different Quality of Service (QoS) states. The HiveMQ website has a good explanation of the [three QoS levels][24]. In short:
* **QoS 0:** The message is sent to the broker in a fire-and-forget way. No attempt is made to verify that the broker received the message.
![MQTTT QoS 0][25]
(© 2015 [HiveMQ][24], reused with permission)
* **QoS 1:** The message is posted, and the broker replies once the message is received. Multiple messages can be sent before the broker replies. For example, if you are trying to raise the projector's brightness, multiple brightness bars may be inadvertently adjusted before the broker tells the sender to stop publishing messages.
![MQTTT QoS 1][26]
(© 2015 [HiveMQ][24], reused with permission)
* **QoS 2:** This is the slowest but safest level. It guarantees that the message is received only once. Similar to TCP, if a message is lost, the sender will resend the message.
![MQTTT QoS 2][27]
(© 2015 [HiveMQ][24], reused with permission)
In addition, MQTT has a **retain** flag that can be enabled on the messages, but it is not set by default. Going back to the bulletin board analogy, it's like if someone posts a message to a bulletin board, but another person walks up to the board, takes the message down, reads it, and throws it away. If a third person looks at the bulletin board five minutes later, they would have no knowledge of the message. However, if the **retain** flag is set to true, it's like leaving the message pinned on the board until a new message is received. This means that no matter when people come to read messages, they will all know the latest message.
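With the same command-line clients, retaining a message is a single flag on publish; a subscriber that connects later still receives the pinned message:

```
# Publish with the retain flag set
$ mosquitto_pub -h localhost -t /livingroom/POWER -m "ON" -r

# A client subscribing minutes later still gets "ON"
$ mosquitto_sub -h localhost -t /livingroom/POWER
ON
```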
In home automation terms, whether or not the **retain** flag is set depends completely on the use case.
In this series, I will use Home Assistant's [Mosquitto MQTT broker][28] add-on. Most of my devices use MQTT; however, I do have a couple of non-critical Tuya devices that require a cloud account. I may replace them with locally controllable ones in the future.
### Wrapping up
Home Assistant is a large, wonderful piece of software. It is complex in some areas, and it will help you to be familiar with these fundamental technologies when you need to troubleshoot and coordinate your setup.
In the next article, I will talk about the "big three" wireless protocols that you are likely to encounter in smart devices: Zigbee, Z-Wave, and WiFi. Don't worry—I'm almost done with the underlying theories, and soon I'll get on with installing Home Assistant.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/11/cloud-vs-local-home-automation
Author: [Steve Ovens][a]
Selected by: [lujun9972][b]
Translator: [Translator ID](https://github.com/译者ID)
Proofreader: [Proofreader ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China)
[a]: https://opensource.com/users/stratusss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cloud-windows-building-containers.png?itok=0XvZLZ8k (clouds in windows)
[2]: https://opensource.com/article/20/11/home-assistant
[3]: https://www.bbc.com/news/technology-51768574
[4]: https://www.cnet.com/roadshow/news/automatic-connected-car-service-dead-may-coronavirus/
[5]: https://ifttt.com/
[6]: https://ifttt.com/plans
[7]: https://www.amazon.ca/meross-Dimmable-Equivalent-Compatible-Required/dp/B07WN2J3C7
[8]: https://opensource.com/sites/default/files/uploads/ifttt_add.png (Meross screenshot from Amazon)
[9]: https://creativecommons.org/licenses/by-sa/4.0/
[10]: https://www.techrepublic.com/article/iot-company-bricks-customers-device-after-negative-review/
[11]: https://www.theverge.com/2020/9/28/21492688/microsoft-outlook-office-teams-azure-outage-down
[12]: https://www.cnn.com/2020/08/30/tech/internet-outage-cloudflare/index.html
[13]: https://hubitat.com/
[14]: https://zigbeealliance.org/
[15]: https://www.home-assistant.io/
[16]: https://tasmota.github.io/docs/
[17]: https://esphome.io/
[18]: https://shelly.cloud/
[19]: https://cloudfree.shop/
[20]: https://en.wikipedia.org/wiki/API
[21]: https://en.wikipedia.org/wiki/MQTT
[22]: https://www.youtube.com/watch?v=NjKK5ab0-Kk
[23]: https://www.hivemq.com/tags/mqtt-essentials/
[24]: https://www.hivemq.com/blog/mqtt-essentials-part-6-mqtt-quality-of-service-levels/
[25]: https://opensource.com/sites/default/files/uploads/ha-config8-qos0.png (MQTTT QoS 0)
[26]: https://opensource.com/sites/default/files/uploads/ha-config8-qos1.png (MQTTT QoS 1)
[27]: https://opensource.com/sites/default/files/uploads/ha-config9-qos2.png (MQTTT QoS 2)
[28]: https://mosquitto.org/


@ -1,224 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Automate your tasks with this Ansible cheat sheet)
[#]: via: (https://opensource.com/article/20/11/ansible-cheat-sheet)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Automate your tasks with this Ansible cheat sheet
======
Start automating your repetitive tasks by getting to know Ansible's
modules, YAML structure, and more.
![Cheat Sheet cover image][1]
Ansible is one of the primary tools in the world of [automation and orchestration][2] because of its broad usefulness and flexibility. However, those same traits are the very reason it can be difficult to get started with [Ansible][3]. It isn't a graphical application, and yet it also isn't a scripting or programming language. But like a programming language, the answer to the common question of "what can I do with it?" is "everything," which makes it difficult to know where to begin doing _anything_.
Here's how I view Ansible: It's an "engine" that uses other people's modules to accomplish complex tasks you describe in a special "pseudo-code" text format called YAML. This means you need to have three things to get started with Ansible:
1. Ansible
2. A repetitive task you want to automate
3. A basic understanding of YAML
This article aims to help you get started with these three things.
### Install Ansible
Part of Ansible's widespread popularity can be attributed to how it lets you (the user) completely ignore what operating system (OS) you're targeting. Generally, you don't have to think about whether your Ansible task will be executed on Linux, macOS, Windows, or BSD. Ansible takes care of the messy platform-specific bits for you.
However, to _run_ Ansible, you do need to have Ansible installed somewhere. The computer where Ansible is installed is called the _control node_. Any computer that Ansible targets is called a _host_.
Only the control node needs to have Ansible installed.
If you're on Linux, you can install Ansible from your software repository with your package manager.
As yet, Windows is unable to serve as an Ansible control node, although the more progress it makes toward [POSIX][4], the better things look for it, so keep a close watch on Microsoft's [Windows Subsystem for Linux (WSL)][5] product.
On macOS, you can use a third-party package manager like [Homebrew][6] or [MacPorts][7].
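As a quick sketch, the install commands look something like this (package names and managers vary by system):

```
# Linux, from your distribution's repository
$ sudo dnf install ansible    # Fedora family
$ sudo apt install ansible    # Debian family

# macOS, via a third-party package manager
$ brew install ansible
$ sudo port install ansible

# Anywhere Python is available
$ python3 -m pip install --user ansible
```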
### Ansible modules
Ansible is just an engine. The parts that do 90% of the work are [Ansible modules][8]. These modules are programmed by lots of different people all over the world. Some have become so popular that the Ansible team adopts them and helps maintain them.
As a user, much of your interaction with Ansible is directed to its modules. Choosing a module is like choosing an app on your phone or computer: you have a task you want done, so you look for an Ansible module that claims to assist.
Most modules are tied to specific applications. For instance, the [file][9] module helps create and manage files. The [authorized_key][10] module helps manage SSH keys, [Database][11] modules help control and manipulate databases, and so on.
Part of deciding on a task to offload onto Ansible is finding the module that will help you accomplish it. Ansible plays run _tasks_, and tasks consist of Ansible keywords or Ansible modules.
### YAML and Ansible
The YAML text format is a highly structured way to feed instructions to an application, making it almost a form of code. Like a programming language, you must write YAML according to a specific set of syntax rules. A YAML file intended for Ansible is called a _playbook_, and it consists of one or more Ansible _plays_.
An Ansible play, like YAML, has a very limited structure. There are two kinds of instructions: a _sequence_ and a _mapping_. An Ansible play, as with YAML, always starts with 3 dashes (`---`).
#### Sequences
A _sequence_ element is a list. For example, here's a list of penguin species in YAML:
```
---
- Emperor
- Gentoo
- Yellow-eyed
```
#### Mapping
A _mapping_ element consists of two parts: a key and a value. A _key_ in Ansible is usually a keyword defined by an Ansible module, and the value may be a Boolean (`true` or `false`), one of a set of parameters defined by the module, or something arbitrary such as a variable, depending on what's being set.
Here's a simple mapping in YAML:
```
---
- Name: "A list of penguin species"
```
#### Sequences and mapping
These two data types aren't mutually exclusive.
You can put a sequence into a mapping. In such a case, the sequence is a value for a mapping's key. When placing a sequence into a mapping, you indent the sequence so that it is a "descendant" (or "child") of its key:
```
---
- Penguins:
  - Emperor
  - Gentoo
  - Yellow-eyed
```
You can also place mappings in a sequence:
```
---
- Penguin: Emperor
- Mammal: Gnu
- Planar: Demon
```
Those are all the rules you need to be familiar with to write valid YAML.
### Write an Ansible play
For Ansible plays, whether you use a sequence or a mapping (or a mapping in a sequence, or a sequence in a mapping) is dictated by Ansible or the Ansible module you're using. The "language" of Ansible mostly speaks to configuration options to help you determine how and where your play will run. A quick reference to all Ansible keywords is available in the [Ansible playbook documentation][12].
From the list of keywords, you can create an opening for your play. You start with three dashes because that's how a YAML file always starts. Then you give your play a name in a mapping block. You must also define what hosts (computers) you want the play to run on, and how Ansible is meant to reach the computer.
For this example, I set the host to `localhost`, so the play runs only on _this_ computer, and the connection type to `local` (the default is `ssh`):
```
---
- name: "My first Ansible play"
  hosts: localhost
  connection: local
```
Most of the YAML you'll write in a play is probably configuration options for a specific Ansible module. To find out what instructions a module expects from your Ansible play, refer to that module's documentation. [Modules maintained by Ansible][8] are documented on Ansible's website.
For this example, I'll use the debug module.
![Documentation for Ansible debugger module][13]
On [debug's documentation page][14], three parameters are listed:
* `msg` is an optional string to print to the terminal.
* `var` is an optional variable, interpreted as a string. This is mutually exclusive with `msg`, so you can use one or the other—not both.
* `verbosity` is an integer you can use to control how verbose this debugger is. Its default is 0, so there is no threshold to pass.
It's a simple module, but the thing to look for is the YAML data type of each parameter. Can you determine from my description whether these parameters are a sequence (a list) or a mapping (a key and value pair)? Knowing what kind of YAML block to use in your play helps you write valid plays.
Here's a simple "hello world" Ansible play:
```
---
- name: "My first Ansible play"
  hosts: localhost
  connection: local
  tasks:
    - name: "Print a greeting"
      debug:
        msg: "Hello world"
```
Notice the `tasks` entry in the play. Its value is a sequence containing exactly one item. That item is a mapping holding `name` (and its value), the module being used by the task, and a `msg` parameter (along with its value). These are all part of the task's mapping, so they're indented to show inheritance.
You can test this Ansible play by using the `ansible-playbook` command with the `--check` option:
```
$ ansible-playbook --check hello.yaml
PLAY [My first Ansible play] *************************
TASK [Gathering Facts] *******************************
ok: [localhost]
TASK [Print a greeting] ******************************
ok: [localhost] => {
    "msg": "Hello world"
}
PLAY RECAP *******************************************
localhost: ok=2  changed=0  unreachable=0  failed=0
```
It's verbose, but you can debug the message in your "Print a greeting" task, right where you put it.
### Testing modules
Using a new Ansible module is like trying out a new Linux command. You read its documentation, study its syntax, and then try some tests.
There are at least two other modules you could use to write a "hello world" play: [assert][15] and [meta][16]. Try reading through the documentation for these modules, and see if you can create a simple test play based on what you learned above.
For further examples of how modules are used to get work done, visit [Ansible Galaxy][17], an open source repository of community-contributed plays.
### For a quick reference of important Ansible commands, download our [Ansible cheat sheet][18].
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/11/ansible-cheat-sheet
Author: [Seth Kenlon][a]
Selected by: [lujun9972][b]
Translator: [Translator ID](https://github.com/译者ID)
Proofreader: [Proofreader ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China)
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coverimage_cheat_sheet.png?itok=lYkNKieP (Cheat Sheet cover image)
[2]: https://opensource.com/article/20/11/orchestration-vs-automation
[3]: https://opensource.com/resources/what-ansible
[4]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[5]: https://docs.microsoft.com/en-us/windows/wsl/install-win10
[6]: https://opensource.com/article/20/6/homebrew-mac
[7]: https://opensource.com/article/20/11/macports
[8]: https://docs.ansible.com/ansible/2.8/modules/modules_by_category.html
[9]: https://docs.ansible.com/ansible/2.8/modules/file_module.html#file-module
[10]: https://docs.ansible.com/ansible/2.8/modules/authorized_key_module.html#authorized-key-module
[11]: https://docs.ansible.com/ansible/2.8/modules/list_of_database_modules.html
[12]: https://docs.ansible.com/ansible/latest/reference_appendices/playbooks_keywords.html
[13]: https://opensource.com/sites/default/files/screenshot_from_2020-11-13_20-44-15.png (Documentation for Ansible debugger module)
[14]: https://docs.ansible.com/ansible/2.8/modules/debug_module.html
[15]: https://docs.ansible.com/ansible/2.8/modules/assert_module.html
[16]: https://docs.ansible.com/ansible/2.8/modules/meta_module.html
[17]: https://galaxy.ansible.com/
[18]: https://opensource.com/downloads/ansible-cheat-sheet


@ -1,233 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A beginner's guide to Kubernetes Jobs and CronJobs)
[#]: via: (https://opensource.com/article/20/11/kubernetes-jobs-cronjobs)
[#]: author: (Mike Calizo https://opensource.com/users/mcalizo)
A beginner's guide to Kubernetes Jobs and CronJobs
======
Use Jobs and CronJobs to control and manage Kubernetes pods and
containers.
![Ships at sea on the web][1]
[Kubernetes][2] is the default orchestration engine for containers. Its options for controlling and managing pods and containers include:
1. Deployments
2. StatefulSets
3. ReplicaSets
Each of these features has its own purpose, with the common function to ensure that pods run continuously. In failure scenarios, these controllers either restart or reschedule pods to ensure the services in the pods continue running.
As the [Kubernetes documentation explains][3], a Kubernetes Job creates one or more pods and ensures that a specified number of the pods terminates when the task (Job) completes.
Just like in a typical operating system, the ability to perform automated, scheduled jobs without user interaction is important in the Kubernetes world. But Kubernetes Jobs do more than just run automated jobs, and there are multiple ways to utilize them through:
1. Jobs
2. CronJobs
3. Work queues (this is beyond the scope of this article)
Sounds simple, right? Well, maybe. Anyone who works on containers and microservice applications knows that some require services to be transient so that they can do specific tasks for applications or within the Kubernetes clusters.
In this article, I will go into why Kubernetes Jobs are important, how to create Jobs and CronJobs, and when to use them for applications running on the Kubernetes cluster.
### Differences between Kubernetes Jobs and CronJobs
Kubernetes Jobs are used to create transient pods that perform specific tasks they are assigned to. [CronJobs][4] do the same thing, but they run tasks based on a defined schedule.
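CronJob schedules use the familiar *nix cron syntax; as a quick reference (the second schedule below is just an illustration):

```
# Fields: minute  hour  day-of-month  month  day-of-week
# "*/1 * * * *"  = every minute (used in the example later in this article)
# "0 3 * * 1"    = 03:00 every Monday
```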
Jobs play an important role in Kubernetes, especially for running batch processes or important ad-hoc operations. Jobs differ from other Kubernetes controllers in that they run tasks until completion, rather than managing the desired state such as in Deployments, ReplicaSets, and StatefulSets.
### How to create Kubernetes Jobs and CronJobs
With that background in hand, you can start creating Jobs and CronJobs.
#### Prerequisites
To do this exercise, you need to have the following:
1. A working Kubernetes cluster; you can install it with either:
* [CentOS 8][5]
* [Minikube][6]
2. The [kubectl][7] Kubernetes command line
Here is the Minikube deployment I used for this demonstration:
```
$ minikube version
minikube version: v1.8.1
$ kubectl cluster-info
Kubernetes master is running at https://172.17.0.59:8443
KubeDNS is running at https://172.17.0.59:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
$ kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
minikube   Ready    master   88s   v1.17.3
```
#### Kubernetes Jobs
Just like anything else in the Kubernetes world, you can create Kubernetes Jobs with a definition file. Create a file called `sample-jobs.yaml` using your favorite editor.
Here is a snippet of the file that you can use to create an example Kubernetes Job:
```
apiVersion: batch/v1          ## The version of the Kubernetes API
kind: Job                     ## The type of object for jobs
metadata:
 name: job-test
spec:                        ## What state you desire for the object
 template:
   metadata:
     name: job-test
   spec:
     containers:
     - name: job
       image: busybox                  ##  Image used
       command: ["echo", "job-test"]   ##  Command used to create logs for verification later
     restartPolicy: OnFailure          ##  Restart Policy in case container failed
```
Next, apply the Jobs in the cluster:
```
$ kubectl apply -f sample-jobs.yaml
```
Wait a few minutes for the pods to be created. You can view the pod creation's status:
```
$ kubectl get pods --watch
```
After a few seconds, you should see your pod created successfully:
```
$ kubectl get pods
NAME       READY   STATUS      RESTARTS   AGE
job-test   0/1     Completed   0          11s
```
Once the pods are created, verify the Job's logs:
```
$ kubectl logs job-test
```
You have created your first Kubernetes Job, and you can explore details about it:
```
$ kubectl describe job job-test
```
Clean up the Jobs:
```
$ kubectl delete jobs job-test
```
#### Kubernetes CronJobs
You can use CronJobs for cluster tasks that need to be executed on a predefined schedule. As the [documentation explains][8], they are useful for periodic and recurring tasks, like running backups, sending emails, or scheduling individual tasks for a specific time, such as when your cluster is likely to be idle.
As with Jobs, you can create CronJobs via a definition file. Following is a snippet of the CronJob file `cron-test.yaml`. Use this file to create an example CronJob:
```
apiVersion: batch/v1beta1            ## The version of the Kubernetes API
kind: CronJob                        ## The type of object for Cron jobs
metadata:
  name: cron-test
spec:
  schedule: "*/1 * * * *"            ## Defined schedule using the *nix style cron syntax
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: cron-test
            image: busybox            ## Image used
            args:
            - /bin/sh
            - -c
            - date; echo Hello this is Cron test
          restartPolicy: OnFailure    ##  Restart Policy in case container failed
```
Apply the CronJob to your cluster:
```
$ kubectl apply -f cron-test.yaml
 cronjob.batch/cron-test created
```
Verify that the CronJob was created with the schedule in the definition file:
```
$ kubectl get cronjob cron-test
 NAME        SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
 cron-test   */1 * * * *   False     0        <none>          10s
```
After a few seconds, you can find the pods that the last scheduled job created and view the standard output of one of the pods:
```
$ kubectl logs cron-test-1604870760
  Sun Nov  8 21:26:09 UTC 2020
  Hello this is Cron test
```
You have created a Kubernetes CronJob that creates a Job roughly once per interval of the schedule `*/1 * * * *` (that is, every minute). A run can occasionally be missed because of environmental issues in the cluster, so the jobs need to be [idempotent][9].
### Other things to know
Unlike deployments and services in Kubernetes, you can't change the same Job configuration file and reapply it at once. When you make changes in the Job configuration file, you must delete the previous Job from the cluster before you apply it.
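In practice, the change-and-rerun cycle looks something like this:

```
# Jobs are effectively immutable; delete the old one before reapplying
$ kubectl delete job job-test
$ kubectl apply -f sample-jobs.yaml
```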
Generally, creating a Job creates a single pod and performs the given task, as in the example above. But by using completions and [parallelism][10], you can initiate several pods, one after the other.
### Use your Jobs
You can use Kubernetes Jobs and CronJobs to manage your containerized applications. Jobs are important in Kubernetes application deployment patterns where you need a communication mechanism along with interactions between pods and the platforms. This may include cases where an application needs a "controller" or a "watcher" to complete tasks or needs to be scheduled to run periodically.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/11/kubernetes-jobs-cronjobs
Author: [Mike Calizo][a]
Selected by: [lujun9972][b]
Translator: [Translator ID](https://github.com/译者ID)
Proofreader: [Proofreader ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China)
[a]: https://opensource.com/users/mcalizo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/kubernetes_containers_ship_lead.png?itok=9EUnSwci (Ships at sea on the web)
[2]: https://kubernetes.io/
[3]: https://kubernetes.io/docs/concepts/workloads/controllers/job/
[4]: https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/
[5]: https://phoenixnap.com/kb/how-to-install-kubernetes-on-centos
[6]: https://minikube.sigs.k8s.io/docs/start/
[7]: https://kubernetes.io/docs/reference/kubectl/kubectl/
[8]: https://v1-18.docs.kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/
[9]: https://en.wikipedia.org/wiki/Idempotence
[10]: https://kubernetes.io/docs/concepts/workloads/controllers/job/#parallel-jobs


@ -1,638 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Create a machine learning model with Bash)
[#]: via: (https://opensource.com/article/20/11/machine-learning-bash)
[#]: author: (Girish Managoli https://opensource.com/users/gammay)
Create a machine learning model with Bash
======
Bash, Tcsh, or Zsh can help you get ready for machine learning.
![bash logo on green background][1]
[Machine learning][2] is a powerful computing capability for predicting or forecasting things that conventional algorithms find challenging. The machine learning journey begins with collecting and preparing data—a _lot_ of it—then it builds mathematical models based on that data. While multiple tools can be used for these tasks, I like to use the [shell][3].
A shell is an interface for performing operations using a defined language. This language can be invoked interactively or scripted. The concept of the shell was introduced in [Unix][4] operating systems in the 1970s. Some of the most popular shells include [Bash][5], [tcsh][6], and [Zsh][7]. They are available for all operating systems, including Linux, macOS, and Windows, which gives them high portability. For this exercise, I'll use Bash.
This article is an introduction to using a shell for data collection and data preparation. Whether you are a data scientist looking for efficient tools or a shell expert looking at using your skills for machine learning, I hope you will find valuable information here.
The example problem in this article is creating a machine learning model to forecast temperatures for US states. It uses shell commands and scripts to do the following data collection and data preparation steps:
1. Download data
2. Extract the necessary fields
3. Aggregate data
4. Make time series
5. Create the train, test, and validate data sets
You may be asking why you should do this with shell, when you can do all of it in a machine learning programming language such as [Python][8]. This is a good question. If data processing is performed with an easy, friendly, rich technology like a shell, a data scientist focuses only on machine learning modeling and not the details of a language.
### Prerequisites
First, you need to have a shell interpreter installed. If you use Linux or macOS, it will already be installed, and you may already be familiar with it. If you use Windows, try [MinGW][9] or [Cygwin][10].
For more information, see:
* [Bash tutorials][11] here on opensource.com
* The official [shell scripting tutorial][12] by Steve Parker, the creator of Bourne shell
* The [Bash Guide for Beginners][13] by the Linux Documentation Project
* If you need help with a specific command, type `<commandname> --help` in the shell for help; for example: `ls --help`.
### Get started
Now that your shell is set up, you can start preparing data for the machine learning temperature-prediction problem.
#### 1. Download data
The data for this tutorial comes from the US National Oceanic and Atmospheric Administration (NOAA). You will train your model using the last 10 complete years of data. The data source is at <https://www1.ncdc.noaa.gov/pub/data/ghcn/daily/by_year/>, and the data is in .csv format and gzipped.
Download and unzip the data using a [shell script][14]. Use your favorite text editor to create a file named `download.sh` and paste in the code below. The comments in the code explain what the commands do:
```
#!/bin/sh
# This first line is the shebang (hashbang). It identifies the interpreter used to run this file.
# In this case, the script is executed by the shell itself.
# If no shebang is given, a program to execute the script must be specified explicitly.
# With the shebang: ./download.sh;  without it: sh ./download.sh
FROM_YEAR=2010
TO_YEAR=2019
year=$FROM_YEAR
# For each year, one by one, from FROM_YEAR=2010 up to TO_YEAR=2019
while [ $year -le $TO_YEAR ]
do
    # show the year being downloaded now
    echo $year
    # Download
    wget https://www1.ncdc.noaa.gov/pub/data/ghcn/daily/by_year/${year}.csv.gz
    # Unzip
    gzip -d ${year}.csv.gz
    # Move to next year by incrementing
    year=$(($year+1))
done
```
Notes:
* If you are behind a proxy server, consult Mark Grennan's [how-to][15], and use:
```
export http_proxy=http://username:password@proxyhost:port/
export https_proxy=https://username:password@proxyhost:port/
```
* Make sure all standard commands are already in your PATH (such as `/bin` or `/usr/bin`). If not, [set your PATH][16].
* [Wget][17] is a utility for connecting to web servers from the command line. If Wget is not installed on your system, [download it][18].
* Make sure you have [gzip][19], a utility used for compression and decompression.
Run this script to download, extract, and make 10 years' worth of data available as CSVs:
```
$ ./download.sh
2010
--2020-10-30 19:10:47--  https://www1.ncdc.noaa.gov/pub/data/ghcn/daily/by_year/2010.csv.gz
Resolving www1.ncdc.noaa.gov (www1.ncdc.noaa.gov)... 205.167.25.171, 205.167.25.172, 205.167.25.178, ...
Connecting to www1.ncdc.noaa.gov (www1.ncdc.noaa.gov)|205.167.25.171|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 170466817 (163M) [application/gzip]
Saving to: '2010.csv.gz'
     0K .......... .......... .......... .......... ..........  0% 69.4K 39m57s
    50K .......... .......... .......... .......... ..........  0%  202K 26m49s
   100K .......... .......... .......... .......... ..........  0% 1.08M 18m42s
...
```
The [ls][20] command lists the contents of a folder. Use `ls 20*.csv` to list all your files with names beginning with 20 and ending with .csv.
```
$ ls 20*.csv
2010.csv  2011.csv  2012.csv  2013.csv  2014.csv  2015.csv  2016.csv  2017.csv  2018.csv  2019.csv
```
### 2\. Extract average temperatures
Extract the TAVG (average temperature) data from the CSVs for US regions:
**extract_tavg_us.sh**
```
#!/bin/sh
# For each file whose name starts with "20" and ends with ".csv"
for csv_file in `ls 20*.csv`
do
    # Message that says file name $csv_file is extracted to file TAVG_US_$csv_file
    # Example: 2010.csv extracted to TAVG_US_2010.csv
    echo "$csv_file -&gt; TAVG_US_$csv_file"
    # grep "TAVG" $csv_file: Extract lines in file with text "TAVG"
    # |: pipe
    # grep "^US": From those extract lines that begin with text "US"
    # &gt; TAVG_US_$csv_file: Save xtracted lines to file TAVG_US_$csv_file
    grep "TAVG" $csv_file | grep "^US" &gt; TAVG_US_$csv_file
done
```
This script:
```
$ ./extract_tavg_us.sh
2010.csv -> TAVG_US_2010.csv
...
2019.csv -> TAVG_US_2019.csv
```
creates these files:
```
$ ls TAVG_US*.csv
TAVG_US_2010.csv  TAVG_US_2011.csv  TAVG_US_2012.csv  TAVG_US_2013.csv
TAVG_US_2014.csv  TAVG_US_2015.csv  TAVG_US_2016.csv  TAVG_US_2017.csv
TAVG_US_2018.csv  TAVG_US_2019.csv
```
Here are the first few lines for `TAVG_US_2010.csv`:
```
$ head TAVG_US_2010.csv
USR0000AALC,20100101,TAVG,-220,,,U,
USR0000AALP,20100101,TAVG,-9,,,U,
USR0000ABAN,20100101,TAVG,12,,,U,
USR0000ABCA,20100101,TAVG,16,,,U,
USR0000ABCK,20100101,TAVG,-309,,,U,
USR0000ABER,20100101,TAVG,-81,,,U,
USR0000ABEV,20100101,TAVG,-360,,,U,
USR0000ABEN,20100101,TAVG,-224,,,U,
USR0000ABNS,20100101,TAVG,89,,,U,
USR0000ABLA,20100101,TAVG,59,,,U,
```
The [head][21] command is a utility for displaying the first several lines (by default, 10 lines) of a file.
The data has more information than you need. Limit the number of columns by eliminating column 3 (since all the data is average temperature) and column 5 onward. In other words, keep columns 1 (climate station), 2 (date), and 4 (temperature recorded).
**key_columns.sh**
```
#!/bin/sh
# For each file whose name starts with "TAVG_US_" and ends with ".csv"
for csv_file in `ls TAVG_US_*.csv`
do
    echo "Extracting columns $csv_file"
    # cat $csv_file: 'cat' is to con'cat'enate files - here used to show one year's csv file
    # |: pipe
    # cut -d',' -f1,2,4: Cut columns 1,2,4 with the , delimiter
    # > $csv_file.cut: Save to a temporary file
    cat $csv_file | cut -d',' -f1,2,4 > $csv_file.cut
    # mv $csv_file.cut $csv_file: Rename the temporary file back to the original name
    mv $csv_file.cut $csv_file
    # The file is processed and saved back under the same name
    # There are other ways to do this;
    # using an intermediate file is the most reliable method.
done
```
Run the script:
```
$ ./key_columns.sh
Extracting columns TAVG_US_2010.csv
...
Extracting columns TAVG_US_2019.csv
```
The first few lines of `TAVG_US_2010.csv` with the unneeded data removed are:
```
$ head TAVG_US_2010.csv
USR0000AALC,20100101,-220
USR0000AALP,20100101,-9
USR0000ABAN,20100101,12
USR0000ABCA,20100101,16
USR0000ABCK,20100101,-309
USR0000ABER,20100101,-81
USR0000ABEV,20100101,-360
USR0000ABEN,20100101,-224
USR0000ABNS,20100101,89
USR0000ABLA,20100101,59
```
Dates are in string form (YMD). To train your model correctly, your algorithms need to recognize date fields in the comma-separated Y,M,D form (for example, `20100101` becomes `2010,01,01`). You can convert them with the [sed][22] utility.
**date_format.sh**
```
#!/bin/sh
for csv_file in `ls TAVG_*.csv`
do
    echo Date formatting $csv_file
    # This inserts , after year
    sed -i 's/,..../&,/' $csv_file
    # This inserts , after month
    sed -i 's/,....,../&,/' $csv_file
done
```
Run the script:
```
$ ./date_format.sh
Date formatting TAVG_US_2010.csv
...
Date formatting TAVG_US_2019.csv
```
The first few lines of `TAVG_US_2010.csv` with the comma-separated date format are:
```
$ head TAVG_US_2010.csv
USR0000AALC,2010,01,01,-220
USR0000AALP,2010,01,01,-9
USR0000ABAN,2010,01,01,12
USR0000ABCA,2010,01,01,16
USR0000ABCK,2010,01,01,-309
USR0000ABER,2010,01,01,-81
USR0000ABEV,2010,01,01,-360
USR0000ABEN,2010,01,01,-224
USR0000ABNS,2010,01,01,89
USR0000ABLA,2010,01,01,59
```
### 3\. Aggregate states' average temperature data
The weather data comes from climate stations located in US cities, but you want to forecast whole states' temperatures. To convert the climate-station data to state data, first, map climate stations to their states.
Download the list of climate stations using wget:
```
$ wget ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/daily/ghcnd-stations.txt
```
Extract the US stations with the [grep][23] utility to find US listings. The following command searches for lines that begin with the text "US". The `>` is a [redirection][24] that writes output to a file—in this case, to a file named `us_stations.txt`:
```
`$ grep "^US" ghcnd-stations.txt > us_stations.txt`
```
This file was formatted for pretty printing, so the column separators are inconsistent:
```
$ head us_stations.txt
US009052008  43.7333  -96.6333  482.0 SD SIOUX FALLS (ENVIRON. CANADA)
US10RMHS145  40.5268 -105.1113 1569.1 CO RMHS 1.6 SSW
US10adam001  40.5680  -98.5069  598.0 NE JUNIATA 1.5 S
...
```
Make them consistent by using [cat][25] to print the file, using [tr][26] to squeeze repeats and output to a temp file, and renaming the temp file back to the original—all in one line:
```
$ cat us_stations.txt | tr -s ' ' > us_stations.txt.tmp; mv us_stations.txt.tmp us_stations.txt;
```
The first lines of the command's output:
```
$ head us_stations.txt
US009052008 43.7333 -96.6333 482.0 SD SIOUX FALLS (ENVIRON. CANADA)
US10RMHS145 40.5268 -105.1113 1569.1 CO RMHS 1.6 SSW
US10adam001 40.5680 -98.5069 598.0 NE JUNIATA 1.5 S
...
```
This contains a lot of info—GPS coordinates and such—but you only need the station code and state. Use [cut][27]:
```
$ cut -d' ' -f1,5 us_stations.txt > us_stations.txt.tmp; mv us_stations.txt.tmp us_stations.txt;
```
The first lines of the command's output:
```
$ head us_stations.txt
US009052008 SD
US10RMHS145 CO
US10adam001 NE
US10adam002 NE
...
```
Make this a CSV and change the spaces to comma separators using sed:
```
$ sed -i s/' '/,/g us_stations.txt
```
The first lines of the command's output:
```
$ head us_stations.txt
US009052008,SD
US10RMHS145,CO
US10adam001,NE
US10adam002,NE
...
```
Although you used several commands for these tasks, it is possible to perform all the steps in one run. Try it yourself.
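For instance, here is one possible single pipeline (a sketch assuming the same file names used above) that goes from the raw station list straight to the final CSV:
```
$ grep "^US" ghcnd-stations.txt | tr -s ' ' | cut -d' ' -f1,5 | sed s/' '/,/g > us_stations.txt
```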
Now, replace the station codes with their state locations by using [AWK][28], which performs well on large data sets.
**station_to_state_data.sh**
```
#!/bin/sh
PATTERN_FILE=us_stations.txt
for DATA_FILE in `ls TAVG_US_*.csv`
do
    echo ${DATA_FILE}
    awk -F, \
        'FNR==NR { x[$1]=$2; next; } { $1=x[$1]; print $0 }' \
        OFS=, \
        ${PATTERN_FILE} ${DATA_FILE} > ${DATA_FILE}.tmp
   mv ${DATA_FILE}.tmp ${DATA_FILE}
done
```
Here is what these parameters mean:
| Parameter | Meaning |
| --- | --- |
| `-F,` | Field separator is `,` |
| `FNR` | Line number within each file |
| `NR` | Line number across both files together |
| `FNR==NR` | TRUE only in the first file, `${PATTERN_FILE}` |
| `{ x[$1]=$2; next; }` | Runs while `FNR==NR` is TRUE (for all lines in `$PATTERN_FILE` only) |
| - `x` | Variable that stores the `station=state` map |
| - `x[$1]=$2` | Adds a `station=state` entry to the map |
| - `$1` | First column in the first file (station codes) |
| - `$2` | Second column in the first file (state codes) |
| - `x` | Map of all stations, e.g., `x[US009052008]=SD`, `x[US10RMHS145]=CO`, ..., `x[USW00096409]=AK` |
| - `next` | Go to the next line matching `FNR==NR` (essentially, this builds the station-to-state map from `${PATTERN_FILE}`) |
| `{ $1=x[$1]; print $0 }` | Runs when `FNR==NR` is FALSE (for all lines in `$DATA_FILE` only) |
| - `$1=x[$1]` | Replace the first field with `x[$1]`; essentially, replace the station code with the state code |
| - `print $0` | Print all columns (including the replaced `$1`) |
| `OFS=,` | Output field separator is `,` |
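If the `FNR==NR` idiom is new to you, here is a minimal sketch that shows the same mapping logic in isolation, using two hypothetical two-line files:
```
$ printf 'S1,CA\nS2,NY\n' > map.csv
$ printf 'S1,2010,01,01,-10\nS2,2010,01,01,5\n' > data.csv
$ awk -F, 'FNR==NR { x[$1]=$2; next; } { $1=x[$1]; print $0 }' OFS=, map.csv data.csv
CA,2010,01,01,-10
NY,2010,01,01,5
```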
The CSV with station codes:
```
$ head TAVG_US_2010.csv
USR0000AALC,2010,01,01,-220
USR0000AALP,2010,01,01,-9
USR0000ABAN,2010,01,01,12
USR0000ABCA,2010,01,01,16
USR0000ABCK,2010,01,01,-309
USR0000ABER,2010,01,01,-81
USR0000ABEV,2010,01,01,-360
USR0000ABEN,2010,01,01,-224
USR0000ABNS,2010,01,01,89
USR0000ABLA,2010,01,01,59
```
Run the command:
```
$ ./station_to_state_data.sh
TAVG_US_2010.csv
...
TAVG_US_2019.csv
```
Stations are now mapped to states:
```
$ head TAVG_US_2010.csv
AK,2010,01,01,-220
AZ,2010,01,01,-9
AL,2010,01,01,12
AK,2010,01,01,16
AK,2010,01,01,-309
AK,2010,01,01,-81
AK,2010,01,01,-360
AK,2010,01,01,-224
AZ,2010,01,01,59
AK,2010,01,01,-68
```
Every state has several temperature readings for each day, so you need to calculate the average of each state's readings for a day. Use AWK for text processing, [sort][29] to ensure the final results are in a logical order, and [rm][30] to delete the temporary file after processing.
**TAVG_avg.sh**
```
#!/bin/sh
for DATA_FILE in `ls TAVG_US_*.csv`
do
    echo ${DATA_FILE}
    # Accumulate the sum and the count of readings per state per day,
    # then print state,year,month,day,average in the END block
    awk -F, \
        '{ state_day_sum[$1 "," $2 "," $3 "," $4] = state_day_sum[$1 "," $2 "," $3 "," $4] + $5;
           state_day_num[$1 "," $2 "," $3 "," $4] = state_day_num[$1 "," $2 "," $3 "," $4] + 1; }
         END { for (state_day_key in state_day_sum)
                   print state_day_key "," state_day_sum[state_day_key]/state_day_num[state_day_key]; }' \
        OFS=, \
        ${DATA_FILE} > STATE_DAY_${DATA_FILE}.tmp
    # Sort by state, then chronologically, and remove the temporary file
    sort -t',' -k1,1 -k2,2n -k3,3n -k4,4n STATE_DAY_${DATA_FILE}.tmp > STATE_DAY_${DATA_FILE}
    rm STATE_DAY_${DATA_FILE}.tmp
done
```
Here is what the AWK parameters mean:
| Parameter | Meaning |
| --- | --- |
| `-F,` | Field separator is `,` |
| `state_day_sum[$1 "," $2 "," $3 "," $4] = state_day_sum[$1 "," $2 "," $3 "," $4] + $5` | Sum of the temperatures (`$5`) for the state (`$1`) on a year (`$2`), month (`$3`), and day (`$4`) |
| `state_day_num[$1 "," $2 "," $3 "," $4] = state_day_num[$1 "," $2 "," $3 "," $4] + 1` | Number of temperature readings for the state (`$1`) on a year (`$2`), month (`$3`), and day (`$4`) |
| `END` | At the end, after collecting the sums and counts for all states, years, months, and days, calculate the averages |
| `for (state_day_key in state_day_sum)` | For each state-year-month-day |
| `print state_day_key "," state_day_sum[state_day_key]/state_day_num[state_day_key]` | Print state,year,month,day,average |
| `OFS=,` | Output field separator is `,` |
| `${DATA_FILE}` | Input file (all files with names starting with `TAVG_US_` and ending with `.csv`, one by one) |
| `> STATE_DAY_${DATA_FILE}.tmp` | Save the result to a temporary file |
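To see the aggregation on its own, here is a minimal sketch that averages two hypothetical readings for the same state and day:
```
$ printf 'CA,2010,01,01,10\nCA,2010,01,01,20\n' | awk -F, '{ k=$1","$2","$3","$4; s[k]+=$5; n[k]++ } END { for (k in s) print k "," s[k]/n[k] }'
CA,2010,01,01,15
```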
Run the script:
```
$ ./TAVG_avg.sh
TAVG_US_2010.csv
TAVG_US_2011.csv
TAVG_US_2012.csv
TAVG_US_2013.csv
TAVG_US_2014.csv
TAVG_US_2015.csv
TAVG_US_2016.csv
TAVG_US_2017.csv
TAVG_US_2018.csv
TAVG_US_2019.csv
```
These files are created:
```
$ ls STATE_DAY_TAVG_US_20*.csv
STATE_DAY_TAVG_US_2010.csv  STATE_DAY_TAVG_US_2015.csv
STATE_DAY_TAVG_US_2011.csv  STATE_DAY_TAVG_US_2016.csv
STATE_DAY_TAVG_US_2012.csv  STATE_DAY_TAVG_US_2017.csv
STATE_DAY_TAVG_US_2013.csv  STATE_DAY_TAVG_US_2018.csv
STATE_DAY_TAVG_US_2014.csv  STATE_DAY_TAVG_US_2019.csv
```
See one year of data for all states ([less][31] is a utility for viewing output one page at a time):
```
$ less STATE_DAY_TAVG_US_2010.csv
AK,2010,01,01,-181.934
...
AK,2010,01,31,-101.068
AK,2010,02,01,-107.11
...
AK,2010,02,28,-138.834
...
WY,2010,01,01,-43.5625
...
WY,2010,12,31,-215.583
```
Merge all the data files into one:
```
$ cat STATE_DAY_TAVG_US_20*.csv > TAVG_US_2010-2019.csv
```
You now have one file, with all states, for all years:
```
$ cat TAVG_US_2010-2019.csv
AK,2010,01,01,-181.934
...
WY,2018,12,31,-167.421
AK,2019,01,01,-32.3386
...
WY,2019,12,30,-131.028
WY,2019,12,31,-79.8704
```
### 4\. Make time-series data
A problem like this is fittingly addressed with a time-series model such as long short-term memory ([LSTM][32]), which is a recurrent neural network ([RNN][33]). The input data is organized into time slices; consider 20 days to be one slice.
Here is one time slice (as in `STATE_DAY_TAVG_US_2010.csv`):
```
X (input: 20 days):
AK,2010,01,01,-181.934
AK,2010,01,02,-199.531
...
AK,2010,01,20,-157.273
y (the 21st day, the prediction following these 20 days):
AK,2010,01,21,-165.31
```
This time slice is represented as follows (temperature values, where the first 20 days are X and day 21 is y):
```
AK, -181.934,-199.531, ... ,
-157.273,-165.3
```
The slices are time-contiguous. For example, the end of 2010 continues into 2011:
```
AK,2010,12,22,-209.92
...
AK,2010,12,31,-79.8523
AK,2011,01,01,-59.5658
...
AK,2011,01,10,-100.623
```
This results in the prediction:
```
AK,2011,01,11,-106.851
```
This time slice is taken as:
```
AK, -209.92, ... ,-79.8523,-59.5658, ... ,-100.623,-106.851
```
and so on, for all states, years, months, and dates. For more explanation, see this tutorial on [time-series forecasting][34].
Write a script to create time slices:
**timeslices.sh**
```
#!/bin/sh
TIME_SLICE_PERIOD=20
file=TAVG_US_2010-2019.csv
# For each state in file
for state in `cut -d',' -f1 $file | sort | uniq`
do
    # Get all temperature values for the state
    state_tavgs=`grep $state $file | cut -d',' -f5`
    # How many time slices will this result in?
    # Number of temperatures recorded minus the size of one time slice
    num_slices=`echo $state_tavgs | wc -w`
    num_slices=$((${num_slices} - ${TIME_SLICE_PERIOD}))
    # Initialize
    slice_start=1; num_slice=0;
    # For each timeslice
    while [ $num_slice -lt $num_slices ]
    do
        # One timeslice is from slice_start to slice_end
        slice_end=$(($slice_start + $TIME_SLICE_PERIOD - 1))
        # X (1-20)
        sliceX="$slice_start-$slice_end"
        # y (21)
        slicey=$(($slice_end + 1))
        # Print state and timeslice temperature values (column 1-20 and 21)
        echo $state `echo $state_tavgs | cut -d' ' -f$sliceX,$slicey`
        # Increment
        slice_start=$(($slice_start + 1)); num_slice=$(($num_slice + 1));
    done
done
```
Run the script. It uses spaces as column separators; make them commas with sed:
```
$ ./timeslices.sh > TIMESLICE_TAVG_US_2010-2019.csv; sed -i s/' '/,/g TIMESLICE_TAVG_US_2010-2019.csv
```
Here are the first few lines and the last few lines of the output .csv:
```
$ head -3 TIMESLICE_TAVG_US_2010-2019.csv
AK,-271.271,-290.057,-300.324,-277.603,-270.36,-293.152,-292.829,-270.413,-256.674,-241.546,-217.757,-158.379,-102.585,-24.9517,-1.7973,15.9597,-5.78231,-33.932,-44.7655,-92.5694,-123.338
AK,-290.057,-300.324,-277.603,-270.36,-293.152,-292.829,-270.413,-256.674,-241.546,-217.757,-158.379,-102.585,-24.9517,-1.7973,15.9597,-5.78231,-33.932,-44.7655,-92.5694,-123.338,-130.829
AK,-300.324,-277.603,-270.36,-293.152,-292.829,-270.413,-256.674,-241.546,-217.757,-158.379,-102.585,-24.9517,-1.7973,15.9597,-5.78231,-33.932,-44.7655,-92.5694,-123.338,-130.829,-123.979
$ tail -3 TIMESLICE_TAVG_US_2010-2019.csv
WY,-76.9167,-66.2315,-45.1944,-27.75,-55.3426,-81.5556,-124.769,-137.556,-90.213,-54.1389,-55.9907,-30.9167,-9.59813,7.86916,-1.09259,-13.9722,-47.5648,-83.5234,-98.2963,-124.694,-142.898
WY,-66.2315,-45.1944,-27.75,-55.3426,-81.5556,-124.769,-137.556,-90.213,-54.1389,-55.9907,-30.91

View File

@ -1,108 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Customize Task Switching Experience on GNOME Desktop With These Nifty Tools)
[#]: via: (https://itsfoss.com/customize-gnome-task-switcher/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Customize Task Switching Experience on GNOME Desktop With These Nifty Tools
======
Unless you're new to Linux, you know that there are several [popular desktop environment][1] choices for users. And if you're that newbie, I recommend learning [what a desktop environment is][2] alongside this tutorial.
Here, I shall be focusing on tweaking the task-switching experience on GNOME. I know that the majority of users tend to use it as is, and the stock settings are good enough for the most part.
I mean there is nothing wrong with the application switcher that [you use with Alt+Tab keyboard shortcut in Ubuntu][3].
![][4]
However, if you are a tinkerer who wants to [customize the look and feel of your GNOME desktop][5], including the task switcher and animation effects when launching or minimizing an app — you might want to continue reading this article.
### Change GNOME Task Switcher to Windows 7 Style Effect
![][6]
Switching between running applications using the **Alt+Tab** key binding is fast, but it may not be the most intuitive experience for some. You just cycle through a bunch of icons, depending on the number of active applications.
What if you want to change how the task switcher looks?
Well, you can easily give it the look of the Windows 7 Aero Flip 3D effect. Here's how it will look:
![][7]
It definitely looks interesting to have a different task switcher. Why? Just for fun, or to share your desktop's screenshot with Linux communities.
Now, to get this on your GNOME desktop, here's what you have to do:
Step 1: Enable GNOME extensions if you haven't already. You can follow our guide to [learn how to use GNOME shell extensions][8].
Step 2: Once you are done with the setup, you can proceed to download and install the [Coverflow GNOME extension][9] from the GNOME extensions website.
In case you haven't installed the browser extension, you can just click on the link "**Click here to install browser extension**" in the notice, as shown in the screenshot below.
![][10]
Step 3: Next, you just have to refresh the web page and enable the extension as shown in the screenshot below.
![][11]
You also get some customization options if you click on the "gear" icon next to the toggle button.
![][12]
Of course, depending on how fast you want it to be or how good you want it to look, you will have to adjust the animation speed accordingly.
Next, why not add a cool effect when you interact with applications (minimizing or closing them)? I have just the solution for you.
### Add Genie Animation Effect While Minimizing & Re-opening Applications
There's an interesting effect (sort of like a genie popping out of a lamp) that you can see when you minimize or re-open an app.
This also comes as a GNOME extension, so you do not need to do anything else to get started.
You just have to head to the extension's page, [Compiz alike Magic Lamp effect][13], and enable the extension to see it in action.
![][14]
Here's how it looks in action:
![][15]
It would look even cooler if you moved the Ubuntu dock to the bottom.
Exciting GNOME extensions, right? You can play around to tweak your GNOME experience using the [GNOME tweaks app][16] and [install some beautiful icon themes][17] or explore different options.
How do you prefer to customize your GNOME experience? Is there any other cool GNOME extension or an app that you tend to utilize? Feel free to share your thoughts in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/customize-gnome-task-switcher/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/best-linux-desktop-environments/
[2]: https://itsfoss.com/what-is-desktop-environment/
[3]: https://itsfoss.com/ubuntu-shortcuts/
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2017/10/gnome-app-switcher.jpeg?resize=800%2C255&ssl=1
[5]: https://itsfoss.com/gnome-tricks-ubuntu/
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/11/ubuntu-coverflow-screenshot.jpg?resize=800%2C387&ssl=1
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/11/coverflow-task-switcher.jpg?resize=800%2C392&ssl=1
[8]: https://itsfoss.com/gnome-shell-extensions/
[9]: https://extensions.gnome.org/extension/97/coverflow-alt-tab/
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/11/gnome-shell-extension-browser.jpg?resize=800%2C401&ssl=1
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/11/coverflow-enable.jpg?resize=800%2C303&ssl=1
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/11/coverflow-settings.png?resize=800%2C481&ssl=1
[13]: https://extensions.gnome.org/extension/3740/compiz-alike-magic-lamp-effect/
[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/11/magic-lamp-extension.jpg?resize=800%2C355&ssl=1
[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/11/magic-lamp-effect-800x380.gif?resize=800%2C380&ssl=1
[16]: https://itsfoss.com/gnome-tweak-tool/
[17]: https://itsfoss.com/best-icon-themes-ubuntu-16-04/

View File

@ -1,122 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 open source alternatives to GitHub)
[#]: via: (https://opensource.com/article/20/11/open-source-alternatives-github)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
5 open source alternatives to GitHub
======
Stay resilient by keeping your open source code in an open source
repository.
![Woman programming][1]
Git is a popular version-control system, primarily used for code but popular in [other disciplines][2], too. It can run locally on your computer for personal use, it can run on a server for collaboration, and it can also run as a hosted service for widespread public participation. There are many hosted services out there, and one of the most popular brands is [GitHub][3].
GitHub is not open source. Pragmatically, this doesn't make much of a difference to most users. The vast majority of code put onto GitHub is, presumably, encouraged to be shared by everyone, so GitHub's primary function is a sort of public backup service. Should GitHub fold or drastically change its terms of service, recovering data would be relatively simple because it's expected that you have a local copy of the code you keep on GitHub. However, some organizations have come to rely on the non-Git parts of GitHub's service offerings, making migration away from GitHub difficult. That's an awkward place to be, so for many people and organizations, insurance against vendor lock-in is a worthwhile investment.
If that's the position you're in, check out these five GitHub alternatives, all of which are open source.
### 1\. GitLab
![GitLab][4]
(Seth Kenlon, [CC BY-SA 4.0][5])
GitLab is more than just a GitHub alternative; it's more like a complete DevOps platform. GitLab is nearly all the infrastructure a software development house requires, as it provides code and project management tools, issue reporting, continuous delivery, and monitoring. You can use GitLab on [GitLab.com][6], or you can download the codebase and run it locally with or without paid support. GitLab has a web interface, but all Git-specific commands work as expected.
GitLab is committed to open source, both in its code and the organization behind it, and to Git itself. The organization publishes much of its business documentation, including [how employees are onboarded][7], their [marketing policies][8], and much more. As a site, GitLab is ardent in promoting Git. When you use a site-specific feature (such as a merge request), GitLab's interface explains how to resolve the request in pure Git, should you prefer to work in the terminal.
### 2\. Gitolite
[Gitolite][9] is quite probably the minimal amount of code required to provide a server administrator with a frontend for Git repository management. Unlike GitHub, it has no web interface and no desktop client, and it adds nothing to Git from the user's perspective. In fact, your users don't really use Gitolite directly. They just use Git as usual, whether they're used to Git in a terminal or in a frontend client like [Git Cola][10].
From the server administrator's perspective, though, Gitolite solves all the permission and access problems you'd have to manage manually if you ran a plain Git server. With Gitolite, you create only one user (for instance, a user called `git`) on your server. You allow your users to use this single login identity to access your Git server, but when they log in, they must deal with your Git server through Gitolite. It's Gitolite that verifies users' access permissions, manages their SSH keys, verifies their privilege level when accessing specific repositories, and more. Instead of creating and managing countless Unix user accounts, all the administrator has to do is map users (identified by their SSH public keys) to the repositories they are allowed to access. Gitolite takes care of everything else.
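As a rough sketch (the repository and user names here are hypothetical), an access rule in Gitolite's `conf/gitolite.conf` looks something like this:
```
# alice and bob are identified by keydir/alice.pub and keydir/bob.pub
repo project-alpha
    RW+     =   alice
    R       =   bob
```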
Gitolite is nearly invisible to users, and it makes Git management nearly invisible to the server admin. As long as you don't require a web interface, Gitolite is a net win for everyone involved.
### 3\. Gitea and Gogs
![Gitea][11]
(Seth Kenlon, [CC BY-SA 4.0][5])
The [Gogs project][12] is an MIT-Licensed Git server framework and web user interface. In 2016, some Gogs users felt development was hindered because only its initial developer had write access to its development repository, so they forked the code to [Gitea][13]. Today, both projects co-exist independently of one another, and from a user's perspective, they are basically the same experience. Ironically, both projects are hosted on GitHub.
With Gitea and Gogs, you download the source code and run it as a service on your server. This provides a website for users, where they can create an account, log in, create their own repositories, upload code, navigate through code, file issues and bug reports, request code merges, manage SSH keys, and so on. The interface is similar in look and feel to GitLab, GitHub, or Bitbucket, so if users have any experience with an online-code management system, they're already essentially familiar with Gitea and Gogs.
Gitea or Gogs can be installed as a package on any Linux server, including a Raspberry Pi, as a container, on BSD, macOS, or Windows, or compiled from source code. They're both cross-platform, so they can be run on anything that runs Go. Read Ricardo Gerardi's article about [setting up a Gogs container using Podman][14] for more information.
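For example, a container-based install can be a one-liner. This is a sketch assuming Docker and the official `gitea/gitea` image; adjust the ports and volume to your environment:
```
$ docker run -d --name gitea -p 3000:3000 -p 2222:22 -v gitea-data:/data gitea/gitea:latest
```
The web interface then comes up on port 3000, and Git-over-SSH is available on port 2222.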
### 4\. Independent communities
![Notabug][15]
(Seth Kenlon, [CC BY-SA 4.0][5])
If you're not up for self-hosting, you can cheat a little by using a self-hosted option on somebody else's server. There are many independent sites out there, such as [Codeberg][16], Nixnet, Tinfoil-hat, and [Notabug.org][17]. Some run Gitea and others run Gogs, but the result is the same: free code hosting to help you keep your work safe and public. These solutions may not be as complex as something like GitLab or GitHub, they may not offer on-demand Jenkins pipelines and continuous integration/continuous development (CI/CD) solutions, but they're great mirrors for your work.
There are purpose-specific providers, too: a [Gitea instance for FSFE supporters][18], a Gitlab instance for [Freedesktop projects][19], and another for [GNOME projects][20].
Because these independent servers are smaller communities, you might also find that the "social" aspect of social coding is more significant. I've made several online friends through an independent Git provider, while GitHub has proven to be, at least socially, underwhelming.
The message is clear: there's no requirement or advantage for there to be a centralized, dominant, non-free Git software hosting service.
### 5\. Git
It might surprise you to learn how self-reliant Git is as a server. While it lacks user management and permission settings, Git integrates with SSH and ships with a special `git-shell` application designed specifically to serve as a limited environment for using Git commands. By setting users' default shell to `git-shell`, you can limit what actions are available to them when they interact with your server.
What Git alone does not offer is repository permission tools to help you manage what each user has access to. For this, you'll have to fall back on the operating system's user and access control list (ACL) controls, which can become tedious should you have more than just a handful of users. For small projects or projects just starting, running Git on a Linux server is an easy and immediate solution to the need for a collaborative space. For more information, read my article on [building a Git server][21].
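Here is a minimal sketch of that setup, assuming a user named `gituser` already exists on the server:
```
# Register git-shell as a permitted login shell, then assign it to the user
$ sudo sh -c 'which git-shell >> /etc/shells'
$ sudo chsh -s $(which git-shell) gituser
# gituser can now push and pull over SSH but gets no interactive login
```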
### Bonus: Fossil
![Fossil UI][22]
(Klaatu, [CC BY-SA 4.0][5])
Fossil isn't by any means Git, and in a sense, that's its appeal as an alternative to GitHub. In fact, Fossil is an alternative to the entire Git system. It's a complete version-control system, like Git, and it also has bug tracking, wiki, forum, and documentation features _built into every repository you create_. It also has a web interface included and is entirely self-contained. If it all sounds too good to be true, you can see it in action at [fossil-scm.org][23], because Fossil's homepage runs on Fossil!
Read Klaatu's article on [getting started with Fossil][24] for more information.
### Open source means choice
The best thing about Git (and Fossil) is that they're open source technologies. You can choose whatever solution works best for you. In fact, because Git is also distributed, you can even choose _multiple_ solutions. There's nothing stopping you from hosting your code on several services and writing to all of them with each push. Take a look at your options, decide what works best for you, and get to work!
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/11/open-source-alternatives-github
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop-music-headphones.png?itok=EQZ2WKzy (Woman programming)
[2]: https://opensource.com/article/19/4/write-git
[3]: https://github.com/
[4]: https://opensource.com/sites/default/files/uploads/gitlab.jpg (GitLab)
[5]: https://creativecommons.org/licenses/by-sa/4.0/
[6]: https://gitlab.com
[7]: https://about.gitlab.com/handbook/people-group/general-onboarding/onboarding-processes
[8]: https://about.gitlab.com/handbook
[9]: https://gitolite.com/gitolite/index.html
[10]: https://opensource.com/article/20/3/git-cola
[11]: https://opensource.com/sites/default/files/uploads/gitea.jpg (Gitea)
[12]: https://gogs.io
[13]: https://gitea.io
[14]: https://www.redhat.com/sysadmin/git-gogs-podman
[15]: https://opensource.com/sites/default/files/uploads/notabug.jpg (Notabug)
[16]: https://join.codeberg.org/
[17]: https://notabug.org
[18]: https://git.fsfe.org/
[19]: https://gitlab.freedesktop.org
[20]: https://gitlab.gnome.org
[21]: https://opensource.com/life/16/8/how-construct-your-own-git-server-part-6
[22]: https://opensource.com/sites/default/files/uploads/fossil-ui.jpg (Fossil UI)
[23]: https://www.fossil-scm.org
[24]: https://opensource.com/article/20/11/fossil

View File

@ -1,146 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to choose a wireless protocol for home automation)
[#]: via: (https://opensource.com/article/20/11/wireless-protocol-home-automation)
[#]: author: (Steve Ovens https://opensource.com/users/stratusss)
How to choose a wireless protocol for home automation
======
Which of the three dominant wireless protocols used in home
automation—WiFi, Z-Wave, and Zigbee—is right for you? Consider the
options in part three of this series.
![Digital images of a computer desktop][1]
In the second article in this series, I talked about [local control vs. cloud connectivity][2] and some things to consider for your home automation setup.
In this third article, I will discuss the underlying technology for connecting devices to [Home Assistant][3], including the dominant protocols that smart devices use to communicate and some things to think about before purchasing smart devices.
### Connecting devices to Home Assistant
Many different devices work with Home Assistant. Some connect through a cloud service, and others work by communicating with a central unit, such as a [SmartThings Hub][4], that Home Assistant communicates with. And still others have a facility to communicate over your local network.
For a device to be truly useful, one of its key features must be wireless connectivity. There are currently three dominant wireless protocols that smart devices use: WiFi, Z-Wave, and Zigbee. I'll do a quick breakdown of each including their pros and cons.
**A note about wireless spectra:** Spectra are measured in hertz (Hz). A gigahertz (GHz) is 1 billion Hz. In general, the larger the number of Hz, the more data can be transmitted and the faster the connection. However, higher frequencies are more susceptible to interference and do not travel very well through solid objects. Lower frequencies can travel further and pass through solid objects more readily, but the trade-off is they cannot send much data.
### WiFi
[WiFi][5] is the most widely known of the three standards. These devices are the easiest to get up and running if you are starting from scratch. This is because almost everyone interested in home automation already has a WiFi router or an access point. In fact, in most countries in the western world, WiFi is considered almost on the same level as running water; if you go to a hotel, you expect a clean, temperature-controlled room with a WiFi password provided at check-in.
Therefore, Internet of Things (IoT) devices that use the WiFi protocol require no additional hardware to get started. Plug in the new device, launch a vendor-provided application or a web browser, enter your credentials, and you're done.
It's important to note that almost all moderate- to low-priced IoT devices use the 2.4GHz wireless spectrum. Why does this matter? Well, 2.4GHz has been around so long that virtually all devices—from cordless phones to smart bulbs—use this spectrum. In most countries, there are generally only about a dozen channels that off-the-shelf devices can broadcast and receive on. Like overloading a cell tower when too many users attempt to make phone calls during an emergency, channels can become overcrowded and susceptible to outside interference.
While well-behaved smart devices use little to no bandwidth, if they struggle to send or receive messages due to overcrowding of the spectrum, your automation will produce mixed results. A WiFi access point can communicate with only one client at a time. That means the more devices you have on WiFi, the greater the chance that someone on the network will have to wait their turn to communicate.
**Pros:**
* Ubiquitous
* Tend to be inexpensive
* Easy to set up
* Easy to extend the range
* Uses existing network
* Requires no hub
**Cons:**
* Can suffer from interference from neighboring devices or adjacent networks
* Uses the most populated 2.4GHz spectrum
* Your router limits the number of devices
* Uses more power, which means less or no battery-powered devices
* Has the potential to impact latency-sensitive activities like gaming over WiFi
* Most off-the-shelf products require an internet connection
### Z-Wave
[Z-Wave][6] is a closed wireless protocol controlled and maintained by a company named Zensys. Because it is controlled by a single entity, all devices are guaranteed to work together. There is one standard and one implementation. This means that you never have to worry about which device you buy from which manufacturer; they will always work.
Z-Wave operates in the 0.9GHz spectrum, which means it has the largest range of the popular protocols. A central hub is required to coordinate all the devices on a Z-Wave ecosystem. Z-Wave operates on a [mesh network][7] topology, which means that every device acts as a potential repeater for other devices. In theory, this allows a much greater coverage area. Z-Wave limits the number of "hops" to 4. That means that, in order for a signal to get from a device to a hub, it can only travel through four devices. This could be a positive or a negative, depending on your perspective. 
On the one hand, it reduces the ecosystem's maximum latency by preventing packets from traveling through a significant number of devices before reaching the destination. The more devices a signal must go through, the longer it can take for devices to become responsive.
On the other hand, it means that you need to be more strategic about providing a good path from your network's extremities back to the hub. Remember, the lower frequency that enables greater distance also limits the speed and amount of data that can be transferred. This is currently not an issue, but no one knows what size messages future smart devices will want to send.
**Pros:**
* Z-Wave compatibility guaranteed
  * Forms a mesh network
* Low powered and can be battery powered
* Mesh networks become more reliable with more devices
* Uses 0.9GHz and can transmit up to 100 meters
* Least likely of the three to have signal interference from solid objects or external sources
**Cons:**
* Closed protocol
* Costs the most
* Maximum of four hops in the mesh
* Can support up to 230 devices per network
* Uses 0.9GHz, which is the slowest of all protocols
### Zigbee
Unlike Z-Wave, [Zigbee][8] is an open standard. This can be a pro or a con, depending on your perspective. Because it is an open standard, manufacturers are free to alter the implementation to suit their products. To borrow an analogy from one of my favorite YouTube channels, [The Hook Up][9], Zigbee is like going through a restaurant drive-through. Having the same standard means you will always be able to speak to the restaurant and they will be able to hear you. However, if you speak a different language than the drive-through employee, you won't be able to understand each other. Both of you can speak and hear each other, but the meaning will be lost.
Similarly, the Zigbee standard allows all devices on a Zigbee network to "hear" each other, but different implementations mean they may not "understand" each other. Fortunately, more often than not, your Zigbee devices should be able to interoperate. However, there is a non-trivial chance that your devices will not be able to understand each other. When this happens, you may end up with multiple networks that could interfere with each other.
Like Z-Wave, Zigbee employs a mesh network topology but has no limit to the number of "hops" devices can use to communicate with the hub. This, combined with some tweaks to the standard, means that Zigbee theoretically can support more than 65,000 devices on a single network.
**Pros:**
* Open standard
  * Forms a mesh network
* Low-powered and can be battery powered
* Can support over 65,000 devices
* Can communicate faster than Z-Wave
**Cons:**
* No guaranteed compatibility
* Can form separate mesh networks that interfere with each other
* Uses the oversaturated 2.4GHz spectrum
* Transmits only 10 to 30 meters
### Pick your protocol
Perhaps you already have some smart devices. Or maybe you are just starting to investigate your options. There is a lot to consider when you're buying devices. Rather than focusing on the lights, sensors, smart plugs, thermometers, and the like, it's perhaps more important to know which protocol (WiFi, Z-Wave, or Zigbee) you want to use.
Whew! I am finally done laying home automation groundwork. In the next article, I will show you how to start the initial installation and configuration of a Home Assistant virtual machine.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/11/wireless-protocol-home-automation
作者:[Steve Ovens][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/stratusss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_browser_web_desktop.png?itok=Bw8ykZMA (Digital images of a computer desktop)
[2]: https://opensource.com/article/20/11/cloud-vs-local-home-automation
[3]: https://opensource.com/article/20/11/home-assistant
[4]: https://www.smartthings.com/
[5]: https://en.wikipedia.org/wiki/Wi-Fi
[6]: https://www.z-wave.com/
[7]: https://en.wikipedia.org/wiki/Mesh_networking
[8]: https://zigbeealliance.org/
[9]: https://www.youtube.com/channel/UC2gyzKcHbYfqoXA5xbyGXtQ

View File

@ -1,157 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Create universal blockchain smart contracts)
[#]: via: (https://opensource.com/article/20/12/blockchain-smart-contracts)
[#]: author: (Gage Mondok https://opensource.com/users/matt-coolidge)
Create universal blockchain smart contracts
======
Chainlink connects blockchain data with external, "real-world" data
using decentralized oracles.
![cubes coming together to create a larger cube][1]
Blockchain [smart contracts][2] have the ability to access off-chain data by integrating [decentralized oracles][3]. Before diving into how to use them, it's important to understand why smart contracts matter in the big picture and why they need oracles for data access.
Transactions happen every day, as they have for tens of thousands of years. They're generally governed by an agreement or contract. This may be driven by a vendor's terms of service, regulatory frameworks, or some combination of both. Parameters for these agreements are not always clear or transparent, and they are ultimately governed by a brand (whether that's a person or a company) and its willingness to act upon terms agreed upon in advance.
Contracts, like the rest of the world, are going digital. The rise of blockchain technology has introduced smart contracts, a more tamper-proof, transparent, and fair system for governing such agreements. Smart contracts are governed by math, not brands. They automatically enforce the parameters of a contract once they're executed, creating a more equitable structure for all parties.
The challenge with smart contracts is that they generally depend on their ability to bridge real-world data with blockchains (or data from one blockchain to another) so that the smart contract can recognize quality, assess reliable data, and trigger agreed-upon outcomes once terms are met. Traditionally, this has been an overly complex and difficult process, which limited broader adoption.
### About Chainlink
[Chainlink][4] is an open source abstraction layer that provides a framework to easily connect any blockchain with any external (or separate blockchain) API. You can think of Chainlink as the blockchain equivalent of the transport layer in TCP/IP, ensuring data is reliably transmitted in and out. Chainlink was designed to be the standard data layer for smart contracts, unlocking their true capability to affect the external world, and turning them into externally aware, universal smart contracts.
Smart contracts have the power to revolutionize how trust and automation are handled in business, but their restriction in scope to events on the blockchain has severely limited their potential. A majority of what developers want to interact with exists in the "real world," such as pricing data, shipping events, world events, etc. To create universal smart contracts, which are externally aware and thus can handle a wide, universal set of jobs with the world's data at their fingertips, the Chainlink network gives [Solidity][5] and other blockchain developers a framework of decentralized oracles to build with.
You can use these oracles to retrieve data for your decentralized application (dApp) in real-time on the Ethereum mainnet.
#### Chainlink adapters
[Adapters][6] are the default data manipulation functions that every Chainlink node supports by default. The nodes are the decentralized oracles in this case. They fulfill the data requests, and the Chainlink network is composed of an ever-growing number of them. Nodes are run by a multitude of independent operators. Through adapters, all developers have a standard interface for making data requests, and node operators have a standard for serving that data. These adapters include functionality such as HTTP GET, HTTP POST, Compare, Copy, etc. Adapters are a dApp's connection to the external world's data.
For example, here are the parameters for the [HttpGet][7] adapter:
* **get**: Takes a string containing the API URL to make a GET request to
* **headers**: Takes an object containing keys as strings and values as arrays of strings
* **queryParams**: Takes a string or array of strings for the URL's query parameters
* **extPath**: Takes a slash-delimited string or array of strings to be appended to the job's URL
#### Chainlink requests
For a universal smart contract to interact with these adapters, you need another functionality, requests. All contracts that inherit from [ChainlinkClient][8] can create a Chainlink.Request struct that allows developers to form a request to a Chainlink decentralized oracle. This request should add the desired adapter parameters to the struct according to the request you want to make. Submitting this request requires some basic fields, such as the address of the node you want to use as your oracle, the jobId, and the agreed-upon fee. In addition to those default fields, you can add your desired adapter parameters to the request struct:
```
// Set the URL to perform the GET request on
request.add("get", "https://min-api.cryptocompare.com/data/price?fsym=ETH&tsyms=USD");
```
With this struct, requests are flexible and can be formulated to fit various situations involving getting, posting, and manipulating data from any API because the requests can contain any of the adapter functions. What makes this system decentralized is that Chainlink's oracle network consists of many of these nodes, and developers are free to choose which and how many they want to request from based on their needs. This enables redundant failover and error checking via multiple sources, as high-reliability dApps often require.
For more information on constructing a request and the functions needed to submit it and receive a response within a ChainlinkClient contract, see Chainlink's full [HTTP GET request example][10].
For common requests, a node operator may already have an existing oracle job preconfigured, and in this case, the request is much simpler. Rather than building a custom request struct and adding the necessary adapters, the default request struct is all you need to create. No additional adapter parameters are needed; the set of decentralized oracles you choose will know how to respond based on the jobId provided when creating the request struct.
This example comes from the full [CoinGecko Consumer API][11]:
```
Chainlink.Request memory req = buildChainlinkRequest(jobId, address(this), this.fulfillEthereumPrice.selector);
sendChainlinkRequestTo(oracle, req, fee);
```
You can use a decentralized oracle data service, such as [Chainlink Market][13], to search through existing oracles and the jobs they support in order to find the jobId you require.
### External adapters
But what if you have a complex use case for your smart contract that isn't covered by the default adapter functions? What if you need to perform some advanced data manipulation? Maybe it's not raw data you want to submit to your contract but rather metadata generated by statistical analysis of multiple data points. Maybe you can manipulate the data on-chain with the default adapters but want to reduce gas costs. Perhaps you don't want your API request on-chain due to using a credentialed source, and you don't want to specify those credentials on-chain or in the oracle job spec. This is where [external adapters][14] come in.
![Chainlink External Adapter for IoT Devices][15]
(Chainlink, ©2020)
External adapters are the "whatever data you need; we can handle it" of Chainlink. When we say universal smart contracts, we really mean _universal_. Since external adapters are pieces of code that exist off-chain with the Chainlink oracle node, they can be written in any language of your choice and perform whatever functionality you can think up—so long as the data input and output adhere to the adapter's JSON specification. External adapters act as the interface between the Chainlink decentralized oracle network and external data, letting the node operators know how to request and receive the JSON response that is then consumed on-chain.
Defining this interface specification off-chain through an external adapter opens up vast possibilities: You can now store your API credentials off-chain per your personal security standards, data can be programmed in any way in the language of your choice, and all of this happens without using any Ethereum gas fees to fund an on-chain transaction. In a sense, external adapters are like another layer of a decentralized oracle, packaging up data outside the blockchain with speed and at low cost and putting it into one tidy JSON format to be verifiably committed on-chain by the Chainlink oracle node.
External adapters are a large part of what makes Chainlink such a versatile decentralized oracle network. Contract developers are free to implement these adapters as needed, or they can choose from [existing adapters][16] on the Chainlink Market. If you are a smart contract developer looking to create an external adapter, Chainlink merely requires you to specify the JSON interfaces for the data request and the return data; between those two interfaces is where developers are free to create and manipulate the data to fit their use case. As an oracle node operator, to support the external adapter and handle the additional requests, you must [create a bridge][17] for it in your node user interface and add the adapter's bridge name to your supported tasks.
![Create a new bridge in Chainlink][18]
(ChainLink, ©2020)
```
{
  "initiators": [
    { "type": "runLog" }
  ],
  "tasks": [
    { "type": "randomNumber" },
    { "type": "copy",
      "params": {"copyPath": ["details", "current"]}},
    { "type": "multiply",
      "params": {"times": 100 }},
    { "type": "ethuint256" },
    { "type": "ethtx" }
  ]
}
```
You can access a full example of creating an external adapter on Chainlink's [building external adapters][19] page.
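As a rough illustration of that JSON interface (the exact field values here are hypothetical), the node's request to an external adapter and the adapter's response look something like this:
```
// Request the Chainlink node sends to the external adapter
{ "id": "1234", "data": { "from": "ETH", "to": "USD" } }

// Response the adapter returns for on-chain consumption
{ "jobRunID": "1234", "data": { "result": 354.45 }, "statusCode": 200 }
```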
Chainlink is striving to give blockchain and smart contract developers the tools to empower universal smart contracts with real-world data, exactly how they need it. Chainlink's design, incorporating direct calls to any API through default adapters and extensible external adapters, gives developers a flexible platform to create as they see fit, with any data they might need. This opens up smart contracts to a literal world of data and the new use cases this empowers.
### Start building with Chainlink
If you're a smart contract developer looking to increase your smart contracts' utility with external data, try out this Chainlink [example walkthrough][20] to deploy a universal smart contract that interacts with off-chain data.
Chainlink is open source under the [MIT License][21], so if you're developing a product that could benefit from Chainlink decentralized oracles or would like to assist in developing the Chainlink Network, visit the [developer documentation][22] or join the technical discussion on [Discord][23]. You can also learn more on Chainlink's [website][4], [Twitter][24], [Reddit][25], [YouTube][26], [Telegram][27], and [GitHub][28].
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/12/blockchain-smart-contracts
作者:[Gage Mondok][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/matt-coolidge
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cube_innovation_process_block_container.png?itok=vkPYmSRQ (cubes coming together to create a larger cube)
[2]: https://blog.chain.link/what-is-a-smart-contract-and-why-it-is-a-superior-form-of-digital-agreement/
[3]: https://blog.chain.link/what-is-the-blockchain-oracle-problem/
[4]: https://chain.link/
[5]: https://github.com/ethereum/solidity
[6]: https://docs.chain.link/docs/adapters
[7]: https://docs.chain.link/docs/adapters#httpget
[8]: https://github.com/smartcontractkit/chainlink/blob/develop/evm-contracts/src/v0.6/ChainlinkClient.sol
[9]: https://min-api.cryptocompare.com/data/price?fsym=ETH&tsyms=USD
[10]: https://docs.chain.link/docs/make-a-http-get-request
[11]: https://docs.chain.link/docs/existing-job-request
[13]: https://market.link/
[14]: https://docs.chain.link/docs/external-adapters
[15]: https://opensource.com/sites/default/files/chainlink-external-adapter.png (Chainlink External Adapters enable smart contracts to easily integrate with specialized APIs)
[16]: https://market.link/search/adapters
[17]: https://docs.chain.link/docs/node-operators#config
[18]: https://opensource.com/sites/default/files/uploads/chainlink_newbridge.png (Create a new bridge in Chainlink)
[19]: https://docs.chain.link/docs/developers
[20]: https://docs.chain.link/docs/example-walkthrough
[21]: https://github.com/smartcontractkit/chainlink/blob/develop/LICENSE
[22]: https://docs.chain.link/
[23]: https://discordapp.com/invite/aSK4zew
[24]: https://twitter.com/chainlink
[25]: https://www.reddit.com/r/Chainlink/
[26]: https://www.youtube.com/channel/UCnjkrlqaWEBSnKZQ71gdyFA
[27]: https://t.me/chainlinkofficial
[28]: https://github.com/smartcontractkit/chainlink

View File

@ -1,125 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 collaboration tips for using an open source alternative to Google Docs)
[#]: via: (https://opensource.com/article/20/12/onlyoffice-docs)
[#]: author: (Nadya Knyazeva https://opensource.com/users/hellonadya)
5 collaboration tips for using an open source alternative to Google Docs
======
Collaborative writing and editing is a breeze when you put these
ONLYOFFICE features to work.
![Filing cabinet for organization][1]
ONLYOFFICE Docs is a self-hosted open source alternative to Microsoft Office and Google Docs for collaborating on documents, spreadsheets, and presentations in real time.
The following are the five most important ways [ONLYOFFICE Docs][2] helps organize my collaborative work.
### 1\. Integrate with document storage
ONLYOFFICE Docs is highly flexible in how you can store documents. By default, you can use ONLYOFFICE Docs within an ONLYOFFICE Workspace. This provides a productivity solution for managing documents and projects. It's the most straightforward way to use ONLYOFFICE Docs because it's included; when you install one, you get the other.
However, the full ONLYOFFICE suite can be integrated with ownCloud, Nextcloud, and other popular sync and share platforms. Helpful [connectors][3] are available in your sharing platform's official app store or on GitHub.
Finally, since ONLYOFFICE is open source, web app developers are free to integrate ONLYOFFICE Docs into their applications using the [ONLYOFFICE API][4].
### 2\. Manage document permissions
In ONLYOFFICE Docs, you can differentiate what your teammates can do when they open shared documents. You can grant them permission to view, edit, or share files, or to perform specific actions, such as leaving comments, suggesting changes in review mode, or filling in designated fields. Differentiating document permissions can help structure and secure your collaboration.
![ONLYOFFICE sharing options][5]
(Nadya Knyazeva, [CC BY-SA 4.0][6])
The permissions you have available depend on your document management system. In ONLYOFFICE Workspace and ownCloud, you can share files with all the permissions listed above, plus you'll get an additional permission for spreadsheets (Custom Filter in ONLYOFFICE or Modify Filter in ownCloud). The filtering permission allows you to decide whether filters applied by one user should affect only that person or everyone. If you're integrating with Nextcloud or, for example, Seafile, you get fewer permission options.
If you are integrating the suite and want to add more permissions, your app must allow registering new sharing attributes (such as the ability to restrict downloading, printing, or copying document content to the clipboard), as described in [the API documentation][7].
### 3\. True collaboration
The collaborative work toolset is basically the same for all environments. You have comments to add notes, suggestions, or questions for people working on a document together. ONLYOFFICE has this, of course, but it strives to provide a few extra features whenever possible. For instance, in ONLYOFFICE Workspace, you can quickly add mentions by typing + or @ followed by a user's name to draw a specific person's attention to your comment.
![ONLYOFFICE comments][8]
(Nadya Knyazeva, [CC BY-SA 4.0][6])
There's also a chat feature to quickly discuss something with teammates without switching to a messaging app (be aware that the chat history clears when you close a document).
Track changes enables reviewing documents by suggesting changes. All the changes made in this mode are highlighted, and the owner and users with full editing access can accept or reject them or preview the document with all the changes accepted or rejected.
What's important about collaborative work in ONLYOFFICE Docs is that users working simultaneously on the same docs can set individual preferences (e.g., enable track changes or spell checking, display non-printing characters, zoom the doc in and out, and so on) without disturbing each other.
### 4\. Version control
Versioning is so important that entire industries have developed around the process. For developers, writing without Git-style revision control can be unsettling. For content creators, emailing revisions back and forth to one another gets messy and confusing.
ONLYOFFICE Docs allows viewing a document's version history in the editor. Changes and the author who made them are highlighted in different colors. This feature's availability is determined by the doc management system you use; version history is available for ONLYOFFICE Workspace, Nextcloud, and ownCloud integration.
![ONLYOFFICE version history in Nextcloud][9]
(Nadya Knyazeva, [CC BY-SA 4.0][6])
### 5\. Change real-time co-editing mode
There are two ways to co-edit a document in real-time in ONLYOFFICE Docs. They are called Fast and Strict modes, and they're available regardless of how you integrate ONLYOFFICE into your toolchain.
Fast mode allows you to see your co-authors' changes as they are typing. Your changes are also shown to others immediately.
In Strict mode, you lock the document you are working on, and no one can see what you are typing until you click Save. You can see which parts of the document are locked by co-authors, but you cannot see what they are doing until they save.
When collaborating on a document in one of these modes, the Ctrl+Z (undo) command affects only your work, so your co-authors' actions are unaffected.
### Bonus: Security options
Depending on your environment, you'll find different options to protect collaboration on documents.
ONLYOFFICE Workspace offers the standard security toolset, with HTTPS, backups, two-factor authentication, secure sign-on, and an option to encrypt data at rest. One feature that, according to ONLYOFFICE, has no counterpart is called _Private Rooms_.
A Private Room is a folder that can be accessed only through the desktop app. Each office file created and stored there is encrypted using the AES-256 algorithm. Everything you type—every letter, every number, every symbol—is encrypted immediately, even if you're collaborating in real time.
![ONLYOFFICE Private Rooms][10]
(Nadya Knyazeva, [CC BY-SA 4.0][6])
ONLYOFFICE Docs also uses JSON Web Tokens (JWT) for security. The editors request an encrypted signature to check who can access the document and what they can do with it. Currently, JWT is implemented for ONLYOFFICE Workspace and for Nextcloud, ownCloud, Alfresco, Confluence, HumHub, and Nuxeo integrations, in addition to their built-in security tools.
In a Nextcloud integration, you can also insert watermarks to protect sensitive docs. Watermarks are enabled by an admin and cannot be removed from a document.
![ONLYOFFICE Watermark in Nextcloud][11]
(Nadya Knyazeva, [CC BY-SA 4.0][6])
### So many features
There are many more features in ONLYOFFICE than will fit into one article. If you're looking for an open source alternative to Microsoft or Google collaboration tools, ONLYOFFICE is the most powerful option I know of. Give it a try and let me know in the comments what you think of ONLYOFFICE Docs as a collaboration tool.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/12/onlyoffice-docs
Author: [Nadya Knyazeva][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/hellonadya
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/files_documents_organize_letter.png?itok=GTtiiabr (Filing cabinet for organization)
[2]: https://www.onlyoffice.com/office-suite.aspx
[3]: https://www.onlyoffice.com/all-connectors.aspx
[4]: https://api.onlyoffice.com/editors/basic
[5]: https://opensource.com/sites/default/files/uploads/1._sharing_window.png (ONLYOFFICE sharing options)
[6]: https://creativecommons.org/licenses/by-sa/4.0/
[7]: https://api.onlyoffice.com/editors/config/document/permissions
[8]: https://opensource.com/sites/default/files/uploads/2._comments.png (ONLYOFFICE comments)
[9]: https://opensource.com/sites/default/files/uploads/3._version_history_in_nextcloud.png (ONLYOFFICE version history in Nextcloud)
[10]: https://opensource.com/sites/default/files/uploads/4_privateroom.png (ONLYOFFICE Private Rooms)
[11]: https://opensource.com/sites/default/files/uploads/5._watermark.png (ONLYOFFICE Watermark in Nextcloud)

View File

@ -1,94 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Customize the Task Switcher in KDE Plasma)
[#]: via: (https://itsfoss.com/customize-task-switcher-kde/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
How to Customize the Task Switcher in KDE Plasma
======
It is often the little interactions with a [desktop environment][1] that make up a good user experience, and the task switcher is something most users fiddle with.
I've recently written about [customizing the task switching experience on GNOME][2], but what about the most customizable desktop environment, KDE?
Fret not, it isn't rocket science to tweak the task switcher in KDE. In this article, I'm going to show you how to change the task switcher experience on any KDE-powered Linux system.
### Customize the Task Switcher in KDE: Here's How It's Done
If you prefer video instructions, I have also made a quick video for you:
Here are the text instructions:
![Kde Task Switcher Default Style][3]
To get started, you need to head to the System Settings in KDE as shown in the screenshot below.
![][4]
Next, you have to navigate your way to the “**Window Management**” option as shown in the image below.
![][5]
Once you click on the option, you will be greeted with more options. Here, you need to click on “**Task Switcher**” because that is what we are going to customize; you can explore the other options if you are curious.
![][6]
As you can observe in the screenshot above, my settings may look different from yours:
  * I have disabled the option to “**Show selected window**”
  * And I have set the visual style of the task switcher to “**Flip Switch**”
Here's how it looks with the Flip Switch style when you press **Alt+Tab**:
![][7]
In case you cannot find those options, here is a closer look at the drop-down menu for changing the visual style of the Task Switcher, with “**Show selected window**” disabled (that's what I prefer).
![][8]
As you can see in the image above, you get to change the sort order of the windows along with a couple more visual styles for the task switcher.
In addition to this setting, you can also look for a variety of task switcher themes/designs online by clicking the “**Get New Task Switchers**” button in the bottom-right corner of the window.
![][9]
You will also find several other options to change the key binding for accessing the task switcher, if that is what you need.
#### Reset to default in a click
If you want to revert the settings and go back to the defaults, you will find a “**Defaults**” / “**Reset**” button; click it to reset any changes that you made.
![][10]
Of course, feel free to explore any other customization options that you come across in the System Settings to personalize your KDE experience.
I'd like to cover a detailed customization guide for the KDE desktop in the near future. Would you find that interesting? Let me know your thoughts in the comments below!
--------------------------------------------------------------------------------
via: https://itsfoss.com/customize-task-switcher-kde/
Author: [Ankush Das][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/what-is-desktop-environment/
[2]: https://itsfoss.com/customize-gnome-task-switcher/
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/11/kde-task-switcher-default.jpg?resize=800%2C396&ssl=1
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/11/kde-system-settings.jpg?resize=761%2C600&ssl=1
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/11/window-management-kde.jpg?resize=800%2C568&ssl=1
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/11/kde-settings-task-switcher.jpg?resize=800%2C569&ssl=1
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/11/kde-flip-switch.jpg?resize=800%2C484&ssl=1
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/11/kde-task-switcher-flip.jpg?resize=800%2C568&ssl=1
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/11/kde-task-switcher-online.png?resize=800%2C572&ssl=1
[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/12/kde-default-reset-task-switcher.jpg?resize=800%2C568&ssl=1

View File

@ -1,128 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Install Mesa Drivers on Ubuntu [Latest and Stable])
[#]: via: (https://itsfoss.com/install-mesa-ubuntu/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
How to Install Mesa Drivers on Ubuntu [Latest and Stable]
======
_**This quick tutorial shows the steps to get a newer version of the Mesa drivers on Ubuntu, be it the latest stable release or a cutting-edge development release.**_
### What is Mesa?
[Mesa][1] is not itself a graphics card, nor a proprietary driver like those from Nvidia or AMD. Instead, it provides open source software implementations of [OpenGL][2], [Vulkan][3], and some other graphics API specifications for Intel and AMD graphics hardware. With Mesa, you can play high-end games and use applications that require such graphics libraries.
More information on Mesa can be found in [this article][4].
### How to install Mesa on Ubuntu?
![][5]
Mesa comes preinstalled on Ubuntu with the open source graphics drivers for Radeon, Intel, and (sometimes) Nvidia hardware, though it probably won't be the latest Mesa version.
You can check if your system uses Mesa and the installed versions using this command:
```
glxinfo | grep Mesa
```
If for some reason (like playing games) you want to install a newer version of Mesa, this tutorial will help you with that. Since you'll be using a PPA, I highly recommend reading my [in-depth guide on PPA][6].
Attention!
Installing new Mesa graphics drivers may also need a newer Linux kernel. It is a good idea to [enable the HWE kernel on Ubuntu][7] to reduce the chances of conflict with the kernel. The HWE kernel gives you the latest stable kernel used by Ubuntu on an older LTS release.
### Install the latest stable version of Mesa driver in Ubuntu [Latest point release]
The [Kisak-mesa PPA][8] provides the latest point release of Mesa. You can use it by entering the following commands one by one in the terminal:
```
sudo add-apt-repository ppa:kisak/kisak-mesa
sudo apt update
sudo apt upgrade   # there is no single "mesa" package to install; upgrading pulls the newer Mesa libraries from the PPA
```
It will give you the latest Mesa point release.
#### Remove it and go back to original Mesa driver
If you are facing issues and do not want to use the newer version of Mesa, you can revert to the original version.
Install PPA Purge tool first:
```
sudo apt install ppa-purge
```
And then use it to remove the PPA as well as the Mesa package version installed by this PPA.
```
sudo ppa-purge ppa:kisak/kisak-mesa
```
### Install the latest Mesa graphics drivers in Ubuntu [Bleeding edge]
If you want the latest Mesa drivers as they are being developed, this is what you need.
There is an awesome PPA that provides open source graphics driver packages for Radeon, Intel, and Nvidia hardware.
The best thing here is that all driver packages are automatically built twice a day whenever there is an upstream change.
If you want the absolute latest Mesa drivers on Ubuntu and do not want to take the trouble of installing it from the source code, use this [PPA by Oibaf][9].
The PPA is available for Ubuntu 20.04, 20.10, and 21.04 at the time of writing this article. It is no longer updated for Ubuntu 18.04 LTS.
Open the terminal and use the following commands one by one:
```
sudo add-apt-repository ppa:oibaf/graphics-drivers
sudo apt update
sudo apt upgrade   # as above, upgrading the installed packages picks up the PPA's Mesa builds
```
This will give you the latest Mesa drivers.
#### Remove it and go back to original Mesa driver
You can remove the PPA and the newer Mesa drivers it installed using the ppa-purge tool.
Install it first:
```
sudo apt-get install ppa-purge
```
Now use it to disable the PPA you added and revert the Mesa packages to the versions provided officially by Ubuntu.
```
sudo ppa-purge ppa:oibaf/graphics-drivers
```
I hope this quick tutorial was helpful in getting a newer version of Mesa on Ubuntu. If you have questions or suggestions, please use the comment section.
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-mesa-ubuntu/
Author: [Abhishek Prakash][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://mesa3d.org
[2]: https://www.opengl.org
[3]: https://www.khronos.org/vulkan/
[4]: https://www.gamingonlinux.com/articles/an-explanation-of-what-mesa-is-and-what-graphics-cards-use-it.9244
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/12/mesa-ubuntu.png?resize=800%2C450&ssl=1
[6]: https://itsfoss.com/ppa-guide/
[7]: https://itsfoss.com/ubuntu-hwe-kernel/
[8]: https://launchpad.net/~kisak/+archive/ubuntu/kisak-mesa
[9]: https://launchpad.net/~oibaf/+archive/ubuntu/graphics-drivers

View File

@ -1,246 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Practice coding in Java by writing a game)
[#]: via: (https://opensource.com/article/20/12/learn-java)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Practice coding in Java by writing a game
======
Writing simple games is a fun way to learn a new programming language.
Put that principle to work to get started with Java.
![Learning and studying technology is the key to success][1]
My article about [learning different programming languages][2] lists five things you need to understand when starting a new language. An important part of learning a language, of course, is knowing what you intend to do with it.
I've found that simple games are both fun to write and useful in exploring a language's abilities. In this article, I demonstrate how to create a simple guessing game in Java.
### Install Java
To do this exercise, you must have Java installed. If you don't have it, check out these links to install Java on [Linux][3], [macOS, or Windows][4].
After installing it, run this Java command in a terminal to confirm the version you installed:
```
$ java -version
```
### Guess the number
This "guess the number" program exercises several concepts in programming languages: how to assign values to variables, how to write statements, and how to perform conditional evaluation and loops. It's a great practical experiment for learning a new programming language.
Here's my Java implementation:
```
package com.example.guess;

import java.util.Random;
import java.util.Scanner;

class Main {
    private static final Random r = new Random();
    private static final int NUMBER = r.nextInt(100) + 1;
    private static int guess = 0;

    public static void main(String[] args) {
        Scanner player = new Scanner(System.in);
        System.out.println("number is " + String.valueOf(NUMBER)); // DEBUG
        while (guess != NUMBER) {
            // prompt player for guess
            System.out.println("Guess a number between 1 and 100");
            guess = player.nextInt();
            if (guess > NUMBER) {
                System.out.println("Too high");
            } else if (guess < NUMBER) {
                System.out.println("Too low");
            } else {
                System.out.println("That's right!");
                System.exit(0);
            }
        }
    }
}
```
That's about 20 lines of code, excluding whitespace and trailing braces. Structurally, however, there's a lot going on, which I'll break down here.
#### Package declaration
The first line, `package com.example.guess`, is not strictly necessary in a simple one-file application like this, but it's a good habit to get into. Java is a big language, and new Java is written every day, so every Java project needs to have a unique identifier to help programmers tell one library from another.
When writing Java code, you should declare a `package` it belongs to. The format for this is usually a reverse domain name, such as `com.opensource.guess` or `org.slf4j.Logger`. As usual for Java, this line is terminated by a semicolon.
#### Import statements
The next lines of the code are import statements, which tell the Java compiler what libraries to load when building the executable application. The libraries I use here are distributed along with OpenJDK, so you don't need to download them yourself. Because they're not strictly a part of the core language, you do need to list them for the compiler.
The Random library provides access to pseudo-random number generation, and the Scanner library lets you read user input in a terminal.
#### Java class
The next part creates a Java class. Java is an object-oriented programming language, so its quintessential construct is a _class_. There are some very specific code ideas suggested by a class, and if you're new to programming, you'll pick up on them with practice. For now, think of a class as a box into which you place variables and code instructions, almost as if you were building a machine. The parts you place into the class are unique to that class, and because they're contained in a box, they can't be seen by other classes. More importantly, since there is only one class in this sample game, a class is self-sufficient: It contains everything it needs to perform its particular task. In this case, its task is the whole game, but in larger applications, classes often work together in a sort of daisy-chain to produce complex jobs.
In Java, each file generally contains one class. The class in this file is called `Main` to signify that it's the entry-point for this application. In a single-file application such as this, the significance of a main class is difficult to appreciate, but in a larger Java project with dozens of classes and source files, marking one `Main` is helpful. And anyway, it's easy to package up an application for distribution with a main class defined.
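To make the "box" idea concrete, here is a minimal two-class sketch (the class names are mine, invented for illustration, not part of the game) showing how one class can use another without seeing its internals:

```
// Illustrative sketch only: two classes cooperating, as in the "daisy-chain" described above.
class Main {
    public static void main(String[] args) {
        Greeter g = new Greeter();           // Main builds a Greeter and uses it as a black box
        System.out.println(g.greet("Java")); // prints "Hello, Java"
    }
}

class Greeter {
    // This method is the only part of Greeter that Main needs to know about.
    String greet(String name) {
        return "Hello, " + name;
    }
}
```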
#### Java fields
In Java, as in C and C++, you must declare variables before using them. You can define "fields" at the top of a Java class. The word "field" is just a fancy term for a variable, but it specifically refers to a variable assigned to a class rather than one embedded somewhere in a function.
This game creates three fields: Two to generate a pseudo-random number, and one to establish an initial (and always incorrect) guess. The long string of keywords (`private static final`) leading up to each field may look confusing (especially when starting out with Java), but using a good IDE like Netbeans or Eclipse can help you navigate the best choice.
It's important to understand them, too. A _private_ field is one that's available only to its own class. If another class tries to access a private field, the field may as well not exist. In a one-class application such as this one, it makes sense to use private fields.
A _static_ field belongs to the class itself and not to a class instance. This doesn't make much difference in a small demo app like this because only one instance of the class exists. In a larger application, you may have a reason to define or redefine a variable each time a class instance is spawned.
A _final_ field cannot have its value changed. This application demonstrates this perfectly: The random number never changes during the game (a moving target wouldn't be very fair), while the player's guess _must_ change or the game wouldn't be winnable. For that reason, the random number established at the beginning of the game is final, but the guess is not.
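If these modifiers are new to you, here is a small standalone sketch (the class and field names are invented for the example, not from the game) showing what each keyword permits and forbids:

```
// Standalone illustration of Java field modifiers; names here are hypothetical.
class FieldDemo {
    private static final int LIMIT = 100; // one shared value, fixed forever
    private static int classCounter = 0;  // one shared value, free to change
    private int instanceValue = 0;        // every FieldDemo object gets its own copy

    void bump() {
        classCounter++;   // fine: static but not final
        instanceValue++;  // fine: ordinary instance field
        // LIMIT++;       // compile error: a final field cannot be reassigned
    }
}
```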
#### Pseudo-random numbers
Two fields create the random number that serves as the player's target. The first creates an instance of the `Random` class. This is essentially a pseudo-random generator from which you can draw a fairly unpredictable number. To do this, list the class you're invoking followed by a variable name of your choice, which you set to a new instance of the class: `Random r = new Random();`. Like other Java statements, this terminates with a semicolon.
To draw a number, you create another variable using the `nextInt()` method. The syntax looks a little different, but it's similar: you list the kind of variable you're creating, provide a name of your choice, and set it to the result of some action: `int NUMBER = r.nextInt(100) + 1;`. You can (and should) look at the documentation for specific methods, like `nextInt()`, to learn how they work. In this case, the integer drawn from `r` is strictly _below_ 100 (that is, it ranges from 0 to 99). Adding 1 to the result ensures that the number is never 0 and that the effective maximum is 100.
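If you want to convince yourself of that arithmetic, a throwaway test like this one (my addition, not part of the game) draws a large sample and reports the smallest and largest values seen:

```
import java.util.Random;

// Throwaway range check: nextInt(100) yields 0..99, so adding 1 yields 1..100.
class RangeCheck {
    public static void main(String[] args) {
        Random r = new Random();
        int min = Integer.MAX_VALUE;
        int max = Integer.MIN_VALUE;
        for (int i = 0; i < 1_000_000; i++) {
            int n = r.nextInt(100) + 1;
            min = Math.min(min, n);
            max = Math.max(max, n);
        }
        // With a million draws, this prints min=1 max=100 almost surely.
        System.out.println("min=" + min + " max=" + max);
    }
}
```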
Obviously, the decision to disqualify any number outside of the 1 to 100 range is a purely arbitrary design decision, but it's important to know these constraints before sitting down to program. Without them, it's difficult to know what you're coding toward. If possible, work with a person whose job it is to define the application you're coding. If you have no one to work with, make sure to list your targets first—and only then put on your "coder hat."
### Main method
By default, Java looks for a `main` method (or "function," as they're called in many other languages) to run in a class. Not all classes need a main method, but this demo app only has one method, so it may as well be the main one. Methods, like fields, can be made public or private and static or non-static, but the main method must be public and static for the Java compiler to recognize and utilize it.
### Application logic
For this application to work as a game, it must continue to run _while_ the player takes guesses at a secret pseudo-random number. Were the application to stop after each guess, the player would only have one guess and would very rarely win. It's also part of the game's design that the computer provides hints to guide the player's next guess.
A `while` loop with embedded `if` statements achieves this design target. A `while` loop inherently continues to run until a specific condition is met. (In this case, the `guess` variable must equal the `NUMBER` variable.) Each guess can be compared to the target `NUMBER` to prompt helpful hints.
### Syntax
The main method starts by creating a new `Scanner` instance. This follows the same pattern as the `Random` instance used as a pseudo-random generator: you cite the class you want to use, provide a variable name of your choice (I use `player` to represent the person entering guesses), and then set that variable to a new instance created with the class's constructor. Again, if you were coding this on your own, you'd look at the class's documentation to get the syntax when using it.
This sample code includes a debugging statement that reveals the target `NUMBER`. That makes the game moot, but it's useful to prove to yourself that it's working correctly. Even this small debugging statement reveals some important Java tips: `System.out.println` is a print statement, and the `valueOf()` method converts the integer `NUMBER` to a string to print it as part of a sentence rather than an element of math.
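As an aside, the explicit `String.valueOf()` call is interchangeable here with Java's automatic conversion during string concatenation; this standalone snippet (not from the article's code, with an arbitrary value for demonstration) prints the same line twice:

```
// Both statements produce identical output; + concatenation converts the int itself.
class PrintDemo {
    public static void main(String[] args) {
        int number = 42; // hypothetical value, just for the demo
        System.out.println("number is " + String.valueOf(number));
        System.out.println("number is " + number);
    }
}
```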
The `while` statement begins next, with the sole condition that the player's `guess` is not equal to the target `NUMBER`. This is an infinite loop that can end only when it's _false_ that `guess` does _not_ equal `NUMBER`.
In this loop, the player is prompted for a number. The Scanner object, called `player`, takes any valid integer entered by the player and puts its value into the `guess` field.
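One caveat the walkthrough glosses over: `nextInt()` throws an `InputMismatchException` if the player types something that is not an integer. A possible hardening (my addition, not in the original game) guards against that with `hasNextInt()`:

```
import java.util.Scanner;

// Sketch of defensive input reading; a possible hardening, not part of the original game.
class SafeRead {
    public static void main(String[] args) {
        Scanner player = new Scanner(System.in);
        System.out.println("Guess a number between 1 and 100");
        while (!player.hasNextInt()) { // block until the next token is a valid int
            player.next();             // discard the bad token
            System.out.println("Please enter a whole number");
        }
        int guess = player.nextInt();
        System.out.println("You guessed " + guess);
    }
}
```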
The `if` statement compares `guess` to `NUMBER` and responds with `System.out.println` print statements to provide feedback to the human player.
If `guess` is neither greater than nor less than `NUMBER`, then it must be equal to it. At this point, the game prints a congratulatory message and exits. As usual with [POSIX][8] application design, this game exits with a 0 status to indicate success.
### Run the game
To test your game, save the sample code as `Guess.java` and use the Java command to run it:
```
$ java ./Guess.java
number is 38
Guess a number between 1 and 100
1
Too low
Guess a number between 1 and 100
39
Too high
Guess a number between 1 and 100
38
That's right!
$
```
Just as expected!
### Package the game
While it isn't as impressive on a single-file application like this as it is on a complex project, Java makes packaging very easy. For the best results, structure your project directory to include a place for your source code, a place for your compiled class, and a manifest file. In practice, this is somewhat flexible, and using an IDE does most of the work for you. It's useful to do it by hand once in a while, though.
Create a project folder if you haven't already. Then create one directory called `src` to hold your source files. Save the sample code in this article as `src/Guess.java`:
```
$ mkdir src
$ mv Guess.java src/Guess.java
```
Now, create a directory tree that mirrors the name of your Java package, which appears at the very top of your code:
```
$ head -n1 src/Guess.java
package com.example.guess;
$ mkdir -p com/example/guess
```
Create a new file called `Manifest.txt` with just one line of text in it:
```
$ echo "Manifest-Version: 1.0" > Manifest.txt
```
Next, compile your game into a Java class. This produces a file called `Main.class` in `com/example/guess`:
```
$ javac -d . src/Guess.java
$ ls com/example/guess/
Main.class
```
You're all set to package your application into a JAR (Java archive). The `jar` command is a lot like the [tar][9] command, so many of the options may look familiar:
```
$ jar cfme Guess.jar \
Manifest.txt \
com.example.guess.Main \
com/example/guess/Main.class
```
From the syntax of the command, you may surmise that it creates a new JAR file called `Guess.jar` with its required manifest data located in `Manifest.txt`. Its entry point is the fully qualified main class, `com.example.guess.Main`, and the archive contains the compiled class file `com/example/guess/Main.class`.
You can view the contents of the JAR file:
```
$ jar tvf Guess.jar
     0 Wed Nov 25 10:33:10 NZDT 2020 META-INF/
    96 Wed Nov 25 10:33:10 NZDT 2020 META-INF/MANIFEST.MF
  1572 Wed Nov 25 09:42:08 NZDT 2020 com/example/guess/Main.class
```
And you can even extract it with the `xvf` options.
Run your JAR file with the `java` command:
```
$ java -jar Guess.jar
```
Copy your JAR file from Linux to a macOS or Windows computer and try running it. Without recompiling, it runs as expected. This may seem normal if your basis of comparison is, say, a simple Python script that happens to run anywhere, but imagine a complex project with several multimedia libraries and other dependencies. With Java, those dependencies are packaged along with your application, and it _all_ runs on _any_ platform. Welcome to the wonderful world of Java!
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/12/learn-java
Author: [Seth Kenlon][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/studying-books-java-couch-education.png?itok=C9gasCXr (Learning and studying technology is the key to success)
[2]: https://opensource.com/article/20/10/learn-any-programming-language
[3]: https://opensource.com/article/19/11/install-java-linux
[4]: http://adoptopenjdk.org
[5]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+random
[6]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
[7]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+system
[8]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[9]: https://opensource.com/article/17/7/how-unzip-targz-file