Merge remote-tracking branch 'LCTT/master'

This commit is contained in:
Xingyu.Wang 2019-03-06 20:07:05 +08:00
commit 889a82d63c
18 changed files with 2424 additions and 23 deletions


@@ -1,60 +1,56 @@
 [#]: collector: "lujun9972"
 [#]: translator: "zero-mk"
-[#]: reviewer: " "
-[#]: publisher: " "
-[#]: url: " "
+[#]: reviewer: "wxy"
+[#]: publisher: "wxy"
+[#]: url: "https://linux.cn/article-10593-1.html"
 [#]: subject: "Bash-Insulter : A Script That Insults An User When Typing A Wrong Command"
 [#]: via: "https://www.2daygeek.com/bash-insulter-insults-the-user-when-typing-wrong-command/"
 [#]: author: "Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/"
-Bash-Insulter : 一个在输入错误命令时侮辱用户的脚本
+Bash-Insulter:一个在输入错误命令时嘲讽用户的脚本
 ======
-这是一个非常有趣的脚本,每当用户在终端输入错误的命令时,它都会侮辱用户。
+这是一个非常有趣的脚本,每当用户在终端输入错误的命令时,它都会嘲讽用户。
-它让你在处理一些问题时感到快乐
-有的人在受到终端侮辱的时候感到不愉快。但是,当我受到终端的侮辱时,我真的很开心。
+它让你在解决一些问题时会感到快乐。有的人在受到终端嘲讽的时候感到不愉快。但是,当我受到终端的批评时,我真的很开心。
-这是一个有趣的CLI(译者注:command-line interface)工具,在你弄错的时候,会用随机短语侮辱你。
-此外,它允许您添加自己的短语。
+这是一个有趣的 CLI 工具,在你弄错的时候,会用随机短语嘲讽你。此外,它允许你添加自己的短语。
 ### 如何在 Linux 上安装 Bash-Insulter?
-在安装 Bash-Insulter 之前,请确保的系统上安装了 git。如果没有请使用以下命令安装它。
+在安装 Bash-Insulter 之前,请确保你的系统上安装了 git。如果没有请使用以下命令安装它。
-对于 **`Fedora`** 系统, 请使用 **[DNF 命令][1]** 安装 git。
+对于 Fedora 系统, 请使用 [DNF 命令][1] 安装 git。
 ```
 $ sudo dnf install git
 ```
-对于 **`Debian/Ubuntu`** 系统,,请使用 **[APT-GET 命令][2]** 或者 **[APT 命令][3]** 安装 git。
+对于 Debian/Ubuntu 系统,请使用 [APT-GET 命令][2] 或者 [APT 命令][3] 安装 git。
 ```
 $ sudo apt install git
 ```
-对于基于 **`Arch Linux`** 的系统, 请使用 **[Pacman 命令][4]** 安装 git。
+对于基于 Arch Linux 的系统,请使用 [Pacman 命令][4] 安装 git。
 ```
 $ sudo pacman -S git
 ```
-对于 **`RHEL/CentOS`** systems, 请使用 **[YUM 命令][5]** 安装 git。
+对于 RHEL/CentOS 系统,请使用 [YUM 命令][5] 安装 git。
 ```
 $ sudo yum install git
 ```
-对于 **`openSUSE Leap`** system, 请使用 **[Zypper 命令][6]** 安装 git。
+对于 openSUSE Leap 系统,请使用 [Zypper 命令][6] 安装 git。
 ```
 $ sudo zypper install git
 ```
-我们可以通过克隆(clone)开发人员的github存储库轻松地安装它。
+我们可以通过<ruby>克隆<rt>clone</rt></ruby>开发人员的 GitHub 存储库轻松地安装它。
 首先克隆 Bash-insulter 存储库。
@@ -85,7 +81,7 @@ fi
 $ sudo source /etc/bash.bashrc
 ```
-你想测试一下安装是否生效吗?你可以试试在终端上输入一些错误的命令,看看它如何侮辱你。
+你想测试一下安装是否生效吗?你可以试试在终端上输入一些错误的命令,看看它如何嘲讽你。
 ```
 $ unam -a
@@ -95,9 +91,7 @@ $ pin 2daygeek.com
 ![][8]
-如果您想附加您自己的短语,则导航到以下文件并更新它,
-您可以在 `messages` 部分中添加短语。
+如果你想附加你自己的短语,则导航到以下文件并更新它。你可以在 `messages` 部分中添加短语。
 ```
 # vi /etc/bash.command-not-found
@@ -178,7 +172,7 @@ via: https://www.2daygeek.com/bash-insulter-insults-the-user-when-typing-wrong-c
 作者:[Magesh Maruthamuthu][a]
 选题:[lujun9972][b]
 译者:[zero-mk](https://github.com/zero-mk)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@@ -0,0 +1,79 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Developer happiness: What you need to know)
[#]: via: (https://opensource.com/article/19/2/developer-happiness)
[#]: author: (Bart Copeland https://opensource.com/users/bartcopeland)
Developer happiness: What you need to know
======
Developers need the tools and the freedom to code quickly, without getting bogged down by compliance and security.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_happy_sad_developer_programming.png?itok=72nkfSQ_)
A person needs the right tools for the job. There's nothing as frustrating as getting halfway through a car repair, for instance, only to discover you don't have the specialized tool you need to complete the job. The same concept applies to developers: you need the tools to do what you are best at, without disrupting your workflow with compliance and security needs, so you can produce code faster.
Over half—51%, to be specific—of developers spend only one to four hours each day programming, according to ActiveState's recent [Developer Survey 2018: Open Source Runtime Pains][1]. In other words, the majority of developers spend less than half of their time coding. According to the survey, 50% of developers say security is one of their biggest concerns, but 67% of developers choose not to add a new language when coding because of the difficulties related to corporate policies.
The result is that developers have to devote time to non-coding activities, like retrofitting software for security and compliance criteria that are checked only after the software has been built. And they won't choose the best tool or language for the job because of corporate policies. Their satisfaction goes down and risk goes up.
So, developers aren't able to devote time to high-value work. This creates additional business risk because their time-to-market is slowed, and the organization increases tech debt by not empowering developers to decide on "the best" tech, unencumbered by corporate policy drag.
### Baking in security and compliance workflows
How can we solve this issue? One way is to integrate security and compliance workflows into the software development process in four easy steps:
#### 1. Gather your forces
Get support from everyone involved. This is an often-forgotten but critical first step. Make sure to consider a wide range of stakeholders, including:
* DevOps
* Developers
* InfoSec
* Legal/compliance
* IT security
Stakeholders want to understand the business benefits, so make a solid case for eliminating the security and compliance checkpoints after software builds. You can consider any (or all) of the following in building your business case: time savings, opportunity cost, and developer productivity. By integrating security and compliance workflows into the development process, you also avoid retrofitting of languages.
#### 2. Find trustworthy sources
Next, choose the trusted sources that can be used, along with their license and security requirements. Consider including information such as:
* Restrictions on usage based on environment or application type and version controls per language
* Which open source components are allowable, e.g., specific packages
* Which licenses can be used in which types of environments (e.g., research vs. production)
* The definition of security levels, acceptable vulnerability risk levels, what risk levels trigger an action, what that action would be, and who would be responsible for its implementation
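A policy like the one outlined above can also be enforced mechanically rather than by after-the-fact review. The sketch below is a hypothetical illustration (the allow-list and package names are invented, not any vendor's tooling) of rejecting a build that requests a package outside the approved set:

```shell
allowed="requests numpy pandas"    # packages approved by InfoSec/legal (invented)
requested="numpy leftpad"          # packages a build asks to install (invented)
status=0
report=""
for pkg in $requested; do
  # membership test: pad both strings with spaces so whole words match
  case " $allowed " in
    *" $pkg "*) report="$report OK:$pkg" ;;
    *)          report="$report BLOCKED:$pkg"; status=1 ;;
  esac
done
echo "$report"   # → " OK:numpy BLOCKED:leftpad"
# a non-zero $status would fail the CI job here, before unvetted code ships
```

In practice the allow-list would live in version control next to the policy itself, and the check would run as an early CI step so violations surface long before release.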
#### 3. Incorporate security and compliance from day one
The upshot of incorporating security and compliance workflows is that it ultimately bakes security and compliance into the first line of code. It eliminates the drag of corporate policy because you're coding to spec versus having to fix things after the fact. But to do this, consider mechanisms for automatically scanning code as it's being built, along with using agentless monitoring of your runtime code. You're freeing up your time, and you'll also be able to programmatically enforce policies to ensure compliance across your entire organization.
New vulnerabilities arise, and new patches and versions become available. Consequently, security and compliance need to be considered when deploying code into production and also when running code. You need to know what, if any, code is at risk and where that code is running. So, the process for deploying and running code should include monitoring, reporting, and updating code in production.
By integrating security and compliance into your software development process from the start, you can also benefit by tracking where your code is running once deployed and be alerted of new threats as they arise. You will be able to track when your applications were vulnerable and respond with automatic enforcement of your software policies.
If your software development process has security and compliance workflows baked in, you will improve your productivity. And you'll be able to measure value through increased time spent coding; gains in security and stability; and cost- and time-savings in maintenance and discovery of security and compliance threats.
### Happiness through integration
If you don't develop and update software, your organization can't go forward. Developers are a linchpin in the success of your company, which means they need the tools and the freedom to code quickly. You can't let compliance and security needs—though they are critical—bog you down. Developers clearly worry about security, so the happy medium is to "shift left" and integrate security and compliance workflows from the start. You'll get more done, get it right the first time, and spend far less time retrofitting code.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/2/developer-happiness
作者:[Bart Copeland][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/bartcopeland
[b]: https://github.com/lujun9972
[1]: https://www.activestate.com/company/press/press-releases/activestate-developer-survey-examines-open-source-challenges/


@@ -0,0 +1,76 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (No! Ubuntu is NOT Replacing Apt with Snap)
[#]: via: (https://itsfoss.com/ubuntu-snap-replaces-apt-blueprint/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
No! Ubuntu is NOT Replacing Apt with Snap
======
Stop believing the rumors that Ubuntu is planning to replace Apt with Snap in the [Ubuntu 19.04 release][1]. These are only rumors.
![Snap replacing apt rumors][2]
Don't get what I am talking about? Let me give you some context.
There is a blueprint on Ubuntu's Launchpad website, titled "Replace APT with snap as default package manager". It talks about replacing Apt (the package manager at the heart of Debian) with Snap (a new packaging system by Ubuntu).
> Thanks to Snap, the need for APT is disappearing, fast… why don't we use snap at the system level?
The post further says “Imagine, for example, being able to run “sudo snap install cosmic” to upgrade to the current release, “sudo snap install beta disco” (in March) to upgrade to a beta release, or, for that matter, “sudo snap install edge disco” to upgrade to a pre-beta release. It would make the whole process much easier, and updates could simply be delivered as updates to the corresponding snap, which could then just be pushed to the repositories and there it is. This way, instead of having a separate release updater, it would be possible to A, run all system updates completely and silently in the background to avoid nagging the user (a la Chrome OS), and B, offer release upgrades in the GNOME software store, Mac-style, as banners, so the user can install them easily. It would make the user experience both more consistent and even more user-friendly than it currently is.”
It might sound good and promising, and if you take a look at [this link][3], you might even start believing the rumor. Why? Because at the bottom of the blueprint information, it lists Ubuntu founder Mark Shuttleworth as the approver.
![Apt being replaced with Snap blueprint rumor][4]Mark Shuttleworth's name adds to the confusion
The rumor got fanned when the Switch to Linux YouTube channel covered it. You can watch the video from around 11:30.
<https://youtu.be/Xy7v5tdfSZM>
When this news was brought to my attention, I reached out to Alan Pope of Canonical and asked him if he or his colleagues at Canonical (Ubuntu's parent company) could confirm it.
Alan clarified that the so-called blueprint was not associated with the official Ubuntu team. It was created as a proposal by a community member not affiliated with Ubuntu.
> That's not anything official. Some random community person made it. Anyone can write a blueprint.
>
> Alan Pope, Canonical
Alan further elaborated that anyone can create such blueprints and tag Mark Shuttleworth or other Ubuntu members in it. Just because Mark's name was listed as the approver doesn't mean he had approved the idea.
Canonical has no such plans to replace Apt with Snap. It's not as simple as the blueprint in question suggests.
After talking with Alan, I decided not to write about this topic, because I didn't want to fan baseless rumors and confuse people.
Unfortunately, the "replace Apt with Snap" blueprint is still being shared on various Ubuntu and Linux related groups and forums. Alan had to publicly dismiss these rumors in a series of tweets:
> Seen this [#Ubuntu][5] blueprint being shared around the internet. It's not official, not a thing we're doing. Just because someone made a blueprint, doesn't make it fact. <https://t.co/5aUYlT2no5>
>
> — Alan Pope 🇪🇺🇬🇧 (@popey) [February 23, 2019][6]
I don't want you, the It's FOSS reader, to fall for such silly rumors, so I quickly penned this article.
If you come across an "Apt being replaced with Snap" discussion, you may tell people that it's not true and provide them this link as a reference.
--------------------------------------------------------------------------------
via: https://itsfoss.com/ubuntu-snap-replaces-apt-blueprint/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/ubuntu-19-04-release-features/
[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/snap-replacing-apt.png?resize=800%2C450&ssl=1
[3]: https://blueprints.launchpad.net/ubuntu/+spec/package-management-default-snap
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/apt-snap-blueprint.jpg?ssl=1
[5]: https://twitter.com/hashtag/Ubuntu?src=hash&ref_src=twsrc%5Etfw
[6]: https://twitter.com/popey/status/1099238146393468931?ref_src=twsrc%5Etfw


@@ -0,0 +1,72 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Teaching scientists how to share code)
[#]: via: (https://opensource.com/article/19/2/open-science-git)
[#]: author: (Jon Tennant https://opensource.com/users/jon-tennant)
Teaching scientists how to share code
======
This course teaches them how to set up a GitHub project, index their project in Zenodo, and integrate Git into an RStudio workflow.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003588_01_rd3os.combacktoschoolserieshe_rh_051x_0.png?itok=gIzbmxuI)
Would it surprise you to learn that most of the world's scholarly research is not owned by the people who funded it or who created it? Rather, it's owned by private corporations and locked up in proprietary systems, leading to [problems][1] around sharing, reuse, and reproducibility.
The open science movement is challenging this system, aiming to give researchers control, ownership, and freedom over their work. The [Open Science MOOC][2] (massively open online community) is a mission-driven project launched in 2018 to kick-start an open scientific revolution and foster more partnerships between open source software and open science.
The Open Science MOOC is a peer-to-peer community of practice, based around sharing knowledge and ideas, learning new skills, and using these things to develop as individuals so research communities can grow as part of a wider cultural shift towards openness.
### The curriculum
The Open Science MOOC is divided into 10 core modules, from the principles of open science to becoming an open science advocate.
The first module, [Open Research Software and Open Source][3], was released in late 2018. It includes three main tasks, all designed to help make research workflows more efficient and more open for collaboration:
#### 1. Setting up your first GitHub project
GitHub is a powerful project management tool, both for coders and non-coders. This task teaches how to create a community around the platform, select an appropriate license, and write good documentation (including README files, contributing guidelines, and codes of conduct) to foster open collaboration and a welcoming community.
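As a rough sketch (the project name and file contents are placeholders invented here, not the MOOC's own template), the documentation scaffold this task recommends can be created in a few shell commands:

```shell
cd "$(mktemp -d)"                  # work in a throwaway directory for the demo
mkdir open-science-demo && cd open-science-demo
# each file answers a newcomer's question: what is this, how do I help,
# how should I behave, and under what terms may I reuse it?
printf '# Open Science Demo\n\nWhat this project does and how to cite it.\n' > README.md
printf 'How to report issues and propose changes.\n' > CONTRIBUTING.md
printf 'Expected behavior for a welcoming community.\n' > CODE_OF_CONDUCT.md
printf 'Choose an appropriate license before publishing.\n' > LICENSE.md
ls
```

The commands matter less than the habit: committing these files on day one signals that the project is open for collaboration.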
#### 2. Indexing your project in Zenodo
[Zenodo][4] is an open science platform that seamlessly integrates with GitHub to help make projects more permanent, reusable, and citable. This task explains how webhooks between Zenodo and GitHub allow new versions of projects to become permanently archived as they progress. This is critical for helping researchers get a [DOI][5] for their work so they can receive full credit for all aspects of a project. As citations are still a primary form of "academic capital," this is essential for researchers.
#### 3. Integrating Git into an RStudio workflow
This task is about giving research a mega-boost through greater collaborative efficiency and reproducibility. Git enables version control in all forms of text-based content, including data analysis and writing papers. Each time you save your work during the development process, Git saves time-stamped copies. This saves the hassle of trying to "roll back" projects if you delete a file or text by mistake, and eliminates horrific file-naming conventions. (For example, does FINAL_Revised_2.2_supervisor_edits_ver1.7_scream.txt look familiar?) Getting Git to interface with RStudio is the painful part, but this task goes through it, step by step, to ease the stress.
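Outside of RStudio, the same save-and-snapshot cycle looks like this in plain Git (the repository name, file, and identity below are invented for the demo, not the course's exact steps):

```shell
set -e
cd "$(mktemp -d)"                  # throwaway directory for the demo
git init -q paper-analysis
cd paper-analysis
echo 'data <- read.csv("results.csv")' > analysis.R
git add analysis.R
# the -c flags supply an identity so the demo runs on a fresh machine
git -c user.name=Student -c user.email=student@example.org \
    commit -q -m "First draft of analysis script"
git log --oneline                  # one time-stamped snapshot per commit
```

From RStudio, the Git pane drives exactly these add/commit steps through the GUI once a project is linked to a repository, which is what the task walks through.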
The third task also gives students the ability to interact directly with the MOOC by submitting pull requests to demonstrate their skills. This also adds their name to an online list of open source champions (aka "open sourcerers").
The MOOC's inherently interactive style is much more valuable than listening to someone talk at you, either on or off screen, like with many traditional online courses or educational programs. Each task is backed up by expert-gathered knowledge, so students get a rigorous, dual-learning experience.
### Empowering researchers
The Open Science MOOC strives to be as open as possible—this means we walk the walk and talk the talk. We are built upon a solid values-based foundation of freedom and equitable access to research. We see this route towards widespread adoption of best scientific practices as an essential part of the research process.
Everything we produce is openly developed and openly licensed for maximum engagement, sharing, and reuse. An open source workflow underpins our development. All of this happens openly around channels such as [Slack][6] and [GitHub][7] and helps to make the community much more coherent.
If we can instill the value of open source into modern research, this would empower current and future generations of researchers to think more about fundamental freedoms around knowledge production. We think that is something worth working towards as a community.
The Open Science MOOC combines the best elements of the open education, open science, and open source worlds. If you're ready to join, [sign up for the full course][3], which is, of course, free.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/2/open-science-git
作者:[Jon Tennant][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jon-tennant
[b]: https://github.com/lujun9972
[1]: https://www.theguardian.com/science/political-science/2018/jun/29/elsevier-are-corrupting-open-science-in-europe
[2]: https://opensciencemooc.eu/
[3]: https://eliademy.com/catalog/oer/module-5-open-research-software-and-open-source.html
[4]: https://zenodo.org/
[5]: https://en.wikipedia.org/wiki/Digital_object_identifier
[6]: https://osmooc.herokuapp.com/
[7]: https://open-science-mooc-invite.herokuapp.com/


@@ -0,0 +1,82 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Reducing security risks with centralized logging)
[#]: via: (https://opensource.com/article/19/2/reducing-security-risks-centralized-logging)
[#]: author: (Hannah Suarez https://opensource.com/users/hcs)
Reducing security risks with centralized logging
======
Centralizing logs and structuring log data for processing can mitigate risks related to insufficient logging.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security_privacy_lock.png?itok=ZWjrpFzx)
Logging and log analysis are essential to securing infrastructure, particularly when we consider common vulnerabilities. This article, based on my lightning talk [Let's use centralized log collection to make incident response teams happy][1] at FOSDEM'19, aims to raise awareness about the security concerns around insufficient logging, offer a way to avoid the risk, and advocate for more secure practices _(disclaimer: I work for NXLog)._
### Why log collection and why centralized logging?
Logging is, to be specific, an append-only sequence of records written to disk. In practice, logs help you investigate an infrastructure issue as you try to find a cause for misbehavior. A challenge comes up when you have heterogeneous systems with their own standards and formats, and you want to be able to handle and process these in a dependable way. This often comes at the cost of metadata. Centralized logging solutions require commonality, and that commonality often removes the rich metadata many open source logging tools provide.
### The security risk of insufficient logging and monitoring
The Open Web Application Security Project ([OWASP][2]) is a nonprofit organization that contributes to the industry with incredible projects (including many [tools][3] focusing on software security). The organization regularly reports on the riskiest security challenges for application developers and maintainers. In its most recent report on the [top 10 most critical web application security risks][4], OWASP added Insufficient Logging and Monitoring to its list. OWASP warns of risks due to insufficient logging, detection, monitoring, and active response in the following types of scenarios.
* Important auditable events, such as logins, failed logins, and high-value transactions are not logged.
* Warnings and errors generate none, inadequate, or unclear log messages.
* Logs are only being stored locally.
* The application is unable to detect, escalate, or alert for active attacks in real time or near real time.
These instances can be mitigated by centralizing logs (i.e., not storing logs locally) and structuring log data for processing (i.e., in alerting dashboards and security suites).
For example, imagine a DNS query leads to a malicious site named **hacked.badsite.net**. With DNS monitoring, administrators monitor and proactively analyze DNS queries and responses. The efficiency of DNS monitoring relies on both sufficient logging and log collection in order to catch potential issues as well as structuring the resulting DNS log for further processing:
```
2019-01-29
Time (GMT)      Source                  Destination             Protocol-Info
12:42:42.112898 SOURCE_IP               xxx.xx.xx.x             DNS     Standard query 0x1de7  A hacked.badsite.net
```
You can try this yourself and run through other examples and snippets with the [NXLog Community Edition][5] _(disclaimer again: I work for NXLog)._
### Important aside: unstructured vs. structured data
It's important to take a moment and consider the log data format. For example, let's consider this log message:
```
debug1: Failed password for invalid user amy from SOURCE_IP port SOURCE_PORT ssh2
```
This log contains a predefined structure, such as a metadata keyword before the colon (**debug1**). However, the rest of the log field is an unstructured string (**Failed password for invalid user amy from SOURCE_IP port SOURCE_PORT ssh2**). So, while the message is easily available in a human-readable format, it is not a format a computer can easily parse.
Unstructured event data poses limitations including difficulty of parsing, searching, and analyzing the logs. The important metadata is too often set in an unstructured data field in the form of a freeform string like the example above. Logging administrators will come across this problem at some point as they attempt to standardize/normalize log data and centralize their log sources.
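To make the contrast concrete, here is a sketch (using plain `sed` rather than any particular log collector's syntax, with an RFC 5737 documentation address standing in for SOURCE_IP) that decomposes the message above into named fields:

```shell
line='debug1: Failed password for invalid user amy from 203.0.113.7 port 48322 ssh2'
# capture the keyword, user, address, and port, then re-emit as key=value pairs
structured=$(printf '%s\n' "$line" | sed -E \
  's/^([^:]+): Failed password for invalid user ([^ ]+) from ([^ ]+) port ([0-9]+) (.*)$/level=\1 event=failed_password user=\2 src_ip=\3 src_port=\4 proto=\5/')
echo "$structured"
# → level=debug1 event=failed_password user=amy src_ip=203.0.113.7 src_port=48322 proto=ssh2
```

Once the fields carry names like `user` and `src_ip`, alerting dashboards and security suites can filter and aggregate on them directly instead of re-parsing freeform strings.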
### Where to go next
Alongside centralizing and structuring your logs, make sure you're collecting the right log data—Sysmon, PowerShell, Windows EventLog, DNS debug, NetFlow, ETW, kernel monitoring, file integrity monitoring, database logs, external cloud logs, and so on. Also have the right tools and processes in place to collect, aggregate, and help make sense of the data.
Hopefully, this gives you a starting point to centralize log collection across diverse sources; send them to outside destinations like dashboards, monitoring software, analytics software, and specialized software such as security information and event management (SIEM) suites; and more.
What does your centralized logging strategy look like? Share your thoughts in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/2/reducing-security-risks-centralized-logging
作者:[Hannah Suarez][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/hcs
[b]: https://github.com/lujun9972
[1]: https://fosdem.org/2019/schedule/event/lets_use_centralized_log_collection_to_make_incident_response_teams_happy/
[2]: https://www.owasp.org/index.php/Main_Page
[3]: https://github.com/OWASP
[4]: https://www.owasp.org/index.php/Top_10-2017_Top_10
[5]: https://nxlog.co/products/nxlog-community-edition/download


@@ -0,0 +1,62 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Let your engineers choose the license: A guide)
[#]: via: (https://opensource.com/article/19/2/choose-open-source-license-engineers)
[#]: author: (Jeffrey Robert Kaufman https://opensource.com/users/jkaufman)
Let your engineers choose the license: A guide
======
Enabling engineers to make licensing decisions is wise and efficient.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_hands_team_collaboration.png?itok=u82QepPk)
Imagine you are working for a company that will be starting a new open source community project. Great! You have taken a positive first step to give back and enable a virtuous cycle of innovation that the open source community-based development model provides.
But what about choosing an open source license for your project? You ask your manager for guidance, and she provides some perspective but quickly realizes that there is no formal company policy or guidelines. As any wise manager would do, she asks you to develop formal corporate guidelines for choosing an open source license for such projects.
Simple, right? You may be surprised to learn some unexpected challenges. This article will describe some of the complexities you may encounter and some perspective based on my recent experience with a similar project at Red Hat.
It may be useful to quickly review some of the more common forms of open source licensing. Open source licenses may be generally placed into two main buckets, copyleft and permissive.
> Copyleft licenses, such as the GPL, allow access to source code, modifications to the source, and distribution of the source or binary versions in their original or modified forms. Copyleft additionally provides that essential software freedoms (run, study, change, and distribution) will be allowed and ensured for any recipients of that code. A copyleft license prohibits restrictions or limitations on these essential software freedoms.
>
> Permissive licenses, similar to copyleft, also generally allow access to source code, modifications to the source, and distribution of the source or binary versions in their original or modified forms. However, unlike copyleft licenses, additional restrictions may be included with these forms of licenses, including proprietary limitations such as prohibiting the creation of modified works or further distribution.
Red Hat is one of the leading open source development companies, with thousands of open source developers continuously working upstream and contributing to an assortment of open source projects. When I joined Red Hat, I was very familiar with its flagship Red Hat Enterprise Linux offering, often referred to as RHEL. Although I fully expected that the company contributes under a wide assortment of licenses based on project requirements, I thought our preference and top recommendation for our engineers would be GPLv2 due to our significant involvement with Linux. In addition, GPL is a copyleft license, and copyleft ensures that the essential software freedoms (run, study, change, distribute) will be extended to any recipients of that code. What could be better for sustaining the open source ecosystem than a copyleft license?
Fast-forwarding on my journey to craft internal license-choice guidelines for Red Hat, the end result was to not have any license preference at all. Instead, we delegate that responsibility, to the maximum extent possible, to our engineers. Why? Because each open source project and community is unique, and there are social aspects to these communities that may have preferences towards various licensing philosophies (e.g., copyleft or permissive). Engineers working in those communities understand all these issues and are best equipped to choose the proper license based on this knowledge. Mandating certain licenses for code contributions will often conflict with these community norms and result in a reduction in, or outright prohibition of, contributed content.
For example, perhaps your organization believes that the latest GPL license (GPLv3) is the best for your company due to its updated provisions. If you mandated GPLv3 for all future contributions vs. GPLv2, you would be prohibited from contributing code to the Linux kernel, since that is a GPLv2 project and will likely remain that way for a very long time. Your engineers, being part of that open source community project, would know that and would automatically choose GPLv2 in the absence of such a mandate.
Bottom line: Enabling engineers to make these decisions is wise and efficient.
To the extent your organization may have to restrict the use of certain licenses (e.g., due to certain intellectual property concerns), this should naturally be part of your guidelines or policy. I believe it is much better to delegate to the maximum extent possible to those who understand all the nuances, politics, and licensing philosophies of these varied communities, and to restrict license choice only when absolutely necessary. Even having a preference for a certain license over another can be problematic. Open source engineers may have deeply rooted feelings about copyleft (either for or against), and forcing one license over the other (unless absolutely necessary for business reasons) may result in creating ill will and ostracizing an engineer or engineering department within your organization.
In summary, Red Hat's guidelines are very simple and are summarized below:
1. We suggest choosing an open source license from a set of 10 different licenses that are very common and meet the needs of most new open source projects.
2. We allow the use of other licenses but we ask that a reason is provided to the open source legal team so we can collect and better understand some of the new and perhaps evolving needs of the open source communities that we serve. (As stated above, our engineers are on the front lines and are best equipped to deliver this type of information.)
3. The open source legal team always has the right to override a decision, but this would be very rare and only would occur if we were aware of some community or legal concern regarding a specific license or project.
4. Publishing source code without a license is never permitted.
Overall, the advantages of these guidelines are enormous. They are very efficient and lead to a very low-friction development and approval system within our organization.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/2/choose-open-source-license-engineers
作者:[Jeffrey Robert Kaufman][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jkaufman
[b]: https://github.com/lujun9972


@ -0,0 +1,113 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A Brief History of FOSS Practices)
[#]: via: (https://itsfoss.com/history-of-foss)
[#]: author: (Avimanyu Bandyopadhyay https://itsfoss.com/author/avimanyu/)
A Brief History of FOSS Practices
======
We focus a great deal on Linux and Free & Open Source Software at It's FOSS. Ever wondered how old such FOSS practices are? How did this practice come about? What is the history behind this revolutionary concept?
In this history and trivia article, let's take a look back in time through this brief write-up and note some interesting initiatives from the past that have grown enormously today.
### Origins of FOSS
![History of FOSS][1]
The origins of FOSS go back to the 1950s. When hardware was purchased, there were no additional charges for bundled software, and the source code would also be available in order to fix possible bugs in the software.
It was actually common practice back then for users to have the freedom to customize the code.
At that time, it was mostly academics and industry researchers who collaborated to develop such software.
The term Open Source did not exist yet. Instead, the term that was popular at that time was “Public Domain Software”. Today, the two are ideologically very much [different][2] in nature even though they may sound similar.
<https://youtu.be/0Dt3MCcXay8?list=PLybyE6hxfb7cIzYK1HegM3-ccU-JhovpH>
Back in 1955, some users of the [IBM 701][3] computer system in Los Angeles voluntarily founded a group called SHARE. The “SHARE Program Library Agency” (SPLA) distributed information and software through magnetic tapes.
The technical information shared covered programming languages, operating systems, database systems, and user experiences for enterprise users of small, medium, and large-scale IBM computers.
The initiative, now more than 60 years old, continues to pursue its goals quite actively. SHARE's next event is [SHARE Phoenix 2019][4]. You can download and check out their complete timeline [here][5].
### The GNU Project
Announced at MIT on September 27, 1983, by Richard Stallman, the GNU Project is what immensely empowers and supports the Free Software Community today.
### Free Software Foundation
The “Free Software Movement” by Richard Stallman established a new norm for developing Free Software.
He founded The Free Software Foundation (FSF) on 4th October 1985 to support the free software movement. Software that ensures that the end users have freedom in using, studying, sharing and modifying that software came to be called Free Software.
**Free as in Free Speech, Not Free Beer**
<https://youtu.be/MtNcxMuphLc>
The Free Software Movement laid down the following rules to establish the distinctiveness of the idea:
* The freedom to run the program as you wish, for any purpose (freedom 0).
* The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this.
* The freedom to redistribute copies so you can help your neighbor (freedom 2).
* The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.
### The Linux Kernel
<https://youtu.be/RkUrOSQF1JQ>
How can we miss this section at It's FOSS! The Linux kernel was released as freely modifiable source code in 1991 by Linus Torvalds. At first, it was neither Free Software nor released under an open-source software license. In February 1992, Linux was relicensed under the GPL.
### The Linux Foundation
The Linux Foundation has a goal to empower open source projects to accelerate technology development and commercial adoption. It is an initiative that was taken in 2000 via the [Open Source Development Labs][6] (OSDL) which later merged with the [Free Standards Group][7].
Linus Torvalds works at The Linux Foundation, which provides him complete support so that he can work full-time on improving Linux.
### Open Source
When the source code of [Netscape][8] Communicator was released in 1998, the label “Open Source” was adopted by a group of individuals at a strategy session held on February 3rd, 1998 in Palo Alto, California. The idea grew from a visionary realization that the [Netscape announcement][9] had changed the way people looked at commercial software.
This opened up a whole new world, creating a new perspective that revealed the superiority and advantage of an open development process that could be powered by collaboration.
[Christine Peterson][10] was the one in that group of individuals who originally suggested the term “Open Source” as we know it today (as mentioned [earlier][11]).
### Evolution of Business Models
The concept of Open Source is a huge phenomenon right now and there are several companies that continue to adopt the Open Source Approach to this day. [As of April 2015, 78% of companies used Open Source Software][12] with different [Open Source Licenses][13].
Several organisations have adopted [different business models][14] for Open Source. Red Hat and Mozilla are two good examples.
So this was a brief recap of some interesting facts from FOSS history. Do let us know your thoughts in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/history-of-foss
作者:[Avimanyu Bandyopadhyay][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/avimanyu/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/history-of-foss.png?resize=800%2C450&ssl=1
[2]: https://opensource.org/node/878
[3]: https://en.wikipedia.org/wiki/IBM_701
[4]: https://event.share.org/home
[5]: https://www.share.org/d/do/11532
[6]: https://en.wikipedia.org/wiki/Open_Source_Development_Labs
[7]: https://en.wikipedia.org/wiki/Free_Standards_Group
[8]: https://en.wikipedia.org/wiki/Netscape
[9]: https://web.archive.org/web/20021001071727/http://wp.netscape.com:80/newsref/pr/newsrelease558.html
[10]: https://en.wikipedia.org/wiki/Christine_Peterson
[11]: https://itsfoss.com/nanotechnology-open-science-ai/
[12]: https://www.zdnet.com/article/its-an-open-source-world-78-percent-of-companies-run-open-source-software/
[13]: https://itsfoss.com/open-source-licenses-explained/
[14]: https://opensource.com/article/17/12/open-source-business-models


@ -0,0 +1,56 @@
[#]: collector: (lujun9972)
[#]: translator: (lujun9972)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (IRC vs IRL: How to run a good IRC meeting)
[#]: via: (https://opensource.com/article/19/2/irc-vs-irl-meetings)
[#]: author: (Ben Cotton https://opensource.com/users/bcotton)
IRC vs IRL: How to run a good IRC meeting
======
Internet Relay Chat meetings can be a great way to move a project forward if you follow these best practices.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_community_1.png?itok=rT7EdN2m)
There's an art to running a meeting in any format. Many people have learned to run in-person or telephone meetings, but [Internet Relay Chat][1] (IRC) meetings have unique characteristics that differ from "in real life" (IRL) meetings. This article will share the advantages and disadvantages of the IRC format as well as tips that will help you lead IRC meetings more effectively.
Why IRC? Despite the wealth of real-time chat options available today, [IRC remains a cornerstone of open source projects][2]. If your project uses another communication method, don't worry. Most of this advice works for any synchronous text chat mechanism, perhaps with a few tweaks here and there.
### Challenges of IRC meetings
IRC meetings pose certain challenges compared to in-person meetings. You know that lag between when one person finishes talking and the next one begins? It's worse in IRC because people have to type what they're thinking. This is slower than talking and—unlike with talking—you can't tell when someone else is trying to compose a message. Moderators must remember to insert long pauses when asking for responses or moving to the next topic. And someone who wants to speak up should insert a brief message (e.g., a period) to let the moderator know.
IRC meetings also lack the metadata you get from other methods. You can't read facial expressions or tone of voice in text. This means you have to be careful with your word choice and phrasing.
And IRC meetings make it really easy to get distracted. At least when someone is looking at funny cat GIFs during an in-person meeting, you'll see them smile and hear them laugh at inopportune times. In IRC, unless they accidentally paste the wrong text, there's no peer pressure even to pretend to pay attention. With IRC, you can even be in multiple meetings at once. I've done this, but it's dangerous if you need to be an active participant.
### Benefits of IRC meetings
IRC meetings have some unique advantages, too. IRC is a very resource-light medium. It doesn't tax bandwidth or CPU. This lowers the barrier for participation, which is advantageous for both the underprivileged and people who are on the road. For volunteer contributors, it means they may be able to participate during their workday. And it means participants don't need to find a quiet space where they can talk without bothering those around them.
With a meeting bot, IRC can produce meeting minutes instantly. In Fedora, we use Zodbot, an instance of Debian's [Meetbot][3], to log meetings and provide interaction. When a meeting ends, the minutes and full logs are immediately available to the community. This can reduce the administrative overhead of running the meeting.
### It's like a normal meeting, but different
Conducting a meeting via IRC or other text-based medium means thinking about the meeting in a slightly different way. Although it lacks some of the benefits of higher-bandwidth modes of communication, it has advantages, too. Running an IRC meeting provides the opportunity to develop discipline that can help you run any type of meeting.
Like any meeting, IRC meetings are best when there's a defined agenda and purpose. A good meeting moderator knows when to let the conversation follow twists and turns and when it's time to reel it back in. There's no hard and fast rule here—it's very much an art. But IRC offers an advantage in this regard. By setting the channel topic to the meeting's current topic, people have a visible reminder of what they should be talking about.
If your project doesn't already conduct synchronous meetings, you should give it some thought. For projects with a diverse set of time zones, finding a mutually agreeable time to hold a meeting is hard. You can't rely on meetings as your only source of coordination. But they can be a valuable part of how your project works.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/2/irc-vs-irl-meetings
作者:[Ben Cotton][a]
选题:[lujun9972][b]
译者:[lujun9972](https://github.com/lujun9972)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/bcotton
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Internet_Relay_Chat
[2]: https://opensource.com/article/16/6/getting-started-irc
[3]: https://wiki.debian.org/MeetBot


@ -0,0 +1,251 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 Linux GUI Cloud Backup Tools)
[#]: via: (https://www.linux.com/blog/learn/2019/2/5-linux-gui-cloud-backup-tools)
[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen)
5 Linux GUI Cloud Backup Tools
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/cloud-m-wong-unsplash.jpg?itok=aW0mpzio)
We have reached a point in time where almost every computer user depends upon the cloud … even if only as a storage solution. What makes the cloud really important to users is when it's employed as a backup. Why is that such a game changer? By backing up to the cloud, you have access to those files from any computer you have associated with your cloud account. And because Linux powers the cloud, many services offer Linux tools.
Let's take a look at five such tools. I will focus on GUI tools, because they offer a much lower barrier to entry than many of the CLI tools. I'll also be focusing on various consumer-grade cloud services (e.g., [Google Drive][1], [Dropbox][2], [Wasabi][3], and [pCloud][4]). And, I will be demonstrating on the Elementary OS platform, but all of the tools listed will function on most Linux desktop distributions.
Note: Of the following backup solutions, only Duplicati is licensed as open source. With that said, let's see what's available.
### Insync
I must confess, [Insync][5] has been my cloud backup of choice for a very long time. Since Google refuses to release a Linux desktop client for Google Drive (and I depend upon Google Drive daily), I had to turn to a third-party solution. Said solution is Insync. This particular take on syncing the desktop to Drive has not only been seamless, but faultless since I began using the tool.
The cost of Insync is a one-time $29.99 fee (per Google account). Trust me when I say this tool is worth the price of entry. With Insync you not only get an easy-to-use GUI for managing your Google Drive backup and sync, you get a tool (Figure 1) that gives you complete control over what is backed up and how it is backed up. Not only that, but you can also install Nautilus integration (which also allows you to easily add folders outside of the configured Drive sync destination).
![Insync app][7]
Figure 1: The Insync app window on Elementary OS.
[Used with permission][8]
You can download Insync for Ubuntu (or its derivatives), Linux Mint, Debian, and Fedora from the [Insync download page][9]. Once you've installed Insync (and associated it with your account), you can then install Nautilus integration with these steps (demonstrating on Elementary OS):
1. Open a terminal window and issue the command sudo nano /etc/apt/sources.list.d/insync.list.
2. Paste the following into the new file: deb <http://apt.insynchq.com/ubuntu> precise non-free contrib.
3. Save and close the file.
4. Update apt with the command sudo apt-get update.
5. Install the necessary package with the command sudo apt-get install insync-nautilus.
Allow the installation to complete. Once finished, restart Nautilus with the command nautilus -q (or log out and back into the desktop). You should now see an Insync entry in the Nautilus right-click context menu (Figure 2).
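The numbered steps above can be collected into one small script. This is only a sketch, assuming an Ubuntu-family system with apt; the repository line is taken verbatim from step 2, the function names are illustrative, and the apply step is left commented out so nothing runs by accident:

```shell
#!/bin/sh
set -eu

# The repository line from step 2 (verbatim from the Insync instructions).
insync_repo_line() {
  echo "deb http://apt.insynchq.com/ubuntu precise non-free contrib"
}

# Hypothetical helper wrapping steps 1-5 non-interactively (tee instead of nano).
install_insync_nautilus() {
  insync_repo_line | sudo tee /etc/apt/sources.list.d/insync.list >/dev/null
  sudo apt-get update
  sudo apt-get install -y insync-nautilus
  nautilus -q    # restart Nautilus so the context-menu entry appears
}

# Uncomment to actually apply the steps:
# install_insync_nautilus
```

Writing the repository line through `tee` avoids opening an editor, which makes the steps repeatable on multiple machines.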
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/insync_2.jpg?itok=-kA0epiH)
Figure 2: Insync/Nautilus integration in action.
[Used with permission][8]
### Dropbox
Although [Dropbox][2] drew the ire of many in the Linux community (by dropping support for all filesystems but unencrypted ext4), it still supports a great deal of Linux desktop deployments. In other words, if your distribution still uses the ext4 file system (and you do not opt to encrypt your full drive), you're good to go.
The good news is the Dropbox Linux desktop client is quite good. The tool offers a system tray icon that allows you to easily interact with your cloud syncing. Dropbox also includes CLI tools and a Nautilus integration (by way of an additional addon found [here][10]).
The Linux Dropbox desktop sync tool works exactly as you'd expect. From the Dropbox system tray drop-down (Figure 3) you can open the Dropbox folder, launch the Dropbox website, view recently changed files, get more space, pause syncing, open the preferences window, find help, and quit Dropbox.
![Dropbox][12]
Figure 3: The Dropbox system tray drop-down on Elementary OS.
[Used with permission][8]
The Dropbox/Nautilus integration is an important component, as it makes quickly adding to your cloud backup seamless and fast. From the Nautilus file manager, locate and right-click the folder to be added, and select Dropbox > Move to Dropbox (Figure 4).
The only caveat to the Dropbox/Nautilus integration is that the only option is to move a folder to Dropbox. To some, this might not be acceptable. The developers of this package would be wise to instead have the action create a link (instead of actually moving the folder).
Outside of that one issue, the Dropbox cloud sync/backup solution for Linux is a great route to go.
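If the move-only behavior bothers you, one manual workaround (done in a terminal rather than via the Nautilus menu) is to move the folder into Dropbox yourself and leave a symlink behind at the original location, so existing paths keep working. A sketch; the function name and paths are illustrative, not part of Dropbox:

```shell
#!/bin/sh
set -eu

# Move a folder into the Dropbox directory and leave a symlink in its place,
# so existing paths keep working while Dropbox syncs the real copy.
# $1: folder to back up; $2: Dropbox directory.
backup_via_move() {
  name=$(basename "$1")
  mv "$1" "$2/$name"
  ln -s "$2/$name" "$1"
}

# Example (paths are illustrative):
# backup_via_move "$HOME/Projects" "$HOME/Dropbox"
```

Note that historically Dropbox has handled real folders inside its sync directory more reliably than symlinks pointing *into* it, which is why the move-then-link direction is used here.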
### pCloud
pCloud might well be one of the finest cloud backup solutions you've never heard of. This take on cloud storage/backup includes features like:
* Encryption (subscription service required for this feature);
* Mobile apps for Android and iOS;
* Linux, Mac, and Windows desktop clients;
* Easy file/folder sharing;
* Built-in audio/video players;
* No file size limitation;
* Sync any folder from the desktop;
* Panel integration for most desktops; and
* Automatic file manager integration.
pCloud offers both Linux desktop and CLI tools that function quite well. pCloud offers a free plan (with 10GB of storage), a Premium Plan (with 500GB of storage for a one-time fee of $175.00), and a Premium Plus Plan (with 2TB of storage for a one-time fee of $350.00). Both non-free plans can also be paid on a yearly basis (instead of the one-time fee).
The pCloud desktop client is quite user-friendly. Once installed, you have access to your account information (Figure 5), the ability to create sync pairs, create shares, enable crypto (which requires an added subscription), and general settings.
![pCloud][14]
Figure 5: The pCloud desktop client is incredibly easy to use.
[Used with permission][8]
The one caveat to pCloud is there's no file manager integration for Linux. That's overcome by the Sync folder in the pCloud client.
### CloudBerry
The primary focus for [CloudBerry][15] is for Managed Service Providers. The business side of CloudBerry does have an associated cost (one that is probably well out of the price range for the average user looking for a simple cloud backup solution). However, for home usage, CloudBerry is free.
What makes CloudBerry different from the other tools is that it's not a backup/storage solution in and of itself. Instead, CloudBerry serves as a link between your desktop and the likes of:
* AWS
* Microsoft Azure
* Google Cloud
* BackBlaze
* OpenStack
* Wasabi
* Local storage
* External drives
* Network Attached Storage
* Network Shares
* And more
In other words, you use CloudBerry as the interface between the files/folders you want to share and the destination to which you want to send them. This also means you must have an account with one of the many supported solutions.
Once you've installed CloudBerry, you create a new backup plan for the target storage solution. For that configuration, you'll need such information as:
* Access Key
* Secret Key
* Bucket
What you'll need for the configuration will depend on the account you're connecting to (Figure 6).
![CloudBerry][17]
Figure 6: Setting up a CloudBerry backup for Wasabi.
[Used with permission][8]
The one caveat to CloudBerry is that it does not integrate with any file manager, nor does it include a system tray icon for interaction with the service.
### Duplicati
[Duplicati][18] is another option that allows you to sync your local directories with either locally attached drives, network attached storage, or a number of cloud services. The options supported include:
* Local folders
* Attached drives
* FTP/SFTP
* OpenStack
* WebDAV
* Amazon Cloud Drive
* Amazon S3
* Azure Blob
* Box.com
* Dropbox
* Google Cloud Storage
* Google Drive
* Microsoft OneDrive
* And many more
Once you install Duplicati (download the installer for Debian, Ubuntu, Fedora, or Red Hat from the [Duplicati downloads page][19]), click on the entry in your desktop menu, which will open a web page to the tool (Figure 7), where you can configure the app settings, create a new backup, restore from a backup, and more.
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/duplicati_1.jpg?itok=SVf06tnv)
To create a backup, click Add backup and walk through the easy-to-use wizard (Figure 8). The backup service you choose will dictate what you need for a successful configuration.
![Duplicati backup][21]
Figure 8: Creating a new Duplicati backup for Google Drive.
[Used with permission][8]
For example, in order to create a backup to Google Drive, you'll need an AuthID. For that, click the AuthID link in the Destination section of the setup, where you'll be directed to select the Google Account to associate with the backup. Once you've allowed Duplicati access to the account, the AuthID will fill in and you're ready to continue. Click Test connection and you'll be asked to okay the creation of a new folder (if necessary). Click Next to complete the setup of the backup.
### More Where That Came From
These five cloud backup tools aren't the end of this particular rainbow. There are plenty more options where these came from (including CLI-only tools). But any of these backup clients will do a great job of serving your Linux desktop-to-cloud backup needs.
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/learn/2019/2/5-linux-gui-cloud-backup-tools
作者:[Jack Wallen][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/users/jlwallen
[b]: https://github.com/lujun9972
[1]: https://www.google.com/drive/
[2]: https://www.dropbox.com/
[3]: https://wasabi.com/
[4]: https://www.pcloud.com/
[5]: https://www.insynchq.com/
[6]: /files/images/insync1jpg
[7]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/insync_1.jpg?itok=_SDP77uE (Insync app)
[8]: /licenses/category/used-permission
[9]: https://www.insynchq.com/downloads
[10]: https://www.dropbox.com/install-linux
[11]: /files/images/dropbox1jpg
[12]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/dropbox_1.jpg?itok=BYbg-sKB (Dropbox)
[13]: /files/images/pcloud1jpg
[14]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/pcloud_1.jpg?itok=cAUz8pya (pCloud)
[15]: https://www.cloudberrylab.com
[16]: /files/images/cloudberry1jpg
[17]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/cloudberry_1.jpg?itok=s0aP5xuN (CloudBerry)
[18]: https://www.duplicati.com/
[19]: https://www.duplicati.com/download
[20]: /files/images/duplicati2jpg
[21]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/duplicati_2.jpg?itok=Xkn8s3jg (Duplicati backup)


@ -0,0 +1,177 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Identify That The Linux Server Is Integrated With Active Directory (AD)?)
[#]: via: (https://www.2daygeek.com/how-to-identify-that-the-linux-server-is-integrated-with-active-directory-ad/)
[#]: author: (Vinoth Kumar https://www.2daygeek.com/author/vinoth/)
How To Identify That The Linux Server Is Integrated With Active Directory (AD)?
======
Single Sign-On (SSO) authentication is implemented in most organizations because users need access to multiple applications.
It allows a user to log in with a single ID and password to all the applications available in the organization.
It uses a centralized authentication system for all the applications.
A while ago we wrote an article on **[how to integrate a Linux system with AD][1]**.
Today we are going to show you how to check whether a Linux system is integrated with AD, using multiple methods.
It can be done in four ways, which we will explain one by one.
* **`ps Command:`** It reports a snapshot of the current processes.
* **`id Command:`** It prints user identity.
* **`/etc/nsswitch.conf file:`** The Name Service Switch configuration file.
* **`/etc/pam.d/system-auth file:`** The common configuration file for PAMified services.
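All four checks described below share one pattern: look for an `sss`, `winbind`, or `ldap` marker in some piece of system output. That pattern can be sketched as a small helper; this is a hypothetical script, not part of any distribution, and file paths may vary by system:

```shell
#!/bin/sh

# Report which AD backend (if any) a piece of text mentions.
# The sss pattern is checked first because SSSD markers (sssd_pam,
# pam_sss.so, "files sss") are the most specific.
detect_backend() {
  case "$1" in
    *sss*)     echo sssd ;;
    *winbind*) echo winbind ;;
    *ldap*)    echo ldap ;;
    *)         echo none ;;
  esac
}

# Apply the same classifier to the sources discussed below:
echo "ps:       $(detect_backend "$(ps -eo comm= 2>/dev/null)")"
echo "nsswitch: $(detect_backend "$(grep '^passwd:' /etc/nsswitch.conf 2>/dev/null)")"
echo "pam:      $(detect_backend "$(cat /etc/pam.d/system-auth 2>/dev/null)")"
```

On a machine with no AD integration, each line should report `none`; on an integrated machine, the lines should agree on the same backend.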
### How To Identify That The Linux Server Is Integrated With AD Using PS Command?
The ps command displays information about a selection of the active processes.
To integrate the Linux server with AD, we need to use either `winbind` or `sssd` or `ldap` service.
So, use the ps command to filter for these services.
If you find any of these services running on the system, then you can conclude that the system is currently integrated with AD via the winbind, sssd, or ldap service.
You might get the output similar to below if the system is integrated with AD using `SSSD` service.
```
# ps -ef | grep -i "winbind\|sssd"
root 29912 1 0 2017 ? 00:19:09 /usr/sbin/sssd -f -D
root 29913 29912 0 2017 ? 04:36:59 /usr/libexec/sssd/sssd_be --domain 2daygeek.com --uid 0 --gid 0 --debug-to-files
root 29914 29912 0 2017 ? 00:29:28 /usr/libexec/sssd/sssd_nss --uid 0 --gid 0 --debug-to-files
root 29915 29912 0 2017 ? 00:09:19 /usr/libexec/sssd/sssd_pam --uid 0 --gid 0 --debug-to-files
root 31584 26666 0 13:41 pts/3 00:00:00 grep sssd
```
You might get output similar to below if the system is integrated with AD using the `winbind` service.
```
# ps -ef | grep -i "winbind\|sssd"
root 676 21055 0 2017 ? 00:00:22 winbindd
root 958 21055 0 2017 ? 00:00:35 winbindd
root 21055 1 0 2017 ? 00:59:07 winbindd
root 21061 21055 0 2017 ? 11:48:49 winbindd
root 21062 21055 0 2017 ? 00:01:28 winbindd
root 21959 4570 0 13:50 pts/2 00:00:00 grep -i winbind\|sssd
root 27780 21055 0 2017 ? 00:00:21 winbindd
```
### How To Identify That The Linux Server Is Integrated With AD Using id Command?
It prints information for a given user name, or the current user. It displays the UID, GID, user name, primary group name, and secondary group names.
If the Linux system is integrated with AD, then you might get output like below. The GID clearly shows that the user comes from the AD “domain users” group.
```
# id daygeek
uid=1918901106(daygeek) gid=1918900513(domain users) groups=1918900513(domain users)
```
### How To Identify That The Linux Server Is Integrated With AD Using nsswitch.conf file?
The Name Service Switch (NSS) configuration file, `/etc/nsswitch.conf`, is used by the GNU C Library and certain other applications to determine the sources from which to obtain name-service information in a range of categories, and in what order. Each category of information is identified by a database name.
You might get the output similar to below if the system is integrated with AD using `SSSD` service.
```
# cat /etc/nsswitch.conf | grep -i "sss\|winbind\|ldap"
passwd: files sss
shadow: files sss
group: files sss
services: files sss
netgroup: files sss
automount: files sss
```
You might get the output similar to below if the system is integrated with AD using `winbind` service.
```
# cat /etc/nsswitch.conf | grep -i "sss\|winbind\|ldap"
passwd: files [SUCCESS=return] winbind
shadow: files [SUCCESS=return] winbind
group: files [SUCCESS=return] winbind
```
You might get output similar to below if the system is integrated with AD using the `ldap` service.
```
# cat /etc/nsswitch.conf | grep -i "sss\|winbind\|ldap"
passwd: files ldap
shadow: files ldap
group: files ldap
```
### How To Identify That The Linux Server Is Integrated With AD Using system-auth file?
This is the common configuration file for PAMified services.
PAM stands for Pluggable Authentication Modules, which provide dynamic authentication support for applications and services in Linux.
The system-auth configuration file provides a common interface for all applications and service daemons calling into the PAM library.
The system-auth configuration file is included by nearly all individual service configuration files with the help of the include directive.
You might get the output similar to below if the system is integrated with AD using `SSSD` service.
```
# cat /etc/pam.d/system-auth | grep -i "pam_sss.so\|pam_winbind.so\|pam_ldap.so"
or
# cat /etc/pam.d/system-auth-ac | grep -i "pam_sss.so\|pam_winbind.so\|pam_ldap.so"
auth sufficient pam_sss.so use_first_pass
account [default=bad success=ok user_unknown=ignore] pam_sss.so
password sufficient pam_sss.so use_authtok
session optional pam_sss.so
```
You might get the output similar to below if the system is integrated with AD using `winbind` service.
```
# cat /etc/pam.d/system-auth | grep -i "pam_sss.so\|pam_winbind.so\|pam_ldap.so"
or
# cat /etc/pam.d/system-auth-ac | grep -i "pam_sss.so\|pam_winbind.so\|pam_ldap.so"
auth sufficient pam_winbind.so cached_login use_first_pass
account [default=bad success=ok user_unknown=ignore] pam_winbind.so cached_login
password sufficient pam_winbind.so cached_login use_authtok
```
You might get the output similar to below if the system is integrated with AD using `ldap` service.
```
# cat /etc/pam.d/system-auth | grep -i "pam_sss.so\|pam_winbind.so\|pam_ldap.so"
or
# cat /etc/pam.d/system-auth-ac | grep -i "pam_sss.so\|pam_winbind.so\|pam_ldap.so"
auth sufficient pam_ldap.so cached_login use_first_pass
account [default=bad success=ok user_unknown=ignore] pam_ldap.so cached_login
password sufficient pam_ldap.so cached_login use_authtok
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/how-to-identify-that-the-linux-server-is-integrated-with-active-directory-ad/
作者:[Vinoth Kumar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/vinoth/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/join-integrate-rhel-centos-linux-system-to-windows-active-directory-ad-domain/


@ -0,0 +1,156 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Install VirtualBox on Ubuntu [Beginners Tutorial])
[#]: via: (https://itsfoss.com/install-virtualbox-ubuntu)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
How to Install VirtualBox on Ubuntu [Beginners Tutorial]
======
**This beginner's tutorial explains various ways to install VirtualBox on Ubuntu and other Debian-based Linux distributions.**
Oracle's free and open source offering [VirtualBox][1] is an excellent virtualization tool, especially for desktop operating systems. I prefer using it over [VMware Workstation in Linux][2], another virtualization tool.
You can use virtualization software like VirtualBox for installing and using another operating system within a virtual machine.
For example, you can [install Linux on VirtualBox inside Windows][3]. Similarly, you can also [install Windows inside Linux using VirtualBox][4].
You can also use VirtualBox for installing another Linux distribution in your current Linux system. Actually, this is what I use it for. If I hear about a nice Linux distribution, instead of installing it on a real system, I test it on a virtual machine. It's more convenient when you just want to try out a distribution before making a decision about installing it on your actual machine.
![Linux installed inside Linux using VirtualBox][5]Ubuntu 18.10 installed inside Ubuntu 18.04
In this beginner's tutorial, I'll show you various ways of installing Oracle VirtualBox on Ubuntu and other Debian-based distributions.
### Installing VirtualBox on Ubuntu and Debian based Linux distributions
The installation methods mentioned here should also work for other Debian and Ubuntu-based Linux distributions such as Linux Mint, elementary OS etc.
#### Method 1: Install VirtualBox from Ubuntu Repository
**Pros** : Easy installation
**Cons** : Installs older version
The easiest way to install VirtualBox on Ubuntu would be to search for it in the Software Center and install it from there.
![VirtualBox in Ubuntu Software Center][6]VirtualBox is available in Ubuntu Software Center
You can also install it from the command line using the command:
```
sudo apt install virtualbox
```
However, if you [check the package version before installing it][7], you'll see that the VirtualBox provided by Ubuntu's repository is quite old.
For example, the current VirtualBox version at the time of writing this tutorial is 6.0, but the one in the Software Center is 5.2. This means you won't get the newer features introduced in the [latest version of VirtualBox][8].
#### Method 2: Install VirtualBox using Deb file from Oracles website
**Pros** : Easily install the latest version
**Cons** : Cant upgrade to newer version
If you want to use the latest version of VirtualBox on Ubuntu, the easiest way would be to [use the deb file][9].
Oracle provides ready-to-use binary files for VirtualBox releases. If you look at its download page, you'll see the option to download the deb installer files for Ubuntu and other distributions.
![VirtualBox Linux Download][10]
You just have to download this deb file and double-click on it to install it. It's as simple as that.
However, the problem with this method is that you won't be automatically updated to newer VirtualBox releases. The only way is to remove the existing version, download the newer version, and install it again. That's not very convenient, is it?
#### Method 3: Install VirtualBox using Oracle's repository
**Pros** : Automatically updates with system updates
**Cons** : Slightly complicated installation
Now this is the command line method, and it may seem complicated to you, but it has advantages over the previous two methods. You'll get the latest version of VirtualBox, and it will be automatically updated with future releases. That's what you would want, I presume.
To install VirtualBox using the command line, you add the Oracle VirtualBox repository to your list of repositories. You add its GPG key so that your system trusts this repository. Now when you install VirtualBox, it will be installed from Oracle's repository instead of Ubuntu's repository. If a new version is released, VirtualBox will be updated along with the system updates. Let's see how to do that.
First, add the key for the repository. You can download and add the key using this single command.
```
wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add -
```
```
Important for Mint users
The next step will work for Ubuntu only. If you are using Linux Mint or some other distribution based on Ubuntu, replace $(lsb_release -cs) in the command with the Ubuntu version your current version is based on. For example, Linux Mint 19 series users should use bionic and Mint 18 series users should use xenial. Something like this
sudo add-apt-repository "deb [arch=amd64] http://download.virtualbox.org/virtualbox/debian bionic contrib"
```
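If you are unsure what that substitution produces, the repository line for a Mint 19 system can be built by hand — `bionic` here comes from the Ubuntu 18.04 base noted above:

```
# Hard-code the Ubuntu base codename instead of using $(lsb_release -cs)
UBUNTU_CODENAME=bionic
REPO="deb [arch=amd64] http://download.virtualbox.org/virtualbox/debian $UBUNTU_CODENAME contrib"
echo "$REPO"
```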
Now add the Oracle VirtualBox repository in the list of repositories using this command:
```
sudo add-apt-repository "deb [arch=amd64] http://download.virtualbox.org/virtualbox/debian $(lsb_release -cs) contrib"
```
If you have read my article on [checking Ubuntu version][11], you probably know that lsb_release -cs will print the codename of your Ubuntu system.
**Note** : If you see the [add-apt-repository command not found][12] error, you'll have to install the software-properties-common package.
Now that you have the correct repository added, refresh the list of available packages through these repositories and install VirtualBox.
```
sudo apt update && sudo apt install virtualbox-6.0
```
**Tip** : A good idea would be to type sudo apt install **virtualbox** and hit tab to see the various VirtualBox versions available for installation and then select one of them by typing it completely.
![Install VirtualBox via terminal][13]
### How to remove VirtualBox from Ubuntu
Now that you have learned to install VirtualBox, I would also mention the steps to remove it.
If you installed it from the Software Center, the easiest way to remove the application is from the Software Center itself. You just have to find it in the [list of installed applications][14] and click the Remove button.
Another way is to use the command line.
```
sudo apt remove virtualbox virtualbox-*
```
Note that this will not remove the virtual machines and the files associated with the operating systems you installed using VirtualBox. That's not entirely a bad thing, because you may want to keep them to use later or on some other system.
**In the end…**
I hope you were able to pick one of the methods to install VirtualBox. I'll also write about using it effectively in another article. For the moment, if you have any tips, suggestions, or questions, feel free to leave a comment below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-virtualbox-ubuntu
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://www.virtualbox.org
[2]: https://itsfoss.com/install-vmware-player-ubuntu-1310/
[3]: https://itsfoss.com/install-linux-in-virtualbox/
[4]: https://itsfoss.com/install-windows-10-virtualbox-linux/
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/linux-inside-linux-virtualbox.png?resize=800%2C450&ssl=1
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/virtualbox-ubuntu-software-center.jpg?ssl=1
[7]: https://itsfoss.com/know-program-version-before-install-ubuntu/
[8]: https://itsfoss.com/oracle-virtualbox-release/
[9]: https://itsfoss.com/install-deb-files-ubuntu/
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/virtualbox-download.jpg?resize=800%2C433&ssl=1
[11]: https://itsfoss.com/how-to-know-ubuntu-unity-version/
[12]: https://itsfoss.com/add-apt-repository-command-not-found/
[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/install-virtualbox-ubuntu-terminal.png?resize=800%2C165&ssl=1
[14]: https://itsfoss.com/list-installed-packages-ubuntu/

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Netboot a Fedora Live CD)
[#]: via: (https://fedoramagazine.org/netboot-a-fedora-live-cd/)
[#]: author: (Gregory Bartholomew https://fedoramagazine.org/author/glb/)
Netboot a Fedora Live CD
======
![](https://fedoramagazine.org/wp-content/uploads/2019/02/netboot-livecd-816x345.jpg)
[Live CDs][1] are useful for many tasks such as:
* installing the operating system to a hard drive
* repairing a boot loader or performing other rescue-mode operations
* providing a consistent and minimal environment for web browsing
* …and [much more][2].
As an alternative to using DVDs and USB drives to store your Live CD images, you can upload them to an [iSCSI][3] server where they will be less likely to get lost or damaged. This guide shows you how to load your Live CD images onto an iSCSI server and access them with the [iPXE][4] boot loader.
### Download a Live CD Image
```
$ MY_RLSE=27
$ MY_LIVE=$(wget -q -O - https://dl.fedoraproject.org/pub/archive/fedora/linux/releases/$MY_RLSE/Workstation/x86_64/iso | perl -ne '/(Fedora[^ ]*?-Live-[^ ]*?\.iso)(?{print $^N})/;')
$ MY_NAME=fc$MY_RLSE
$ wget -O $MY_NAME.iso https://dl.fedoraproject.org/pub/archive/fedora/linux/releases/$MY_RLSE/Workstation/x86_64/iso/$MY_LIVE
```
The above commands download the Fedora-Workstation-Live-x86_64-27-1.6.iso Fedora Live image and save it as fc27.iso. Change the value of MY_RLSE to download other archived versions. Or you can browse to <https://getfedora.org/> to download the latest Fedora live image. Versions prior to 21 used different naming conventions, and must be [downloaded manually here][5]. If you download a Live CD image manually, set the MY_NAME variable to the basename of the file without the extension. That way the commands in the following sections will reference the correct file.
### Convert the Live CD Image
Use the livecd-iso-to-disk tool to convert the ISO file to a disk image and add the netroot parameter to the embedded kernel command line:
```
$ sudo dnf install -y livecd-tools
$ MY_SIZE=$(du -ms $MY_NAME.iso | cut -f 1)
$ dd if=/dev/zero of=$MY_NAME.img bs=1MiB count=0 seek=$(($MY_SIZE+512))
$ MY_SRVR=server-01.example.edu
$ MY_RVRS=$(echo $MY_SRVR | tr '.' "\n" | tac | tr "\n" '.' | cut -b -${#MY_SRVR})
$ MY_LOOP=$(sudo losetup --show --nooverlap --find $MY_NAME.img)
$ sudo livecd-iso-to-disk --format --extra-kernel-args netroot=iscsi:$MY_SRVR:::1:iqn.$MY_RVRS:$MY_NAME $MY_NAME.iso $MY_LOOP
$ sudo losetup -d $MY_LOOP
```
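A note on the `MY_RVRS` line above: it reverses the dot-separated components of the hostname, producing the reversed-domain prefix that iSCSI qualified names (IQNs) use. You can check the pipeline in isolation:

```
# server-01.example.edu -> edu.example.server-01
MY_SRVR=server-01.example.edu
MY_RVRS=$(echo $MY_SRVR | tr '.' "\n" | tac | tr "\n" '.' | cut -b -${#MY_SRVR})
echo "$MY_RVRS"
```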
### Upload the Live Image to your Server
Create a directory on your iSCSI server to store your live images and then upload your modified image to it.
**For releases 21 and greater:**
```
$ MY_FLDR=/images
$ scp $MY_NAME.img $MY_SRVR:$MY_FLDR/
```
**For releases prior to 21:**
```
$ MY_FLDR=/images
$ MY_LOOP=$(sudo losetup --show --nooverlap --find --partscan $MY_NAME.img)
$ sudo tune2fs -O ^has_journal ${MY_LOOP}p1
$ sudo e2fsck ${MY_LOOP}p1
$ sudo dd status=none if=${MY_LOOP}p1 | ssh $MY_SRVR "dd of=$MY_FLDR/$MY_NAME.img"
$ sudo losetup -d $MY_LOOP
```
### Define the iSCSI Target
Run the following commands on your iSCSI server:
```
$ sudo -i
# MY_NAME=fc27
# MY_FLDR=/images
# MY_SRVR=`hostname`
# MY_RVRS=$(echo $MY_SRVR | tr '.' "\n" | tac | tr "\n" '.' | cut -b -${#MY_SRVR})
# cat << END > /etc/tgt/conf.d/$MY_NAME.conf
<target iqn.$MY_RVRS:$MY_NAME>
backing-store $MY_FLDR/$MY_NAME.img
readonly 1
allow-in-use yes
</target>
END
# tgt-admin --update ALL
```
### Create a Bootable USB Drive
The [iPXE][4] boot loader has a [sanboot][6] command you can use to connect to and start the live images hosted on your iSCSI server. It can be compiled in many different [formats][7]. The format that works best depends on the type of hardware you're running. As an example, the following instructions show how to [chain load][8] iPXE from [syslinux][9] on a USB drive.
First, download iPXE and build it in its lkrn format. This should be done as a normal user on a workstation:
```
$ sudo dnf install -y git
$ git clone http://git.ipxe.org/ipxe.git $HOME/ipxe
$ sudo dnf groupinstall -y "C Development Tools and Libraries"
$ cd $HOME/ipxe/src
$ make clean
$ make bin/ipxe.lkrn
$ cp bin/ipxe.lkrn /tmp
```
Next, prepare a USB drive with a MSDOS partition table and a FAT32 file system. The below commands assume that you have already connected the USB drive to be formatted. **Be careful that you do not format the wrong drive!**
```
$ sudo -i
# dnf install -y parted util-linux dosfstools
# echo; find /dev/disk/by-id ! -regex '.*-part.*' -name 'usb-*' -exec readlink -f {} \; | xargs -i bash -c "parted -s {} unit MiB print | perl -0 -ne '/^Model: ([^(]*).*\n.*?([0-9]*MiB)/i && print \"Found: {} = \$2 \$1\n\"'"; echo; read -e -i "$(find /dev/disk/by-id ! -regex '.*-part.*' -name 'usb-*' -exec readlink -f {} \; -quit)" -p "Drive to format: " MY_USB
# umount $MY_USB?
# wipefs -a $MY_USB
# parted -s $MY_USB mklabel msdos mkpart primary fat32 1MiB 100% set 1 boot on
# mkfs -t vfat -F 32 ${MY_USB}1
```
Finally, install syslinux on the USB drive and configure it to chain load iPXE:
```
# dnf install -y syslinux-nonlinux
# syslinux -i ${MY_USB}1
# dd if=/usr/share/syslinux/mbr.bin of=${MY_USB}
# MY_MNT=$(mktemp -d)
# mount ${MY_USB}1 $MY_MNT
# MY_NAME=fc27
# MY_SRVR=server-01.example.edu
# MY_RVRS=$(echo $MY_SRVR | tr '.' "\n" | tac | tr "\n" '.' | cut -b -${#MY_SRVR})
# cat << END > $MY_MNT/syslinux.cfg
ui menu.c32
default $MY_NAME
timeout 100
menu title SYSLINUX
label $MY_NAME
menu label ${MY_NAME^^}
kernel ipxe.lkrn
append dhcp && sanboot iscsi:$MY_SRVR:::1:iqn.$MY_RVRS:$MY_NAME
END
# cp /usr/share/syslinux/menu.c32 $MY_MNT
# cp /usr/share/syslinux/libutil.c32 $MY_MNT
# cp /tmp/ipxe.lkrn $MY_MNT
# umount ${MY_USB}1
```
You should be able to use this same USB drive to netboot additional iSCSI targets simply by editing the syslinux.cfg file and adding additional menu entries.
This is just one method of loading iPXE. You could install syslinux directly on your workstation. Another option is to compile iPXE as an EFI executable and place it directly in your [ESP][10]. Yet another is to compile iPXE as a PXE loader and place it on your TFTP server to be referenced by DHCP. The best option depends on your environment.
### Final Notes
* You may want to add the filename \EFI\BOOT\grubx64.efi parameter to the sanboot command if you compile iPXE in its EFI format.
* It is possible to create custom live images. Refer to [Creating and using live CD][11] for more information.
* It is possible to add the overlay-size-mb and home-size-mb parameters to the livecd-iso-to-disk command to create live images with persistent storage. However, if you have multiple concurrent users, youll need to set up your iSCSI server to manage separate per-user writeable overlays. This is similar to what was shown in the “[How to Build a Netboot Server, Part 4][12]” article.
* The live images support a persistenthome option on their kernel command line (e.g. persistenthome=LABEL=HOME). Used together with CHAP-authenticated iSCSI targets, the persistenthome option provides an interesting alternative to NFS for centralized home directories.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/netboot-a-fedora-live-cd/
作者:[Gregory Bartholomew][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/glb/
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Live_CD
[2]: https://en.wikipedia.org/wiki/Live_CD#Uses
[3]: https://en.wikipedia.org/wiki/ISCSI
[4]: https://ipxe.org/
[5]: https://dl.fedoraproject.org/pub/archive/fedora/linux/releases/
[6]: http://ipxe.org/cmd/sanboot/
[7]: https://ipxe.org/appnote/buildtargets#boot_type
[8]: https://en.wikipedia.org/wiki/Chain_loading
[9]: https://www.syslinux.org/wiki/index.php?title=SYSLINUX
[10]: https://en.wikipedia.org/wiki/EFI_system_partition
[11]: https://docs.fedoraproject.org/en-US/quick-docs/creating-and-using-a-live-installation-image/#proc_creating-and-using-live-cd
[12]: https://fedoramagazine.org/how-to-build-a-netboot-server-part-4/

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (All about {Curly Braces} in Bash)
[#]: via: (https://www.linux.com/blog/learn/2019/2/all-about-curly-braces-bash)
[#]: author: (Paul Brown https://www.linux.com/users/bro66)
All about {Curly Braces} in Bash
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/curly-braces-1920.jpg?itok=cScRhWrX)
At this stage of our Bash basics series, it would be hard not to see some crossover between topics. For example, you have already seen a lot of brackets in the examples we have shown over the past several weeks, but the focus has been elsewhere.
For the next phase of the series, we'll take a closer look at brackets (curly, curvy, or straight), how to use them, and what they do depending on where you use them. We will also tackle other ways of enclosing things, like when to use quotes, double-quotes, and backquotes.
This week, we're looking at curly brackets or _braces_ : `{}`.
### Array Builder
You have already encountered curly brackets before in [The Meaning of Dot][1]. There, the focus was on the use of the dot/period (`.`), but using braces to build a sequence was equally important.
As we saw then:
```
echo {0..10}
```
prints out the numbers from 0 to 10. Using:
```
echo {10..0}
```
prints out the same numbers, but in reverse order. And,
```
echo {10..0..2}
```
prints every second number, starting with 10 and making its way backwards to 0.
Then,
```
echo {z..a..2}
```
prints every second letter, starting with _z_ and working its way backwards until _a_.
And so on and so forth.
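Brace expansion happens before the command runs, so the expanded list can be captured and checked like any other output. A quick sanity check (plain Bash):

```
# The shell expands the braces first, then hands echo the finished list of words
countdown=$(echo {10..0..2})
echo "$countdown"   # 10 8 6 4 2 0
```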
Another thing you can do is combine two or more sequences:
```
echo {a..z}{a..z}
```
This prints out all the two letter combinations of the alphabet, from _aa_ to _zz_.
Is this useful? Well, actually it is. You see, arrays in Bash are defined by putting elements between parentheses `()` and separating each element using a space, like this:
```
month=("Jan" "Feb" "Mar" "Apr" "May" "Jun" "Jul" "Aug" "Sep" "Oct" "Nov" "Dec")
```
To access an element within the array, you use its index within brackets `[]`:
```
$ echo ${month[3]} # Array indexes start at [0], so [3] points to the fourth item
Apr
```
You can accept all those brackets, parentheses, and braces on faith for a moment. We'll talk about them presently.
Notice that, all things being equal, you can create an array with something like this:
```
letter_combos=({a..z}{a..z})
```
and `letter_combos` points to an array that contains all the 2-letter combinations of the entire alphabet.
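One way to convince yourself the expansion worked is to ask Bash how many elements it created:

```
# 26 letters in each position gives 26 * 26 = 676 combinations
letter_combos=({a..z}{a..z})
echo "${#letter_combos[@]}"                        # 676
echo "${letter_combos[0]} ${letter_combos[675]}"   # aa zz
```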
You can also do this:
```
dec2bin=({0..1}{0..1}{0..1}{0..1}{0..1}{0..1}{0..1}{0..1})
```
This last one is particularly interesting because `dec2bin` now contains all the binary numbers for an 8-bit register, in ascending order, starting with 00000000, 00000001, 00000010, etc., until reaching 11111111. You can use this to build yourself an 8-bit decimal-to-binary converter. Say you want to know what 25 is in binary. You can do this:
```
$ echo ${dec2bin[25]}
00011001
```
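Going the other direction needs no array at all: Bash arithmetic accepts a base prefix, so a binary string converts straight back to decimal.

```
# 2# tells Bash the number that follows is written in base 2
echo $((2#00011001))   # 25
```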
Yes, there are better ways of converting decimal to binary as we saw in [the article where we discussed & as a logical operator][2], but it is still interesting, right?
### Parameter expansion
Getting back to
```
echo ${month[3]}
```
Here the braces `{}` are not being used as part of a sequence builder, but as a way of triggering _parameter expansion_. Parameter expansion does what it says on the box: it takes the variable or expression within the braces and expands it to whatever it represents.
In this case, `month` is the array we defined earlier, that is:
```
month=("Jan" "Feb" "Mar" "Apr" "May" "Jun" "Jul" "Aug" "Sep" "Oct" "Nov" "Dec")
```
And, item 3 within the array points to `"Apr"` (remember: the first index in an array in Bash is `[0]`). That means that `echo ${month[3]}`, after the expansion, translates to `echo "Apr"`.
Interpreting a variable as its value is one way of expanding it, but there are a few more you can leverage. You can use parameter expansion to manipulate what you read from a variable, say, by cutting a chunk off the end.
Suppose you have a variable like:
```
a="Too longgg"
```
The command:
```
echo ${a%gg}
```
chops off the last two gs and prints " _Too long_ ".
Breaking this down,
* `${...}` tells the shell to expand whatever is inside it
* `a` is the variable you are working with
* `%` tells the shell you want to chop something off the end of the expanded variable ("Too longgg")
* and `gg` is what you want to chop off.
This can be useful for converting files from one format to another. Allow me to explain with a slight digression:
[ImageMagick][3] is a set of command line tools that lets you manipulate and modify images. One of the most useful tools ImageMagick comes with is `convert`. In its simplest form, `convert` allows you to take an image in one format and make a copy of it in another format.
The following command takes a JPEG image called _image.jpg_ and creates a PNG copy called _image.png_ :
```
convert image.jpg image.png
```
ImageMagick is often pre-installed on most Linux distros. If you can't find it, look for it in your distro's software manager.
Okay, end of digression. On to the example:
With variable expansion, you can do the same as shown above like this:
```
i=image.jpg
convert $i ${i%jpg}png
```
What you are doing here is chopping off the extension `jpg` from `i` and then adding `png`, making the command `convert image.jpg image.png`.
You may be wondering how this is more useful than just writing in the name of the file. Well, when you have a directory containing hundreds of JPEG images that you need to convert to PNG, run the following in it:
```
for i in *.jpg; do convert $i ${i%jpg}png; done
```
... and, hey presto! All the pictures get converted automatically.
If you need to chop off a chunk from the beginning of a variable, instead of `%`, use `#`:
```
$ a="Hello World!"
$ echo Goodbye${a#Hello}
Goodbye World!
```
There's quite a bit more to parameter expansion, but a lot of it makes sense only when you are writing scripts. We'll explore more on that topic later in this series.
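As a small taste of what's to come, here are two more expansions applied to a variable — the string length and pattern substitution forms, both standard Bash:

```
a="Too longgg"
echo "${#a}"            # 10 - the length of the expanded string
echo "${a/long/short}"  # "Too shortgg" - replace the first match of a pattern
```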
### Output Grouping
Meanwhile, let's finish up with something simple: you can also use `{ ... }` to group the output from several commands into one big blob. The command:
```
echo "I found all these PNGs:"; find . -iname "*.png"; echo "Within this bunch of files:"; ls > PNGs.txt
```
will execute all the commands but will only copy into the _PNGs.txt_ file the output from the last `ls` command in the list. However, doing
```
{ echo "I found all these PNGs:"; find . -iname "*.png"; echo "Within this bunch of files:"; ls; } > PNGs.txt
```
creates the file _PNGs.txt_ with everything, starting with the line " _I found all these PNGs:_ ", then the list of PNG files returned by `find`, then the line "Within this bunch of files:" and finishing up with the complete list of files and directories within the current directory.
Notice that there is a space between the braces and the commands enclosed within them. That's because `{` and `}` are _reserved words_ here, commands built into the shell. They would roughly translate to " _group the outputs of all these commands together_ " in plain English.
Also notice that the list of commands has to end with a semicolon (`;`) or the whole thing will bork.
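One related detail worth knowing: parentheses `( ... )` also group commands, but they do so in a subshell, so any side effects are thrown away when the group finishes:

```
x=1
{ x=2; }    # braces run in the current shell: x really changes
( x=3 )     # parentheses run in a subshell: the change is discarded
echo "$x"   # 2
```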
### Next Time
In our next installment, we'll be looking at more things that enclose other things, but of different shapes. Until then, have fun!
Read more:
[And, Ampersand, and & in Linux][4]
[Ampersands and File Descriptors in Bash][5]
[Logical & in Bash][2]
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/learn/2019/2/all-about-curly-braces-bash
作者:[Paul Brown][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/users/bro66
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/blog/learn/2019/1/linux-tools-meaning-dot
[2]: https://www.linux.com/blog/learn/2019/2/logical-ampersand-bash
[3]: http://www.imagemagick.org/
[4]: https://www.linux.com/blog/learn/2019/2/and-ampersand-and-linux
[5]: https://www.linux.com/blog/learn/2019/2/ampersands-and-file-descriptors-bash

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To SSH Into A Particular Directory On Linux)
[#]: via: (https://www.ostechnix.com/how-to-ssh-into-a-particular-directory-on-linux/)
[#]: author: (SK https://www.ostechnix.com/author/sk/)
How To SSH Into A Particular Directory On Linux
======
![](https://www.ostechnix.com/wp-content/uploads/2019/02/SSH-Into-A-Particular-Directory-720x340.png)
Have you ever been in a situation where you want to SSH to a remote server and immediately cd into a directory and continue working interactively? You're on the right track! This brief tutorial describes how to SSH directly into a particular directory of a remote Linux system. Not only can you SSH into a specific directory, you can also run any command immediately after connecting to an SSH server, as described in this guide. It is not as difficult as you might think. Read on.
### SSH Into A Particular Directory Of A Remote System
Before I knew this method, I would usually first SSH to the remote system using the command:
```
$ ssh user@remote-system
```
And then cd into a directory like below:
```
$ cd <some-directory>
```
However, you need not use two separate commands. You can combine them and simplify the task with a single command.
Have a look at the following example.
```
$ ssh -t sk@192.168.225.22 'cd /home/sk/ostechnix ; bash'
```
The above command will SSH into a remote system (192.168.225.22), immediately cd into the **/home/sk/ostechnix/** directory, and leave you at the prompt.
Here, the **-t** flag is used to force pseudo-terminal allocation, which is necessary for an interactive shell.
Here is the sample output of the above command:
![](https://www.ostechnix.com/wp-content/uploads/2019/02/ssh-1.gif)
You can also use this command as well.
```
$ ssh -t sk@192.168.225.22 'cd /home/sk/ostechnix ; exec bash'
```
Or,
```
$ ssh -t sk@192.168.225.22 'cd /home/sk/ostechnix && exec bash -l'
```
Here, the **-l** flag starts bash as a login shell.
In the above example, I have used **bash** as the last argument. It is the default shell on my remote system. If you don't know the shell type on the remote system, use the following command:
```
$ ssh -t sk@192.168.225.22 'cd /home/sk/ostechnix && exec $SHELL'
```
Like I already said, this trick is not just for cd'ing into a directory after connecting to a remote system. You can use it to run other commands as well. For example, the following command will land you inside the /home/sk/ostechnix/ directory and then execute the uname -a command.
```
$ ssh -t sk@192.168.225.22 'cd /home/sk/ostechnix && uname -a && exec $SHELL'
```
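If you do this often, the long command is easy to wrap in a small shell function — the name `sshcd` below is just an illustration, not a standard tool:

```
# Hypothetical helper: sshcd <user@host> <remote-directory>
sshcd() {
  ssh -t "$1" "cd '$2' && exec \$SHELL -l"
}
# Usage: sshcd sk@192.168.225.22 /home/sk/ostechnix
```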
Alternatively, you can add the command(s) you want to run after connecting to an SSH server to the remote system's **.bash_profile** file.
Edit **.bash_profile** file:
```
$ nano ~/.bash_profile
```
Add the command(s) one by one. In my case, I am adding the following line:
```
cd /home/sk/ostechnix >& /dev/null
```
Save and close the file. Finally, run the following command to update the changes.
```
$ source ~/.bash_profile
```
Please note that you should add this line to the remote system's **.bash_profile** or **.bashrc** file, not your local system's. From now on, whenever you log in (whether by SSH or directly), the cd command will execute and you will automatically land inside the “/home/sk/ostechnix/” directory.
And, that's all for now. Hope this was useful. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-ssh-into-a-particular-directory-on-linux/
作者:[SK][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Linux security: Cmd provides visibility, control over user activity)
[#]: via: (https://www.networkworld.com/article/3342454/linux-security-cmd-provides-visibility-control-over-user-activity.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Linux security: Cmd provides visibility, control over user activity
======
![](https://images.techhive.com/images/article/2017/01/background-1900329_1920-100705659-large.jpg)
There's a new Linux security tool you should be aware of — Cmd (pronounced "see em dee") dramatically modifies the kind of control that can be exercised over Linux users. It reaches way beyond the traditional configuration of user privileges and takes an active role in monitoring and controlling the commands that users are able to run on Linux systems.
Provided by a company of the same name, Cmd focuses on cloud usage. Given the increasing number of applications being migrated into cloud environments that rely on Linux, gaps in the available tools make it difficult to adequately enforce required security. However, Cmd can also be used to manage and protect on-premises systems.
### How Cmd differs from traditional Linux security controls
The leaders at Cmd — Milun Tesovic and Jake King — say organizations cannot confidently predict or control user behavior until they understand how users work routinely and what is considered “normal.” They seek to provide a tool that will granularly control, monitor, and authenticate user activity.
Cmd monitors user activity by forming user activity profiles (characterizing the activities these users generally perform), noticing abnormalities in their online behavior (login times, commands used, user locations, etc.), and preventing and reporting certain activities (e.g., downloading or modifying files and running privileged commands) that suggest some kind of system compromise might be underway. The product's behaviors are configurable and changes can be made rapidly.
The kind of tools most of us are using today to detect threats, identify vulnerabilities, and control user privileges have taken us a long way, but we are still fighting the battle to keep our systems and data safe. Cmd brings us a lot closer to identifying the intentions of hostile users whether those users are people who have managed to break into accounts or represent insider threats.
![1 sources live sessions][1]
View live Linux sessions
### How does Cmd work?
In monitoring and managing user activity, Cmd:
* Collects information that profiles user activity
* Uses the baseline to determine what is considered normal
* Detects and proactively prevents threats using specific indicators
* Sends alerts to responsible people
![2 triggers][3]
Building custom policies in Cmd
Cmd goes beyond defining what sysadmins can control through traditional methods, such as configuring sudo privileges, providing much more granular and situation-specific controls.
Administrators can select escalation policies that can be managed separately from the user privilege controls managed by Linux sysadmins.
The Cmd agent provides real-time visibility (not after-the-fact log analysis) and can block actions, require additional authentication, or negotiate authorization as needed.
Also, Cmd supports custom rules based on geolocation if user locations are available. And new policies can be pushed to agents deployed on hosts within minutes.
![3 command blocked][4]
Building a trigger query in Cmd
### Funding news for Cmd
[Cmd][2] recently got a financial boost, having [completed a $15 million round of funding][5] led by [GV][6] (formerly Google Ventures) with participation from Expa, Amplify Partners, and additional strategic investors. This brings the company's raised funding to $21.6 million and will help it continue to add new defensive capabilities to the product and grow its engineering teams.
In addition, the company appointed Karim Faris, general partner at GV, to its board of directors.
Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3342454/linux-security-cmd-provides-visibility-control-over-user-activity.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/1-sources-live-sessions-100789431-large.jpg
[2]: https://cmd.com
[3]: https://images.idgesg.net/images/article/2019/02/2-triggers-100789432-large.jpg
[4]: https://images.idgesg.net/images/article/2019/02/3-command-blocked-100789433-large.jpg
[5]: https://www.linkedin.com/pulse/changing-cybersecurity-announcing-cmds-15-million-funding-jake-king/
[6]: https://www.gv.com/
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Check Password Complexity/Strength And Score In Linux?)
[#]: via: (https://www.2daygeek.com/how-to-check-password-complexity-strength-and-score-in-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
How To Check Password Complexity/Strength And Score In Linux?
======
We all know the importance of passwords. It is a best practice to use strong, hard-to-guess passwords.
Also, I advise you to use a different password for each service, such as email, FTP, SSH, etc.
On top of that, I suggest you change your passwords frequently to avoid unnecessary hacking attempts.
By default, RHEL and its clones use the `cracklib` module to check password strength.
We are going to show you how to check password strength using the cracklib module.
If you would like to check the score of a password you have created, use the `pwscore` package.
A good password should be at least 12-15 characters long.
It should combine lower-case and upper-case letters, numbers, and special characters.
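The length and combination rules above can be sketched as a small shell helper. This is purely illustrative (the function name and messages are invented here) and is not a replacement for the cracklib checks discussed in this article:

```shell
#!/bin/sh
# Illustrative password check: at least 12 characters, with lower-case
# and upper-case letters, numbers and special characters.
check_password() {
    pw=$1
    [ "${#pw}" -ge 12 ] || { echo "too short"; return 1; }
    printf '%s\n' "$pw" | grep -q '[a-z]'        || { echo "add a lower-case letter"; return 1; }
    printf '%s\n' "$pw" | grep -q '[A-Z]'        || { echo "add an upper-case letter"; return 1; }
    printf '%s\n' "$pw" | grep -q '[0-9]'        || { echo "add a number"; return 1; }
    printf '%s\n' "$pw" | grep -q '[^A-Za-z0-9]' || { echo "add a special character"; return 1; }
    echo "OK"
}

check_password 'Me!fgty67231'   # prints: OK
check_password 'weak'           # prints: too short
```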
Many utilities are available in Linux to check password complexity; today we are going to discuss the `cracklib` module.
### How To Install cracklib module In Linux?
The cracklib module is available in most distributions' repositories, so use your distribution's official package manager to install it.
For **`Fedora`** system, use **[DNF Command][1]** to install cracklib.
```
$ sudo dnf install cracklib
```
For **`Debian/Ubuntu`** systems, use **[APT-GET Command][2]** or **[APT Command][3]** to install libcrack2.
```
$ sudo apt install libcrack2
```
For **`Arch Linux`** based systems, use **[Pacman Command][4]** to install cracklib.
```
$ sudo pacman -S cracklib
```
For **`RHEL/CentOS`** systems, use **[YUM Command][5]** to install cracklib.
```
$ sudo yum install cracklib
```
For **`openSUSE Leap`** system, use **[Zypper Command][6]** to install cracklib.
```
$ sudo zypper install cracklib
```
### How To Use The cracklib module In Linux To Check Password Complexity?
I have added a few examples in this article to help you better understand this module.
If you give it a word such as a person's name, a place name, or a common word, you will get the message “it is based on a dictionary word”.
```
$ echo "password" | cracklib-check
password: it is based on a dictionary word
```
The default minimum password length in Linux is seven characters. If you give a password with fewer than seven characters, you will get the message “it is WAY too short”.
```
$ echo "123" | cracklib-check
123: it is WAY too short
```
You will get `OK` when you give it a strong password. Note the single quotes below: inside double quotes, the shell would expand `$` and `!` before `cracklib-check` ever sees the password.
```
$ echo 'ME!@fgty6723' | cracklib-check
ME!@fgty6723: OK
```
### How To Install pwscore In Linux?
The pwscore utility is provided by the libpwquality package, which is available in most distributions' official repositories, so use your distribution's package manager to install it.
For **`Fedora`** system, use **[DNF Command][1]** to install libpwquality.
```
$ sudo dnf install libpwquality
```
For **`Debian/Ubuntu`** systems, use **[APT-GET Command][2]** or **[APT Command][3]** to install libpwquality.
```
$ sudo apt install libpwquality
```
For **`Arch Linux`** based systems, use **[Pacman Command][4]** to install libpwquality.
```
$ sudo pacman -S libpwquality
```
For **`RHEL/CentOS`** systems, use **[YUM Command][5]** to install libpwquality.
```
$ sudo yum install libpwquality
```
For **`openSUSE Leap`** system, use **[Zypper Command][6]** to install libpwquality.
```
$ sudo zypper install libpwquality
```
If you give it a word such as a person's name, a place name, or a common word, you will get the message “it is based on a dictionary word”.
```
$ echo "password" | pwscore
Password quality check failed:
The password fails the dictionary check - it is based on a dictionary word
```
By default, pwscore requires a minimum of eight characters. If you give a shorter password, you will get the message “The password is shorter than 8 characters”.
```
$ echo "123" | pwscore
Password quality check failed:
The password is shorter than 8 characters
```
You will get a password score when you give it a strong password.
```
$ echo 'ME!@fgty6723' | pwscore
90
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/how-to-check-password-complexity-strength-and-score-in-linux/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
[2]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
[3]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
[4]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
[5]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
[6]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Find Available Network Interfaces On Linux)
[#]: via: (https://www.ostechnix.com/how-to-find-available-network-interfaces-on-linux/)
[#]: author: (SK https://www.ostechnix.com/author/sk/)
How To Find Available Network Interfaces On Linux
======
![](https://www.ostechnix.com/wp-content/uploads/2019/02/network-interface-720x340.jpeg)
One of the common tasks we perform after installing a Linux system is network configuration. Of course, you can configure network interfaces during installation. But some of you might prefer to do it after installation, or to change the existing settings. As you already know, you must first know how many interfaces are available on the system in order to configure network settings from the command line. This brief tutorial addresses all the possible ways to find available network interfaces on Linux and Unix operating systems.
### Find Available Network Interfaces On Linux
We can find the available network cards in a couple of ways.
**Method 1 - Using ifconfig Command:**
The most commonly used method to find network interface details is the **ifconfig** command. I believe some Linux users still rely on it.
```
$ ifconfig -a
```
Sample output:
```
enp5s0: flags=4098<BROADCAST,MULTICAST> mtu 1500
ether 24:b6:fd:37:8b:29 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 171420 bytes 303980988 (289.8 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 171420 bytes 303980988 (289.8 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
wlp9s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.225.37 netmask 255.255.255.0 broadcast 192.168.225.255
inet6 2409:4072:6183:c604:c218:85ff:fe50:474f prefixlen 64 scopeid 0x0<global>
inet6 fe80::c218:85ff:fe50:474f prefixlen 64 scopeid 0x20<link>
ether c0:18:85:50:47:4f txqueuelen 1000 (Ethernet)
RX packets 564574 bytes 628671925 (599.5 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 299706 bytes 60535732 (57.7 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
```
As you can see in the above output, I have two network interfaces, namely **enp5s0** (the on-board wired Ethernet adapter) and **wlp9s0** (the wireless network adapter), on my Linux box. Here, **lo** is the loopback interface, which is used to access network services locally. It has the IP address 127.0.0.1.
We can also use the same ifconfig command in many UNIX variants, for example **FreeBSD**, to list available network cards.
**Method 2 - Using ip Command:**
The ifconfig command is deprecated in recent Linux versions, so you can use the **ip** command to display the network interfaces, as shown below.
```
$ ip link show
```
Sample output:
```
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp5s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 24:b6:fd:37:8b:29 brd ff:ff:ff:ff:ff:ff
3: wlp9s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DORMANT group default qlen 1000
link/ether c0:18:85:50:47:4f brd ff:ff:ff:ff:ff:ff
```
![](https://www.ostechnix.com/wp-content/uploads/2019/02/ip-command.png)
You can also use the following commands:
```
$ ip addr
$ ip -s link
```
Did you notice that these commands also show the connection state of the network interfaces? If you look closely at the above output, you will notice that my Ethernet card is not connected to a network cable (see the word **“DOWN”**), while the wireless card is connected (see **“UP”**). For more details, check our previous guide to [**find the connected state of network interfaces on Linux**][1].
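Recent iproute2 versions also accept a `-br` (brief) flag, which condenses the same information to one line per interface. If your version supports it, this is handy for quick checks and scripts:

```shell
# One line per interface: name, state and MAC address
# (requires a reasonably recent iproute2).
ip -br link show
```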
These two commands (ifconfig and ip) are enough to find the available network cards on your Linux system.
However, a few other methods are also available to list network interfaces on Linux. Here you go.
**Method 3:**
The Linux kernel exposes network interface details under the **/sys/class/net** directory. You can list the available interfaces by looking into this directory.
```
$ ls /sys/class/net
```
Output:
```
enp5s0 lo wlp9s0
```
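Each entry under that directory also exposes per-interface attributes. For example, this short loop (a sketch assuming the standard sysfs layout) prints every interface together with its operational state:

```shell
# Print each interface name with its operational state,
# as reported by the kernel in /sys/class/net/<iface>/operstate.
for path in /sys/class/net/*; do
    iface=${path##*/}
    read -r state < "$path/operstate"
    printf '%-10s %s\n' "$iface" "$state"
done
```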
**Method 4:**
In Linux, the **/proc/net/dev** file contains statistics for each network interface.
To view the available network cards, just print its contents with:
```
$ cat /proc/net/dev
```
Output:
```
Inter-| Receive | Transmit
face |bytes packets errs drop fifo frame compressed multicast|bytes packets errs drop fifo colls carrier compressed
wlp9s0: 629189631 566078 0 0 0 0 0 0 60822472 300922 0 0 0 0 0 0
enp5s0: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
lo: 303980988 171420 0 0 0 0 0 0 303980988 171420 0 0 0 0 0 0
```
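Since each data line of that file starts with the interface name followed by a colon, the names alone can be extracted with a short awk one-liner (just a convenience sketch):

```shell
# Skip the two header lines, then print the first colon-separated
# field with any padding spaces removed.
awk -F: 'NR > 2 { gsub(/ /, "", $1); print $1 }' /proc/net/dev
```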
**Method 5: Using netstat command**
The **netstat** command displays various details such as network connections, routing tables, interface statistics, masquerade connections, and multicast memberships.
```
$ netstat -i
```
**Sample output:**
```
Kernel Interface table
Iface MTU RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
lo 65536 171420 0 0 0 171420 0 0 0 LRU
wlp9s0 1500 565625 0 0 0 300543 0 0 0 BMRU
```
Please be mindful that netstat is obsolete; the replacement for “netstat -i” is “ip -s link”. Also note that this method lists only the active interfaces, not all available interfaces.
**Method 6: Using nmcli command**
nmcli is a command-line tool for controlling NetworkManager and reporting network status. It is used to create, display, edit, delete, activate, and deactivate network connections, and to display network status.
If your Linux system has NetworkManager installed, you can list the available network interfaces with the nmcli tool using either of the following commands:
```
$ nmcli device status
```
Or,
```
$ nmcli connection show
```
You now know how to find the available network interfaces on Linux. Next, check the following guides to learn how to configure an IP address on Linux.
[How To Configure Static IP Address In Linux And Unix][2]
[How To Configure IP Address In Ubuntu 18.04 LTS][3]
[How To Configure Static And Dynamic IP Address In Arch Linux][4]
[How To Assign Multiple IP Addresses To Single Network Card In Linux][5]
If you know any other quick ways to do it, please share them in the comment section below. I will check and update the guide with your input.
And, that's all. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-find-available-network-interfaces-on-linux/
作者:[SK][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/how-to-find-out-the-connected-state-of-a-network-cable-in-linux/
[2]: https://www.ostechnix.com/configure-static-ip-address-linux-unix/
[3]: https://www.ostechnix.com/how-to-configure-ip-address-in-ubuntu-18-04-lts/
[4]: https://www.ostechnix.com/configure-static-dynamic-ip-address-arch-linux/
[5]: https://www.ostechnix.com/how-to-assign-multiple-ip-addresses-to-single-network-card-in-linux/

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Display Weather Information in Ubuntu 18.04)
[#]: via: (https://itsfoss.com/display-weather-ubuntu)
[#]: author: (Sergiu https://itsfoss.com/author/sergiu/)
How to Display Weather Information in Ubuntu 18.04
======
You've got a fresh Ubuntu install and you're [customizing Ubuntu][1] to your liking. You want the best experience and the best apps for your needs.
The only thing missing is a weather app. Luckily for you, we've got you covered. Just make sure you have the Universe repository enabled.
![Tools to Display Weather Information in Ubuntu Linux][2]
### 8 Ways to Display Weather Information in Ubuntu 18.04
Back in the Unity days, there were a few popular options like My Weather Indicator to display weather on your system. Those options are either discontinued or not available in Ubuntu 18.04 and higher versions anymore.
Fortunately, there are many other options to choose from. Some are minimalist and plain simple to use, some offer detailed information (or even present you with news headlines) and some are made for terminal gurus. Whatever your needs may be, the right app is waiting for you.
**Note:** The presented apps are in no particular order of ranking.
**Top Panel Apps**
These applications usually sit on the top panel of your screen. They are good for a quick look at the temperature.
#### 1\. OpenWeather Shell Extension
![Open Weather Gnome Shell Extesnsion][3]
**Key features:**
* Simple to install and customize
* Uses OpenWeatherMap (by default)
* Many Units and Layout options
* Can save multiple locations (that can easily be changed)
This is a great extension that presents the information in a simple manner. There are multiple ways to install it. It is the weather app I find myself using the most, because it's just a simple, no-hassle weather display integrated into the top panel.
**How to Install:**
I recommend reading this [detailed tutorial about using GNOME extensions][4]. The easiest way to install this extension is to open up a terminal and run:
```
sudo apt install gnome-shell-extension-weather
```
Then restart GNOME Shell: press **Alt+F2**, enter **r** and press **Enter**.
Now open up **Tweaks** (gnome tweak tool) and enable **Openweather** in the **Extensions** tab.
#### 2\. gnome-weather
![Gnome Weather App UI][5]
![Gnome Weather App Top Panel][6]
**Key features:**
* Pleasant Design
* Integrated into Calendar (Top Panel)
* Simple Install
* Flatpak install available
This app is great for new users. The installation is only one command and the app is easy to use. Although it doesn't have as many features as other apps, it is still great if you don't want to bother with multiple settings and a complex install procedure.
**How to Install:**
All you have to do is run:
```
sudo apt install gnome-weather
```
Now search for **Weather** and the app should pop up. After logging out (and logging back in), the Calendar extension will be displayed.
If you prefer, you can get a [flatpak][7] version.
#### 3\. Meteo
![Meteo Weather App UI][8]
![Meteo Weather System Tray][9]
**Key features:**
* Great UI
* Integrated into System Tray (Top Panel)
* Simple Install
* Great features (Maps)
Meteo is a snap app on the heavier side. Most of that weight comes from the great Maps features, with maps showing temperature, cloud cover, precipitation, pressure and wind speed. It's a distinct feature that I haven't encountered in any other weather app.
**Note**: After changing the location, you might have to quit and restart the app for the changes to be applied in the system tray.
**How to Install:**
Open up the **Ubuntu Software Center** and search for **Meteo**. Install and launch.
**Desktop Apps**
These are basically desktop widgets. They look good and provide more information at a glance.
#### 4\. Temps
![Temps Weather App UI][10]
**Key features:**
* Beautiful Design
* Useful Hotkeys
* Hourly Temperature Graph
Temps is an Electron app with a beautiful UI (though not exactly “light”). The most unique features are the temperature graphs. The hotkeys might feel unintuitive at first, but they prove useful in the long run. The app minimizes when you click somewhere else; just press Ctrl+Shift+W to bring it back.
This app is **open source**, and the developer can't afford the cost of a faster API key, so you might want to create your own API key at [OpenWeatherMap][11].
**How to Install:**
Go to the website and download the version you need (probably 64-bit). Extract the archive. Open the extracted directory and double-click on **Temps**. Press Ctrl+Shift+W if the window minimizes.
#### 5\. Cumulus
![Cumulus Weather App UI][12]
**Key features:**
* Color Selector for background and text
* Re-sizable window
* Tray Icon (temperature only)
* Allows multiple instances with different locations etc.
Cumulus is a highly customizable weather app, with a backend supporting Yahoo! Weather and OpenWeatherMap. The UI is great and the installer is simple to use. This app has amazing features; it's one of the few weather apps that allow multiple instances. You should definitely try it if you are looking for an experience tailored to your preferences.
**How to Install:**
Go to the website and download the (online) installer. Open up a terminal and **cd** (change directory) to the directory where you downloaded the file.
Then run
```
chmod +x Cumulus-online-installer-x64
./Cumulus-online-installer-x64
```
Search for **Cumulus** and enjoy the app!
**Terminal Apps**
You are a terminal dweller? You can check the weather right in your terminal.
#### 7\. WeGo
![WeGo Weather App Terminal][13]
**Key features:**
* Supports different APIs
* Pretty detailed
* Customizable config
* Multi-language support
* 1 to 7 day forecast
WeGo is a Go app for displaying weather info in the terminal. Its installation can be a little tricky, but it's easy to set up. You'll need to register an API key [here][14] (if using **forecast.io**, which is the default). Once you set it up, it's fairly practical for someone who mostly works in the terminal.
**How to Install:**
I recommend you to check out the GitHub page for complete information on installation, setup and features.
#### 8\. Wttr.in
![Wttr.in Weather App Terminal][15]
**Key features:**
* Simple install
* Easy to use
* Lightweight
* 3 day forecast
* Moon phase
If you really live in the terminal, this is the weather app for you. It is as lightweight as it gets. You can specify the location (by default the app tries to detect your current location) and a few other parameters (e.g. units).
**How to Install:**
Open up a terminal and install Curl:
```
sudo apt install curl
```
Then:
```
curl wttr.in
```
That's it. You can specify the location and parameters like so:
```
curl wttr.in/london?m
```
To check out other options type:
```
curl wttr.in/:help
```
If you found some settings you enjoy and find yourself using frequently, you might want to add an **alias**. To do so, open **~/.bashrc** with your favorite editor (that's **vim**, terminal wizard). Go to the end and paste in:
```
alias wttr='curl wttr.in/CITY_NAME?YOUR_PARAMS'
```
For example:
```
alias wttr='curl wttr.in/london?m'
```
Save and close **~/.bashrc** and run the command below to source the new file.
```
source ~/.bashrc
```
Now, typing **wttr** in the terminal and pressing Enter should execute your custom command.
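As a variant of the alias, a small shell function in **~/.bashrc** can take the city as an argument (the function name here is my own choice):

```shell
# "wttr" with no argument lets wttr.in auto-detect your location;
# "wttr london" queries London. The ?m parameter selects metric units.
wttr() {
    curl -s "wttr.in/${1}?m"
}
```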
**Wrapping Up**
These are a handful of the weather apps available for Ubuntu. We hope our list helped you discover an app that fits your needs, whether that's something with pleasant aesthetics or just a quick tool.
What is your favorite weather app? Tell us about what you enjoy and why in the comments section.
--------------------------------------------------------------------------------
via: https://itsfoss.com/display-weather-ubuntu
作者:[Sergiu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/sergiu/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/gnome-tricks-ubuntu/
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/display-weather-ubuntu.png?resize=800%2C450&ssl=1
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/open_weather_gnome_shell-1-1.jpg?fit=800%2C383&ssl=1
[4]: https://itsfoss.com/gnome-shell-extensions/
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/gnome_weather_ui.jpg?fit=800%2C599&ssl=1
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/gnome_weather_top_panel.png?fit=800%2C587&ssl=1
[7]: https://flatpak.org/
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/meteo_ui.jpg?fit=800%2C547&ssl=1
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/meteo_system_tray.png?fit=800%2C653&ssl=1
[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/temps_ui.png?fit=800%2C623&ssl=1
[11]: https://openweathermap.org/
[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/cumulus_ui.png?fit=800%2C651&ssl=1
[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/wego_terminal.jpg?fit=800%2C531&ssl=1
[14]: https://developer.forecast.io/register
[15]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/wttr_in_terminal.jpg?fit=800%2C526&ssl=1