Mirror of https://github.com/LCTT/TranslateProject.git (synced 2025-02-25 00:50:15 +08:00)

Merge remote-tracking branch 'LCTT/master'

This commit is contained in commit 8147d2eb01.
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11173-1.html)
[#]: subject: (Install NetData Performance Monitoring Tool On Linux)
[#]: via: (https://www.ostechnix.com/netdata-real-time-performance-monitoring-tool-linux/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)
![][1]

**NetData** is a distributed, real-time performance and health monitoring tool for systems and applications. It provides comprehensive insight into everything happening on a system in real time, and you can view the results in a highly interactive web dashboard. With Netdata, you can clearly see what is happening now, and what happened before, on your systems and applications. You don't need to be an expert to deploy this tool on your Linux systems. NetData works out of the box with zero configuration and zero dependencies. Just install it and sit back; NetData takes care of the rest.

It has its own built-in web server that displays the results in graphical form. NetData is very fast and efficient, and it starts analyzing system performance as soon as it is installed. It is written in the C programming language, so it is extremely lightweight: it uses less than 3% of a single core's CPU and 10-15 MB of RAM. We can easily embed the charts in any existing web page, and it also has a plugin API so that you can monitor any application.

Here is the list of things NetData monitors on a Linux system:
* CPU usage
* RAM usage
* Swap memory usage
* Kernel memory usage
* Hard disks and their usage
* Network interfaces
* IPtables
* Netfilter
* DDoS protection
* Processes
* Applications
* NFS server
* Web servers (Apache and Nginx)
* Database servers (MySQL)
* DHCP server
* DNS server
* Email services
* Proxy server
* Tomcat
* PHP
* SNMP devices
* and more
NetData is a free and open source tool that supports Linux, FreeBSD, and Mac OS.

### Install NetData On Linux

Netdata can be installed on any Linux distribution that has Bash installed.
The easiest way to install Netdata is to run the following command from your terminal:

```
$ bash <(curl -Ss https://my-netdata.io/kickstart-static64.sh)
```
This will download and install everything needed to get Netdata up and running.

Some users may not want to inject something directly into Bash without investigating it first. If you don't like this method, you can follow the steps below to install it on your system.

#### On Arch Linux
The latest version is available in the default repositories of Arch Linux, so we can install it using the following [pacman][2] command:

```
$ sudo pacman -S netdata
```

#### On DEB-based and RPM-based systems

NetData is not available in the default repositories of DEB-based (Ubuntu/Debian) or RPM-based (RHEL/CentOS/Fedora) systems. We need to install NetData manually from its Git repository.

First, install the required dependencies:
Git clone the NetData repository:

```
$ git clone https://github.com/netdata/netdata.git --depth=100
```

The above command will create a directory called `netdata` in the current working directory.

Change into the `netdata` directory:

```
$ cd netdata/
```
Among other things, the installer reports the uninstall script it generates:

```
Uninstall script generated: ./netdata-uninstaller.sh
```

![][3]

*Installing NetData*

NetData has now been installed and started.

To install Netdata on other Linux distributions, see the [official installation instructions page][4].

### Allow NetData's default port through your firewall or router

If your system sits behind a firewall or router, you must allow the default port `19999` in order to access NetData's web interface from any remote system.

#### On Ubuntu/Debian

```
$ sudo ufw allow 19999
```

#### On CentOS/RHEL/Fedora

```
$ sudo firewall-cmd --permanent --add-port=19999/tcp
$ sudo firewall-cmd --reload
```
### Start/stop NetData

To enable and start the Netdata service on systems that use Systemd, run:

```
$ sudo systemctl enable netdata
$ sudo systemctl start netdata
```

To stop it, run:

```
$ sudo systemctl stop netdata
```

To enable and start the Netdata service on systems that use Init, run:

```
$ sudo service netdata start
$ sudo chkconfig netdata on
```

To stop it, run:

```
$ sudo service netdata stop
```
### Access NetData via web browser

Open your web browser and go to `http://127.0.0.1:19999`, `http://localhost:19999/`, or `http://ip-address:19999`. You should see a page like the one below.

![][5]

*The Netdata dashboard*

In the dashboard you can find complete statistics for your Linux system. Scroll down to view each section.

You can download and/or view the default NetData configuration file at any time by opening `http://localhost:19999/netdata.conf`.

![][6]

*The Netdata configuration file*
### Update NetData

On Arch Linux, simply update it with:

```
$ sudo pacman -Syyu
```

On DEB- or RPM-based systems, just change into the directory where you cloned it (in this example, `netdata`):

```
$ cd netdata
```
To uninstall NetData, run the generated uninstaller script:

```
$ sudo ./netdata-uninstaller.sh --force
```

On Arch Linux, run:

```
$ sudo pacman -Rns netdata
```

### Resources

* [NetData website][7]
* [NetData GitHub page][8]
--------------------------------------------------------------------------------

via: https://www.ostechnix.com/netdata-real-time-performance-monitoring-tool-linux/

Author: [sk][a]
Topic selection: [lujun9972][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [wxy](https://github.com/wxy)

This article was translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Intent-Based Networking (IBN): Bridging the gap on network complexity)
[#]: via: (https://www.networkworld.com/article/3428356/intent-based-networking-ibn-bridging-the-gap-on-network-complexity.html)
[#]: author: (Matt Conran https://www.networkworld.com/author/Matt-Conran/)
Intent-Based Networking (IBN): Bridging the gap on network complexity
======

Intent-Based Networking was a result of the need for greater network automation.

![IDG / Thinkstock][1]
Networking has gone through various transformations over the last decade. In essence, the network has become complex and hard to manage using traditional mechanisms. There is now a significant need to design and integrate devices from multiple vendors and to employ new technologies, such as virtualization and cloud services.

Therefore, every network is a unique snowflake; you will never come across two identical networks. The products offered by the vendors act as the building blocks for engineers to design solutions that work for them. If we all had a simple and predictable network, this would not be a problem. But there are no global references to follow, and designs vary from organization to organization. This leads to network variation, even between networks offering similar services.

It is estimated that over 60% of users consider their IT environment to be more complex than it was two years ago. We can only assume that network complexity will keep increasing in the future.

Large enterprises and service providers need to manage this complexity to ensure that all their traffic flows, policies and configurations are in line with requirements and objectives. You cannot manage a complex network manually. Human error will always get you, eventually slowing down the network and prohibiting agility.

As I wrote about on Network Insight _[Disclaimer: the author is employed by Network Insight]_, the fact that the network is complex and error-prone is an [encouragement to automate][5]. How this actually happens depends on the level of automation. Hence, a higher level of orchestration is needed.
### The need to modernize

This complexity is compounded by the fact that organizations are now looking to modernize their business processes and networks. Traditional, vertically integrated, monolithic networking solutions prohibit network modernization. This creates a gap between the architect's original intent and the network's actual runtime behavior.

If you examine them, you will find that the contents of a design document are only loosely coupled to the network's execution. Primarily, there is no structured process for how design documents get translated and implemented on the actual devices. How they get implemented is wide open to individual interpretation.

These networks were built for a different era. Therefore, we must now shift the focus from traditional, prescriptive networking to Intent-Based Networking (IBN). IBN is a technology with the capability to modernize the network, and its processes, in line with the overall business objectives. This gives you tight coupling between your design rules and your network.
### The need for new tools

Undoubtedly, we need new tools, not just from the physical device's perspective but also from the traffic's perspective. Manual verification will not work anymore. A packet carries hundreds of bits, which means the traffic can represent numerous conversations at once. Hence, tracking end-to-end flows is impossible using a human approach.

When it comes to provisioning, the CLI is the most common method used to make configuration changes. But it has many drawbacks. Firstly, it offers the wrong level of abstraction: it targets the human operator, and there is no validation of whether engineers will follow the correct procedures.

Also, CLI languages are not standardized across vendors. The industry reacted and introduced NETCONF. However, NETCONF has many inconsistencies across vendor operating systems. Many vendors use their own proprietary format, making it hard to write NETCONF applications that span multiple vendors' networks.

NETCONF was meant to make automation easy, but in reality the irregularities it presented actually made automation even more difficult. Also, the old troubleshooting tools that we use, such as ping and traceroute, do not provide a holistic assessment of how the network is behaving. Traceroute has problems with unnumbered IP links, which are common in fully automated network environments. Ping, on the other hand, tells you nothing about how the network is performing. These tools were built for simpler times.
We need to progress to a vendor-agnostic solution that can verify the intent against the configured policies, regardless of the number of devices, the installed operating systems, the traffic rules, or any other type of configured policy. We need networks to be automated and predictable, and the existing tools that are commonly used add no value here.

In a nutshell, we need a new model that can figure out all the device and traffic interactions, not just at the device level but at the network-wide level.
### IBN and SDN

Software-Defined Networking (SDN) was a success in terms of interest; on the other hand, its adoption was largely limited to the big players. These were users who had the resources to build their own hardware and software, such as Google and Facebook.

For example, Google's B4 project built an efficient wide area network (WAN) in a dynamic fashion with flow-based optimization. This would have been impossible to implement on a production network with a traditional WAN architecture.

IBN is a natural successor to SDN, as it borrows the same principles and architectures: a divide between the application and the network infrastructure. Like SDN, IBN uses software that controls the network as a whole, instead of device by device.

Now the question that surfaces is: can SDN, as a concept, automate as much as you want it to? Essentially, SDN uses software to configure the network, thereby driving a software-based network. But IBN is the next step, with a more explicit emphasis on intent. Intent-based systems work higher up, at the application level, to offer true automation.
### What is IBN?

IBN was a result of the need for greater network automation. It is a technology that provides enhanced automation and network insights. It represents a paradigm shift that focuses on "what" the network is supposed to do versus "how" the network components are configured, and it monitors whether the network design is doing what it should be doing.

IBN does this by generating the configuration for the design and device implementation. In addition, it continuously verifies and validates, in real time, whether the original intent is being met. If the desired intent is not being met, the system can take corrective action, such as modifying a QoS policy, a VLAN or an ACL. This keeps the network in line with both business objectives and compliance requirements.

It uses declarative statements (what the network should do) as opposed to imperative statements (how it should be done). IBN has the capability to understand a large collection of heterogeneous networks consisting of a range of diverse devices that do not share one API. This lets you focus on the business needs rather than on the constraints of traditional networking.
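The declarative-versus-imperative distinction can be sketched in a few lines of Python. This is a hedged illustration only: the `Intent` class, the group inventory, and the rule syntax are all invented for the example and come from no real IBN product.

```python
# Hypothetical sketch: one declarative intent is compiled into imperative,
# per-device rules. All names and rule formats here are invented.

from dataclasses import dataclass

@dataclass(frozen=True)
class Intent:
    """What the network should do, with no mention of how."""
    src_group: str
    dst_group: str
    port: int

# Group membership would normally come from an inventory system.
GROUPS = {
    "web": ["web-sw-1", "web-sw-2"],
    "db": ["db-sw-1"],
}

def compile_intent(intent: Intent) -> dict[str, list[str]]:
    """Derive per-device configuration lines from a declarative intent."""
    rules: dict[str, list[str]] = {}
    for device in GROUPS[intent.dst_group]:
        # Each destination-side device gets a permit rule; everything
        # else stays implicitly denied.
        rules[device] = [
            f"permit tcp group {intent.src_group} any port {intent.port}"
        ]
    return rules

if __name__ == "__main__":
    print(compile_intent(Intent(src_group="web", dst_group="db", port=5432)))
```

The point is the division of labor: the intent never names a device or a rule format; the compiler owns the "how," which is exactly what lets the system regenerate configuration when the network changes.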
### The IBN journey

The first step on the road to IBN is to translate all of this into explicit logical rules, which are essentially a piece of IBN software. You also need to understand the traffic, to see whether reality matches the intent. For this, the system builds a model of the network and then verifies that model; this is known in computer science as formal verification. It is a method by which we mathematically analyze the network to see whether it matches its intent, which involves certain calculations to encompass the logic.
### Network verification

Network verification is a key part of any IBN system. It requires an underlying mathematical model of the network's behavior in order to analyze and reason about the targeted network design and policies. The system needs to verify all conceivable packet flows and traffic patterns.

Although there are no clear architectural guidelines for IBN, a mathematical model can treat every network device as a set of algebraic and logical operations on all packet types and traffic flows, at a per-device level. This allows the IBN system to evaluate and verify all possible scenarios.

When a device receives a packet, it can perform a number of actions: it can forward the packet to a particular port, drop the packet, or modify the packet header and then forward it to a port. It is up to the mathematical model to understand how each device responds to every possible type of packet, and to evaluate the behavior at the network-wide level, not just at the device level.

Principally, the verification process must be end-to-end. It must collect the configuration files and state information from every device on the network, and then mathematically analyze the behavior of all possible traffic flows on a hop-by-hop basis. The IBN system builds a software model of the network infrastructure: the model first reads the Layer 2 to Layer 4 configuration details, then collects the state (for example, the IP routing tables) from each device.
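The hop-by-hop analysis described above can be sketched as a toy model. Everything here (the three-device topology, the tiny header space, the firewall policy) is a hypothetical stand-in for what a real verifier would derive from device configurations and state:

```python
# Hedged sketch of hop-by-hop verification: each device is a function from
# a packet header to a next hop (or None to drop), and the checker walks
# every possible header through the model. Topology and policy are invented.

from itertools import product

# Tiny header space: (protocol, dst_port) pairs only.
PROTOCOLS = ["tcp", "udp"]
PORTS = [22, 80, 443]

def firewall(header):
    proto, port = header
    # Invented policy: only TCP/443 is allowed through.
    return "router" if (proto, port) == ("tcp", 443) else None

def router(header):
    return "server"          # forwards everything it receives

def server(header):
    return "DELIVERED"       # terminal node

DEVICES = {"firewall": firewall, "router": router, "server": server}

def reachable(header, start="firewall", max_hops=10):
    """Walk one header hop-by-hop; True if it reaches the server."""
    node = start
    for _ in range(max_hops):
        if node == "DELIVERED":
            return True
        nxt = DEVICES[node](header)
        if nxt is None:
            return False     # dropped along the path
        node = nxt
    return False             # loop guard tripped

# Verify the intent "only tcp/443 reaches the server" over the whole space.
violations = [h for h in product(PROTOCOLS, PORTS)
              if reachable(h) != (h == ("tcp", 443))]
print(violations)            # an empty list means the intent holds
```

A real system does the same exhaustive reasoning symbolically over full packet headers rather than enumerating them, but the shape of the check, intent versus modeled behavior for every flow, is the same.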
With IBN, we will see a shift from a reactive to a proactive approach. It will have a profound effect on the future of networking as we switch to a model that primarily focuses on business needs and makes things easier. We are not as far down the road as one might think, and if you want to, you can start your IBN journey today. The technology is there, and a phased deployment model is recommended: if you look at deployments of IDS/IPS, you will find that most are still only alerting.

**This article is published as part of the IDG Contributor Network. [Want to Join?][6]**

Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.
--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3428356/intent-based-networking-ibn-bridging-the-gap-on-network-complexity.html

Author: [Matt Conran][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).
[a]: https://www.networkworld.com/author/Matt-Conran/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/01/nw_2018_intent-based_networking_cover_art-100747914-large.jpg
[2]: https://www.networkworld.com/article/3209131/lan-wan/what-sdn-is-and-where-its-going.html
[3]: https://www.networkworld.com/article/3206709/lan-wan/what-s-the-difference-between-sdn-and-nfv.html
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://network-insight.net/2019/07/introducing-intent-based-networking-its-not-hype/
[6]: https://www.networkworld.com/contributor-network/signup.html
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (From e-learning to m-learning: Open education's next move)
[#]: via: (https://opensource.com/open-organization/19/7/m-learning-open-education)
[#]: author: (Jim Hall https://opensource.com/users/jim-hallhttps://opensource.com/users/mysentimentshttps://opensource.com/users/petercheerhttps://opensource.com/users/jenkelchnerhttps://opensource.com/users/gkftii)
From e-learning to m-learning: Open education's next move
======

"Open education" means more than teaching with open source software. It means being open to meeting students wherever they are.

![A person looking at a phone][1]
> "Access to computers and the Internet has become a basic need for education in our society." ‒ U.S. Senator Kent Conrad, 2004

I spent seventeen years working in higher education, both as a campus technology leader and as an adjunct professor. Today, I continue as an adjunct professor. I know firsthand that educational technology is invaluable to the teaching and learning mission of universities, and that it changes at a rapid pace.

Higher education is often an entrepreneurial space, seizing on new opportunities to deliver the best value. Too often, however, institutions spend a year or more designing, bidding on, selecting, purchasing, building, or implementing new education technologies in the service of the teaching and learning mission. But in that yearlong interim, the technology landscape may change so much that the solution delivered no longer addresses the needs of the education community.

What's more, technological solutions often re-entrench traditional educational models that aren't as effective today as they once were. The "closed" classroom featuring the teacher as a "sage on a stage" can no longer be the norm.

Education needs to evolve and embrace new technologies and new modes of learning if we are to meet our students' needs.
### Shifts in teaching and learning

The next fundamental technological shift at universities will affect how students interface with teaching and learning. To understand the new learning landscape, let me first provide the context of previous methods.

Learning has traditionally meant students sitting in a classroom, pen and paper in hand, taking notes during a professor's lecture. We've experienced variations on this mode over time (such as small-group breakout discussions and inverted classrooms), but most classes involve some version of this teaching model.

In the 1980s, IBM introduced the IBM PC, which put individual computing power into the hands of everyone, including students. Overnight, institutions needed to integrate the new technology into their pedagogies.

The PC changed the teaching and learning landscape. Certainly, students needed to learn the new software. Students previously wrote papers by hand, a methodology that directly mirrored work in the professional world. But with the introduction of the PC, students now needed to learn new skills.

For example, writing-intensive courses could no longer expect students to use a standard typewriter to write papers; that would be like expecting handwritten papers in the era of the typewriter. "Keyboarding" became a new skill, replacing "typing" classes in most institutions. Rather than simply learning to type on a typewriter, students needed to learn the new "word processing" software available on the new PC.

The thought process behind writing remains the same; only the tools change. In this case, the PC introduced an additional component to teaching and learning: students learned the same writing _process_, but now learned new skills in the _mechanics_ of writing via word processing software.
### M-learning means mobile learning

Technology is changing, and it will continue to evolve. How will students access information next year? Five years from now? Ten years from now? We cannot expect to rely on old models, and campuses need to look toward the technology horizon and consider how to prepare for that new landscape.

In response to today's ubiquitous computing trends across higher education, many institutions have already adopted electronic learning systems, or "e-learning." If you have stepped onto a college campus in the last few years, you'll already be familiar with central systems that provide a single place for students to turn in homework, respond to quizzes, interact with other students, ask questions of the instructor, receive grades, and track other progress in their courses. Universities that adopt e-learning are evolving toward the classroom of the future.

But these universities cannot rest on the accomplishments of e-learning. How students interface with e-learning continues to evolve, and it is already changing.

By my count, only two years ago students preferred laptops as their personal computing devices. Since then, smaller mobile devices have overtaken the classroom. Students still use laptops for _creating_ content, such as writing papers, but they increasingly use mobile devices such as phones to _consume_ content, and this trend is increasing. According to research by Nielsen conducted a few years ago, [98% of surveyed Millennials aged 18 to 24 said they owned a smartphone][2].

In a listening session on my campus, I heard one major concern from our students: how could they access e-learning systems from their phones? With loud voices, students asked for e-learning interfaces that supported their smartphones. Electronic learning had shifted from "e-learning" to mobile learning, or "m-learning."

In turn, this meant we needed better mobile carrier reception across campus. The focus changes again, this time from providing high-quality, high-speed WiFi networks in every corner of campus to ensuring the mobile carriers can provide their own coverage across campus. With smartphones, and with m-learning, students now expect to "bring their network with them."
### Finding the future landscape

This radically changes the model of e-learning and how students access e-learning systems. M-learning is about responding to _the mobility of the student_ and recognizing that _students can continue to learn wherever they are_. Students don't want to be anchored to the four walls of a classroom.

How will the future unfold? The future is always changing, so I cannot give a complete picture of the future of learning. But I can describe the trends we will see.

Mobile computing and m-learning will only expand. In the next five years, campuses with dedicated computer labs will be in the minority. Instead of dedicated spaces, students will access the software and programs from these "labs" through a "virtual lab." If this sounds similar to today's model of a laptop connected to a virtual lab, that's to be expected. The model isn't likely to change much; education will run on m-learning and mobile devices for the foreseeable future.

Even after education fully adopts m-learning, change will continue. Students won't stop discovering new ways of learning, and they'll demand those new methods from their institutions. We will move beyond m-learning to new modes we have yet to uncover. That's the reality of educational technology.

Our responsibility as stewards of education is to discover the next educational computing methods _in partnership_ with the students we serve. To meet the challenges this future technology landscape presents, we cannot expect an ivory tower to dictate how students will adopt technology. That era is long past. Instead, institutions need to work together with students and examine how to adapt technology to serve them.

_(This article is part of the_ [Open Organization Guide for Educators][3] _project.)_
--------------------------------------------------------------------------------

via: https://opensource.com/open-organization/19/7/m-learning-open-education

Author: [Jim Hall][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).
[a]: https://opensource.com/users/jim-hallhttps://opensource.com/users/mysentimentshttps://opensource.com/users/petercheerhttps://opensource.com/users/jenkelchnerhttps://opensource.com/users/gkftii
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/idea_innovation_mobile_phone.png?itok=RqVtvxkd (A person looking at a phone)
[2]: https://www.nielsen.com/us/en/insights/article/2016/millennials-are-top-smartphone-users/
[3]: https://github.com/open-organization-ambassadors/open-org-educators-guide
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Google Cloud to offer VMware data-center tools natively)
[#]: via: (https://www.networkworld.com/article/3428497/google-cloud-to-offer-vmware-data-center-tools-natively.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
Google Cloud to offer VMware data-center tools natively
======

Google is enlisting VMware and CloudSimple to serve up vSphere, NSX and vSAN software on Google Cloud, to ease the transition of enterprise workloads to the cloud.

![Thinkstock / Google][1]
Google this week said it would natively support VMware workloads in its cloud service for the first time, giving customers more options for deploying enterprise applications.

The hybrid cloud service, called Google Cloud VMware Solution by CloudSimple, will use VMware software-defined data center (SDDC) technologies, including VMware vSphere, NSX and vSAN software, deployed on a platform administered by CloudSimple for GCP.

[RELATED: How to make hybrid cloud work][2]

"Users will have full, native access to the full VMware stack including vCenter, vSAN and NSX-T. Google Cloud will provide the first line of support, working closely with CloudSimple to help ensure customers receive a streamlined product support experience and that their business-critical applications are supported with the SLAs that enterprise customers need," Thomas Kurian, CEO of Google Cloud, [wrote in a blog outlining the deal][3].
"With VMware on Google Cloud Platform, customers will be able to leverage all of the familiarity of VMware tools and training, and protect their investments, as they execute on their cloud strategies and rapidly bring new services to market and operate them seamlessly and more securely across a hybrid cloud environment," said Sanjay Poonen, chief operating officer, customer operations at VMware, [in a statement][4].

The move further integrates Google and VMware software. The two have teamed up multiple times in the past, including on:

* Google Cloud integration for VMware NSX Service Mesh and SD-WAN by VeloCloud, which lets customers deploy and gain visibility into their hybrid workloads, wherever they're running.
* Google Cloud's Anthos on VMware vSphere, including validations for vSAN as the preferred hyperconverged infrastructure, to provide customers a multi-cloud offering and give Kubernetes users the ability to create and manage persistent storage volumes for stateful workloads on-premises.
* A Google Cloud plug-in for VMware vRealize Automation, providing customers with a seamless way to deploy, orchestrate and manage Google Cloud resources from within their vRealize Automation environment.
Google is just one key cloud relationship VMware relies on. It has a deep integration with Amazon Web Services that began in 2017. With that flagship agreement, VMware customers can run workloads in the AWS cloud. And more recently, VMware cloud offerings can be bought directly through the AWS service.
|
||||
|
||||
VMware also has a hybrid cloud partnership with [Microsoft’s Azure cloud service][5]. That package, called Azure VMware Solutions is built on VMware Cloud Foundation, which is a packaging of the company’s traditional compute virtualization software vSphere with its NSX network virtualization product and its VSAN software-defined storage area network product.
|
||||
|
||||
More recently VMware bulked up its cloud offerings by [buying Avi Networks][6]' load balancing, analytics and application-delivery technology for an undisclosed amount.
|
||||
|
||||
Founded in 2012 by a group of Cisco engineers and executives, Avi offers a variety of software-defined products and services including a software-based application delivery controller (ADC) and intelligent web-application firewall. The software already integrates with VMware vCenter and NSX, OpenStack, third party [SDN][7] controllers, as well as Amazon AWS and Google Cloud Platform, Red Hat OpenShift and container orchestration platforms such as Kubernetes and Docker.
|
||||
|
||||
According to the company, the VMware and Avi Networks teams will work together to advance VMware’s Virtual Cloud Network plan, build out full stack Layer 2-7 services, and deliver the public-cloud experience for on-prem environments and data centers, said Tom Gillis, VMware's senior vice president and general manager of its networking and security business unit.
|
||||
|
||||
Combining Avi Networks with [VMware NSX][8] will further enable organizations to respond to new opportunities and threats, create new business models and deliver services to all applications and data, wherever they are located, VMware stated.
|
||||
|
||||
Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3428497/google-cloud-to-offer-vmware-data-center-tools-natively.html
|
||||
|
||||
作者:[Michael Cooney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Michael-Cooney/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.idgesg.net/images/article/2018/07/google-cloud-services-100765812-large.jpg
|
||||
[2]: https://www.networkworld.com/article/3119362/hybrid-cloud/how-to-make-hybrid-cloud-work.html#tk.nww-fsb
|
||||
[3]: https://cloud.google.com/blog/topics/partners/vmware-cloud-foundation-comes-to-google-cloud
|
||||
[4]: https://www.vmware.com/company/news/releases/vmw-newsfeed.Google-Cloud-and-VMware-Extend-Strategic-Partnership.1893625.html
|
||||
[5]: https://www.networkworld.com/article/3113394/vmware-cloud-foundation-integrates-virtual-compute-network-and-storage-systems.html
|
||||
[6]: https://www.networkworld.com/article/3402981/vmware-eyes-avi-networks-for-data-center-software.html
|
||||
[7]: https://www.networkworld.com/article/3209131/what-sdn-is-and-where-its-going.html
|
||||
[8]: https://www.networkworld.com/article/3346017/vmware-preps-milestone-nsx-release-for-enterprise-cloud-push.html
|
||||
[9]: https://www.facebook.com/NetworkWorld/
|
||||
[10]: https://www.linkedin.com/company/network-world
|
@ -0,0 +1,72 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (IoT roundup: Connected cows, food safety sensors and tracking rent-a-bikes)
[#]: via: (https://www.networkworld.com/article/3412141/iot-roundup-connected-cows-food-safety-sensors-and-tracking-rent-a-bikes.html)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)

IoT roundup: Connected cows, food safety sensors and tracking rent-a-bikes
======
Farmers use LoRaWAN wireless with internet of things devices that monitor the health and location of beef cattle; restaurant chains use it to network IoT sensors that signal temperature shifts in commercial refrigerators.
![Getty Images][1]

While the public image of agriculture remains a bit antiquated, the industry is actually an increasingly sophisticated one, and farmers have been particularly enthusiastic in their embrace of the internet of things ([IoT][2]). Everything from GPS-guided precision for planting, watering and harvesting to remote soil monitoring and in-depth yield analysis is available to the modern farmer.

What’s more, the technology used in agriculture continues to evolve at speed; witness the recent partnership between Quantified Ag, a University of Nebraska-backed program that, among other things, can track livestock health via a system of IoT ear tags, and Cradlepoint, a vendor that makes the NetCloud Manager product.

**[ Now see: [How edge networking and IoT will reshape data centers][3] | Get regularly scheduled insights: [Sign up for Network World newsletters][4] ]**

Quantified Ag’s tags use [LoRaWAN][5] tech to transmit behavioral and biometric data to custom gateways installed at a farm, where the data is aggregated. Yet those gateways sometimes suffered from problems, particularly where unreliable wired connectivity and the complexities of working with an array of different rural ISPs were concerned. Enter Cradlepoint, which partnered up with an unnamed national cellular data provider to dramatically simplify the synchronization of data across a given implementation, as well as make it easier to deploy and provision new nodes.

Simplicity is always a desirable quality in an IoT deployment, and single-pane-of-glass systems, provisioned by a single network, are a strong play for numerous IoT use cases.

### LoRaWAN keeping food safe

Even after the livestock is no longer, well, alive, IoT technology plays a role. Restaurants such as Five Guys and Shake Shack are integrating low-power WAN technology to connect temperature sensors into a network. Sort of an Internet of Burgers, if you will.

According to an announcement earlier this month from Semtech, who makes the LoRaWAN devices in question, the restaurant chains join up-and-comers like Hattie B’s among those using IoT tech to improve food safety. The latter restaurant – a Nashville-based small chain noted for its spicy fried chicken – recently realized the benefits of such a system after a power outage. Instant notification that the refrigeration had died enabled the management to rescue tens of thousands of dollars’ worth of food inventory.

Frankly, anything that saves fried chicken and burgers from wastage – and potentially, keeps their prices fractionally lower – is a good thing in our book, and Semtech argues (as it might be expected to) that the lower-frequency LoRa-based technology is a better choice for this application, given its ability to pass through obstacles like refrigerator and freezer doors with less attenuation than, for example, Bluetooth.

### IoT tracking rental bikes

Readers who live in urban areas will probably have noticed the rent-a-bike phenomenon spreading quickly of late. IoT connectivity provider Sigfox is also getting in on the action, via a partnership with France-based INDIGO weel, a self-service bicycle fleet, that was announced earlier this month.

In this application, Sigfox’s proprietary wide area network technology is used to precisely track INDIGO’s bikes, deterring theft and damage. Sigfox also claims that the integration of its technology into the bike fleet will reduce costs, since reusable sensors can be easily transferred from one bike to another, and help users find the vehicle they need more quickly.

Sigfox likes to talk about itself as an “IoT service provider,” and its large coverage footprint – the company claims to be operating in 60 countries – is a good fit for the kind of application that covers a lot of ground and might not require a great deal of bandwidth.

### Vulnerability warning for IoT medical devices

Per usual, several minor but alarming revelations about insecure, exploitable IoT devices have come to light this month. One advisory, [revealed by healthcare cybersecurity firm CyberMDX][6], said attackers could compromise GE Aestiva and Aespire anesthesia and respiration devices – changing the mix of gases that the patient breathes, altering the date and time on the machine or silencing alarms. (GE responded by pointing out that the compromise requires access to both the hospital’s network and an insufficiently secure terminal server, and urged users not to use such servers. Obviously, if devices don’t need to be on the network in the first place, that’s an even better solution.)

Elsewhere, [an anonymous researcher at VPN Mentor][7] posted early this month that China-based smart home product maker Orvibo had (presumably by accident) opened up an enormous database to public view. The database contained 2 billion log entries, which covered email addresses, usernames, passwords and even geographic locations, based on its footprint of smart home devices installed in residences and hotels. The company has since cut off access to the database, but, still – not a great look for them.

Join the Network World communities on [Facebook][8] and [LinkedIn][9] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3412141/iot-roundup-connected-cows-food-safety-sensors-and-tracking-rent-a-bikes.html

作者:[Jon Gold][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/08/nw_iot-news_internet-of-things_smart-city_smart-home7-100768495-large.jpg
[2]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
[3]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://www.networkworld.com/article/3211390/lorawan-key-to-building-full-stack-production-iot-networks.html
[6]: https://www.cybermdx.com/news/vulnerability-discovered-ge-anesthesia-respiratory-devices
[7]: https://www.vpnmentor.com/blog/report-orvibo-leak/
[8]: https://www.facebook.com/NetworkWorld/
[9]: https://www.linkedin.com/company/network-world
@ -0,0 +1,85 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Remote code execution is possible by exploiting flaws in Vxworks)
[#]: via: (https://www.networkworld.com/article/3428996/remote-code-execution-is-possible-by-exploiting-flaws-in-vxworks.html)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)

Remote code execution is possible by exploiting flaws in Vxworks
======

![Thinkstock][1]

Eleven zero-day vulnerabilities in Wind River’s VxWorks, a real-time operating system in use across an advertised 2 billion connected devices, have been discovered by network security vendor Armis.

Six of the vulnerabilities could enable remote attackers to access unpatched systems without any user interaction, even through a firewall, according to Armis.

**About IoT:**

* [What is the IoT? How the internet of things works][2]
* [What is edge computing and how it’s changing the network][3]
* [Most powerful Internet of Things companies][4]
* [10 Hot IoT startups to watch][5]
* [The 6 ways to make money in IoT][6]
* [What is digital twin technology? [and why it matters]][7]
* [Blockchain, service-centric networking key to IoT success][8]
* [Getting grounded in IoT networking and security][9]
* [Building IoT-ready networks must become a priority][10]
* [What is the Industrial IoT? [And why the stakes are so high]][11]

The vulnerabilities affect all devices running VxWorks version 6.5 and later, with the exception of VxWorks 7, issued July 19, which patches the flaws. That means the attack window may have been open for more than 13 years.

Armis Labs said that affected devices included SCADA controllers, patient monitors, MRI machines, VOIP phones and even network firewalls, specifying that users in the medical and industrial fields should be particularly quick about patching the software.

Thanks to remote-code-execution vulnerabilities, unpatched devices can be compromised by a maliciously crafted IP packet that doesn’t need device-specific tailoring, and every vulnerable device on a given network can be targeted more or less simultaneously.

The Armis researchers said that, because the most severe of the issues targets “esoteric parts of the TCP/IP stack that are almost never used by legitimate applications,” specific rules for the open source Snort security framework can be imposed to detect exploits.

VxWorks, which has been in use since the 1980s, is a popular real-time OS, used in industrial, medical and many other applications that require extremely low latency and response time. While it is highly reliable, the inability to install a security agent alongside the operating system makes it vulnerable, Armis said, and the proprietary source code makes it more difficult to detect problems.

**[ [Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][12] ]**

Armis argued that more attention has to be paid by security researchers to real-time operating systems, particularly given the explosive growth in IoT usage – for one thing, the researchers said, any software that doesn’t get thoroughly researched runs a higher risk of having serious vulnerabilities go unaddressed. For another, the critical nature of many IoT use cases means that the consequences of a compromised device are potentially very serious.

“It is inconvenient to have your phone put out of use, but it’s an entirely different story to have your manufacturing plant shut down,” the Armis team wrote. “A compromised industrial controller could shut down a factory, and a pwned patient monitor could have a life-threatening effect.”

In addition to the six headlining vulnerabilities, five somewhat less serious security holes were found. These could lead to consequences ranging from denial of service and leaked information to logic flaws and memory issues.

More technical details and a fuller overview of the problem can be found at the Armis Labs blog post here, and there are partial lists available of companies and devices that run VxWorks available [on Wikipedia][13] and at [Wind River’s customer page][14]. Wind River itself issued a security advisory [here][15], which contains some potential mitigation techniques.

Join the Network World communities on [Facebook][16] and [LinkedIn][17] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3428996/remote-code-execution-is-possible-by-exploiting-flaws-in-vxworks.html

作者:[Jon Gold][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2017/09/iot-security11-100735405-large.jpg
[2]: https://www.networkworld.com/article/3207535/internet-of-things/what-is-the-iot-how-the-internet-of-things-works.html
[3]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
[4]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html
[5]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html
[6]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html
[7]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html
[8]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
[9]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
[10]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
[11]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
[12]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[13]: https://en.wikipedia.org/wiki/VxWorks#Notable_uses
[14]: https://www.windriver.com/customers/
[15]: https://www.windriver.com/security/announcements/tcp-ip-network-stack-ipnet-urgent11/security-advisory-ipnet/
[16]: https://www.facebook.com/NetworkWorld/
[17]: https://www.linkedin.com/company/network-world
@ -1,126 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Manage your passwords with Bitwarden and Podman)
[#]: via: (https://fedoramagazine.org/manage-your-passwords-with-bitwarden-and-podman/)
[#]: author: (Eric Gustavsson https://fedoramagazine.org/author/egustavs/)

Manage your passwords with Bitwarden and Podman
======

![][1]

You might have encountered a few advertisements the past year trying to sell you a password manager. Some examples are [LastPass][2], [1Password][3], or [Dashlane][4]. A password manager removes the burden of remembering the passwords for all your websites. No longer do you need to re-use passwords or use easy-to-remember passwords. Instead, you only need to remember one single password that can unlock all your other passwords for you.

This can make you more secure by having one strong password instead of many weak passwords. You can also sync your passwords across devices if you have a cloud-based password manager like LastPass, 1Password, or Dashlane. Unfortunately, none of these products are open source. Luckily there are open source alternatives available.

### Open source password managers

These alternatives include Bitwarden, [LessPass][5], or [KeePass][6]. Bitwarden is [an open source password manager][7] that stores all your passwords encrypted on the server, which works the same way as LastPass, 1Password, or Dashlane. LessPass is a bit different as it focuses on being a stateless password manager. This means it derives passwords based on a master password, the website, and your username rather than storing the passwords encrypted. On the other side of the spectrum there’s KeePass, a file-based password manager with a lot of flexibility with its plugins and applications.

Each of these three apps has its own downsides. Bitwarden stores everything in one place and is exposed to the web through its API and website interface. LessPass can’t store custom passwords since it’s stateless, so you need to use their derived passwords. KeePass, a file-based password manager, can’t easily sync between devices. You can utilize a cloud-storage provider together with [WebDAV][8] to get around this, but a lot of clients do not support it and you might get file conflicts if devices do not sync correctly.

This article focuses on Bitwarden.

### Running an unofficial Bitwarden implementation

There is a community implementation of the server and its API called [bitwarden_rs][9]. This implementation is fully open source as it can use SQLite or MariaDB/MySQL, instead of the proprietary Microsoft SQL Server that the official server uses.

It’s important to recognize some differences exist between the official and the unofficial version. For instance, the [official server has been audited by a third-party][10], whereas the unofficial one hasn’t. When it comes to implementations, the unofficial version lacks [email confirmation and support for two-factor authentication using Duo or email codes][11].

Let’s get started running the server with SELinux in mind. Following the documentation for bitwarden_rs, you can construct a Podman command as follows:

```
$ podman run -d \
--userns=keep-id \
--name bitwarden \
-e SIGNUPS_ALLOWED=false \
-e ROCKET_PORT=8080 \
-v /home/egustavs/Bitwarden/bw-data/:/data/:Z \
-p 8080:8080 \
bitwardenrs/server:latest
```

This downloads the bitwarden_rs image and runs it in a user container under the user’s namespace. It uses a port above 1024 so that non-root users can bind to it. It also changes the volume’s SELinux context with _:Z_ to prevent permission issues with read-write on _/data_.

If you host this under a domain, it’s recommended to put this server under a reverse proxy with Apache or Nginx. That way you can use ports 80 and 443, which point to the container’s 8080 port, without running the container as root.

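If you go the Nginx route, a minimal reverse-proxy server block might look like the sketch below. This is not from the original article: the domain `bitwarden.example.com` is a placeholder, and the config is written to a local file here; on a real server it would go to _/etc/nginx/conf.d/_ followed by a `sudo systemctl reload nginx`.

```shell
# Sketch: a hypothetical Nginx reverse-proxy config for the container.
# "bitwarden.example.com" is a placeholder domain, not from the article.
cat > bitwarden.conf << 'EOF'
server {
    listen 80;
    server_name bitwarden.example.com;

    location / {
        # Forward requests to the container published on port 8080
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
EOF
```

With a config like this in place, TLS termination can also happen in the proxy rather than in the container.
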
### Running under systemd

With Bitwarden now running, you probably want to keep it that way. Next, create a unit file that keeps the container running, automatically restarts it if it doesn’t respond, and starts it after a system restart. Create this file as _/etc/systemd/system/bitwarden.service_:

```
[Unit]
Description=Bitwarden Podman container
Wants=syslog.service

[Service]
User=egustavs
Group=egustavs
TimeoutStartSec=0
ExecStart=/usr/bin/podman start -a 'bitwarden'
ExecStop=-/usr/bin/podman stop -t 10 'bitwarden'
Restart=always
RestartSec=30s
KillMode=none

[Install]
WantedBy=multi-user.target
```

Now, enable and start it using [sudo][12]:

```
$ sudo systemctl enable bitwarden.service && sudo systemctl start bitwarden.service
$ systemctl status bitwarden.service
bitwarden.service - Bitwarden Podman container
    Loaded: loaded (/etc/systemd/system/bitwarden.service; enabled; vendor preset: disabled)
    Active: active (running) since Tue 2019-07-09 20:23:16 UTC; 1 day 14h ago
  Main PID: 14861 (podman)
     Tasks: 44 (limit: 4696)
    Memory: 463.4M
```

Success! Bitwarden is now running under systemd and will keep running.

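As an alternative to hand-writing the unit file above, newer Podman releases can generate one from the running container. Treat this as a sketch; the subcommand and its flags vary between Podman versions, so check `podman generate systemd --help` on your system:

```shell
# Generate a systemd unit from the existing 'bitwarden' container.
# --name uses the container name (rather than its ID) inside the unit.
podman generate systemd --name bitwarden > bitwarden.service
```

The generated file can then be inspected and copied to _/etc/systemd/system/_ (or _~/.config/systemd/user/_ for a user service).
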
### Adding LetsEncrypt

It’s strongly recommended to run your Bitwarden instance through an encrypted channel with something like LetsEncrypt if you have a domain. Certbot is a bot that creates LetsEncrypt certificates for us, and they have a [guide for doing this through Fedora][13].

After you generate a certificate, you can follow the [bitwarden_rs guide about HTTPS][14]. Just remember to append _:Z_ to the LetsEncrypt volume to handle permissions while not changing the port.

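As an illustration (not from the original article), adding the certificate volume to the earlier Podman command might look like this. The domain is a placeholder, the certificate paths follow Certbot's defaults, and the `ROCKET_TLS` value should be checked against the bitwarden_rs wiki for your version:

```shell
# Sketch: mount the LetsEncrypt certificates into the container with :Z.
# "example.com" is a placeholder; verify the ROCKET_TLS syntax against the
# bitwarden_rs HTTPS wiki page before relying on it.
$ podman run -d \
--userns=keep-id \
--name bitwarden \
-e SIGNUPS_ALLOWED=false \
-e ROCKET_PORT=8080 \
-e ROCKET_TLS='{certs="/ssl/fullchain.pem",key="/ssl/privkey.pem"}' \
-v /etc/letsencrypt/live/example.com/:/ssl/:Z \
-v /home/egustavs/Bitwarden/bw-data/:/data/:Z \
-p 8080:8080 \
bitwardenrs/server:latest
```
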
* * *

*Photo by [CMDR Shane][15] on [Unsplash][16].*

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/manage-your-passwords-with-bitwarden-and-podman/

作者:[Eric Gustavsson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/egustavs/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/07/bitwarden-816x345.jpg
[2]: https://www.lastpass.com
[3]: https://1password.com/
[4]: https://www.dashlane.com/
[5]: https://lesspass.com/
[6]: https://keepass.info/
[7]: https://bitwarden.com/
[8]: https://en.wikipedia.org/wiki/WebDAV
[9]: https://github.com/dani-garcia/bitwarden_rs/
[10]: https://blog.bitwarden.com/bitwarden-completes-third-party-security-audit-c1cc81b6d33
[11]: https://github.com/dani-garcia/bitwarden_rs/wiki#missing-features
[12]: https://fedoramagazine.org/howto-use-sudo/
[13]: https://certbot.eff.org/instructions
[14]: https://github.com/dani-garcia/bitwarden_rs/wiki/Enabling-HTTPS
[15]: https://unsplash.com/@cmdrshane?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[16]: https://unsplash.com/search/photos/password?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
@ -1,285 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Top 8 Things to do after Installing Debian 10 (Buster))
[#]: via: (https://www.linuxtechi.com/things-to-do-after-installing-debian-10/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)

Top 8 Things to do after Installing Debian 10 (Buster)
======

Debian 10, code name Buster, is the latest LTS release from the house of Debian, and it comes packed with a lot of features. So if you have already installed Debian 10 on your system and are wondering what to do next, please continue reading this article to the end, as we provide you with the top 8 things to do after installing Debian 10. For those who haven’t installed Debian 10 yet, please read this guide [**Debian 10 (Buster) Installation Steps with Screenshots**][1]. So let's continue with the article:

<https://www.linuxtechi.com/wp-content/uploads/2019/07/Things-to-do-after-installing-debian10.jpg>

### 1) Install and Configure sudo

Once you complete setting up Debian 10 on your system, the first thing you need to do is install the sudo package, as it enables you to get administrative privileges to install any package you need.

Become the root user and then install the sudo package using the command below,

```
root@linuxtechi:~$ su -
Password:
root@linuxtechi:~# apt install sudo -y
```

Add your local user to the sudo group using the following [usermod][2] command,

```
root@linuxtechi:~# usermod -aG sudo pkumar
root@linuxtechi:~#
```

Now verify whether the local user got sudo rights,

```
root@linuxtechi:~$ id
uid=1000(pkumar) gid=1000(pkumar) groups=1000(pkumar),27(sudo)
root@linuxtechi:~$ sudo vi /etc/hosts
[sudo] password for pkumar:
root@linuxtechi:~$
```

### 2) Fix Date and time

Once you’ve successfully configured the sudo package, the next thing you need to do is fix the date and time according to your location. In order to fix the date and time,

Go to System **Settings** –> **Details** –> **Date and Time** and then change the time zone that suits your location.

<https://www.linuxtechi.com/wp-content/uploads/2019/07/Adjust-date-time-Debian10.jpg>

Once the time zone is changed, you can see the clock update automatically.

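If you installed Debian 10 without a desktop environment, or simply prefer the terminal, the same change can be made with `timedatectl` (the time zone below is only an example, not from the original article):

```shell
# List matching time zones, then set one (example value):
timedatectl list-timezones | grep -i berlin
sudo timedatectl set-timezone "Europe/Berlin"
timedatectl status
```
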
### 3) Apply all updates
|
||||
|
||||
After Debian 10 installation, it is recommended to install all updates which are available via Debian 10 package repositories, execute the beneath apt command,
|
||||
|
||||
```
|
||||
root@linuxtechi:~$ sudo apt update
|
||||
root@linuxtechi:~$ sudo apt upgrade -y
|
||||
```
|
||||
|
||||
**Note:** If you are a big fan of vi editor then install vim using the following command apt command,
|
||||
|
||||
```
|
||||
root@linuxtechi:~$ sudo apt install vim -y
|
||||
```
|
||||
|
||||
### 4) Install Flash Player Plugin
|
||||
|
||||
By default, the Debian 10 (Buster) repositories don’t come packed with the Flash plugin and hence users looking to install flash player in their system need to follow the steps outlined below:
|
||||
|
||||
Configure Repository for flash player:
|
||||
|
||||
```
|
||||
root@linuxtechi:~$ echo "deb http://ftp.de.debian.org/debian buster main contrib" | sudo tee -a /etc/apt/sources.list
|
||||
deb http://ftp.de.debian.org/debian buster main contrib
|
||||
root@linuxtechi:~
|
||||
```
|
||||
|
||||
Now update package index using following command,
|
||||
|
||||
```
|
||||
root@linuxtechi:~$ sudo apt update
|
||||
```
|
||||
|
||||
Install flash plugin using following apt command
|
||||
|
||||
```
|
||||
root@linuxtechi:~$ sudo apt install pepperflashplugin-nonfree -y
|
||||
```
|
||||
|
||||
Once package is installed successfully, then try to play videos in YouTube,
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/07/Flash-Player-plugin-Debian10.jpg>
|
||||
|
||||
### 5) Install Software like VLC, SKYPE, FileZilla and Screenshot tool
|
||||
|
||||
So now we’ve enabled flash player, it is time to install all other software like VLC, Skype, Filezilla and screenshot tool like flameshot in our Debian 10 system.
|
||||
|
||||
**Install VLC Media Player**
|
||||
|
||||
To install VLC player in your system using apt command,
|
||||
|
||||
```
|
||||
root@linuxtechi:~$ sudo apt install vlc -y
|
||||
```
|
||||
|
||||
After the successful installation of VLC player, try to play your favorite videos
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/07/Debian10-VLC.jpg>
|
||||
|
||||
**Install Skype:**

First download the latest Skype package as shown below:

```
root@linuxtechi:~$ wget https://go.skype.com/skypeforlinux-64.deb
```

Next install the package using the apt command as shown below:

```
root@linuxtechi:~$ sudo apt install ./skypeforlinux-64.deb
```

After the successful installation of Skype, try to access it and enter your credentials:

<https://www.linuxtechi.com/wp-content/uploads/2019/07/skype-Debian10.jpg>

**Install FileZilla**

To install FileZilla on your system, use the following apt command:

```
root@linuxtechi:~$ sudo apt install filezilla -y
```

Once the FileZilla package is installed successfully, try to access it:

<https://www.linuxtechi.com/wp-content/uploads/2019/07/FileZilla-Debian10.jpg>

**Install Screenshot tool (flameshot)**

Use the following command to install the screenshot tool Flameshot:

```
root@linuxtechi:~$ sudo apt install flameshot -y
```

**Note:** The Shutter tool has been removed in Debian 10.

<https://www.linuxtechi.com/wp-content/uploads/2019/07/flameshoot-debian10.jpg>

### 6) Enable and Start Firewall

It is always recommended to start a firewall to make your system secure over the network. If you are looking to enable a firewall in Debian 10, **UFW** (Uncomplicated Firewall) is the best tool to handle it. Since UFW is available in the Debian repositories, it is quite easy to install, as shown below:

```
root@linuxtechi:~$ sudo apt install ufw
```

Once you have installed UFW, the next step is to set up the firewall. To do so, deny all incoming traffic by default, allow outgoing traffic, and then open only the required ports like ssh, http and https.

```
root@linuxtechi:~$ sudo ufw default deny incoming
Default incoming policy changed to 'deny'
(be sure to update your rules accordingly)
root@linuxtechi:~$ sudo ufw default allow outgoing
Default outgoing policy changed to 'allow'
(be sure to update your rules accordingly)
root@linuxtechi:~$
```

Allow the SSH port:

```
root@linuxtechi:~$ sudo ufw allow ssh
Rules updated
Rules updated (v6)
root@linuxtechi:~$
```

In case you have installed a web server on your system, allow its ports too using the following ufw commands:

```
root@linuxtechi:~$ sudo ufw allow 80
Rules updated
Rules updated (v6)
root@linuxtechi:~$ sudo ufw allow 443
Rules updated
Rules updated (v6)
root@linuxtechi:~$
```

Finally, you can enable UFW using the following command:

```
root@linuxtechi:~$ sudo ufw enable
Command may disrupt existing ssh connections. Proceed with operation (y|n)? y
Firewall is active and enabled on system startup
root@linuxtechi:~$
```

If you want to check the status of your firewall, use the following command:

```
root@linuxtechi:~$ sudo ufw status
```

### 7) Install Virtualization Software (VirtualBox)

The first step in installing VirtualBox is to import the public keys of the Oracle VirtualBox repository into your Debian 10 system:

```
root@linuxtechi:~$ wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add -
OK
root@linuxtechi:~$ wget -q https://www.virtualbox.org/download/oracle_vbox.asc -O- | sudo apt-key add -
OK
root@linuxtechi:~$
```

If the import is successful, you will see an “OK” message displayed.

Next you need to add the repository to the sources list:

```
root@linuxtechi:~$ sudo add-apt-repository "deb http://download.virtualbox.org/virtualbox/debian buster contrib"
root@linuxtechi:~$
```

Finally, it is time to install VirtualBox 6.0 on your system:

```
root@linuxtechi:~$ sudo apt update
root@linuxtechi:~$ sudo apt install virtualbox-6.0 -y
```

Once the VirtualBox packages are installed successfully, try accessing it and start creating virtual machines:

<https://www.linuxtechi.com/wp-content/uploads/2019/07/VirtualBox6-Debian10-Workstation.jpg>

### 8) Install latest AMD Drivers

Finally, you can also install additional drivers as needed, like AMD graphics, ATI proprietary, and Nvidia graphics drivers. To install the latest AMD drivers, first we must modify the **/etc/apt/sources.list** file, appending the **non-free** component to the lines that contain **main** and **contrib**. An example is shown below:

```
root@linuxtechi:~$ sudo vi /etc/apt/sources.list
…………………
deb http://deb.debian.org/debian/ buster main non-free contrib
deb-src http://deb.debian.org/debian/ buster main non-free contrib

deb http://security.debian.org/debian-security buster/updates main contrib non-free
deb-src http://security.debian.org/debian-security buster/updates main contrib non-free

deb http://ftp.us.debian.org/debian/ buster-updates main contrib non-free
……………………
```

Now use the following apt commands to install the latest AMD drivers on your Debian 10 system:

```
root@linuxtechi:~$ sudo apt update
root@linuxtechi:~$ sudo apt install firmware-linux firmware-linux-nonfree libdrm-amdgpu1 xserver-xorg-video-amdgpu -y
```

That’s all from this article. I hope you got an idea of what one should do after installing Debian 10. Please share your feedback and comments in the comments section below.

--------------------------------------------------------------------------------

via: https://www.linuxtechi.com/things-to-do-after-installing-debian-10/

作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[1]: https://www.linuxtechi.com/debian-10-buster-installation-guide/
[2]: https://www.linuxtechi.com/linux-commands-to-manage-local-accounts/

144
sources/tech/20190730 How to create a pull request in GitHub.md
Normal file
@ -0,0 +1,144 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to create a pull request in GitHub)
[#]: via: (https://opensource.com/article/19/7/create-pull-request-github)
[#]: author: (Kedar Vijay Kulkarni https://opensource.com/users/kkulkarnhttps://opensource.com/users/fontanahttps://opensource.com/users/mhanwellhttps://opensource.com/users/mysentimentshttps://opensource.com/users/greg-p)

How to create a pull request in GitHub
======
Learn how to fork a repo, make changes, and ask the maintainers to review and merge it.

![a checklist for a team][1]

So, you know how to use git. You have a [GitHub][2] repo and can push to it. All is well. But how the heck do you contribute to other people's GitHub projects? That is what I wanted to know after I learned git and GitHub. In this article, I will explain how to fork a git repo, make changes, and submit a pull request.

When you want to work on a GitHub project, the first step is to fork a repo.

![Forking a GitHub repo][3]

Use [my demo repo][4] to try it out.

Once there, click on the **Fork** button in the top-right corner. This creates a new copy of my demo repo under your GitHub user account with a URL like:

```
`https://github.com/<YourUserName>/demo`
```

The copy includes all the code, branches, and commits from the original repo.

Next, clone the repo by opening the terminal on your computer and running the command:

```
`git clone https://github.com/<YourUserName>/demo`
```

Once the repo is cloned, you need to do two things:

1. Create a new branch by issuing the command:

```
`git checkout -b new_branch`
```

2. Create a new remote for the upstream repo with the command:

```
`git remote add upstream https://github.com/kedark3/demo`
```

In this case, "upstream repo" refers to the original repo you created your fork from.

Now you can make changes to the code. The following code creates a new branch, makes an arbitrary change, and pushes it to **new_branch**:

```
$ git checkout -b new_branch
Switched to a new branch 'new_branch'
$ echo "some test file" > test
$ cat test
some test file
$ git status
On branch new_branch
No commits yet
Untracked files:
  (use "git add <file>..." to include in what will be committed)
        test
nothing added to commit but untracked files present (use "git add" to track)
$ git add test
$ git commit -S -m "Adding a test file to new_branch"
[new_branch (root-commit) 4265ec8] Adding a test file to new_branch
 1 file changed, 1 insertion(+)
 create mode 100644 test
$ git push -u origin new_branch
Enumerating objects: 3, done.
Counting objects: 100% (3/3), done.
Writing objects: 100% (3/3), 918 bytes | 918.00 KiB/s, done.
Total 3 (delta 0), reused 0 (delta 0)
remote: Create a pull request for 'new_branch' on GitHub by visiting:
remote:      http://github.com/example/Demo/pull/new/new_branch
remote:
 * [new branch]      new_branch -> new_branch
```

Once you push the changes to your repo, the **Compare & pull request** button will appear in GitHub.

![GitHub's Compare & Pull Request button][5]

Click it and you'll be taken to this screen:

![GitHub's Open pull request button][6]

Open a pull request by clicking the **Create pull request** button. This allows the repo's maintainers to review your contribution. From here, they can merge it if it is good, or they may ask you to make some changes.

### TLDR

In summary, if you want to contribute to a project, the simplest way is to:

1. Find a project you want to contribute to
2. Fork it
3. Clone it to your local system
4. Make a new branch
5. Make your changes
6. Push it back to your repo
7. Click the **Compare & pull request** button
8. Click **Create pull request** to open a new pull request

If the reviewers ask for changes, repeat steps 5 and 6 to add more commits to your pull request.

Happy coding!

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/7/create-pull-request-github

作者:[Kedar Vijay Kulkarni][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/kkulkarnhttps://opensource.com/users/fontanahttps://opensource.com/users/mhanwellhttps://opensource.com/users/mysentimentshttps://opensource.com/users/greg-p
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_hands_team_collaboration.png?itok=u82QepPk (a checklist for a team)
[2]: https://github.com/
[3]: https://opensource.com/sites/default/files/uploads/forkrepo.png (Forking a GitHub repo)
[4]: https://github.com/kedark3/demo
[5]: https://opensource.com/sites/default/files/uploads/compare-and-pull-request-button.png (GitHub's Compare & Pull Request button)
[6]: https://opensource.com/sites/default/files/uploads/open-a-pull-request_crop.png (GitHub's Open pull request button)

110
sources/tech/20190730 How to manage logs in Linux.md
Normal file
@ -0,0 +1,110 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to manage logs in Linux)
[#]: via: (https://www.networkworld.com/article/3428361/how-to-manage-logs-in-linux.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)

How to manage logs in Linux
======
Log files on Linux systems contain a LOT of information — more than you'll ever have time to view. Here are some tips on how you can make use of it without ... drowning in it.
![Greg Lobinski \(CC BY 2.0\)][1]

Managing log files on Linux systems can be incredibly easy or painful. It all depends on what you mean by log management.

If all you mean is how you can go about ensuring that your log files don’t eat up all the disk space on your Linux server, the issue is generally quite straightforward. Log files on Linux systems will automatically roll over, and the system will only maintain a fixed number of the rolled-over logs. Even so, glancing over what can easily be a group of 100 files can be overwhelming. In this post, we'll take a look at how the log rotation works and some of the most relevant log files.

**[ Two-Minute Linux Tips: [Learn how to master a host of Linux commands in these 2-minute video tutorials][2] ]**

### Automatic log rotation

Log files rotate frequently: what was the current log acquires a slightly different file name, and a new log file is established. Take the syslog file as an example. This file is something of a catch-all for a lot of normal system messages. If you **cd** over to **/var/log** and take a look, you’ll probably see a series of syslog files like this:

```
$ ls -l syslog*
-rw-r----- 1 syslog adm  28996 Jul 30 07:40 syslog
-rw-r----- 1 syslog adm  71212 Jul 30 00:00 syslog.1
-rw-r----- 1 syslog adm   5449 Jul 29 00:00 syslog.2.gz
-rw-r----- 1 syslog adm   6152 Jul 28 00:00 syslog.3.gz
-rw-r----- 1 syslog adm   7031 Jul 27 00:00 syslog.4.gz
-rw-r----- 1 syslog adm   5602 Jul 26 00:00 syslog.5.gz
-rw-r----- 1 syslog adm   5995 Jul 25 00:00 syslog.6.gz
-rw-r----- 1 syslog adm  32924 Jul 24 00:00 syslog.7.gz
```

Rolled over at midnight each night, the older syslog files are kept for a week, and then the oldest is deleted. The syslog.7.gz file will be tossed off the system, syslog.6.gz will be renamed syslog.7.gz, and the remainder of the log files will follow suit until syslog becomes syslog.1 and a new syslog file is created. Some syslog files will be larger than others, but in general, none will likely ever get very large, and you’ll never see more than eight of them. This gives you just over a week to review any data they collect.

The number of files maintained for any particular log file depends on the log file itself. For some, you may have as many as 13. Notice how the older files – both for syslog and dpkg – are gzipped to save space. The thinking here is likely that you’ll be most interested in the recent logs. Older logs can be unzipped with **gunzip** as needed.
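Rotated, gzipped logs can also be read programmatically without unzipping them on disk. Here is a minimal Python sketch; the demo writes its own small gzipped "log" file (`demo.log.gz` is a stand-in for something like `/var/log/syslog.2.gz`):

```python
import gzip

def read_gzipped_log(path):
    # Open a gzipped log in text mode and return its lines,
    # tolerating any stray non-UTF-8 bytes.
    with gzip.open(path, "rt", encoding="utf-8", errors="replace") as f:
        return f.readlines()

# Demo: create a small gzipped "log" and read it back.
with gzip.open("demo.log.gz", "wt", encoding="utf-8") as f:
    f.write("Jul 30 07:40:01 host CRON[123]: session opened\n")

lines = read_gzipped_log("demo.log.gz")
print(lines[0].split()[4])  # -> CRON[123]:
```

The same function works on any `.gz` file you have permission to read, so you can grep through older logs with plain Python instead of `gunzip` plus `grep`.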

```
# ls -t dpkg*
dpkg.log       dpkg.log.3.gz  dpkg.log.6.gz  dpkg.log.9.gz   dpkg.log.12.gz
dpkg.log.1     dpkg.log.4.gz  dpkg.log.7.gz  dpkg.log.10.gz
dpkg.log.2.gz  dpkg.log.5.gz  dpkg.log.8.gz  dpkg.log.11.gz
```

Log files can be rotated based on age, as well as by size. Keep this in mind as you examine your log files.

Log file rotation can be configured differently if you are so inclined, though the defaults work for most Linux sysadmins. Take a look at files like **/etc/rsyslog.conf** and **/etc/logrotate.conf** for some of the details.
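As a sketch of what such configuration looks like, a logrotate stanza of the following shape would keep seven daily, compressed rotations of a log. The path and file name here are hypothetical, not taken from any particular system:

```
# Hypothetical /etc/logrotate.d/myapp entry
/var/log/myapp.log {
    daily
    rotate 7
    compress
    delaycompress
    missingok
    notifempty
}
```

`rotate 7` is what produces the "never more than eight files" behavior described above; `delaycompress` leaves the most recent rotation uncompressed so it is still easy to read.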

### Making use of your log files

Managing log files should also include using them from time to time. The first step in making use of log files should probably include getting used to what each log file can tell you about how your system is working and what problems it might have run into. Reading log files from top to bottom is almost never a good option, but knowing how to pull information from them can be of great benefit when you want to get a sense of how well your system is working or need to track down a problem. This also suggests that you have a general idea what kind of information is stored in each file. For example:

```
$ who wtmp | tail -10         # show the most recent logins
$ who wtmp | grep shark       # show recent logins for a particular user
$ grep "sudo:" auth.log       # see who is using sudo
$ tail dmesg                  # look at kernel messages
$ tail dpkg.log               # see recently installed and updated packages
$ more ufw.log                # see firewall activity (i.e., if you are using ufw)
```
Some commands that you run will also extract information from your log files. If you want to see, for example, a list of system reboots, you can use a command like this:

```
$ last reboot
reboot   system boot  5.0.0-20-generic Tue Jul 16 13:19   still running
reboot   system boot  5.0.0-15-generic Sat May 18 17:26 - 15:19 (21+21:52)
reboot   system boot  5.0.0-13-generic Mon Apr 29 10:55 - 15:34 (18+04:39)
```

### Using more advanced log managers

While you can write scripts to make it easier to find interesting information in your log files, you should also be aware that there are some very sophisticated tools available for log file analysis. Some correlate information from multiple sources to get a fuller picture of what’s happening on your network. They may provide real-time monitoring, as well. Tools such as [Solarwinds Log & Event Manager][3] and [PRTG Network Monitor][4] (which includes log monitoring) come to mind.

There are also some free tools that can help with analyzing log files. These include:

  * **Logwatch** — a program to scan system logs for interesting lines
  * **Logcheck** — a system log analyzer and reporter

I'll provide some insights and help on these tools in upcoming posts.

**[ Also see: [Invaluable tips and tricks for troubleshooting Linux][5] ]**

Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3428361/how-to-manage-logs-in-linux.html

作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/07/logs-100806633-large.jpg
[2]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
[3]: https://www.esecurityplanet.com/products/solarwinds-log-event-manager-siem.html
[4]: https://www.paessler.com/prtg
[5]: https://www.networkworld.com/article/3242170/linux/invaluable-tips-and-tricks-for-troubleshooting-linux.html
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world

@ -1,5 +1,5 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

@ -0,0 +1,304 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Using Python to explore Google's Natural Language API)
[#]: via: (https://opensource.com/article/19/7/python-google-natural-language-api)
[#]: author: (JR Oakes https://opensource.com/users/jroakes)

Using Python to explore Google's Natural Language API
======
Google's API can surface clues to how Google is classifying your site and ways to tweak your content to improve search results.

![magnifying glass on computer screen][1]

As a technical search engine optimizer, I am always looking for ways to use data in novel ways to better understand how Google ranks websites. I recently investigated whether Google's [Natural Language API][2] could better inform how Google may be classifying a site's content.

Although there are [open source NLP tools][3], I wanted to explore Google's tools under the assumption it might use the same tech in other products, like Search. This article introduces Google's Natural Language API and explores common natural language processing (NLP) tasks and how they might be used to inform website content creation.

### Understanding the data types

To begin, it is important to understand the types of data that Google's Natural Language API returns.

#### Entities

Entities are text phrases that can be tied back to something in the physical world. Named entity recognition (NER) is a difficult part of NLP because tools often need to look at the full context around words to understand their usage. For example, homographs are spelled the same but have multiple meanings. Does "lead" in a sentence refer to a metal (a noun), causing someone to move (a verb), or the main character in a play (also a noun)? Google has 12 distinct types of entities, as well as a 13th catch-all category called "UNKNOWN." Some of the entities tie back to Wikipedia articles, suggesting [Knowledge Graph][4] influence on the data. Each entity returns a salience score, which is its overall relevance to the supplied text.

![Entities][5]

#### Sentiment

Sentiment, a view of or attitude towards something, is measured at the document and sentence level and for individual entities discovered in the document. The score of the sentiment ranges from -1.0 (negative) to 1.0 (positive). The magnitude represents the non-normalized strength of emotion; it ranges between 0.0 and infinity.
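As a rough illustration of how score and magnitude combine, here is a small Python sketch. The thresholds are my own illustrative choices, not values defined by the API:

```python
def describe_sentiment(score, magnitude):
    """Turn an API-style (score, magnitude) pair into a rough label.

    The cutoffs below are illustrative, not defined by the
    Natural Language API.
    """
    # Magnitude: how much emotion is present, regardless of direction.
    if magnitude < 0.5:
        strength = "weak"
    elif magnitude < 2.0:
        strength = "moderate"
    else:
        strength = "strong"

    # Score: direction of the emotion, from -1.0 to 1.0.
    if score <= -0.25:
        polarity = "negative"
    elif score >= 0.25:
        polarity = "positive"
    else:
        polarity = "neutral"

    return f"{strength} {polarity}"

print(describe_sentiment(0.8, 3.2))   # -> strong positive
print(describe_sentiment(-0.1, 0.2))  # -> weak neutral
```

A long document can have a near-zero score but a high magnitude, meaning strong positive and negative passages that cancel out, which is why both numbers are worth reading together.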

![Sentiment][6]

#### Syntax

Syntax parsing contains most of the common NLP activities found in standard NLP libraries, like [lemmatization][7], [part-of-speech tagging][8], and [dependency-tree parsing][9]. NLP mainly deals with helping machines understand text and the relationship between words. Syntax parsing is a foundational part of most language-processing or understanding tasks.

![Syntax][10]

#### Categories

Categories assign the entire given content to a specific industry or topical category with a confidence score from 0.0 to 1.0. The categories appear to be the same audience and website categories used by other Google tools, like AdWords.

![Categories][11]

### Pulling some data

Now I'll pull some sample data to play around with. I gathered some search queries and their corresponding URLs using Google's [Search Console API][12]. Google Search Console is a tool that reports the terms people use to find a website's pages with Google Search. This [open source Jupyter notebook][13] allows you to pull similar data about your website. For this example, I pulled Google Search Console data on a website (which I won't name) generated between January 1 and June 1, 2019, and restricted it to queries that received at least one click (as opposed to just impressions).

This dataset contains information on 2,969 pages and 7,144 queries that displayed the website's pages in Google Search results. The table below shows that the vast majority of pages received very few clicks, as this site focuses on what is called long-tail (more specific and usually longer) as opposed to short-tail (very general, higher search volume) search queries.

![Histogram of clicks for all pages][14]

To reduce the dataset size and get only top-performing pages, I limited the dataset to pages that received at least 20 impressions over the period. This is the histogram of clicks by page for this refined dataset, which includes 723 pages:
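The filtering step itself is simple. Here is a sketch in plain Python, assuming the Search Console rows have been aggregated into per-page dicts; the field names ("page", "impressions", "clicks") and sample values are my own, not the actual dataset:

```python
# Each row aggregates Search Console data for one page.
# Field names and values are illustrative.
rows = [
    {"page": "/a", "impressions": 150, "clicks": 12},
    {"page": "/b", "impressions": 19,  "clicks": 1},
    {"page": "/c", "impressions": 25,  "clicks": 0},
]

MIN_IMPRESSIONS = 20

# Keep only pages with at least 20 impressions over the period.
top_pages = [r for r in rows if r["impressions"] >= MIN_IMPRESSIONS]
print([r["page"] for r in top_pages])  # -> ['/a', '/c']
```

The same one-line filter scales to the full dataset, turning 2,969 pages into the 723 analyzed below.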

![Histogram of clicks for subset of pages][15]

### Using Google's Natural Language API library in Python

To test out the API, create a small script that leverages the **[google-cloud-language][16]** library in Python. The following code is for Python 3.5+.

First, activate a new virtual environment and install the libraries. Replace **<your-env>** with a unique name for the environment.

```
virtualenv <your-env>
source <your-env>/bin/activate
pip install --upgrade google-cloud-language
pip install --upgrade requests
```

This script extracts HTML from a URL and feeds the HTML to the Natural Language API. It returns a dictionary of **sentiment**, **entities**, and **categories**, where the values for these keys are all lists. I used a Jupyter notebook to run this code because it makes it easier to annotate and retry code using the same kernel.

```
# Import needed libraries
import requests
import json

from google.cloud import language
from google.oauth2 import service_account
from google.cloud.language import enums
from google.cloud.language import types

# Build language API client (requires service account key)
client = language.LanguageServiceClient.from_service_account_json('services.json')

# Define functions
def pull_googlenlp(client, url, invalid_types = ['OTHER'], **data):

    html = load_text_from_url(url, **data)

    if not html:
        return None

    document = types.Document(
        content=html,
        type=language.enums.Document.Type.HTML )

    features = {'extract_syntax': True,
                'extract_entities': True,
                'extract_document_sentiment': True,
                'extract_entity_sentiment': True,
                'classify_text': False
                }

    response = client.annotate_text(document=document, features=features)
    sentiment = response.document_sentiment
    entities = response.entities

    response = client.classify_text(document)
    categories = response.categories

    def get_type(type):
        return enums.Entity.Type(type).name

    result = {}

    result['sentiment'] = []
    result['entities'] = []
    result['categories'] = []

    if sentiment:
        result['sentiment'] = [{ 'magnitude': sentiment.magnitude, 'score': sentiment.score }]

    for entity in entities:
        if get_type(entity.type) not in invalid_types:
            result['entities'].append({'name': entity.name, 'type': get_type(entity.type), 'salience': entity.salience, 'wikipedia_url': entity.metadata.get('wikipedia_url', '-') })

    for category in categories:
        result['categories'].append({'name': category.name, 'confidence': category.confidence})

    return result


def load_text_from_url(url, **data):

    timeout = data.get('timeout', 20)

    try:
        print("Extracting text from: {}".format(url))
        response = requests.get(url, timeout=timeout)

        text = response.text
        status = response.status_code

        if status == 200 and len(text) > 0:
            return text

        return None

    except Exception as e:
        print('Problem with url: {0}.'.format(url))
        return None
```

To access the API, follow Google's [quickstart instructions][17] to create a project in Google Cloud Console, enable the API, and download a service account key. Afterward, you should have a JSON file that looks similar to this:

![services.json file][18]

Upload it to your project folder with the name **services.json**.

Then you can pull the API data for any URL (such as Opensource.com) by running the following:

```
url = "https://opensource.com/article/19/6/how-ssh-running-container"
pull_googlenlp(client, url)
```

If it's set up correctly, you should see this output:

![Output from pulling API data][19]

To make it easier to get started, I created a [Jupyter Notebook][20] that you can download and use to test extracting web pages' entities, categories, and sentiment. I prefer using [JupyterLab][21], which is an extension of Jupyter Notebooks that includes a file viewer and other enhanced user experience features. If you're new to these tools, I think [Anaconda][22] is the easiest way to get started using Python and Jupyter. It makes installing and setting up Python, as well as common libraries, very easy, especially on Windows.

### Playing with the data

With these functions that scrape the HTML of the given page and pass it to the Natural Language API, I can run some analysis across the 723 URLs. First, I'll look at the categories relevant to the site by looking at the count of returned top categories across all pages.

#### Categories

![Categories data from example site][23]

This seems to be a fairly accurate representation of the key themes of this particular site. Looking at a single query that one of the top-performing pages ranks for, I can compare the other ranking pages in Google's results for that same query.

  * _URL 1 | Top Category: /Law & Government/Legal (0.5099999904632568) of 1 total categories._
  * _No categories returned._
  * _URL 3 | Top Category: /Internet & Telecom/Mobile & Wireless (0.6100000143051147) of 1 total categories._
  * _URL 4 | Top Category: /Computers & Electronics/Software (0.5799999833106995) of 2 total categories._
  * _URL 5 | Top Category: /Internet & Telecom/Mobile & Wireless/Mobile Apps & Add-Ons (0.75) of 1 total categories._
  * _No categories returned._
  * _URL 7 | Top Category: /Computers & Electronics/Software/Business & Productivity Software (0.7099999785423279) of 2 total categories._
  * _URL 8 | Top Category: /Law & Government/Legal (0.8999999761581421) of 3 total categories._
  * _URL 9 | Top Category: /Reference/General Reference/Forms Guides & Templates (0.6399999856948853) of 1 total categories._
  * _No categories returned._

The numbers in parentheses above represent Google's confidence that the content of the page is relevant for that category. The eighth result has much higher confidence than the first result for the same category, so this doesn't seem to be a magic bullet for defining relevance for ranking. Also, the categories are much too broad to make sense for a specific search topic.

Looking at average confidence by ranking position, there doesn't seem to be a correlation between these two metrics, at least for this dataset:

![Plot of average confidence by ranking position][24]
|
||||
|
||||
Both of these approaches make sense to review for a website at scale to ensure the content categories seem appropriate and that boilerplate or sales content isn't moving your pages out of relevance for your main expertise area. Imagine if you sell industrial supplies but your pages return _Marketing_ as the main category. There doesn't seem to be a strong suggestion that category relevancy has anything to do with how well you rank, at least at the page level.
|
||||
|
||||
#### Sentiment
|
||||
|
||||
I won't spend much time on sentiment. Across all the pages that returned a sentiment from the API, they fell into two bins: 0.1 and 0.2, which is almost neutral sentiment. Based on the histogram, it is easy to tell that sentiment doesn't provide much value. It would be a much more interesting metric to run for a news or opinion site to measure the correlation of sentiment to median rank for particular pages.
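The binning behind that histogram can be sketched in a few lines, assuming the per-page sentiment scores have already been collected (the scores below are made up for illustration):

```python
from collections import Counter

def sentiment_bins(scores):
    """Bucket document sentiment scores into 0.1-wide bins."""
    return Counter(round(s, 1) for s in scores)

# Hypothetical scores -- not the example site's real data.
bins = sentiment_bins([0.11, 0.14, 0.22, 0.18, 0.09])
```

When nearly everything lands in the 0.1 and 0.2 buckets, as it did here, the metric carries little signal for this kind of site.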
|
||||
|
||||
![Histogram of sentiment for unique pages][25]
|
||||
|
||||
#### Entities
|
||||
|
||||
Entities were the most interesting part of the API, in my opinion. This is a selection of the top entities, across all pages, by salience (or relevancy to the page). Notice that Google is inferring different types for the same terms (Bill of Sale), perhaps incorrectly. This is caused by the terms appearing in different contexts in the content.
|
||||
|
||||
![Top entities for example site][26]
|
||||
|
||||
Then I looked at each entity type individually and all together to see if there was any correlation between the salience of the entity and the best-ranking position of the page. For each type, I matched the salience (overall relevance to the page) of the top entity matching that type ordered by salience (descending).
|
||||
|
||||
Some of the entity types returned zero salience across all examples, so I omitted those results from the charts below.
|
||||
|
||||
![Correlation between salience and best ranking position][27]
|
||||
|
||||
The **Consumer Good** entity type had the highest positive correlation, with a Pearson correlation of 0.15854, although since lower-numbered rankings are better, the **Person** entity had the best result with a -0.15483 correlation. This is an extremely small sample set, especially for individual entity types, so I can't make too much of the data. I didn't find any value with a strong correlation, but the **Person** entity makes the most sense. Sites usually have pages about their chief executive and other key employees, and these pages are very likely to do well in search results for those queries.
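For reference, the Pearson correlation used above can be computed directly; the salience/rank pairs below are illustrative, not the article's dataset:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical (salience, best ranking position) pairs.
salience = [0.51, 0.61, 0.58, 0.75, 0.71]
best_rank = [9, 12, 7, 2, 4]
r = pearson(salience, best_rank)
```

A negative `r` here is the "good" direction, since lower-numbered ranking positions are better.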
|
||||
|
||||
Moving on, while looking at the site holistically, the following themes emerge based on **entity name** and **entity type**.
|
||||
|
||||
![Themes based on entity name and entity type][28]
|
||||
|
||||
I blurred a few results that seem too specific to mask the site's identity. Thematically, the name information is a good way to look topically at your (or a competitor's) site to see its core themes. This was done based only on the example site's ranking URLs and not all of the site's possible URLs (since Search Console data only reports on pages that received impressions in Google), but the results would be even more interesting if you were to pull a site's main ranking URLs from a tool like [Ahrefs][29], which tracks many, many queries and the Google results for those queries.
|
||||
|
||||
The other interesting piece in the entity data is that entities marked **CONSUMER_GOOD** tended to "look" like results I have seen in Knowledge Results, i.e., the Google Search results on the right-hand side of the page.
|
||||
|
||||
![Google search results][30]
|
||||
|
||||
Of the **Consumer Good** entity names from our data set that had three or more words, 5.8% had the same Knowledge Results as Google's results for the entity name. This means that if you searched for the term or phrase in Google, the block on the right (e.g., the Knowledge Results showing Linux above) would display in the search result page. Since Google "picks" an exemplar webpage to represent the entity, this is a good way to identify opportunities to be singularly featured in search results. Also of interest: of the 5.8% of names that displayed these Knowledge Results in Google, none of the entities had Wikipedia URLs returned from the Natural Language API. This is interesting enough to warrant additional analysis, and it would be especially useful for more esoteric topics that traditional global rank-tracking tools, like Ahrefs, don't have in their databases.
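A sketch of that check, under stated assumptions: the entity names have been collected into a list, and `has_knowledge_result` is a hypothetical predicate (e.g., backed by a manual SERP check); the Natural Language API itself does not report Knowledge Result presence.

```python
def share_with_knowledge_result(names, has_knowledge_result, min_words=3):
    """Fraction of sufficiently long entity names that trigger a Knowledge Result.

    `has_knowledge_result` is a hypothetical callback, not an API feature.
    """
    longer = [n for n in names if len(n.split()) >= min_words]
    if not longer:
        return 0.0
    return sum(1 for n in longer if has_knowledge_result(n)) / len(longer)

# Illustrative names and predicate -- not the article's real data.
demo = share_with_knowledge_result(
    ["bill of sale form", "car", "lease agreement template"],
    lambda n: "bill" in n,
)
```

In the demo, "car" is filtered out for being too short, and one of the two remaining names matches, giving a share of 0.5.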
|
||||
|
||||
As mentioned, the Knowledge Results can be important to site owners who want to have their content featured in Google, as they are strongly highlighted on desktop search. They are also, hypothetically, likely to line up with knowledge-base topics from Google [Discover][31], an offering for Android and iOS that attempts to surface content for users based on topics they are interested in but haven't searched for explicitly.
|
||||
|
||||
### Wrapping up
|
||||
|
||||
This article went over Google's Natural Language API, shared some code, and investigated ways this API may be useful for site owners. The key takeaways are:
|
||||
|
||||
* Learning to use Python and Jupyter Notebooks opens your data-gathering tasks to a world of incredible APIs and open source projects (like Pandas and NumPy) built by incredibly smart and talented people.
|
||||
* Python allows me to quickly pull and test my hypothesis about the value of an API for a particular purpose.
|
||||
  * Passing a website's pages through Google's categorization API may be a good check to ensure its content falls into the correct thematic categories. Doing this for competitors' sites may also offer guidance on where to tune up or create content.
|
||||
* Google's sentiment score didn't seem to be an interesting metric for the example site, but it may be for news or opinion-based sites.
|
||||
* Google's found entities gave a much more granular topic-level view of the website holistically and, like categorization, would be very interesting to use in competitive content analysis.
|
||||
  * Entities may help define opportunities where your content can line up with Google Knowledge blocks in search results or Google Discover results. With 5.8% of the longer (three-plus-word) **Consumer Goods** entity names in our result set displaying these Knowledge Results, some sites may have an opportunity to better optimize their pages' salience scores for these entities and stand a better chance of capturing this featured placement in Google search results or Google Discover suggestions.
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/7/python-google-natural-language-api
|
||||
|
||||
作者:[JR Oakes][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/jroakes
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/search_find_code_issue_bug_programming.png?itok=XPrh7fa0 (magnifying glass on computer screen)
|
||||
[2]: https://cloud.google.com/natural-language/#natural-language-api-demo
|
||||
[3]: https://opensource.com/article/19/3/natural-language-processing-tools
|
||||
[4]: https://en.wikipedia.org/wiki/Knowledge_Graph
|
||||
[5]: https://opensource.com/sites/default/files/uploads/entities.png (Entities)
|
||||
[6]: https://opensource.com/sites/default/files/uploads/sentiment.png (Sentiment)
|
||||
[7]: https://en.wikipedia.org/wiki/Lemmatisation
|
||||
[8]: https://en.wikipedia.org/wiki/Part-of-speech_tagging
|
||||
[9]: https://en.wikipedia.org/wiki/Parse_tree#Dependency-based_parse_trees
|
||||
[10]: https://opensource.com/sites/default/files/uploads/syntax.png (Syntax)
|
||||
[11]: https://opensource.com/sites/default/files/uploads/categories.png (Categories)
|
||||
[12]: https://developers.google.com/webmaster-tools/
|
||||
[13]: https://github.com/MLTSEO/MLTS/blob/master/Demos.ipynb
|
||||
[14]: https://opensource.com/sites/default/files/uploads/histogram_1.png (Histogram of clicks for all pages)
|
||||
[15]: https://opensource.com/sites/default/files/uploads/histogram_2.png (Histogram of clicks for subset of pages)
|
||||
[16]: https://pypi.org/project/google-cloud-language/
|
||||
[17]: https://cloud.google.com/natural-language/docs/quickstart
|
||||
[18]: https://opensource.com/sites/default/files/uploads/json_file.png (services.json file)
|
||||
[19]: https://opensource.com/sites/default/files/uploads/output.png (Output from pulling API data)
|
||||
[20]: https://github.com/MLTSEO/MLTS/blob/master/Tutorials/Google_Language_API_Intro.ipynb
|
||||
[21]: https://github.com/jupyterlab/jupyterlab
|
||||
[22]: https://www.anaconda.com/distribution/
|
||||
[23]: https://opensource.com/sites/default/files/uploads/categories_2.png (Categories data from example site)
|
||||
[24]: https://opensource.com/sites/default/files/uploads/plot.png (Plot of average confidence by ranking position )
|
||||
[25]: https://opensource.com/sites/default/files/uploads/histogram_3.png (Histogram of sentiment for unique pages)
|
||||
[26]: https://opensource.com/sites/default/files/uploads/entities_2.png (Top entities for example site)
|
||||
[27]: https://opensource.com/sites/default/files/uploads/salience_plots.png (Correlation between salience and best ranking position)
|
||||
[28]: https://opensource.com/sites/default/files/uploads/themes.png (Themes based on entity name and entity type)
|
||||
[29]: https://ahrefs.com/
|
||||
[30]: https://opensource.com/sites/default/files/uploads/googleresults.png (Google search results)
|
||||
[31]: https://www.blog.google/products/search/introducing-google-discover/
|
449
sources/tech/20190731 Bash aliases you can-t live without.md
Normal file
@ -0,0 +1,449 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Bash aliases you can’t live without)
|
||||
[#]: via: (https://opensource.com/article/19/7/bash-aliases)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
|
||||
Bash aliases you can’t live without
|
||||
======
|
||||
Tired of typing the same long commands over and over? Do you feel
|
||||
inefficient working on the command line? Bash aliases can make a world
|
||||
of difference.
|
||||
![bash logo on green background][1]
|
||||
|
||||
A Bash alias is a method of supplementing or overriding Bash commands with new ones. Bash aliases make it easy for users to customize their experience in a [POSIX][2] terminal. They are often defined in **$HOME/.bashrc** or **$HOME/.bash_aliases** (which must be loaded by **$HOME/.bashrc**).
|
||||
|
||||
Most distributions add at least some popular aliases in the default **.bashrc** file of any new user account. These are simple ones to demonstrate the syntax of a Bash alias:
|
||||
|
||||
|
||||
```
|
||||
alias ls='ls -F'
|
||||
alias ll='ls -lh'
|
||||
```
|
||||
|
||||
Not all distributions ship with pre-populated aliases, though. If you add aliases manually, then you must load them into your current Bash session:
|
||||
|
||||
|
||||
```
|
||||
$ source ~/.bashrc
|
||||
```
|
||||
|
||||
Otherwise, you can close your terminal and re-open it so that it reloads its configuration file.
|
||||
|
||||
With those aliases defined in your Bash initialization script, you can then type **ll** and get the results of **ls -lh**, and when you type **ls** you get the output of **ls -F** instead of plain old [ls][3].
|
||||
|
||||
Those aliases are great to have, but they just scratch the surface of what’s possible. Here are the top 10 Bash aliases that, once you try them, you won’t be able to live without.
|
||||
|
||||
### Set up first
|
||||
|
||||
Before beginning, create a file called **~/.bash_aliases**:
|
||||
|
||||
|
||||
```
|
||||
$ touch ~/.bash_aliases
|
||||
```
|
||||
|
||||
Then, make sure that this code appears in your **~/.bashrc** file:
|
||||
|
||||
|
||||
```
|
||||
if [ -e $HOME/.bash_aliases ]; then
|
||||
source $HOME/.bash_aliases
|
||||
fi
|
||||
```
|
||||
|
||||
If you want to try any of the aliases in this article for yourself, enter them into your **.bash_aliases** file, and then load them into your Bash session with the **source ~/.bashrc** command.
|
||||
|
||||
### Sort by file size
|
||||
|
||||
If you started your computing life with GUI file managers like Nautilus in GNOME, the Finder in MacOS, or Explorer in Windows, then you’re probably used to sorting a list of files by their size. You can do that in a terminal as well, but it’s not exactly succinct.
|
||||
|
||||
Add this alias to your configuration on a GNU system:
|
||||
|
||||
|
||||
```
|
||||
alias lt='ls --human-readable --size -1 -S --classify'
|
||||
```
|
||||
|
||||
This alias replaces **lt** with an **ls** command that displays the size of each item, and then sorts it by size, in a single column, with a notation to indicate the kind of file. Load your new alias, and then try it out:
|
||||
|
||||
|
||||
```
|
||||
$ source ~/.bashrc
|
||||
$ lt
|
||||
total 344K
|
||||
140K configure*
|
||||
44K aclocal.m4
|
||||
36K LICENSE
|
||||
32K config.status*
|
||||
24K Makefile
|
||||
24K Makefile.in
|
||||
12K config.log
|
||||
8.0K README.md
|
||||
4.0K info.slackermedia.Git-portal.json
|
||||
4.0K git-portal.spec
|
||||
4.0K flatpak.path.patch
|
||||
4.0K Makefile.am*
|
||||
4.0K dot-gitlab.ci.yml
|
||||
4.0K configure.ac*
|
||||
0 autom4te.cache/
|
||||
0 share/
|
||||
0 bin/
|
||||
0 install-sh@
|
||||
0 compile@
|
||||
0 missing@
|
||||
0 COPYING@
|
||||
```
|
||||
|
||||
On MacOS or BSD, the **ls** command doesn’t have the same options, so this alias works instead:
|
||||
|
||||
|
||||
```
|
||||
alias lt='du -sh * | sort -h'
|
||||
```
|
||||
|
||||
The results of this version are a little different:
|
||||
|
||||
|
||||
```
|
||||
$ du -sh * | sort -h
|
||||
0 compile
|
||||
0 COPYING
|
||||
0 install-sh
|
||||
0 missing
|
||||
4.0K configure.ac
|
||||
4.0K dot-gitlab.ci.yml
|
||||
4.0K flatpak.path.patch
|
||||
4.0K git-portal.spec
|
||||
4.0K info.slackermedia.Git-portal.json
|
||||
4.0K Makefile.am
|
||||
8.0K README.md
|
||||
12K config.log
|
||||
16K bin
|
||||
24K Makefile
|
||||
24K Makefile.in
|
||||
32K config.status
|
||||
36K LICENSE
|
||||
44K aclocal.m4
|
||||
60K share
|
||||
140K configure
|
||||
476K autom4te.cache
|
||||
```
|
||||
|
||||
In fact, even on Linux, that command is useful, because using **ls** lists directories and symlinks as being 0 in size, which may not be the information you actually want. It’s your choice.
|
||||
|
||||
_Thanks to Brad Alexander for this alias idea._
|
||||
|
||||
### View only mounted drives
|
||||
|
||||
The **mount** command used to be so simple. With just one command, you could get a list of all the mounted filesystems on your computer, and it was frequently used for an overview of what drives were attached to a workstation. It used to be impressive to see more than three or four entries because most computers don’t have many more USB ports than that, so the results were manageable.
|
||||
|
||||
Computers are a little more complicated now, and between LVM, physical drives, network storage, and virtual filesystems, the results of **mount** can be difficult to parse:
|
||||
|
||||
|
||||
```
|
||||
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
|
||||
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
|
||||
devtmpfs on /dev type devtmpfs (rw,nosuid,seclabel,size=8131024k,nr_inodes=2032756,mode=755)
|
||||
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
|
||||
[...]
|
||||
/dev/nvme0n1p2 on /boot type ext4 (rw,relatime,seclabel)
|
||||
/dev/nvme0n1p1 on /boot/efi type vfat (rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=winnt,errors=remount-ro)
|
||||
[...]
|
||||
gvfsd-fuse on /run/user/100977/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,relatime,user_id=100977,group_id=100977)
|
||||
/dev/sda1 on /run/media/seth/pocket type ext4 (rw,nosuid,nodev,relatime,seclabel,uhelper=udisks2)
|
||||
/dev/sdc1 on /run/media/seth/trip type ext4 (rw,nosuid,nodev,relatime,seclabel,uhelper=udisks2)
|
||||
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
|
||||
```
|
||||
|
||||
To solve that problem, try an alias like this:
|
||||
|
||||
|
||||
```
|
||||
alias mnt="mount | awk -F' ' '{ printf \"%s\t%s\n\",\$1,\$3; }' | column -t | egrep ^/dev/ | sort"
|
||||
```
|
||||
|
||||
This alias uses **awk** to parse the output of **mount** by column, reducing the output to what you are probably looking for (which hard drives, rather than virtual file systems, are mounted):
|
||||
|
||||
|
||||
```
|
||||
$ mnt
|
||||
/dev/mapper/fedora-root /
|
||||
/dev/nvme0n1p1 /boot/efi
|
||||
/dev/nvme0n1p2 /boot
|
||||
/dev/sda1 /run/media/seth/pocket
|
||||
/dev/sdc1 /run/media/seth/trip
|
||||
```
|
||||
|
||||
On MacOS, the **mount** command doesn’t provide terribly verbose output, so an alias may be overkill. However, if you prefer a succinct report, try this:
|
||||
|
||||
|
||||
```
|
||||
alias mnt='mount | grep -E ^/dev | column -t'
|
||||
```
|
||||
|
||||
The results:
|
||||
|
||||
|
||||
```
|
||||
$ mnt
|
||||
/dev/disk1s1 on / (apfs, local, journaled)
|
||||
/dev/disk1s4 on /private/var/vm (apfs, local, noexec, journaled, noatime, nobrowse)
|
||||
```
|
||||
|
||||
### Find a command in your grep history
|
||||
|
||||
Sometimes you figure out how to do something in the terminal, and promise yourself that you’ll never forget what you’ve just learned. Then an hour goes by, and you’ve completely forgotten what you did.
|
||||
|
||||
Searching through your Bash history is something everyone has to do from time to time. If you know exactly what you’re searching for, you can use **Ctrl+R** to do a reverse search through your history, but sometimes you can’t remember the exact command you want to find.
|
||||
|
||||
Here’s an alias to make that task a little easier:
|
||||
|
||||
|
||||
```
|
||||
alias gh='history|grep'
|
||||
```
|
||||
|
||||
Here’s an example of how to use it:
|
||||
|
||||
|
||||
```
|
||||
$ gh bash
|
||||
482 cat ~/.bashrc | grep _alias
|
||||
498 emacs ~/.bashrc
|
||||
530 emacs ~/.bash_aliases
|
||||
531 source ~/.bashrc
|
||||
```
|
||||
|
||||
### Sort by modification time
|
||||
|
||||
It happens every Monday: You get to work, you sit down at your computer, you open a terminal, and you find you’ve forgotten what you were doing last Friday. What you need is an alias to list the most recently modified files.
|
||||
|
||||
You can use the **ls** command to create an alias to help you find where you left off:
|
||||
|
||||
|
||||
```
|
||||
alias left='ls -t -1'
|
||||
```
|
||||
|
||||
The output is simple, although you can extend it with the **-l** (long listing) option if you prefer. The alias, as listed, displays this:
|
||||
|
||||
|
||||
```
|
||||
$ left
|
||||
demo.jpeg
|
||||
demo.xcf
|
||||
design-proposal.md
|
||||
rejects.txt
|
||||
brainstorm.txt
|
||||
query-letter.xml
|
||||
```
|
||||
|
||||
### Count files
|
||||
|
||||
If you need to know how many files you have in a directory, the solution is one of the most classic examples of UNIX command construction: you list files with the **ls** command, control its output to be only one column with the **-1** option, and then pipe that output to the **wc** (word count) command to count the lines, one per file.
|
||||
|
||||
It’s a brilliant demonstration of how the UNIX philosophy allows users to build their own solutions using small system components. This command combination is also a lot to type if you happen to do it several times a day, and it doesn’t exactly work for a directory of directories without using the **-R** option, which introduces new lines to the output and renders the exercise useless.
|
||||
|
||||
Instead, this alias makes the process easy:
|
||||
|
||||
|
||||
```
|
||||
alias count='find . -type f | wc -l'
|
||||
```
|
||||
|
||||
This one counts files, ignoring directories, but _not_ the contents of directories. If you have a project folder containing two directories, each of which contains two files, the alias returns four, because there are four files in the entire project.
|
||||
|
||||
|
||||
```
|
||||
$ ls
|
||||
foo bar
|
||||
$ count
|
||||
4
|
||||
```
|
||||
|
||||
### Create a Python virtual environment
|
||||
|
||||
Do you code in Python?
|
||||
|
||||
Do you code in Python a lot?
|
||||
|
||||
If you do, then you know that creating a Python virtual environment requires, at the very least, 53 keystrokes.
|
||||
That’s 49 too many, but that’s easily circumvented with two new aliases called **ve** and **va**:
|
||||
|
||||
|
||||
```
|
||||
alias ve='python3 -m venv ./venv'
|
||||
alias va='source ./venv/bin/activate'
|
||||
```
|
||||
|
||||
Running **ve** creates a new directory, called **venv**, containing the usual virtual environment filesystem for Python3. The **va** alias activates the environment in your current shell:
|
||||
|
||||
|
||||
```
|
||||
$ cd my-project
|
||||
$ ve
|
||||
$ va
|
||||
(venv) $
|
||||
```
|
||||
|
||||
### Add a copy progress bar
|
||||
|
||||
Everybody pokes fun at progress bars because they’re infamously inaccurate. And yet, deep down, we all seem to want them. The UNIX **cp** command has no progress bar, but it does have a **-v** option for verbosity, meaning that it echoes the name of each file being copied to your terminal. That’s a pretty good hack, but it doesn’t work so well when you’re copying one big file and want some indication of how much of the file has yet to be transferred.
|
||||
|
||||
The **pv** command provides a progress bar during copy, but it’s not common as a default application. On the other hand, the **rsync** command is included in the default installation of nearly every POSIX system available, and it’s widely recognized as one of the smartest ways to copy files both remotely and locally.
|
||||
|
||||
Better yet, it has a built-in progress bar.
|
||||
|
||||
|
||||
```
|
||||
alias cpv='rsync -ah --info=progress2'
|
||||
```
|
||||
|
||||
Using this alias is the same as using the **cp** command:
|
||||
|
||||
|
||||
```
|
||||
$ cpv bigfile.flac /run/media/seth/audio/
|
||||
3.83M 6% 213.15MB/s 0:00:00 (xfr#4, to-chk=0/4)
|
||||
```
|
||||
|
||||
An interesting side effect of using this command is that **rsync** copies both files and directories without the **-r** flag that **cp** would otherwise require.
|
||||
|
||||
### Protect yourself from file removal accidents
|
||||
|
||||
You shouldn’t use the **rm** command. The **rm** manual even says so:
|
||||
|
||||
> _Warning_: If you use ‘rm’ to remove a file, it is usually possible to recover the contents of that file. If you want more assurance that the contents are truly unrecoverable, consider using ‘shred’.
|
||||
|
||||
If you want to remove a file, you should move the file to your Trash, just as you do when using a desktop.
|
||||
|
||||
POSIX makes this easy, because the Trash is an accessible, actual location in your filesystem. That location may change, depending on your platform: On a [FreeDesktop][4], the Trash is located at **~/.local/share/Trash**, while on MacOS it’s **~/.Trash**, but either way, it’s just a directory into which you place files that you want out of sight until you’re ready to erase them forever.
|
||||
|
||||
This simple alias provides a way to toss files into the Trash bin from your terminal:
|
||||
|
||||
|
||||
```
|
||||
alias tcn='mv --force -t ~/.local/share/Trash '
|
||||
```
|
||||
|
||||
This alias uses a little-known **mv** flag that enables you to provide the file you want to move as the final argument, ignoring the usual requirement for that file to be listed first. Now you can use your new command to move files and folders to your system Trash:
|
||||
|
||||
|
||||
```
|
||||
$ ls
|
||||
foo bar
|
||||
$ tcn foo
|
||||
$ ls
|
||||
bar
|
||||
```
|
||||
|
||||
Now the file is "gone," but only until you realize in a cold sweat that you still need it. At that point, you can rescue the file from your system Trash; be sure to tip the Bash and **mv** developers on the way out.
|
||||
|
||||
**Note:** If you need a more robust **Trash** command with better FreeDesktop compliance, see [Trashy][5].
|
||||
|
||||
### Simplify your Git workflow
|
||||
|
||||
Everyone has a unique workflow, but there are usually repetitive tasks no matter what. If you work with Git on a regular basis, then there’s probably some sequence you find yourself repeating pretty frequently. Maybe you find yourself going back to the master branch and pulling the latest changes over and over again during the day, or maybe you find yourself creating tags and then pushing them to the remote, or maybe it’s something else entirely.
|
||||
|
||||
No matter what Git incantation you’ve grown tired of typing, you may be able to alleviate some pain with a Bash alias. Largely thanks to its ability to pass arguments to hooks, Git has a rich set of introspective commands that save you from having to perform uncanny feats in Bash.
|
||||
|
||||
For instance, while you might struggle to locate, in Bash, a project’s top-level directory (which, as far as Bash is concerned, is an entirely arbitrary designation, since the absolute top level to a computer is the root directory), Git knows its top level with a simple query. If you study up on Git hooks, you’ll find yourself able to find out all kinds of information that Bash knows nothing about, but you can leverage that information with a Bash alias.
|
||||
|
||||
Here’s an alias to find the top level of a Git project, no matter where in that project you are currently working, and then to change directory to it, change to the master branch, and perform a Git pull:
|
||||
|
||||
|
||||
```
|
||||
alias startgit='cd `git rev-parse --show-toplevel` && git checkout master && git pull'
|
||||
```
|
||||
|
||||
This kind of alias is by no means a universally useful alias, but it demonstrates how a relatively simple alias can eliminate a lot of laborious navigation, commands, and waiting for prompts.
|
||||
|
||||
A simpler, and probably more universal, alias returns you to the Git project’s top level. This alias is useful because when you’re working on a project, that project more or less becomes your "temporary home" directory. It should be as simple to go "home" as it is to go to your actual home, and here’s an alias to do it:
|
||||
|
||||
|
||||
```
|
||||
alias cg='cd `git rev-parse --show-toplevel`'
|
||||
```
|
||||
|
||||
Now the command **cg** takes you to the top of your Git project, no matter how deep into its directory structure you have descended.
|
||||
|
||||
### Change directories and view the contents at the same time
|
||||
|
||||
It was once (allegedly) proposed by a leading scientist that we could solve many of the planet’s energy problems by harnessing the energy expended by geeks typing **cd** followed by **ls**.
|
||||
It’s a common pattern, because generally when you change directories, you have the impulse or the need to see what’s around.
|
||||
|
||||
But "walking" your computer’s directory tree doesn’t have to be a start-and-stop process.
|
||||
|
||||
This one’s cheating, because it’s not an alias at all, but it’s a great excuse to explore Bash functions. While aliases are great for quick substitutions, Bash allows you to add local functions in your **.bashrc** file (or a separate functions file that you load into **.bashrc**, just as you do your aliases file).
|
||||
|
||||
To keep things modular, create a new file called **~/.bash_functions** and then have your **.bashrc** load it:
|
||||
|
||||
|
||||
```
|
||||
if [ -e $HOME/.bash_functions ]; then
|
||||
source $HOME/.bash_functions
|
||||
fi
|
||||
```
|
||||
|
||||
In the functions file, add this code:
|
||||
|
||||
|
||||
```
|
||||
function cl() {
|
||||
DIR="$*";
|
||||
# if no DIR given, go home
|
||||
if [ $# -lt 1 ]; then
|
||||
DIR=$HOME;
|
||||
fi;
|
||||
builtin cd "${DIR}" && \
|
||||
# use your preferred ls command
|
||||
ls -F --color=auto
|
||||
}
|
||||
```
|
||||
|
||||
Load the function into your Bash session and then try it out:
|
||||
|
||||
|
||||
```
|
||||
$ source ~/.bash_functions
|
||||
$ cl Documents
|
||||
foo bar baz
|
||||
$ pwd
|
||||
/home/seth/Documents
|
||||
$ cl ..
|
||||
Desktop Documents Downloads
|
||||
[...]
|
||||
$ pwd
|
||||
/home/seth
|
||||
```
|
||||
|
||||
Functions are much more flexible than aliases, but with that flexibility comes the responsibility for you to ensure that your code makes sense and does what you expect. Aliases are meant to be simple, so keep them easy, but useful. For serious modifications to how Bash behaves, use functions or custom shell scripts saved to a location in your **PATH**.
|
||||
|
||||
For the record, there _are_ some clever hacks to implement the **cd** and **ls** sequence as an alias, so if you’re patient enough, then the sky is the limit even using humble aliases.
|
||||
|
||||
### Start aliasing and functioning
|
||||
|
||||
Customizing your environment is what makes Linux fun, and increasing your efficiency is what makes Linux life-changing. Get started with simple aliases, graduate to functions, and post your must-have aliases in the comments!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/7/bash-aliases
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U (bash logo on green background)
|
||||
[2]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
|
||||
[3]: https://opensource.com/article/19/7/master-ls-command
|
||||
[4]: https://www.freedesktop.org/wiki/
|
||||
[5]: https://gitlab.com/trashy/trashy
|
@ -0,0 +1,229 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to structure a multi-file C program: Part 2)
|
||||
[#]: via: (https://opensource.com/article/19/7/structure-multi-file-c-part-2)
|
||||
[#]: author: (Erik O'Shaughnessy https://opensource.com/users/jnyjny)
|
||||
|
||||
How to structure a multi-file C program: Part 2
|
||||
======
|
||||
Dive deeper into the structure of a C program composed of multiple files
|
||||
in the second part of this article.
|
||||
![4 manilla folders, yellow, green, purple, blue][1]
|
||||
|
||||
In [Part 1][2], I laid out the structure for a multi-file C program called [MeowMeow][3] that implements a toy [codec][4]. I also talked about the Unix philosophy of program design, laying out a number of empty files to start with a good structure from the very beginning. Lastly, I touched on what a Makefile is and what it can do for you. This article picks up where the other one left off and now I'll get to the actual implementation of our silly (but instructional) MeowMeow codec.
|
||||
|
||||
The structure of the **main.c** file for **meow**/**unmeow** should be familiar to anyone who's read my article "[How to write a good C main function][5]." It has the following general outline:
|
||||
|
||||
|
||||
```
|
||||
/* main.c - MeowMeow, a stream encoder/decoder */
|
||||
|
||||
/* 00 system includes */
|
||||
/* 01 project includes */
|
||||
/* 02 externs */
|
||||
/* 03 defines */
|
||||
/* 04 typedefs */
|
||||
/* 05 globals (but don't)*/
|
||||
/* 06 ancillary function prototypes if any */
|
||||
|
||||
int main(int argc, char *argv[])
|
||||
{
|
||||
/* 07 variable declarations */
|
||||
/* 08 check argv[0] to see how the program was invoked */
|
||||
/* 09 process the command line options from the user */
|
||||
/* 10 do the needful */
|
||||
}
|
||||
|
||||
/* 11 ancillary functions if any */
|
||||
```
|
||||
|
||||
### Including project header files
|
||||
|
||||
The second section, **/* 01 project includes */**, reads like this from the source:
|
||||
|
||||
|
||||
```
|
||||
/* main.c - MeowMeow, a stream encoder/decoder */
|
||||
...
|
||||
/* 01 project includes */
|
||||
#include "main.h"
|
||||
#include "mmencode.h"
|
||||
#include "mmdecode.h"
|
||||
```
|
||||
|
||||
The **#include** directive is a C preprocessor command that causes the contents of the named file to be "included" at this point in the file. If the programmer uses double-quotes around the name of the header file, the compiler will look for that file in the current directory. If the file is enclosed in <>, it will look for the file in a set of predefined directories.
|
||||
|
||||
The file [**main.h**][6] contains the definitions and typedefs used in [**main.c**][7]. I like to collect these things here in case I want to use those definitions elsewhere in my program.
|
||||
|
||||
The files [**mmencode.h**][8] and [**mmdecode.h**][9] are nearly identical, so I'll break down **mmencode.h**.
|
||||
|
||||
|
||||
```
|
||||
/* mmencode.h - MeowMeow, a stream encoder/decoder */
|
||||
|
||||
#ifndef _MMENCODE_H
|
||||
#define _MMENCODE_H
|
||||
|
||||
#include <stdio.h>
|
||||
|
||||
int mm_encode(FILE *src, FILE *dst);
|
||||
|
||||
#endif /* _MMENCODE_H */
|
||||
```
|
||||
|
||||
The **#ifndef, #define, #endif** construction is collectively known as a "guard." This keeps the C compiler from including this file more than once per file. The compiler will complain if it finds multiple definitions/prototypes/declarations, so the guard is a _must-have_ for header files.
|
||||
|
||||
Inside the guard, there are only two things: an **#include** directive and a function prototype declaration. I include **stdio.h** here to bring in the definition of **FILE** that is used in the function prototype. The function prototype can be included by other C files to establish that function in the file's namespace. You can think of each file as a separate _namespace_, which means variables and functions in one file are not usable by functions or variables in another file.
|
||||
|
||||
Writing header files is complex, and it is tough to manage in larger projects. Use guards.
|
||||
|
||||
### MeowMeow encoding, finally
|
||||
|
||||
The meat and potatoes of this program—encoding and decoding bytes into/out of **MeowMeow** strings—is actually the easy part of this project. All of our activities until now have been putting the scaffolding in place to support calling this function: parsing the command line, determining which operation to use, and opening the files that we'll operate on. Here is the encoding loop:
|
||||
|
||||
|
||||
```
|
||||
/* mmencode.c - MeowMeow, a stream encoder/decoder */
|
||||
...
|
||||
while (![feof][10](src)) {
|
||||
|
||||
if (![fgets][11](buf, sizeof(buf), src))
|
||||
break;
|
||||
|
||||
for(i=0; i<[strlen][12](buf); i++) {
|
||||
lo = (buf[i] & 0x000f);
|
||||
hi = (buf[i] & 0x00f0) >> 4;
|
||||
[fputs][13](tbl[hi], dst);
|
||||
[fputs][13](tbl[lo], dst);
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
In plain English, this loop reads in a chunk of the file while there are chunks left to read (**feof(3)** and **fgets(3)**). Then it splits each byte in the chunk into **hi** and **lo** nibbles. Remember, a nibble is half of a byte, or 4 bits. The real magic here is realizing that 4 bits can encode 16 values. I use **hi** and **lo** as indices into a 16-string lookup table, **tbl**, that contains the **MeowMeow** strings that encode each nibble. Those strings are written to the destination **FILE** stream using **fputs(3)**, then we move on to the next byte in the buffer.
|
||||
|
||||
The table is initialized with a macro defined in [**table.h**][14] for no particular reason except to demonstrate including another project-local header file, and I like initialization macros. We will go further into why in a future article.
|
||||
|
||||
### MeowMeow decoding
|
||||
|
||||
Alright, I'll admit it took me a couple of runs at this before I got it working. The decode loop is similar: read a buffer full of **MeowMeow** strings and reverse the encoding from strings to bytes.
|
||||
|
||||
|
||||
```
|
||||
/* mmdecode.c - MeowMeow, a stream encoder/decoder */
|
||||
...
|
||||
int mm_decode(FILE *src, FILE *dst)
|
||||
{
|
||||
if (!src || !dst) {
|
||||
errno = EINVAL;
|
||||
return -1;
|
||||
}
|
||||
return stupid_decode(src, dst);
|
||||
}
|
||||
```
|
||||
|
||||
Not what you were expecting?
|
||||
|
||||
Here, I'm exposing the function **stupid_decode()** via the externally visible **mm_decode()** function. When I say "externally," I mean outside this file. Since **stupid_decode()** isn't in the header file, it isn't available to be called in other files.
|
||||
|
||||
Sometimes we do this when we want to publish a solid public interface, but we aren't quite done noodling around with functions to solve a problem. In my case, I've written an I/O-intensive function that reads 8 bytes at a time from the source stream to decode 1 byte to write to the destination stream. A better implementation would work on a buffer bigger than 8 bytes at a time. A _much_ better implementation would also buffer the output bytes to reduce the number of single-byte writes to the destination stream.
|
||||
|
||||
|
||||
```
|
||||
/* mmdecode.c - MeowMeow, a stream encoder/decoder */
|
||||
...
|
||||
int stupid_decode(FILE *src, FILE *dst)
|
||||
{
|
||||
char buf[9];
|
||||
decoded_byte_t byte;
|
||||
int i;
|
||||
|
||||
while (![feof][10](src)) {
|
||||
if (![fgets][11](buf, sizeof(buf), src))
|
||||
break;
|
||||
byte.field.f0 = [isupper][15](buf[0]);
|
||||
byte.field.f1 = [isupper][15](buf[1]);
|
||||
byte.field.f2 = [isupper][15](buf[2]);
|
||||
byte.field.f3 = [isupper][15](buf[3]);
|
||||
byte.field.f4 = [isupper][15](buf[4]);
|
||||
byte.field.f5 = [isupper][15](buf[5]);
|
||||
byte.field.f6 = [isupper][15](buf[6]);
|
||||
byte.field.f7 = [isupper][15](buf[7]);
|
||||
|
||||
[fputc][16](byte.value, dst);
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
```
|
||||
|
||||
Instead of using the bit-shifting technique I used in the encoder, I elected to create a custom data structure called **decoded_byte_t**.
|
||||
|
||||
|
||||
```
|
||||
/* mmdecode.c - MeowMeow, a stream encoder/decoder */
|
||||
...
|
||||
|
||||
typedef struct {
|
||||
unsigned char f7:1;
|
||||
unsigned char f6:1;
|
||||
unsigned char f5:1;
|
||||
unsigned char f4:1;
|
||||
unsigned char f3:1;
|
||||
unsigned char f2:1;
|
||||
unsigned char f1:1;
|
||||
unsigned char f0:1;
|
||||
} fields_t;
|
||||
|
||||
typedef union {
|
||||
fields_t field;
|
||||
unsigned char value;
|
||||
} decoded_byte_t;
|
||||
```
|
||||
|
||||
It's a little complex when viewed all at once, but hang tight. The **decoded_byte_t** is defined as a **union** of a **fields_t** and an **unsigned char**. The named members of a union can be thought of as aliases for the same region of memory. In this case, **value** and **field** refer to the same 8-bit region of memory. Setting **field.f0** to 1 would also set the least significant bit in **value**.
|
||||
|
||||
While **unsigned char** shouldn't be a mystery, the **typedef** for **fields_t** might look a little unfamiliar. Modern C compilers allow programmers to specify "bit fields" in a **struct**. The field type needs to be an unsigned integral type, and the member identifier is followed by a colon and an integer that specifies the length of the bit field.
|
||||
|
||||
This data structure makes it simple to access each bit in the byte by field name and then access the assembled value via the **value** field of the union. We depend on the compiler to generate the correct bit-shifting instructions to access the fields, which can save you a lot of heartburn when you are debugging.
|
||||
|
||||
Lastly, **stupid_decode()** is _stupid_ because it only reads 8 bytes at a time from the source **FILE** stream. Usually, we try to minimize the number of reads and writes to improve performance and reduce our cost of system calls. Remember that reading or writing a bigger chunk less often is much better than reading/writing a lot of smaller chunks more frequently.
|
||||
|
||||
### The wrap-up
|
||||
|
||||
Writing a multi-file program in C requires a little more planning on behalf of the programmer than just a single **main.c**. But just a little effort up front can save a lot of time and headache when you refactor as you add functionality.
|
||||
|
||||
To recap, I like to have a lot of files with a few short functions in them. I like to expose a small subset of the functions in those files via header files. I like to keep my constants in header files, both numeric and string constants. I _love_ Makefiles and use them instead of Bash scripts to automate all sorts of things. I like my **main()** function to handle command-line argument parsing and act as a scaffold for the primary functionality of the program.
|
||||
|
||||
I know I've only touched the surface of what's going on in this simple program, and I'm excited to learn what things were helpful to you and which topics need better explanations. Share your thoughts in the comments to let me know.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/7/structure-multi-file-c-part-2
|
||||
|
||||
作者:[Erik O'Shaughnessy][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/jnyjny
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/file_system.jpg?itok=pzCrX1Kc (4 manilla folders, yellow, green, purple, blue)
|
||||
[2]: https://opensource.com/article/19/7/how-structure-multi-file-c-program-part-1
|
||||
[3]: https://github.com/jnyjny/MeowMeow.git
|
||||
[4]: https://en.wikipedia.org/wiki/Codec
|
||||
[5]: https://opensource.com/article/19/5/how-write-good-c-main-function
|
||||
[6]: https://github.com/JnyJny/meowmeow/blob/master/main.h
|
||||
[7]: https://github.com/JnyJny/meowmeow/blob/master/main.c
|
||||
[8]: https://github.com/JnyJny/meowmeow/blob/master/mmencode.h
|
||||
[9]: https://github.com/JnyJny/meowmeow/blob/master/mmdecode.h
|
||||
[10]: http://www.opengroup.org/onlinepubs/009695399/functions/feof.html
|
||||
[11]: http://www.opengroup.org/onlinepubs/009695399/functions/fgets.html
|
||||
[12]: http://www.opengroup.org/onlinepubs/009695399/functions/strlen.html
|
||||
[13]: http://www.opengroup.org/onlinepubs/009695399/functions/fputs.html
|
||||
[14]: https://github.com/JnyJny/meowmeow/blob/master/table.h
|
||||
[15]: http://www.opengroup.org/onlinepubs/009695399/functions/isupper.html
|
||||
[16]: http://www.opengroup.org/onlinepubs/009695399/functions/fputc.html
|
133
sources/tech/20190801 5 Free Partition Managers for Linux.md
Normal file
@ -0,0 +1,133 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (5 Free Partition Managers for Linux)
|
||||
[#]: via: (https://itsfoss.com/partition-managers-linux/)
|
||||
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
|
||||
|
||||
5 Free Partition Managers for Linux
|
||||
======
|
||||
|
||||
_**Here’s our recommended list of partitioning tools for Linux distributions. These tools let you delete, add, tweak or resize the disk partitioning on your Linux system.**_
|
||||
|
||||
Usually, you decide the disk partitions while installing the OS. But what if you need to modify the partitions sometime after the installation? You can’t go back to the setup screen, so that is where partition managers (or, more accurately, disk partition managers) come in handy.
|
||||
|
||||
In most cases, you do not need to install a partition manager separately because one comes pre-installed. It is also worth noting that you can opt for either a command-line-based partition manager or one with a GUI.
|
||||
|
||||
Attention!
|
||||
|
||||
Playing with disk partitioning is a risky task. Don’t do it unless it’s absolutely necessary.
|
||||
If you are using a command line based partitioning tool, you need to learn the commands to get the job done. Or else, you might just end up wiping the entire disk.
|
||||
|
||||
### 5 Tools To Manage Disk Partitions in Linux
|
||||
|
||||
![][1]
|
||||
|
||||
The list below is in no particular order of ranking. Most of these partitioning tools should be available in your Linux distribution’s repository.
|
||||
|
||||
#### GParted
|
||||
|
||||
![GParted][2]
|
||||
|
||||
This could perhaps be the most popular GUI-based partition manager available for Linux distributions. You might have it pre-installed in some distributions. If you don’t, simply search for it in the software center to get it installed.
|
||||
|
||||
It directly prompts you to authenticate as the root user when you launch it. So, you don’t have to utilize the terminal here – at all. After authentication, it analyzes the devices and then lets you tweak the disk partitions. You will also find an option to “Attempt Data Rescue” in case of data loss or accidental deletion of files.
|
||||
|
||||
[GParted][3]
|
||||
|
||||
#### GNOME Disks
|
||||
|
||||
![Gnome Disks][4]
|
||||
|
||||
A GUI-based partition manager that comes baked in with Ubuntu or any Ubuntu-based distros like Zorin OS.
|
||||
|
||||
|
||||
|
||||
It lets you delete, add, resize and tweak the partition. It even helps you in [formatting the USB in Ubuntu][6] if there is any problem.
|
||||
|
||||
You can even attempt to repair a partition with the help of this tool. The options also include editing filesystem, creating a partition image, restoring the image, and benchmarking the partition.
|
||||
|
||||
[GNOME Disks][7]
|
||||
|
||||
#### KDE Partition Manager
|
||||
|
||||
![Kde Partition Manager][8]
|
||||
|
||||
KDE Partition Manager should probably come pre-installed on KDE-based Linux distros. But, if it isn’t there – you can search for it on the software center to easily get it installed.
|
||||
|
||||
If it wasn’t pre-installed, you might get a notice that you do not have administrative privileges when you try to launch it. Without admin privileges, you cannot do anything. In that case, type in the following command to get started:
|
||||
|
||||
```
|
||||
sudo partitionmanager
|
||||
```
|
||||
|
||||
It will scan your devices and then you will be able to create, move, copy, delete, and resize partitions. You can also import/export partition tables along with a lot of other options available to tweak.
|
||||
|
||||
[KDE Partition Manager][9]
|
||||
|
||||
#### Fdisk [Command Line]
|
||||
|
||||
![Fdisk][10]
|
||||
|
||||
[Fdisk][11] is a command-line utility that comes baked in with every Unix-like OS. Fret not: even though it requires you to launch a terminal and enter commands, it isn’t very difficult. However, if you are confused while using a text-based utility, you should stick to the GUI applications mentioned above. They all do the same thing.
|
||||
|
||||
To launch fdisk, you will have to be the root user and specify the device to manage partitions. Here’s an example for the command to start with:
|
||||
|
||||
```
|
||||
sudo fdisk /dev/sdc
|
||||
```
|
||||
|
||||
You can refer to [The Linux Documentation Project’s wiki page][12] for the list of commands and more details on how it works.
|
||||
|
||||
#### GNU Parted [Command Line]
|
||||
|
||||
![Gnu Parted][13]
|
||||
|
||||
Yet another command-line utility that you can find pre-installed on your Linux distro. You just need to enter the following command to get started:
|
||||
|
||||
```
|
||||
sudo parted
|
||||
```
|
||||
|
||||
**Wrapping Up**
|
||||
|
||||
|
||||
|
||||
I should not forget to mention [QtParted][15] as one of the alternatives for this list of partition managers. However, it has not been maintained for years now, so I do not recommend using it.
|
||||
|
||||
What do you think about the partition managers mentioned here? Did I miss any of your favorites? Let me know and I’ll update this list of partition managers for Linux with your suggestions.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/partition-managers-linux/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/ankush/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/07/disk-partitioning-tools-linux.jpg?resize=800%2C450&ssl=1
|
||||
[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/07/g-parted.png?ssl=1
|
||||
[3]: https://gparted.org/
|
||||
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/07/gnome-disks.png?ssl=1
|
||||
[5]: https://itsfoss.com/best-linux-graphic-design-software/
|
||||
[6]: https://itsfoss.com/format-usb-drive-sd-card-ubuntu/
|
||||
[7]: https://wiki.gnome.org/Apps/Disks
|
||||
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/07/kde-partition-manager.jpg?resize=800%2C404&ssl=1
|
||||
[9]: https://kde.org/applications/system/org.kde.partitionmanager
|
||||
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/07/fdisk.jpg?fit=800%2C496&ssl=1
|
||||
[11]: https://en.wikipedia.org/wiki/Fdisk
|
||||
[12]: https://www.tldp.org/HOWTO/Partition/fdisk_partitioning.html
|
||||
[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/07/gnu-parted.png?fit=800%2C559&ssl=1
|
||||
[14]: https://itsfoss.com/best-linux-games-steam/
|
||||
[15]: http://qtparted.sourceforge.net/
|
@ -0,0 +1,123 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Bash Script to Send a Mail When a New User Account is Created in System)
|
||||
[#]: via: (https://www.2daygeek.com/linux-bash-script-to-monitor-user-creation-send-email/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
Bash Script to Send a Mail When a New User Account is Created in System
|
||||
======
|
||||
|
||||
There are many open source monitoring tools currently available on the market to monitor Linux system performance.
|
||||
|
||||
They send an email alert when the system reaches a specified threshold limit.
|
||||
|
||||
They monitor everything, such as CPU utilization, memory utilization, swap utilization, disk space utilization and much more.
|
||||
|
||||
But I don’t think they have an option to monitor new user creation activity and alert when it happens.
|
||||
|
||||
If not, it doesn’t really matter, as we can write our own bash script to achieve this.
|
||||
|
||||
We have added many useful shell scripts in the past. If you want to check those out, navigate to the link below.
|
||||
|
||||
* **[How to automate day to day activities using shell scripts?][1]**
|
||||
|
||||
|
||||
|
||||
What does the script do? It monitors the **`/var/log/secure`** file and alerts the admin when a new account is created in the system.
|
||||
|
||||
We don’t need to run this script frequently since user creation doesn’t happen very often. However, I’m planning to run this script once a day.
|
||||
|
||||
That way, we can get a consolidated report about user creation.
|
||||
|
||||
If the `useradd` string is found in the `/var/log/secure` file for yesterday’s date, then the script will send an email alert to the given email ID with the new user’s details.
|
||||
|
||||
**Note:** You need to change the email ID to yours instead of ours.
|
||||
|
||||
```
|
||||
# vi /opt/scripts/new-user.sh
|
||||
|
||||
#!/bin/bash
|
||||
|
||||
#Set the variable which equal to zero
|
||||
prev_count=0
|
||||
|
||||
count=$(grep -i "`date --date='yesterday' '+%b %e'`" /var/log/secure | egrep -wi 'useradd' | wc -l)
|
||||
|
||||
if [ "$prev_count" -lt "$count" ] ; then
|
||||
|
||||
# Send a mail to given email id when errors found in log
|
||||
|
||||
SUBJECT="ATTENTION: New User Account is created on server : `date --date='yesterday' '+%b %e'`"
|
||||
|
||||
# This is a temp file, which is created to store the email message.
|
||||
|
||||
MESSAGE="/tmp/new-user-logs.txt"
|
||||
|
||||
TO="[email protected]"
|
||||
|
||||
echo "Hostname: `hostname`" >> $MESSAGE
|
||||
|
||||
echo -e "\n" >> $MESSAGE
|
||||
|
||||
echo "The New User Details are below." >> $MESSAGE
|
||||
|
||||
echo "+------------------------------+" >> $MESSAGE
|
||||
|
||||
grep -i "`date --date='yesterday' '+%b %e'`" /var/log/secure | egrep -wi 'useradd' | grep -v 'failed adding'| awk '{print $4,$8}' | uniq | sed 's/,/ /' >> $MESSAGE
|
||||
|
||||
echo "+------------------------------+" >> $MESSAGE
|
||||
|
||||
mail -s "$SUBJECT" "$TO" < $MESSAGE
|
||||
|
||||
rm $MESSAGE
|
||||
|
||||
fi
|
||||
```
|
||||
|
||||
Set executable permission on the **`new-user.sh`** file.
|
||||
|
||||
```
|
||||
$ chmod +x /opt/scripts/new-user.sh
|
||||
```
|
||||
|
||||
Finally, add a cronjob to automate this. It will run every day at 7 o'clock.
|
||||
|
||||
```
|
||||
# crontab -e
|
||||
|
||||
0 7 * * * /bin/bash /opt/scripts/new-user.sh
|
||||
```
|
||||
|
||||
Note: You will get an email alert every day at 7 o'clock, covering yesterday's log.
|
||||
|
||||
**Output:** You will get an email alert similar to the one below.
|
||||
|
||||
```
|
||||
# cat /tmp/logs.txt
|
||||
|
||||
Hostname: 2g.server10.com
|
||||
|
||||
The New User Details are below.
|
||||
+------------------------------+
|
||||
2g.server10.com name=magesh
|
||||
2g.server10.com name=daygeek
|
||||
+------------------------------+
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/linux-bash-script-to-monitor-user-creation-send-email/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.2daygeek.com/category/shell-script/
|
@ -0,0 +1,126 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Manage your passwords with Bitwarden and Podman)
|
||||
[#]: via: (https://fedoramagazine.org/manage-your-passwords-with-bitwarden-and-podman/)
|
||||
[#]: author: (Eric Gustavsson https://fedoramagazine.org/author/egustavs/)
|
||||
|
||||
使用 Bitwarden 和 Podman 管理你的密码
|
||||
======
|
||||
|
||||
![][1]
|
||||
|
||||
在过去的一年中,你可能会遇到一些试图向你推销密码管理器的广告。比如 [LastPass][2]、[1Password][3] 或 [Dashlane][4]。密码管理器消除了记住所有网站密码的负担。你不再需要使用重复或容易记住的密码。相反,你只需要记住一个可以解锁所有其他密码的密码。
|
||||
|
||||
通过使用一个强密码而不是许多弱密码,这可以使你更安全。如果你有基于云的密码管理器(例如 LastPass、1Password 或 Dashlane),你还可以跨设备同步密码。不幸的是,这些产品都不是开源的。幸运的是,还有其他开源替代品。
|
||||
|
||||
### 开源密码管理器
|
||||
|
||||
替代方案包括 Bitwarden、[LessPass][5] 或 [KeePass][6]。Bitwarden 是一款[开源密码管理器][7],它会将所有密码加密存储在服务器上,它的工作方式与 LastPass、1Password 或 Dashlane 相同。LessPass 有点不同,因为它专注于成为无状态密码管理器。这意味着它根据主密码、网站和用户名生成密码,而不是保存加密的密码。另一方面,KeePass 是一个基于文件的密码管理器,它的插件和应用具有很大的灵活性。
|
||||
|
||||
这三个应用中的每一个都有其自身的缺点。Bitwarden 将所有东西保存在一个地方,并通过其 API 和网站接口暴露给网络。LessPass 无法保存自定义密码,因为它是无状态的,因此你需要使用它生成的密码。KeePass 是一个基于文件的密码管理器,因此无法在设备之间轻松同步。你可以使用云存储和 [WebDAV][8] 来解决此问题,但是有许多客户端不支持它,如果设备无法正确同步,你可能会遇到文件冲突。
|
||||
|
||||
本文重点介绍 Bitwarden。
|
||||
|
||||
### 运行非官方的 Bitwarden 实现
|
||||
|
||||
有一个名为 [bitwarden_rs][9] 的服务器及其 API 的社区实现。这个实现是完全开源的,因为它可以使用 SQLite 或 MariaDB/MySQL,而不是官方服务器使用的专有 Microsoft SQL Server。
|
||||
|
||||
有一点重要的是要认识到官方和非官方版本之间存在一些差异。例如,[官方服务器已经由第三方审核][10],而非官方服务器还没有。在实现方面,非官方版本缺少[电子邮件确认和采用 Duo 或邮件码的双因素身份验证][11]。
|
||||
|
||||
让我们在 SELinux 中运行服务器。根据 bitwarden_rs 的文档,你可以如下构建一个 Podman 命令:
|
||||
|
||||
```
|
||||
$ podman run -d \
|
||||
--userns=keep-id \
|
||||
--name bitwarden \
|
||||
-e SIGNUPS_ALLOWED=false \
|
||||
-e ROCKET_PORT=8080 \
|
||||
-v /home/egustavs/Bitwarden/bw-data/:/data/:Z \
|
||||
-p 8080:8080 \
|
||||
bitwardenrs/server:latest
|
||||
```
|
||||
|
||||
这将下载 bitwarden_rs 镜像并在用户命名空间下的用户容器中运行它。它使用 1024 以上的端口,以便非 root 用户可以绑定它。它还使用 _:Z_ 更改卷的 SELinux 上下文,以防止在 _/data_ 中的读写权限问题。
|
||||
|
||||
如果你在某个域下托管它,建议将此服务器放在 Apache 或 Nginx 的反向代理下。这样,你可以使用 80 和 443 端口指向容器的 8080 端口,而无需以 root 身份运行容器。
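下面是一个极简的 Nginx 反向代理配置草稿,仅作示意:其中的域名 `bw.example.com` 和证书路径都是假设的占位符,请按你的实际环境替换:

```nginx
server {
    listen 443 ssl;
    server_name bw.example.com;   # 假设的域名,请替换为你自己的

    # 假设的 LetsEncrypt 证书路径,请替换
    ssl_certificate     /etc/letsencrypt/live/bw.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/bw.example.com/privkey.pem;

    location / {
        # 转发到容器映射出来的 8080 端口
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

这样 Nginx 以 root 身份监听 80/443 端口,而容器本身仍以普通用户运行。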
|
||||
|
||||
### 在 systemd 下运行
|
||||
|
||||
Bitwarden 现在运行了,你可能希望保持这种状态。接下来,创建一个使容器保持运行的单元文件,如果它没有响应则自动重新启动,并在系统重启后开始运行。创建文件 _/etc/systemd/system/bitwarden.service_:
|
||||
|
||||
```
|
||||
[Unit]
|
||||
Description=Bitwarden Podman container
|
||||
Wants=syslog.service
|
||||
|
||||
[Service]
|
||||
User=egustavs
|
||||
Group=egustavs
|
||||
TimeoutStartSec=0
|
||||
ExecStart=/usr/bin/podman run 'bitwarden'
|
||||
ExecStop=-/usr/bin/podman stop -t 10 'bitwarden'
|
||||
Restart=always
|
||||
RestartSec=30s
|
||||
KillMode=none
|
||||
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
```
|
||||
|
||||
现在使用 _[sudo][12]_ 启用并启动:
|
||||
|
||||
```
|
||||
$ sudo systemctl enable bitwarden.service && sudo systemctl start bitwarden.service
|
||||
$ systemctl status bitwarden.service
|
||||
bitwarden.service - Bitwarden Podman container
|
||||
Loaded: loaded (/etc/systemd/system/bitwarden.service; enabled; vendor preset: disabled)
|
||||
Active: active (running) since Tue 2019-07-09 20:23:16 UTC; 1 day 14h ago
|
||||
Main PID: 14861 (podman)
|
||||
Tasks: 44 (limit: 4696)
|
||||
Memory: 463.4M
|
||||
```
|
||||
|
||||
成功了!Bitwarden 现在运行了并将继续运行。
|
||||
|
||||
### 添加 LetsEncrypt
|
||||
|
||||
如果你有域名,强烈建议你使用类似 LetsEncrypt 的加密证书运行你的 Bitwarden 实例。Certbot 是一个为我们创建 LetsEncrypt 证书的机器人,这里有个[在 Fedora 中操作的指南][13]。
|
||||
|
||||
生成证书后,你可以按照 [bitwarden_rs 指南中关于 HTTPS 的部分][14]进行操作。只要记得在挂载 LetsEncrypt 卷时附加 _:Z_ 来处理权限,而不要更改端口。
|
||||
|
||||
* * *
|
||||
|
||||
* 照片由 _[_CMDR Shane_][15]_ 拍摄,发表在 [_Unsplash_][16] 上。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/manage-your-passwords-with-bitwarden-and-podman/
|
||||
|
||||
作者:[Eric Gustavsson][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/egustavs/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://fedoramagazine.org/wp-content/uploads/2019/07/bitwarden-816x345.jpg
|
||||
[2]: https://www.lastpass.com
|
||||
[3]: https://1password.com/
|
||||
[4]: https://www.dashlane.com/
|
||||
[5]: https://lesspass.com/
|
||||
[6]: https://keepass.info/
|
||||
[7]: https://bitwarden.com/
|
||||
[8]: https://en.wikipedia.org/wiki/WebDAV
|
||||
[9]: https://github.com/dani-garcia/bitwarden_rs/
|
||||
[10]: https://blog.bitwarden.com/bitwarden-completes-third-party-security-audit-c1cc81b6d33
|
||||
[11]: https://github.com/dani-garcia/bitwarden_rs/wiki#missing-features
|
||||
[12]: https://fedoramagazine.org/howto-use-sudo/
|
||||
[13]: https://certbot.eff.org/instructions
|
||||
[14]: https://github.com/dani-garcia/bitwarden_rs/wiki/Enabling-HTTPS
|
||||
[15]: https://unsplash.com/@cmdrshane?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
|
||||
[16]: https://unsplash.com/search/photos/password?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
|
@ -0,0 +1,285 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (robsean)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Top 8 Things to do after Installing Debian 10 (Buster))
|
||||
[#]: via: (https://www.linuxtechi.com/things-to-do-after-installing-debian-10/)
|
||||
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
|
||||
|
||||
Debian 10(Buster)安装后要做的前 8 件事
|
||||
======
|
||||
|
||||
Debian 10 的代号是 Buster,它是 Debian 家族最新的 LTS 发布版本,包含了大量的特性。因此,如果你已经在你的电脑上安装了 Debian 10,并在思考接下来该做什么,那么请继续阅读这篇文章直到结尾,因为我们为你提供了在安装 Debian 10 后要做的前 8 件事。对于还没有安装 Debian 10 的人,请阅读这篇指南 [**图解 Debian 10(Buster)安装步骤**][1]。让我们开始吧:
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/07/Things-to-do-after-installing-debian10.jpg>
|
||||
|
||||
### 1) 安装和配置 sudo
|
||||
|
||||
在设置完成 Debian 10 后,你需要做的第一件事是安装 sudo 软件包,因为它能够使你获得管理员权限来安装你需要的软件包。为安装和配置 sudo ,请使用下面的命令:
|
||||
|
||||
变成 root 用户,然后使用下面的命令安装 sudo 软件包,
|
||||
|
||||
```
|
||||
root@linuxtechi:~$ su -
|
||||
Password:
|
||||
root@linuxtechi:~# apt install sudo -y
|
||||
```
|
||||
|
||||
添加你的本地用户到 sudo 组,使用下面的 [usermod][2] 命令,
|
||||
|
||||
```
|
||||
root@linuxtechi:~# usermod -aG sudo pkumar
|
||||
root@linuxtechi:~#
|
||||
```
|
||||
|
||||
现在验证是否本地用户获得 sudo 权限,
|
||||
|
||||
```
|
||||
root@linuxtechi:~$ id
|
||||
uid=1000(pkumar) gid=1000(pkumar) groups=1000(pkumar),27(sudo)
|
||||
root@linuxtechi:~$ sudo vi /etc/hosts
|
||||
[sudo] password for pkumar:
|
||||
root@linuxtechi:~$
|
||||
```
|
||||
|
||||
### 2) 校正日期和时间
|
||||
|
||||
在你成功配置 sudo 软件包后,接下来,你需要根据你的位置来校正日期和时间。为了校正日期和时间,
|
||||
|
||||
转到系统 **设置** –> **详细说明** –> **日期和时间** ,然后更改为适合你的位置的时区。
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/07/Adjust-date-time-Debian10.jpg>
|
||||
|
||||
一旦时区被更改,你可以看到时钟中的时间自动更改
|
||||
|
||||
### 3) 应用所有更新
|
||||
|
||||
在 Debian 10 安装后,建议安装所有 Debian 10 软件包存储库中可用的更新,执行下面的 apt 命令,
|
||||
|
||||
```
|
||||
root@linuxtechi:~$ sudo apt update
|
||||
root@linuxtechi:~$ sudo apt upgrade -y
|
||||
```
|
||||
|
||||
**注意:** 如果你是 vi 编辑器的一个大粉丝,那么使用下面的 apt 命令安装 vim ,
|
||||
|
||||
```
|
||||
root@linuxtechi:~$ sudo apt install vim -y
|
||||
```
|
||||
|
||||
### 4) 安装 Flash Player 插件
|
||||
|
||||
默认情况下,Debian 10(Buster)存储库不包含 Flash 插件,因此,用户需要按照下面的步骤在系统中安装 flash player:
|
||||
|
||||
为 flash player 配置存储库:
|
||||
|
||||
```
|
||||
root@linuxtechi:~$ echo "deb http://ftp.de.debian.org/debian buster main contrib" | sudo tee -a /etc/apt/sources.list
|
||||
deb http://ftp.de.debian.org/debian buster main contrib
|
||||
root@linuxtechi:~
|
||||
```
|
||||
|
||||
现在使用下面的命令更新软件包索引,
|
||||
|
||||
```
|
||||
root@linuxtechi:~$ sudo apt update
|
||||
```
|
||||
|
||||
使用下面的 apt 命令安装 flash 插件
|
||||
|
||||
```
root@linuxtechi:~$ sudo apt install pepperflashplugin-nonfree -y
```

Once the package is installed successfully, try playing a video on YouTube:

<https://www.linuxtechi.com/wp-content/uploads/2019/07/Flash-Player-plugin-Debian10.jpg>

### 5) Install software such as VLC, Skype, FileZilla, and a screenshot tool

Now that we have enabled Flash Player, it is time to install all the other software we need on our Debian 10 system, such as VLC, Skype, FileZilla, and a screenshot tool like Flameshot.

**Install the VLC media player**

To install the VLC player on your system, use the apt command below:

||||
```
|
||||
root@linuxtechi:~$ sudo apt install vlc -y
|
||||
```

After VLC is installed successfully, try playing your favorite videos:

<https://www.linuxtechi.com/wp-content/uploads/2019/07/Debian10-VLC.jpg>

**Install Skype**

First, download the latest Skype package:

```
root@linuxtechi:~$ wget https://go.skype.com/skypeforlinux-64.deb
```

Next, install the package with the apt command:

```
root@linuxtechi:~$ sudo apt install ./skypeforlinux-64.deb
```

After Skype is installed successfully, try launching it and entering your username and password:

<https://www.linuxtechi.com/wp-content/uploads/2019/07/skype-Debian10.jpg>

**Install FileZilla**

To install FileZilla on your system, use the apt command below:

```
root@linuxtechi:~$ sudo apt install filezilla -y
```

Once the FileZilla package is installed successfully, try launching it:

<https://www.linuxtechi.com/wp-content/uploads/2019/07/FileZilla-Debian10.jpg>

**Install a screenshot tool (Flameshot)**

Use the command below to install the screenshot tool Flameshot:

```
root@linuxtechi:~$ sudo apt install flameshot -y
```

**Note:** The Shutter tool has been removed from Debian 10.

<https://www.linuxtechi.com/wp-content/uploads/2019/07/flameshoot-debian10.jpg>

### 6) Enable and start the firewall

It is always recommended to enable a firewall to keep your network secure. If you want to enable a firewall on Debian 10, **UFW** (Uncomplicated Firewall) is the best tool for the job. Since UFW is available in the Debian repositories, it is very easy to install, as shown below:

```
root@linuxtechi:~$ sudo apt install ufw
```

After installing UFW, the next step is to set up the firewall policy: deny all incoming traffic by default and allow only the ports you need, such as ssh, http, and https.

```
root@linuxtechi:~$ sudo ufw default deny incoming
Default incoming policy changed to 'deny'
(be sure to update your rules accordingly)
root@linuxtechi:~$ sudo ufw default allow outgoing
Default outgoing policy changed to 'allow'
(be sure to update your rules accordingly)
root@linuxtechi:~$
```

Allow the SSH port:

```
root@linuxtechi:~$ sudo ufw allow ssh
Rules updated
Rules updated (v6)
root@linuxtechi:~$
```

If you have a web server installed on your system, allow its ports through the firewall with the ufw commands below:

```
root@linuxtechi:~$ sudo ufw allow 80
Rules updated
Rules updated (v6)
root@linuxtechi:~$ sudo ufw allow 443
Rules updated
Rules updated (v6)
root@linuxtechi:~$
```

Finally, enable UFW with the command below:

```
root@linuxtechi:~$ sudo ufw enable
Command may disrupt existing ssh connections. Proceed with operation (y|n)? y
Firewall is active and enabled on system startup
root@linuxtechi:~$
```

If you want to check the status of your firewall, run the command below:

```
root@linuxtechi:~$ sudo ufw status
```
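
The rules above can be collected into one small reviewable plan before any of them run with root privileges. This is a sketch, not an official UFW feature: it only prints the commands from this section, and you can pipe the output to `sh` once you are happy with it.

```shell
#!/bin/sh
# Print the UFW commands from this section as a reviewable plan.
# Nothing here touches the firewall by itself.
for rule in "default deny incoming" "default allow outgoing" \
            "allow ssh" "allow 80" "allow 443" "enable"; do
  echo "sudo ufw $rule"
done
```

Review the output before running it; as the prompt above warns, the final `sudo ufw enable` may disrupt existing SSH connections.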

### 7) Install virtualization software (VirtualBox)

The first step in installing VirtualBox is to import the public keys of the Oracle VirtualBox repository into your Debian 10 system:

```
root@linuxtechi:~$ wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add -
OK
root@linuxtechi:~$ wget -q https://www.virtualbox.org/download/oracle_vbox.asc -O- | sudo apt-key add -
OK
root@linuxtechi:~$
```

If the import is successful, you will see an "OK" message.

Next, you need to add the repository to the source list:

```
root@linuxtechi:~$ sudo add-apt-repository "deb http://download.virtualbox.org/virtualbox/debian buster contrib"
root@linuxtechi:~$
```

Finally, it is time to install VirtualBox 6.0 on your system:

```
root@linuxtechi:~$ sudo apt update
root@linuxtechi:~$ sudo apt install virtualbox-6.0 -y
```

Once the VirtualBox package is installed successfully, try launching it and start creating virtual machines:

<https://www.linuxtechi.com/wp-content/uploads/2019/07/VirtualBox6-Debian10-Workstation.jpg>

### 8) Install the latest AMD drivers

Finally, you can also install additional drivers you may need, such as the AMD graphics, proprietary ATI, and Nvidia graphics drivers. To install the latest AMD drivers, we first have to edit the **/etc/apt/sources.list** file and add the word **non-free** to the lines containing **main** and **contrib**, as shown in the example below:

```
root@linuxtechi:~$ sudo vi /etc/apt/sources.list
…………………
deb http://deb.debian.org/debian/ buster main non-free contrib
deb-src http://deb.debian.org/debian/ buster main non-free contrib

deb http://security.debian.org/debian-security buster/updates main contrib non-free
deb-src http://security.debian.org/debian-security buster/updates main contrib non-free

deb http://ftp.us.debian.org/debian/ buster-updates main contrib non-free
……………………
```
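
Instead of editing the file by hand in vi, the same change can be scripted with `sed`. The sketch below rehearses the substitution on a throwaway file first, so nothing under `/etc/apt` is touched until you run the commented commands yourself; note the pattern only rewrites lines that end in ` main`, leaving lines that already list `contrib` or `non-free` alone.

```shell
# Rehearse the edit on a throwaway copy.
printf 'deb http://deb.debian.org/debian/ buster main\n' > /tmp/sources.demo
sed -i 's/ main$/ main contrib non-free/' /tmp/sources.demo
cat /tmp/sources.demo

# To apply for real (back up the live file first):
#   sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
#   sudo sed -i 's/ main$/ main contrib non-free/' /etc/apt/sources.list
```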

Now, use the apt commands below to install the latest AMD drivers on your Debian 10 system:

```
root@linuxtechi:~$ sudo apt update
root@linuxtechi:~$ sudo apt install firmware-linux firmware-linux-nonfree libdrm-amdgpu1 xserver-xorg-video-amdgpu -y
```

That's all for this article. I hope it has given you an idea of what to do after installing Debian 10. Please share your feedback and comments in the comments section below.

--------------------------------------------------------------------------------

via: https://www.linuxtechi.com/things-to-do-after-installing-debian-10/

Author: [Pradeep Kumar][a]
Topic selection: [lujun9972][b]
Translator: [robsean](https://github.com/robsean)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[1]: https://www.linuxtechi.com/debian-10-buster-installation-guide/
[2]: https://www.linuxtechi.com/linux-commands-to-manage-local-accounts/