Merge remote-tracking branch 'LCTT/master'

Xingyu.Wang 2018-05-26 06:58:44 +08:00
commit bfb22bf177
17 changed files with 1305 additions and 574 deletions

View File

@ -3,7 +3,7 @@
这是我们的 LAMP 系列教程的开始:如何在 Ubuntu 上安装 Apache web 服务器。
这些说明适用于任何基于 Ubuntu 的发行版,包括 Ubuntu 14.04, Ubuntu 16.04, [Ubuntu 18.04][1],甚至非 LTS 的 Ubuntu 发行版,例如 Ubuntu 17.10。这些说明经过测试并为 Ubuntu 16.04 编写。
这些说明适用于任何基于 Ubuntu 的发行版,包括 Ubuntu 14.04、 Ubuntu 16.04、 [Ubuntu 18.04][1],甚至非 LTS 的 Ubuntu 发行版,例如 Ubuntu 17.10。这些说明经过测试并为 Ubuntu 16.04 编写。
Apache (又名 httpd) 是最受欢迎和使用最广泛的 web 服务器,所以这应该对每个人都有用。
@ -11,9 +11,9 @@ Apache (又名 httpd) 是最受欢迎和使用最广泛的 web 服务器,所
在我们开始之前,这里有一些要求和说明:
* Apache 可能已经在你的服务器上安装了,所以开始之前首先检查一下。你可以使用 "apachectl -V" 命令来显示你正在使用的 Apache 的版本和一些其他信息。
* 你需要一个 Ubuntu 服务器。你可以从 [Vultr][2] 购买一个,它们是[最便宜的云托管服务商][3]之一。它们的服务器价格每月 2.5 美元起。
* 你需要有 root 用户或具有 sudo 访问权限的用户。下面的所有命令都由 root 用户自行,所以我们不必为每个命令都添加 'sudo'
* Apache 可能已经在你的服务器上安装了,所以开始之前首先检查一下。你可以使用 `apachectl -V` 命令来显示你正在使用的 Apache 的版本和一些其他信息。
* 你需要一个 Ubuntu 服务器。你可以从 [Vultr][2] 购买一个,它们是[最便宜的云托管服务商][3]之一。它们的服务器价格每月 2.5 美元起。LCTT 译注:广告 ≤_≤
* 你需要有 root 用户或具有 sudo 访问权限的用户。下面的所有命令都由 root 用户执行,所以我们不必为每个命令都添加 `sudo`
* 如果你使用 Ubuntu,则需要[启用 SSH][4],如果你使用 Windows,则应该使用类似 [MobaXterm][5] 的 SSH 客户端。
这就是全部要求和注释了,让我们进入安装过程。
@ -21,16 +21,19 @@ Apache (又名 httpd) 是最受欢迎和使用最广泛的 web 服务器,所
### 在 Ubuntu 上安装 Apache
你需要做的第一件事就是更新 Ubuntu,这是在你做任何事情之前都应该做的。你可以运行:
```
apt-get update && apt-get upgrade
```
接下来,安装 Apache,运行以下命令:
```
apt-get install apache2
```
如果你愿意,你也可以安装 Apache 文档和一些 Apache 实用程序。对于我们稍后将要安装的一些模块,你将需要一些 Apache 实用程序。
```
apt-get install apache2-doc apache2-utils
```
@ -46,6 +49,7 @@ apt-get install apache2-doc apache2-utils
#### 检查 Apache 是否正在运行
默认情况下,Apache 设置为在机器启动时自动启动,因此你不必手动启用它。你可以使用以下命令检查它是否正在运行以及其他相关信息:
```
systemctl status apache2
```
@ -53,6 +57,7 @@ systemctl status apache2
[![check if apache is running][6]][6]
并且你可以检查你正在使用的版本:
```
apachectl -V
```
@ -64,6 +69,7 @@ apachectl -V
如果你使用防火墙(你应该使用它),则可能需要更新防火墙规则,允许访问默认端口。Ubuntu 上最常用的防火墙是 UFW,因此以下说明适用于 UFW。
要允许通过 80(http)和 443(https)端口的流量,运行以下命令:
```
ufw allow 'Apache Full'
```
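如果想确认规则是否已经生效,可以顺便查看一下 UFW 的状态(示例命令,输出会因环境而异):

```
ufw status verbose
```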
@ -76,18 +82,21 @@ ufw allow 'Apache Full'
PageSpeed 模块将自动优化并加速你的 Apache 服务器。
首先,进入 [PageSpeed 下载页][7]并选择你需要的的文件。我们使用的是 64 位 Ubuntu 服务器,所以我们安装最新的稳定版本。使用 wget 下载它:
首先,进入 [PageSpeed 下载页][7]并选择你需要的的文件。我们使用的是 64 位 Ubuntu 服务器,所以我们安装最新的稳定版本。使用 `wget` 下载它:
```
wget https://dl-ssl.google.com/dl/linux/direct/mod-pagespeed-stable_current_amd64.deb
```
然后,使用以下命令安装它:
```
dpkg -i mod-pagespeed-stable_current_amd64.deb
apt-get -f install
```
重启 Apache 以使更改生效:
```
systemctl restart apache2
```
@ -95,6 +104,7 @@ systemctl restart apache2
##### 使用 mod_rewrite 模块启动重写/重定向
顾名思义,该模块用于重写(重定向)。如果你使用 WordPress 或任何其他 CMS 来处理此问题,你就需要它。要安装它,只需运行:
```
a2enmod rewrite
```
@ -104,11 +114,13 @@ a2enmod rewrite
##### 使用 ModSecurity 模块保护你的 Apache
顾名思义,ModSecurity 是一个用于安全性的模块,它基本上起着防火墙的作用,它可以监控你的流量。要安装它,运行以下命令:
```
apt-get install libapache2-modsecurity
```
再次重启 Apache
```
systemctl restart apache2
```
@ -118,40 +130,46 @@ ModSecurity 自带了一个默认的设置,但如果你想扩展它,你可
##### 使用 mod_evasive 模块抵御 DDoS 攻击
尽管 mod_evasive 在防止攻击方面有多大用处值得商榷,但是你可以使用它来阻止和防止服务器上的 DDoS 攻击。要安装它,使用以下命令:
```
apt-get install libapache2-mod-evasive
```
默认情况下,mod_evasive 是禁用的,要启用它,编辑以下文件:
```
nano /etc/apache2/mods-enabled/evasive.conf
```
取消注释所有行(即删除 #),根据你的要求进行配置。如果你不知道要编辑什么,你可以保持原样。
取消注释所有行(即删除 `#`),根据你的要求进行配置。如果你不知道要编辑什么,你可以保持原样。
[![mod_evasive][9]][9]
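作为参考,一份常见的 `evasive.conf` 配置大致如下(指令名来自 mod_evasive 自带的配置模板,数值仅作演示,请按你的实际需求调整;日志目录与下文要创建的目录保持一致):

```
<IfModule mod_evasive20.c>
    DOSHashTableSize    3097
    DOSPageCount        2
    DOSSiteCount        50
    DOSPageInterval     1
    DOSSiteInterval     1
    DOSBlockingPeriod   10
    DOSLogDir           "/var/log/mod_evasive"
</IfModule>
```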
创建一个日志文件:
```
mkdir /var/log/mod_evasive
chown -R www-data:www-data /var/log/mod_evasive
```
就是这样。现在重启 Apache 以使更改生效。
```
systemctl restart apache2
```
你可以安装和配置[附加模块][10]但完全取决于你和你使用的软件。它们通常不是必需的。甚至我们上面包含的4个模块也不是必需的。如果特定应用需要模块那么它们可能会注意到这一点。
你可以安装和配置[附加模块][10],但完全取决于你和你使用的软件。它们通常不是必需的。甚至我们上面包含的 4 个模块也不是必需的。如果特定应用需要模块,那么它们可能会注意到这一点。
#### 用 Apache2Buddy 脚本优化 Apache
Apache2Buddy 是一个可以自动调整 Apache 配置的脚本。你唯一需要做的就是运行下面的命令,脚本会自动完成剩下的工作:
```
curl -sL https://raw.githubusercontent.com/richardforth/apache2buddy/master/apache2buddy.pl | perl
```
如果你没有安装 curl那么你可能需要安装它。使用以下命令来安装 curl
如果你没有安装 `curl`,那么你可能需要安装它。使用以下命令来安装 `curl`
```
apt-get install curl
```
@ -165,15 +183,17 @@ apt-get install curl
现在我们已经完成了所有的调优工作,让我们开始创建一个实际的网站。按照我们的指示创建一个简单的 HTML 页面和一个在 Apache 上运行的虚拟主机。
你需要做的第一件事是为你的网站创建一个新的目录。运行以下命令来执行此操作:
```
mkdir -p /var/www/example.com/public_html
```
当然,将 example.com 替换为你所需的域名。你可以从 [Namecheap][11] 获得一个便宜的域名。
当然,将 `example.com` 替换为你所需的域名。你可以从 [Namecheap][11] 获得一个便宜的域名。
不要忘记在下面的所有命令中替换 example.com。
不要忘记在下面的所有命令中替换 `example.com`
接下来,创建一个简单的静态网页。创建 HTML 文件:
```
nano /var/www/example.com/public_html/index.html
```
@ -193,17 +213,20 @@ nano /var/www/example.com/public_html/index.html
保存并关闭文件。
配置目录的权限:
```
chown -R www-data:www-data /var/www/example.com
chmod -R og-r /var/www/example.com
```
为你的网站创建一个新的虚拟主机:
```
nano /etc/apache2/sites-available/example.com.conf
```
粘贴以下内容:
```
<VirtualHost *:80>
     ServerAdmin admin@example.com
@ -217,32 +240,31 @@ nano /etc/apache2/sites-available/example.com.conf
</VirtualHost>
```
这是一个基础的虚拟主机。根据你的设置,你可能需要更高级的 .conf 文件。
这是一个基础的虚拟主机。根据你的设置,你可能需要更高级的 `.conf` 文件。
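作为参考,一个最精简的 HTTP 虚拟主机大致如下(这只是一个示意,并非原文配置的完整内容,目录和域名请替换为你自己的):

```
<VirtualHost *:80>
    ServerAdmin admin@example.com
    ServerName example.com
    ServerAlias www.example.com
    DocumentRoot /var/www/example.com/public_html
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
```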
在更新所有内容后保存并关闭文件。
现在,使用以下命令启用虚拟主机:
```
a2ensite example.com.conf
```
最后,重启 Apache 以使更改生效:
```
systemctl restart apache2
```
这就是全部了,你做完了。现在你可以访问 example.com 并查看你的页面。
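如果域名解析还没有生效,也可以先在服务器上用 `curl` 简单验证一下虚拟主机是否正常工作(示例命令,通过 Host 头指定站点):

```
curl -I -H "Host: example.com" http://localhost
```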
--------------------------------------------------------------------------------
via: https://thishosting.rocks/how-to-install-optimize-apache-ubuntu/
作者:[ThisHosting][a]
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,11 +1,11 @@
You-Get - 支持 80+ 网站的命令行多媒体下载器
You-Get:支持 80 多个网站的命令行多媒体下载器
======
![](https://www.ostechnix.com/wp-content/uploads/2018/05/you-get-1-720x340.jpg)
你们大多数人可能用过或听说过 **Youtube-dl**,这个命令行程序可以从包括 Youtube 在内的 100+ 网站下载视频。我偶然发现了一个类似的工具,名字叫做 **"You-Get"**。这是一个 Python 编写的命令行下载器,可以让你从 YoutubeFacebookTwitter 等很多热门网站下载图片,音频和视频。目前该下载器支持 80+ 站点,点击[**这里**][1]查看所有支持的网站。
你们大多数人可能用过或听说过 **Youtube-dl**,这个命令行程序可以从包括 Youtube 在内的 100+ 网站下载视频。我偶然发现了一个类似的工具,名字叫做 **You-Get**。这是一个 Python 编写的命令行下载器,可以让你从 Youtube、Facebook、Twitter 等很多热门网站下载图片,音频和视频(LCTT 译注:首先,它们得是存在的网站)。目前该下载器支持 80+ 站点,点击[这里][1]查看所有支持的网站。
You-Get 不仅仅是一个下载器,它还可以将在线视频导流至你的视频播放器。更进一步,它还允许你在 Google 上搜索视频只要给出搜索项You-Get 使用 Google 搜索并下载相关度最高的视频。另外值得一提的特性是,它允许你暂停和恢复下载过程。它是一个完全自由、开源及跨平台的应用,适用于 LinuxMacOS 及 Windows。
You-Get 不仅仅是一个下载器,它还可以将在线视频导流至你的视频播放器。更进一步,它还允许你在 Google 上搜索视频只要给出搜索项You-Get 使用 Google 搜索并下载相关度最高的视频。另外值得一提的特性是,它允许你暂停和恢复下载过程。它是一个完全自由、开源及跨平台的应用,适用于 LinuxMacOS 及 Windows。
### 安装 You-Get
@ -17,35 +17,36 @@ You-Get 不仅仅是一个下载器,它还可以将在线视频导流至你的
有多种方式安装 You-Get,其中官方推荐采用 pip 包管理器安装。如果你还没有安装 pip,可以参考如下链接:
[如何使用 pip 管理 Python 软件包][2]
- [如何使用 pip 管理 Python 软件包][2]
需要注意的是,你需要安装 Python 3 版本的 pip。
需要注意的是,你需要安装 Python 3 版本的 `pip`
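如果系统中还没有 pip3,在 Ubuntu/Debian 上一般可以用包管理器安装(示例命令,其他发行版请换用对应的包管理器):

```
$ sudo apt-get install python3-pip
$ pip3 --version
```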
接下来,运行如下命令安装 You-Get
```
$ pip3 install you-get
```
可以使用命令升级 You-Get 至最新版本:
```
$ pip3 install --upgrade you-get
```
### 开始使用 You-Get
使用方式与 Youtube-dl 工具基本一致。
**下载视频**
#### 下载视频
下载视频,只需运行:
```
$ you-get https://www.youtube.com/watch?v=HXaglTFJLMc
```
输出示例:
```
site: YouTube
title: The Last of The Mohicans by Alexandro Querevalú
@ -58,22 +59,22 @@ stream:
Downloading The Last of The Mohicans by Alexandro Querevalú.mp4 ...
100% ( 56.9/ 56.9MB) ├███████████████████████████████████████████████████████┤[1/1] 752 kB/s
```
下载视频前你可能希望查看视频的细节信息。You-Get 提供了 **info”** 或 **“-i”** 参数,使用该参数可以获得给定视频所有可用的分辨率和格式。
下载视频前你可能希望查看视频的细节信息。You-Get 提供了 `info``-i` 参数,使用该参数可以获得给定视频所有可用的分辨率和格式。
```
$ you-get -i https://www.youtube.com/watch?v=HXaglTFJLMc
```
或者
```
$ you-get --info https://www.youtube.com/watch?v=HXaglTFJLMc
```
输出示例如下:
```
site: YouTube
title: The Last of The Mohicans by Alexandro Querevalú
@ -121,18 +122,18 @@ streams: # Available quality and codecs
quality: hd720
size: 56.9 MiB (59654303 bytes)
# download-with: you-get --itag=22 [URL]
```
默认情况下You-Get 会下载标记为 **DEFAULT** 的格式。如果你对格式或分辨率不满意,可以选择你喜欢的格式,使用格式对应的 itag 值即可。
默认情况下You-Get 会下载标记为 “DEFAULT” 的格式。如果你对格式或分辨率不满意,可以选择你喜欢的格式,使用格式对应的 itag 值即可。
```
$ you-get --itag=244 https://www.youtube.com/watch?v=HXaglTFJLMc
```
**下载音频**
#### 下载音频
执行下面的命令,可以从 soundcloud 网站下载音频:
```
$ you-get 'https://soundcloud.com/uiceheidd/all-girls-are-same-999-prod-nick-mira'
Site: SoundCloud.com
@ -145,30 +146,30 @@ Downloading ALL GIRLS ARE THE SAME (PROD. NICK MIRA).mp3 ...
```
查看音频文件细节,使用 **-i** 参数:
查看音频文件细节,使用 `-i` 参数:
```
$ you-get -i 'https://soundcloud.com/uiceheidd/all-girls-are-same-999-prod-nick-mira'
```
**下载图片**
#### 下载图片
运行如下命令下载图片:
```
$ you-get https://pixabay.com/en/mountain-crumpled-cyanus-montanus-3393209/
```
You-Get 也可以下载网页中的全部图片:
You-Get can also download all images from a web page.
```
$ you-get https://www.ostechnix.com/pacvim-a-cli-game-to-learn-vim-commands/
```
**搜索视频**
#### 搜索视频
你只需向 You-Get 传递一个任意的搜索项,而无需给出有效的 URL,You-Get 会使用 Google 搜索并下载与你给出搜索项最相关的视频。(LCTT 译注:Google 的机器人检测机制可能导致 503 报错导致该功能无法使用)。
你只需向 You-Get 传递一个任意的搜索项,而无需给出有效的 URLYou-Get 会使用 Google 搜索并下载与你给出搜索项最相关的视频。(译者注Google 的机器人检测机制可能导致 503 报错导致该功能无法使用)。
```
$ you-get 'Micheal Jackson'
Google Videos search:
@ -184,53 +185,53 @@ stream:
Downloading Michael Jackson - Beat It (Official Video).webm ...
100% ( 29.4/ 29.4MB) ├███████████████████████████████████████████████████████┤[1/1] 2 MB/s
```
**观看视频**
#### 观看视频
You-Get 可以将在线视频导流至你的视频播放器或浏览器,跳过广告和评论部分。(LCTT 译注:使用 `-p` 参数需要对应的 vlc/chromium 命令可以调用,一般适用于具有图形化界面的操作系统)。
You-Get 可以将在线视频导流至你的视频播放器或浏览器,跳过广告和评论部分。(译者注:使用 -p 参数需要对应的 vlc/chrominum 命令可以调用,一般适用于具有图形化界面的操作系统)。
以 VLC 视频播放器为例,使用如下命令在其中观看视频:
```
$ you-get -p vlc https://www.youtube.com/watch?v=HXaglTFJLMc
```
或者
```
$ you-get --player vlc https://www.youtube.com/watch?v=HXaglTFJLMc
```
类似地,将视频导流至以 chromium 为例的浏览器中,使用如下命令:
```
$ you-get -p chromium https://www.youtube.com/watch?v=HXaglTFJLMc
```
![][3]
在上述屏幕截图中,可以看到并没有广告和评论部分,只是一个包含视频的简单页面。
**设置下载视频的路径及文件名**
#### 设置下载视频的路径及文件名
默认情况下,使用视频标题作为默认文件名,下载至当前工作目录。当然,你可以按照你的喜好进行更改,使用 `output-dir``-o` 参数可以指定路径,使用 `output-filename``-O` 参数可以指定下载文件的文件名。
默认情况下,使用视频标题作为默认文件名,下载至当前工作目录。当然,你可以按照你的喜好进行更改,使用 **output-dir/-o** 参数可以指定路径,使用 **output-filename/-O** 参数可以指定下载文件的文件名。
```
$ you-get -o ~/Videos -O output.mp4 https://www.youtube.com/watch?v=HXaglTFJLMc
```
**暂停和恢复下载**
#### 暂停和恢复下载
**CTRL+C** 可以取消下载。一个以 **.download** 为扩展名的临时文件会保存至输出路径。下次你使用相同的参数下载时,下载过程将延续上一次的过程。
`CTRL+C` 可以取消下载。一个以 `.download` 为扩展名的临时文件会保存至输出路径。下次你使用相同的参数下载时,下载过程将延续上一次的过程。
当文件下载完成后,以 .download 为扩展名的临时文件会自动消失。如果这时你使用同样参数下载You-Get 会跳过下载;如果你想强制重新下载,可以使用 **force/-f** 参数。
当文件下载完成后,以 `.download` 为扩展名的临时文件会自动消失。如果这时你使用同样参数下载You-Get 会跳过下载;如果你想强制重新下载,可以使用 `force``-f` 参数。
查看命令的帮助部分可以获取更多细节,命令如下:
```
$ you-get --help
```
这次的分享到此结束,后续还会介绍更多的优秀工具,敬请期待!
@ -246,7 +247,7 @@ via: https://www.ostechnix.com/you-get-a-cli-downloader-to-download-media-from-8
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[pinewall](https://github.com/pinewall)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,132 +0,0 @@
translating by MZqk
Whats next in IT automation: 6 trends to watch
======
![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/cio_ai_artificial_intelligence.png?itok=o0csm9l2)
Weve recently covered the [factors fueling IT automation][1], the [current trends][2] to watch as adoption grows, and [helpful tips][3] for those organizations just beginning to automate certain processes.
Oh, and we also shared expert advice on [how to make the case for automation][4] in your company, as well as [keys for long-term success][5].
Now, theres just one question: Whats next? We asked a range of experts to share a peek into the not-so-distant future of [automation][6]. Here are six trends they advise IT leaders to monitor closely.
### 1. Machine learning matures
For all of the buzz around [machine learning][7] (and the overlapping phrase “self-learning systems”), its still very early days for most organizations in terms of actual implementations. Expect that to change, and for machine learning to play a significant role in the next waves of IT automation.
Mehul Amin, director of engineering for [Advanced Systems Concepts, Inc.][8], points to machine learning as one of the next key growth areas for IT automation.
“With the data that is developed, automation software can make decisions that otherwise might be the responsibility of the developer,” Amin says. “For example, the developer builds what needs to be executed, but identifying the best system to execute the processes might be [done] by software using analytics from within the system.”
That extends elsewhere in this same hypothetical system; Amin notes that machine learning can enable automated systems to provision additional resources when necessary to meet timelines or SLAs, as well as retire those resources when theyre no longer needed, and other possibilities.
Amin is certainly not alone.
“IT automation is moving towards self-learning,” says Kiran Chitturi, CTO architect at [Sungard Availability Services][9]. “Systems will be able to test and monitor themselves, enhancing business processes and software delivery.”
Chitturi points to automated testing as an example; test scripts are already in widespread adoption, but soon those automated testing processes may be more likely to learn as they go, developing, for example, wider recognition of how new code or code changes will impact production environments.
### 2. Artificial intelligence spawns automation opportunities
The same principles above hold true for the related (but separate) field of [artificial intelligence][10]. Depending on your definition of AI, it seems likely that machine learning will have the more significant IT impact in the near term (and were likely to see a lot of overlapping definitions and understandings of the two fields). Assume that emerging AI technologies will spawn new automation opportunities, too.
“The integration of artificial intelligence (AI) and machine learning capabilities is widely perceived as critical for business success in the coming years,” says Patrick Hubbard, head geek at [SolarWinds][11].
### 3. That doesnt mean people are obsolete
Lets try to calm those among us who are now hyperventilating into a paper bag: The first two trends dont necessarily mean were all going to be out of a job.
It is likely to mean changes to various roles and the creation of [new roles][12] altogether.
But in the foreseeable future, at least, you dont need to practice bowing to your robot overlords.
“A machine can only consider the environment variables that it is given; it can't choose to include new variables, only a human can do this today,” Hubbard explains. “However, for IT professionals this will necessitate the cultivation of AI- and automation-era skills such as programming, coding, a basic understanding of the algorithms that govern AI and machine learning functionality, and a strong security posture in the face of more sophisticated cyberattacks.”
Hubbard shares the example of new tools or capabilities such as AI-enabled security software or machine-learning applications that remotely spot maintenance needs in an oil pipeline. Both might improve efficiency and effectiveness; neither automatically replaces the people necessary for information security or pipeline maintenance.
“Many new functionalities still require human oversight,” Hubbard says. “In order for a machine to determine if something predictive could become prescriptive, for example, human management is needed.”
The same principle holds true even if you set machine learning and AI aside for a moment and look at IT automation more generally, especially in the software development lifecycle.
Matthew Oswalt, lead architect for automation at [Juniper Networks][13], points out that the fundamental reason IT automation is growing is that it is creating immediate value by reducing the amount of manual effort required to operate infrastructure.
Rather than responding to an infrastructure issue at 3 a.m. themselves, operations engineers can use event-driven automation to define their workflows ahead of time, as code.
“It also sets the stage for treating their operations workflows as code rather than easily outdated documentation or tribal knowledge,” Oswalt explains. “Operations staff are still required to play an active role in how [automation] tooling responds to events. The next phase of adopting automation is to put in place a system that is able to recognize interesting events that take place across the IT spectrum and respond in an autonomous fashion. Rather than responding to an infrastructure issue at 3 a.m. themselves, operations engineers can use event-driven automation to define their workflows ahead of time, as code. They can rely on this system to respond in the same way they would, at any time.”
### 4. Automation anxiety will decrease
Hubbard of SolarWinds notes that the term “automation” itself tends to spawn a lot of uncertainty and concern, not just in IT but across professional disciplines, and he says that concern is legitimate. But some of the attendant fears may be overblown, and even perpetuated by the tech industry itself. Reality might actually be the calming force on this front: When the actual implementation and practice of automation helps people realize #3 on this list, then well see #4 occur.
“This year well likely see a decrease in automation anxiety and more organizations begin to embrace AI and machine learning as a way to augment their existing human resources,” Hubbard says. “Automation has historically created room for more jobs by lowering the cost and time required to accomplish smaller tasks and refocusing the workforce on things that cannot be automated and require human labor. The same will be true of AI and machine learning.”
Automation will also decrease some anxiety around the topic most likely to increase an IT leaders blood pressure: Security. As Matt Smith, chief architect, [Red Hat][14], recently [noted][15], automation will increasingly help IT groups reduce the security risks associated with maintenance tasks.
His advice: “Start by documenting and automating the interactions between IT assets during maintenance activities. By relying on automation, not only will you eliminate tasks that historically required much manual effort and surgical skill, you will also be reducing the risks of human error and demonstrating whats possible when your IT organization embraces change and new methods of work. Ultimately, this will reduce resistance to promptly applying security patches. And it could also help keep your business out of the headlines during the next major security event.”
**[ Read the full article: [12 bad enterprise security habits to break][16]. ] **
### 5. Continued evolution of scripting and automation tools
Many organizations see the first steps toward increasing automation, usually in the form of scripting or automation tools (sometimes referred to as configuration management tools), as "early days" work.
But views of those tools are evolving as the use of various automation technologies grows.
“There are many processes in the data center environment that are repetitive and subject to human error, and technologies such as [Ansible][17] help to ameliorate those issues,” says Mark Abolafia, chief operating officer at [DataVision][18]. “With Ansible, one can write a specific playbook for a set of actions and input different variables such as addresses, etc., to automate long chains of process that were previously subject to human touch and longer lead times.”
**[ Want to learn more about this aspect of Ansible? Read the related article:[Tips for success when getting started with Ansible][19]. ]**
Another factor: The tools themselves will continue to become more advanced.
“With advanced IT automation tools, developers will be able to build and automate workflows in less time, reducing error-prone coding,” says Amin of ASCI. “These tools include pre-built, pre-tested drag-and-drop integrations, API jobs, the rich use of variables, reference functionality, and object revision history.”
### 6. Automation opens new metrics opportunities
As weve said previously in this space, automation isnt IT snake oil. It wont fix busted processes or otherwise serve as some catch-all elixir for what ails your organization. Thats true on an ongoing basis, too: Automation doesnt eliminate the need to measure performance.
**[ See our related article[DevOps metrics: Are you measuring what matters?][20] ]**
In fact, automation should open up new opportunities here.
“As more and more development activities (source control, DevOps pipelines, work item tracking) move to the API-driven platforms, the opportunity and temptation to stitch these pieces of raw data together to paint the picture of your organization's efficiency increases,” says Josh Collins, VP of architecture at [Janeiro Digital][21].
Collins thinks of this as a possible new “development organization metrics-in-a-box.” But dont mistake that to mean machines and algorithms can suddenly measure everything IT does.
“Whether measuring individual resources or the team in aggregate, these metrics can be powerful but should be balanced with a heavy dose of context,” Collins says. “Use this data for high-level trends and to affirm qualitative observations, not to clinically grade your team.”
**Want more wisdom like this, IT leaders?[Sign up for our weekly email newsletter][22].**
--------------------------------------------------------------------------------
via: https://enterprisersproject.com/article/2018/3/what-s-next-it-automation-6-trends-watch
作者:[Kevin Casey][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://enterprisersproject.com/user/kevin-casey
[1]:https://enterprisersproject.com/article/2017/12/5-factors-fueling-automation-it-now
[2]:https://enterprisersproject.com/article/2017/12/4-trends-watch-it-automation-expands
[3]:https://enterprisersproject.com/article/2018/1/getting-started-automation-6-tips
[4]:https://enterprisersproject.com/article/2018/1/how-make-case-it-automation
[5]:https://enterprisersproject.com/article/2018/1/it-automation-best-practices-7-keys-long-term-success
[6]:https://enterprisersproject.com/tags/automation
[7]:https://enterprisersproject.com/article/2018/2/how-spot-machine-learning-opportunity
[8]:https://www.advsyscon.com/en-us/
[9]:https://www.sungardas.com/en/
[10]:https://enterprisersproject.com/tags/artificial-intelligence
[11]:https://www.solarwinds.com/
[12]:https://enterprisersproject.com/article/2017/12/8-emerging-ai-jobs-it-pros
[13]:https://www.juniper.net/
[14]:https://www.redhat.com/en?intcmp=701f2000000tjyaAAA
[15]:https://enterprisersproject.com/article/2018/2/12-bad-enterprise-security-habits-break
[16]:https://enterprisersproject.com/article/2018/2/12-bad-enterprise-security-habits-break?sc_cid=70160000000h0aXAAQ
[17]:https://opensource.com/tags/ansible
[18]:https://datavision.com/
[19]:https://opensource.com/article/18/2/tips-success-when-getting-started-ansible?intcmp=701f2000000tjyaAAA
[20]:https://enterprisersproject.com/article/2017/7/devops-metrics-are-you-measuring-what-matters?sc_cid=70160000000h0aXAAQ
[21]:https://www.janeirodigital.com/
[22]:https://enterprisersproject.com/email-newsletter?intcmp=701f2000000tsjPAAQ

View File

@ -0,0 +1,258 @@
Writing eBPF tracing tools in Rust
============================================================
tl;dr: I made an experimental Rust repository that lets you write BPF tracing tools from Rust! Its at [https://github.com/jvns/rust-bcc][4] or [https://crates.io/crates/bcc][5], and has a couple of hopefully easy to understand examples. It turns out that writing BPF-based tracing tools in Rust is really easy (in some ways easier than doing the same things in Python). In this post Ill explain why I think this is useful/important.
For a long time Ive been interested in the [BPF compiler collection][6], a C -> BPF compiler, C library, and Python bindings to make it easy to write tools like:
* [opensnoop][1] (spies on which files are being opened)
* [tcplife][2] (track length of TCP connections)
* [cpudist][3] (count how much time every program spends on- and off-CPU)
and a lot more. The list of available tools in [the /tools directory][7] is really impressive and I could write a whole blog post about that. If youre familiar with dtrace the idea is that BCC is a little bit like dtrace, and in fact theres a dtrace-like language [named ply][8] implemented with BPF.
This blog post isn't about `ply` or the great BCC tools, though; it's about what tools we need to build more complicated/powerful BPF-based programs.
### What does the BPF compiler collection let you do?
Heres a quick overview of what BCC lets you do:
* compile BPF programs from C into eBPF bytecode.
* attach this eBPF bytecode to a userspace function or kernel function (as a “uprobe” / “kprobe”) or install it as XDP
* communicate with the eBPF bytecode to get information with it
A basic example of using BCC is this [strlen_count.py][9] program and I think its useful to look at this program to understand how BCC works and how you might be able to implement more advanced tools.
First, there's an eBPF program. This program is going to be attached to the `strlen` function from libc (the C standard library): every time we call `strlen`, this code will be run.
This eBPF program
* gets the first argument to the `strlen` function (the address of a string)
* reads the first 80 characters of that string (using `bpf_probe_read`)
* increments a counter in a hashmap (basically `counts[str] += 1`)
The result is that you can count every call to `strlen`. Heres the eBPF program:
```
struct key_t {
char c[80];
};
BPF_HASH(counts, struct key_t);
int count(struct pt_regs *ctx) {
if (!PT_REGS_PARM1(ctx))
return 0;
struct key_t key = {};
u64 zero = 0, *val;
bpf_probe_read(&key.c, sizeof(key.c), (void *)PT_REGS_PARM1(ctx));
val = counts.lookup_or_init(&key, &zero);
(*val)++;
return 0;
};
```
After that program is compiled, theres a Python part which does `b.attach_uprobe(name="c", sym="strlen", fn_name="count")`  it tells the Linux kernel to actually attach the compiled BPF to the `strlen` function so that it runs every time `strlen` runs.
The really exciting thing about eBPF is what comes next: there's no use keeping a hashmap of string counts if you can't access it! BPF has a number of data structures that let you share information between BPF programs (that run in the kernel / in uprobes) and userspace.
So in this case the Python program accesses this `counts` data structure.
### BPF data structures: hashmaps, buffers, and more!
Theres a great list of available BPF data structures in the [BCC reference guide][10].
There are basically 2 kinds of BPF data structures: data structures suitable for storing statistics (BPF_HASH, BPF_HISTOGRAM, etc.), and data structures suitable for storing events (like BPF_PERF_MAP), where you send a stream of events to a userspace program which then displays them somehow.
There are a lot of interesting BPF data structures (like a trie!) and I havent fully worked out what all of the possibilities are with them yet :)
### What Im interested in: BPF for profiling & tracing
Okay!! Were done with the background, lets talk about why Im interested in BCC/BPF right now.
Im interested in using BPF to implement profiling/tracing tools for dynamic programming languages, specifically tools to do things like “trace all memory allocations in this Ruby program”. I think its exciting that you can say “hey, run this tiny bit of code every time a Ruby object is allocated” and get data back about ongoing allocations!
### Rust: a way to build more powerful BPF-based tools
The issue I see with the Python BPF libraries (which are GREAT, of course) is that while they're perfect for building tools like `tcplife` which track TCP connection lengths, once you want to start doing more complicated experiments like “stream every memory allocation from this Ruby program, calculate some metadata about it, query the original process to find out the class name for that address, and display a useful summary”, Python doesn't really cut it.
So I decided to spend 4 days trying to build a BCC library for Rust that lets you attach + interact with BPF programs from Rust!
Basically I worked on porting [https://github.com/iovisor/gobpf][11] (a go BCC library) to Rust.
The easiest and most exciting way to explain this is to show an example of what using the library looks like.
### Rust example 1: strlen
Lets start with the strlen example from above. Heres [strlen.rs][12] from the examples!
Compiling & attaching the `strlen` code is easy:
```
let mut module = BPF::new(code)?;
let uprobe_code = module.load_uprobe("count")?;
module.attach_uprobe("/lib/x86_64-linux-gnu/libc.so.6", "strlen", uprobe_code, -1 /* all PIDs */)?;
let table = module.table("counts");
```
This table contains a hashmap mapping strings to counts. So we need to iterate over that table and print out the keys and values. This is pretty simple: it looks like this.
```
let iter = table.into_iter();
for e in iter {
// key and value are each a Vec<u8> so we need to transform them into a string and
// a u64 respectively
let key = get_string(&e.key);
let value = Cursor::new(e.value).read_u64::<NativeEndian>().unwrap();
println!("{:?} {:?}", key, value);
}
```
Basically all the data that comes out of a BPF program is an opaque `Vec<u8>` right now, so you need to figure out how to decode it yourself. Luckily, decoding binary data is something that Rust is quite good at: the `byteorder` crate lets you easily decode `u64`s, and translating a vector of bytes into a String is easy (I wrote a quick `get_string` helper function to do that).
I thought this was really nice because the code for this program in Rust is basically exactly the same as the corresponding Python version. So it's pretty approachable to start doing experiments and seeing what's possible.
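If you want to try this example yourself, building and running it with Cargo would look roughly like this (assuming the example is named `strlen` as in the repository, that the BCC dependencies are installed, and keeping in mind that attaching BPF programs generally needs root):

```
cargo build --example strlen
sudo ./target/debug/examples/strlen
```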
### Reading perf events from Rust
The next thing I wanted to do after getting this `strlen` example to work in rust was to handle events!!
Events are a little different / more complicated. The way you stream events in a BCC program is it uses `perf_event_open` to create a ring buffer where the events get stored.
Dealing with events from a perf ring buffer normally is a huge pain because perf has this complicated data structure. The C BCC library makes this easier for you by letting you specify a C callback that gets called on every new event, and it handles dealing with perf. This is super helpful. To make this work with Rust, the `rust-bcc` library lets you pass in a Rust closure to run on every event.
### Rust example 2: opensnoop.rs (events!!)
To make sure reading BPF events actually worked, I implemented a basic version of `opensnoop.py` from the iovisor bcc tools: [opensnoop.rs][13].
I wont walk through the [C code][14] in this case because theres a lot of it but basically the eBPF C part generates an event every time a file is opened on the system. I copied the C code verbatim from [opensnoop.py][15].
Heres the type of the event thats generated by the BPF code:
```
#[repr(C)]
struct data_t {
id: u64, // pid + thread id
ts: u64,
ret: libc::c_int,
comm: [u8; 16], // process name
fname: [u8; 255], // filename
}
```
The Rust part starts out by compiling BPF code & attaching kprobes (to the `open` system call in the kernel, `do_sys_open`). I won't paste that code here because it's basically the same as the `strlen` example. What happens next is the new part: we install a callback with a Rust closure on the `events` table, and then call `perf_map.poll(200)` in a loop. The design of the BCC library is a little confusing to me still, but you need to repeatedly poll the perf reader objects to make sure that the callbacks you installed actually get called.
```
let table = module.table("events");
let mut perf_map = init_perf_map(table, perf_data_callback)?;
loop {
perf_map.poll(200);
}
```
This is the callback code I wrote, that gets called every time. Again, it takes an opaque `Vec<u8>` event and translates it into a `data_t` struct to print it out. Doing this is kind of annoying (I actually called `libc::memcpy` which is Not Encouraged Rust Practice), I need to figure out a less gross/unsafe way to do that. The really nice thing is that if you put `#[repr(C)]` on your Rust structs it represents them in memory the exact same way C will represent that struct. So its quite easy to share data structures between Rust and C.
```
fn perf_data_callback() -> Box<Fn(Vec<u8>)> {
Box::new(|x| {
// This callback
let data = parse_struct(&x);
println!("{:-7} {:-16} {}", data.id >> 32, get_string(&data.comm), get_string(&data.fname));
})
}
```
You might notice that this is actually a weird function that returns a callback: this is because I needed to install 4 callbacks (1 per CPU), and in stable Rust you can't copy closures yet.
#### output
Heres what the output of that `opensnoop` program looks like!
This is kind of meta: these are the files that were being opened on my system when I saved this blog post :). You can see that git is looking at some files, vim is saving a file, and my static site generator Hugo is opening the changed file so that it can update the site. Neat!
```
PID COMMAND FILENAME
8519 git /home/bork/work/homepage/.gitmodules
8519 git /home/bork/.gitconfig
8519 git .git/config
22877 vim content/post/2018-02-05-writing-ebpf-programs-in-rust.markdown
22877 vim .
7312 hugo /home/bork/work/homepage/content/post/2018-02-05-writing-ebpf-programs-in-rust.markdown
7312 hugo /home/bork/work/homepage/content/post/2018-02-05-writing-ebpf-programs-in-rust.markdown
```
### using rust-bcc to implement Ruby experiments
Now that I have this basic library that I can use I can get counts + stream events in Rust, Im excited about doing some experiments with making BCC programs in Rust that talk to Ruby programs!
The first experiment (that I blogged about last week) is [count-ruby-allocs.rs][16], which prints out a live count of current allocation activity. Here's an example of what it prints out (the numbers are counts of the number of objects allocated of that type so far):
```
RuboCop::Token 53
RuboCop::Token 112
MatchData 246
Parser::Source::Rang 255
Proc 323
Enumerator 328
Hash 475
Range 1210
??? 1543
String 3410
Array 7879
Total allocations since we started counting: 16932
Allocations this second: 954
```
### Related work
Geoffrey Couprie is interested in building more advanced BPF tracing tools with Rust too and wrote a great blog post with a cool proof of concept: [Compiling to eBPF from Rust][17].
I think the idea of not requiring the user to compile the BPF program is exciting, because you could imagine distributing a statically linked Rust binary (which links in libcc.so) with a pre-compiled BPF program that the binary just installs and then uses to do cool stuff.
Also there's another Rust BCC library at [https://bitbucket.org/photoszzt/rust-bpf/][18] which has a slightly different set of capabilities than [jvns/rust-bcc][19] (going to spend some time looking at that one later, I just found out about it like 30 minutes ago :)).
### thats it for now
This crate is still extremely sketchy and there are bugs & missing features but I wanted to put it on the internet because I think the examples of what you can do with it are really exciting!!
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2018/02/05/rust-bcc/
作者:[Julia Evans ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://jvns.ca/about/
[1]:https://github.com/iovisor/bcc/blob/master/tools/opensnoop.py
[2]:https://github.com/iovisor/bcc/blob/master/tools/tcplife.py
[3]:https://github.com/iovisor/bcc/blob/master/tools/cpudist.py
[4]:https://github.com/jvns/rust-bcc
[5]:https://crates.io/crates/bcc
[6]:https://github.com/iovisor/bcc
[7]:https://github.com/iovisor/bcc/tree/master/tools
[8]:https://github.com/iovisor/ply
[9]:https://github.com/iovisor/bcc/blob/master/examples/tracing/strlen_count.py
[10]:https://github.com/iovisor/bcc/blob/master/docs/reference_guide.md#maps
[11]:https://github.com/iovisor/gobpf
[12]:https://github.com/jvns/rust-bcc/blob/f15d2983ddbe349aac3d2fcaeacf924a66db4be7/examples/strlen.rs
[13]:https://github.com/jvns/rust-bcc/blob/f15d2983ddbe349aac3d2fcaeacf924a66db4be7/examples/opensnoop.rs
[14]:https://github.com/jvns/rust-bcc/blob/f15d2983ddbe349aac3d2fcaeacf924a66db4be7/examples/opensnoop.c
[15]:https://github.com/iovisor/bcc/blob/master/tools/opensnoop.py
[16]:https://github.com/jvns/ruby-mem-watcher-demo/blob/dd189b178a2813e6445063f0f84063e6e978ee79/src/bin/count-ruby-allocs.rs
[17]:https://unhandledexpression.com/2018/02/02/poc-compiling-to-ebpf-from-rust/
[18]:https://bitbucket.org/photoszzt/rust-bpf/
[19]:https://github.com/jvns/rust-bcc

View File

@ -1,157 +0,0 @@
Translating by qhwdw
Top 9 open source ERP systems to consider | Opensource.com
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_orgchart1.png?itok=tukiFj89)
Businesses with more than a handful of employees have a lot to balance including pricing, product planning, accounting and finance, managing payroll, dealing with inventory, and more. Stitching together a set of disparate tools to handle those jobs is a quick, cheap, and dirty way to get things done.
That approach isn't scalable. It's difficult to efficiently move data between the various pieces of such an ad-hoc system. As well, it can be difficult to maintain.
Instead, most growing businesses turn to an [enterprise resource planning][1] (ERP) system.
The big guns in that space are Oracle, SAP, and Microsoft Dynamics. Their offerings are comprehensive, but also expensive. What happens if your business can't afford one of those big implementations or if your needs are simple? You turn to the open source alternatives.
### What to look for in an ERP system
Obviously, you want a system that suits your needs. Depending on those needs, more features doesn't always mean better. However, your needs might change as your business grows, so you'll want to find an ERP system that can expand to meet your new needs. That could mean the system has additional modules or just supports plugins and add-ons.
Most open source ERP systems are web applications. You can download and install them on your server. But if you don't want (or don't have the skills or staff) to maintain a system yourself, then make sure there's a hosted version of the application available.
Finally, you'll want to make sure the application has good documentation and good support—either in the form of paid support or an active user community.
There are a number of flexible, feature-rich, and cost-effective open source ERP systems out there. Here are nine to check out if you're in the market for such a system.
### ADempiere
Like most other open source ERP solutions, [ADempiere][2] is targeted at small and midsized businesses. It's been around awhile—the project was formed in 2006 as a fork from the Compiere ERP software.
Its Italian name means to achieve or satisfy, and its "multidimensional" ERP features aim to help businesses satisfy a wide range of needs. It adds supply chain management (SCM) and customer relationship management (CRM) features to its ERP suite to help manage sales, purchasing, inventory, and accounting processes in one piece of software. Its latest release, v.3.9.0, updated its user interface, point-of-sale, HR, payroll, and other features.
As a multiplatform, Java-based cloud solution, ADempiere is accessible on Linux, Unix, Windows, MacOS, smartphones, and tablets. It is licensed under [GPLv2][3]. If you'd like to learn more, take its [demo][4] for a test run or access its [source code][5] on GitHub.
### Apache OFBiz
[Apache OFBiz][6]'s suite of related business tools is built on a common architecture that enables organizations to customize the ERP to their needs. As a result, it's best suited for midsize or large enterprises that have the internal development resources to adapt and integrate it within their existing IT and business processes.
OFBiz is a mature open source ERP system; its website says it's been a top-level Apache project for a decade. [Modules][7] are available for accounting, manufacturing, HR, inventory management, catalog management, CRM, and e-commerce. You can also try out its e-commerce web store and backend ERP applications on its [demo page][8].
Apache OFBiz's source code can be found in the [project's repository][9]. It is written in Java and licensed under an [Apache 2.0 license][10].
### Dolibarr
[Dolibarr][11] offers end-to-end management for small and midsize businesses—from keeping track of invoices, contracts, inventory, orders, and payments to managing documents and supporting electronic point-of-sale system. It's all wrapped in a fairly clean interface.
If you're wondering what Dolibarr can't do, [here's some documentation about that][12].
In addition to an [online demo][13], Dolibarr also has an [add-ons store][14] where you can buy software that extends its features. You can check out its [source code][15] on GitHub; it's licensed under [GPLv3][16] or any later version.
### ERPNext
[ERPNext][17] is one of those classic open source projects; in fact, it was [featured on Opensource.com][18] way back in 2014. It was designed to scratch a particular itch, in this case replacing a creaky and expensive proprietary ERP implementation.
ERPNext was built for small and midsized businesses. It includes modules for accounting, managing inventory, sales, purchase, and project management. The applications that make up ERPNext are form-driven—you fill information in a set of fields and let the application do the rest. The whole suite is easy to use.
If you're interested, you can request a [demo][19] before taking the plunge and [downloading it][20] or [buying a subscription][21] to the hosted service.
### Metasfresh
[Metasfresh][22]'s name reflects its commitment to keeping its code "fresh." It's released weekly updates since late 2015, when its founders forked the code from the ADempiere project. Like ADempiere, it's an open source ERP based on Java targeted at the small and midsize business market.
While it's a younger project than most of the other software described here, it's attracted some early, positive attention, such as being named a finalist for the Initiative Mittelstand "best of open source" IT innovation award.
Metasfresh is free when self-hosted or for one user via the cloud, or on a monthly subscription fee basis as a cloud-hosted solution for 1-100 users. Its [source code][23] is available under the [GPLv2][24] license at GitHub and its cloud version is licensed under GPLv3.
### Odoo
[Odoo][25] is an integrated suite of applications that includes modules for project management, billing, accounting, inventory management, manufacturing, and purchasing. Those modules can communicate with each other to efficiently and seamlessly exchange information.
While ERP can be complex, Odoo makes it friendlier with a simple, almost spartan interface. The interface is reminiscent of Google Drive, with just the functions you need visible. You can [give Odoo a try][26] before you decide to sign up.
Odoo is a web-based tool. Subscriptions to individual modules will set you back $20 (USD) a month for each one. You can also [download it][27] or grab the [source code][28] from GitHub. It's licensed under [LGPLv3][29].
### Opentaps
[Opentaps][30], one of the few open source ERP solutions designed for larger businesses, packs a lot of power and flexibility. This is no surprise because it's built on top of Apache OFBiz.
You get the expected set of modules that help you manage inventory, manufacturing, financials, and purchasing. You also get an analytics feature that helps you analyze all aspects of your business. You can use that information to better plan into the future. Opentaps also packs a powerful reporting function.
On top of that, you can [buy add-ons and additional modules][31] to enhance Opentaps' capabilities. They include integration with Amazon Marketplace Services and FedEx. Before you [download Opentaps][32], give the [online demo][33] a try. It's licensed under [GPLv3][34].
### WebERP
[WebERP][35] is exactly as it sounds: An ERP system that operates through a web browser. The only other software you need is a PDF reader to view reports.
Specifically, its an accounting and business management solution geared toward wholesale, distribution, and manufacturing businesses. It also integrates with [third-party business software][36], including a point-of-sale system for multi-branch retail management, an e-commerce module, and wiki software for building a business knowledge base. It's written in PHP and aims to be a low-footprint, efficient, fast, and platform-independent system that's easy for general business users.
WebERP is actively being developed and has an active [forum][37], where you can ask questions or learn more about using the application. You can also try a [demo][38] or download the [source code][39] (licensed under [GPLv2][40]) on GitHub.
### xTuple PostBooks
If your manufacturing, distribution, or e-commerce business has outgrown its small business roots and is looking for an ERP to grow with you, you may want to check out [xTuple PostBooks][41]. It's a comprehensive solution built around its core ERP, accounting, and CRM features that adds inventory, distribution, purchasing, and vendor reporting capabilities.
xTuple is available under the Common Public Attribution License ([CPAL][42]), and the project welcomes developers to fork it to create other business software for inventory-based manufacturers. Its web app core is written in JavaScript, and its [source code][43] can be found on GitHub. To see if it's right for you, register for a free [demo][44] on xTuple's website.
There are many other open source ERP options you can choose from—others you might want to check out include [Tryton][45], which is written in Python and uses the PostgreSQL database engine, or the Java-based [Axelor][46], which touts users' ability to create or modify business apps with a drag-and-drop interface. And, if your favorite open source ERP solution isn't on the list, please share it with us in the comments. You might also check out our list of top [supply chain management tools][47].
This article is updated from a [previous version][48] authored by Opensource.com moderator [Scott Nesbitt][49].
--------------------------------------------------------------------------------
via: https://opensource.com/tools/enterprise-resource-planning
作者:[Opensource.com][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com
[1]:http://en.wikipedia.org/wiki/Enterprise_resource_planning
[2]:http://www.adempiere.net/welcome
[3]:http://wiki.adempiere.net/License
[4]:http://www.adempiere.net/web/guest/demo
[5]:https://github.com/adempiere/adempiere
[6]:http://ofbiz.apache.org/
[7]:https://ofbiz.apache.org/business-users.html#UsrModules
[8]:http://ofbiz.apache.org/ofbiz-demos.html
[9]:http://ofbiz.apache.org/source-repositories.html
[10]:http://www.apache.org/licenses/LICENSE-2.0
[11]:http://www.dolibarr.org/
[12]:http://wiki.dolibarr.org/index.php/What_Dolibarr_can%27t_do
[13]:http://www.dolibarr.org/onlinedemo
[14]:http://www.dolistore.com/
[15]:https://github.com/Dolibarr/dolibarr
[16]:https://github.com/Dolibarr/dolibarr/blob/develop/COPYING
[17]:https://erpnext.com/
[18]:https://opensource.com/business/14/11/building-open-source-erp
[19]:https://frappe.erpnext.com/request-a-demo
[20]:https://erpnext.com/download
[21]:https://erpnext.com/pricing
[22]:http://metasfresh.com/en/
[23]:https://github.com/metasfresh/metasfresh
[24]:https://github.com/metasfresh/metasfresh/blob/master/LICENSE.md
[25]:https://www.odoo.com/
[26]:https://www.odoo.com/page/start
[27]:https://www.odoo.com/page/download
[28]:https://github.com/odoo
[29]:https://github.com/odoo/odoo/blob/11.0/LICENSE
[30]:http://www.opentaps.org/
[31]:http://shop.opentaps.org/
[32]:http://www.opentaps.org/products/download
[33]:http://www.opentaps.org/products/online-demo
[34]:https://www.gnu.org/licenses/agpl-3.0.html
[35]:http://www.weberp.org/
[36]:http://www.weberp.org/Links.html
[37]:http://www.weberp.org/forum/
[38]:http://www.weberp.org/weberp/
[39]:https://github.com/webERP-team/webERP
[40]:https://github.com/webERP-team/webERP#legal
[41]:https://xtuple.com/
[42]:https://xtuple.com/products/license-options#cpal
[43]:http://xtuple.github.io/
[44]:https://xtuple.com/free-demo
[45]:http://www.tryton.org/
[46]:https://www.axelor.com/
[47]:https://opensource.com/tools/supply-chain-management
[48]:https://opensource.com/article/16/3/top-4-open-source-erp-systems
[49]:https://opensource.com/users/scottnesbitt

View File

@ -0,0 +1,194 @@
How to get a core dump for a segfault on Linux
============================================================
This week at work I spent all week trying to debug a segfault. Id never done this before, and some of the basic things involved (get a core dump! find the line number that segfaulted!) took me a long time to figure out. So heres a blog post explaining how to do those things!
At the end of this blog post, you should know how to go from “oh no my program is segfaulting and I have no idea what is happening” to “well I know what its stack / line number was when it segfaulted, at least!”.
### whats a segfault?
A “segmentation fault” is when your program tries to access memory that it's not allowed to access. This can be caused by:
* trying to dereference a null pointer (youre not allowed to access the memory address `0`)
* trying to dereference some other pointer that isnt in your memory
* a C++ vtable pointer that got corrupted and is pointing to the wrong place, which causes the program to try to execute some memory that isnt executable
* some other things that I dont understand, like I think misaligned memory accesses can also segfault
This “C++ vtable pointer” thing is what was happening to my segfaulting program. I might explain that in a future blog post because I didnt know any C++ at the beginning of this week and this vtable lookup thing was a new way for a program to segfault that I didnt know about.
But! This blog post isnt about C++ bugs. Lets talk about the basics, like, how do we even get a core dump?
### step 1: run valgrind
I found the easiest way to figure out why my program is segfaulting was to use valgrind: I ran
```
valgrind -v your-program
```
and this gave me a stack trace of what happened. Neat!
But I also wanted to do a more in-depth investigation and find out more than just what valgrind was telling me! So I wanted to get a core dump and explore it.
### How to get a core dump
A core dump is a copy of your programs memory, and its useful when youre trying to debug what went wrong with your problematic program.
When your program segfaults, the Linux kernel will sometimes write a core dump to disk. When I originally tried to get a core dump, I was pretty frustrated for a long time because Linux wasnt writing a core dump!! Where was my core dump????
Heres what I ended up doing:
1. Run `ulimit -c unlimited` before starting my program
2. Run `sudo sysctl -w kernel.core_pattern=/tmp/core-%e.%p.%h.%t`
### ulimit: set the max size of a core dump
`ulimit -c` sets the maximum size of a core dump. It's often set to 0, which means that the kernel won't write core dumps at all. It's in kilobytes. ulimits are per-process: you can see a process's limits by running `cat /proc/PID/limits`.
For example these are the limits for a random Firefox process on my system:
```
$ cat /proc/6309/limits
Limit Soft Limit Hard Limit Units
Max cpu time unlimited unlimited seconds
Max file size unlimited unlimited bytes
Max data size unlimited unlimited bytes
Max stack size 8388608 unlimited bytes
Max core file size 0 unlimited bytes
Max resident set unlimited unlimited bytes
Max processes 30571 30571 processes
Max open files 1024 1048576 files
Max locked memory 65536 65536 bytes
Max address space unlimited unlimited bytes
Max file locks unlimited unlimited locks
Max pending signals 30571 30571 signals
Max msgqueue size 819200 819200 bytes
Max nice priority 0 0
Max realtime priority 0 0
Max realtime timeout unlimited unlimited us
```
The kernel uses the soft limit (in this case, “max core file size = 0”) when deciding how big of a core file to write. You can increase the soft limit up to the hard limit using the `ulimit` shell builtin (`ulimit -c unlimited`!)
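For example, checking and raising the limit in a shell looks something like this (the value printed will vary by system):

```
$ ulimit -c            # show the current soft limit (0 means no core dumps)
$ ulimit -c unlimited  # raise it for this shell session
```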
### kernel.core_pattern: where core dumps are written
`kernel.core_pattern` is a kernel parameter or a “sysctl setting” that controls where the Linux kernel writes core dumps to disk.
Kernel parameters are a way to set global settings on your system. You can get a list of every kernel parameter by running `sysctl -a`, or use `sysctl kernel.core_pattern` to look at the `kernel.core_pattern` setting specifically.
So `sysctl -w kernel.core_pattern=/tmp/core-%e.%p.%h.%t` will write core dumps to `/tmp/core-<a bunch of stuff identifying the process>`
If you want to know more about what these `%e`, `%p` parameters mean, see [man core][1].
It's important to know that `kernel.core_pattern` is a global setting; it's good to be a little careful about changing it because it's possible that other systems depend on it being set a certain way.
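Also note that `sysctl -w` only changes the setting until the next reboot. If you want it to stick across reboots, one common option is to put the setting in a sysctl config file, roughly like this (the filename is just an example):

```
# /etc/sysctl.d/60-core-pattern.conf  (example filename)
kernel.core_pattern=/tmp/core-%e.%p.%h.%t
```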
### kernel.core_pattern & Ubuntu
By default on Ubuntu systems, this is what `kernel.core_pattern` is set to
```
$ sysctl kernel.core_pattern
kernel.core_pattern = |/usr/share/apport/apport %p %s %c %d %P
```
This caused me a lot of confusion (what is this apport thing and what is it doing with my core dumps??) so heres what I learned about this:
* Ubuntu uses a system called “apport” to report crashes in apt packages
* Setting `kernel.core_pattern=|/usr/share/apport/apport %p %s %c %d %P` means that core dumps will be piped to `apport`
* apport has logs in /var/log/apport.log
* apport by default will ignore crashes from binaries that aren't part of an Ubuntu package
I ended up just overriding this Apport business and setting `kernel.core_pattern` with `sysctl -w kernel.core_pattern=/tmp/core-%e.%p.%h.%t`, because I was on a dev machine, I didn't care whether Apport was working or not, and I didn't feel like trying to convince Apport to give me my core dumps.
### So you have a core dump. Now what?
Okay, now we know about ulimits and `kernel.core_pattern`, and you actually have a core dump file on disk in `/tmp`. Amazing! Now what??? We still don't know why the program segfaulted!
The next step is to open the core file with `gdb` and get a backtrace.
### Getting a backtrace from gdb
You can open a core file with gdb like this:
```
$ gdb -c my_core_file
```
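If you already know which binary produced the core dump, you can also pass the binary to gdb along with the core file so it can pick up symbols on its own (just a variant of the same command):

```
$ gdb /path/to/my/binary -c my_core_file
```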
Next, we want to know what the stack was when the program crashed. Running `bt` at the gdb prompt will give you a backtrace. In my case gdb hadnt loaded symbols for the binary, so it was just like `??????`. Luckily, loading symbols fixed it.
Heres how to load debugging symbols.
```
symbol-file /path/to/my/binary
sharedlibrary
```
This loads symbols from the binary and from any shared libraries the binary uses. Once I did that, gdb gave me a beautiful stack trace with line numbers when I ran `bt`!!!
If you want this to work, the binary should be compiled with debugging symbols. Having line numbers in your stack traces is extremely helpful when trying to figure out why a program crashed :)
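If you're compiling the program yourself, that usually just means passing `-g` to the compiler, something like this (assuming a C program built with gcc):

```
$ gcc -g -o my_program my_program.c
```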
### look at the stack for every thread
Heres how to get the stack for every thread in gdb!
```
thread apply all bt full
```
### gdb + core dumps = amazing
If you have a core dump & debugging symbols and gdb, you are in an amazing situation!! You can go up and down the call stack, print out variables, and poke around in memory to see what happened. Its the best.
If you are still working on being a gdb wizard, you can also just print out the stack trace with `bt` and thats okay :)
### ASAN
Another path to figuring out your segfault is to compile the program with AddressSanitizer (“ASAN”) (`$CC -fsanitize=address`) and run it. I'm not going to discuss that in this post because this is already pretty long, and anyway in my case the segfault disappeared with ASAN turned on for some reason, possibly because the ASAN build used a different memory allocator (system malloc instead of tcmalloc).
I might write about ASAN more in the future if I ever get it to work :)
### getting a stack trace from a core dump is pretty approachable!
This blog post sounds like a lot and I was pretty confused when I was doing it, but really there aren't all that many steps to getting a stack trace out of a segfaulting program (there's a consolidated sketch of these steps after the list):
1. try valgrind
if that doesn't work, or if you want to have a core dump to investigate:
1. make sure the binary is compiled with debugging symbols
2. set `ulimit` and `kernel.core_pattern` correctly
3. run the program
4. open your core dump with `gdb`, load the symbols, and run `bt`
5. try to figure out what happened!!
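Here's the consolidated sketch mentioned above, with all the specifics (program name, paths, core file name) made up:

```
ulimit -c unlimited                                      # step 2: allow core files
sysctl -w kernel.core_pattern=/tmp/core-%e.%p.%h.%t      # step 2: choose where they go
./my_program                                             # step 3: run it until it segfaults

gdb -c /tmp/core-my_program.1234.myhost.1527000000       # step 4: open the core dump
(gdb) symbol-file ./my_program                           # load symbols for the binary
(gdb) sharedlibrary                                      # ...and for its shared libraries
(gdb) bt                                                 # step 5: read the backtrace
```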
Using gdb, I was able to figure out that there was a C++ vtable entry pointing to some corrupt memory, which was somewhat helpful and helped me feel like I understood C++ a bit better. Maybe we'll talk more about how to use gdb to figure things out another day!
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2018/04/28/debugging-a-segfault-on-linux/
作者:[Julia Evans ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://jvns.ca/about/
[1]:http://man7.org/linux/man-pages/man5/core.5.html


@ -1,231 +0,0 @@
KevinSJ Translating
A Beginner's Guide To Cron Jobs
======
![](https://www.ostechnix.com/wp-content/uploads/2018/05/cron-jobs1-720x340.jpg)
**Cron** is one of the most useful utilities that you can find in any Unix-like operating system. It is used to schedule commands to run at a specific time. These scheduled commands or tasks are known as “Cron Jobs”. Cron is generally used for running scheduled backups, monitoring disk space, periodically deleting files that are no longer required (for example, log files), running system maintenance tasks, and a lot more. In this brief guide, we will see the basic usage of Cron Jobs in Linux.
### The Beginner's Guide To Cron Jobs
The typical format of a cron job is:
```
Minute(0-59) Hour(0-23) Day_of_month(1-31) Month(1-12) Day_of_week(0-6) Command_to_execute
```
Just memorize the cron job format, or print the following illustration and keep it on your desk.
![][2]
In the above picture, the asterisks refer to the specific blocks of time.
To display the contents of the **crontab** file of the currently logged in user:
```
$ crontab -l
```
To edit the current user's cron jobs, do:
```
$ crontab -e
```
If it is the first time, you will be asked to choose an editor to edit the jobs.
```
no crontab for sk - using an empty one
Select an editor. To change later, run 'select-editor'.
1. /bin/nano <---- easiest
2. /usr/bin/vim.basic
3. /usr/bin/vim.tiny
4. /bin/ed
Choose 1-4 [1]:
```
Choose whichever one suits you. Here is how a sample crontab file looks.
![][3]
In this file, you need to add your cron jobs.
To edit the crontab of a different user, for example ostechnix, do:
```
$ crontab -u ostechnix -e
```
Let us see some examples.
To run a cron job **every minute**, the format should be like below.
```
* * * * * <command-to-execute>
```
To run a cron job every 5 minutes, add the following in your crontab file.
```
*/5 * * * * <command-to-execute>
```
To run a cron job at every quarter hour (every 15th minute), add this:
```
*/15 * * * * <command-to-execute>
```
To run a cron job at minute 30 of every hour, add:
```
30 * * * * <command-to-execute>
```
You can also define multiple time intervals separated by commas. For example, the following cron job will run three times every hour, at minutes 0, 5 and 10:
```
0,5,10 * * * * <command-to-execute>
```
Run a cron job every half hour:
```
*/30 * * * * <command-to-execute>
```
Run a job every hour:
```
0 * * * * <command-to-execute>
```
Run a job every 2 hours:
```
0 */2 * * * <command-to-execute>
```
Run a job every day (It will run at 00:00):
```
0 0 * * * <command-to-execute>
```
Run a job every day at 3am:
```
0 3 * * * <command-to-execute>
```
Run a job every Sunday:
```
0 0 * * SUN <command-to-execute>
```
Or,
```
0 0 * * 0 <command-to-execute>
```
It will run exactly at 00:00 on Sunday.
Run a job on every day-of-week from Monday through Friday, i.e., every weekday:
```
0 0 * * 1-5 <command-to-execute>
```
The job will start at 00:00.
Run a job once a month (at 00:00 on day-of-month 1):
```
0 0 1 * * <command-to-execute>
```
Run a job at 16:15 on day-of-month 1:
```
15 16 1 * * <command-to-execute>
```
Run a job every quarter, i.e., at 00:00 on day-of-month 1 in every 3rd month:
```
0 0 1 */3 * <command-to-execute>
```
Run a job in a specific month at a specific time:
```
5 0 * 4 * <command-to-execute>
```
The job will start at 00:05 in April.
Run a job every 6 months:
```
0 0 1 */6 * <command-to-execute>
```
This cron job will start at 00:00 on day-of-month 1 in every 6th month.
Run a job every year:
```
0 0 1 1 * <command-to-execute>
```
This cron job will start at 00:00 on day-of-month 1 in January.
We can also use the following strings to define a job.
* @reboot Run once, at startup.
* @yearly Run once a year.
* @annually (same as @yearly).
* @monthly Run once a month.
* @weekly Run once a week.
* @daily Run once a day.
* @midnight (same as @daily).
* @hourly Run once an hour.
For example, to run a job every time the server is rebooted, add this line in your crontab file.
```
@reboot <command-to-execute>
```
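To make this more concrete, here is what a complete crontab line might look like in practice: a backup script that runs every night at 02:30 and appends its output to a log file (the script and log paths are just placeholders).

```
# min hour day month weekday  command
30 2 * * * /home/sk/scripts/backup.sh >> /home/sk/logs/backup.log 2>&1
```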
To remove all cron jobs for the current user:
```
$ crontab -r
```
There is also a dedicated website named [**crontab.guru**][4] for learning cron job examples. This site provides a lot of cron job examples.
For more details, check man pages.
```
$ man crontab
```
And, that's all for now. At this point, you might have a basic understanding of cron jobs and how to use them in real time. More good stuff to come. Stay tuned!!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/a-beginners-guide-to-cron-jobs/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[2]:http://www.ostechnix.com/wp-content/uploads/2018/05/cron-job-format-1.png
[3]:http://www.ostechnix.com/wp-content/uploads/2018/05/cron-jobs-1.png
[4]:https://crontab.guru/


@ -0,0 +1,92 @@
Give Your Linux Desktop a Stunning Makeover With Xenlism Themes
============================================================
_Brief: Xenlism theme pack provides an aesthetically pleasing GTK theme, colorful icons, and minimalist wallpapers to transform your Linux desktop into an eye-catching setup._
It's not every day that I dedicate an entire article to a theme unless I find something really awesome. I used to cover themes and icons regularly. But lately, I preferred having lists of [best GTK themes][6] and icon themes. This is more convenient for me, and for you as well, as you get to see many beautiful themes in one place.
After the [Pop OS theme][7] suite, Xenlism is another theme that has left me awestruck by its look.
![Xenlism GTK theme for Ubuntu and Other Linux](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/xenlishm-minimalism-gtk-theme-800x450.jpeg)
Xenlism GTK theme is based on the Arc theme, an inspiration behind so many themes these days. The GTK theme provides window buttons similar to macOS, which I neither like nor dislike. The GTK theme has a flat, minimalist layout and I like that.
There are two icon themes in the Xenlism suite. Xenlism Wildfire is an older one and has already made it to our list of [best icon themes][8].
![Beautiful Xenlism Wildfire theme for Ubuntu and Other Linux](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/xenlism-wildfire-theme-800x450.jpeg)
Xenlism Wildfire Icons
Xenlism Storm is a relatively new icon theme but is equally beautiful.
![Beautiful Xenlism Storm theme for Ubuntu and Other Linux](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/xenlism-storm-theme-1-800x450.jpeg)
Xenlism Storm Icons
Xenlism themes are open source under the GPL license.
### How to install Xenlism theme pack on Ubuntu 18.04
Xenlism dev provides an easier way of installing the theme pack through a PPA. Though the PPA is available for Ubuntu 16.04, I found the GTK theme wasn't working with Unity. It works fine with the GNOME desktop in Ubuntu 18.04.
Open a terminal (Ctrl+Alt+T) and use the following commands one by one:
```
sudo add-apt-repository ppa:xenatt/xenlism
sudo apt update
```
This PPA offers four packages:
* xenlism-finewalls: for a set of wallpapers that will be available directly in the wallpaper section of Ubuntu. One of the wallpapers has been used in the screenshot.
* xenlism-minimalism-theme: GTK theme
* xenlism-storm: an icon theme (see previous screenshots)
* xenlism-wildfire-icon-theme: another icon theme with several color variants (folder colors get changed in the variants)
You can decide on your own which theme components you want to install. Personally, I don't see any harm in installing all of them.
```
sudo apt install xenlism-minimalism-theme xenlism-storm-icon-theme xenlism-wildfire-icon-theme xenlism-finewalls
```
You can use GNOME Tweaks for changing the theme and icons. If you are not familiar with the procedure already, I suggest reading this tutorial to learn [how to install themes in Ubuntu 18.04 GNOME][9].
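If you prefer the terminal over GNOME Tweaks, the same switch can usually be made with `gsettings`. The exact theme and icon names below are my guesses based on the package names, so check the directory names under `/usr/share/themes` and `/usr/share/icons` first:

```
# Theme/icon names are assumptions; verify them in /usr/share/themes and /usr/share/icons
gsettings set org.gnome.desktop.interface gtk-theme "Xenlism-Minimalism"
gsettings set org.gnome.desktop.interface icon-theme "Xenlism-Storm"
```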
### Getting Xenlism themes in other Linux distributions
You can install Xenlism themes on other Linux distributions as well. Installation instructions for various Linux distributions can be found on its website:
[Install Xenlism Themes][10]
### What do you think?
I know not everyone would agree with me but I loved this theme. I think you are going to see glimpses of the Xenlism theme in screenshots in future tutorials on It's FOSS.
Did you like Xenlism theme? If not, what theme do you like the most? Share your opinion in the comment section below.
#### 关于作者
I am a professional software developer, and founder of It's FOSS. I am an avid Linux lover and Open Source enthusiast. I use Ubuntu and believe in sharing knowledge. Apart from Linux, I love classic detective mysteries. I'm a huge fan of Agatha Christie's work.
--------------------------------------------------------------------------------
via: https://itsfoss.com/xenlism-theme/
作者:[Abhishek Prakash ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://itsfoss.com/author/abhishek/
[1]:https://itsfoss.com/author/abhishek/
[2]:https://itsfoss.com/xenlism-theme/#comments
[3]:https://itsfoss.com/category/desktop/
[4]:https://itsfoss.com/tag/themes/
[5]:https://itsfoss.com/tag/xenlism/
[6]:https://itsfoss.com/best-gtk-themes/
[7]:https://itsfoss.com/pop-icon-gtk-theme-ubuntu/
[8]:https://itsfoss.com/best-icon-themes-ubuntu-16-04/
[9]:https://itsfoss.com/install-themes-ubuntu/
[10]:http://xenlism.github.io/minimalism/#install


@ -0,0 +1,102 @@
How to Enable Click to Minimize On Ubuntu
============================================================
_Brief: This quick tutorial shows you how to enable click to minimize option on Ubuntu 18.04 and Ubuntu 16.04._
The launcher at the left hand side in [Ubuntu][7] is a handy tool for quickly accessing applications. When you click on an icon in the launcher, the application window appears in focus.
If you click again on the icon of an application already in focus, the default behavior is to do nothing. This may bother you if you expect the application window to be minimized on the second click.
Perhaps this GIF will better explain the click-to-minimize behavior on Ubuntu.
[video](https://giphy.com/gifs/linux-ubuntu-itsfoss-52FlrSIMxnZ1qq9koP?utm_source=iframe&utm_medium=embed&utm_campaign=Embeds&utm_term=https%3A%2F%2Fitsfoss.com%2Fclick-to-minimize-ubuntu%2F%3Futm_source%3Dnewsletter&%3Butm_medium=email&%3Butm_campaign=new_linux_laptop_ubuntu_1804_flavor_reviews_meltdown_20_and_other_linux_stuff&%3Butm_term=2018-05-23)
In my opinion, this should be the default behavior, but apparently Ubuntu doesn't think so. So what? Customization is one of the main reasons [why I use Linux][8], and this behavior can also be easily changed.
In this quick tutorial, I'll show you how to enable click to minimize on Ubuntu 18.04 and 16.04. I'll show both the command line and the GUI methods here.
### Enable click to minimize on Ubuntu using command line (recommended)
_This method is for Ubuntu 18.04 and 17.10 users with the [GNOME desktop environment][1]._
The first option is using the terminal. I recommend this way of enabling minimize on click even if you are not comfortable with the command line.
It's not at all complicated. Open a terminal using the Ctrl+Alt+T shortcut or by searching for it in the menu. All you need to do is copy and paste the command below into the terminal.
```
gsettings set org.gnome.shell.extensions.dash-to-dock click-action 'minimize'
```
There is no need to restart your system or anything of that sort. You can test the minimize-on-click behavior immediately afterwards.
If you do not like the click-to-minimize behavior, you can set it back to the default using the command below:
```
gsettings reset org.gnome.shell.extensions.dash-to-dock click-action
```
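Either way, you can verify which behavior is currently active by reading the same key back:

```
gsettings get org.gnome.shell.extensions.dash-to-dock click-action
```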
### Enable click to minimize on Ubuntu using GUI tool
You can do the same steps mentioned above using a GUI tool called [Dconf Editor][10]. It is a powerful tool that allows you to change many hidden aspects of your Linux desktop. I avoid recommending it because one wrong click here and there may mess up your desktop settings. So be careful while using this tool, keeping in mind that it works on a single click and changes are applied immediately.
You can find and install Dconf Editor in the Ubuntu Software Center.
![dconf editor in Ubuntu software center](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/dconf-editor-ubuntu-800x250.png)
Once installed, launch Dconf Editor and go to org -> gnome -> shell -> extensions -> dash-to-dock. Scroll down a bit until you find click-action. Click on it to access the click action settings.
Here, turn off the “Use default value” option and change the “Custom value” to “minimize”.
![Enable minmize to click on Ubuntu using dconf editor](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/enable-minimize-click-dconf-800x425.png)
You can see that the minimize on click behavior has been applied instantly.
### Enable click to minimize on Ubuntu 16.04 Unity
If you are using the Unity desktop environment, you can easily do it using Unity Tweak Tool. If you have not installed it already, look for Unity Tweak Tool in the Software Center and install it.
Once installed, launch Unity Tweak Tool and click on Launcher here.
![Enable minmize to click using Unity Tweak Tool](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/minimiz-click-ubuntu-unity-1.png)
Check the “Minimize single window application on click” option here.
![Enable minmize to click using Unity Tweak Tool](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/minimiz-click-ubuntu-unity.png)
That's all. The change takes effect right away.
### Did it work for you?
I hope this quick tip helped you to enable the minimize on click feature in Ubuntu. If you are using Ubuntu 18.04, I suggest reading [GNOME customization tips][11] for more such options.
If you have any questions or suggestions, please leave a comment. If it helped you, perhaps you could share this article on various social media platforms such as Reddit and Twitter.
#### 关于作者
I am a professional software developer, and founder of It's FOSS. I am an avid Linux lover and Open Source enthusiast. I use Ubuntu and believe in sharing knowledge. Apart from Linux, I love classic detective mysteries. I'm a huge fan of Agatha Christie's work.
--------------------------------------------------------------------------------
via: https://itsfoss.com/click-to-minimize-ubuntu/
作者:[Abhishek Prakash ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://itsfoss.com/author/abhishek/
[1]:https://www.gnome.org/
[2]:https://itsfoss.com/author/abhishek/
[3]:https://itsfoss.com/click-to-minimize-ubuntu/#comments
[4]:https://itsfoss.com/category/how-to/
[5]:https://itsfoss.com/tag/quick-tip/
[6]:https://itsfoss.com/tag/ubuntu-18-04/
[7]:https://www.ubuntu.com/
[8]:https://itsfoss.com/reasons-switch-linux-windows-xp/
[9]:https://itsfoss.com/how-to-know-ubuntu-unity-version/
[10]:https://wiki.gnome.org/Projects/dconf
[11]:https://itsfoss.com/gnome-tricks-ubuntu/


@ -0,0 +1,69 @@
KevinSJ translating
How CERN Is Using Linux and Open Source
============================================================
![CERN](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/atlas-cern.jpg?itok=IRLUYCNQ "CERN")
>CERN relies on open source technology to handle huge amounts of data generated by the Large Hadron Collider. The ATLAS (shown here) is a general-purpose detector that probes for fundamental particles. (Image courtesy: CERN)[Used with permission][2]
[CERN][3]
[CERN][6] really needs no introduction. Among other things, the European Organization for Nuclear Research created the World Wide Web and the Large Hadron Collider (LHC), the world's largest particle accelerator, which was used in the discovery of the [Higgs boson][7]. Tim Bell, who is responsible for the organization's IT Operating Systems and Infrastructure group, says the goal of his team is “to provide the compute facility for 13,000 physicists around the world to analyze those collisions, understand what the universe is made of and how it works.”
CERN is conducting hardcore science, especially with the LHC, which [generates massive amounts of data][8] when it's operational. “CERN currently stores about 200 petabytes of data, with over 10 petabytes of data coming in each month when the accelerator is running. This certainly produces extreme challenges for the computing infrastructure, regarding storing this large amount of data, as well as having the capability to process it in a reasonable timeframe. It puts pressure on the networking and storage technologies and the ability to deliver an efficient compute framework,” Bell said.
### [tim-bell-cern.png][4]
![Tim Bell](https://www.linux.com/sites/lcom/files/styles/floated_images/public/tim-bell-cern.png?itok=5eUOpip- "Tim Bell")
Tim Bell, CERN[Used with permission][1]Swapnil Bhartiya
The scale at which LHC operates and the amount of data it generates pose some serious challenges. But CERN is not new to such problems. Founded in 1954, CERN has been around for about 60 years. “We've always been facing computing challenges that are difficult problems to solve, but we have been working with open source communities to solve them,” Bell said. “Even in the 90s, when we invented the World Wide Web, we were looking to share this with the rest of humanity in order to be able to benefit from the research done at CERN and open source was the right vehicle to do that.”
### Using OpenStack and CentOS
Today, CERN is a heavy user of OpenStack, and Bell is one of the Board Members of the OpenStack Foundation. But CERN predates OpenStack. For several years, they have been using various open source technologies to deliver services through Linux servers.
“Over the past 10 years, we've found that rather than taking our problems ourselves, we find upstream open source communities with which we can work, who are facing similar challenges and then we contribute to those projects rather than inventing everything ourselves and then having to maintain it as well,” said Bell.
A good example is Linux itself. CERN used to be a Red Hat Enterprise Linux customer. But, back in 2004, they worked with Fermilab to  build their own Linux distribution called [Scientific Linux][9]. Eventually they realized that, because they were not modifying the kernel, there was no point in spending time spinning up their own distribution; so they migrated to CentOS. Because CentOS is a fully open source and community driven project, CERN could collaborate with the project and contribute to how CentOS is built and distributed.
CERN helps CentOS with infrastructure and they also organize CentOS DoJo at CERN where engineers can get together to improve the CentOS packaging.
In addition to OpenStack and CentOS, CERN is a heavy user of other open source projects, including Puppet for configuration management and Grafana and InfluxDB for monitoring, and is involved in many more.
“We collaborate with around 170 labs around the world. So every time that we find an improvement in an open source project, other labs can easily take that and use it,” said Bell, “At the same time, we also learn from others. When large scale installations like eBay and Rackspace make changes to improve scalability of solutions, it benefits us and allows us to scale.”
### Solving realistic problems
Around 2012, CERN was looking at ways to scale computing for the LHC, but the challenge was people rather than technology. The number of staff that CERN employs is fixed. “We had to find ways in which we can scale the compute without requiring a large number of additional people in order to administer that,” Bell said. “OpenStack provided us with an automated API-driven, software-defined infrastructure.” OpenStack also allowed CERN to look at problems related to the delivery of services and then automate those, without having to scale the staff.
“We're currently running about 280,000 cores and 7,000 servers across two data centers in Geneva and in Budapest. We are  using software-defined infrastructure to automate everything, which allows us to continue to add additional servers while remaining within the same envelope of staff,” said Bell.
As time progresses, CERN will be dealing with even bigger challenges. The Large Hadron Collider has a roadmap out to 2035, including a number of significant upgrades. “We run the accelerator for three to four years and then have a period of 18 months or two years when we upgrade the infrastructure. This maintenance period allows us to also do some computing planning,” said Bell. CERN is also planning the High Luminosity Large Hadron Collider upgrade, which will allow for beams with higher luminosity. The upgrade would mean about 60 times more compute requirement compared to what CERN has today.
“With Moore's Law, we will maybe get one quarter of the way there, so we have to find ways under which we can be scaling the compute and the storage infrastructure correspondingly  and finding automation and solutions such as OpenStack will help that,” said Bell.
“When we started off the Large Hadron Collider and looked at how we would deliver the computing, it was clear that we couldn't put everything into the data center at CERN, so we devised a distributed grid structure, with tier zero at CERN and then a cascading structure around that,” said Bell. “There are around 12 large tier one centers and then 150 small universities and labs around the world. They receive samples of the data from the LHC in order to assist the physicists in understanding and analyzing the data.”
That structure means CERN is collaborating internationally, with hundreds of countries contributing toward the analysis of that data. It boils down to the fundamental principle that open source is not just about sharing code, it's about collaboration among people to share knowledge and achieve what no single individual, organization, or company can achieve alone. That's the Higgs boson of the open source world.
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/2018/5/how-cern-using-linux-open-source
作者:[SWAPNIL BHARTIYA ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/arnieswap
[1]:https://www.linux.com/licenses/category/used-permission
[2]:https://www.linux.com/licenses/category/used-permission
[3]:https://home.cern/about/experiments/atlas
[4]:https://www.linux.com/files/images/tim-bell-cernpng
[5]:https://www.linux.com/files/images/atlas-cernjpg
[6]:https://home.cern/
[7]:https://home.cern/topics/higgs-boson
[8]:https://home.cern/about/computing
[9]:https://www.scientificlinux.org/


@ -0,0 +1,130 @@
IT自动化的下一步是什么: 6 大趋势
======
![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/cio_ai_artificial_intelligence.png?itok=o0csm9l2)
我们最近介绍了[促进自动化的因素][1]、目前正在被人们采用的[趋势][2],以及对那些刚开始将部分流程自动化的组织[有用的技巧][3]。
噢,我们也分享了[如何在你的公司使用自动化的案例][4],以及[长期成功的关键][5]。
现在,只有一个问题:自动化的下一步是什么?我们邀请了一些专家,分享一下[自动化][6]在不远的将来会是什么样子。以下是他们建议 IT 领导者需要密切关注的六大趋势。
### 1. 机器学习的成熟
关于[机器学习][7](或者说“自我学习系统”)的讨论虽然很多,但对绝大多数组织的项目来说,实际执行起来仍然为时过早。不过预计这种情况将会改变,机器学习将在下一次 IT 自动化浪潮中扮演至关重要的角色。
[Advanced Systems Concepts, Inc.][8]公司工程总监 Mehul Amin 指出机器学习是IT自动化下一个关键增长领域之一。
“随着数据化的发展, 自动化软件理应可以自我决策,否则这就是开发人员的责任了”, Amin 说。 “例如, 开发者需要执行构建内容, 但是识别系统最佳执行流程的,可能是由系统内软件分析完成。”
设想把这个例子延伸到其他地方。Amin 指出,机器学习可以使自动化系统在必要的时候提供额外的资源,以满足时间线或 SLA 的要求,同样也可以在不需要的时候释放这些资源,以及其他的可能性。
显然不只有 Amin 一个人这样认为。
[Sungard Availability Services][9] 公司首席架构师 Kiran Chitturi 表示,“IT自动化正在走向自我学习的方向” 。“系统将会能测试和监控自己,加强业务流程和软件交付能力。”
Chitturi 指出自动化测试就是个例子。脚本测试已经被广泛采用,但很快这些自动化测试流程将会更容易学习,更快发展,例如开发出新的代码或将更为广泛地影响生产环境。
### 2. 人工智能催生的自动化
上述原则同样适用于[人工智能][10],但它是一个独立的领域。可以预期,新兴的人工智能技术也将产生新的自动化机会。按照对人工智能的定义,机器学习在短期内可能会对 IT 领域产生巨大的影响(并且我们可能会看到这两个领域之间有许多重叠的定义和理解)。
[SolarWinds][11]公司技术负责人 Patrick Hubbard说“人工智能AI和机器学习的整合普遍被认为对未来几年的商业成功起至关重要的作用。”
### 3. 这并不意味着不再需要人力
让我们试着安慰一下那些不知所措的人:前两种趋势并不一定意味着我们将失去工作。
这很可能意味着各种角色的改变以及[全新角色][12]的创造。
但是在可预见的将来,至少,你不必需要机器人鞠躬。
“一台机器只能运行在给定的环境变量中它不能选择包含新的变量,在今天只有人类可以这样做,” Hubbard 解释说。“但是对于IT专业人员来说这将是需要培养AI和自动化技能的时代。如对程序设计、编程、管理人工智能和机器学习功能算法的基本理解以及用强大的安全状态面对更复杂的网络攻击。”
Hubbard 分享一些新的工具或功能例子,例如支持人工智能的安全软件或机器学习的应用程序,这些应用程序可以远程发现石油管道中的维护需求。两者都可以提高效益和效果,自然不会代替需要信息安全或管道维护的人员。
“许多新功能仍需要人工监控”Hubbard 说。“例如,为了让机器确定一些‘预测’是否可能成为‘规律’,人为的管理是必要的。”
即使你把机器学习和AI先放在一边看待一般地IT自动化同样原理也是成立的,尤其是在软件开发生命周期中。
[Juniper Networks][13]公司自动化首席架构师 Matthew Oswalt 指出IT自动化增长的根本原因是它通过减少操作基础设施所需的人工工作量来创造直接价值。
在代码上操作工程师可以使用事件驱动的自动化提前定义他们的工作流程而不是在凌晨3点来应对基础设施的问题。
“它也将操作工作流程作为代码而不再是容易过时的文档或系统知识阶段”Oswalt解释说。“操作人员仍然需要在[自动化]工具响应事件方面后发挥积极作用。采用自动化的下一个阶段是建立一个能够跨IT频谱识别发生的有趣事件的系统并以自主方式进行响应。在代码上操作工程师可以使用事件驱动的自动化提前定义他们的工作流程而不是在凌晨3点来应对基础设施的问题。他们可以依靠这个系统在任何时候以同样的方式作出回应。”
### 4. 对自动化的焦虑将会减少
SolarWinds 公司的 Hubbard 指出,“自动化”一词本身就会引起大量的不确定性和担忧,不仅在 IT 领域,而且在各个专业领域都是如此。他说这种担忧是合理的,但随之而来的一些担忧可能被夸大了,甚至科技产业本身也在夸大。现实反而可能起到安抚作用:当自动化的实际落地和实践帮助人们认识到上述第 3 点时,我们就会看到第 4 点的出现。
“今年我们可能会看到对自动化焦虑的减少更多的组织开始接受人工智能和机器学习作为增加现有人力资源的一种方式”Hubbard说。“自动化历史上的今天为更多的工作创造了空间,通过降低成本和时间来完成较小任务,并将劳动力重新集中到无法自动化并需要人力的事情上。人工智能和机器学习也是如此。”
自动化还将减少IT领导者神经紧张主题的一些焦虑安全。正如[红帽][14]公司首席架构师 Matt Smith 最近[指出][15]的那样自动化将越来越多地帮助IT部门降低与维护任务相关的安全风险。
他的建议是“首先在维护活动期间记录和自动化IT资产之间的交互。通过依靠自动化您不仅可以消除历史上需要大量手动操作和手术技巧的任务还可以降低人为错误的风险并展示当您的IT组织采纳变更和新工作方法时可能发生的情况。最终这将迅速减少对应用安全补丁的抵制。而且它还可以帮助您的企业在下一次重大安全事件中摆脱头条新闻。”
**[ 阅读全文: [12个企业安全坏习惯要打破。][16] ] **
### 5. 脚本和自动化工具将持续发展
许多组织迈向自动化的第一步,通常以脚本或自动化工具(有时称为配置管理工具)的形式出现,很容易被看作是“早期”的工作。
但是随着各种自动化技术的使用,对这些工具的观点也在不断发展。
[DataVision][18]首席运营官 Mark Abolafia 表示:“数据中心环境中存在很多重复性过程,容易出现人为错误,[Ansible][17]等技术有助于缓解这些问题。“通过 Ansible ,人们可以为一组操作编写特定的步骤,并输入不同的变量,例如地址等,使过去长时间的过程链实现自动化,而这些过程以前都需要人为触摸和更长的交货时间。”
**[想了解更多关于Ansible这个方面的知识吗阅读相关文章:[使用Ansible时的成功秘诀][19]。 ]**
另一个因素是:工具本身将继续变得更先进。
“使用先进的IT自动化工具开发人员将能够在更短的时间内构建和自动化工作流程减少易出错的编码” ASCI 公司的 Amin 说。“这些工具包括预先构建的预先测试过的拖放式集成API作业丰富的变量使用参考功能和对象修订历史记录。”
### 6. 自动化开创了新的指标机会
正如我们在此前所说的那样IT自动化不是万能的。它不会修复被破坏的流程或者以其他方式为您的组织提供全面的灵丹妙药。这也是持续不断的自动化并不排除衡量性能的必要性。
**[ 参见我们的相关文章 [DevOps指标你在衡量什么重要吗][20] ]**
实际上,自动化应该打开新的机会。
[Janeiro Digital][21]公司架构师总裁 Josh Collins 说,“随着越来越多的开发活动 - 源代码管理DevOps管道工作项目跟踪 - 转向API驱动的平台 - 将这些原始数据拼接在一起以描绘组织效率提升的机会和图景”。
Collins 认为这是一种可能的新型“开发组织度量指标”。但不要误认为这意味着机器和算法可以突然预测IT所做的一切。
“无论是衡量个人资源还是整体团队,这些指标都可以很强大 - 但应该用大量的背景来衡量。”Collins说“将这些数据用于高层次趋势并确认定性观察 - 而不是临床评级你的团队。”
**想要更多这样知识, IT领导者[注册我们的每周电子邮件通讯][22]。**
--------------------------------------------------------------------------------
via: https://enterprisersproject.com/article/2018/3/what-s-next-it-automation-6-trends-watch
作者:[Kevin Casey][a]
译者:[MZqk](https://github.com/MZqk)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://enterprisersproject.com/user/kevin-casey
[1]:https://enterprisersproject.com/article/2017/12/5-factors-fueling-automation-it-now
[2]:https://enterprisersproject.com/article/2017/12/4-trends-watch-it-automation-expands
[3]:https://enterprisersproject.com/article/2018/1/getting-started-automation-6-tips
[4]:https://enterprisersproject.com/article/2018/1/how-make-case-it-automation
[5]:https://enterprisersproject.com/article/2018/1/it-automation-best-practices-7-keys-long-term-success
[6]:https://enterprisersproject.com/tags/automation
[7]:https://enterprisersproject.com/article/2018/2/how-spot-machine-learning-opportunity
[8]:https://www.advsyscon.com/en-us/
[9]:https://www.sungardas.com/en/
[10]:https://enterprisersproject.com/tags/artificial-intelligence
[11]:https://www.solarwinds.com/
[12]:https://enterprisersproject.com/article/2017/12/8-emerging-ai-jobs-it-pros
[13]:https://www.juniper.net/
[14]:https://www.redhat.com/en?intcmp=701f2000000tjyaAAA
[15]:https://enterprisersproject.com/article/2018/2/12-bad-enterprise-security-habits-break
[16]:https://enterprisersproject.com/article/2018/2/12-bad-enterprise-security-habits-break?sc_cid=70160000000h0aXAAQ
[17]:https://opensource.com/tags/ansible
[18]:https://datavision.com/
[19]:https://opensource.com/article/18/2/tips-success-when-getting-started-ansible?intcmp=701f2000000tjyaAAA
[20]:https://enterprisersproject.com/article/2017/7/devops-metrics-are-you-measuring-what-matters?sc_cid=70160000000h0aXAAQ
[21]:https://www.janeirodigital.com/
[22]:https://enterprisersproject.com/email-newsletter?intcmp=701f2000000tsjPAAQ


@ -0,0 +1,156 @@
可以考虑的 9 个开源 ERP 系统
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_orgchart1.png?itok=tukiFj89)
拥有一定数量员工的企业就需要大量的协调工作,包括价格、生产计划、帐务和财务、管理支出、管理存货等等。把一套截然不同的工具拼接到一起去处理这些工作,是一种粗制滥造和无价值的做法。
那种方法没有任何弹性。并且那样在各种各样的自组织系统之间高效移动数据是非常困难的。同样,它也很难维护。
因此,大多数成长型企业都转而使用一个 [企业资源计划][1] (ERP) 系统。
在这个行业中的大咖有 Oracle、SAP、以及 Microsoft Dynamics。它们都提供了一个综合的系统但同时也很昂贵。如果你的企业支付不起如此昂贵的大系统或者你仅需要一个简单的系统怎么办呢你可以使用开源的产品来作为替代。
### 一个 ERP 系统中有什么东西
显然,你希望有一个满足你需要的系统。基于那些需要,更多的功能并不意味着就更好。但是,你的需要会根据你的业务的增长而变化的,因此,你希望能够找到一个 ERP 系统,它能够根据你新的需要而扩展它。那就意味着系统有额外的模块或者支持插件和附加功能。
大多数的开源 ERP 系统都是 web 应用程序。你可以下载并将它们安装到你的服务器上。但是,如果你不希望(或者没有相应技能或者人员)自己去维护系统,那么应该确保它们的应用程序提供托管版本。
最后,你还应该确保应用程序有良好的文档和支持 —— 要么是付费支持或者有一个活跃的用户社区。
有很多弹性很好的、功能丰富的、很划算的开源 ERP 系统。如果你正打算购买这样的系统,这里有我们挑选出来的 9 个。
### ADempiere
像大多数其它开源 ERP 解决方案,[ADempiere][2] 的目标客户是中小企业。它已经存在一段时间了— 这个项目出现于 2006它是 Compiere ERP 软件的一个分支。
它的意大利语名字的意思是“实现”或者“满足”,它“涉及多个方面”的 ERP 特性旨在帮企业去满足各种需求。它在 ERP 中增加了供应链管理SCM和客户关系管理CRM功能能够让 ERP 套件在一个软件中去管理销售、采购、库存、以及帐务处理。它的最新版本是 v.3.9.0,更新了用户界面、销售点、人力资源、工资、以及其它的特性。
因为是一个跨平台的、基于 Java 的云解决方案ADempiere 可以运行在Linux、Unix、Windows、MacOS、智能手机、平板电脑上。它使用 [GPLv2][3] 授权。如果你想了解更多信息,这里有一个用于测试的 [demo][4],或者也可以在 GitHub 上查看它的 [源代码][5]。
### Apache OFBiz
[Apache OFBiz][6] 的业务相关的套件是构建在通用的架构上的,它允许企业根据自己的需要去定制 ERP。因此它是有内部开发资源的大中型企业去修改和集成它到它们现有的 IT 和业务流程的最佳套件。
OFBiz 是一个成熟的开源 ERP 系统;它的网站上说它是一个有十年历史的顶级 Apache 项目。可用的 [模块][7] 有帐务、生产制造、人力资源、存货管理、目录管理、客户关系管理、以及电子商务。你可以在它的 [demo 页面][8] 上试用电子商务的网上商店以及后端的 ERP 应用程序。
Apache OFBiz 的源代码能够在它的 [项目仓库][9] 中找到。它是用 Java 写的,它在 [Apache 2.0 license][10] 下可用。
### Dolibarr
[Dolibarr][11] 提供了中小型企业端到端的业务管理,从发票跟踪、合同、存货、订单、以及支付,到文档管理和电子化 POS 系统支持。它的全部功能封装在一个清晰的界面中。
如果你担心不会使用 Dolibarr[这里有一些关于它的文档][12]。
另外,还有一个 [在线演示][13]Dolibarr 也有一个 [插件商店][14],你可以在那里购买一些软件来扩展它的功能。你可以在 GitHub 上查看它的 [源代码][15];它在 [GPLv3][16] 或其任何更新版本的许可下使用。
### ERPNext
[ERPNext][17] 是这类开源项目中的其中一个;实际上它最初 [出现在 Opensource.com][18]。它被设计用于打破一个陈旧而昂贵的专用 ERP 系统的垄断局面。
ERPNext 适合于中小型企业。它包含的模块有帐务、存货管理、销售、采购、以及项目管理。ERPNext 是表单驱动的应用程序 — 你可以在一组字段中填入信息,然后让应用程序去完成剩余部分。整个套件非常易用。
如果你感兴趣,在你考虑参与之前,你可以请求获取一个 [demo][19],去 [下载它][20] 或者在托管服务上 [购买一个订阅][21]。
### Metasfresh
[Metasfresh][22] 的名字表示它承诺软件的代码始终保持“新鲜”。它自 2015 年以来每周发行一个更新版本,那时,它的代码是由创始人从 ADempiere 项目中 fork 的。与 ADempiere 一样,它是一个基于 Java 的开源 ERP目标客户是中小型企业。
虽然,相比在这里介绍的其它软件来说,它是一个很 “年青的” 项目,但是它早早就引起了一起人的注意,获得很多积极的评价,比如,被提名为“最佳开源”的 IT 创新奖入围者。
Metasfresh 在自托管系统上或者在云上单用户使用时是免费的,或者可以按月交纳订阅费用。它的 [源代码][23] 在 GitHub 上,可以在遵守 [GPLv2][24] 许可的情况下使用,它的云版本是以 GPLv3 方式授权使用。
### Odoo
[Odoo][25] 是一个应用程序集成解决方案,它包含的模块有项目管理、帐单、存货管理、生产制造、以及采购。这些模块之间可以相互通讯,实现高效平滑的信息交换。
虽然 ERP 可能很复杂但是Odoo 通过简单的,甚至是简洁的界面使它变得很友好。这个界面让人联想到谷歌云盘,它只让你需要的功能可见。在你决定签定采购合同之前,你可以 [得到一个 Odoo 去试用][26]。
Odoo 是基于 web 的工具。按单个模块来订阅的话,每个模块每月需要支付 20 美元。你也可以 [下载它][27],或者可以从 GitHub 上获得 [源代码][28],它以 [LGPLv3][29] 方式授权。
### Opentaps
[Opentaps][30] 是专为大型业务设计的几个开源 ERP 解决方案之一,它的功能强大而灵活。这并不奇怪,因为它是在 Apache OFBiz 基础之上构建的。
你可以得到你所希望的模块组合来帮你管理存货、生产制造、财务、以及采购。它也有分析功能帮你去分析业务的各个方面。你可以借助这些信息让未来的计划做的更好。Opentaps 也包含一个强大的报表功能。
在它的基础之上,你还可以 [购买一些插件和附加模块][31] 去增强 Opentaps 的功能。包括与 Amazon Marketplace Services 和 FedEx 的集成等。在你 [下载 Opentaps][32] 之前,你可以到 [在线 demo][33] 上试用一下。它遵守 [GPLv3][34] 许可。
### WebERP
[WebERP][35] 是一个如它的名字所表示的那样:一个通过 Web 浏览器来使用的 ERP 系统。另外还需要的其它软件只有一个,那就是查看报告所使用的 PDF 阅读器。
具体来说,它是一个面向批发、分销、生产制造业务的账务和业务管理解决方案。它也可以与 [第三方的业务软件][36] 集成,包括多地点零售管理的销售点系统、电子商务模块、以及构建业务知识库的 wiki 软件。它是用 PHP 写的,并且它致力于成为低资源占用、高效、快速、以及平台无关的、普通商业用户易于使用的 ERP 系统。
WebERP 正在积极地进行开发,并且它有一个活跃的 [论坛][37],在那里你可以咨询问题或者学习关于如何使用这个应用程序的相关知识。你也可以试用一个 [demo][38],或者在 GitHub 上下载它的 [源代码][39](遵守 [GPLv2][40] 许可)
### xTuple PostBooks
如果你的生产制造、分销、电子商务业务已经从小规模业务成长起来了,并且正在寻找一个适合你的成长型企业的 ERP 系统,那么,你可以去了解一下 [xTuple PostBooks][41]。它是围绕核心 ERP 功能、帐务、以及可以添加存货、分销、采购、以及供应商报告等 CRM 功能构建的全面解决方案的系统。
xTuple 在通用公共属性许可证([CPAL][42])下使用,并且这个项目欢迎开发者去 fork 它,然后为基于存货的生产制造型企业开发其它的业务软件。它的基于 web 的核心是用 JavaScript 写的,它的 [源代码][43] 可以在 GitHub 上找到。你可以去在 xTuple 的网站上注册一个免费的 [demo][44] 去了解它。
还有许多其它的开源 ERP 可供你选择 — 另外你可以去了解的还有 [Tryton][45],它是用 Python 写的,并且使用的是 PostgreSQL 数据库引擎,或者基于 Java 的 [Axelor][46],它的好处是用户可以使用拖放界面来创建或者修改业务应用。如果还有在这里没有列出的你喜欢的开源 ERP 解决方案,请在下面的评论区共享出来。你也可以去查看我们的 [供应链管理工具][47] 榜单。
这篇文章是 [以前版本][48] 的一个更新版,它是由 Opensource.com 的主席 [Scott Nesbitt][49] 所写。
--------------------------------------------------------------------------------
via: https://opensource.com/tools/enterprise-resource-planning
作者:[Opensource.com][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com
[1]:http://en.wikipedia.org/wiki/Enterprise_resource_planning
[2]:http://www.adempiere.net/welcome
[3]:http://wiki.adempiere.net/License
[4]:http://www.adempiere.net/web/guest/demo
[5]:https://github.com/adempiere/adempiere
[6]:http://ofbiz.apache.org/
[7]:https://ofbiz.apache.org/business-users.html#UsrModules
[8]:http://ofbiz.apache.org/ofbiz-demos.html
[9]:http://ofbiz.apache.org/source-repositories.html
[10]:http://www.apache.org/licenses/LICENSE-2.0
[11]:http://www.dolibarr.org/
[12]:http://wiki.dolibarr.org/index.php/What_Dolibarr_can%27t_do
[13]:http://www.dolibarr.org/onlinedemo
[14]:http://www.dolistore.com/
[15]:https://github.com/Dolibarr/dolibarr
[16]:https://github.com/Dolibarr/dolibarr/blob/develop/COPYING
[17]:https://erpnext.com/
[18]:https://opensource.com/business/14/11/building-open-source-erp
[19]:https://frappe.erpnext.com/request-a-demo
[20]:https://erpnext.com/download
[21]:https://erpnext.com/pricing
[22]:http://metasfresh.com/en/
[23]:https://github.com/metasfresh/metasfresh
[24]:https://github.com/metasfresh/metasfresh/blob/master/LICENSE.md
[25]:https://www.odoo.com/
[26]:https://www.odoo.com/page/start
[27]:https://www.odoo.com/page/download
[28]:https://github.com/odoo
[29]:https://github.com/odoo/odoo/blob/11.0/LICENSE
[30]:http://www.opentaps.org/
[31]:http://shop.opentaps.org/
[32]:http://www.opentaps.org/products/download
[33]:http://www.opentaps.org/products/online-demo
[34]:https://www.gnu.org/licenses/agpl-3.0.html
[35]:http://www.weberp.org/
[36]:http://www.weberp.org/Links.html
[37]:http://www.weberp.org/forum/
[38]:http://www.weberp.org/weberp/
[39]:https://github.com/webERP-team/webERP
[40]:https://github.com/webERP-team/webERP#legal
[41]:https://xtuple.com/
[42]:https://xtuple.com/products/license-options#cpal
[43]:http://xtuple.github.io/
[44]:https://xtuple.com/free-demo
[45]:http://www.tryton.org/
[46]:https://www.axelor.com/
[47]:https://opensource.com/tools/supply-chain-management
[48]:https://opensource.com/article/16/3/top-4-open-source-erp-systems
[49]:https://opensource.com/users/scottnesbitt


@ -0,0 +1,227 @@
Cron 任务入门指南
======
![](https://www.ostechnix.com/wp-content/uploads/2018/05/cron-jobs1-720x340.jpg)
**Cron** 是您可以在任何类 Unix 操作系统中找到的最有用的实用程序之一。它用于安排命令在特定时间执行。这些预定的命令或任务被称为 “Cron 任务”。Cron 通常用于运行计划备份、监视磁盘空间、定期删除不再需要的文件(例如日志文件)、运行系统维护任务等等。在本简要指南中,我们将了解 Linux 中 Cron 任务的基本用法。
### Cron 任务入门指南
cron 任务的典型格式是:
```
Minute(0-59) Hour(0-23) Day_of_month(1-31) Month(1-12) Day_of_week(0-6) Command_to_execute
```
只需记住 cron 任务的格式或打印下面的插图并将其放在你桌面上即可。
![][2]
在上图中,星号表示特定的时间块。
要显示当前登录用户的 **crontab** 文件的内容:
```
$ crontab -l
```
要编辑当前用户的 cron 任务,请执行以下操作:
```
$ crontab -e
```
如果这是第一次编辑此文件,你将需要选择一个编辑器来编辑这些任务。
```
no crontab for sk - using an empty one
Select an editor. To change later, run 'select-editor'.
1. /bin/nano <---- easiest
2. /usr/bin/vim.basic
3. /usr/bin/vim.tiny
4. /bin/ed
Choose 1-4 [1]:
```
选择适合你的编辑器。下面是一个 crontab 示例文件的样子。
![][3]
在这个文件中,你需要添加你的 cron 任务。
要编辑其他用户(例如 ostechnix的 crontab请执行
```
$ crontab -u ostechnix -e
```
让我们看看一些例子。
要**每分钟**执行一次 cron 任务,格式应如下所示:
```
* * * * * <command-to-execute>
```
要每 5 分钟运行一次 cron 任务,请在 crontab 文件中添加以下内容。
```
*/5 * * * * <command-to-execute>
```
要在每 1/4 个小时每15分钟运行一次 cron 任务,请添加以下内容:
```
*/15 * * * * <command-to-execute>
```
要在每小时的第 30 分钟运行一次 cron 任务,请添加:
```
30 * * * * <command-to-execute>
```
您还可以使用逗号定义多个时间间隔。例如,以下 cron 任务 每小时运行三次,分别在第 0, 5 和 10 分钟运行:
```
0,5,10 * * * * <command-to-execute>
```
每半小时运行一次 cron 任务:
```
*/30 * * * * <command-to-execute>
```
每小时运行一次:
```
0 * * * * <command-to-execute>
```
每2小时运行一次
```
0 */2 * * * <command-to-execute>
```
每天运行一次(将在 00:00 运行):
```
0 0 * * * <command-to-execute>
```
每天凌晨3点运行
```
0 3 * * * <command-to-execute>
```
每周日运行:
```
0 0 * * SUN <command-to-execute>
```
或者使用:
```
0 0 * * 0 <command-to-execute>
```
它将在每周日的午夜 00:00 运行。
星期一至星期五每天运行一次,亦即每个工作日运行一次:
```
0 0 * * 1-5 <command-to-execute>
```
这项工作将于00:00开始。
每个月运行一次:
```
0 0 1 * * <command-to-execute>
```
于每月第1天的16:15运行
```
15 16 1 * * <command-to-execute>
```
每季度运行一次亦即每隔3个月的第1天运行
```
0 0 1 */3 * <command-to-execute>
```
在特定月份的特定时间运行:
```
5 0 * 4 * <command-to-execute>
```
每个四月的 00:05 运行。
每6个月运行
```
0 0 1 */6 * <command-to-execute>
```
这个定时任务将在每六个月的第一天的 00:00 运行。
每年运行:
```
0 0 1 1 * <command-to-execute>
```
这项 cron 任务将于 1 月份的第一天的 00:00 运行。
我们也可以使用以下字符串来定义任务。
* @reboot 在每次启动时运行一次。
* @yearly 每年运行一次。
* @annually和 @yearly 一样)。
* @monthly 每月运行一次。
* @weekly 每周运行一次。
* @daily 每天运行一次。
* @midnight和 @daily 一样)。
* @hourly 每小时运行一次。
例如,要在每次重新启动服务器时运行任务,请将此行添加到您的 crontab 文件中。
```
@reboot <command-to-execute>
```
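为了更直观一些,下面是一条比较完整的 crontab 条目示例每天凌晨 02:30 运行一个备份脚本,并把输出追加到日志文件中(其中的脚本和日志路径只是占位示例):

```
# 分 时 日 月 周  要执行的命令
30 2 * * * /home/sk/scripts/backup.sh >> /home/sk/logs/backup.log 2>&1
```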
要删除当前用户的所有 cron 任务:
```
$ crontab -r
```
还有一个名为 [**crontab.guru**][4] 的专门网站,用于学习 cron 任务示例。这个网站提供了很多 cron 任务的例子。
有关更多详细信息,请查看手册页。
```
$ man crontab
```
那么,就是这样。到此为止,您应该对 cron 任务以及如何实际使用它们有了一个基本的了解。后续还会介绍更多的好内容。敬请关注!
干杯!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/a-beginners-guide-to-cron-jobs/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[KevinSJ](https://github.com/KevinSJ)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[2]:http://www.ostechnix.com/wp-content/uploads/2018/05/cron-job-format-1.png
[3]:http://www.ostechnix.com/wp-content/uploads/2018/05/cron-jobs-1.png
[4]:https://crontab.guru/