Merge remote-tracking branch 'LCTT/master'

This commit is contained in:
Xingyu Wang 2019-11-16 12:33:57 +08:00
commit e17d716fd7
14 changed files with 1324 additions and 286 deletions

[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11580-1.html)
[#]: subject: (Bash Script to Generate Patching Compliance Report on CentOS/RHEL Systems)
[#]: via: (https://www.2daygeek.com/bash-script-to-generate-patching-compliance-report-on-centos-rhel-systems/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
在 CentOS/RHEL 系统上生成补丁合规报告的 Bash 脚本
======
![](https://img.linux.net.cn/data/attachment/album/201911/16/101428n1nsj74wifp4k1dz.jpg)
如果你运行的是大型 Linux 环境,那么你可能已经将 Red Hat 与 Satellite 集成了。如果是的话,你不必担心补丁合规性报告,因为有一种方法可以从 Satellite 服务器导出它。
但是,如果你运行的是没有 Satellite 集成的小型 Red Hat 环境,或者它是 CentOS 系统,那么此脚本将帮助你创建报告。
补丁合规性报告通常每月创建一次或三个月一次,具体取决于公司的需求。根据你的需要添加 cronjob 来自动执行此功能。
此 [bash 脚本][1] 通常适合于少于 50 个系统运行,但没有限制。
以下文章可以帮助你了解有关在红帽 RHEL 和 CentOS 系统上安装安全修补程序的更多详细信息。
* [如何在 CentOS 或 RHEL 系统上检查可用的安全更新?][2]
* [在 RHEL 和 CentOS 系统上安装安全更新的四种方法][3]
* [在 RHEL 和 CentOS 上检查或列出已安装的安全更新的两种方法][4]
此教程中包含四个 [shell 脚本][5],请选择适合你的脚本。
### 方法 1为 CentOS/RHEL 系统上的安全修补生成补丁合规性报告的 Bash 脚本
```
+-----------------------------------+
```
添加下面的 cronjob 来每个月得到一份补丁合规性报告。
```
# crontab -e
```
你会看到下面的输出。
![][7]
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/bash-script-to-generate-patching-compliance-report-on-centos-rhel-systems/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/category/bash-script/
[2]: https://linux.cn/article-10938-1.html
[3]: https://www.2daygeek.com/install-security-updates-on-redhat-rhel-centos-system/
[4]: https://linux.cn/article-10960-1.html
[5]: https://www.2daygeek.com/category/shell-script/
[6]: https://www.2daygeek.com/wp-content/uploads/2019/11/bash-script-to-generate-patching-compliance-report-on-centos-rhel-systems-2.png
[7]: https://www.2daygeek.com/wp-content/uploads/2019/11/bash-script-to-generate-patching-compliance-report-on-centos-rhel-systems-3.png

[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11579-1.html)
[#]: subject: (How to manage music tags using metaflac)
[#]: via: (https://opensource.com/article/19/11/metaflac-fix-music-tags)
[#]: author: (Chris Hermansen https://opensource.com/users/clhermansen)
如何使用 metaflac 管理音乐标签
======
> 使用这个强大的开源工具可以在命令行中纠正音乐标签错误。
![](https://img.linux.net.cn/data/attachment/album/201911/16/093629njth88bej8ttekh2.jpg)
很久以来我就将 CD 翻录到电脑。在此期间,我用过几种不同的翻录工具,观察到每种工具在标记上似乎有不同的做法,特别是在保存哪些音乐元数据上。所谓“观察”,我是指音乐播放器似乎按照有趣的顺序对专辑进行排序,它们将一个目录中的曲目分为两张专辑,或者产生了其他令人沮丧的烦恼。
我还看到有些标签非常不明确,许多音乐播放器和标签编辑器没有显示它们。即使这样,在某些极端情况下,它们仍可以使用这些标签来分类或显示音乐,例如播放器将所有包含 XYZ 标签的音乐文件与不包含该标签的所有文件分离到不同的专辑中。
那么,如果标记应用和音乐播放器没有显示“奇怪”的标记,但是它们受到了某种影响,你该怎么办?
### Metaflac 来拯救!
我一直想要熟悉 [metaflac][2],它是一款开源命令行 [FLAC 文件][3]元数据编辑器,这是我选择的开源音乐文件格式。并不是说 [EasyTAG][4] 这样出色的标签编辑软件有什么问题,但我想起“如果你手上有个锤子……”这句老话LCTT 译注:指如果你手上有个锤子,那么所有的东西看起来都像钉子。意指人们惯于用熟悉的方式解决问题,而不管合不合适)。另外,从实际的角度来看,带有 [Armbian][5] 和 [MPD][6] 的小型专用服务器,音乐存储在本地、运行精简的仅限音乐的无头环境就可以满足我的家庭和办公室的立体音乐的需求,因此命令行元数据管理工具将非常有用。
下面的截图显示了我的长期翻录过程中产生的典型问题Putumayo 的哥伦比亚音乐汇编显示为两张单独的专辑,一张包含单首曲目,另一张包含其余 11 首:
![Album with incorrect tags][7]
我使用 `metaflac` 为目录中包含这些曲目的所有 FLAC 文件生成了所有标签的列表:
```
rm -f tags.txt
for f in *.flac; do
    echo "$f" >> tags.txt
    metaflac --export-tags-to=- "$f" >> tags.txt
done
```
我将其保存为可执行的 shell 脚本(请参阅我的同事 [David Both][8] 关于 Bash shell 脚本的精彩系列专栏文章,[特别是关于循环这章][9])。基本上,我在这做的是创建一个文件 `tags.txt`,包含文件名(`echo` 命令),后面是它的所有标签,然后是下一个文件名,依此类推。这是结果的前几行:
```
ALBUMARTISTSORT=50 de Joselito, Los
Cumbia Del Caribe.flac
```
经过一番调查,结果发现我同时翻录了很多 Putumayo CD并且当时我所使用的所有软件似乎给除了一个之外的所有文件加上了 `MUSICBRAINZ_*` 标签。(是 bug 么?大概吧。我在六张专辑中都看到了。)此外,关于有时不寻常的排序,我注意到,`ALBUMARTISTSORT` 标签将西班牙语标题 “Los” 移到了标题的最后面(逗号之后)。
我使用了一个简单的 `awk` 脚本来列出 `tags.txt` 中报告的所有标签:
```
awk -F= 'index($0,"=") > 0 {print $1}' tags.txt | sort -u
```
这会使用 `=` 作为字段分隔符将所有行拆分为字段,并打印包含等号的行的第一个字段。结果通过使用 `sort` 及其 `-u` 标志来传递,从而消除了输出中的所有重复项(请参阅我的同事 Seth Kenlon 的[关于 `sort` 程序的文章][10])。对于这个 `tags.txt` 文件,输出为:
```
ALBUM
TITLE
TRACKTOTAL
```
研究一会后,我发现 `MUSICBRAINZ_*` 标签出现在除了一个 FLAC 文件之外的所有文件上,因此我使用 `metaflac` 命令删除了这些标签:
```
for f in *.flac; do metaflac --remove-tag MUSICBRAINZ_ALBUMARTISTID "$f"; done
for f in *.flac; do metaflac --remove-tag MUSICBRAINZ_TRACKID "$f"; done
```
完成了12 首曲目出现在了一张专辑中。
太好了,我很喜欢 `metaflac`。我希望我会更频繁地使用它,因为我会试图去纠正最后一些我弄乱的音乐收藏标签。强烈推荐!
### 关于音乐
我花了几个晚上在 CBC 音乐CBC 是加拿大的公共广播公司)上收听 Odario Williams 的节目 After Dark。感谢 Odario我听到了让我非常享受的 [Kevin Fox 的 Songs for Cello and Voice][12]。在这里,他演唱了 Eurythmics 的歌曲 “[Sweet Dreams (Are Made of This)][13]”。
我购买了这张 CD现在它在我的音乐服务器上还有组织正确的标签
via: https://opensource.com/article/19/11/metaflac-fix-music-tags
作者:[Chris Hermansen][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Red Hat Responds to Zombieload v2)
[#]: via: (https://www.networkworld.com/article/3453596/red-hat-responds-to-zombieload-v2.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Red Hat Responds to Zombieload v2
======
Red Hat calls for updating Linux software to address Intel processor flaws that can lead to data-theft exploits
Three Common Vulnerabilities and Exposures (CVEs) opened yesterday track three flaws in certain Intel processors, which, if exploited, can put sensitive data at risk.
Of the flaws reported, the newly discovered Intel processor flaw is a variant of the Zombieload attack discovered earlier this year and is only known to affect Intel's Cascade Lake chips.
Red Hat strongly suggests that all Red Hat systems be updated, even if administrators do not believe their configuration poses a direct threat, and it is providing resources to its customers and to the enterprise IT community.
The three CVEs are:
* CVE-2018-12207 - Machine Check Error on Page Size Change
* CVE-2019-11135 - TSX Asynchronous Abort
* CVE-2019-0155 and CVE-2019-0154 - i915 graphics driver
### CVE-2018-12207
Red Hat rates this vulnerability as important. It is a vulnerability that could allow a local and unprivileged attacker to bypass security controls and cause a system-wide denial of service.
The hardware flaw was found in Intel microprocessors and is related to the Instruction Translation Lookaside Buffer (ITLB). It caches translations from virtual to physical addresses and is intended to improve performance. However, a delay in invalidating cached entries after cache page changes could lead to a processor using an invalid address translation causing a machine check error exception and moving the system into a hang state.
This kind of scenario could be crafted by an attacker to take a system down.
### CVE-2019-11135
Red Hat rates this vulnerability as moderate. This Transactional Synchronization Extensions (TSX) Asynchronous Abort is a Microarchitectural Data Sampling (MDS) flaw. A local attacker using custom code could use this flaw to gather information from cache contents on processors that support simultaneous multithreading (SMT) and TSX.
### CVE-2019-0155, CVE-2019-0154
Red Hat rates the CVE-2019-0155 flaw as important and the CVE-2019-0154 flaw as moderate. Both flaws are related to the i915 graphics driver.
CVE-2019-0155 allows an attacker to bypass conventional memory security restrictions, allowing write access to privileged memory that ought to be restricted.
CVE-2019-0154 could allow a local attacker to create an invalid system state when the Graphics Processing Unit (GPU) is in low power mode, leading to the system becoming inaccessible.
The only graphics driver affected by CVE-2019-0154 is the **i915** kernel module. The **lsmod** command can be used to check for it. Any output like that shown below (i.e., starting with i915) indicates that this system is vulnerable:
```
$ lsmod | grep ^i915
i915 2248704 10
```
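Beyond checking for the **i915** module, recent kernels also publish their own assessment of CPU flaws through sysfs. The sketch below (not from the original article) reads the standard kernel interface for two of these CVEs; kernels that predate the mitigations simply do not expose the files, which the script reports rather than failing:

```shell
# Query the kernel's view of the TSX Asynchronous Abort and ITLB
# multihit (MCE on page size change) flaws via sysfs.
for f in /sys/devices/system/cpu/vulnerabilities/tsx_async_abort \
         /sys/devices/system/cpu/vulnerabilities/itlb_multihit; do
    if [ -r "$f" ]; then
        printf '%s: %s\n' "${f##*/}" "$(cat "$f")"
    else
        printf '%s: not reported by this kernel\n' "${f##*/}"
    fi
done
```

On a patched system these files typically report a "Mitigation: …" or "Not affected" string, while "Vulnerable" indicates the update has not taken effect.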
### Additional resources
Red Hat has provided details and further instructions to its customers and others in the following links:
<https://access.redhat.com/security/vulnerabilities/ifu-page-mce>
<https://access.redhat.com/solutions/tsx-asynchronousabort>
<https://access.redhat.com/solutions/i915-graphics>
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3453596/red-hat-responds-to-zombieload-v2.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/newsletters/signup.html
[2]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[3]: https://access.redhat.com/solutions/tsx-asynchronousabort
[4]: https://access.redhat.com/solutions/i915-graphics
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why I made the switch from Mac to Linux)
[#]: via: (https://opensource.com/article/19/10/why-switch-mac-linux)
[#]: author: (Matthew Broberg https://opensource.com/users/mbbroberg)
Why I made the switch from Mac to Linux
======
Thanks to a lot of open source developers, it's a lot easier to use Linux as your daily driver than ever before.
![Hands programming][1]
I have been a huge Mac fan and power user since I started in IT in 2004. But a few months ago—for several reasons—I made the commitment to shift to Linux as my daily driver. This isn't my first attempt at fully adopting Linux, but I'm finding it easier than ever. Here is what inspired me to switch.
### My first attempt at Linux on the desktop
I remember looking up at the projector, and it looking back at me. Neither of us understood why it wouldn't display. VGA cords were fully seated with no bent pins to be found. I tapped every key combination I could think of to signal my laptop that it's time to get over the stage fright.
I ran Linux in college as an experiment. My manager in the IT department was an advocate for the many flavors out there, and as I grew more confident in desktop support and writing scripts, I wanted to learn more about it. IT was far more interesting to me than my computer science degree program, which felt so abstract and theoretical—"who cares about binary search trees?" I thought—while our sysadmin team's work felt so tangible.
This story ends with me logging into a Windows workstation to get through my presentation for class, and marks the end of my first attempt at Linux as my day-to-day OS. I admired its flexibility, but compatibility was lacking. I would occasionally write a script that SSHed into a box to run another script, but I stopped using Linux on a day-to-day basis.
### A fresh look at Linux compatibility
When I decided to give Linux another go a few months ago, I expected more of the same compatibility nightmare, but I couldn't be more wrong.
Right after the installation process completed, I plugged in a USB-C hub to see what I'd gotten myself into. Everything worked immediately. The HDMI-connected extra-wide monitor popped up as a mirrored display to my laptop screen, and I easily adjusted it to be a second monitor. The USB-connected webcam, which is essential to my [work-from-home life][2], showed up as a video with no trouble at all. Even my Mac charger, which was already plugged into the hub since I've been using a Mac, started to charge my very-not-Mac hardware.
My positive experience was probably related to some updates to USB-C, which received some needed attention in 2018 to compete with other OS experiences. As [Phoronix explained][3]:
> "The USB Type-C interface offers an 'Alternate Mode' extension for non-USB signaling and the biggest user of this alternate mode in the specification is allowing DisplayPort support. Besides DP, another alternate mode is the Thunderbolt 3 support. The DisplayPort Alt Mode supports 4K and even 8Kx4K video output, including multi-channel audio.
>
> "While USB-C alternate modes and DisplayPort have been around for a while now and is common in the Windows space, the mainline Linux kernel hasn't supported this functionality. Fortunately, thanks to Intel, that is now changing."
Thinking beyond ports, a quick scroll through the [Linux on Laptops][4] hardware options shows a much more complete set of choices than I experienced in the early 2000s.
This has been a night-and-day difference from my first attempt at Linux adoption, and it's one I welcome with open arms.
### Breaking out of Apple's walled garden
Using Linux has added new friction to my daily workflow, and I love that it has.
My Mac workflow was seamless: hop on an iPad in the morning, write down some thoughts on what my day will look like, and start to read some articles in Safari; slide over my iPhone to continue reading; then log into my MacBook where years of fine-tuning have worked out how all these pieces connect. Keyboard shortcuts are built into my brain; user experiences are as they've mostly always been. It's wildly comfortable.
That comfort comes with a cost. I largely forgot how my environment functions, and I couldn't answer questions I wanted to answer. Did I customize some [PLIST files][5] to get that custom shortcut, or did I remember to check it into [my dotfiles][6]? How did I get so dependent on Safari and Chrome when Firefox has a much better mission? Or why, specifically, won't I use an Android-based phone instead of my i-things?
On that note, I've often thought about shifting to an Android-based phone, but I would lose the connection I have across all these devices and the little conveniences designed into the ecosystem. For instance, I wouldn't be able to type in searches from my iPhone for the Apple TV or share a password with AirDrop with my other Apple-based friends. Those features are great benefits of homogeneous device environments, and it is remarkable engineering. That said, these conveniences come at a cost of feeling trapped by the ecosystem.
I love being curious about how devices work. I want to be able to explain environmental configurations that make it fun or easy to use my systems, but I also want to see what adding some friction does for my perspective. To paraphrase [Marcel Proust][7], "The real voyage of discovery consists not in seeking new lands but seeing with new eyes." My use of technology has been so convenient that I stopped being curious about how it all works. Linux gives me an opportunity to see with new eyes again.
### Inspired by you
All of the above is reason enough to explore Linux, but I have also been inspired by you. While all operating systems are welcome in the open source community, Opensource.com writers' and readers' joy for Linux is infectious. It inspired me to dive back in, and I'm enjoying the journey.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/why-switch-mac-linux
作者:[Matthew Broberg][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mbbroberg
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop.png?itok=pGfEfu2S (Hands programming)
[2]: https://opensource.com/article/19/8/rules-remote-work-sanity
[3]: https://www.phoronix.com/scan.php?page=news_item&px=Linux-USB-Type-C-Port-DP-Driver
[4]: https://www.linux-laptop.net/
[5]: https://fileinfo.com/extension/plist
[6]: https://opensource.com/article/19/3/move-your-dotfiles-version-control
[7]: https://www.age-of-the-sage.org/quotations/proust_having_seeing_with_new_eyes.html

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Migrating to SD-WAN? Avoid these Pitfalls, Say IT Leaders)
[#]: via: (https://www.networkworld.com/article/3453198/migrating-to-sd-wan-avoid-these-pitfalls-say-it-leaders.html)
[#]: author: (Cato Networks https://www.networkworld.com/author/Matt-Conran/)
Migrating to SD-WAN? Avoid these Pitfalls, Say IT Leaders
======
Every network migration has its hidden challenges. Here are practical tips from IT pros who've already made the switch to SD-WAN
Whether you're switching from MPLS or Internet VPNs, [SD-WAN][1] can jumpstart network performance, agility, and scalability, particularly for cloud applications. However, as with any migration, there can be challenges and surprises. Don't squash productivity with unplanned outages or security breaches. Plan your migration carefully, ask the right questions, and cover your bases. Here are some key pitfalls to avoid from those who've been there.
### Security Should Work with Your SD-WAN
If you're used to backhauling cloud traffic through data-center security via MPLS, you're bound to see a big boost in branch-office cloud performance using direct Internet access. However, bypassing data-center security means you must find a way to deliver the same level of security at the branch-office level or risk a data breach. Last year, enterprises with completed SD-WAN deployments were 1.3 times more likely to experience a branch-office security breach than those without, Shamus McGillicuddy, research director at analyst firm Enterprise Management Associates, reported on a [recent webinar][2].
In most cases, you'll need a full suite of security functions at each location, including next-generation firewalls, IPS, malware protection, a secure Web gateway, and a cloud security broker. Andrew Thomson, director of IT systems and services at [BioIVT, a provider of biological products to life sciences and pharmaceutical firms,][3] found out just how much work securing the branch office could be when he was looking at telco SD-WAN solutions.
“Updating our security architecture was going to require running to different vendors, piecing together a solution, and going through all the deployment and management pains,” says Thomson. A simpler option may be to move traffic inspection and security policy enforcement into the cloud.
### Size Appliances with Room to Grow
Costs may be lower when comparing [SD-WAN vs. MPLS][4], but that initial SD-WAN appliance purchase can be daunting, especially when you add on all the security functions you need to integrate and deploy. Don't let the cost scare you into skimping on sizing, especially if you're a growing business. Your branch offices will likely grow, which means more tunnels, features, and bandwidth. Even if they don't, WAN usage tends to grow with digital transformation and new applications and cloud services.
A good rule of thumb is to price another 20% capacity beyond what you think you need today and compare that cost to the cost of upgrading or replacing your appliance in three years. You may want to spring for that capacity now.
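That rule of thumb reduces to simple arithmetic; a quick shell sketch (the bandwidth figure is a placeholder, not from the article) shows the calculation:

```shell
# Hypothetical inputs: estimated requirement today (Mbps) and the
# 20% headroom suggested by the rule of thumb above.
needed_today=500
headroom_pct=20

# Size for today's need plus headroom: 500 * 120 / 100 = 600.
size_for=$(( needed_today * (100 + headroom_pct) / 100 ))
echo "Price appliances rated for at least ${size_for} Mbps"
```

Compare the price of that larger appliance class now against the cost of a forklift upgrade in year three.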
### Plan for High Availability
If your business depends on network uptime, you'd better have a solid plan in place for high availability. This means two SD-WAN appliances with failover capabilities at each location AND dual homing with more than one ISP across diversely routed connections for that precious last mile. (Since it's often hard to be sure last-mile providers don't share the same underlying infrastructure, it's even better to use LTE with terrestrial connections.) When you're planning your SD-WAN budget, make sure to account for the licensing fees for those additional backup appliances.
[Salcomp, a manufacturer of adapters for mobile phone companies][5], had to rely on backup local Internet connections to compensate for the erratic connectivity of its MPLS provider's global last-mile connectivity partners.
“In Brazil we had a problem with an MPLS circuit, and the office was out for six months,” says Ville Sarja, CIO and Group Security Officer. “Luckily we had Internet redundancy, so we were able to direct traffic to the Internet, and bandwidth and connectivity were good enough.”
### Test Your Application Performance
Make sure your evaluation includes testing your data-center and cloud applications at different times of the day to ensure they perform as expected under various loads and conditions. If you're a global organization, make sure you test those applications globally at different times as well. As pointed out in the eBook, [The Internet is Broken][6], global connectivity often depends more on service provider commercial peering relationships than actual best path selection or network congestion. This means that packets may travel across longer distances and more hops than they should, with unnecessarily high latency as a result.
For its SD-WAN evaluation, Salcomp tested SharePoint file transfers and sharing, SAP user experience, and Office 365 performance from its Finland data-center locations across China, Taiwan, and India. By switching from MPLS to a global private backbone, Salcomp was able to reduce costs and improve application performance.
“Users just aren't complaining anymore,” says Sarja, “And that's a very good thing.”
### Make Some Changes and See How Long It Takes
Okay, your SD-WAN seems to be performing, but what about your SD-WAN provider? Are they responsive and competent when you need to make a change or troubleshoot a problem? The only way to find out is to test them as well. Your provider should be able to tell you what its timetable is for every type of change, whether it's QoS or something else. Ask. Then include a set of predefined changes during the evaluation phase and see if your provider comes through.
[Fisher & Company][7], parent of a precision metal parts company, was glad it tested changes with its chosen SD-WAN provider.
“We trialed a telco-managed SD-WAN service, but the provider was difficult to work with,” says Systems Manager Kevin McDaid. “They wanted us to submit requests for configuration changes; it was like our MPLS provider all over again.”
One way to save valuable time is to choose an SD-WAN provider with a co-management or self-service management model that allows the customer to make some network and security changes directly through a portal. You shouldn't have to rely on the provider's staff to make every single change.
### One Step at a Time
Transitions can stumble, so most IT managers prefer a phased migration, with the new SD-WAN functioning side by side with your legacy WAN for a time to permit a quick cutover when necessary.
Draw up a migration plan with your supplier that will cause the least business disruption possible. At minimum, you should be able to transition one network at a time. However, if you want to minimize disruption even further you may want to consider a segment-by-segment transition. If your business depends on application performance, you may even want to transition one business-critical application at a time. 
Make sure your network and security policies are configured and ready to go with each step. You dont want to leave your organization open to malware and security breaches during the transition by configuring these on the fly.
[Financial information provider FDMG Mediagroep][8] used a carefully planned, phased approach when transitioning to a global private backbone to allay internal concerns about working with a new company. It started by connecting a few users at the Amsterdam office. It then connected an internal AWS site to evaluate cloud connectivity. Once those transitions succeeded it began converting individual production sites to SD-WAN.
And in the end, be sure to have a backup plan if something goes wrong during the cutover to SD-WAN.  Find out how long it takes your vendor to respond when issues come up or even to cut back to your legacy WAN if necessary.
SD-WAN migration can seem overwhelming when you've been relying for so long on MPLS. There's no question there are potential pitfalls, but if you plan carefully and evaluate your strategy step by step along the way, you can reap all the benefits of SD-WAN without the migration headaches.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3453198/migrating-to-sd-wan-avoid-these-pitfalls-say-it-leaders.html
作者:[Cato Networks][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Matt-Conran/
[b]: https://github.com/lujun9972
[1]: https://www.catonetworks.com/sd-wan?utm_source=idg
[2]: https://go.catonetworks.com/VOD_The-6-Keys-to-Successful-WAN-Transformation?utm_source=idg
[3]: https://www.catonetworks.com/customers/bioivt-connects-and-secures-global-network-with-cato-cloud-and-the-cato-managed-threat-detection-and-response-mdr-service?utm_source=idg
[4]: https://www.catonetworks.com/blog/sd-wan-vs-mpls-vs-public-internet?utm_source=idg
[5]: https://www.catonetworks.com/customers/salcomp-replaces-global-mpls-firewalls-and-wan-optimizers-with-cato-cloud?utm_source=idg
[6]: https://go.catonetworks.com/The_Internet_is_Broken?utm_source=idg
[7]: https://www.catonetworks.com/customers/fisher-company-lowers-mpls-costs-improves-wan-performance?utm_source=idg
[8]: https://www.catonetworks.com/customers/fdmg-cuts-costs-revolutionizes-mobile-experience-by-replacing-mpls-and-mobile-vpn?utm_source=idg

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (USPS invests in GPU-driven servers to speed package processing)
[#]: via: (https://www.networkworld.com/article/3452521/usps-invests-in-gpu-driven-servers-to-speed-package-processing.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
USPS invests in GPU-driven servers to speed package processing
======
U.S. Postal Service plans to use servers powered by Nvidia GPUs and deep learning software to train multiple AI algorithms for image recognition, yielding a tenfold increase in package-processing speed.
The U.S. Postal Service is set to purchase GPU-accelerated servers from Hewlett Packard Enterprise that it expects will help accelerate package data processing up to 10 times over previous methods.
The plan is for a spring 2020 deployment, using HPE's Apollo 6500 servers, which come with up to eight Nvidia V100 Tensor Core GPUs. The Postal Service also will use Nvidia's EGX edge computing servers at nearly 200 of its processing locations in the U.S.
Nvidia announced the USPS's plans at its GPU Technology Conference in Washington, D.C. Ian Buck, the former Stanford professor who created the CUDA language for programming Nvidia GPUs before joining the company to head AI initiatives, made the announcement in an opening keynote focused on AI.
Buck said half of the world's enterprises today rely on AI for network protection and security, and 80% of the telcos will rely on it to protect their networks. “AI is a wonderful tool for looking at massive amounts of data and finding anomalies, pulling needles out of a haystack,” he told the audience.
The USPS — which processes 485 million pieces of mail per day, or 146 billion pieces of mail per year — plans to use servers powered by Nvidia's GPUs and deep learning software to train multiple AI algorithms for image recognition, according to Buck. Those algorithms would then be deployed to the EGX systems at the Postal Service's package processing sites.
The aim is to improve the speed and accuracy of recognizing package labels, which would improve the speed of package delivery and reduce the need for manual involvement.
### Nvidia AI deployments and market initiatives
AI is being embraced by a number of industries, to varying degrees of success. Nvidia uses itself as a guinea pig:
“At Nvidia we have a fleet of self-driving vehicles, which we use for both collecting data and testing our self-driving capabilities. We ingest and create literally petabytes of data every week that has to be processed by our own team of labelers and processed by AIs,” Buck told the crowd. “We have literally thousands of GPUs doing training every day, which are supporting hundreds of data scientists, which are defining the self-driving car capabilities.”
The module in Nvidia's self-driving car is called Pegasus and consists of two Volta GPUs and two Tegra SoCs. “It's basically an AI supercomputer inside every car processing hundreds of petabytes of data,” Buck said.
The challenge now is to actually apply AI, he said. To do so, Nvidia has a number of AI projects for the automotive, healthcare, robotics and 5G industries. For healthcare, for example, Nvidia has its Clara software development kit with pretrained models to tackle tasks such as looking for a particular kind of cancer in minutes or hours.
For IoT, Nvidia has the Metropolis Internet of Things application framework as cities build out sensors to detect unsafe driving conditions, such as a vehicle driving the wrong way onto a freeway. Nvidia also has the DRIVE autonomous vehicle platform, which spans everything from cars to trucks to robotaxis to industrial vehicles. Nvidia's Omniverse kit targets design and media, and its Aerial products are for telcos moving to 5G, along with the EGX server.
To train new developers to build AI apps on GPUs, Nvidia announced that its Deep Learning Institute just added 12 new courses focused on AI training. So far, DLI has trained more than 180,000 AI workers.
Join the Network World communities on [Facebook][2] and [LinkedIn][3] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3452521/usps-invests-in-gpu-driven-servers-to-speed-package-processing.html
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3338100/using-ai-to-improve-network-capacity-planning-what-you-need-to-know.html
[2]: https://www.facebook.com/NetworkWorld/
[3]: https://www.linkedin.com/company/network-world

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Space-sourced power could beam electricity where needed)
[#]: via: (https://www.networkworld.com/article/3453601/space-sourced-power-could-beam-electricity-where-needed.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
Space-sourced power could beam electricity where needed
======
Harvesting solar energy in space would provide off-grid electricity at night, and for areas that don't receive much sunlight. A project has just received funding.
[dimitrisvetsikas1969][1] [(CC0)][2]
Capturing solar energy in space and then beaming it down to Earth could provide consistent electricity supplies in places that have never seen it before. Should the as-yet untested idea work and be scalable, it has applications in [IoT][3]-sensor deployments, wireless mobile network mast installs and remote edge data centers.
The radical idea is that super-efficient solar cells collect the sun's power in space, convert it to radio waves, and then squirt the energy down to Earth, where it is converted into usable power. The defense industry, which is championing the concept, wants to use the satellite-based tech to provide remote power for forward-operating bases, which currently require difficult, and sometimes dangerous, escorted fuel deliveries to power electricity generators.
[[Get regularly scheduled insights by signing up for Network World newsletters.]][4]
This replacement system could provide directed solar-produced energy at night, or electricity in places without grid delivery. It could also eliminate the alternatives: expensive wind solutions and mechanical generators that require maintenance. Extreme northern regions (good spots for [data centers][5] because they're cold, allowing for ambient cooling) could conceivably have usable solar power for the first time in the predominantly dark winter.
“Developers envision a system that is a constellation of satellites with solar panels, about 10,000 square meters, or about the size of a football field or tennis court,” [writes Scott Turner of the Albuquerque Journal][7]. The Air Force Research Laboratory (AFRL) in Albuquerque, along with defense technology company Northrop Grumman, has just announced plans to spend $100 million developing the hardware, called the Space Solar Power Incremental Demonstrations and Research (SSPIDR) project.
Two kinds of solar-panel technology are in common use on land now. Photovoltaic solar panels work by converting energy from the sun into electricity. They don't have moving parts, so they are inexpensive to maintain, unlike turbines. Another kind of solar panel uses mirrors and lenses, which grab and then concentrate sunlight, producing heat that drives steam turbines.
“This whole project is building toward wireless power transmission,” Maj. Tim Allen, a manager on the project, told Turner. It will “beam power down when and where we choose.” Precise power beams will automatically track the target that needs the power, too. “We can put them down in specific locations and keep them there,” he says.
A significant advantage to placing solar panels in space, as opposed to on land, is that spacecraft get near-constant sunlight, explains Rachel Delaney, a systems engineer on the project. Weather also becomes a non-issue, she says. It lets us “capture solar energy in space and precisely beam it to where it is needed,” Col. Eric Felt, director of the Space Vehicles Directorate at AFRL, [says in a separate news release][8]. That could be as remote as the satellite footprint allows; single satellites are limited in reach, as they can only see the part of the Earth that's in view.
“I believe the commercial industry will be happy to mimic what we're doing and start providing this power commercially and not just for the military,” Turner quotes Allen as saying.
Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3453601/space-sourced-power-could-beam-electricity-where-needed.html
作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://pixabay.com/en/sun-bright-yellow-sunset-sky-1953052/
[2]: https://creativecommons.org/publicdomain/zero/1.0/
[3]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://www.networkworld.com/article/3223692/what-is-a-data-centerhow-its-changed-and-what-you-need-to-know.html
[6]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[7]: https://www.abqjournal.com/1386648/afrl-looks-to-beam-solar-energy-from-space.html
[8]: https://afresearchlab.com/news/u-s-air-force-research-laboratory-developing-space-solar-power-beaming/
[9]: https://www.facebook.com/NetworkWorld/
[10]: https://www.linkedin.com/company/network-world

[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (7 Best Open Source Tools that will help in AI Technology)
[#]: via: (https://opensourceforu.com/2019/11/7-best-open-source-tools-that-will-help-in-ai-technology/)
[#]: author: (Nitin Garg https://opensourceforu.com/author/nitin-garg/)
7 Best Open Source Tools that will help in AI Technology
======
[![][1]][2]
_Artificial intelligence is an exceptional technology with a futuristic approach. In this progressive era, it is capturing the attention of all multinational organizations. Some of the popular names in the industry, like Google, IBM, Facebook, Amazon and Microsoft, are constantly investing in this new-age technology._
Anticipating business needs using artificial intelligence takes research and development to another level. This advanced technology is becoming an integral part of organizations' research and development, offering ultra-intelligent solutions. It helps you maintain accuracy and increase productivity, with better results.
AI open source tools and technologies are capturing the attention of every industry by providing frequent and accurate results. These tools help you analyse your performance while giving you a boost to generate greater revenue.
Without further ado, here we have listed some of the best open-source tools to help you understand artificial intelligence better.
**1\. TensorFlow**
TensorFlow is an open-source machine learning framework used for artificial intelligence. It was developed to conduct machine learning and deep learning for research and production. TensorFlow lets developers create dataflow graph structures: data moves between the nodes of the graph as multidimensional arrays, or tensors.
TensorFlow is an exceptional tool that offers countless advantages.
* Simplifies the numeric computation
* TensorFlow offers flexibility on multiple models.
* TensorFlow improves business efficiency
* Highly portable
  * Automatic differentiation capabilities.
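The last bullet, automatic differentiation, is what makes frameworks like TensorFlow practical for training neural networks: derivatives of a computation are propagated mechanically instead of being coded by hand. Here is a dependency-free sketch of the underlying idea using dual numbers (forward-mode AD); the names are illustrative, not TensorFlow's API:

```python
class Dual:
    """A dual number a + b*eps, where eps**2 == 0.

    Carrying the eps coefficient through arithmetic computes the
    derivative of the whole expression automatically (forward-mode AD).
    """
    def __init__(self, value, deriv=0.0):
        self.value = value
        self.deriv = deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    __rmul__ = __mul__


def derivative(f, x):
    """Evaluate f and its derivative at x in one pass."""
    result = f(Dual(x, 1.0))
    return result.value, result.deriv


# f(x) = 3x^2 + 2x, so f'(x) = 6x + 2 and f'(4) = 26
value, slope = derivative(lambda x: 3 * x * x + 2 * x, 4.0)
print(value, slope)  # 56.0 26.0
```

Real frameworks use the reverse-mode variant for efficiency on many-parameter models, but the principle is the same.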
**2\. Apache SystemML**
Apache SystemML is a popular open-source machine learning platform created by IBM that offers a favourable workplace for big data. It runs efficiently on Apache Spark and automatically scales your data while determining whether your code should run on the driver or on an Apache Spark cluster. Its features make it stand out in the industry; it offers:
* Algorithms customization
* Multiple Execution Modes
* Automatic Optimisation
It also supports deep learning, enabling developers to implement machine learning code and optimize it more effectively.
**3\. OpenNN**
OpenNN is an open-source artificial intelligence neural network library for progressive analytics. It helps you develop robust models with C++ and Python, and contains algorithms and utilities to deal with machine learning solutions like forecasting and classification. It also covers regression and association, providing high performance and technology evolution in the industry.
It possesses numerous lucrative features like;
* Digital Assistance
* Predictive Analysis
* Fast Performance
* Virtual Personal Assistance
* Speech Recognition
* Advanced Analytics
It helps you design advanced solutions that implement data mining methods for fruitful results.
**4\. Caffe**
Caffe (Convolutional Architecture for Fast Feature Embedding) is an open-source deep learning framework that prioritises speed, modularity, and expressiveness. Caffe was originally developed at the Berkeley Vision and Learning Center at the University of California, Berkeley, and is written in C++ with a Python interface. It works smoothly on Linux, macOS, and Windows.
Some of the key features of Caffe that help in AI technology:
1. Expressive Architecture
2. Extensive Code
3. Large Community
4. Active Development
5. Speedy Performance
It helps you inspire innovation while introducing stimulated growth. Make full use of this tool to get desired results.
**5\. Torch**
Torch is an open-source machine learning library that helps you simplify complex tasks like serialization and object-oriented programming by offering multiple convenient functions. It offers the utmost flexibility and speed in machine learning projects. Torch is written in the scripting language Lua and comes with an underlying C implementation. It is used in multiple organizations and research labs.
Torch has countless advantages like;
* Fast &amp; Effective GPU Support
* Linear algebra Routines
* Support for iOS &amp; Android Platform
* Numeric Optimization Routine
* N-dimensional arrays
**6\. Accord.NET**
Accord.NET is a renowned free, open-source AI development tool. It is a set of libraries for combined audio and image processing, written in C#. From computer vision to computer audition, signal processing and statistics applications, it helps you build everything for commercial use. It comes with a comprehensive set of sample applications for getting started quickly and an extensive range of libraries.
You can develop advanced apps using Accord.NET with attention-grabbing features like:
* Statistical Analysis
  * Data Ingestion
* Adaptive
* Deep Learning
* Second-order neural network learning algorithms
* Digital Assistance &amp; Multi-languages
* Speech recognition
**7\. Scikit-Learn**
Scikit-learn is one of the popular open-source tools that help in AI technology. It is a valuable library for machine learning in Python. It includes efficient tools for machine learning and statistical modelling, including classification, clustering, regression and dimensionality reduction.
Let's find out more about Scikit-learn's features:
* Cross-validation
* Clustering and Classification
* Manifold Learning
* Machine Learning
* Virtual process Automation
* Workflow Automation
From preprocessing to model selection, Scikit-learn helps you take care of everything. It simplifies the complete task, from data mining to data analysis.
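Cross-validation, the first feature listed above, is worth understanding before reaching for scikit-learn's `KFold` helper. The idea can be sketched without any dependencies (this is an illustration of the concept, not scikit-learn code):

```python
def k_fold_indices(n_samples, k):
    """Yield (train, test) index lists for k-fold cross-validation.

    Each of the k folds is used exactly once as the test set, while
    the remaining folds form the training set.
    """
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n_samples))
        yield train, test
        start += size


for train, test in k_fold_indices(6, 3):
    print(train, test)
# [2, 3, 4, 5] [0, 1]
# [0, 1, 4, 5] [2, 3]
# [0, 1, 2, 3] [4, 5]
```

Training and scoring a model on each split, then averaging the scores, gives a much more reliable estimate of performance than a single train/test split.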
**Final Thought**
These are some of the popular open-source AI tools that provide a comprehensive range of features. Before developing a new-age application, select one of these tools and work accordingly. They provide advanced artificial intelligence solutions, keeping recent trends in mind.
Artificial intelligence is used globally and is making its presence felt all around the world. With applications like Amazon Alexa and Siri, AI is providing customers with the ultimate user experience. It offers significant benefits across industries, capturing users' attention. In industries like healthcare, banking, finance and e-commerce, artificial intelligence is contributing to growth and productivity while saving a lot of time and effort.
Select any one of these open-source tools for a better user experience and unbelievable results. It will help you grow and get better results in terms of quality and security.
![Avatar][3]
[Nitin Garg][4]
The author is the CEO and co-founder of BR Softech [Business intelligence software company][5]. Likes to share his opinions on IT industry via blogs. His interest is to write on the latest and advanced IT technologies which include IoT, VR &amp; AR app development, web, and app development services. Along with this, he also offers consultancy services for RPA, Big Data and Cyber Security services.
[![][6]][7]
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/11/7-best-open-source-tools-that-will-help-in-ai-technology/
作者:[Nitin Garg][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/nitin-garg/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2018/05/Artificial-Intelligence_EB-June-17.jpg?resize=696%2C464&ssl=1 (Artificial Intelligence_EB June 17)
[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2018/05/Artificial-Intelligence_EB-June-17.jpg?fit=1000%2C667&ssl=1
[3]: https://secure.gravatar.com/avatar/d4e6964b80590824b981f06a451aa9e6?s=100&r=g
[4]: https://opensourceforu.com/author/nitin-garg/
[5]: https://www.brsoftech.com/bi-consulting-services.html
[6]: https://opensourceforu.com/wp-content/uploads/2019/11/assoc.png
[7]: https://feedburner.google.com/fb/a/mailverify?uri=LinuxForYou&loc=en_US

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Creating Custom Themes in Drupal 8)
[#]: via: (https://opensourceforu.com/2019/11/creating-custom-themes-in-drupal-8/)
[#]: author: (Bhanu Prakash Poluparthi https://opensourceforu.com/author/bhanu-poluparthi/)
Creating Custom Themes in Drupal 8
======
[![][1]][2]
_A theme in a website is a set of files that defines the overall look and the user experience of a website. It usually comprises all the graphical elements such as colours and window decorations that help the user to customise the website. Drupal provides the user with a bunch of basic themes for a website that are very generic. However, these default themes do not suit all types of users. So there is a need to build themes that meet one's requirements._
Creating and customising themes in Drupal 8 is easy because of a modern template engine for PHP named Twig, which is a part of the Symfony 2 framework. Moving from a PHP template to Twig, and from the INI format to YAML, are some of the main changes in Drupal 8 theming. These changes in Drupal 8 have improved the security and inheritance, making theming more distinguished.
With reference to Figure 1,
* _.info_ provides information about your theme.
* _html.tpl.php_ displays the basic HTML structure of a single Drupal page.
* _page.tpl.php_ is the main template that defines the contents on most of the pages.
* _style.css_ is the CSS file that sets the CSS rules for the template.
* _node.tpl.php_ defines the contents of the nodes.
* _block.tpl.php_ defines the contents in the blocks.
* _comment.tpl.php_ defines the contents in the comments.
* _Template.php_ is used to hold preprocessors for generating variables before they are merged with the markup inside .tpl.php files.
* _Theme-settings.php_ is used to modify the entire theme settings form.
* _.libraries.yml_ defines your libraries (mostly your JS, CSS files).
* _.breakpoints.yml_ defines the points to fit different screen devices.
* _.theme_ is the PHP file that stores conditional logic and data preprocessing of the variables before they are merged with markup inside the .html.twig file.
* _/includes_ is where third-party libraries (like Bootstrap, Foundation, Font Awesome, etc) are put. It is a standard convention to store them in this folder.
The basic requirement to create a new Drupal theme is to have Drupal localhost installed on your system.
![Figure 1: Drupal 7 theme structure \(https://www.drupal.org/docs/7/theming/ overview-of-theme-files\)][3]
**Drupal 8 theme structure**
A custom theme can be made by following the steps mentioned below.
_**Step 1: Creating the custom themes folder**_
Go to the Drupal folder in which you can find a folder named Theme.
* Enter the folder theme.
* Create a folder custom.
* Enter the folder custom.
* Create a folder osfy.
Start creating your theme files over here. The theme name taken here is osfy.
_**Step 2: Creating a YML file**_
To inform the website about the existence of this theme, we use _.yml_ files. The basic details required in the YML are mentioned below:
1\. Name
2\. Description
3\. Type
4\. Core
```
name: osfy
description: My first responsive custom theme.
type: theme
package: custom
base theme: classy
core: 8.x
regions:
  head: Head
  header: Header
  content: Content
  sidebar: Sidebar
  footer: Footer
stylesheets-remove:
  - "Remove Stylesheets"
```
Open the Drupal website and check for the new theme in the _Appearance_ section; it will appear in the uninstalled list of themes. We can proceed once the theme shows up there.
**Note:** 1\. Base theme indicates which base theme your custom theme is going to inherit. The default base theme provided by Drupal is Stable.
2\. Regions defines the regions in which your blocks are to be placed in your theme. If not declared, Drupal uses default regions from the core.
---
_**Step 3: Adding the .libraries.yml file:**_
We have indicated all the libraries comprising JavaScript and CSS styling, and now we will define them in the _libraries.yml_ file.
```
global-components:
  version: 1.x
  css:
    theme:
      css/style.css: {}
      includes/bootstrap/css/bootstrap.css: {}
```
We will use _style.css_ for the theme styling and _bootstrap.css_ for responsive display using Bootstrap libraries. The _style.css_ file resides in the theme's _css_ folder, whereas _bootstrap.css_ resides in the _includes/bootstrap/css_ folder.
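Defining the library is only half the job; in Drupal 8 a theme also has to attach it. For a global library like this one, the standard way is the `libraries:` key in the theme's _.info.yml_ file (a sketch assuming the osfy theme name used above; this step is not shown in the original figure):

```yaml
# In osfy.info.yml: attach the library defined in osfy.libraries.yml
libraries:
  - osfy/global-components
```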
![Figure 2: Drupal 8 theme structure][4]
_**Step 4: Creating theme regions**_
To better understand how Twig has made things easier, use the following code:
```
<?php /* The Drupal 7 PHP template: */ ?>
<?php print render($title_prefix); ?>
<?php if ($title): ?>
<h1 class="title" id="page-title">
<?php print $title; ?>
</h1>
<?php endif; ?>

{# The same markup in Drupal 8 Twig: #}
{{ title_prefix }}
{% if title %}
<h1 class="title" id="page-title">
{{ title }}
</h1>
{% endif %}
```
The template file functions are:

  * _html.html.twig_ theme implementation for the basic structure of a single page
  * _page.html.twig_ theme implementation to display a single page
  * _node.html.twig_ default theme implementation to display a node
  * _region.html.twig_ default theme implementation to display a region
  * _block.html.twig_ default theme implementation to display a block
  * _field.html.twig_ theme implementation for a field
To create the _page.html.twig_ file, use the following template code:
```
{#
/**
 * @file
 * Default theme implementation to display a single page.
 *
 * Example code for a basic header, footer and content page.
 */
#}
<div id="page">
{% if page.head %}
<section id="head">
<div class= "container">
{{ page.head }}
</div>
</section>
{% endif %}
<header id="header">
<div class="container">
{{ page.header }}
</div>
</header>
<section id="main">
<div class="container">
<div class="row">
<div id="content" class="col-md-10 col-sm-10 col-xs-12">
{{ page.content }}
</div>
{% if page.sidebar %}
<aside id="sidebar" class="sidebar col-md-2 col-sm-2 col-xs-12">
{{ page.sidebar}}
</aside>
{% endif %}
</div>
</div>
</section>
{% if page.footer %}
<footer id="footer">
<div class="container">
{{ page.footer }}
</div>
</footer>
{% endif %}
</div>
```
_**Step 5: Enabling the theme**_
To place content in the respective regions, in the Manage administrative menu, navigate to _Structure &gt; Block layout &gt; Custom block library (admin/structure/block/block-content)_. Click Add custom block. The Add custom block page appears. Fill in the fields and click on _Save_.
The block design used here is as in Figure 2.
**A few more things to do**
* Place a _logo.svg_ file in the theme folder. Drupal will look for it by default and enable the logo for the theme.
* To show your theme picture in the admin interface next to your theme name, place an image screenshot.png in your theme directory itself.
* Use your creativity from here onwards to style and customise the appearance of your theme.
* While writing the code for Twig files, remember to comment all the important information for future reference.
To make your theme work on your Drupal localhost, go to _/admin/appearance_ where you can find the theme osfy. Choose the option Set as default.
You can start using your theme from now on.
![Avatar][5]
[Bhanu Prakash Poluparthi][6]
The author is an open source enthusiast and has been a part of
the Drupal organisation since 2017. He was an intern at Google
Summer of Code 2017 and a mentor at Google Code-In 2018.
[![][7]][8]
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/11/creating-custom-themes-in-drupal-8/
作者:[Bhanu Prakash Poluparthi][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/bhanu-poluparthi/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/d8-1.jpg?resize=696%2C397&ssl=1 (d8)
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/d8-1.jpg?fit=788%2C449&ssl=1
[3]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-1-Drupal-7-theme-structure.jpg?resize=350%2C308&ssl=1
[4]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Screenshot-from-2019-11-14-16-13-53.png?resize=350%2C298&ssl=1
[5]: https://secure.gravatar.com/avatar/a0a27865017dd4456f47f0a9e7d964a6?s=100&r=g
[6]: https://opensourceforu.com/author/bhanu-poluparthi/
[7]: https://opensourceforu.com/wp-content/uploads/2019/11/assoc.png
[8]: https://feedburner.google.com/fb/a/mailverify?uri=LinuxForYou&loc=en_US

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Debugging Software Deployments with strace)
[#]: via: (https://theartofmachinery.com/2019/11/14/deployment_debugging_strace.html)
[#]: author: (Simon Arneaud https://theartofmachinery.com)
Debugging Software Deployments with strace
======
Most of my paid work involves deploying software systems, which means I spend a lot of time trying to answer the following questions:
  * This software works on the original developer's machine, so why doesn't it work on mine?
  * This software worked on my machine yesterday, so why doesn't it work today?
That's a kind of debugging, but it's a different kind of debugging from normal software debugging. Normal debugging is usually about the logic of the code, but deployment debugging is usually about the interaction between the code and its environment. Even when the root cause is a logic bug, the fact that the software apparently worked on another machine means that the environment is usually involved somehow.
So, instead of using normal debugging tools like `gdb`, I have another toolset for debugging deployments. My favourite tool for “Why isn't this software working on this machine?” is `strace`.
### What is `strace`?
[`strace`][1] is a tool for “system call tracing”. It's primarily a Linux tool, but you can do the same kind of debugging tricks with tools for other systems (such as [DTrace][2] and [ktrace][3]).
The basic usage is very simple. Just run it against a command and it dumps all the system calls (you'll probably need to install `strace` first):
```
$ strace echo Hello
...Snip lots of stuff...
write(1, "Hello\n", 6) = 6
close(1) = 0
close(2) = 0
exit_group(0) = ?
+++ exited with 0 +++
```
What are these system calls? They're like the API for the operating system kernel. Once upon a time, software used to have direct access to the hardware it ran on. If it needed to display something on the screen, for example, it could twiddle with ports and/or memory-mapped registers for the video hardware. That got chaotic when multitasking computer systems became popular because different applications would “fight” over hardware, and bugs in one application could crash other applications, or even bring down the whole system. So CPUs started supporting different privilege modes (or “protection rings”). They let an operating system kernel run in the most privileged mode with full hardware access, while spawning less-privileged software applications that must ask the kernel to interact with the hardware for them using system calls.
At the binary level, making a system call is a bit different from making a simple function call, but most programs use wrappers in a standard library. E.g. the POSIX C standard library contains a `write()` function call that contains all the architecture-dependent code for making the `write` system call.
![][4]
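You can see this layering from a high-level language too. In Python, for instance, `os.write()` is a thin wrapper over the same `write` system call shown in the trace above, while `print()` adds buffered file objects on top of it. A small illustration (mine, not from the article):

```python
import os

# os.pipe() returns a (read, write) pair of raw file descriptors.
# os.write() on the write end invokes the write(2) system call almost
# directly; it even returns the byte count, just like write(2).
read_fd, write_fd = os.pipe()
written = os.write(write_fd, b"Hello\n")
os.close(write_fd)

data = os.read(read_fd, 64)  # likewise a thin wrapper over read(2)
os.close(read_fd)

print(written, data)  # 6 b'Hello\n'
```

Running this script under `strace` shows the corresponding `write` and `read` calls in the trace.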
In short, an application's interaction with its environment (the computer system) is all done through system calls. So when software works on one machine but not another, looking at system call traces is a good way to find what's wrong. More specifically, here are the typical things you can analyse using a system call trace:
* Console input and output (IO)
* Network IO
* Filesystem access and file IO
* Process/thread lifetime management
* Raw memory management
* Access to special device drivers
### When can `strace` be used?
In theory, `strace` can be used with any userspace program because all userspace programs have to make system calls. It's more effective with compiled, lower-level programs, but still works with high-level languages like Python if you can wade through the extra noise from the runtime environment and interpreter.
`strace` shines with debugging software that works fine on one machine, but on another machine fails with a vague error message about files or permissions or failure to run some command or something. Unfortunately, it's not so great with higher-level problems, like a certificate verification failure. They usually need a combination of `strace`, sometimes [`ltrace`][5], and higher-level tooling (like the `openssl` command line tool for certificate debugging).
The examples in this post are based on a standalone server, but system call tracing can often be done on more complicated deployment platforms, too. Just search for appropriate tooling.
### A simple debugging example
Let's say you're trying to run an awesome server application called foo, but here's what happens:
```
$ foo
Error opening configuration file: No such file or directory
```
Obviously it's not finding the configuration file that you've written. This can happen because package managers sometimes customise the expected locations of files when compiling an application, so following an installation guide for one distro leads to files in the wrong place on another distro. You could fix the problem in a few seconds if only the error message told you where the configuration file is expected to be, but it doesn't. How can you find out?
If you have access to the source code, you could read it and work it out. That's a good fallback plan, but not the fastest solution. You also could use a stepping debugger like `gdb` to see what the program does, but it's more efficient to use a tool that's specifically designed to show the interaction with the environment: `strace`.
The output of `strace` can be a bit overwhelming at first, but the good news is that you can ignore most of it. It often helps to use the `-o` switch to save the trace to a separate file:
```
$ strace -o /tmp/trace foo
Error opening configuration file: No such file or directory
$ cat /tmp/trace
execve("foo", ["foo"], 0x7ffce98dc010 /* 16 vars */) = 0
brk(NULL) = 0x56363b3fb000
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=25186, ...}) = 0
mmap(NULL, 25186, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f2f12cf1000
close(3) = 0
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\260A\2\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=1824496, ...}) = 0
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f2f12cef000
mmap(NULL, 1837056, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f2f12b2e000
mprotect(0x7f2f12b50000, 1658880, PROT_NONE) = 0
mmap(0x7f2f12b50000, 1343488, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x22000) = 0x7f2f12b50000
mmap(0x7f2f12c98000, 311296, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x16a000) = 0x7f2f12c98000
mmap(0x7f2f12ce5000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1b6000) = 0x7f2f12ce5000
mmap(0x7f2f12ceb000, 14336, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f2f12ceb000
close(3) = 0
arch_prctl(ARCH_SET_FS, 0x7f2f12cf0500) = 0
mprotect(0x7f2f12ce5000, 16384, PROT_READ) = 0
mprotect(0x56363b08b000, 4096, PROT_READ) = 0
mprotect(0x7f2f12d1f000, 4096, PROT_READ) = 0
munmap(0x7f2f12cf1000, 25186) = 0
openat(AT_FDCWD, "/etc/foo/config.json", O_RDONLY) = -1 ENOENT (No such file or directory)
dup(2) = 3
fcntl(3, F_GETFL) = 0x2 (flags O_RDWR)
brk(NULL) = 0x56363b3fb000
brk(0x56363b41c000) = 0x56363b41c000
fstat(3, {st_mode=S_IFCHR|0620, st_rdev=makedev(0x88, 0x8), ...}) = 0
write(3, "Error opening configuration file"..., 60) = 60
close(3) = 0
exit_group(1) = ?
+++ exited with 1 +++
```
The first page or so of `strace` output is typically low-level process startup. (You can see a lot of `mmap`, `mprotect`, `brk` calls for things like allocating raw memory and mapping dynamic libraries.) Actually, when debugging an error, `strace` output is best read from the bottom up. You can see the `write` call that outputs the error message at the end. If you work up, the first failing system call is the `openat` call that fails with `ENOENT` (“No such file or directory”) trying to open `/etc/foo/config.json`. And now we know where the configuration file is supposed to be.
That's a simple example, but I'd say at least 90% of the time I use `strace`, I'm not doing anything more complicated. Here's the complete debugging formula step-by-step:
1. Get frustrated by a vague system-y error message from a program
2. Run the program again with `strace`
3. Find the error message in the trace
4. Work upwards to find the first failing system call
There's a very good chance the system call in step 4 shows you what went wrong.
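Steps 3 and 4 of the formula can even be sketched with `grep`. The snippet below recreates a couple of lines from the trace above as sample input; the list of error codes in the pattern is just a starting point, not exhaustive:

```shell
# Recreate a couple of lines from the example trace as sample input.
cat > /tmp/trace <<'EOF'
openat(AT_FDCWD, "/etc/foo/config.json", O_RDONLY) = -1 ENOENT (No such file or directory)
write(3, "Error opening configuration file"..., 60) = 60
EOF

# Step 4: show the last call that failed with a common errno value.
grep -nE 'E(NOENT|ACCES|PERM|AGAIN)' /tmp/trace | tail -n 1
```

On a real trace, the last match printed this way is usually the failing call you're after, though programs can also trigger errors that are perfectly normal (more on that in the tips).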
### Some tips
Before walking through a more complicated example, here are some useful tips for using `strace` effectively:
#### `man` is your friend
On many *nix systems, you can get a list of all kernel system calls by running `man syscalls`. You'll see things like `brk(2)`, which means you can get more information by running `man 2 brk`.
One little gotcha: `man 2 fork` shows me the man page for the `fork()` wrapper in GNU `libc`, which is actually now implemented using the `clone` system call instead. The semantics of `fork` are the same, but if I write a program using `fork()` and `strace` it, I won't find any `fork` calls in the trace, just `clone` calls. Gotchas like that are only confusing if you're comparing source code to `strace` output.
#### Use `-o` to save output to a file
`strace` can generate a lot of output, so it's often helpful to store the trace in a separate file (as in the example above). It also avoids mixing up program output with `strace` output in the console.
#### Use `-s` to see more argument data
You might have noticed that the second part of the error message doesn't appear in the example trace above. That's because `strace` only shows the first 32 bytes of string arguments by default. If you need to capture more, add something like `-s 128` to the `strace` invocation.
#### `-y` makes it easier to track files/sockets/etc
“Everything is a file” means *nix systems do all IO using file descriptors, whether it's to an actual file or over networks or through interprocess pipes. That's convenient for programming, but makes it harder to follow what's really going on when you see generic `read` and `write` in the system call trace.
Adding the `-y` switch makes `strace` annotate every file descriptor in the output with a note about what it points to.
#### Attach to an already-running process with `-p`
As we'll see in the example later, sometimes you want to trace a program that's already running. If you know it's running as process 1337 (say, by looking at the output of `ps`), you can trace it like this:
```
$ strace -p 1337
...system call trace output...
```
You probably need root.
#### Use `-f` to follow child processes
By default, `strace` only traces the one process. If that process spawns a child process, you'll see the system call for spawning the process (normally `clone` nowadays), but not any of the calls made by the child process.
If you think the bug is in a child process, you'll need to use the `-f` switch to enable tracing it. A downside is that the output can be more confusing. When tracing one process and one thread, `strace` can show you a single stream of call events. When tracing multiple processes, you might see the start of a call cut off with `<unfinished ...>`, then a bunch of calls for other threads of execution, before seeing the end of the original call with `<... foocall resumed>`. Alternatively, you can separate all the traces into different files by using the `-ff` switch as well (see [the `strace` manual][6] for details).
#### You can filter the trace with `-e`
As you've seen, the default trace output is a firehose of all system calls. You can filter which calls get traced using the `-e` flag (see [the `strace` manual][6]). The main advantage is that it's faster to run the program under a filtered `strace` than to trace everything and `grep` the results later. Honestly, I don't bother most of the time.
#### Not all errors are bad
A simple and common example is a program searching for a file in multiple places, like a shell searching for which `bin/` directory has an executable:
```
$ strace sh -c uname
...
stat("/home/user/bin/uname", 0x7ffceb817820) = -1 ENOENT (No such file or directory)
stat("/usr/local/bin/uname", 0x7ffceb817820) = -1 ENOENT (No such file or directory)
stat("/usr/bin/uname", {st_mode=S_IFREG|0755, st_size=39584, ...}) = 0
...
```
The “last failed call before the error message” heuristic is pretty good at finding relevant errors. In any case, working from the bottom up makes sense.
#### C programming guides are good for understanding system calls
Standard C library calls aren't system calls, but they're only thin layers on top. So if you understand (even just roughly) how to do something in C, it's easier to read a system call trace. For example, if you're having trouble debugging networking system calls, you could try skimming through [Beej's classic Guide to Network Programming][7].
### A more complicated debugging example
As I said, that simple debugging example is representative of most of my `strace` usage. However, sometimes a little more detective work is required, so here's a slightly more complicated (and real) example.
[`bcron`][8] is a job scheduler that's yet another implementation of the classic *nix `cron` daemon. It's been installed on a server, but here's what happens when someone tries to edit a job schedule:
```
# crontab -e -u logs
bcrontab: Fatal: Could not create temporary file
```
Okay, so bcron tried to write some file, but it couldn't, and isn't telling us why. This is a debugging job for `strace`:
```
# strace -o /tmp/trace crontab -e -u logs
bcrontab: Fatal: Could not create temporary file
# cat /tmp/trace
...
openat(AT_FDCWD, "bcrontab.14779.1573691864.847933", O_RDONLY) = 3
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f82049b4000
read(3, "#Ansible: logsagg\n20 14 * * * lo"..., 8192) = 150
read(3, "", 8192) = 0
munmap(0x7f82049b4000, 8192) = 0
close(3) = 0
socket(AF_UNIX, SOCK_STREAM, 0) = 3
connect(3, {sa_family=AF_UNIX, sun_path="/var/run/bcron-spool"}, 110) = 0
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f82049b4000
write(3, "156:Slogs\0#Ansible: logsagg\n20 1"..., 161) = 161
read(3, "32:ZCould not create temporary f"..., 8192) = 36
munmap(0x7f82049b4000, 8192) = 0
close(3) = 0
write(2, "bcrontab: Fatal: Could not creat"..., 49) = 49
unlink("bcrontab.14779.1573691864.847933") = 0
exit_group(111) = ?
+++ exited with 111 +++
```
There's the error message `write` near the end, but a couple of things are different this time. First, there's no relevant system call error that happens before it. Second, we see that the error message has just been `read` from somewhere else. It looks like the real problem is happening somewhere else, and `bcrontab` is just replaying the message.
If you look at `man 2 read`, you'll see that the first argument (the 3) is a file descriptor, which is what *nix uses for all IO handles. How do you know what file descriptor 3 represents? In this specific case, you could run `strace` with the `-y` switch (as explained above) and it would tell you automatically, but it's useful to know how to read and analyse traces to figure things like this out.
A file descriptor can come from one of many system calls (depending on whether it's a descriptor for the console, a network socket, an actual file, or something else), but in any case we can search for calls returning 3 (i.e., search for “= 3” in the trace). There are two in this trace: the `openat` at the top, and the `socket` in the middle. `openat` opens a file, but the `close(3)` afterwards shows that it gets closed again. (Gotcha: file descriptors can be reused as they're opened and closed.) The `socket` call is the relevant one (it's the last one before the `read`), which tells us `bcrontab` is talking to something over a network socket. The next line, `connect`, shows file descriptor 3 being configured as a Unix domain socket connection to `/var/run/bcron-spool`.
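That fd-tracking hunt is easy to do mechanically, too. As a sketch, here's the same search done with `grep`, using a few lines abbreviated from the trace above as sample input:

```shell
# A few lines abbreviated from the bcrontab trace above.
cat > /tmp/trace <<'EOF'
openat(AT_FDCWD, "bcrontab.14779.1573691864.847933", O_RDONLY) = 3
close(3)                                = 0
socket(AF_UNIX, SOCK_STREAM, 0)         = 3
EOF

# Every call that returned file descriptor 3, with line numbers.
grep -n '= 3$' /tmp/trace
```

The last match before the `read` in question is the call that created the descriptor, remembering that descriptors get reused after a `close`.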
So now we need to figure out whats listening on the other side of the Unix socket. There are a couple of neat tricks for that, both useful for debugging server deployments. One is to use `netstat` or the newer `ss` (“socket status”). Both commands describe active network sockets on the system, and take the `-l` switch for describing listening (server) sockets, and the `-p` switch to get information about what program is using the socket. (There are many more useful options, but those two are enough to get this job done.)
```
# ss -pl | grep /var/run/bcron-spool
u_str LISTEN 0 128 /var/run/bcron-spool 1466637 * 0 users:(("unixserver",pid=20629,fd=3))
```
That tells us that the listener is a command `unixserver` running as process ID 20629. (It's a coincidence that it's also using file descriptor 3 for the socket.)
The second really useful tool for finding the same information is `lsof`. It can list all open files (or file descriptors) on the system. Alternatively, we can get information about a specific file:
```
# lsof /var/run/bcron-spool
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
unixserve 20629 cron 3u unix 0x000000005ac4bd83 0t0 1466637 /var/run/bcron-spool type=STREAM
```
Process 20629 is a long-running server, so we can attach `strace` to it using something like `strace -o /tmp/trace -p 20629`. If we then try to edit the cron schedule in another terminal, we can capture a trace while the error is happening. Here's the result:
```
accept(3, NULL, NULL) = 4
clone(child_stack=NULL, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7faa47c44810) = 21181
close(4) = 0
accept(3, NULL, NULL) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=21181, si_uid=998, si_status=0, si_utime=0, si_stime=0} ---
wait4(0, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], WNOHANG|WSTOPPED, NULL) = 21181
wait4(0, 0x7ffe6bc36764, WNOHANG|WSTOPPED, NULL) = -1 ECHILD (No child processes)
rt_sigaction(SIGCHLD, {sa_handler=0x55d244bdb690, sa_mask=[CHLD], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7faa47ab9840}, {sa_handler=0x55d244bdb690, sa_mask=[CHLD], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7faa47ab9840}, 8) = 0
rt_sigreturn({mask=[]}) = 43
accept(3, NULL, NULL) = 4
clone(child_stack=NULL, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7faa47c44810) = 21200
close(4) = 0
accept(3, NULL, NULL) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=21200, si_uid=998, si_status=111, si_utime=0, si_stime=0} ---
wait4(0, [{WIFEXITED(s) && WEXITSTATUS(s) == 111}], WNOHANG|WSTOPPED, NULL) = 21200
wait4(0, 0x7ffe6bc36764, WNOHANG|WSTOPPED, NULL) = -1 ECHILD (No child processes)
rt_sigaction(SIGCHLD, {sa_handler=0x55d244bdb690, sa_mask=[CHLD], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7faa47ab9840}, {sa_handler=0x55d244bdb690, sa_mask=[CHLD], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7faa47ab9840}, 8) = 0
rt_sigreturn({mask=[]}) = 43
accept(3, NULL, NULL
```
(The last `accept` doesn't complete during the trace period.) Unfortunately, once again, this trace doesn't contain the error we're after. We don't see any of the messages that we saw `bcrontab` sending to and receiving from the socket. Instead, we see a lot of process management (`clone`, `wait4`, `SIGCHLD`, etc.). This process is spawning a child process, which we can guess is doing the real work. If we want to catch a trace of that, we have to add `-f` to the `strace` invocation. Here's what we find if we search for the error message after getting a new trace with `strace -f -o /tmp/trace -p 20629`:
```
21470 openat(AT_FDCWD, "tmp/spool.21470.1573692319.854640", O_RDWR|O_CREAT|O_EXCL, 0600) = -1 EACCES (Permission denied)
21470 write(1, "32:ZCould not create temporary f"..., 36) = 36
21470 write(2, "bcron-spool[21470]: Fatal: logs:"..., 84) = 84
21470 unlink("tmp/spool.21470.1573692319.854640") = -1 ENOENT (No such file or directory)
21470 exit_group(111) = ?
21470 +++ exited with 111 +++
```
Now we're getting somewhere. Process ID 21470 is getting a permission denied error trying to create a file at the path `tmp/spool.21470.1573692319.854640` (relative to the current working directory). If we just knew the current working directory, we would know the full path and could figure out why the process can't create its temporary file there. Unfortunately, the process has already exited, so we can't just use `lsof -p 21470` to find out the current directory, but we can work backwards looking for PID 21470 system calls that change directory. (If there aren't any, PID 21470 must have inherited it from its parent, and we can `lsof -p` that.) That system call is `chdir` (which is easy to find out using today's web search engines). Here's the result of working backwards through the trace, all the way to the server PID 20629:
```
20629 clone(child_stack=NULL, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7faa47c44810) = 21470
...
21470 execve("/usr/sbin/bcron-spool", ["bcron-spool"], 0x55d2460807e0 /* 27 vars */) = 0
...
21470 chdir("/var/spool/cron") = 0
...
21470 openat(AT_FDCWD, "tmp/spool.21470.1573692319.854640", O_RDWR|O_CREAT|O_EXCL, 0600) = -1 EACCES (Permission denied)
21470 write(1, "32:ZCould not create temporary f"..., 36) = 36
21470 write(2, "bcron-spool[21470]: Fatal: logs:"..., 84) = 84
21470 unlink("tmp/spool.21470.1573692319.854640") = -1 ENOENT (No such file or directory)
21470 exit_group(111) = ?
21470 +++ exited with 111 +++
```
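This kind of backwards search through an `strace -f` trace is also easy to do with standard tools, since every line is prefixed with its PID. A minimal sketch, using a few abbreviated lines from the trace above as sample input:

```shell
# A few lines abbreviated from the strace -f trace above.
cat > /tmp/trace <<'EOF'
20629 clone(child_stack=NULL, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD) = 21470
21470 execve("/usr/sbin/bcron-spool", ["bcron-spool"], 0x55d2460807e0 /* 27 vars */) = 0
21470 chdir("/var/spool/cron") = 0
21470 openat(AT_FDCWD, "tmp/spool.21470.1573692319.854640", O_RDWR|O_CREAT|O_EXCL, 0600) = -1 EACCES (Permission denied)
EOF

# Keep only PID 21470's calls, then look for directory changes.
grep '^21470 ' /tmp/trace | grep 'chdir'
```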
(If you're getting lost here, you might want to read [my previous post about *nix process management and shells][9].) Okay, so the server PID 20629 doesn't have permission to create a file at `/var/spool/cron/tmp/spool.21470.1573692319.854640`. The most likely reason would be classic *nix filesystem permission settings. Let's check:
```
# ls -ld /var/spool/cron/tmp/
drwxr-xr-x 2 root root 4096 Nov 6 05:33 /var/spool/cron/tmp/
# ps u -p 20629
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
cron 20629 0.0 0.0 2276 752 ? Ss Nov14 0:00 unixserver -U /var/run/bcron-spool -- bcron-spool
```
There's the problem! The server is running as user `cron`, but only `root` has permission to write to that `/var/spool/cron/tmp/` directory. A simple `chown cron /var/spool/cron/tmp/` makes `bcron` work properly. (If that weren't the problem, the next most likely suspect would be a kernel security module like SELinux or AppArmor, so I'd check the kernel logs with `dmesg`.)
### Summary
System call traces can be overwhelming at first, but I hope I've shown that they're a fast way to debug a whole class of common deployment problems. Imagine trying to debug that multi-process `bcron` problem using a stepping debugger.
Working back through a chain of system calls takes practice, but as I said, most of the time I use `strace` I just get a trace and look for errors, working from the bottom up. In any case, `strace` has saved me hours and hours of debugging time. I hope it's useful for you, too.
--------------------------------------------------------------------------------
via: https://theartofmachinery.com/2019/11/14/deployment_debugging_strace.html
作者:[Simon Arneaud][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://theartofmachinery.com
[b]: https://github.com/lujun9972
[1]: https://strace.io/
[2]: http://dtrace.org/blogs/about/
[3]: https://man.openbsd.org/ktrace
[4]: https://theartofmachinery.com/images/strace/system_calls.svg
[5]: https://linux.die.net/man/1/ltrace
[6]: https://linux.die.net/man/1/strace
[7]: https://beej.us/guide/bgnet/html/index.html
[8]: https://untroubled.org/bcron/
[9]: https://theartofmachinery.com/2018/11/07/writing_a_nix_shell.html

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Developing a Simple Web Application Using Flutter)
[#]: via: (https://opensourceforu.com/2019/11/developing-a-simple-web-application-using/)
[#]: author: (Jis Joe Mathew https://opensourceforu.com/author/jis-joe/)
Developing a Simple Web Application Using Flutter
======
[![][1]][2]
_This article guides readers on how to run and deploy their first Web application using Flutter._
Flutter has moved to a new stage, the Web, after having travelled a long way in Android and iOS development. Flutter 1.5 has been released by Google, along with support for Web application development.
**Configuring Flutter for the Web**
In order to use the Web package, enter the _flutter upgrade_ command to update to Flutter version 1.5.4.
* Open a terminal
* Type flutter upgrade
* Check the version by typing _flutter version_
![Figure 1: Upgrading Flutter to the latest version][3]
![Figure 2: Starting a new Flutter Web project in VSC][4]
One can also use Android Studio 3.0 or later versions for Flutter Web development, but we will use Visual Studio Code for this tutorial.
**Creating a new project with Flutter Web**
Open Visual Studio Code and press _Shift+Ctrl+P_ to start a new project. Type flutter and select _New Web Project_.
Now, name the project. I have named it _open_source_for_you_.
Open the terminal window in VSC, and type in the following commands:
```
flutter packages pub global activate webdev
flutter packages upgrade
```
Now use the following command to serve the website on localhost (IP address 127.0.0.1):
```
flutter packages pub global run webdev serve
```
Open any browser and type _<http://127.0.0.1:8080/>_ in the address bar.
There is a _web_ folder inside the project directory, which contains an _index.html_ file. The Dart file is compiled into a JavaScript file and included in the HTML file using the following code:
```
<script defer src="main.dart.js" type="application/javascript"></script>
```
**Coding and making changes to the demo page**
Let's create a simple application that prints 'Welcome to OSFY' on the Web page.
Let's now open the Dart file _main.dart_ (the default name), which is located in the _lib_ folder (see Figure 5).
We can now remove the debug tag using the property of _MaterialApp_, as follows:
```
debugShowCheckedModeBanner: false
```
![Figure 3: Naming the project][5]
![Figure 4: The Flutter demo application running on port 8080][6]
![Figure 5: Location of main.dart file][7]
Now, adding more to this Dart file is very similar to writing ordinary Flutter code in Dart. For that, we can declare a class titled _MyClass_, which extends _StatelessWidget_.
We use a _Center_ widget to position elements in the centre. We can also add a _Padding_ widget to add padding. Use the following code to obtain the output shown in Figure 6. Use the Refresh button to view the changes.
```
class MyClass extends StatelessWidget {
@override
Widget build(BuildContext context) {
return Scaffold(
body: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
Padding(
padding: EdgeInsets.all(20.0),
child: Text(
'Welcome to OSFY',
style: TextStyle(fontSize: 24.0, fontWeight: FontWeight.bold),
),
),
],
),
),
);
}
}
```
![Figure 6: Output of MyClass][8]
![Figure 7: Final output][9]
Let's add an image from the Internet; I've chosen the Open Source for You logo from the magazine's website. We use _Image.network_:
```
Image.network(
'https://opensourceforu.com/wp-content/uploads/2014/03/OSFY-Logo.jpg',
height: 100,
width: 150
),
```
The final output is shown in Figure 7.
![Avatar][10]
[Jis Joe Mathew][11]
The author is assistant professor of computer science and engineering at Amal Jyoti College, Kanirapally, Kerala. He can be contacted at [jisjoemathew@gmail.com][12].
[![][13]][14]
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/11/developing-a-simple-web-application-using/
作者:[Jis Joe Mathew][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/jis-joe/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Screenshot-from-2019-11-15-16-20-30.png?resize=696%2C495&ssl=1 (Screenshot from 2019-11-15 16-20-30)
[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Screenshot-from-2019-11-15-16-20-30.png?fit=900%2C640&ssl=1
[3]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-1-Upgrading-Flutter-to-the-latest-version.jpg?resize=350%2C230&ssl=1
[4]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-2-Starting-a-new-Flutter-Web-project-in-VSC.jpg?resize=350%2C93&ssl=1
[5]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-3-Naming-the-project.jpg?resize=350%2C147&ssl=1
[6]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-4-The-Flutter-demo-application-running-on-port-8080.jpg?resize=350%2C111&ssl=1
[7]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-5-Location-of-main.dart-file.jpg?resize=350%2C173&ssl=1
[8]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-6-Output-of-MyClass.jpg?resize=350%2C173&ssl=1
[9]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-7-Final-output.jpg?resize=350%2C167&ssl=1
[10]: https://secure.gravatar.com/avatar/64db0e07799ae14fd1b51d0633db6593?s=100&r=g
[11]: https://opensourceforu.com/author/jis-joe/
[12]: mailto:jisjoemathew@gmail.com
[13]: https://opensourceforu.com/wp-content/uploads/2019/11/assoc.png
[14]: https://feedburner.google.com/fb/a/mailverify?uri=LinuxForYou&loc=en_US

[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why I made the switch from Mac to Linux)
[#]: via: (https://opensource.com/article/19/10/why-switch-mac-linux)
[#]: author: (Matthew Broberg https://opensource.com/users/mbbroberg)
为什么我从 Mac 换到了 Linux
======
> 感谢这么多的开源开发人员,使用 Linux 作为日常使用比以往任何时候都容易得多。
![Hands programming][1]
自 2004 年开始从事 IT 工作以来,我一直是 Mac 的忠实粉丝。但是几个月前,由于种种原因,我决定将 Linux 用作日常使用。这不是我第一次尝试完全采用 Linux但是我发现它比以往更容易。这就是促使我转换的原因。
### 我在个人电脑上的第一次的 Linux 尝试
我记得我抬头看着投影机而它和我面面相觑。我们俩都不明白为什么它不会显示。VGA 线完全接好了,针脚也没有弯折。我按了我可能想到的所有按键组合,以向笔记本电脑发出信号,想让它克服舞台恐惧症。
我在大学里运行 Linux 只是作为实验。我在 IT 部门的经理是多种口味的倡导者随着我对桌面支持和编写脚本的信心增强我想了解更多有关它的信息。对我来说IT 比我的计算机科学学位课程有趣得多,课程感觉是如此抽象和理论化:“二叉树有啥用?”,我如是想 —— 而我们的系统管理员团队的工作却是如此的切实。
这个故事的结尾是,我登录 Windows 工作站通过了我的课堂演讲,标志着我将 Linux 作为我的日常操作系统的第一次尝试的终结。我很欣赏 Linux 的灵活性,但是它缺乏兼容性。我偶尔会写一个脚本,该脚本通过 SSH 连接到一个机器中以运行另一个脚本,但是我对 Linux 的日常使用仅止于此。
### Linux 兼容性的全新印象
几个月前,当我决定再试一次 Linux 时,我曾觉得我遇到更多的兼容性噩梦,但我错了。
安装过程完成后,我立即插入 USB-C 集线器以了解兼容性到底如何。一切立即工作。连接 HDMI 的超宽显示器作为镜像显示器弹出到我的笔记本电脑屏幕上我轻松地将其调整为第二台显示器。USB 连接的网络摄像头对我的[在家工作方式][2]至关重要,它可以毫无问题地显示视频。甚至自从我使用 Mac 以来就一直插在集线器上的 Mac 充电器,也可以为我这台非常不 Mac 的硬件充电。
我的正面经历可能与 USB-C 的一些更新有关,它在 2018 年得到一些需要的关注,因此才能与其他 OS 体验相媲美。如 [Phoronix 解释的那样][3]
> “USB Type-C 接口为非 USB 信号提供了‘替代模式’扩展,在规范中该替代模式的最大使用场景是允许 DisplayPort。除此之外另一个替代模式是 Thunderbolt 3 的支持。DisplayPort 替代模式支持 4K甚至 8Kx4K 的视频输出,包括多声道音频。
>
> “虽然 USB-C 替代模式和 DisplayPort 已经存在了一段时间,并且在 Windows 上很常见,但是主线 Linux 内核不支持此功能。所幸的是,多亏英特尔,这种情况正在改变。”
>
而在端口之外,快速浏览一下 [笔记本电脑 Linux][4] 的硬件选择,可以显示比我 2000 年代初期经历的更加完整的选择集。
与我第一次尝试采用 Linux 相比,这已经天差地别,这是我所张开双臂欢迎的。
### 突破 Apple 的樊篱
使用 Linux 给我的日常工作流程增加了一些新的麻烦,而我喜欢这种麻烦。
我的 Mac 工作流程是无缝的:早上打开 iPad写下关于我今天想要做什么的想法然后开始在 Safari 中阅读一些文章;转到我的 iPhone 上继续阅读;然后登录我的 MacBook这些地方我进行了多年的微调已经弄清楚了所有这些部分之间的连接方式。键盘快捷键已内置在我的大脑中用户体验一如既往。简直不要太舒服了。
这种舒适需要付出代价。我基本上忘记了我的环境如何运作的,无法回答我想回答的问题。我是否自定义了一些 [PLIST 文件][5]以获得快捷方式,还是记得将其签入[我的 dotfiles][6] 当中?当 Firefox 的功能更好时,我如何还如此依赖 Safari 和 Chrome或为什么我不使用基于 Android 的手机代替我的 i-系列产品呢?
关于这一点,我经常考虑过改用基于 Android 的手机,但是我会失去在所有这些设备之间的连接以及为这种生态系统设计的一些便利。例如,我将无法在 iPhone 上为 Apple TV 输入搜索内容,也无法与其他基于 Apple 的朋友共享 AirDrop 密码。这些功能是同类设备环境的巨大好处,并且是一项了不起的工程。就是说,这些便利是被生态系统所困的代价。
我喜欢了解设备的工作方式。我希望能够解释使我的系统变得有趣或容易使用的环境配置,但我也想看看增加一些麻烦对我的观点有什么影响。用 [Marcel Proust][7] 来解释“真正的发现之旅不在于寻找新的土地而在于用新的眼光来看待。”我对技术的使用是如此的方便以至于我不再对它的工作原理感到好奇。Linux 使我有机会再次有了新的眼光。
### 受你的启发
以上所有内容足以成为探索 Linux 的理由,但我也受到了你的启发。尽管所有操作系统都受到开源社区的欢迎,但 Opensource.com 的作者和读者对 Linux 的喜悦是充满感染力的。它激发了我重新潜入的乐趣,我享受这段旅途的乐趣。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/why-switch-mac-linux
作者:[Matthew Broberg][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mbbroberg
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop.png?itok=pGfEfu2S (Hands programming)
[2]: https://opensource.com/article/19/8/rules-remote-work-sanity
[3]: https://www.phoronix.com/scan.php?page=news_item&px=Linux-USB-Type-C-Port-DP-Driver
[4]: https://www.linux-laptop.net/
[5]: https://fileinfo.com/extension/plist
[6]: https://opensource.com/article/19/3/move-your-dotfiles-version-control
[7]: https://www.age-of-the-sage.org/quotations/proust_having_seeing_with_new_eyes.html

[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (7 Best Open Source Tools that will help in AI Technology)
[#]: via: (https://opensourceforu.com/2019/11/7-best-open-source-tools-that-will-help-in-ai-technology/)
[#]: author: (Nitin Garg https://opensourceforu.com/author/nitin-garg/)
7 个对 AI 技术有帮助的最佳开源工具
======
[![][1]][2]
_人工智能是一种紧跟未来道路的卓越技术。在这个进步的时代它吸引了所有跨国组织的关注。谷歌、IBM、Facebook、亚马逊、微软等业内知名公司不断投资于这种新时代技术。_
利用人工智能预测业务需求,并在另一个层面上进行研发。这项先进技术正成为提供超智能解决方案的研发组织不可或缺的一部分。它可以帮助你保持准确性并以更好的结果提高生产率。
AI 开源工具和技术以频繁且准确的结果吸引了每个行业的关注。这些工具可帮助你分析性能,同时为你带来更大的收益。
事不宜迟,这里我们列出了一些最佳的开源工具,来帮助你更好地了解人工智能。
**1\. TensorFlow**
TensorFlow 是用于人工智能的开源机器学习框架。它主要是为了进行机器学习和深度学习的研究和生产而开发。TensorFlow 允许开发者创建数据流图形结构,它会在网络或系统节点中移动,图形提供数据的多维数组或张量。
TensorFlow 是一个出色的工具,它有无数的优势。
* 简化数值计算
  * TensorFlow 在多种模型上提供了灵活性。
  * TensorFlow 提高了业务效率
  * 高度可移植
  * 自动区分能力
**2\. Apache SystemML**
Apache SystemML 是由 IBM 创建的非常流行的开源机器学习平台,它提供了使用大数据的良好平台。它可以在 Apache Spark 上高效运行,并自动扩展数据,同时确定代码是否可以在磁盘或 Apache Spark 集群上运行。不仅如此,它丰富的功能使其在行业产品中脱颖而出;
* 算法定制
  * 多种执行模式
  * 自动优化
它还支持深度学习,让开发者更有效率地实现机器学习代码并优化。
**3\. OpenNN**
OpenNN 是用于渐进式分析的开源人工智能神经网络库。它可帮助你使用 C++ 和 Python 开发健壮的模型,它还包含用于处理机器学习解决方案(如预测和分类)的算法和程序。它还涵盖了回归和关联,可提供业界的高性能和技术演化。
它有丰富的功能,如:
* 数字化协助
  * 预测分析
  * 快速的性能
  * 虚拟个人协助
  * 语音识别
  * 高级分析
它可帮助你设计实现数据挖掘的先进方案,从而取得丰硕的成果。
**4\. Caffe**
Caffe快速特征嵌入的卷积架构是一个开源深度学习框架。它优先考虑速度、模块化和表达力。Caffe 最初由加州大学伯克利分校视觉和学习中心开发,它使用 C++ 编写,带有一个 Python 接口,能在 Linux、macOS 和 Windows 上正常运行。
Caffe 中的一些有助于 AI 技术的关键特性。
1. 具有表现力的结构
2. 具有扩展性的代码
3. 大型社区
4. 开发活跃
5. 性能快速
它可以帮助你激发创新,同时引入刺激性增长。充分利用此工具来获得所需的结果。
**5\. Torch**
Torch 是一个开源机器学习库通过提供多种方便的功能帮助你简化序列化、面向对象编程等复杂任务。它在机器学习项目中提供了最大的灵活性和速度。Torch 使用脚本语言 Lua 编写,底层使用 C 实现。它被用于多个组织和研究实验室中。
Torch 有无数的优势,如:
* 快速高效的 GPU 支持
* 线性代数子程序
* 支持 iOS 和 Android 平台
* 数值优化子程序
* N 维数组
**6\. Accord .NET**
Accord .NET 是著名的免费开源 AI 开发工具之一。它包含一组用 C# 编写的音频和图像处理库。从计算机视觉到计算机听觉、信号处理和统计应用,它可以帮助你构建各种用于商业用途的功能。它附带了一套全面的示例应用,帮助你快速上手各类库。
你可以使用 Accord .NET 引人注意的功能开发一个高级应用,例如:
* 统计分析
* 数据接入
* 自适应
* 深度学习
* 二阶神经网络学习算法
* 数字协助和多语言
* 语音识别
**7\. Scikit-Learn**
Scikit-Learn 是流行的有助于 AI 技术的开源工具之一。它是 Python 中用于机器学习的一个很有价值的库。它包括机器学习和统计建模(包括分类、聚类、回归和降维)等高效工具。
让我们了解下 Scikit-Learn 的更多功能:
* 交叉验证
* 聚类和分类
* 流形学习
* 机器学习
* 虚拟流程自动化
* 工作流自动化
从预处理到模型选择Scikit-learn 可帮助你处理所有问题。它简化了从数据挖掘到数据分析的所有任务。
**最后的想法**
这些是一些流行的开源 AI 工具,它们提供了全面的功能。在开发新时代应用之前,必须选择其中一个工具并做相应的工作。这些工具提供先进的人工智能解决方案,并紧跟最新趋势。
人工智能在全球范围内被应用,标志着它在世界各地的存在。借助 Amazon Alexa、Siri 等应用AI 为客户提供了很好的用户体验。它在吸引用户关注的行业中具有显著优势。在医疗保健、银行、金融、电子商务等所有行业中,人工智能在促进增长和生产力的同时节省了大量的时间和精力。
选择这些开源工具中的任何一个,获得更好的用户体验和令人难以置信的结果。它将帮助你成长,并在质量和安全性方面获得更好的结果。
![Avatar][3]
[Nitin Garg][4]
作者是 BR Softech一家商业智能软件公司 的 CEO 兼联合创始人。喜欢通过博客分享他对 IT 行业的看法。他的兴趣是写最新的和先进的 IT 技术包括物联网、VR 和 AR 应用开发,网络和应用开发服务。此外,他还为 RPA、大数据和网络安全服务提供咨询。
[![][6]][7]
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/11/7-best-open-source-tools-that-will-help-in-ai-technology/
作者:[Nitin Garg][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/nitin-garg/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2018/05/Artificial-Intelligence_EB-June-17.jpg?resize=696%2C464&ssl=1 (Artificial Intelligence_EB June 17)
[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2018/05/Artificial-Intelligence_EB-June-17.jpg?fit=1000%2C667&ssl=1
[3]: https://secure.gravatar.com/avatar/d4e6964b80590824b981f06a451aa9e6?s=100&r=g
[4]: https://opensourceforu.com/author/nitin-garg/
[5]: https://www.brsoftech.com/bi-consulting-services.html
[6]: https://opensourceforu.com/wp-content/uploads/2019/11/assoc.png
[7]: https://feedburner.google.com/fb/a/mailverify?uri=LinuxForYou&loc=en_US