Ubuntu 的 Snap、Red Hat 的 Flatpak 这种通吃所有发行版的打包方式真的有用吗?
=================================================================================

![](http://www.iwillfolo.com/wordpress/wp-content/uploads/2016/06/Flatpak-and-Snap-Packages.jpg)

**深入了解正在渗透进 Linux 生态系统的新一代打包格式**

最近我们听到越来越多的关于 Ubuntu 的 Snap 包和由 Red Hat 员工 Alexander Larsson 创造的 Flatpak(曾经叫做 xdg-app)的消息。

这两种下一代打包方法在本质上拥有相同的目标和特点:即不依赖第三方系统功能库的独立软件包。

这种 Linux 新技术方向似乎自然会让人脑海中浮现这样的问题:独立包的优点/缺点是什么?这是否让我们拥有更好的 Linux 系统?其背后的动机是什么?

为了回答这些问题,让我们先深入了解一下 Snap 和 Flatpak。

### 动机

根据 [Flatpak][1] 和 [Snap][2] 的声明,它们背后的主要动机是使同一版本的应用程序能够运行在多个 Linux 发行版上。

> “从一开始它的主要目标是允许相同的应用程序运行在各种 Linux 发行版和操作系统上。” —— Flatpak

> “……‘snap’ 通用 Linux 包格式,使简单的二进制包能够完美的、安全的运行在任何 Linux 桌面、服务器、云和设备上。” —— Snap

说得更具体一点,站在 Snap 和 Flatpak(以下称之为 S&F)背后的人认为,Linux 平台存在碎片化的问题。

这个问题导致了开发者们需要做许多不必要的工作来使他们的软件能够运行在各种不同的发行版上,这影响了整个平台的前进。

所以,作为 Linux 发行版(Ubuntu 和 Red Hat)的领导者,他们希望消除这个障碍,推动平台发展。

但是,是否还有更多的个人收益刺激着 S&F 的开发?

#### 个人收益?

虽然没有任何官方声明,但是试想一下,如果能够创造这种可能会被大多数发行版(即便不是全部)所采用的打包方式,那么这个项目的领导者将可能成为一个能够决定 Linux 大船航向的重要人物。

### 优势

这种独立包的好处很多,而且取决于不同的因素。

这些因素基本上可以归为两类:

#### 用户角度

**+** 从 Linux 用户的观点来看:Snap 和 Flatpak 带来了将任何软件包(软件或应用)安装在用户使用的任何发行版上的可能性。

例如你在使用一个不是很流行的发行版,由于开发人手的缺乏,它的软件仓库中只有很少的软件包。现在,通过 S&F 你就可以显著地增加可用软件包的数量,这是一件多么美好的事情。

**+** 同样,对于使用流行的发行版的用户,即使该发行版的软件仓库里已经有很多软件包,他也可以在不改变现有功能库的情况下安装一个新的软件包。

比方说,一个 Debian 用户想要安装一个“测试分支”的包,但是他又不想将整个系统变成测试版(来让该包运行在更新的功能库上)。现在,他可以简单地想安装哪个版本就安装哪个版本,而不需要考虑库的问题。
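下面用一个简单的例子来说明这种“任意发行版、任意软件包”的安装方式(这只是一个假设性的演示,其中的包名和 Flathub 源地址仅作举例,并假设系统中已经装好了 snapd 和 flatpak 本身):

```
# 假设 snapd 已安装并正在运行
sudo snap install hello        # 从 Snap 商店安装示例包 hello

# 假设 flatpak 已安装:先添加 Flathub 软件源(如果还没有的话)
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak install flathub org.gnome.gedit   # 从 Flathub 安装示例应用
```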
其实上述的后一点,对于那些直接从源代码编译软件包的用户来说本来就能做到。然而,除非你使用的是类似 Gentoo 这样基于源代码的发行版,否则大多数用户还是会把从头编译看作一件极其麻烦的事情。

**+** 高级用户,或者称之为“有安全意识的用户”,可能会更容易接受这种类型的包,只要它们来自可靠的来源:这种包倾向于提供另一层隔离,因为它们通常是与系统包相隔离的。

\* 不论是 Snap 还是 Flatpak 都在不断努力增强它们的安全性,通常它们都使用“沙盒化”来隔离,以防止包内可能携带的病毒感染整个系统,就像微软 Windows 系统中的 .exe 程序那样。(关于微软和 S&F,后面还会谈到)

#### 开发者角度

与普通用户相比,对于开发者来说,开发 S&F 包的优点可能更加清楚。这一点在上一节已经有所提示。

尽管如此,这些优点有:

**+** S&F 通过统一开发过程,简化了面向多个发行版的开发工作。对于需要让自己的应用运行在多个发行版上的开发者来说,这大大地减少了工作量。

**++** 因此,开发者能够更容易地使他的应用运行在更多的发行版上。

**+** S&F 允许开发者独立发布自己的包,而不需要依靠发行版维护者在每一个发行版中为其打包发布。

**++** 通过上述方法,开发者可以不依赖发行版而直接获取到用户安装和卸载其软件的统计数据。

**++** 同样是通过上述方法,开发者可以更直接地与用户互动,而不需要通过发行版这样的中间媒介。

### 缺点

**–** 膨胀。就是这么简单。Flatpak 和 Snap 并不能凭空变出依赖关系。相反,它们不使用系统中的依赖,而是将依赖关系预构建在包内。

就像谚语说的:“山不来就我,我就去就山”。

**–** 之前提到,有安全意识的用户会喜欢 S&F 提供的额外的一层隔离,只要该应用来自受信任的来源。但是从另外一个角度看,对这方面了解较少的用户,可能会从不靠谱的地方弄来一个包含恶意软件的包,从而导致危害。

上面提到的观点可以说是很有意义的,虽说如今流行的方法,比如 PPA、overlay 等,也可能来自不受信任的来源。

但是,S&F 包进一步增加了这种风险,因为恶意软件开发者只需要制作一个版本就可以感染各种发行版。相反,如果没有 S&F,恶意软件开发者就需要为不同的发行版制作不同的版本。

### 原来微软一直是正确的吗?

考虑到上面提到的各点,很显然,在大多数情况下,使用 S&F 包的优点超过缺点。

至少对于二进制发行版的用户,或者那些不以轻量化为重点的发行版的用户来说是这样的。

这促使我问出这个问题:可能微软一直是正确的吗?如果是,那么当 S&F 成为 Linux 的标准后,你还会一如既往地使用 Linux 或者类 Unix 系统吗?

很显然,时间会是这个问题的最好答案。

不过我认为,即使不完全正确,微软也有些地方值得赞扬;并且以我的观点来看,所有这些方式在 Linux 上开箱即用也确实是一个亮点。

--------------------------------------------------------------------------------

via: http://www.iwillfolo.com/ubuntus-snap-red-hats-flatpack-and-is-one-fits-all-linux-packages-useful/

作者:[Editorials][a]
译者:[Chao-zhi](https://github.com/Chao-zhi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: http://www.iwillfolo.com/category/editorials/
[1]: http://flatpak.org/press/2016-06-21-flatpak-released.html
[2]: https://insights.ubuntu.com/2016/06/14/universal-snap-packages-launch-on-multiple-linux-distros
LXDE、Xfce 及 MATE 桌面环境下的又一系统监视器应用:Multiload-ng

Multiload-ng 的特点有:

- 支持以下资源的图形块:CPU、内存、网络、交换空间、平均负载、磁盘以及温度;
- 高度可定制;
- 支持配色方案;
- 自动适应容器(面板或窗口)的改变;
- 提供基本或详细的提示信息;
- 可自定义双击触发的动作。

相比于原来的 Multiload 应用,Multiload-ng 含有一个额外的图形块(温度),以及更多独立的图形自定义选项,例如独立的边框颜色,支持配色方案,可根据自定义的动作对鼠标的点击做出反应,图形块的方向可以被设定为与面板的方向无关。

它也可以运行在一个独立的窗口中,而不需要面板:

![](https://1.bp.blogspot.com/-WAD5MdDObD8/V7GixgVU0DI/AAAAAAAAYS8/uMhHJri1GJccRWvmf_tZkYeenVdxiENQwCLcB/s400/multiload-ng-xfce-vertical.png)

这个应用的偏好设置窗口虽然不是非常好看,但是已有改进它的计划:

![](https://2.bp.blogspot.com/-P-ophDpc-gI/V7Gi_54b7JI/AAAAAAAAYTA/AHQck_JF_RcwZ1KbgHbaO2JRt24ZZdO3gCLcB/s320/multiload-ng-preferences.png)

Multiload-ng 当前使用的是 GTK2,所以它不能在基于 GTK3 构建的 Xfce 或 MATE 桌面环境(面板)下工作。

对于 Ubuntu 系统而言,只有 Ubuntu MATE 16.10 使用 GTK3。但是鉴于 MATE 的系统监视器应用也是 Multiload GNOME 的一个分支,所以它们大多数的功能相同(除了 Multiload-ng 提供的额外自定义选项和温度图形块)。

该应用的[愿望清单][2]中提到了计划支持 GTK3 集成以及各种各样的改进,例如温度块数据的更多来源,能够显示十进制(KB、MB、GB……)或二进制(KiB、MiB、GiB……)单位等等。

### 安装 Multiload-ng

```
sudo apt install mate-multiload-ng-applet
sudo apt install multiload-ng-standalone
```

一旦安装完毕,便可以像其他应用那样添加到桌面面板中了。需要注意的是在 LXDE 中,Multiload-ng 不能马上出现在面板清单中,除非重新启动面板。你可以通过重启会话(登出后再登录)或者使用下面的命令来重启面板:

```
lxpanelctl restart
```

独立的 Multiload-ng 应用可以像其他正常应用那样从菜单中启动。

如果要下载源码或报告 bug 等,请看 Multiload-ng 的 [GitHub 页面][3]。

--------------------------------------------------------------------------------

via: http://www.webupd8.org/2016/08/alternative-system-monitor-applet-for.html

作者:[Andrew][a]
译者:[FSSlc](https://github.com/FSSlc)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
LinuxBars translating

Torvalds 2.0: Patricia Torvalds on computing, college, feminism, and increasing diversity in tech
================================================================================

![Image by : Photo by Becky Svartström. Modified by Opensource.com. CC BY-SA 4.0](http://opensource.com/sites/default/files/styles/image-full-size/public/images/life/osdc-lead-patriciatorvalds.png)
Image by : Photo by Becky Svartström. Modified by Opensource.com. [CC BY-SA 4.0][1]

Patricia Torvalds isn't the Torvalds name that pops up in Linux and open source circles. Yet.

![](http://opensource.com/sites/default/files/images/life-uploads/ptorvalds.png)

At 18, Patricia is a feminist with a growing list of tech achievements, open source industry experience, and her sights set on diving into her freshman year of college at Duke University's Pratt School of Engineering. She works for [Puppet Labs][2] in Portland, Oregon, as an intern, but soon she'll head to Durham, North Carolina, to start the fall semester of college.

In this exclusive interview, Patricia explains what got her interested in computer science and engineering (spoiler alert: it wasn't her father), what her high school did "right" with teaching tech, the important role feminism plays in her life, and her thoughts on the lack of diversity in technology.

![](http://opensource.com/sites/default/files/images/life/Interview%20banner%20Q%26A.png)

### What made you interested in studying computer science and engineering? ###

My interest in tech really grew throughout high school. I wanted to go into biology for a while, until around my sophomore year. I had a web design internship at the Portland VA after my sophomore year. And I took an engineering class called Exploratory Ventures, which sent an ROV into the Pacific Ocean late in my sophomore year, but the turning point was probably when I was named a regional winner and national runner-up for the [NCWIT Aspirations in Computing][3] award halfway through my junior year.

The award made me feel validated in my interest, of course, but I think the most important part of it was getting to join a Facebook group for all the award winners. The girls who have won the award are absolutely incredible and so supportive of each other. I was definitely interested in computer science before I won the award, because of my work in XV and at the VA, but having these girls to talk to solidified my interest and has kept it really strong. Teaching XV—more on that later—during my junior and senior years also made engineering and computer science really fun for me.

### What do you plan to study? And do you already know what you want to do after college? ###

I hope to major in either Mechanical or Electrical and Computer Engineering as well as Computer Science, and minor in Women's Studies. After college, I hope to work for a company that supports or creates technology for social good, or start my own company.

### My daughter had one high school programming class—Visual Basic. She was the only girl in her class, and she ended up getting harassed and having a miserable experience. What was your experience like? ###

My high school began offering computer science classes my senior year, and I took Visual Basic as well! The class wasn't bad, but I was definitely one of three or four girls in the class of 20 or so students. Other computing classes seemed to have similar gender breakdowns. However, my high school was extremely small and the teacher was supportive of inclusivity in tech, so there was no harassment that I noticed. Hopefully the classes become more diverse in future years.

### What did your schools do right technology-wise? And how could they have been better? ###

My high school gave us consistent access to computers, and teachers occasionally assigned technology-based assignments in unrelated classes—we had to create a website for a social studies class a few times—which I think is great because it exposes everyone to tech. The robotics club was also pretty active and well-funded, but fairly small; I was not a member. One very strong component of the school's technology/engineering program is actually a student-taught engineering class called Exploratory Ventures, which is a hands-on class that tackles a new engineering or computer science problem every year. I taught it for two years with a classmate of mine, and have had students come up to me and tell me they're interested in pursuing engineering or computer science as a result of the class.

However, my high school was not particularly focused on deliberately including young women in these programs, and it isn't very racially diverse. The computing-based classes and clubs were, by a vast majority, filled with white male students. This could definitely be improved on.

### Growing up, how did you use technology at home? ###

Honestly, when I was younger I used my computer time (my dad created a tracker, which logged us off after an hour of Internet use) to play Neopets or similar games. I guess I could have tried to mess with the tracker or played on the computer without Internet use, but I just didn't. I sometimes did little science projects with my dad, and I remember once printing "Hello world" in the terminal with him a thousand times, but mostly I just played online games with my sisters and didn't get my start in computing until high school.

### You were active in the Feminism Club at your high school. What did you learn from that experience? What feminist issues are most important to you now? ###

My friend and I co-founded Feminism Club at our high school late in our sophomore year. We did receive lots of resistance to the club at first, and while that never entirely went away, by the time we graduated feminist ideals were absolutely a part of the school's culture. The feminist work we did at my high school was generally on a more immediate scale and focused on issues like the dress code.

Personally, I'm very focused on intersectional feminism, which is feminism as it applies to other aspects of oppression like racism and classism. The Facebook page [Guerrilla Feminism][4] is a great example of intersectional feminism and has done so much to educate me. I currently run the Portland branch.

Feminism is also important to me in terms of diversity in tech, although as an upper-class white woman with strong connections in the tech world, the problems here affect me much less than they do other people. The same goes for my involvement in intersectional feminism. Publications like [Model View Culture][5] are very inspiring to me, and I admire Shanley Kane so much for what she does.

### What advice would you give parents who want to teach their children how to program? ###

Honestly, nobody ever pushed me into computer science or engineering. Like I said, for a long time I wanted to be a geneticist. I got a summer internship doing web design for the VA the summer after my sophomore year and totally changed my mind. So I don't know if I can fully answer that question.

I do think genuine interest is important, though. If my dad had sat me down in front of the computer and told me to configure a webserver when I was 12, I don't think I'd be interested in computer science. Instead, my parents gave me a lot of free rein to do what I wanted, which was mostly coding terrible little HTML sites for my Neopets. Neither of my younger sisters is interested in engineering or computer science, and my parents don't care. I'm really lucky my parents have given me and my sisters the encouragement and resources to explore our interests.

Still, I grew up saying my future career would be "like my dad's"—even when I didn't know what he did. He has a pretty cool job. Also, one time when I was in middle school, I told him that and he got a little choked up and said I wouldn't think that in high school. So I guess that motivated me a bit.

### What suggestions do you have for leaders in open source communities to help them attract and maintain a more diverse mix of contributors? ###

I'm actually not active in particular open source communities. I feel much more comfortable discussing computing with other women; I'm a member of the [NCWIT Aspirations in Computing][6] network and it's been one of the most important aspects of my continued interest in technology, as well as the Facebook group [Ladies Storm Hackathons][7].

I think this applies well to attracting and maintaining a talented and diverse mix of contributors: Safe spaces are important. I have seen the misogynistic and racist comments made in some open source communities, and subsequent dismissals when people point out the issues. I think that in maintaining a professional community there have to be strong standards on what constitutes harassment or inappropriate conduct. Of course, people can—and will—have a variety of opinions on what they should be able to express in open source communities, or any community. However, if community leaders actually want to attract and maintain diverse talent, they need to create a safe space and hold community members to high standards.

I also think that some community leaders just don't value diversity. It's really easy to argue that tech is a meritocracy, and the reason there are so few marginalized people in tech is just that they aren't interested, and that the problem comes from earlier on in the pipeline. They argue that if someone is good enough at their job, their gender or race or sexual orientation doesn't matter. That's the easy argument. But I was raised not to make excuses for mistakes. And I think the lack of diversity is a mistake, and that we should be taking responsibility for it and actively trying to make it better.

--------------------------------------------------------------------------------

via: http://opensource.com/life/15/8/patricia-torvalds-interview

作者:[Rikki Endsley][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://opensource.com/users/rikki-endsley
[1]:https://creativecommons.org/licenses/by-sa/4.0/
[2]:https://puppetlabs.com/
[3]:https://www.aspirations.org/
[4]:https://www.facebook.com/guerrillafeminism
[5]:https://modelviewculture.com/
[6]:https://www.aspirations.org/
[7]:https://www.facebook.com/groups/LadiesStormHackathons/
Translating by Chao-zhi

Ubuntu’s Snap, Red Hat’s Flatpak And Is ‘One Fits All’ Linux Packages Useful?
=================================================================================

![](http://www.iwillfolo.com/wordpress/wp-content/uploads/2016/06/Flatpak-and-Snap-Packages.jpg)

An in-depth look into the new generation of packages starting to permeate the Linux ecosystem.

Lately we’ve been hearing more and more about Ubuntu’s Snap packages and Flatpak (formerly referred to as xdg-app), created by Red Hat employee Alexander Larsson.

These two types of next-generation packages in essence share the same goal and characteristics: they are standalone packages that don’t rely on third-party system libraries in order to function.

This new direction in which Linux seems to be headed naturally gives rise to questions such as: what are the advantages / disadvantages of standalone packages? Does this lead us to a better Linux overall? What are the motives behind it?

To answer these questions and more, let us explore the things we know about Snap and Flatpak so far.

### The Motive

According to both [Flatpak][1] and [Snap][2] statements, the main motive behind them is to be able to bring one and the same version of an application to run across multiple Linux distributions.

>“From the very start its primary goal has been to allow the same application to run across a myriad of Linux distributions and operating systems.” Flatpak

>“… ‘snap’ universal Linux package format, enabling a single binary package to work perfectly and securely on any Linux desktop, server, cloud or device.” Snap

To be more specific, the guys behind Snap and Flatpak (S&F) believe that there’s a barrier of fragmentation on the Linux platform.

A barrier which holds back the platform’s advancement by burdening developers with more, perhaps unnecessary, work to get their software to run on the many distributions out there.

Therefore, as leading Linux distributions (Ubuntu & Red Hat), they wish to eliminate the barrier and strengthen the platform in general.

But what are the more personal gains which motivate the development of S&F?

#### Personal Gains?

Although not officially stated anywhere, it may be assumed that by leading the efforts of creating a unified package that could potentially be adopted by the vast majority of Linux distros (if not all of them), the captains of these projects could assume a key position in determining where the Linux ship sails.

### The Advantages

The benefits of standalone packages are diverse and can depend on different factors.

Basically, however, these factors can be categorized under two distinct criteria:

#### User Perspective

+ From a Linux user’s point of view, Snap and Flatpak both bring the possibility of installing any package (software / app) on any distribution the user is using.

That is, for instance, if you’re using a not-so-popular distribution which has only a scarce supply of packages available in its repo, probably due to workforce limitations, you’ll now be able to easily and significantly increase the number of packages available to you – which is a great thing.

+ Also, users of popular distributions that do have many packages available in their repos will enjoy the ability to install packages that might not have worked with their current set of installed libraries.

For example, a Debian user who wants to install a package from the ‘testing branch’ will not have to convert his entire system into ‘testing’ (in order for the package to run against newer libraries); rather, that user will simply be able to install only the package he wants, from whichever branch he likes, on whatever branch he’s on.

The latter point was already basically possible for users who compile their packages straight from source; however, unless they use a source-based distribution such as Gentoo, most users will see this as an unworthy hassle.

+ The advanced user, or perhaps better put, the security-aware user, might feel more comfortable with this type of package as long as it comes from a reliable source, since such packages tend to provide another layer of isolation: they are generally isolated from system packages.

* Both S&F are being developed with enhanced security in mind, which generally makes use of “sandboxing”, i.e. isolation, in order to prevent cases where they carry a virus which can infect the entire system, similar to the way .exe files on MS Windows may. (More on MS and S&F later.)

#### Developer Perspective

For developers, the advantages of developing S&F packages will probably be a lot clearer than they are to the average user; some of these were already hinted at in a previous section of this post.

Nonetheless, here they are:

+ S&F will make it easier on devs who want to develop for more than one Linux distribution by unifying the process of development, therefore minimizing the amount of work a developer needs to do in order to get his app running on multiple distributions.

++ Developers could therefore gain easier access to a wider range of distributions.

+ S&F allow devs to privately distribute their packages without being dependent on distribution maintainers to stabilize their package for each and every distro.

++ Through the above, devs may gain access to direct statistics of user adoption / engagement for their software.

++ Also through the above, devs could get more directly involved with users, rather than having to do so through a middleman, in this case, the distribution.

### The Downsides

– Bloat. Simple as that. Flatpak and Snap aren’t just magic making dependencies evaporate into thin air. Rather, instead of relying on the target system to provide the required dependencies, S&F packages come with the dependencies prebuilt into them.

As the saying goes, “if the mountain won’t come to Muhammad, Muhammad must go to the mountain…”

– Just as the security-aware user might enjoy the extra layer of isolation S&F packages provide, as long as they come from a trusted source, the less knowledgeable user, on the other hand, might be prone to the other side of the coin: the hazard of using a package from an unknown source which may contain malicious software.

The above point can be said to be valid even with today’s popular methods, since PPAs, overlays, etc. might also be maintained by untrusted sources.

However, with S&F packages the risk increases, since malicious software developers need to create only one version of their program in order to infect a large number of distributions, whereas without it they’d need to create multiple versions in order to adjust their malware to other distributions.

### Was Microsoft Right All Along?

With all that’s mentioned above in mind, it’s pretty clear that for the most part the advantages of using S&F packages outweigh the drawbacks.

At least for users of binary-based distributions, or distros not focused on being lightweight.

Which eventually led me to ask the above question: could it be that Microsoft was right all along? If so, and S&F becomes the Linux standard, would you still consider Linux a Unix-like variant?

Well apparently, the best one to answer those questions is probably time.

Nevertheless, I’d argue that even if not entirely right, MS certainly has a good point to their credit, and having all these methods available here on Linux out of the box is certainly a plus in my book.

--------------------------------------------------------------------------------

via: http://www.iwillfolo.com/ubuntus-snap-red-hats-flatpack-and-is-one-fits-all-linux-packages-useful/

作者:[Editorials][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: http://www.iwillfolo.com/category/editorials/
Adobe's new CIO shares leadership advice for starting a new role
====

![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/CIO_Leadership_3.png?itok=QWUGMw-V)

I’m currently a few months into a new CIO role at a highly-admired, cloud-based technology company. One of my first tasks was to get to know the organization’s people, culture, and priorities.

As part of that goal, I am visiting all the major IT sites. While in India, less than two months into the job, I was asked directly: “What are you going to do? What is your plan?” My response, which will not surprise seasoned CIOs, was that I was still in discovery mode, and I was there to listen and learn.

I’ve never gone into an organization with a set blueprint for what I’ll do. I know some CIOs have a playbook for how they will operate. They’ll come in and blow the whole organization up and put their set plan in motion.

Yes, there may be situations where things are massively broken and not working, so that course of action makes sense. Once I’m inside a company, however, my strategy is to go through a discovery process. I don’t want to have any preconceived notions about the way things should be or what’s working versus what’s not.

Here are my guiding principles as a newly-appointed leader:

### Get to know your people

This means building relationships, and it includes your IT staff as well as your business users and your top salespeople. What are the top things on their lists? What do they want you to focus on? What’s working well? What’s not? How is the customer experience? Knowing how you can help everyone be more successful will help you shape the way you deliver services to them.

If your department is spread across several floors, as mine is, consider meet-and-greet lunches or mini tech fairs so people can introduce themselves, discuss what they’re working on, and share stories about their family, if they feel comfortable doing that. If you have an open-door office policy, make sure they know that as well. If your staff spreads across countries or continents, get out there and visit as soon as you reasonably can.

### Get to know your products and company culture

One of the things that surprised me coming into Adobe was how broad our product portfolio is. We have a platform of solutions and services across three clouds – Adobe Creative Cloud, Document Cloud and Marketing Cloud – and a vast portfolio of products within each. You’ll never know how much opportunity your new company presents until you get to know your products and learn how to support all of them. At Adobe we use many of our digital media and digital marketing solutions as Customer Zero, so we have first-hand experiences to share with our customers.

### Get to know customers

Very early on, I started getting requests to meet with customers. Meeting with customers is a great way to jump-start your thinking into the future of the IT organization, which includes the different types of technologies, customers, and consumers we could have going forward.

### Plan for the future

As a new leader, I have a fresh perspective and can think about the future of the organization without getting distracted by challenges or obstacles.

What CIOs need to do is jump-start IT into its next generation. When I meet my staff, I’m asking them what we want to be three to five years out so we can start positioning ourselves for that future. That means discussing the initiatives and priorities.

After that, it makes sense to bring the leadership team together so you can work to co-create the next generation of the organization – its mission, vision, modes of alignment, and operating norms. If you start changing IT from the inside out, it will percolate into business and everything else you do.

Through this whole process, I’ve been very open with people that this is not going to be a top-down directive. I have ideas on priorities and what we need to focus on, but we have to be in lockstep, working as a team and figuring out what we want to do jointly.

--------------------------------------------------------------------------------

via: https://enterprisersproject.com/article/2016/9/adobes-new-cio-shares-leadership-advice-starting-new-role

作者:[Cynthia Stoddard][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://enterprisersproject.com/user/cynthia-stoddard
Linus Torvalds reveals his favorite programming laptop
====

>It's the Dell XPS 13 Developer Edition. Here's why.

I recently talked with some Linux developers about what the best laptop is for serious programmers. As a result I checked out several laptops from a programmer's viewpoint. The winner in my book? The 2016 Dell XPS 13 Developer Edition. I'm in good company. Linus Torvalds, Linux's creator, agrees. The Dell XPS 13 Developer Edition, for him, is the best laptop around.

![](http://zdnet3.cbsistatic.com/hub/i/r/2016/07/18/702609c3-db38-4603-9f5f-4dcc3d71b140/resize/770xauto/50a8ba1c2acb1f0994aec2115d2e55ce/2016-dell-xps-13.jpg)

Torvalds' requirements may not be yours, though.

On Google+, Torvalds explained, "First off: [I don't use my laptop as a desktop replacement][1], and I only travel for a small handful of events each year. So for me, the laptop is a fairly specialized thing that doesn't get daily (or even weekly) use, so the main criteria are not some kind of 'average daily use', but very much 'travel use'."

Therefore, for Torvalds, "I end up caring a lot about it being fairly small and light, because I may end up carrying it around all day at a conference. I also want it to have a good screen, because by now I'm just used to it at my main desktop, and I want my text to be legible but small."

The Dell's display is powered by Intel's Iris 540 GPU. In my experience it works really well.

The Iris powers a 13.3 inch display with a 3,200×1,800 touchscreen. That's 280 pixels per inch, 40 more than my beloved [2015 Chromebook Pixel][2] and 60 more than a [MacBook Pro with Retina][3].

However, getting that hardware to work and play well with the [Gnome][4] desktop isn't easy. As Torvalds explained in another post, it "has the [same resolution as my desktop][5], but apparently because the laptop screen is smaller, Gnome seems to decide on its own that I need an automatic scaling factor of 2, which blows up all the stupid things (window decorations, icons etc) to a ridiculous degree".

The solution? You can forget about looking in the user interface; you need to go to the shell and run the following command:
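```
gsettings set org.gnome.desktop.interface scaling-factor 1
```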
Torvalds may use Gnome, but he's [never liked the Gnome 3.x family much][6]. I can't argue with him. That's why I use [Cinnamon][7] instead.

He also wants "a reasonably powerful CPU, because when I'm traveling I still build the kernel a lot. I don't do my normal full 'make allmodconfig' build between each pull request like I do at home, but I'd like to do it more often than I did with my previous laptop, which is actually (along with the screen) the main reason I wanted to upgrade."
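For readers who haven't run that particular workflow, an "allmodconfig" kernel build looks roughly like this (a sketch; the source directory name and job count are assumptions):

```
cd linux                 # a checkout of the kernel source tree
make allmodconfig        # generate a config with as much as possible built as modules
make -j"$(nproc)"        # compile using every available CPU core
```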
Linus doesn't describe the features of his XPS 13, but my review unit was a high-end model. It came with a dual-core, 2.2GHz 6th Generation Intel Core i7-6560U Skylake processor and 16GB of DDR3 RAM with a half-terabyte PCIe solid-state drive (SSD). I'm sure Torvalds' system is at least that well-equipped.

Some features you may care about aren't on Torvalds' list.

>"What I don't tend to care about is touch-screens, because my fingers are big and clumsy compared to the text I'm looking at (I also can't handle the smudges: maybe I just have particularly oily fingers, but I really don't want to touch that screen).

>I also don't care deeply about some 'all day battery life', because quite frankly, I can't recall the last time I didn't have access to power. I might not want to bother to plug it in for some quick check, but it's just not a big overwhelming issue. By the time battery life is in 'more than a couple of hours', I just don't care very much any more."

Dell claims the XPS 13, with its 56WHr 4-cell battery, has about a 12-hour battery life. It has lasted well over 10 hours in my experience. I haven't tried to run it down to the dregs.

Torvalds also didn't have any trouble with the Intel Wi-Fi chipset. The non-Developer Edition uses a Broadcom chipset, and that has proven troublesome for both Windows and Linux users. Dell technical support was extremely helpful to me in getting this problem under control.

Some people have trouble with the XPS 13 touchpad. Neither I nor Torvalds have any worries. Torvalds wrote, "The XPS13 touchpad works very well for me. That may be a personal preference thing, but it seems to be both smooth and responsive."

Still, while Torvalds likes the XPS 13, he's also fond of the latest Lenovo X1 Carbon, HP Spectre 13 x360, and last year's Lenovo Yoga 900. Me? I like the XPS 13 Developer Edition. The price tag, which for the model I reviewed was $1949.99, may keep you from reaching for your credit card.

Still, if you want to develop like one of the world's top programmers, the Dell XPS 13 Developer Edition is worth the money.

--------------------------------------------------------------------------------

via: http://www.zdnet.com/article/linus-torvalds-reveals-his-favorite-programming-laptop/

作者:[Steven J. Vaughan-Nichols][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/
[1]: https://plus.google.com/+LinusTorvalds/posts/VZj8vxXdtfe
[2]: http://www.zdnet.com/article/the-best-chromebook-ever-the-chromebook-pixel-2015/
[3]: http://www.zdnet.com/product/apple-15-inch-macbook-pro-with-retina-display-mid-2015/
[4]: https://www.gnome.org/
[5]: https://plus.google.com/+LinusTorvalds/posts/d7nfnWSXjfD
[6]: http://www.zdnet.com/article/linus-torvalds-finds-gnome-3-4-to-be-a-total-user-experience-design-failure/
[7]: http://www.zdnet.com/article/how-to-customise-your-linux-desktop-cinnamon/
Should Smartphones Do Away with the Headphone Jack? Here Are Our Thoughts
====

![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/09/Writers-Opinion-Headphone-Featured.jpg)

Even though Apple's removal of the headphone jack from the iPhone 7 had been long rumored, the official announcement last week confirming the news still made it a hot topic.

For those not in the know on this latest news, Apple has removed the headphone jack from the phone, and the headphones will now plug into the Lightning port. Those who still want to use their existing headphones may, as there is an adapter that ships with the phone along with the Lightning headphones. Apple is also selling a new product: AirPods. These are wireless and are inserted into your ear. The biggest advantage is that by eliminating the jack they were able to make the phone dust- and water-resistant.

Since it's such a big story right now, we asked our writers, "What are your thoughts on smartphones doing away with the headphone jack?"

### Our Opinion

Derrik believes that "Apple's way of doing it is a play to push more expensive peripherals that do not comply with an open standard." He also doesn't want to have to charge something every five hours, meaning the AirPods. While he understands that the 3.5mm jack is aging, as an "audiophile" he would love a new, open standard, but "proprietary pushes" worry him about device freedom.

![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/09/headphone-jacks.jpg)

Damien doesn't really even use the headphone jack these days as he has Bluetooth headphones. He hates that wire anyway, so feels "this is a good move." Yet he also understands Derrik's point about the wireless headphones running out of battery, leaving him with "nothing to fall back on."

Trevor is very upfront in saying he thought it was "dumb" until he heard you couldn't charge the phone and use the headphones at the same time and realized it was "dumb X 2." He uses the headphones/headphone jack all the time in a work van without Bluetooth and listens to audio or podcasts. He uses the plug-in style as Bluetooth drains his battery.

Simon is not a big fan. He hasn't seen much reasoning past it leaving more room within the device. He figures "it will then come down to whether or not consumers favor wireless headphones, an adapter, and water-resistance over not being locked into AirPods, Lightning, or an adapter". He fears it might be "too early to jump into removing ports" and likes a "one pair fits all" standard.

James believes that wireless technology is progressive, so he sees it as a good move "especially for Apple in terms of hardware sales." He happens to use expensive headphones, so personally he's "yet to be convinced," noting his Xperia is waterproof and has a jack.

Jeffry points out that "almost every transition attempt in the tech world always starts with strong opposition from those who won't benefit from the changes." He remembers the flak Apple received when they removed the floppy disk drive and decided not to support Flash, and now both are industry standards. He believes everything is evolving for the better; removing the audio jack is "just the first step toward the future," and Apple is just the one who is "brave enough to lead the way (and make a few bucks in doing so)."

![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/09/Writers-Opinion-Headphone-Headset.jpg)

Vamsi doesn't mind the removal of the headphone jack as long as there is a "better solution applicable to all the users that use different headphones and other gadgets." He doesn't feel using headphones via a Lightning port is a good solution as it renders nearly all other headphones obsolete. Regarding Bluetooth headphones, he just doesn't want to deal with another gadget. Additionally, he doesn't get the argument of it being good for water resistance since there are existing devices with headphone jacks that are water resistant.

Derrik chimed back in to say that Apple is "removing open standard ports and using a proprietary connection" instead. You can be technical and say there are adapters, but Thunderbolt is also closed, and Apple can stop selling those adapters at any time. He also notes that the AirPods won't be Bluetooth.

![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/09/Writers-Opinion-Headphone-AirPods.jpg)

As for me, I'm always up for two things: new technology and anything Apple. I've been using iPhones since a few weeks past the very first model being introduced, yet I haven't updated since 2012 and the iPhone 5, so I was overdue. I'll be among the first to get my hands on the iPhone 7. I hate that stupid white wire being in my face, so I just might be opting for AirPods at some point. I am very appreciative of the phone becoming water-resistant. As for charging vs. listening, the charge on new iPhones lasts so long that I don't expect it to be much of a problem. Even my old iPhone 5 usually lasts about twenty hours on a good day and twelve hours on a bad day.

### Your Opinion

Our writers have given you a lot to think about. What are your thoughts on smartphones doing away with the headphone jack? Will you miss it? Is it a deal breaker for you? Or do you relish the upgrade in technology? Will you be trying the iPhone 7 or the AirPods? Let us know in the comments below.

--------------------------------------------------------------------------------

via: https://www.maketecheasier.com/should-smartphones-do-away-with-the-headphone-jack/?utm_medium=feed&utm_source=feedpress.me&utm_campaign=Feed%3A+maketecheasier

作者:[Laura Tucker][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.maketecheasier.com/author/lauratucker/
What the rise of permissive open source licenses means
====

Why restrictive licenses such as the GNU GPL are steadily falling out of favor.

"If you use any open source software, you have to make the rest of your software open source." That's what former Microsoft CEO Steve Ballmer said back in 2001, and while his statement was never true, it must have spread some FUD (fear, uncertainty and doubt) about free software. Probably that was the intention.

This FUD about open source software is mainly about open source licensing. There are many different licenses, some more restrictive (some people use the term "protective") than others. Restrictive licenses such as the GNU General Public License (GPL) use the concept of copyleft, which grants people the right to freely distribute copies and modified versions of a piece of software as long as the same rights are preserved in derivative works. The GPL (v3) is used by open source projects such as bash and GIMP. There's also the Affero GPL, which provides copyleft to software that is offered over a network (for example, as a web service).

What this means is that if you take code that is licensed in this way and you modify it by adding some of your own proprietary code, then in some circumstances the whole new body of code, including your code, becomes subject to the restrictive open source license. It was this type of license that Ballmer was probably referring to when he made his statement.

But permissive licenses are a different animal. The MIT License, for example, lets anyone take open source code and do what they want with it — including modifying and selling it — as long as they provide attribution and don't hold the developer liable. Another popular permissive open source license, the Apache License 2.0, also provides an express grant of patent rights from contributors to users. jQuery, the .NET Core and Rails are licensed using the MIT license, while the Apache 2.0 license is used by software including Android, Apache and Swift.
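To make "permissive" concrete, the entire operative grant of the MIT License fits in a couple of sentences, quoted here (abridged) for reference:

```
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software ...

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
```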
Ultimately both license types are intended to make software more useful. Restrictive licenses aim to foster the open source ideals of participation and sharing so everyone gets the maximum benefit from software. And permissive licenses aim to ensure that people can get the maximum benefit from software by allowing them to do what they want with it — even if that means they take the code, modify it and keep it for themselves or even sell the resulting work as proprietary software without contributing anything back.

Figures compiled by open source license management company Black Duck Software show that the restrictive GPL 2.0 was the most commonly used open source license last year with about 25 percent of the market. The permissive MIT and Apache 2.0 licenses were next with about 18 percent and 16 percent respectively, followed by the GPL 3.0 with about 10 percent. That's almost evenly split at 35 percent restrictive and 34 percent permissive.

But this snapshot misses the trend. Black Duck's data shows that in the six years from 2009 to 2015 the MIT license's share of the market has gone up 15.7 percent and Apache's share has gone up 12.4 percent. GPL v2 and v3's share during the same period has dropped by a staggering 21.4 percent. In other words there was a significant move away from restrictive licenses and towards permissive ones during that period.

And the trend is continuing. Black Duck's [latest figures][1] show that MIT is now at 26 percent, GPL v2 21 percent, Apache 2 16 percent, and GPL v3 9 percent. That's 30 percent restrictive, 42 percent permissive — a huge swing from last year's 35 percent restrictive and 34 percent permissive. Separate [research][2] of the licenses used on GitHub appears to confirm this shift. It shows that MIT is overwhelmingly the most popular license with a 45 percent share, compared to GPL v2 with just 13 percent and Apache with 11 percent.

![](http://images.techhive.com/images/article/2016/09/open-source-licenses.jpg-100682571-large.idge.jpeg)

### Driving the trend

What’s behind this mass move from restrictive to permissive licenses? Do companies fear that if they let restrictive software into the house they will lose control of their proprietary software, as Ballmer warned? In fact, that may well be the case. Google, for example, has [banned Affero GPL software][3] from its operations.

Jim Farmer, chairman of [Instructional Media + Magic][4], a developer of open source technology for education, believes that many companies avoid restrictive licenses to avoid legal difficulties. "The problem is really about complexity. The more complexity in a license, the more chance there is that someone has a cause of action to bring you to court. Complexity makes litigation more likely," he says.

He adds that fear of restrictive licenses is being driven by lawyers, many of whom recommend that clients use software that is licensed with the MIT or Apache 2.0 licenses, and who specifically warn against the Affero license.

This has a knock-on effect with software developers, he says, because if companies avoid software with restrictive licenses then developers have more incentive to license their new software with permissive ones if they want it to get used.

But Greg Soper, CEO of SalesAgility, the company behind the open source SuiteCRM, believes that the move towards permissive licenses is also being driven by some developers. "Look at an application like Rocket.Chat. The developers could have licensed that with GPL 2.0 or Affero but they chose a permissive license," he says. "That gives the app the widest possible opportunity, because a proprietary vendor can take it and not harm their product or expose it to an open source license. So if a developer wants an application to be used inside a third-party application it makes sense to use a permissive license."

Soper points out that restrictive licenses are designed to help an open source project succeed by stopping developers from taking other people's code, working on it, and then not sharing the results back with the community. "The Affero license is critical to the health of our product because if people could make a fork that was better than ours and not give the code back that would kill our product," he says. "For Rocket.Chat it's different because if it used Affero then it would pollute companies' IP and so it wouldn't get used. Different licenses have different use cases."

Michael Meeks, an open source developer who has worked on Gnome, OpenOffice and now LibreOffice, agrees with Jim Farmer that many companies do choose to use software with permissive licenses for fear of legal action. "There are risks with copyleft licenses, but there are also huge benefits. Unfortunately people listen to lawyers, and lawyers talk about risk but they never tell you that something is safe."

Fifteen years after Ballmer made his inaccurate statement it seems that the FUD it generated is still having an effect — even if the move from restrictive licenses to permissive ones is not quite the effect he intended.

--------------------------------------------------------------------------------

via: http://www.cio.com/article/3120235/open-source-tools/what-the-rise-of-permissive-open-source-licenses-means.html

作者:[Paul Rubens][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: http://www.cio.com/author/Paul-Rubens/
[1]: https://www.blackducksoftware.com/top-open-source-licenses
[2]: https://github.com/blog/1964-open-source-license-usage-on-github-com
[3]: http://www.theregister.co.uk/2011/03/31/google_on_open_source_licenses/
[4]: http://immagic.com/
Setup honeypot in Kali Linux
====

Pentbox is a security kit containing various tools that streamline the job of conducting a pentest. It is programmed in Ruby and oriented to GNU/Linux, with support for Windows, macOS and every system where Ruby is installed. In this short article we will explain how to set up a honeypot in Kali Linux. If you don't know what a honeypot is: "a honeypot is a computer security mechanism set to detect, deflect, or, in some manner, counteract attempts at unauthorized use of information systems."

### Download Pentbox:

Simply type in the following command in your terminal to download pentbox-1.8.

```
root@kali:~# wget http://downloads.sourceforge.net/project/pentbox18realised/pentbox-1.8.tar.gz
```

![](https://www.blackmoreops.com/wp-content/uploads/2016/05/Set-up-a-honeypot-in-Kali-Linux-blackMORE-Ops-1.jpg)

### Uncompress pentbox files

Decompress the file with the following command:

```
root@kali:~# tar -zxvf pentbox-1.8.tar.gz
```

![](https://www.blackmoreops.com/wp-content/uploads/2016/05/Set-up-a-honeypot-in-Kali-Linux-blackMORE-Ops-2.jpg)

### Run the pentbox Ruby script

Change directory into the pentbox folder:

```
root@kali:~# cd pentbox-1.8/
```

![](https://www.blackmoreops.com/wp-content/uploads/2016/05/Set-up-a-honeypot-in-Kali-Linux-blackMORE-Ops-3.jpg)

Run pentbox using the following command:

```
root@kali:~# ./pentbox.rb
```

![](https://www.blackmoreops.com/wp-content/uploads/2016/05/Set-up-a-honeypot-in-Kali-Linux-blackMORE-Ops-4.jpg)

### Set up a honeypot

Use option 2 (Network Tools) and then option 3 (Honeypot).

![](https://www.blackmoreops.com/wp-content/uploads/2016/05/Set-up-a-honeypot-in-Kali-Linux-blackMORE-Ops-5.jpg)

Finally, for a first test, choose option 1 (Fast Auto Configuration).

![](https://www.blackmoreops.com/wp-content/uploads/2016/05/Set-up-a-honeypot-in-Kali-Linux-blackMORE-Ops-6.jpg)

This opens up a honeypot on port 80. Simply open a browser and browse to http://192.168.160.128 (where 192.168.160.128 is your IP address). You should see an Access denied error.
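If you prefer to test from a terminal instead of a browser, a plain HTTP request from another machine triggers the same alert (a sketch; substitute the IP address of your own honeypot):

```
# assumes the honeypot machine is reachable at 192.168.160.128
curl -i http://192.168.160.128/    # fetches the "Access denied" page and logs an intrusion attempt
```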
![](https://www.blackmoreops.com/wp-content/uploads/2016/05/Set-up-a-honeypot-in-Kali-Linux-blackMORE-Ops-7.jpg)

And in the terminal you should see "HONEYPOT ACTIVATED ON PORT 80" followed by "INTRUSION ATTEMPT DETECTED".

![](https://www.blackmoreops.com/wp-content/uploads/2016/05/Set-up-a-honeypot-in-Kali-Linux-blackMORE-Ops-8.jpg)

Now, if you do the same steps but this time select option 2 (Manual Configuration), you should see more options:

![](https://www.blackmoreops.com/wp-content/uploads/2016/05/Set-up-a-honeypot-in-Kali-Linux-blackMORE-Ops-9.jpg)

Do the same steps but select port 22 this time (the SSH port). Then set up port forwarding in your home router to forward external port 22 to this machine's port 22. Alternatively, set it up on a VPS at your cloud provider.

You'd be amazed how many bots out there are continuously scanning for SSH. You know what you do then? You try to hack them back, for the lulz!

Here's a video of setting up a honeypot, if video is your thing:

<https://youtu.be/NufOMiktplA>

--------------------------------------------------------------------------------

via: https://www.blackmoreops.com/2016/05/06/setup-honeypot-in-kali-linux/

作者:[blackmoreops.com][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: blackmoreops.com
@ -0,0 +1,219 @@

Understanding Different Classifications of Shell Commands and Their Usage in Linux
====

When it comes to gaining absolute control over your Linux system, nothing comes close to the command line interface (CLI). In order to become a Linux power user, one must understand the [different types of shell commands][1] and the appropriate ways of using them from the terminal.

In Linux, there are several types of commands, and for a new Linux user, knowing the meaning of the different commands enables efficient and precise usage. Therefore, in this article, we shall walk through the various classifications of shell commands in Linux.

One important thing to note is that the command line interface is different from the shell; it only provides a means for you to access the shell. The shell, which is also programmable, then makes it possible to communicate with the kernel using commands.

Linux commands fall under the following classifications:

### 1. Program Executables (File System Commands)

When you run a command, Linux searches through the directories stored in the $PATH environment variable from left to right for the executable of that specific command.

You can view the directories in the $PATH as follows:

```
$ echo $PATH
/home/aaronkilik/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
```

In the above order, the directory /home/aaronkilik/bin will be searched first, followed by /usr/local/sbin, and so on; the order is significant in the search process.
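
If you want to see which executable in the $PATH a given command resolves to, the shell can tell you directly; a quick check in bash:

```
$ type -a ls      # shows every definition of ls: alias first, then /bin/ls
$ command -v tar  # prints the path of the executable that would actually run
```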

Examples of file system commands in the /bin directory:

```
$ ll /bin/
```

Sample Output

```
total 16284
drwxr-xr-x 2 root root 4096 Jul 31 16:30 ./
drwxr-xr-x 23 root root 4096 Jul 31 16:29 ../
-rwxr-xr-x 1 root root 6456 Apr 14 18:53 archdetect*
-rwxr-xr-x 1 root root 1037440 May 17 16:15 bash*
-rwxr-xr-x 1 root root 520992 Jan 20 2016 btrfs*
-rwxr-xr-x 1 root root 249464 Jan 20 2016 btrfs-calc-size*
lrwxrwxrwx 1 root root 5 Jul 31 16:19 btrfsck -> btrfs*
-rwxr-xr-x 1 root root 278376 Jan 20 2016 btrfs-convert*
-rwxr-xr-x 1 root root 249464 Jan 20 2016 btrfs-debug-tree*
-rwxr-xr-x 1 root root 245368 Jan 20 2016 btrfs-find-root*
-rwxr-xr-x 1 root root 270136 Jan 20 2016 btrfs-image*
-rwxr-xr-x 1 root root 249464 Jan 20 2016 btrfs-map-logical*
-rwxr-xr-x 1 root root 245368 Jan 20 2016 btrfs-select-super*
-rwxr-xr-x 1 root root 253816 Jan 20 2016 btrfs-show-super*
-rwxr-xr-x 1 root root 249464 Jan 20 2016 btrfstune*
-rwxr-xr-x 1 root root 245368 Jan 20 2016 btrfs-zero-log*
-rwxr-xr-x 1 root root 31288 May 20 2015 bunzip2*
-rwxr-xr-x 1 root root 1964536 Aug 19 2015 busybox*
-rwxr-xr-x 1 root root 31288 May 20 2015 bzcat*
lrwxrwxrwx 1 root root 6 Jul 31 16:19 bzcmp -> bzdiff*
-rwxr-xr-x 1 root root 2140 May 20 2015 bzdiff*
lrwxrwxrwx 1 root root 6 Jul 31 16:19 bzegrep -> bzgrep*
-rwxr-xr-x 1 root root 4877 May 20 2015 bzexe*
lrwxrwxrwx 1 root root 6 Jul 31 16:19 bzfgrep -> bzgrep*
-rwxr-xr-x 1 root root 3642 May 20 2015 bzgrep*
```

### 2. Linux Aliases

These are user-defined commands, created using the alias shell built-in command, and they contain other shell commands with some options and arguments. The idea is basically to use new and short names for lengthy commands.

The syntax for creating an alias is as follows:

```
$ alias newcommand='command -options'
```

To list all aliases on your system, issue the command below:

```
$ alias -p
alias alert='notify-send --urgency=low -i "$([ $? = 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e '\''s/^\s*[0-9]\+\s*//;s/[;&|]\s*alert$//'\'')"'
alias egrep='egrep --color=auto'
alias fgrep='fgrep --color=auto'
alias grep='grep --color=auto'
alias l='ls -CF'
alias la='ls -A'
alias ll='ls -alF'
alias ls='ls --color=auto'
```

To create a new alias in Linux, go through the examples below.

```
$ alias update='sudo apt update'
$ alias upgrade='sudo apt dist-upgrade'
$ alias -p | grep 'up'
```

![](http://www.tecmint.com/wp-content/uploads/2016/08/Create-Aliase-in-Linux.png)

However, the aliases we have created above only work temporarily; when the system is restarted, they will be gone after the next boot. You can set permanent aliases in your `.bashrc` file as shown below.

![](http://www.tecmint.com/wp-content/uploads/2016/08/Set-Linux-Aliases-Permanent.png)

After adding them, run the command below to activate them:

```
$ source ~/.bashrc
```
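
For example, the two aliases created earlier can be made permanent from the shell itself; a small sketch (adjust the file name if your shell uses a different startup file):

```
# Append the aliases to ~/.bashrc, then reload it in the current shell.
echo "alias update='sudo apt update'" >> ~/.bashrc
echo "alias upgrade='sudo apt dist-upgrade'" >> ~/.bashrc
source ~/.bashrc
```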

### 3. Linux Shell Reserved Words

In shell programming, words such as if, then, fi, for, while, case, esac, else, until and many others are shell reserved words. As the description implies, they have a specialized meaning to the shell.

You can list all Linux shell keywords using the type command as shown:

```
$ type if then fi for while case esac else until
if is a shell keyword
then is a shell keyword
fi is a shell keyword
for is a shell keyword
while is a shell keyword
case is a shell keyword
esac is a shell keyword
else is a shell keyword
until is a shell keyword
```

Suggested Read: 10 Useful Linux Chaining Operators with Practical Examples

### 4. Linux Shell Functions

A shell function is a group of commands that are executed collectively within the current shell. Functions help to carry out a specific task in a shell script. The conventional form of writing shell functions in a script is:

```
function_name() {
command1
command2
…….
}
```

Alternatively,

```
function function_name {
command1
command2
…….
}
```

Let’s take a look at how to write shell functions in a script named shell_functions.sh.

```
#!/bin/bash
#write a shell function to update and upgrade installed packages
upgrade_system(){
sudo apt update;
sudo apt dist-upgrade;
}
#execute function
upgrade_system
```

Instead of executing the two commands (sudo apt update and sudo apt dist-upgrade) from the command line, we have written a simple shell function that executes them as a single command, upgrade_system, within a script.

Save the file and, thereafter, make the script executable. Finally, run it as shown below:

```
$ chmod +x shell_functions.sh
$ ./shell_functions.sh
```

![](http://www.tecmint.com/wp-content/uploads/2016/08/Linux-Shell-Functions-Script.png)
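
Shell functions can also take positional arguments ($1, $2 and so on), which makes them behave like small custom commands. Here is a brief sketch with a hypothetical backup_dir function:

```
#!/bin/bash
# backup_dir: archive the directory given as the first argument.
backup_dir() {
    tar -zcvf "$1-$(date +%Y-%m-%d).tgz" "$1"
}

# Usage: creates e.g. mydata-2016-09-10.tgz in the current directory.
backup_dir mydata
```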

### 5. Linux Shell Built-in Commands

These are Linux commands that are built into the shell, thus you cannot find them within the file system. They include pwd, cd, bg, alias, history, type, source, read, exit and many others.

You can list or check Linux built-in commands using the type command as shown:

```
$ type pwd
pwd is a shell builtin
$ type cd
cd is a shell builtin
$ type bg
bg is a shell builtin
$ type alias
alias is a shell builtin
$ type history
history is a shell builtin
```
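
If you want the full list in one shot rather than checking names one by one, bash provides the compgen built-in for exactly this purpose (bash-specific):

```
# List every built-in command known to the current bash.
compgen -b
```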

Learn about the usage of some Linux built-in commands:

- [15 ‘pwd’ Command Examples in Linux][2]
- [15 ‘cd’ Command Examples in Linux][3]
- [Learn The Power of Linux ‘history’ Command][4]

### Conclusion

As a Linux user, it is always important to know the type of command you are running. I believe that, with the precise and simple-to-understand explanation above, including a few relevant illustrations, you now have a good understanding of the [various categories of Linux commands][5].

You can as well get in touch through the comment section below for any questions or supplementary ideas that you would like to offer us.

--------------------------------------------------------------------------------

via: http://linoxide.com/firewall/pfsense-setup-basic-configuration/

作者:[Aaron Kili][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: http://www.tecmint.com/author/aaronkili/
[1]: http://www.tecmint.com/different-types-of-linux-shells/
[2]: http://www.tecmint.com/pwd-command-examples/
[3]: http://www.tecmint.com/cd-command-in-linux/
[4]: http://www.tecmint.com/history-command-examples/
[5]: http://www.tecmint.com/category/linux-commands/

@ -0,0 +1,78 @@

translating by ucasFL

4 Best Linux Boot Loaders
====

When you turn on your machine, immediately after POST (Power On Self Test) completes successfully, the BIOS locates the configured bootable media and reads some instructions from the master boot record (MBR), which is the first 512 bytes of the bootable media, or from the GUID Partition Table (GPT). The MBR contains two important sets of information: one is the boot loader and the other is the partition table.

### What is a Boot Loader?

A boot loader is a small program stored in the MBR or GUID partition table that helps to load an operating system into memory. Without a boot loader, your operating system cannot be loaded into memory.

There are several boot loaders we can install together with Linux on our systems, and in this article, we shall briefly talk about a handful of the best Linux boot loaders to work with.

### 1. GNU GRUB

GNU GRUB is a popular and probably the most used multiboot Linux boot loader available, based on the original GRUB (GRand Unified Bootloader), which was created by Erich Stefan Boleyn. It comes with several improvements, new features and bug fixes as enhancements of the original GRUB program.

Importantly, GRUB 2 has now replaced the original GRUB. Notably, the original GRUB was renamed GRUB Legacy and is not actively developed; however, it can still be used for booting older systems, since bug fixes are ongoing.
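
If you are not sure which of the two your distribution ships, the installed version can usually be checked from a terminal:

```
# Prints the installed GRUB version, e.g. "grub-install (GRUB) 2.02"
grub-install --version
```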

GRUB has the following prominent features:

- Supports multiboot
- Supports multiple hardware architectures and operating systems, such as Linux and Windows
- Offers a Bash-like interactive command line interface for users to run GRUB commands as well as interact with configuration files
- Enables access to the GRUB editor
- Supports setting passwords with encryption for security
- Supports booting from a network, along with several other minor features

Visit Homepage: <https://www.gnu.org/software/grub/>

### 2. LILO (Linux Loader)

LILO is a simple yet powerful and stable Linux boot loader. With the growing popularity and use of GRUB, which has come with numerous improvements and powerful features, LILO has become less popular among Linux users.

While it loads, the word “LILO” is displayed on the screen, and each letter appears before or after a particular event has occurred. Development of LILO was stopped in December 2015; its notable characteristics include:

- Does not offer an interactive command line interface
- Supports several error codes
- Offers no support for booting from a network
- All its files are stored in the first 1024 cylinders of a drive
- Faces limitations with BTRFS, GPT, RAID and many more

Visit Homepage: <http://lilo.alioth.debian.org/>

### 3. BURG – New Boot Loader

Based on GRUB, BURG is a relatively new Linux boot loader. Because it is derived from GRUB, it ships with some of the primary GRUB features; nonetheless, it also offers remarkable features such as a new object format to support multiple platforms, including Linux, Windows, Mac OS, FreeBSD and beyond.

Additionally, it supports a highly configurable text- and graphical-mode boot menu, with planned future improvements for it to work with various input/output devices.

Visit Homepage: <https://launchpad.net/burg>

### 4. Syslinux

Syslinux is an assortment of lightweight boot loaders that enable booting from CD-ROMs, from a network and so on. It supports filesystems such as FAT for MS-DOS, and ext2, ext3 and ext4 for Linux. It also supports uncompressed single-device Btrfs.

Note that Syslinux only accesses files in its own partition; therefore, it does not offer multi-filesystem boot capabilities.

Visit Homepage: <http://www.syslinux.org/wiki/index.php?title=The_Syslinux_Project>

### Conclusion

A boot loader allows you to manage multiple operating systems on your machine and select which one to use at a particular time; without it, your machine cannot load the kernel and the rest of the operating system files.

Have we missed any tip-top Linux boot loader here? If so, let us know in the comment form below by suggesting any commendable boot loaders that support the Linux operating system.

--------------------------------------------------------------------------------

via: http://linoxide.com/firewall/pfsense-setup-basic-configuration/

作者:[Aaron Kili][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: http://www.tecmint.com/best-linux-boot-loaders/

@ -0,0 +1,308 @@

17 tar command practical examples in Linux
=====

Tar (tape archive) is the most widely used command in Unix-like operating systems for creating an archive of multiple files and folders into a single archive file, and that archive file can be further compressed using gzip and bzip2 techniques. In other words, we can say that the tar command is used to take backups by archiving multiple files and directories into a single tar or archive file, and later on, files & directories can be extracted from the tar or compressed file.

In this article we will discuss 17 practical examples of the tar command in Linux.

Syntax of the tar command:

```
# tar <options> <files>
```

Some of the commonly used options in the tar command are listed below:

![](http://www.linuxtechi.com/wp-content/uploads/2016/09/tar-command-options.jpg)

Note: the hyphen (-) in tar command options is optional.

### Example: 1 Create a tar archive file

Let’s create a tar file of the /etc directory and the ‘/root/anaconda-ks.cfg’ file.

```
[root@linuxtechi ~]# tar -cvf myarchive.tar /etc /root/anaconda-ks.cfg
```

The above command will create a tar file named “myarchive.tar” in the current folder. The tar file contains all the files and directories of the /etc folder and the anaconda-ks.cfg file.

In the tar command, the ‘-c‘ option specifies creating a tar file, ‘-v’ is used for verbose output and the ‘-f’ option is used to specify the archive file name.

```
[root@linuxtechi ~]# ls -l myarchive.tar
-rw-r--r--. 1 root root 22947840 Sep 7 00:24 myarchive.tar
[root@linuxtechi ~]#
```

### Example: 2 List the contents of a tar archive file

Using the ‘-t‘ option in the tar command, we can view the contents of a tar file without extracting it.

```
[root@linuxtechi ~]# tar -tvf myarchive.tar
```

We can also list a specific file or directory from the tar file. In the example below, I am checking whether the ‘anaconda-ks.cfg’ file is present in the tar file or not.

```
[root@linuxtechi ~]# tar -tvf myarchive.tar root/anaconda-ks.cfg
-rw------- root/root 953 2016-08-24 01:33 root/anaconda-ks.cfg
[root@linuxtechi ~]#
```

### Example: 3 Append or add files to the end of an archive or tar file

The ‘-r‘ option in the tar command is used to append or add a file to an existing tar file. Let’s add the /etc/fstab file to ‘data.tar‘:

```
[root@linuxtechi ~]# tar -rvf data.tar /etc/fstab
```

Note: we can’t append files or directories to a compressed tar file.

### Example: 4 Extract files and directories from a tar file

The ‘-x‘ option is used to extract files and directories from a tar file. Let’s extract the contents of the tar file created above:

```
[root@linuxtechi ~]# tar -xvf myarchive.tar
```

This command will extract all the files and directories of the myarchive tar file into the current working directory.

### Example: 5 Extract a tar file to a particular folder

In case you want to extract a tar file to a particular folder or directory, use the ‘-C‘ option and after it specify the path of the folder:

```
[root@linuxtechi ~]# tar -xvf myarchive.tar -C /tmp/
```

### Example: 6 Extract a particular file or directory from a tar file

Let’s assume you want to extract only the anaconda-ks.cfg file from the tar file, under the /tmp folder.

Syntax:

```
# tar -xvf {tar-file} {file-to-be-extracted} -C {path-where-to-extract}

[root@linuxtechi tmp]# tar -xvf /root/myarchive.tar root/anaconda-ks.cfg -C /tmp/
root/anaconda-ks.cfg
[root@linuxtechi tmp]# ls -l /tmp/root/anaconda-ks.cfg
-rw-------. 1 root root 953 Aug 24 01:33 /tmp/root/anaconda-ks.cfg
[root@linuxtechi tmp]#
```

### Example: 7 Create and compress a tar file (.tar.gz or .tgz)

Let’s assume that we want to create a tar file of the /etc and /opt folders and also compress it using the gzip tool. This can be achieved using the ‘-z‘ option in the tar command. Extensions of such tar files will be either .tar.gz or .tgz.

```
[root@linuxtechi ~]# tar -zcpvf myarchive.tar.gz /etc/ /opt/
```

Or

```
[root@linuxtechi ~]# tar -zcpvf myarchive.tgz /etc/ /opt/
```

### Example: 8 Create and compress a tar file (.tar.bz2 or .tbz2)

Let’s assume that we want to create a compressed (bzip2) tar file of the /etc and /opt folders. This can be achieved by using the ‘-j’ option in the tar command. Extensions of such tar files will be either .tar.bz2 or .tbz2.

```
[root@linuxtechi ~]# tar -jcpvf myarchive.tar.bz2 /etc/ /opt/
```

Or

```
[root@linuxtechi ~]# tar -jcpvf myarchive.tbz2 /etc/ /opt/
```

### Example: 9 Exclude particular files or types while creating a tar file

Using the “--exclude” option in the tar command, we can exclude particular files or file types while creating a tar file. Let’s assume we want to exclude files of the html type while creating the compressed tar file (the pattern is quoted so the shell does not expand it):

```
[root@linuxtechi ~]# tar -zcpvf myarchive.tgz /etc/ /opt/ --exclude='*.html'
```

### Example: 10 List the contents of a .tar.gz or .tgz file

The contents of tar files with the extension .tar.gz or .tgz can be viewed using the ‘-t’ option, as shown below:

```
[root@linuxtechi ~]# tar -tvf myarchive.tgz | more
.............................................
drwxr-xr-x root/root 0 2016-09-07 08:41 etc/
-rw-r--r-- root/root 541 2016-08-24 01:23 etc/fstab
-rw------- root/root 0 2016-08-24 01:23 etc/crypttab
lrwxrwxrwx root/root 0 2016-08-24 01:23 etc/mtab -> /proc/self/mounts
-rw-r--r-- root/root 149 2016-09-07 08:41 etc/resolv.conf
drwxr-xr-x root/root 0 2016-09-06 03:55 etc/pki/
drwxr-xr-x root/root 0 2016-09-06 03:15 etc/pki/rpm-gpg/
-rw-r--r-- root/root 1690 2015-12-09 04:59 etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
-rw-r--r-- root/root 1004 2015-12-09 04:59 etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-Debug-7
-rw-r--r-- root/root 1690 2015-12-09 04:59 etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-Testing-7
-rw-r--r-- root/root 3140 2015-09-15 06:53 etc/pki/rpm-gpg/RPM-GPG-KEY-foreman
..........................................................
```

### Example: 11 List the contents of a .tar.bz2 or .tbz2 file

The contents of tar files with the extension .tar.bz2 or .tbz2 can be viewed using the ‘-t’ option, as shown below:

```
[root@linuxtechi ~]# tar -tvf myarchive.tbz2 | more
........................................................
rwxr-xr-x root/root 0 2016-08-24 01:25 etc/pki/java/
lrwxrwxrwx root/root 0 2016-08-24 01:25 etc/pki/java/cacerts -> /etc/pki/ca-trust/extracted/java/cacerts
drwxr-xr-x root/root 0 2016-09-06 02:54 etc/pki/nssdb/
-rw-r--r-- root/root 65536 2010-01-12 15:09 etc/pki/nssdb/cert8.db
-rw-r--r-- root/root 9216 2016-09-06 02:54 etc/pki/nssdb/cert9.db
-rw-r--r-- root/root 16384 2010-01-12 16:21 etc/pki/nssdb/key3.db
-rw-r--r-- root/root 11264 2016-09-06 02:54 etc/pki/nssdb/key4.db
-rw-r--r-- root/root 451 2015-10-21 09:42 etc/pki/nssdb/pkcs11.txt
-rw-r--r-- root/root 16384 2010-01-12 15:45 etc/pki/nssdb/secmod.db
drwxr-xr-x root/root 0 2016-08-24 01:26 etc/pki/CA/
drwxr-xr-x root/root 0 2015-06-29 08:48 etc/pki/CA/certs/
drwxr-xr-x root/root 0 2015-06-29 08:48 etc/pki/CA/crl/
drwxr-xr-x root/root 0 2015-06-29 08:48 etc/pki/CA/newcerts/
drwx------ root/root 0 2015-06-29 08:48 etc/pki/CA/private/
drwx------ root/root 0 2015-11-20 06:34 etc/pki/rsyslog/
drwxr-xr-x root/root 0 2016-09-06 03:44 etc/pki/pulp/
..............................................................
```

### Example: 12 Extract or unzip .tar.gz or .tgz files

Tar files with the extension .tar.gz or .tgz are extracted or unzipped with the ‘-x’ and ‘-z’ options, as shown below:

```
[root@linuxtechi ~]# tar -zxpvf myarchive.tgz -C /tmp/
```

The above command will extract the tar file under the /tmp folder.

Note: Nowadays the tar command takes care of the compression type automatically while extracting, meaning it is optional for us to specify the compression type in the tar command, as shown below:

```
[root@linuxtechi ~]# tar -xpvf myarchive.tgz -C /tmp/
```

### Example: 13 Extract or unzip .tar.bz2 or .tbz2 files

Tar files with the extension .tar.bz2 or .tbz2 are extracted with the ‘-j’ and ‘-x’ options, as shown below:

```
[root@linuxtechi ~]# tar -jxpvf myarchive.tbz2 -C /tmp/
```

Or

```
[root@linuxtechi ~]# tar xpvf myarchive.tbz2 -C /tmp/
```

### Example: 14 Schedule a backup with the tar command

There are some real-time scenarios where we have to create tar files of particular files and directories for backup purposes on a daily basis. Let’s suppose we have to take a backup of the whole /opt folder every day; this can be achieved by creating a cron job for the tar command. An example is shown below:

```
[root@linuxtechi ~]# tar -zcvf optbackup-$(date +%Y-%m-%d).tgz /opt/
```

Then create a cron job for the above command, for example:
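
A minimal crontab entry for a daily backup might look like the sketch below; note that the percent sign is special in crontab and must be escaped as \%, and /backup is just an assumed destination directory:

```
# Run every day at 02:00; % must be escaped as \% inside crontab.
0 2 * * * tar -zcvf /backup/optbackup-$(date +\%Y-\%m-\%d).tgz /opt/
```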

### Example: 15 Create a compressed archive or tar file with the -T and -X options

There are some real-time scenarios where we want the tar command to take its input from a file, with that file consisting of the paths of the files & directories to be archived and compressed; there might also be some files that we would like to exclude from the archive, and these are mentioned in a second input file.

In the tar command, the input file is specified after the ‘-T’ option, and the file which consists of the exclude list is specified after the ‘-X’ option.

Let’s suppose we want to archive and compress the directories /etc, /opt and /home, and want to exclude the files ‘/etc/sysconfig/kdump’ and ‘/etc/sysconfig/foreman‘. Create the text files ‘/root/tar-include’ and ‘/root/tar-exclude’ and put the following contents in the respective files:

```
[root@linuxtechi ~]# cat /root/tar-include
/etc
/opt
/home
[root@linuxtechi ~]#
[root@linuxtechi ~]# cat /root/tar-exclude
/etc/sysconfig/kdump
/etc/sysconfig/foreman
[root@linuxtechi ~]#
```

Now run the command below to create and compress the archive file:

```
[root@linuxtechi ~]# tar zcpvf mybackup-$(date +%Y-%m-%d).tgz -T /root/tar-include -X /root/tar-exclude
```

### Example: 16 View the size of .tar, .tgz and .tbz2 files

Use the commands below to view the size of tar and compressed tar files:

```
[root@linuxtechi ~]# tar -czf - data.tar | wc -c
427
[root@linuxtechi ~]# tar -czf - mybackup-2016-09-09.tgz | wc -c
37956009
[root@linuxtechi ~]# tar -czf - myarchive.tbz2 | wc -c
30835317
[root@linuxtechi ~]#
```

### Example: 17 Split a big tar file into smaller files

In Unix-like operating systems, big files are divided or split into smaller files using the split command. Big tar files can also be divided into smaller parts using the split command.

Let’s assume we want to split the ‘mybackup-2016-09-09.tgz‘ file into smaller parts of 6 MB each.

```
Syntax : split -b <Size-in-MB> <tar-file-name>.<extension> "prefix-name"
```

```
[root@linuxtechi ~]# split -b 6M mybackup-2016-09-09.tgz mybackup-parts
```

The above command will split the mybackup compressed tar file into smaller files of 6 MB each in the current working directory, and the split file names will start from mybackup-partsaa … mybackup-partsag. In case you want numeric suffixes in place of alphabets, use the ‘-d’ option in the above split command, as sketched after the listing below.

```
[root@linuxtechi ~]# ls -l mybackup-parts*
-rw-r--r--. 1 root root 6291456 Sep 10 03:05 mybackup-partsaa
-rw-r--r--. 1 root root 6291456 Sep 10 03:05 mybackup-partsab
-rw-r--r--. 1 root root 6291456 Sep 10 03:05 mybackup-partsac
-rw-r--r--. 1 root root 6291456 Sep 10 03:05 mybackup-partsad
-rw-r--r--. 1 root root 6291456 Sep 10 03:05 mybackup-partsae
-rw-r--r--. 1 root root 6291456 Sep 10 03:05 mybackup-partsaf
-rw-r--r--. 1 root root 637219 Sep 10 03:05 mybackup-partsag
[root@linuxtechi ~]#
```
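
If you prefer numeric suffixes, the same command with the ‘-d’ option produces mybackup-parts00, mybackup-parts01 and so on:

```
[root@linuxtechi ~]# split -b 6M -d mybackup-2016-09-09.tgz mybackup-parts
```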

Now we can move these files to another server over the network and then merge all of them back into a single compressed tar file using the command below:

```
[root@linuxtechi ~]# cat mybackup-partsa* > mybackup-2016-09-09.tgz
[root@linuxtechi ~]#
```
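
As a quick sanity check, you can list the contents of the reassembled archive; if tar can read it to the end, the merge worked:

```
[root@linuxtechi ~]# tar -tzf mybackup-2016-09-09.tgz > /dev/null && echo "archive OK"
```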

That’s all; we hope you like these different examples of the tar command. Please share your feedback and comments.

--------------------------------------------------------------------------------

via: http://www.linuxtechi.com/17-tar-command-examples-in-linux/

作者:[Pradeep Kumar][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: http://www.linuxtechi.com/author/pradeep/

@ -0,0 +1,125 @@

15 Top Open Source Artificial Intelligence Tools
====

Artificial Intelligence (AI) is one of the hottest areas of technology research. Companies like IBM, Google, Microsoft, Facebook and Amazon are investing heavily in their own R&D, as well as buying up startups that have made progress in areas like machine learning, neural networks, natural language and image processing. Given the level of interest, it should come as no surprise that a recent [artificial intelligence report][1] from experts at Stanford University concluded that "increasingly useful applications of AI, with potentially profound positive impacts on our society and economy are likely to emerge between now and 2030."

In a recent [article][2], we provided an overview of 45 AI projects that seem particularly promising or interesting. In this slideshow, we're focusing in on open source artificial intelligence tools, with a closer look at fifteen of the best-known open source AI projects.

![](http://www.datamation.com/imagesvr_ce/5668/00AI.jpg)

Open Source Artificial Intelligence

These open source AI applications are on the cutting edge of artificial intelligence research.

![](http://www.datamation.com/imagesvr_ce/8922/01Caffe.JPG)

### 1. Caffe

The brainchild of a UC Berkeley PhD candidate, Caffe is a deep learning framework based on expressive architecture and extensible code. Its claim to fame is its speed, which makes it popular with both researchers and enterprise users. According to its website, it can process more than 60 million images in a single day using just one NVIDIA K40 GPU. It is managed by the Berkeley Vision and Learning Center (BVLC), and companies like NVIDIA and Amazon have made grants to support its development.

![](http://www.datamation.com/imagesvr_ce/1232/02CNTK.JPG)

### 2. CNTK

Short for Computational Network Toolkit, CNTK is one of Microsoft's open source artificial intelligence tools. It boasts outstanding performance whether it is running on a system with only CPUs, a single GPU, multiple GPUs or multiple machines with multiple GPUs. Microsoft has primarily utilized it for research into speech recognition, but it is also useful for applications like machine translation, image recognition, image captioning, text processing, language understanding and language modeling.

![](http://www.datamation.com/imagesvr_ce/2901/03Deeplearning4j.JPG)

### 3. Deeplearning4j

Deeplearning4j is an open source deep learning library for the Java Virtual Machine (JVM). It runs in distributed environments and integrates with both Hadoop and Apache Spark. It makes it possible to configure deep neural networks, and it's compatible with Java, Scala and other JVM languages.

The project is managed by a commercial company called Skymind, which offers paid support, training and an enterprise distribution of Deeplearning4j.

![](http://www.datamation.com/imagesvr_ce/7269/04DMLT.JPG)

### 4. Distributed Machine Learning Toolkit

Like CNTK, the Distributed Machine Learning Toolkit (DMTK) is one of Microsoft's open source artificial intelligence tools. Designed for use in big data applications, it aims to make it faster to train AI systems. It consists of three key components: the DMTK framework, the LightLDA topic model algorithm, and the Distributed (Multisense) Word Embedding algorithm. As proof of DMTK's speed, Microsoft says that on an eight-cluster machine, it can "train a topic model with 1 million topics and a 10-million-word vocabulary (for a total of 10 trillion parameters), on a document collection with over 100-billion tokens," a feat that is unparalleled by other tools.

![](http://www.datamation.com/imagesvr_ce/2890/05H2O.JPG)

### 5. H2O

Focused more on enterprise uses for AI than on research, H2O has large companies like Capital One, Cisco, Nielsen Catalina, PayPal and Transamerica among its users. It claims to make it possible for anyone to use the power of machine learning and predictive analytics to solve business problems. It can be used for predictive modeling, risk and fraud analysis, insurance analytics, advertising technology, healthcare and customer intelligence.

It comes in two open source versions: standard H2O and Sparkling Water, which is integrated with Apache Spark. Paid enterprise support is also available.

![](http://www.datamation.com/imagesvr_ce/1127/06Mahout.JPG)

### 6. Mahout

An Apache Foundation project, Mahout is an open source machine learning framework. According to its website, it offers three major features: a programming environment for building scalable algorithms, premade algorithms for tools like Spark and H2O, and a vector-math experimentation environment called Samsara. Companies using Mahout include Adobe, Accenture, Foursquare, Intel, LinkedIn, Twitter, Yahoo and many others. Professional support is available through third parties listed on the website.

![](http://www.datamation.com/imagesvr_ce/4038/07MLlib.JPG)

### 7. MLlib

Known for its speed, Apache Spark has become one of the most popular tools for big data processing. MLlib is Spark's scalable machine learning library. It integrates with Hadoop and interoperates with both NumPy and R. It includes a host of machine learning algorithms for classification, regression, decision trees, recommendation, clustering, topic modeling, feature transformations, model evaluation, ML pipeline construction, ML persistence, survival analysis, frequent itemset and sequential pattern mining, distributed linear algebra and statistics.

![](http://www.datamation.com/imagesvr_ce/839/08NuPIC.JPG)

### 8. NuPIC

Managed by a company called Numenta, NuPIC is an open source artificial intelligence project based on a theory called Hierarchical Temporal Memory, or HTM. Essentially, HTM is an attempt to create a computer system modeled after the human neocortex. The goal is to create machines that "approach or exceed human level performance for many cognitive tasks."

In addition to the open source license, Numenta also offers NuPIC under a commercial license, and it also offers licenses on the patents that underlie the technology.

![](http://www.datamation.com/imagesvr_ce/99/09OpenNN.JPG)

### 9. OpenNN

Designed for researchers and developers with an advanced understanding of artificial intelligence, OpenNN is a C++ programming library for implementing neural networks. Its key features include deep architectures and fast performance. Extensive documentation is available on the website, including an introductory tutorial that explains the basics of neural networks. Paid support for OpenNN is available through Artelnics, a Spain-based firm that specializes in predictive analytics.

![](http://www.datamation.com/imagesvr_ce/4168/10OpenCyc.JPG)

### 10. OpenCyc

Developed by a company called Cycorp, OpenCyc provides access to the Cyc knowledge base and commonsense reasoning engine. It includes more than 239,000 terms, about 2,093,000 triples, and about 69,000 owl:sameAs links to external semantic data namespaces. It is useful for rich domain modeling, semantic data integration, text understanding, domain-specific expert systems and game AIs. The company also offers two other versions of Cyc: one for researchers that is free but not open source and one for enterprise use that requires a fee.

![](http://www.datamation.com/imagesvr_ce/9761/11Oryx2.JPG)

### 11. Oryx 2

Built on top of Apache Spark and Kafka, Oryx 2 is a specialized application development framework for large-scale machine learning. It utilizes a unique lambda architecture with three tiers. Developers can use Oryx 2 to create new applications, and it also includes some pre-built applications for common big data tasks like collaborative filtering, classification, regression and clustering. The big data tool vendor Cloudera created the original Oryx 1 project and has been heavily involved in continuing development.

![](http://www.datamation.com/imagesvr_ce/7423/12.%20PredictionIO.JPG)

### 12. PredictionIO

In February this year, Salesforce bought PredictionIO, and then in July, it contributed the platform and its trademark to the Apache Foundation, which accepted it as an incubator project. So while Salesforce is using PredictionIO technology to advance its own machine learning capabilities, work will also continue on the open source version. It helps users create predictive engines with machine learning capabilities that can be used to deploy Web services that respond to dynamic queries in real time.

![](http://www.datamation.com/imagesvr_ce/6886/13SystemML.JPG)

### 13. SystemML

First developed by IBM, SystemML is now an Apache big data project. It offers a highly scalable platform that can implement high-level math and algorithms written in R or a Python-like syntax. Enterprises are already using it to track customer service on auto repairs, to direct airport traffic and to link social media data with banking customers. It can run on top of Spark or Hadoop.

![](http://www.datamation.com/imagesvr_ce/5742/14TensorFlow.JPG)

### 14. TensorFlow

TensorFlow is one of Google's open source artificial intelligence tools. It offers a library for numerical computation using data flow graphs. It can run on a wide variety of different systems with single- or multi-CPUs and GPUs and even runs on mobile devices. It boasts deep flexibility, true portability, automatic differentiation capabilities and support for Python and C++. The website includes a very extensive list of tutorials and how-tos for developers or researchers interested in using or extending its capabilities.

![](http://www.datamation.com/imagesvr_ce/9018/15Torch.JPG)

### 15. Torch

Torch describes itself as "a scientific computing framework with wide support for machine learning algorithms that puts GPUs first." The emphasis here is on flexibility and speed. In addition, it's fairly easy to use with packages for machine learning, computer vision, signal processing, parallel processing, image, video, audio and networking. It relies on a scripting language called LuaJIT that is based on Lua.

--------------------------------------------------------------------------------

via: http://www.datamation.com/open-source/slideshows/15-top-open-source-artificial-intelligence-tools.html

作者:[Cynthia Harvey][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: http://www.datamation.com/author/Cynthia-Harvey-6460.html
[1]: https://ai100.stanford.edu/sites/default/files/ai_100_report_0906fnlc_single.pdf
[2]: http://www.datamation.com/applications/artificial-intelligence-software-45-ai-projects-to-watch-1.html

@ -0,0 +1,74 @@

8 best practices for building containerized applications
====

![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/containers_2015-2-osdc-lead.png?itok=0yid3gFY)

Containers are a major trend in deploying applications in both public and private clouds. But what exactly are containers, why have they become a popular deployment mechanism, and how will you need to modify your application to optimize it for a containerized environment?

### What are containers?

The technology behind containers has a long history beginning with SELinux in 2000 and Solaris zones in 2005. Today, containers are a combination of several kernel features including SELinux, Linux namespaces, and control groups, providing isolation of end user processes, networking, and filesystem space.

### Why are they so popular?

The recent widespread adoption of containers is largely due to the development of standards aimed at making them easier to use, such as the Docker image format and distribution model. This standard calls for immutable images, which are the launching point for a container runtime. Immutable images guarantee that the same image the development team releases is what gets tested and deployed into the production environment.

The lightweight isolation that containers provide creates a better abstraction for an application component. Components running in containers won't interfere with each other the way they might running directly on a virtual machine. They can be prevented from starving each other of system resources, and unless they are sharing a persistent volume won't block attempting to write to the same files. Containers have helped to standardize practices like logging and metric collection, and they allow for increased multi-tenant density on physical and virtual machines, all of which leads to lower deployment costs.

### How do you build a container-ready application?

Changing your application to run inside of a container isn't necessarily a requirement. The major Linux distributions have base images that can run anything that runs on a virtual machine. But the general trend in containerized applications is following a few best practices:

- 1. Instances are disposable

Any given instance of your application shouldn't need to be carefully kept running. If one system running a bunch of containers goes down, you want to be able to spin up new containers spread out across other available systems.

- 2. Retry instead of crashing

When one service in your application depends on another service, it should not crash when the other service is unreachable. For example, your API service is starting up and detects the database is unreachable. Instead of failing and refusing to start, you design it to retry the connection. While the database connection is down, the API can respond with a 503 status code, telling the clients that the service is currently unavailable. This practice should already be followed by applications, but if you are working in a containerized environment where instances are disposable, then the need for it becomes more obvious.
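
A minimal sketch of this retry idea, written as a container entrypoint script; it assumes a PostgreSQL backend reachable via the DB_HOST environment variable and a hypothetical api-server binary:

```
#!/bin/bash
# Keep retrying until the database answers, instead of crashing at startup.
until pg_isready -h "$DB_HOST" -p "${DB_PORT:-5432}" >/dev/null 2>&1; do
    echo "database unreachable, retrying in 5s..." >&2
    sleep 5
done

# Replace the shell with the real service once the dependency is up.
exec /usr/local/bin/api-server
```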

- 3. Persistent data is special

Containers are launched based on shared images using a copy-on-write (COW) filesystem. If the processes the container is running choose to write out to files, then those writes will only exist as long as the container exists. When the container is deleted, that layer in the COW filesystem is deleted. Giving a container a mounted filesystem path that will persist beyond the life of the container requires extra configuration, and extra cost for the physical storage. Clearly defining the abstraction for what storage is persisted promotes the idea that instances are disposable. Having the abstraction layer also allows a container orchestration engine to handle the intricacies of mounting and unmounting persistent volumes to the containers that need them.

- 4. Use stdout not log files

You may now be thinking, if persistent data is special, then what do I do with log files? The approach the container runtime and orchestration projects have taken is that processes should instead [write to stdout/stderr][1], and have infrastructure for archiving and maintaining [container logs][2].

- 5. Secrets (and other configurations) are special too

You should never hard-code secret data like passwords, keys, and certificates into your images. Secrets are typically not the same when your application is talking to a development service, a test service, or a production service. Most developers do not have access to production secrets, so if secrets are baked into the image then a new image layer will have to be created to override the development secrets. At this point, you are no longer using the same image that was created by your development team and tested by quality engineering (QE), and have lost the benefit of immutable images. Instead, these values should be abstracted away into environment variables or files that are injected at container startup.
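
For instance, with Docker you can inject configuration from an environment file at container startup instead of baking it into the image; the file name and image tag here are only illustrative:

```
# prod.env holds KEY=value pairs and stays out of the image and out of version control.
docker run --env-file ./prod.env myapp:1.0
```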

- 6. Don't assume co-location of services

In an orchestrated container environment you want to allow the orchestrator to send your containers to whatever node is currently the best fit. Best fit could mean a number of things: it could be based on whichever node has the most space right now, the quality of service the container is requesting, whether the container requires persistent volumes, etc. This could easily mean your frontend, API, and database containers all end up on different nodes. While it is possible to force an API container to each node (see [DaemonSets][3] in Kubernetes), this should be reserved for containers that perform tasks like monitoring the nodes themselves.

- 7. Plan for redundancy / high availability

Even if you don't have enough load to require an HA setup, you shouldn't write your service in a way that prevents you from running multiple copies of it. This will allow you to use rolling deployments, which make it easy to move load off one node and onto another, or to upgrade from one version of a service to the next without taking any downtime.

- 8. Implement readiness and liveness checks

It is common for applications to have startup time before they are able to respond to requests, for example, an API server that needs to populate in-memory data caches. Container orchestration engines need a way to check that your container is ready to serve requests. Providing a readiness check for new containers allows a rolling deployment to keep an old container running until it is no longer needed, preventing downtime. Similarly, a liveness check is a way for the orchestration engine to continue to check that the container is in a healthy state. It is up to the application creator to decide what it means for their container to be healthy, or "live". A container that is no longer live will be killed, and a new container created in its place.
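
In practice, a readiness or liveness check often boils down to an HTTP request that the orchestration engine makes against a health endpoint; the port and path below are purely illustrative:

```
# Exit status 0 means healthy; a non-zero status (curl -f) means not ready or not live.
curl -fsS http://localhost:8080/healthz >/dev/null
```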

### Want to find out more?

I'll be at the Grace Hopper Celebration of Women in Computing in October, come check out my talk: [Containerization of Applications: What, Why, and How][4]. Not headed to GHC this year? Then read on about containers, orchestration, and applications on the [OpenShift][5] and [Kubernetes][6] project sites.

--------------------------------------------------------------------------------

via: https://opensource.com/life/16/9/8-best-practices-building-containerized-applications

作者:[Jessica Forrester][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jwforres
[1]: https://docs.docker.com/engine/reference/commandline/logs/
[2]: http://kubernetes.io/docs/getting-started-guides/logging/
[3]: http://kubernetes.io/docs/admin/daemons/
[4]: https://www.eiseverywhere.com/ehome/index.php?eventid=153076&tabid=351462&cid=1350690&sessionid=11443135&sessionchoice=1&
[5]: https://www.openshift.org/
[6]: http://kubernetes.io/

@ -0,0 +1,257 @@

Content Security Policy, Your Future Best Friend
=====

A long time ago, my personal website was attacked. I do not know how it happened, but it happened. Fortunately, the damage from the attack was quite minor: A piece of JavaScript was inserted at the bottom of some pages. I updated the FTP and other credentials, cleaned up some files, and that was that.

One point made me mad: At the time, there was no simple solution that could have informed me there was a problem and — more importantly — that could have protected the website’s visitors from this annoying piece of code.

A solution exists now, and it is a technology that succeeds in both roles. Its name is content security policy (CSP).

### What Is A CSP?

The idea is quite simple: By sending a CSP header from a website, you are telling the browser what it is authorized to execute and what it is authorized to block.

Here is an example with PHP:

```
<?php
header("Content-Security-Policy: <your directives>");
?>
```
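
Once deployed, you can quickly confirm that the header is actually being sent by inspecting a response from the command line (example.com stands in for your own site):

```
$ curl -sI https://example.com | grep -i content-security-policy
```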

#### SOME DIRECTIVES

You may define global rules or define rules related to a type of asset:

```
default-src 'self' ;
# self = same port, same domain name, same protocol => OK
```

The base argument is default-src: If no directive is defined for a type of asset, then the browser will use this value.

```
script-src 'self' www.google-analytics.com ;
# JS files on these domains => OK
```

In this example, we’ve authorized the domain name www.google-analytics.com as a source of JavaScript files to use on our website. We’ve added the keyword 'self'; if we redefined the directive script-src with another rule, it would override default-src rules.

If no scheme or port is specified, then it enforces the same scheme or port from the current page. This prevents mixed content. If the page is https://example.com, then you wouldn’t be able to load http://www.google-analytics.com/file.js because it would be blocked (the scheme wouldn’t match). However, there is an exception to allow a scheme upgrade. If http://example.com tries to load https://www.google-analytics.com/file.js, then the scheme or port would be allowed to change to facilitate the scheme upgrade.

```
style-src 'self' data: ;
# Data-Uri in a CSS => OK
```

In this example, the keyword data: authorizes embedded content in CSS files.

Under the CSP level 1 specification, you may also define rules for the following:

- `img-src`: valid sources of images
- `connect-src`: applies to XMLHttpRequest (AJAX), WebSocket or EventSource
- `font-src`: valid sources of fonts
- `object-src`: valid sources of plugins (for example, `<object>, <embed>, <applet>`)
- `media-src`: valid sources of `<audio>` and `<video>`

CSP level 2 rules include the following:

- `child-src`: valid sources of web workers and elements such as `<frame>` and `<iframe>` (this replaces the deprecated frame-src from CSP level 1)
- `form-action`: valid sources that can be used as an HTML `<form>` action
- `frame-ancestors`: valid sources for embedding the resource using `<frame>, <iframe>, <object>, <embed> or <applet>`
- `upgrade-insecure-requests`: instructs user agents to rewrite URL schemes, changing HTTP to HTTPS (for websites with a lot of old URLs that need to be rewritten)

For better backwards-compatibility with deprecated properties, you may simply copy the contents of the actual directive and duplicate them in the deprecated one. For example, you may copy the contents of child-src and duplicate them in frame-src.

CSP 2 allows you to whitelist paths (CSP 1 allows only domains to be whitelisted). So, rather than whitelisting all of www.foo.com, you could whitelist www.foo.com/some/folder to restrict it further. This does require CSP 2 support in the browser, but it is obviously more secure.

#### AN EXAMPLE

I made a simple example for the Paris Web 2015 conference, where I presented a talk entitled “[CSP in Action][1].”

Without CSP, the page would look like this:

![](https://www.smashingmagazine.com/wp-content/uploads/2016/09/csp_smashing1b-500.jpg)

Not very nice. What if we enabled the following CSP directives?

```
<?php
header("Content-Security-Policy:
default-src 'self' ;
script-src 'self' www.google-analytics.com stats.g.doubleclick.net ;
style-src 'self' data: ;
img-src 'self' www.google-analytics.com stats.g.doubleclick.net data: ;
frame-src 'self' ;");
?>
```

What would the browser do? It would (very strictly) apply these directives under the primary rule of CSP, which is that anything not authorized in a CSP directive will be blocked (“blocked” meaning not executed, not displayed and not used by the website).

By default in CSP, inline scripts and styles are not authorized, which means that every `<script>`, onclick or style attribute will be blocked. You could authorize inline CSS with style-src 'unsafe-inline' ;.

In a modern browser with CSP support, the example would look like this:

![](https://www.smashingmagazine.com/wp-content/uploads/2016/09/csp_smashing5-500.jpg)

What happened? The browser applied the directives and rejected anything that was not authorized. It sent these notifications to the console:

![](https://www.smashingmagazine.com/wp-content/uploads/2016/09/csp_smashing2-500.jpg)

If you’re still not convinced of the value of CSP, have a look at Aaron Gustafson’s article “[More Proof We Don’t Control Our Web Pages][2].”

Of course, you may use stricter directives than the ones in the example provided above:

- set default-src to 'none',
- specify what you need for each rule,
- specify the exact paths of required files,
- etc.

### More Information On CSP

#### SUPPORT

CSP is not a nightly feature requiring three flags to be activated in order for it to work. CSP levels 1 and 2 are candidate recommendations! [Browser support for CSP level 1][3] is excellent.

![](https://www.smashingmagazine.com/wp-content/uploads/2016/09/csp_smashing3-500.jpg)

The [level 2 specification][4] is more recent, so it is a bit less supported.

![](https://www.smashingmagazine.com/wp-content/uploads/2016/09/csp_smashing4-500.jpg)

CSP level 3 is an early draft now, so it is not yet supported, but you can already do great things with levels 1 and 2.

#### OTHER CONSIDERATIONS

CSP has been designed to reduce cross-site scripting (XSS) risks, which is why enabling inline scripts in script-src directives is not recommended. Firefox illustrates this issue very nicely: In the browser, hit Shift + F2 and type security csp, and it will show you directives and advice. For example, here it is used on Twitter’s website:

![](https://www.smashingmagazine.com/wp-content/uploads/2016/09/csp_smashing6b-500.jpg)

Another possibility for inline scripts or inline styles, if you really have to use them, is to create a hash value. For example, suppose you need to have this inline script:

```
<script>alert('Hello, world.');</script>
```

You might add 'sha256-qznLcsROx4GACP2dm0UCKCzCG-HiZ1guq6ZZDob_Tng=' as a valid source in your script-src directives. The hash generated is the result of this in PHP:

```
<?php
echo base64_encode(hash('sha256', "alert('Hello, world.');", true));
?>
```
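
Equivalently, the same hash can be generated from the command line with openssl; note that only the script's content, without the `<script>` tags, is hashed:

```
# Base64-encoded SHA-256 digest of the inline script's content.
echo -n "alert('Hello, world.');" | openssl dgst -sha256 -binary | openssl base64
```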

I said earlier that CSP is designed to reduce XSS risks — I could have added, “… and reduce the risks of unsolicited content.” With CSP, you have to know where your sources of content are and what they are doing on your front end (inline styles, etc.). CSP can also help you force contributors, developers and others to respect your rules about sources of content!

Now your question is, “OK, this is great, but how do we use it in a production environment?”

### How To Use It In The Real World

The easiest way to get discouraged with using CSP the first time is to test it in a live environment, thinking, “This will be easy. My code is badass and perfectly clean.” Don’t do this. I did it. It’s stupid, trust me.

As I explained, CSP directives are activated with a CSP header — there is no middle ground. You are the weak link here. You might forget to authorize something or forget a piece of code on your website. CSP will not forgive your oversight. However, two features of CSP greatly simplify this problem.

#### REPORT-URI

Remember the notifications that CSP sends to the console? The directive report-uri can be used to tell the browser to send them to the specified address. Reports are sent in JSON format.

```
report-uri /csp-parser.php;
```
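
A report looks something like this (the wrapper object and key names come from the CSP spec; the values here are invented for illustration):

```
{
    "csp-report": {
        "document-uri": "https://example.com/page.html",
        "referrer": "",
        "violated-directive": "script-src 'self'",
        "original-policy": "default-src 'self'; report-uri /csp-parser.php",
        "blocked-uri": "https://evil.example.net/pixel.js"
    }
}
```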

So, in the csp-parser.php file, we can process the data sent by the browser. Here is the most basic example in PHP:

```
<?php
// Read the raw JSON report POSTed by the browser
$data = file_get_contents('php://input');

// json_decode() returns null on failure, so the assignment doubles as a check
if ($data = json_decode($data, true)) {
    // Re-encode the report in a readable form
    $data = json_encode(
        $data,
        JSON_PRETTY_PRINT | JSON_UNESCAPED_SLASHES
    );
    // EMAIL and SUBJECT are constants for you to define
    mail(EMAIL, SUBJECT, $data);
}
```

This notification will be transformed into an email. During development, you might not need anything more complex than this.

For a production environment (or a more visited development environment), you’d be better off collecting reports some way other than email, because there is no auth or rate limiting on the endpoint, and CSP can be very noisy. Just imagine a page that generates 100 CSP notifications (for example, a script that displays images from an unauthorized source) and that is viewed 100 times a day — you could get 10,000 notifications a day!

A service such as report-uri.io can be used to simplify the management of reporting. You can see other simple examples for report-uri (with a database, with some optimizations, etc.) on GitHub.

#### REPORT-ONLY

As we have seen, the biggest issue is that there is no middle ground between CSP being enabled and disabled. However, a feature named report-only sends a slightly different header:

```
<?php
header("Content-Security-Policy-Report-Only: <your directives>");
?>
```

Basically, this tells the browser, “Act as if these CSP directives were being applied, but do not block anything. Just send me the notifications.” It is a great way to test directives without the risk of blocking any required assets.

With report-only and report-uri, you can test CSP directives with no risk, and you can monitor in real time everything CSP-related on a website. These two features are really powerful for deploying and maintaining CSP!
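
In practice, the deployment path that follows from this is to ship your directives in report-only mode first, watch the reports, then switch to the enforcing header once they are quiet. A sketch, with placeholder directives:

```
# Testing phase: nothing is blocked, violations are reported
Content-Security-Policy-Report-Only: default-src 'self'; report-uri /csp-parser.php;

# Enforcing phase: the same directives, now actually applied
Content-Security-Policy: default-src 'self'; report-uri /csp-parser.php;
```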

### Conclusion

#### WHY CSP IS COOL

CSP is most important for your users: They don’t have to suffer any unsolicited scripts or content or XSS vulnerabilities on your website.

The most important advantage of CSP for website maintainers is awareness. If you’ve set strict rules for image sources, and a script kiddie attempts to insert an image on your website from an unauthorized source, that image will be blocked, and you will be notified instantly.

Developers, meanwhile, need to know exactly what their front-end code does, and CSP helps them master that. They will be prompted to refactor parts of their code (avoiding inline functions and styles, etc.) and to follow best practices.

#### HOW CSP COULD BE EVEN COOLER

Ironically, CSP is too efficient in some browsers — it creates bugs with bookmarklets. So, do not update your CSP directives to allow bookmarklets. We can’t blame any one browser in particular; all of them have issues:

- Firefox
- Chrome (Blink)
- WebKit

Most of the time, the bugs are false positives in blocked notifications. All browser vendors are working on these issues, so we can expect fixes soon. Anyway, this should not stop you from using CSP.

--------------------------------------------------------------------------------

via: https://www.smashingmagazine.com/2016/09/content-security-policy-your-future-best-friend/?utm_source=webopsweekly&utm_medium=email

作者:[Nicolas Hoffmann][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.smashingmagazine.com/author/nicolashoffmann/
[1]: https://rocssti.net/en/example-csp-paris-web2015
[2]: https://www.aaron-gustafson.com/notebook/more-proof-we-dont-control-our-web-pages/
[3]: http://caniuse.com/#feat=contentsecuritypolicy
[4]: http://caniuse.com/#feat=contentsecuritypolicy2
@ -0,0 +1,44 @@
Five Linux Server Distros Worth Checking Out
====

>Pretty much any of the nearly 300 Linux distributions you'll find listed on Distrowatch can be made to work as servers. Here are those that stand out above the rest.

![](http://windowsitpro.com/site-files/windowsitpro.com/files/imagecache/large_img/uploads/2016/09/cloudservers.jpg)

Pretty much any of the nearly 300 Linux distributions you'll find listed on Distrowatch can be made to work as servers. Since Linux's earliest days, users have been provisioning "all purpose" distributions such as Slackware, Debian and Gentoo to do heavy lifting as servers for home and business. That may be fine for the hobbyist, but it's a lot of unnecessary work for the professional.

From the beginning, however, there have been distributions with no other purpose but to serve files and applications, help workstations share common peripherals, serve up web pages and all the other things we ask servers to do, whether in the cloud, in a data center or on a shelf in a utility closet.

Here's a look at four of the most used Linux server distros, as well as one distro that might fit the bill for smaller businesses.

**Red Hat Enterprise Linux**: Perhaps the best known of Linux server distros, RHEL has a reputation for being a solid distribution ready for the most demanding mission-critical tasks -- like running the New York Stock Exchange, for instance. It's also backed by Red Hat's best-of-breed support.

The downside? While Red Hat is known for offering customer service and support that's second to none, its support subscriptions aren't cheap. Some might point out, however, that you get what you pay for. Cheaper third-party support for RHEL is available, but you might want to do some research before going that route.

**CentOS**: Anyone who likes RHEL but would like to avoid shoveling money to Red Hat for support should take a look at CentOS, which is basically an RHEL fork. Although it's been around since 2004, in 2014 it became officially sponsored by Red Hat, which now employs most of the project's developers. This means that security patches and bug fixes are made available to CentOS soon after they're pushed to Red Hat.

If you're going to deploy CentOS, you'll need people with Linux skills on staff, because as far as technical support goes, you're mainly on your own. The good news is that the CentOS community offers excellent resources, such as mailing lists, web forums, and chat rooms, so help is available to those who search.

**Ubuntu Server**: When Canonical announced many years back that it was coming out with a server edition of Ubuntu, you could hear the snickers. Laughter turned into amazement rather quickly, however, as Ubuntu Server rapidly took hold. This was partly due to the DNA it shares as a derivative of Debian, which has long been a favorite base for Linux servers. Ubuntu filled a gap by adding affordable technical support, superior hardware support, developer tools and lots of polish.

How popular is Ubuntu Server? Recent figures show it being the most deployed operating system both on OpenStack and on the Amazon Elastic Compute Cloud, where it outpaces the second-place Amazon Linux AMI (Amazon Machine Image) by a mile and leaves third-place Windows in the virtual dust. Another study shows it as the most used Linux web server.

**SUSE Linux Enterprise Server**: This German distro has a large base of users in Europe, and was a top server distro on this side of the Atlantic until PR issues arose after it was bought by Novell in the early part of the century. With those days long behind it, SUSE has been gaining ground in the US, and its use will probably accelerate now that HPE is naming it as its preferred Linux partner.

SUSE Linux Enterprise Server, or SLES, is stable and easy to maintain, which you'd expect from a distro that's been around for nearly as long as Linux itself. Affordable 24/7 "rapid-response" technical support is available, making it suitable for mission-critical deployments.

**ClearOS**: Based on RHEL, ClearOS is included here because it's simple enough for anyone, even most non-techies, to configure. Targeted at small to medium-sized businesses, it can also be used as an entertainment server by home users. It uses a web-based administration interface for ease of use, and it's built on the premise that "building your IT infrastructure should be as simple as downloading apps on your smart phone."

The latest release, version 7.2, includes capabilities that might not be expected from a "lightweight" offering, such as VM support that includes Microsoft Hyper-V, support for the XFS and BTRFS file systems, and support for LVM caching and IPv6. It's available in a free version or in an inexpensive "professional" version that comes with a variety of support options.

--------------------------------------------------------------------------------

via: http://windowsitpro.com/industry/five-linux-server-distros-worth-checking-out

作者:[Christine Hall][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: http://windowsitpro.com/industry/five-linux-server-distros-worth-checking-out
@ -0,0 +1,50 @@
4 big ways companies benefit from having open source program offices
====

![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/BUSINESS_creativity.png?itok=x2HTRKVW)

In the first article in my series on open source program offices, I took a deep dive into [what an open source program office is and why your company might need one][1]. Next I looked at [how Google created a new kind of open source program office][2]. In this article, I'll explain a few benefits of having an open source program office.

At first glance, one big reason why a company not in the business of software development might more enthusiastically embrace an open source program office is because they have less to lose. After all, they're not gambling with software products that are directly tied to revenue. Facebook, for example, can easily unleash a distributed key-value datastore as an open source project because they don't sell a product called "enterprise key-value datastore." That answers the question of risk, but it still doesn't answer the question of what they gain from contributing to the open source ecosystem. Let's look at a few potential reasons and then tackle each. You'll notice a lot of overlap with vendor open source program offices, but some of the motivations are slightly different.

### Recruiting

Recruiting is perhaps the easiest way to sell an open source program office to upper management. Show them the costs associated with recruiting, as well as the return on investment, and then explain how developing relationships with talented engineers results in a pipeline of talented developers who are actually familiar with your technology and excited to help work on it. We don't really need to go into more depth here -- it's self-explanatory, right?

### Technology influence

Once upon a time, companies that didn't specialize in selling software were powerless to influence development cycles of their software vendors directly, especially if they were not a large customer. Open source completely changed that dynamic and brought users onto a more level playing field with vendors. With the rise of open source development, anyone could push technology into a chosen direction, assuming they were willing to invest the time and resources. But these companies learned that simply investing in developer time, although fruitful, could be even more useful if tied to an overarching strategic effort. Think bug fixes vs. software architects: lots of companies push bug fixes to upstream open source projects, but some of these same companies began to learn that coordinating a sustained effort with a deeper commitment paid off with faster feature development, which could be good for business. With the open source program office model, companies have staffers who can sniff out strategically important open source communities in which they then invest developer resources.

With rapidly growing companies such as Google and Facebook, providing leadership in existing open source projects still proved insufficient for their expanding businesses. Facing the challenges of intense growth and building out hyperscale systems, many of the largest companies had built highly customized stacks of software for internal use only. What if they could convince others to collaborate on some of these infrastructure projects? Thus, while they maintained investments in areas such as the Linux kernel, Apache, and other existing projects, they also began to release their own large projects. Facebook released Cassandra, Twitter created Mesos, and eventually Google created the Kubernetes project. These projects have become major platforms for industry innovation, proving to be spectacular successes for the companies involved. (Note that Facebook stopped using Cassandra internally after it needed to create a new software project to solve the problem at a larger scale. However, by that time Cassandra had already become popular and DataStax had formed to take on development.) Each of these projects has spawned an entire ecosystem of developers, related projects, and end users that serves to accelerate growth and development.

This would not have been possible without coordination between an open source program office and strategic company initiatives. Without that effort, each of the companies mentioned would still be trying to solve these problems individually -- and more slowly. Not only have these projects helped solve business problems internally, they also helped establish the companies that created them as industry heavyweights. Sure, Google has been an industry titan for a few years now, but the growth of Kubernetes ensures both better software and a direct say in the future direction of container technologies, even more than it already had. These companies are still known for their hyperscale infrastructure and for simply being large Silicon Valley stalwarts. Lesser known, but possibly even more important, is their new relevance as technology producers. Open source program offices guide these efforts and maximize their impact, through technology recommendations and relationships with influential developers, not to mention deep expertise in community governance and people management.

### Marketing power

Going hand in hand with technology influence is how each company talks about its open source efforts. By honing the messages around these projects and communities, an open source program office is able to deliver maximum impact through targeted marketing campaigns. Marketing has long been a dirty word in open source circles, because everyone has had a bad experience with corporate marketing. In the case of open source communities, marketing takes on a vastly different form from a traditional approach and involves amplifying what is already happening in the communities of strategic importance. Thus, an open source program office probably won't create whiz-bang slides about a project that hasn't even released any code yet, but it will talk about the software it has created and other initiatives it has participated in. Basically, no vaporware here.

Think of the first efforts made by Google's open source program office. They didn't simply contribute code to the Linux kernel and other projects -- they talked about it a lot, often in keynotes at open source conferences. They didn't just give money to students who write open source code -- they created a global program, the Google Summer of Code, that became a cultural touchstone of open source development. This marketing effort cemented Google's status as a major open source developer long before Kubernetes was even developed. As a result, Google wielded major influence during the creation of the GPLv3 license, and company speakers and open source program office representatives became staples at tech events. The open source program office is the entity best situated to coordinate these efforts and deliver real value for the parent company.

### Improve internal processes

Improving internal processes may not sound like a big benefit, but overcoming chaotic internal processes is a challenge for every open source program office, whether software vendor or company-driven. Whereas a software vendor must make sure that their processes don't step on products they release (for example, unintentionally open sourcing proprietary software), a user is more concerned with infringement of intellectual property (IP) law: patents, copyrights, and trademarks. No one wants to get sued simply for releasing software. Without an active open source program office to manage and coordinate licensing and other legal questions, large companies face great difficulty in arriving at a consensus around open source processes and governance. Why is this important? If different groups release software under incompatible licenses, not only will this prove to be an embarrassment, it also will provide a significant obstacle to achieving one of the most basic goals -- improved collaboration.

Combined with the fact that many of these companies are still growing incredibly quickly, an inability to establish basic rules around process will prove unwieldy sooner than expected. I've seen large spreadsheets with a matrix of approved and unapproved licenses as well as guidelines for how to (and how not to) create open source communities while complying with legal restrictions. The key is to have something that developers can refer to when they need to make decisions, without incurring the legal overhead of a massive, work-slowing IP review every time a developer wants to contribute to an open source community.

Having an active open source program office that maintains rules over license compliance and source contribution, as well as establishing training programs for engineers, helps to avoid potential legal pitfalls and costly lawsuits. After all, what good is better collaboration on open source projects if the company loses real money because someone didn't read the license? The good news is that companies have less to worry about with respect to proprietary IP when compared to software vendors. The bad news is that their legal questions are no less complex, especially when they run directly into the legal headwinds of a software vendor.

How has your organization benefited from having an open source program office? Let me know about it in the comments.

--------------------------------------------------------------------------------

via: https://opensource.com/business/16/9/4-big-ways-companies-benefit-having-open-source-program-offices

作者:[John Mark Walker][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/johnmark
[1]: https://opensource.com/business/16/5/whats-open-source-program-office
[2]: https://opensource.com/business/16/8/google-open-source-program-office
@ -0,0 +1,58 @@
How to Use Markdown in WordPress to Improve Workflow
====

![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/09/markdown-wordpress-featured-2.jpg)

Markdown is a simple markup language that helps you format your plain text documents with minimal effort. You may be used to formatting your articles using HTML or the Visual Editor in WordPress, but using markdown makes formatting a lot easier, and you can always export it to several formats including (but not limited to) HTML.

WordPress does not come with native markdown support, but there are plugins that can add this functionality to your website if you so desire.

In this tutorial I will demonstrate how to use the popular WP-Markdown plugin to add markdown support to a WordPress website.

### Installation

You can install this plugin directly by navigating to “Plugins -> Add New” and entering “[wp-markdown][1]” in the search box provided. The plugin should appear as the first option on the list. Click “Install Now” to install it.

![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/08/markdown-wordpress-install-plugin-1.png)

### Configuration

Once you have installed the plugin and activated it, navigate to “Settings -> Writing” in the menu and scroll down until you get to the markdown section.

You can enable markdown support in posts, pages and comments. You can also enable a help bar for your post editor or comments, which could be handy if you’re just learning the markdown syntax.

![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/09/markdown-wordpress-configuration.png)

If you include code snippets in your blog posts, enabling the “Prettify syntax highlighter” option will automatically provide syntax highlighting for your code snippets.

Once you are satisfied with your selections, click “Save Changes” to save your settings.

### Write your posts with Markdown

Once you have enabled markdown support on your website, you can start using it right away.

Create a new post by going to “Posts -> Add New.” You will notice that the default Visual and Plain Text editors have been replaced by the markdown editor.

If you did not enable the markdown help bar in the configuration options, you will not see a live preview of your formatted markdown. Nonetheless, as long as your syntax is correct, your markdown will be converted to valid HTML when you save or publish the post.
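
For example, a post body like this (hypothetical content) is stored as plain text but comes out as an HTML heading, emphasis and list when published:

```
## Weekly update

Things are going *mostly* fine. This week:

- finished the draft
- fixed two typos
```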

However, if you’re a beginner to markdown and the live preview feature is important to you, simply go back to the settings to enable the help bar option, and you will get a nice live preview area at the bottom of your posts. In addition, you also get some buttons on top that will help you quickly insert markdown syntax into your posts.

![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/08/markdown-wordpress-create-post.png)

### Wrap up

As you can see, adding markdown support to a WordPress website is really easy, and it will only take a few minutes of your time. If you are completely new to markdown, you might also check out our [markdown cheatsheet][2] which provides a comprehensive reference to the markdown syntax.

--------------------------------------------------------------------------------

via: https://www.maketecheasier.com/use-markdown-in-wordpress/?utm_medium=feed&utm_source=feedpress.me&utm_campaign=Feed%3A+maketecheasier

作者:[Ayo Isaiah][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.maketecheasier.com/author/ayoisaiah/
[1]: https://wordpress.org/plugins/wp-markdown/
[2]: https://www.maketecheasier.com/productive-with-markdown-cheatsheet/
@ -0,0 +1,207 @@
Monitoring Docker Containers with Elasticsearch and cAdvisor
=======

If you’re running a Swarm Mode cluster or even a single Docker engine, you’ll end up asking this question:

>How do I keep track of all that’s happening?

The answer is “not easily.”

You need a few things to have a complete overview of stuff like:

1. Number and status of containers
2. If, where, and when a container has been moved to another node
3. Number of containers on a given node
4. Traffic peaks at a given time
5. Orphan volumes and networks
6. Free disk space, free inodes
7. Number of containers against number of veths attached to the docker0 and docker_gwbridge bridges
8. Up and down Swarm nodes
9. Centralized logs

The goal of this post is to demonstrate the use of [Elasticsearch][1] + [Kibana][2] + [cAdvisor][3] as tools to analyze and gather metrics and visualize dashboards for Docker containers.

Later on in this post, you can find a dashboard trying to address a few points from the previous list. There are also points that can’t be addressed by simply using cAdvisor, like the status of Swarm Mode nodes.

Also, if you have specific needs that aren’t covered by cAdvisor or another tool, I encourage you to write your own data collector and data shipper (e.g., [Beats][4]). Note that I won’t be showing you how to centralize Docker container logs on Elasticsearch.

>[“How do you keep track of all that’s happening in a Swarm Mode cluster? Not easily.” via @fntlnz][5]

### Why Do We Need to Monitor Containers?

Imagine yourself in the classic situation of managing a virtual machine, either just one or several. You are a tmux hero, so you have your sessions preconfigured to do basically everything, monitoring included. There’s a problem in production? You just do a top, htop, iotop, jnettop, whatevertop on all your machines, and you’re ready for troubleshooting!

Now imagine that you have the same three nodes but split into 50 containers. You need some history displayed nicely in a single place where you can perform queries to know what happened instead of just risking your life in front of those ncurses tools.

### What Is the Elastic Stack?

The Elastic Stack is a set of tools composed of:

- Elasticsearch
- Kibana
- Logstash
- Beats

We’re going to use a few open-source tools from the Elastic Stack, such as Elasticsearch for the JSON-based analytics engine and Kibana to visualize data and create dashboards.

Another important piece of the Elastic Stack is Beats, but in this post, we’re focused on containers. There’s no official Beat for Docker, so we’ll just use cAdvisor, which can natively talk to Elasticsearch.

cAdvisor is a tool that collects, aggregates, and exports metrics about running containers. In our case, those metrics are being exported to an Elasticsearch storage (see the sketch after the next list).

Two cool facts about cAdvisor are:

- It’s not limited to Docker containers.
- It has its own webserver with a simple dashboard to visualize gathered metrics for the current node.
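
As a rough idea of what that looks like in practice, here is a sketch of running cAdvisor as a container with its Elasticsearch storage driver enabled. The Elasticsearch URL is a placeholder, and you should check the cAdvisor documentation for the exact flags supported by your version:

```
docker run -d \
  --name=cadvisor \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:rw \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --publish=8080:8080 \
  google/cadvisor:latest \
  -storage_driver=elasticsearch \
  -storage_driver_es_host="http://192.168.0.10:9200"
```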

### Set Up a Test Cluster or BYOI

As I did in my previous posts, my habit is to provide a small script to allow the reader to set up a test environment on which to try out my project’s steps in no time. So you can use the following not-for-production-use script to set up a little Swarm Mode cluster with Elasticsearch running as a container.

>If you have enough time/experience, you can BYOI (Bring Your Own Infrastructure).

To follow this post, you’ll just need:

- One or more nodes running the Docker daemon >= 1.12
- At least a stand-alone Elasticsearch node 2.4.X

Again, note that this post is not about setting up a production-ready Elasticsearch cluster. A single-node cluster is not recommended for production. So if you’re planning a production installation, please refer to [Elastic guidelines][6].

### A friendly note for early adopters

I’m usually an early adopter (and I’m already using the latest alpha version in production, of course). But for this post, I chose not to use the latest Elasticsearch 5.0.0 alpha. Their roadmap is not perfectly clear to me, and I don’t want to be the root cause of your problems!

So the Elasticsearch reference version for this post is the latest stable version, 2.4.0 at the moment of writing.
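
If you just want a disposable Elasticsearch node to follow along with (again, not production-grade), one way to get it is to run the official image as a container. The container name and port mapping here are my own choices, not something the post prescribes:

```
docker run -d --name elasticsearch -p 9200:9200 elasticsearch:2.4
```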

### Test cluster setup script

As said, I wanted to provide this script for everyone who would like to follow the blog without having to figure out how to create a Swarm cluster and install Elasticsearch. Of course, you can skip this if you choose to use your own Swarm Mode engines and your own Elasticsearch nodes.

To execute the setup script, you’ll need:

- [Docker Machine][7] – latest version: to provision Docker engines on DigitalOcean
- [DigitalOcean API Token][8]: to allow docker-machine to start nodes on your behalf

### Create Cluster Script

Now that you have everything you need, you can copy the following script into a file named create-cluster.sh:

```
#!/usr/bin/env bash
#
# Create a Swarm Mode cluster with a single master and a configurable number of workers

workers=${WORKERS:-"worker1 worker2"}

#######################################
# Creates a machine on Digital Ocean
# Globals:
#   DO_ACCESS_TOKEN The token needed to access DigitalOcean's API
# Arguments:
#   $1 the actual name to give to the machine
#######################################
create_machine() {
  docker-machine create \
    -d digitalocean \
    --digitalocean-access-token="$DO_ACCESS_TOKEN" \
    --digitalocean-size 2gb \
    "$1"
}

#######################################
# Executes a command on the specified machine
# Arguments:
#   $1     The machine on which to run the command
#   $2..$n The command to execute on that machine
#######################################
machine_do() {
  docker-machine ssh "$@"
}

main() {

  if [ -z "$DO_ACCESS_TOKEN" ]; then
    echo "Please export a DigitalOcean Access token: https://cloud.digitalocean.com/settings/api/tokens/new"
    echo "export DO_ACCESS_TOKEN=<yourtokenhere>"
    exit 1
  fi

  if [ -z "$WORKERS" ]; then
    echo "You haven't provided your workers by setting the \$WORKERS environment variable, using the default ones: $workers"
  fi

  # Create the first and only master
  echo "Creating the master"

  create_machine master1

  master_ip=$(docker-machine ip master1)

  # Initialize the swarm mode on it
  echo "Initializing the swarm mode"
  machine_do master1 docker swarm init --advertise-addr "$master_ip"

  # Obtain the token to allow workers to join
  worker_tkn=$(machine_do master1 docker swarm join-token -q worker)
  echo "Worker token: ${worker_tkn}"

  # Create and join the workers
  # $workers is intentionally unquoted so that it splits into one name per word
  for worker in $workers; do
    echo "Creating worker ${worker}"
    create_machine "$worker"
    machine_do "$worker" docker swarm join --token "$worker_tkn" "$master_ip:2377"
  done
}

main "$@"
```

And make it executable:

```
chmod +x create-cluster.sh
```

### Create the cluster

As the name suggests, we’ll use the script to create the cluster. By default, the script will create a cluster with a single master and two workers. If you want to configure the number of workers, you can do that by setting the WORKERS environment variable.

Now, let’s create that cluster!

```
./create-cluster.sh
```
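
Both knobs the script reads are environment variables, so a run with a custom worker list looks like this (the token value is a placeholder):

```
export DO_ACCESS_TOKEN=<yourtokenhere>
WORKERS="worker1 worker2 worker3" ./create-cluster.sh
```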

Ok, now you can go out for a coffee. This will take a while.

Finally, the cluster is ready!

--------------------------------------------------------------------------------

via: https://blog.codeship.com/monitoring-docker-containers-with-elasticsearch-and-cadvisor/

作者:[Lorenzo Fontana][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://blog.codeship.com/author/lorenzofontana/
[1]: https://github.com/elastic/elasticsearch
[2]: https://github.com/elastic/kibana
[3]: https://github.com/google/cadvisor
[4]: https://github.com/elastic/beats
[5]: https://twitter.com/share?text=%22How+do+you+keep+track+of+all+that%27s+happening+in+a+Swarm+Mode+cluster%3F+Not+easily.%22+via+%40fntlnz&url=https://blog.codeship.com/monitoring-docker-containers-with-elasticsearch-and-cadvisor/
[6]: https://www.elastic.co/guide/en/elasticsearch/guide/2.x/deploy.html
[7]: https://docs.docker.com/machine/install-machine/
[8]: https://cloud.digitalocean.com/settings/api/tokens/new
@ -0,0 +1,62 @@
Ryver: Why You Should Be Using It instead of Slack
=====

It seems like everyone has heard of Slack, a team communication tool that can be used across multiple platforms to stay in the loop. It has revolutionised the way users discuss and plan projects, and it’s a clear upgrade to emails.

I work in small writing teams, and I’ve never had a problem with communicating with others on my phone or computer while using it. If you want to keep up to date with a team of any size, it’s a great way to stay in the loop.

So, why are we here? Ryver is supposed to be the next big thing, offering an upgraded service in comparison to Slack. It’s completely free, and they’re pushing for a larger share of the market.

Is it good enough to be a Slack-killer? What are the differences between these two similar-sounding services?

Read on to find out more.

### Why Ryver?

![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/04/Ryver.jpg)

Why mess with something that works? The developers at Ryver are well aware of Slack, and they’re hoping their improved service will be enough to make you switch over. They promise a completely free team-communication service with no hidden charges along the way.

Thankfully, they deliver on their main aim with a high quality product.

Extra content is the name of the game, and they promise to remove some of the limits you’ll find on a free account with Slack. Unlimited data storage is a major plus point, and it’s also more open in a number of ways. If storage limits are an issue for you, you have to check out Ryver.

It’s a simple system to use, as it was built so that all functions are always one click away. It’s a mantra used to great success by Apple, and there aren’t many growing pains when you first get started.

![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/09/ryver-web-interface.png)

Conversations are split between personal chats and public posts, which means there’s a clear line between team platforms and personal use. It should help you avoid broadcasting any embarrassing announcements to your colleagues, and I’ve seen a few of those during my time as a Slack user.

Integration with a number of existing apps is supported, and there are native applications for most platforms.

You can add guests when needed at no additional cost, which is useful if you deal with external clients regularly. Guests can add more guests, so there’s an element of fluidity that isn’t seen with the more popular option.

Think of Ryver as a completely different service that will cater to different needs. If you need to deal with numerous clients on the same account, it’s worth trying out.

The question is, how is it free? The quick answer is that premium users will be paying your way. Like Spotify and other services, there’s a minority paying for the rest of us. Here’s a direct link to their download page if you’re interested in giving it a go.

### Should You Switch to Ryver?

![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/04/Slack-homepage.jpg)

Slack is great as long as you stick to smaller teams like I do, but Ryver has a lot to offer. The idea of a completely free team messaging program is noble, and it works perfectly.

There’s nothing wrong with using both, so make sure to try out the competition if you’re not willing to pay for a premium Slack account. You might find that both are better in different situations, depending on what you need.

Above all, Ryver is a great free alternative, and it’s more than just a Slack clone. They have a clear idea of what they’re trying to achieve, and they have a decent product that offers something different in a crowded marketplace.

However, there’s a chance that it will disappear if there’s a sustained lack of funding in the future. That could leave your teams and discussions in disarray. Everything is fine for now, but be careful if you plan to move a larger business over to the new upstart.

If you’re tired of Slack’s limitations on a free account, you’ll be impressed by what Ryver has to offer. To learn more, check out their website for information about the service.

--------------------------------------------------------------------------------

via: https://www.maketecheasier.com/why-use-ryver-instead-of-slack/?utm_medium=feed&utm_source=feedpress.me&utm_campaign=Feed%3A+maketecheasier

作者:[James Milin-Ashmore][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.maketecheasier.com/author/james-ashmore/
@ -0,0 +1,52 @@
Taskwarrior: A Brilliant Command-Line TODO App For Linux
====

Taskwarrior is a simple, straightforward command-line based TODO app for Ubuntu/Linux. This open-source app has to be one of the easiest of all [CLI based apps][4] I've ever used. Taskwarrior helps you better organize yourself, without installing bulky new apps, which sometimes defeats the whole purpose of TODO apps.

![](https://2.bp.blogspot.com/-pQnRlOUNIxk/V9cuc3ytsBI/AAAAAAAAKHs/yYxyiAk4PwMIE0HTxlrm6arWOAPcBRRywCLcB/s1600/taskwarrior-todo-app.png)

### Taskwarrior: A Simple CLI Based TODO App That Gets The Job Done!

Taskwarrior is an open-source and cross-platform, command-line based TODO app, which lets you manage your to-do lists right from the Terminal. The app lets you add tasks, shows you the list, and removes tasks from that list with much ease. And what's more, it's available within your default repositories, no need to fiddle with PPAs. In Ubuntu 16.04 LTS and similar, do the following in Terminal to install Taskwarrior.

```
sudo apt-get install task
```

A simple use case can be as follows:

```
$ task add Read a book
Created task 1.
$ task add priority:H Pay the bills
Created task 2.
```

This is the same example I used in the screenshot above. Yes, you can set priority levels (H, L or M) as shown. And then you can use 'task' or 'task next' commands to see your newly-created todo list. For example:

```
$ task next

ID Age P Description                      Urg
-- --- - -------------------------------- ----
 2 10s H Pay the bills                       6
 1 20s   Read a book                         0
```

And once it's completed, you can use 'task 1 done' or 'task 2 done' commands to clear the lists. A more comprehensive list of commands and use cases [can be found here][1]. Also, Taskwarrior is cross-platform, which means you'll find a version that [fits your needs][2] no matter what. There's even an [Android version][3] if you want one. Enjoy!
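
If you want to go one step further, the same attribute syntax works for other fields too. A couple of extra commands worth trying next (the task text is just an example):

```
# Mark the first task as done, then review what's left
task 1 done
task next

# Attributes other than priority work the same way, e.g. a due date
task add due:tomorrow Water the plants
```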

--------------------------------------------------------------------------------

via: http://www.techdrivein.com/2016/09/taskwarrior-command-line-todo-app-linux.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+techdrivein+%28Tech+Drive-in%29

作者:[Manuel Jose][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: http://www.techdrivein.com/2016/09/taskwarrior-command-line-todo-app-linux.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+techdrivein+%28Tech+Drive-in%29
[1]: https://taskwarrior.org/docs/
[2]: https://taskwarrior.org/download/
[3]: https://taskwarrior.org/news/news.20160225.html
[4]: http://www.techdrivein.com/search/label/Terminal
@ -0,0 +1,95 @@
The Five Principles of Monitoring Microservices
====

![](http://thenewstack.io/wp-content/uploads/2016/09/toppicsysdig.jpg)

The need for microservices can be summed up in just one word: speed. The need to deliver more functionality and reliability faster has revolutionized the way developers create software. Not surprisingly, this change has caused ripple effects within software management, including monitoring systems. In this post, we’ll focus on the radical changes required to monitor your microservices in production efficiently. We’ll lay out five guiding principles for adapting your monitoring approach for this new software architecture.

Monitoring is a critical piece of the control systems of microservices, as the more complex your software gets, the harder it is to understand its performance and troubleshoot problems. Given the dramatic changes to software delivery, however, monitoring needs an overhaul to perform well in a microservice environment. The rest of this article presents the five principles of monitoring microservices, as follows:

1. Monitor containers and what’s inside them.
2. Alert on service performance, not container performance.
3. Monitor services that are elastic and multi-location.
4. Monitor APIs.
5. Map your monitoring to your organizational structure.

Leveraging these five principles will allow you to establish more effective monitoring as you make your way towards microservices. These principles will allow you to address both the technological changes associated with microservices and the organizational changes related to them.

### The Principles of Microservice Monitoring

#### 1. Monitor Containers and What’s Running Inside Them

Containers gained prominence as the building blocks of microservices. The speed, portability, and isolation of containers made it easy for developers to embrace a microservice model. There’s been a lot written on the benefits of containers, so we won’t recount it all here.

Containers are black boxes to most systems that live around them. That’s incredibly useful for development, enabling a high level of portability from development through production, from developer laptop to cloud. But when it comes to operating, monitoring and troubleshooting a service, black boxes make common activities harder, leading us to wonder: what’s running in the container? How is the application/code performing? Is it spitting out important custom metrics? From the DevOps perspective, you need deep visibility inside containers rather than just knowing that some containers exist.

![](http://thenewstack.io/wp-content/uploads/2016/09/greatfordev.jpg)

The typical process for instrumentation in a non-containerized environment — an agent that lives in the user space of a host or VM — doesn’t work particularly well for containers. That’s because containers benefit from being small, isolated processes with as few dependencies as possible.

And, at scale, running thousands of monitoring agents for even a modestly sized deployment is an expensive use of resources and an orchestration nightmare. Two potential solutions arise for containers: 1) ask your developers to instrument their code directly, or 2) leverage a universal kernel-level instrumentation approach to see all application and container activity on your hosts. We won’t go into depth here, but each method has pros and cons.

#### 2. Leverage Orchestration Systems to Alert on Service Performance

Making sense of operational data in a containerized environment is a new challenge. The metrics of a single container have a much lower marginal value than the aggregate information from all the containers that make up a function or a service.

This particularly applies to application-level information, like which queries have the slowest response times or which URLs are seeing the most errors, but also applies to infrastructure-level monitoring, like which services’ containers are using the most resources beyond their allocated CPU shares.

Increasingly, software deployment requires an orchestration system to “translate” a logical application blueprint into physical containers. Common orchestration systems include Kubernetes, Mesosphere DC/OS and Docker Swarm. Teams use an orchestration system to (1) define their microservices and (2) understand the current state of each service in deployment. You could argue that the orchestration system is even more important than the containers. The actual containers are ephemeral — they matter only for the short time that they exist — while your services matter for the life of their usefulness.

DevOps teams should redefine alerts to focus on characteristics that get as close to monitoring the experience of the service as possible. These alerts are the first line of defense in assessing if something is impacting the application. But getting to these alerts is challenging, if not impossible, unless your monitoring system is container-native.

Container-native solutions leverage orchestration metadata to dynamically aggregate container and application data and calculate monitoring metrics on a per-service basis. Depending on your orchestration tool, you might have different layers of a hierarchy that you’d like to drill into. For example, in Kubernetes, you typically have a Namespace, ReplicaSets, Pods and some containers. Aggregating at these various layers is essential for logical troubleshooting, regardless of the physical deployment of the containers that make up the service.
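
To make that concrete with a small example of my own (not from the authors): in Kubernetes, the labels and namespaces attached to pods are the metadata a container-native tool can aggregate on, and you can query them yourself:

```
# All pods backing a hypothetical "checkout" service, wherever they run
kubectl get pods -l app=checkout --namespace=shop
```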

![](http://thenewstack.io/wp-content/uploads/2016/09/servicemonitoring.jpg)

#### 3. Be Prepared for Services that are Elastic and Multi-Location

Elastic services are certainly not a new concept, but the velocity of change is much faster in container-native environments than virtualized environments. Rapidly changing environments can wreak havoc on brittle monitoring systems.

Frequently, monitoring legacy systems required manual tuning of metrics and checks based on individual deployments of software. This tuning can be as specific as defining the individual metrics to be captured, or configuring collection based on what application is operating in a particular container. While that may be acceptable on a small scale (think tens of containers), it would be unbearable in anything larger. Microservice-focused monitoring must be able to comfortably grow and shrink in step with elastic services, without human intervention.

For example, if the DevOps team must manually define what service a container is included in for monitoring purposes, they will no doubt drop the ball as Kubernetes or Mesos spins up new containers regularly throughout the day. Similarly, if Ops were required to install a custom stats endpoint when new code is built and pushed into production, challenges may arise as developers pull base images from a Docker registry.

In production, build monitoring toward a sophisticated deployment that spans multiple data centers or multiple clouds. Leveraging, for example, AWS CloudWatch will only get you so far if your services span your private data center as well as AWS. That leads back to implementing a monitoring system that can span these different locations as well as operate in dynamic, container-native environments.

#### 4. Monitor APIs

In microservice environments, APIs are the lingua franca. They are essentially the only elements of a service that are exposed to other teams. In fact, response and consistency of the API may be the “internal SLA” even if there isn’t a formal SLA defined.

As a result, API monitoring is essential. API monitoring can take many forms but clearly must go beyond binary up/down checks. For instance, it’s valuable to understand the most frequently used endpoints as a function of time. This allows teams to see if anything noticeable has changed in the usage of services, whether it be due to a design change or a user change.

You can also consider the slowest endpoints of your service, as these can reveal significant problems, or, at the very least, point to areas that need the most optimization in your system.

Finally, the ability to trace service calls through your system represents another critical capability. While typically used by developers, this type of profiling will help you understand the overall user experience while breaking information down into infrastructure and application-based views of your environment.

#### 5. Map Monitoring to Your Organizational Structure

While most of this post has been focused on the technological shift in microservices and monitoring, like any technology story, this is as much about people as it is about software bits.

For those of you familiar with Conway’s law, it reminds us that the design of systems is defined by the organizational structure of the teams building them. The allure of creating faster, more agile software has pushed teams to think about restructuring their development organization and the rules that govern it.

![](http://thenewstack.io/wp-content/uploads/2016/09/mapmonitoring.jpg)

So if an organization wants to benefit from this new software architecture approach, their teams must, therefore, mirror microservices themselves. That means smaller teams, loosely coupled, that can choose their own direction as long as it still meets the needs of the whole. Within each team, there is more control than ever over languages used, how bugs are handled, or even operational responsibilities.

DevOps teams can enable a monitoring platform that does exactly this: allows each microservice team to isolate their alerts, metrics, and dashboards, while still giving operations a view into the global system.

### Conclusion

There’s one clear trigger event that precipitated the move to microservices: speed. Organizations wanted to deliver more capabilities to their customers in less time. Once this happened, technology stepped in: the architectural move to microservices and the underlying shift to containers made speed happen. Anything that gets in the way of this progress train is going to get run over on the tracks.

As a result, the fundamental principles of monitoring need to adapt to the underlying technology and organizational changes that accompany microservices. Operations teams that recognize this shift can adapt to microservices earlier and more easily.

--------------------------------------------------------------------------------

via: http://linoxide.com/firewall/pfsense-setup-basic-configuration/

作者:[Apurva Dave][a], [Loris Degioanni][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: http://thenewstack.io/author/apurvadave/
[b]: http://thenewstack.io/author/lorisdegioanni/
@ -0,0 +1,85 @@
|
||||
Down and dirty with Windows Nano Server 2016
|
||||
====
|
||||
|
||||
![](http://images.techhive.com/images/article/2016/04/pokes-fun-at-1164459_1280-100654917-primary.idge.jpg)
|
||||
|
||||
>Nano Server is a very fast, powerful tool for remotely administering Windows servers, but you need to know what you're doing
|
||||
|
||||
There's been a good deal of talk around the [upcoming Nano version of Windows Server 2016][1], the remote-administered, command-line version designed with private clouds and datacenters in mind. But there's also a big difference between talking about it and getting your hands into it. Let's get into the guts.
|
||||
|
||||
Nano has no local login, is 64-bit all the way (applications, tools, and agents), and is fast to set up, update, and restart (for the rare times it needs to restart). It's perfect for compute hosts in or out of a cluster, a storage host, a DNS server, an IIS web server, and any server-hosting applications running in a container or virtual-machine guest operating system.
|
||||
|
||||
A Nano Server isn't all that fun to play with: You have to know what you want to accomplish. Otherwise, you'll be looking at a remote PowerShell connection and wondering what you're supposed to do next. But if you know what you want, it's very fast and powerful.
|
||||
|
||||
Microsoft has provided a [quick-start guide][2] to setting up Nano Server. Here, I take the boots-on-the-ground approach to show you what it's like in the real world.
|
||||
|
||||
First, you have to create a .vhd virtual hard drive file. As you can see in Figure 1, I had a few issues with files not being in the right place. PowerShell errors often indicate a mistyped line, but in this case, I had to keep double-checking where I put the files so that it could use the ISO information (which has to be copied and pasted to the server you want to create the .vhd file on). Once you have everything in place, you should see it go through the process of creating the .vhd file.
|
||||
|
||||
![](http://images.techhive.com/images/article/2016/09/nano-server-1-100682371-large.idge.jpg)
|
||||
>Figure 1: One of the many file path errors I got when trying to run the New-NanoServerImage script. Once I worked out the file-location issues, it went through and created the .vhd file (as shown here).
|
||||
|
||||
Next, when you create the VM in Hyper-V using the VM wizard, you need to point to an existing virtual hard disk and point to the new .vhd file you created (Figure 2).
|
||||
|
||||
![](http://images.techhive.com/images/article/2016/09/nano-server-2-100682368-large.idge.jpg)
|
||||
>Figure 2: Connecting to a virtual hard disk (the one you created at the start).
|
||||
|
||||
When you start up the Nano server, you may get a memory error depending on how much memory you allocated and how much memory the Hyper-V server has left if you have other VMs running. I had to shut off a few VMs and increase the RAM until it finally started up. That was unexpected -- [Microsoft's Nano system][3] requirements say you can run it with 512MB, although it recommends you give it at least 800MB. (I ended up allocating 8GB after 1GB didn't work; I was impatient, so I didn't try increments in between.)
|
||||
|
||||
I finally came to the login screen, then signed in to get the Nano Server Recovery Console (Figure 3), which is essentially Nano server's terminal screen.
|
||||
|
||||
![](http://images.techhive.com/images/article/2016/09/nano-server-3-100682369-large.idge.jpg)
|
||||
>Figure 3: The Nano Server Recovery Console.
|
||||
|
||||
Once I was in, I thought I was golden. But in trying to figure out a few details (how to join a domain, how to inject drivers I might not have, how to add roles), I realized that some configuration pieces would have been easier to add when I ran the New-NanoServerImage cmdlet by popping in a few more parameters.
|
||||
|
||||
However, once you have the server up and running, there are ways to configure it live. It all starts with a Remote PowerShell connection, as Figure 4 shows.
|
||||
|
||||
![](http://images.techhive.com/images/article/2016/09/nano-server-4-100682370-large.idge.jpg)
|
||||
>Figure 4: Getting information from the Nano Server Recovery Console that you can use to perform a PowerShell Remote connection.
|
||||
|
||||
Microsoft provides direction on how to make the connection happen, but after trying four different sites, I found MSDN has the clearest (working) direction on the subject. Figure 5 shows the result.
|
||||
|
||||
![](http://images.techhive.com/images/article/2016/09/nano-server-5-100682372-large.idge.jpg)
|
||||
>Figure 5: Making the remote PowerShell connection to your Nano Server.
|
||||
|
||||
Note: Once you've done the remote connection the long way, you can connect more quickly using a single line:
|
||||
|
||||
```
Enter-PSSession -ComputerName "192.168.0.100" -Credential ~\Administrator
```
|
||||
|
||||
If you knew ahead of time that this server was going to be a DNS server or be part of a compute cluster and so on, you would have added those roles or feature packages when you were creating the .vhd image in the first place. If you're looking to do so after the fact, you'll need to make the remote PowerShell connection, then install the NanoServerPackage and import it. Then you can see which packages you want to deploy using Find-NanoServerPackage (shown in Figure 6).
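In rough outline, that sequence looks like this (a sketch based on the flow described above; it assumes the NanoServerPackage provider is reachable from your session):

```
# Install and load the package provider (one-time setup)
Install-PackageProvider NanoServerPackage
Import-PackageProvider NanoServerPackage

# List the role and feature packages available for Nano Server
Find-NanoServerPackage
```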
|
||||
|
||||
![](http://images.techhive.com/images/article/2016/09/nano-server-6-100682373-large.idge.jpg)
|
||||
>Figure 6: Once you have installed and imported the NanoServerPackage, you can find the one you need to get your Nano Server up and running with the roles and features you require.
|
||||
|
||||
I tested this out by running the DNS package with the following command: `Install-NanoServerPackage -Name Microsoft-NanoServer-DNS-Package`. Once it was installed, I had to enable it with the following command: `Enable-WindowsOptionalFeature -Online -FeatureName DNS-Server-Full-Role`.
|
||||
|
||||
Obviously I didn't know these commands ahead of time. I have never run them before in my life, nor had I ever enabled a DNS role this way, but with a little research I had a DNS (Nano) Server up and running.
|
||||
|
||||
The next part of the process involves using PowerShell to configure the DNS server. That's a completely different topic and one best researched online. But it doesn't appear to be mind-blowingly difficult once you've learned the cmdlets to use: Add a zone? Use the Add-DnsServerPrimaryZone cmdlet. Add a record in that zone? Use the Add-DnsServerResourceRecordA cmdlet. And so on.
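To make that concrete, here is a minimal sketch (the zone name, record name, and IP address are made-up examples, not values from this walkthrough):

```
# Create a primary zone, then add an A record to it
Add-DnsServerPrimaryZone -Name "example.lan" -ZoneFile "example.lan.dns"
Add-DnsServerResourceRecordA -ZoneName "example.lan" -Name "web01" -IPv4Address "192.168.0.50"
```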
|
||||
|
||||
After doing all this command-line work, you'll likely want proof that any of it is working. A quick review of the available PowerShell commands (using Get-Command) will reveal the many DNS cmdlets that now present themselves.
|
||||
|
||||
But if you need a GUI-based confirmation, you can open Server Manager on a GUI-based server and add the IP address of the Nano Server. Then right-click that server and choose Manage As to provide your credentials (~\Administrator and password). Once you have connected, right-click the server in Server Manager and choose Add Roles and Features; it should show that you have DNS installed as a role, as Figure 7 shows.
|
||||
|
||||
![](http://images.techhive.com/images/article/2016/09/nano-server-7-100682374-large.idge.jpg)
|
||||
>Figure 7: Proving through the GUI that DNS was installed.
|
||||
|
||||
Don't bother trying to remote-desktop into the server. There is only so much you can do through the Server Manager tool, and that isn't one of them. And just because you can confirm the DNS role doesn't mean you have the ability to add new roles and features through the GUI. It's all locked down. Nano Server is how you'll make any needed adjustments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.infoworld.com/article/3119770/windows-server/down-and-dirty-with-windows-nano-server-2016.html?utm_source=webopsweekly&utm_medium=email
|
||||
|
||||
作者:[J. Peter Bruzzese ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: http://www.infoworld.com/author/J.-Peter-Bruzzese/
|
||||
[1]: http://www.infoworld.com/article/3049191/windows-server/nano-server-a-slimmer-slicker-windows-server-core.html
|
||||
[2]: https://technet.microsoft.com/en-us/windows-server-docs/compute/nano-server/getting-started-with-nano-server
|
||||
[3]: https://technet.microsoft.com/en-us/windows-server-docs/get-started/system-requirements--and-installation
|
||||
|
@ -0,0 +1,90 @@
|
||||
How to Speed Up LibreOffice with 4 Simple Steps
|
||||
====
|
||||
|
||||
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/08/speed-up-libreoffice-featured-2.jpg)
|
||||
|
||||
For many fans and supporters of open source software, LibreOffice is the best alternative to Microsoft Office, and it has definitely seen huge improvements over the last few releases. However, the initial startup experience still leaves a lot to be desired. Fortunately, there are ways to improve the launch time and overall performance of LibreOffice.
|
||||
|
||||
I will go over some practical steps that you can take to improve the load time and responsiveness of LibreOffice in the paragraphs below.
|
||||
|
||||
### 1. Increase Memory Per Object and Image Cache
|
||||
|
||||
This will help the program load faster by allocating more memory resources to the image cache and objects.
|
||||
|
||||
1. Launch LibreOffice Writer (or Calc)
|
||||
|
||||
2. Navigate to “Tools -> Options” in the menubar or use the keyboard shortcut “Alt + F12.”
|
||||
|
||||
3. Click “Memory” under LibreOffice and increase “Use for LibreOffice” to 128MB.

4. Also increase “Memory per object” to 20MB.

5. Click “OK” to save your changes.
|
||||
|
||||
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/08/speed-up-libreoffice-step-1.png)
|
||||
|
||||
Note: You can set the numbers higher or lower than the suggested values depending on how powerful your machine is. It is best to experiment and see which value gives you the optimum performance.
|
||||
|
||||
### 2. Enable LibreOffice QuickStarter
|
||||
|
||||
If you have a generous amount of RAM on your machine, say 4GB and above, you can enable the “Systray Quickstarter” option to keep part of LibreOffice in memory for quicker response with opening new documents.
|
||||
|
||||
You will definitely see improved performance in opening new documents after enabling this option.
|
||||
|
||||
1. Open the options dialog by navigating to “Tools -> Options.”
|
||||
|
||||
2. In the sidebar under “LibreOffice”, select “Memory.”
|
||||
|
||||
3. Tick the “Enable Systray Quickstarter” checkbox.
|
||||
|
||||
4. Click “OK” to save the changes.
|
||||
|
||||
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/08/speed-up-libreoffice-2.png)
|
||||
|
||||
Once this option is enabled, you will see the LibreOffice icon in your system tray with options to open any type of document.
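If you prefer to start the Quickstarter from a shell or an autostart script rather than through the dialog, LibreOffice's launcher accepts a flag for it (a minimal sketch; depending on your distribution the binary may be soffice or libreoffice):

```
# Preload LibreOffice into memory in the background
soffice --quickstart &
```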
|
||||
|
||||
### 3. Disable Java Runtime
|
||||
|
||||
Another easy way to speed up the launch time and responsiveness of LibreOffice is to disable Java.
|
||||
|
||||
1. Open the Options dialog using “Alt + F12.”
|
||||
|
||||
2. In the sidebar, select “LibreOffice,” then “Advanced.”
|
||||
|
||||
3. Uncheck the “Use Java runtime environment” option.
|
||||
|
||||
4. Click “OK” to close the dialog.
|
||||
|
||||
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/08/speed-up-libreoffice-3.png)
|
||||
|
||||
If all you use is Writer and Calc, disabling Java will not stop you from working with your files as normal. But to use LibreOffice Base and some other special features, you may need to re-enable it. In that case, you will get a popup asking if you wish to turn it back on.
|
||||
|
||||
### 4. Reduce Number of Undo Steps
|
||||
|
||||
By default, LibreOffice allows you to undo up to 100 changes to a document. Most users do not need anywhere near that, so holding that many steps in memory is largely a waste of resources.
|
||||
|
||||
I recommend that you reduce this number to 20 to free up memory for other things, but feel free to customise this part to suit your needs.
|
||||
|
||||
1. Open the options dialog by navigating to “Tools -> Options.”
|
||||
|
||||
2. In the sidebar under “LibreOffice,” select “Memory.”
|
||||
|
||||
3. Under “Undo,” change the number of steps to your preferred value.
|
||||
|
||||
4. Click “OK” to save the changes.
|
||||
|
||||
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/08/speed-up-libreoffice-5.png)
|
||||
|
||||
If the tips provided helped you speed up the launch time of your LibreOffice Suite, let us know in the comments. Also, please share any other tips you may know for others to benefit as well.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.maketecheasier.com/speed-up-libreoffice/?utm_medium=feed&utm_source=feedpress.me&utm_campaign=Feed%3A+maketecheasier
|
||||
|
||||
作者:[Ayo Isaiah][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.maketecheasier.com/author/ayoisaiah/
|
@ -0,0 +1,66 @@
|
||||
Translating by bianjp
|
||||
|
||||
It's time to make LibreOffice and OpenOffice one again
|
||||
==========
|
||||
|
||||
![](http://tr2.cbsistatic.com/hub/i/2016/09/14/2e91089b-7ebd-4579-bf8f-74c34d1a94ce/e7e9c8dd481d8e068f2934c644788928/openofficedeathhero.jpg)
|
||||
|
||||
Let's talk about OpenOffice. More than likely you've already read, countless times, that Apache OpenOffice is near the end. The last stable iteration was 4.1.2 (released October 2015), and a recent major security flaw took a month to patch. A lack of coders has brought development to a creeping crawl. And then the worst possible news hit the ether: the project suggested users switch to MS Office (or LibreOffice).
|
||||
|
||||
For whom the bell tolls? The bell tolls for thee, OpenOffice.
|
||||
|
||||
I'm going to say something that might ruffle a few feathers. Are you ready for it?
|
||||
|
||||
The end of OpenOffice will be a good thing for open source and for users.
|
||||
|
||||
Let me explain.
|
||||
|
||||
### One fork to rule them all
|
||||
|
||||
When LibreOffice was forked from OpenOffice we saw yet another instance of the fork not only improving on the original, but vastly surpassing it. LibreOffice was an instant success. Every Linux distribution that once shipped with OpenOffice migrated to the new kid on the block. LibreOffice burst out of the starting gate and immediately hit its stride. Updates came at an almost breakneck speed and the improvements were plenty and important.
|
||||
|
||||
After a while, OpenOffice became an afterthought for the open source community. This, of course, was exacerbated when Oracle decided to discontinue the project in 2011 and donated the code to the Apache Project. By this point OpenOffice was struggling to move forward and that brings us to now. A burgeoning LibreOffice and a suffering, stuttering OpenOffice.
|
||||
|
||||
But I say there is a light at the end of this rather dim tunnel.
|
||||
|
||||
### Unfork them
|
||||
|
||||
This may sound crazy, but I think it's time LibreOffice and OpenOffice became one again. Yes, I know there are probably political issues and egos at stake, but I believe the two would be better served as one. The benefits of this merger would be many. Off the top of my head:
|
||||
|
||||
- Bring the MS Office filters together: OpenOffice has a strong track record of importing certain files from MS Office better (whereas LibreOffice's filters have been improving, but remain spotty).
|
||||
- More developers for LibreOffice: Although OpenOffice wouldn't bring with it a battalion of developers, it would certainly add to the mix.
|
||||
- End the confusion: Many users assume OpenOffice and LibreOffice are the same thing. Some don't even know that LibreOffice exists. This would end that confusion.
|
||||
- Combine their numbers: Separate, OpenOffice and LibreOffice have impressive usage numbers. Together, they would be a force.
|
||||
### A golden opportunity
|
||||
|
||||
The possible loss of OpenOffice could actually wind up being a golden opportunity for open source office suites in general. Why? I would like to suggest something that I believe has been necessary for a while now. If OpenOffice and LibreOffice were to gather their forces, diff their code, and merge, they could then do some much-needed retooling of not just the internal works of the whole, but also of the interface.
|
||||
|
||||
Let's face it, the LibreOffice and (by extension) OpenOffice UIs are both way out of date. When I install LibreOffice 5.2.1.2 the tool bar is an absolute disaster (Figure A).
|
||||
|
||||
### Figure A
|
||||
|
||||
![](http://tr2.cbsistatic.com/hub/i/2016/09/14/cc5250df-48cd-40e3-a083-34250511ffab/c5ac8eb1e2cb12224690a6a3525999f0/openofficea.jpg)
|
||||
|
||||
#### The LibreOffice default toolbar setup.
|
||||
|
||||
As much as I support and respect (and use daily) LibreOffice, it has become all too clear that the interface needs a complete overhaul. What we're dealing with now is a throwback to the late 90s/early 2000s, and it has to go. When a new user opens LibreOffice for the first time, they are inundated with buttons, icons, and toolbars. Ubuntu Unity helped with this via the Heads-Up Display (HUD), but that did nothing for other desktops and distributions. Sure, the enlightened user has no problem knowing what to look for and where it is (or even customizing the toolbars to reflect their specific needs), but for a new or average user, that interface is a nightmare.

Now would be the perfect time for this change. Bring in the last vestiges of the OpenOffice developers and have them join the fight for an improved interface. With the combination of the additional import filters from OpenOffice and a modern interface, LibreOffice could finally make some serious noise on both the home and business desktops.
|
||||
|
||||
### Will this actually happen?
|
||||
|
||||
This needs to happen. Will it? I have no idea. But even if the powers that be decide the UI isn't in need of retooling (which would be a mistake), bringing OpenOffice into the fold would still be a big step forward. The merging of the two efforts would bring about a stronger focus on development, easier marketing, and far less confusion by the public at large.
|
||||
|
||||
I realize this might seem a bit antithetical to the very heart and spirit of open source, but merging LibreOffice and OpenOffice would combine the strengths of the two constituent pieces and possibly jettison the weaknesses.
|
||||
|
||||
From my perspective, that's a win-win.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.techrepublic.com/article/its-time-to-make-libreoffice-and-openoffice-one-again/
|
||||
|
||||
作者:[Jack Wallen ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: http://www.techrepublic.com/search/?a=jack%2Bwallen
|
@ -0,0 +1,98 @@
|
||||
NitroShare – Easily Share Files Between Multiple Operating Systems on Local Network
|
||||
====
|
||||
|
||||
One of the most important uses of a network is file sharing. There are multiple ways users of Linux, Windows, and Mac OS X on a network can share files with each other, and in this post we shall cover NitroShare, a cross-platform, open source, and easy-to-use application for sharing files across a local network.

NitroShare tremendously simplifies file sharing on a local network. Once installed, it integrates with the operating system seamlessly. On Ubuntu, simply open it from the application indicator, and on Windows, check the system tray.
|
||||
|
||||
Additionally, it automatically detects every other device on a network that has Nitroshare installed thereby enabling a user to easily transfer files from one machine to another by selecting which device to transfer to.
|
||||
|
||||
The following are the notable features of NitroShare:
|
||||
|
||||
- Cross-platform, runs on Linux, Windows and Mac OS X
|
||||
- Easy to setup, no configurations required
|
||||
- It’s simple to use
|
||||
- Supports automatic discovery of devices running Nitroshare on local network
|
||||
- Supports optional TLS encryption for security
|
||||
- Works at high speeds on fast networks
|
||||
- Supports transfer of files and directories (folders on Windows)
|
||||
- Supports desktop notifications about sent files, connected devices and more
|
||||
|
||||
The latest version of Nitroshare was developed using Qt 5, it comes with some great improvements such as:
|
||||
|
||||
- Polished user interfaces
|
||||
- Simplified device discovery process
|
||||
- Removal of file size limitation from other versions
|
||||
- Configuration wizard has also been removed to make it easy to use
|
||||
|
||||
### How To Install Nitroshare on Linux Systems
|
||||
|
||||
|
||||
NitroShare is developed to run on a wide variety of modern Linux distributions and desktop environments.
|
||||
|
||||
#### On Debian Sid and Ubuntu 16.04+
|
||||
|
||||
NitroShare is included in the Debian and Ubuntu software repositories and can be easily installed with the following command.
|
||||
|
||||
```
$ sudo apt-get install nitroshare
```
|
||||
|
||||
The available version might be out of date, however. To install the latest version of NitroShare, issue the commands below to add the PPA for the latest packages and install it:

```
$ sudo apt-add-repository ppa:george-edison55/nitroshare
$ sudo apt-get update
$ sudo apt-get install nitroshare
```
|
||||
|
||||
#### On Fedora 23-24

Recently, NitroShare was included in the Fedora repositories and can be installed with the following command:

```
$ sudo dnf install nitroshare
```
|
||||
|
||||
#### On Arch Linux
|
||||
|
||||
For Arch Linux, NitroShare packages are available from the AUR and can be built/installed with the following commands:
|
||||
|
||||
```
$ wget https://aur.archlinux.org/cgit/aur.git/snapshot/nitroshare.tar.gz
$ tar xf nitroshare.tar.gz
$ cd nitroshare
$ makepkg -sri
```
|
||||
|
||||
### How to Use NitroShare on Linux
|
||||
|
||||
Note: As I mentioned earlier, all other machines that you wish to share files with on the local network must have NitroShare installed and running.
|
||||
|
||||
After successfully installing it, search for Nitroshare in the system dash or system menu and launch it.
|
||||
|
||||
![](http://www.tecmint.com/wp-content/uploads/2016/09/NitroShare-Send-Files.png)
|
||||
|
||||
After selecting the files, click “Open” to proceed to choosing the destination device, as in the image below. Select the device and click “Ok” (assuming you have other devices running NitroShare on the local network).
|
||||
|
||||
![](http://www.tecmint.com/wp-content/uploads/2016/09/NitroShare-Local-Devices.png)
|
||||
|
||||
![](http://www.tecmint.com/wp-content/uploads/2016/09/NitroShare-File-Transfer-Progress.png)
|
||||
|
||||
From the NitroShare settings – General tab, you can add the device name and set the default downloads location, and in the Advanced settings you can set the port, buffer size, timeout, and so on, but only if needed.
|
||||
|
||||
Homepage: <https://nitroshare.net/index.html>
|
||||
|
||||
That’s it for now, if you have any issues regarding Nitroshare, you can share with us using our comment section below. You can as well make suggestions and let us know of any wonderful, cross-platform file sharing applications out there that we probably have no idea about and always remember to stay connected to Tecmint.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/firewall/pfsense-setup-basic-configuration/
|
||||
|
||||
作者:[Aaron Kili][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: http://www.tecmint.com/author/aaronkili/
|
@ -0,0 +1,364 @@
|
||||
Server Monitoring with Shinken on Ubuntu 16.04
|
||||
=====
|
||||
|
||||
|
||||
Shinken is an open source computer and network monitoring framework written in Python and compatible with Nagios. Shinken can be used on all operating systems that can run Python applications, such as Linux, Unix, and Windows. Shinken was written by Jean Gabes as a proof of concept for a new Nagios architecture, but it was turned down by the Nagios author and became an independent network and system monitoring tool that stays compatible with Nagios.
|
||||
|
||||
In this tutorial, I will show you how to install Shinken from source and add a Linux host to the monitoring system. I will use Ubuntu 16.04 Xenial Xerus as the operating system for the Shinken server and monitored host.
|
||||
|
||||
### Step 1 - Install Shinken Server
|
||||
|
||||
Shinken is a Python framework; we can install it with pip or from source. In this step, we will install Shinken from source.

There are some tasks that have to be completed before we start installing Shinken.

Install some new Python packages and create a Linux user with the name "shinken":
|
||||
|
||||
```
sudo apt-get install python-setuptools python-pip python-pycurl
sudo useradd -m -s /bin/bash shinken
```
|
||||
|
||||
Download the Shinken source from GitHub repository:
|
||||
|
||||
```
git clone https://github.com/naparuba/shinken.git
cd shinken/
```
|
||||
|
||||
Then install Shinken with the command below:
|
||||
|
||||
```
git checkout 2.4.3
sudo python setup.py install
```
|
||||
|
||||
Next, for better results, we need to install 'python-cherrypy3' from the ubuntu repository:
|
||||
|
||||
```
sudo apt-get install python-cherrypy3
```
|
||||
|
||||
Now Shinken is installed. Next, configure Shinken to start at boot time and then start it:

```
sudo update-rc.d shinken defaults
sudo systemctl start shinken
```
|
||||
|
||||
### Step 2 - Install Shinken Webui2
|
||||
|
||||
Webui2 is the Shinken web interface available from shinken.io. The easiest way to install Shinken webui2 is by using the shinken CLI command (which has to be executed as the shinken user).
|
||||
|
||||
Login to the shinken user:
|
||||
|
||||
```
su - shinken
```
|
||||
|
||||
Initialize the shinken configuration. The command will create a new configuration file, .shinken.ini:

```
shinken --init
```
|
||||
|
||||
And install webui2 with this shinken CLI command:
|
||||
|
||||
```
shinken install webui2
```
|
||||
|
||||
![](https://www.howtoforge.com/images/server-monitoring-with-shinken-on-ubuntu-16-04/6.png)
|
||||
|
||||
Webui2 is installed, but we need to install MongoDB and a few more Python packages with pip. Run the commands below (note the quotes around the pymongo requirement, which keep the shell from treating >= as a redirection):

```
sudo apt-get install mongodb
sudo pip install 'pymongo>=3.0.3' requests arrow bottle==0.12.8
```
|
||||
|
||||
Next, go to the shinken directory and add the new webui2 module by editing the 'broker-master.cfg' file:
|
||||
|
||||
```
cd /etc/shinken/brokers/
vim broker-master.cfg
```
|
||||
|
||||
Add the webui2 module to the modules line (around line 40):

```
modules webui2
```
|
||||
|
||||
Save the file and exit the editor.
|
||||
|
||||
Now go to the contacts directory and edit the file 'admin.cfg' for the admin configuration.
|
||||
|
||||
```
cd /etc/shinken/contacts/
vim admin.cfg
```
|
||||
|
||||
Change the values shown below:
|
||||
|
||||
```
contact_name    admin       # Username 'admin'
password        mypass      # Password 'mypass'
```
|
||||
|
||||
Save and exit.
|
||||
|
||||
### Step 3 - Install Nagios-plugins and Shinken Packages
|
||||
|
||||
In this step, we will install Nagios-plugins and some Perl modules. Then we install additional Shinken packages from shinken.io to perform the monitoring.

Install Nagios-plugins and cpanminus, which is required for building and installing the Perl modules:

```
sudo apt-get install nagios-plugins* cpanminus
```
|
||||
|
||||
Install these Perl modules with the cpanm command:
|
||||
|
||||
```
cpanm Net::SNMP
cpanm Time::HiRes
cpanm DBI
```
|
||||
|
||||
Now make the check_icmp plugin setuid root, link the utils.pm file into the Shinken libexec directory, and create the log directory and file used by Log_File_Health:

```
sudo chmod u+s /usr/lib/nagios/plugins/check_icmp
sudo ln -s /usr/lib/nagios/plugins/utils.pm /var/lib/shinken/libexec/
sudo mkdir -p /var/log/rhosts/
sudo touch /var/log/rhosts/remote-hosts.log
```
|
||||
|
||||
Next, install the shinken packages ssh and linux-snmp for monitoring SSH and SNMP sources from shinken.io:
|
||||
|
||||
```
su - shinken
shinken install ssh
shinken install linux-snmp
```
|
||||
|
||||
### Step 4 - Add a New Linux Host/host-one
|
||||
|
||||
We will add a new Linux host to be monitored: an Ubuntu 16.04 server with the IP address 192.168.1.121 and the hostname 'host-one'.
|
||||
|
||||
Connect to the Linux host-one:
|
||||
|
||||
```
ssh host1@192.168.1.121
```
|
||||
|
||||
Install the snmp and snmpd packages from the Ubuntu repository:
|
||||
|
||||
```
sudo apt-get install snmp snmpd
```
|
||||
|
||||
Next, edit the configuration file 'snmpd.conf' with vim:
|
||||
|
||||
```
vim /etc/snmp/snmpd.conf
```
|
||||
|
||||
Comment line 15 and uncomment line 17:
|
||||
|
||||
```
#agentAddress udp:127.0.0.1:161
agentAddress udp:161,udp6:[::1]:161
```
|
||||
|
||||
Comment out lines 51 and 53, then add the new configuration line shown below:

```
#rocommunity mypass default -V systemonly
#rocommunity6 mypass default -V systemonly

rocommunity mypass
```
|
||||
|
||||
Save and exit.
|
||||
|
||||
Now start the snmpd service with the systemctl command:

```
sudo systemctl start snmpd
```
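Optionally, verify from the Shinken server that the host answers SNMP queries before adding it (a quick sanity check; it assumes the snmp package, which provides snmpwalk, is installed on the server):

```
snmpwalk -v 2c -c mypass 192.168.1.121 system
```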
|
||||
|
||||
Go to the shinken server and define the new host by creating a new file in the 'hosts' directory.
|
||||
|
||||
```
cd /etc/shinken/hosts/
vim host-one.cfg
```
|
||||
|
||||
Paste the configuration below:

```
define host{
    use                 generic-host,linux-snmp,ssh
    contact_groups      admins
    host_name           host-one
    address             192.168.1.121
    _SNMPCOMMUNITY      mypass      # SNMP community set in snmpd.conf
}
```
|
||||
|
||||
Save and exit.
|
||||
|
||||
Edit the SNMP configuration on the Shinken server:
|
||||
|
||||
```
vim /etc/shinken/resource.d/snmp.cfg
```
|
||||
|
||||
Change 'public' to 'mypass' - it must be the same community string that you used in the snmpd configuration file on the client host-one.

```
$SNMPCOMMUNITYREAD$=mypass
```
|
||||
|
||||
Save and exit.
|
||||
|
||||
Now reboot both servers - the Shinken server and the monitored Linux host:

```
sudo reboot
```
|
||||
|
||||
The new Linux host has been added successfully to the Shinken server.
|
||||
|
||||
### Step 5 - Access Shinken Webui2
|
||||
|
||||
Visit the Shinken webui2 on port 7767 (replace the IP in the URL with your IP):

```
http://192.168.1.120:7767
```
|
||||
|
||||
Log in with user admin and your password (the one that you have set in the admin.cfg configuration file).
|
||||
|
||||
![](https://www.howtoforge.com/images/server-monitoring-with-shinken-on-ubuntu-16-04/1.png)
|
||||
|
||||
Shinken Dashboard in Webui2.
|
||||
|
||||
![](https://www.howtoforge.com/images/server-monitoring-with-shinken-on-ubuntu-16-04/2.png)
|
||||
|
||||
Our 2 servers are monitored with Shinken.
|
||||
|
||||
![](https://www.howtoforge.com/images/server-monitoring-with-shinken-on-ubuntu-16-04/3.png)
|
||||
|
||||
List all services that are monitored with linux-snmp.
|
||||
|
||||
![](https://www.howtoforge.com/images/server-monitoring-with-shinken-on-ubuntu-16-04/4.png)
|
||||
|
||||
Status of all hosts and services.
|
||||
|
||||
|
||||
![](https://www.howtoforge.com/images/server-monitoring-with-shinken-on-ubuntu-16-04/5.png)
|
||||
|
||||
### Step 6 - Common Problems with Shinken
|
||||
|
||||
- Problems with the NTP server
|
||||
|
||||
You may get errors like these from the NTP check:

```
TimeSync - CRITICAL ( NTP CRITICAL: No response from the NTP server)
TimeSync - CRITICAL ( NTP CRITICAL: Offset unknown )
```
|
||||
|
||||
To solve this problem, install ntp on all Linux hosts:

```
sudo apt-get install ntp ntpdate
```
|
||||
|
||||
Edit the ntp configuration:
|
||||
|
||||
```
vim /etc/ntp.conf
```
|
||||
|
||||
Comment out all the pool lines and replace them with the following:

```
#pool 0.ubuntu.pool.ntp.org iburst
#pool 1.ubuntu.pool.ntp.org iburst
#pool 2.ubuntu.pool.ntp.org iburst
#pool 3.ubuntu.pool.ntp.org iburst

pool 0.id.pool.ntp.org
pool 1.asia.pool.ntp.org
pool 0.asia.pool.ntp.org
```
|
||||
|
||||
Next, add a new line inside the restrict section (192.168.1.120 is the Shinken server's IP address):

```
# Local users may interrogate the ntp server more closely.
restrict 127.0.0.1
restrict 192.168.1.120   #shinken server IP address
restrict ::1
```
|
||||
|
||||
Save and exit.
|
||||
|
||||
Start ntp and check the Shinken dashboard:

```
sudo systemctl start ntp
```
|
||||
|
||||
- Problem: check_netint.pl not found
|
||||
|
||||
Download the source from the github repository to the shinken lib directory:
|
||||
|
||||
```
cd /var/lib/shinken/libexec/
wget https://raw.githubusercontent.com/Sysnove/shinken-plugins/master/check_netint.pl
chmod +x check_netint.pl
chown shinken:shinken check_netint.pl
```
|
||||
|
||||
- Problem with NetworkUsage
|
||||
|
||||
There is an error message like this:

```
ERROR : Unknown interface eth\d+
```
|
||||
|
||||
Check your network interface and edit the linux-snmp template.
|
||||
|
||||
On my Ubuntu server, the network interface is 'enp0s8', not eth0, so I got this error.
|
||||
|
||||
Edit the linux-snmp template pack with vim:

```
vim /etc/shinken/packs/linux-snmp/templates.cfg
```

Add the network interface to line 24:

```
_NET_IFACES eth\d+|em\d+|enp0s8
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.howtoforge.com/tutorial/server-monitoring-with-shinken-on-ubuntu-16-04/
|
||||
|
||||
作者:[Muhammad Arul][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.howtoforge.com/tutorial/server-monitoring-with-shinken-on-ubuntu-16-04/
|
||||
|
@ -0,0 +1,129 @@
|
||||
GOOGLER: NOW YOU CAN GOOGLE FROM LINUX TERMINAL!
|
||||
====
|
||||
|
||||
![](https://itsfoss.com/wp-content/uploads/2016/09/google-from-linux-terminal.jpg)
|
||||
|
||||
A quick question: What do you do every day? Of course, a lot of things. But I can tell you one thing: you search on Google almost every day (if not every day). Am I right?
|
||||
|
||||
Now, if you are a Linux user (which I’m guessing you are), here’s another question: wouldn’t it be nice if you could Google without even leaving the terminal? Without even firing up a browser window?

If you are a *nix enthusiast and also one of those people who just love the view of the terminal, I know your answer is yes. And I think the rest of you will also like the nifty little tool I’m going to introduce today. It’s called Googler!
|
||||
|
||||
### GOOGLER: GOOGLE IN YOUR LINUX TERMINAL
|
||||
|
||||
Googler is a straightforward command-line utility for Google-ing right from your terminal window. Googler mainly supports three types of Google Searches:
|
||||
|
||||
- Google Search: Simple Google searching, equivalent to searching on Google homepage.
|
||||
- Google News Search: Google searching for News, equivalent to searching on Google News.
|
||||
- Google Site Search: Google searching for results from a specific site.
|
||||
|
||||
Googler shows the search results with the title, URL and page excerpt. The search results can be opened directly in the browser with only a couple of keystrokes.
|
||||
|
||||
![](https://itsfoss.com/wp-content/uploads/2016/09/googler-1.png)
|
||||
|
||||
### INSTALLATION ON UBUNTU
|
||||
|
||||
Let’s go through the installation process first.
|
||||
|
||||
First, make sure you have Python version 3.3 or later using this command:

```
python3 --version
```

If not, upgrade it. Googler requires Python 3.3+ to run.
|
||||
|
||||
Though Googler is not yet available through the package repositories on Ubuntu, we can easily install it from the GitHub repository. All we have to do is run the following commands:

```
cd /tmp
git clone https://github.com/jarun/googler.git
cd googler
sudo make install
cd auto-completion/bash/
sudo cp googler-completion.bash /etc/bash_completion.d/
```
|
||||
|
||||
And that’s it. Googler is installed along with command autocompletion feature.
|
||||
|
||||
### FEATURES & BASIC USAGE
|
||||
|
||||
If we go through all its features, Googler is actually quite a powerful tool. Some of the main features are:
|
||||
|
||||
Interactive Interface: Run the following command in terminal:

```
googler
```
|
||||
|
||||
The interactive interface will be opened. The developer of Googler, [Arun Prakash Jana][1], calls it the omniprompt. You can enter ? for the available commands on the omniprompt.
|
||||
|
||||
![](https://itsfoss.com/wp-content/uploads/2016/09/googler-2.png)
|
||||
|
||||
From the omniprompt, enter any search phrase to initiate the search. You can then enter n or p to navigate to the next or previous page of search results.

To open any search result in a browser window, just enter the index number of that result. Or you can open the search page itself by entering o.
|
||||
|
||||
- News Search: If you want to search for news, start googler with the -N optional argument:

```
googler -N
```
|
||||
|
||||
The subsequent omniprompt will fetch results from Google News.
|
||||
|
||||
- Site Search: If you want to search for pages from a specific site, run googler with the -w {domain} argument:

```
googler -w itsfoss.com
```

The subsequent omniprompt will fetch results only from It’s FOSS blog!
|
||||
|
||||
- Manual Page: Run the following command for the Googler manual page, which is equipped with various examples:

```
man googler
```
|
||||
|
||||
- Google country/domain specific search:

```
googler -c in "hello world"
```

The above example command will open search results from Google’s Indian domain (in for India).
|
||||
|
||||
- Filter search results by duration and language preference.
|
||||
- Google search keywords support, such as: site:example.com or filetype:pdf etc.
|
||||
- HTTPS proxy support.
|
||||
- Shell commands autocomplete.
|
||||
- Disable automatic spelling correction.
|
||||
|
||||
There is much more. You can tweak Googler to suit your needs.
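For instance, the options shown above can be combined in a single invocation (a small sketch using only flags covered in this article):

```
# Search Google News on Google's Indian domain
googler -N -c in "ubuntu 16.04"
```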
|
||||
|
||||
Googler can also be integrated with a text-based browser (like [elinks][2], [links][3], [lynx][4], w3m, etc.), so that you wouldn’t even need to leave the terminal for browsing web pages. The instructions can be found on the [GitHub project page of Googler][5].
|
||||
|
||||
If you want a graphical demonstration of Googler’s various features, feel free to check the terminal recording attached to the GitHub project page : [jarun/googler v2.7 quick demo][6].
|
||||
|
||||
### THOUGHTS ON GOOGLER?
|
||||
|
||||
Though Googler might not feel necessary or desirable to everybody, for someone who doesn’t want to open a browser just to search on Google, or who simply wants to spend as much time as possible in the terminal window, it is a great tool indeed. What do you think?
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/firewall/pfsense-setup-basic-configuration/
|
||||
|
||||
作者:[Munif Tanjim][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/munif/
|
||||
[1]: https://github.com/jarun
|
||||
[2]: http://elinks.or.cz/
|
||||
[3]: http://links.twibright.com/
|
||||
[4]: http://lynx.browser.org/
|
||||
[5]: https://github.com/jarun/googler#faq
|
||||
[6]: https://asciinema.org/a/85019
|
@ -0,0 +1,51 @@
|
||||
Insomnia 3.0 Is a Slick Desktop REST Client for Linux
|
||||
=====
|
||||
|
||||
![](http://www.omgubuntu.co.uk/wp-content/uploads/2016/09/insomnia-app-screenshot.png)
|
||||
|
||||
Looking for a free REST client for the Linux desktop? Don’t lose sleep: get [Insomnia][1].
|
||||
|
||||
The app is cross-platform and works on Linux, macOS and Windows. Its developer, Gregory Schier, told us that he created the app “to help developers communicate with [REST APIs][2].”
|
||||
|
||||
He also told us that Insomnia already has around 10,000 active users — 9% of whom are on Linux.

“So far, the feedback from Linux users has been very positive because similar applications (not nice ones anyway) aren’t usually available for Linux.”
|
||||
|
||||
Insomnia aims to ‘speed up your API testing workflow’ by letting you organise, run, and debug HTTP requests through a cleanly designed interface.
|
||||
|
||||
The app also includes advanced features like cookie management, global environments, SSL validation, and code snippet generation.
|
||||
|
||||
As I am not a developer I can’t evaluate this app first-hand, nor tell you why it rocks or highlight any major feature deficiencies.
|
||||
|
||||
But I thought I’d bring the app to your attention and let you decide for yourself. If you’ve been hunting for a slickly designed GUI alternative to command-line tools like [HTTPie][3], it might well be worth giving it a whirl.
|
||||
|
||||
### Download Insomnia 3.0 for Linux
|
||||
|
||||
Insomnia 3.0 (not to be confused with Insomnia v2.0 which is only available on Chrome) is available to download for Windows, macOS and Linux.
|
||||
|
||||
[Download Insomnia 3.0][4]
|
||||
|
||||
An installer is available for Ubuntu 14.04 LTS and up, as is a cross-distro AppImage:
|
||||
|
||||
[Download Insomnia 3.0 (.AppImage)][5]
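If you go the AppImage route, it only needs to be made executable before you run it (a minimal sketch; the actual file name will vary with the release you download):

```
chmod +x Insomnia.AppImage
./Insomnia.AppImage
```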
|
||||
|
||||
If you want to keep pace with development of the app you can follow [Insomnia on Twitter][6].
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.omgubuntu.co.uk/2016/09/insomnia-3-is-free-rest-client-for-linux?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+d0od+%28OMG%21+Ubuntu%21%29
|
||||
|
||||
作者:[JOEY-ELIJAH SNEDDON ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://plus.google.com/117485690627814051450/?rel=author
|
||||
[1]: http://insomnia.rest/
|
||||
[2]: https://en.wikipedia.org/wiki/Representational_state_transfer
|
||||
[3]: https://github.com/jkbrzt/httpie
|
||||
[4]: https://insomnia.rest/download/
|
||||
[5]: https://builds.insomnia.rest/downloads/linux/latest
|
||||
[6]: https://twitter.com/GetInsomnia
|
@ -0,0 +1,159 @@
|
||||
Using Ansible to Provision Vagrant Boxes
|
||||
====
|
||||
|
||||
![](https://i1.wp.com/cdn.fedoramagazine.org/wp-content/uploads/2016/08/vagrant-plus-ansible.jpg?w=1352&ssl=1)
|
||||
|
||||
Ansible is a great tool for system administrators who want to automate system administration tasks. From configuration management to provisioning and managing containers for application deployments, Ansible [makes it easy][1]. The lightweight, module-based architecture is ideal for system administration. One advantage is that when a node is not being managed by Ansible, no resources are used on it.
|
||||
|
||||
This article covers how to use Ansible to provision Vagrant boxes. A [Vagrant box][2] in simple terms is a virtual machine prepackaged with tools required to run the development environment. You can use these boxes to distribute the development environment used by other team members for project work. Using Ansible, you can automate provisioning the Vagrant boxes with your development packages.
|
||||
|
||||
This tutorial uses Fedora 24 as the host system and [CentOS][3] 7 as the Vagrant box.
|
||||
|
||||
### Setting up prerequisites
|
||||
|
||||
To configure Vagrant boxes using Ansible, you’ll need a few things setup. This tutorial requires you to install Ansible and Vagrant on the host machine. On your host machine, execute the following command to install the tools:
|
||||
|
||||
```
sudo dnf install ansible vagrant vagrant-libvirt
```
|
||||
|
||||
The above command installs both Ansible and Vagrant on your host system, along with Vagrant’s libvirt provider. Vagrant doesn’t provide functionality to host your virtual machine guests (VMs). Rather, it depends on third party providers such as libvirt, VirtualBox, VMWare, etc. to host the VMs. This provider works directly with libvirt and KVM on your Fedora system.
|
||||
|
||||
Next, make sure your user is in the wheel group. This special group allows you to run system administration commands. If you created your user as an administrator, such as during installation, you’ll have this group membership. Run the following command:
|
||||
|
||||
```
id | grep wheel
```
|
||||
|
||||
If you see output, your user is in the group, and you can move on to the next section. If not, run the following command. You’ll need to provide the password for the root account. Substitute your user name for the text <username>:
|
||||
|
||||
```
su -c 'usermod -a -G wheel <username>'
```
|
||||
|
||||
Then you will need to log out and log back in to inherit the group membership properly.
|
||||
|
||||
Now it’s time to create your first Vagrant box, which you’ll then configure using Ansible.
|
||||
|
||||
### Setting up the Vagrant box
|
||||
|
||||
Before you use Ansible to provision a box, you must create the box. To start, create a new directory which will store files related to the Vagrant box. To create this directory and make it the current working directory, issue the following command:
|
||||
|
||||
```
mkdir -p ~/lampbox && cd ~/lampbox
```
|
||||
|
||||
Before you create the box, you should understand the goal. This box is a simple example that runs CentOS 7 as its base system, along with the Apache web server, MariaDB (the popular open source database server from the original developers of MySQL) and PHP.
|
||||
|
||||
To initialize the Vagrant box, use the vagrant init command:
|
||||
|
||||
```
vagrant init centos/7
```
|
||||
|
||||
This command initializes the Vagrant box and creates a file named Vagrantfile, with some pre-configured variables. Open this file so you can modify it. The following line lists the base box used by this configuration.
|
||||
|
||||
```
config.vm.box = "centos/7"
```
|
||||
|
||||
Now set up port forwarding, so that after you finish setup and the Vagrant box is running, you can test the server. To set up port forwarding, add the following line just before the end statement in the Vagrantfile:

```
config.vm.network "forwarded_port", guest: 80, host: 8080
```
|
||||
|
||||
This option maps port 80 of the Vagrant Box to port 8080 of the host machine.
|
||||
|
||||
The next step is to set Ansible as our provisioning provider for the Vagrant Box. Add the following lines before the end statement in your Vagrantfile to set Ansible as the provisioning provider:
|
||||
|
||||
```
config.vm.provision :ansible do |ansible|
  ansible.playbook = "lamp.yml"
end
```
|
||||
|
||||
(You must add all three lines before the final end statement.) Notice the statement ansible.playbook = “lamp.yml”. This statement defines the name of the playbook used to provision the box.
|
||||
|
||||
### Creating the Ansible playbook
|
||||
|
||||
In Ansible, playbooks describe a policy to be enforced on your remote nodes. Put another way, playbooks manage configurations and deployments on remote nodes. Technically speaking, a playbook is a YAML file in which you write tasks to perform on remote nodes. In this tutorial, you’ll create a playbook named lamp.yml to provision the box.
|
||||
|
||||
To make the playbook, create a file named lamp.yml in the same directory where your Vagrantfile is located and add the following lines to it:
|
||||
|
||||
```
---
- hosts: all
  become: yes
  become_user: root
  tasks:
  - name: Install Apache
    yum: name=httpd state=latest
  - name: Install MariaDB
    yum: name=mariadb-server state=latest
  - name: Install PHP5
    yum: name=php state=latest
  - name: Start the Apache server
    service: name=httpd state=started
  - name: Install firewalld
    yum: name=firewalld state=latest
  - name: Start firewalld
    service: name=firewalld state=started
  - name: Open firewall
    command: firewall-cmd --add-service=http --permanent
  - name: Reload firewall
    command: firewall-cmd --reload
```
|
||||
|
||||
An explanation of each line of lamp.yml follows.
|
||||
|
||||
- hosts: all specifies that the playbook should run over every host defined in the Ansible configuration. Since Vagrant generates its own inventory containing just this box, the playbook runs on the Vagrant guest.
- become: yes and become_user: root state that the tasks should be performed with root privileges.
- tasks: specifies the tasks to perform when the playbook runs. Under the tasks section:
- - name: … provides a descriptive name to the task
- - yum: … specifies the task should be executed by the yum module. The options name and state are key=value pairs for use by the yum module.
|
||||
|
||||
When this playbook executes, it installs the latest versions of the Apache (httpd) web server, MariaDB, and PHP. Then it installs and starts firewalld, opens the HTTP service in the firewall, and reloads firewalld so the permanent rule takes effect immediately. You’re now done writing the playbook for the box. Now it’s time to provision it.
|
||||
|
||||
### Provisioning the box
|
||||
|
||||
A few final steps remain before using the Vagrant Box provisioned using Ansible. To run this provisioning, execute the following command:
|
||||
|
||||
```
vagrant up --provider libvirt
```
|
||||
|
||||
The above command starts the Vagrant box, downloads the base box image to the host system if not already present, and then runs the playbook lamp.yml to provision.
|
||||
|
||||
If everything works fine, the output looks somewhat similar to this example:
|
||||
|
||||
![](https://i1.wp.com/cdn.fedoramagazine.org/wp-content/uploads/2016/08/vagrant-ansible-playbook-run.png?w=574&ssl=1)
|
||||
|
||||
This output shows that the box has been provisioned. Now check whether the server is accessible. To confirm, open your web browser on the host machine and point it to the address http://localhost:8080. Remember, port 8080 of the local host is forwarded to port 80 of the Vagrant box. You should be greeted with the Apache welcome page like the one shown below:
|
||||
|
||||
![](https://i0.wp.com/cdn.fedoramagazine.org/wp-content/uploads/2016/08/vagrant-ansible-apache-up.png?w=1004&ssl=1)
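If you prefer the command line, a quick check with curl from the host works too (a sanity check; it assumes curl is installed):

```
curl -I http://localhost:8080
```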
|
||||
|
||||
To make changes to your Vagrant box, first edit the Ansible playbook lamp.yml. You can find plentiful documentation on Ansible at [its official website][4]. Then run the following command to re-provision the box:
|
||||
|
||||
```
vagrant provision
```
|
||||
|
||||
### Conclusion
|
||||
|
||||
You’ve now seen how to use Ansible to provision Vagrant boxes. This was a basic example, but you can use these tools for many other use cases. For example, you can deploy complete applications along with up-to-date version of required tools. Be creative as you use Ansible to provision your remote nodes or containers.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/using-ansible-provision-vagrant-boxes/
|
||||
|
||||
作者:[Saurabh Badhwar][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: http://h4xr.id.fedoraproject.org/
|
||||
[1]: https://ansible.com/
|
||||
[2]: https://www.vagrantup.com/
|
||||
[3]: https://centos.org/
|
||||
[4]: http://docs.ansible.com/ansible/index.html
|
@ -0,0 +1,82 @@
|
||||
translating by ucasFL
|
||||
|
||||
How to Install Latest XFCE Desktop in Ubuntu 16.04 and Fedora 22-24
|
||||
====
|
||||
|
||||
Xfce is a modern, open source, and lightweight desktop environment for Linux systems. It also works well on many other Unix-like systems such as Mac OS X, Solaris, and the BSDs. It is fast and user friendly, with a simple and elegant user interface.

Installing a desktop environment on servers can sometimes prove helpful, as certain applications may require a desktop interface for efficient and reliable administration. One of the remarkable properties of Xfce is its low system resource utilization, such as low RAM consumption, which makes it a recommended desktop environment for servers if one is needed.
|
||||
|
||||
### XFCE Desktop Features
|
||||
|
||||
Additionally, some of its noteworthy components and features are listed below:
|
||||
|
||||
- Xfwm windows manager
|
||||
- Thunar file manager
|
||||
- User session manager to deal with logins, power management and beyond
- Desktop manager for setting the background image, desktop icons and more
- An application manager
- It’s highly pluggable, plus several other minor features
|
||||
|
||||
The latest stable release of this desktop is Xfce 4.12; all of its features and the changes from previous versions are documented in the project's release notes.
|
||||
|
||||
#### Install Xfce Desktop on Ubuntu 16.04
|
||||
|
||||
Linux distributions such as Xubuntu, Manjaro, OpenSUSE, Fedora Xfce Spin, Zenwalk and many others provide their own Xfce desktop packages; however, you can install the latest version as follows:

```
$ sudo apt update
$ sudo apt install xfce4
```
|
||||
|
||||
Wait for the installation process to complete, then log out of your current session, or restart your system. At the login interface, choose the Xfce desktop and log in, as in the screenshot below:
|
||||
|
||||
![](http://www.tecmint.com/wp-content/uploads/2016/09/Select-Xfce-Desktop-at-Login.png)
|
||||
|
||||
![](http://www.tecmint.com/wp-content/uploads/2016/09/XFCE-Desktop.png)
|
||||
|
||||
#### Install Xfce Desktop in Fedora 22-24
|
||||
|
||||
If you have an existing Fedora installation and want to install the Xfce desktop, you can use yum or dnf to install it as shown:

```
-------------------- On Fedora 22 --------------------
# yum install @xfce
-------------------- On Fedora 23-24 --------------------
# dnf install @xfce-desktop-environment
```
|
||||
|
||||
After installing Xfce, you can choose the Xfce session from the Session menu at the login screen, or reboot the system.
|
||||
|
||||
![](http://www.tecmint.com/wp-content/uploads/2016/09/Select-Xfce-Desktop-at-Fedora-Login.png)
|
||||
|
||||
![](http://www.tecmint.com/wp-content/uploads/2016/09/Install-Xfce-Desktop-in-Fedora.png)
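Once you are logged in, you can confirm from a terminal which desktop session is running (a quick check; most modern display managers and sessions set this variable):

```
echo $XDG_CURRENT_DESKTOP   # should print XFCE
```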
|
||||
|
||||
If you don’t want the Xfce desktop on your system anymore, use the commands below to uninstall it:

```
-------------------- On Ubuntu 16.04 --------------------
$ sudo apt purge xfce4
$ sudo apt autoremove
-------------------- On Fedora 22 --------------------
# yum remove @xfce
-------------------- On Fedora 23-24 --------------------
# dnf remove @xfce-desktop-environment
```
|
||||
|
||||
In this simple how-to guide, we walked through the steps for installing the latest version of the Xfce desktop, which I believe were easy to follow. If all went well, you can enjoy using Xfce as one of the [best desktop environments for Linux systems][1].
|
||||
|
||||
To get back to us, you can use the feedback section below, and remember to always stay connected to Tecmint.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/firewall/pfsense-setup-basic-configuration/
|
||||
|
||||
作者:[Aaron Kili ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: http://www.tecmint.com/author/aaronkili/
|
||||
[1]: http://www.tecmint.com/best-linux-desktop-environments/
|
@ -0,0 +1,160 @@
|
||||
Part 13 - How to Write Scripts Using Awk Programming Language
|
||||
====
|
||||
|
||||
All along, from the beginning of the Awk series up to Part 12, we have been writing small Awk commands and programs on the command line and in shell scripts, respectively.

However, Awk, just like the shell, is also an interpreted language; therefore, with all that we have walked through from the start of this series, you can now write Awk executable scripts.
|
||||
|
||||
Similar to how we write a shell script, Awk scripts start with the line:
|
||||
|
||||
```
#! /path/to/awk/utility -f
```
|
||||
|
||||
For example, on my system the Awk utility is located in /usr/bin/awk; therefore, I would start an Awk script as follows:
|
||||
|
||||
```
#! /usr/bin/awk -f
```
|
||||
|
||||
Explaining the line above:
|
||||
|
||||
```
#! – referred to as Shebang, which specifies an interpreter for the instructions in a script
/usr/bin/awk – is the interpreter
-f – interpreter option, used to read a program file
```
|
||||
|
||||
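For clarity, this is the same -f option you can use on the command line without any script wrapper; here is a small sketch, where program.awk and input.txt are hypothetical file names:

```
# read the Awk program from program.awk and run it against input.txt
$ awk -f program.awk input.txt
```
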
That said, let us now dive into some examples of executable Awk scripts. We can start with the simple script below; use your favorite editor to open a new file as follows:

```
$ vi script.awk
```

And paste the code below into the file:

```
#!/usr/bin/awk -f
BEGIN { printf "%s\n","Writing my first Awk executable script!" }
```

Save the file and exit, then make the script executable by issuing the command below:

```
$ chmod +x script.awk
```

Thereafter, run it:

```
$ ./script.awk
```

Sample Output

```
Writing my first Awk executable script!
```

A critical programmer out there must be asking, “Where are the comments?” Yes, you can also include comments in your Awk scripts. Writing comments in your code is always a good programming practice.

It helps other programmers looking through your code to understand what you are trying to achieve in each section of a script or program file.

Therefore, you can include comments in the script above as follows.

```
#!/usr/bin/awk -f
# This is how to write a comment in Awk
# using the BEGIN special pattern to print a sentence
BEGIN { printf "%s\n","Writing my first Awk executable script!" }
```

Next, we shall look at an example where we read input from a file. We want to search for a system user named aaronkilik in the account file /etc/passwd, then print the username, user ID and user GID.

Below is the content of our script, called second.awk.

```
#! /usr/bin/awk -f
# use the BEGIN special pattern to set the FS built-in variable
BEGIN { FS=":" }
# search for the username aaronkilik and print account details
/aaronkilik/ { print "Username :",$1,"User ID :",$3,"User GID :",$4 }
```

Save the file and exit, then make the script executable and run it as below:

```
$ chmod +x second.awk
$ ./second.awk /etc/passwd
```

Sample Output

```
Username : aaronkilik User ID : 1000 User GID : 1000
```

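Note that /aaronkilik/ is a regular expression, so it would also match any line that merely contains that string somewhere else. If you want to match the username field exactly, a hedged alternative (a sketch, not part of the original script) is to compare the first field instead:

```
#! /usr/bin/awk -f
# compare the first field (the username) exactly instead of using a regex
BEGIN { FS=":" }
$1 == "aaronkilik" { print "Username :",$1,"User ID :",$3,"User GID :",$4 }
```
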
In the last example below, we shall use the do-while statement to print the numbers 0 to 10.

Below is the content of our script, called do.awk.

```
#! /usr/bin/awk -f
# printing the numbers 0-10 using a do-while statement
BEGIN {
	# initialize a counter
	x = 0
	do {
		print x;
		x += 1;
	}
	while (x <= 10)
}
```

After saving the file, make the script executable as we have done before, then run it:

```
$ chmod +x do.awk
$ ./do.awk
```

Sample Output

```
0
1
2
3
4
5
6
7
8
9
10
```

### Summary

We have come to the end of this interesting Awk series. I hope you have learned a lot from all 13 parts as an introduction to the Awk programming language.

As I mentioned from the beginning, Awk is a complete text-processing language, so you can learn many other aspects of it, such as environment variables, arrays, functions (built-in and user-defined) and beyond.

There are yet additional parts of Awk programming to learn and master, so you can also look out for useful online resources and Awk programming books to expand your skills.

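As a small taste of one of those further topics, below is a minimal sketch of a user-defined Awk function; the function name square is invented purely for illustration:

```
#! /usr/bin/awk -f
# a user-defined function that returns the square of its argument
function square(x) {
	return x * x
}
# prints 25
BEGIN { print square(5) }
```
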
For any thoughts you wish to share, or questions, use the comment form below. Remember to always stay connected to Tecmint for more exciting series.

--------------------------------------------------------------------------------

via: http://www.tecmint.com/write-shell-scripts-in-awk-programming/

作者:[Aaron Kili][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: http://www.tecmint.com/author/aaronkili/

@ -0,0 +1,246 @@

How to Use Flow Control Statements in Awk - Part 12
====

When you review all the Awk examples we have covered so far, right from the start of the Awk series, you will notice that the commands in the various examples are executed sequentially, one after the other. But in certain situations we may want to run some text-filtering operations based on certain conditions; that is where flow control statements come in.

![](http://www.tecmint.com/wp-content/uploads/2016/08/Use-Flow-Control-Statements-in-Awk.png)

There are various flow control statements in Awk programming, and these include:

- if-else statement
- for statement
- while statement
- do-while statement
- break statement
- continue statement
- next statement
- nextfile statement
- exit statement

However, for the scope of this series, we shall expound on the if-else, for, while and do-while statements. Remember that we already walked through how to use the next statement in Part 6 of this Awk series.

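Although break and continue fall outside that scope, a minimal hedged sketch of how they behave inside Awk loops may help put the list above in context:

```
# break leaves the loop as soon as n reaches 3, so 3 is printed
$ awk 'BEGIN{ n=0; while (n < 100) { n++; if (n == 3) break }; print n }'
3
# continue skips the rest of the body for even numbers, printing 1, 3 and 5
$ awk 'BEGIN{ for (i = 1; i <= 5; i++) { if (i % 2 == 0) continue; print i } }'
1
3
5
```
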
### 1. The if-else Statement

The expected syntax of the if statement is similar to that of the shell if statement:

```
if (condition1) {
	actions1
}
else {
	actions2
}
```

In the syntax above, condition1 is an Awk expression, and actions1 and actions2 are the Awk commands executed when the respective branch is taken.

When condition1 is satisfied, meaning it is true, then actions1 is executed and the if statement exits; otherwise actions2 is executed.

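To see this branching in action before the fuller example below, here is a quick hedged one-liner; the variable x and the messages are made up for illustration:

```
$ awk 'BEGIN{ x=5; if (x < 10) print "x is small"; else print "x is big" }'
x is small
```
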
The if statement can also be expanded into an if-else_if-else statement as below:

```
if (condition1) {
	actions1
}
else if (condition2) {
	actions2
}
else {
	actions3
}
```

For the form above, if condition1 is true, then actions1 is executed and the if statement exits; otherwise condition2 is evaluated, and if it is true, then actions2 is executed and the if statement exits. However, when condition2 is false, actions3 is executed and the if statement exits.

Here is a case in point of using if statements: we have a list of users and their ages stored in the file users.txt.

We want to print a statement indicating each user's name and whether the user's age is less than or more than 25 years:

```
aaronkilik@tecMint ~ $ cat users.txt
Sarah L 35 F
Aaron Kili 40 M
John Doo 20 M
Kili Seth 49 M
```

We can write a short shell script to carry out the job above; here is the content of the script:

```
#!/bin/bash
awk ' {
	if ( $3 <= 25 ) {
		print "User",$1,$2,"is less than 25 years old." ;
	}
	else {
		print "User",$1,$2,"is more than 25 years old" ;
	}
}' ~/users.txt
```

Then save the file and exit, make the script executable and run it as follows:

```
$ chmod +x test.sh
$ ./test.sh
```

Sample Output

```
User Sarah L is more than 25 years old
User Aaron Kili is more than 25 years old
User John Doo is less than 25 years old.
User Kili Seth is more than 25 years old
```

### 2. The for Statement

If you want to execute some Awk commands in a loop, the for statement offers you a suitable way to do that, with the syntax below.

The approach is defined by the use of a counter to control the loop execution: first you initialize the counter, then run it against a test condition; if the condition is true, the actions are executed and the counter is finally incremented. The loop terminates when the counter no longer satisfies the condition.

```
for ( counter-initialization; test-condition; counter-increment ) {
	actions
}
```

The following Awk command shows how the for statement works, where we want to print the numbers 0 to 10:

```
$ awk 'BEGIN{ for(counter=0;counter<=10;counter++){ print counter} }'
```

Sample Output

```
0
1
2
3
4
5
6
7
8
9
10
```

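Besides the counter form shown above, Awk's for statement also has a second form, for (key in array), for walking over array indices. Here is a small hedged sketch, where the array fruits is invented for illustration; note that the iteration order is not guaranteed, so your output may be ordered differently:

```
$ awk 'BEGIN{ fruits["a"]="apple"; fruits["b"]="banana"; for (k in fruits) print k, fruits[k] }'
a apple
b banana
```
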
### 3. The while Statement

The conventional syntax of the while statement is as follows:

```
while ( condition ) {
	actions
}
```

The condition is an Awk expression, and actions are lines of Awk commands executed while the condition is true.

Below is a script illustrating the use of the while statement to print the numbers 0 to 10:

```
#!/bin/bash
awk ' BEGIN{ counter=0 ;
	while (counter<=10) {
		print counter;
		counter+=1 ;
	}
}
'
```

Save the file and make the script executable, then run it:

```
$ chmod +x test.sh
$ ./test.sh
```

Sample Output

```
0
1
2
3
4
5
6
7
8
9
10
```

### 4. The do while Statement

It is a modification of the while statement above, with the following underlying syntax:

```
do {
	actions
}
while (condition)
```

The slight difference is that, under do while, the Awk commands are executed before the condition is evaluated, so the actions always run at least once. Using the very example under the while statement above, we can illustrate the use of do while by altering the Awk command in the test.sh script as follows:

```
#!/bin/bash
awk ' BEGIN{ counter=0 ;
	do {
		print counter;
		counter+=1 ;
	}
	while (counter<=10)
}
'
```

After modifying the script, save the file and exit. Then make the script executable and execute it as follows:

```
$ chmod +x test.sh
$ ./test.sh
```

Sample Output

```
0
1
2
3
4
5
6
7
8
9
10
```

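The output above is identical to the while version, so to see where do while actually differs, here is a small hedged comparison with a condition that is false from the start; the variable n is invented for illustration:

```
# while: the condition is checked first, so nothing is printed
$ awk 'BEGIN{ n=100; while (n < 0) { print n } }'
# do while: the body runs once before the condition is checked, so 100 is printed
$ awk 'BEGIN{ n=100; do { print n } while (n < 0) }'
100
```
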
### Conclusion

This is not a comprehensive guide to Awk flow control statements; as I mentioned earlier on, there are several other flow control statements in Awk.

Nonetheless, this part of the Awk series should give you a clear, fundamental idea of how the execution of Awk commands can be controlled based on certain conditions.

You can as well explore the rest of the flow control statements to gain a deeper understanding of the subject matter. Finally, in the next section of the Awk series, we shall move into writing Awk scripts.

--------------------------------------------------------------------------------

via: http://www.tecmint.com/use-flow-control-statements-with-awk-command/

作者:[Aaron Kili][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: http://www.tecmint.com/author/aaronkili/